Philipp Schmid (philschmid)
# serverless.yaml
service: serverless-bert-lambda-docker
provider:
  name: aws # provider
  region: eu-central-1 # aws region
  memorySize: 5120 # optional, in MB, default is 1024
  timeout: 30 # optional, in seconds, default is 6
functions:
  questionanswering:
    # the snippet is cut off here; a container-based function needs at least an
    # image URI pointing at the ECR repository pushed to below (digest assumed)
    image: XXXXXXXXXX.dkr.ecr.eu-central-1.amazonaws.com/bert-lambda@sha256:<digest>
# Build the image and run it locally through the Lambda Runtime Interface Emulator
docker build -t bert-lambda .
docker run -p 8080:8080 bert-lambda

# Invoke the locally running function; the context is the BERT paper abstract,
# the question string is only an illustrative example
curl --request POST \
  --url http://localhost:8080/2015-03-31/functions/function/invocations \
  --header 'Content-Type: application/json' \
  --data '{"body":"{\"context\":\"We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pretrain deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be finetuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial taskspecific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).\",\"question\":\"What does BERT stand for?\"}"}'

# Create an ECR repository, log Docker in to it, then tag and push the image
aws_region=eu-central-1
aws_account_id=XXXXXXXXXX
aws ecr create-repository --repository-name bert-lambda > /dev/null
aws ecr get-login-password \
  --region $aws_region \
  | docker login \
    --username AWS \
    --password-stdin $aws_account_id.dkr.ecr.$aws_region.amazonaws.com
docker tag bert-lambda $aws_account_id.dkr.ecr.$aws_region.amazonaws.com/bert-lambda
docker push $aws_account_id.dkr.ecr.$aws_region.amazonaws.com/bert-lambda
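The same local test as the curl call above can also be driven from Python. A minimal sketch for reference; the shortened context string and the question are illustrative, and the requests library is assumed to be installed:

import json

import requests

# Fixed invocation path exposed by the Lambda Runtime Interface Emulator
# inside the public.ecr.aws/lambda/python base image.
URL = "http://localhost:8080/2015-03-31/functions/function/invocations"

# The handler expects the question/context pair as a JSON string under "body",
# mirroring an API Gateway proxy event. Payload values are illustrative.
payload = {
    "body": json.dumps({
        "context": "BERT stands for Bidirectional Encoder Representations from Transformers.",
        "question": "What does BERT stand for?",
    })
}

response = requests.post(URL, json=payload)
print(response.json())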
# .dockerignore: keep docs, caches, and deploy-time files out of the build context
README.md
*.pyc
*.pyo
*.pyd
__pycache__
.pytest_cache
serverless.yaml
get_model.py
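get_model.py is excluded above because it is only needed before the build: it downloads and serializes the model so the Dockerfile can copy the weights into the image. Its contents are not shown on this page; a minimal sketch of what such a script plausibly does, with the model name and save path as assumptions for illustration:

from transformers import AutoModelForQuestionAnswering, AutoTokenizer

# Assumed checkpoint and target directory; any Hugging Face QA model
# can be serialized the same way.
MODEL_NAME = "distilbert-base-uncased-distilled-squad"
SAVE_DIR = "./model"

# Download the pretrained question-answering model and tokenizer once,
# and write them to disk so `docker build` can COPY them into /var/task.
AutoModelForQuestionAnswering.from_pretrained(MODEL_NAME).save_pretrained(SAVE_DIR)
AutoTokenizer.from_pretrained(MODEL_NAME).save_pretrained(SAVE_DIR)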
FROM public.ecr.aws/lambda/python:3.8

# Copy the function code and serialized model into /var/task
COPY ./ ${LAMBDA_TASK_ROOT}/

# Install the Python dependencies next to the function code
RUN python3 -m pip install -r requirements.txt --target ${LAMBDA_TASK_ROOT}

# Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)
CMD [ "handler.handler" ]
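The CMD points at handler.handler, which is not included on this page. A minimal sketch of such a handler, assuming the model was serialized to ./model by a script like the get_model.py sketch above and that transformers (plus a torch backend) is pinned in requirements.txt:

import json

from transformers import pipeline

# Load the pipeline once at container start so warm invocations reuse it.
# "./model" is an assumed path matching the get_model.py sketch above.
qa_pipeline = pipeline("question-answering", model="./model", tokenizer="./model")

def handler(event, context):
    # API Gateway proxy events deliver the request body as a JSON string.
    body = json.loads(event["body"])
    answer = qa_pipeline(question=body["question"], context=body["context"])
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(answer),
    }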