Containerised Lambdas, Terraform & GitHub Actions
By Chloe McAree (McAteer), @ChloeMcAteer3
Recently, in a side project, I had my first interaction with containerised Lambdas. The reason is that I've begun working with a number of machine learning libraries and the dependencies started to become large.
I did consider going down the EC2 route to handle these large dependencies, but at the minute this work is very much trial and error. It's still in dev mode and I didn't want to pay for an instance to run continually when this service will have little to no traffic.
Containerised Lambdas are perfect for this scenario, as container images of up to 10GB are supported! I also use Docker in most other projects I'm working on, so this really helps keep the technology stack and practices consistent, whilst also making it a lot easier to run and test locally.
When I initially investigated containerised Lambdas, there were a few things I was unsure about:
How to utilise Terraform to manage the setup
How to deploy the container images
How to update the Lambda to pull the latest container image
How to do all the above in CI/CD
It took me a lot of Googling to answer these questions, so in this blog I want to take you through each of them step by step. I will be working with the following technologies:
AWS Lambda (Python)
AWS ECR
Terraform
GitHub Actions
A container is an executable package of software. For a Lambda to use a container image, the image needs to be deployed to Elastic Container Registry (ECR), AWS' service for storing, sharing and deploying container images. Each ECR repository has a URI of the form INSERT_ACCOUNT_ID.dkr.ecr.INSERT_REGION.amazonaws.com/REPOSITORY_NAME, which is the address you will see throughout the commands later on.
Infrastructure as code
So we know we need a Lambda, a repository in ECR, and some roles to allow them to communicate. For my project I have chosen Terraform for my infrastructure as code.
Let's start with the ECR repository resource. It's fairly straightforward: all we really need to do is specify the name we want our repository to have. Tags are optional, but I like to add them to any resources I create so that I can filter on them in my billing console. There are other optional arguments you can use; check them out in the Terraform docs.
# ECR repository
resource "aws_ecr_repository" "example" {
  name = "example"

  tags = {
    "project" : "blog-example"
  }
}
We will need to create images to upload to this repository. Terraform's null_resource gives us a resource that does nothing itself, but its triggers map defines a set of values that, once changed, cause the resource to be replaced. In this case we want it to be replaced whenever our main Python file or Dockerfile changes. The aws_ecr_image data source below then looks up the image in the repository, which both checks that it exists and gives the Lambda something concrete to depend on.
resource "null_resource" "ecr_image" {
triggers = {
python_file = md5(file("../app.py"))
docker_file = md5(file("../Dockerfile"))
}
}
data "aws_ecr_image" "lambda_image" {
depends_on = [null_resource.ecr_image]
repository_name = aws_ecr_repository.example.name
image_tag = "latest"
}
We now also need the Terraform for our Lambda. You'll notice this declaration isn't too different from a standard Lambda; the main differences are that we set package_type to "Image" and specify an image URI instead of a handler and runtime. The :latest at the end of the URI makes our Lambda use the latest version of the image in ECR. The memory size of this Lambda is also increased a lot to deal with the intensive machine learning workloads it will be carrying out, but this can be reduced if your own workload does not require it. Note too that the architectures value must match the platform the image is built for (arm64 here, so the image must be built for linux/arm64).
resource "aws_lambda_function" "example" {
depends_on = [
null_resource.ecr_image
]
function_name = "example-lambda"
architectures = ["arm64"]
role = aws_iam_role.lambda.arn
timeout = 180
memory_size = 10240
image_uri = "${aws_ecr_repository.example.repository_url}:latest"
package_type = "Image"
}
resource "aws_cloudwatch_log_group" "example_service" {
name = "/aws/lambda/example_service"
retention_in_days = 14
}
Of course, we also need to give our Lambda permissions, in this case to write logs to CloudWatch, so we have to provide an IAM role and an associated policy.
resource "aws_iam_role" "lambda" {
name = "example-lambda-role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Effect": "Allow"
}
]
}
EOF
}
data "aws_iam_policy_document" "lambda" {
statement {
actions = [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
]
effect = "Allow"
resources = ["*"]
sid = "CreateCloudWatchLogs"
}
}
resource "aws_iam_policy" "lambda" {
name = "example-lambda-policy"
path = "/"
policy = data.aws_iam_policy_document.lambda.json
}
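One thing to watch: as written here, the policy is never actually attached to the role, so the Lambda would not get the CloudWatch permissions. A minimal sketch of the missing attachment (the resource name lambda is my own choice):
# Attach the logging policy to the Lambda's execution role
resource "aws_iam_role_policy_attachment" "lambda" {
  role       = aws_iam_role.lambda.name
  policy_arn = aws_iam_policy.lambda.arn
}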
We then also need our main Terraform file, which specifies where the Terraform state is stored, the AWS provider version we require, the region we are using and our AWS profile.
terraform {
  backend "s3" {
    bucket = "example-blog-state"
    key    = "blog/terraform.tfstate"
    region = "eu-west-2"
  }

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.41"
    }
  }
}

provider "aws" {
  region  = "eu-west-2"
  profile = "default"
}
OK, so with the Terraform code for our infrastructure in place, we still need to create the code for our Lambda function and our Dockerfile before we apply the Terraform.
Docker containers & ECR
We need to create our main Python file containing our Lambda handler. For the purposes of this blog it just returns a greeting, but in reality I will be working with some machine learning libraries for Natural Language Processing.
app.py file:
def handler(event, context):
    return 'Hello AWS Lambda'
We then need to make our Dockerfile, which specifies the base image we want to build on and all the install commands that are required.
AWS provides base images for each of its supported runtimes: Python, Node.js, Java, .NET, Go and Ruby. However, there is still the option to create your own custom image.
For my use case I am going to use their Python base image, as you can see from the first line of the Dockerfile below:
FROM public.ecr.aws/lambda/python:3.8
# Copy function code
COPY app.py /var/task
# Install the function's dependencies using file requirements.txt
COPY requirements.txt .
RUN pip3 install -r requirements.txt --target "/var/runtime"
# Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)
CMD [ "app.handler" ]
In the file above you will also see a requirements.txt file being copied and installed. This file lists all the libraries I am using and pins their versions. In the example Lambda handler above I'm obviously not making use of any libraries, but in the real project I am.
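The exact contents depend on your project; purely as an illustrative sketch (these libraries and versions are my own example, not from the original project), an NLP-flavoured requirements.txt might look like:
# requirements.txt - pinned dependencies baked into the image
nltk==3.6.2
spacy==3.0.6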
We then need to finish by setting the CMD to be the Lambda handler function.
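A nice side effect of using the AWS base images is that they bundle the Lambda Runtime Interface Emulator, so you can build and invoke the container locally before anything is deployed. A quick sketch (the example image name is my own choice):
# Build the image locally
docker build -t example .
# Run the container; the base image's entrypoint starts the emulator on port 8080
docker run -p 9000:8080 example
# In a second terminal, invoke the handler through the emulator
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'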
OK, so with our Terraform code in place, when we run terraform init, terraform plan and terraform apply, we should see a failure with the following message:
Error: Error describing ECR images: ImageNotFoundException: The image with imageId {imageDigest:'null', imageTag:'latest'} does not exist within the repository with name 'example' in the registry with id
This is because there is no image in ECR yet! Let's do something about that.
Uploading to ECR
To do this, you will need to log in to ECR, then build and tag your image before pushing it up!
# Log in to ECR
aws ecr get-login-password --region INSERT_REGION | docker login --username AWS --password-stdin INSERT_ACCOUNT_ID.dkr.ecr.INSERT_REGION.amazonaws.com
# Build Docker image
docker build -t example .
# Tag Image
docker tag example:latest INSERT_ACCOUNT_ID.dkr.ecr.INSERT_REGION.amazonaws.com/example:latest
# Push image to ECR
docker push INSERT_ACCOUNT_ID.dkr.ecr.INSERT_REGION.amazonaws.com/example:latest
Now that we have an image in ECR, let's run our Terraform again to link the Lambda to it! Note: make sure you are in your Terraform directory when running the commands below.
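For completeness, the full workflow is:
# Run from the Terraform directory
terraform init
terraform plan
terraform apply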
With the Terraform now applying successfully, we can double-check everything is connected as expected by logging into the AWS console, navigating to the Lambda section and selecting the Lambda that was just created.
Once you click on the Image tab at the bottom of this Lambda, you should see that its image is being pulled from ECR! You can also configure a test event by clicking on the Test tab and triggering it. Once the Lambda has executed you should see your response; in my case, Hello AWS Lambda!
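If you prefer the terminal, you can invoke the function with the AWS CLI instead (using the function name from the Terraform above):
# Invoke the Lambda and print its response
aws lambda invoke --function-name example-lambda response.json
cat response.json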
But if we update this code to say something else, how will our Lambda know? If we push up a new image, will the Lambda automatically pull the update?
Let's try this scenario out by changing our Lambda function in some way (in this case, changing the greeting) and re-running the ECR build, tag and push commands.
It turns out the answer is no. With my changes in place and the new image pushed to ECR, the Lambda has not automatically picked up the change; the old greeting is still being displayed.
To update the Lambda to pull in the uploaded image, you need to run the following command:
aws lambda update-function-code --function-name example-lambda --image-uri INSERT_ACCOUNT_ID.dkr.ecr.INSERT_REGION.amazonaws.com/example:latest
But we don't want to build, tag, push and update our container manually every time we make a change, so let's add all these steps to a CI/CD pipeline.
CI/CD Pipeline
For CI/CD I am using GitHub Actions; I've used it for a few other projects before, and it allows you to define your pipeline in YAML files within a .github/workflows directory. Let's dig into the example below:
name: Backend CI

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  deploy-example:
    name: Deploy Blog Example
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_KEY }}
          aws-region: eu-west-2

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1

      - name: Build, tag, and push image to Amazon ECR
        id: build-image
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: example
          IMAGE_TAG: latest
        run: |
          # Build the image and tag it with the full ECR repository URI
          docker build -t example .
          docker tag example:latest $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
          # Push the image, then point the Lambda at the new version
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
          aws lambda update-function-code --function-name example-lambda --image-uri $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
At the top of the file you will see the on section, which controls when the jobs we have defined actually run. In this case, the workflow triggers on pushes and pull requests to the main branch, and the if: github.ref == 'refs/heads/main' condition on the job ensures the deploy itself only runs once changes land on main.
The jobs section is then where we define the different jobs we want to run in this pipeline; you could have multiple jobs for things like running tests, building project assets and deploying. For this tutorial I just have one job, which deploys our code changes to ECR and then updates the Lambda to pull in the new image. The commands are the same as before, with the registry address coming from the login step's output, and now they run automatically whenever we merge changes on GitHub.
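If you want to confirm a pipeline run actually updated the function, one quick check (not part of the original workflow) is to ask Lambda which image it is now using:
# Show the image URI the function is currently deployed from
aws lambda get-function --function-name example-lambda --query 'Code.ImageUri'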
Hopefully this walkthrough was useful! I will be extending this project further, so stay tuned for more blogs and content on what I am building by following me on Twitter: @chloemcateer3