Terraforming Serverless
By Jon Archer
In my day job recently I’ve been rewriting AWS deployment infrastructure-as-code, taking Serverless Framework and raw CloudFormation and transforming it into Terraform. While I’ve used Terraform to deploy infrastructure for many years, this particular task was rather interesting, as I had to replicate what Serverless Framework does with regard to building the bundles and deploying them.
For obvious reasons I’m not going to use exact examples of our code bundles here, but rather the Hello sample code from a freshly created Lambda function.
export const handler = async (event) => {
  const response = {
    statusCode: 200,
    body: JSON.stringify('Hello from Lambda!'),
  };
  return response;
};
The code and infrastructure are currently deployed using Serverless Framework. While this does the job, I much prefer Terraform and the fine-grained control it gives you over infrastructure, which is abstracted away a little with Serverless Framework; however, Terraform doesn’t deal with the bundling process for Lambda functions. There are numerous ways to handle this in Terraform (one is sketched below), but I like the idea of separating build from deploy too. This gives us the ability to check that the deployed functions are based on a known version, or build, of the code without constantly deploying the latest code, thus avoiding potential dependency changes between dev and prod deployments.
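For context, one of those in-Terraform alternatives is to let Terraform create the zip itself at plan time using the archive_file data source, roughly as in the sketch below. The paths, handler name and resource names are assumptions for illustration only; this approach couples the deploy to whatever happens to be on disk at the time, which is exactly the coupling of build and deploy I want to avoid here.

# One in-Terraform alternative (not used in this article): zip the compiled
# output at plan time and point the function at the local file.
data "archive_file" "helloworld" {
  type        = "zip"
  source_dir  = "${path.module}/build"          # compiled JS output (assumed path)
  output_path = "${path.module}/helloworld.zip"
}

resource "aws_lambda_function" "helloworld_local" {
  function_name    = "helloworld"
  role             = aws_iam_role.helloworld.arn # an execution role like the one defined later
  runtime          = "nodejs18.x"
  handler          = "index.handler"             # assumed entry point
  filename         = data.archive_file.helloworld.output_path
  source_code_hash = data.archive_file.helloworld.output_base64sha256
}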
To handle this process I took advantage of a pipeline: when a PR is approved and merged into the main/master branch, the pipeline is triggered, generates the appropriate bundle and drops it into an S3 bucket.
- step:
    name: 'Building bundle'
    deployment: staging
    script:
      - npm ci
      - rm -rf build
      - npx tsc
      - npx esbuild --log-level=error --outdir=artefacts --platform=node --target=node18 --bundle --minify build/*.js
      - mkdir zip
      - cd artefacts
      - zip -9 -r ../zip/helloworld.zip *
    artifacts:
      - zip/*
- step:
    name: Deploy to S3
    deployment: production
    script:
      - pipe: atlassian/aws-s3-deploy:1.2.0
        variables:
          AWS_DEFAULT_REGION: 'eu-west-2'
          S3_BUCKET: 'lambdabundlesbucket12345-eu-west-2'
          LOCAL_PATH: 'zip'
This pipeline code is for Bitbucket Pipelines, but the relevant pieces can easily be extracted and used in many other CI systems such as GitHub Actions or GitLab CI.
The first step in the pipeline follows the usual process of installing npm packages, transpiling the code, bundling the code and zipping it up. The second simply copies the resultant zipped bundle to an S3 bucket ready for consumption by our Terraform later.
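The target bucket needs to exist before the pipeline can publish to it. If you want Terraform to manage that bucket too, a minimal sketch might look like the following; enabling versioning is my own assumption rather than part of the original setup, but it makes rolling back to an earlier bundle straightforward.

# A minimal sketch of the bundle bucket itself, should you want Terraform to
# manage it as well. Versioning is an assumption, not part of the original setup.
resource "aws_s3_bucket" "lambda_bundles" {
  bucket = "lambdabundlesbucket12345-eu-west-2"
}

resource "aws_s3_bucket_versioning" "lambda_bundles" {
  bucket = aws_s3_bucket.lambda_bundles.id
  versioning_configuration {
    status = "Enabled"
  }
}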
To aid with redeploying, we also need to create a hash of the zip file that Terraform can reference. This will enable it to check whether the currently deployed code matches the bundle present in the bucket. We can do this by extending the pipeline:
- step:
    name: 'Building bundle'
    deployment: staging
    script:
      - npm ci
      - rm -rf build
      - npx tsc
      - npx esbuild --log-level=error --outdir=artefacts --platform=node --target=node18 --bundle --minify build/*.js
      - mkdir zip
      - cd artefacts
      - zip -9 -r ../zip/helloworld.zip *
      - cd ../zip
      - openssl dgst -sha256 -binary helloworld.zip | openssl enc -base64 > helloworld.zip.sha256.txt
    artifacts:
      - zip/*
- step:
    name: Deploy to S3
    deployment: production
    script:
      - pipe: atlassian/aws-s3-deploy:1.2.0
        variables:
          AWS_DEFAULT_REGION: 'eu-west-2'
          S3_BUCKET: 'lambdabundlesbucket12345-eu-west-2'
          LOCAL_PATH: 'zip'
This will now upload the zipped bundle and our newly created hash file to the target S3 bucket. Notice the .txt suffix on the hash file: this is so S3 classifies the object’s content type correctly and allows us to read the content later in Terraform, as the body of an object can only be read for certain content types.
Now we’ve created our artefacts, we can start to look at actually deploying the Lambda using Terraform.
First off, we need to create some data source references to the objects we previously uploaded. I’m going to assume you have already set up your provider and have a working Terraform config.
variable "lambda_bundles_s3" {
type = string
default = "lambdabundlesbucket12345-eu-west-2
}
data "aws_s3_object" "helloworld" {
bucket = var.lambda_bundles_s3
key = "helloworld.zip"
}
data "aws_s3_object" "helloworld-sha256" {
bucket = var.lambda_bundles_s3
key = "helloworld.sha256.txt"
}
We can now add the Terraform to deploy the Lambda itself:
resource "aws_lambda_function" "helloworld" {
function_name = "helloworld"
role = aws_iam_role.helloworld.arn
runtime = "nodejs18.x"
s3_bucket = var.lambda_bundles_s3
s3_key = data.aws_s3_object.helloworld.key
source_code_hash = chomp(data.aws_s3_object.helloworld-sha256.body)
}
data "aws_iam_policy_document" "helloworld" {
statement {
effect = "Allow"
principals {
type = "Service"
identifiers = ["lambda.amazonaws.com"]
}
actions = ["sts:AssumeRole"]
}
}
resource "aws_iam_role" "helloworld" {
name = "helloworld-role"
assume_role_policy = data.aws_iam_policy_document.helloworld.json
}
This should be enough for us to get the Lambda deployed successfully. Notice the source_code_hash option: this reads the content of the hash file we generated earlier, with chomp trimming any unwanted newline at the end. I had tried multiple combinations of hash comparison, but this is the only one I found to work successfully.
It’s also worth mentioning that we’ve created a role in the Terraform; this is a requirement and is sometimes taken care of for you by tools like Serverless Framework. I personally prefer having that finer-grained control of things like roles to ensure we follow the principle of least privilege. Here we’ve just created a basic role, but you would extend it to grant access to the resources required by the Lambda, such as DynamoDB tables. The same goes for other related resources such as triggers, like EventBridge rules.
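As a rough illustration of what such an extension might look like, the sketch below attaches the AWS-managed basic execution policy for CloudWatch logging and adds a hypothetical read-only DynamoDB policy. The table ARN, account ID and actions are placeholders, not part of the original setup.

# Extend the role: managed policy for CloudWatch logs, plus a hypothetical
# least-privilege DynamoDB policy (table ARN and actions are illustrative).
resource "aws_iam_role_policy_attachment" "helloworld_logs" {
  role       = aws_iam_role.helloworld.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}

data "aws_iam_policy_document" "helloworld_dynamodb" {
  statement {
    effect    = "Allow"
    actions   = ["dynamodb:GetItem", "dynamodb:Query"]
    resources = ["arn:aws:dynamodb:eu-west-2:123456789012:table/helloworld-table"]
  }
}

resource "aws_iam_role_policy" "helloworld_dynamodb" {
  name   = "helloworld-dynamodb"
  role   = aws_iam_role.helloworld.id
  policy = data.aws_iam_policy_document.helloworld_dynamodb.json
}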
Taking this one step further, I use a secondary pipeline (AWS CodeBuild) to deploy the Terraform.
version: 0.2
env:
  parameter-store:
    bootstrap: bootstrap
phases:
  install:
    runtime-versions:
      nodejs: latest
    commands:
      - curl -s -qL -o terraform_install.zip https://releases.hashicorp.com/terraform/1.5.5/terraform_1.5.5_linux_amd64.zip
      - unzip terraform_install.zip -d /usr/bin
      - chmod +x /usr/bin/terraform
      - export
      - terraform init -no-color
      - terraform plan -no-color
      - |
        if expr "${APPLY}" : "true" >/dev/null; then
          terraform apply -auto-approve -no-color
        fi
As previously mentioned, this all depends on a fully configured Terraform stack, with state-file configuration and AWS permissions for the pipeline in question, so it serves only as an example of how to perform the above tasks. I’m hoping to put together a video series on building a sample microservices app, with the above being part of the equation.
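For reference, a minimal sketch of what that state-file configuration might look like, assuming an S3 backend; the bucket, key and lock-table names here are purely illustrative, not those of the actual project.

# A minimal remote-state sketch using the S3 backend. All names below are
# illustrative assumptions.
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"
    key            = "helloworld/terraform.tfstate"
    region         = "eu-west-2"
    dynamodb_table = "terraform-locks"
    encrypt        = true
  }
}

provider "aws" {
  region = "eu-west-2"
}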
To conclude, here we’ve created a simple Lambda function in AWS using Terraform. The code itself was built, bundled and stored prior to deployment, so it was a known entity, or artefact. This is a much safer method of building applications, or more specifically microservices, as the deployment to multiple environments won’t vary and so can be confidently smoke-tested after deployment with the expectation of the same results. Both the build/bundle and the deployment are handled via pipelines, so although the process is very automated, strict controls can also be included to make sure code is only released on approval or after successful code scanning.