Terraform & serverless framework, a match made in heaven? (Part III)

A. Jasinski
7 min read · Mar 12, 2021

More complex deployments of AWS resources using the serverless framework.

Previously, in Part II of this series, we looked at how we can deploy a simple HTTP endpoint and a relatively simple set of resources using the serverless framework.

Here we will continue building on those foundations, expanding the simple HTTP application to use DynamoDB as its datastore whilst controlling access using IAM policies.

Please do bear in mind that this series is AWS and Node.js biased, but similar concepts apply across any cloud and any programming language.

As always, the end goal of what we will be working towards deserves a pictorial representation, so here it is:

The “post” and “fetch” functions using DynamoDB as a datastore with custom IAM roles providing only required permissions.

As you can see, we will be writing a little more code, creating simple fetch and post methods to save and retrieve data from the store. All of that will be governed by IAM roles and policies specific to the service and to each function, providing us with a solid foundation based on the well-known principle of least privilege (PoLP).

We achieve such security measures by granting access to specific resources, such as the DynamoDB table, and to specific actions, such as Scan, on a per-function basis.

Defining more complex infrastructure resources

Let's kick off by making changes to our serverless.yml file, where we need to accommodate the creation of the table and adapt the use of IAM roles.

# Name of our service
service: myservice
# define a variable for the table name
custom:
  tableName: 'myservice-${self:provider.stage}'
# Name and runtime of the selected provider
provider:
  name: aws
  runtime: nodejs12.x
  role: serviceRole
  environment:
    DDB_TABLE: ${self:custom.tableName}
    DEPLOYMENT_REGION: ${self:provider.region}
# Function definition
functions:
  post:
    handler: handler.post
    role: postRole
    events:
      - httpApi:
          method: POST
          path: /
  fetch:
    handler: handler.fetch
    role: fetchRole
    events:
      - httpApi:
          method: GET
          path: /
resources:
  Resources:
    # shared service role
    serviceRole:
      Type: AWS::IAM::Role
      Properties:
        Path: /
        RoleName:
          'Fn::Join':
            - '-'
            - - 'iam-rol'
              - 'lambda'
              - 'myservice'
        AssumeRolePolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Principal:
                Service:
                  - lambda.amazonaws.com
              Action: sts:AssumeRole
        Policies:
          - PolicyName:
              'Fn::Join':
                - '-'
                - - 'iam-pol'
                  - 'lambda'
                  - 'myservice'
                  - 'execution'
            PolicyDocument:
              Version: '2012-10-17'
              Statement:
                - Effect: Allow
                  Action:
                    - logs:CreateLogGroup
                    - logs:CreateLogStream
                    - logs:PutLogEvents
                  Resource:
                    - 'Fn::Join':
                        - ':'
                        - - 'arn:aws:logs'
                          - Ref: 'AWS::Region'
                          - Ref: 'AWS::AccountId'
                          - 'log-group:/aws/lambda/*:*:*'
    # post api role to put new item
    postRole:
      Type: AWS::IAM::Role
      Properties:
        Path: /
        RoleName:
          'Fn::Join':
            - '-'
            - - 'iam-rol'
              - 'lambda'
              - 'myservice'
              - 'post'
        AssumeRolePolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Principal:
                Service:
                  - lambda.amazonaws.com
              Action: sts:AssumeRole
        Policies:
          - PolicyName:
              'Fn::Join':
                - '-'
                - - 'iam-pol'
                  - 'lambda'
                  - 'myservice'
                  - 'post'
                  - 'dynamodb'
            PolicyDocument:
              Version: '2012-10-17'
              Statement:
                - Effect: Allow
                  Action:
                    - dynamodb:PutItem
                  Resource:
                    Fn::GetAtt:
                      - dynamoDbTable
                      - Arn
    # fetch api role to scan the table
    fetchRole:
      Type: AWS::IAM::Role
      Properties:
        Path: /
        RoleName:
          'Fn::Join':
            - '-'
            - - 'iam-rol'
              - 'lambda'
              - 'myservice'
              - 'fetch'
        AssumeRolePolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Principal:
                Service:
                  - lambda.amazonaws.com
              Action: sts:AssumeRole
        Policies:
          - PolicyName:
              'Fn::Join':
                - '-'
                - - 'iam-pol'
                  - 'lambda'
                  - 'myservice'
                  - 'fetch'
                  - 'dynamodb'
            PolicyDocument:
              Version: '2012-10-17'
              Statement:
                - Effect: Allow
                  Action:
                    - dynamodb:Scan
                    - dynamodb:GetItem
                  Resource:
                    Fn::GetAtt:
                      - dynamoDbTable
                      - Arn
    # table resource definition
    dynamoDbTable:
      Type: AWS::DynamoDB::Table
      Properties:
        # set up attribute definitions
        AttributeDefinitions:
          - AttributeName: messageId
            AttributeType: S
        KeySchema:
          - AttributeName: messageId
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
        # define table name using the custom field
        TableName: ${self:custom.tableName}

As you can see, the file is getting quite long, so let's go through some of the critical bits we define:

  • the tableName variable is set within the custom field and is available for use throughout the file; its initial value will be myservice-dev (the stage typically defaults to dev)
  • the service-level environment variables DDB_TABLE & DEPLOYMENT_REGION will be available to both the post and fetch functions
  • the myservice service uses the serviceRole IAM role with an execution policy (iam-pol-lambda-myservice-execution)
  • the fetch function uses the IAM role iam-rol-lambda-myservice-fetch with a policy iam-pol-lambda-myservice-fetch-dynamodb that grants the fetch function access to the Scan & GetItem operations on the DynamoDB table (the sketch after this list shows how these scoped permissions behave at runtime)
  • the post function uses the IAM role iam-rol-lambda-myservice-post with a policy iam-pol-lambda-myservice-post-dynamodb that grants the post function access to the PutItem operation on the DynamoDB table
  • the dynamoDbTable resource, with its name taken from the custom tableName variable and messageId as its key
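
To make the effect of these scoped policies concrete, here is a purely illustrative sketch of what would happen if code running under the fetch function's role attempted a write. Because iam-rol-lambda-myservice-fetch only allows Scan and GetItem, DynamoDB would reject the call with an AccessDeniedException. The handler name naughtyWrite is hypothetical and not part of the service we are building:

'use strict';
const AWS = require('aws-sdk');

const ddb = new AWS.DynamoDB.DocumentClient({ region: process.env.DEPLOYMENT_REGION });

// Hypothetical handler executed under iam-rol-lambda-myservice-fetch:
// PutItem is outside that role's policy, so the write is denied.
module.exports.naughtyWrite = async () => {
  try {
    await ddb.put({
      TableName: process.env.DDB_TABLE,
      Item: { messageId: 'blocked', message: 'this write is not permitted' }
    }).promise();
    return { statusCode: 200, body: 'this should never be reached' };
  } catch (error) {
    // Expected: AccessDeniedException raised by DynamoDB
    console.log(`${error.code}: ${error.message}`);
    return { statusCode: 403, body: JSON.stringify({ error: error.code }) };
  }
};

This is exactly the behaviour we want from the principle of least privilege: a compromised or buggy fetch function simply cannot modify the table.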

Serverless function to read and write to DynamoDB

Now, we are ready to change our handler.js file. The idea here is to create post and fetch functions that will be able to save and retrieve data from our DynamoDB table.

Let’s update the handler.js with the following code:

'use strict';

const AWS = require('aws-sdk'); // import AWS SDK

const DDB_TABLE = process.env.DDB_TABLE; // name of the DynamoDB table
const DEPLOYMENT_REGION = process.env.DEPLOYMENT_REGION; // region the lambda runs in

// instantiate DynamoDB (ddb) document client
const ddb = new AWS.DynamoDB.DocumentClient({
  apiVersion: '2012-08-10',
  region: DEPLOYMENT_REGION
});

// write a message to the table
module.exports.post = async event => {
  console.log('Executing post @', new Date().toUTCString());
  console.log('Request: ' + JSON.stringify(event));
  const body = JSON.parse(event.body);

  const params = {
    TableName: DDB_TABLE,
    Item: {
      messageId: new Date().getTime().toString(),
      message: body.message
    },
  };
  try {
    // write data (message) to the DynamoDB table
    const data = await ddb.put(params).promise();
    return { statusCode: 200, body: JSON.stringify({ event, params, data }) };
  } catch (error) {
    console.log(`Error -> Post: ${error.stack}`);
    return { statusCode: 400, body: JSON.stringify({ request: { event, params }, error: `Error -> Post: ${error.stack}` }) };
  }
};

// fetch the data from the table
module.exports.fetch = async event => {
  console.log('Executing fetch @', new Date().toUTCString());
  console.log('Request: ' + JSON.stringify(event));

  const params = {
    TableName: DDB_TABLE
  };
  try {
    // fetch data (messages) from the DynamoDB table
    const data = await ddb.scan(params).promise();
    return { statusCode: 200, body: JSON.stringify({ event, params, data }) };
  } catch (error) {
    console.log(`Error -> Fetch: ${error.stack}`);
    return { statusCode: 400, body: JSON.stringify({ request: { event, params }, error: `Error -> Fetch: ${error.stack}` }) };
  }
};

The above code allows us to use the AWS SDK for Node.js and execute the Scan and PutItem operations on our new table.

We use the service-level environment variable DEPLOYMENT_REGION to instantiate the new DynamoDB.DocumentClient, which allows both functions (post/fetch) to execute the appropriate operations on the table defined in the DDB_TABLE service-level environment variable.

In the post function, we create a new item with messageId being the timestamp of the message and message being the content of the message property of the incoming event body.

The fetch function simply uses the same DDB_TABLE variable to retrieve all the items in the table using the scan operation.
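
Once the stack is deployed (we will do that shortly), the handlers can also be invoked directly from a local Node.js script, which is handy for quick smoke tests without going through the HTTP API. Below is a hypothetical sketch (test-local.js is just an example name); it assumes AWS credentials with access to the table and that DDB_TABLE and DEPLOYMENT_REGION are exported in your shell:

// test-local.js (hypothetical) - run with: node test-local.js
// Assumes DDB_TABLE, DEPLOYMENT_REGION and AWS credentials are set locally.
const { post, fetch } = require('./handler');

(async () => {
  // simulate the body that the HTTP API would deliver to the post handler
  const postResponse = await post({ body: JSON.stringify({ message: 'Hello from a local test' }) });
  console.log('post ->', postResponse.statusCode);

  const fetchResponse = await fetch({});
  const { data } = JSON.parse(fetchResponse.body);
  console.log('fetch ->', fetchResponse.statusCode, 'items:', data.Items.length);
})();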

If you remember from Part II, we can use the package command to manually initiate transpilation and packaging of the functions, like so:

sls package

After packaging is complete, we can again go through the output in the .serverless directory and poke around the CloudFormation templates that were generated. You can find more info on that in Part II.
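
If you would rather list the generated resources programmatically than eyeball the JSON, a quick, hypothetical one-off script such as the one below prints every resource the framework is about to create (the template file name may differ between framework versions):

// list-resources.js (hypothetical) - run from the service root after `sls package`
const template = require('./.serverless/cloudformation-template-update-stack.json');

for (const [logicalId, resource] of Object.entries(template.Resources)) {
  console.log(`${logicalId}: ${resource.Type}`);
}
// We would expect our three IAM roles and the DynamoDB table to appear here,
// alongside the log groups, Lambda functions and HTTP API the framework generates.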

From here we can deploy our service:

serverless deploy --region eu-west-2

After a successful deployment, it is possible to get a list of resources using the following AWS CLI command:

aws --region eu-west-2 cloudformation describe-stack-resources \
--stack-name myservice-dev --output json \
--query 'StackResources[*].[LogicalResourceId, PhysicalResourceId, ResourceType]'

We would expect to see three IAM roles (the service role and the two function-level roles) with associated policies, as well as the new DynamoDB table.

We can also double check that the correct roles exist using:

aws --region eu-west-2 iam list-roles --query 'Roles[*].RoleName'

## expected output:
#
# iam-rol-lambda-myservice
# iam-rol-lambda-myservice-fetch
# iam-rol-lambda-myservice-post

And then for a single role check for an associated policy:

aws --region eu-west-2 iam list-role-policies --role-name iam-rol-lambda-myservice-fetch

## expected output:
#
# iam-pol-lambda-myservice-fetch-dynamodb

Next, let's poke around DynamoDB and check that the table exists:

aws --region eu-west-2 dynamodb list-tables

## expected output:
#
# "myservice-dev"

And that there are no items in the table:

aws --region eu-west-2 dynamodb scan --table-name myservice-dev

## expected output:
#
# "Items": [],
# "Count": 0

Finally, we can test our function. First, we can create a new item in the datastore:

curl -X POST -H "Content-Type: application/json" \
-d '{"message": "This is my cURL HTTP message"}' \
https://c6x1c1m6u0.execute-api.eu-west-2.amazonaws.com/

Upon success, we will receive an HTTP 200 response; otherwise, a 400 status code with the error details in the response payload.
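
If you prefer exercising the endpoint from Node.js rather than cURL, here is a minimal equivalent sketch using the built-in https module (substitute the endpoint URL that serverless deploy printed for your stack):

// post-message.js (hypothetical) - run with: node post-message.js
const https = require('https');

const payload = JSON.stringify({ message: 'This is my Node.js HTTP message' });

const req = https.request(
  'https://c6x1c1m6u0.execute-api.eu-west-2.amazonaws.com/',
  { method: 'POST', headers: { 'Content-Type': 'application/json' } },
  res => {
    console.log('Status:', res.statusCode);
    res.on('data', chunk => process.stdout.write(chunk));
  }
);

req.on('error', error => console.error(error));
req.write(payload);
req.end();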

If all went well, let's check for the data in the database by running a fetch using the GET method:

curl -X GET -H "Content-Type: application/json" \
https://c6x1c1m6u0.execute-api.eu-west-2.amazonaws.com/
# expected output:
#
# "data":{"Items":[{"messageId":"1604797685270","message":"This is my cURL HTTP message"}]}

We can also run a manual check for the items using AWS CLI:

aws --region eu-west-2 dynamodb scan --table-name myservice-dev

## expected output:
#
# "Items": [
#     {
#         "messageId": {
#             "S": "1604797685270"
#         },
#         "message": {
#             "S": "This is my cURL HTTP message"
#         }
#     }
# ],
# "Count": 1

And that is it! We are now able to call our functions to store and fetch the data from the DynamoDB table.

And with that, you have reached the end of Part III. We have covered how to deploy more complex applications and supporting infrastructure resources using the serverless framework.

We are now ready to start thinking about deploying our supporting resources, such as the DynamoDB table and IAM permissions, using Terraform, and incorporating those into our Lambda functions.

We will also explore an exciting pattern used to bridge Terraform and the serverless framework.

Again, if you are ready for more, then I shall see you in Part IV…
