If you’ve read the blog posts on CloudJourney.io before, you’ve likely come across the term “Continuous Verification”. If you haven’t, that’s okay too. There’s an amazing article from Dan Illson and Bill Shetti on The New Stack explaining in detail what Continuous Verification is. In a nutshell, Continuous Verification comes down to making sure that DevOps teams put as many checks as possible into their CI/CD pipelines. Adding checks to a pipeline means there are fewer manual tasks, and that means you have access to more data to smooth out and improve your development and deployment process.
In part one we covered the tools and technologies, and in part two we covered the Continuous Integration aspect of the ACME Serverless Fitness Shop. Now it’s time to dive into Infrastructure as Code!
What is the ACME Serverless Fitness Shop?
Let’s go over what the ACME Serverless Fitness Shop is just one more time. It’s a shop that combines the concepts of serverless and fitness, two of my favorite things, because combining two amazing things can only lead to more amazing outcomes. There are seven distinct domains that all contain one or more serverless functions. Some of these services are event-driven, while others have an HTTP API, and all of them are written in Go.
Infrastructure as Code
As with anything, starting with a definition is probably a good idea to make sure we’re all on the same page. The Wikipedia page for Infrastructure as Code describes it as “the process of managing and provisioning computer data centers through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools.”
Slightly paraphrasing that, I think Infrastructure as Code makes our infrastructure programmable. It makes the infrastructure that apps need to run on interesting to developers. When I say infrastructure, I absolutely realize that serverless means you shouldn’t have to worry about servers and virtual machines. However, things like storage, API gateways, and databases are definitely something serverless developers need to think about. There are lots of reasons why Infrastructure as Code is becoming the new norm. It makes provisioning resources a lot faster, especially in the cloud. Infrastructure as Code helps to eliminate human configuration errors, assuming the code itself is free of errors. Ideally, the code also helps you move from one region to another if that’s needed, or deploy to multiple regions. It also helps to mitigate risk: if one of the developers that wrote the Infrastructure as Code leaves, you’ll still have the code, and other developers can take over and carry on.
Tools for Infrastructure as Code
As I wrote in the first part of the series, Infrastructure as Code also means moving the creation of infrastructure into the CI/CD pipeline as much as possible. Luckily, there are a ton of options when it comes to Infrastructure as Code:
- Terraform: This is an awesome tool but as a developer, you’ll have to write the infrastructure in a different language than the rest of your code.
- Serverless Framework: This was one of the first companies that made building and deploying serverless functions easier, and while they do an amazing job, developers still have to orchestrate different parts of their apps.
- AWS CloudFormation and the Serverless Application Model (SAM): The AWS specific language, with a set of awesome templates from SAM, but it requires you to learn a new syntax as well.
- Pulumi: An open-source Infrastructure as Code tool that works across clouds and allows you to create all sorts of resources.
What makes Pulumi different
I’m a developer (or developer advocate), which means I’m most definitely not a YAML expert. The programming languages that I enjoy are TypeScript and Go. When I think about those languages, and pretty much any other programming language as well, I expect things like loops, variables, and the ability to use modules and frameworks. Pulumi is, at least so far, the only tool that mixes infrastructure with actual code. To create three similar IAM roles, I can write a “for” loop as opposed to copying and pasting a statement three times, as the sketch below shows. To me, that is an essential part of the developer experience. While we’re on the subject of developer experience, we tend to expect certain things when it comes to code: nice syntax highlighting, support inside our IDEs, and strongly typed objects. The mix of defining infrastructure with concepts that we know as developers is what sets Pulumi apart from the rest.
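To make that concrete, here is a minimal sketch (not taken from the ACME repositories) of what such a loop could look like inside a Pulumi program written in Go, using the same pulumi and iam packages as the snippets later in this post. The role names and the assume-role policy are illustrative assumptions.
// Create three similar IAM roles for Lambda functions in a single loop;
// the role names here are hypothetical examples
for _, name := range []string{"payment", "order", "shipment"} {
    _, err := iam.NewRole(ctx, fmt.Sprintf("%s-role", name), &iam.RoleArgs{
        Name: pulumi.String(fmt.Sprintf("ACMEServerless-%s-role", name)),
        // Allow the Lambda service to assume this role
        AssumeRolePolicy: pulumi.String(`{
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Principal": {"Service": "lambda.amazonaws.com"},
                "Action": "sts:AssumeRole"
            }]
        }`),
    })
    if err != nil {
        return err
    }
}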
Configuration
All of the domains have a pulumi folder that contains all the configuration and code needed to deploy the services of that domain to AWS. Pulumi works with a configuration file, where you can set variables:
config:
  aws:region: us-west-2 ## The region you want to deploy to
  awsconfig:generic:
    sentrydsn: ## The DSN to connect to Sentry
    accountid: ## Your AWS Account ID
  awsconfig:tags:
    author: retgits ## The author, you...
    feature: acmeserverless ## The resources are part of a specific app (the ACME Serverless Fitness Shop)
    team: vcs ## The team you're on
    version: 0.2.0 ## The version of the app
To use the configuration within a Pulumi program, there are two Go structs that make sure the key/value pairs from the configuration file are available as strongly typed variables.
// Tags are key-value pairs to apply to the resources created by this stack
type Tags struct {
    // Author is the person who created the code, or performed the deployment
    Author pulumi.String
    // Feature is the project that this resource belongs to
    Feature pulumi.String
    // Team is the team that is responsible to manage this resource
    Team pulumi.String
    // Version is the version of the code for this resource
    Version pulumi.String
}

// GenericConfig contains the key-value pairs for the configuration of AWS in this stack
type GenericConfig struct {
    // The AWS region used
    Region string
    // The DSN used to connect to Sentry
    SentryDSN string `json:"sentrydsn"`
    // The AWS AccountID to use
    AccountID string `json:"accountid"`
}
To fill the structs with values, that is, to actually read the configuration and create Go objects from it, Pulumi provides a method called RequireObject. That method fails the deployment when the YAML element it looks for isn’t found.
// Get the region
region, found := ctx.GetConfig("aws:region")
if !found {
    return fmt.Errorf("region not found")
}

// Read the configuration data from Pulumi.<stack>.yaml
conf := config.New(ctx, "awsconfig")

// Create a new Tags object with the data from the configuration
var tags Tags
conf.RequireObject("tags", &tags)

// Create a new GenericConfig object with the data from the configuration
var genericConfig GenericConfig
conf.RequireObject("generic", &genericConfig)
genericConfig.Region = region
Building code
To build the Go executable and the zip file that Lambda needs, you can use Make. However, since we’re already using Go, why not use Go to build and zip the functions? I built a Go module to help do exactly that! In four lines of Go code, the program creates the executable and the zip file. Because Pulumi mixes the creation of infrastructure with actual code, you can also add loops (to build multiple functions) or conditions (to build only when needed), as the second sketch below shows.
// Point the builder at the folder that contains the function's main package
fnFolder := path.Join(wd, "..", "cmd", "lambda-payment-sqs")
buildFactory := builder.NewFactory().WithFolder(fnFolder)

// Compile the Go executable and package it into the zip file that Lambda expects
buildFactory.MustBuild()
buildFactory.MustZip()
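Building on that snippet, here is a hedged sketch of how the same builder module could compile several functions in one loop. The folder names other than lambda-payment-sqs are hypothetical examples.
// Build and zip multiple functions in a single loop; the extra folder names are illustrative
for _, fn := range []string{"lambda-payment-sqs", "lambda-order-sqs"} {
    fnFolder := path.Join(wd, "..", "cmd", fn)
    factory := builder.NewFactory().WithFolder(fnFolder)
    factory.MustBuild()
    factory.MustZip()
}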
Finding resources
Not all resources that your app needs might be deployed in the same stack. Things like SQS queues or DynamoDB tables could be in a completely different stack, but you still need access to those resources to receive messages from them or store data in them.
// Lookup the SQS queues
responseQueue, err := sqs.LookupQueue(ctx, &sqs.LookupQueueArgs{
    Name: fmt.Sprintf("%s-acmeserverless-sqs-payment-response", ctx.Stack()),
})
if err != nil {
    return err
}

requestQueue, err := sqs.LookupQueue(ctx, &sqs.LookupQueueArgs{
    Name: fmt.Sprintf("%s-acmeserverless-sqs-payment-request", ctx.Stack()),
})
if err != nil {
    return err
}
In this case, we need to know the two SQS queues that are used to receive payment requests from and send credit card validation messages to. The names and Amazon Resource Names (ARNs) that identify the queues are needed to configure the right IAM policies and event source mappings.
Creating IAM policies
While I absolutely enjoy the Go SDK that Pulumi offers, there are certainly a few places where AWS’ Serverless Application Model speeds up developer productivity. One of those areas is creating IAM policies. AWS SAM allows you to choose from a list of policy templates to scope the permissions of your Lambda functions to the resources that are used by your application. To get a similar effect within Pulumi, I built a Go module that wraps those policy templates in a way you can use them within virtually any Go app.
// Create a factory to get policies from
iamFactory := sampolicies.NewFactory().WithAccountID(genericConfig.AccountID).WithPartition("aws").WithRegion(genericConfig.Region)

// Add policy documents to allow the function to send messages to the response queue
// and to poll the request queue as an event source
iamFactory.AddSQSSendMessagePolicy(responseQueue.Name)
iamFactory.AddSQSPollerPolicy(requestQueue.Name)

policies, err := iamFactory.GetPolicyStatement()
if err != nil {
    return err
}

_, err = iam.NewRolePolicy(ctx, "ACMEServerlessPaymentSQSPolicy", &iam.RolePolicyArgs{
    Name:   pulumi.String("ACMEServerlessPaymentSQSPolicy"),
    Role:   role.Name,
    Policy: pulumi.String(policies),
})
if err != nil {
    return err
}
These few lines of Go create an IAM policy that allows the Lambda function using it to receive messages from and send messages to the two queues we looked up above. The Go module saves me from writing a bunch of IAM statements by hand.
Deploying functions
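The snippet below references an environment variable map and a tagMap that are created earlier in the program. As a hedged sketch, and with illustrative variable names and keys, those could be assembled along these lines:
// Environment variables for the Lambda function; the keys shown here are illustrative
environment := lambda.FunctionEnvironmentArgs{
    Variables: pulumi.StringMap{
        "REGION":     pulumi.String(genericConfig.Region),
        "SENTRY_DSN": pulumi.String(genericConfig.SentryDSN),
    },
}

// Tags applied to the function, taken from the Tags struct read from the configuration
tagMap := map[string]pulumi.Input{
    "Author":  tags.Author,
    "Feature": tags.Feature,
    "Team":    tags.Team,
    "Version": tags.Version,
}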
// Create the AWS Lambda function
functionArgs := &lambda.FunctionArgs{
    Description: pulumi.String("A Lambda function to validate creditcard payments"),
    Runtime:     pulumi.String("go1.x"),
    Name:        pulumi.String(fmt.Sprintf("%s-lambda-payment", ctx.Stack())),
    MemorySize:  pulumi.Int(256),
    Timeout:     pulumi.Int(10),
    Handler:     pulumi.String("lambda-payment-sqs"),
    Environment: environment,
    Code:        pulumi.NewFileArchive("../cmd/lambda-payment-sqs/lambda-payment-sqs.zip"),
    Role:        role.Arn,
    Tags:        pulumi.Map(tagMap),
}

function, err := lambda.NewFunction(ctx, fmt.Sprintf("%s-lambda-payment", ctx.Stack()), functionArgs)
if err != nil {
    return err
}

_, err = lambda.NewEventSourceMapping(ctx, fmt.Sprintf("%s-lambda-payment", ctx.Stack()), &lambda.EventSourceMappingArgs{
    BatchSize:      pulumi.Int(1),
    Enabled:        pulumi.Bool(true),
    FunctionName:   function.Arn,
    EventSourceArn: pulumi.String(requestQueue.Arn),
})
if err != nil {
    return err
}
The function arguments are pretty much the same as they would be in any other tool to deploy to AWS Lambda. For example, it has the same variables for the runtime, the memory size, and the IAM role that you’d also see in the AWS console. In Lambda, functions can be triggered via HTTP calls or events. In the case of the Payment service, the function is triggered by a message from an SQS queue. To make sure that trigger reaches the function, you need a “NewEventSourceMapping()”. That event source mapping has all the data it needs to connect the function to the SQS queue. The mapping relies on the IAM role from the function arguments to make sure the function is allowed to receive messages from the queue, and it will throw an error if it doesn’t have the required permissions.
Why use Pulumi and how does Continuous Verification play a role?
Now that I’ve walked through the Go code to deploy a Lambda function to AWS, you might be asking yourself “why would I use Pulumi?” That’s a valid question; the Pulumi code is about twice the size of the CloudFormation template that does the same thing. The answer to why Pulumi will be different for everyone, but for me, it comes down to the same arguments we looked at earlier. Pulumi allows me to write my deployments in the same language as the rest of my code, gives me strongly typed variables, and gives me access to all the resources I have while developing code. To me, things like IDE support, testing, and language features like loops are very helpful to build and maintain the serverless infrastructure that powers the ACME Serverless Fitness Shop.
The reasons why I love Pulumi are also why it fits so well within the concept of Continuous Verification. The previews that Pulumi offers out of the box, the ability to verify that everything has been created as you expect it to be, and the ability to iterate on your development all help you make an informed decision about whether or not your code should go to production.
What’s next?
We’ve looked at the role Pulumi plays in the ACME Serverless Fitness Shop. Next time, I’ll dive a bit more into the observability side of serverless with VMware Tanzu Observability by Wavefront. In the meantime, let me know your thoughts and send me or the team a note on Twitter.
Photo by panumas nikhomkhai from Pexels