Black Sheep Code

Create an AWS ECS + Postgres application using Terraform CDK

The following is a quick guide to running a simple Todo Docker container on AWS Elastic Container Service (ECS), talking to an AWS RDS Postgres instance, using Terraform CDK (CDKTF).

First make sure you have CDKTF installed via the instructions here.

Also make sure you have the AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_REGION environment variables set in your local environment.
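For example, you can export these in your shell before running any cdktf commands - the values below are placeholders, substitute your own credentials and region:

```shell
# Placeholder values - substitute your own credentials and region.
export AWS_ACCESS_KEY_ID="<your-access-key-id>"
export AWS_SECRET_ACCESS_KEY="<your-secret-access-key>"
export AWS_REGION="us-east-1"
```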

This example is largely adapted from this official example here, though a decent amount of finagling was required.

I've found it's actually quite difficult to google for these guides, so hopefully this is helpful for someone.

Our application

Our application is a simple todo app running on ExpressJS.

We can see the core logic here:
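As a sketch of what that core logic might look like, here is a minimal in-memory version, independent of the Express routing - in the real app these operations would read and write Postgres, and the names and shapes here are illustrative:

```typescript
// A minimal in-memory sketch of the todo logic. The real app stores
// todos in Postgres; these shapes and function names are illustrative.
interface Todo {
  id: number;
  title: string;
  done: boolean;
}

const todos: Todo[] = [];
let nextId = 1;

// Create a todo and return it.
function addTodo(title: string): Todo {
  const todo = { id: nextId++, title, done: false };
  todos.push(todo);
  return todo;
}

// Mark a todo as done; returns undefined if the id is unknown.
function completeTodo(id: number): Todo | undefined {
  const todo = todos.find((t) => t.id === id);
  if (todo) todo.done = true;
  return todo;
}

// Return a copy of the current list.
function listTodos(): Todo[] {
  return [...todos];
}
```

In the real app, each of these would back an Express route handler (e.g. `POST /todos`, `GET /todos`).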

The CDKTF configuration

Create the CDKTF boilerplate

```shell
mkdir infra
cd infra
cdktf init
```

(Follow prompts, use TypeScript as your language, install the aws, docker, null and random providers).

Add the AWS VPC and RDS modules to your cdktf.json
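For reference, the module entries in cdktf.json might look something like this - the sources are the community terraform-aws-modules, and the version pins are assumptions:

```json
{
  "language": "typescript",
  "app": "npx ts-node main.ts",
  "terraformProviders": ["aws@~> 4.0", "docker@~> 2.0", "null@~> 3.0", "random@~> 3.0"],
  "terraformModules": [
    { "name": "vpc", "source": "terraform-aws-modules/vpc/aws", "version": "~> 3.0" },
    { "name": "rds", "source": "terraform-aws-modules/rds/aws", "version": "~> 4.0" }
  ]
}
```

After editing, run `cdktf get` to generate the typed bindings for these modules.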

Populate your main.ts

If all you wanted was the raw boilerplate, you can stop reading here - the rest of the post explains main.ts.

Explaining all the bits

Instantiate providers

All of the Terraform providers we use need to be instantiated.
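Inside the stack constructor, that looks something like the following sketch - the import paths follow the prebuilt provider packages and may differ between CDKTF versions, and the region is an assumption:

```typescript
import { AwsProvider } from "@cdktf/provider-aws/lib/provider";
import { DockerProvider } from "@cdktf/provider-docker/lib/provider";
import { NullProvider } from "@cdktf/provider-null/lib/provider";
import { RandomProvider } from "@cdktf/provider-random/lib/provider";

// Inside the TerraformStack constructor:
new AwsProvider(this, "aws", { region: "us-east-1" }); // region is an assumption
new DockerProvider(this, "docker");
new NullProvider(this, "null");
new RandomProvider(this, "random");
```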

Create a VPC

A VPC serves as a 'grouping' for our application, where we can logically separate the various components and allow only those that need to talk to each other to do so.

Importantly, this isolates our application from the internet at large. We rely on AWS's infrastructure to prevent the resources in our VPC from being probed by the wider internet, meaning our internal components aren't directly exposed to brute-force or DDoS attacks.
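A sketch of the VPC, using the terraform-aws-modules VPC bindings generated by `cdktf get` - the CIDRs and availability zones here are illustrative:

```typescript
import { Vpc } from "./.gen/modules/vpc";

const vpc = new Vpc(this, "vpc", {
  name: "todo-vpc",
  cidr: "10.0.0.0/16",
  azs: ["us-east-1a", "us-east-1b"],
  // Private subnets for the ECS tasks and the database, public for the load balancer.
  privateSubnets: ["10.0.1.0/24", "10.0.2.0/24"],
  publicSubnets: ["10.0.101.0/24", "10.0.102.0/24"],
  enableNatGateway: true, // lets private subnets make outbound calls (e.g. pull images)
});
```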

Create an ECS Cluster

Here we declare an ECS cluster, and we declare its task - to run a Docker image. We add roles and configuration to allow the application to write logs.
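A sketch using the prebuilt AWS provider constructs - names, sizing and the log configuration are illustrative, and `executionRole` and `imageUri` are hypothetical references to the IAM role and ECR image defined elsewhere in main.ts:

```typescript
import { EcsCluster } from "@cdktf/provider-aws/lib/ecs-cluster";
import { EcsTaskDefinition } from "@cdktf/provider-aws/lib/ecs-task-definition";
import { CloudwatchLogGroup } from "@cdktf/provider-aws/lib/cloudwatch-log-group";

const cluster = new EcsCluster(this, "cluster", { name: "todo-cluster" });
const logs = new CloudwatchLogGroup(this, "logs", { name: "/ecs/todo-app" });

const task = new EcsTaskDefinition(this, "task", {
  family: "todo-app",
  requiresCompatibilities: ["FARGATE"],
  networkMode: "awsvpc",
  cpu: "256",
  memory: "512",
  executionRoleArn: executionRole.arn, // hypothetical IAM role allowing log writes and image pulls
  containerDefinitions: JSON.stringify([
    {
      name: "todo-app",
      image: imageUri, // hypothetical - the ECR image we build and push below
      portMappings: [{ containerPort: 3000 }],
      logConfiguration: {
        logDriver: "awslogs",
        options: {
          "awslogs-group": logs.name,
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "todo",
        },
      },
    },
  ]),
});
```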

Create and expose load balancer

Here we create an AWS Application Load Balancer - for our purposes this serves as a mechanism for selectively exposing components in our VPC - in this case, our running Docker container.
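A sketch of the load balancer pieces - `publicSubnetIds` and `lbSecurityGroup` are hypothetical references to the VPC outputs and a security group defined elsewhere:

```typescript
import { Lb } from "@cdktf/provider-aws/lib/lb";
import { LbTargetGroup } from "@cdktf/provider-aws/lib/lb-target-group";
import { LbListener } from "@cdktf/provider-aws/lib/lb-listener";

const lb = new Lb(this, "lb", {
  name: "todo-lb",
  internal: false, // exposed to the internet
  loadBalancerType: "application",
  subnets: publicSubnetIds, // hypothetical - from the VPC module
  securityGroups: [lbSecurityGroup.id], // hypothetical
});

const targetGroup = new LbTargetGroup(this, "target-group", {
  name: "todo-tg",
  port: 3000,
  protocol: "HTTP",
  targetType: "ip", // Fargate tasks register by IP
  vpcId: vpc.vpcIdOutput,
});

new LbListener(this, "listener", {
  loadBalancerArn: lb.arn,
  port: 80,
  protocol: "HTTP",
  defaultAction: [{ type: "forward", targetGroupArn: targetGroup.arn }],
});
```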

Create a Postgres Database

Some notes here:

By default AWS will require your Postgres instance to accept only SSL connections, which causes this common error: connect to PostgreSQL server: FATAL: no pg_hba.conf entry for host.

So we turn it off. Alternatively, we would need to bundle the PEM file for AWS's default SSL certificate with our application, but for simplicity's sake we'll turn SSL enforcement off.

If manageMasterUserPassword is on, then the configuration will completely ignore the password we provide and instead create a secret using Secrets Manager.

For our purposes we want to provide the password via an environment variable, and I can't see a way to otherwise retrieve the secret at deploy time.

We could provide the secret arn, which is accessible as db.dbInstanceMasterUserSecretArnOutput and then at runtime retrieve the secret via AWS's SDK.

However, we still shouldn't provide the database root password to our application. What we really should do at this point is create specific, limited-privilege credentials for our application(s) to use.
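Putting those notes together, the database declaration might look something like this sketch, using the terraform-aws-modules RDS bindings - the sizing, names and exact property spellings are assumptions, and `dbPassword` is a hypothetical value (e.g. generated with the random provider):

```typescript
import { Rds } from "./.gen/modules/rds";

const db = new Rds(this, "db", {
  identifier: "todo-db",
  engine: "postgres",
  engineVersion: "14",
  family: "postgres14",
  instanceClass: "db.t3.micro",
  allocatedStorage: "5",
  dbName: "todos",
  username: "postgres",
  password: dbPassword, // hypothetical - e.g. from the random provider
  manageMasterUserPassword: false, // otherwise the password above is ignored
  // Disable forced SSL to avoid the pg_hba.conf error described above.
  parameters: [{ name: "rds.force_ssl", value: "0" }],
  subnetIds: vpc.privateSubnetsOutput, // hypothetical - from the VPC module
});
```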

Create Docker Image

Here we run the command to build the Docker image locally and push it up to AWS ECR.

Note the use of the asset hash here: we only rebuild the image if any of the composing artifacts (i.e. the source code) have changed.
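That step can be sketched with a TerraformAsset plus a null resource running a local-exec provisioner - the app path is an assumption and `repoUrl` is a hypothetical reference to an ECR repository defined elsewhere:

```typescript
import * as path from "path";
import { TerraformAsset, AssetType } from "cdktf";
import { Resource } from "@cdktf/provider-null/lib/resource";

const asset = new TerraformAsset(this, "app-source", {
  path: path.resolve(__dirname, "../app"), // path is an assumption
  type: AssetType.DIRECTORY,
});

new Resource(this, "build-and-push", {
  // Re-run only when the source code (and hence the asset hash) changes.
  triggers: { hash: asset.assetHash },
  provisioners: [
    {
      type: "local-exec",
      command: `docker build -t ${repoUrl}:latest ${asset.path} && docker push ${repoUrl}:latest`,
    },
  ],
});
```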

Run Docker Image

We run our docker image, passing in the requisite environment variables.
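A sketch of the ECS service that runs the task - the environment variables themselves (such as the database connection string) go in the task definition's containerDefinitions as an `environment` array, and `privateSubnetIds` and `serviceSecurityGroup` are hypothetical references:

```typescript
import { EcsService } from "@cdktf/provider-aws/lib/ecs-service";

new EcsService(this, "service", {
  name: "todo-service",
  cluster: cluster.id,
  taskDefinition: task.arn, // the task definition from the ECS cluster section
  desiredCount: 1,
  launchType: "FARGATE",
  networkConfiguration: {
    subnets: privateSubnetIds, // hypothetical - from the VPC module
    securityGroups: [serviceSecurityGroup.id], // hypothetical
  },
  loadBalancer: [
    {
      containerName: "todo-app",
      containerPort: 3000,
      targetGroupArn: targetGroup.arn, // from the load balancer section
    },
  ],
});
```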

Add CloudFront

We add the CloudFront CDN - conveniently giving us an SSL certificate.
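A sketch of the distribution, pointing CloudFront at the load balancer and using the free default certificate - the cache behaviour settings here are illustrative:

```typescript
import { CloudfrontDistribution } from "@cdktf/provider-aws/lib/cloudfront-distribution";

new CloudfrontDistribution(this, "cdn", {
  enabled: true,
  origin: [
    {
      originId: "lb",
      domainName: lb.dnsName, // the load balancer from earlier
      customOriginConfig: {
        originProtocolPolicy: "http-only", // CloudFront terminates HTTPS for us
        httpPort: 80,
        httpsPort: 443,
        originSslProtocols: ["TLSv1.2"],
      },
    },
  ],
  defaultCacheBehavior: {
    allowedMethods: ["GET", "HEAD", "OPTIONS", "PUT", "POST", "PATCH", "DELETE"],
    cachedMethods: ["GET", "HEAD"],
    targetOriginId: "lb",
    viewerProtocolPolicy: "redirect-to-https",
    forwardedValues: {
      queryString: true,
      cookies: { forward: "all" },
    },
  },
  restrictions: { geoRestriction: { restrictionType: "none" } },
  viewerCertificate: { cloudfrontDefaultCertificate: true }, // the free default cert
});
```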
