Create an AWS ECS + Postgres application using Terraform CDK
The following is a quick guide to running a simple Todo app as a Docker container on AWS Elastic Container Service (ECS), talking to an AWS RDS Postgres instance, all provisioned with Terraform CDK.
First make sure you have CDKTF installed via the instructions here.
Also make sure you have the AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_REGION environment variables set in your local environment.
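For example (placeholder values):

```sh
export AWS_ACCESS_KEY_ID=AKIA...       # your access key id
export AWS_SECRET_ACCESS_KEY=...       # your secret access key
export AWS_REGION=ap-southeast-2       # the region this guide deploys to
```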
This example is largely adapted from this official example here, though a decent amount of finagling was required.
I've found it's actually quite difficult to google for these guides, so hopefully this is helpful for someone.
Our application
Our application is a simple todo app running on ExpressJS.
We can see the core logic here:
Lines 5 to 45 in 7faa1c2
```ts
export function createTodosApp(db: Db): Express {
  const app = express();
  app.use(express.json());

  // Health check
  app.get("/ready", (req, res) => {
    res.sendStatus(200);
  });

  app.get('/todos', async (req: Request, res: Response) => {
    const todos = await db.getAllTodos();
    res.json(todos);
  });

  app.get('/todos/:id', async (req: Request, res: Response) => {
    const todo = await db.getSingleTodo(req.params.id);
    if (!todo) {
      return res.status(404).json({ message: 'Todo not found' });
    }
    res.json(todo);
  });

  app.post('/todos', async (req: Request, res: Response) => {
    const { description, isComplete } = req.body;

    console.log(description, isComplete);

    // TODO use zod here.
    if (!description || typeof isComplete !== 'boolean') {
      return res.status(400).json({ message: 'Please provide description, and isComplete fields' });
    }
    const newTodo = await db.createTodo({ description, isComplete });
    res.status(201).json(newTodo);
  });
  // ...
```
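For context, the app gets wired up to Postgres in an entrypoint along these lines. This is a minimal sketch rather than the repo's actual code: the "./app" module path, the shape of Db, and the todos table and column names are all assumptions.

```ts
import { Pool } from "pg";
import { createTodosApp } from "./app"; // assumed module path

// Connection details come from the PG_* environment variables the infra passes in later.
const pool = new Pool({
  user: process.env.PG_USER,
  password: process.env.PG_PASSWORD,
  database: process.env.PG_DATABASE,
  host: process.env.PG_HOST,
  port: Number(process.env.PG_PORT ?? 5432),
});

// A minimal Db implementation against an assumed `todos` table.
const db = {
  getAllTodos: async () => (await pool.query("SELECT * FROM todos")).rows,
  getSingleTodo: async (id: string) =>
    (await pool.query("SELECT * FROM todos WHERE id = $1", [id])).rows[0],
  createTodo: async (todo: { description: string; isComplete: boolean }) =>
    (
      await pool.query(
        "INSERT INTO todos (description, is_complete) VALUES ($1, $2) RETURNING *",
        [todo.description, todo.isComplete]
      )
    ).rows[0],
};

const port = Number(process.env.PORT ?? 3000);
createTodosApp(db).listen(port, () => console.log(`Listening on ${port}`));
```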
The CDKTF configuration
Create the CDKTF boilerplate
```sh
mkdir infra
cd infra
cdktf init
```
(Follow the prompts: choose TypeScript as your language, and install the aws, docker, null and random providers.)
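If you skipped the provider step during init, you can add the prebuilt provider packages afterwards. Assuming an npm-based project:

```sh
npm install @cdktf/provider-aws @cdktf/provider-docker @cdktf/provider-null @cdktf/provider-random
```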
Add the AWS VPC and RDS modules to your cdktf.json
Lines 7 to 10 in 7faa1c2
```json
  "terraformModules": [
    "terraform-aws-modules/vpc/aws@~> 5.5.1",
    "terraform-aws-modules/rds/aws@~> 6.4.0"
  ],
```
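Then run cdktf get so the typed bindings for these modules are generated under .gen/ (they're imported from ./.gen/modules/... in main.ts below):

```sh
cdktf get
```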
Populate your main.ts
Lines 1 to 622 in 7faa1c2
```ts
import { Construct } from "constructs";
import { App, TerraformStack, Fn, TerraformAsset, TerraformOutput } from "cdktf";
import { Vpc } from "./.gen/modules/terraform-aws-modules/aws/vpc";
import { AwsProvider } from "@cdktf/provider-aws/lib/provider";
import { EcsCluster } from "@cdktf/provider-aws/lib/ecs-cluster";
import { Resource } from "@cdktf/provider-null/lib/resource";
import { IamRole } from "@cdktf/provider-aws/lib/iam-role";
import { EcsTaskDefinition } from "@cdktf/provider-aws/lib/ecs-task-definition";
import { Rds } from "./.gen/modules/terraform-aws-modules/aws/rds";

import { SecurityGroup } from "@cdktf/provider-aws/lib/security-group";
import { Password } from "@cdktf/provider-random/lib/password";
import { Lb } from "@cdktf/provider-aws/lib/lb";
import { LbListener } from "@cdktf/provider-aws/lib/lb-listener";
import { EcsService } from "@cdktf/provider-aws/lib/ecs-service";
import { LbListenerRule } from "@cdktf/provider-aws/lib/lb-listener-rule";
import { LbTargetGroup } from "@cdktf/provider-aws/lib/lb-target-group";
import { EcrRepository } from "@cdktf/provider-aws/lib/ecr-repository";
import { DataAwsEcrAuthorizationToken } from "@cdktf/provider-aws/lib/data-aws-ecr-authorization-token";
import { CloudfrontDistribution } from "@cdktf/provider-aws/lib/cloudfront-distribution";
import path = require("path");
import { CloudwatchLogGroup } from "@cdktf/provider-aws/lib/cloudwatch-log-group";
import { NullProvider } from "@cdktf/provider-null/lib/provider";
import { RandomProvider } from "@cdktf/provider-random/lib/provider";

const BACKEND_ORIGIN_ID = "backendOrigin";

const REGION = "ap-southeast-2";
const PROJECT_NAME = "k6-test";
const tags = {
  projectName: PROJECT_NAME,
};

class Cluster extends Construct {
  public cluster: EcsCluster;
  constructor(scope: Construct, clusterName: string) {
    super(scope, clusterName);

    const cluster = new EcsCluster(this, `ecs-${clusterName}`, {
      name: clusterName,
      tags,
    });

    this.cluster = cluster;
  }

  public runDockerImage(
    name: string,
    image: Resource,
    backendTag: string,
    env: Record<string, string | undefined>
  ) {
    // Role that allows us to get the Docker image
    const executionRole = new IamRole(this, `execution-role`, {
      name: `${name}-execution-role`,
      tags,
      inlinePolicy: [
        {
          name: "allow-ecr-pull",
          policy: JSON.stringify({
            Version: "2012-10-17",
            Statement: [
              {
                Effect: "Allow",
                Action: [
                  "ecr:GetAuthorizationToken",
                  "ecr:BatchCheckLayerAvailability",
                  "ecr:GetDownloadUrlForLayer",
                  "ecr:BatchGetImage",
                  "logs:CreateLogStream",
                  "logs:PutLogEvents",
                ],
                Resource: "*",
              },
            ],
          }),
        },
      ],
      // this role shall only be used by an ECS task
      assumeRolePolicy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [
          {
            Action: "sts:AssumeRole",
            Effect: "Allow",
            Sid: "",
            Principal: {
              Service: "ecs-tasks.amazonaws.com",
            },
          },
        ],
      }),
    });

    // Role that allows us to push logs
    const taskRole = new IamRole(this, `task-role`, {
      name: `${name}-task-role`,
      tags,
      inlinePolicy: [
        {
          name: "allow-logs",
          policy: JSON.stringify({
            Version: "2012-10-17",
            Statement: [
              {
                Effect: "Allow",
                Action: ["logs:CreateLogStream", "logs:PutLogEvents"],
                Resource: "*",
              },
            ],
          }),
        },
      ],
      assumeRolePolicy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [
          {
            Action: "sts:AssumeRole",
            Effect: "Allow",
            Sid: "",
            Principal: {
              Service: "ecs-tasks.amazonaws.com",
            },
          },
        ],
      }),
    });

    // Creates a log group for the task
    const logGroup = new CloudwatchLogGroup(this, `loggroup`, {
      name: `${this.cluster.name}/${name}`,
      retentionInDays: 30,
      tags,
    });

    const multiplier = 1;

    // Creates a task that runs the docker container
    const task = new EcsTaskDefinition(this, `task`, {
      // We want to wait until the image is actually pushed
      dependsOn: [image],
      tags,
      // These values are fixed for the example, we can make them part of our function invocation if we want to change them
      cpu: `${256 * multiplier}`,
      memory: `${512 * multiplier}`,
      requiresCompatibilities: ["FARGATE", "EC2"],
      networkMode: "awsvpc",
      executionRoleArn: executionRole.arn,
      taskRoleArn: taskRole.arn,

      containerDefinitions: JSON.stringify([
        {
          name,
          image: backendTag,
          cpu: 256 * multiplier,
          memory: 512 * multiplier,
          environment: Object.entries(env).map(([name, value]) => ({
            name,
            value,
          })),
          portMappings: [
            {
              containerPort: 80,
              hostPort: 80,
            },
          ],
          logConfiguration: {
            logDriver: "awslogs",
            options: {
              // Defines the log group the container logs to
              "awslogs-group": logGroup.name,
              "awslogs-region": REGION,
              "awslogs-stream-prefix": name,
            },
          },
        },
      ]),
      family: "service",
    });

    return task;
  }
}

class LoadBalancer extends Construct {
  lb: Lb;
  lbl: LbListener;
  vpc: Vpc;
  cluster: EcsCluster;

  constructor(scope: Construct, name: string, vpc: Vpc, cluster: EcsCluster) {
    super(scope, name);
    this.vpc = vpc;
    this.cluster = cluster;

    const lbSecurityGroup = new SecurityGroup(scope, `lb-security-group`, {
      vpcId: vpc.vpcIdOutput,
      tags,
      ingress: [
        // allow HTTP traffic from everywhere
        {
          protocol: "TCP",
          fromPort: 80,
          toPort: 80,
          cidrBlocks: ["0.0.0.0/0"],
          ipv6CidrBlocks: ["::/0"],
        },
      ],
      egress: [
        // allow all traffic to every destination
        {
          fromPort: 0,
          toPort: 0,
          protocol: "-1",
          cidrBlocks: ["0.0.0.0/0"],
          ipv6CidrBlocks: ["::/0"],
        },
      ],
    });

    this.lb = new Lb(scope, `lb`, {
      name: `${name}-lb`,
      tags,
      // we want this to be our public load balancer so that cloudfront can access it
      internal: false,
      loadBalancerType: "application",
      securityGroups: [lbSecurityGroup.id],
      subnets: Fn.tolist(vpc.publicSubnetsOutput),
    });

    this.lbl = new LbListener(scope, `lb-listener`, {
      loadBalancerArn: this.lb.arn,
      port: 80,
      protocol: "HTTP",
      tags,
      defaultAction: [
        // We define a fixed 404 message, just in case
        {
          type: "fixed-response",
          fixedResponse: {
            contentType: "text/plain",
            statusCode: "404",
            messageBody: "Could not find the resource you are looking for",
          },
        },
      ],
    });
  }

  exposeService(
    name: string,
    task: EcsTaskDefinition,
    serviceSecurityGroup: SecurityGroup,
    path: string
  ) {
    const targetGroup = new LbTargetGroup(this, `target-group`, {
      dependsOn: [this.lbl],
      tags,
      name: `${name}-target-group`,
      port: 80,
      protocol: "HTTP",
      targetType: "ip",
      vpcId: Fn.tostring(this.vpc.vpcIdOutput),
      healthCheck: {
        enabled: true,
        path: "/ready",
      },
    });

    // Makes the listener forward requests from subpath to the target group
    new LbListenerRule(this, `rule`, {
      listenerArn: this.lbl.arn,
      priority: 100,
      tags,
      action: [
        {
          type: "forward",
          targetGroupArn: targetGroup.arn,
        },
      ],
      condition: [
        {
          pathPattern: { values: [`${path}*`] },
        },
      ],
    });

    // Ensure the task is running and wired to the target group, within the right security group
    new EcsService(this, `service`, {
      dependsOn: [this.lbl],
      tags,
      name: `${name}-service`,
      launchType: "FARGATE",
      cluster: this.cluster.id,
      desiredCount: 1,
      taskDefinition: task.arn,
      networkConfiguration: {
        subnets: Fn.tolist(this.vpc.publicSubnetsOutput),
        assignPublicIp: true,
        securityGroups: [serviceSecurityGroup.id],
      },
      loadBalancer: [
        {
          containerPort: 80,
          containerName: name,
          targetGroupArn: targetGroup.arn,
        },
      ],
    });
  }
}

class PostgresDB extends Construct {
  public instance: Rds;

  constructor(
    scope: Construct,
    name: string,
    vpc: Vpc,
    serviceSecurityGroup: SecurityGroup
  ) {
    super(scope, name);

    const dbPort = 5432;

    const dbSecurityGroup = new SecurityGroup(this, `db-security-group`, {
      vpcId: Fn.tostring(vpc.vpcIdOutput),
      ingress: [
        // allow traffic to the DB's port from the service
        {
          fromPort: dbPort,
          toPort: dbPort,
          protocol: "TCP",
          securityGroups: [serviceSecurityGroup.id],
        },
      ],
      tags,
    });

    const dbSecurityGroup2 = new SecurityGroup(
      this,
      `db-security-group-public`,
      {
        vpcId: Fn.tostring(vpc.vpcIdOutput),
        ingress: [
          // allow Postgres traffic from everywhere (handy for debugging, not something to keep in production)
          {
            protocol: "TCP",
            fromPort: dbPort,
            toPort: dbPort,
            cidrBlocks: ["0.0.0.0/0"],
            ipv6CidrBlocks: ["::/0"],
          },
        ],
      }
    );

    const password = new Password(this, "password", { length: 16 });

    // Using this module: https://registry.terraform.io/modules/terraform-aws-modules/rds/aws/latest
    const db = new Rds(this, `db`, {
      identifier: `${name}-db`,

      parameters: [
        {
          name: "rds.force_ssl",
          value: "0",
        },
      ],

      engine: "postgres",
      engineVersion: "16.1",
      family: "postgres16",
      majorEngineVersion: "16",
      instanceClass: "db.t3.micro",
      allocatedStorage: 5,

      createDbOptionGroup: false,
      createDbParameterGroup: true,
      applyImmediately: true,

      port: String(dbPort),
      username: `postgres`,
      manageMasterUserPassword: false,
      dbName: "postgres",
      password: password.result,
      skipFinalSnapshot: true,

      maintenanceWindow: "Mon:00:00-Mon:03:00",
      backupWindow: "03:00-06:00",

      // This is necessary due to a shortcoming in our token system to be addressed in
      // https://github.com/hashicorp/terraform-cdk/issues/651
      subnetIds: vpc.databaseSubnetsOutput as unknown as any,
      vpcSecurityGroupIds: [dbSecurityGroup.id, dbSecurityGroup2.id],
      dbSubnetGroupName: vpc.databaseSubnetGroupNameOutput,

      tags,
    });

    this.instance = db;
  }
}

class PushedECRImage extends Construct {
  tag: string;
  image: Resource;
  constructor(scope: Construct, name: string, projectPath: string) {
    super(scope, name);

    const assetBackend = new TerraformAsset(this, `dockerArtifactSource`, {
      path: projectPath,
    });

    const artifactHash = assetBackend.assetHash;

    const repo = new EcrRepository(this, `ecr`, {
      name,
      tags,
      forceDelete: true,
    });

    const auth = new DataAwsEcrAuthorizationToken(this, `auth`, {
      dependsOn: [repo],
      registryId: repo.registryId,
    });

    this.tag = `${repo.repositoryUrl}:${artifactHash}`;
    // Workaround due to https://github.com/kreuzwerker/terraform-provider-docker/issues/189
    this.image = new Resource(this, `image`, {
      triggers: {
        key: artifactHash,
      },
      provisioners: [
        {
          type: "local-exec",
          workingDir: projectPath,
          command: `docker login -u ${auth.userName} -p ${auth.password} ${auth.proxyEndpoint} &&
            docker build -t ${this.tag} . &&
            docker push ${this.tag}`,
        },
      ],
    });
  }
}

class MyStack extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);

    new AwsProvider(this, "aws", {
      region: REGION,
    });

    new NullProvider(this, "null", {});
    new RandomProvider(this, "random", {});

    const vpc = new Vpc(this, `vpc`, {
      // We use the name of the stack
      name,
      // We tag every resource with the same set of tags to easily identify the resources
      cidr: "10.0.0.0/16",
      // We want to run on three availability zones
      azs: ["a", "b", "c"].map((i) => `${REGION}${i}`),
      // We need three CIDR blocks as we have three availability zones
      privateSubnets: ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"],
      publicSubnets: ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"],
      databaseSubnets: ["10.0.201.0/24", "10.0.202.0/24", "10.0.203.0/24"],
      enableNatGateway: true,
      // Using a single NAT Gateway will save us some money, coming with the cost of less redundancy
      singleNatGateway: true,
    });

    const cluster = new Cluster(this, `${name}-cluster`);

    const loadBalancer = new LoadBalancer(
      this,
      `${name}-load-balancer`,
      vpc,
      cluster.cluster
    );

    const serviceSecurityGroup = new SecurityGroup(
      this,
      `${name}-service-security-group`,
      {
        vpcId: vpc.vpcIdOutput,
        tags,
        ingress: [
          // only allow incoming traffic from our load balancer
          {
            protocol: "TCP",
            fromPort: 80,
            toPort: 80,
            securityGroups: loadBalancer.lb.securityGroups,
          },
        ],
        egress: [
          // allow all outgoing traffic
          {
            fromPort: 0,
            toPort: 0,
            protocol: "-1",
            cidrBlocks: ["0.0.0.0/0"],
            ipv6CidrBlocks: ["::/0"],
          },
        ],
      }
    );

    const db = new PostgresDB(this, `${name}-pg`, vpc, serviceSecurityGroup);

    const { image: backendImage, tag: backendTag } = new PushedECRImage(
      this,
      name,
      path.resolve(__dirname, "../backend")
    );

    const task = cluster.runDockerImage(name, backendImage, backendTag, {
      PORT: "3000",
      PG_USER: db.instance.username,
      PG_PASSWORD: db.instance.password,
      PG_DATABASE: db.instance.dbInstanceNameOutput,
      PG_HOST: Fn.tostring(db.instance.dbInstanceAddressOutput),
      PG_PORT: Fn.tostring(db.instance.dbInstancePortOutput),
    });

    loadBalancer.exposeService(name, task, serviceSecurityGroup, "/");

    const cdn = new CloudfrontDistribution(this, "cf", {
      comment: `Docker example frontend`,
      tags,
      enabled: true,
      defaultCacheBehavior: {
        targetOriginId: BACKEND_ORIGIN_ID,
        // Allow every method as we want to also serve the backend through this
        allowedMethods: [
          "DELETE",
          "GET",
          "HEAD",
          "OPTIONS",
          "PATCH",
          "POST",
          "PUT",
        ],
        cachedMethods: ["GET", "HEAD"],
        viewerProtocolPolicy: "redirect-to-https", // ensure we serve https
        forwardedValues: { queryString: false, cookies: { forward: "none" } },
      },

      // origins describe different entities that can serve traffic
      origin: [
        {
          domainName: loadBalancer.lb.dnsName, // our backend is served by the load balancer
          originId: BACKEND_ORIGIN_ID,
          customOriginConfig: {
            originProtocolPolicy: "http-only",
            httpPort: 80,
            httpsPort: 443,
            originSslProtocols: ["TLSv1.2", "TLSv1.1", "TLSv1"],
          },
        },
      ],

      restrictions: { geoRestriction: { restrictionType: "none" } },
      viewerCertificate: { cloudfrontDefaultCertificate: true }, // we use the default SSL Certificate
    });

    new TerraformOutput(this, "domainName", {
      value: cdn.domainName,
    });
  }
}

// https://github.com/hashicorp/terraform-provider-aws/issues/30902
const app = new App();
new MyStack(app, PROJECT_NAME);
app.synth();
```
If all you wanted was the raw boilerplate, you can stop reading here - the rest of the post explains the main.ts.
Explaining all the bits
Instantiate providers
Lines 472 to 481 in 7faa1c2
```ts
class MyStack extends TerraformStack {
  constructor(scope: Construct, name: string) {
    super(scope, name);

    new AwsProvider(this, "aws", {
      region: REGION,
    });

    new NullProvider(this, "null", {});
    new RandomProvider(this, "random", {});
```
Each of the Terraform providers we use needs to be instantiated inside the stack.
Create a VPC
Lines 482 to 497 in 7faa1c2
```ts
    const vpc = new Vpc(this, `vpc`, {
      // We use the name of the stack
      name,
      // We tag every resource with the same set of tags to easily identify the resources
      cidr: "10.0.0.0/16",
      // We want to run on three availability zones
      azs: ["a", "b", "c"].map((i) => `${REGION}${i}`),
      // We need three CIDR blocks as we have three availability zones
      privateSubnets: ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"],
      publicSubnets: ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"],
      databaseSubnets: ["10.0.201.0/24", "10.0.202.0/24", "10.0.203.0/24"],
      enableNatGateway: true,
      // Using a single NAT Gateway will save us some money, coming with the cost of less redundancy
      singleNatGateway: true,
    });
```
A VPC serves as a 'grouping' for our application, where we can logically separate the various components and only allow the components that need to talk to each other to do so.
Importantly, this isolates our application from the internet at large: we rely on AWS's networking to prevent the items in our VPC from being probed by the wider internet, so our internals aren't directly exposed to brute-force or denial-of-service attacks.
Create an ECS Cluster
Lines 499 to 500 in 7faa1c2
```ts
    const cluster = new Cluster(this, `${name}-cluster`);
```
Lines 34 to 186 in 7faa1c2
```ts
class Cluster extends Construct {
  public cluster: EcsCluster;
  constructor(scope: Construct, clusterName: string) {
    super(scope, clusterName);

    const cluster = new EcsCluster(this, `ecs-${clusterName}`, {
      name: clusterName,
      tags,
    });

    this.cluster = cluster;
  }

  public runDockerImage(
    name: string,
    image: Resource,
    backendTag: string,
    env: Record<string, string | undefined>
  ) {
    // Role that allows us to get the Docker image
    const executionRole = new IamRole(this, `execution-role`, {
      name: `${name}-execution-role`,
      tags,
      inlinePolicy: [
        {
          name: "allow-ecr-pull",
          policy: JSON.stringify({
            Version: "2012-10-17",
            Statement: [
              {
                Effect: "Allow",
                Action: [
                  "ecr:GetAuthorizationToken",
                  "ecr:BatchCheckLayerAvailability",
                  "ecr:GetDownloadUrlForLayer",
                  "ecr:BatchGetImage",
                  "logs:CreateLogStream",
                  "logs:PutLogEvents",
                ],
                Resource: "*",
              },
            ],
          }),
        },
      ],
      // this role shall only be used by an ECS task
      assumeRolePolicy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [
          {
            Action: "sts:AssumeRole",
            Effect: "Allow",
            Sid: "",
            Principal: {
              Service: "ecs-tasks.amazonaws.com",
            },
          },
        ],
      }),
    });

    // Role that allows us to push logs
    const taskRole = new IamRole(this, `task-role`, {
      name: `${name}-task-role`,
      tags,
      inlinePolicy: [
        {
          name: "allow-logs",
          policy: JSON.stringify({
            Version: "2012-10-17",
            Statement: [
              {
                Effect: "Allow",
                Action: ["logs:CreateLogStream", "logs:PutLogEvents"],
                Resource: "*",
              },
            ],
          }),
        },
      ],
      assumeRolePolicy: JSON.stringify({
        Version: "2012-10-17",
        Statement: [
          {
            Action: "sts:AssumeRole",
            Effect: "Allow",
            Sid: "",
            Principal: {
              Service: "ecs-tasks.amazonaws.com",
            },
          },
        ],
      }),
    });

    // Creates a log group for the task
    const logGroup = new CloudwatchLogGroup(this, `loggroup`, {
      name: `${this.cluster.name}/${name}`,
      retentionInDays: 30,
      tags,
    });

    const multiplier = 1;

    // Creates a task that runs the docker container
    const task = new EcsTaskDefinition(this, `task`, {
      // We want to wait until the image is actually pushed
      dependsOn: [image],
      tags,
      // These values are fixed for the example, we can make them part of our function invocation if we want to change them
      cpu: `${256 * multiplier}`,
      memory: `${512 * multiplier}`,
      requiresCompatibilities: ["FARGATE", "EC2"],
      networkMode: "awsvpc",
      executionRoleArn: executionRole.arn,
      taskRoleArn: taskRole.arn,

      containerDefinitions: JSON.stringify([
        {
          name,
          image: backendTag,
          cpu: 256 * multiplier,
          memory: 512 * multiplier,
          environment: Object.entries(env).map(([name, value]) => ({
            name,
            value,
          })),
          portMappings: [
            {
              containerPort: 80,
              hostPort: 80,
            },
          ],
          logConfiguration: {
            logDriver: "awslogs",
            options: {
              // Defines the log group the container logs to
              "awslogs-group": logGroup.name,
              "awslogs-region": REGION,
              "awslogs-stream-prefix": name,
            },
          },
        },
      ]),
      family: "service",
    });

    return task;
  }
}
```
Here we declare an ECS cluster, and define its task: running a Docker image. Note the two IAM roles: the execution role lets ECS pull the image from ECR, while the task role is what the running container uses to push logs. A CloudWatch log group ties the container's logs together.
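Once the service is running you can tail those logs from your terminal. Assuming AWS CLI v2 and the names used in this post (the log group is named `<cluster name>/<service name>`, per the CloudwatchLogGroup above):

```sh
aws logs tail "k6-test-cluster/k6-test" --follow
```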
Create and expose load balancer
Lines 501 to 535 in 7faa1c2
```ts
    const loadBalancer = new LoadBalancer(
      this,
      `${name}-load-balancer`,
      vpc,
      cluster.cluster
    );

    const serviceSecurityGroup = new SecurityGroup(
      this,
      `${name}-service-security-group`,
      {
        vpcId: vpc.vpcIdOutput,
        tags,
        ingress: [
          // only allow incoming traffic from our load balancer
          {
            protocol: "TCP",
            fromPort: 80,
            toPort: 80,
            securityGroups: loadBalancer.lb.securityGroups,
          },
        ],
        egress: [
          // allow all outgoing traffic
          {
            fromPort: 0,
            toPort: 0,
            protocol: "-1",
            cidrBlocks: ["0.0.0.0/0"],
            ipv6CidrBlocks: ["::/0"],
          },
        ],
      }
    );
```
Lines 188 to 323 in 7faa1c2
```ts
class LoadBalancer extends Construct {
  lb: Lb;
  lbl: LbListener;
  vpc: Vpc;
  cluster: EcsCluster;

  constructor(scope: Construct, name: string, vpc: Vpc, cluster: EcsCluster) {
    super(scope, name);
    this.vpc = vpc;
    this.cluster = cluster;

    const lbSecurityGroup = new SecurityGroup(scope, `lb-security-group`, {
      vpcId: vpc.vpcIdOutput,
      tags,
      ingress: [
        // allow HTTP traffic from everywhere
        {
          protocol: "TCP",
          fromPort: 80,
          toPort: 80,
          cidrBlocks: ["0.0.0.0/0"],
          ipv6CidrBlocks: ["::/0"],
        },
      ],
      egress: [
        // allow all traffic to every destination
        {
          fromPort: 0,
          toPort: 0,
          protocol: "-1",
          cidrBlocks: ["0.0.0.0/0"],
          ipv6CidrBlocks: ["::/0"],
        },
      ],
    });

    this.lb = new Lb(scope, `lb`, {
      name: `${name}-lb`,
      tags,
      // we want this to be our public load balancer so that cloudfront can access it
      internal: false,
      loadBalancerType: "application",
      securityGroups: [lbSecurityGroup.id],
      subnets: Fn.tolist(vpc.publicSubnetsOutput),
    });

    this.lbl = new LbListener(scope, `lb-listener`, {
      loadBalancerArn: this.lb.arn,
      port: 80,
      protocol: "HTTP",
      tags,
      defaultAction: [
        // We define a fixed 404 message, just in case
        {
          type: "fixed-response",
          fixedResponse: {
            contentType: "text/plain",
            statusCode: "404",
            messageBody: "Could not find the resource you are looking for",
          },
        },
      ],
    });
  }

  exposeService(
    name: string,
    task: EcsTaskDefinition,
    serviceSecurityGroup: SecurityGroup,
    path: string
  ) {
    const targetGroup = new LbTargetGroup(this, `target-group`, {
      dependsOn: [this.lbl],
      tags,
      name: `${name}-target-group`,
      port: 80,
      protocol: "HTTP",
      targetType: "ip",
      vpcId: Fn.tostring(this.vpc.vpcIdOutput),
      healthCheck: {
        enabled: true,
        path: "/ready",
      },
    });

    // Makes the listener forward requests from subpath to the target group
    new LbListenerRule(this, `rule`, {
      listenerArn: this.lbl.arn,
      priority: 100,
      tags,
      action: [
        {
          type: "forward",
          targetGroupArn: targetGroup.arn,
        },
      ],
      condition: [
        {
          pathPattern: { values: [`${path}*`] },
        },
      ],
    });

    // Ensure the task is running and wired to the target group, within the right security group
    new EcsService(this, `service`, {
      dependsOn: [this.lbl],
      tags,
      name: `${name}-service`,
      launchType: "FARGATE",
      cluster: this.cluster.id,
      desiredCount: 1,
      taskDefinition: task.arn,
      networkConfiguration: {
        subnets: Fn.tolist(this.vpc.publicSubnetsOutput),
        assignPublicIp: true,
        securityGroups: [serviceSecurityGroup.id],
      },
      loadBalancer: [
        {
          containerPort: 80,
          containerName: name,
          targetGroupArn: targetGroup.arn,
        },
      ],
    });
  }
}
```
Here we create an AWS Application Load Balancer. For our purposes it serves as a mechanism for selectively exposing components in our VPC - in this case, our running Docker container. Note that the target group's health check hits the /ready endpoint we defined in the Express app.
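Once deployed, you can sanity-check the service directly against the load balancer. The DNS name below is a placeholder; yours shows up in the EC2 console (or the Terraform state):

```sh
curl http://<your-lb-dns-name>/ready
curl http://<your-lb-dns-name>/todos
```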
Create a Postgres Database
Lines 517 to 522 in 7faa1c2
```ts
          {
            protocol: "TCP",
            fromPort: 80,
            toPort: 80,
            securityGroups: loadBalancer.lb.securityGroups,
          },
```
Lines 326 to 402 in 7faa1c2
```ts
class PostgresDB extends Construct {
  public instance: Rds;

  constructor(
    scope: Construct,
    name: string,
    vpc: Vpc,
    serviceSecurityGroup: SecurityGroup
  ) {
    super(scope, name);

    const dbPort = 5432;

    const dbSecurityGroup = new SecurityGroup(this, `db-security-group`, {
      vpcId: Fn.tostring(vpc.vpcIdOutput),
      ingress: [
        // allow traffic to the DB's port from the service
        {
          fromPort: dbPort,
          toPort: dbPort,
          protocol: "TCP",
          securityGroups: [serviceSecurityGroup.id],
        },
      ],
      tags,
    });

    const dbSecurityGroup2 = new SecurityGroup(
      this,
      `db-security-group-public`,
      {
        vpcId: Fn.tostring(vpc.vpcIdOutput),
        ingress: [
          // allow Postgres traffic from everywhere (handy for debugging, not something to keep in production)
          {
            protocol: "TCP",
            fromPort: dbPort,
            toPort: dbPort,
            cidrBlocks: ["0.0.0.0/0"],
            ipv6CidrBlocks: ["::/0"],
          },
        ],
      }
    );

    const password = new Password(this, "password", { length: 16 });

    // Using this module: https://registry.terraform.io/modules/terraform-aws-modules/rds/aws/latest
    const db = new Rds(this, `db`, {
      identifier: `${name}-db`,

      parameters: [
        {
          name: "rds.force_ssl",
          value: "0",
        },
      ],

      engine: "postgres",
      engineVersion: "16.1",
      family: "postgres16",
      majorEngineVersion: "16",
      instanceClass: "db.t3.micro",
      allocatedStorage: 5,

      createDbOptionGroup: false,
      createDbParameterGroup: true,
      applyImmediately: true,

      port: String(dbPort),
      username: `postgres`,
      manageMasterUserPassword: false,
      dbName: "postgres",
      // ...
```
Some notes here:
Lines 360 to 365 in 931a805
```ts
      parameters: [
        {
          name: "rds.force_ssl",
          value: "0",
        },
      ],
```
By default AWS wants your Postgres instance to only accept SSL connections, which causes this common error: connect to PostgreSQL server: FATAL: no pg_hba.conf entry for host.
So we turn that off here. The alternative would be to ship AWS's RDS CA certificate (a PEM file) with our application and connect over SSL, but for simplicity's sake we'll turn it off.
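If you did want to keep rds.force_ssl on, the application side would look roughly like this with node-postgres. A sketch, assuming you've downloaded the RDS CA bundle (available from https://truststore.pki.rds.amazonaws.com) into the image at an arbitrary path:

```ts
import { readFileSync } from "fs";
import { Pool } from "pg";

const pool = new Pool({
  host: process.env.PG_HOST,
  port: Number(process.env.PG_PORT ?? 5432),
  user: process.env.PG_USER,
  password: process.env.PG_PASSWORD,
  database: process.env.PG_DATABASE,
  ssl: {
    // Assumes the RDS CA bundle was baked into the Docker image at this path.
    ca: readFileSync("/app/certs/rds-ca-bundle.pem").toString(),
  },
});
```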
Lines 382 to 383 in 931a805
```ts
      manageMasterUserPassword: false,
      password: password.result,
```
If manageMasterUserPassword is on, the configuration will completely ignore the password we provide, and instead create a secret in Secrets Manager.
For our purposes we want to pass the password to the container via an environment variable, and I can't see a way to retrieve the secret's value at deploy time.
We could instead provide the secret ARN, which is accessible as db.dbInstanceMasterUserSecretArnOutput, and retrieve the secret at runtime via AWS's SDK, as sketched below.
Even then, we shouldn't be handing the database root password to our application; what we really should do at this point is create dedicated credentials for each application to use.
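That runtime lookup might look something like this. A sketch, assuming the task role is granted secretsmanager:GetSecretValue on the secret, and that the ARN is passed in via a hypothetical DB_SECRET_ARN environment variable:

```ts
import {
  SecretsManagerClient,
  GetSecretValueCommand,
} from "@aws-sdk/client-secrets-manager";

// RDS-managed master user secrets store a JSON blob of { username, password }.
async function getDbPassword(): Promise<string> {
  const client = new SecretsManagerClient({});
  const result = await client.send(
    new GetSecretValueCommand({ SecretId: process.env.DB_SECRET_ARN })
  );
  const { password } = JSON.parse(result.SecretString ?? "{}");
  return password;
}
```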
Create Docker Image
Lines 521 to 525 in 931a805
```ts
    const { image: backendImage, tag: backendTag } = new PushedECRImage(
      this,
      name,
      path.resolve(__dirname, "../backend")
    );
```
Lines 404 to 445 in 931a805
```ts
class PushedECRImage extends Construct {
  tag: string;
  image: Resource;
  constructor(scope: Construct, name: string, projectPath: string) {
    super(scope, name);

    const assetBackend = new TerraformAsset(this, `dockerArtifactSource`, {
      path: projectPath,
    });

    const artifactHash = assetBackend.assetHash;

    const repo = new EcrRepository(this, `ecr`, {
      name,
      tags,
      forceDelete: true,
    });

    const auth = new DataAwsEcrAuthorizationToken(this, `auth`, {
      dependsOn: [repo],
      registryId: repo.registryId,
    });

    this.tag = `${repo.repositoryUrl}:${artifactHash}`;
    // Workaround due to https://github.com/kreuzwerker/terraform-provider-docker/issues/189
    this.image = new Resource(this, `image`, {
      triggers: {
        key: artifactHash,
      },
      provisioners: [
        {
          type: "local-exec",
          workingDir: projectPath,
          command: `docker login -u ${auth.userName} -p ${auth.password} ${auth.proxyEndpoint} &&
            docker build -t ${this.tag} . &&
            docker push ${this.tag}`,
        },
      ],
    });
  }
}
```
Here we build the Docker image locally and push it up to AWS ECR, using a local-exec provisioner on a null resource as a workaround for the linked docker provider issue.
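For that docker build to succeed, the backend directory needs a Dockerfile. The original isn't shown in this post; a minimal sketch for a compiled TypeScript Express app might look like this (the file layout and build script are assumptions; note the task definition above maps container port 80):

```dockerfile
FROM node:20-slim
WORKDIR /app

# Install dependencies first so this layer is cached across source-only changes
COPY package*.json ./
RUN npm ci

COPY . .
RUN npm run build   # assumes a script that compiles TypeScript to dist/

# The task definition maps container port 80, so the server should listen there
ENV PORT=80
EXPOSE 80
CMD ["node", "dist/index.js"]
```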
Lines 409 to 414 in 931a805
```ts
    const assetBackend = new TerraformAsset(this, `dockerArtifactSource`, {
      path: projectPath,
    });

    const artifactHash = assetBackend.assetHash;
```
Note the use of the asset hash here: it's used both as the null resource's trigger and as the image tag, so we'll only rebuild and push the image if any of the composing artifacts (i.e. the source code) have changed.
Run Docker Image
Lines 526 to 534 in 931a805
```ts
    const task = cluster.runDockerImage(name, backendImage, backendTag, {
      PORT: "3000",
      PG_USER: db.instance.username,
      PG_PASSWORD: db.instance.password,
      PG_DATABASE: db.instance.dbInstanceNameOutput,
      PG_HOST: Fn.tostring(db.instance.dbInstanceAddressOutput),
      PG_PORT: Fn.tostring(db.instance.dbInstancePortOutput),
    });
```
We run our docker image, passing in the requisite environment variables.
Add Cloudfront
Lines 543 to 582 in 931a805
```ts
    const cdn = new CloudfrontDistribution(this, "cf", {
      comment: `Docker example frontend`,
      tags,
      enabled: true,
      defaultCacheBehavior: {
        targetOriginId: BACKEND_ORIGIN_ID,
        // Allow every method as we want to also serve the backend through this
        allowedMethods: [
          "DELETE",
          "GET",
          "HEAD",
          "OPTIONS",
          "PATCH",
          "POST",
          "PUT",
        ],
        cachedMethods: ["GET", "HEAD"],
        viewerProtocolPolicy: "redirect-to-https", // ensure we serve https
        forwardedValues: { queryString: false, cookies: { forward: "none" } },
      },

      // origins describe different entities that can serve traffic
      origin: [
        {
          domainName: loadBalancer.lb.dnsName, // our backend is served by the load balancer
          originId: BACKEND_ORIGIN_ID,
          customOriginConfig: {
            originProtocolPolicy: "http-only",
            httpPort: 80,
            httpsPort: 443,
            originSslProtocols: ["TLSv1.2", "TLSv1.1", "TLSv1"],
          },
        },
      ],

      restrictions: { geoRestriction: { restrictionType: "none" } },
      viewerCertificate: { cloudfrontDefaultCertificate: true }, // we use the default SSL Certificate
    });
```
We add the CloudFront CDN in front of the load balancer, which conveniently gives us HTTPS via the default CloudFront certificate.
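With everything in place, deployment is one command from the infra directory. The domainName output printed at the end is the CloudFront URL to hit:

```sh
cdktf deploy
# then, using whatever the domainName output printed:
curl https://<your-distribution>.cloudfront.net/todos
```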
Questions? Comments? Criticisms? Get in the comments! 👇