
From Concept to Cloud App: a reusable Docker, Terraform and AWS Elastic Container Service quickstart for launching a web-application

Colin Smith
15 min read · Dec 10, 2020


Once upon a time, I had an idea for a web-application. I was excited to jump in and start building features, but I wanted to be prepared for a few things:

  • to incorporate contributions from other developers.
  • to launch the project, at any stage of development, with only a few commands.
  • to have launched infrastructure that is simple but extensible, without requiring major refactoring.
  • to start a new project with similar infrastructure at any time.

In other words, I wanted a reusable, reproducible, shareable, testable, version-controlled solution for developing and launching a web-app so I could focus on Actually Writing the Code for whatever project features came to mind.

The result was my terraform-ecs-app (TEA) project, a fully-functioning minimal implementation of reusable code that automates provisioning a Docker image of a web-application and launching it into the AWS cloud on an EC2 (Elastic Compute Cloud) instance using ECS (Elastic Container Service). Now I’m writing features again, and happily watching them go live.

This article is a guide to understanding both the building blocks of TEA and the technologies behind it.

If you want to move quickly from an idea to a live application with the infrastructure for launching, hosting and developing it, you can go directly to the terraform-ecs-app code on GitHub and follow the README instructions. On the other hand, if a guided tour of this project is of interest to you, please keep reading.

With the terraform-ecs-app as my map, I intend to sherpa you through the lofty peaks of Terraform and Amazon Elastic Container Service after climbing the foothills of Packer and Docker. This guide is meant to provide you with a clear understanding of the hows and whys of TEA so that you can extend it to satisfy your own use cases without having to repeat all the research that went into its development. First, I’ll discuss why I chose this tech stack, then I’ll explain how TEA was implemented and how it is used.

Why Terraform?

I wanted to avoid complete vendor lock-in with AWS, so I chose to use Terraform to interact with Amazon’s least proprietary cloud services. Terraform has competition, but its popularity means a strong developer community and good supporting documentation. Because Terraform defines infrastructure in terms that are common to different cloud services, this project, though tailored for AWS, can be adapted to other providers with relative ease.

Why EC2?

I chose EC2 instead of AWS’ serverless Fargate because I wanted to retain both control over the provisioning and server details and the ability to use the same specifications with other cloud providers.

Why Docker?

One of the many advantages of using Packer and Docker is that one can dictate exactly what software goes into an instance and then use the same machine image with a completely different cloud provider. One can also use the image as a development environment to be shared with fellow contributors, avoiding the dreaded “it works on my machine” problem.

Infrastructure as Code

Defining the cloud infrastructure as code with Packer and Terraform offers flexibility and reliability. Code is more easily maintained than a script or a checklist. By tracking changes to that code with Git, one can roll back a version in an emergency. Because it’s programmatic, human error is mitigated by an automated and reproducible build. If you want the extra confidence of running tests on your build process, Terratest will test the cloud infrastructure created by your Terraform implementations and Goss will test the machine image built and provisioned by Packer.

Why React?

I use Create-React-App (CRA) for the web-app because it is easy to set up as a “hello world” to demonstrate a functioning web application, it is current with the industry, and it has plenty of support. If you don’t want to start with React or don’t wish to use CRA in the same way, just replace the commands for installing libraries, nvm and node in the project’s build.json file with Linux commands to install whatever languages or frameworks you prefer to have on the server. You will also want to replace the CRA-generated contents of the project’s top-level ‘front-end’ directory with those of a web-app of your choosing.

How TEA works

To elucidate the parts and processes of terraform-ecs-app, we’ll first cover using Packer and Docker to build an image for a container representing your application server. Second, we’ll look at how TEA defines a Terraform application to launch your app into the cloud and thereby make it available on the web. Third, we’ll consider inspecting and troubleshooting the container and the machine instance on which it resides.

The Front End App

To generate the top-level “front-end” app directory in my home environment with the CRA command “npx create-react-app front-end”, I first installed some requisite libraries and nvm, which is handy for managing multiple versions of Node.js and for simple installation of the newest releases. Then I copied node to the /opt directory and made soft symlinks to the executables npm, node and npx in my /bin directory so those three commands would be found when invoked.

sudo apt install -y build-essential checkinstall libssl-dev
curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.35.1/install.sh | bash
. /root/.nvm/nvm.sh
nvm install 13.12.0
cp -r /tmp/versions/node /opt
cd /bin
ln -s /opt/node/v13.12.0/bin/npm
ln -s /opt/node/v13.12.0/bin/node
ln -s /opt/node/v13.12.0/bin/npx

I will refer to these commands later as they are used in building an image that represents the web-app server.

Image Build and Testing in TEA

Having generated a front-end web-application, the next goal is to build a machine image on which it should run. TEA defines the specifications for the image in a file called ‘build.json’. Included in that specification is a call to a Goss test called ‘goss.yaml’. The directory structure was created as follows:

mkdir -p terraform-ecs-app/cloud/prod/services/front-end
cd terraform-ecs-app/cloud
mkdir test

build.json and goss.yaml reside in their respective ‘front-end’ and ‘test’ directories. build.json is written to provision the image with Ubuntu Linux and to call the Goss test. The provided test file simply asserts that the image supports a user to own the app and a home directory in which to place it.
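For reference, a goss.yaml asserting those two things might look something like this (a sketch; the exact assertions in the repository’s test file may differ):

user:
  project-name:
    exists: true
    home: /home/project-name
file:
  /home/project-name:
    exists: true
    filetype: directory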

The command

packer build build.json

instructs Packer to build the machine image as described in build.json, which specifies that the test run automatically.

In build.json, this is the builders block that produces the Ubuntu image. As in JavaScript, curly braces define key/value pairs and square brackets define arrays of elements. Here we have just one builder element with three pairs.

"builders": [{
    "type": "docker",
    "image": "ubuntu:16.04",
    "export_path": "image.tar"
}],

This alone would build a working machine image, but running the provided Goss test would fail because we have not yet added the user or home directory for which it tests. If one opts for a test-first strategy, this simply motivates the next addition to build.json. In TEA, that addition was a set of Linux shell commands in the provisioners block. As shown below, some boilerplate commands were added to allow the system time to boot, to update apt-get for package management, and to install sudo for commands requiring elevated privileges. File provisioner blocks copy the test directory and the Goss test to the image, and the goss block runs the test.

"provisioners": [{
    "type": "shell",
    "inline": [
        "sleep 30",
        "apt-get update",
        "apt-get install -y sudo",
        "sudo apt-get update",
        "sudo useradd -d /home/project-name -m project-name -p project-name",
        "mkdir -p /home/project-name/services/",
        "sudo mkdir -p /tmp/goss/test"
    ]
},
{
    "type": "file",
    "source": "../../../test",
    "destination": "/home/project-name/"
},
{
    "type": "file",
    "source": "../../../test/goss.yaml",
    "destination": "/home/project-name/goss.yaml"
},
{
    "type": "goss",
    "tests": [
        "../../../test/goss.yaml"
    ]
}],

This makes it so that when we run

packer build build.json

from cloud/prod/services/front-end on a command-line in our home environment, the test should pass. Packer will produce an image in the directory as a tar archive appropriately named ‘image.tar’. Having an image enables one to issue commands to import it into docker, to find its id, and to run the image in a container to view the app or inspect the server.

docker import image.tar
docker images | grep sec
docker run -it *image-id from previous command* bash
cd /home/project-name
ls

In this example, one could verify that the /home/project-name directory in the container indeed contains goss.yaml and then exit the container with ‘exit’.

Having established what it takes to run a working and tested container, what remains is how to provide the server with an actual web-app. Further specification is required in build.json to provision the machine image with the front-end app directory and the software it needs to run. If you want to run something besides React, it’s simple enough to replace these shell commands with commands to provision any other packages, languages or frameworks you may need. If you look at the complete build.json, you will see the following was added to furnish the web-app.

"provisioners": [{
    "type": "shell",
    "inline": [
        ...
        "sudo apt-get install -y curl",
        "sudo apt install -y build-essential checkinstall libssl-dev",
        "curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.35.1/install.sh | bash"
    ]
},

A file block was added to provisioners to copy the React app directory to the image. A second shell block in provisioners completes the installation of node as previously described in the ‘Front End App’ section. You will also notice the following commands to enter the app directory, install the requisite packages for the app, and fix any security vulnerabilities it can.

"cd /home/project-name/services/front-end",
"npm install",
"npm audit fix"

These provisioning instructions make the app ready to run in a container. So, like before, we can issue commands to import the image, get the id and bash into the container. Once in that bash shell, we can cd to /home/project-name/services/front-end and run

npm start

which should display or open a URL usable outside the container (the one other than the localhost URL), where you can see the basic hello world of our app (minus the string saying “hello world”) in your browser. But you want it in other people’s browsers!

How TEA uses Terraform to launch a cloud infrastructure

Terraform code

Now we can move on to launching the cloud infrastructure with Terraform. All the heavy lifting has been defined in main.tf. To make the Terraform code suitable for a public Git repository, for a potential team of developers and for possible reuse with different projects, we need it to be tailorable to each project and developer environment, and we need to keep secrets like AWS keys out of version control. Details specific to individual projects and users are primarily defined as environment variables and are handled in variables.tf. If we want to test our Terraform with Terratest, then it is useful to define outputs in outputs.tf so that we can programmatically check the results when the infrastructure is applied. A quick look at those files should be enough to form a basic understanding of their function. To really understand how Terraform works with a cloud provider, and AWS in particular, you may wish to have a look at main.tf, as I will describe its parts in top-down order.
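As a rough illustration of that split (the variable and output names here are placeholders, not necessarily the ones used in TEA), variables.tf holds per-project and per-user settings and outputs.tf exposes values that Terratest can check:

variable "aws_region" {
  description = "AWS region in which to launch the infrastructure"
  default     = "us-east-1"
}

variable "cluster_name" {
  description = "Name of the ECS cluster; must match the name baked into the instance user data"
}

output "cluster_arn" {
  value = aws_ecs_cluster.the_cluster.arn
}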

Terraform/Provider

After we set the Terraform version (so that if we revert to this version of main.tf from our git repository, it will function as expected), we set the region in which we wish the AWS cloud services provider to host our container.
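A minimal sketch of that opening block, assuming the region comes in through a variable:

terraform {
  required_version = ">= 0.12.0"
}

provider "aws" {
  region = var.aws_region
}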

VPC

Then we need a Virtual Private Cloud (VPC). This is a subset of the AWS network that is dedicated to some part of your infrastructure. All you need to make one is to declare a Terraform resource of type ‘aws_vpc’, give it a name like “the_vpc”, give it a CIDR block to allocate network space for subnets, and, very importantly, enable DNS hostnames so that your network can be found from the internet.
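A sketch of such a VPC resource (the CIDR block here is an illustrative value):

resource "aws_vpc" "the_vpc" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
}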

Subnet

Naturally what comes next is setting up a subnet resource so that you can allocate addresses to your infrastructure within the confines of your VPC. This way, instances of machines in your infrastructure can communicate privately but internet traffic only interacts with the public facing part of your infrastructure, the “internet gateway”, which will be covered after looking at the Amazon Machine Image (AMI).
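A minimal subnet declaration along those lines (the resource name and CIDR block are illustrative):

resource "aws_subnet" "the_subnet" {
  vpc_id     = aws_vpc.the_vpc.id
  cidr_block = "10.0.1.0/24"
}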

AMI

For our purposes, the AMI is a machine image for the front-end application’s EC2 instance, to which the internet gateway will route web traffic. The AMI is something that already exists, and that, as I understand it, is why it is declared as ‘data’ rather than ‘resource’ in Terraform. To get the correct AMI we use filters to narrow down the list of available AMIs. These will fetch not just any AMI, but an ECS-optimized Amazon Linux AMI, to facilitate the automated deployment of your container to an EC2 instance.
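A sketch of such a data block, assuming a filter on the ECS-optimized Amazon Linux naming pattern:

data "aws_ami" "ecs_optimized" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-ecs-hvm-*-x86_64-ebs"]
  }
}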

Internet Gateway

The Internet gateway is the public facing part of your VPC. It will guide internet traffic to your network but only as dictated by a routing table. These two parts only need the VPC id to be properly declared. But to work properly they require more pieces.
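In sketch form, both only need the VPC id (resource names are illustrative):

resource "aws_internet_gateway" "the_gateway" {
  vpc_id = aws_vpc.the_vpc.id
}

resource "aws_route_table" "the_route_table" {
  vpc_id = aws_vpc.the_vpc.id
}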

Main Route Table

The aws_main_route_table_association simply designates our routing table as the main routing table. In order for our instance to be associated with our non-default VPC, our routing table must be the main one. This allows us to associate our routing table for our VPC to our subnet. Once the route to the gateway is declared, we have a fully-fledged VPC with a public gateway to a private subnet of our remaining infrastructure.
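Roughly, the association and the route to the gateway look like this (names are illustrative):

resource "aws_main_route_table_association" "main" {
  vpc_id         = aws_vpc.the_vpc.id
  route_table_id = aws_route_table.the_route_table.id
}

resource "aws_route_table_association" "subnet" {
  subnet_id      = aws_subnet.the_subnet.id
  route_table_id = aws_route_table.the_route_table.id
}

resource "aws_route" "internet_access" {
  route_table_id         = aws_route_table.the_route_table.id
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = aws_internet_gateway.the_gateway.id
}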

ECS

With our network established, we can move on to the Elastic Container Service. The first thing ECS needs is a cluster, which basically groups our instances. For our purposes here, the instances number a total of one, but more could be added when we need a back-end. The containerInsights setting is for collecting metrics and logs for Amazon’s CloudWatch monitoring, should you choose to use it.
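A sketch of the cluster with containerInsights enabled (the name variable is an assumption):

resource "aws_ecs_cluster" "the_cluster" {
  name = var.cluster_name

  setting {
    name  = "containerInsights"
    value = "enabled"
  }
}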

The ECS Task

The instance can be launched into the VPC via an ECS Task, as managed by an ECS Service. The service decides what task definition to use and how many tasks to have active. We associate the service with the cluster, set the task definition and set the desired count to one because we have one task to create one instance. The task will be retried until our instance reaches a steady state.
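In outline, the service might look like this (the task_definition expression is explained in the next section; names are illustrative):

resource "aws_ecs_service" "front_end" {
  name          = "front-end"
  cluster       = aws_ecs_cluster.the_cluster.id
  desired_count = 1
  launch_type   = "EC2"

  # Use whichever revision of the task definition is newest (see below).
  task_definition = "${aws_ecs_task_definition.front_end.family}:${max(aws_ecs_task_definition.front_end.revision, data.aws_ecs_task_definition.front_end.revision)}"
}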

We only need to generate a task once but it may be used multiple times. For this reason, we define it once as data and later as a resource. In the data definition we specify that the data task depends upon the resource task. Tasks can have different versions in AWS and we want the service to use the freshest one. In the data task declaration we associate the data task with the resource’s family. This way, the service can look at the many versions in the family and choose between the data and the resource based on whichever has the maximum revision number.
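The data/resource pairing described above boils down to something like this sketch:

# The data task depends on the resource task and looks it up by family.
data "aws_ecs_task_definition" "front_end" {
  task_definition = aws_ecs_task_definition.front_end.family
  depends_on      = [aws_ecs_task_definition.front_end]
}

The service then takes the larger of the two revision numbers, as shown in the task_definition expression in the previous sketch.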

The ECS Task has many parts. It can take a lot of trial-and-error with black-box testing and rereading of documentation to distill them into the minimum set of essential specifications. For EC2 networking, “bridge” mode is the one that works. In this case 512 MiB of memory is enough (that may change as your project evolves, so we will cover troubleshooting a bit later). We set EC2 as a required compatibility and set the execution IAM role so that the task is granted the required permissions.
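Distilled to that minimum, the task definition resource is along these lines (the IAM role name and the container-definitions file are placeholders):

resource "aws_ecs_task_definition" "front_end" {
  family                   = "front-end"
  network_mode             = "bridge"
  memory                   = 512
  requires_compatibilities = ["EC2"]
  execution_role_arn       = aws_iam_role.ecs_task_execution.arn

  # The container definitions themselves are covered in the next paragraph.
  container_definitions = file("${path.module}/container-definitions.json")
}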

The container definitions live in a world between AWS and Docker where it is easy to get lost. The entrypoint tells the instance to execute a shell command, and the command here is “npm start”. This command to start the React server cannot be specified in build.json, of course; it has to run after the instance is launched with your container. If you want to run a set of commands instead of just one “npm start”, you are allowed to list them, comma-separated, between the brackets here, but the result seems to be that your server gets started in a separate shell and will not stay active.

In the portMappings, the host port is where the instance is receiving traffic, in this case the standard HTTP port 80 that does not need to be specified in a browser’s location bar. Normally, the React app starts on port 3000 and a developer on that machine can point a browser at http://localhost:3000 to see the app. I tried many ways of changing this in the command section, but the server never stayed active, as mentioned above. This is why the container port is set to 3000 here.

The app will not run unless the working directory, from which the start command is run, is set to the React project directory inside your Docker image. As specified in the project README, this should be set in a Terraform environment variable in your home environment. The remainder of the specification is equally important: an interactive pseudoterminal is what keeps the server active, and setting essential to true likely relates to ECS’s ability to tell you “essential task exited” when your task fails and the server won’t run.

Finally, the image is of central importance. I used Amazon’s Elastic Container Registry to create a repository and used Docker’s tag and push commands to store the image there. In Terraform, the container section of the task definition has an “image” property that takes an image URI so that it knows from where to pull the image. You can see how this works in the project README.
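Put together, and whether you inline it with jsonencode or keep it in a separate JSON file, the container definition amounts to something like the following sketch (the image URI and working directory would come from your own variables):

container_definitions = jsonencode([
  {
    name             = "front-end"
    image            = var.image_uri
    essential        = true
    interactive      = true
    pseudoTerminal   = true
    workingDirectory = var.working_directory
    entryPoint       = ["sh", "-c"]
    command          = ["npm start"]
    portMappings = [
      {
        hostPort      = 80
        containerPort = 3000
        protocol      = "tcp"
      }
    ]
  }
])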

Launch Template

To launch the instance upon which the ECS task is executed, a “launch template” resource is defined. It is the successor to “launch configurations” and associates the instance with an instance type and an AMI. It also requires a security group. What is most interesting about it is that one can supply a user-data file, which can only be securely uploaded to the instance using your AWS key pair file. Without the user data, the ECS-optimized instance will not know to start the ecs service on which running your container depends, nor will it know the cluster to which your container belongs. It is an annoyingly complicated yet essential extra step, especially because the cluster name cannot be taken from your environment variables, since the user data runs on the launched instance. To make it match your environment variable, you either have to change it manually or write a script to generate it before you create your Docker image; I chose the former.
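A sketch of the launch template, with the user data pulled from a small script whose only real job is to name the cluster for the ECS agent (instance type, key name and file names are illustrative):

resource "aws_launch_template" "front_end" {
  name_prefix   = "front-end-"
  image_id      = data.aws_ami.ecs_optimized.id
  instance_type = "t2.micro"
  key_name      = var.key_name

  vpc_security_group_ids = [aws_security_group.instance.id]

  # user-data.sh contains something like:
  #   #!/bin/bash
  #   echo "ECS_CLUSTER=your-cluster-name" >> /etc/ecs/ecs.config
  user_data = filebase64("${path.module}/user-data.sh")
}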

Autoscaling Group

Taking the path of least resistance, we create an autoscaling group to use the launch template. At this point we only need one server. If that goes down, the autoscaler will attempt to recreate that one server.
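A minimal autoscaling group of one, tied to the launch template (names are illustrative):

resource "aws_autoscaling_group" "front_end" {
  min_size            = 1
  max_size            = 1
  desired_capacity    = 1
  vpc_zone_identifier = [aws_subnet.the_subnet.id]

  launch_template {
    id      = aws_launch_template.front_end.id
    version = "$Latest"
  }
}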

Security

Finally, we have the various security groups, roles, profiles and policies that allow Terraform to launch your container into the cloud and that allow certain types of ingress and egress traffic. These are all essential, and when one or more are missing it may not be obvious that they are the source of the problem. The traffic is described by direction, port, protocol and the participant addresses allowed by CIDR block. In this case all traffic is allowed from any address, as indicated by the all-zeros CIDR blocks.
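As an example of the shape these take, an instance security group that admits HTTP traffic from anywhere and allows all outbound traffic might look like this (a sketch, not the complete set of groups, roles and policies in TEA):

resource "aws_security_group" "instance" {
  vpc_id = aws_vpc.the_vpc.id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}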

Launching the web-application

At this point, we’ve covered all the parts needed to launch a working web-application with ECS and EC2. Terraform-ecs-app is a complete implementation of these concepts and intended to work out-of-the-box as long as you have followed the steps in the README. Such steps include installing Packer, Docker and Terraform, setting up AWS, setting environmental variables and issuing a few commands to build your image, push it to a repository, and apply the Terraform. The README also has instructions at the end for viewing the public face of the app in your browser (the “hello world”).
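The Terraform part of those steps boils down to the usual trio of commands, run from the directory containing main.tf (consult the README for the exact sequence and the environment variables it expects):

terraform init
terraform plan
terraform apply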

Analysis

If, for any reason, the app does not show after the Terraform successfully applies, you can look at your EC2 instances on the AWS console in your browser. Get the public URL from there and you can ssh into a shell on the machine using the key you named in your AWS launch template.

ssh -i *full path for AWS key with key name* ec2-user@ec2-*rest of the URL*

From that point you can use docker commands like

docker images

to find your image, and

docker container list

to find the container if it is running. If your container is running you can bash into it (enter a bash shell) with

docker exec -it *container id* /bin/bash

If the image is not running you can go to the ECS page in the console and navigate from your cluster to your service and see the status of your task under the events tab. One of the top rows should have a link to your task. Navigate there and you can see the status of the task. Scroll down to containers and click the arrow to the left to see the details. There you may find an error code and possibly a message such as “essential container in task exited”. Consult the documentation.

You can run docker inspect on the instance or check the logs. You may also try running the container on the instance yourself with

docker run -it *image id* /bin/bash

and then starting the server manually from inside the container to narrow down what works and what doesn’t.

Helpful Resources

A handful of resources helped me get to a working infrastructure from scratch. Naturally the documentation was a good aid. Check out the documentation for Terraform, for the AWS provider in the Terraform Registry, and for ECS. Also, Complete-ECS was very helpful and provided examples I cross-referenced to develop a basic set of essential data and resources to get a project online.

Done!

And that’s how it’s done. Thanks for playing and good luck on your web startup.

