AWS Elastic Container Service

Mohtasham Sayeed Mohiuddin
6 min read · Feb 17, 2022

How to create a microservice for a production environment using AWS ECS, Route 53, an AWS Load Balancer, and an ECS cluster.

Wouldn’t it be fantastic if you could deploy to production in the shortest amount of time, passing through your CI/CD pipeline, with all of this standardised across all your environments?

Well, Docker can be your perfect companion:

  • Standardisation (the image you run on dev should be identical to the production one, apart from the ENV variables)
  • Read-only containers (ready to scale horizontally)
  • Rapid deployment (as soon as the image is available, it takes less than 10 seconds to spin up a container)
  • Isolation (you can define the amount of CPU/memory to be used)
  • Security (Docker ensures that your applications are completely isolated from each other)

DISCLAIMER: In a production environment I would use CloudFormation to create all the resources and orchestrate the deployment. Nevertheless, this guide is an introduction to the basics of ECS, to help you understand the underlying system and how to get the best from it.

What is Amazon ECS, and why should I use it?

Amazon Elastic Container Service (Amazon ECS) is a highly scalable, high-performance container orchestration service that supports Docker containers and allows you to easily run and scale containerized applications on AWS. Amazon ECS eliminates the need for you to install and operate your own container orchestration software, manage and scale a cluster of virtual machines, or schedule containers on those virtual machines.

There are a few alternatives out there; however, today we will talk about Amazon ECS, with AWS as your cloud provider.

Let’s get real: High-Level Architecture.

For the sake of this example, I will design a VPC following all the basic best practices that you should apply in a production environment.

In this article, we will focus just on the ECS component. However, this is a pretty standard architecture for a VPC with a public, a private/application, and a data subnet.

As you can see, the private and data subnets are secured: nothing can access them directly from outside without passing through the application load balancer.

Let’s start: The importance of the Application Load Balancer.

The ALB is the entry point for your application! Imagine typing www.yourdomain.com; your DNS will resolve the request to the IP of the Application Load Balancer. At this stage, the Application Load Balancer will analyse the different rules and route the request to a specific target group. A target group is used to route traffic to a particular service and, behind the scenes, to all the different tasks/containers that the service is running across your ECS cluster.
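A listener rule like the one described — match on host and path, forward to a target group — can be sketched as the input shape for the `create_rule` call on boto3’s ELBv2 client. The ARNs, hostname, and priority below are placeholders for illustration, not resources from this guide:

```python
# Sketch of an ALB listener rule that forwards matching requests to a
# target group (the shape boto3's elbv2 create_rule expects).
# All ARNs below are placeholders.
listener_rule = {
    "ListenerArn": "arn:aws:elasticloadbalancing:eu-west-1:111122223333:listener/app/my-alb/abc/def",
    "Priority": 10,  # rules are evaluated in priority order
    "Conditions": [
        # Match requests for www.yourdomain.com under /api/
        {"Field": "host-header", "Values": ["www.yourdomain.com"]},
        {"Field": "path-pattern", "Values": ["/api/*"]},
    ],
    "Actions": [
        # Forward to the target group that fronts the ECS service
        {
            "Type": "forward",
            "TargetGroupArn": "arn:aws:elasticloadbalancing:eu-west-1:111122223333:targetgroup/my-service/123",
        }
    ],
}
```

With boto3 you would pass this dict as keyword arguments to `elbv2_client.create_rule(**listener_rule)`.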

ECS Cluster

In this example, we are using an old-school EC2 instance cluster. Nevertheless, if you are into serverless, I would recommend the Fargate option.

An Amazon ECS cluster is a regional grouping of one or more container instances on which you can run task requests. Each account receives a default cluster the first time you use the Amazon ECS service. Clusters may contain more than one Amazon EC2 instance type.

To create a cluster, jump to your “Amazon ECS” page and click Create Cluster.

From there, you will be asked to choose a template: the Fargate template (not covered in this guide) or a standard template (Linux/Windows). After you select the template, you need to choose:

  • Cluster Name
  • Provisioning Model, Number of Instances, EC2 Image to be used, EBS Storage Size, Keypair
  • Network Configuration, Security Group (never use the public subnet!)
  • Container Instance IAM Role
  • CloudWatch Container Insights
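The same choices can also be made programmatically. The sketch below shows the request shape for boto3’s ECS `create_cluster` call, with the Container Insights setting from the list above; the cluster name is illustrative:

```python
# Sketch of a cluster-creation request (the shape boto3's ecs
# create_cluster call expects). The name is a placeholder.
cluster_request = {
    "clusterName": "production-cluster",
    "settings": [
        # Enables CloudWatch Container Insights for the whole cluster
        {"name": "containerInsights", "value": "enabled"}
    ],
}
```

The instance count, AMI, EBS size, key pair, networking, and IAM role from the console wizard are provisioned separately (they belong to the EC2 Auto Scaling side, not to `create_cluster` itself).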

ECS Service

An ECS service is responsible for launching, monitoring, and recovering tasks/containers started in the ECS cluster. You can define how many tasks you want to run and how to place them in the cluster (using task placement strategies and constraints to customise task placement decisions).

ECS service spawning three tasks in the ECS cluster across three EC2 instances

To allow the ECS service to manage your tasks, you need to provide a task definition, which in the Docker world is roughly the equivalent of a docker-compose.yml file.

ECS Service — Task Definition

The task definition is required to run Docker containers/tasks in AWS ECS, as it defines:

  1. The image to use (generally hosted in AWS ECR, Docker Hub, or your preferred registry)
  2. Memory and CPU limits
  3. The launch type
  4. The logging configuration
  5. And much more

You can refer to this gist to have an idea:
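As a rough illustration (not the linked gist — all names, the image URI, and the region below are made up), a minimal EC2-launch-type task definition has this shape, which you can paste as JSON in the console or pass to boto3’s ECS `register_task_definition` call:

```python
# Minimal task-definition sketch for the EC2 launch type.
# Values are illustrative placeholders.
task_definition = {
    "family": "my-web-app",
    "requiresCompatibilities": ["EC2"],  # the launch type
    "containerDefinitions": [
        {
            "name": "web",
            # The image to use, here assumed to live in ECR
            "image": "111122223333.dkr.ecr.eu-west-1.amazonaws.com/my-web-app:latest",
            "cpu": 256,        # CPU units (1024 = one vCPU)
            "memory": 512,     # hard memory limit in MiB
            "essential": True,
            # hostPort 0 means a dynamic host port, so several copies
            # can share one EC2 instance behind the target group
            "portMappings": [{"containerPort": 80, "hostPort": 0}],
            # The logging configuration, shipping to CloudWatch Logs
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "/ecs/my-web-app",
                    "awslogs-region": "eu-west-1",
                    "awslogs-stream-prefix": "web",
                },
            },
        }
    ],
}
```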

To create a task definition, go to your ECS console and create a new one; it will be needed whenever we start up a service:

Select EC2, as we are using an old-school EC2 ECS cluster

At this stage, you can configure everything via the UI or copy/paste a JSON file.

ECS Service — Service Creation

Jump to your ECS console, select your cluster, and click Create a New Service.
From there, select the task definition you’ve created and follow the instructions to complete the setup.
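The same service creation can be sketched as the request shape for boto3’s ECS `create_service` call. The service name, target group ARN, and revision number are placeholders, and the container name/port are assumed to match the task definition you registered:

```python
# Sketch of a service-creation request (the shape boto3's ecs
# create_service call expects). ARNs and names are placeholders.
service_request = {
    "cluster": "production-cluster",
    "serviceName": "my-web-app",
    "taskDefinition": "my-web-app:1",  # family:revision
    "desiredCount": 3,                 # how many tasks to keep running
    "launchType": "EC2",
    "loadBalancers": [
        # Registers the tasks with the ALB target group described earlier
        {
            "targetGroupArn": "arn:aws:elasticloadbalancing:eu-west-1:111122223333:targetgroup/my-service/123",
            "containerName": "web",
            "containerPort": 80,
        }
    ],
}
```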

Congratulations! Your Docker container is up and running, with a service taking care of health checks and the cluster behind it, ready to host more and more of your projects!

Let’s talk about features.

Auto-healing containers: Using the target group, you can specify a path for checking the status of your application:

In this example, the health check pings the homepage at an interval of 30 seconds, making sure that it returns a status code of 200. If the health check fails, your container will be redeployed automatically!
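That health check lives on the target group. The sketch below shows the relevant parameters as they would appear in boto3’s ELBv2 `create_target_group` call; the threshold counts are illustrative choices, not values from this guide:

```python
# Target-group health-check settings matching the example above
# (homepage, every 30 seconds, expect HTTP 200). Thresholds are
# illustrative defaults.
health_check = {
    "HealthCheckProtocol": "HTTP",
    "HealthCheckPath": "/",               # ping the homepage
    "HealthCheckIntervalSeconds": 30,     # every 30 seconds
    "HealthyThresholdCount": 3,           # successes before "healthy"
    "UnhealthyThresholdCount": 2,         # failures before "unhealthy"
    "Matcher": {"HttpCode": "200"},       # expect a 200 status code
}
```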

Daemon Scheduling: Have you ever needed to run a daemon across your cluster? If so, you know how painful it is to manage (what happens if the container dies on a health check, for example?). This is the AWS-managed way to deploy daemons, such as a logging agent.
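In API terms, a daemon is just a service whose scheduling strategy is `DAEMON` instead of the default `REPLICA`. The service and task-definition names below are made up for illustration:

```python
# Sketch of a daemon service: ECS places exactly one copy of the task
# on every container instance in the cluster, so no desiredCount is set.
# Names are placeholders.
daemon_service = {
    "cluster": "production-cluster",
    "serviceName": "log-agent",
    "taskDefinition": "log-agent:1",
    "schedulingStrategy": "DAEMON",  # one task per EC2 instance
}
```

If an instance joins the cluster, ECS starts the daemon task there automatically; if the task dies, the service restarts it.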

Task Scheduling: Batch jobs? Event-driven jobs? An all-in-one solution: task scheduling allows you to start tasks based on a time interval (cron-like), from a job queue (event-driven), or manually!
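The cron-like flavour is driven by EventBridge (formerly CloudWatch Events): a rule with a schedule expression targets your cluster with a task definition. The rule name, ARNs, and schedule below are placeholders, shaped like boto3’s `put_rule`/`put_targets` inputs:

```python
# Sketch of a cron-style scheduled task via EventBridge.
# All names and ARNs are placeholders.
schedule_rule = {
    "Name": "nightly-batch",
    # AWS cron syntax: run every day at 02:00 UTC
    "ScheduleExpression": "cron(0 2 * * ? *)",
}
rule_target = {
    "Rule": "nightly-batch",
    "Targets": [
        {
            "Id": "run-batch-task",
            "Arn": "arn:aws:ecs:eu-west-1:111122223333:cluster/production-cluster",
            # Role that allows EventBridge to run tasks on your behalf
            "RoleArn": "arn:aws:iam::111122223333:role/ecsEventsRole",
            "EcsParameters": {
                "TaskDefinitionArn": "arn:aws:ecs:eu-west-1:111122223333:task-definition/batch-job:1",
                "TaskCount": 1,
            },
        }
    ],
}
```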

Task Placement: Even if it sounds like “101” material, placing containers across your cluster is not an easy job. You want to make sure, for example, that you have an even distribution, so that if one of your EC2 instances stops or gets restarted by mistake, your service continues to stay up and running.

  • AZ Balanced Spread: spreads tasks across Availability Zones, trying to place each container on an EC2 instance that resides in a different AZ.
  • AZ Balanced BinPack: spreads across Availability Zones, choosing within each AZ the EC2 instance with the least available memory.
  • BinPack: places tasks on the EC2 instance with the least available memory.
  • One Task Per Host: as the name suggests.
  • Custom: the most flexible option; you can specify custom rules, such as which kind of instance should run the task, the AMI ID, the Availability Zone, etc.
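These options map to the `placementStrategy` and `placementConstraints` fields of a service or run-task request. The sketch below (with an assumed `t3.large` instance type in the custom constraint) shows an AZ-balanced-binpack strategy plus a custom rule:

```python
# Placement strategy sketch: spread across AZs first, then binpack on
# memory within each AZ (the "AZ Balanced BinPack" option above).
placement_strategy = [
    {"type": "spread", "field": "attribute:ecs.availability-zone"},
    {"type": "binpack", "field": "memory"},
]

# Custom placement constraint: only run on a given instance type.
# The instance type is an assumed example.
placement_constraints = [
    {"type": "memberOf", "expression": "attribute:ecs.instance-type == t3.large"}
]
```

Both lists would be passed alongside the other `create_service` parameters; constraints are hard rules, while strategies are best-effort preferences.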

If you have any questions, feel free to send me a message on LinkedIn.
