Containers have become the default way to build and run microservices on cloud platforms like AWS and Azure because they package code and dependencies into lightweight, portable units that are easy to deploy, scale, and operate.
Why containers fit microservices
Microservices work best when each service can be developed, deployed, and scaled independently; containers give every service its own isolated runtime with exactly the libraries and configuration it needs. Because container images are immutable, the same artifact runs consistently on a developer laptop, in CI/CD, and in production on AWS or Azure, reducing “works on my machine” issues and easing rollbacks.
Modern platforms also provide built-in networking, service discovery, and scaling for containerized services, so teams can focus on business logic instead of server setup.
Using containers on AWS
On AWS, microservices are commonly deployed to Amazon Elastic Container Service (ECS) or Amazon Elastic Kubernetes Service (EKS), often with AWS Fargate to remove the need to manage EC2 instances. ECS offers a tightly integrated, simpler experience for defining tasks and services per microservice, while Fargate runs those tasks serverlessly by provisioning and scaling compute for containers automatically.
EKS exposes the full Kubernetes API, which suits teams that want advanced customization and Kubernetes-native tooling, at the cost of higher operational complexity. AWS Prescriptive Guidance patterns show typical architectures in which each microservice has its own container image, ECS/EKS service, load-balanced endpoint, and CI/CD pipeline.
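As a sketch of the usual image workflow on AWS, a microservice image is built locally, tagged for Amazon ECR, and pushed so that ECS/EKS/Fargate can pull it at deploy time. The account ID, region, and service name below are placeholders:

```shell
# Authenticate the Docker CLI against your ECR registry
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin <account-id>.dkr.ecr.us-east-1.amazonaws.com

# Build, tag, and push the microservice image
docker build -t orders-service:1.0 .
docker tag orders-service:1.0 <account-id>.dkr.ecr.us-east-1.amazonaws.com/orders-service:1.0
docker push <account-id>.dkr.ecr.us-east-1.amazonaws.com/orders-service:1.0
```

An ECS task definition or Kubernetes deployment then references the pushed image URI.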
Using containers on Azure
On Azure, the two main options for containerized microservices are Azure Kubernetes Service (AKS) and Azure Container Apps. AKS provides managed Kubernetes clusters, ideal when you need full control over orchestration, custom controllers, and complex networking models for many microservices.
Azure Container Apps offers a serverless container platform with built-in service discovery, HTTP/HTTP2 ingress, autoscaling via KEDA, logging, and revision-based deployments, making it well suited for microservices APIs, event-driven workers, and background jobs without managing Kubernetes directly. Microsoft’s reference architectures show containerized microservices moving between AKS and Container Apps with minimal code change, emphasizing platform migration rather than application rewrites.
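On Azure the equivalent flow typically goes through Azure Container Registry (ACR). A minimal sketch, assuming a registry named `myregistry` and a resource group `my-rg` (both placeholders), might look like:

```shell
# Build the image in ACR directly from local source (no local Docker daemon needed)
az acr build --registry myregistry --image orders-service:1.0 .

# Deploy the image as a Container App with external HTTP ingress
az containerapp up --name orders-service \
  --resource-group my-rg \
  --image myregistry.azurecr.io/orders-service:1.0 \
  --ingress external --target-port 8080
```

For AKS, the same pushed image would instead be referenced from a Kubernetes deployment manifest.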
Practical applications of Docker containers
Docker containers are used across a wide set of scenarios in microservices-based cloud systems.
Microservices APIs: Each business capability (orders, payments, inventory, user profiles) runs as its own containerized service, enabling independent deployments and autoscaling.
Legacy modernization: Existing monoliths are gradually decomposed and parts are moved into containers, often as a first step to cloud migration and later to full microservices.
Background and event processing: Containers run workers that handle asynchronous jobs, queues, and event-driven workloads, which can scale out quickly during traffic spikes.
CI/CD and testing environments: Reusable Docker images provide standardized build, test, and staging environments, improving reliability of automated pipelines.
These patterns allow teams to adopt polyglot stacks, improve fault isolation, and scale only the hotspots in an application instead of the entire system.
Docker key commands map (cheat sheet)
Below is an original, compact “key commands map” that highlights core Docker CLI commands commonly used when working with microservices on AWS and Azure.
Core Docker commands table
| Area | Purpose | Example command (pattern) |
|---|---|---|
| Check version/info | Verify Docker installation and details | `docker version`, `docker info` |
| List images | See local images | `docker images` |
| Build image | Build from Dockerfile | `docker build -t myapp:1.0 .` |
| Tag image | Add or change image tag | `docker tag myapp:1.0 registry.example.com/myapp:1.0` |
| List containers | Running / all containers | `docker ps`, `docker ps -a` |
| Run container | Start container from image | `docker run -d -p 8080:80 myapp:1.0` |
| Name container | Give container a friendly name | `docker run --name myapp-dev myapp:1.0` |
| Stop/start | Control container lifecycle | `docker stop myapp-dev`, `docker start myapp-dev` |
| Logs | View container logs | `docker logs -f myapp-dev` |
| Exec into container | Run command inside container | `docker exec -it myapp-dev sh` |
| Remove container | Delete stopped container | `docker rm myapp-dev` |
| Remove image | Delete local image | `docker rmi myapp:1.0` |
| Networks | Basic network operations | `docker network ls`, `docker network create mynet` |
| Volumes | Manage persistent storage | `docker volume ls`, `docker volume create mydata` |
| Prune resources | Clean up unused data | `docker system prune` |
| Login to registry | Authenticate to Docker Hub/registry | `docker login registry.example.com` |
| Push image | Push to registry (ECR/ACR/Hub, etc.) | `docker push registry.example.com/myapp:1.0` |
| Pull image | Fetch from registry | `docker pull registry.example.com/myapp:1.0` |
These commands underpin typical workflows: building microservice images locally, testing them, then pushing to a registry from which AWS (ECS/EKS/Fargate) or Azure (AKS/Container Apps) pulls images for deployment.
Containers for local development
Use containers for local development by containerizing each service, wiring them together with Docker Compose (or similar), and mapping ports and volumes so you can code and debug from your normal tools while everything runs in containers.
Step-by-step local workflow
Install tooling
Install Docker Desktop (or Docker Engine + Docker Compose) on your machine.
Optionally add IDE integrations like VS Code Dev Containers to develop “inside” the same environment your app runs in.
Containerize each service
For every microservice, create a Dockerfile that installs dependencies, copies source, and defines how to start the service (entrypoint/cmd).
Keep Dockerfiles dev-friendly: enable hot-reload tools (nodemon, dotnet watch, etc.) and expose the right ports.
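For instance, a dev-oriented Dockerfile for a hypothetical Node.js service might look like this (the base image, port, and `nodemon` usage are illustrative, and assume `nodemon` is a dev dependency in `package.json`):

```dockerfile
FROM node:20-alpine
WORKDIR /app

# Install dependencies first so this layer is cached across code changes
COPY package*.json ./
RUN npm install

COPY . .
EXPOSE 3000

# Hot-reload on file changes during development
CMD ["npx", "nodemon", "server.js"]
```

A production Dockerfile would typically differ: `npm ci --omit=dev` and a plain `node server.js` entrypoint.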
Define a local stack with Compose
Create a `docker-compose.yml` (or `compose.yaml`) that lists all services (APIs, frontends, DBs, queues) and defines networks and environment variables.
Add named volumes for databases and bind-mounts for source code so changes on the host are reflected inside containers.
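A minimal `docker-compose.yml` along these lines (service names, ports, credentials, and the Postgres image are illustrative):

```yaml
services:
  api:
    build: ./api
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgres://app:secret@db:5432/app
    volumes:
      - ./api:/app                         # bind-mount source for live editing
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: app
    volumes:
      - db-data:/var/lib/postgresql/data   # named volume so data survives restarts
volumes:
  db-data:
```

Compose puts all services on a shared default network, so `api` can reach the database at the hostname `db`.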
Wire code into containers
Use volume mounts like `./service-a:/app` so you can edit code locally while the container runs the app.
Configure your dev script or watch process as the container's command so it reloads automatically when files change.
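In Compose terms, that combination of bind mount and watch command can be sketched like this (the `npm run dev` script is an assumption about the service's tooling):

```yaml
services:
  service-a:
    build: ./service-a
    command: npm run dev        # watch process; restarts on file changes
    volumes:
      - ./service-a:/app        # host edits appear inside the container
      - /app/node_modules       # keep container-installed deps out of the bind mount
```

The anonymous `/app/node_modules` volume is a common trick so host-side files don't shadow dependencies installed during the image build.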
Start and stop the environment
From the project root, bring everything up with `docker compose up` (or `docker-compose up`) and add `-d` to run in the background.
Tear the environment down with `docker compose down`, and use `docker compose ps` and `docker logs` to inspect running services.
Develop, test, and debug
Hit services on `localhost:<port>` from your browser or API clients as usual; containers map those ports to the host.
Run tests in containers (for example with a dedicated test service or `docker compose run app-test`) so your test environment matches CI and production.
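One way to express such a dedicated test service in the Compose file (service name and test command are assumptions):

```yaml
services:
  app-test:
    build: ./api
    command: npm test                # one-shot test run instead of a long-lived server
    environment:
      DATABASE_URL: postgres://app:secret@db:5432/app
    depends_on:
      - db
```

Invoked with `docker compose run --rm app-test`, it runs the suite against the same images and dependencies the other services use, then removes the container.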
Iterate and optimize
Use `.dockerignore`, layered Dockerfile best practices, and shared base images to keep builds fast during frequent code changes.
Add health checks and simple Makefile or npm scripts (`make up`, `make down`) so bringing your local environment up and down is a single command.
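For example, a small Makefile along those lines (target names are just a convention; recipe lines must be indented with tabs):

```makefile
up:        ## start the full local stack in the background
	docker compose up -d

down:      ## stop and remove containers and networks
	docker compose down

logs:      ## follow logs for all services
	docker compose logs -f

test:      ## run the test suite in a throwaway container
	docker compose run --rm app-test
```

Equivalent npm scripts (`"up": "docker compose up -d"`, etc.) work just as well if the team prefers `package.json` as the entry point.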
