Containers have become the default way to build and run microservices on cloud platforms like AWS and Azure because they package code and dependencies into lightweight, portable units that are easy to deploy, scale, and operate.

Why containers fit microservices

Microservices work best when each service can be developed, deployed, and scaled independently; containers give every service its own isolated runtime with exactly the libraries and configuration it needs. Because container images are immutable, the same artifact runs consistently on a developer laptop, in CI/CD, and in production on AWS or Azure, reducing “works on my machine” issues and easing rollbacks.

Modern platforms also provide built-in networking, service discovery, and scaling for containerized services, so teams can focus on business logic instead of server setup.

Using containers on AWS

On AWS, microservices are commonly deployed to Amazon Elastic Container Service (ECS) or Amazon Elastic Kubernetes Service (EKS), often with AWS Fargate to remove the need to manage EC2 instances. ECS offers a tightly integrated, simpler experience for defining tasks and services per microservice, while Fargate runs those tasks serverlessly by provisioning and scaling compute for containers automatically.

EKS exposes the full Kubernetes API, which suits teams that want advanced customization and Kubernetes-native tooling, at the cost of higher operational complexity. AWS Prescriptive Guidance patterns show typical architectures where each microservice has its own container image, ECS/EKS service, load-balanced endpoint, and CI/CD pipeline.
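
As a hedged sketch of that workflow (the account ID, region, repository, cluster, and service names below are placeholders, not values from any real setup), a build–push–redeploy loop for one microservice on ECS with images in Amazon ECR might look like this:

```
# Authenticate Docker to Amazon ECR (placeholder account ID and region)
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Build and tag the microservice image for the ECR repository
docker build -t orders-service:1.0 .
docker tag orders-service:1.0 123456789012.dkr.ecr.us-east-1.amazonaws.com/orders-service:1.0
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/orders-service:1.0

# Roll the ECS service (placeholder cluster/service names); this assumes the
# service's task definition references the tag that was just pushed
aws ecs update-service --cluster prod-cluster --service orders-service --force-new-deployment
```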

Using containers on Azure

On Azure, the two main options for containerized microservices are Azure Kubernetes Service (AKS) and Azure Container Apps. AKS provides managed Kubernetes clusters, ideal when you need full control over orchestration, custom controllers, and complex networking models for many microservices.

Azure Container Apps offers a serverless container platform with built-in service discovery, HTTP and HTTP/2 ingress, autoscaling via KEDA, logging, and revision-based deployments, making it well suited for microservice APIs, event-driven workers, and background jobs without managing Kubernetes directly. Microsoft’s reference architectures show containerized microservices moving between AKS and Container Apps with minimal code change, emphasizing platform migration rather than application rewrites.
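
A comparable hedged sketch on Azure (registry, resource group, and app names are placeholders) builds the image in Azure Container Registry and points an existing Container App at it:

```
# Build the image remotely in Azure Container Registry (placeholder registry name)
az acr build --registry myregistry --image orders-service:1.0 .

# Update an existing Container App (placeholder names) to the new image
az containerapp update \
  --name orders-service \
  --resource-group my-rg \
  --image myregistry.azurecr.io/orders-service:1.0
```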

Practical applications of Docker containers

Docker containers are used across a wide set of scenarios in microservices-based cloud systems.

  • Microservices APIs: Each business capability (orders, payments, inventory, user profiles) runs as its own containerized service, enabling independent deployments and autoscaling.

  • Legacy modernization: Existing monoliths are gradually decomposed and parts are moved into containers, often as a first step to cloud migration and later to full microservices.

  • Background and event processing: Containers run workers that handle asynchronous jobs, queues, and event-driven workloads, which can scale out quickly during traffic spikes.

  • CI/CD and testing environments: Reusable Docker images provide standardized build, test, and staging environments, improving reliability of automated pipelines.

These patterns allow teams to adopt polyglot stacks, improve fault isolation, and scale only the hotspots in an application instead of the entire system.

Docker key commands map (cheat sheet)

Below is a compact “key commands map” of the core Docker CLI commands most commonly used when working with microservices on AWS and Azure.

Core Docker commands table

| Area | Purpose | Example command (pattern) |
| --- | --- | --- |
| Check version/info | Verify Docker installation and details | `docker version`; `docker info` |
| List images | See local images | `docker images` |
| Build image | Build from a Dockerfile | `docker build -t <image-name>:<tag> .` |
| Tag image | Add or change an image tag | `docker tag <image-id> <repo>/<image>:<tag>` |
| List containers | Running / all containers | `docker ps`; `docker ps -a` |
| Run container | Start a container from an image | `docker run -d -p <host-port>:<container-port> <image>` |
| Name container | Give a container a friendly name | `docker run --name <name> <image>` |
| Stop/start | Control the container lifecycle | `docker stop <container>`; `docker start <container>` |
| Logs | View container logs | `docker logs -f <container>` |
| Exec into container | Run a command inside a container | `docker exec -it <container> /bin/sh` |
| Remove container | Delete a stopped container | `docker rm <container>` |
| Remove image | Delete a local image | `docker rmi <image>` |
| Networks | Basic network operations | `docker network ls`; `docker network create <net>` |
| Volumes | Manage persistent storage | `docker volume ls`; `docker volume create <name>` |
| Prune resources | Clean up unused data | `docker system prune` |
| Login to registry | Authenticate to Docker Hub or another registry | `docker login <registry>` |
| Push image | Push to a registry (ECR/ACR/Hub, etc.) | `docker push <repo>/<image>:<tag>` |
| Pull image | Fetch from a registry | `docker pull <repo>/<image>:<tag>` |

These commands underpin typical workflows: building microservice images locally, testing them, then pushing to a registry from which AWS (ECS/EKS/Fargate) or Azure (AKS/Container Apps) pulls images for deployment.
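
As a minimal sketch of that loop using only the commands above (the image and registry names are placeholders):

```
# Build and smoke-test the image locally
docker build -t orders-service:1.0 .
docker run -d -p 8080:3000 --name orders-test orders-service:1.0
docker logs orders-test

# Tag for the target registry, authenticate, and push
docker tag orders-service:1.0 registry.example.com/team/orders-service:1.0
docker login registry.example.com
docker push registry.example.com/team/orders-service:1.0
```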

Containers for local development

Use containers for local development by containerizing each service, wiring them together with Docker Compose (or similar), and mapping ports and volumes so you can code and debug from your normal tools while everything runs in containers.

Step-by-step local workflow

  1. Install tooling

    • Install Docker Desktop (or Docker Engine + Docker Compose) on your machine.

    • Optionally add IDE integrations like VS Code Dev Containers to develop “inside” the same environment your app runs in.

  2. Containerize each service

    • For every microservice, create a Dockerfile that installs dependencies, copies source, and defines how to start the service (ENTRYPOINT/CMD); a minimal Dockerfile sketch appears after this list.

    • Keep Dockerfiles dev-friendly: enable hot-reload tools (nodemon, dotnet watch, etc.) and expose the right ports.

  3. Define a local stack with Compose

    • Create a docker-compose.yml (or compose.yaml) that lists all services (APIs, frontends, DBs, queues) and defines networks and environment variables; a Compose sketch also appears after this list.

    • Add named volumes for databases and bind-mounts for source code so changes on the host are reflected inside containers.

  4. Wire code into containers

    • Use volume mounts like ./service-a:/app so you can edit code locally while the container runs the app.

    • Configure your dev script or watch process as the container’s command so it reloads automatically when files change.

  5. Start and stop the environment

    • From the project root, bring everything up with docker compose up (or docker-compose up) and add -d to run in the background.

    • Tear the environment down with docker compose down, and use docker compose ps and docker logs to inspect running services.

  6. Develop, test, and debug

    • Hit services on localhost:<port> from your browser or API clients as usual; containers map those ports to the host.

    • Run tests in containers (for example with a dedicated test service or docker compose run app-test) so your test environment matches CI and production.

  7. Iterate and optimize

    • Use .dockerignore, layered Dockerfile best practices, and shared base images to keep builds fast during frequent code changes.

    • Add health checks and simple Makefile targets or npm scripts (make up, make down) so bringing your local environment up and down is a single command.
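
To make steps 2–4 concrete, here is a minimal dev-friendly Dockerfile sketch for a hypothetical Node.js service named service-a; the base image, port, and “dev” script are assumptions, not requirements:

```
# service-a/Dockerfile — hypothetical Node.js microservice, dev-friendly variant.
# Assumes package.json defines a "dev" script that runs a file watcher (e.g. nodemon).
FROM node:20-alpine
WORKDIR /app

# Install dependencies first so this layer caches across source changes
COPY package*.json ./
RUN npm install

# Copy the source; in local dev a bind mount typically shadows this layer
COPY . .

EXPOSE 3000
CMD ["npm", "run", "dev"]
```

And a matching Compose sketch that wires the service to a database using the bind mounts and named volume from steps 3–4 (service names, ports, and credentials are again placeholders):

```
# compose.yaml — hypothetical two-service local stack (API + Postgres)
services:
  service-a:
    build: ./service-a
    ports:
      - "3000:3000"            # host:container, so the API answers on localhost:3000
    volumes:
      - ./service-a:/app       # bind mount: edit on the host, reload in the container
      - /app/node_modules      # keep the image's installed deps out of the bind mount
    environment:
      DATABASE_URL: postgres://postgres:postgres@db:5432/app
    depends_on:
      - db

  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: app
    volumes:
      - db-data:/var/lib/postgresql/data   # named volume so data survives restarts

volumes:
  db-data:
```

With these files in place, docker compose up -d starts the stack, edits under ./service-a reload inside the running container, and docker compose down tears everything back down.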
