DevOps · Beginner · 10 min read · Updated March 2025

Docker Fundamentals

Docker is the industry-standard containerization platform. It packages applications and their dependencies into portable, isolated containers that run consistently across any environment — from a developer laptop to a production server.

What is Docker?

Docker is an open-source platform that automates the deployment of applications inside lightweight, portable containers. A container is a runnable instance of an image — it bundles the application code, runtime, libraries, and configuration into a single unit.

Before Docker, the classic problem was "it works on my machine." Docker solves this by ensuring the environment is identical everywhere the container runs.

Key concepts:

  • Image — A read-only template used to create containers (like a class in OOP)
  • Container — A running instance of an image (like an object)
  • Dockerfile — A script of instructions to build an image
  • Registry — A storage and distribution system for images (e.g., Docker Hub)
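The image/container relationship is easy to see from the CLI: one image can back many independent containers. A quick sketch (the names web1 and web2 are illustrative; requires a running Docker daemon):

```shell
# Pull one image, then start two separate containers from it
docker pull nginx:latest
docker run -d --name web1 nginx:latest
docker run -d --name web2 nginx:latest
docker ps --filter ancestor=nginx:latest   # both containers, same image
```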

Docker vs Virtual Machines

Both Docker containers and Virtual Machines (VMs) provide isolation, but they work differently:

  • VMs virtualize the entire hardware stack — each VM runs a full OS kernel, consuming gigabytes of memory.
  • Containers share the host OS kernel — they are isolated at the process level, consuming only megabytes.
  • Containers start in milliseconds; VMs take minutes to boot.
  • A single host can run dozens of containers vs a handful of VMs.
  • VMs provide stronger isolation (separate kernels); containers are lighter but share the kernel.
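The kernel-sharing point can be verified directly on a Linux host (a sketch; requires a running Docker daemon):

```shell
uname -r                          # kernel version on the host
docker run --rm alpine uname -r   # same version: the container shares it
```

Note that on Docker Desktop (macOS/Windows), containers run inside a lightweight Linux VM, so the reported kernel is the VM's, not the host OS's.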

Writing a Dockerfile

A Dockerfile is a text file with instructions that Docker reads to build an image. Each instruction creates a new layer in the image, and layers are cached for faster rebuilds.

Common Dockerfile instructions:

  • FROM — Base image to build upon
  • WORKDIR — Set the working directory inside the container
  • COPY / ADD — Copy files from host to container
  • RUN — Execute a command during build (install packages, compile code)
  • EXPOSE — Document which port the container listens on
  • CMD / ENTRYPOINT — Command to run when the container starts

Building and Running Containers

Here is a complete example: a Dockerfile for containerizing a Node.js application. It assumes an app whose entry point is server.js, listening on port 3000:

dockerfile
# ---- Dockerfile ----
# Use the official Node.js LTS image as base
FROM node:20-alpine
WORKDIR /app
# Copy dependency manifests first so the install layer is cached
COPY package*.json ./
RUN npm ci --omit=dev
# Copy the application source
COPY . .
# Document the port the app listens on
EXPOSE 3000
# Start the app when the container launches
CMD ["node", "server.js"]

Essential Docker CLI Commands

The Docker CLI is your primary interface for managing images and containers:

bash
# ---- Image Management ----
docker build -t myapp:1.0 .          # Build image from Dockerfile in current dir
docker images                         # List all local images
docker pull nginx:latest              # Pull image from Docker Hub
docker push myrepo/myapp:1.0          # Push image to registry
docker rmi myapp:1.0                  # Remove an image

# ---- Container Lifecycle ----
docker run -d -p 3000:3000 --name myapp myapp:1.0   # Run detached, map ports
docker run -it ubuntu:22.04 bash                     # Run interactive shell
docker ps                             # List running containers
docker ps -a                          # List all containers (including stopped)
docker stop myapp                     # Gracefully stop container
docker start myapp                    # Start a stopped container
docker rm myapp                       # Remove a stopped container

# ---- Debugging ----
docker logs myapp                     # View container logs
docker logs -f myapp                  # Follow logs in real-time
docker exec -it myapp bash            # Open shell inside running container
docker inspect myapp                  # Detailed container metadata (JSON)
docker stats                          # Live resource usage (CPU, memory)

# ---- Volumes (persistent storage) ----
docker run -v /host/path:/container/path myapp:1.0
docker volume create mydata
docker run -v mydata:/app/data myapp:1.0
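A quick sketch of why named volumes matter: data written through the volume outlives the container that wrote it (requires a running Docker daemon; the volume name mydata and the paths are illustrative):

```shell
docker volume create mydata
docker run --rm -v mydata:/data alpine sh -c 'echo hello > /data/msg'
docker run --rm -v mydata:/data alpine cat /data/msg
```

Both containers are deleted on exit (--rm), yet the second one still prints hello, because the file lives in the volume, not in either container's writable layer.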

Docker Layers and Caching

Docker images are built in layers — each Dockerfile instruction creates one layer. Understanding layers is critical for building efficient images:

Layer caching: Docker caches each layer. If a layer hasn't changed, Docker reuses the cached version. This makes rebuilds fast.

Best practice: Order instructions from least to most frequently changing. Copy dependency manifests and install packages (COPY package*.json ./ followed by RUN npm install) before copying the full source tree (COPY . .). A source edit then invalidates only the layers from the copy downward, and the expensive install layer is reused from cache.
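For contrast, here is the ordering to avoid, sketched for a Node.js image (entry point name is illustrative):

```dockerfile
# Anti-pattern: copying all source before installing dependencies.
# Any source edit invalidates the cache for the npm install layer too,
# forcing a full reinstall on every rebuild.
FROM node:20-alpine
WORKDIR /app
COPY . .
RUN npm install
CMD ["node", "server.js"]
```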

Multi-stage builds: Use multiple FROM statements to keep the final image small — build in one stage, copy only the artifacts to a minimal runtime image.

dockerfile
# Multi-stage build example (Go application)
# Stage 1: Build
FROM golang:1.21-alpine AS builder
WORKDIR /build
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN go build -o app .

# Stage 2: Minimal runtime image
FROM alpine:3.18
WORKDIR /app
COPY --from=builder /build/app .
EXPOSE 8080
CMD ["./app"]
# Final image is ~10MB instead of ~300MB

Key Takeaways

  • Docker packages applications into portable containers that run consistently across all environments.
  • Images are read-only templates; containers are running instances of images.
  • Dockerfiles define how to build an image — each instruction creates a cached layer.
  • Containers share the host OS kernel, making them far lighter and faster than VMs.
  • Multi-stage builds dramatically reduce final image size by separating build and runtime environments.
