Advanced Docker Commands & Compose Workflows: DevOps Guide (Part 2)
Introduction
Welcome to Part 2 of our Docker series. In Part 1, we successfully installed Docker, configured user permissions, and ran our first container. Now, it's time to move beyond the basics.
In real-world DevOps environments, you don't just run pre-built images. You create custom images, manage networks, handle persistent data, and deploy multi-container applications.
To keep this guide practical, all commands, image builds, networks, and volume configurations are based on a real-world foundation: an Nginx web server running on Ubuntu. Nginx is widely used as a high-performance reverse proxy in front of backend frameworks such as Next.js, Node.js, Django, and FastAPI.
Building Custom Images with Dockerfile
A Dockerfile is a text document containing all the commands needed to assemble an image. We will use an optimized Dockerfile that leverages build arguments, metadata labels, and efficient layer caching.
Step 1: Create the Dockerfile
In this example, I am using an Ubuntu base image and building an Nginx image. First, create a file named Dockerfile in your project directory (sudo is not needed here, and files owned by root can cause permission issues later):
nano Dockerfile
Add this configuration to the file:
ARG UBUNTU_VERSION=22.04
FROM ubuntu:${UBUNTU_VERSION}
LABEL Name="Ubuntu Nginx Web Server" Version="1.0"
LABEL description="Basic Nginx image based on Ubuntu"
ENV DEBIAN_FRONTEND=noninteractive
WORKDIR /var/www/html
RUN apt-get update && apt-get install -y nginx && rm -rf /var/lib/apt/lists/*
EXPOSE 80
ENTRYPOINT ["nginx", "-g", "daemon off;"]
Explanation of Advanced Concepts:
- ARG: defines variables that can be passed in at build time.
- LABEL: adds metadata to the Docker image.
- Layer caching: combining the update, install, and cleanup steps into a single RUN command keeps the image smaller and makes better use of Docker's build cache.
Step 2: Build the Docker Image
docker build -t ubuntu-nginx:v1 .
This command builds the Docker image using the current directory as the build context.
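Because the base image version is exposed through ARG, you can override it at build time without editing the Dockerfile. A quick sketch (the 24.04 tag and the v1-noble image name are just illustrative choices):

```shell
# Override the UBUNTU_VERSION build argument for this one build
docker build --build-arg UBUNTU_VERSION=24.04 -t ubuntu-nginx:v1-noble .

# Confirm the metadata labels baked into the resulting image
docker image inspect --format '{{json .Config.Labels}}' ubuntu-nginx:v1-noble
```

If no --build-arg is supplied, the default declared in the Dockerfile (22.04) is used.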
Managing Docker Images
Once you start building images regularly, your disk space can fill up quickly.
docker images
Lists all available Docker images on the server.
docker rmi ubuntu-nginx:v1
Removes a specific Docker image version.
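A few related housekeeping commands help keep disk usage in check (the registry hostname in the tag example is purely illustrative):

```shell
# Remove only dangling (untagged) images left over from rebuilds
docker image prune

# Show how much space images, containers, and volumes are consuming
docker system df

# Re-tag an image, e.g. to prepare it for a registry push
docker tag ubuntu-nginx:v1 myregistry.example.com/ubuntu-nginx:v1
```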
Data Persistence: Docker Volumes
By default, data created inside a container lives in the container's writable layer and is lost when the container is removed. Docker Volumes solve this problem by storing data outside the container, so it survives restarts, removals, and rebuilds.
Step 1: Create a Docker Volume
docker volume create app_data
docker volume ls
This creates a persistent Docker volume called app_data.
Step 2: Run a MySQL Container with the Volume
docker run -d --name backend-database -v app_data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=admin1234 mysql:8.0
The -v app_data:/var/lib/mysql option mounts the Docker volume inside the MySQL container.
Step 3: Verify the Docker Volume
docker volume inspect app_data
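The inspect command prints JSON metadata about the volume. If you only need a single field, such as the host path where the data actually lives, a Go template keeps the output minimal:

```shell
# Print only the mountpoint of the volume on the host
docker volume inspect --format '{{ .Mountpoint }}' app_data
```

On a default Ubuntu install this path is /var/lib/docker/volumes/app_data/_data, which is exactly where we look in the next step.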
Step 4: Check the Database Files
sudo ls /var/lib/docker/volumes/app_data/_data/
sudo ls /var/lib/docker/volumes/app_data/_data/mysql
Step 5: Enter the Running MySQL Container
docker exec -it backend-database bash
ls /var/lib/mysql
Step 6: Test MySQL Login
docker exec -it backend-database mysql -u root -p
Enter the password:
admin1234
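You can also run a one-off query without opening an interactive shell. Note that passing the password on the command line is acceptable for a lab like this, but not for production:

```shell
# Run a single SQL statement inside the container (password matches the lab setup above)
docker exec backend-database mysql -u root -padmin1234 -e "SHOW DATABASES;"
```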
Connecting Containers: Docker Networks
By default, Docker containers run in isolation. If your Nginx web server needs to talk to your MySQL database, they need a way to communicate securely. Docker Networks allow containers to find and interact with each other using their container names (like built-in DNS) instead of IP addresses, which can change every time a container restarts.
Step 1: Create a Custom Bridge Network
We start by creating a dedicated network for our application stack. This ensures only containers attached to this specific network can communicate with each other, enhancing security.
docker network create app_network
docker network ls
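To confirm what subnet Docker assigned and, later, which containers are attached, inspect the network:

```shell
# Show subnet, gateway, and connected containers for the bridge network
docker network inspect app_network

# Or extract just the names of connected containers with a Go template
docker network inspect --format '{{range .Containers}}{{.Name}} {{end}}' app_network
```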
Step 2: Run a Backend Service on the Network
Now, we launch our MySQL container, but this time we explicitly attach it to our new app_network using the --network flag.
docker run -d --name backend-database-2 --network app_network -e MYSQL_ROOT_PASSWORD=admin1234 mysql:8.0
Step 3: Run the Nginx Container on the Same Network
Next, we run our custom Ubuntu Nginx image, publishing host port 81 to the container's port 80, and connect it to the exact same app_network. Because they share a network, Nginx can now securely route traffic to the database if needed.
docker run -d --name nginx-web --network app_network -p 81:80 ubuntu-nginx:v1
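To verify that the built-in DNS resolution works, resolve the database container's name from inside the Nginx container. This sketch assumes getent is available, which it is in the Ubuntu base image we built from:

```shell
# Resolve the MySQL container's name to its network IP from inside nginx-web
docker exec nginx-web getent hosts backend-database-2
```

If the lookup returns an IP address, the two containers can reach each other by name over app_network.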
Interacting with Running Containers
Sometimes you need to jump inside a running container to check configurations or troubleshoot issues. You can open an interactive bash shell inside your Nginx container with this command:
docker exec -it nginx-web /bin/bash
To monitor real-time access and error logs generated by Nginx without entering the container, use the logs command with the -f (follow) flag:
docker logs -f nginx-web
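On a busy server, the full log history can be overwhelming. docker logs accepts filters to narrow the output:

```shell
# Show only the last 100 lines instead of the whole history
docker logs --tail 100 nginx-web

# Show entries from the last 10 minutes, then keep following
docker logs --since 10m -f nginx-web
```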
Cleaning Up: System Prune
Over time, unused images, stopped containers, and abandoned networks consume disk space. The prune command acts as a garbage collector: by default it removes stopped containers, unused networks, dangling images, and build cache, and the -a flag extends this to every image not referenced by at least one container. Review what you have running before using it.
docker system prune -a
Advanced Docker Compose
Running long strings of docker run commands manually is prone to human error and difficult to scale. Docker Compose solves this by letting you define your entire multi-container architecture (Networks, Volumes, Builds, and Ports) inside a single, version-controlled YAML file.
Step 1: Define the Architecture
Create a file named docker-compose.yml in your project directory. This file instructs Docker to automatically build our custom Nginx image, pull the MySQL image, map our volumes, and connect them via a bridge network.
version: '3.8'

services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: ubuntu_nginx_web
    ports:
      - "82:80"
    networks:
      - app_network
    restart: unless-stopped
    environment:
      - NGINX_ENV=production

  database:
    image: mysql:8.0
    container_name: mysql_database
    environment:
      MYSQL_ROOT_PASSWORD: admin1234
    volumes:
      - app_data:/var/lib/mysql
    networks:
      - app_network
    restart: unless-stopped

volumes:
  app_data:

networks:
  app_network:
    driver: bridge
Note on the Compose `version` attribute: If you are using a modern version of Docker, you might see a terminal warning stating that the version: '3.8' attribute is obsolete. With Docker Compose V2, the engine automatically supports all modern features natively without needing this version number. However, many developers still include it to ensure backward compatibility when deploying to older Linux servers.
Step 2: Build and Run in Detached Mode
Instead of executing multiple commands, a single command now reads the YAML file, builds any missing images, creates the volumes/networks, and starts the containers in the background (-d).
docker compose up --build -d
Step 3: Stop and Remove the Stack
When you are done testing, you can cleanly shut down and remove all containers and networks associated with this project using:
docker compose down
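A few additional Compose subcommands are handy for day-to-day work. Be careful with the -v flag on down: it also deletes the named volumes, so only use it when you intend to discard the database data.

```shell
# List the containers managed by this Compose project
docker compose ps

# Follow logs from a single service defined in the YAML file
docker compose logs -f web

# Tear down containers, networks, AND named volumes (destroys app_data!)
docker compose down -v
```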
Frequently Asked Questions (Part 2)
1. What is the difference between an Image and a Container?
An Image is a blueprint, while a Container is the running instance of that image.
2. Why use Ubuntu base images?
Ubuntu images are familiar, well documented, and make it easy to install common debugging tools, which simplifies troubleshooting compared with minimal base images. The trade-off is a larger image size than alternatives such as Alpine.
3. Why are Docker Volumes important?
Volumes preserve database files and application data even after containers are removed.
4. Why can’t I see /var/lib/mysql directly on Ubuntu?
Because that directory exists inside the Docker container. Docker stores the actual data inside:
/var/lib/docker/volumes/app_data/_data/
Conclusion
You have now learned essential Docker commands used in real-world DevOps environments. Using Ubuntu, Nginx, MySQL, Docker Volumes, Networks, and Docker Compose, you now have a strong foundation for deploying production-ready applications.
With this setup, you can confidently deploy applications using Next.js, Node.js, Django, FastAPI, MySQL, and Nginx together in modern cloud infrastructure.