Complete Docker Setup Tutorial on Linux (Step-by-Step for Beginners to DevOps) – Part 1
Introduction
Docker is widely used in DevOps to build, ship, and run applications inside lightweight containers. In this Part 1 of the Docker series, we will go step by step through installing and setting up Docker on a Linux system. This guide is written from real-world DevOps experience and is helpful for beginners, developers, and engineers who want to get started with Docker.
Docker simplifies application deployment by packaging everything into containers. It ensures applications run consistently across development, testing, and production environments.
In real DevOps environments, Docker is widely used for microservices, CI/CD pipelines, and scalable cloud deployments. This Part 1 focuses on setting up Docker properly on Linux systems.
What is Docker?
Docker is an open-source containerization platform that allows developers to package applications with all dependencies.
Containers are lightweight compared to virtual machines because they share the host OS kernel, making them faster and more efficient.
Prerequisites
- Linux OS (Ubuntu preferred)
- Sudo/root access
- Stable internet connection
- Basic terminal knowledge
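As a quick sanity check (an illustrative sketch, not part of the official setup), you can report which of the required tools are already on the system before you start. Missing ones like curl and gnupg get installed in Step 1 below:

```shell
# Report which required tools are already present (illustrative sketch)
for tool in curl gpg sudo; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing"
  fi
done
```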
Before installing anything, update your package lists and upgrade existing packages:
sudo apt update && sudo apt upgrade -y
Why this command? This ensures your Linux system has the latest security patches and updated software lists before installing new tools.
Install Docker on Linux
Step 1: Install required packages
sudo apt install ca-certificates curl gnupg lsb-release -y
Why add these packages?
curl allows you to download files from the internet via the command line.
gnupg manages security keys.
ca-certificates allows your system to verify secure (HTTPS) connections.
lsb-release lets scripts detect exactly which Ubuntu release you are running so the correct Docker repository is configured.
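You can preview the two system values the later steps rely on. This is an illustrative sketch; the output differs per machine, and on non-Debian systems dpkg is absent, so the snippet falls back to uname:

```shell
# Show the distro codename and CPU architecture the Docker repo setup will use
. /etc/os-release
echo "Distro: $ID, codename: $VERSION_CODENAME"   # codename may be empty on some distros
dpkg --print-architecture 2>/dev/null || uname -m  # e.g. amd64 or x86_64
```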
Step 2: Add Docker GPG key
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
Why this command? A GPG key is a digital signature. This step downloads Docker's official "ID card" so your system knows the software it is about to download is legitimate and hasn't been tampered with by hackers.
Step 3: Add Docker repository
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
Why use echo here? By default, Ubuntu doesn't know where to find the latest version of Docker. We use echo to print the official Docker repository line and pipe it directly into a new configuration file inside your system's software sources list. Now, your package manager knows exactly where to look for Docker.
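To make the moving parts of that one-liner visible, here is a sketch that builds the same repository line with the two dynamic values filled in by hand. "jammy" and "amd64" are illustrative placeholders; on a real system they come from lsb_release -cs and dpkg --print-architecture:

```shell
# Build the Docker repository line piece by piece (illustrative values)
ARCH="amd64"       # normally: $(dpkg --print-architecture)
CODENAME="jammy"   # normally: $(lsb_release -cs)
REPO_LINE="deb [arch=${ARCH} signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu ${CODENAME} stable"
echo "$REPO_LINE"
```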
Step 4: Install Docker Engine
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
Why this command? After updating the package list in the first line, the second line installs the actual Docker Engine (the core program), the CLI (command-line tools so you can type `docker`), and useful plugins like Docker Compose.
Step 5: Start Docker service
sudo systemctl start docker
sudo systemctl enable docker
Why this command? start turns Docker on right now. enable tells your Linux system to automatically start Docker in the background every time you restart your computer.
Post-Installation: Docker User Group Permissions
By default, the Docker command communicates with a system socket file that is owned by the `root` (admin) user. If you try to run Docker without sudo right now, you will get a "permission denied" error.
To avoid typing sudo before every single Docker command—which is annoying and a bad security practice for everyday use—you should add your current user to the Docker group:
sudo usermod -aG docker $USER
newgrp docker
Why these commands? usermod -aG adds your current user to the official "docker" user group. The newgrp command applies the new group membership in your current shell immediately; to apply it everywhere, log out and back in. Once you do this, your user is granted permission to talk to Docker without needing root access (no more typing `sudo`!).
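To confirm the change took effect in your current shell, a quick illustrative check of your group list:

```shell
# Check whether the current user's groups include "docker"
if id -nG | grep -qw docker; then
  echo "docker group: yes"
else
  echo "docker group: no (try logging out and back in)"
fi
```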
Verify Docker Installation
Verify the installation and check the service status:
docker --version
sudo systemctl status docker
Run First Docker Container
Because we fixed our user permissions earlier, we no longer need sudo to run containers:
docker run hello-world
If successful, Docker will download a tiny test image and display a confirmation message showing that your installation is working correctly.
Basic Docker Commands
- List running containers:
docker ps
- List all containers (even stopped ones):
docker ps -a
- List downloaded images:
docker images
- Stop a container:
docker stop container_id
- Remove a container:
docker rm container_id
Docker Compose Setup
If you followed Step 4 above, the Compose plugin is already installed. Otherwise, install it now (requires sudo because it goes through apt):
sudo apt install docker-compose-plugin -y
Create a docker-compose.yml file:
nano docker-compose.yml
Paste the following Example docker-compose.yml content into the file:
version: "3.8"
services:
  web:
    image: nginx
    ports:
      - "8080:80"
Run Compose (No sudo needed!):
docker compose up -d
Why this command? up reads your compose file and starts the services. The -d flag means "detached mode," which lets it run in the background so you can keep using your terminal.
Common Errors & Fixes
Docker daemon not running
If Docker says it cannot connect to the daemon, it means the background service is offline.
sudo systemctl start docker
Socket permission issue (Temporary workaround)
If group permissions still aren't working due to your environment, you can temporarily modify the socket permissions directly. (Note: This is not recommended for production servers for security reasons, but works well for local testing).
sudo chmod 666 /var/run/docker.sock
Why this command? chmod 666 forces the Docker socket file to grant read and write permissions to absolutely every user on the system, bypassing the need for groups entirely.
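If the numeric mode is unfamiliar, this small demonstration on a throwaway file (not the real Docker socket) shows what 666 means in practice:

```shell
# Demonstrate mode 666 on a scratch file: read+write for owner, group, and others
tmp=$(mktemp)
chmod 666 "$tmp"
stat -c '%a' "$tmp"   # prints 666 on GNU/Linux
rm -f "$tmp"
```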
Frequently Asked Questions (Part 1)
1. Is Docker installation different in production servers?
Yes. In production DevOps environments, Docker is installed with security hardening, rootless mode, and CI/CD automation instead of manual setup.
2. Can I use Docker on low configuration servers?
Yes. Docker is lightweight, but production workloads still need proper CPU and RAM allocation.
3. Why do DevOps engineers prefer Docker?
Because it provides consistency, portability, faster deployments, and easy scaling in cloud environments.
4. Is Docker enough for production deployment?
Docker is the foundation, but production systems also use Kubernetes, CI/CD pipelines, and monitoring tools.
5. What will be covered in Part 2?
Part 2 will cover Docker images, Dockerfile, multi-container applications, and real DevOps deployment workflows.
Conclusion
Docker is a must-have skill for DevOps engineers and backend developers. In this Part 1, you learned installation, setup, adding the user group, and basic commands on Linux—and more importantly, why you need each command.
Continue to Part 2 to build real-world Docker images and deployment pipelines.