How to Build and Visualize a Docker Swarm Cluster with Portainer
This guide walks through installing Docker on Debian hosts, creating a Docker Swarm manager and worker nodes, configuring overlay and bridge networks, deploying Portainer for visual management in both single‑node and swarm modes, launching container services, scaling them, and implementing load‑balanced Nginx services.
1. Environment
Install Docker on each Debian 12 host:
<code>sudo apt install docker.io</code>
Host allocation:
| OS | Hostname | IP | Docker version |
|---|---|---|---|
| Debian 12 (bookworm) | fs3 (manager) | 192.168.1.95 | v20.10.24 |
| Debian 12 (bookworm) | fs1 (worker) | 192.168.1.91 | v20.10.24 |
| Debian 12 (bookworm) | fs0 (worker) | 192.168.1.92 | v20.10.24 |
Open required ports on each host:
2377/tcp – manager communication
7946/tcp, 7946/udp – node‑to‑node communication
4789/udp – overlay network traffic
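A minimal sketch of opening these ports, assuming ufw manages the host firewall (adapt to firewalld or raw iptables as needed):

```shell
# Run on every node; assumes ufw is the active firewall
sudo ufw allow 2377/tcp   # cluster management (manager node)
sudo ufw allow 7946/tcp   # node-to-node communication
sudo ufw allow 7946/udp
sudo ufw allow 4789/udp   # overlay (VXLAN) traffic
sudo ufw reload
```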
2. Create Cluster
2.1 Create manager node
<code>sudo docker swarm init --advertise-addr 192.168.1.95</code>
The output confirms the manager is initialized and prints a ready-made join command for workers.
2.2 Add worker nodes
<code>sudo docker swarm join --token SWMTKN-1-... 192.168.1.95:2377</code>
Run this on each worker. To add another manager instead, get a manager token with <code>docker swarm join-token manager</code>. Join tokens do not expire on their own; print the current worker token at any time with <code>docker swarm join-token worker</code>, or invalidate previously issued tokens with <code>docker swarm join-token --rotate worker</code>.
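Once the workers have joined, the membership can be checked from the manager; the hostnames below match the table in section 1:

```shell
# On the manager (fs3): list all nodes, roles, and availability
sudo docker node ls
# Expect fs3 with MANAGER STATUS "Leader", and fs0/fs1 as workers,
# all with STATUS "Ready" and AVAILABILITY "Active"
```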
2.3 Cluster networks
ingress – the default overlay network that carries the swarm routing mesh; services use it when no custom network is specified.
docker_gwbridge – a local bridge network, created on each node, that connects overlay networks (including ingress) to the host's physical network.
List networks with <code>docker network ls</code>.
3. Cluster Management Visualization (Portainer)
3.1 Single‑node deployment
<code># Pull the image
docker pull portainer/portainer-ce:latest
# Run the container
docker run \
-p 8000:8000 \
-p 9443:9443 \
-v /var/run/docker.sock:/var/run/docker.sock \
-v portainer_data:/data \
--name my-portainer \
-d \
--privileged=true \
--restart=always \
portainer/portainer-ce:latest</code>
The <code>--privileged=true</code> flag runs the container with extended privileges, and <code>--restart=always</code> restarts it automatically with the Docker daemon.
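Once running, the UI is served over HTTPS on port 9443 (the first visit asks you to create the admin account). A quick smoke test from the shell, using the manager IP from section 1:

```shell
docker ps --filter name=my-portainer   # container should be "Up"
curl -kI https://192.168.1.95:9443     # -k: the default certificate is self-signed
```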
3.2 Swarm deployment
<code>curl -L https://downloads.portainer.io/ce2-19/portainer-agent-stack.yml -o portainer-agent-stack.yml
# (YAML content omitted for brevity)
# Deploy the stack
docker stack deploy -c portainer-agent-stack.yml portainer</code>
After deployment, one network and two services appear:
portainer_agent_network,
portainer_agent, and
portainer_portainer. The agent runs as a global service on every node, allowing the manager UI to control the whole swarm.
3.3 Reset Portainer admin password
<code>docker run --rm -v /var/lib/docker/volumes/portainer_data/_data:/data portainer/helper-reset-password</code>
Stop the Portainer container first; the command prints a new admin password. Restart the container and log in with the new credentials.
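The full sequence, assuming the single-node container name used above:

```shell
docker stop my-portainer   # the helper needs exclusive access to the data volume
docker run --rm \
  -v /var/lib/docker/volumes/portainer_data/_data:/data \
  portainer/helper-reset-password
docker start my-portainer  # log in with the password the helper printed
```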
4. Deploy Container Services
4.1 Pull busybox image
<code>docker pull busybox</code>
4.2 Create overlay network
<code>docker network create -d overlay --attachable busybox_overlay_network</code>
Note the <code>--attachable</code> flag, which lets both swarm services and standalone containers join the network.
4.3 Create busybox service
<code>docker service create -td --name busybox_service \
--network busybox_overlay_network \
--replicas=2 busybox</code>
Two replicas are created, one on each worker node.
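Placement can also be confirmed from the CLI on the manager:

```shell
docker service ps busybox_service                # one task per worker node
docker service ls --filter name=busybox_service  # REPLICAS should read 2/2
```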
4.4 Verify with Portainer
The service appears with two running tasks, each on a different node.
4.5 Inter‑node container communication
Enter the busybox container on fs0 and ping the container on fs1 (10.0.2.4 ↔ 10.0.2.3); the ping succeeds, confirming overlay network connectivity.
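A sketch of that check; the container ID here is hypothetical, so look up the real one with docker ps on each node first:

```shell
# On fs0: find the local task and test connectivity from inside it
docker ps --filter name=busybox_service           # note the container ID, e.g. abc123
docker exec -it abc123 ping -c 3 10.0.2.3         # reach the replica on fs1 over the overlay
docker exec -it abc123 nslookup busybox_service   # swarm DNS resolves the service VIP
```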
4.6 Scale the service
Increase the replica count to 3 via Portainer; a third container is scheduled on one of the nodes.
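The same operation from the CLI:

```shell
docker service scale busybox_service=3   # equivalent to editing the replica count in Portainer
docker service ps busybox_service        # shows where the third task was scheduled
```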
5. Load Balancing
5.1 Stateless service with Nginx
<code># Pull image
docker pull nginx
# Create overlay network for the service
docker network create -d overlay --attachable nginx_overlay_network
# Deploy three replicas and expose port 8080
docker service create -td --name nginx_service \
--network nginx_overlay_network \
--replicas=3 -p 8080:80 nginx</code>
After a short wait, all three replicas are running. Browse to http://<manager-ip>:8080 to see the default Nginx page; thanks to the swarm routing mesh, requests to port 8080 on any node are load-balanced across the three containers.
To make the distribution visible, give each replica a distinct index.html (e.g., <code>echo "server 192.168.1.95" > /usr/share/nginx/html/index.html</code> inside the container) and verify that repeated requests return different responses.
6. Clean‑up
Stop and remove services, networks, and the Portainer stack when the demo is finished.
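A tear-down sketch using the names created in this guide:

```shell
# Remove demo services and their networks
docker service rm busybox_service nginx_service
docker network rm busybox_overlay_network nginx_overlay_network
# Remove Portainer: the stack (swarm mode) and/or the single-node container
docker stack rm portainer
docker rm -f my-portainer && docker volume rm portainer_data
# Optionally dissolve the swarm
docker swarm leave           # on each worker
docker swarm leave --force   # then on the manager
```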
Raymond Ops
Linux ops automation, cloud-native, Kubernetes, SRE, DevOps, Python, Golang and related tech discussions.