Deploy a Multi‑Instance Harbor Registry Using Alibaba Cloud NAS and NFS
This guide walks through building a highly available Harbor container registry across multiple nodes by selecting a shared storage backend, configuring Redis and PostgreSQL as external services, mounting Alibaba Cloud high‑performance NAS via NFS, and exposing the service through Alibaba SLB.
Multi‑Instance Shared Storage Architecture
The load balancer uses Alibaba SLB instead of Nginx.
Key Design Considerations
Select a shared storage backend; Harbor supports AWS S3, OpenStack Swift, Ceph, etc. This guide uses Alibaba Cloud high‑performance NAS (mounted with NFS v3) for superior I/O.
Each instance's embedded Redis cannot share session data with the others, so Redis must be deployed as an external service that all Harbor instances connect to.
Harbor’s database must also be deployed independently, with all instances connecting to the same database.
In production, prefer the high‑performance NAS variant over the generic NAS option.
Alibaba Cloud NAS performance reference: https://help.aliyun.com/document_detail/124577.html?spm=a2c4g.11186623.6.552.2eb05ea0HJUgUB
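The considerations above map to the `external_database` and `external_redis` sections of `harbor.yml`. The excerpt below is a sketch using this guide's example host (`192.168.10.10`), the example `test2021` password, and the database names Harbor creates by default; the Redis db_index values mirror the template defaults but should be checked against your own `harbor.yml.tmpl`:

```yaml
# Sketch: harbor.yml pointing all instances at shared external services
# (host and password are this guide's examples, not defaults)
external_database:
  harbor:
    host: 192.168.10.10
    port: 5432
    db_name: registry
    username: postgres
    password: test2021
    ssl_mode: disable
  notary_signer:
    host: 192.168.10.10
    port: 5432
    db_name: notarysigner
    username: postgres
    password: test2021
    ssl_mode: disable
  notary_server:
    host: 192.168.10.10
    port: 5432
    db_name: notaryserver
    username: postgres
    password: test2021
    ssl_mode: disable

external_redis:
  host: 192.168.10.10:6379
  registry_db_index: 1
  jobservice_db_index: 2
  chartmuseum_db_index: 3
  trivy_db_index: 5
```

Every Harbor node uses an identical copy of this configuration, which is why copying `/opt/harbor` between nodes (step 5) is sufficient.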
Deployment Resources
harbor1 – 192.168.10.10 – CentOS 7.9
harbor2 – 192.168.10.11 – CentOS 7.9
Deployment Steps
1. Mount Alibaba Cloud High‑Performance NAS
Both harbor1 and harbor2 need to mount the NAS.
Configure automatic mounting by editing /etc/fstab and adding the NFS mount entry.
<code># Create NAS mount directory
$ mkdir /data
# Increase concurrent NFS request slots
# (note: plain `sudo echo ... >> file` fails because the redirection
# runs in the unprivileged shell; pipe through `sudo tee -a` instead)
$ echo "options sunrpc tcp_slot_table_entries=128" | sudo tee -a /etc/modprobe.d/sunrpc.conf
$ echo "options sunrpc tcp_max_slot_table_entries=128" | sudo tee -a /etc/modprobe.d/sunrpc.conf
</code>
Mount the NFS v4 filesystem by adding this entry to /etc/fstab:
<code>file-system-id.region.nas.aliyuncs.com:/ /data nfs vers=4,minorversion=0,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,_netdev,noresvport 0 0
</code>
Mount the NFS v3 filesystem (if needed) with this /etc/fstab entry:
<code>file-system-id.region.nas.aliyuncs.com:/ /data nfs vers=3,nolock,proto=tcp,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,_netdev,noresvport 0 0
</code>
<code># Apply fstab changes and mount
$ mount -a
# Verify mount
$ df -h | grep aliyun
</code>
2. Deploy a Temporary Single-Node Harbor
Run the following commands on harbor1:
<code># Online install Harbor
$ cd /opt/
$ wget https://github.com/goharbor/harbor/releases/download/v2.2.1/harbor-online-installer-v2.2.1.tgz
$ tar xf harbor-online-installer-v2.2.1.tgz
$ cd /opt/harbor
$ cp harbor.yml.tmpl harbor.yml
# Create data directory
$ mkdir /data/harbor
# Add SSL certificates
$ mkdir /data/harbor/cert
$ scp harbor.example.pem [email protected]:/data/harbor/cert/
$ scp harbor.example.key [email protected]:/data/harbor/cert/
# Compare your edited configuration against the template
$ diff harbor.yml harbor.yml.tmpl
# Prepare Harbor with optional components
$ ./prepare --with-notary --with-trivy --with-chartmuseum
# Install Harbor
$ ./install.sh --with-notary --with-trivy --with-chartmuseum
# Verify containers
$ docker-compose ps
</code>
3. Deploy Separate Harbor Database and Redis
<code># Create storage directories (UID/GID 999 is the service user inside the Harbor images)
$ mkdir -p /data/harbor-redis /data/harbor-postgresql
$ chown -R 999:999 /data/harbor-redis /data/harbor-postgresql
</code>
<code># docker-compose.yml for PostgreSQL and Redis
version: '2.3'
services:
  redis:
    image: goharbor/redis-photon:v2.2.1
    container_name: harbor-redis
    restart: always
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - SETGID
      - SETUID
    volumes:
      - /data/harbor-redis:/var/lib/redis
    networks:
      - harbor-db
    ports:
      - 6379:6379
  postgresql:
    image: goharbor/harbor-db:v2.2.1
    container_name: harbor-postgresql
    restart: always
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - DAC_OVERRIDE
      - SETGID
      - SETUID
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: test2021
    volumes:
      - /data/harbor-postgresql:/var/lib/postgresql/data:z
    networks:
      - harbor-db
    ports:
      - 5432:5432
networks:
  harbor-db:
    driver: bridge
</code>
<code># Deploy the services
$ docker-compose up -d
</code>
4. Import PostgreSQL Data
<code># Export data from the temporary Harbor DB container
$ docker exec -it -u postgres harbor-db bash
$ pg_dump -U postgres registry > /tmp/registry.sql
$ pg_dump -U postgres notarysigner > /tmp/notarysigner.sql
$ pg_dump -U postgres notaryserver > /tmp/notaryserver.sql
$ exit
# The dumps live inside the container; copy them out to the host first
$ docker cp harbor-db:/tmp/registry.sql /tmp/
$ docker cp harbor-db:/tmp/notarysigner.sql /tmp/
$ docker cp harbor-db:/tmp/notaryserver.sql /tmp/
# Import into the external PostgreSQL instance
$ psql -h 192.168.10.10 -U postgres registry -W < /tmp/registry.sql
$ psql -h 192.168.10.10 -U postgres notarysigner -W < /tmp/notarysigner.sql
$ psql -h 192.168.10.10 -U postgres notaryserver -W < /tmp/notaryserver.sql
</code>
5. Clean Up Temporary Harbor Data
<code># Stop the temporary Harbor before removing its files
$ cd /opt/harbor && docker-compose down
# Backup certificates and remove old data
$ cp -a /data/harbor/cert /tmp/
$ rm -rf /data/harbor/*
$ rm -rf /opt/harbor
$ cp -a /tmp/cert /data/harbor/
# Re‑extract installer and re‑configure
$ cd /opt/
$ tar xf harbor-online-installer-v2.2.1.tgz
$ cd /opt/harbor
$ cp harbor.yml.tmpl harbor.yml
# Adjust configuration to point to external DB and Redis, then install again
$ ./prepare --with-notary --with-trivy --with-chartmuseum
$ ./install.sh --with-notary --with-trivy --with-chartmuseum
$ docker-compose ps
# Copy the configured directory to the second node
$ scp -r /opt/harbor 192.168.10.11:/opt/
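# Copying the files does not start Harbor on harbor2; bring the stack up
# there as well (re-running ./install.sh on harbor2 is an equivalent option)
$ ssh 192.168.10.11 "cd /opt/harbor && docker-compose up -d"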
</code>
6. Configure Alibaba Cloud SLB
Configure a TCP listener on port 443 that forwards traffic to the two Harbor nodes (harbor1 and harbor2). Detailed steps are available in the Alibaba Cloud SLB documentation: https://help.aliyun.com/document_detail/205495.html?spm=a2c4g.11174283.6.666.f9aa1192jngFKC
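Before (and after) wiring up the listener, it is worth confirming that each backend actually answers on 443. Harbor v2.x exposes a health endpoint at `/api/v2.0/health`; the helper below is a minimal sketch, with this guide's node IPs shown as example arguments:

```shell
# Probe a Harbor node's health API over HTTPS.
# -k skips certificate verification (acceptable for a quick smoke test),
# -s silences progress output, -f makes curl exit non-zero on HTTP errors.
check_harbor() {
  curl -ksf "https://$1/api/v2.0/health"
}

# Example usage against this guide's nodes:
# check_harbor 192.168.10.10
# check_harbor 192.168.10.11
```

Running the same check against the SLB's virtual IP verifies the listener end to end.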
Ops Development Stories
Maintained by a like‑minded team, covering both operations and development. Topics span Linux ops, DevOps toolchain, Kubernetes containerization, monitoring, log collection, network security, and Python or Go development. Team members: Qiao Ke, wanger, Dong Ge, Su Xin, Hua Zai, Zheng Ge, Teacher Xia.