Setting up a Private Docker Registry in a Kubernetes Cluster - Test Pushing Images and Running Deployments

I am currently in the initial stages of creating a “Sideproject Hub”, where I aim to rapidly deploy small coding projects within Docker containers on my own server. To ensure total overengineering from the very beginning, I started the process by setting up a Kubernetes cluster on my server. This cluster will serve as the foundation for efficiently running my future side projects.

Within this Kubernetes cluster, I want to run Docker images, so I want my own Docker registry. Not only because I simply want to have my own registry and continue my overengineering, but also because the quota available from GitHub or other providers is limited.

So I went through the process of setting up my own private Docker registry within the Kubernetes cluster. Here are the notes I made during the process.

Disclaimer: I’m writing these lines as I try all this stuff out for myself. This is how it worked for me, and I’m happy if I can help someone else with my notes. If anyone knows how to do it in a better way or if any information is incorrect, please let me know!

Create Kubernetes Namespace for Registry
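The registry resources get their own namespace. The name docker-registry is what the later manifests and commands reference:

kubectl create namespace docker-registry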

Configuring Persistent Storage for the Docker Registry in Kubernetes

apiVersion: v1
kind: PersistentVolume
metadata:
  name: docker-registry-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"

Persistent Volumes are not scoped to any namespace; only Persistent Volume Claims are.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: docker-registry-pv-claim
  namespace: docker-registry
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

The registry-pv.yaml file defines a Persistent Volume with a capacity of 10 gigabytes mapped to the host path “/mnt/data”. Persistent Volumes (PVs) need to be accessed via Persistent Volume Claims (PVCs), which serve as requests for storage resources. Yet another Kubernetes concept that offers advantages in complex environments, but one I don’t care about at this point.
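Both manifests can then be applied with kubectl (assuming the claim is saved as registry-pvc.yaml; the PVC already carries its namespace in the manifest, while the PV is cluster-scoped):

kubectl apply -f registry-pv.yaml
kubectl apply -f registry-pvc.yaml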

Generate Registry User Credentials

export REGISTRY_USER=<awesome-username>
export REGISTRY_PASS=<awesome-password>
export DESTINATION_FOLDER=./registry-creds

mkdir -p ${DESTINATION_FOLDER}
echo ${REGISTRY_USER} >>${DESTINATION_FOLDER}/registry-user.txt
echo ${REGISTRY_PASS} >>${DESTINATION_FOLDER}/registry-pass.txt

docker run --entrypoint htpasswd registry:2.7.0 \
    -Bbn ${REGISTRY_USER} ${REGISTRY_PASS} \
    >${DESTINATION_FOLDER}/htpasswd

unset REGISTRY_USER REGISTRY_PASS DESTINATION_FOLDER

The key command is the docker run command. It uses the registry:2.7.0 Docker image with htpasswd as the entry point to generate a bcrypt-hashed password (the -B flag) for the specified username. htpasswd creates or updates password files that store usernames and hashed passwords, which web servers use to authenticate users. The resulting entry is written to an htpasswd file within the destination folder.
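The generated line (username plus bcrypt hash) is exactly what goes into the Helm values in the next step; it can be inspected with:

cat ./registry-creds/htpasswd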

Install Docker Registry using Helm

replicaCount: 1
persistence:
  enabled: true
  size: 10Gi
  deleteEnabled: true
  storageClass: csi-cinder-classic
  existingClaim: docker-registry-pv-claim
secrets:
  htpasswd: awesome-username:$2y$05$P9uMz/WHFoGrO8SmqufQBObtabXj5z4CVjaQT1L3qPHcoBwTcnu3e

Helm is a package manager for Kubernetes applications. It uses “charts”, which are packages of pre-configured Kubernetes resources, making it easier to run these applications on Kubernetes clusters.

The registry-chart.yaml file provides additional configuration for the Helm chart when installing the Docker registry. It includes configuration details such as storage claims (size, storage class, delete option, existing claim), the replica count, and the authentication secret (htpasswd). The install command itself is sketched below.
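A sketch of the installation, assuming the commonly used twuni/docker-registry chart, whose values layout matches the keys above:

helm repo add twuni https://helm.twun.io
helm repo update
helm install docker-registry twuni/docker-registry \
    --namespace docker-registry \
    -f registry-chart.yaml

With this release name, the chart's service should come up as docker-registry, which is what the ingress below points at.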

Make the Registry Externally Available

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: registry-ingress
  namespace: docker-registry
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.tls: "true"
    cert-manager.io/cluster-issuer: letsencrypt-ricosapps-prod
spec:
  tls:
    - hosts:
        - registry.<yourdomain.de>
      secretName: letsencrypt-ricosapps-prod
  rules:
    - host: registry.<yourdomain.de>
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: docker-registry
                port:
                  number: 5000

Use HTTPS and a TLS certificate; otherwise, Docker seems to complain during login, pushing, and pulling.
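Once the ingress and certificate are in place, external access can be checked with a login against the registry, using the credentials generated earlier:

docker login registry.<yourdomain.de>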

Test Docker Image Build and Push

<!DOCTYPE html>
<html>
  <head>
    <title>Hello Sideproject Hub</title>
  </head>
  <body>
    <h1>Hello, Sideproject Hub!</h1>
  </body>
</html>
FROM nginx:alpine
COPY index.html /usr/share/nginx/html/index.html
CMD ["nginx", "-g", "daemon off;"]
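
Before tagging and pushing, the image has to be built locally; a minimal sketch, assuming the Dockerfile and index.html sit in the current directory and the local image name is hellohub (matching the tag command below):

docker build -t hellohub .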
docker tag hellohub registry.<yourdomain.de>/test/hellohub:latest
docker push registry.<yourdomain.de>/test/hellohub:latest
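If the push went through, the repository shows up in the registry's catalog. The JSON below looks like the response of the standard /v2/_catalog endpoint, which can be queried with the credentials generated earlier, for example:

curl -u <awesome-username>:<awesome-password> https://registry.<yourdomain.de>/v2/_catalog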
{
    "repositories": [
        "test/hellohub"
    ]
}

Run Test Image in Cluster from Private Registry

apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: test
  name: hellohub
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hellohub
  template:
    metadata:
      labels:
        app: hellohub
    spec:
      imagePullSecrets:
        - name: ricosapps-private-registry-key
      containers:
        - name: hellohub
          image: registry.<yourdomain.de>/test/hellohub:latest
          ports:
            - containerPort: 80

The YAML defines a Kubernetes Deployment named “hellohub” within the “test” namespace. It runs a single replica of the image from the newly created private registry (registry.<yourdomain.de>/test/hellohub:latest) and includes the image pull secret (“ricosapps-private-registry-key”) for authenticating against the private registry.
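The pull secret has to exist in the test namespace before the Deployment can pull the image. A sketch of how it can be created from the registry credentials generated earlier (assuming the test namespace does not exist yet):

kubectl create namespace test
kubectl create secret docker-registry ricosapps-private-registry-key \
    --docker-server=registry.<yourdomain.de> \
    --docker-username=<awesome-username> \
    --docker-password=<awesome-password> \
    --namespace test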

Addressing the Architecture Differences Challenge

The M1 MacBook operates on an ARM64 architecture, while the cluster runs on AMD64. To determine the architecture of your local machine or a server, you can use the uname -m or arch command, which prints the machine architecture of the system. For example, “x86_64” indicates a 64-bit x86 architecture, while “arm64” represents a 64-bit ARM architecture.
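One common way to handle the mismatch is to build the image explicitly for the cluster's architecture, for example with docker buildx (a sketch, not necessarily the only option):

docker buildx build --platform linux/amd64 \
    -t registry.<yourdomain.de>/test/hellohub:latest \
    --push .

The --push flag builds and pushes in one step, so the separate docker tag and docker push commands from above are not needed for this variant.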

Links to the resources I used and to information that helped me in the process.