I am currently in the initial stages of creating a “Sideproject Hub”, where I aim to rapidly deploy small coding projects within Docker containers on my own server. To ensure total overengineering from the very beginning, I started the process by setting up a Kubernetes cluster on my server. This cluster will serve as the foundation for efficiently running my future side projects.
Within this Kubernetes cluster, I want to run Docker images, and therefore I want my own Docker registry. Not only because I simply want to have my own Docker registry and continue my overengineering, but also because the free quota available from GitHub or other providers is limited.
So I went through the process of setting up my own private Docker registry within the Kubernetes cluster. Here are the notes I made during the process.
Disclaimer: I’m writing these lines as I try all this stuff out for myself. This is how it worked for me, and I’m happy if I can help someone else with my notes. If anyone knows how to do it in a better way or if any information is incorrect, please let me know!
Create Kubernetes Namespace for Registry
- I started by creating a dedicated namespace within Kubernetes to isolate the components related to the Docker registry
kubectl create namespace docker-registry
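- To double-check that the namespace exists before continuing, it can simply be listed (a quick sanity check, not strictly required)
kubectl get namespace docker-registry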
Configuring Persistent Storage for the Docker Registry in Kubernetes
- To ensure persistent storage for the Docker registry, I added a Persistent Volume (PV) to the Kubernetes cluster
kubectl apply -f registry-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: docker-registry-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
Note that Persistent Volumes are not scoped to any namespace; only the Persistent Volume Claims are.
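- To verify that the cluster-wide PV was created, it can be listed; its status should show "Available" until a claim binds it
kubectl get pv docker-registry-pv-volume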
- Next, I created a Persistent Volume Claim (PVC) to request storage resources from the previously defined PV
kubectl apply -f registry-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: docker-registry-pv-claim
  namespace: docker-registry
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
- I verified that the PVC was successfully created and bound to the PV, ensuring that the storage was ready for use
kubectl get pvc docker-registry-pv-claim -n docker-registry
The `registry-pv.yaml` file defines a Persistent Volume with a capacity of 10 gigabytes mapped to the host path "/mnt/data". Persistent Volumes (PVs) need to be accessed via Persistent Volume Claims (PVCs), which serve as requests for storage resources. Another crazy Kubernetes concept that offers advantages in complex environments, but I don’t care at this point.
Generate Registry User Credentials
- For securing access to the Docker registry, I generated user credentials using this script. Make sure to customize the `REGISTRY_USER` and `REGISTRY_PASS` variables as needed. Then run `./gen-pass.sh`
#!/bin/bash
export REGISTRY_USER=<awesome-username>
export REGISTRY_PASS=<awesome-password>
export DESTINATION_FOLDER=./registry-creds

mkdir -p ${DESTINATION_FOLDER}
echo ${REGISTRY_USER} >>${DESTINATION_FOLDER}/registry-user.txt
echo ${REGISTRY_PASS} >>${DESTINATION_FOLDER}/registry-pass.txt

# generate a bcrypt htpasswd entry using the htpasswd binary bundled in the registry image
docker run --rm --entrypoint htpasswd registry:2.7.0 \
    -Bbn ${REGISTRY_USER} ${REGISTRY_PASS} \
    >${DESTINATION_FOLDER}/htpasswd

unset REGISTRY_USER REGISTRY_PASS DESTINATION_FOLDER
The key command is the `docker run` command. It uses the `registry:2.7.0` Docker image with the `htpasswd` entry point to generate a bcrypt-hashed password for the specified username. `htpasswd` creates or updates password files storing usernames and hashed passwords; these files are used by web servers to authenticate users. The resulting entry is written to an `htpasswd` file within the destination folder.
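- To get a feeling for what the script produced, simply print the generated file; it should contain a single line with the username and a bcrypt hash (the exact hash will differ, the line below only shows the format)
cat ./registry-creds/htpasswd
# awesome-username:$2y$05$...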
Install Docker Registry using Helm
- Add the Helm repository for the Docker registry to easily manage the deployment using Helm charts
helm repo add twuni https://helm.twun.io
- Update the Helm repositories
helm repo update
- Utilizing Helm, I installed the Docker registry in the designated namespace, configuring it with the specified options from the `registry-chart.yml` file
helm install -f registry-chart.yml docker-registry --namespace docker-registry twuni/docker-registry
replicaCount: 1
persistence:
  enabled: true
  size: 10Gi
  deleteEnabled: true
  storageClass: csi-cinder-classic
  existingClaim: docker-registry-pv-claim
secrets:
  htpasswd: awesome-username:$2y$05$P9uMz/WHFoGrO8SmqufQBObtabXj5z4CVjaQT1L3qPHcoBwTcnu3e
Helm is a package manager for Kubernetes applications. It uses “charts”, which are packages of pre-configured Kubernetes resources, making it easier to run these applications on Kubernetes clusters.
In the provided command, the `registry-chart.yml` file is used to add additional configuration for the Helm chart during the installation of the Docker registry. It includes configuration details such as storage claims (size, storage class, delete options), replica count, and authentication secrets (htpasswd).
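- To see whether the chart actually came up, the Helm release and the resulting pod can be checked (release and namespace names match the install command above)
helm list --namespace docker-registry
kubectl get pods --namespace docker-registry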
- Want to test if this is working? The following command can be used to forward the Docker registry port to your local machine:
kubectl port-forward svc/docker-registry 5000:5000 --namespace docker-registry
- Now open a web browser and navigate to the Docker registry URL
http://localhost:5000/v2/_catalog
- This URL should prompt for authentication. Enter the generated credentials for the Docker registry. After successful authentication, there should be an empty list of repositories, indicating that the Docker registry is accessible and operational.
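- The same check also works from the terminal; curl's `-u` flag sends the basic-auth credentials generated earlier
curl -u <awesome-username>:<awesome-password> http://localhost:5000/v2/_catalog
# expected response from an empty registry:
# {"repositories":[]}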
Make the Registry Externally Available
- For external access to the Docker registry, I added an Ingress resource to expose it through a specified domain
kubectl apply -f registry-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: registry-ingress
  namespace: docker-registry
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.tls: "true"
    cert-manager.io/cluster-issuer: letsencrypt-ricosapps-prod
spec:
  tls:
    - hosts:
        - registry.<yourdomain.de>
      secretName: letsencrypt-ricosapps-prod
  rules:
    - host: registry.<yourdomain.de>
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: docker-registry
                port:
                  number: 5000
- Now, the same page that was accessible on localhost through port forwarding in the previous step should be reachable on the specified domain "registry.<yourdomain.de>".
Use HTTPS and a valid TLS certificate; otherwise, Docker seems to complain during login, pushing, and pulling.
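The Ingress above assumes a cert-manager ClusterIssuer named letsencrypt-ricosapps-prod already exists in the cluster. A minimal sketch of such an issuer could look like this (the ACME email is a placeholder, and the HTTP-01 solver assumes Traefik as the ingress class):
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-ricosapps-prod
spec:
  acme:
    # Let's Encrypt production ACME endpoint
    server: https://acme-v02.api.letsencrypt.org/directory
    email: <your-email>
    privateKeySecretRef:
      name: letsencrypt-ricosapps-prod
    solvers:
      - http01:
          ingress:
            class: traefik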
Test Docker Image Build and Push
- Login to your private Docker registry
docker login registry.<yourdomain.de>
- I built a simple Docker image for testing by running `docker build -t hellohub .` in the directory where the Dockerfile is located. The Docker image runs a simple nginx web server serving this HTML page; the Dockerfile follows below.
<!DOCTYPE html>
<html>
  <head>
    <title>Hello Sideproject Hub</title>
  </head>
  <body>
    <h1>Hello, Sideproject Hub!</h1>
  </body>
</html>
FROM nginx:alpine
COPY index.html /usr/share/nginx/html/index.html
CMD ["nginx", "-g", "daemon off;"]
- The image that has been built locally has to be tagged and then pushed to move it into the registry. Tagging defines the registry, the repository and the tag for the image.
docker tag hellohub registry.<yourdomain.de>/test/hellohub:latest
docker push registry.<yourdomain.de>/test/hellohub:latest
- Whether the image was pushed successfully can be verified by checking the catalog (or opening it in the browser; the registry asks for the generated credentials again, with curl they can be passed via `-u`), or by pulling the image with `docker pull registry.<yourdomain.de>/test/hellohub:latest`
curl -X GET https://registry.<yourdomain.de>/v2/_catalog
{
  "repositories": [
    "test/hellohub"
  ]
}
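- The registry API can also list the tags of a specific repository, which is handy to confirm that the `:latest` tag arrived (credentials are required here as well)
curl -u <awesome-username> https://registry.<yourdomain.de>/v2/test/hellohub/tags/list
# expected response:
# {"name":"test/hellohub","tags":["latest"]}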
Run Test Image in Cluster from Private Registry
- The following steps assume there is a Kubernetes namespace named "test"; it can be created with
kubectl create namespace test
- Create a Kubernetes secret containing the credentials to access the private Docker registry
kubectl -n test create secret docker-registry ricosapps-private-registry-key --docker-server=registry.<yourdomain.de> --docker-username=<awesome-username> --docker-password=<awesome-password>
- Then create a Deployment that runs the test image from the private registry
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: test
  name: hellohub
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hellohub
  template:
    metadata:
      labels:
        app: hellohub
    spec:
      imagePullSecrets:
        - name: ricosapps-private-registry-key
      containers:
        - name: hellohub
          image: registry.<yourdomain.de>/test/hellohub:latest
          ports:
            - containerPort: 80
The YAML defines a Kubernetes Deployment named "hellohub" within the "test" namespace. It ensures the availability of a single replica of the specified Docker image from the created private registry, `registry.<yourdomain.de>/test/hellohub:latest`. It includes the image pull secret ("ricosapps-private-registry-key") for private registry authentication.
- The changes can be applied to the Kubernetes cluster like any other YAML file
kubectl apply -f <file>
- Use Kubernetes tools like k9s or kubectl to check whether the deployment has been applied successfully and the container could be pulled from the registry by the cluster
kubectl -n test get deployments
kubectl -n test describe deployment hellohub
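- As a final check, the running pod can be reached through a port-forward without exposing it publicly; if everything works, the "Hello, Sideproject Hub!" page should come back
kubectl -n test port-forward deployment/hellohub 8080:80
# in another terminal:
curl http://localhost:8080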
Addressing the Architecture Difference Challenge
- When I deployed the image, it could not be executed in my cluster, because I built it on my M1 MacBook and it therefore has a different architecture than my cluster requires for execution.
- This can be solved by using Docker multi-platform builds, for which the containerd image store must be activated. Here is a guide on how to do this when using Docker Desktop: https://docs.docker.com/desktop/containerd/#enable-the-containerd-image-store
- Use the docker buildx command to do a multi-platform build for the hellohub test project
docker buildx build --platform linux/amd64,linux/arm64 -t hellohub .
- If it failed before, this image now needs to be pushed to the registry and used for the deployment to see if it works (a combined build-and-push command is sketched below)
The M1 MacBook operates on an ARM64 architecture, while the cluster relies on an AMD64 architecture. To determine the architecture of your local machine or a server, you can use the `uname -m` or `arch` command, which outputs the machine architecture of the system. For example, "x86_64" indicates a 64-bit x86 architecture, while "arm64" represents a 64-bit ARM architecture.
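- Putting the pieces together, the multi-platform image can be built and pushed in one go; with buildx the `--push` flag uploads the resulting multi-arch manifest directly to the registry (tag and domain as used above, assuming the active builder supports multi-platform output)
docker buildx build --platform linux/amd64,linux/arm64 \
    -t registry.<yourdomain.de>/test/hellohub:latest --push .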
Links
Links to the resources I used and to information that helped me in the process.