docker-registry-proxy

TL;DR

A caching proxy for Docker; allows centralised management of registries and their authentication; caches images from any registry.

What?

Created as an evolution and simplification of docker-caching-proxy-multiple-private, it uses the HTTPS_PROXY mechanism and an injected CA root certificate instead of /etc/hosts hacks and --insecure-registry.

The main feature is Docker layer/image caching, including layers served from S3, Google Storage, etc. As a bonus, it allows centralized management of Docker registry credentials.

You configure the Docker clients (err... Kubernetes nodes?) once, and from then on all configuration happens on the proxy. For this to work, the proxy's root CA certificate must be added to each client's system trusted root certificates.

Usage

  • Run the proxy on a host close to the Docker clients
  • Expose port 3128 to the network
  • Map volume /docker_mirror_cache for up to 32 GB of cached images from all registries
  • Map volume /ca; the proxy stores its CA certificate there across restarts
  • Env REGISTRIES: space-separated list of registries to cache; no need to include Docker Hub, it's already there
  • Env AUTH_REGISTRIES: space-separated list of registry:username:password authentication triples. Registry hosts listed here should also appear in REGISTRIES.
docker run --rm --name docker_registry_proxy -it \
       -p 0.0.0.0:3128:3128 \
       -v $(pwd)/docker_mirror_cache:/docker_mirror_cache \
       -v $(pwd)/docker_mirror_certs:/ca \
       -e REGISTRIES="k8s.gcr.io gcr.io quay.io your.own.registry another.private.registry" \
       -e AUTH_REGISTRIES="your.own.registry:username:password another.private.registry:user:pass" \
       rpardini/docker-registry-proxy:latest
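The AUTH_REGISTRIES value packs one host:username:password triple per space-separated entry. A minimal sketch of how such a value splits in plain shell (for illustration only; this is not the proxy's actual parsing code):

```shell
# Split AUTH_REGISTRIES into host/user/password triples (illustrative only).
AUTH_REGISTRIES="your.own.registry:username:password another.private.registry:user:pass"
for entry in $AUTH_REGISTRIES; do
  host=${entry%%:*}       # everything before the first ':'
  creds=${entry#*:}       # the remaining "user:pass"
  user=${creds%%:*}
  pass=${creds#*:}
  echo "registry=$host user=$user"
done
```

Note that this simple format cannot express passwords containing ':' or spaces.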

Let's say you did this on host 192.168.66.72; you can then curl http://192.168.66.72:3128/ca.crt to get the proxy's CA certificate.

Configuring the Docker clients / Kubernetes nodes

On each Docker host that is to use the cache:

  • Configure Docker proxy pointing to the caching server
  • Add the caching server CA certificate to the list of system trusted roots.
  • Restart dockerd

Here's how to do it all at once, tested on Ubuntu Xenial (which is systemd-based):

# Add environment vars pointing Docker to use the proxy
mkdir -p /etc/systemd/system/docker.service.d
cat << EOD > /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://192.168.66.72:3128/"
Environment="HTTPS_PROXY=http://192.168.66.72:3128/"
EOD

# Get the CA certificate from the proxy and make it a trusted root.
curl http://192.168.66.72:3128/ca.crt > /usr/share/ca-certificates/docker_registry_proxy.crt
echo "docker_registry_proxy.crt" >> /etc/ca-certificates.conf
update-ca-certificates --fresh

# Reload systemd
systemctl daemon-reload

# Restart dockerd
systemctl restart docker.service
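If you configure many hosts, the drop-in step above can be wrapped in a small helper. The function name and the scratch output directory below are inventions for illustration; for real use, write to /etc/systemd/system/docker.service.d as shown above:

```shell
# Sketch: generate the systemd proxy drop-in for a given proxy URL.
# Writes to a scratch directory so you can inspect it before installing.
write_proxy_dropin() {
  proxy_url=$1
  out_dir=$2
  mkdir -p "$out_dir"
  cat > "$out_dir/http-proxy.conf" <<EOD
[Service]
Environment="HTTP_PROXY=$proxy_url"
Environment="HTTPS_PROXY=$proxy_url"
EOD
}

write_proxy_dropin "http://192.168.66.72:3128/" /tmp/docker.service.d
cat /tmp/docker.service.d/http-proxy.conf
```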

Testing

Clear dockerd of everything not currently running: docker system prune -a -f -- beware, this deletes all local images.

Then do, for example, docker pull k8s.gcr.io/kube-proxy-amd64:v1.10.4 and watch the logs on the caching proxy; they should list a lot of MISSes.

Then, clean again, and pull again. You should see HITs! Success.

Do the same for docker pull ubuntu and rejoice.

Test your own registry caching and authentication the same way; you no longer need docker login or .docker/config.json.

Gotchas

  • If you authenticate to a private registry and pull through the proxy, those images will be served to any client that can reach the proxy, even without authentication -- beware.
  • Repeat: this will make your private images very public if you're not careful.
  • Currently you cannot push images while using the proxy, which is a shame. PRs welcome.
  • Setting this up on Linux is relatively easy. On Mac and Windows the CA-certificate step will be quite different, but it should work in principle.

Why not use Docker's own registry, which has a mirror feature?

Yes, Docker offers Registry as a pull-through cache; unfortunately it only covers the Docker Hub case. It won't cache images from quay.io, k8s.gcr.io, gcr.io, or the like, including any private registries.

That means that your shiny new Kubernetes cluster is now a bandwidth hog, since every image will be pulled from the Internet on every Node it runs on, with no reuse.

This is due to the way the Docker "client" implements --registry-mirror: it only ever contacts mirrors for images with no repository reference (i.e., from Docker Hub). When a repository host is specified, dockerd goes directly there, via HTTPS (or HTTP if the host is in the --insecure-registry list), completely ignoring the configured mirror.
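For contrast, this is what the stock mirror mechanism looks like (mirror.example.com is a placeholder host); the registry-mirrors setting only ever affects pulls of Docker Hub images:

```shell
# The stock mirror mechanism: a registry-mirrors entry in daemon.json
# (written to a scratch path here; the real file is /etc/docker/daemon.json).
cat > /tmp/daemon.json <<'EOD'
{
  "registry-mirrors": ["https://mirror.example.com"]
}
EOD
# With this installed as /etc/docker/daemon.json:
#   docker pull ubuntu              -> dockerd may use the mirror
#   docker pull quay.io/some/image  -> dockerd goes straight to quay.io
```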

Docker itself should provide this.

Yeah. Docker Inc should do it. So should NPM, Inc. Wonder why they don't. 😼