completely reworked into an HTTPS_PROXY-based solution

- emit our own certificates
- configurable via ENVs
- generates config dynamically
pull/7/head
ricardop 2018-06-29 01:39:02 +02:00
parent ba4c66e8bc
commit 0abd4ca51a
No known key found for this signature in database
GPG Key ID: 3D38CA12A66C5D02
7 changed files with 365 additions and 133 deletions

.dockerignore

@ -3,3 +3,5 @@
.gitignore
LICENSE
README.md
docker_mirror_cache
docker_mirror_certs

.gitignore

@ -1 +1,3 @@
.idea
docker_mirror_cache
docker_mirror_certs

Dockerfile

@ -1,26 +1,39 @@
# We start from my nginx fork which includes the proxy-connect module from tEngine
# Source is available at https://github.com/rpardini/nginx-proxy-connect-stable-alpine
# It's equivalent to nginx:stable-alpine 1.14.0, with alpine 3.7
FROM rpardini/nginx-proxy-connect-stable-alpine:latest

# Add openssl, bash and ca-certificates, then clean the apk cache -- yeah, complain all you want.
RUN apk add --update openssl bash ca-certificates && rm -rf /var/cache/apk/*

# Create the cache directory and the CA directory
RUN mkdir -p /docker_mirror_cache /ca

# Expose the cache as a volume, so it can be kept external to the Docker image
VOLUME /docker_mirror_cache

# Expose /ca as a volume. Users are supposed to mount this, so as to preserve the CA across restarts.
# In fact it's required: if not, the container regenerates the CA on every start, and Docker clients
# will reject the certificate the second time the proxy is run.
VOLUME /ca

# Add our configuration
ADD nginx.conf /etc/nginx/nginx.conf

# Add our very hackish entrypoint and CA-building scripts, and make them executable
ADD entrypoint.sh /entrypoint.sh
ADD create_ca_cert.sh /create_ca_cert.sh
RUN chmod +x /create_ca_cert.sh /entrypoint.sh
# Clients should only use 3128, not anything else.
EXPOSE 3128
## Default envs.
# A space delimited list of registries we should proxy and cache; this is in addition to the central DockerHub.
ENV REGISTRIES="k8s.gcr.io gcr.io quay.io"
# A space delimited list of registry:user:password to inject authentication for
ENV AUTH_REGISTRIES="some.authenticated.registry:oneuser:onepassword another.registry:user:password"
# Should we verify upstream certificates? Defaults to true.
ENV VERIFY_SSL="true"
# Did you want a shell? Sorry. This only does one job; use exec /bin/bash if you wanna inspect stuff
ENTRYPOINT ["/entrypoint.sh"]

README.md

@ -1,120 +1,100 @@
## docker-registry-proxy

### TL;DR

A caching proxy for Docker; allows centralized management of registries and their authentication; caches images from *any* registry.

### What?

Created as an evolution and simplification of [docker-caching-proxy-multiple-private](https://github.com/rpardini/docker-caching-proxy-multiple-private),
using the `HTTPS_PROXY` mechanism and injected CA root certificates instead of `/etc/hosts` hacks and `--insecure-registry`.

It is highly dependent on Docker-client behavior, and was only tested against Docker 17.03 on Linux (that's the version recommended by Kubernetes 1.10).

As a bonus it allows for centralized management of Docker registry credentials.

You configure the Docker clients (_err... Kubernetes Nodes?_) once, and then all configuration is done on the proxy --
for this to work it requires inserting a root CA certificate into the system's trusted root certs.
#### Why not use Docker's own registry, which has a mirror feature?

Yes, Docker offers [Registry as a pull through cache](https://docs.docker.com/registry/recipes/mirror/); *unfortunately*,
it only covers the DockerHub case. It won't cache images from `quay.io`, `k8s.gcr.io`, `gcr.io`, or any such, including any private registries.

That means that your shiny new Kubernetes cluster is now a bandwidth hog, since every image will be pulled from the
Internet on every Node it runs on, with no reuse.

This is due to the way the Docker "client" implements `--registry-mirror`: it only ever contacts mirrors for images
with no repository reference (eg, from DockerHub).
When a repository is specified, `dockerd` goes directly there, via HTTPS (and also via HTTP if included in an
`--insecure-registry` list), thus completely ignoring the configured mirror.

#### Docker itself should provide this.

Yeah. Docker Inc should do it. So should NPM, Inc. Wonder why they don't. 😼
### Usage

- Run the proxy on a dedicated machine.
- Expose port 3128.
- Map volume `/docker_mirror_cache` for up to 32gb of cached images from all registries.
- Map volume `/ca`; the proxy will store its CA certificate there across restarts.
- Env `REGISTRIES`: space-separated list of registries to cache; no need to include DockerHub, it's already there.
- Env `AUTH_REGISTRIES`: space-separated list of `registry:username:password` authentication info. Registry hosts here should be listed in `REGISTRIES` as well.
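Each `AUTH_REGISTRIES` entry is split into host, user and password, and the credentials end up injected as an HTTP Basic header for requests to that host. A minimal bash sketch of that conversion (illustrative only; the real work happens inside the container's entrypoint):

```bash
#!/bin/bash
# Hypothetical entry, in the same registry:user:password shape as AUTH_REGISTRIES
entry="your.own.registry:username:password"

# Split on ':' into host, user and pass using parameter expansion
host="${entry%%:*}"
rest="${entry#*:}"
user="${rest%%:*}"
pass="${rest#*:}"

# The proxy sends these as "Authorization: Basic <base64 of user:pass>"
b64=$(printf '%s:%s' "$user" "$pass" | base64)
echo "$host -> Basic $b64"
```

Note that passwords containing `:` would break this split; keep that in mind when choosing credentials for the proxy.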
```bash
docker run --rm --name docker_caching_proxy -it \
       -p 0.0.0.0:3128:3128 \
       -v $(pwd)/docker_mirror_cache:/docker_mirror_cache \
       -v $(pwd)/docker_mirror_certs:/ca \
       -e REGISTRIES="k8s.gcr.io gcr.io quay.io your.own.registry another.private.registry" \
       -e AUTH_REGISTRIES="your.own.registry:username:password another.private.registry:user:pass" \
       rpardini/docker-caching-proxy:latest
```
Let's say you did this on host `192.168.66.72`; you can then `curl http://192.168.66.72:3128/ca.crt` to get the proxy's CA certificate.

#### Configuring the Docker clients / Kubernetes nodes

On each Docker host that is to use the cache:

- [Configure the Docker proxy](https://docs.docker.com/network/proxy/) to point at the caching server.
- Add the caching server's CA certificate to the list of system trusted roots.
- Restart `dockerd`.

Do it all at once, tested on Ubuntu Xenial:
```bash
# Add environment vars pointing Docker to use the proxy
cat << EOD > /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://192.168.66.72:3128/"
Environment="HTTPS_PROXY=http://192.168.66.72:3128/"
EOD

# Get the CA certificate from the proxy and make it a trusted root.
curl http://192.168.66.72:3128/ca.crt > /usr/share/ca-certificates/docker_caching_proxy.crt
echo docker_caching_proxy.crt >> /etc/ca-certificates.conf
update-ca-certificates --fresh

# Reload systemd and restart dockerd
systemctl daemon-reload
systemctl restart docker.service
```
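Both drop-in lines carry the same proxy URL; if you need its host and port separately elsewhere (for firewall rules, say), plain bash parameter expansion is enough. A hypothetical sketch:

```bash
#!/bin/bash
# Same URL shape as used in the systemd drop-in above
proxy_url="http://192.168.66.72:3128/"

# Strip the scheme, then anything after the first slash
hostport="${proxy_url#*://}"
hostport="${hostport%%/*}"

# Split host and port on the colon
host="${hostport%%:*}"
port="${hostport##*:}"
echo "$host $port"   # 192.168.66.72 3128
```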
### Testing

Clear `dockerd` of everything not currently running: `docker system prune -a -f` (*beware*: this prunes everything not currently running).

Then do, for example, `docker pull k8s.gcr.io/kube-proxy-amd64:v1.10.4` and watch the logs on the caching proxy; it should list a lot of MISSes.
Then clean again, and pull again. You should see HITs! Success.

Do the same for `docker pull ubuntu` and rejoice.

Test your own registry's caching and authentication the same way; you don't need `docker login` or `.docker/config.json` anymore.
### Gotchas

Of course, this has a lot of limitations:

- Any HTTP/HTTPS request to the domains of the registries will be proxied, not only Docker calls. *beware*
- If you authenticate to a private registry and pull through the proxy, those images will be served to any client that can reach the proxy, even without authentication. *beware*
- Repeat: this will make your private images very public if you're not careful.

create_ca_cert.sh (new file)

@ -0,0 +1,118 @@
#! /bin/bash
set -Eeuo pipefail
declare -i DEBUG=0
logInfo() {
echo "INFO: $@"
}
PROJ_NAME=DockerMirrorBox
logInfo "Will create certificate with names $ALLDOMAINS"
CADATE=$(date "+%Y.%m.%d %H:%M")
CAID="$(hostname -f) ${CADATE}"
CN_CA="${PROJ_NAME} CA Root ${CAID}"
CN_IA="${PROJ_NAME} Intermediate IA ${CAID}"
CN_WEB="${PROJ_NAME} Web Cert ${CAID}"
CN_CA=${CN_CA:0:64}
CN_IA=${CN_IA:0:64}
CN_WEB=${CN_WEB:0:64}
mkdir -p /certs /ca
cd /ca
CA_KEY_FILE=/ca/ca.key
CA_CRT_FILE=/ca/ca.crt
CA_SRL_FILE=/ca/ca.srl
if [ -f "$CA_CRT_FILE" ] ; then
logInfo "CA already exists. Good. We'll reuse it."
else
logInfo "No CA was found. Generating one."
logInfo "*** Please *** make sure to mount /ca as a volume -- if not, every time this container starts it will regenerate the CA and nothing will work."
openssl genrsa -des3 -passout pass:foobar -out ${CA_KEY_FILE} 4096
logInfo "generate CA cert with key and self sign it: ${CAID}"
openssl req -new -x509 -days 1300 -sha256 -key ${CA_KEY_FILE} -out ${CA_CRT_FILE} -passin pass:foobar -subj "/C=NL/ST=Noord Holland/L=Amsterdam/O=ME/OU=IT/CN=${CN_CA}" -extensions IA -config <(
cat <<-EOF
[req]
distinguished_name = dn
[dn]
[IA]
basicConstraints = critical,CA:TRUE
keyUsage = critical, digitalSignature, cRLSign, keyCertSign
subjectKeyIdentifier = hash
EOF
)
[[ ${DEBUG} -gt 0 ]] && logInfo "show the CA cert details"
[[ ${DEBUG} -gt 0 ]] && openssl x509 -noout -text -in ${CA_CRT_FILE}
echo 01 > ${CA_SRL_FILE}
fi
cd /certs
logInfo "Generate IA key"
openssl genrsa -des3 -passout pass:foobar -out ia.key 4096 &> /dev/null
logInfo "Create a signing request for the IA: ${CAID}"
openssl req -new -key ia.key -out ia.csr -passin pass:foobar -subj "/C=NL/ST=Noord Holland/L=Amsterdam/O=ME/OU=IT/CN=${CN_IA}" -reqexts IA -config <(
cat <<-EOF
[req]
distinguished_name = dn
[dn]
[IA]
basicConstraints = critical,CA:TRUE,pathlen:0
keyUsage = critical, digitalSignature, cRLSign, keyCertSign
subjectKeyIdentifier = hash
EOF
)
[[ ${DEBUG} -gt 0 ]] && logInfo "Show the signing request, to make sure extensions are there"
[[ ${DEBUG} -gt 0 ]] && openssl req -in ia.csr -noout -text
logInfo "Sign the IA request with the CA cert and key, producing the IA cert"
openssl x509 -req -days 730 -in ia.csr -CA ${CA_CRT_FILE} -CAkey ${CA_KEY_FILE} -out ia.crt -passin pass:foobar -extensions IA -extfile <(
cat <<-EOF
[req]
distinguished_name = dn
[dn]
[IA]
basicConstraints = critical,CA:TRUE,pathlen:0
keyUsage = critical, digitalSignature, cRLSign, keyCertSign
subjectKeyIdentifier = hash
EOF
) &> /dev/null
[[ ${DEBUG} -gt 0 ]] && logInfo "show the IA cert details"
[[ ${DEBUG} -gt 0 ]] && openssl x509 -noout -text -in ia.crt
logInfo "Initialize the serial number for signed certificates"
echo 01 > ia.srl
logInfo "Create the web certificate key (and strip its passphrase)"
openssl genrsa -des3 -passout pass:foobar -out web.orig.key 2048 &> /dev/null
openssl rsa -passin pass:foobar -in web.orig.key -out web.key &> /dev/null
logInfo "Create the signing request, using extensions"
openssl req -new -key web.key -sha256 -out web.csr -passin pass:foobar -subj "/C=NL/ST=Noord Holland/L=Amsterdam/O=ME/OU=IT/CN=${CN_WEB}" -reqexts SAN -config <(cat <(printf "[req]\ndistinguished_name = dn\n[dn]\n[SAN]\nsubjectAltName=${ALLDOMAINS}"))
[[ ${DEBUG} -gt 0 ]] && logInfo "Show the signing request, to make sure extensions are there"
[[ ${DEBUG} -gt 0 ]] && openssl req -in web.csr -noout -text
logInfo "Sign the request, using the intermediate cert and key"
openssl x509 -req -days 365 -in web.csr -CA ia.crt -CAkey ia.key -out web.crt -passin pass:foobar -extensions SAN -extfile <(cat <(printf "[req]\ndistinguished_name = dn\n[dn]\n[SAN]\nsubjectAltName=${ALLDOMAINS}")) &> /dev/null
[[ ${DEBUG} -gt 0 ]] && logInfo "Show the final cert details"
[[ ${DEBUG} -gt 0 ]] && openssl x509 -noout -text -in web.crt
logInfo "Concatenating fullchain.pem..."
cat web.crt ia.crt ${CA_CRT_FILE} > fullchain.pem
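A detail worth noting in the script above: X.509 common names are limited to 64 characters, which is why each `CN_*` value is truncated with bash substring expansion before use. For example (hypothetical hostname and date):

```bash
#!/bin/bash
# A CN built like the script does (project + hostname + date) can exceed 64 chars
CN_CA="DockerMirrorBox CA Root very-long-hostname.example.com 2018.06.29 01:39"

# Truncate to the X.509 CN limit, exactly as create_ca_cert.sh does
CN_CA=${CN_CA:0:64}
echo "${#CN_CA}"   # 64
```

Without the truncation, `openssl req` would refuse the subject with a "string too long" error.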

entrypoint.sh (new file)

@ -0,0 +1,56 @@
#! /bin/bash
set -Eeuo pipefail
trap "echo TRAPed signal" HUP INT QUIT TERM
# The list of SAN (Subject Alternative Names) for which we will create a TLS certificate.
ALLDOMAINS=""
# Interceptions map, which are the hosts that will be handled by the caching part.
# It should list exactly the same hosts we have created certificates for -- if not, Docker will get TLS errors, of course.
echo -n "" > /etc/nginx/docker.intercept.map
# Some hosts/registries are always needed, but others can be configured in env var REGISTRIES
for ONEREGISTRYIN in docker.caching.proxy.internal registry-1.docker.io auth.docker.io ${REGISTRIES}; do
ONEREGISTRY=$(echo ${ONEREGISTRYIN} | xargs) # Remove whitespace
echo "Adding certificate for registry: $ONEREGISTRY"
ALLDOMAINS="${ALLDOMAINS},DNS:${ONEREGISTRY}"
echo "${ONEREGISTRY} 127.0.0.1:443;" >> /etc/nginx/docker.intercept.map
done
# Clean the list and generate certificates.
export ALLDOMAINS=${ALLDOMAINS:1} # remove the first comma and export
/create_ca_cert.sh # This uses ALLDOMAINS to generate the certificates.
# Now handle the auth part.
echo -n "" > /etc/nginx/docker.auth.map
for ONEREGISTRYIN in ${AUTH_REGISTRIES}; do
ONEREGISTRY=$(echo -n ${ONEREGISTRYIN} | xargs) # Remove whitespace
AUTH_HOST=$(echo -n ${ONEREGISTRY} | cut -d ":" -f 1 | xargs)
AUTH_USER=$(echo -n ${ONEREGISTRY} | cut -d ":" -f 2 | xargs)
AUTH_PASS=$(echo -n ${ONEREGISTRY} | cut -d ":" -f 3 | xargs)
AUTH_BASE64=$(echo -n ${AUTH_USER}:${AUTH_PASS} | base64 | xargs)
echo "Adding Auth for registry '${AUTH_HOST}' with user '${AUTH_USER}'."
echo "\"${AUTH_HOST}\" \"${AUTH_BASE64}\";" >> /etc/nginx/docker.auth.map
done
echo "" > /etc/nginx/docker.verify.ssl.conf
if [ "a$VERIFY_SSL" == "atrue" ]; then
cat << EOD > /etc/nginx/docker.verify.ssl.conf
# We actually wanna be secure and avoid mitm attacks.
# Fitting, since this whole thing is a mitm...
# We'll accept any cert signed by a CA trusted by Mozilla (ca-certificates in alpine)
proxy_ssl_verify on;
proxy_ssl_trusted_certificate /etc/ssl/certs/ca-certificates.crt;
proxy_ssl_verify_depth 2;
EOD
echo "Upstream SSL certificate verification enabled."
fi
echo "Testing nginx config..."
nginx -t
echo "Starting nginx! Have a nice day."
nginx -g "daemon off;"
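The SAN-list construction at the top of the entrypoint can be exercised standalone; with `REGISTRIES="k8s.gcr.io quay.io"`, the three fixed hosts plus the configured ones come out as one comma-separated `DNS:` list:

```bash
#!/bin/bash
REGISTRIES="k8s.gcr.io quay.io"   # example value of the env var

ALLDOMAINS=""
for ONEREGISTRY in docker.caching.proxy.internal registry-1.docker.io auth.docker.io ${REGISTRIES}; do
    # Each host becomes a SAN entry for the generated certificate...
    ALLDOMAINS="${ALLDOMAINS},DNS:${ONEREGISTRY}"
    # ...and an nginx map line sending its CONNECT traffic to the local caching layer
    echo "${ONEREGISTRY} 127.0.0.1:443;"
done

# Strip the leading comma, as the entrypoint does before calling create_ca_cert.sh
echo "${ALLDOMAINS:1}"
```

The last line prints `DNS:docker.caching.proxy.internal,DNS:registry-1.docker.io,DNS:auth.docker.io,DNS:k8s.gcr.io,DNS:quay.io`, which is what ends up in the certificate's `subjectAltName`.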

nginx.conf

@ -1,7 +1,7 @@
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
@ -13,14 +13,20 @@ http {
default_type application/octet-stream;
# Use a debug-oriented logging format.
log_format debugging '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent '
'"HOST: $host" "UPSTREAM: $upstream_addr" '
'"UPSTREAM-STATUS: $upstream_status" '
'"SSL-PROTO: $ssl_protocol" '
'"CONNECT-HOST: $connect_host" "CONNECT-PORT: $connect_port" "CONNECT-ADDR: $connect_addr" '
'"PROXY-HOST: $proxy_host" "UPSTREAM-REDIRECT: $upstream_http_location" "CACHE-STATUS: $upstream_cache_status" '
'"AUTH: $http_authorization" ';
log_format tweaked '$upstream_cache_status [$time_local] "$uri" '
'$status $body_bytes_sent '
'"HOST:$host" '
'"PROXY-HOST:$proxy_host" "UPSTREAM:$upstream_addr" ';
access_log /var/log/nginx/access.log tweaked;
keepalive_timeout 300;
gzip off;
@ -28,21 +34,35 @@ http {
# Set to 32gb which should be enough
proxy_cache_path /docker_mirror_cache levels=1:2 max_size=32g inactive=60d keys_zone=cache:10m use_temp_path=off;
# Just in case you want to rewrite some hosts. Default maps directly.
map $host $targetHost {
hostnames;
default $host;
}
# A map to enable authentication to some specific docker registries.
# This is auto-generated by entrypoint.sh based on environment variables.
map $host $dockerAuth {
hostnames;
include /etc/nginx/docker.auth.map;
default "";
}
# Map to decide which hosts get directed to the caching portion.
# This is automatically generated from the list of cached registries, plus a few fixed hosts
# By default, we don't intercept, allowing free flow of non-registry traffic
map $connect_host $interceptedHost {
hostnames;
include /etc/nginx/docker.intercept.map;
default "$connect_host:443";
}
map $dockerAuth $finalAuth {
"" "$http_authorization"; # if empty, keep the original passed-in from the client
default "Basic $dockerAuth"; # if not empty, add the Basic preamble to the auth
}
# These maps parse the original Host and URI from a /forcecache redirect.
map $request_uri $realHost {
~/forcecacheinsecure/([^:/]+)/originalwas(/.+) $1;
@ -56,15 +76,48 @@ http {
default "DID_NOT_MATCH_PATH";
}
# The proxy director layer, listens on 3128
server {
listen 3128;
server_name _;
# don't log the CONNECT proxy.
access_log off;
proxy_connect;
proxy_connect_address $interceptedHost;
proxy_max_temp_file_size 0;
# We need to resolve the real names of our proxied servers.
resolver 8.8.8.8 4.2.2.2 ipv6=off; # Avoid ipv6 addresses for now
# forward proxy for non-CONNECT request
location / {
return 403 "The docker caching proxy is working!";
}
location /ca.crt {
alias /ca/ca.crt;
}
# @TODO: add a dynamic root path that generates instructions for usage on docker clients
}
# The caching layer
server {
# Listen on both 80 and 443, for all hostnames.
listen 80 default_server;
listen 443 ssl default_server;
server_name _;
# Do some tweaked logging.
access_log /var/log/nginx/access.log tweaked;
# Use the generated certificates, they contain names for all the proxied registries.
ssl_certificate /certs/fullchain.pem;
ssl_certificate_key /certs/web.key;
# We need to resolve the real names of our proxied servers.
resolver 8.8.8.8 4.2.2.2 ipv6=off; # Avoid ipv6 addresses for now
@ -74,13 +127,13 @@ http {
# Block POST/PUT/DELETE. Don't use this proxy for pushing.
if ($request_method = POST) {
return 405 "POST method is not allowed";
}
if ($request_method = PUT) {
return 405 "PUT method is not allowed";
}
if ($request_method = DELETE) {
return 405 "DELETE method is not allowed";
}
proxy_read_timeout 900;
@ -102,10 +155,21 @@ http {
proxy_hide_header Set-Cookie;
proxy_ignore_headers X-Accel-Expires Expires Cache-Control Set-Cookie;
# Add the authentication info, if the map matched the target domain.
proxy_set_header Authorization $finalAuth;
# This comes from a include file generated by the entrypoint.
include /etc/nginx/docker.verify.ssl.conf;
# Some debugging info
# add_header X-Docker-Caching-Proxy-Real-Host $realHost;
# add_header X-Docker-Caching-Proxy-Real-Path $realPath;
# add_header X-Docker-Caching-Proxy-Auth $finalAuth;
# Block API v1. We don't know how to handle these.
# Docker-client should start with v2 and fall back to v1 if something fails, for example, if authentication failed to a protected v2 resource.
location /v1 {
return 405 "API v1 is invalid -- you probably need auth to get a v2 endpoint working against $host -- check the docs";
}
# don't cache mutable entity /v2/<name>/manifests/<reference> (unless the reference is a digest)
@ -123,6 +187,13 @@ http {
proxy_pass https://$targetHost;
}
# force cache of the first hit which is always /v2/ - even for 401 unauthorized.
location = /v2/ {
proxy_pass https://$targetHost;
proxy_cache cache;
proxy_cache_valid 200 301 302 307 401 60d;
}
# cache everything else
location / {
proxy_pass https://$targetHost;
@ -134,8 +205,8 @@ http {
# We hack into the response, extracting the host and URI parts, injecting them into a URL that points back to us.
# That gives us a chance to intercept and cache those, which are the actual multi-megabyte blobs we originally wanted to cache.
# We do it twice, once for http and once for https.
proxy_redirect ~^https://([^:/]+)(/.+)$ https://docker.caching.proxy.internal/forcecachesecure/$1/originalwas$2;
proxy_redirect ~^http://([^:/]+)(/.+)$ http://docker.caching.proxy.internal/forcecacheinsecure/$1/originalwas$2;
}
# handling for the redirect case explained above, with https.
@ -146,11 +217,6 @@ http {
# Change the cache key, so that we can cache signed S3 requests and such. Only host and path are considered.
proxy_cache_key $proxy_host$uri;
}
# handling for the redirect case explained above, with http.
@ -161,11 +227,6 @@ http {
# Change the cache key, so that we can cache signed S3 requests and such. Only host and path are considered.
proxy_cache_key $proxy_host$uri;
}
}
}
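The two `proxy_redirect` rules in the caching server are plain regex rewrites of the upstream's `Location` header. The same transformation can be sketched in bash to see what a storage-service redirect turns into (made-up URL for illustration):

```bash
#!/bin/bash
# A redirect a registry might issue towards its blob storage (hypothetical URL)
url="https://storage.googleapis.com/some/blob?sig=abc"

# Same pattern as the proxy_redirect directive for the https case
if [[ "$url" =~ ^https://([^:/]+)(/.+)$ ]]; then
    rewritten="https://docker.caching.proxy.internal/forcecachesecure/${BASH_REMATCH[1]}/originalwas${BASH_REMATCH[2]}"
    echo "$rewritten"
fi
```

The rewritten URL points back at the proxy itself; the `/forcecachesecure` location then extracts `$realHost`/`$realPath` with the maps above and caches by `$proxy_host$uri`, so a changing signed query string no longer busts the cache.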