Compare commits


4 Commits

Author SHA1 Message Date
Ricardo Pardini a70cb6852c add -e ENABLE_MANIFEST_CACHE=true to one the steps in test workflow. 2020-10-30 18:39:45 +01:00
Ricardo Pardini 68325a2945 add -e ENABLE_MANIFEST_CACHE=true to examples, some wording changes 2020-10-30 18:36:01 +01:00
Ricardo Pardini 917fa0f179 add manifest caching/anti-ratelimit usage note to README 2020-10-30 18:06:33 +01:00
Ricardo Pardini 3bfd778757 3-tier implementation of manifest caching; refactor config with includes, and generate from ENVs in entrypoint.sh
- disabled by default; enable with -e ENABLE_MANIFEST_CACHE=true
- default times and regexes are a wild guess, make sure to tune for your use case.
2020-10-30 16:50:54 +01:00
16 changed files with 65 additions and 690 deletions

View File

@@ -36,8 +36,8 @@ jobs:
 uses: docker/login-action@v1
 with:
   registry: ghcr.io
-  username: ${{ github.repository_owner }} # github username or org
-  password: ${{ secrets.GITHUB_TOKEN }} # github actions builtin token. repo has to have pkg access.
+  username: ${{ secrets.DOCKER_GITHUB_USERNAME }}
+  password: ${{ secrets.DOCKER_GITHUB_PAT }}
 # the arm64 is of course much slower due to qemu, so build and push amd64 **first**
 # due to the way manifests work, the gap between this and the complete push below

View File

@@ -49,8 +49,8 @@ jobs:
 uses: docker/login-action@v1
 with:
   registry: ghcr.io
-  username: ${{ github.repository_owner }} # github username or org
-  password: ${{ secrets.GITHUB_TOKEN }} # github actions builtin token. repo has to have pkg access.
+  username: ${{ secrets.DOCKER_GITHUB_USERNAME }}
+  password: ${{ secrets.DOCKER_GITHUB_PAT }}
 # the arm64 is of course much slower due to qemu, so build and push amd64 **first**
 # due to the way manifests work, the gap between this and the complete push below
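
On the `+` side of these two workflow hunks, the ghcr.io login switches from the built-in `GITHUB_TOKEN` to repository secrets, so those secrets must exist before the workflows can run. A minimal sketch with the GitHub CLI (the secret names come from the hunks above; the token scope is an assumption):

```bash
gh secret set DOCKER_GITHUB_USERNAME --body "your-github-username"
gh secret set DOCKER_GITHUB_PAT --body "<a personal access token that can push packages>"
```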

.gitignore (vendored): 4 lines changed

@@ -1,4 +1,4 @@
 .idea
 *.iml
-**/docker_mirror_cache
-**/docker_mirror_certs
+docker_mirror_cache
+docker_mirror_certs

View File

@@ -1,66 +0,0 @@
# Configure Docker Desktop on Windows to use the proxy and trust its certificate

1. Let's say you set up the proxy on host `192.168.66.72`. Get the certificate using a browser (go to <http://192.168.66.72:3128/ca.crt>) and save it as a file (e.g., to `d:\ca.crt`)
1. Add the certificate to Windows:
   1. Double-click the certificate
   1. Choose _Install certificate..._, then click _Next_
   1. Choose _Current user_, then click _Next_
   1. Select the option _Place all certificates in the following store_, click _Browse_, and select _Trusted Root Certification Authorities_
   1. Proceed with OK and confirm to install the certificate

   If you are not using the WSL2 backend for Docker, restart Docker Desktop and skip the next step.
1. If you are using WSL2 for Docker, you need to add the certificate to WSL too:
   1. Open a terminal
   1. Check the name of the WSL distribution:
      ```
      PS C:\> wsl --list
      Windows Subsystem for Linux Distributions:
      docker-desktop (Default)
      docker-desktop-data
      ```
      The distribution we are looking for is _docker-desktop_. If you installed another distribution, such as Ubuntu, and configured Docker to use it, proceed with that distribution instead.
   1. Get a shell into WSL
      ```
      PS C:\> wsl --distribution docker-desktop
      XXXYYYZZZ:/tmp/docker-desktop-root/mnt/host/c#
      ```
   1. Copy the certificate into WSL and import it.
      Note: the directory and the commands below are for the _docker-desktop_ WSL distribution. On other systems you might need to tweak the commands a little, but they seem to be the same for [Ubuntu](https://www.pmichaels.net/2020/12/29/add-certificate-into-wsl/) and [Debian](https://github.com/microsoft/WSL/issues/3161#issue-320777324) as well.
      ```
      XXXYYYZZZ:/tmp/docker-desktop-root/mnt/host/c# cp /mnt/host/d/ca.crt /usr/local/share/ca-certificates/
      XXXYYYZZZ:/tmp/docker-desktop-root/mnt/host/c# update-ca-certificates
      WARNING: ca-certificates.crt does not contain exactly one certificate or CRL: skipping
      ```
      Don't mind the warning; the operation still succeeded.
   1. We are done with WSL, you can `exit` this shell
1. Configure the proxy in Docker Desktop:
   1. Open Docker Desktop settings
   1. Go to _Resources/Proxies_
   1. Enable the proxy and set `http://192.168.66.72:3128` as both the HTTP and HTTPS URL.
1. Done. Verify that pulling works:
   ```
   # execute this in a Windows shell, not in WSL
   docker pull hello-world
   ```
   You can check the logs of the proxy to confirm that it was used; see the log-tailing sketch right after this list.
   If pulling does not work and complains about not trusting the certificate, then Docker and/or the WSL distribution might need a restart. You might try restarting Docker, or you can restart Windows too to force WSL to restart.
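
To confirm that pulls really go through the proxy, tailing the proxy's logs while pulling is usually enough; a sketch, assuming the proxy container was started with the name used in the README examples:

```bash
# On the host running the proxy (the container name is an assumption from the README examples):
docker logs -f docker_registry_proxy
# Then, from the Windows client, pull an image and watch the request show up in the log output.
```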

View File

@@ -1,74 +0,0 @@
# Attention: don't use Docker's own GUI to set the proxy!
- See https://github.com/docker/for-mac/issues/2467
- In `Docker > Preferences`, in `Resources > Proxies`, make sure you're NOT using manual proxies
- Use the hack below to set the environment var directly in LinuxKit
- The issue is that setting it in the GUI affects containers too (!!!), and we don't want that in this scenario
- If you actually need an upstream proxy (for company proxy etc) this will NOT work.
# Using a Docker Desktop for Mac as a client for the proxy
First, know this is a MiTM, and could break with new Docker Desktop for Mac releases or during resets/reinstalls/upgrades.
These instructions tested on Mac OS Catalina, and:
- Docker Desktop for Mac `2.4.2.0` (Edge) (which provides Docker `20.10.0-beta1`)
- Docker Desktop for Mac `2.5.0.0` (Stable) (which provides Docker `19.03`)
This assumes you have `docker-registry-proxy` running _somewhere else_, eg, on a different machine on your local network.
See the main [README.md](README.md) for instructions. (If you're trying to run both proxy and client on the same machine, see below).
We'll inject the CA certificates and the HTTPS_PROXY env into the Docker install inside the HyperKit VM running LinuxKit that is used by Docker Desktop for Mac.
To do that, we use a privileged container. `justincormack/nsenter1` does the job nicely.
First things first:
### 1) Factory Reset Docker Desktop for Mac...
... or make sure it's pristine (just installed).
- Go into Troubleshoot > "Reset to Factory defaults"
- it will take a while to reset/restart everything and require your password.
### 2) Inject config into Docker's VM
For these examples I will assume it is successfully running on `http://192.168.1.2:3128/` --
change the `export DRP_PROXY` as appropriate. Do not include slashes.
Run these commands in your Mac terminal.
```bash
set -e
export DRP_PROXY="192.168.66.100:3129" # Format IP:port, change this
wget -O - "http://${DRP_PROXY}/" # Make sure you can reach the proxy
# Inject the CA certificate
docker run -it --privileged --pid=host justincormack/nsenter1 \
/bin/bash -c "wget -O - http://$DRP_PROXY/ca.crt \
| tee -a /containers/services/docker/lower/etc/ssl/certs/ca-certificates.crt"
# Preserve original config.
docker run -it --privileged --pid=host justincormack/nsenter1 /bin/bash -c "cp /containers/services/docker/config.json /containers/services/docker/config.json.orig"
# Inject the HTTPS_PROXY environment variable. I dare you to find a better way.
docker run -it --privileged --pid=host justincormack/nsenter1 /bin/bash -c "sed -ibeforedockerproxy -e 's/\"PATH=/\"HTTPS_PROXY=http:\/\/$DRP_PROXY\/\",\"PATH=/' /containers/services/docker/config.json"
```
### 3) Restart, test.
- Restart Docker. (Quit & Open again, or just go into Preferences and give it more RAM, then Restart.)
- Try a `docker pull` now. It should be using the proxy (watch the logs on the proxy server).
- Test that no crazy proxy has been set: both `docker run -it curlimages/curl:latest http://ifconfig.me` and `docker run -it curlimages/curl:latest https://ifconfig.me` should work.
- Important: **push**es done with this configured will either not work, or use the auth you configured on the proxy, if any. Beware, and report back.
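
Since step 2 keeps a pristine copy of `config.json`, undoing the `HTTPS_PROXY` injection later is just the reverse copy; a sketch, assuming the `.orig` backup created above still exists inside the LinuxKit VM:

```bash
# Restore the original Docker VM config, then restart Docker Desktop.
docker run -it --privileged --pid=host justincormack/nsenter1 \
  /bin/bash -c "cp /containers/services/docker/config.json.orig /containers/services/docker/config.json"
```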
# Using Docker Desktop for Mac to both host the proxy server and use it as a client
@TODO: This has a bunch of chicken-and-egg issues.
You need to pre-pull the proxy itself and `justincormack/nsenter1`.
Follow the instructions above, but pre-pull after the Factory Reset.
Do NOT use 127.0.0.1, instead use your machine's local LAN IP address.
Make sure to bring the proxy up after applying/restarting the Docker Engine.

View File

@@ -1,7 +1,7 @@
 # We start from my nginx fork which includes the proxy-connect module from tEngine
 # Source is available at https://github.com/rpardini/nginx-proxy-connect-stable-alpine
 # This is already multi-arch!
-ARG BASE_IMAGE="docker.io/rpardini/nginx-proxy-connect-stable-alpine:nginx-1.20.1-alpine-3.12.7"
+ARG BASE_IMAGE="rpardini/nginx-proxy-connect-stable-alpine:nginx-1.18.0-alpine-3.12.1"
 # Could be "-debug"
 ARG BASE_IMAGE_SUFFIX=""
 FROM ${BASE_IMAGE}${BASE_IMAGE_SUFFIX}
@@ -19,7 +19,7 @@ ENV DO_DEBUG_BUILD="$DEBUG_BUILD"
 # Build mitmproxy via pip. This is heavy, takes minutes to build and creates a 90mb+ layer. Oh well.
 RUN [[ "a$DO_DEBUG_BUILD" == "a1" ]] && { echo "Debug build ENABLED." \
     && apk add --no-cache --update su-exec git g++ libffi libffi-dev libstdc++ openssl-dev python3 python3-dev py3-pip py3-wheel py3-six py3-idna py3-certifi py3-setuptools \
-    && LDFLAGS=-L/lib pip install MarkupSafe==2.0.1 mitmproxy==5.2 \
+    && LDFLAGS=-L/lib pip install mitmproxy==5.2 \
     && apk del --purge git g++ libffi-dev openssl-dev python3-dev py3-pip py3-wheel \
     && rm -rf ~/.cache/pip \
     ; } || { echo "Debug build disabled." ; }
@@ -94,29 +94,5 @@ ENV MANIFEST_CACHE_SECONDARY_TIME="60d"
 # In the default config, :latest and other frequently-used tags will get this value.
 ENV MANIFEST_CACHE_DEFAULT_TIME="1h"
-# Should we allow actions different than pull, default to false.
-ENV ALLOW_PUSH="false"
-# If push is allowed, buffering requests can cause issues on slow upstreams.
-# If you have trouble pushing, set this to false first, then fix remaining timeouts.
-# Default is true to not change default behavior.
-ENV PROXY_REQUEST_BUFFERING="true"
-# Timeouts
-# ngx_http_core_module
-ENV SEND_TIMEOUT="60s"
-ENV CLIENT_BODY_TIMEOUT="60s"
-ENV CLIENT_HEADER_TIMEOUT="60s"
-ENV KEEPALIVE_TIMEOUT="300s"
-# ngx_http_proxy_module
-ENV PROXY_READ_TIMEOUT="60s"
-ENV PROXY_CONNECT_TIMEOUT="60s"
-ENV PROXY_SEND_TIMEOUT="60s"
-# ngx_http_proxy_connect_module - external module
-ENV PROXY_CONNECT_READ_TIMEOUT="60s"
-ENV PROXY_CONNECT_CONNECT_TIMEOUT="60s"
-ENV PROXY_CONNECT_SEND_TIMEOUT="60s"
-ENV DISABLE_IPV6="false"
 # Did you want a shell? Sorry, the entrypoint never returns, because it runs nginx itself. Use 'docker exec' if you need to mess around internally.
 ENTRYPOINT ["/entrypoint.sh"]

Makefile (new file, mode 100644): 21 lines added

@@ -0,0 +1,21 @@
clean:
	rm -rf docker_mirror_cache/*

build:
	docker build --tag docker-registry-proxy .

start:
	docker run --rm --name=docker-registry-proxy -it \
		-p 0.0.0.0:3128:3128 \
		-p 0.0.0.0:8081:8081 \
		-e DEBUG=true \
		-v $(dir $(abspath $(firstword $(MAKEFILE_LIST))))/docker_mirror_cache:/docker_mirror_cache \
		-v $(dir $(abspath $(firstword $(MAKEFILE_LIST))))/docker_mirror_certs:/ca \
		docker-registry-proxy

stop:
	docker stop docker-registry-proxy

test: build start

.INTERMEDIATE: clean stop
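
A typical local workflow with the Makefile above:

```bash
make build   # build the local docker-registry-proxy image
make start   # run it in the foreground with DEBUG=true, caching into ./docker_mirror_cache
make stop    # stop the container (from another shell)
make clean   # wipe the local cache directory
```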

View File

@@ -1,14 +0,0 @@
## Build
```sh
buildah bud --layers -f Dockerfile \
--tag=rpjosh.de/docker-registry-proxy:0.0.0-dev
```
## Publish
```
podman login git.rpjosh.de
podman tag rpjosh.de/docker-registry-proxy:0.0.0-dev git.rpjosh.de/rpjosh/docker-registry-proxy:0.7.0
podman push git.rpjosh.de/rpjosh/docker-registry-proxy:0.7.0
```
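
For a quick local smoke test of the image built above, it can be run with the same flags the README uses for the published image (a sketch; the tag comes from the build command above):

```sh
podman run --rm --name docker_registry_proxy -it \
  -p 0.0.0.0:3128:3128 -e ENABLE_MANIFEST_CACHE=true \
  -v $(pwd)/docker_mirror_cache:/docker_mirror_cache \
  -v $(pwd)/docker_mirror_certs:/ca \
  rpjosh.de/docker-registry-proxy:0.0.0-dev
```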

README.md: 136 lines changed

@@ -63,8 +63,7 @@ for this to work it requires inserting a root CA certificate into system trusted
 ## master/:latest is unstable/beta
 - `:latest` and `:latest-debug` Docker tag is unstable, built from master, and amd64-only
-- Production/stable is `0.6.2`, see [0.6.2 tag on Github](https://github.com/rpardini/docker-registry-proxy/tree/0.6.2) - this image is multi-arch amd64/arm64
-- The previous version is `0.5.0`, without any manifest caching, see [0.5.0 tag on Github](https://github.com/rpardini/docker-registry-proxy/tree/0.5.0) - this image is multi-arch amd64/arm64
+- Production/stable is `0.5.0`, see [0.5.0 tag on Github](https://github.com/rpardini/docker-registry-proxy/tree/0.5.0) - this image is multi-arch amd64/arm64
 ## Also hosted on GitHub Container Registry (ghcr.io)
@@ -73,13 +72,12 @@ for this to work it requires inserting a root CA certificate into system trusted
 - Since 0.5.x, they both carry the same images
 - This can be useful if you're already hitting DockerHub's rate limits and can't pull the proxy from DockerHub
-## Usage (running the Proxy server)
+## Usage
 - Run the proxy on a host close (network-wise: high bandwidth, same-VPC, etc) to the Docker clients
 - Expose port 3128 to the network
 - Map volume `/docker_mirror_cache` for up to `CACHE_MAX_SIZE` (32gb by default) of cached images across all cached registries
 - Map volume `/ca`, the proxy will store the CA certificate here across restarts. **Important**: this is security sensitive.
-- Env `ALLOW_PUSH` : This bypasses the proxy when pushing, default to false - if kept to false, pushing will not work. For more info see this [commit](https://github.com/rpardini/docker-registry-proxy/commit/536f0fc8a078d03755f1ae8edc19a86fc4b37fcf).
 - Env `CACHE_MAX_SIZE` (default `32g`): set the max size to be used for caching local Docker image layers. Use [Nginx sizes](http://nginx.org/en/docs/syntax.html).
 - Env `ENABLE_MANIFEST_CACHE`, see the section on pull rate limiting.
 - Env `REGISTRIES`: space-separated list of registries to cache; no need to include DockerHub, it's already done internally.
@@ -87,23 +85,6 @@ for this to work it requires inserting a root CA certificate into system trusted
 - `hostname`s listed here should be listed in the REGISTRIES environment as well, so they can be intercepted.
 - Env `AUTH_REGISTRIES_DELIMITER` to change the separator between authentication info. By default, a space: "` `". If you use keys that contain spaces (as with Google Cloud Registry), you should update this variable, e.g. setting it to `AUTH_REGISTRIES_DELIMITER=";;;"`. In that case, `AUTH_REGISTRIES` could contain something like `registry1.com:user1:pass1;;;registry2.com:user2:pass2`.
 - Env `AUTH_REGISTRY_DELIMITER` to change the separator between authentication info *parts*. By default, a colon: "`:`". If you use keys that contain single colons, you should update this variable, e.g. setting it to `AUTH_REGISTRIES_DELIMITER=":::"`. In that case, `AUTH_REGISTRIES` could contain something like `registry1.com:::user1:::pass1 registry2.com:::user2:::pass2`.
-- Env `PROXY_REQUEST_BUFFERING`: If push is allowed, buffering requests can cause issues on slow upstreams.
-  If you have trouble pushing, set this to `false` first, then fix remaining timeouts.
-  Default is `true` to not change default behavior.
-  ENV PROXY_REQUEST_BUFFERING="true"
-- Timeout ENVs - all of them can be specified to control the different timeouts; if not set, the defaults will be the ones from the `Dockerfile`. The directives will be added into the `http` block:
-  - SEND_TIMEOUT : see [send_timeout](http://nginx.org/en/docs/http/ngx_http_core_module.html#send_timeout)
-  - CLIENT_BODY_TIMEOUT : see [client_body_timeout](http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_timeout)
-  - CLIENT_HEADER_TIMEOUT : see [client_header_timeout](http://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_timeout)
-  - KEEPALIVE_TIMEOUT : see [keepalive_timeout](http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout)
-  - PROXY_READ_TIMEOUT : see [proxy_read_timeout](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_read_timeout)
-  - PROXY_CONNECT_TIMEOUT : see [proxy_connect_timeout](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_connect_timeout)
-  - PROXY_SEND_TIMEOUT : see [proxy_send_timeout](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_send_timeout)
-  - PROXY_CONNECT_READ_TIMEOUT : see [proxy_connect_read_timeout](https://github.com/chobits/ngx_http_proxy_connect_module#proxy_connect_read_timeout)
-  - PROXY_CONNECT_CONNECT_TIMEOUT : see [proxy_connect_connect_timeout](https://github.com/chobits/ngx_http_proxy_connect_module#proxy_connect_connect_timeout)
-  - PROXY_CONNECT_SEND_TIMEOUT : see [proxy_connect_send_timeout](https://github.com/chobits/ngx_http_proxy_connect_module#proxy_connect_send_timeout)
-- DISABLE_IPV6: If set to `true`, prevents nginx from getting IPv6 addresses from the resolver without needing a [custom resolver config](#custom_nginx_resolvers_configuration)
 ### Simple (no auth, all cache)
 ```bash
@@ -111,7 +92,7 @@ docker run --rm --name docker_registry_proxy -it \
     -p 0.0.0.0:3128:3128 -e ENABLE_MANIFEST_CACHE=true \
     -v $(pwd)/docker_mirror_cache:/docker_mirror_cache \
     -v $(pwd)/docker_mirror_certs:/ca \
-    rpardini/docker-registry-proxy:0.6.2
+    rpardini/docker-registry-proxy:0.5.0
 ```
 ### DockerHub auth
@@ -127,7 +108,7 @@ docker run --rm --name docker_registry_proxy -it \
     -v $(pwd)/docker_mirror_certs:/ca \
     -e REGISTRIES="k8s.gcr.io gcr.io quay.io your.own.registry another.public.registry" \
     -e AUTH_REGISTRIES="auth.docker.io:dockerhub_username:dockerhub_password your.own.registry:username:password" \
-    rpardini/docker-registry-proxy:0.6.2
+    rpardini/docker-registry-proxy:0.5.0
 ```
 ### Simple registries auth (HTTP Basic auth)
@@ -155,7 +136,7 @@ docker run --rm --name docker_registry_proxy -it \
     -v $(pwd)/docker_mirror_certs:/ca \
     -e REGISTRIES="reg.example.com git.example.com" \
     -e AUTH_REGISTRIES="git.example.com:USER:PASSWORD" \
-    rpardini/docker-registry-proxy:0.6.2
+    rpardini/docker-registry-proxy:0.5.0
 ```
 ### Google Container Registry (GCR) auth
@@ -178,95 +159,10 @@ docker run --rm --name docker_registry_proxy -it \
     -e AUTH_REGISTRIES_DELIMITER=";;;" \
     -e AUTH_REGISTRY_DELIMITER=":::" \
     -e AUTH_REGISTRIES="gcr.io:::_json_key:::$(cat servicekey.json);;;auth.docker.io:::dockerhub_username:::dockerhub_password" \
-    rpardini/docker-registry-proxy:0.6.2
+    rpardini/docker-registry-proxy:0.5.0
 ```
-### Kind Cluster
-[Kind](https://github.com/kubernetes-sigs/kind/) is a tool for running local Kubernetes clusters using Docker container “nodes”.
-Because cluster nodes are Docker containers, docker-registry-proxy needs to be in the same Docker network.
-Example joining the _kind_ Docker network and using _docker-registry-proxy_ as the hostname:
-```bash
-docker run --rm --name docker_registry_proxy -it \
-    --net kind --hostname docker-registry-proxy \
-    -p 0.0.0.0:3128:3128 -e ENABLE_MANIFEST_CACHE=true \
-    -v $(pwd)/docker_mirror_cache:/docker_mirror_cache \
-    -v $(pwd)/docker_mirror_certs:/ca \
-    rpardini/docker-registry-proxy:0.6.2
-```
-Now deploy your Kind cluster and then automatically configure the nodes with the following script:
-```bash
-#!/bin/sh
-KIND_NAME=${1-kind}
-SETUP_URL=http://docker-registry-proxy:3128/setup/systemd
-pids=""
-for NODE in $(kind get nodes --name "$KIND_NAME"); do
-    docker exec "$NODE" sh -c "\
-        curl $SETUP_URL \
-        | sed s/docker\.service/containerd\.service/g \
-        | sed '/Environment/ s/$/ \"NO_PROXY=127.0.0.0\/8,10.0.0.0\/8,172.16.0.0\/12,192.168.0.0\/16\"/' \
-        | bash" & pids="$pids $!" # Configure every node in background
-done
-wait $pids # Wait for all configurations to end
-```
-### K3D Cluster
-[K3d](https://k3d.io/) is similar to Kind but is based on k3s. In order to run it with this registry proxy, you need settings like the ones shown below.
-```sh
-# docker-registry-proxy
-docker run -d --name registry-proxy --restart=always \
-    -v /tmp/registry-proxy/mirror_cache:/docker_mirror_cache \
-    -v /tmp/registry-proxy/certs:/ca \
-    rpardini/docker-registry-proxy:0.6.4
-export PROXY_HOST=registry-proxy
-export PROXY_PORT=3128
-export NOPROXY_LIST="localhost,127.0.0.1,0.0.0.0,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.local,.svc"
-cat <<EOF > /etc/k3d-proxy-config.yaml
-apiVersion: k3d.io/v1alpha3
-kind: Simple
-name: mycluster
-servers: 1
-agents: 0
-options:
-  k3d:
-    wait: true
-    timeout: "60s"
-  kubeconfig:
-    updateDefaultKubeconfig: true
-    switchCurrentContext: true
-env:
-  - envVar: HTTP_PROXY=http://$PROXY_HOST:$PROXY_PORT
-    nodeFilters:
-      - all
-  - envVar: HTTPS_PROXY=http://$PROXY_HOST:$PROXY_PORT
-    nodeFilters:
-      - all
-  - envVar: NO_PROXY='$NOPROXY_LIST'
-    nodeFilters:
-      - all
-volumes:
-  - volume: $REGISTRY_DIR/docker_mirror_certs/ca.crt:/etc/ssl/certs/registry-proxy-ca.pem
-    nodeFilters:
-      - all
-EOF
-k3d cluster create --config /etc/k3d-proxy-config.yaml
-```
-## Configuring the Docker clients using Docker Desktop for Mac
-Separate instructions for Mac clients are available in [this dedicated Docker Desktop for Mac document](Docker-for-Mac.md).
-## Configuring the Docker clients / Kubernetes nodes / Linux clients
+## Configuring the Docker clients / Kubernetes nodes
 Let's say you set up the proxy on host `192.168.66.72`; you can then `curl http://192.168.66.72:3128/ca.crt` to get the proxy CA certificate.
@@ -287,18 +183,10 @@ Environment="HTTP_PROXY=http://192.168.66.72:3128/"
 Environment="HTTPS_PROXY=http://192.168.66.72:3128/"
 EOD
-### UBUNTU
 # Get the CA certificate from the proxy and make it a trusted root.
 curl http://192.168.66.72:3128/ca.crt > /usr/share/ca-certificates/docker_registry_proxy.crt
 echo "docker_registry_proxy.crt" >> /etc/ca-certificates.conf
 update-ca-certificates --fresh
-###
-### CENTOS
-# Get the CA certificate from the proxy and make it a trusted root.
-curl http://192.168.66.72:3128/ca.crt > /etc/pki/ca-trust/source/anchors/docker_registry_proxy.crt
-update-ca-trust
-###
 # Reload systemd
 systemctl daemon-reload
@@ -330,7 +218,7 @@ docker run --rm --name docker_registry_proxy -it
     -p 0.0.0.0:3128:3128 -e ENABLE_MANIFEST_CACHE=true \
     -v $(pwd)/docker_mirror_cache:/docker_mirror_cache \
     -v $(pwd)/docker_mirror_certs:/ca \
-    rpardini/docker-registry-proxy:0.6.2-debug
+    rpardini/docker-registry-proxy:0.5.0-debug
 ```
 - `DEBUG=true` enables the mitmweb proxy between Docker clients and the caching layer, accessible on port 8081
@@ -341,10 +229,10 @@ docker run --rm --name docker_registry_proxy -it
 - If you authenticate to a private registry and pull through the proxy, those images will be served to any client that can reach the proxy, even without authentication. *beware*
 - Repeat, **this will make your private images very public if you're not careful**.
-- ~~**Currently you cannot push images while using the proxy** which is a shame. PRs welcome.~~ **SEE `ALLOW_PUSH` ENV FROM USAGE SECTION.**
+- **Currently you cannot push images while using the proxy** which is a shame. PRs welcome.
 - Setting this on Linux is relatively easy.
-- On Mac follow the instructions [here](Docker-for-Mac.md).
-- On Windows follow the instructions [here](Docker-Desktop-Windows.md).
+- On Mac and Windows the CA-certificate part will be very different but should work in principle.
+- Please send PRs with instructions for Windows and Mac if you succeed!
 ### Why not use Docker's own registry, which has a mirror feature?
@@ -365,8 +253,6 @@ Yeah. Docker Inc should do it. So should NPM, Inc. Wonder why they don't. 😼
 ### TODO:
-- [x] Basic Docker-for-Mac set-up instructions
-- [x] Basic Docker-for-Windows set-up instructions.
 - [ ] Test and make auth work with quay.io, unfortunately I don't have access to it (_hint, hint, quay_)
 - [x] Hide the mitmproxy building code under a Docker build ARG.
 - [ ] "Developer Office" proxy scenario, where many developers on a fast LAN share a proxy for bandwidth and speed savings (already works for pulls, but messes up pushes, which developers tend to use a lot)

View File

@@ -25,15 +25,12 @@ CN_WEB=${CN_WEB:0:64}
 mkdir -p /certs /ca
 cd /ca
-CA_KEY_FILE=${CA_KEY_FILE:-/ca/ca.key}
-CA_CRT_FILE=${CA_CRT_FILE:-/ca/ca.crt}
-CA_SRL_FILE=${CA_SRL_FILE:-/ca/ca.srl}
+CA_KEY_FILE=/ca/ca.key
+CA_CRT_FILE=/ca/ca.crt
+CA_SRL_FILE=/ca/ca.srl
 if [ -f "$CA_CRT_FILE" ] ; then
 logInfo "CA already exists. Good. We'll reuse it."
-if [ ! -f "$CA_SRL_FILE" ] ; then
-echo 01 > ${CA_SRL_FILE}
-fi
 else
 logInfo "No CA was found. Generating one."
 logInfo "*** Please *** make sure to mount /ca as a volume -- if not, every time this container starts, it will regenerate the CA and nothing will work."
@@ -82,7 +79,7 @@ EOF
 [[ ${DEBUG} -gt 0 ]] && openssl req -in ia.csr -noout -text
 logInfo "Sign the IA request with the CA cert and key, producing the IA cert"
-openssl x509 -req -days 730 -in ia.csr -CA ${CA_CRT_FILE} -CAkey ${CA_KEY_FILE} -CAserial ${CA_SRL_FILE} -out ia.crt -passin pass:foobar -extensions IA -extfile <(
+openssl x509 -req -days 730 -in ia.csr -CA ${CA_CRT_FILE} -CAkey ${CA_KEY_FILE} -out ia.crt -passin pass:foobar -extensions IA -extfile <(
 cat <<-EOF
 [req]
 distinguished_name = dn
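
Whichever side of this diff generates the CA, the resulting certificate is served at `/ca.crt` on port 3128 (as used throughout the docs in this compare), so it can be sanity-checked from any client; a sketch reusing the README's example host:

```bash
# Fetch the proxy's CA certificate and print its subject and validity window.
curl -s http://192.168.66.72:3128/ca.crt | openssl x509 -noout -subject -dates
```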

View File

@@ -1,16 +0,0 @@
version: '3.7'

services:
  docker_registry_proxy:
    image: rpardini/docker-registry-proxy:0.6.1 # Check and make sure this is the last released version
    env_file: # This contains REGISTRIES and AUTH_REGISTRIES
      - ./secrets.env
    environment:
      - CACHE_MAX_SIZE=256g
      - ENABLE_MANIFEST_CACHE=true
    volumes:
      # Format: <host-path>:<container-path>; adapt to your needs
      - ./docker_mirror_cache:/docker_mirror_cache # This will be up to CACHE_MAX_SIZE big
      - ./docker_mirror_certs:/ca
    ports:
      - 0.0.0.0:3128:3128 # 0.0.0.0 binds to all interfaces

View File

@@ -1,3 +0,0 @@
# DockerHub authentication
REGISTRIES="k8s.gcr.io gcr.io quay.io" # There is no need to specify auth.docker.io, it's built-in
AUTH_REGISTRIES="auth.docker.io:your_dockerhub_username:your_dockerhub_password"

View File

@@ -1,159 +0,0 @@
# How to use docker-registry-proxy with kops
## Install docker-registry-proxy
To run docker-registry-proxy with kops, you will need to run it outside the cluster you want to configure. You can either use an EC2 instance and run:
```bash
docker run --rm --name docker_registry_proxy -it \
-p 0.0.0.0:3128:3128 -e ENABLE_MANIFEST_CACHE=true \
-v $(pwd)/docker_mirror_cache:/docker_mirror_cache \
-v $(pwd)/docker_mirror_certs:/ca \
rpardini/docker-registry-proxy:0.6.0
```
Or you can run it from another cluster, maybe a management/observability one, using the provided YAML; in that case, you will need to change the following lines:
```
  annotations:
    external-dns.alpha.kubernetes.io/hostname: docker-registry-proxy.<your_domain>
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
```
with the correct domain name, so you can then reference the proxy as `http://docker-registry-proxy.<your_domain>:3128`
## Test the connection to the proxy
A simple curl should return:
```
curl docker-registry-proxy.<your_domain>:3128
docker-registry-proxy: The docker caching proxy is working!%
```
## Configure kops to use the proxy
Kops has the option to configure a cluster-wide proxy, as explained [here](https://github.com/kubernetes/kops/blob/master/docs/http_proxy.md), but this won't work: nodeup will fail to download the images. What you need instead is `additionalUserData`, which is part of the instance group configuration.
So consider a node configuration like this one:
```
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: spot.k8s.local
  name: spotgroup
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20200528
  machineType: c3.xlarge
  maxSize: 15
  minSize: 2
  mixedInstancesPolicy:
    instances:
    - c3.xlarge
    - c4.xlarge
    - c5.xlarge
    - c5a.xlarge
    onDemandAboveBase: 0
    onDemandBase: 0
    spotAllocationStrategy: capacity-optimized
  nodeLabels:
    kops.k8s.io/instancegroup: spotgroup
  role: Node
  subnets:
  - us-east-1a
  - us-east-1b
  - us-east-1c
```
you will need to add the following:
```
  additionalUserData:
  - name: docker-registry-proxy.sh
    type: text/x-shellscript
    content: |
      #!/bin/sh
      # Add environment vars pointing Docker to use the proxy
      # https://docs.docker.com/config/daemon/systemd/#httphttps-proxy
      mkdir -p /etc/systemd/system/docker.service.d
      cat << EOD > /etc/systemd/system/docker.service.d/http-proxy.conf
      [Service]
      Environment="HTTP_PROXY=http://docker-registry-proxy.<your_domain>:3128/"
      Environment="HTTPS_PROXY=http://docker-registry-proxy.<your_domain>:3128/"
      EOD
      # Get the CA certificate from the proxy and make it a trusted root.
      curl http://docker-registry-proxy.<your_domain>:3128/ca.crt > /usr/share/ca-certificates/docker_registry_proxy.crt
      echo "docker_registry_proxy.crt" >> /etc/ca-certificates.conf
      update-ca-certificates --fresh
      # Reload systemd
      systemctl daemon-reload
      # Restart dockerd
      systemctl restart docker.service
```
so the final InstanceGroup will look like this:
```
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: spot.k8s.local
  name: spotgroup
spec:
  additionalUserData:
  - name: docker-registry-proxy.sh
    type: text/x-shellscript
    content: |
      #!/bin/sh
      # Add environment vars pointing Docker to use the proxy
      # https://docs.docker.com/config/daemon/systemd/#httphttps-proxy
      mkdir -p /etc/systemd/system/docker.service.d
      cat << EOD > /etc/systemd/system/docker.service.d/http-proxy.conf
      [Service]
      Environment="HTTP_PROXY=http://docker-registry-proxy.<your_domain>:3128/"
      Environment="HTTPS_PROXY=http://docker-registry-proxy.<your_domain>:3128/"
      EOD
      # Get the CA certificate from the proxy and make it a trusted root.
      curl http://docker-registry-proxy.<your_domain>:3128/ca.crt > /usr/share/ca-certificates/docker_registry_proxy.crt
      echo "docker_registry_proxy.crt" >> /etc/ca-certificates.conf
      update-ca-certificates --fresh
      # Reload systemd
      systemctl daemon-reload
      # Restart dockerd
      systemctl restart docker.service
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20200528
  machineType: c3.xlarge
  maxSize: 15
  minSize: 2
  mixedInstancesPolicy:
    instances:
    - c3.xlarge
    - c4.xlarge
    - c5.xlarge
    - c5a.xlarge
    onDemandAboveBase: 0
    onDemandBase: 0
    spotAllocationStrategy: capacity-optimized
  nodeLabels:
    kops.k8s.io/instancegroup: spotgroup
  role: Node
  subnets:
  - us-east-1a
  - us-east-1b
  - us-east-1c
```
Now all you need is to upgrade your cluster and do a rolling update of the nodes; all images will be cached from now on.
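
A sketch of the kops commands implied by that last step (the cluster name comes from the InstanceGroup metadata above):

```bash
kops update cluster --name spot.k8s.local --yes
kops rolling-update cluster --name spot.k8s.local --yes
```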

View File

@@ -1,81 +0,0 @@
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: docker-registry-proxy
  namespace: registry-mirrors
  labels:
    app.kubernetes.io/name: docker-registry-proxy
spec:
  serviceName: docker-registry
  selector:
    matchLabels:
      app.kubernetes.io/name: docker-registry-proxy
  template:
    metadata:
      labels:
        app.kubernetes.io/name: docker-registry-proxy
    spec:
      serviceAccountName: default
      containers:
        - name: docker-registry-proxy
          image: ghcr.io/rpardini/docker-registry-proxy:0.6.1
          imagePullPolicy: IfNotPresent
          env:
            - name: ENABLE_MANIFEST_CACHE
              value: "true"
            - name: REGISTRIES
              value: "k8s.gcr.io gcr.io quay.io us.gcr.io"
          ports:
            - name: http
              containerPort: 3128
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          volumeMounts:
            - name: ca
              mountPath: /ca
            - name: docker-registry-cache
              mountPath: /docker_mirror_cache
          resources: {}
  volumeClaimTemplates:
    - metadata:
        name: ca
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
    - metadata:
        name: docker-registry-cache
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 100Gi
---
apiVersion: v1
kind: Service
metadata:
  name: docker-registry-proxy
  namespace: registry-mirrors
  labels:
    app.kubernetes.io/name: docker-registry-proxy
  annotations:
    external-dns.alpha.kubernetes.io/hostname: docker-registry-proxy.<your_domain>
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
    - port: 3128
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: docker-registry-proxy
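
A usage sketch for the manifest above (the file name is an assumption; the namespace and resource names come from the metadata):

```bash
kubectl create namespace registry-mirrors
kubectl apply -f docker-registry-proxy.yaml
kubectl -n registry-mirrors get statefulset,svc docker-registry-proxy
```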

View File

@@ -1,45 +1,15 @@
 #! /bin/bash
-echo "Entrypoint starting."
 set -Eeuo pipefail
 trap "echo TRAPed signal" HUP INT QUIT TERM
 #configure nginx DNS settings to match host, why must we do that nginx?
-# this leads to a world of problems. ipv6 format being different, etc.
-# below is a collection of hacks contributed over the years.
-echo "-- resolv.conf:"
-cat /etc/resolv.conf
-echo "-- end resolv"
-# Podman adds a "%3" to the end of the last resolver? I don't get it. Strip it out.
-export RESOLVERS=$(cat /etc/resolv.conf | sed -e 's/%3//g' | awk '$1 == "nameserver" {print ($2 ~ ":")? "["$2"]": $2}' ORS=' ' | sed 's/ *$//g')
-if [ "x$RESOLVERS" = "x" ]; then
-echo "Warning: unable to determine DNS resolvers for nginx" >&2
-exit 66
-fi
-echo "DEBUG, determined RESOLVERS from /etc/resolv.conf: '$RESOLVERS'"
-conf=""
-for ONE_RESOLVER in ${RESOLVERS}; do
-echo "Possible resolver: $ONE_RESOLVER"
-if [[ "a${DISABLE_IPV6}" == "atrue" ]]; then
-conf="resolver $ONE_RESOLVER ipv6=off; "
-else
-conf="resolver $ONE_RESOLVER; "
-fi
-done
-echo "Final chosen resolver: $conf"
+conf="resolver $(/usr/bin/awk 'BEGIN{ORS=" "} $1=="nameserver" {print $2}' /etc/resolv.conf) ipv6=off; # Avoid ipv6 addresses for now"
+[ "$conf" = "resolver ;" ] && echo "no nameservers found" && exit 0
 confpath=/etc/nginx/resolvers.conf
-if [ ! -e $confpath ]
+if [ ! -e $confpath ] || [ "$conf" != "$(cat $confpath)" ]
 then
-echo "Using auto-determined resolver '$conf' via '$confpath'"
 echo "$conf" > $confpath
-else
-echo "Not using resolver config, keep existing '$confpath' -- mounted by user?"
 fi
 # The list of SAN (Subject Alternative Names) for which we will create a TLS certificate.
@@ -147,33 +117,10 @@ EOD
 }
 EOD
-echo -e "\nManifest caching config: ---\n"
+echo "Manifest caching config: ---"
 cat /etc/nginx/nginx.manifest.caching.config.conf
 echo "---"
-if [[ "a${ALLOW_PUSH}" == "atrue" ]]; then
-cat <<EOF > /etc/nginx/conf.d/allowed.methods.conf
-# allow to upload big layers
-client_max_body_size 0;
-# only cache GET requests
-proxy_cache_methods GET;
-EOF
-else
-cat << 'EOF' > /etc/nginx/conf.d/allowed.methods.conf
-# Block POST/PUT/DELETE. Don't use this proxy for pushing.
-if ($request_method = POST) {
-return 405 "POST method is not allowed";
-}
-if ($request_method = PUT) {
-return 405 "PUT method is not allowed";
-}
-if ($request_method = DELETE) {
-return 405 "DELETE method is not allowed";
-}
-EOF
-fi
 # normally use non-debug version of nginx
 NGINX_BIN="/usr/sbin/nginx"
@@ -231,47 +178,6 @@ if [[ "a${DEBUG_NGINX}" == "atrue" ]]; then
 NGINX_BIN="/usr/sbin/nginx-debug"
 fi
-# Timeout configurations
-echo "" > /etc/nginx/nginx.timeouts.config.conf
-cat <<EOD >>/etc/nginx/nginx.timeouts.config.conf
-# Timeouts
-# ngx_http_core_module
-keepalive_timeout ${KEEPALIVE_TIMEOUT};
-send_timeout ${SEND_TIMEOUT};
-client_body_timeout ${CLIENT_BODY_TIMEOUT};
-client_header_timeout ${CLIENT_HEADER_TIMEOUT};
-# ngx_http_proxy_module
-proxy_read_timeout ${PROXY_READ_TIMEOUT};
-proxy_connect_timeout ${PROXY_CONNECT_TIMEOUT};
-proxy_send_timeout ${PROXY_SEND_TIMEOUT};
-# ngx_http_proxy_connect_module - external module
-proxy_connect_read_timeout ${PROXY_CONNECT_READ_TIMEOUT};
-proxy_connect_connect_timeout ${PROXY_CONNECT_CONNECT_TIMEOUT};
-proxy_connect_send_timeout ${PROXY_CONNECT_SEND_TIMEOUT};
-EOD
-echo -e "\nTimeout configs: ---"
-cat /etc/nginx/nginx.timeouts.config.conf
-echo -e "---\n"
-# Request buffering
-echo "" > /etc/nginx/proxy.request.buffering.conf
-if [[ "a${PROXY_REQUEST_BUFFERING}" == "afalse" ]]; then
-cat << EOD > /etc/nginx/proxy.request.buffering.conf
-proxy_max_temp_file_size 0;
-proxy_request_buffering off;
-proxy_http_version 1.1;
-EOD
-fi
-echo -e "\nRequest buffering: ---"
-cat /etc/nginx/proxy.request.buffering.conf
-echo -e "---\n"
 # Upstream SSL verification.
 echo "" > /etc/nginx/docker.verify.ssl.conf
 if [[ "a${VERIFY_SSL}" == "atrue" ]]; then
@@ -288,6 +194,7 @@ else
 echo "Upstream SSL certificate verification is DISABLED."
 fi
 echo "Testing nginx config..."
 ${NGINX_BIN} -t
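
Both versions of the entrypoint write their resolver choice to `/etc/nginx/resolvers.conf`, so that file is the first thing to inspect when DNS inside the proxy misbehaves; a sketch, assuming the container name used in the README examples:

```bash
docker exec docker_registry_proxy cat /etc/resolv.conf
docker exec docker_registry_proxy cat /etc/nginx/resolvers.conf
```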

View File

@@ -15,9 +15,6 @@ http {
 include /etc/nginx/mime.types;
 default_type application/octet-stream;
-# Include nginx timeout configs
-include /etc/nginx/nginx.timeouts.config.conf;
 # Use a debug-oriented logging format.
 log_format debugging escape=json
 '{'
@@ -76,6 +73,7 @@ http {
 '"upstream":"$upstream_addr"'
 '}';
+keepalive_timeout 300;
 gzip off;
 # Entrypoint generates the proxy_cache_path here, so it is configurable externally.
@@ -133,8 +131,7 @@ http {
 # The proxy director layer, listens on 3128
 server {
 listen 3128;
-listen [::]:3128;
-server_name proxy_director_;
+server_name _;
 # dont log the CONNECT proxy.
 #access_log /var/log/nginx/access.log debug_proxy;
@@ -142,7 +139,6 @@ http {
 set $docker_proxy_request_type "unknown-connect";
 proxy_connect;
-proxy_connect_allow all;
 proxy_connect_address $interceptedHost;
 proxy_max_temp_file_size 0;
@@ -203,7 +199,7 @@ echo "Docker configured with HTTPS_PROXY=$scheme://$http_host/"
 # actually could be 443 or 444, depending on debug. this is now generated by the entrypoint.
 listen 80 default_server;
 include /etc/nginx/caching.layer.listen;
-server_name proxy_caching_;
+server_name _;
 # Do some tweaked logging.
 access_log /var/log/nginx/access.log tweaked;
@@ -223,14 +219,19 @@ echo "Docker configured with HTTPS_PROXY=$scheme://$http_host/"
 # Docker needs this. Don't ask.
 chunked_transfer_encoding on;
-# configuration of the different allowed methods
-include "/etc/nginx/conf.d/allowed.methods.conf";
+# Block POST/PUT/DELETE. Don't use this proxy for pushing.
+if ($request_method = POST) {
+return 405 "POST method is not allowed";
+}
+if ($request_method = PUT) {
+return 405 "PUT method is not allowed";
+}
+if ($request_method = DELETE) {
+return 405 "DELETE method is not allowed";
+}
 proxy_read_timeout 900;
-# Request buffering
-include /etc/nginx/proxy.request.buffering.conf;
 # Use cache locking, with a huge timeout, so that multiple Docker clients asking for the same blob at the same time
 # will wait for the first to finish instead of doing multiple upstream requests.
 proxy_cache_lock on;