add FORCE_UPSTREAM_HTTP_1_1 capability

parent 1418183659
commit 240a46cf9d
@@ -102,6 +102,8 @@ ENV ALLOW_PUSH="false"
 # Default is true to not change default behavior.
 ENV PROXY_REQUEST_BUFFERING="true"
 
+ENV FORCE_UPSTREAM_HTTP_1_1="false"
+
 # Timeouts
 # ngx_http_core_module
 ENV SEND_TIMEOUT="60s"
README.md
@@ -10,17 +10,17 @@ Caches the potentially huge blob/layer requests (for bandwidth/time savings), an

### NEW: avoiding DockerHub Pull Rate Limits with Caching

Starting November 2nd, 2020, DockerHub will
[supposedly](https://www.docker.com/blog/docker-hub-image-retention-policy-delayed-and-subscription-updates/)
[start](https://www.docker.com/blog/scaling-docker-to-serve-millions-more-developers-network-egress/)
[rate-limiting pulls](https://docs.docker.com/docker-hub/download-rate-limit/),
also known as the _Docker Apocalypse_.
The main symptom is `Error response from daemon: toomanyrequests: Too Many Requests. Please see https://docs.docker.com/docker-hub/download-rate-limit/` during pulls.
Many unknowing Kubernetes clusters will hit the limit, and struggle to configure `imagePullSecrets` and `imagePullPolicy`.

Since version `0.6.0`, this proxy can be configured with the env var `ENABLE_MANIFEST_CACHE=true` which provides
configurable caching of the manifest requests that DockerHub throttles. You can then fine-tune other parameters to your needs.
Together with the possibility to centrally inject authentication (since 0.3x), this is probably one of the best ways to bring relief to your distressed cluster, while at the same time saving lots of bandwidth and time.

Note: enabling manifest caching, in its default config, effectively makes some tags **immutable**. Use with care. The configuration ENVs are explained in the [Dockerfile](./Dockerfile), relevant parts included below.
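As a quick orientation, manifest caching is a single env var away. A minimal sketch, following the run examples used elsewhere in this README (ports, paths, and the image tag are the ones from those examples, adjust to your setup):

```bash
docker run --rm --name docker_registry_proxy -it \
       -p 0.0.0.0:3128:3128 -e ENABLE_MANIFEST_CACHE=true \
       -v $(pwd)/docker_mirror_cache:/docker_mirror_cache \
       -v $(pwd)/docker_mirror_certs:/ca \
       rpardini/docker-registry-proxy:0.6.0
```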
@@ -51,13 +51,13 @@ ENV MANIFEST_CACHE_DEFAULT_TIME="1h"

## What?

Essentially, it's a [man in the middle](https://en.wikipedia.org/wiki/Man-in-the-middle_attack): an intercepting proxy based on `nginx`, to which all docker traffic is directed using the `HTTPS_PROXY` mechanism and injected CA root certificates.

The main feature is Docker layer/image caching, including layers served from S3, Google Storage, etc.

As a bonus it allows for centralized management of Docker registry credentials, which can in itself be the main feature, eg in Kubernetes environments.

You configure the Docker clients (_err... Kubernetes Nodes?_) once, and then all configuration is done on the proxy --
for this to work it requires inserting a root CA certificate into system trusted root certs.
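For context, pointing a Docker client at the proxy is a two-step affair: trust the proxy's generated CA, then set `HTTPS_PROXY` for `dockerd`. A sketch for a Debian-like systemd host (the IP/port are placeholders for wherever the proxy runs; the `/ca.crt` endpoint is assumed from the proxy's defaults):

```bash
# Trust the CA certificate the proxy serves.
curl http://192.168.66.72:3128/ca.crt | sudo tee /usr/share/ca-certificates/docker_registry_proxy.crt
echo "docker_registry_proxy.crt" | sudo tee -a /etc/ca-certificates.conf
sudo update-ca-certificates --fresh

# Point dockerd at the proxy via a systemd drop-in.
sudo mkdir -p /etc/systemd/system/docker.service.d
cat << 'EOD' | sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTPS_PROXY=http://192.168.66.72:3128/"
EOD
sudo systemctl daemon-reload && sudo systemctl restart docker.service
```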
## master/:latest is unstable/beta
@@ -91,6 +91,7 @@ for this to work it requires inserting a root CA certificate into system trusted
  If you have trouble pushing, set this to `false` first, then fix remaining timeouts.
  Default is `true` to not change default behavior.
  `ENV PROXY_REQUEST_BUFFERING="true"`
- Env `FORCE_UPSTREAM_HTTP_1_1`: if set to `true`, injects nginx config that forces upstream connections to use HTTP/1.1. This allows registries sitting behind an HTTP/2 proxy to work, eg a Harbor registry sitting behind an Envoy proxy. Default is `false`.
- Timeout ENVs: all of them can be specified to control the different timeouts; if not set, the defaults will be the ones from the `Dockerfile`. The directives will be added into the `http` block:
  - SEND_TIMEOUT : see [send_timeout](http://nginx.org/en/docs/http/ngx_http_core_module.html#send_timeout)
  - CLIENT_BODY_TIMEOUT : see [client_body_timeout](http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_timeout)
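The new `FORCE_UPSTREAM_HTTP_1_1` knob is flipped at container start like any other env. A sketch, following the run examples used elsewhere in this README (image tag and paths taken from those examples):

```bash
docker run --rm --name docker_registry_proxy -it \
       -p 0.0.0.0:3128:3128 -e ENABLE_MANIFEST_CACHE=true \
       -e FORCE_UPSTREAM_HTTP_1_1=true \
       -v $(pwd)/docker_mirror_cache:/docker_mirror_cache \
       -v $(pwd)/docker_mirror_certs:/ca \
       rpardini/docker-registry-proxy:latest
```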
@ -159,10 +160,10 @@ docker run --rm --name docker_registry_proxy -it \
|
|||
|
||||
### Google Container Registry (GCR) auth
|
||||
|
||||
For Google Container Registry (GCR), username should be `_json_key` and the password should be the contents of the service account JSON.
|
||||
Check out [GCR docs](https://cloud.google.com/container-registry/docs/advanced-authentication#json_key_file).
|
||||
For Google Container Registry (GCR), username should be `_json_key` and the password should be the contents of the service account JSON.
|
||||
Check out [GCR docs](https://cloud.google.com/container-registry/docs/advanced-authentication#json_key_file).
|
||||
|
||||
The service account key is in JSON format, it contains spaces ("` `") and colons ("`:`").
|
||||
The service account key is in JSON format, it contains spaces ("` `") and colons ("`:`").
|
||||
|
||||
To be able to use GCR you should set `AUTH_REGISTRIES_DELIMITER` to something different than space (e.g. `AUTH_REGISTRIES_DELIMITER=";;;"`) and `AUTH_REGISTRY_DELIMITER` to something different than a single colon (e.g. `AUTH_REGISTRY_DELIMITER=":::"`).
|
||||
|
||||
|
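To see why the custom delimiters work, here is a small standalone sketch of delimiter-based splitting in the style the entrypoint uses (the registry names and credentials are made up; this is not the entrypoint's actual code):

```bash
#!/usr/bin/env bash
AUTH_REGISTRIES_DELIMITER=";;;"
AUTH_REGISTRY_DELIMITER=":::"
# JSON passwords may contain spaces and colons, which the custom delimiters tolerate.
AUTH_REGISTRIES='gcr.io:::_json_key:::{"type": "service_account"};;;registry.example.com:::someuser:::somepass'

# Split the registry list on the outer delimiter, one entry per line.
registries="${AUTH_REGISTRIES//$AUTH_REGISTRIES_DELIMITER/$'\n'}"
while IFS= read -r entry; do
  # Split each entry into host / user / password on the inner delimiter.
  host="${entry%%"$AUTH_REGISTRY_DELIMITER"*}"
  rest="${entry#*"$AUTH_REGISTRY_DELIMITER"}"
  user="${rest%%"$AUTH_REGISTRY_DELIMITER"*}"
  pass="${rest#*"$AUTH_REGISTRY_DELIMITER"}"
  echo "host=$host user=$user"
done <<< "$registries"
# prints:
# host=gcr.io user=_json_key
# host=registry.example.com user=someuser
```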
@@ -324,7 +325,7 @@ Since `0.4` there is a separate `-debug` version of the image, which includes `n
This allows very in-depth debugging. Use sparingly, and definitely not in production.

```bash
docker run --rm --name docker_registry_proxy -it \
       -e DEBUG_NGINX=true -e DEBUG=true -e DEBUG_HUB=true -p 0.0.0.0:8081:8081 -p 0.0.0.0:8082:8082 \
       -p 0.0.0.0:3128:3128 -e ENABLE_MANIFEST_CACHE=true \
       -v $(pwd)/docker_mirror_cache:/docker_mirror_cache \
@@ -347,15 +348,15 @@ docker run --rm --name docker_registry_proxy -it \

### Why not use Docker's own registry, which has a mirror feature?

Yes, Docker offers [Registry as a pull through cache](https://docs.docker.com/registry/recipes/mirror/), *unfortunately*
it only covers the DockerHub case. It won't cache images from `quay.io`, `k8s.gcr.io`, `gcr.io`, or any such, including any private registries.

That means that your shiny new Kubernetes cluster is now a bandwidth hog, since every image will be pulled from the
Internet on every Node it runs on, with no reuse.

This is due to the way the Docker "client" implements `--registry-mirror`, it only ever contacts mirrors for images
with no repository reference (eg, from DockerHub).
When a repository is specified `dockerd` goes directly there, via HTTPS (and also via HTTP if included in a
`--insecure-registry` list), thus completely ignoring the configured mirror.

### Docker itself should provide this.
@@ -365,7 +366,7 @@ Yeah. Docker Inc should do it. So should NPM, Inc. Wonder why they don't. 😼
### TODO:

- [x] Basic Docker-for-Mac set-up instructions
- [x] Basic Docker-for-Windows set-up instructions.
- [ ] Test and make auth work with quay.io, unfortunately I don't have access to it (_hint, hint, quay_)
- [x] Hide the mitmproxy building code under a Docker build ARG.
- [ ] "Developer Office" proxy scenario, where many developers on a fast LAN share a proxy for bandwidth and speed savings (already works for pulls, but messes up pushes, which developers tend to use a lot)
@@ -268,6 +268,20 @@ echo -e "\nRequest buffering: ---"
 cat /etc/nginx/proxy.request.buffering.conf
 echo -e "---\n"
 
+# force upstream to use http 1.1
+echo "" > /etc/nginx/http1.1.upstream.conf
+if [[ "a${FORCE_UPSTREAM_HTTP_1_1}" == "atrue" ]]; then
+  cat << 'EOD' > /etc/nginx/http1.1.upstream.conf
+proxy_http_version 1.1;
+proxy_set_header Upgrade $http_upgrade;
+proxy_set_header Connection "upgrade";
+EOD
+fi
+
+echo -e "\nConfigure upstream http version support: ---"
+cat /etc/nginx/http1.1.upstream.conf
+echo -e "---\n"
+
 # Upstream SSL verification.
 echo "" > /etc/nginx/docker.verify.ssl.conf
 if [[ "a${VERIFY_SSL}" == "atrue" ]]; then
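One subtlety in the heredoc above: the delimiter must be quoted (or `$http_upgrade` escaped), otherwise the shell expands `$http_upgrade` to an empty string at container start and nginx receives an invalid `proxy_set_header Upgrade ;`. A minimal standalone sketch of the toggle, writing to `/tmp` instead of `/etc/nginx` so it can be run anywhere:

```bash
#!/usr/bin/env bash
conf=/tmp/http1.1.upstream.conf
FORCE_UPSTREAM_HTTP_1_1="true"

echo "" > "$conf"
if [[ "a${FORCE_UPSTREAM_HTTP_1_1}" == "atrue" ]]; then
  # Quoted delimiter, so $http_upgrade reaches the nginx config literally.
  cat << 'EOD' > "$conf"
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
EOD
fi
grep -qF 'proxy_http_version 1.1;' "$conf" && echo "forced http/1.1"
# prints: forced http/1.1
```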
@@ -253,10 +253,8 @@ echo "Docker configured with HTTPS_PROXY=$scheme://$http_host/"
 # Use SNI during the TLS handshake with the upstream.
 proxy_ssl_server_name on;
 
-# http2 support for upstream
-proxy_http_version 1.1;
-proxy_set_header Upgrade $http_upgrade;
-proxy_set_header Connection "upgrade";
+# force upstream to use http 1.1
+include /etc/nginx/http1.1.upstream.conf;
 
 # This comes from a include file generated by the entrypoint.
 include /etc/nginx/docker.verify.ssl.conf;