This change means we check the filters only once for the existence of placeholders that cannot be replaced at startup, and we then use the cached results of that lookup for subsequent replacements.
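A minimal sketch of this pattern follows, with illustrative placeholder names and types rather than Authelia's actual set; the idea is simply to record once which dynamic placeholders a filter contains and skip the string work when they are absent:
```go
package main

import (
	"fmt"
	"strings"
)

// filterCache records, once at startup, which dynamic placeholders a filter
// contains so later renders only do the replacements that are needed.
// The placeholder names here are illustrative, not Authelia's exact set.
type filterCache struct {
	filter      string
	hasInput    bool // replaced per authentication attempt
	hasDateTime bool // replaced per query
}

func newFilterCache(filter string) filterCache {
	return filterCache{
		filter:      filter,
		hasInput:    strings.Contains(filter, "{input}"),
		hasDateTime: strings.Contains(filter, "{date-time}"),
	}
}

// render performs only the replacements the cached lookup says are required.
func (c filterCache) render(input, now string) string {
	out := c.filter
	if c.hasInput {
		out = strings.ReplaceAll(out, "{input}", input)
	}
	if c.hasDateTime {
		out = strings.ReplaceAll(out, "{date-time}", now)
	}
	return out
}

func main() {
	c := newFilterCache("(&(uid={input})(objectClass=person))")
	fmt.Println(c.render("john", "20240101000000Z"))
}
```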
This change adjusts several global options, moving them into the server block. It additionally notes other breaking changes to the configuration.
BREAKING CHANGE: Several configuration options have been changed and moved into other sections. Migration instructions are documented here: https://authelia.com/docs/configuration/migration.html#4.30.0
#2101 introduced a minor regression when using the authelia scripts suite for development.
The following error occurred:
```
[00] # runtime/cgo
[00] cgo: exec gcc: exec: "gcc": executable file not found in $PATH
```
Adding CGO_ENABLED=0 before the dlv build command in run-backend-dev.sh fixed the issue.
* refactor: drop cgo requirement for sqlite
Replace github.com/mattn/go-sqlite3 with modernc.org/sqlite, which drops our CGO requirement.
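For reference, a minimal sketch of what the swap looks like at the database/sql level: modernc.org/sqlite is a pure-Go driver that registers itself under the driver name "sqlite" (mattn/go-sqlite3 uses "sqlite3"), so only the import and the driver name change. The file name below is illustrative.
```go
package main

import (
	"database/sql"
	"log"

	_ "modernc.org/sqlite" // pure-Go driver, registers as "sqlite"; no CGO needed
)

func main() {
	// With github.com/mattn/go-sqlite3 this would be sql.Open("sqlite3", ...).
	db, err := sql.Open("sqlite", "file:authelia.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS example (id INTEGER PRIMARY KEY, name TEXT)`); err != nil {
		log.Fatal(err)
	}
}
```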
* refactor: newline for consistency with dockerfiles
* refactor: logging config key to log
This refactors the recent pre-release change that added the log options to their own configuration section, renaming that section from logging to log.
* docs: add step to getting started to get the latest tagged commit
This is so we avoid issues where changes on master differ from the latest docker tag and do not work with it.
* test: adjust tests
* docs: adjust doc strings
This is so levels like warn and error can be used to exclude info or warn messages. Additionally, the log configuration options have been moved under the logging key, as there are now a significant number of them. This change also decouples the expvars and pprof handlers from the log level; they are now configured via server.enable_expvars and server.enable_pprof and can be enabled at any logging level.
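As a rough sketch of the decoupling, using a plain net/http mux and a hypothetical Go config struct rather than Authelia's actual server implementation (the real switches are the server.enable_expvars and server.enable_pprof configuration keys), handler registration depends only on these flags and never on the log level:
```go
package main

import (
	"expvar"
	"log"
	"net/http"
	"net/http/pprof"
)

// ServerConfig is illustrative; it stands in for the server.enable_expvars
// and server.enable_pprof options from the YAML configuration.
type ServerConfig struct {
	EnableExpvars bool
	EnablePprof   bool
}

// registerDebugHandlers wires up the debug endpoints based purely on the
// server options, independently of whatever logging level is configured.
func registerDebugHandlers(mux *http.ServeMux, cfg ServerConfig) {
	if cfg.EnableExpvars {
		mux.Handle("/debug/vars", expvar.Handler())
	}
	if cfg.EnablePprof {
		mux.HandleFunc("/debug/pprof/", pprof.Index)
	}
}

func main() {
	mux := http.NewServeMux()
	registerDebugHandlers(mux, ServerConfig{EnableExpvars: true, EnablePprof: true})
	log.Fatal(http.ListenAndServe(":9091", mux))
}
```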
OpenID Connect has become a standard when it comes to authentication, and
in order to address a security concern around forwarding authentication and authorization information,
it has been decided to add support for it.
This feature is in beta and is only enabled when it is configured. Before enabling it in production, please consider that it is in beta, may contain bugs, and is still missing several production-critical features; for instance, all OIDC-related data is currently stored in configuration or in memory. This means you may experience OIDC-specific issues with HA deployments or when restarting a single instance.
We are still working on adding the remaining features so we can make it generally available as soon as possible.
Related to #189
Co-authored-by: Clement Michaud <clement.michaud34@gmail.com>
This change implements yamllint and adjusts all YAML files to abide by our linting setup. This excludes config.template.yml, which will be handled in a separate commit.
* feat: go:embed static assets
Go 1.16 introduced the ability to embed files within a generated binary directly with the go toolchain. This simplifies our dependencies and significantly improves the development workflow for future developers.
Key points to note:
Due to the inability to embed files that do not reside within the local package, we need to duplicate our `config.template.yml` within `internal/configuration`.
To avoid issues with the development workflow, empty mock files have been included within `internal/server/public_html`. These are substituted with the respective generated files during the CI/CD and build workflows.
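For illustration, a minimal sketch of the go:embed mechanism in use here; the directory name, port, and package layout are assumptions for the example (it expects a public_html directory next to the source file), not Authelia's exact structure:
```go
package main

import (
	"embed"
	"io/fs"
	"log"
	"net/http"
)

// The embedded paths must live within this package's directory, which is why
// config.template.yml is duplicated and why mock files exist under
// internal/server/public_html in the real tree.
//
//go:embed public_html
var assets embed.FS

func main() {
	// Serve the embedded directory contents at the web root.
	sub, err := fs.Sub(assets, "public_html")
	if err != nil {
		log.Fatal(err)
	}
	http.Handle("/", http.FileServer(http.FS(sub)))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```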
* fix(suites): increase ldap suite test timeout
* fix(server): fix swagger asset CSP
This PR achieves the following goals:
* Utilise the upstream version of kind instead of a patched version which allowed binding to networks other than the default "kind" network
* Utilise the registry cache which is set up one level above the kind cluster
The former was previously required to successfully run our integration tests in a Kubernetes environment; this is now possible without running a patched version of kind.
The latter is needed because Docker Hub has introduced rate limiting for container downloads. If there are a large number of CI jobs, nodes may occasionally be rejected because the Kubernetes suite is not pulling from the registry cache.