Also, I am not sure how the container ended up with the name caddy-caddy-1.
That’s normal Docker Compose behaviour. Unless you specifically set the name in the compose file, you get <path>-<service>-<instance> as the container name. Your directory name ended with caddy and the service is caddy so you got caddy-caddy-1.
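If you’d rather have a fixed name, Compose lets you set it explicitly; a minimal sketch (the service and image names just mirror this thread):

```yaml
services:
  caddy:
    image: caddy:alpine
    # overrides the generated <path>-<service>-<instance> name
    container_name: caddy
```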
But if I understand you correctly, it sounds like the authoritative DNS (e.g. Cloudflare for mydomain.com) doesn’t need to have an A record at all for test.mydomain.com - it just needs to have the TXT record that the issuer will look for (e.g. _acme-challenge.test).
Correct.
Only 1; it’s for Caddy’s use and isn’t communicated to the issuer. It’s Caddy checking “I tried to write the TXT record, did it actually appear?” (aka the propagation check), because DNS plugins can misbehave or be slow to make the records actually appear in DNS after calling the DNS provider’s API (Cloudflare is fast, so it’s a non-issue in that case). Once Caddy sees the TXT record, it can move on to tell the issuer “OK, it should be there now, go look to confirm the challenge response!”
tls internal - just the above test.mydomain.com example.
But if I understand you correctly, it sounds like the authoritative DNS (e.g. Cloudflare for mydomain.com) doesn’t need to have an A record at all for test.mydomain.com - it just needs to have the TXT record that the issuer will look for (e.g. _acme-challenge.test).
I just wanted to ensure that resolvers in the above example is only used to communicate with 1) the authoritative DNS server (e.g. Cloudflare) to create the TXT record and perhaps 2) the issuer (e.g. Let’s Encrypt), telling it to check for the TXT record.
This is important if, for example, the Caddy host is in a network with a private (forwarding) DNS server that has no A record for test.mydomain.com (or has an error/misconfiguration/etc.): Caddy’s DNS challenge flow should still work because of the explicit external resolvers.
Thanks for confirming!
I am currently attempting to migrate from nginx to Caddy, and I’m trying to migrate a personal Pacman package proxy cache along with my other stuff. A similar config to what I had can be found here.
Unfortunately, I can’t seem to find a Caddy equivalent to nginx’s proxy_store directive. This is the part that effectively makes the whole thing work; it saves every archive file it receives from the reverse proxy upstream to a local directory, preserving the original name/location, and it returns the local copy on every subsequent request to avoid downloading from upstream every time.
Is there a way I can replicate this setup, with or without plugins? I noticed the existence of caddyserver/cache-handler (Distributed HTTP caching module for Caddy), but it didn’t seem to be able to cache to a regular file.
N/A
v2.11.2
Quadlet (Podman) on Fedora 43 (x86_64) (an external machine on the local network)
# systemctl start caddy
[Unit]
Description=Caddy reverse proxy
After=network-online.target nss-lookup.target
Wants=network-online.target
[Container]
Image=docker.io/caddy:alpine
AutoUpdate=registry
ContainerName=caddy
PublishPort=80:80
PublishPort=443:443
PublishPort=443:443/udp
Network=podman
ReloadCmd=caddy reload --config /etc/caddy/Caddyfile
Volume=/etc/caddy:/etc/caddy:z
Volume=/var/containers/caddy/config:/config:z
Volume=/var/containers/caddy/data:/data:z
Volume=/var/log/caddy:/var/log/caddy:z
Volume=/var/www:/var/www:z
NoNewPrivileges=true
DropCapability=ALL
AddCapability=NET_ADMIN NET_BIND_SERVICE
Memory=1g
PodmanArgs=--memory-reservation=512m --cpu-shares=1024
[Service]
Restart=always
RestartSec=5s
[Install]
WantedBy=default.target
Admittedly untested, as the described issue is a blocker:
(repo) {
reverse_proxy {
dynamic a geo.mirror.pkgbuild.com 443 {
versions ipv6
}
# todo: figure out how to cache response to /var/www/paccache
# with the same directory layout, and preferably toggle
# based on a snippet arg
transport http {
tls
}
}
}
http:// {
root /var/www/paccache
@db path_regexp \.(db|sig|files)$
@tar path_regexp \.tar\.(xz|zst)$
handle @db {
# always proxy db/sig files, never cache
import repo
}
handle {
file_server browse
}
handle_errors {
handle @tar {
# the intent here is to search for local .tar files first,
# and then proxy/cache from upstream if none are found
import repo store
}
}
}
Will there be a problem if caddy is an https reverse proxy for test.mydomain.com and that domain is not resolvable in any DNS server and only via /etc/hosts?
You can use tls internal pretty much for any site name you like. If you want to do DNS-01 challenge, you don’t necessarily need an A record with IP address, but the zone (its authoritative DNS server) needs to be accessible by the issuer. Otherwise, I’m not sure how else the issuer would validate the challenge.
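For a name that only resolves via /etc/hosts, a minimal sketch using tls internal (the hostname is the placeholder from this thread; clients must trust Caddy’s local CA root):

```
test.mydomain.com {
	# issue a cert from Caddy's built-in local CA instead of a public issuer
	tls internal
	respond "hello from test"
}
```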
The logs, however, show the following.
2026/03/12 20:47:17.170 WARN caddyfile Unnecessary header_up Host: the reverse proxy's default behavior is to pass headers to the upstream
That makes sense for outbound caddy-to-dnsprovider or caddy-to-letsencrypt requests during the DNS challenge.
But I’m also trying to learn:
Will there be a problem if caddy is an https reverse proxy for test.mydomain.com and that domain is not resolvable in any dns server and only via /etc/hosts?
Does the docker image use preset paths somehow?
Yes. Caddy reads /etc/caddy/Caddyfile. You can also see it in your log:
So you just need to mount your Caddyfile into the container so that it ends up at /etc/caddy/Caddyfile.
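Given the directory structure shown in this thread, the mounts could look something like this (a sketch; /srv is an arbitrary choice and must match whatever root you use in the Caddyfile):

```yaml
services:
  caddy:
    image: caddy:2.11.2
    volumes:
      # Caddy reads this exact path inside the container
      - ./conf/Caddyfile:/etc/caddy/Caddyfile
      - ./site:/srv
      - caddy_data:/data
      - caddy_config:/config
```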
{"apps":{"http":{"servers":{"srv0":{"listen":[":80"],"routes":[{"handle":[{"handler":"vars","root":"/usr/share/caddy"},{"handler":"file_server","hide":["/etc/caddy/Caddyfile"]}]}]}}}}}
# The Caddyfile is an easy way to configure your Caddy web server.
#
# Unless the file starts with a global options block, the first
# uncommented line is always the address of your site.
#
# To use your own domain name (with automatic HTTPS), first make
# sure your domain's A/AAAA DNS records are properly pointed to
# this machine's public IP, then replace ":80" below with your
# domain name.
:80 {
# Set this path to your site's directory.
root * /usr/share/caddy
# Enable the static file server.
file_server
# Another common task is to set up a reverse proxy:
# reverse_proxy localhost:8080
# Or serve a PHP site through php-fpm:
# php_fastcgi localhost:9000
}
# Refer to the Caddy docs for more information:
# https://caddyserver.com/docs/caddyfile
Also, I am not sure how the container ended up with the name caddy-caddy-1.
Here’s the directory structure I’m using:
ruby@madcatter:~$ pwd
/home/ruby
ruby@madcatter:~$ ls -R caddy/
caddy/:
compose.yaml conf site
caddy/conf:
Caddyfile
caddy/site:
index.html
Does the docker image use preset paths somehow?
docker exec -ti caddy cat /config/caddy/autosave.json /etc/caddy/Caddyfile
and share the result?
Specifying a resolver lets Caddy bypass that and query something reliable, or query the authoritative DNS for the zone you want the certificate for directly. This does not affect how Caddy resolves other stuff, for example, upstream names.
I set up Caddy through docker compose and added a simple index.html page for testing. It appears that it isn’t getting a TLS certificate.
On my local machine:
[ruby@nixos:~]$ curl -vL https://madcatter.dev/
* Host madcatter.dev:443 was resolved.
* IPv6: 2a0f:f01:206:1ec::
* IPv4: 92.113.145.235
* Trying [2a0f:f01:206:1ec::]:443...
* ALPN: curl offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (OUT), TLS alert, decode error (562):
* TLS connect error: error:0A000126:SSL routines::unexpected eof while reading
* closing connection #0
curl: (35) TLS connect error: error:0A000126:SSL routines::unexpected eof while reading
On the VPS:
ruby@madcatter:~/caddy$ docker compose run caddy
[+] Running 1/1
✔ Network caddy_default Created 0.0s
Container caddy-caddy-run-70ee879c0a6e Creating
Container caddy-caddy-run-70ee879c0a6e Created
2026/03/11 22:54:34.389 INFO maxprocs: Leaving GOMAXPROCS=6: CPU quota undefined
2026/03/11 22:54:34.389 INFO GOMEMLIMIT is updated {"GOMEMLIMIT": 8655486566, "previous": 9223372036854775807}
2026/03/11 22:54:34.389 INFO using config from file {"file": "/etc/caddy/Caddyfile"}
2026/03/11 22:54:34.389 INFO adapted config to JSON {"adapter": "caddyfile"}
2026/03/11 22:54:34.389 INFO admin admin endpoint started {"address": "localhost:2019", "enforce_origin": false, "origins": ["//localhost:2019", "//[::1]:2019", "//127.0.0.1:2019"]}
2026/03/11 22:54:34.390 WARN http.auto_https server is listening only on the HTTP port, so no automatic HTTPS will be applied to this server {"server_name": "srv0", "http_port": 80}
2026/03/11 22:54:34.390 INFO tls.cache.maintenance started background certificate maintenance {"cache": "0x25050e0fd500"}
2026/03/11 22:54:34.390 WARN http HTTP/2 skipped because it requires TLS {"network": "tcp", "addr": ":80"}
2026/03/11 22:54:34.390 WARN http HTTP/3 skipped because it requires TLS {"network": "tcp", "addr": ":80"}
2026/03/11 22:54:34.390 INFO http.log server running {"name": "srv0", "protocols": ["h1", "h2", "h3"]}
2026/03/11 22:54:34.390 INFO autosaved config (load with --resume flag) {"file": "/config/caddy/autosave.json"}
2026/03/11 22:54:34.390 INFO serving initial configuration
2026/03/11 22:54:34.391 INFO tls storage cleaning happened too recently; skipping for now {"storage": "FileStorage:/data/caddy", "instance": "19b51cbf-0d78-4e65-beae-874cf473674a", "try_again": "2026/03/12 22:54:34.391", "try_again_in": 86399.99999972}
2026/03/11 22:54:34.391 INFO tls finished cleaning storage units
journalctl -u caddy --no-pager | less +G
ruby@madcatter:~/caddy$ docker compose exec caddy caddy version
v2.11.2 h1:iOlpsSiSKqEW+SIXrcZsZ/NO74SzB/ycqqvAIEfIm64=
ruby@madcatter:~/caddy$ uname -a
Linux madcatter.dev 6.12.73+deb13-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.12.73-1 (2026-02-17) x86_64 GNU/Linux
ruby@madcatter:~/caddy$ docker -v
Docker version 29.3.0, build 5927d80
ruby@madcatter:~/caddy$ docker info
Client: Docker Engine - Community
Version: 29.3.0
Context: rootless
Debug Mode: false
Plugins:
buildx: Docker Buildx (Docker Inc.)
Version: v0.31.1
Path: /usr/libexec/docker/cli-plugins/docker-buildx
compose: Docker Compose (Docker Inc.)
Version: v5.1.0
Path: /usr/libexec/docker/cli-plugins/docker-compose
Server:
Containers: 2
Running: 2
Paused: 0
Stopped: 0
Images: 4
Server Version: 29.3.0
Storage Driver: overlayfs
driver-type: io.containerd.snapshotter.v1
Logging Driver: json-file
Cgroup Driver: systemd
Cgroup Version: 2
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
CDI spec directories:
/etc/cdi
/var/run/cdi
/home/ruby/.config/cdi
/run/user/1001/cdi
Swarm: inactive
Runtimes: io.containerd.runc.v2 runc
Default Runtime: runc
Init Binary: docker-init
containerd version: dea7da592f5d1d2b7755e3a161be07f43fad8f75
runc version: v1.3.4-0-gd6d73eb8
init version: de40ad0
Security Options:
seccomp
Profile: builtin
rootless
cgroupns
Kernel Version: 6.12.73+deb13-amd64
Operating System: Debian GNU/Linux 13 (trixie)
OSType: linux
Architecture: x86_64
CPUs: 6
Total Memory: 8.957GiB
Name: madcatter.dev
ID: 0615bdcd-7635-4510-8687-53bf562caeec
Docker Root Dir: /home/ruby/.local/share/docker
Debug Mode: false
Experimental: false
Insecure Registries:
::1/128
127.0.0.0/8
Live Restore Enabled: false
Firewall Backend: iptables
WARNING: No cpuset support
WARNING: No io.weight support
WARNING: No io.weight (per device) support
WARNING: No io.max (rbps) support
WARNING: No io.max (wbps) support
WARNING: No io.max (riops) support
WARNING: No io.max (wiops) support
cd ~/caddy
docker compose up -d
services:
caddy:
image: caddy:2.11.2
restart: unless-stopped
ports:
- "80:80"
- "443:443"
- "443:443/udp"
volumes:
- ./conf:/home/ruby/caddy/conf
- ./site:/home/ruby/caddy/site
- caddy_data:/data
- caddy_config:/config
stdin_open: true # docker run -i
tty: true # docker run -t
volumes:
caddy_data:
caddy_config:
{
acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
debug
}
madcatter.dev {
root /home/ruby/caddy/site
file_server
}
Assume caddy is running on a private network host that does not yet have any associated A or CNAME records and has its /etc/hosts as follows:
127.0.0.1 localhost
10.1.1.14 test.mydomain.com
Given the following example Caddyfile snippet:
test.mydomain.com {
tls {
dns cloudflare {env.CF_API_TOKEN}
resolvers 1.1.1.1
}
# ... etc...
}
What does the resolvers 1.1.1.1 line do exactly?
Naturally I checked the documentation which says:
- resolvers customizes the DNS resolvers used when performing the DNS challenge; these take precedence over system resolvers or any default ones. If set here, the resolvers will propagate to all configured certificate issuers. This is typically a list of IP addresses.
So it seems clear enough to me that this allows Caddy and/or its Cloudflare DNS plugin to find Cloudflare’s API server so it can add the ACME challenge TXT record to the mydomain.com domain.
But is 1.1.1.1 used for anything else?
Does Caddy and/or its Cloudflare plugin use it to find the IP address for test.mydomain.com?
If so, that would be a problem because hostname-to-IP resolution is currently only represented in the Caddy host’s /etc/hosts file and not yet in any DNS server.
My understanding is that the ACME exchange should still ‘just work’ and doesn’t need to know about the machine’s hostname or IP address. It is basically “I own mydomain.com, so I’ll add a TXT record to prove it, and after you verify that record, please issue me the cert.” No?
We see the _acme-challenge records stay in our DNS records forever. We also see an error in the Caddy output that probably explains why (error deleting temporary record for name (…)).
2026/03/10 17:17:10.418 INFO tls.obtain acquiring lock {"identifier": "ksa.stamhoofd.dev"}
2026/03/10 17:17:10.420 INFO tls.obtain lock acquired {"identifier": "ksa.stamhoofd.dev"}
2026/03/10 17:17:10.421 INFO tls.obtain obtaining certificate {"identifier": "ksa.stamhoofd.dev"}
2026/03/10 17:17:10.422 INFO http waiting on internal rate limiter {"identifiers": ["ksa.stamhoofd.dev"], "ca": "https://acme-v02.api.letsencrypt.org/directory", "account": ""}
2026/03/10 17:17:10.422 INFO http done waiting on internal rate limiter {"identifiers": ["ksa.stamhoofd.dev"], "ca": "https://acme-v02.api.letsencrypt.org/directory", "account": ""}
2026/03/10 17:17:10.422 INFO http using ACME account {"account_id": "https://acme-v02.api.letsencrypt.org/acme/acct/xxxxxxxx", "account_contact": []}
2026/03/10 17:18:15.326 ERROR cleaning up solver {"identifier": "ksa.stamhoofd.dev", "challenge_type": "dns-01", "error": "deleting temporary record for name \"stamhoofd.dev.\" in zone {\"_acme-challenge.ksa\" \"0s\" \"TXT\" \"HVliU-L8zpCEHL8X9bkbONdLUCyW5AM2e8r7M2N72-g\"}: strconv.Atoi: parsing \"\": invalid syntax"}
github.com/mholt/acmez/v3.(*Client).pollAuthorization
github.com/mholt/acmez/[email protected]/client.go:557
github.com/mholt/acmez/v3.(*Client).solveChallenges
github.com/mholt/acmez/[email protected]/client.go:391
github.com/mholt/acmez/v3.(*Client).ObtainCertificate
github.com/mholt/acmez/[email protected]/client.go:149
github.com/caddyserver/certmagic.(*ACMEIssuer).doIssue
github.com/caddyserver/[email protected]/acmeissuer.go:498
github.com/caddyserver/certmagic.(*ACMEIssuer).Issue
github.com/caddyserver/[email protected]/acmeissuer.go:391
github.com/caddyserver/caddy/v2/modules/caddytls.(*ACMEIssuer).Issue
github.com/caddyserver/caddy/[email protected]/modules/caddytls/acmeissuer.go:292
github.com/caddyserver/certmagic.(*Config).obtainCert.func2
github.com/caddyserver/[email protected]/config.go:662
github.com/caddyserver/certmagic.doWithRetry
github.com/caddyserver/[email protected]/async.go:104
github.com/caddyserver/certmagic.(*Config).obtainCert
github.com/caddyserver/[email protected]/config.go:736
github.com/caddyserver/certmagic.(*Config).ObtainCertAsync
github.com/caddyserver/[email protected]/config.go:532
github.com/caddyserver/certmagic.(*Config).manageOne.func1
github.com/caddyserver/[email protected]/config.go:415
github.com/caddyserver/certmagic.(*jobManager).worker
github.com/caddyserver/[email protected]/async.go:73
2026/03/10 17:18:15.326 INFO authorization finalized {"identifier": "ksa.stamhoofd.dev", "authz_status": "valid"}
2026/03/10 17:18:15.326 INFO validations succeeded; finalizing order {"order": "https://acme-v02.api.letsencrypt.org/acme/order/3133281997/489162114987"}
2026/03/10 17:18:17.000 INFO got renewal info {"names": ["ksa.stamhoofd.dev"], "window_start": "2026/05/08 19:03:49.000", "window_end": "2026/05/10 14:14:39.000", "selected_time": "2026/05/09 20:53:51.000", "recheck_after": "2026/03/10 23:12:41.000", "explanation_url": ""}
v2.11.2 h1:iOlpsSiSKqEW+SIXrcZsZ/NO74SzB/ycqqvAIEfIm64=
Modules:
github.com/caddy-dns/digitalocean
github.com/mholt/caddy-l4
Manual build in combination with systemd file.
GOOS=linux GOARCH=amd64 xcaddy build v2.11.2 --output caddy --with github.com/caddy-dns/digitalocean --with github.com/mholt/caddy-l4
Creating users and log directory
sudo groupadd --system caddy
sudo useradd --system --gid caddy --create-home --home-dir /var/lib/caddy --shell /usr/sbin/nologin --comment "Caddy web server" caddy
sudo mkdir -p /var/log/caddy
sudo chown caddy:caddy /var/log/caddy
Systemd config file attached below
sudo systemctl enable caddy
Ubuntu 24.04.4 LTS, amd64 (no docker)
sudo systemctl restart caddy
# caddy-api.service
#
# For using Caddy with its API.
#
# This unit is "durable" in that it will automatically resume
# the last active configuration if the service is restarted.
#
# See https://caddyserver.com/docs/install for instructions.
[Unit]
Description=Caddy
Documentation=https://caddyserver.com/docs/
After=network.target network-online.target
Requires=network-online.target
[Service]
Type=notify
User=caddy
Group=caddy
ExecStart=/usr/bin/caddy run --environ --resume
TimeoutStopSec=5s
LimitNOFILE=1048576
PrivateTmp=true
ProtectSystem=full
AmbientCapabilities=CAP_NET_ADMIN CAP_NET_BIND_SERVICE
[Install]
WantedBy=multi-user.target
We don’t use Caddyfiles, only JSON configs.
{
"apps": {
"http": {
"servers": {
"redirect": {
"listen": [
":80"
],
"routes": [
{
"handle": [
{
"handler": "static_response",
"headers": {
"Location": [
"https://{http.request.hostport}{http.request.uri}"
]
},
"status_code": "301"
}
]
}
]
},
"stamhoofd": {
"listen": [
":443"
],
"routes": [
{
"handle": [
{
"encodings": {
"gzip": {
"level": 6
},
"zstd": {}
},
"handler": "encode"
},
{
"handler": "reverse_proxy",
"headers": {
"request": {
"set": {
"x-real-ip": [
"{http.request.remote}"
]
}
}
},
"upstreams": [
{
"dial": "127.0.0.1:9091"
}
]
}
],
"match": [
{
"host": [
"api.staging.keeo.fos.be",
"*.api.staging.keeo.fos.be"
]
}
]
},
{
"handle": [
{
"encodings": {
"gzip": {
"level": 6
},
"zstd": {}
},
"handler": "encode"
},
{
"handler": "reverse_proxy",
"headers": {
"request": {
"set": {
"x-real-ip": [
"{http.request.remote}"
]
}
}
},
"upstreams": [
{
"dial": "127.0.0.1:5000"
}
]
}
],
"match": [
{
"host": [
"renderer.staging.keeo.fos.be"
]
}
]
},
{
"handle": [
{
"handler": "static_response",
"headers": {
"Cache-Control": [
"no-store"
],
"Location": [
"https://staging.keeo.fos.be{http.request.uri}"
]
},
"status_code": "302"
}
],
"match": [
{
"host": [
"ksa.stamhoofd.dev"
]
}
],
"terminal": true
},
{
"handle": [
{
"encodings": {
"gzip": {
"level": 6
},
"zstd": {}
},
"handler": "encode"
},
{
"handler": "subroute",
"routes": [
{
"handle": [
{
"handler": "headers",
"response": {
"set": {
"Cache-Control": [
"no-store"
]
}
}
}
],
"match": [
{
"not": [
{
"path": [
"*.js",
"*.css",
"*.png",
"*.jpg",
"*.jpeg",
"*.gif",
"*.ico",
"*.webm",
"*.mp4",
"*.webp",
"*.avif",
"*.svg",
"*.ttf",
"*.woff",
"*.woff2",
"*.map",
"*.pdf",
"*.doc",
"*.docx",
"*.xls",
"*.xlsx",
"*.ppt",
"*.pptx",
"*.zip",
"*.rar",
"*.7z",
"*.gz",
"*.tar",
"*.mp3",
"*.m4a",
"*.avi",
"*.mkv",
"*.mov",
"*.wmv"
]
}
]
}
],
"terminal": false
},
{
"handle": [
{
"handler": "headers",
"response": {
"set": {
"Cache-Control": [
"max-age=31536000"
]
}
}
}
],
"match": [
{
"path": [
"*.js",
"*.css",
"*.png",
"*.jpg",
"*.jpeg",
"*.gif",
"*.ico",
"*.webm",
"*.mp4",
"*.webp",
"*.avif",
"*.svg",
"*.ttf",
"*.woff",
"*.woff2",
"*.map",
"*.pdf",
"*.doc",
"*.docx",
"*.xls",
"*.xlsx",
"*.ppt",
"*.pptx",
"*.zip",
"*.rar",
"*.7z",
"*.gz",
"*.tar",
"*.mp3",
"*.m4a",
"*.avi",
"*.mkv",
"*.mov",
"*.wmv"
]
}
],
"terminal": false
}
]
},
{
"handler": "file_server",
"pass_thru": true,
"root": "/var/www/stamhoofd/dashboard/"
},
{
"handler": "rewrite",
"uri": "/index.html"
},
{
"handler": "file_server",
"root": "/var/www/stamhoofd/dashboard/"
}
],
"match": [
{
"host": [
"staging.keeo.fos.be"
]
}
]
},
{
"handle": [
{
"handler": "static_response",
"headers": {
"Cache-Control": [
"no-store"
],
"Location": [
"https://staging.keeo.fos.be{http.request.uri}"
]
},
"status_code": "302"
}
],
"match": [
{
"host": [
"www.staging.keeo.fos.be"
]
}
]
},
{
"handle": [
{
"encodings": {
"gzip": {
"level": 6
},
"zstd": {}
},
"handler": "encode"
},
{
"handler": "subroute",
"routes": [
{
"handle": [
{
"handler": "headers",
"response": {
"set": {
"Cache-Control": [
"no-store"
]
}
}
}
],
"match": [
{
"not": [
{
"path": [
"*.js",
"*.css",
"*.png",
"*.jpg",
"*.jpeg",
"*.gif",
"*.ico",
"*.webm",
"*.mp4",
"*.webp",
"*.avif",
"*.svg",
"*.ttf",
"*.woff",
"*.woff2",
"*.map",
"*.pdf",
"*.doc",
"*.docx",
"*.xls",
"*.xlsx",
"*.ppt",
"*.pptx",
"*.zip",
"*.rar",
"*.7z",
"*.gz",
"*.tar",
"*.mp3",
"*.m4a",
"*.avi",
"*.mkv",
"*.mov",
"*.wmv"
]
}
]
}
],
"terminal": false
},
{
"handle": [
{
"handler": "headers",
"response": {
"set": {
"Cache-Control": [
"max-age=31536000"
]
}
}
}
],
"match": [
{
"path": [
"*.js",
"*.css",
"*.png",
"*.jpg",
"*.jpeg",
"*.gif",
"*.ico",
"*.webm",
"*.mp4",
"*.webp",
"*.avif",
"*.svg",
"*.ttf",
"*.woff",
"*.woff2",
"*.map",
"*.pdf",
"*.doc",
"*.docx",
"*.xls",
"*.xlsx",
"*.ppt",
"*.pptx",
"*.zip",
"*.rar",
"*.7z",
"*.gz",
"*.tar",
"*.mp3",
"*.m4a",
"*.avi",
"*.mkv",
"*.mov",
"*.wmv"
]
}
],
"terminal": false
}
]
},
{
"handler": "file_server",
"pass_thru": true,
"root": "/var/www/stamhoofd/webshop/"
},
{
"handler": "rewrite",
"uri": "/index.html"
},
{
"handler": "file_server",
"root": "/var/www/stamhoofd/webshop/"
}
],
"match": [
{
"host": [
"shop.staging.keeo.fos.be"
]
}
]
},
{
"handle": [
{
"encodings": {
"gzip": {
"level": 6
},
"zstd": {}
},
"handler": "encode"
},
{
"handler": "subroute",
"routes": [
{
"handle": [
{
"handler": "headers",
"response": {
"set": {
"Cache-Control": [
"no-store"
]
}
}
}
],
"match": [
{
"not": [
{
"path": [
"*.js",
"*.css",
"*.png",
"*.jpg",
"*.jpeg",
"*.gif",
"*.ico",
"*.webm",
"*.mp4",
"*.webp",
"*.avif",
"*.svg",
"*.ttf",
"*.woff",
"*.woff2",
"*.map",
"*.pdf",
"*.doc",
"*.docx",
"*.xls",
"*.xlsx",
"*.ppt",
"*.pptx",
"*.zip",
"*.rar",
"*.7z",
"*.gz",
"*.tar",
"*.mp3",
"*.m4a",
"*.avi",
"*.mkv",
"*.mov",
"*.wmv"
]
}
]
}
],
"terminal": false
},
{
"handle": [
{
"handler": "headers",
"response": {
"set": {
"Cache-Control": [
"max-age=31536000"
]
}
}
}
],
"match": [
{
"path": [
"*.js",
"*.css",
"*.png",
"*.jpg",
"*.jpeg",
"*.gif",
"*.ico",
"*.webm",
"*.mp4",
"*.webp",
"*.avif",
"*.svg",
"*.ttf",
"*.woff",
"*.woff2",
"*.map",
"*.pdf",
"*.doc",
"*.docx",
"*.xls",
"*.xlsx",
"*.ppt",
"*.pptx",
"*.zip",
"*.rar",
"*.7z",
"*.gz",
"*.tar",
"*.mp3",
"*.m4a",
"*.avi",
"*.mkv",
"*.mov",
"*.wmv"
]
}
],
"terminal": false
}
]
},
{
"handler": "file_server",
"pass_thru": true,
"root": "/var/www/stamhoofd/webshop/"
},
{
"handler": "rewrite",
"uri": "/index.html"
},
{
"handler": "file_server",
"root": "/var/www/stamhoofd/webshop/"
}
]
}
]
}
}
},
"tls": {
"automation": {
"on_demand": {
"permission": {
"endpoint": "https://api.staging.keeo.fos.be/v394/check-domain-cert",
"module": "http"
}
},
"policies": [
{
"issuers": [
{
"challenges": {
"dns": {
"propagation_delay": "1m",
"propagation_timeout": "10m",
"provider": {
"auth_token": "xxxxxxxxxxxx",
"name": "digitalocean"
}
}
},
"module": "acme"
}
],
"on_demand": false,
"subjects": [
"api.staging.keeo.fos.be",
"*.api.staging.keeo.fos.be"
]
},
{
"issuers": [
{
"challenges": {
"dns": {
"propagation_delay": "1m",
"propagation_timeout": "10m",
"provider": {
"auth_token": "xxxxxxxxxxxx",
"name": "digitalocean"
}
}
},
"module": "acme"
}
],
"on_demand": false,
"subjects": [
"renderer.staging.keeo.fos.be"
]
},
{
"issuers": [
{
"challenges": {
"dns": {
"propagation_delay": "1m",
"propagation_timeout": "10m",
"provider": {
"auth_token": "xxxxxxxxxxxx2",
"name": "digitalocean"
}
}
},
"module": "acme"
}
],
"on_demand": false,
"subjects": [
"ksa.stamhoofd.dev"
]
},
{
"issuers": [
{
"challenges": {
"dns": {
"propagation_delay": "1m",
"propagation_timeout": "10m",
"provider": {
"auth_token": "xxxxxxxxxxxx",
"name": "digitalocean"
}
}
},
"module": "acme"
}
],
"on_demand": false,
"subjects": [
"www.staging.keeo.fos.be",
"staging.keeo.fos.be"
]
},
{
"issuers": [
{
"challenges": {
"dns": {
"propagation_delay": "1m",
"propagation_timeout": "10m",
"provider": {
"auth_token": "xxxxxxxxxxxx",
"name": "digitalocean"
}
}
},
"module": "acme"
}
],
"on_demand": false,
"subjects": [
"shop.staging.keeo.fos.be"
]
},
{
"on_demand": true
}
]
}
}
},
"logging": {
"logs": {
"default": {
"encoder": {
"format": "console"
},
"writer": {
"filename": "/var/log/caddy/caddy.log",
"output": "file",
"roll_size_mb": 50
}
}
},
"sink": {
"writer": {
"filename": "/var/log/caddy/caddy-sink.log",
"output": "file",
"roll_size_mb": 50
}
}
}
}
I cannot determine where such IP addresses are coming from, since they are not present in the config.
They are right here:
Your upstream target is not specified correctly. You are missing the colon before the port number. Without it, Caddy interprets 7880 as an IPv4 address written in decimal form. That value converts to 0.0.30.200 in the standard four octet format, which is why you are seeing that address.
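The conversion is easy to check; this small Python sketch (not part of Caddy) renders the bare port numbers from this thread as 32-bit IPv4 addresses:

```python
import socket

# interpret a decimal integer as a big-endian 32-bit IPv4 address
def as_ipv4(n: int) -> str:
    return socket.inet_ntoa(n.to_bytes(4, "big"))

print(as_ipv4(7880))  # 0.0.30.200
print(as_ipv4(6167))  # 0.0.24.23
```

The second value matches the other dial error in the logs, which backs up the diagnosis.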
Update the configuration to use a proper upstream format, for example:
to :7880
or
to localhost:7880
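Applied to the Caddyfile in this thread, the fix is just adding the colon (a sketch of one of the blocks):

```
matrix2.srasu.org {
	# ":6167" is a port on the local host; bare "6167" parses as a decimal IPv4 address
	reverse_proxy :6167
}
```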
More details:
I am working on setting up a Matrix homeserver using NixOS modules, and it appears that once everything is set up, the Caddy reverse proxy attempts to make requests to invalid IP addresses like 0.0.30.200:80, and I cannot determine where such IP addresses are coming from, since they are not present in the config.
Mar 10 08:47:09 nixos-matrix-homeserver-testing caddy[19307]: {"level":"error","ts":1773146829.1132128,"logger":"http.log.error.log0","msg":"dial tcp 0.0.24.23:80: i/o timeout","request":{"remote_ip":"173.49.123.17","remote_port":"54876","client_ip":"173.49.123.17","proto":"HTTP/2.0","method":"GET","host":"matrix2.srasu.org","uri":"/","headers":{"Sec-Fetch-Site":["none"],"Priority":["u=0, i"],"Te":["trailers"],"Accept-Language":["en-US,en;q=0.9"],"Sec-Fetch-Mode":["navigate"],"Accept-Encoding":["gzip, deflate, br, zstd"],"Upgrade-Insecure-Requests":["1"],"Sec-Fetch-Dest":["document"],"User-Agent":["Mozilla/5.0 (X11; Linux x86_64; rv:148.0) Gecko/20100101 Firefox/148.0"],"Accept":["text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"]},"tls":{"resumed":false,"version":772,"cipher_suite":4865,"proto":"h2","server_name":"matrix2.srasu.org","ech":false}},"duration":3.000683803,"status":502,"err_id":"mhdgdpduf","err_trace":"reverseproxy.statusError (reverseproxy.go:1473)"}
Mar 10 09:21:37 nixos-matrix-homeserver-testing caddy[19307]: {"level":"error","ts":1773148897.452203,"logger":"http.log.error.log0","msg":"dial tcp 0.0.24.23:80: i/o timeout","request":{"remote_ip":"16.144.17.106","remote_port":"37658","client_ip":"16.144.17.106","proto":"HTTP/1.1","method":"GET","host":"matrix2.srasu.org","uri":"/","headers":{"Connection":["close"],"User-Agent":["Mozilla/5.0 (Android; Linux armv7l; rv:10.0.1) Gecko/20100101 Firefox/10.0.1 Fennec/10.0.1"],"Accept-Charset":["utf-8"],"Accept-Encoding":["gzip"]},"tls":{"resumed":false,"version":772,"cipher_suite":4865,"proto":"","server_name":"matrix2.srasu.org","ech":false}},"duration":3.001503004,"status":502,"err_id":"svnsqvnge","err_trace":"reverseproxy.statusError (reverseproxy.go:1473)"}
Mar 10 09:37:49 nixos-matrix-homeserver-testing caddy[19307]: {"level":"error","ts":1773149869.5751345,"logger":"http.log.error.log1","msg":"dial tcp 0.0.30.200:80: i/o timeout","request":{"remote_ip":"34.67.39.62","remote_port":"57208","client_ip":"34.67.39.62","proto":"HTTP/2.0","method":"GET","host":"sfu.matrix2.srasu.org","uri":"/.env","headers":{"Te":["trailers"],"Accept-Language":["en-US,en;q=0.5"],"Sec-Fetch-Site":["none"],"Accept":["text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"],"Priority":["u=0, i"],"User-Agent":["Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:135.0) Gecko/20100101 Firefox/135.0"],"Upgrade-Insecure-Requests":["1"],"Sec-Fetch-Dest":["document"],"Accept-Encoding":["gzip, deflate, br, zstd"],"Sec-Fetch-Mode":["navigate"],"Sec-Fetch-User":["?1"]},"tls":{"resumed":false,"version":772,"cipher_suite":4865,"proto":"h2","server_name":"sfu.matrix2.srasu.org","ech":false}},"duration":3.000765468,"status":502,"err_id":"p18bwchua","err_trace":"reverseproxy.statusError (reverseproxy.go:1473)"}
2.11.1
I installed caddy by way of the NixOS module.
NixOS 26.05 (unstable), x86_64-linux. The NixOS module installs caddy via a systemd service.
/nix/store/0y8w75a33h8qxxmg5jglxk0kvibcgx4p-caddy-2.11.1/bin/caddy run --environ --config /etc/caddy/Caddyfile
# section of the nixos configuration relevant to caddy
services.caddy = {
enable = true;
openFirewall = true;
virtualHosts = {
"matrix2.srasu.org" = {
extraConfig = ''
reverse_proxy 6167
'';
};
"sfu.matrix2.srasu.org" = {
extraConfig = ''
@jwt_service {
path /sfu/get* /heathz*
}
handle @jwt_service {
reverse_proxy 8081
}
handle {
reverse_proxy {
to 7880
header_up Connection "upgrade"
header_up Upgrade "{http.request.header.Upgrade}"
}
}
'';
};
};
};
# caddy.service
#
# For using Caddy with a config file.
#
# Make sure the ExecStart and ExecReload commands are correct
# for your installation.
#
# See https://caddyserver.com/docs/install for instructions.
#
# WARNING: This service does not use the --resume flag, so if you
# use the API to make changes, they will be overwritten by the
# Caddyfile next time the service is restarted. If you intend to
# use Caddy's API to configure it, add the --resume flag to the
# `caddy run` command or use the caddy-api.service file instead.
[Unit]
Description=Caddy
Documentation=https://caddyserver.com/docs/
After=network.target network-online.target
Requires=network-online.target
[Service]
Type=notify
User=caddy
Group=caddy
ExecStart=/nix/store/0y8w75a33h8qxxmg5jglxk0kvibcgx4p-caddy-2.11.1/bin/caddy run --environ --config /etc/caddy/Caddyfile
ExecReload=/nix/store/0y8w75a33h8qxxmg5jglxk0kvibcgx4p-caddy-2.11.1/bin/caddy reload --config /etc/caddy/Caddyfile --force
TimeoutStopSec=5s
LimitNOFILE=1048576
PrivateTmp=true
ProtectSystem=full
AmbientCapabilities=CAP_NET_ADMIN CAP_NET_BIND_SERVICE
[Install]
WantedBy=multi-user.target
This config is auto-generated from the nixos module listed above.
{
log {
level ERROR
}
}
matrix2.srasu.org {
log {
output file /var/log/caddy/access-matrix2.srasu.org.log
}
reverse_proxy 6167
}
sfu.matrix2.srasu.org {
log {
output file /var/log/caddy/access-sfu.matrix2.srasu.org.log
}
@jwt_service {
path /sfu/get\* /heathz\*
}
handle @jwt_service {
reverse_proxy 8081
}
handle {
reverse_proxy {
to 7880
header_up Connection "upgrade"
header_up Upgrade "{http.request.header.Upgrade}"
}
}
}
When setting up maxmind_geolocation I can’t block non-IT IPs.
I can either block everyone (if I write error @geo 403 before the reverse_proxy instruction) or no one (if I don’t write error @geo 403, but I write reverse_proxy @geo jellyfin:8096).
I tried with my Italian IP and with a VPN service (both US and European IPs).
docker logs caddy gives no errors at all
v2.11.1
docker compose
{
admin off
servers {
client_ip_headers X-Forwarded-For
trusted_proxies static private_ranges
trusted_proxies_strict
}
order crowdsec before respond
crowdsec {
api_url http://crowdsec:8080
api_key "MYKEY"
ticker_interval 15s
appsec_url http://crowdsec:7422
#disable_streaming
#enable_hard_fails
}
log {
output file /var/log/caddy/access.log {
roll_size 30MiB
roll_keep 5
}
}
}
(default-headers) {
header {
-frameDeny
-sslRedirect
-browserXssFilter
-contentTypeNosniff
-forceSTSHeader
-stsIncludeSubdomains
-stsPreload
-stsSeconds 15552000
-customFrameOptionsValue SAMEORIGIN
-customRequestHeaders X-Forwarded-Proto https
}
}
*.test.mydomain.com {
tls {
dns cloudflare MYKEY
propagation_delay 2m
resolvers 1.1.1.1
}
log
@geo maxmind_geolocation {
db_path "/etc/caddy/GeoLite2-Country.mmdb"
allow_countries IT
}
@test host *
@jellyfin host jellyfin.test.mydomain.com
route @test {
crowdsec
appsec
respond "test"
}
route @jellyfin {
# I can reach jellyfin:8096 with my Italian IP, but I can do that even with a non-IT VPN
error @geo 403
crowdsec
appsec
reverse_proxy jellyfin:8096
# I already tried reverse_proxy @geo jellyfin:8096
}
}
I tried this too (this gives error 403 to everyone, IT and outside):
@geo {
not maxmind_geolocation {
db_path "/etc/caddy/GeoLite2-Country.mmdb"
allow_countries IT
}
not remote_ip 172.24.0.0/22 # My container's IPs
}
route @jellyfin {
error @geo 403
crowdsec
appsec
reverse_proxy jellyfin:8096
}
handle is sorted above abort, so that's why it fails; but when you wrap it in a route, that disables the directive sorting, forcing abort to run first, and you get the behaviour you expect. Alternatively, you could make your snippet put a handle around the abort (applying the matcher on the handle). Since the handles are then at the same nesting level with each other, they act mutually exclusively, and the first one that matches (in the order they appear) executes.

Thank you for your feedback. Sorry, Docker is not an option for me, but I finally got it working.
Solution:
I have two plain-text config files that are read into environment variables. For example:
In the Caddyfile, for the base domain(s) (always on):
import conf.d/basedomains.conf
and in conf.d/basedomains.conf
{$DOMAINS:localhost} {
import default_log
...
}
For the subdomains, as a catch-all site block in the Caddyfile:
{$SUB_DOMAINS:localhost} {
import securityheader
#auto active all apps
import conf.d.apps/*.conf
}
In conf.d.apps/app1.conf
@app1_app header_regexp sub host ^app1.(.*)
handle @app1_app {
root * {$APP1_WEB_PATH}\public
#special configs...
…
}
In conf.d.apps/app2.conf
@app2_app header_regexp sub host ^app2.(.*)
handle @app2_app {
root * {$APP2_WEB_PATH}\public
#special configs
…
}
The $SUB_DOMAINS environment data can also be generated from the active apps in "conf.d.apps/*.conf" and from $DOMAINS.
So I can easily add a new base domain name without touching the default/fixed config files that must be maintained on many servers.
If I want to activate a prepared additional app, I only need to create a link at conf.d.apps/<appname>.conf.
Are there any problems to expect with my solution?
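To make the setup concrete, the environment feeding the placeholders above might look like this (the domain values are illustrative assumptions, not from the original post). Caddy substitutes {$VAR} placeholders at Caddyfile-parse time, so a comma-separated value expands into multiple site addresses:

```text
# Read by the Caddyfile's {$DOMAINS} and {$SUB_DOMAINS} placeholders
DOMAINS=example.org, www.example.org
SUB_DOMAINS=*.example.org
```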
handle has higher priority than abort, so your traffic will never reach it unless someone tries to access something other than the sites you listed.
I’m on my phone right now, but I’ll post a better approach to your problem once I’m back on my computer, sorry.
In the meantime, take a look at the directive order:
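For reference, here is the relevant slice of Caddy's default directive order (abridged; see the docs for the full list). handle, handle_path, and route all sort above abort, which is why an unwrapped abort in a snippet runs later than you might expect:

```text
...
try_files
handle
handle_path
route
abort
error
...
respond
...
reverse_proxy
...
file_server
```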
I have many forwards. I want all of them to only allow local IPs (private_ranges). I tried it with a snippet and importing that one underneath my *.example.com, but it's forwarding/allowing everything.
No error logs, since Caddy is just allowing everything. I can curl from a remote (public) IP and everything is allowed, even though it should be aborting non private_ranges
v2.10.2
Docker compose on Debian.
Debian VM on Proxmox, running docker compose (v5.1.0)
N/A
services:
caddy:
build: .
container_name: caddy
network_mode: host
restart: unless-stopped
volumes:
- ./conf:/etc/caddy
- ./site:/srv
- ./caddy_data:/data
- ./caddy_config:/config
{
email [email protected]
acme_dns transip {
login my_username
private_key /etc/caddy/my_key.pem
}
}
(internal_only) {
@not_private not remote_ip private_ranges
abort @not_private
}
*.example.com {
import internal_only
@test host test.example.com
handle @test {
reverse_proxy 192.168.1.100:8880
}
}
Putting it all in a route block seems to be working, for example:
{
email [email protected]
acme_dns transip {
login my_username
private_key /etc/caddy/my_key.pem
}
}
(internal_only) {
@not_private not remote_ip private_ranges
abort @not_private
}
*.example.com {
route {
import internal_only
@test host test.example.com
handle @test {
reverse_proxy 192.168.1.100:8880
}
}
}
I'm just not sure if this would be the best practice…
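An alternative sketch, based on the same config: wrap the abort in a handle inside the snippet. Sibling handle blocks at the same nesting level are mutually exclusive, and the first one to match (in order of appearance) wins, so the abort check runs before the site's own handle without needing a route wrapper:

```Caddyfile
(internal_only) {
	@not_private not remote_ip private_ranges
	handle @not_private {
		abort
	}
}

*.example.com {
	import internal_only
	@test host test.example.com
	handle @test {
		reverse_proxy 192.168.1.100:8880
	}
}
```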
One option is dockerize — check whether it can template the Caddyfile the way you want. Then run dockerize shortly before starting Caddy so it generates the final Caddyfile from a template.
Dockerize is meant for templating inside Docker containers, but you can also use it outside containers. It is a relatively small Golang app.
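A rough illustration of the templating approach — this assumes dockerize's default and split template functions and a DOMAINS variable holding a space-separated list, so double-check against the dockerize docs before relying on it:

```text
{{ $domains := split (default .Env.DOMAINS "localhost") " " }}
{{ range $domains }}
app1.{{ . }} {
	reverse_proxy localhost:8080
}
{{ end }}
```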
Another option is to write a simple shell script that generates the final Caddyfile based on your criteria before launching Caddy.
I have multiple configs for each app (subdomain). My goal is to avoid touching the config file when domains change; I want to define the different domain names via an environment variable.
At the moment I define my apps for a domain like this, so I can specify the domain name via an environment variable:
app1.{$BASE_DOMAIN} {
...
}
This is working fine, but I have several hosts that have multiple (alternative) domain names that should serve the same apps.
app1.domain-a.com, app1.domain-b.com {
...
}
app2.domain-a.com, app2.domain-b.com {
...
}
So it would be nice if something like this were possible:
app1.{$DOMAINS} {
...
}
app2.{$DOMAINS} {
...
}
where the environment variable $DOMAINS could be, for example, "localhost | domain-a | domain-b".
Is it possible to use dynamic (environment-based) base domain names, so that I don't need to touch the Caddy config for multiple domains?
Thank you for any hint on this.
Tobias
That doesn't work in the Go ecosystem. There's no such thing as "minimum and maximum supported versions". Therefore the statement that it's "entirely possible in principle" isn't applicable in the Go ecosystem.
If you're interested in learning more about this, I encourage you to read Russ Cox's thesis on the subject:
So for anyone else affected, this seems to be the more reliable workaround:
header_up Host {hostport}
instead of only:
header_up Host {host}
I wanted to share that in case it helps others who run into the same issue.
Yeah, I agree. I recommend {hostport} in the original issue where regressions were posted:
The breakage is actually obvious, which was part of the point. Clarifying the factors we considered for this:
header_up Host {hostport}
tls_insecure_skip_verify is used, and these are not really officially considered legitimate production use cases. Disabling security is just not a great solution in any case. Caddy can fully automate internal PKI, and apps or Caddy, one way or another, can be configured to properly trust the certs. (And I am not sure I have seen a complaint where this option was not used.)
So, I think in the grand scheme of things, breakage was actually very minimal, obvious, and seems to be limited to controversial configs anyway…
Sorry the release notes were not more clear. We did highlight this change, but maybe it needed a … or something?
grace_period in global options?

I also came across the note about using hostport instead of only host. After changing the config accordingly, live views are now working for me again as well.
So for anyone else affected, this seems to be the more reliable workaround:
header_up Host {hostport}
instead of only:
header_up Host {host}
I wanted to share that in case it helps others who run into the same issue.
That said, I do think this situation highlights a broader concern around release expectations. From a user perspective, this was a breaking behavioral change in a stable release, and it resulted in a fairly time-consuming troubleshooting process for setups that had previously been working without issue.
I also do not find the argument about plugins particularly persuasive as a reason to move away from semantic versioning expectations. In many ecosystems, plugin compatibility is handled with defined minimum and maximum supported versions, so semver-compatible dependency ranges are entirely possible in principle.
My concern is less about the specific technical change itself and more about how disruptive changes like this are introduced. When a change can break established reverse proxy setups in non-obvious ways, it would be much easier on users if that were either reserved for a major version, made opt-in, or communicated much more prominently as a breaking change.
Now that I understand the reason for the change and have a working workaround, I appreciate the intent behind it. But the path to getting there was much more difficult than it should have been, and I think that is worth reflecting on.
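For context, the workaround discussed above sits inside the reverse_proxy block roughly like this — the upstream address is an assumption, and tls_insecure_skip_verify mirrors the self-signed-certificate setups mentioned in this thread, not a recommendation:

```Caddyfile
unifi.example.com {
	reverse_proxy https://192.168.1.10:443 {
		# Send the original host *and port* upstream
		header_up Host {hostport}
		transport http {
			# Only because the upstream uses a self-signed cert;
			# disabling verification is discouraged
			tls_insecure_skip_verify
		}
	}
}
```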
The header_up Host {host} workaround does restore the general UniFi page loading behavior with Caddy 2.11, but it does not fully solve the regression.
In my case, the UniFi interface becomes reachable again, but live camera feeds in Protect no longer work afterwards. The result is that the main UI loads, but camera live views are broken, which makes this workaround insufficient in practice.
Because of that, I would strongly suggest reconsidering or reverting this change. While the intention was understandable - making common HTTPS upstream setups less prone to subtle misconfiguration and possible security issues - this behavior change appears to introduce a serious breaking change for existing real-world reverse proxy setups such as UniFi.
From my perspective, the practical impact is worse than the problem it tries to prevent:
before: existing UniFi reverse proxy setups worked
after 2.11: UniFi may fail to load at all unless header_up Host {host} is added
with the workaround: the page loads again, but Protect live camera feeds are still broken
So even the recommended workaround only partially restores functionality.
Please consider reverting this change, or at least providing a compatibility option that restores the pre-2.11 behavior for HTTPS upstreams.
I think my SSE handler currently blocks the shutdown procedure of the Caddy process, and I'd like to clean it up before I release it.
I think there are two things/questions here.
Why do you think your plugin is blocking the Caddy shutdown? Do you have a link to the code somewhere?
In order to execute PHP files (irrespective of the location), those files must have read/execute permission by the user that runs php-fpm. This is in addition to read permission for Caddy (user).
Thank you for your answer. I understand that now. But I still need to dig a bit to understand what’s happening on my server ^^’
According to my permissions:
As I understand it, Caddy should not be able to access my site because of the 700 on my home directory. But it works for static sites, and not for dynamic sites.
If I change my home directory to:
then it's working (at least the reading part from php-fpm).
For the best-practices part, I usually have all my websites in my /home/USER/websites/ directory, so I can push them over SSH with my user account. Should I change this practice? As I understand it from @francislavoie's messages, I should put all my websites directly under /var/www, not via a symbolic link, and owned by www-data (or caddy). Thus, I should push them through a caddy SSH account, right?
I wanted to inquire what the best way is to propagate cancellation down to my module.
For context (no pun intended), I am working on a plugin/module that uses SSE. I think my SSE handler currently blocks the shutdown procedure of the Caddy process, and I’d like to clean it up before I release it. So far, I tried to work around it with another channel to make it respond to stop, but it feels dirty.
func (cs *CaddyScope) serveAPIStream(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "text/event-stream")
w.Header().Set("Cache-Control", "no-cache")
w.Header().Set("X-Accel-Buffering", "no")
rc := http.NewResponseController(w)
writeAndFlush := func(data []byte) error {
if _, err := fmt.Fprintf(w, "event: stats\ndata: %s\n\n", data); err != nil {
return err
}
return rc.Flush()
}
if data := cs.snap.get(); data != nil {
if err := writeAndFlush(data); err != nil {
return
}
}
ticker := time.NewTicker(time.Duration(cs.Refresh))
defer ticker.Stop()
for {
select {
case <-r.Context().Done():
return
case <-cs.cancelCtx.Done():
return
case <-ticker.C:
data := cs.snap.get()
if data == nil {
continue
}
if err := writeAndFlush(data); err != nil {
return
}
}
}
}
why can I have my static websites inside my own home directory, but not my dynamic websites?
In order to execute PHP files (irrespective of the location), those files must have read/execute permission by the user that runs php-fpm. This is in addition to read permission for Caddy (user).
I am not sure about the best practices; however, I keep all files in /home/user/sites/example.com/public and give read-only permission to both the Caddy user and the php-fpm user. I also have /home/user/{backups,log,scripts} directories to manage the sites. I host mostly WordPress sites. This dir structure helps me host multiple sites using the same user. This is not a recommended config: if a site gets infected by malware, it can affect other sites hosted alongside. I am only sharing what works for a particular use case. YMMV.
ca. Also be advised that any ACME client will attempt renewal way ahead of the expiration date, so even small downtimes of Let's Encrypt are not a problem.
If you insist on the fallback, though, my suggestion would be to create another set of path/service units and to use conditionals: ConditionPathExists or ConditionDirectoryNotEmpty, like so:
ConditionDirectoryNotEmpty=/var/lib/caddy/acme-v02.api.letsencrypt.org-directory/<domain>
and
ConditionDirectoryNotEmpty=/var/lib/caddy/acme.zerossl.com-v2-DV90/<domain>
This is for reference for future readers.
I was ready to do this if needed, but since I already took the plunge into using the events exec module, I’ll be sticking with that for now. Just wanted to explain the reason I didn’t initially go with the solution you showed here, in case my reasoning is useful to you as well!
Caddy stores the certificate at <caddyroot>/certificates/acme-v02.api.letsencrypt.org-directory/<domain>/<domain>.crt, caddyroot being /var/lib/caddy/ by default.
# /etc/systemd/system/stalwart-cert-import.path
[Unit]
Description=Stalwart certificate import from Caddy
[Path]
PathModified=/var/lib/caddy/certificates/acme-v02.api.letsencrypt.org-directory/<domain>/<domain>.crt
[Install]
WantedBy=multi-user.target
# /etc/systemd/system/stalwart-cert-import.service
[Unit]
Description=Stalwart certificate import from Caddy
[Service]
Type=oneshot
ExecStart=/usr/bin/install -p -m644 -o stalwart-mail -g stalwart-mail /var/lib/caddy/certificates/acme-v02.api.letsencrypt.org-directory/<domain>/<domain>.crt /var/lib/stalwart-mail/cert/<domain>.crt
ExecStart=/usr/bin/install -p -m640 -o stalwart-mail -g stalwart-mail /var/lib/caddy/certificates/acme-v02.api.letsencrypt.org-directory/<domain>/<domain>.key /var/lib/stalwart-mail/cert/<domain>.key
ExecStart=/usr/bin/stalwart-cli -u https://<domain> -c user:pass server reload-certificates
But I still have my question: why can I have my static websites inside my own home directory, but not my dynamic websites?
And what are the good practices about where and how to put the files on my server? Until now I use my account (over SSH, of course) and push my files directly into my home directory. Should I use another user to do that and push the files directly into the /var/www directory?
]]>
admin : 2019
You have an extra space between the : and the port number, that’s incorrect.
You're using Caddy-Docker-Proxy, right? It didn't let you change the admin endpoint address before "Respect CADDY_ADMIN and preserve Caddyfile admin listen" by jriberg (Pull Request #775, lucaslorentz/caddy-docker-proxy on GitHub), but with that merged it should work now, so update your build to use the latest CDP version released yesterday.
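For reference, the corrected global option has no space before the port:

```Caddyfile
{
	admin :2019
}
```

Note that :2019 binds all interfaces, which is what lets a published container port reach the admin API; the default localhost:2019 is only reachable from inside the container.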
Files there are typically owned by the www-data user (the caddy user is in the www-data group so it can also read files owned by that group). I think php-fpm also runs as www-data by default.

But I don't understand why it's working with my static-file websites then. Does Caddy have access and not php-fpm?
I'd gladly serve from /var/www, but what are your recommendations on how to access that directory over SSH? Should I give my user the right to edit the files, and is that good enough?
I run Caddy in Docker as a reverse proxy, but I can't access the admin API.
root@xy:/opt/work/compose/caddy# curl -vL http://localhost:2019
* Host localhost:2019 was resolved.
* IPv6: ::1
* IPv4: 127.0.0.1
* Trying [::1]:2019...
* Immediate connect fail for ::1: Cannot assign requested address
* Trying 127.0.0.1:2019...
* Connected to localhost (127.0.0.1) port 2019
> GET / HTTP/1.1
> Host: localhost:2019
> User-Agent: curl/8.5.0
> Accept: */*
>
* Recv failure: Connection reset by peer
* Closing connection
curl: (56) Recv failure: Connection reset by peer
Caddy version: v2.10.2 h1:g/gTYjGMD0dec+UgMw8SnfmJ3I9+M2TdvoRL/Ovu6U8=
Caddy runs as container in Docker version 29.2.1, build a5c7197
Deployed with docker compose
Ubuntu 24.04.4 LTS, latest updates, with plenty of cores and RAM
configs:
caddy-basic-content:
file: ./config/Caddyfile
labels:
caddy: null
services:
caddy:
container_name: caddy
image: homeall/caddy-reverse-proxy-cloudflare:latest
restart: always
environment:
TZ: "Europe/Berlin"
CADDY_INGRESS_NETWORKS: caddy-proxy
CADDY_DOCKER_CADDYFILE_PATH: /etc/caddy/Caddyfile
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ./config/Caddyfile:/etc/caddy/Caddyfile:ro
- ./caddy-data:/data
ports:
- "80:80"
- "443:443"
- "443:443/udp"
- "443:443/tcp"
- "2019:2019"
networks:
- caddy-proxy
networks:
caddy-proxy:
external: true
{
email [email protected]
auto_https disable_redirects
admin : 2019
metrics
log {
output file /data/log/caddy/access.log {
roll_size 10MB
roll_keep 5
}
}
}
*.xy.org, xy.org {
tls {
dns cloudflare token
resolvers 1.1.1.1
}
}
metrics.xy.org {
metrics
}
beacon.xy.org {
reverse_proxy 192.168.1.1
}
monitor.xy.org {
reverse_proxy 10.120.30.117:8080
}
Reverse proxies are defined via labels in the corresponding Docker Compose files, and this works like a charm; just the admin API does not work, no matter what I do. I would appreciate any hints and help.
The problem is /home. Caddy runs as the caddy user, which doesn't have permission to read into the HOME of other users.
This is because the /home/youruser directory has drwxr-x--- permissions, which means only youruser or yourgroup can read or traverse into it (the x permission is needed on a directory to enter it).
In Linux, every directory all the way up the chain needs to have x for that user to be able to “see” things. And most of the time, the home directory itself is what gates it. It’s an intentional security mechanism to prevent just any software from reading user files unless granted.
You should serve your site from /srv or /var/www, that’s standard.