If you are running CrowdSec in a Docker environment, you already know how vital it is for securing your infrastructure. But when it comes to checking your manager dashboard or logs, you face a common dilemma: how do you access it remotely without opening vulnerable ports to the public internet?
The answer is Tailscale.
By using a “Sidecar” pattern in Docker Compose, we can seamlessly attach a CrowdSec manager container directly to your private Tailscale network (Tailnet). This gives you secure, encrypted access to your container from any device in the world, while perfectly preserving your local network routing and Traefik configurations.
Here is exactly how to set it up—and how to lock it down using strict Role-Based Access Control (RBAC).
Before diving into the Compose file, you will need:

- A Tailscale account and an auth key (generated in the admin console)
- Docker and Docker Compose installed on the host
- An existing Docker network for your reverse proxy (here called `pangolin`)
The magic of this setup lies in the network_mode: service:tailscale directive. Instead of putting our CrowdSec manager on the standard Docker network, we are hiding it behind a dedicated Tailscale container.
Here is the docker-compose.yml file you need:
services:
  # 1. The Tailscale Sidecar
  tailscale:
    image: tailscale/tailscale:latest
    container_name: tailscale-crowdsec
    hostname: crowdsec-manager-ts # This name appears in your Tailscale dashboard
    environment:
      - TS_AUTHKEY=tskey-auth-xxxxx # Replace with your actual Auth Key!
      - TS_STATE_DIR=/var/lib/tailscale
    volumes:
      - tailscale-data:/var/lib/tailscale
      - /dev/net/tun:/dev/net/tun
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    ports:
      - "127.0.0.1:8080:8080" # Preserves safe localhost access on the host
    networks:
      pangolin:
        aliases:
          - crowdsec-manager # Crucial: allows Traefik/other containers to route traffic here
    restart: unless-stopped

  # 2. Your CrowdSec Manager
  crowdsec-manager:
    image: hhftechnology/crowdsec-manager:latest
    container_name: crowdsec-manager
    network_mode: service:tailscale # Merges this container's network with Tailscale
    depends_on:
      - tailscale
    restart: unless-stopped
    environment:
      - PORT=8080
      - ENVIRONMENT=production
      - TRAEFIK_DYNAMIC_CONFIG=/etc/traefik/dynamic_config.yml
      - TRAEFIK_CONTAINER_NAME=traefik
      - TRAEFIK_STATIC_CONFIG=/etc/traefik/traefik_config.yml
      - CROWDSEC_METRICS_URL=http://crowdsec:6060/metrics
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /root/config:/app/config
      - /root/docker-compose.yml:/app/docker-compose.yml
      - ./backups:/app/backups
      - ./data:/app/data

networks:
  pangolin:
    external: true

volumes:
  tailscale-data:
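A note on the network alias: because the sidecar answers to `crowdsec-manager` on the pangolin network, any container that already proxies to that hostname keeps working. As a hedged illustration (the router/service names and the hostname below are placeholders, assuming a Traefik file provider), a dynamic-config fragment could target it like this:

```yaml
# Hypothetical Traefik dynamic config - names are placeholders
http:
  routers:
    crowdsec-mgr:
      rule: "Host(`manager.example.internal`)"
      service: crowdsec-mgr
  services:
    crowdsec-mgr:
      loadBalancer:
        servers:
          - url: "http://crowdsec-manager:8080" # resolves via the network alias
```

Nothing in this fragment needs to change when you bolt the Tailscale sidecar on, which is the point of the alias.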
Because the sidecar publishes the network alias crowdsec-manager, other containers on your pangolin network (like Traefik) can still talk to it using the hostname they always have. Nothing breaks. The tailscale-data volume ensures that when you restart or update your containers, your node doesn’t forget its Tailscale identity. Once you run docker-compose up -d, your container will join your Tailnet. You now have three distinct ways to access it:
- From anywhere on your Tailnet: http://crowdsec-manager-ts:8080
- Locally on the Docker host: http://127.0.0.1:8080
- From other containers on the pangolin network: http://crowdsec-manager:8080

If you are running a multi-user Tailnet (such as a shared homelab or a corporate team environment), a “default-allow” policy isn’t enough. You need to restrict exactly who can access your CrowdSec manager.
Tailscale manages network traffic rules using Access Control Lists (ACLs) defined in HuJSON. To strictly control access to the CrowdSec container, we will transition from identity-based human access to Role-Based Access Control (RBAC) using Tailscale groups and tags.
To enforce the principle of least privilege, we must implement the following architecture:
- A group of trusted users (group:secops) authorized to view the dashboard.
- A machine tag (tag:crowdsec) applied to the Docker container.
- An ACL rule allowing traffic only from group:secops to tag:crowdsec on port 8080.

Before modifying the ACLs, we need to ensure the CrowdSec Tailscale sidecar is properly tagged.
"tagOwners": {
  "tag:crowdsec": ["group:secops"]
},
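With the tag owner defined, the sidecar can also request the tag itself at login. The official tailscale/tailscale image forwards extra flags to the client via the TS_EXTRA_ARGS environment variable; a hedged sketch of the addition to the sidecar’s environment (this only works if the auth key is permitted to carry the tag):

```yaml
# Sketch: addition to the tailscale service in docker-compose.yml
environment:
  - TS_AUTHKEY=tskey-auth-xxxxx # tagged auth key from the admin console
  - TS_STATE_DIR=/var/lib/tailscale
  - TS_EXTRA_ARGS=--advertise-tags=tag:crowdsec # request tag:crowdsec at login
```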
1. In the Tailscale admin console, generate a new auth key and attach the tag:crowdsec tag to the key during creation.
2. Replace the TS_AUTHKEY in your docker-compose.yml with this new tagged key. When the container boots, it identifies to the control plane as a tagged machine rather than inheriting the identity of the user who created the key.

Navigate to your Tailscale Admin Console’s Access Controls tab. Below is the required HuJSON configuration to lock down the container:
{
  // 1. Define your administrative groups
  "groups": {
    "group:secops": [
      "[email protected]",
      "[email protected]"
    ]
  },

  // 2. Define who is allowed to deploy infrastructure with specific tags
  "tagOwners": {
    "tag:crowdsec": ["group:secops"]
  },

  // 3. Define the routing rules (ACLs)
  "acls": [
    // Enforce explicit access: only SecOps can hit the CrowdSec dashboard via port 8080
    {
      "action": "accept",
      "src": ["group:secops"],
      "dst": ["tag:crowdsec:8080"]
    },

    // (Optional) Permit the CrowdSec manager to initiate outbound connections to the internet
    // via an exit node, or to other internal services for log ingestion over Tailscale.
    {
      "action": "accept",
      "src": ["tag:crowdsec"],
      "dst": ["autogroup:internet:*"]
    }
  ]
}
- "action": "accept": Tailscale operates on a default-deny paradigm once strict rules are applied. You only write accept rules.
- "src": ["group:secops"]: Because this is bound to a group, Tailscale’s control plane will verify the cryptographic identity of the source node against the user’s current session state.
- "dst": ["tag:crowdsec:8080"]: This explicitly binds the rule to the Layer 4 port (8080). Even if a user in the secops group attempts to hit port 22 (SSH) or 80, the control plane’s packet filter will silently drop the packets.

Once applied, Tailscale pushes the updated packet filtering rules to all nodes almost instantly via the WireGuard tunnels.
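You can also encode this verification into the policy itself: Tailscale ACL files support a "tests" section that the control plane evaluates every time you save the policy, rejecting a save that would break the stated intent. A hedged example asserting the rules above (adjust the group to your own users):

```jsonc
{
  "tests": [
    {
      // SecOps members may reach the dashboard port, and nothing else
      "src": "group:secops",
      "accept": ["tag:crowdsec:8080"],
      "deny": ["tag:crowdsec:22"]
    }
  ]
}
```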
- Any user outside group:secops who attempts to navigate to http://crowdsec-manager-ts:8080 will see the TCP connection immediately time out.
- Local access paths (such as 127.0.0.1:8080 from the host OS) remain entirely unaffected, as that traffic routes through the local Linux network namespace, bypassing the Tailscale WireGuard interface entirely.

By deploying Tailscale as a sidecar and enforcing strict ACLs, you get the best of both worlds: robust local network compatibility and locked-down, zero-trust remote access. Spin it up, connect your devices, and enjoy highly secure access to your infrastructure from anywhere!
I am rarely on Discord, but I’ll try to visit to raise the subject.
When adding a private resource I also can’t select my local site (the pangolin VPS machine).
Maybe it just was missed to be implemented?
I suggest you write Owen or Milo Schwartz (maintainers of Pangolin) via e-mail or post a support ticket in the Pangolin Discord channel. They replied to me very politely and helped me through several problems already. Maybe this was just missed and you can put it on their minds? Or am I missing something about why this shouldn’t work?
next-router/api-router as in the guide.

Might just be me, but I have the same issue and tried `127.0.0.1`, `localhost`, `0.0.0.0` but nothing worked.
Did you simply try to use the actual IP of the server Pangolin is running on?
My problem is different. It seems not possible to add private resources to local sites :-/
It seems private resources cannot be added to local sites.
In my setup I do have services I want to expose as public resources on the Pangolin VPS, but for some reason I have not figured out what to put in as IP/Hostname.
Might just be me, but I have the same issue and tried `127.0.0.1`, `localhost`, `0.0.0.0` but nothing worked.
This was not clear to me reading the documentation, and it is still not clear to me.
Maybe in the future I will be able to expose the service.
Hope this helps a little bit.
The `placeholder` approach at least explains why:
- Private resources
- Health checking
- Docker socket scanning
are not working.
Again a super solid piece of summary of yours. Thank you! I’ve been using Pangolin as a VPS gateway for nearly 5 months now and have benefitted from dozens of your tutorials about Pangolin, monitoring, backup, etc.
Today I have another question, which I thought would (hopefully) fit well in here:
I have newt running in an LXC on my homelab (actually multiple newt containers across multiple machines in my homelab) and they have full subnet access (192.168.178.0/24 etc.). Quick question: is this also the case for my Tailscale subnet (100…) when my newt LXC/Proxmox host is within my tailnet?
Even though I ABSOLUTELY love spinning up a new domain and newt tunnels for my services comfortably in my Pangolin dashboard in minutes, sometimes I think: should I somehow harden this a little bit?
I mean, I have CrowdSec, geoban, Pangolin auth, and additionally Pangolin firewall rules restricted to only a few countries, with auth and so on. But just in case the newt code were ever compromised, I sometimes think about how I could secure my subnet a little more, if that makes any sense. Of course it should be a reasonable and efficient restriction/hardening.
I thought about strict outgoing firewall rules for the newt LXC, which of course would mean updating the rules in Proxmox every time I spin up a new tunnel to a new IP. Maybe this would be a consistent, reasonably fast solution?
I wonder if someone has better suggestions or thoughts about this topic. As I learned: better to assume a breach from within someday than to only guard the gates. Might also be too paranoid.
I just don’t understand if I still need to install this local site so it shows in my Pangolin dashboard as connected.
You can access the Dashboard and use Pangolin without ever creating a local site.
This is the way I think of it: reverse proxies work by taking in traffic and routing it to a backend. Usually the backend is on a local network that connects to that resource. Modern reverse proxies now use peer-to-peer connections to remove the requirement that the resource has to be on the same network as the proxy.
Now when you create a site, you are telling Pangolin: hey, I have a resource on my local network, please route it via the local route.
With that being said, local routes are like plugging a physical network into the reverse proxy to use as well, rather than the peer-to-peer newt.
You basically repeated what I said I want to achieve.
Use this to expose resources on the same host as your Pangolin server (self-hosted only). No tunnels are created. Required ports must be open on the Pangolin host.
But the following instructions on how to install sites simply do not mention how to “install” this site.
Yes, I do understand how to create my separate docker networks on the host. I know how to group containers into different networks to only allow access as intended.
I just don’t understand if I still need to install this local site so it shows in my Pangolin dashboard as connected.
Or are “local sites” never installed, and it showing offline is just what this text meant?
Local sites do not support:
- Private resources
- Health checking
- Docker socket scanning
If trying to create a private resource, I can’t select this local site. If I try creating a public resource, I can select this local site.
Please help clarify this so I can proceed with Pangolin.
Whenever I try to set up captcha through CrowdSec manager, Apply shows:
Create captcha HTML page
traefik configuration directory not found: stat config/traefik: no such file or directory
and:
Update Traefik dynamic config
failed to read dynamic_config.yml from local path: open config/traefik/dynamic_config.yml: no such file or directory
I didn’t change any folders, either in my Traefik compose or in the CrowdSec manager compose. I also didn’t change the folder in the settings of CrowdSec manager.
Config validation shows that the dynamic config and static config match.
usulnet is a self-hosted Docker management platform written in Go. It ships as a single ~70MB binary — no Node.js, no Electron, no Python. Templates compiled at build time, frontend is HTMX + Alpine.js + Tailwind. PostgreSQL + Redis + NATS, all with TLS by default.
curl -fsSL https://raw.githubusercontent.com/fr4nsys/usulnet/main/deploy/install.sh | sudo bash
All secrets generated automatically. Up and running in under 60 seconds.
The project is functional but still in its early stages. Some features may have bugs or rough edges. If you run into anything, issues and feedback are very welcome — it helps a lot to improve things.
I’m building this mostly thinking about sysadmin, networking, and enterprise environments, but also with devops workflows in mind. The goal is to have a professional, all-in-one platform for managing Docker infrastructure without depending on external SaaS or cloud services.
Happy to answer questions or hear any feedback!
Could I easily follow this and add crowdsec/mwm to my setup?
Asking as I tried adding it before, but it broke in some fashion (mwm couldn’t add the plugin; posted on Discord).
{ "crowdsec-bouncer-traefik" }
instead of
{ "crowdsec" }
plugin: unknown plugin type: crowdsec
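For reference, with the maxlerebourg CrowdSec bouncer plugin for Traefik, the key used under `plugin:` in the middleware must match the key declared under `experimental.plugins` in the static config; if they differ, Traefik reports exactly this "unknown plugin type" error. A hedged sketch (the version is a placeholder; pin it to the latest released tag yourself):

```yaml
# Static config (traefik.yml) - registers the plugin under a key
experimental:
  plugins:
    crowdsec-bouncer-traefik:
      moduleName: "github.com/maxlerebourg/crowdsec-bouncer-traefik-plugin"
      version: "vX.Y.Z" # placeholder: use the latest released tag

# Dynamic config - the key under "plugin:" must match the one above
http:
  middlewares:
    crowdsec:
      plugin:
        crowdsec-bouncer-traefik:
          enabled: true
```

So the middleware can still be named `crowdsec`; it is only the nested plugin key that has to be `crowdsec-bouncer-traefik`.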
“Critical
Security Engine: No working remediation components
Since February 18, 2026 (12:40)
Security Engine has no working remediation components and cannot block attacks effectively.
Important
Security Engine: no activity
Since February 08, 2026 (16:02)
Security Engine has not pushed alerts for more than 48 hours and might not be functioning properly.”
I have 3 remediation components (Traefik bouncers); one doesn’t show a version or any other info, and the other two seem to have been inactive for 13 days now.
HostRegexp(`.+`) in order for it to work for wildcard DNS.

What am I missing?
Here’s my Middleware:
{
"crowdsec": {
"crowdsecAppsecEnabled": true,
"crowdsecAppsecFailureBlock": true,
"crowdsecAppsecHost": "crowdsec:7422",
"crowdsecAppsecUnreachableBlock": true,
"crowdsecLapiHost": "crowdsec:8080",
"crowdsecLapiKey": "redacted",
"enabled": true
}
}
docker exec cscli metrics:
ubuntu@mw-oracle:~/pangolin/config/middleware-manager$ docker exec crowdsec cscli metrics
+------------------------------------------------------------------------------------------------------------------------------+
| Acquisition Metrics |
+-----------------------------------+------------+--------------+----------------+------------------------+-------------------+
| Source | Lines read | Lines parsed | Lines unparsed | Lines poured to bucket | Lines whitelisted |
+-----------------------------------+------------+--------------+----------------+------------------------+-------------------+
| file:/var/log/traefik/access.log | 160 | 160 | - | - | - |
| file:/var/log/traefik/traefik.log | 825 | 825 | - | - | - |
+-----------------------------------+------------+--------------+----------------+------------------------+-------------------+
+-------------------------------------------------+
| Local API Decisions |
+-----------------------+--------+--------+-------+
| Reason | Origin | Action | Count |
+-----------------------+--------+--------+-------+
| http:bruteforce | CAPI | ban | 1231 |
| http:crawl | CAPI | ban | 747 |
| http:exploit | CAPI | ban | 15986 |
| http:scan | CAPI | ban | 864 |
| vm-management:exploit | CAPI | ban | 1 |
+-----------------------+--------+--------+-------+
+------------------------------------+
| Local API Metrics |
+--------------------+--------+------+
| Route | Method | Hits |
+--------------------+--------+------+
| /v1/allowlists | GET | 12 |
| /v1/heartbeat | GET | 11 |
| /v1/usage-metrics | POST | 1 |
| /v1/watchers/login | POST | 67 |
+--------------------+--------+------+
+--------------------------------------------+
| Local API Machines Metrics |
+-----------+----------------+--------+------+
| Machine | Route | Method | Hits |
+-----------+----------------+--------+------+
| localhost | /v1/allowlists | GET | 12 |
| localhost | /v1/heartbeat | GET | 11 |
+-----------+----------------+--------+------+
+--------------------------------------------------------------------+
| Parser Metrics |
+----------------------------------------+-------+--------+----------+
| Parsers | Hits | Parsed | Unparsed |
+----------------------------------------+-------+--------+----------+
| child-child-crowdsecurity/traefik-logs | 1.97k | 985 | 985 |
| child-crowdsecurity/http-logs | 2.96k | 985 | 1.97k |
| child-crowdsecurity/traefik-logs | 1.97k | 985 | 985 |
| crowdsecurity/dateparse-enrich | 985 | 985 | - |
| crowdsecurity/http-logs | 985 | - | 985 |
| crowdsecurity/non-syslog | 985 | 985 | - |
| crowdsecurity/public-dns-allowlist | 985 | 985 | - |
| crowdsecurity/traefik-logs | 985 | 985 | - |
| crowdsecurity/whitelists | 985 | 985 | - |
+----------------------------------------+-------+--------+----------+
+---------------------------------------------------------------------------------------+
| Whitelist Metrics |
+------------------------------------+-----------------------------+------+-------------+
| Whitelist | Reason | Hits | Whitelisted |
+------------------------------------+-----------------------------+------+-------------+
| crowdsecurity/public-dns-allowlist | public DNS server | 985 | - |
| crowdsecurity/whitelists | private ipv4/ipv6 ip/ranges | 985 | - |
+------------------------------------+-----------------------------+------+-------------+
Is it currently working for you? Fail2ban isn’t blocking anything.
16 Feb 2026
NetBird’s been cooking recently. Another major release with fewer configuration files: v0.65.0.
NetBird now includes a built-in reverse proxy in the management server, enabling proxied access to backend services through your NetBird network, allowing you to expose your services to the public with the option to secure them with SSO, PINs, or passwords.
And here is Migration Guide: Enable Reverse Proxy Feature for existing users who want to upgrade their self-hosted netbird to this version.
20 Feb 2026
In v0.65.3 they added another Migration Guide: Combined Container Setup
Their installation script now includes these lines in config.yaml. If you installed it before like me, then just add these lines.
reverseProxy:
  trustedHTTPProxies:
    - "172.30.0.10/32"
For people like me who are still using Pangolin as their reverse proxy and NetBird as their overlay VPN, I will keep updating the posts. I haven’t tested the migration guides myself, as I did a fresh install of the combined container version.
docker exec crowdsec cscli decisions add --ip 1.2.3.4 --duration 1h --type ban
Can see this rule added in the decision list:
docker exec crowdsec cscli decisions list
Source: cscli, Reason: manual ‘captcha’ from ‘localhost’
But if I’m connecting from this IP (it’s a VPN server I chose to test with; I can confirm the same IP is shown for the device I’m connecting from), I can still reach the Pangolin login page.
And the applications on subdomains provided by Pangolin continue to work.
Why is that? I was thinking this should enforce the 404, yet I can’t make it work.
I have some IPs in an allowlist (docker exec crowdsec cscli allowlist inspect my_allowlist), but not this one.
CrowdSec is installed by the Pangolin installer, and there is no host-system bouncer (firewall) installed yet.
New discussion on NetBird + Pangolin (Pangolin as reverse proxy for NetBird) Setup Guide
Locking this thread
Disclaimer: Only tested on fresh installations.
Prerequisites: Pangolin installed. How to Self-host Pangolin - Identity-aware VPN and Reverse Proxy for Easy Remote Access - youtube.com/@pangolin-net
After my recent failed attempt, I finally got NetBird running properly under Pangolin, with both running on the same server as well as on different servers. Thanks to NetBird for such awesome changes recently!
NetBird has made it way simpler by adding embedded STUN directly in their relay service instead of using a separate coturn service → [infra] add embedded STUN to getting started (#5141), which they added in v0.64.0.
They also added this guide → Migration Guide: From Coturn to Embedded STUN Server - NetBird Docs
Their Self-Hosting Quickstart Guide (5 min) - NetBird Docs doesn’t explicitly mention the removed coturn service; I think they will update the instructions in the next major version. They have updated their script, though.
Here is my recent installation. Use this as a reference for my guides below:
$ curl -fsSL https://github.com/netbirdio/netbird/releases/latest/download/getting-started.sh | bash
The NETBIRD_DOMAIN variable cannot be empty.
Enter the domain you want to use for NetBird (e.g. netbird.my-domain.com): nb.yourdomain.com
Which reverse proxy will you use?
[0] Traefik (recommended - automatic TLS, included in Docker Compose)
[1] Existing Traefik (labels for external Traefik instance)
[2] Nginx (generates config template)
[3] Nginx Proxy Manager (generates config + instructions)
[4] External Caddy (generates Caddyfile snippet)
[5] Other/Manual (displays setup documentation)
Enter choice [0-5] (default: 0): 5
Should container ports be bound to localhost only (127.0.0.1)?
Choose 'yes' if your reverse proxy runs on the same host (more secure).
Bind to localhost only? [Y/n]: Y
Rendering initial files...
==========================================
MANUAL REVERSE PROXY SETUP
==========================================
Container ports (bound to 127.0.0.1):
Dashboard: 8080
NetBird Server: 8081 (all services: management, signal, relay)
Configure your reverse proxy with these routes (all go to the same backend):
WebSocket (relay, signal, management WS proxy):
/relay*, /ws-proxy/* -> 127.0.0.1:8081
(HTTP with WebSocket upgrade, extended timeout)
Native gRPC (signal + management):
/signalexchange.SignalExchange/* -> 127.0.0.1:8081
/management.ManagementService/* -> 127.0.0.1:8081
(gRPC/h2c - plaintext HTTP/2)
HTTP (API + embedded IdP):
/api/*, /oauth2/* -> 127.0.0.1:8081
Dashboard (catch-all):
/* -> 127.0.0.1:8080
IMPORTANT: gRPC routes require HTTP/2 (h2c) upstream support.
WebSocket and gRPC connections need extended timeouts (recommend 1 day).
Press Enter when your reverse proxy is configured (or Ctrl+C to exit)...
Starting NetBird services
[+] up 4/4
✔ Network netbird_netbird Created 0.0s
✔ Volume netbird_netbird_data Created 0.0s
✔ Container netbird-dashboard Created 0.1s
✔ Container netbird-server Created 0.1s
Waiting for NetBird server to become ready . . . . . . . . done
Done!
NetBird is now running. Access the dashboard at:
https://nb.yourdomain.com
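The gRPC note in the script output above is the part most proxies get wrong. In a Traefik-based setup such as Pangolin’s, an h2c (plaintext HTTP/2) upstream can be expressed in dynamic config roughly like this (a sketch only; the router/service names, domain, and entrypoint wiring are placeholders):

```yaml
http:
  routers:
    netbird-grpc:
      rule: "Host(`nb.yourdomain.com`) && (PathPrefix(`/signalexchange.SignalExchange/`) || PathPrefix(`/management.ManagementService/`))"
      service: netbird-grpc
  services:
    netbird-grpc:
      loadBalancer:
        servers:
          - url: "h2c://127.0.0.1:8081" # plaintext HTTP/2 (h2c) to the NetBird server
```

The `h2c://` scheme is what tells Traefik to speak HTTP/2 cleartext to the backend; a plain `http://` URL would downgrade the gRPC streams and the clients would fail to connect.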
Note that Pangolin path matching does not accept the * character at the end. You can simply copy the paths from the terminal instructions provided by the netbird script, without the * character:

- /relay
- /ws-proxy/
- /signalexchange.SignalExchange/
- /management.ManagementService/
- /api/
- /oauth2/
- /

Install Newt on the remote server and add a remote site in the Pangolin dashboard. Then install NetBird in the netbird directory using their quick start guide installation script, and when on this step -
Press Enter when your reverse proxy is configured (or Ctrl+C to exit)…
Add reverse proxy configuration in Pangolin, and press enter on Netbird script.
If you installed both on same server then, first go to Pangolin UI and add your pangolin server as local site in your dashboard -
And then when installing NetBird on this step -
Press Enter when your reverse proxy is configured (or Ctrl+C to exit)…
Create a public resource on that local site with the paths provided by netbird
Add reverse proxy configuration in Pangolin, and press enter on Netbird script.
As you can see, I’m using direct service names and ports here, because when I tried the 127.0.0.1 method as in the first setup, Pangolin was not detecting NetBird for me for some reason. So I moved NetBird directly to the pangolin network, as you can see in my instructions below -
Edit the docker-compose.yml file inside the netbird folder where you installed NetBird, and make sure Pangolin is already running:
# In every service, replace this -
networks: [netbird]
# with this -
networks: [pangolin]
# And for this global networks declaration part in compose file at the bottom, replace this -
networks:
  netbird:
# with this -
networks:
  pangolin:
    external: true
For Cloudflare users who run their Pangolin server with Cloudflare Proxy/Orange Cloud - Pangolin Docs turned ON:
Make sure that you enable gRPC in your Cloudflare Dashboard > Your Domain > Network
So that your devices connect properly using STUN. I was facing this problem before, and discovered this option today, which fixed my problem when using Cloudflare Proxy/Orange Cloud.