A minimal Kubernetes controller that runs a WireGuard hub on a node with public IP access. The controller watches for peer registrations via node annotations and dynamically configures the local WireGuard interface.
```
Hub Node (public IP)
┌──────────────────────────────────────────┐
│  DaemonSet Pod (hostNetwork: true)       │
│  ┌────────────────────────────────────┐  │
│  │  wg-kube controller                │  │
│  │  1. Creates wg0 via wg-quick       │  │
│  │  2. Watches node annotations       │  │
│  │  3. Reconfigures on peer changes   │  │
│  └────────────────────────────────────┘  │
└──────────────────────────────────────────┘
```
Peers self-register by labeling and annotating their own Kubernetes Node objects. The controller detects these changes and reconfigures WireGuard accordingly.
```bash
# Deploy to a cluster
kubectl apply -f deploy/namespace.yaml
kubectl apply -f deploy/rbac.yaml
kubectl apply -f deploy/daemonset.yaml

# Label a node as the WireGuard hub
kubectl label node <hub-node> wireguard.kube/hub=true
```

| Flag | Default | Description |
|---|---|---|
| `--annotation-prefix` | `wireguard.kube/` | Prefix for all annotations and labels |
| `--listen-port` | `51820` | WireGuard UDP listen port |
| `--address` | `100.96.0.1/32` | WireGuard interface address (CIDR) |
| `--allowed-ips` | (same as `--address`) | CIDRs the hub advertises to spokes (published as annotation) |
| `--node-name` | `$NODE_NAME` | Kubernetes node name |
| `--interface` | `wg0` | WireGuard interface name |
| `--private-key-file` | (generate new) | Path to existing private key file |
| `--endpoint` | (auto-discover) | Override endpoint IP |
| `--kubeconfig` | (in-cluster) | Path to kubeconfig for out-of-cluster use |
All keys use the configurable prefix (default `wireguard.kube/`).

| Key | Applied To | Value | Description |
|---|---|---|---|
| `wireguard.kube/hub` | Hub node | `"true"` | Marks a node as the WireGuard hub. Set automatically by the controller on startup. Used as the DaemonSet `nodeSelector`. |
| `wireguard.kube/peer` | Peer node | `"true"` | Marks a node as a WireGuard peer. Must be set by the peer (or an external process) to register with the hub. |
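Since the hub label doubles as the DaemonSet's scheduling constraint, the DaemonSet spec pins the controller pod to the hub node roughly like the following fragment. The field values are inferred from the description above, not copied from the repository's `deploy/daemonset.yaml`:

```yaml
# Fragment of a DaemonSet pod spec pinning the controller to the hub node.
# Inferred from the docs above, not from the actual deploy/daemonset.yaml.
spec:
  template:
    spec:
      hostNetwork: true            # needed to manage the host's wg0 interface
      nodeSelector:
        wireguard.kube/hub: "true" # only schedule onto the labeled hub node
```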
| Key | Applied To | Required | Description |
|---|---|---|---|
| `wireguard.kube/public-key` | Hub node | Auto | The hub's WireGuard public key (base64). Set automatically by the controller on startup. Peers can read this to configure their own WireGuard tunnel. |
| `wireguard.kube/endpoint` | Hub node | Auto | The hub's WireGuard endpoint (`IP:port`). Set automatically by the controller from the node's ExternalIP. |
| `wireguard.kube/allowed-ips` | Hub node | Auto | CIDRs the hub advertises to spokes (defaults to `--address`). Spokes use this to configure which traffic routes through the hub. |
| `wireguard.kube/public-key` | Peer node | Yes | The peer's WireGuard public key (base64). Required for the controller to configure the peer. |
| `wireguard.kube/endpoint` | Peer node | No | The peer's WireGuard endpoint (`IP:port`). If omitted, the hub waits for the peer to initiate the connection. |
| `wireguard.kube/allowed-ips` | Peer node | No | Comma-separated CIDRs the peer is allowed to route (e.g., `10.0.0.0/24,10.1.0.0/24`). Defaults to the peer node's InternalIP/32 if not set. |
```bash
# Register a node as a WireGuard peer
kubectl label node my-peer-node wireguard.kube/peer=true
kubectl annotate node my-peer-node \
  wireguard.kube/public-key="aBcDeFgHiJkLmNoPqRsTuVwXyZ1234567890+abc=" \
  wireguard.kube/allowed-ips="10.0.1.0/24" \
  wireguard.kube/endpoint="203.0.113.10:51820"
```

```bash
# Get the hub's public key and endpoint
kubectl get node <hub-node> -o jsonpath='{.metadata.annotations.wireguard\.kube/public-key}'
kubectl get node <hub-node> -o jsonpath='{.metadata.annotations.wireguard\.kube/endpoint}'
```

```bash
# Build locally
go build -o wg-kube .

# Build container image
docker build -t ghcr.io/<owner>/wg-kube:latest .
```

The `spoke/wg-spoke.sh` script runs on a Kubernetes node to act as a WireGuard spoke peer. It generates a key pair, registers itself via node annotations, then discovers the hub node by label (`wireguard.kube/hub=true`) and polls it for its public key and endpoint. When the hub config changes (or on first discovery), it writes a local wg-quick config and restarts the interface.
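The config-writing step can be sketched as a small shell function. The function name, argument order, and `PersistentKeepalive` value below are illustrative, not the script's actual internals:

```shell
# Sketch of assembling a wg-quick config from the values discovered on the
# hub node's annotations. Names and the keepalive are illustrative only.
render_wg_conf() {
  local privkey="$1" address="$2" listen_port="$3"
  local hub_pubkey="$4" hub_endpoint="$5" hub_allowed="$6"
  cat <<EOF
[Interface]
PrivateKey = ${privkey}
Address = ${address}
ListenPort = ${listen_port}

[Peer]
PublicKey = ${hub_pubkey}
Endpoint = ${hub_endpoint}
AllowedIPs = ${hub_allowed}
PersistentKeepalive = 25
EOF
}

# Example: render a spoke config pointing at a hypothetical hub endpoint.
render_wg_conf "<spoke-private-key>" 100.96.0.2/32 51820 \
  "<hub-public-key>" 203.0.113.5:51820 100.96.0.1/32
```

The rendered file would typically be written to `/etc/wireguard/wg0.conf` before cycling the interface with wg-quick.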
```bash
# Run on a spoke node (requires wg, wg-quick, kubectl on the host)
export WG_ADDRESS=100.96.0.2/32
./spoke/wg-spoke.sh
```

| Environment Variable | Default | Description |
|---|---|---|
| `KUBECONFIG` | `/etc/kubernetes/kubelet.conf` | Path to kubeconfig |
| `NODE_NAME` | `$(hostname)` | This node's Kubernetes name |
| `WG_INTERFACE` | `wg0` | WireGuard interface name |
| `WG_ADDRESS` | `100.96.0.2/32` | Local WireGuard address (CIDR) |
| `WG_LISTEN_PORT` | `51820` | Local listen port |
| `WG_ALLOWED_IPS` | `$WG_ADDRESS` | CIDRs to advertise to the hub |
| `HUB_ALLOWED_IPS` | (hub's annotation) | CIDRs to route through the hub (overrides the hub's `allowed-ips` annotation) |
| `ANNOTATION_PREFIX` | `wireguard.kube/` | Annotation/label prefix |
| `POLL_INTERVAL` | `10` | Seconds between hub polls |
| `WG_DAEMONIZE` | `false` | Fork the poll loop into the background so the script returns immediately (for cloud-init) |
| `WG_LOG_FILE` | `/var/log/wg-spoke.log` | Log file path when daemonized |
This project expects Cilium as the cluster CNI. Spoke peer
nodes must use their WireGuard tunnel IP (e.g. 100.96.0.2) as the Kubernetes
node IP so that Cilium can route pod traffic through the WireGuard overlay.
On the spoke node, set `--node-ip` on the kubelet to the WireGuard address:

```
--node-ip=100.96.0.2
```
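On kubeadm-style nodes, one way to pass this flag is a systemd drop-in for the kubelet; the path and mechanism below are illustrative, and your distribution may configure kubelet flags differently:

```ini
# /etc/systemd/system/kubelet.service.d/20-wireguard-node-ip.conf
# Illustrative drop-in; adjust to how your node actually configures kubelet.
[Service]
Environment="KUBELET_EXTRA_ARGS=--node-ip=100.96.0.2"
```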
The WireGuard spoke script should be run during cloud-init (with `WG_DAEMONIZE=true`) before the kubelet starts, so the `wg0` interface and its address are available when the kubelet comes up.
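A minimal cloud-init sketch of that ordering — the install path, the spoke address, and enabling the kubelet afterwards are all assumptions, not something this repo ships:

```yaml
#cloud-config
runcmd:
  # Bring up wg0 and fork the poll loop before the kubelet starts.
  # The /opt/wg-kube path and address are illustrative.
  - WG_ADDRESS=100.96.0.3/32 WG_DAEMONIZE=true /opt/wg-kube/spoke/wg-spoke.sh
  - systemctl enable --now kubelet
```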
Cilium operates in a flat Layer 3 routing model. When a pod on one node needs to
reach a pod on another node, Cilium's agent looks up the destination node's
InternalIP and either tunnels (VXLAN/Geneve) or direct-routes the packet to
that address, depending on the configured
routing mode.
Either way, the kernel resolves how to reach the destination node IP using the
regular routing table.
By setting `--node-ip` to the WireGuard tunnel address on spoke nodes:

- The kubelet registers the node's InternalIP as the WireGuard address (e.g. `100.96.0.2`).
- Cilium's agent on every node discovers this address via the Kubernetes Node object and uses it as the destination for that node's pods.
- When a pod on an AKS node sends traffic to a pod on a spoke node, Cilium forwards the packet toward `100.96.0.2`. The kernel routes it through `wg0` (since wg-quick installed a route for `100.96.0.2/32` via the WireGuard interface), and WireGuard encrypts and delivers it to the spoke over the public internet.
- The reverse path works the same way — the spoke's Cilium agent sends traffic to the hub's WireGuard address (`100.96.0.1`), which traverses the tunnel back.
This works with both Cilium tunnel mode (VXLAN/Geneve) and native routing mode. In tunnel mode, Cilium encapsulates pod packets inside a VXLAN/Geneve header addressed to the node IP — WireGuard then encrypts the outer packet. In native routing mode, Cilium expects the underlying network to route pod CIDRs between nodes — the hub's `--allowed-ips` must include the spoke's pod CIDR and vice versa so WireGuard carries the pod traffic directly.
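For example, in native routing mode the hub's peer entry for a spoke would need to look something like the fragment below, where the pod CIDR and key placeholder are illustrative:

```ini
[Peer]
PublicKey = <spoke-public-key>
# The spoke's tunnel address plus its pod CIDR, so pod-to-pod traffic
# rides the tunnel directly (CIDRs illustrative):
AllowedIPs = 100.96.0.2/32, 10.244.1.0/24
```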
Either way, the WireGuard overlay is transparent to Cilium. Cilium sees a set of node IPs that happen to be reachable over WireGuard, and its normal routing logic works without any special configuration.
- Each peer change causes a brief WireGuard interface restart (`wg-quick down`/`up`), which disrupts connectivity for all existing peers (~100-500ms). This is because `wg syncconf` only updates the WireGuard cryptokey routing table without adding kernel routes for new peer `AllowedIPs`, so we fall back to a full interface cycle. This can be eliminated by either managing routes manually alongside `wg syncconf`, or by pre-routing a broad supernet at startup.
- The controller generates a new key pair on each startup unless `--private-key-file` is provided. For stable hub identity, mount a persistent key file.
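The syncconf-plus-manual-routes alternative could be sketched as below. This is not the controller's code: the interface name and CIDR are illustrative, and `run` defaults to a dry run that only echoes commands instead of executing them:

```shell
# Sketch: update peers in place and install kernel routes ourselves,
# instead of cycling the whole interface. DRY_RUN=1 (default here)
# echoes each command rather than running it.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# Re-sync the peer list without disturbing the interface or existing
# sessions (the canonical wg-quick strip + wg syncconf pattern):
run bash -c 'wg syncconf wg0 <(wg-quick strip wg0)'

# wg syncconf does not install routes, so add one for the new peer's
# AllowedIPs (CIDR illustrative):
run ip route replace 10.0.1.0/24 dev wg0
```

With `DRY_RUN=0` and root privileges, existing peers keep their handshakes while the new peer and its route are added.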