Firestack is a Go-based VPN/tunnel system that provides DNS resolution, proxy management, and anti-censorship capabilities through a userspace network stack. The system intercepts network traffic via a TUN device, processes it through a gVisor-based TCP/IP stack, and routes connections through configurable DNS resolvers and proxy chains.
This document provides an architectural overview of the entire firestack system, introducing the major components and their interactions. For detailed information about specific subsystems, see the dedicated wiki pages for each layer.
Firestack is organized into several layers that work together to provide a programmable network stack:
| Layer | Components | Primary Responsibilities |
|---|---|---|
| Application Interface | Tunnel, Resolver, Proxies | External API, system lifecycle management |
| DNS Resolution | Gateway, Transport, Cacher | DNS query processing, ALG translation, caching |
| Proxy Management | Proxifier, WgProxy, socks5, http1, auto | Proxy selection, health monitoring, routing |
| Network Stack | gtunnel, netstack.GConnHandler | gVisor TCP/IP stack, packet forwarding |
| Connection Handling | baseHandler, tcpHandler, udpHandler, icmpHandler | Flow evaluation, policy enforcement, NAT |
| Operating System | protect.Controller, TUN device | Socket protection, device I/O |
Sources: intra/tun2socks.go:1-250, intra/tunnel.go:1-150, tunnel/tunnel.go:1-250
The core architecture uses the actual struct and interface names from the codebase: the `Tunnel` interface is implemented by `rtunnel`, which coordinates the `gtunnel` (network stack), `resolver` (DNS), and `proxifier` (proxy routing). The gVisor stack routes packets to protocol-specific handlers (`tcpHandler`, `udpHandler`, `icmpHandler`), all of which extend `baseHandler` for common flow-evaluation logic.
Sources: intra/tunnel.go:126-138, intra/tcp.go:46-49, intra/udp.go:44-47, intra/icmp.go:25-27, intra/common.go:69-86, intra/dnsx/transport.go:188-207, intra/ipn/proxies.go:229-265, tunnel/tunnel.go:68-78
## Tunnel Management (`rtunnel`, `gtunnel`)

The tunnel subsystem manages the TUN device lifecycle and coordinates between the application layer and the network stack.
Key Types:
- `Tunnel` interface: Public API for tunnel operations (intra/tunnel.go:79-124)
- `rtunnel` struct: Top-level tunnel coordinator (intra/tunnel.go:126-138)
- `gtunnel` struct: gVisor network stack wrapper (tunnel/tunnel.go:68-78)

Responsibilities:
- MTU management (`tunmtu` for the TUN device, `linkmtu` for the underlying network)

Key Functions:
- `NewTunnel()`: Creates the tunnel with TUN fd, MTU, and interface addresses (intra/tunnel.go:153-169)
- `SetLinkAndRoutes()`: Attaches the TUN device to the network stack (intra/tunnel.go:360-440)
- `Restart()`: Hot-swaps the TUN device without stopping services (intra/tunnel.go:487-550)

Sources: intra/tunnel.go:1-600, tunnel/tunnel.go:1-350, intra/tun2socks.go:68-120
## DNS Resolution (`resolver`, `Gateway`)

The DNS subsystem provides query processing with Application Level Gateway (ALG) translation, caching, and support for multiple transports.
Key Types:
- `Resolver` interface: Main DNS resolver API (intra/dnsx/transport.go:168-186)
- `resolver` struct: Implements multi-transport DNS resolution (intra/dnsx/transport.go:188-207)
- `Gateway` interface: ALG translation and IP mapping (intra/dnsx/alg.go:83-104)
- `dnsgateway` struct: Implements the ALG with a three-map system (intra/dnsx/alg.go:857-877)

ALG Translation System:
The ALG system generates fake IPs (from the RFC 6598 range 100.64.x.x for IPv4, and an RFC 8215 prefix variant for IPv6) to enable per-domain routing policies. Three maps maintain bidirectional translations:
- `alg` map: domain+qtype → `algans` (fake IP answers)
- `nat` map: `algip` → `baseans` (fake IP → real IP)
- `ptr` map: `realip` → `baseans` (real IP → domains)

Key Functions:
- `q()`: Queries primary/secondary transports, performs ALG translation (intra/dnsx/alg.go:1142-1320)
- `X()`: Undoes ALG translation, returns real IPs (intra/dnsx/alg.go:1481-1558)
- `PTR()`: Reverse lookup from IP to domain names (intra/dnsx/alg.go:1562-1613)

Sources: intra/dnsx/alg.go:1-2500, intra/dnsx/transport.go:188-1100, intra/dnsx/cacher.go:1-500
## Proxy Management (`proxifier`, `Proxy` implementations)

The proxy subsystem selects and manages connections through various proxy types, with health monitoring and automatic failover.
Key Types:
- `Proxies` interface: Proxy provider API (intra/ipn/proxies.go:217-228)
- `proxifier` struct: Manages the proxy pool and selection (intra/ipn/proxies.go:229-265)
- `Proxy` interface: Common proxy interface (intra/ipn/proxies.go:167-188)
- Implementations: `wgproxy`, `socks5`, `http1`, `auto`, `exit`, `base` (intra/ipn/proxies.go:155-166)

Connection Pinning: The system uses two Sieve caches for intelligent routing:
- `ipPins`: Maps `netip.AddrPort` → proxy id (per-destination pinning)
- `uidPins`: Maps uid+dst → proxy id (per-app pinning)

Pins expire after 10 minutes (`pintimeout`) unless refreshed by successful connections.
Proxy Types:
- Exit: Direct connection via the underlying network (always available)
- Base: May loop back through the tunnel (for DNS over VPN)
- Block: Drops all traffic (policy enforcement)
- Auto: Races multiple proxies, selects the fastest
- WgProxy: WireGuard with gVisor integration; supports hop chaining

Key Functions:
- `ProxyTo()`: Selects a proxy for the destination, checking pins first (intra/ipn/proxies.go:453-600)
- `Refresh()`: Updates proxy health, re-resolves endpoints (intra/ipn/wgproxy.go:329-379)
- `Ping()`: Sends WireGuard keepalive packets (intra/ipn/wgproxy.go:255-293)

Sources: intra/ipn/proxies.go:1-1300, intra/ipn/wgproxy.go:1-1500, intra/ipn/auto.go:1-400, intra/ipn/proxy.go:1-800
## Network Stack (`gtunnel`, Protocol Handlers)

The network stack layer bridges the TUN device to the gVisor TCP/IP stack and routes packets to protocol-specific handlers.
Packet Flow:
1. `endpoint.Attach()` → `linkDispatcher.dispatch()` reads packets from the TUN device
2. `supervisor.distribute()` hashes the 5-tuple to select a processor (0-7)
3. `stack.InjectInbound()` injects the packet into the gVisor stack
4. `baseHandler.onFlow()` performs the ALG undo and calls the policy listener
5. `proxifier.ProxyTo()` selects a proxy

Key Types:
- `GConnHandler`: Protocol handler interface (intra/netstack/netstack.go:1-200)
- `tcpHandler`: TCP connection proxy (intra/tcp.go:46-49)
- `udpHandler`: UDP connection proxy with EIM/EIF NAT (intra/udp.go:44-47)
- `icmpHandler`: ICMP echo (ping) handler (intra/icmp.go:25-27)
- `baseHandler`: Common flow-evaluation logic (intra/common.go:69-86)

Key Functions:
- `Proxy()`: Main entry point for connection requests (intra/tcp.go:241-363, intra/udp.go:162-191)
- `onFlow()`: Performs the ALG undo, calls the policy listener (intra/common.go:133-230)
- `judge()`: Evaluates the flow against policies, returns a `Mark` (intra/common.go:264-355)
- `Connect()`: Selects a proxy and establishes the connection (intra/udp.go:194-386)

Sources: intra/tcp.go:1-450, intra/udp.go:1-400, intra/icmp.go:1-150, intra/common.go:1-800, tunnel/tunnel.go:1-350
## Policy Engine (`SocketListener`, Flow Evaluation)

The policy engine determines routing decisions for each connection through a multi-stage evaluation process.
Policy Flow: each new flow is evaluated by the policy listener, which returns a `*Mark` carrying the routing decision.
Key Types:
- `SocketListener` interface: Policy decision callbacks (intra/listener.go:1-200)
- `Mark`: Flow evaluation result with PIDs (intra/common.go:1-100)
- `SocketSummary`: Connection statistics reported on close

Key Functions:
- `Preflow()`: Early UID detection (intra/common.go:165-187)
- `Flow()`: Main policy evaluation (intra/common.go:133-230)
- `PostFlow()`: Final routing notification
- `OnSocketClosed()`: Connection summary reporting

Sources: intra/listener.go:1-200, intra/common.go:1-800
ALG translation enables per-domain routing policies by generating fake IP addresses for DNS responses. When a connection is made to a fake IP, the system translates it back to the real IP and associated domain name, allowing the policy engine to make routing decisions based on domain names rather than just IP addresses.
Fake IP Ranges:
- 100.64.x.x (RFC 6598, Carrier-Grade NAT) for IPv4
- 64:ff9b:1:da19::/96 (an RFC 8215 NAT64 prefix variant) for IPv6

Translation Process:
1. `gateway.q()` queries the upstream transport
2. `translate()` generates fake IPs for the answers
3. The translation is recorded in the `alg`, `nat`, and `ptr` maps
4. On connect, `X()` translates fake IPs back to real IPs + domains

Sources: intra/dnsx/alg.go:857-2500
Pinning associates a successful connection with a specific proxy for a duration, avoiding repeated proxy selection overhead and providing stable routing for ongoing sessions.
Pin Types:
- `ipPins`: `netip.AddrPort` → proxy id (destination-based)
- `uidPins`: uid+dst → proxy id (per-app + destination)

Pin Lifetime: 10 minutes (configurable via `pintimeout`)
Sources: intra/ipn/proxies.go:450-600, intra/core/sieve.go:1-300
Proxies undergo continuous health monitoring to enable automatic failover and smart selection by the auto proxy.
Health States:
- TOK (OK): Proxy responding normally
- TKO (Not OK): Proxy experiencing errors
- TUP (Up): Proxy initializing
- TNT (Unresponsive): Proxy not responding to keepalives
- TPU (Paused): Proxy temporarily disabled
- END: Proxy stopped

Monitoring Mechanisms: keepalive pings (`Ping()`) and periodic refreshes (`Refresh()`) drive the transitions between these states.
Sources: intra/ipn/wgproxy.go:255-380, intra/ipn/proxies.go:70-93
WireGuard proxies support hop chaining, where one proxy routes traffic through another. This enables nested VPN scenarios and privacy-preserving proxy chains.
MTU Calculation:
    origin_mtu = min(
        hop_mtu - wg_overhead - ip_overhead,
        link_mtu
    )
Sources: intra/ipn/proxies.go:1100-1300, intra/ipn/wgproxy.go:800-1000
Step-by-step:
1. The app connects to 100.64.1.2:443 (an ALG fake address)
2. The gVisor stack hands the flow to `tcpHandler.Proxy()`
3. `baseHandler.onFlow()` undoes the ALG: fake IP → real IP + domain
4. `listener.Flow()` evaluates policy and returns the routing decision
5. `proxifier.ProxyTo()` selects the WireGuard proxy based on PIDs
6. `wgproxy.Dial()` establishes the connection through the WireGuard tunnel
7. On close, `listener.OnSocketClosed()` receives the connection summary

Sources: intra/tcp.go:241-450, intra/common.go:133-355, intra/dnsx/alg.go:1481-1558, intra/ipn/proxies.go:453-600
Initialization Steps:
- Create `NatPt` for IPv4/IPv6 translation
- Create the network stack (`gtunnel`) with the protocol handlers

Sources: intra/tunnel.go:153-280, tunnel/tunnel.go:110-250, intra/dnsx/transport.go:212-244, intra/ipn/proxies.go:285-335
The system exposes global atomic flags for runtime configuration:
| Setting | Type | Purpose |
|---|---|---|
| `Debug` | bool | Enable verbose debug logging |
| `Loopingback` | bool | Allow DNS queries to loop back through the tunnel |
| `SingleThreaded` | bool | Force single-threaded packet processing |
| `ExperimentalWireGuard` | bool | Enable WireGuard reverse proxy features |
| `HappyEyeballs` | bool | Enable RFC 8305 Happy Eyeballs for dual-stack connections |
| `PortForward` | bool | Enable UDP port forwarding for peer discovery |
| `BlockMode` | int | Firewall mode: None/Sink/Filter/FilterProc |
| `PtMode` | int | DNS64/NAT64 mode: Auto/Disabled/Force |
Sources: intra/settings/config.go:1-100
Proxies can be added, removed, and updated at runtime:
Sources: intra/ipn/proxies.go:54-125, intra/ipn/wgproxy.go:396-461
DNS transports can be dynamically added and removed:
Sources: intra/dnsx/transport.go:306-370, intra/dnsx/transport.go:421-464
Firestack uses several concurrency patterns for safe multi-threaded operation:
- Atomic flags and counters: `atomic.Bool`, `atomic.Int32`, `atomic.Int64`
- `sync.RWMutex` protecting `map[string]Proxy` (the proxy pool)
- `sync.RWMutex` protecting `map[string]Transport` (DNS transports)
- `sync.RWMutex` protecting connection maps
- Summary pointers shared across goroutines: `*SocketSummary`, `*DNSSummary`
- `core.Barrier` for DNS queries (prevents duplicate in-flight queries)
- `core.Barrier` for proxy refresh (prevents stampeding)
- `core.Sieve` caches with TTL-based expiration

Sources: intra/core/barrier.go:1-300, intra/core/sieve.go:1-500, intra/common.go:69-103
All goroutines use `core.Recover()` to catch panics and log stack traces without crashing the entire system. Failed connections are "stalled" (their close is delayed) to prevent rapid retry storms.
Sources: intra/core/recover.go:1-200, intra/common.go:400-500, intra/ipn/proxies.go:800-900
Firestack provides a programmable network stack with three major capabilities: DNS resolution, proxy management, and anti-censorship.
The architecture separates concerns through clear interfaces (Tunnel, Resolver, Proxies) implemented by coordinator structs (rtunnel, resolver, proxifier) that manage lower-level components (gVisor stack, DNS transports, proxy instances). Protocol handlers (tcpHandler, udpHandler, icmpHandler) share common logic through baseHandler, ensuring consistent flow evaluation and policy enforcement across all connection types.
For detailed information about each subsystem, see the respective wiki pages.