A few months ago, I was woken up by the sound of my server’s fans screaming.

It wasn’t a spike in traffic or a scheduled backup. It was a crypto-miner. Thanks to a Remote Code Execution (RCE) vulnerability in Umami, someone had managed to turn my analytics server into a very inefficient Monero farm.

I got lucky. The hacker was loud. I killed the container within minutes, and since CPU mining is basically a joke in 2026, they gained nothing. But it left me with a cold realization: If they had been smart, they wouldn’t have mined crypto. They would have quietly exfiltrated my database.

The False Sense of Security

We love Docker because it “isolates” things. We think that if a container is compromised, the host is safe. And that’s mostly true. But everyone forgets about outbound traffic.

By default, any Docker container you run can talk to anyone on the internet. If a malicious script gets into your container (whether through an RCE or a compromised dependency), it can send your env files, your database, or your credentials to any remote server in seconds.
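You can see this for yourself: a stock container on the default bridge reaches any host, no questions asked (the image and URL here are just illustrative):

# Nothing stops this request, and nothing would stop it POSTing your .env somewhere hostile
docker run --rm alpine wget -qO- https://example.com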

Deploying untrusted code that handles sensitive data

The breaking point came when I wanted to host Gemini FastAPI, a project that wraps Google’s internal Gemini API in an OpenAI-compatible interface, letting you use your Gemini Pro subscription outside Google’s walled garden. The catch: it needs your browser cookies, which means full access to your Google account.

That’s a very different threat model from hosting a metrics dashboard. I can’t audit every line of code and every transitive dependency. The author could be compromised tomorrow, or could have shipped something malicious from day one. I needed a hard guarantee: this container can only talk to Google services, and nothing else.

Why standard IP whitelists fail

The blunt instrument is to hardcode allowed IPs in iptables rules. It works for simple cases, but it’s brittle in practice: IP ranges rotate, CDNs serve content from shifting subnets, and Google in particular scales horizontally with dynamically assigned subdomains like *.googleusercontent.com. You can’t enumerate those IPs statically; there’s no exhaustive list to fetch. You need something that watches DNS and updates rules in real time.
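To make the brittleness concrete, here’s the naive approach (the container address and resolved IP are illustrative, and the IP will have rotated by the time you read this):

# Resolve once, hardcode forever: the trap
dig +short gemini.google.com                                  # e.g. 142.250.74.206 today
iptables -I DOCKER-USER -s 172.17.0.4 ! -d 142.250.74.206 -j DROP
# Works until the record rotates, and *.googleusercontent.com was never
# enumerable this way in the first place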

What I tried first

Cilium

Cilium does exactly this kind of L7 network policy, but it’s “Kubernetes-native”. They dropped Docker support years ago after it broke and wasn’t worth maintaining. Dead end.

OpenSnitch

OpenSnitch was the most promising option I found. It’s a Linux application firewall that intercepts connections via eBPF and builds a live map of domain → IP by snooping on DNS traffic. You can define per-process or per-IP rules, and it does handle Docker if you enable the forward hook, which is off by default and buried in a JSON config:

// Somewhere at the bottom of /etc/opensnitchd/system-fw.json
{
    "Name": "mangle_forward",
    "Table": "opensnitch",
    "Family": "inet",
    "Type": "mangle",
    "Hook": "forward",
    "Rules": [
        {
            "UUID": "7d7394e1-100d-4b87-a90a-cd68c46edb0b",
            "Enabled": false,  // <-- flip this to true
            "Description": "Intercept forwarded connections (docker, etc)",
            "Expressions": [
                {
                    "Statement": {
                        "Op": "",
                        "Name": "ct",
                        "Values": [{ "Key": "state", "Value": "new" }]
                    }
                }
            ],
            "Target": "queue",
            "TargetParameters": "num 0"
        }
    ]
}

Once that’s on, you can write rules that restrict a container to specific domains. Here’s how you’d allow a container to reach only example.com:

// In /etc/opensnitchd/rules/000-example-allow.json
{
    "name": "000-example-allow",
    "enabled": true,
    "action": "allow",
    "precedence": true,
    "operator": {
        "type": "list",
        "operand": "list",
        "list": [
            { "type": "simple", "operand": "source.ip", "data": "172.17.0.4" },
            { "type": "simple", "operand": "dest.host", "data": "example.com" }
        ]
    }
}
// In /etc/opensnitchd/rules/001-example-deny-all.json
{
    "name": "001-example-deny-all",
    "enabled": true,
    "action": "reject",
    "duration": "always",
    "operator": {
        "type": "simple",
        "operand": "source.ip",
        "data": "172.17.0.4"
    }
}

In theory, great. In practice, several problems killed it for my use case:

Cold start problem. OpenSnitch’s domain → IP map is built at runtime from observed DNS traffic. On a fresh start, it’s empty. Every connection to an unknown destination is either allowed (insecure) or denied (causes outages). There’s no middle ground. If you could guarantee it runs for weeks without restarting, this would matter less, which brings us to the next point.

No cgroup-based filtering. Rules must target source IPs; filtering by container name or cgroup isn’t implemented yet. That means you have to assign static IPs to your containers and keep rules in sync. If an IP changes, you edit the rule file, restart the daemon, lose the domain map, and cold-start all over again.
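For the record, pinning a container’s address looks like this, and since --ip only works on user-defined networks, it’s even more setup to keep in sync (subnet and names are illustrative):

docker network create --subnet 172.28.0.0/24 pinned-net
docker run --network pinned-net --ip 172.28.0.4 my-image
# Every OpenSnitch rule file targeting this container must now hardcode 172.28.0.4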

Scope creep. OpenSnitch hooks into all traffic on the machine: every container, every process. That’s fine if that’s what you want, but it adds overhead (2–20% of a CPU thread in my tests) and complexity you might not need.

DNS bypass. If your containers use your router as their DNS resolver (Docker’s default in many setups), queries stay on the LAN and never pass through OpenSnitch’s interception layer. It silently doesn’t work. Fixing this means switching to a public DNS resolver like 1.1.1.1, and that’s not always what you want.
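A quick way to check whether this bites you is to look at the resolver your containers actually inherit:

docker run --rm alpine cat /etc/resolv.conf
# A LAN address here (e.g. nameserver 192.168.1.1) means those queries
# stay local and never cross OpenSnitch's interception path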

I could have worked around all of this. But at some point I stepped back and asked: what would the ideal tool actually look like?

Introducing Dockerwall

I wanted something with a clearly scoped job, designed from the ground up for this exact use case:

  • A DNS proxy, not a DNS sniffer. Instead of passively watching DNS traffic I don’t control, I run a proxy that containers are explicitly configured to use. It forwards queries upstream, records responses, and builds the domain → IP map proactively. No cold start problem.
  • Plain iptables + ipsets, no eBPF. ipset lets you efficiently match large sets of IPs at the kernel level, and the sets are updated dynamically as DNS responses come in.
  • Network-scoped rules. Define the allowlist at the Docker network level, not per container IP. Containers join a restricted network and inherit its policy automatically.

I found an 8-year-old project on GitHub built around similar ideas, but it’s unmaintained and not quite what I needed. So I rewrote it in Rust to get proper async I/O and use hickory-dns, a solid DNS library.

How it works

# Start the daemon (run as a service — it never needs to restart)
dockerwall daemon &

# Create a restricted network and spawn your container into it
dockerwall prepare-network gemini-net "gemini.google.com" "*.googleusercontent.com"
docker run --network gemini-net ghcr.io/nativu5/gemini-fastapi

That’s it. Your container is now physically unable to talk to anything that isn’t on that list.

Under the hood, prepare-network:

  1. Creates an ipset to hold resolved IPs for the allowlisted domains.
  2. Creates a Docker network with the DNS proxy configured as the resolver.
  3. Adds iptables rules that permit egress only to IPs in the ipset and drop everything else, roughly as sketched below.
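In plain iptables/ipset terms, the end result looks roughly like this. It’s a hand-written sketch, not dockerwall’s literal output: the set name is an assumption, and the subnet matches the example network shown later in the stats output.

# 1. A set holding the IPs resolved for allowlisted domains
ipset create gemini-net-allowed hash:ip

# 2. A Docker network on a known subnet (dockerwall also wires its DNS proxy in as the resolver)
docker network create --subnet 172.30.113.0/28 gemini-net

# 3. Egress policy for that subnet, inserted in reverse so the DROP is evaluated last
iptables -I DOCKER-USER 1 -s 172.30.113.0/28 -j DROP
iptables -I DOCKER-USER 1 -s 172.30.113.0/28 -m set --match-set gemini-net-allowed dst -j ACCEPT
iptables -I DOCKER-USER 1 -s 172.30.113.0/28 -p udp --dport 53 -j ACCEPT    # let DNS reach the proxy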

The daemon runs the DNS proxy: it forwards queries to your upstream resolver and, whenever a response comes in for a tracked domain, updates the ipset. No restarts needed. No cold start. No global interception of unrelated traffic.
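You can watch the map being built: resolve a tracked domain from inside the network, then inspect the set (set name as in the sketch above):

docker run --rm --network gemini-net alpine nslookup gemini.google.com
sudo ipset list gemini-net-allowed    # the freshly resolved IPs show up here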

Of course, the LAN-resolver quirk from earlier still applies: if a container’s DNS points at a resolver on your LAN, Docker’s DNS handling can bypass everything. Here the fix is trivial: set any other address as the container’s DNS server, and dockerwall will intercept the request anyway:

docker run --network gemini-net --dns 127.0.0.1 ghcr.io/nativu5/gemini-fastapi

Visibility is Security

Dockerwall includes a stats command that shows you exactly what’s happening:

$ dockerwall stats
Network: gemini-net (172.30.113.0/28)
  172.30.113.5    gemini.google.com              47✅       2s ago
  172.30.113.5    malicious-exfil.io             3❌        8m ago

The ❌ means the connection was blocked at the iptables level. The domain isn’t in the allowlist, so the IP was never added to the ipset, and the packet was dropped. Seeing those red Xs next to a domain you didn’t authorize is incredibly satisfying. It’s the “I caught you” moment everyone loves; you’ll find yourself almost looking forward to the next hack you get to foil.

Is this production-ready?

It’s one night of work, so take that for what it is. The design is solid and I’m running it in production for the Gemini proxy. The main limitations I’m aware of:

  • IPs that get whitelisted aren’t automatically removed yet
  • The stats output is minimal: no persistent logging yet
  • No support for more complex setups, such as multiple upstream DNS providers

Conclusion

Restricting outbound traffic from containers isn’t exotic security hardening; it’s a basic containment layer that’s surprisingly hard to set up with existing tools. If you’re self-hosting anything with elevated credentials or sensitive data, you should be doing this. The attack surface isn’t just inbound: compromised containers routinely exfiltrate data by phoning home to attacker-controlled servers.

Even a coarse rule like “this container may only reach its intended upstream” dramatically limits the blast radius of a supply-chain compromise or RCE.

I’ve open-sourced the project on GitHub: Mubelotix/dockerwall. Give it a star if it helps you sleep better, or better yet, try it out and let me know how it works for you.