Advanced Traffic Splitting: dnsmasq, iptables, ip rule, and ipset
Split tunneling — routing only certain traffic through a VPN while keeping everything else on the direct path — is one of the most powerful and least understood capabilities in Linux networking. Done right, it combines the privacy and access benefits of a VPN with the raw speed of a direct connection for traffic that doesn't need protection.
This article builds a complete, production-grade split-tunneling stack entirely from standard Linux kernel primitives: dnsmasq, ipset, iptables, and policy-based routing. No custom kernel modules. No userspace proxies. No routing daemons. Just the tools that ship with every modern Linux distribution, wired together in the right order.
By the end you will have a system where resolving google.com automatically adds its IP addresses to a kernel-level set, which triggers a firewall mark, which selects an alternate routing table that sends packets through your VPN interface — all transparently, without touching application configuration.
The Problem: Why Split Tunneling Matters
Routing all traffic through a VPN solves some problems while creating others.
Performance degradation is the most obvious cost. VPN tunnels add latency proportional to the round-trip time to the VPN server. Routing your local banking app, home NAS, or corporate intranet through a VPN exit node in another country introduces 50–300 ms of avoidable latency. For interactive applications — video calls, SSH sessions, gaming — that overhead is genuinely painful.
Local resource access breaks when you route everything through a tunnel. Printers, NAS devices, smart home controllers, and corporate internal tools all sit on your local subnet. A full-tunnel VPN makes reaching them awkward at best and impossible at worst, since the return path for their replies often does not traverse the VPN.
Privacy and access are what you actually want the VPN for. Streaming services geo-restricted to the US, censored websites, corporate resources accessible only from a specific IP range — these specific destinations need to go through the tunnel. Your local grocery delivery, your bank, your DNS-over-HTTPS resolver? They do not.
The ideal solution is selective routing by destination domain: a predefined list of domains whose traffic goes through the VPN, while everything else takes the direct path. The challenge is that routing decisions in the Linux kernel happen at the IP layer, but domain names are resolved by DNS — a different layer entirely. Bridging that gap is the core problem this article solves.
Architecture Overview
The solution chains four Linux subsystems together. Understanding the data flow end-to-end before touching any configuration is essential.
Application
│
│ DNS query: "google.com"
▼
┌─────────┐
│ dnsmasq │ DNS resolver
│ │ Matches domain against ipset= rules
│ │ Returns A/AAAA records to application
│ │ Side effect: adds resolved IPs to kernel ipset
└────┬────┘
│ writes IPs into
▼
┌──────────────────────┐
│ ipset: vpn_domains │ Kernel-level hash:ip set
│ 142.250.80.46 │ O(1) membership test
│ 142.250.80.78 │ Up to 65,536 entries
│ ... │
└──────────┬───────────┘
│ matched by
▼
┌──────────────────────────────────┐
│ iptables mangle table │
│ -m set --match-set vpn_domains │ Packet filter
│ -j MARK --set-mark 0x1 │ Sets fwmark on matching packets
└──────────────────┬───────────────┘
│ fwmark 0x1 triggers
▼
┌──────────────────────────────────┐
│ ip rule: fwmark 0x1 → table vpn │ Policy-based routing
└──────────────────┬───────────────┘
│ selects
▼
┌──────────────────────────────────┐
│ ip route table vpn │ Custom routing table
│ default via 10.8.0.1 dev wg0 │ Routes matched traffic
└──────────────────┬───────────────┘
│
┌───────┴───────┐
▼ ▼
VPN tunnel Direct path
(wg0 / tun0) (eth0 / default)
Traffic not matching the ipset follows the main routing table and exits directly. Traffic matching the ipset gets marked, hits the alternate routing table, and exits through the VPN. The split is transparent to applications — they see no difference in how they make connections.
Component Overview
dnsmasq — DNS Resolver with ipset Integration
dnsmasq is a lightweight DNS forwarder and DHCP server shipped with most Linux distributions. Its ipset= directive is the linchpin of this setup: when dnsmasq resolves a domain that matches an ipset= rule, it automatically adds every resolved IP address into the named kernel ipset. This happens as a side effect of the DNS resolution itself — before the application even initiates a TCP connection.
The key insight is that dnsmasq operates at the DNS layer, where domain names are still visible. By the time a packet enters the kernel's routing subsystem, it carries only IP addresses. The ipset= directive is the only mechanism that reliably bridges this semantic gap without a userspace proxy.
ipset — Kernel-Level IP Set Data Structures
ipset is a kernel module and userspace tool for managing sets of IP addresses, networks, and ports. It provides O(1) membership testing via hash tables — critical for performance when the set contains thousands of entries. Without ipset, you would need a separate iptables rule for each IP address, which scales catastrophically past a few hundred entries.
The hash:ip type stores individual IPv4 or IPv6 addresses. The hash:net type stores CIDR prefixes — useful for routing entire ASNs or cloud provider IP ranges through the VPN.
iptables — Packet Marking with the Mangle Table
iptables here is not used for filtering — it is used for marking. The mangle table's MARK target attaches a 32-bit integer (the fwmark) to packets matching a given rule. This mark is not transmitted over the network; it is kernel-internal metadata used by the routing subsystem.
We apply marks in both OUTPUT (packets originating from this host) and PREROUTING (packets being forwarded through this host, relevant if the Linux machine acts as a router for other devices).
ip rule / ip route — Policy-Based Routing
The standard Linux routing table associates destination subnets with next-hop gateways. Policy-based routing (PBR) extends this: the kernel can maintain multiple routing tables simultaneously, selecting between them based on rules that examine source address, destination address, TOS bits, or — crucially for us — the fwmark.
ip rule manages the rule database (the RPDB, Routing Policy Database). ip route manages individual routing tables. Together they let us say: "packets with fwmark 0x1, regardless of their destination, look up routing decisions in table vpn instead of the main table."
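The lookup semantics amount to a first-match scan over priority-ordered rules. This toy function (illustrative only, not real kernel code; rule triples are invented for the example) mimics how a packet's fwmark selects a table:

```shell
# rpdb_lookup: toy model of the RPDB first-match scan.
# $1 = the packet's fwmark; remaining args = rules as "priority fwmark table"
# triples in ascending priority order (a fwmark of "-" matches any packet).
rpdb_lookup() {
    local pkt_mark="$1"; shift
    local rule prio mark table
    for rule in "$@"; do
        read -r prio mark table <<< "$rule"
        if [ "$mark" = "-" ] || [ "$mark" = "$pkt_mark" ]; then
            echo "$table"   # first matching rule wins; later rules never run
            return 0
        fi
    done
    return 1
}
```

A marked packet stops at the priority-100 rule; an unmarked one falls through to main: `rpdb_lookup 0x1 "100 0x1 vpn" "32766 - main"` prints `vpn`, while `rpdb_lookup 0x0 "100 0x1 vpn" "32766 - main"` prints `main`.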
Step 1: Install the Prerequisites
On Debian/Ubuntu:
sudo apt-get update
sudo apt-get install -y dnsmasq ipset iptables iptables-persistent
On RHEL/CentOS/Fedora:
sudo dnf install -y dnsmasq ipset iptables iptables-services
Verify your running kernel has the required modules:
lsmod | grep -E "^ip_set|^xt_set"
# If empty, load them (-a is required to load several modules in one call):
sudo modprobe -a ip_set ip_set_hash_ip xt_set
Make the modules load on boot:
echo -e "ip_set\nip_set_hash_ip\nxt_set" | sudo tee /etc/modules-load.d/ipset.conf
Step 2: Create the ipset
Create the ipset that dnsmasq will populate. This must exist before dnsmasq starts: dnsmasq adds entries at resolution time, and if the set does not exist the addition fails while the application still receives its DNS answer, so the traffic silently takes the direct path.
sudo ipset create vpn_domains hash:ip maxelem 65536
- hash:ip — stores individual IP addresses (not networks)
- maxelem 65536 — maximum set size; 65,536 is sufficient for most domain lists
If you also want to route entire CIDR blocks (for example, an entire CDN's IP range):
sudo ipset create vpn_nets hash:net maxelem 4096
Verify the sets were created:
sudo ipset list -n
# vpn_domains
# vpn_nets
Step 3: Configure dnsmasq to Populate the ipset
Edit or create /etc/dnsmasq.d/vpn-domains.conf. Keeping split-tunnel configuration in a separate file from the main dnsmasq.conf makes maintenance much easier.
# /etc/dnsmasq.d/vpn-domains.conf
# Use upstream DNS for all queries not handled below
server=1.1.1.1
server=8.8.8.8
# Streaming and Google services → route through VPN
ipset=/google.com/youtube.com/googleapis.com/gstatic.com/googlevideo.com/vpn_domains
ipset=/netflix.com/nflxvideo.net/nflximg.net/nflximg.com/vpn_domains
ipset=/hulu.com/hulustream.com/huluim.com/vpn_domains
ipset=/disneyplus.com/disney-plus.net/bamgrid.com/vpn_domains
ipset=/hbomax.com/max.com/hbo.com/vpn_domains
# Social media
ipset=/twitter.com/x.com/t.co/twimg.com/vpn_domains
ipset=/facebook.com/fbcdn.net/instagram.com/cdninstagram.com/vpn_domains
# Development tools
ipset=/github.com/githubusercontent.com/ghcr.io/vpn_domains
ipset=/stackoverflow.com/stackexchange.com/vpn_domains
The ipset= directive syntax is:
ipset=/domain1/domain2/.../setname
Any domain in the list, including all subdomains, will have its resolved IPs added to setname. The match is suffix-based: ipset=/google.com/vpn_domains will match google.com, www.google.com, mail.google.com, accounts.google.com, and so on.
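Long slash-separated lists are easy to typo. If you keep your domains in a plain file, one per line, a small generator (a hypothetical helper, not part of dnsmasq) can build the directive for you:

```shell
# generate_ipset_line: build a dnsmasq ipset= directive from a domain list.
#   $1 = path to a file with one domain per line (# comments and blank
#        lines are skipped)
#   $2 = name of the ipset to append
generate_ipset_line() {
    local domain_file="$1" setname="$2" line="ipset="
    local domain
    while IFS= read -r domain; do
        # Skip comments and empty lines
        case "$domain" in ''|\#*) continue ;; esac
        line="${line}/${domain}"
    done < "$domain_file"
    printf '%s/%s\n' "$line" "$setname"
}
# Usage: generate_ipset_line streaming-domains.txt vpn_domains \
#            > /etc/dnsmasq.d/generated.conf
```

Given a file containing `google.com` and `youtube.com`, this emits `ipset=/google.com/youtube.com/vpn_domains`.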
Restart dnsmasq to apply the configuration:
sudo systemctl restart dnsmasq
sudo systemctl status dnsmasq
Configure your system to use the local dnsmasq instance as its DNS resolver. On systems running systemd-resolved, first free port 53 by disabling the stub listener (set DNSStubListener=no in /etc/systemd/resolved.conf and restart systemd-resolved), and note that /etc/resolv.conf is usually a symlink managed by resolved. Then point /etc/resolv.conf at dnsmasq directly:
echo "nameserver 127.0.0.1" | sudo tee /etc/resolv.conf
On NetworkManager systems, set the DNS server in your connection profile or add to /etc/NetworkManager/conf.d/dns.conf:
[main]
dns=none
Then set nameserver 127.0.0.1 in /etc/resolv.conf.
Step 4: Create the Custom Routing Table
Routing table IDs are plain integers (32-bit on modern kernels); /etc/iproute2/rt_tables simply maps numbers to human-readable names, with 253–255 reserved for the built-in default, main, and local tables. Give the VPN table a name:
echo "200 vpn" | sudo tee -a /etc/iproute2/rt_tables
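Note that tee -a appends unconditionally, so running it twice leaves a duplicate entry. A guarded helper (an illustrative sketch, mirroring the idempotency check used in the automation script later in this article) avoids that:

```shell
# add_rt_table: register a named routing table in an rt_tables file,
# skipping the append when an identical entry already exists.
#   $1 = rt_tables path, $2 = table number, $3 = table name
add_rt_table() {
    local file="$1" num="$2" name="$3"
    grep -qE "^${num}[[:space:]]+${name}\$" "$file" 2>/dev/null \
        || echo "${num} ${name}" >> "$file"
}
# Usage (as root): add_rt_table /etc/iproute2/rt_tables 200 vpn
```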
Add a default route in the vpn table that sends all traffic through your VPN interface. Replace <vpn_gateway> with your VPN server's tunnel IP (e.g., 10.8.0.1 for WireGuard, 10.0.0.1 for OpenVPN) and wg0 with your actual VPN interface name:
sudo ip route add default via 10.8.0.1 dev wg0 table vpn
For WireGuard, the gateway is the server's tunnel-side address (the peer's IP inside the wg0 subnet). For OpenVPN, it is the route-gateway pushed by the server. If your VPN interface is point-to-point with no traditional gateway (common in WireGuard setups where AllowedIPs is 0.0.0.0/0), use:
sudo ip route add default dev wg0 table vpn
Verify the route:
sudo ip route show table vpn
# default via 10.8.0.1 dev wg0
Now add the policy rule that selects this table for marked packets:
sudo ip rule add fwmark 0x1 table vpn priority 100
- fwmark 0x1 — match packets with this firewall mark
- table vpn — look up routing in the vpn table
- priority 100 — rules are evaluated in ascending priority order; 100 runs before the main table (priority 32766) but after the local table (priority 0)
Verify the rule database:
ip rule show
# 0: from all lookup local
# 100: from all fwmark 0x1 lookup vpn
# 32766: from all lookup main
# 32767: from all lookup default
Step 5: Mark Packets with iptables
Add rules in the mangle table to set fwmark 0x1 on packets whose destination IP is in the vpn_domains ipset:
# Mark outbound packets from this host
sudo iptables -t mangle -A OUTPUT \
-m set --match-set vpn_domains dst \
-j MARK --set-mark 0x1
# Mark forwarded packets (if this host routes for other devices)
sudo iptables -t mangle -A PREROUTING \
-m set --match-set vpn_domains dst \
-j MARK --set-mark 0x1
The mark is applied in the mangle table because marking must happen before the routing decision it is meant to influence. In the netfilter hook order, PREROUTING mangle executes before the routing lookup for forwarded packets. For locally generated packets an initial routing lookup happens before OUTPUT mangle, but the kernel repeats the lookup (a reroute check) whenever OUTPUT changes the mark.
Verify the rules were installed:
sudo iptables -t mangle -L OUTPUT -n -v
sudo iptables -t mangle -L PREROUTING -n -v
If you also created the vpn_nets set for CIDR ranges:
sudo iptables -t mangle -A OUTPUT \
-m set --match-set vpn_nets dst \
-j MARK --set-mark 0x1
sudo iptables -t mangle -A PREROUTING \
-m set --match-set vpn_nets dst \
-j MARK --set-mark 0x1
Step 6: Making the Configuration Persistent
The ipset data and iptables rules are in-memory structures that disappear on reboot. The routing table entry and ip rule also need to be restored. Here is how to persist each component.
Persist ipset
# Save current ipset state (pipe through tee: with `sudo ipset save > file`,
# the redirection itself would run without root and fail to write /etc)
sudo ipset save | sudo tee /etc/ipset.conf > /dev/null
# Restore on boot via a systemd service
sudo tee /etc/systemd/system/ipset-restore.service > /dev/null <<'EOF'
[Unit]
Description=Restore ipset rules
Before=network.target dnsmasq.service iptables.service
DefaultDependencies=no
[Service]
Type=oneshot
ExecStart=/sbin/ipset restore -f /etc/ipset.conf
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
EOF
sudo systemctl enable ipset-restore.service
Note: the saved file captures whatever was in the set at save time, but DNS-populated entries are ephemeral — they go stale quickly and will be repopulated by dnsmasq as domains are resolved after boot. What matters is that the set itself exists before dnsmasq starts.
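Since the members are ephemeral anyway, you can strip them before persisting and store only the set definitions. A small stdin filter (illustrative; the function name is ours) does the job:

```shell
# keep_set_structure: reduce an `ipset save` dump (read from stdin) to its
# `create` lines, dropping the ephemeral DNS-populated `add` entries.
keep_set_structure() {
    grep '^create ' || true   # an empty dump is not an error
}
# Usage:
# sudo ipset save | keep_set_structure | sudo tee /etc/ipset.conf > /dev/null
```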
Persist iptables Rules
sudo netfilter-persistent save
# This writes to /etc/iptables/rules.v4 and rules.v6
sudo systemctl enable netfilter-persistent
On RHEL/CentOS:
sudo service iptables save
sudo systemctl enable iptables
Persist Routing Table and Rules
Add the routes and rules to your network interface configuration. On systems using /etc/network/interfaces (Debian):
# In the wg0 interface stanza:
post-up ip route add default via 10.8.0.1 dev wg0 table vpn
post-up ip rule add fwmark 0x1 table vpn priority 100
pre-down ip rule del fwmark 0x1 table vpn priority 100
pre-down ip route del default via 10.8.0.1 dev wg0 table vpn
On NetworkManager systems, use a dispatcher script:
sudo tee /etc/NetworkManager/dispatcher.d/50-vpn-routing << 'EOF'
#!/bin/bash
IFACE="$1"
EVENT="$2"
if [ "$IFACE" = "wg0" ] && [ "$EVENT" = "up" ]; then
ip route add default via 10.8.0.1 dev wg0 table vpn 2>/dev/null || true
ip rule add fwmark 0x1 table vpn priority 100 2>/dev/null || true
fi
if [ "$IFACE" = "wg0" ] && [ "$EVENT" = "down" ]; then
ip rule del fwmark 0x1 table vpn priority 100 2>/dev/null || true
ip route del default via 10.8.0.1 dev wg0 table vpn 2>/dev/null || true
fi
EOF
sudo chmod +x /etc/NetworkManager/dispatcher.d/50-vpn-routing
Step 7: Testing and Debugging
With the stack configured, verify each layer independently before testing end-to-end.
Verify dnsmasq is Populating the ipset
Trigger a DNS resolution via the local resolver and check whether the IPs appear in the set:
# Force a fresh resolution
dig @127.0.0.1 google.com +short
# 142.250.80.46
# (... more IPs)
# Check if those IPs landed in the set
sudo ipset list vpn_domains
# Name: vpn_domains
# Type: hash:ip
# ...
# Members:
# 142.250.80.46
# 142.250.80.78
If the ipset remains empty after a dig, check that dnsmasq is actually receiving the queries:
# Temporarily run dnsmasq in the foreground with query logging
# (stop the service first, or the second instance cannot bind port 53)
sudo systemctl stop dnsmasq
sudo dnsmasq --no-daemon --log-queries --conf-file=/etc/dnsmasq.conf
# ...then, from a second terminal:
dig @127.0.0.1 google.com
Verify iptables is Matching
Flush the hit counters, make a request to a matched domain, and check the counters:
sudo iptables -t mangle -Z OUTPUT
curl -s https://www.google.com > /dev/null
sudo iptables -t mangle -L OUTPUT -n -v
# pkts bytes target prot opt in out source destination
# 42 5880 MARK all -- * * 0.0.0.0/0 0.0.0.0/0 match-set vpn_domains dst MARK set 0x1
If pkts is 0, either the ipset is not being populated or the iptables rule is not matching.
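When a counter stays at zero, a layer-by-layer probe helps localize the break. This sketch (requires root; set, mark, and table names match the setup above) checks each component in dependency order:

```shell
#!/bin/bash
# check-split-tunnel.sh — probe each layer of the stack in dependency order.
fail=0

ipset list vpn_domains >/dev/null 2>&1 \
    || { echo "FAIL: ipset vpn_domains does not exist"; fail=1; }

# An empty set means dnsmasq is not populating it (or nothing resolved yet)
[ -n "$(ipset list vpn_domains 2>/dev/null | sed -n '/^Members:/,$p' | tail -n +2)" ] \
    || echo "WARN: ipset vpn_domains is empty"

iptables -t mangle -C OUTPUT -m set --match-set vpn_domains dst \
    -j MARK --set-mark 0x1 2>/dev/null \
    || { echo "FAIL: mangle OUTPUT mark rule missing"; fail=1; }

ip rule show | grep -q "fwmark 0x1 lookup vpn" \
    || { echo "FAIL: ip rule for fwmark 0x1 missing"; fail=1; }

ip route show table vpn 2>/dev/null | grep -q '^default' \
    || { echo "FAIL: no default route in table vpn"; fail=1; }

[ "$fail" -eq 0 ] && echo "OK: all layers present"
exit "$fail"
```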
Verify Policy Routing is Working
Check the active routing rules and the vpn table:
ip rule show
# 100: from all fwmark 0x1 lookup vpn
ip route show table vpn
# default via 10.8.0.1 dev wg0
Use ip route get to simulate a routing decision for a specific IP:
# First, get the IP for a vpn-routed domain
VPN_IP=$(dig @127.0.0.1 youtube.com +short | head -1)
# Check where the kernel would route a marked packet to that IP
sudo ip route get "$VPN_IP" mark 0x1
# <ip> via 10.8.0.1 dev wg0 ...
# Check where an unmarked packet would go (should be direct)
sudo ip route get "$VPN_IP"
# <ip> via <default_gateway> dev eth0 ...
Verify End-to-End with Traceroute
# A domain in vpn_domains should route through the VPN
traceroute -n $(dig @127.0.0.1 youtube.com +short | head -1)
# First hop should be your VPN server, not your ISP
# A domain not in vpn_domains should route directly
traceroute -n $(dig @127.0.0.1 yourbank.com +short | head -1)
# First hop should be your default gateway
Common Issues
The ipset fills slowly: DNS-based population is lazy — IPs only enter the set when a domain is resolved after the set exists. Some CDN-heavy services (Netflix, Google) use hundreds of IPs across different PoPs. Pre-warm the set by scripting resolutions of your domain list ahead of time (repeat them, since CDNs rotate answers), or populate known CIDR blocks into a hash:net set.
Connection resets for existing sessions: iptables marks packets, but TCP connections established before the rules existed keep their original path. Force reconnections after modifying rules, or add -m conntrack --ctstate NEW so that only new connections are marked.
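A more thorough variant of the new-connections-only idea (a sketch; adapt it to your ruleset, and note it replaces the simple OUTPUT rule from Step 5) uses the CONNMARK target so that only the first packet of each connection consults the ipset, and every later packet inherits the mark from conntrack:

```shell
# Restore any previously saved connection mark onto this packet
sudo iptables -t mangle -A OUTPUT -j CONNMARK --restore-mark
# Already-marked packets need no further processing in this chain
sudo iptables -t mangle -A OUTPUT -m mark ! --mark 0 -j ACCEPT
# First packet of a new connection: consult the ipset and mark it
sudo iptables -t mangle -A OUTPUT -m conntrack --ctstate NEW \
    -m set --match-set vpn_domains dst -j MARK --set-mark 0x1
# Persist the packet mark onto the connection for subsequent packets
sudo iptables -t mangle -A OUTPUT -j CONNMARK --save-mark
```

This also keeps a connection on its original path even if the ipset changes mid-session, avoiding mid-stream route flips.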
IPv6 leaks: The setup above handles IPv4 only. Add equivalent rules for IPv6 using ip6tables and an ipset created with family inet6:
sudo ipset create vpn_domains6 hash:ip family inet6 maxelem 65536
sudo ip6tables -t mangle -A OUTPUT \
-m set --match-set vpn_domains6 dst \
-j MARK --set-mark 0x1
Real-World Example: Routing US Streaming Services Through FastSox
Here is a practical configuration that routes the major US streaming services through a FastSox gateway while keeping all other traffic direct.
# /etc/dnsmasq.d/streaming-us.conf
# US Streaming services
ipset=/netflix.com/nflxvideo.net/nflximg.net/nflximg.com/nflxso.net/vpn_domains
ipset=/hulu.com/hulustream.com/huluim.com/vpn_domains
ipset=/disneyplus.com/disney-plus.net/bamgrid.com/cdn.registerdisney.go.com/vpn_domains
ipset=/hbomax.com/max.com/hbo.com/hbocontentdelivery.com/vpn_domains
ipset=/paramountplus.com/cbs.com/cbsi.com/cbsnews.com/vpn_domains
ipset=/peacocktv.com/nbcuni.com/nbcuniversal.com/vpn_domains
ipset=/primevideo.com/aiv-cdn.net/vpn_domains
# Streaming support infrastructure
ipset=/akamai.net/akamaized.net/akamaihd.net/vpn_domains
ipset=/fastly.net/fastlylabs.com/vpn_domains
With a FastSox gateway configured as the wg0 peer, video content resolves to IPs in the vpn_domains set, gets marked, and exits through the US gateway. Meanwhile, your local banking site, email, and internal tools take the direct path with no added latency.
To add an entire cloud provider's IP range (for services that use many unpredictable IPs), use hash:net and populate it with published IP ranges:
# AWS US-EAST ranges (example — download current list from ip-ranges.amazonaws.com)
sudo ipset add vpn_nets 3.80.0.0/12
sudo ipset add vpn_nets 18.204.0.0/14
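Adding thousands of prefixes one `ipset add` at a time forks a process per entry; `ipset restore` loads a whole batch in a single pass. A small generator (a hypothetical helper) turns a prefix list into restore input:

```shell
# build_ipset_restore: emit `ipset restore` input for a batch of CIDR
# prefixes, so a large list loads in one pass instead of one `ipset add`
# invocation per prefix.
#   $1 = set name; remaining args = CIDR prefixes
build_ipset_restore() {
    local setname="$1"; shift
    printf 'create %s hash:net maxelem 4096\n' "$setname"
    local cidr
    for cidr in "$@"; do
        printf 'add %s %s\n' "$setname" "$cidr"
    done
}
# Usage (-exist tolerates an existing set and duplicate entries):
# build_ipset_restore vpn_nets 3.80.0.0/12 18.204.0.0/14 | sudo ipset -exist restore
```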
Automating the Setup
Wrap the ipset creation, route, and rule setup in a script that runs idempotently (safe to execute multiple times):
#!/bin/bash
# /usr/local/sbin/setup-split-tunnel.sh
set -euo pipefail
VPN_GW="${VPN_GATEWAY:-10.8.0.1}"
VPN_DEV="${VPN_DEVICE:-wg0}"
MARK="0x1"
TABLE="vpn"
# Create ipset if it doesn't exist
ipset list vpn_domains &>/dev/null || ipset create vpn_domains hash:ip maxelem 65536
ipset list vpn_domains6 &>/dev/null || ipset create vpn_domains6 hash:ip family inet6 maxelem 65536
# Add routing table entry
grep -q "^200 vpn" /etc/iproute2/rt_tables || echo "200 vpn" >> /etc/iproute2/rt_tables
# Add route (idempotent via replace)
ip route replace default via "$VPN_GW" dev "$VPN_DEV" table "$TABLE"
# Add rule if not present
ip rule show | grep -q "fwmark $MARK lookup $TABLE" || \
ip rule add fwmark "$MARK" table "$TABLE" priority 100
# iptables rules (idempotent via -C check)
for CHAIN in OUTPUT PREROUTING; do
iptables -t mangle -C "$CHAIN" -m set --match-set vpn_domains dst \
-j MARK --set-mark "$MARK" 2>/dev/null || \
iptables -t mangle -A "$CHAIN" -m set --match-set vpn_domains dst \
-j MARK --set-mark "$MARK"
done
echo "Split tunnel configured: fwmark $MARK → table $TABLE → $VPN_GW via $VPN_DEV"
sudo chmod +x /usr/local/sbin/setup-split-tunnel.sh
VPN_GATEWAY=10.8.0.1 VPN_DEVICE=wg0 sudo -E /usr/local/sbin/setup-split-tunnel.sh
The Limits of the Manual Approach
This stack is powerful but requires constant maintenance. Domain lists go stale as services migrate infrastructure. CDN IPs change weekly. The DNS-based population misses IPs that were cached before the set was created. There is no feedback loop telling you when your routing is wrong — traffic silently takes the direct path if the ipset misses an IP.
There is also a fundamental race condition: DNS caching means a domain may already have been resolved by a resolver that bypasses dnsmasq, skipping the ipset population. Applications that implement their own DNS-over-HTTPS (many modern browsers do) bypass the local resolver entirely.
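A partial mitigation for hardcoded plain-DNS resolvers (a sketch, IPv4 only; DoH on port 443 is unaffected, and the dnsmasq username is the Debian default and may differ on your distribution) is to force all outbound port-53 traffic through the local dnsmasq:

```shell
# Redirect plain-DNS queries from any local process to the local dnsmasq.
# Exclude dnsmasq's own upstream queries (user "dnsmasq" on Debian),
# or they would loop straight back into the redirect.
sudo iptables -t nat -A OUTPUT -p udp --dport 53 \
    -m owner ! --uid-owner dnsmasq -j REDIRECT --to-ports 53
sudo iptables -t nat -A OUTPUT -p tcp --dport 53 \
    -m owner ! --uid-owner dnsmasq -j REDIRECT --to-ports 53
```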
These are the problems that FastSox's Smart Mode solves at the application level. Rather than intercepting DNS queries and hoping to populate an ipset in time, Smart Mode maintains a continuously updated, geo-aware routing policy applied at the VPN gateway itself — no local configuration, no domain list maintenance, no race conditions. Smart Mode also handles IPv6, handles applications with hardcoded DNS servers, and dynamically adjusts to CDN topology changes.
For a conceptual comparison of routing strategies, see Global Mode vs Smart Mode. To get started with FastSox and skip the infrastructure work, visit fastsox.com.
Conclusion
The dnsmasq + ipset + iptables + policy routing stack is one of the most elegant solutions in Linux networking. It operates entirely in kernel space for the data path, uses standard tools available on every distribution, and requires no application-level changes. Once the pieces are assembled and understood, you have a split-tunnel implementation that is faster, more transparent, and more auditable than any userspace proxy approach.
The key design insight is the three-layer bridge: dnsmasq observes the DNS layer where domain names are still visible, ipset stores the resulting IPs in a kernel-accessible data structure, and policy routing acts on those IPs with zero per-packet overhead. Each layer does exactly one thing — and does it well.
This guide was produced by the FastSox Team at OneDotNet Ltd, builders of FastSox — a privacy-first network service built on modern cryptographic protocols. For questions, corrections, or to share your own routing configurations, join the FastSox community.
Related Articles
Best Practices to Secure a Linux Server in 2026
A comprehensive, checklist-style guide to hardening a Linux server in 2026. Covers SSH hardening, firewalls, fail2ban, automatic updates, user management, kernel sysctl tuning, file system security, audit logging, and VPN-only management access.
How to Bootstrap a Secure Linux Setup Using iptables and ufw
A practical checklist for getting a fresh Ubuntu or Debian machine to a defensible firewall baseline — covering ufw for fast setup, iptables for precision control, common attack mitigations, nftables, WireGuard rules, and how to verify your ruleset.
How to Use WireGuard on Linux: From Installation to Multi-Peer Setup
A practical, step-by-step guide to installing WireGuard on Linux, generating keys, configuring a server and multiple clients, and verifying your tunnel — plus tips on troubleshooting common issues.