
What Is a Linux Network Namespace and How to Use It

FastSox Team · 2026-03-27 · 12 min read

The Linux kernel gives every process access to a rich networking stack: interfaces, routing tables, firewall rules, sockets. By default, all processes share the same view of that stack — the same eth0, the same routes, the same iptables chains. For most use cases that is exactly what you want. But for containers, VPNs, and multi-tenant infrastructure, complete isolation of the network stack is essential.

That isolation is what network namespaces provide.

What Is a Network Namespace?

A network namespace is a logical copy of the Linux networking stack. Each namespace gets its own:

  • Network interfaces — lo, eth0, and any virtual interfaces you create live inside a specific namespace. An interface belongs to exactly one namespace at a time.
  • Routing table — routes added in one namespace are invisible to another.
  • iptables/nftables rules — each namespace has a completely independent firewall ruleset.
  • Sockets — a TCP socket opened in namespace A cannot communicate directly with a socket in namespace B through the normal kernel path.
  • /proc/net/ entries — tools like ss, netstat, and ip route only show data from the namespace the calling process lives in.

Think of a namespace as a lightweight cage around a private copy of the network stack. Processes inside it see only the interfaces and rules that belong to that cage, nothing from the host or from other cages.

The Linux Network Stack Model

When the kernel boots, it creates a single initial network namespace. Every process on the system starts in that namespace unless it explicitly creates or joins another one. The initial namespace is what you see when you run ip link or ip route on a freshly installed server.

Namespaces are tracked by the kernel per-process. The file /proc/<pid>/ns/net is a symbolic link pointing to the namespace inode of that process. Two processes sharing the same inode number are in the same namespace.

# Check which network namespace your current shell is in
readlink /proc/$$/ns/net
# Example output: net:[4026531992]
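To see the inode rule in action, compare your shell with a child process it spawns: both symlinks resolve to the same net:[...] inode, because a child inherits its parent's namespace at fork. This check uses only /proc and readlink and needs no root privileges:

```shell
# Namespace link of the current shell
parent=$(readlink /proc/$$/ns/net)

# Namespace link of a freshly spawned child process
# (the single quotes make $$ expand inside the child shell, so this
# reads the child's own /proc entry)
child=$(sh -c 'readlink /proc/$$/ns/net')

echo "parent: $parent"
echo "child:  $child"

# Identical links mean the two processes share one network namespace
[ "$parent" = "$child" ] && echo "same network namespace"
```

The same comparison works for any two PIDs you are allowed to read, which is exactly how tools like lsns group processes into namespaces.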

When a new namespace is created, it contains only one interface: the loopback device lo, which starts in the DOWN state. It has no routes, no iptables rules, and no connectivity to the outside world — it is a completely blank network environment.

Basic Operations with ip netns

The ip netns command from the iproute2 package manages named network namespaces. Named namespaces are stored as bind-mounted files under /run/netns/.

# Create a new named namespace
sudo ip netns add red

# List all named namespaces
ip netns list
# Output: red

# Execute a command inside the namespace
sudo ip netns exec red ip link
# Output: only the loopback interface, DOWN

# Delete the namespace
sudo ip netns delete red

Inside the namespace, lo is down and there are no other interfaces. No traffic can enter or leave yet.

# Bring up loopback inside the namespace
sudo ip netns exec red ip link set lo up

# Confirm it is up
sudo ip netns exec red ip link show lo

Creating a veth Pair and Connecting Namespaces

A veth pair (virtual Ethernet pair) is a linked pair of virtual interfaces that act like a network cable: packets sent into one end appear on the other. This is the fundamental building block for connecting namespaces to each other or to the host.

# Create a new namespace
sudo ip netns add blue

# Create a veth pair: veth0 (host side) and veth1 (namespace side)
sudo ip link add veth0 type veth peer name veth1

# Move veth1 into the blue namespace
sudo ip link set veth1 netns blue

# Assign IP addresses
sudo ip addr add 10.10.0.1/24 dev veth0
sudo ip netns exec blue ip addr add 10.10.0.2/24 dev veth1

# Bring both interfaces up
sudo ip link set veth0 up
sudo ip netns exec blue ip link set veth1 up
sudo ip netns exec blue ip link set lo up

# Test connectivity from the namespace to the host
sudo ip netns exec blue ping -c 3 10.10.0.1

At this point the namespace can reach the host side of the veth pair. It cannot yet reach the internet — there is no default route and no NAT.

Giving a Namespace Internet Access via NAT

To route traffic from the namespace out to the internet you need two things: a default route pointing to the host, and IP masquerade (NAT) on the host so return traffic can find its way back.

# Add a default route inside the namespace pointing to the host veth
sudo ip netns exec blue ip route add default via 10.10.0.1

# Enable IP forwarding on the host
sudo sysctl -w net.ipv4.ip_forward=1

# Add a masquerade rule on the host's outbound interface (replace eth0 with your interface)
sudo iptables -t nat -A POSTROUTING -s 10.10.0.0/24 -o eth0 -j MASQUERADE

# Allow forwarding between veth0 and eth0
sudo iptables -A FORWARD -i veth0 -o eth0 -j ACCEPT
sudo iptables -A FORWARD -i eth0 -o veth0 -m state --state RELATED,ESTABLISHED -j ACCEPT

Now test from inside the namespace:

sudo ip netns exec blue ping -c 3 1.1.1.1
sudo ip netns exec blue curl -s https://ifconfig.me

The namespace's traffic exits through the host's eth0 with the host's public IP. From the internet's perspective, there is no difference between host traffic and namespace traffic.

Running a Process Inside a Namespace

ip netns exec runs any command inside a named namespace. The process and all its children inherit the namespace for their entire lifetime.

# Run a shell inside the namespace
sudo ip netns exec blue bash

# From inside: confirm you see only veth1 and lo
ip link

# From inside: check your public IP (will show the host's IP, after NAT)
curl -s https://ifconfig.me

# Exit back to the host
exit

For running a process as a non-root user inside the namespace:

sudo ip netns exec blue sudo -u youruser curl -s https://ifconfig.me

This is the mechanism container runtimes use to confine network access. When Docker or Podman starts a container, it creates a namespace, sets up a veth pair connecting to a bridge (docker0 or podman0), configures NAT, and then launches the container process inside that namespace using a combination of clone(2) with CLONE_NEWNET and setns(2).

Practical Use Cases

Container Networking Under the Hood

When you run docker run, Docker:

  1. Creates a new network namespace via clone(CLONE_NEWNET).
  2. Creates a veth pair and moves one end into the namespace.
  3. Assigns the container's IP from the bridge subnet.
  4. Adds a default route pointing to the bridge IP.
  5. Adds iptables NAT and FORWARD rules on the host.

You can inspect a running container's namespace directly:

# Get the container's PID
CONTAINER_PID=$(docker inspect --format '{{.State.Pid}}' my-container)

# Enter its network namespace without using docker exec
sudo nsenter --net=/proc/$CONTAINER_PID/ns/net ip link
sudo nsenter --net=/proc/$CONTAINER_PID/ns/net ss -tlnp

Podman follows the same pattern. On rootless Podman the user namespace is also isolated, but the network namespace model is identical.

VPN Traffic Isolation

A common security requirement is to route a specific application through a VPN while all other traffic uses the normal internet connection — without a system-wide VPN. Network namespaces make this exact topology possible.

# Create a dedicated namespace for VPN-only traffic
sudo ip netns add vpn-only

# Set up the veth pair as shown earlier
sudo ip link add veth-host type veth peer name veth-vpn
sudo ip link set veth-vpn netns vpn-only
sudo ip addr add 192.168.99.1/24 dev veth-host
sudo ip netns exec vpn-only ip addr add 192.168.99.2/24 dev veth-vpn
sudo ip link set veth-host up
sudo ip netns exec vpn-only ip link set veth-vpn up
sudo ip netns exec vpn-only ip link set lo up

# Add a default route so WireGuard can reach its endpoint through the host
# (the host also needs the IP forwarding and MASQUERADE setup from the NAT
# section above, applied to the 192.168.99.0/24 subnet)
sudo ip netns exec vpn-only ip route add default via 192.168.99.1

# Start a WireGuard interface inside the namespace
sudo ip netns exec vpn-only wg-quick up /etc/wireguard/wg0.conf

# Now run your application inside the namespace — it can only route through WireGuard
sudo ip netns exec vpn-only firefox
sudo ip netns exec vpn-only curl -s https://ifconfig.me  # shows VPN IP

Once wg-quick brings up wg0 (assuming AllowedIPs = 0.0.0.0/0), its policy-routing rules send all of the namespace's traffic into the tunnel. If the handshake fails, packets are still routed into wg0 and dropped rather than silently falling back to the plain uplink, and none of this depends on iptables rules that could be accidentally flushed. For a stricter kill switch, create the WireGuard interface in the host namespace and move it in with ip link set wg0 netns vpn-only: the tunnel is then the only non-loopback interface inside the namespace, and any WireGuard failure means no connectivity at all.

Network Testing Without Affecting the Host

Namespaces let you build complex test topologies entirely in software, without touching the host's actual network configuration.

# Create three namespaces simulating router, client, server
sudo ip netns add router
sudo ip netns add client
sudo ip netns add server

# Connect client to router
sudo ip link add cl-rt0 type veth peer name cl-rt1
sudo ip link set cl-rt0 netns client
sudo ip link set cl-rt1 netns router

# Connect server to router
sudo ip link add sv-rt0 type veth peer name sv-rt1
sudo ip link set sv-rt0 netns server
sudo ip link set sv-rt1 netns router

# Assign addresses and bring interfaces up
sudo ip netns exec client ip addr add 10.0.1.2/24 dev cl-rt0
sudo ip netns exec router ip addr add 10.0.1.1/24 dev cl-rt1
sudo ip netns exec router ip addr add 10.0.2.1/24 dev sv-rt1
sudo ip netns exec server ip addr add 10.0.2.2/24 dev sv-rt0

sudo ip netns exec client ip link set cl-rt0 up
sudo ip netns exec router ip link set cl-rt1 up
sudo ip netns exec router ip link set sv-rt1 up
sudo ip netns exec server ip link set sv-rt0 up

# Enable routing in the router namespace
sudo ip netns exec router sysctl -w net.ipv4.ip_forward=1

# Add routes
sudo ip netns exec client ip route add default via 10.0.1.1
sudo ip netns exec server ip route add default via 10.0.2.1

# Test: client can ping server through the router namespace
sudo ip netns exec client ping -c 3 10.0.2.2

All of this runs on a single machine without touching its real routing table or interfaces. You can tear it all down with three ip netns delete commands.

Named Namespaces vs Anonymous Namespaces

The ip netns add command creates named namespaces — entries under /run/netns/. These persist as long as the file exists, regardless of whether any process is using them. You can enter and leave them freely with ip netns exec.

Anonymous namespaces are created by calling clone(CLONE_NEWNET) or unshare -n without binding to a file. They exist only as long as at least one process or file descriptor holds a reference to them.

# Create an anonymous namespace for the current shell session
# (exits when the shell exits)
sudo unshare --net bash

# From inside: confirm it is a fresh namespace
ip link
# Only lo, and it is DOWN

Container runtimes typically start with an anonymous namespace created by clone(CLONE_NEWNET) and then use nsenter or setns(2) to let other processes join. The file under /run/netns/ is a bind mount of the namespace's /proc/<pid>/ns/net entry; it holds a reference that keeps the namespace alive and gives it a name for tooling purposes.

To bind an anonymous namespace to a name (pin it), you can:

# Create a bind mount point
sudo touch /run/netns/pinned

# Bind the namespace of a process (by PID) to that path
sudo mount --bind /proc/<pid>/ns/net /run/netns/pinned

# Recent iproute2 (5.8+) can do both steps in one command:
sudo ip netns attach pinned <pid>

In practice, ip netns add handles all of this transparently.

Inspecting Namespaces

Using lsns

The lsns command (from the util-linux package) lists all namespaces visible from the current process, including their type, inode, the number of processes in each, and the lowest-numbered PID using each:

sudo lsns -t net

Example output:

        NS TYPE NPROCS   PID USER COMMAND
4026531992 net     150     1 root /sbin/init
4026532208 net       1  8421 root ip netns exec blue bash
4026532341 net       3 12044 root /usr/bin/podman run ...

Each row is a distinct network namespace. The NS column is the inode number.
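You can match an lsns row to a specific process by printing the bare inode number of its /proc symlink: stat -L dereferences the link and %i prints the inode, which is the same number that appears in the NS column and inside the net:[...] form. This works without root for your own processes:

```shell
# Bare inode number of the current shell's network namespace
inode=$(stat -Lc %i /proc/$$/ns/net)
echo "inode: $inode"

# The readlink form embeds the same number: net:[<inode>]
readlink /proc/$$/ns/net
```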

Using /proc/<pid>/ns/net

Every running process exposes its namespace membership through the /proc filesystem:

# Check which namespace a process is in
ls -la /proc/$(pgrep firefox)/ns/net
# lrwxrwxrwx 1 user user 0 Mar 27 10:00 /proc/12345/ns/net -> net:[4026532208]

# Compare two processes — same inode = same namespace
# (reading another user's /proc/<pid>/ns/net requires root)
sudo readlink /proc/1/ns/net
sudo readlink /proc/$(pgrep dockerd)/ns/net

# Enter a process's namespace without knowing its name
sudo nsenter --net=/proc/12345/ns/net ip route

nsenter is the low-level tool that container runtimes use internally. It calls setns(2) to join an existing namespace, then executes the given command.

How FastSox Uses Network Namespaces

FastSox, developed by OneDotNet Ltd, uses network namespaces as a core primitive in its Gateway container architecture. Each Gateway container — whether running the VRouter, WireGuard, or combo protocol stack — runs inside a Podman container with its own network namespace.

This gives FastSox hard isolation guarantees between tenants: a packet from one tenant's tunnel cannot be forwarded to another tenant's namespace through any kernel path, because the routing tables and interfaces are physically separated at the namespace level. There is no firewall rule that could be misconfigured to allow cross-tenant leakage — the namespaces enforce the boundary at the kernel data structure level.

The same veth-plus-NAT pattern described in this article is how FastSox's Host Agent wires up each Gateway container to the host's uplink interface during provisioning.

Summary

Linux network namespaces are one of the most powerful primitives in the Linux kernel. The key points to remember:

  • Each namespace has its own interfaces, routing table, iptables rules, and sockets.
  • ip netns add/list/delete/exec are the everyday management commands.
  • veth pairs are the standard cable between namespaces.
  • IP forwarding + iptables masquerade gives a namespace internet access.
  • Named namespaces live in /run/netns/; anonymous namespaces live only as long as a process holds a reference.
  • lsns -t net and /proc/<pid>/ns/net are the right tools for inspection.
  • Container runtimes (Docker, Podman) and VPN isolation tools are built entirely on this primitive.

For a deeper look at how virtual networks span multiple hosts, see our related article: What Is VXLAN and How to Use It.

#linux #netns #networking #containers #iproute2 #advanced
