@taslabs-net
Created August 20, 2025 03:15

How I Brought IPv6 to My Entire Network Using Just One IPv4 Address

"IPv6 is the future" for over a decade. Yet here we are in 2025, and many of us are still stuck with limited IPv4 addresses while our ISPs drag their feet on IPv6 deployment. I decided to stop waiting.

This is the story of how I deployed production-ready IPv6 across an entire network using provider-independent address space, Cloudflare Magic Transit, and a single IPv4 address. No ISP support required.

Why I Needed This Solution

My situation might sound familiar:

  • Single public IPv4 address from my ISP
  • No native IPv6 support (and no timeline for it)
  • Multiple services that needed to be globally accessible
  • Growing network with containers, VMs, and IoT devices all hungry for addresses
  • Security concerns about exposing management interfaces

The goal was clear: achieve global IPv6 connectivity with enterprise-grade security, using existing infrastructure.

The Architecture That Made It Possible

Example architecture:

Internet (IPv6 Traffic) 
    ↓
Cloudflare Edge (AS13335)
    ├── BGP announcement of the /40 prefix
    ├── DDoS protection
    └── Magic Firewall filtering
    ↓
GRE Tunnel (IPv6 encapsulated over IPv4)
    ↓
My Router (single IPv4: 203.0.113.1)
    ↓
Internal Network (full IPv6)
    ├── Proxmox cluster
    ├── Containers
    ├── VMs
    └── IoT devices

The beauty of this setup? From the internet's perspective, the entire network sits behind Cloudflare's massive global infrastructure. From the network's perspective, every device has a real, globally routable IPv6 address.

The Implementation Journey

Step 1: Getting IPv6 Space

For this deployment, I'm using a /40 IPv6 allocation. That's 16,777,216 /64 subnets, far more address space than any conceivable growth will exhaust. The key advantage is that this address space is provider-independent and fully portable.
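To put the size of a /40 in perspective, here is a short sketch using Python's standard ipaddress module (the documentation prefix 2001:db8::/40 stands in for the real allocation):

```python
import ipaddress

# Documentation prefix as a stand-in for the real /40 allocation
block = ipaddress.IPv6Network("2001:db8::/40")

# Number of /48 "site" prefixes and /64 subnets the /40 contains
sites = 2 ** (48 - block.prefixlen)
subnets = 2 ** (64 - block.prefixlen)
print(sites)    # 256
print(subnets)  # 16777216
```

Even handing a full /48 to every building on a campus would only use a fraction of the space.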

Step 2: Building the Tunnel

The magic happens through a GRE tunnel between my router and Cloudflare:

Tunnel Configuration:
  Local: Single IPv4 (203.0.113.1)
  Remote: Cloudflare endpoint (192.0.2.1)
  MTU: 1420 (optimized for IPv6-over-IPv4)
  Transit: 10.0.0.0/31 (IPv4) + IPv6 link-local

MTU is set to 1420 to account for the GRE encapsulation overhead and prevent fragmentation issues.
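The arithmetic behind that number is straightforward; a sketch of the overhead calculation (1420 is simply a conservative choice below the theoretical ceiling):

```python
ETHERNET_MTU = 1500  # standard MTU on the underlying IPv4 path
IPV4_HEADER = 20     # outer IPv4 header added by the tunnel
GRE_HEADER = 4       # basic GRE header with no optional fields

# Largest inner packet that fits without fragmenting the outer packet
gre_payload_max = ETHERNET_MTU - IPV4_HEADER - GRE_HEADER
print(gre_payload_max)  # 1476

# 1420 sits below that ceiling, leaving headroom for optional GRE
# fields (key, sequence number) and any additional encapsulation
assert 1420 <= gre_payload_max
```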

Verification:

# Check tunnel status
$ ip tunnel show gre1
gre1: gre/ip remote 192.0.2.1 local 203.0.113.1 ttl 255

# Verify MTU
$ ip link show gre1
5: gre1@NONE: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN
    link/gre 203.0.113.1 peer 192.0.2.1

# Test connectivity to Cloudflare
$ ping6 -c 3 2001:db8:cf::1
PING 2001:db8:cf::1(2001:db8:cf::1) 56 data bytes
64 bytes from 2001:db8:cf::1: icmp_seq=1 ttl=64 time=14.2 ms
64 bytes from 2001:db8:cf::1: icmp_seq=2 ttl=64 time=14.5 ms
64 bytes from 2001:db8:cf::1: icmp_seq=3 ttl=64 time=14.1 ms

--- 2001:db8:cf::1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 14.1/14.3/14.5/0.2 ms

Step 3: Distributing IPv6 Addresses

Once the tunnel was up, I had two distribution strategies:

For directly connected devices:

  • Static assignments for servers (Proxmox nodes at ::12, ::13, ::14)
  • SLAAC for containers and VMs
  • Each device gets a globally unique address from the /64
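Under SLAAC, each host derives its own interface ID from its MAC address using modified EUI-64. A minimal sketch of that derivation; the MAC below is hypothetical, chosen for illustration:

```python
import ipaddress

def slaac_address(prefix: str, mac: str) -> str:
    """Build the modified EUI-64 address a host autoconfigures via SLAAC."""
    b = bytes(int(octet, 16) for octet in mac.split(":"))
    # Insert ff:fe between the OUI and device halves of the MAC,
    # then flip the universal/local bit of the first byte (RFC 4291)
    iid = bytes([b[0] ^ 0x02]) + b[1:3] + b"\xff\xfe" + b[3:6]
    network = ipaddress.IPv6Network(prefix)
    return str(network[int.from_bytes(iid, "big")])

# Hypothetical MAC; yields interface ID f0a3:8bff:fe21:4567
print(slaac_address("2001:db8:1::/64", "f2:a3:8b:21:45:67"))
```

Note that many modern systems also generate rotating privacy addresses alongside the EUI-64 one.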

Testing SLAAC autoconfiguration:

# On a container/VM
$ ip -6 addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
    inet6 2001:db8:1::f0a3:8bff:fe21:4567/64 scope global dynamic
       valid_lft 86389sec preferred_lft 14389sec
    inet6 fe80::f0a3:8bff:fe21:4567/64 scope link
       valid_lft forever preferred_lft forever

# Verify global connectivity
$ curl -6 ifconfig.co
2001:db8:1::f0a3:8bff:fe21:4567

# Traceroute showing path through Cloudflare
$ traceroute6 google.com
traceroute to google.com (2607:f8b0:4005:814::200e), 30 hops max, 80 byte packets
 1  gateway (2001:db8:1::1)  0.521 ms  0.498 ms  0.487 ms
 2  cloudflare-tunnel (2001:db8:cf::1)  14.234 ms  14.225 ms  14.213 ms
 3  * * *
 4  2400:cb00:71:1024::a29e:5f12  15.987 ms  15.975 ms  15.963 ms
 5  2001:4860:0:1::5eba  16.234 ms  16.223 ms  16.210 ms
 6  2001:4860:0:1::3097  17.456 ms  17.234 ms  17.123 ms
 7  lax28s01-in-x0e.1e100.net (2607:f8b0:4005:814::200e)  16.897 ms  16.885 ms  16.873 ms

For downstream networks:

  • DHCPv6-PD hands out /60 prefixes to other routers
  • Each router automatically configures its own /64 networks
  • Zero manual configuration required
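The prefix arithmetic is easy to sketch with Python's ipaddress module (the delegated prefix is illustrative):

```python
import ipaddress

# Prefix handed to a downstream router via DHCPv6-PD
delegated = ipaddress.IPv6Network("2001:db8:100::/60")

# A /60 delegation gives the router 16 /64 LAN networks to assign
lans = list(delegated.subnets(new_prefix=64))
print(len(lans))          # 16
print(lans[1], lans[2])   # 2001:db8:100:1::/64 2001:db8:100:2::/64
```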

DHCPv6-PD in action:

# On downstream router
$ show dhcpv6-pd status
DHCPv6 Prefix Delegation Status:
  Prefix: 2001:db8:100::/60
  Valid Lifetime: 86400 seconds
  Preferred Lifetime: 43200 seconds
  Renewal in: 21543 seconds

# Automatic network creation
$ ip -6 route show | grep 2001:db8:100
2001:db8:100:1::/64 dev lan0 proto kernel metric 256
2001:db8:100:2::/64 dev lan1 proto kernel metric 256
2001:db8:100:3::/64 dev guest proto kernel metric 256

Step 4: Securing Everything at the Edge

Here's where Cloudflare Magic Firewall becomes the hero. Instead of managing firewall rules on my router, I set one simple rule at Cloudflare's edge:

Expression: (ip.dst in {ipv6-prefix::/40} and tcp.dstport != 443)
Action: Block

This means:

  • ✅ HTTPS traffic passes through
  • ❌ SSH, management UIs, and everything else blocked at Cloudflare
  • ✅ The network never sees unwanted traffic
  • ✅ Automatic DDoS protection included

No more worrying about exposing Proxmox's web UI or SSH to the internet—they're simply not reachable from outside. As needs evolve, security rules can be selectively relaxed at the edge to allow specific services through, or made more granular internally for fine-tuned access control.
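The rule's logic is simple enough to express as a predicate. This is a sketch of the decision, not Cloudflare's actual engine, and the protected prefix is illustrative:

```python
import ipaddress

# Stand-in for the announced /40
PROTECTED = ipaddress.IPv6Network("2001:db8::/40")

def edge_allows(dst: str, proto: str, dst_port: int) -> bool:
    """Mirror the Magic Firewall rule: only TCP/443 reaches the prefix."""
    if ipaddress.ip_address(dst) not in PROTECTED:
        return True  # traffic to other destinations is out of scope
    # Non-TCP traffic and any port other than 443 is dropped at the edge
    return proto == "tcp" and dst_port == 443

print(edge_allows("2001:db8:1::100", "tcp", 443))  # True: HTTPS passes
print(edge_allows("2001:db8:1::12", "tcp", 22))    # False: SSH blocked
```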

Testing the firewall:

# From external network - SSH blocked
$ ssh -6 user@2001:db8:1::12
ssh: connect to host 2001:db8:1::12 port 22: Connection timed out

# From external network - HTTPS allowed
$ curl -6 -I https://[2001:db8:1::100]
HTTP/2 200
server: nginx/1.24.0
date: Mon, 19 Aug 2025 14:23:45 GMT
content-type: text/html

# From internal network - Everything works
$ ssh user@2001:db8:1::12
Welcome to Ubuntu 22.04.3 LTS
Last login: Mon Aug 19 14:20:13 2025 from 2001:db8:1::50

# Check Cloudflare logs showing blocked attempts
$ cf-logs --filter="action==block" --last=1h
2025-08-19T14:15:23Z BLOCK TCP 2001:470:1f0b::dead:beef -> 2001:db8:1::12:22 (SSH)
2025-08-19T14:16:45Z BLOCK TCP 2001:470:1f0b::bad:cafe -> 2001:db8:1::13:8006 (Proxmox UI)
2025-08-19T14:18:12Z BLOCK TCP 2001:470:1f0b::dead:f00d -> 2001:db8:1::14:3389 (RDP)

Step 5: Gaining Visibility

To understand what's happening on the network, I configured NetFlow export to Cloudflare:

NetFlow Configuration:
  Version: IPFIX (NetFlow v10)
  Collector: Cloudflare endpoint:2055
  Sampling: 1:1 (all packets)
  Result: Complete traffic visibility

Now you can see exactly what's flowing through the network, identify patterns, and troubleshoot issues with real data.
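As an illustration of what that visibility enables, here is a sketch that aggregates flow records into top talkers. The records and tuple layout are made up; a real deployment would parse IPFIX with a proper collector:

```python
from collections import defaultdict

# Hypothetical (src, dst, proto, bytes) flow records,
# roughly as a collector might export them
flows = [
    ("2001:db8:1::100", "2607:f8b0:4005::200e", "tcp", 45_200_000),
    ("2001:db8:1::100", "2607:f8b0:4005::200e", "tcp", 1_000_000),
    ("2001:db8:1::12",  "2001:4860:4860::8888", "udp", 8_700_000),
]

# Sum bytes per source address
totals = defaultdict(int)
for src, dst, proto, nbytes in flows:
    totals[src] += nbytes

# Print the top talkers, largest first
top = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
for src, nbytes in top[:5]:
    print(f"{src:<30} {nbytes / 1e6:.1f} MB")
```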

Sample NetFlow data from Cloudflare dashboard:

# Top talkers by traffic volume
$ cf-analytics --type=netflow --top=5
Source                          Destination                    Protocol  Bytes      Flows
2001:db8:1::100                2607:f8b0:4005::200e          TCP       45.2 MB    1,234
2001:db8:1::101                2606:4700:3037::ac43:8b73     TCP       23.1 MB      892
2001:db8:1::12                 2001:4860:4860::8888          UDP        8.7 MB      456
2001:db8:1::50                 2620:fe::fe                   UDP        3.2 MB      234
2001:db8:1::200                2606:4700::6812:1a65          TCP        2.1 MB      123

# Traffic by protocol
$ cf-analytics --type=protocol --period=24h
Protocol    Flows    Bytes       Percentage
TCP         8,234    823.4 MB    72.3%
UDP         2,456    234.1 MB    20.6%
ICMPv6        892     45.2 MB     4.0%
Other         234     35.1 MB     3.1%

# Current active connections
$ cf-analytics --type=active --limit=10
Proto  Source:Port                         Dest:Port                          State      Duration
TCP    [2001:db8:1::100]:45234      ->    [2607:f8b0:4005::200e]:443       ESTABLISHED  00:12:34
TCP    [2001:db8:1::101]:52341      ->    [2606:4700:3037::ac43:8b73]:443  ESTABLISHED  00:08:21
UDP    [2001:db8:1::12]:53421       ->    [2001:4860:4860::8888]:53        ACTIVE       00:00:02
TCP    [2001:db8:1::50]:38291       ->    [2620:149:af0::10]:443           TIME_WAIT    00:00:45

Technical Reference

For those who want the deep dive:

  • Stack: Router with GRE support, Cloudflare Magic Transit, /40 IPv6 allocation
  • Protocols: GRE (RFC 2784), DHCPv6-PD (RFC 3633), SLAAC (RFC 4862)
  • Monitoring: NetFlow v10/IPFIX (RFC 7011)
  • Security: Stateless filtering at edge, local stateful backup

Performance Metrics

# Latency comparison
$ mtr -6 -r -c 100 google.com
HOST                                       Loss%   Avg   Best  Wrst  StDev
1. gateway (2001:db8:1::1)                 0.0%   0.5   0.3   0.8    0.1
2. cloudflare-tunnel (2001:db8:cf::1)      0.0%  14.3  14.1  15.2    0.2
3. cloudflare-edge                         0.0%  14.8  14.5  16.1    0.3
4. google-peer                             0.0%  16.2  15.9  18.3    0.4
5. lax28s01-in-x0e.1e100.net               0.0%  16.9  16.5  19.2    0.5

# Bandwidth test
$ iperf3 -6 -c speedtest.example.net
Connecting to host speedtest.example.net, port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec  1.09 GBytes   938 Mbits/sec  sender
[  5]   0.00-10.00  sec  1.09 GBytes   935 Mbits/sec  receiver

# MTU path discovery
$ tracepath6 -n 2001:4860:4860::8888
 1?: [LOCALHOST]                      pmtu 1420
 1:  2001:db8:1::1                    0.543ms 
 2:  2001:db8:cf::1                   14.234ms 
 3:  2001:db8:cf::1                   14.186ms pmtu 1420
 4:  2400:cb00:71:1024::a29e:5f12     15.432ms 
 5:  2001:4860:0:1::5eba              16.123ms 
 6:  2001:4860:4860::8888             16.897ms reached
     Resume: pmtu 1420 hops 6 back 6