Setting up IPv6 for a HOST machine running Debian jessie and a GUEST VM, using a Proxmox 4.3 installation from the OVH template
The installation process gives us a ready-to-use machine which we can access via SSH (port 22) or the web interface (port 8006), with the network interfaces already set up. Unfortunately, as testing quickly shows, IPv6 on a vanilla Proxmox 4.3 delivered by OVH doesn't work out of the box.
ping6 ipv6.google.com
connect: Network is unreachable
First, let's check whether you have an IPv6 entry for vmbr0 in /etc/network/interfaces. It should look like this:
cat /etc/network/interfaces
iface vmbr0 inet6 static
address 2001:xxxx:xxxx:xxxx::
netmask 64
post-up /sbin/ip -f inet6 route add 2001:xxxx:xxxx:xxff:ff:ff:ff:ff dev vmbr0
post-up /sbin/ip -f inet6 route add default via 2001:xxxx:xxxx:xxff:ff:ff:ff:ff
pre-down /sbin/ip -f inet6 route del default via 2001:xxxx:xxxx:xxff:ff:ff:ff:ff
pre-down /sbin/ip -f inet6 route del 2001:xxxx:xxxx:xxff:ff:ff:ff:ff dev vmbr0
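Regardless of what the file says, it is worth checking what is actually applied at runtime, for example (assuming your bridge is called vmbr0, as in the OVH template):
ip -6 addr show dev vmbr0
If the address is there but ping6 still reports "Network is unreachable", the missing piece is the default route, which is what the rest of this post deals with.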
If you have the IPv6 configuration for vmbr0, then you are ready to go; just run these commands in the terminal to set the default route:
/sbin/ip -f inet6 route add 2001:xxxx:xxxx:xxff:ff:ff:ff:ff dev vmbr0
/sbin/ip -f inet6 route add default via 2001:xxxx:xxxx:xxff:ff:ff:ff:ff
This will set up proper routing, and you should be able to test your connectivity:
ping6 -c 3 ipv6.google.com
PING ipv6.google.com(par03s15-in-x0e.1e100.net) 56 data bytes
64 bytes from par03s15-in-x0e.1e100.net: icmp_seq=1 ttl=57 time=9.47 ms
64 bytes from par03s15-in-x0e.1e100.net: icmp_seq=2 ttl=57 time=11.1 ms
64 bytes from par03s15-in-x0e.1e100.net: icmp_seq=3 ttl=57 time=10.4 ms
--- ipv6.google.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 9.470/10.375/11.194/0.706 ms
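You can also confirm that both routes from the commands above actually landed in the routing table (the exact output depends on your prefix, so treat this only as a quick check):
ip -6 route show dev vmbr0
ip -6 route show default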
To sum up, the post-up commands from your /etc/network/interfaces are not being executed, and that's why the HOST has no IPv6 route set. To fix that, I have placed them in /etc/rc.local, just before exit 0, so it looks like this:
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
true > /etc/motd
if [ -e /etc/lsb-release ]
then
grep DISTRIB_DESCRIPTION /etc/lsb-release | sed 's/^DISTRIB_DESCRIPTION="\(.*\)"$/\1/' > /etc/motd
fi
uname -a >> /etc/motd
echo >> /etc/motd
echo "server : `cat /root/.mdg 2>/dev/null`" >> /etc/motd
echo "ip : `cat /etc/network/interfaces | grep "address" | head -n 1 | cut -f 2 -d " "`" >> /etc/motd
echo "hostname : `hostname`" >> /etc/motd
echo >> /etc/motd
/bin/cat /etc/motd > /etc/issue
# set the IPv6 routes here because the post-up phase in /etc/network/interfaces is not executed
/sbin/ip -f inet6 route add 2001:xxxx:xxxx:xxff:ff:ff:ff:ff dev vmbr0
/sbin/ip -f inet6 route add default via 2001:xxxx:xxxx:xxff:ff:ff:ff:ff
exit 0
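Two small caveats about this approach: rc.local only runs if it is executable, and because the script starts with #!/bin/sh -e, an ip route add that fails (for example, because the route already exists) would abort it before reaching exit 0. A defensive variant (the || true part is my addition, not something from the OVH template):
# make sure rc.local is executable (run once in a shell)
chmod +x /etc/rc.local
# variant for rc.local itself: don't let an already-existing route abort the '-e' script
/sbin/ip -f inet6 route add 2001:xxxx:xxxx:xxff:ff:ff:ff:ff dev vmbr0 || true
/sbin/ip -f inet6 route add default via 2001:xxxx:xxxx:xxff:ff:ff:ff:ff || true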
Without going into too many details: we need NDP proxying and IPv6 forwarding enabled in order to have our VMs connected to the outside world.
To do that we need to set net.ipv6.conf.default.proxy_ndp = 1 and net.ipv6.conf.default.forwarding = 1, and disable autoconfiguration (net.ipv6.conf.all.autoconf = 0). We do that in /etc/sysctl.conf:
net.ipv6.conf.all.autoconf = 0
net.ipv6.conf.default.autoconf = 0
net.ipv6.conf.vmbr0.autoconf = 0
net.ipv6.conf.all.accept_ra = 0
net.ipv6.conf.default.accept_ra = 0
net.ipv6.conf.vmbr0.accept_ra = 0
net.ipv6.conf.all.router_solicitations = 1
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.default.proxy_ndp = 1
net.ipv6.conf.all.proxy_ndp = 1
net.ipv4.ip_forward = 1
After we finish setting up the parameters in sysctl.conf, we need to run sysctl -p to load the new settings into the kernel.
sysctl -p
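To make sure the values were really picked up (and not overridden by another file), you can read a few of them back:
sysctl net.ipv6.conf.all.forwarding
sysctl net.ipv6.conf.all.proxy_ndp
sysctl net.ipv6.conf.vmbr0.accept_ra
Each one should print the value we configured above.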
And the final step: we need to set up the NDP proxy, which means for each IPv6 address of a VM running in our HOST environment we need to execute the following command (NOTE: 2001:xxxx:xxxx:xxxx::22 is an IPv6 address set on the VM):
ip -6 neigh add proxy 2001:xxxx:xxxx:xxxx::22 dev vmbr0
In this way we tell the system that we have a VM with the address 2001:xxxx:xxxx:xxxx::22 and that it is reachable via vmbr0. After setting the proxy entry we should be able to ping our VM from the Internet and also reach IPv6 addresses from the VM.
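If you stick with the manual approach and have more than one VM, a small loop saves typing and lets you verify the result; the addresses below are placeholders in the same 2001:xxxx:xxxx:xxxx:: convention used throughout this post:
for addr in 2001:xxxx:xxxx:xxxx::22 2001:xxxx:xxxx:xxxx::23; do
    ip -6 neigh add proxy "$addr" dev vmbr0
done
# list the proxy entries the kernel will answer NDP for
ip -6 neigh show proxy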
Obviously this is not a very practical approach, which is why we will install ndppd, which will do this for us.
wget http://debian.sdinet.de/jessie/main/ndppd/ndppd_0.2.4-1~sdinetG1_amd64.deb
dpkg -i ndppd_0.2.4-1~sdinetG1_amd64.deb
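Just to be sure the package installed cleanly (a generic dpkg check, nothing ndppd-specific):
dpkg -s ndppd | grep -E '^(Status|Version)'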
echo "proxy vmbr0 {
rule 2001:xxxx:xxxx:xxxx::/64 {
}
}">/etc/ndppd.conf
NOTE: 2001:xxxx:xxxx:xxxx:: is the main IPv6 address for vmbr0. After installing ndppd and creating the above config file, all we need to do is restart the ndppd daemon:
/etc/init.d/ndppd restart
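To confirm the daemon is actually running with the new config, a couple of generic checks are enough (the ping target assumes the VM address 2001:xxxx:xxxx:xxxx::1 configured in the next section):
ps aux | grep '[n]dppd'
# once the VM from the next section is up, test it from an external IPv6 host:
ping6 -c 3 2001:xxxx:xxxx:xxxx::1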
This is all we need to do in order to have IPv6 HOST and VM connectivity.
After setting up the HOST we can start the VM and configure its network interface with an address from the /64 range. Log in to your VM and set up the public interface with an IPv6 address:
/etc/network/interfaces
auto ens18
iface ens18 inet6 static
address 2001:xxxx:xxxx:xxxx::1
netmask 64
# Our HOST IPv6 address
gateway 2001:xxxx:xxxx:xxxx::
Once we have set the inet6 entry for the ens18 network interface, we are IPv6 ready, which means we can access the VM from the Internet and reach IPv6 addresses from the VM.
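A quick round-trip test from inside the VM confirms both directions; the first address is the HOST gateway from the config above:
ping6 -c 3 2001:xxxx:xxxx:xxxx::
ping6 -c 3 ipv6.google.com
If the HOST answers but the outside world doesn't, the NDP proxy on the HOST (ndppd, or the manual ip -6 neigh entries) is the first thing to re-check.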
I'm trying to implement an IPv4 NAT + IPv6 routing configuration. Something like this:
Then I'm using the Npd6 daemon to discover neighbor containers. It works, but only partially: the containers become reachable from the Internet (although they lose some packets), and from the inside they only work right after I run (for example)
traceroute6 2600::
and then, after a while (30 minutes or more), I cannot ping outside anymore. IPv4 with NAT works great instead. Any ideas?