Junxiao Shi

Enable IPv4 Access in EUserv IPv6-only VS2-free

This post was originally published on the yoursunny.com blog: https://yoursunny.com/t/2020/EUserv-IPv4/

EUserv is a virtual private server (VPS) provider in Germany.
Notably, they offer a container-based Linux server, VS2-free, free of charge.
VS2-free comes with one 1GHz CPU core, 1GB memory, and 10GB storage.
Although I already have more than enough servers to play with, who doesn't like some more computing resources for free?

There's one catch: the VS2-free is IPv6-only.
It neither has a public IPv4 address, nor offers NAT-based IPv4 access.
All you can have is a single /128 IPv6 address.

$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
546: eth0@if547: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether b2:77:4b:c0:eb:0b brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 2001:db8:6:1::6dae/128 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::5ed4:d66f:bd01:6936/64 scope link
       valid_lft forever preferred_lft forever

If I attempt to access an IPv4-only destination, a "Network is unreachable" error appears:

$ host lgger.nexusbytes.com
lgger.nexusbytes.com has address 46.4.199.225
$ ping -n -c 4 lgger.nexusbytes.com
connect: Network is unreachable

Not having IPv4 access severely restricts the usefulness of the VS2-free, because I would be unable to access many external resources that are not yet IPv6-enabled.
Is there a way to get some IPv4 access in the IPv6-only VS2-free vServer?

NAT64

Stateful NAT64 is a translation mechanism that allows IPv6-only clients to contact IPv4 servers using unicast UDP, TCP, or ICMP.
It relies on a dual-stack server, known as a NAT64 translator, to proxy packets between IPv6 and IPv4 networks.

There are a number of public NAT64 services in Europe that would enable IPv4 access from my server.
To use one, all I need to do is change the DNS settings on my server to point at the service's DNS64 resolvers:

$ sudoedit /etc/resolvconf/resolv.conf.d/base
    nameserver 2a01:4f9:c010:3f02::1
    nameserver 2a00:1098:2c::1
    nameserver 2a00:1098:2b::1

$ sudo resolvconf -u

Note that on a Debian 10 system with the resolvconf package, the proper way to change DNS servers is to edit /etc/resolvconf/resolv.conf.d/base and then execute resolvconf -u to regenerate /etc/resolv.conf.
If you modify /etc/resolv.conf directly, the changes will be overwritten during the next reboot.
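As a quick sanity check (my own habit, not a step from the original setup), the regenerated file can be inspected to confirm that the NAT64 resolvers are now in place:

$ grep nameserver /etc/resolv.conf
nameserver 2a01:4f9:c010:3f02::1
nameserver 2a00:1098:2c::1
nameserver 2a00:1098:2b::1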

After making the change, DNS responses for IPv4-only destinations contain additional IPv6 addresses that belong to the NAT64 translators, which relay the connection:

$ host lgger.nexusbytes.com
lgger.nexusbytes.com has address 46.4.199.225
lgger.nexusbytes.com has IPv6 address 2a00:1098:2c::1:2e04:c7e1
lgger.nexusbytes.com has IPv6 address 2a01:4f9:c010:3f02:64:0:2e04:c7e1
lgger.nexusbytes.com has IPv6 address 2a00:1098:2b::2e04:c7e1

$ ping -n -c 4 lgger.nexusbytes.com
PING lgger.nexusbytes.com(2a00:1098:2c::1:2e04:c7e1) 56 data bytes
64 bytes from 2a00:1098:2c::1:2e04:c7e1: icmp_seq=1 ttl=41 time=39.9 ms
64 bytes from 2a00:1098:2c::1:2e04:c7e1: icmp_seq=2 ttl=41 time=39.7 ms
64 bytes from 2a00:1098:2c::1:2e04:c7e1: icmp_seq=3 ttl=41 time=39.6 ms
64 bytes from 2a00:1098:2c::1:2e04:c7e1: icmp_seq=4 ttl=41 time=39.8 ms
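Notice how the synthesized AAAA records embed the original IPv4 address in the low 32 bits of each translator prefix: 46.4.199.225 written in hexadecimal is 2e04:c7e1, which appears at the tail of all three addresses. A throwaway one-liner (not part of any NAT64 tooling) illustrates the conversion:

$ printf '%02x%02x:%02x%02x\n' 46 4 199 225
2e04:c7e1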

It is easy to gain IPv4 access on the EUserv VS2-free container by using a public NAT64 service, but there are several drawbacks:

  • The IPv4 addresses of public NAT64 services are shared by many users. If any other user misbehaves, the shared IPv4 address of the NAT64 translator could be blocklisted by the destination IPv4 service.
  • The NAT64 translator could apply rate limits if it gets busy.
  • While we can contact an IPv4-only destination by its hostname, it is still not possible to contact a bare IPv4 address directly (a partial workaround follows after this list):
  $ ping 8.8.8.8
  connect: Network is unreachable
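A partial workaround for the last drawback: since the translator appears to embed the IPv4 address in a /96 prefix (as the synthesized addresses above suggest), the mapped IPv6 address can be constructed by hand and contacted directly. For example, 8.8.8.8 is 0808:0808 in hexadecimal, so something like the following should reach it through the translator, assuming the service allows it:

$ ping -n -c 4 2a00:1098:2c::1:808:808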

IPv4 NAT over VXLAN

To get true IPv4 access on an IPv6-only server, we need to create a tunnel between the IPv6-only server and a dual-stack server, and then configure Network Address Translation (NAT) on the dual-stack server.
Many people would think of using VPN software, such as OpenVPN or WireGuard.
However, a VPN is overkill here, because there is a lighter-weight solution: VXLAN.

VXLAN, or Virtual eXtensible Local Area Network, is a framework for overlaying virtualized layer 2 networks over layer 3 networks.
In our case, I can create a virtualized Ethernet (layer 2) network over an IPv6 (layer 3) network.
Then, I can assign IPv4 addresses to the virtual Ethernet adapters, in order to give IPv4 access to the previously IPv6-only VS2-free vServer.

I have a small dual-stack server in Germany, offered by Gullo's Hosting.
It is an OpenVZ 7 container.
It runs Debian 10, the same operating system as my VS2-free.
I will be using this server to share its IPv4 connectivity with the VS2-free.

In the examples below:

  • 2001:db8:473a:723d:276e::2 is the public IPv6 address of the dual-stack server.
  • 2001:db8:6:1::6dae is the public IPv6 address of the IPv6-only server.
  • 192.0.2.1 is the public IPv4 address of the dual-stack server.

After reverting the DNS changes from the previous section, I execute the following commands on the EUserv vServer to set up a VXLAN tunnel:

sudo ip link add vx84 type vxlan id 0 remote 2001:db8:473a:723d:276e::2 local 2001:db8:6:1::6dae dstport 4789
sudo ip link set vx84 mtu 1420
sudo ip link set vx84 up
sudo ip addr add 192.168.84.2/24 dev vx84
sudo ip route add 0.0.0.0/0 via 192.168.84.1
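Two quick checks I find useful here (verification habits, not steps from the original write-up): ip -d link confirms the tunnel parameters were accepted, and once the far end is configured in the next step, pinging the tunnel gateway confirms that packets actually flow through the VXLAN encapsulation:

$ ip -d link show vx84        # should report: vxlan id 0 remote ... local ... dstport 4789
$ ping -n -c 1 192.168.84.1   # works only after the dual-stack end below is set up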

Note that I reduced the MTU of the VXLAN tunnel interface from the default 1500 to 1420.
This is necessary to accommodate the overhead of the VXLAN encapsulation, so that the encapsulated packets still fit within the underlying interface's 1500-byte MTU.
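For reference, the encapsulation overhead on this tunnel adds up roughly as follows; 1420 simply leaves a little headroom below the 1430-byte maximum (my back-of-the-envelope breakdown, not from the original article):

outer IPv6 header   40 bytes
outer UDP header     8 bytes
VXLAN header         8 bytes
inner Ethernet      14 bytes
total overhead      70 bytes   (1500 - 70 = 1430 bytes left for the inner IPv4 packet)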

On the dual-stack server, I execute these commands to set up its end of the tunnel and enable NAT:

sudo ip link add vx84 type vxlan id 0 remote 2001:db8:6:1::6dae local 2001:db8:473a:723d:276e::2 dstport 4789
sudo ip link set vx84 mtu 1420
sudo ip link set vx84 up
sudo ip addr add 192.168.84.1/24 dev vx84
sudo iptables-legacy -t nat -A POSTROUTING -s 192.168.84.0/24 ! -d 192.168.84.0/24 -j SNAT --to 192.0.2.1
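One prerequisite the commands above assume: the dual-stack server must actually forward IPv4 packets between vx84 and its uplink. Depending on the provider's defaults, forwarding may already be enabled; if it is not, it can be turned on with sysctl (my addition, check the current value first):

sysctl net.ipv4.ip_forward            # 0 = forwarding disabled, 1 = enabled
sudo sysctl -w net.ipv4.ip_forward=1  # enable until reboot; persist via /etc/sysctl.conf if needed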

It's worth noting that the command for enabling NAT is iptables-legacy instead of iptables.
As it turns out, there are two variants of iptables that drive different kernel backends: the newer nf_tables backend and the legacy xtables backend.
Although both commands would succeed, only iptables-legacy is effective in an OpenVZ 7 container.
This had me scratching my head for a while.
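If you are unsure which backend a given binary talks to, the version banner tells you: on Debian 10, iptables 1.8 prints the backend name in parentheses.

$ iptables -V
iptables v1.8.2 (nf_tables)
$ iptables-legacy -V
iptables v1.8.2 (legacy)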

After this setup, I'm able to access IPv4 destinations from the IPv6-only server:

$ traceroute -n -q1 lgger.nexusbytes.com
traceroute to lgger.nexusbytes.com (46.4.199.225), 30 hops max, 60 byte packets
 1  192.168.84.1  23.566 ms
 2  *
 3  213.239.229.89  34.058 ms
 4  213.239.229.130  23.615 ms
 5  94.130.138.54  24.077 ms
 6  46.4.199.225  23.955 ms

In Wireshark, these packets would look like this:

Frame 5: 146 bytes on wire (1168 bits), 146 bytes captured (1168 bits)
Linux cooked capture v1
Internet Protocol Version 6, Src: 2001:db8:6:1::6dae, Dst: 2001:db8:473a:723d:276e::2
User Datagram Protocol, Src Port: 53037, Dst Port: 4789
Virtual eXtensible Local Area Network
Ethernet II, Src: b6:ab:7c:af:51:d1 (b6:ab:7c:af:51:d1), Dst: be:ce:c9:cf:a7:f3 (be:ce:c9:cf:a7:f3)
Internet Protocol Version 4, Src: 192.168.84.2, Dst: 46.4.199.225
User Datagram Protocol, Src Port: 50047, Dst Port: 33439
Data (32 bytes)

Make Them Persistent

The effect of these ip commands will be lost after a reboot.
Normally the VXLAN tunnel would be written into the ifupdown configuration, but as I discovered earlier, OpenVZ 7 reverts any modifications to the /etc/network/interfaces file.
Thus, I have to apply these changes dynamically using a systemd service.

The systemd service unit for the IPv6-only server is:

[Unit]
Description=VXLAN tunnel to vps9
After=network-online.target
Wants=network-online.target

[Service]
ExecStartPre=ip link add vx84 type vxlan id 0 remote 2001:db8:473a:723d:276e::2 local 2001:db8:6:1::6dae dstport 4789
ExecStartPre=ip link set vx84 mtu 1420
ExecStartPre=ip link set vx84 up
ExecStartPre=ip addr add 192.168.84.2/24 dev vx84
ExecStartPre=ip route add 0.0.0.0/0 via 192.168.84.1
ExecStart=true
RemainAfterExit=yes
ExecStopPost=ip link del vx84

[Install]
WantedBy=multi-user.target

The systemd service unit for the dual-stack server is:

[Unit]
Description=VXLAN tunnel to vps2
After=network-online.target
Wants=network-online.target

[Service]
ExecStartPre=ip link add vx84 type vxlan id 0 remote 2001:db8:6:1::6dae local 2001:db8:473a:723d:276e::2 dstport 4789
ExecStartPre=ip link set vx84 mtu 1420
ExecStartPre=ip link set vx84 up
ExecStartPre=ip addr add 192.168.84.1/24 dev vx84
ExecStartPre=iptables-legacy -t nat -A POSTROUTING -s 192.168.84.0/24 ! -d 192.168.84.0/24 -j SNAT --to 192.0.2.1
ExecStart=true
RemainAfterExit=yes
ExecStopPost=iptables-legacy -t nat -D POSTROUTING -s 192.168.84.0/24 ! -d 192.168.84.0/24 -j SNAT --to 192.0.2.1
ExecStopPost=ip link del vx84

[Install]
WantedBy=multi-user.target

On each server, the corresponding service unit file should be saved as /etc/systemd/system/vx84.service.
Then, I can enable the service unit with these commands:

sudo systemctl daemon-reload
sudo systemctl enable vx84
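Enabling the unit only arranges for it to run at boot; to bring the tunnel up right away without rebooting, the unit can also be started manually:

sudo systemctl start vx84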

Either way, the tunnel comes up automatically after a reboot:

$ ip addr show vx84
4: vx84: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether f2:4c:5d:6c:4b:25 brd ff:ff:ff:ff:ff:ff
    inet 192.168.84.2/24 scope global vx84
       valid_lft forever preferred_lft forever
    inet6 fe80::f04c:5dff:fe6c:4b25/64 scope link
       valid_lft forever preferred_lft forever

$ ping -c 4 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=57 time=28.9 ms
64 bytes from 1.1.1.1: icmp_seq=2 ttl=57 time=28.7 ms
64 bytes from 1.1.1.1: icmp_seq=3 ttl=57 time=28.9 ms
64 bytes from 1.1.1.1: icmp_seq=4 ttl=57 time=28.10 ms

Conclusion

This article describes two methods of gaining IPv4 access on an IPv6-only server such as the EUserv VS2-free.

  • Use a public NAT64 translator.
  • Establish a VXLAN tunnel to a dual-stack server, and then configure IPv4 addresses and NAT on the virtual Ethernet interfaces.

To work around the OpenVZ 7 limitation that modifications to /etc/network/interfaces do not persist, we use a systemd service unit to dynamically establish and tear down the VXLAN tunnel and related configuration.

Top comments (1)

Ericky Thierry:

Great article, gave me ideas on how to use an IPv6 only VPS that I have.

Could you tell me if it is possible to create a routing rule with iptables so that it uses only one of the several IPv6s available on the VPS as the outgoing address for requests?