When I came back to China in 2018, I learned the hard way that the network carriers don’t give out public IPv4 addresses anymore. Now, you’re stuck behind their NAT, using the same IP address as everyone else, forced to live that peasant life.

I mean, sure, it’s not a big problem for 95% of Internet users out there. This system works just fine for browsing major websites and online services. It’s only when you want to *be* a website or online service yourself that you run into these network problems.

So, you want to host your own content. Too bad, port forwarding won’t work because you are double-NATted. So what do you do?

Well, you can try the reverse-SSH tutorial I posted two years ago. But that approach comes with its own set of problems. For one, I never managed to achieve the full bandwidth my network carrier gave me; SSH tunneling carries real overhead, and it showed in speed tests. The method was also prone to failure, crashing at the most inopportune times. I knew I had to come up with something else.

It was only when I saw this article that I was determined to try this approach myself. Major thanks to Jordan Crawford for the original post. This post goes into how I set it up so I can help others and so that I can come back to it should my setup ever go belly-up in the future.

So what is tinc?

tinc is VPN software, just like OpenVPN and IPsec. However, instead of routing everything through one central server, tinc forms a mesh that grows with your use, preferring to send data directly to the recipient. This model does mean there is some configuration and set-up involved, but it’s not that hard once you get used to it.

tinc also has a number of benefits, such as automatic reconnects after a connection failure, a small footprint and fast transfers. It checked all of my boxes, so I decided to give it a try.

Installing tinc

Go to any VPS provider and rent a VPS. I chose DigitalOcean, but popular choices include Linode and Vultr. Whichever VPS gives you a public IPv4 address will work with this tutorial.

Install tinc. This varies by distribution, but on Ubuntu 18.04.3 I just had to run:

sudo apt update
sudo apt install tinc -y

Run tincd --help to make sure you have it installed correctly. Done? Next steps!

Now you need the tinc client on your home server, or whatever machine you’re trying to break free from China’s double-NAT situation (or any other double-NAT situation, for that matter). I have an Unraid home server, so I chose to use a Docker container. I used this image from JensErat.
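For reference, running the client in Docker looks roughly like this. This is only a sketch: the mount path is an Unraid-style example, and the arguments after the image name are assumptions based on tincd’s own flags, so check the image’s README for its exact entrypoint. The container needs NET_ADMIN and /dev/net/tun to be able to create the VPN interface:

```shell
# Sketch: tinc client in Docker. The entrypoint arguments are an
# assumption (tincd-style flags); consult the image's README first.
docker run -d --name tinc \
  --net=host \
  --cap-add NET_ADMIN \
  --device /dev/net/tun \
  -v /mnt/user/appdata/tinc:/etc/tinc \
  jenserat/tinc -n homelab -D -d3
```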

Configuring tinc on the server

We’ll now configure the public server. tinc is mesh-based, which means we need to make sure the two computers (or servers, ahem) know about each other.

Before we begin, understand that tinc keeps all configuration files under /etc/tinc/. Sound good?

Log into the public endpoint (the VPS that you just rented). Go to /etc/tinc. We now need to decide on a netname. The netname just distinguishes your tinc VPN from other tinc instances. It is important that this remains the same across the two machines. I chose something like homelab.

Run the following command:

sudo mkdir -p /etc/tinc/homelab/hosts

This creates all of the directories required for this instance of tinc. Now, create and edit /etc/tinc/homelab/tinc.conf. This file will hold basic information about the VPN:

Name = vps
AddressFamily = ipv4
Interface = tun0

Name identifies this machine within the VPN; I just chose vps for simplicity. Interface tells tinc to create a network interface called tun0. We won’t go into IPv6 since this is all about getting a public IPv4 address. Save and exit.

Now we need to create the host file for this machine. Create and edit /etc/tinc/homelab/hosts/vps:

Address = 203.0.113.10
Subnet = 10.0.0.1/32

This bit is *extremely* important. First, get the public IPv4 address of the VPS, and enter it in the Address field (203.0.113.10 above is only a placeholder). You can also point a domain at the IP and then write the domain here instead, for example test.example.com. You NEED to correctly set this, or else the machines will not be able to connect.

For the Subnet field, we’re using a /32 address block, which assigns this machine a single internal VPN address: 10.0.0.1. Any private address works; this tutorial uses 10.0.0.1 for the VPS and 10.0.0.2 for the home server. You do not usually need to change this.

Save and exit. Now we need to generate the private keys.

Execute this command:

sudo tincd -n homelab -K4096

Make sure to replace homelab if you chose a different netname earlier. tincd will generate the proper keys and put them in your configuration directory.
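If it worked, tincd saves the new private key as /etc/tinc/homelab/rsa_key.priv and appends the matching public key to the host file hosts/vps. That is why exchanging host files later is enough for the nodes to authenticate each other. A quick sanity check (paths assume the homelab netname from above):

```shell
# Confirm the private key exists and the public key was
# appended to this machine's host file.
sudo ls -l /etc/tinc/homelab/rsa_key.priv
sudo grep -c "BEGIN RSA PUBLIC KEY" /etc/tinc/homelab/hosts/vps
```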

We need to create two files: /etc/tinc/homelab/tinc-up and /etc/tinc/homelab/tinc-down. These scripts tell tinc how to bring the VPN interface up and down whenever the daemon starts and stops.

For /etc/tinc/homelab/tinc-up write the following:

ifconfig $INTERFACE 10.0.0.1 netmask 255.255.255.0

And for /etc/tinc/homelab/tinc-down:

ifconfig $INTERFACE down

Now let’s make both scripts executable:

sudo chmod 755 /etc/tinc/homelab/tinc-*

Now we need to set up our clients.

Configuring tinc on the clients

The process is mostly the same, so I’ll move through it quickly. This time, the machine will be named homeserver.

I’ll assume tinc is already installed. Run:

sudo mkdir -p /etc/tinc/homelab/hosts

Again, replace homelab… you know the drill.

Let’s edit the VPN instance configuration file, or /etc/tinc/homelab/tinc.conf:

Name = homeserver
AddressFamily = ipv4
Interface = tun0
ConnectTo = vps

Pretty self-explanatory, right? Save and exit!

Now the host configuration. Edit /etc/tinc/homelab/hosts/homeserver:

Subnet = 10.0.0.2/32

We’re still using the same /32 scheme, but this machine gets the address 10.0.0.2 in the internal VPN range. Some sharp readers may have noticed that this file has no Address directive. That’s because it doesn’t need one: the client connects out to the server endpoint, and once the connection is established the nodes behave like a mesh.

Again, now we need to generate the keys:

sudo tincd -n homelab -K4096

Done? Cool. Now we need to create the /etc/tinc/homelab/tinc-up and /etc/tinc/homelab/tinc-down files again. Take note, though: the internal address is *different* here, or you will end up with a broken connection!

For /etc/tinc/homelab/tinc-up:

ifconfig $INTERFACE 10.0.0.2 netmask 255.255.255.0

And for /etc/tinc/homelab/tinc-down:

ifconfig $INTERFACE down

Come on, we’ve done this before, almost there! Let’s make the scripts executable:

sudo chmod 755 /etc/tinc/homelab/tinc-*

What’s next?

Connecting the server and clients

I told you that tinc is a mesh network, and that comes into play here. Before tinc will connect, you need to exchange the host files, which contain the public keys we generated earlier.

Use whatever method you want, but make sure that the files in the /etc/tinc/homelab/hosts/ directory are all synced up! That is, both machines should have both files, vps and homeserver, in that directory. This allows them to verify each other’s identity. I used SFTP here.
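If you have SSH access between the machines, two scp commands do the job. This is a sketch: the hostnames are placeholders for your own machines, and each command runs on the machine that owns the file being copied:

```shell
# On the VPS: push its host file to the home server
# ("homeserver.local" is a placeholder).
scp /etc/tinc/homelab/hosts/vps root@homeserver.local:/etc/tinc/homelab/hosts/

# On the home server: push its host file to the VPS
# ("vps.example.com" is a placeholder).
scp /etc/tinc/homelab/hosts/homeserver root@vps.example.com:/etc/tinc/homelab/hosts/
```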

Now we need to test the connection. Execute tincd in debug mode, but start with the public server first. Run:

sudo tincd -n homelab -D -d3

Then run the same command on the node, your private home server. You should see a few log lines indicating the connection succeeded. If not, go back over the configuration; maybe you made a typo.

Now, on your home server, try pinging 10.0.0.1, the VPS’s internal VPN address. It should respond. On your VPS’s console, try pinging 10.0.0.2. Your home server should respond.

Congratulations! Your tinc VPN works. What’s next? Now, we need to set it to run on boot. Make this file on each server, /etc/tinc/nets.boot:

# This file contains all names of the networks to be started on system startup.
homelab

Again, replace… you got it? Cool. Both servers must have this file.

Now run:

sudo service tinc start

On some machines you may need to run this instead of service:

sudo systemctl enable tinc
sudo systemctl start tinc
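Many tinc packages also ship a templated systemd unit that manages one netname each, independent of nets.boot. This is an assumption about your distribution’s packaging, so verify the unit exists before relying on it:

```shell
# Check whether the templated unit is available on your system
systemctl list-unit-files | grep tinc

# If tinc@.service exists, enable and start one instance per netname
sudo systemctl enable tinc@homelab
sudo systemctl start tinc@homelab
```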


Setting up Plex

Huge thanks to this GitHub Gist author! Install nginx on the VPS. Then, edit the nginx configuration at /etc/nginx/sites-available/default:

upstream plex-upstream {
    # the home server's tinc VPN address and Plex's default port
    server 10.0.0.2:32400;
}

server {
    listen 80;

    # server names for this server.
    # any requests that come in that match any of these names will use the proxy.
    server_name plex.example.com;

    # this is where everything cool happens (you probably don't need to change anything here):
    location / {
        # if a request to / comes in, redirect to the main plex page,
        # but only if it doesn't contain the X-Plex-Device-Name header.
        # this fixes a bug where you get permission issues when accessing the web dashboard.
        if ($http_x_plex_device_name = '') {
            rewrite ^/$ http://$http_host/web/index.html;
        }

        # set some headers and proxy stuff.
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_redirect off;

        # include Host header
        proxy_set_header Host $http_host;

        # proxy request to plex server
        proxy_pass http://plex-upstream;
    }
}
Note that the Gist edits the nginx.conf file directly, but I recommend you don’t do this. I made this mistake; the better solution is to edit the default file in sites-available, which is correctly symlinked into sites-enabled. Files there are included inside nginx’s http block, so they must not contain an http { } wrapper of their own.

But once you have this set up, Plex should now be working (assuming the home server runs Plex on port 32400). Make sure to disable Remote Access, and paste both the HTTP and HTTPS URLs of your domain, comma-separated, into the Custom server access URLs field:

http://plex.example.com:80,https://plex.example.com:443

Yes, you need to put in both for some reason for it to work.

Set up HTTPS with Let’s Encrypt

This is the quick rundown version of the full tutorial available here. Since we already have nginx mostly set up, we’ll run through only the steps we actually need.

Install certbot first:

sudo add-apt-repository ppa:certbot/certbot
sudo apt install python-certbot-nginx

Run this to obtain your certificate (replace the domain name with yours):

sudo certbot --nginx -d plex.example.com
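certbot sets up automatic renewal on its own, and you can confirm renewal will work with its standard dry-run subcommand:

```shell
# Simulate a certificate renewal without touching the real certificates
sudo certbot renew --dry-run
```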


Forwarding ports

Let’s enable IPv4 forwarding first. Log onto the VPS, and edit /etc/sysctl.conf. Change this line:

# net.ipv4.ip_forward = 1

to:

net.ipv4.ip_forward = 1

Then apply the change:

sudo sysctl -p

Now we need to set up iptables. Define what port you want to pass with this:

iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 8000 -j DNAT --to-destination 10.0.0.2
iptables -t nat -A PREROUTING -p udp -i eth0 --dport 8001 -j DNAT --to-destination 10.0.0.2

This allows TCP port 8000 to forward connections to port 8000 on the home server. The same goes for UDP port 8001. You can optionally add -s xxx.xxx.xxx.xxx or -s xxx.xxx.xxx.xxx/xx to redirect specific sources only.

But we also need one more step. Since port forwarding is messing with NAT, we need to add a couple more rules so that the network addresses are properly re-written:

iptables -t nat -A POSTROUTING -o tun0 -p tcp --dport 8000 -d 10.0.0.2 -j SNAT --to-source 10.0.0.1
iptables -t nat -A POSTROUTING -o tun0 -p udp --dport 8001 -d 10.0.0.2 -j SNAT --to-source 10.0.0.1
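One last thing: iptables rules entered like this don’t survive a reboot on their own. A common way to persist them on Debian/Ubuntu is the iptables-persistent package (the package name is distribution-specific, so adjust for yours):

```shell
# Install the persistence helper and save the current rules
# so they are restored automatically at boot (Debian/Ubuntu).
sudo apt install iptables-persistent -y
sudo netfilter-persistent save
```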