For those who want to run Tailscale in their Docker containers but don't want to switch to images based on linuxserver.io, you can still run Tailscale as a sidecar container and use `network_mode: service:tailscale`.
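For anyone who hasn't tried it, the whole pattern is only a few lines of compose. A rough sketch (service names, image and auth key are placeholders; the `TS_*` variables are the ones the official tailscale/tailscale image documents, so check them against its README):

```yaml
services:
  tailscale:
    image: tailscale/tailscale:latest
    hostname: myapp                       # node name on the tailnet
    environment:
      - TS_AUTHKEY=tskey-auth-xxxxx       # pre-auth key from the admin console (placeholder)
      - TS_STATE_DIR=/var/lib/tailscale   # persist node identity across restarts
    volumes:
      - tailscale-state:/var/lib/tailscale
  myapp:
    image: myapp:latest                   # the actual workload (made-up image)
    network_mode: service:tailscale       # share the sidecar's network namespace
    depends_on:
      - tailscale
volumes:
  tailscale-state:
```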
I do that for my containers and it is incredibly useful for cross-container communication, especially for containers hosted on different dedicated servers.
https://mrpowergamerbr.com/us/blog/2023-03-20-untangling-you...
I run my game servers using `network_mode: service:tailscale`, and every time the game server restarts (or crashes), Tailscale permanently loses connectivity and needs to be recreated (a restart doesn't work).
To solve this I add another container that should never need to be restarted, and both the game server and Tailscale use that container's network. This is also the exact use case for Kubernetes' pause containers, so I just use the EKS pause image from the ECR Public Gallery.
Another tip I'd recommend is to run the Tailscale container with `TS_USERSPACE: 'false'` and `TS_DEBUG_FIREWALL_MODE: nftables` (since autodetection fails on my machine) and give it `CAP_NET_ADMIN`. This allows Tailscale to use a TUN device instead of userspace emulation, which is supposed to be more performant. But the clear benefit is that the game server sees everyone's Tailnet IP instead of 127.0.0.1.
In Thai: https://blog.whs.in.th/node/3676
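Concretely, the compose file for this pattern ends up shaped roughly like the sketch below (image names and tags are illustrative, not an exact setup):

```yaml
services:
  pause:
    image: registry.k8s.io/pause:3.9   # any pause image works (EKS one from ECR Public Gallery, etc.)
    restart: always                    # this container owns the shared network namespace
  tailscale:
    image: tailscale/tailscale:latest
    network_mode: service:pause        # join the pause container's namespace
    cap_add:
      - NET_ADMIN                      # required for kernel-mode networking
    devices:
      - /dev/net/tun                   # TUN device instead of the userspace netstack
    environment:
      - TS_AUTHKEY=tskey-auth-xxxxx    # placeholder
      - TS_USERSPACE=false
      - TS_DEBUG_FIREWALL_MODE=nftables
  game:
    image: itzg/minecraft-server       # placeholder for whatever game server you run
    network_mode: service:pause        # same namespace, so a game restart doesn't break Tailscale
    depends_on:
      - tailscale
```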
It'll work, but with userspace networking my Minecraft server sees everyone as 127.0.0.1. After disabling TS_USERSPACE I see each person's Tailnet IP. Tailscale doesn't surface this information anywhere (since node names are private), so once I have someone's IP address I can also use `tailscale ping` to check whether the connection is going through a relay or direct, which is helpful when debugging their latency.
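For anyone following along, the check can be run from inside the sidecar (container name and address below are placeholders):

```sh
# tailscale ping reports whether each pong arrives via a DERP relay or a direct path
docker exec tailscale tailscale ping 100.101.102.103
```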
I use quite a few linuxserver.io containers in my home stack on my Synology NAS, and they've been awesome. When I see that an image is from them, I know it'll be reliable and the setup will be straightforward and similar to the other containers I've already used.
Ironically, the one container I really wanted to use this Tailscale mod for was not from linuxserver.
I find it a bit annoying that almost all their images assume root access by default. Their init script does a bunch of things as root and only switches to a non-root user at the very last step before starting the main process, and only if some magic environment variable is set. If your infrastructure does not allow root users in containers, you can't use their images.
It's also too much magic for my liking. Some software distributed as a single executable binary gets packaged into an overcomplicated base image on top of another base image, when I could just copy the binary into a scratch image and call it a day. I understand the benefits when they have to manage tons of images at scale, but my life has been much easier with images packaged by myself or by the upstream projects.
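For a statically linked binary, "copy it into scratch" really is the whole Dockerfile, something like this (binary name is made up):

```dockerfile
# Nothing ships in the image except the binary itself.
FROM scratch
COPY myapp /myapp
ENTRYPOINT ["/myapp"]
```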
I am really impressed by what the tailscale folks have been building. I use their product suite regularly and have nothing but good things to say about it. I will be tinkering with this mod as well starting next week ;)
Not every container is based on the LinuxServer.io stack. I can't take any arbitrary container and use the docker mod and have it work.
As others have said, you can run a sidecar container and proxy your current containers through the sidecar, and into the Tailscale network. They are universal in that the docker containers can run on any docker host, not that they are guaranteed to mesh/drop in and run inside whatever random containers you already run. Not sure why I am being downvoted for asking a genuine question out of curiosity…
I would disagree that containers aren't supposed to run more than one process. It's just discouraged because a lot of people aren't well versed in the pitfalls of being PID 1. Fedora's toolbox is a great counterexample, as is systemd now being able to run as PID 1 in some container distros without much modification.
No question, just a thanks. I've read a lot of your stuff, and it's always incredibly insightful and clever. Thanks for being an amazing Internet citizen!
I wonder if this will fix the issue of a -N suffix being appended to new ephemeral servers that join the network.
For example, if you have a service named wiki and that container/instance gets restarted, it then reappears on your tailnet as wiki-1, making users unable to access it at wiki/.
Their official solution is to run a logout command before shutting down, but that's not always possible.
This is where mounting a state volume helps: if you persist the state, you don't need to make the containers ephemeral unless you expect them to move between hosts frequently. If that is the case, I'd love to hear more about your use case so that I can suggest a better alternative.
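In compose terms it's the same named-volume idea as in the sketches further up the thread:

```yaml
services:
  tailscale:
    image: tailscale/tailscale:latest
    environment:
      - TS_STATE_DIR=/var/lib/tailscale    # node identity survives restarts, so no re-auth and no -1 rename
    volumes:
      - tailscale-state:/var/lib/tailscale
volumes:
  tailscale-state:
```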
That is helpful to know for Docker containers, thank you.
The use case where we find the renaming most frustrating is typically when we start a cloud instance with a Tailscale setup script in its cloud-init (via Terraform). If we, say, change a parameter that requires Terraform to recreate that instance, the freshly started instance is given a `-1` name by Tailscale and the old instance shows up as offline.
I wish there were simply a --force-hostname option or something of that nature that tells Tailscale "if a host is authenticating with this name, give it that name; any older machines using that name should be kicked off."
This is really cool, I didn’t even know Docker mods existed. That’s the best kind of cool.
I wonder if the internals will be open sourced? I assume it's a pretty "simple" Go TCP proxy that listens on the tailnet instead of an open port. I had been thinking about writing one for our services at work, so maybe we can use this, but I'd prefer to build the binary directly into our containers.
One of my favorite applications of this is this little tool that turns WireGuard VPNs into SOCKS5 proxies (which you can selectively enable in your browser):
https://github.com/octeep/wireproxy
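If I remember the wireproxy README correctly, the config is just a WireGuard-style file with an extra [Socks5] section; roughly the following, with every key and endpoint a placeholder (treat the exact field names as from memory and check the repo):

```ini
[Interface]
Address = 10.200.200.2/32
PrivateKey = <client private key>

[Peer]
PublicKey = <server public key>
Endpoint = vpn.example.com:51820

[Socks5]
# point the browser's SOCKS proxy at this address
BindAddress = 127.0.0.1:25344
```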
This is really cool. Networking in general is full of quirks and of what people think of as "magic".
Full disclosure: I am the founder of Adaptive [1]. We use a technique similar to the VPN-exposed-as-SOCKS5-proxy one, but for accessing internal infrastructure resources.
[1] https://adaptive.live/
I think we had the same idea, but I didn't get to finish building mine. OIDC tokens being available in most CI systems these days is a nice building block.
Docker doesn't do mods. As the article says, this is possible due to s6 and s6-overlay, which are included with linuxserver.io docker images, combined with a set of scripts that set it all up. This does prevent your containers from being immutable.
All the code for LSIO images is available on their GitHub.
It looks like I need to regenerate the auth key every 90 days, which kind of kills this for me. I definitely don't want to have to update all my docker stuff every 90 days, and it's almost assuredly going to go offline right when I can't deal with it.
The trick is to persist the Tailscale state volume. The auth key is only used when setting up a particular client for the first time; once it's connected to your network, the auth key is irrelevant.
If you're doing this with ephemeral containers then yes, you'll need a way to roll auth keys. OAuth credentials don't expire, and Tailscale has a single-purpose command-line tool to get an auth key given OAuth credentials, so that can be a viable alternative.
https://tailscale.com/kb/1215/oauth-clients/#get-authkey-uti...
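Going by that KB page (not something I've automated myself), the flow is roughly: create an OAuth client in the admin console, then exchange it for a fresh auth key at deploy time. Flag and variable names below are from memory, so double-check them:

```sh
# OAuth client credentials don't expire; trade them for a short-lived auth key per deploy.
export TS_API_CLIENT_ID=...       # from the OAuth client in the admin console
export TS_API_CLIENT_SECRET=...
go run tailscale.com/cmd/get-authkey@main -tags tag:container -ephemeral
```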
This post is great, as the current state of network meshes is too complex for some users. That led me to write a simple Rust daemon that runs a TLS proxy and spawns the original app locally, reverse proxying requests, because the cost of implementing a full mesh just to have TLS across applications was too much for my team at the time. I didn't know about ONRUN, s6 and all that. Also, why not Tailscale as the mesh?
Yeah, the main advantage of giving your containers their own IP addresses is the ability to use Tailscale as a service discovery mesh. If you combine this with MagicDNS, you get most of the 80/20 of Istio with about 10% of the configuration required!
It's actually even easier to use. Add `tailscale.com/expose: "true"` to a Kubernetes service's annotations and it will be added to the tailnet automatically.
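So with the Tailscale Kubernetes operator running in the cluster (which, as far as I know, is what handles the annotation), exposing an existing Service is a one-line change; the names below are made up:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: wiki
  annotations:
    tailscale.com/expose: "true"   # the operator picks this up and adds the Service to the tailnet
spec:
  selector:
    app: wiki
  ports:
    - port: 80
      targetPort: 8080
```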
What I think is the really cool part about this is that tailscaled is able to store its state in a Kubernetes secret on the fly, so it can dynamically update itself and handle being restarted on a new node. This isn't the same as truly having multiple nodes with the same IP, but when combined with automatic restarts it gets way closer to that in practice than it has any right to.
If you were using userspace networking, you wouldn't be able to connect to other services in your tailnet without setting up an HTTP/SOCKS5 proxy: https://tailscale.com/kb/1112/userspace-networking/
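The workaround in userspace mode, as I read that page, is to have tailscaled expose a local SOCKS5/HTTP proxy and point outbound clients at it; with the official image that's roughly the following (variable names worth double-checking):

```yaml
  tailscale:
    image: tailscale/tailscale:latest
    environment:
      - TS_SOCKS5_SERVER=localhost:1055              # outbound tailnet traffic goes through this
      - TS_OUTBOUND_HTTP_PROXY_LISTEN=localhost:1055
  myapp:
    network_mode: service:tailscale
    environment:
      - ALL_PROXY=socks5://localhost:1055            # only helps apps that honour proxy env vars
```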
My users report better latency, but I doubt it.
Asking because I've been happy with their containers so far
keep it up guys!
I have over 25 containers running on my home server and not a single one of them is based on a LinuxServer.io image. This "universal" mod would work with 0 of them.
Managing all those household docs: https://docs.paperless-ngx.com
Backups of mail accounts: https://www.offlineimap.org
Cloud storage for phones: http://nextcloud.com
Mirroring podcasts locally: https://github.com/akhilrex/podgrab
Managing dynamic service dns via plugins: https://coredns.io
My own matrix instance: https://matrix-org.github.io/dendrite/
Backups: https://restic.net
Media Management: https://jellyfin.org
Running a relay-only node to help Tor: https://www.torproject.org
S3 compatible storage: https://github.com/seaweedfs/seaweedfs
Git + CI: https://about.gitlab.com
Managing SSL and container proxying: https://traefik.io
Mirror the docker registry locally: https://github.com/docker-library/docs/tree/master/registry
Samba support for the windows hosts: https://github.com/ServerContainers/samba
HTTP/S Proxy with support for modifying results: http://www.privoxy.org
Database: https://www.postgresql.org
Datastore: https://redis.io
and a bunch of support software. Paperless has Tika and Gotenberg as deps for example.
I just read in the README that Tini is included in Docker since 1.13 when using the --init flag. [1]
[1] https://github.com/krallin/tini
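And for the PID 1 discussion upthread, that means you often don't need to bake an init into the image at all:

```sh
# --init tells Docker to inject tini as PID 1, which reaps zombies and forwards signals
docker run --init --rm myimage:latest    # image name is a placeholder
```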
https://github.com/tailscale/tailscale/blob/main/ipn/serve.g...
And they also support direct embedding:
https://tailscale.dev/blog/embedded-funnel
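And if you'd rather compile it into your own binary, as the grandparent wanted, the tsnet package makes that pretty small. A minimal sketch (hostname and handler are made up; on first run it reads TS_AUTHKEY or prints a login URL, if I remember right):

```go
package main

import (
	"fmt"
	"log"
	"net/http"

	"tailscale.com/tsnet"
)

func main() {
	// The process joins the tailnet as its own node, named "hello" here.
	srv := &tsnet.Server{Hostname: "hello"}
	defer srv.Close()

	// Listen on the tailnet only; nothing is bound on the host's real interfaces.
	ln, err := srv.Listen("tcp", ":80")
	if err != nil {
		log.Fatal(err)
	}

	log.Fatal(http.Serve(ln, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello from the tailnet")
	})))
}
```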
I think this is built on the wireguard-go + gvisor mashup, which allows you to do this with just WireGuard:
https://github.com/WireGuard/wireguard-go/tree/master/tun/ne...
https://github.com/acuteaura/tinybastion/
https://github.com/tailscale-dev/docker-mod
Thanks!
So if the local tailscale address is 1.2.3.4, I do:
ports:
  - "1.2.3.4:8080:8080"
This doesn't actually add applications to the tailnet as in the OP, but it works.