Just exposed Immich via a remote reverse proxy using Caddy and a Tailscale tunnel. I’m securing Immich with OAuth.
I don’t have very nerdy friends so not many people appreciate this.
I have a very similar setup, but have a couple of questions if you don’t mind me asking: what did you use for OAuth, and where is it running? I tried Authelia on the VPS but had some problems I can’t remember now and decided it wasn’t worth the time at the time, but I probably should set it up.
Authelia is great. Recently added protection for multiple domains.
I’m a huge fan of Caddy and I wish more people would try it. The utter simplicity of the config file is breathtaking when you compare it with Apache or Nginx. Stuff that takes twenty or thirty lines in other webservers becomes just one in Caddy.
I love Caddy. So easy to configure, and the automatic SSL is almost always what I need.
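For anyone curious, this is roughly all it takes to proxy one service with automatic HTTPS. A minimal sketch; the domain and port are placeholders, not anything from OP’s setup:

```
# Caddyfile — the whole config for one proxied service; HTTPS is handled automatically
photos.example.com {
	reverse_proxy localhost:2283
}
```

Caddy fetches and renews the certificate on its own, as long as the domain points at the box and ports 80/443 are reachable.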
The only thing I don’t like about Caddy is that using the DNS challenge requires recompiling the program itself (rough sketch below), and the plugins themselves can be a bit quirky. Mind you, you can easily handle this with a separate program like lego or certbot, so it’s not a huge deal.
I moved from SWAG to Caddy and I’m glad I did. So much simpler.
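To expand on the recompiling point: the usual route is xcaddy, and the plugin is then wired in through the tls directive. A rough sketch, using the Cloudflare plugin purely as an example (any caddy-dns provider works the same way, and the env var name is just a placeholder):

```
# Build a Caddy binary that includes the Cloudflare DNS plugin
xcaddy build --with github.com/caddy-dns/cloudflare

# Then solve the ACME DNS challenge via that provider in the Caddyfile
example.com {
	tls {
		dns cloudflare {env.CF_API_TOKEN}
	}
	reverse_proxy localhost:8080
}
```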
Congratulations!
It feels really good when you learn something new and get it working the way you like.
If you want more challenges take a look at this:
This would be useful if you ever wanted to share albums with people outside your Tailscale network who don’t have an account on your Immich server.
Wrapping my head around reverse proxies was a game changer for me. I could finally host things that are useful outside my LAN. I use Nginx-Proxy-Manager, which makes the config simple for lazy people like me.
Do you serve things to the public? Because unless you’re serving the public, that’s a dumb thing to do… and you really don’t understand the purpose of it.
If all you wanted was the ability to access services remotely, then you should have just created a WireGuard tunnel.
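For reference, a remote-access tunnel like that is only a few lines per side. A minimal sketch of the client config; keys, addresses, and the endpoint are all placeholders:

```
# /etc/wireguard/wg0.conf on the remote device — every value here is a placeholder
[Interface]
PrivateKey = <client-private-key>
Address = 10.8.0.2/24

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 10.8.0.0/24, 192.168.1.0/24   # tunnel subnet + home LAN
PersistentKeepalive = 25
```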
A lemmy instance, a wiki, and a couple of other website type things, yes.
Publicly facing things are pretty limited, but it’s still super handy inside the LAN with Adguard Home doing DNS rewrites to point it to the reverse proxy.
I appreciate what you’re saying, though. A lot of people get in trouble by having things like Radarr etc. open to the internet through their reverse proxy.
Am I making a mistake by having my Jellyfin server proxied through nginx? The other service I set up did need to be public, so I just copied the same thing when I set up Jellyfin, but is that a liability even with a password required for access?
Not really. Personally I’d give the service account running Jellyfin read-only access to the media files to avoid accidental deletion, but otherwise no.
Also, the Jellyfin docs have a sample proxy config. You should use that; it’s a bit more in-depth than a normal proxy config.
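The sample in the Jellyfin docs is the one to actually copy; this is just a rough sketch of the part a plain proxy config usually misses (domain is a placeholder, 8096 is Jellyfin’s default HTTP port, TLS cert lines omitted):

```
server {
    listen 443 ssl;
    server_name jellyfin.example.com;
    # ssl_certificate / ssl_certificate_key omitted for brevity

    location / {
        proxy_pass http://127.0.0.1:8096;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Websocket upgrade headers — needed for play state, Syncplay, etc.
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```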
This is very short sighted. I can think of dozens of things to put on the open internet that aren’t inherently public. The majority are things for sharing with multiple people you want to have logins for. As long as the exposed endpoints are secure, there’s no inherent problem.
And yet you’ve not provided one example, hmmmm
Seriously?
Plex, Jellyfin, VaultWarden, AdGuard, Home Assistant, GameVault, any flavor of pastebin, any flavor of wiki, and the list goes on.
If you’re feeling spicy throw whatever the hell you want onto a reverse proxy and put it behind a zero trust login.
The idea that opening up anything at all through to the open internet is “dumb” is antiquated. Are there likely concerns that need to be addressed? Absolutely. But don’t make blanket statements about virtually nothing belonging on the open internet.
NPM is awesome until you hit a weird error where the web GUI gives no hint about the problem. I’ve used it for years at this point and wouldn’t consider anything else. It just works and is super simple.
+1 for NPM! Used to even do things manually, but I’m too lazy for that and NPM fulfils nearly all my use cases lol
came here to leave this exact response! 😁
I used to mess around with multiple Apache proxy servers. When I left that job I found Docker and (amongst other things) NPM, and I swear, I stared at the screen in disbelief at how easy the setup and config was. All that time we wasted on Apache, the issues, the upgrades, the nightmare of setting it all up…
If I were to do that job again I would not hesitate to use NPM 100% and stop wasting my time with that Apache Proxy mess.
NPM
Nginx-Proxy-Manager. Got it.
I didn’t read the parent comment well enough and was wondering what the Node Package Manager had to do with anything 😂
Good job!
I’m still trying to understand what it is and why I would want it. I see several programs I use recommend it, but I just don’t get what it does and why what it does is good.
It does a couple of things. It’s one service that routes requests to multiple services. So if you have Radarr, Sonarr, etc., you can put a reverse proxy in front and use the same IP and port to connect to all of them, and the proxy routes each request to the right service by hostname.
If you have multiple instances of the same service for HA, it can load balance between them (though this is unlikely for a homelab).
Personally I run all my services through docker and put traefik in front, so that I don’t have to keep track of ports. It’s all by name.
It’s also nice because traefik handles HTTPS termination, so it automatically gets certs for each name, and the backing service never needs to worry about it (it’s http on the backend, but all that traffic is internal).
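In case it helps anyone picture it, this is roughly what the Traefik-in-front pattern looks like in a compose file. A hedged sketch, not my actual config; the domain, email, image tags, and the Radarr example are placeholders:

```
# docker-compose.yml sketch — Traefik terminates HTTPS and routes by hostname
services:
  traefik:
    image: traefik:v2.11
    command:
      - --providers.docker=true
      - --entrypoints.websecure.address=:443
      - --certificatesresolvers.le.acme.tlschallenge=true
      - --certificatesresolvers.le.acme.email=you@example.com
      - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
    ports:
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./letsencrypt:/letsencrypt

  radarr:
    image: lscr.io/linuxserver/radarr
    labels:
      - traefik.http.routers.radarr.rule=Host(`radarr.example.com`)
      - traefik.http.routers.radarr.entrypoints=websecure
      - traefik.http.routers.radarr.tls.certresolver=le
      - traefik.http.services.radarr.loadbalancer.server.port=7878   # plain HTTP on the backend
```

Note the Radarr container publishes no ports at all; only Traefik’s 443 faces the network.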
Thank you for the explanation. But that’s it then? Just convenience with ports?
Well it IS pretty nice to be able to tell people to go to jellyfin.example.com instead of example.com:8096, but you also get security benefits for using a properly set up reverse proxy. You don’t need to keep your ports open to the whole internet, only the reverse proxy accesses them. As far as the rest of the internet is concerned, you have :443 open.
Nice work! 😎
Just be sure to read up on network security and set yourself up for success! Even tunnels can still be an attack surface. Always keep everything up to date! And plan for the worst case.
Tailscale?
Is this setup advisable for a CGNAT environment?
You will need a VPS as your other endpoint
This is necessary for CGNAT ISPs. That, or cloudflared or ngrok or the like, because you aren’t really routable on a CGNAT address.
In a nutshell, CGNAT users must spend money for something that people with public IPv4 addresses can do for free 😔
We wouldn’t be in this mess if we switched to ipv6, but nOoOooOo… we can’t possibly do that…
I just finally got it this weekend when I got Matrix-synapse and Pixelfed working on the same box.
All I can say is good for you! It wasn’t easy. And it’s so powerful.
Congrats! I just pulled off the same thing last week using a Cloudflare tunnel. The phrase “reverse proxy” scared me too much lol. So props to you.
I’ve been wanting do something similar, but with Authentik. Does anyone know a good guide on this?
Same boat (in the learning cycle, that is). No idea what Immich is, but I got Stirling-PDF hosted in Docker. I only learned the other day that localhost is localhost for the container. I couldn’t get a bunch of stuff running forever, until I learned that the things I was calling needed to point to host.docker.internal.
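For anyone else hitting that: on Linux, host.docker.internal doesn’t exist inside a container unless you map it. A small sketch of the compose setting that adds it (the service and image names here are just examples):

```
# docker-compose.yml fragment — make host.docker.internal resolve to the Docker host
services:
  stirling-pdf:
    image: frooodle/s-pdf        # example image name; check the project's docs
    extra_hosts:
      - "host.docker.internal:host-gateway"
```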
Just out of curiosity, is the Tailscale part of this required? If I just reverse proxy things and have them protected only by the login screen of the app, that’s obviously less safe. But attackers would still need to brute force my passwords to get any access? If they did, they could do nasty things within the app, but limited to that app. Are there other vulnerabilities I’m not thinking about?
The attack surface might be an entire API, not just your login screen. You have no idea what that first page implements that could be used to gain access.
You can improve this by putting at least a basic auth challenge in front of the application’s web page. That would drastically reduce the potential endpoints.
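In Caddy, for example, that challenge is just a couple of lines. A sketch with placeholder domain, user, and port; the hash comes from running `caddy hash-password` (the directive is spelled basic_auth on newer Caddy versions):

```
# Caddyfile sketch — basic auth gate in front of an app
app.example.com {
	basicauth {
		alice <bcrypt-hash-from-caddy-hash-password>
	}
	reverse_proxy localhost:5055
}
```

Anything that can’t pass the challenge never reaches the app’s own endpoints.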
Thanks for the insight! Does running this in a Docker container help limit the damage at all? Seems like they’d only be able to access the few folders I give the container access to?
Maybe a bit, but if you’re not running rootless Docker and they get out of that container, they’ll have the run of your Docker host. It is a lot of layers to crack, but sometimes they’ve got nothing but time, or it’s been so long since the container’s been updated that it’s trivial. That’s why rootless Docker or Podman, and Watchtower, are your friends.
Also, VLAN off your exposed surface and build firewall rules for the VPN and LAN inbound to it, and specific outbound rules if you need those servers to reach into those networks themselves.
It’s not required, but OP probably has a home server with Immich and a VPS which exposes it to the internet. In that setup you need Tailscale for the VPS to reach your home server. Sometimes you can’t directly expose your home server for various reasons, e.g. the ISP doesn’t give you an external IP directly (I’ve had this, where my router would get a 10.x IP so I couldn’t port forward because the internet IP was shared between multiple houses), or the ISP gives you a dynamic IP so there’s no guarantee your IP won’t change the next time you reset the router, etc.
Also, it provides an extra layer of separation: a DDoS, for example, would hit the VPS, which probably has automatic countermeasures, and even if someone were to gain access to the VPS they’d still need an extra jump to get to the home server (obviously if they exploit something in Immich they would get direct access to the home server).
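Concretely, the VPS side of that setup can be as small as this. A hedged sketch; the domain, Tailscale IP, and Immich port are placeholders:

```
# Caddyfile on the VPS — public traffic lands here, then rides Tailscale to the home server
photos.example.com {
	reverse_proxy 100.64.0.10:2283   # home server's Tailscale IP and Immich's port
}
```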
Gotcha. Thanks for the insight!
It’s annoying, as I’d like to expose things for other people in my family (like Overseerr or whatever) without hassling them to start a VPN or deal with other stumbling blocks.
I was hoping that a reverse proxy to Overseerr’s login screen would be safe enough. 8(
Does docker help limit things at all? I’m running my services through docker, which seems to limit the folders the container can hit. Feels like that would limit the damage someone could do even if they bypassed the login page of Overseerr or whatever app it is?
First of all, let me make this absolutely clear: Docker is not expected to be secure to that level. While they try to make it hard for someone to escape a container, it’s not their main concern, so expect that there are vulnerabilities that would allow an attacker to escape.
Now the second thing: the Overseerr login screen might be secure enough for your case. The problem is that login is hard to do right, and Overseerr does a lot of other stuff as well, so they might not give it enough emphasis. And even if they do, maybe the Immich devs don’t, or any one of the dozens of other services, so there are dozens of possible points of failure. Things like Authelia or Google OAuth are focused on authentication, so they do that absolutely right, and then they become the only point of failure for authentication.
To be fair, if you keep things updated it’s unlikely not having auth would be a problem. Mostly because most hackers won’t even know of your server to begin with. And most systems are secure enough for most casual hacks. But it’s an investment worth the time if you plan on making something available to the internet.
I just got this set up last week too. Same setup with Caddy on a free Oracle VPS, Tailscale on the VPS and on my home pfSense router, with Tailscale on pfSense advertising routes (the private IPs of my Docker-hosted services).
CGNAT sucks 🤮