My current setup has my DHCP + DNS on my Unifi USG. However, since all my apps (Unifi, Plex, Home Assistant, NAS, etc.) are hosted on a different server, I’ve run into issues trying to get things set up.
Basically, Unifi needs to know where the Unifi server is, but the USG is the one assigning that server its IP address.
Should I put DHCP+DNS onto its own system? Should I put it on my current server? And any non-Pi recommendations for systems? (I’ve had the Pi filesystem clobber itself too many times.)
I use pihole for managing DNS and DHCP. It runs via Docker, and the compose file and dnsmasq configs are version controlled, so if the Pi dies I can just bring it up on another Pi.
The Pi with pihole has a static IP to avoid some of the issues you described.
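For anyone wanting to copy this approach, here’s a minimal sketch of what such a version-controlled compose file might look like (image tag, timezone, and volume paths are example values, not the commenter’s actual config):

```yaml
# docker-compose.yml: minimal Pi-hole sketch. Host networking and NET_ADMIN
# are only needed if Pi-hole will also act as the DHCP server.
services:
  pihole:
    image: pihole/pihole:latest
    network_mode: host          # DHCP broadcasts must reach the container
    cap_add:
      - NET_ADMIN               # required for the DHCP server
    environment:
      TZ: "Europe/Amsterdam"    # example timezone
    volumes:
      - ./etc-pihole:/etc/pihole          # version-controlled Pi-hole config
      - ./etc-dnsmasq.d:/etc/dnsmasq.d    # version-controlled dnsmasq configs
    restart: unless-stopped
```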
That’s what I do. I also have a small VM linked to it in a keepalived cluster with a synchronized configuration, which can take over if the rpi croaks or reboots, so my network doesn’t completely die while the rpi is temporarily offline. A lot of services depend on proper DNS resolution being available.
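For reference, a keepalived setup like that boils down to a short VRRP config on each node, roughly like this sketch (interface name, virtual IP, and password are placeholders, not the commenter’s actual values):

```
# /etc/keepalived/keepalived.conf on the primary Pi-hole; the backup VM uses
# state BACKUP and a lower priority. Clients point their DNS at the virtual
# IP, which floats to whichever node is currently alive.
vrrp_instance DNS_VIP {
    state MASTER
    interface eth0              # adjust to your NIC name
    virtual_router_id 53
    priority 150                # backup node uses e.g. 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme
    }
    virtual_ipaddress {
        192.168.1.53/24         # the floating DNS address clients use
    }
}
```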
I’ve been meaning to stand up another pihole on another Pi for DNS redundancy. I have to research how best to keep the piholes in sync. So far I’ve found orbital-sync and gravity-sync.
For me, gravity-sync was too heavy and cumbersome. It consistently failed to copy over the gravity sqlite3 db file because of my slow rpi2 and SD card, which is apparently a known issue.
I wrote my own script to keep the things that matter most to me in sync: the DHCP leases, DHCP reservations, local DNS records, and CNAMEs. It’s basically just rsync-ing a couple of files. As for the blocklists: I just manually keep them the same on both piholes, but that’s not a big deal because it’s mostly static information. My major concern was the pihole taking DHCP and DNS resolution down on my network if it should fail.
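The commenter’s actual script isn’t posted, but a sketch of the idea could look something like this (hostnames are hypothetical; the file paths are the usual Pi-hole v5 locations for those records, so double-check them on your install):

```python
#!/usr/bin/env python3
"""Sketch: push the Pi-hole files worth keeping in sync (DHCP leases,
reservations, local DNS records, CNAMEs) from pihole1 to a standby."""
import subprocess

SECONDARY = "pi@pihole2.lan"   # hypothetical hostname of the standby Pi-hole
FILES = [
    "/etc/pihole/custom.list",                     # local DNS records
    "/etc/pihole/dhcp.leases",                     # current DHCP leases
    "/etc/dnsmasq.d/04-pihole-static-dhcp.conf",   # DHCP reservations
    "/etc/dnsmasq.d/05-pihole-custom-cname.conf",  # local CNAME records
]

for path in FILES:
    # -a preserves ownership and permissions; run as a user that can read/write these
    subprocess.run(["rsync", "-a", path, f"{SECONDARY}:{path}"], check=True)

# Reload DNS on the standby so it picks up the copied records
subprocess.run(["ssh", SECONDARY, "sudo", "pihole", "restartdns", "reload"], check=True)
```

Dropped into an hourly cron job, that covers the “run hourly” part described below.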
Now with keepalived and my sync script that I run hourly, I can just reboot or temporarily shut down pihole1 and pihole2 automatically takes over DNS duties until pihole1 is back. DHCP failover still has to be done manually, but it’s just a matter of ticking the box to enable the DHCP server on pihole2, and all the leases and reservations will be carried over.
If you ever switch to AdGuard Home, adguardhome-sync is pretty good. IMO AdGuard Home is better since it has all of PiHole’s features plus it supports DNS-over-HTTPS out of the box, so your ISP can’t spy on your DNS queries (plain DNS queries can easily be intercepted and modified by your ISP even if you use a third-party DNS server, since they’re unencrypted and unauthenticated).
DNS-over-HTTPS
You can also do that by running cloudflared or unbound on your pihole.
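For the cloudflared route, the usual pattern is a small DNS-over-HTTPS proxy on the Pi that Pi-hole then uses as its only upstream, roughly like the sketch below (key names and the 5053 port follow the common Pi-hole + cloudflared guides; verify against your cloudflared version). Note that unbound is slightly different: it makes you a full recursive resolver that talks to the root servers rather than a DoH forwarder.

```yaml
# /etc/cloudflared/config.yml: run cloudflared as a local DNS-over-HTTPS proxy,
# then set Pi-hole's sole upstream DNS server to 127.0.0.1#5053
proxy-dns: true
proxy-dns-address: 127.0.0.1
proxy-dns-port: 5053
proxy-dns-upstream:
  - https://1.1.1.1/dns-query
  - https://1.0.0.1/dns-query
```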
Generally speaking, any device (“server”) hosting a “service” NEEDS to be assigned a static IP. It simplifies routing significantly and avoids random breakage, because DHCP is incredibly stupid at times…
Is there any specific reason you need DHCP to assign an IP to your main hosting server vs setting it all statically?
Moving it to its own system will not fix the routing problem. You can probably still leave it on the USG.
You should be able to set a fixed static IP on your server, and then also statically assign that same IP to your server in your USG DHCP config; as long as they are both “thinking about” the same IP, I think routing should work correctly.
If that breaks, try just assigning the static IP only from the USG side or only from the server’s side. I’m 90% sure that even if the USG does not have your server machine in its client list, if it sends packets to the entered IP looking for the unifi server, and the unifi server is listening on that manually set IP, they should be able to talk.
disclaimer: i am high as shit right now and this may be bullshit
Unifi is specific about expecting the controller address not to change. You have several options: there’s the “override controller address” setting, which you can use to point the devices at a DNS name instead of an IP address. The DNS can then track your controller. It doesn’t exactly solve your issue, though, as the USG doesn’t assign DNS names to dynamic allocations.
Another option is to give the controller a static IP allocation (i.e. a DHCP reservation). This way, if you reboot everything, the USG will come up with the latest good config, then will (eventually) allocate the IP for the controller, and adopt itself.
Finally, the most bulletproof option is to just have a static IP address on the controller. It’s a special case, so it’s reasonable to do so. Just like you can only send NetFlow to a specific address and have to keep your collector in one place, basically.
I’d advise against moving dhcp and dns off unifi unless you have a better reason to do so, because then you lose a good chunk of what unifi provides in terms of network management. The USG is surprisingly robust in that regard (unlike UDMs), and can even run a nextdns forwarding resolver locally.
DHCP is a really stupid* service for the most part. Unless you are working with multiple subnets or have some very specific settings you need to pass to your clients, it’s probably not worth it to manage it yourself. I don’t want to discourage you though! Assigning static IP addresses by MAC can be extremely useful and is not always an option on routers. If you want static names and dynamic addresses, that is really where you need to manage both DNS and DHCP. It really depends on how and where you want names to be resolved and what you are trying to accomplish. (*stupid as in, it’s a really simple service. You want it simple because when DHCP breaks, you have other serious issues going on.)
Setting up your own DNS is worth its weight in gold. You can put it just about anywhere on your network (before your gateway, after it, in China, whatever) and your network won’t even know the difference if it’s set up correctly. You can point BIND at the root servers and bypass your ISP completely if you want. ISP DNS services suck ass, so whether you resolve yourself or forward all name queries to your anonymous DNS provider of choice, you get a really decent level of control over your network. It is the service to learn if you want to keep an eye on where your network wants to talk.
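As a rough illustration of the “point BIND at the roots” idea, a recursive resolver really just needs recursion enabled and no forwarders configured, something along these lines (the allowed subnet is an example value):

```
// /etc/bind/named.conf.options: sketch of a recursive resolver that walks
// the delegation tree from the root servers itself instead of using the ISP
options {
    directory "/var/cache/bind";
    recursion yes;
    allow-recursion { 127.0.0.1; 192.168.1.0/24; };  // your LAN only
    // no "forwarders" block: queries go to the roots, not to the ISP
    dnssec-validation auto;
    listen-on { any; };
};
```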
Your Unifi USG must play nice with your own server, by the laws of DNS. There may be some nuances when it comes to internal protocols like WINS, but other than that, it should be just fine.
To answer your actual question: I would set up a simple VM somewhere first. It’s good practice to keep core services isolated on their own dedicated instances. This speeds up recovery and minimizes downtime. Even on your home network, DNS and DHCP are services you do not want going down. It’s always a pain when they do.
For the above reasons it’s very nice to use dnsmasq, because the DHCP + DNS integration is really sweet, and a full-featured local DNS is gold.
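To make that DHCP + DNS integration concrete, here’s a small dnsmasq sketch (domain, range, MAC, and addresses are all example values): dnsmasq hands out the leases and automatically answers DNS for the hostnames it learns, and reservations and static names live in the same file.

```
# /etc/dnsmasq.conf: example of combined DHCP + local DNS
domain=home.lan                   # local domain appended to DHCP hostnames
expand-hosts
local=/home.lan/                  # never forward local names upstream
dhcp-range=192.168.1.100,192.168.1.200,12h
dhcp-host=aa:bb:cc:dd:ee:ff,192.168.1.10,unifi   # reservation for the controller
address=/nas.home.lan/192.168.1.20               # static record for another box
```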
OpenWRT comes with dnsmasq btw, so if you have a dodgy router that supports OpenWRT, you may be able to breathe new life into it.
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
DNS: Domain Name Service/System
IP: Internet Protocol
Unifi: Ubiquiti WiFi hardware brand
Either use a static IP on the server, or set up DHCP to always reserve & assign a specific IP for the server’s MAC address.
I’m not familiar with USG but any decent appliance with DHCP+DNS functionality should be able to do this.
Basically, Unifi needs to know where the Unifi server is, but the USG is the one assigning that server its IP address.
Set a static IP on the Unifi server.
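For example, on an Ubuntu-style server that’s a few lines of netplan (interface name and addresses are placeholders); pair it with a matching fixed-IP entry for the same MAC in the USG’s DHCP settings so the two never disagree:

```yaml
# /etc/netplan/01-unifi-static.yaml: pin the controller host to a fixed address
network:
  version: 2
  ethernets:
    eth0:                      # adjust to your NIC name
      dhcp4: false
      addresses: [192.168.1.10/24]
      routes:
        - to: default
          via: 192.168.1.1     # the USG
      nameservers:
        addresses: [192.168.1.1]
```

Apply it with `sudo netplan apply`.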
Sidenote about the Pi filesystem clobbering itself: are you running off an SD card? Running off an external SSD is way more reliable in my experience. Even a decent USB stick tends to be better than micro-SD in the long run, but even the cheapest external SSD blows both of them out of the water. Since I switched my Pis over to that, they’ve never had any disk-related issues.