But I want it so badly! All I need to figure out is:

reverse proxies (I stumbled through getting one Caddy instance set up so far, but gosh, I struggle with that too; Nginx Proxy Manager seems like my next step)
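From what I gather, a minimal Caddy config for something like Vaultwarden is roughly this (the hostname and upstream name are just placeholders, not my actual setup):

    # reverse-proxy a subdomain to the Vaultwarden container on the Docker network
    vault.example.com {
        reverse_proxy vaultwarden:80
    }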

a rock-solid backup/restore setup (but first I need to figure out where the Vaultwarden files live on the Alpine VM, then be able to get those off of the Proxmox VM)
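If it's the standard Docker image, I believe everything Vaultwarden persists lives under the container's /data directory, so the backup would be something along these lines (the bind-mount path and hostnames are guesses, not my actual layout):

    # stop the container so the SQLite database isn't mid-write, then archive the data dir
    # (assumes /data is bind-mounted to /srv/vaultwarden/data on the VM)
    docker compose stop vaultwarden
    tar czf vaultwarden-$(date +%F).tar.gz -C /srv/vaultwarden data
    docker compose start vaultwarden
    # pull the archive off the Proxmox VM to another machine
    scp vaultwarden-*.tar.gz user@backup-host:/backups/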

This is more of a vent than a request for someone to spell it all out for me, but I wouldn’t be upset if anyone had the time to point me in the right direction.

Would it just be easier to run a KeePassXC and Syncthing setup?

  • model_tar_gz@lemmy.world · 2 months ago

    The most annoying thing about a lot of these is that tutorials are “minimum viable setup” sorta things. Like “now you have it set up, make sure you tune it for production”.

    Dude I’m already in pain from trying to serve these models and you just have to go rub salt into my eyes. “Simplify your stack with <Tech>” they said. “Share your resources effectively and easily with <Tech>” they said. “Here’s your fuckin’ ‘Hello, World’ now GRTFM and buzz off” they said.

    Working close to the metal do be like that.

    • towerful@programming.dev · 2 months ago

      At the homelab scale, Proxmox is great.
      Create a VM, install Docker, and use docker compose for your various services.
      Create additional VMs when you feel the need. You might never feel the need, and that’s fine. Or you might want a VM per service for isolation purposes.
      Have Proxmox take regular snapshots of the VMs.
      Every now and then, copy those backups onto an external USB hard drive.
      Take snapshots before, during and after tinkering so you have checkpoints to restore to. Copy the latest snapshot onto an external USB drive once you are happy with the tinkering.
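      As a rough sketch, Proxmox’s built-in vzdump covers the backup-and-copy part (the VM ID, storage name and USB mount point here are placeholders, swap in your own):

          # back up VM 100 while it keeps running, compressed with zstd
          vzdump 100 --mode snapshot --compress zstd --storage local
          # copy the newest dump onto a USB drive mounted at /mnt/usb-backup
          rsync -av /var/lib/vz/dump/vzdump-qemu-100-*.vma.zst /mnt/usb-backup/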

      Create a private git repository (on GitHub or whatever), and use it to store your docker-compose files, related config files, and little readmes describing how to get that compose file to work.
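      For example, a compose file in that repo doesn’t need to be much more than this (the image, domain and paths are just an illustration):

          # vaultwarden/docker-compose.yml
          services:
            vaultwarden:
              image: vaultwarden/server:latest
              restart: unless-stopped
              volumes:
                - ./data:/data          # all persistent state lives in this one folder
              environment:
                - DOMAIN=https://vault.example.com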

      Proxmox solves a lot of headaches. Docker solves a lot of headaches. Both are widely used, so plenty of examples and documentation about them.

      That’s all you really need to do.
      At some point, you will run into an issue or limitation. Then you have to solve for that problem, update your VMs, compose files, config files, readmes and git repo.
      Until you hit those limitations, what’s the point in over-engineering it? It’s just going to overcomplicate things. I’m guilty of this.

      The need to automate any of the above will become apparent when tinkering stops being fun.

      The best way to learn all these services is to comb the documentation, read GitHub issues, and browse the source a bit.

      • ChapulinColorado@lemmy.world · 2 months ago

        Great points. As someone who is very happy with their current home automation and services, checking the config files into a git repo was the critical step. Also back up volumes, since many containers tend to store state in some binary or internal DB. At the very least, try restoring the config to verify you have what’s needed; the containers should start even if they have no media in them.
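        A quick way to test that restore, roughly (the repo URL, archive name and paths are only placeholders):

            # clone the config repo somewhere disposable and unpack the data backup into it
            git clone git@github.com:you/homelab.git /tmp/restore-test
            tar xzf vaultwarden-backup.tar.gz -C /tmp/restore-test/vaultwarden
            cd /tmp/restore-test/vaultwarden
            # the stack should come up cleanly even with no media attached
            docker compose up -d
            docker compose logs --tail=50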

        As for tinkering not being fun anymore: that’s okay, sometimes you need a break.

        A point that is sometimes not brought up enough, in my opinion, is to plan for losses. What can you afford to lose if you can’t back up everything (due to price, etc.)? Config files and photos or personal data are relatively small (compared to something like a media library) and should be prioritized.
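        In practice that can be as simple as only syncing the small, irreplaceable directories (the paths here are just examples):

            # copy the small, irreplaceable stuff first; the re-downloadable media library can wait
            rsync -av /srv/configs/ /mnt/usb-backup/configs/
            rsync -av /srv/photos/  /mnt/usb-backup/photos/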