Hi! Question in the title.

I get that it’s super easy to set up. But is it really worthwhile to have something that:

  • runs everything as root (not many well-built images with proper user management, it seems)
  • you cannot really know what’s inside the images: you must trust whoever built them
  • lots of mess in the system (mounts, fake networks, rules…)

I always host on bare metal when I can, but sometimes (Immich, I’m looking at you!) it seems almost impossible.

I get Docker in a work environment, but for self-hosting? Is it really worthwhile? I would like to hear your opinions, fellow hosters.

  • umbrella@lemmy.ml · 8 months ago

    people are rebuffing the criticism already.

    here’s the main advantage imo:

    no messy system or leftovers. some programs use directories all over the place and it gets annoying fast if you host many services. sometimes you will have some issue that requires you to do quite a bit of hunting and redoing things.

    docker makes this painless. you can deploy and redeploy stuff easily and quickly, without a mess. updates are painless and quick too, with everything neatly self-contained.
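
    for example, updating a compose-managed service is roughly just this (a sketch; the path is made up):

        cd /opt/myservice        # wherever the compose file lives
        docker compose pull      # grab the newer images
        docker compose up -d     # recreate only the containers whose images changed
        docker image prune -f    # optionally drop the old image layers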

    much easier to maintain once you get the hang of things.

  • Display Name@lemmy.ml · 8 months ago

    • Podman solves the root issue
    • you can inspect the stuff. You don’t have to if you’re not paranoid, especially with popular and widespread images
    • I have no mess

    It’s great that you install things on bare metal. I did that in the beginning too, until I discovered Docker, and I will never go back. Docker/Podman compose is just so good.

    • redcalcium@lemmy.institute · 8 months ago

      you can inspect the stuff. You don’t have to if you’re not paranoid, especially with popular and widespread images

      Dive is a great tool for inspecting Docker images. I wish I’d found it sooner.
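
      Roughly like this (a sketch; dive can be installed as a binary, or run from its own image as its README suggests):

          # browse an image's layers and wasted space interactively
          dive nginx:latest

          # or run dive itself from a container (needs the docker socket)
          docker run --rm -it \
              -v /var/run/docker.sock:/var/run/docker.sock \
              wagoodman/dive:latest nginx:latest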

    • Shimitar@feddit.it (OP) · 8 months ago

      I probably need to study Podman; stuff running as root is my main dislike.

      If I only used Docker images I created myself, I would probably be less worried about losing track of what I am really deploying, but wouldn’t that defeat the main advantage of easy deployment?

      Portability is a point I hadn’t considered too… But rebuilding a bare metal server, properly compartmentalized, only took me a few hours, so is that really so important?

      • null@slrpnk.net · 8 months ago

        But rebuilding a bare metal server, properly compartmentalized, only took me a few hours, so is that really so important?

        Depends on how much you value your time.

        Compare a few hours on bare metal to a few minutes with containers. Then consider that you also spend extra time on bare metal cleaning up messes. Containers don’t make a mess in the first place.

  • Aniki 🌱🌿@lemm.ee · 8 months ago

    1.) No one runs rootful Docker in prod. Everything is run rootless.

    2.) That’s just patently not true. docker inspect is your friend. Also, you can build your own containers trusting no one, starting FROM scratch (see the sketch below): https://hub.docker.com/_/scratch/

    3.) I think “mess” here is subjective. Docker’s folders make way more sense than Snap mounts.
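
    For point 2, a minimal sketch (myapp and its Dockerfile are invented; the commands are stock Docker):

        # dump an image's full config: entrypoint, env, layers, and so on
        docker inspect nginx:latest

        # or trust no one: build from nothing on top of the empty scratch image
        # (myapp stands in for a statically linked binary you compiled yourself)
        printf 'FROM scratch\nCOPY myapp /myapp\nENTRYPOINT ["/myapp"]\n' > Dockerfile
        docker build -t myapp:local .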

  • vzq@lemmy.blahaj.zone · 8 months ago

    How is this meaningfully different from using Deb packages? Or building from source without inspecting the build commands? Or even just building from source without auditing the source itself?

    In the end, Dockerfiles are just instructions for running software to set up other software - just like every other shell script or config file in existence since the mid-seventies.

    • Scrubbles@poptalk.scrubbles.tech · 8 months ago

      Your first sentence proves that it’s different. The developer needs to know it’s going to be a Deb package. What about RPM? What if it’s going to run on a Mac? Windows? That means they’ll have to change how they develop to account for all of these different platforms. Oh, you run Windows - well, Windows doesn’t have OpenSSL, so we need to do this instead of that.

      I’d recommend reading up on Docker and containerization. It is not a script for setting up software. If that’s what your thought is, then you really don’t understand containerization, and I recommend doing some learning on it. Like it or not, it’s here, and if you’re doing any dev/ops work professionally you will be left behind for not understanding it.

      • hedgehog@ttrpg.network · 8 months ago

        I don’t think you understood the context of the comment you replied to. As a reply to “Here are all these drawbacks to Docker vs hosting on bare metal,” it makes perfect sense to point out that the risks are there regardless.

        Unless I misread your comment and you’re suggesting that you think devs not having to deal with OS-specific code is a disadvantage of Docker. Or maybe you meant your second paragraph to be directed at OP?

      • vzq@lemmy.blahaj.zone · 8 months ago

        Apparently I was unclear: I was referring to the security implications of using different manifestations of other people’s code. Those are rather similar.

        I’d recommend reading up on Docker and containerization. It is not a script for setting up software.

        I was referring specifically to Dockerfiles. Those are, almost to the letter, scripts for setting up software.

        If that’s what your thought is, then you really don’t understand containerization, and I recommend doing some learning on it.

        I find your attitude not just uncharitable, but also rude.

        • Scrubbles@poptalk.scrubbles.tech · 8 months ago

          And I find misinformation about topics like this to be rude, too. It’s perfectly fine if you don’t understand something, but what I don’t like is you going out of your way to dissuade people from using a product whose core concepts I don’t think you understand. If you have valid criticisms, like the security of Docker, that’s a different conversation about securing containers, but it’s hard to take criticisms as valid when they rest on a fundamental misunderstanding of the product.

          I don’t think anyone I have ever talked to professionally, or read, would describe a Dockerfile as “a script for setting up software”. It is much more nuanced than that.

          So yes, I’m a bit rude about it. I do this professionally and I’m very tired of people who don’t understand containerization explain to me how containerization sucks.

          • vzq@lemmy.blahaj.zone · 8 months ago

            Everything I wrote is rigorously correct, if a bit tongue in cheek.

            Go play with your Dunning Kruger somewhere else.

  • Scrubbles@poptalk.scrubbles.tech · 8 months ago

    I’ll answer your question of why with your own frustration - bare metal is difficult. Every engineer uses a different language/framework/dependencies/whathaveyou, and usually they’ll conflict with each other. Docker solves this by containing those apps in their own space. Their code, projects, and dependencies are already installed and taken care of; you don’t need to worry about them.

    Take yourself out of the homelab and put yourself in a sysadmin’s shoes. Now, instead of knowing how packages may conflict with each other, or whether updating the OS will break applications, you just need to know Docker. If you know Docker, you can run any Docker app.

    So, yes, volumes and environments are a bit difficult at first. But they’re difficult because they are a standard. Every Docker container is going to need a couple of mounts, a couple of variables, a port or two open, and, if you’re going crazy, maybe a GPU. It doesn’t matter if you’re running 1 or 50 containers on a system; you aren’t going to get conflicts.
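
    To make that concrete, nearly every container boils down to some variation of this (a sketch; the image name and paths are invented):

        # a couple of mounts, a couple of variables, a port or two
        docker run -d --name someapp \
            -v /srv/someapp/config:/config \
            -v /srv/someapp/data:/data \
            -e TZ=Etc/UTC \
            -p 8080:80 \
            somevendor/someapp:latest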

    As for the security concerns, they are indeed security concerns. Again, imagine you’re a sysadmin - you could direct developers not to use root, and to build on OSes with the latest patches. But you’re at home, so you’re at the mercy of whoever built the image.

    Now, that being said, since you’re at their mercy, their code isn’t going to get much safer whether you run it on bare metal or containerized. So, do you want to spend hours per app figuring out how to run it, or spend a few hours now learning Docker and then have everything standardized?

  • Big P@feddit.uk · 8 months ago

    Docker is messy and not ideal, but it was born out of necessity. Getting multiple services to coexist outside of containers can be a nightmare, updating and moving configuration is a nightmare, and removing things can leave stuff behind that gets messier and messier over time. Docker just standardises most of the configuration while requiring minimal effort from the developer.

  • Semi-Hemi-Demigod@kbin.social · 8 months ago

    1. I don’t run any of my containers as root
    2. Dockerfiles aren’t hard to read so you can pretty easily figure out what they’re doing
    3. I find managing dependencies for non-containerized services to be worse than one messy docker directory I never look at

    Plus having all my services in a couple docker-compose files also means I can move them around incredibly easily.
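
    Something like this sketch (service name and uid invented) covers both points - the user: key keeps the container off root, and because the whole service is described in one file next to its data, moving it to another box is just copying a directory:

        # write a minimal docker-compose.yml and bring the service up
        printf '%s\n' \
            'services:' \
            '  someapp:' \
            '    image: somevendor/someapp:latest' \
            '    user: "1000:1000"' \
            '    ports:' \
            '      - "8080:80"' \
            '    volumes:' \
            '      - ./data:/data' > docker-compose.yml
        docker compose up -d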

  • bluGill@kbin.social · 8 months ago

    Docker gives you a few different things which might or might not matter. Note that all of the following can be gotten in ways other than docker as well. Sometimes those ways are better, but often what is better is just opinion. There are downsides to some of the following as well that may not be obvious.

    With docker you can take a container and roll it out to hundreds of different machines quickly. This is great for scaling, if your application can scale that way.

    With docker you can run two services on the same machine that use incompatible versions of some library. It isn’t unheard of to try to upgrade your system and discover that something you need isn’t compatible with the new library, while something else you need to upgrade requires the new library. Docker means each service gets separate copies of what it needs, and when you upgrade one you can leave the other behind.
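
    As a concrete sketch: two versions of the same service, each with its own private copies of every library, side by side on one host:

        # the two containers never see each other's libraries
        docker run -d --name db-old -e POSTGRES_PASSWORD=secret -p 5432:5432 postgres:12
        docker run -d --name db-new -e POSTGRES_PASSWORD=secret -p 5433:5432 postgres:16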

    With docker you can test an upgrade and then when you roll it out know you are rolling out the same thing everywhere.

    With docker you can move a service from one machine to a different one somewhat easily if needed - either to save money on servers, or to get more power when more is needed. Since the service itself is in a container, you can just start the container elsewhere and change the pointers.
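
    The move itself is roughly this (a sketch, assuming compose plus a bind-mounted data directory):

        # on the old machine: stop the service, then ship config and data together
        docker compose down
        rsync -a /opt/service/ newhost:/opt/service/

        # on the new machine: start it, then repoint DNS or the reverse proxy
        cd /opt/service && docker compose up -d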

    With docker, if someone does manage to break into a container, they probably cannot break into other containers running on the same system. (If this is a worry, you need to do more risk assessment; they can still do plenty of damage.)

  • BoofStroke@sh.itjust.works · 8 months ago

    I concur with most of your points. Docker is a nice thing for some use cases, but if I can easily use a package or set up my own configuration, then I will do that instead of using a Docker container every time. My main issues with Docker:

    • Containers are not updated with the rest of the host OS
    • Firewall and mounting complexities, which make securing it more difficult

  • oranki@sopuli.xyz · 8 months ago

    Portability is the key for me, because I tend to switch things around a lot. Containers generally isolate the persistent data from the runtime really well.

    Docker is not the only, or even the best, way to run containers IMO. If I were providing services for customers, I would definitely build most container images daily in some automated way. Well, I already do that for quite a few.

    The mess is only a mess if you don’t really understand what you’re doing, same goes for traditional services.

  • DeltaTangoLima@reddrefuge.com · 8 months ago

    To answer each question:

    • You can run rootless containers but, importantly, you don’t need to run Docker as root. Should the unthinkable happen, and someone “breaks out” of docker jail, they’ll only be running in the context of the user running the docker daemon on the physical host.
    • True but, in my experience, most docker images are open source and have git repos - you can freely download the repo, inspect the build files, and build your own. I do this for some images I feel I want 100% control of, and have my own local Docker repo server to hold them.
    • It’s the opposite - you don’t really need to care about Docker networks unless you have an explicit need to contain a given container’s traffic to its own local net, and bind mounts are just maps to physical folders/files on the host system, with the added benefit of mounting read-only where required (see the sketch after this list).
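
    As a quick sketch of that last point (paths invented):

        # the container can serve the photo library but never modify it
        docker run -d \
            -v /srv/photos:/photos:ro \
            -p 8080:8080 \
            some/image:latest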

    I run containers on top of containers - Proxmox cluster, with a Linux container (CT) for each service. Most of those CTs are simply a Debian image I’ve created, running Docker and a couple of other bits. The services then sit inside Docker (usually) on each CT.

    It’s not messy at all. I use Portainer to manage all my Docker services, and Proxmox to manage the hosts themselves.

    Why? I like to play.

    Proxmox gives me full separation of each service - each one has its own CT. Think of that as me running dozens of Raspberry Pis, without the headache of managing all that hardware. Docker gives me complete portability and recoverability. I can move services around quite easily, and can update/rollback with ease.

    Finally, the combination of the two gives me a huge advantage over bare metal for rapid prototyping.

    Let’s say there’s a new contender that competes with Immich. I have Immich hosted on a CT, using Docker, and hiding behind Nginx Proxy Manager (also on a CT).

    I can spin up a Proxmox CT from my own template, use my Ansible playbook to provision Docker and all the other bits, load it in my Portainer management platform, and spin up the latest and greatest Immich competitor, all within mere minutes. Like, literally 10 minutes max.

    I have a play with the competitor for a bit. If I don’t like it, I just delete the CT and move on. If I do, I can point my photos... hostname (via Nginx Proxy Manager) to the new service and start using it full-time. Importantly, I can still keep my original Immich CT in place - maybe shutdown, maybe not - just in case I discover something I don’t like about the new kid on the block.

    • lemmyvore@feddit.nl · 8 months ago

      Should the unthinkable happen, and someone “breaks out” of docker jail, they’ll only be running in the context of the user running the docker daemon on the physical host.

      There is no daemon in rootless mode. Instead of a daemon running containers in client/server mode you have regular user processes running containers using fork/exec. Not running as root is part and parcel of this approach and it’s a good thing, but the main motivator was not “what if someone breaks out of the container” (which doesn’t necessarily mean they’d get all the privileges of the running user on the host and anyway it would require a kernel exploit, which is a pretty tall order). There are many benefits to making running containers as easy as running any kind of process on a Linux host. And it also enabled some cool new features like the ability to run only partial layers of a container, or nested containers.
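
      You can see the no-daemon, fork/exec model from a plain user shell (a sketch; the uid mapping will vary from system to system):

          # no sudo and no daemon - just an ordinary user process
          podman run --rm alpine id
          # prints uid=0(root), but that root is mapped to your own uid on the host

          # show the user-namespace mapping rootless podman set up
          podman unshare cat /proc/self/uid_map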

      • DeltaTangoLima@reddrefuge.com · 8 months ago

        Yep, all true. I was oversimplifying in my explanation, but you’re right. There’s a lot more to it than what I wrote - I was more relating docker to what we used to do with chroot jails.

  • Hexarei@programming.dev · 8 months ago

    Others have addressed the root and trust questions, so I thought I’d mention the “mess” question:

    Even the messiest bowl of ravioli is easier to untangle than a bowl of spaghetti.

    The mounts/networks/rules and such aren’t “mess” - they’re isolation. They’re commoditization. They’re abstraction: ways to tell whatever is running in the container what it wants to hear, so that you can treat the container as a “black box” that solves the problem you want solved.

    Think of Docker containers less like pets and more like cattle, and it very quickly justifies a lot of that stuff because it makes the container disposable, even if the data it’s handling isn’t.

  • eluvatar@programming.dev · 8 months ago

    About the trust issue: there’s no more or less trust than running on bare metal. Sure, you could compile everything from source, but you probably won’t. And you might trust your distro’s package manager, but that has a similar problem.

  • Gooey0210@sh.itjust.works · 8 months ago

    Check out NixOS; it’s like the next step after Docker.

    Ah, and a side note: Docker is not fully open source.