Hi! Question in the title.
I get that it's super easy to set up. But is it really worthwhile to have something where:
- everything runs as root (not many well-built images with proper user management, it seems)
- you cannot really know what's in the images: you must trust whoever built them
- there's lots of mess in the system (mounts, virtual networks, firewall rules…)
I always host on bare metal when I can, but sometimes (Immich, I'm looking at you!) it seems almost impossible.
I get Docker in a work environment, but for self-hosting? Is it really worthwhile? I would like to hear your opinions, fellow hosters.
Imo, yes.
- only run containers from trusted sources (btw. Google, MS, and Apple have proven they can't be trusted either)
- run apps without dependency hell
- even if someone breaks in, they’re not in your system but in a container
- have everything web facing separate from the rest
- get per app resource statistics
Those are just what was in my head. Probably more to be said.
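For example, that last point - per-app resource statistics - is one command away:

```bash
# live per-container CPU, memory, network, and disk I/O
docker stats

# or a one-shot snapshot with a custom format
docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"
```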
Also, the ability to snapshot a container as an image, goof around with changes, and restore the snapshot if you don't like them makes it much easier to experiment than trying to unwind all the changes you made.
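In Docker terms that workflow is just `docker commit` (the container name "myapp" here is hypothetical):

```bash
# snapshot the current state of a running container as an image
docker commit myapp myapp:before-experiment

# ...goof around with changes inside the container...

# don't like them? recreate the container from the snapshot
docker stop myapp && docker rm myapp
docker run -d --name myapp myapp:before-experiment
```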
- Even if someone breaks in, they are not a user, but root 🤝
*in that container, not in the system
Docker is messy and not ideal, but it was born out of necessity. Getting multiple services to coexist outside of containers can be a nightmare, updating and moving configuration is a nightmare, and removing things can leave stuff behind, which gets messier and messier over time. Docker just standardises most of the configuration whilst requiring minimal effort from the developer.
1.) No one runs rooted Docker in prod. Everything is run rootless.
2.) That's just patently not true. `docker inspect` is your friend. Also, you can build your own containers trusting no one: `FROM scratch` (https://hub.docker.com/_/scratch/).
3.) I think "mess" here is subjective. Docker's folders make way more sense than Snap mounts.
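To make point 2.) concrete (the image and binary names here are just examples):

```bash
# see exactly how an image is configured before running it:
# entrypoint, env vars, volumes, exposed ports, layer digests
docker inspect nginx:latest

# or trust no one: build from the empty "scratch" base, copying in
# a statically linked binary you compiled yourself
cat > Dockerfile <<'EOF'
FROM scratch
COPY myapp /myapp
ENTRYPOINT ["/myapp"]
EOF
docker build -t myapp:local .
```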
About the trust issue: there's no more or less trust involved than running on bare metal. Sure, you could compile everything from source, but you probably won't; and you might trust your distro's package manager, but that still has a similar problem.
To answer each question:
- You can run rootless containers but, importantly, you don’t need to run Docker as root. Should the unthinkable happen, and someone “breaks out” of docker jail, they’ll only be running in the context of the user running the docker daemon on the physical host.
- True but, in my experience, most docker images are open source and have git repos - you can freely download the repo, inspect the build files, and build your own. I do this for some images I feel I want 100% control of, and have my own local Docker repo server to hold them.
- It's the opposite - you don't really need to care about Docker networks unless you have an explicit need to confine a given container's traffic to its own local net, and bind mounts are just maps to physical folders/files on the host system, with the added benefit of mounting read-only where required.
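A tiny example of both, with hypothetical names:

```bash
# give the container its own network, and bind-mount a host config
# directory read-only so the container can't modify it
docker network create app-net
docker run -d --name app \
  --network app-net \
  -v /srv/app/config:/etc/app:ro \
  myimage:latest
```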
I run containers on top of containers - Proxmox cluster, with a Linux container (CT) for each service. Most of those CTs are simply a Debian image I’ve created, running Docker and a couple of other bits. The services then sit inside Docker (usually) on each CT.
It’s not messy at all. I use Portainer to manage all my Docker services, and Proxmox to manage the hosts themselves.
Why? I like to play.
Proxmox gives me full separation of each service - each one has its own CT. Think of that as me running dozens of Raspberry Pis, without the headache of managing all that hardware. Docker gives me complete portability and recoverability. I can move services around quite easily, and can update/rollback with ease.
Finally, the combination of the two gives me a huge advantage over bare metal for rapid prototyping.
Let’s say there’s a new contender that competes with Immich. I have Immich hosted on a CT, using Docker, and hiding behind Nginx Proxy Manager (also on a CT).
I can spin up a Proxmox CT from my own template, use my Ansible playbook to provision Docker and all the other bits, load it in my Portainer management platform, and spin up the latest and greatest Immich competitor, all within mere minutes. Like, literally 10 minutes max.
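Roughly, that spin-up looks like this (the CT IDs, hostname, and playbook name are all hypothetical):

```bash
# clone a new CT from my template and start it (Proxmox pct CLI)
pct clone 9000 130 --hostname immich-rival --full
pct start 130

# provision Docker and the other bits with Ansible
ansible-playbook -i inventory.yml provision-docker.yml --limit immich-rival
```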
I have a play with the competitor for a bit. If I don't like it, I just delete the CT and move on. If I do, I can point my photos... hostname (via Nginx Proxy Manager) to the new service and start using it full-time. Importantly, I can still keep my original Immich CT in place - maybe shut down, maybe not - just in case I discover something I don't like about the new kid on the block.
- Podman solves the root issue
- you can inspect the stuff. You don't have to if you stick with popular and widespread images and aren't paranoid, but it helps
- I have no mess
It's great that you do install things on bare metal. I did that in the beginning, until I discovered Docker, and I will never go back. Docker/Podman compose is just so good.
you can inspect the stuff. You don't have to if you stick with popular and widespread images and aren't paranoid, but it helps
Dive is a great tool for inspecting Docker images. I wish I'd found it sooner.
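For anyone curious, it's as simple as pointing it at an image:

```bash
# dive (github.com/wagoodman/dive) walks an image layer by layer,
# showing which files each layer adds, changes, or wastes
dive nginx:latest
```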
Need to study Podman, probably; stuff running as root is my main dislike.
Probably if I only used Docker images created by me I would be less concerned about losing track of what I am really deploying, but wouldn't that defeat the main advantage of easy deployment?
Portability is a point I hadn't considered either... But rebuilding a properly compartmentalized bare metal server took me only a few hours, so is that really so important?
But rebuilding a properly compartmentalized bare metal server took me only a few hours, so is that really so important?
Depends on how much you value your time.
Compare a few hours on bare metal to a few minutes with containers. Then consider that you also spend extra time on bare metal cleaning up messes. Containers don’t make a mess in the first place.
About the root problem: as of now, new installs are trying to let you run everything as a limited user. And the program runs as root only inside the container, so in order to escape to the host an attacker would need a double zero-day exploit (one to get RCE in the container, one to escape the container).
The alternative to "don't really know what's in the image" usually is: "just download this easy, minified, and incomprehensible trustmeimtotallynotavirus.sh script and run it as root". That requires much more trust than a container that you can delete, with no traces, in literally seconds.
If the program you want to run requires Python modules or Node modules, it will make much more mess on the system than a container would.
Downgrading to a previous version (or a beta preview) of the app you're running due to bugs is trivial: you just change a tag and launch it again. Doing this on bare metal requires you to be a terminal guru.
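A sketch of that downgrade (the image tags shown are just examples):

```bash
# in docker-compose.yml, change e.g.
#   image: ghcr.io/immich-app/immich-server:release
# to a pinned older tag such as
#   image: ghcr.io/immich-app/immich-server:v1.99.0
# then pull and relaunch
docker compose pull
docker compose up -d
```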
Finally, migrating to a fresh new server is just `docker compose down`, then rsync to the new server, and then `docker compose up -d` (sketched below). And no praying to ten different gods because after three years you forgot how you installed the app on bare metal.
Docker is perfect for common people like us self-hosting at home; the professionals at work use Kubernetes.
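That migration, sketched out (the host name and paths are hypothetical):

```bash
# on the old server
docker compose down
rsync -avz /srv/myapp/ newbox:/srv/myapp/

# on the new server
ssh newbox 'cd /srv/myapp && docker compose up -d'
```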
people are rebuffing the criticism already.
here's the main advantage imo:
no messy system or leftovers. some programs use directories all over the place and it gets annoying fast if you host many services. sometimes you will have some issue that requires you to do quite a bit of hunting and redoing things.
docker makes this painless. you can deploy and redeploy stuff easily and quickly, without a mess. updates are painless and quick too, with everything neatly self-contained.
much easier to maintain once you get the hang of things.
I find it makes my life easier, personally, because I can set up and tear down environments I’m playing with easily.
As for your user & permissions concern, are you aware that docker these days can be configured to map “root” in the container to a different user? Personally I prefer to use podman though, which doesn’t have that problem to begin with
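If you'd rather stay on Docker, a minimal sketch of that remapping (this is the stock `userns-remap` daemon option; "default" makes Docker create and use an unprivileged "dockremap" user):

```bash
# map container "root" to an unprivileged UID range on the host
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "userns-remap": "default"
}
EOF
sudo systemctl restart docker
```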
I find it makes my life easier, personally, because I can set up and tear down environments I’m playing with easily.
Same here. I self-host a bunch of dev tools for my personal toy projects, and I decided to migrate from Drone CI to Woodpecker CI this week. Didn't have to worry about uninstalling anything, learning what commands I need to start/stop/restart Woodpecker properly, etc. I just commented out my Drone CI/Runner services in my docker-compose file, added the Woodpecker stuff, pointed it at my Gitea variables, and ran `docker compose up -d`.
If my server ever crashes, I can just copy it over and start from scratch.
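The whole swap boils down to one command (assuming the edited compose file described above):

```bash
# start anything newly defined in docker-compose.yml and remove
# containers whose services are no longer defined (the old Drone ones)
docker compose up -d --remove-orphans
```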
I really need to get into Woodpecker.
How is this meaningfully different than using Deb packages? Or building from source without inspecting the build commands? Or even just building from source without auditing the source?
In the end docker files are just instructions for running software to set up other software. Just like every other single shell script or config file in existence since the mid seventies.
Your first sentence proves that it's different. The developer needs to know it's going to be a Deb package. What about RPM? What if it's going to run on a Mac? On Windows? That means they'd have to change how they develop to think about all of these different platforms. Oh, you run Windows - well, Windows doesn't have OpenSSL, so we need to do this vs. that.
I'd recommend reading up on Docker and containerization. It is not a script for setting up software. If that's what you think it is, then you really don't understand containerization, and I recommend taking some time to learn it. Like it or not, it's here, and if you're doing any dev/ops work professionally you will be left behind for not understanding it.
Apparently I was unclear; I was referring to the security implications of using different manifestations of other people's code. Those are rather similar.
I'd recommend reading up on Docker and containerization. It is not a script for setting up software.
I was referring specifically to docker files. Those are almost to the letter scripts for setting up software.
if that's what you think it is, then you really don't understand containerization, and I recommend taking some time to learn it.
I find your attitude not just uncharitable, but also rude.
And I find misinformation about topics like this to be rude too. It's perfectly fine if you don't understand something, but what I don't like is you going out of your way to dissuade people from using a product when I don't think you understand its core concepts. If you have valid criticisms, like the security of Docker, then that's a different conversation about securing containers, but it's hard to take them as valid criticisms when they're based on a fundamental misunderstanding of the product.
I don't think anyone I have ever talked to professionally, or anything I've read about Docker, would ever describe Dockerfiles as "scripts for setting up software". It is much more nuanced than that.
So yes, I’m a bit rude about it. I do this professionally and I’m very tired of people who don’t understand containerization explain to me how containerization sucks.