I guess Poland? I know from my colleagues that internet infrastructure jumped from old slow stuff to fiber there and it’s fairly cheap.
I struggled with this for a long time, and then I just decided to use Synology Photos.
It has albums, tagging, geolocation, and sharing, plus phone picture backup. It is also inherently a backup, as it lives on my NAS and I back that data up again.
I want to keep the thing I care about the most friction-free, and also not too dependent on me, so that I can still experiment.
I didn’t try PiGallery2 though, maybe I will have a look!
Did it sound cold? I didn’t mean it that way; I just meant to actually answer the question from my PoV. Just for the record, I also did not downvote you.
So yeah, use whatever footgun you prefer, I don’t judge :)
Or rustic! It is compatible with restic but has some nice additions, for example the fact that it supports a config file. That makes operations a bit easier IMHO (I am currently using both).
I really thought swarm was dead :)
To be honest, some Kubernetes distributions keep cluster operations minimal (I use k0s managed via Ansible)!
Either way, the moment you go from N containers on one box to N containers on M boxes, you need to start considering how to handle stateful applications, load balancing, etc. And that in general requires knowledge in a domain different from simply having applications wrapped in containers locally.
Yeah, ultimately every container has its own veth interface, so you can do shaping using tc on those.
Edit: I had a look at docker-tc. It does what you want, BUT: unless your use case is complex, I would really think twice about running a tool written in Bash which has access to the Docker socket (i.e., a trivial escape to the host) and runs with the NET_ADMIN capability.
That’s a lot of power to do something you can also do with a few lines of code executed after you start the container. Again, provided that your use case is not complex.
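For instance, a minimal sketch of that “few lines after start” approach (container name and rate are made up; assumes the image ships cat for the /sys lookup):

# find the host-side veth peer of the container's eth0
IFINDEX=$(docker exec mycontainer cat /sys/class/net/eth0/iflink)
VETH=$(grep -l "^${IFINDEX}$" /sys/class/net/veth*/ifindex | awk -F/ '{print $5}')
# shape everything leaving this interface, i.e. traffic flowing toward the
# container; the uplink direction needs ingress policing or an ifb device
sudo tc qdisc add dev "$VETH" root tbf rate 1mbit burst 32kbit latency 400ms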
Cgroups can also be used to limit network bandwidth (in combination with tc, via the net_cls controller on cgroup v1). I don’t know off the top of my head whether this can be configured at runtime (i.e., via docker run), but you can specify the cgroup parent to use at runtime. This means you can pre-create the cgroup, set the limits, and start the container with that parent cgroup.
You can also run a hook script that adds the PID to a cgroup every time the container is launched, or possibly use tc.
I am not aware of the ability to only limit uplink bandwidth, but I have not researched this.
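A rough sketch of the pre-created-parent mechanics (cgroupfs driver assumed; with the systemd driver the parent must be a slice, and for the network limit itself you would still pair the cgroup with tc):

# pre-create a parent cgroup (path is illustrative)
sudo mkdir /sys/fs/cgroup/limited
# the container's own cgroup is then created under that parent,
# so limits set on the parent apply to it
docker run -d --cgroup-parent=/limited --name myapp nginx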
I think k8s is a different beast, that requires way more domain specific knowledge besides server/Linux basic administration. I do run it, but it’s an evolution of a need, specifically when you want to manage a fleet of machines running containers.
Because the lxc way is inherently different from the docker/podman way. It’s aimed at running full systems rather than single-process containers. It has its use cases, but they are not as common IMHO.
You have a bunch of options:
kubectl run $NAME --image=$IMAGE
this just creates a pod running the specified image. If you kill the pod, or it terminates, it won’t be run again. In general, though, you probably want to do some customization before running (maybe you need volumes, secrets, env, ports, labels, securityContext, etc.), and for that you can let kubectl generate the boilerplate YAML and then simply make some edits:
kubectl run $NAME --image=$IMAGE --dry-run=client -o yaml > mypod.yaml
# edit mypod.yaml
kubectl create -f mypod.yaml
You can do the same with a deployment or statefulset:
kubectl create deployment $NAME -n $NAMESPACE [...] --dry-run=client -o yaml > deployment.yaml
In case you don’t need anything fancy, the kubectl create subcommand allows you to create simple workloads, so that’s probably the answer to your question.
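For example (name and image are placeholders):

kubectl create deployment myapp --image=nginx --replicas=2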
Docker can run rootless too, see https://docs.docker.com/engine/security/rootless/
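If I remember the linked docs correctly, the setup is roughly:

# install a per-user daemon (the script ships with Docker)
dockerd-rootless-setuptool.sh install
# point the client at the user-level socket
export DOCKER_HOST=unix:///run/user/$UID/docker.sock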
I would say Docker. There is no substantial benefit to running podman, while docker is a widely adopted tool (which means more tooling in the ecosystem, easier answers to questions, etc.). The difference is not huge tbh; some time ago the biggest advantage of podman was being able to run rootless while docker was stuck with a root daemon, but this is not the case anymore (docker can run rootless). So unless you have some specific argument for podman, stick with docker.
As someone who is being pressured to move to macOS (M1) from Linux for work, I feel you. I was just having a conversation in another thread about trackpads, and I feel that Apple really built the workflow around gestures, which leaves people who would rather use keybindings out of luck. I know there is Rectangle, but it doesn’t even come close to what a good WM gives.
If there is already another reverse proxy, doing this IMHO is worse than just running a container and adding one more rule in the proxy (if needed at all; with Traefik, for example, it isn’t). I also build all my servers with IaC and a repeatable setup, so installing stuff manually breaks the model (I want to be able to migrate servers with minimal manual action, as I have already had to do twice…).
The job is simple either way, I would say it mostly depends on which ecosystem someone is buying into and what secondary requirements one has.
I would consider the lack of a shell a benefit in this scenario. You really don’t want the extra attack surface and tooling.
Considering you also manage the host, if you want to see what’s going on inside the container (which for such a simple image is more likely something you do once, while building it the first time), you can use nsenter to spawn a bash process in the container’s namespaces (e.g., nsenter -m -p […] -t PID bash, or something like this - I am going by memory).
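Something like this (container name is a placeholder; keeping the host’s mount namespace means bash and your host tooling stay available even though the image has no shell):

# PID 1 of the container, as seen from the host
PID=$(docker inspect -f '{{.State.Pid}}' mycontainer)
# join its network and PID namespaces, keep the host's mount namespace
sudo nsenter -t "$PID" -n -p bash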
It really depends; if your setup is Docker-based (as OP’s seems to be), adding something outside of it is not a good solution. I am talking for example about Traefik, or Caddy with the Docker plugin.
By versioning I meant that when you push to master, you can cut a release which produces a new image. This makes it IMHO simpler than having just git and local files.
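As an illustration, a minimal GitHub Actions sketch of that flow (registry, owner, and tag scheme are assumptions):

name: release-image
on:
  push:
    branches: [master]
jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      packages: write
    steps:
      - uses: actions/checkout@v4
      # log in and push to ghcr.io; OWNER is a placeholder
      - run: echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u "${{ github.actor }}" --password-stdin
      - run: docker build -t ghcr.io/OWNER/mysite:${{ github.sha }} .
      - run: docker push ghcr.io/OWNER/mysite:${{ github.sha }}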
I really don’t see the added complexity. I gain isolation (sure, static sites have a tiny attack surface), easy portability (if I want to move machines it’s one command), and neat organization (essentially no local fs paths to manage), and the overhead is a three-line Dockerfile plus the couple of MB needed to duplicate a webserver binary. Of course it is a matter of preference, but I honestly don’t see the cons.
Static sites are a perfectly suitable use-case for containers. You get isolation and versioning at the absolutely negligible cost of duplicating a binary (the webserver, which in the case of the one I linked in my comment is 5MB). Also, you get autostart of the server if you use compose, which is equivalent to what you would do with a systemd unit, I suppose.
You can then use a reverse-proxy to simply route to the different containers.
I personally package the files in a scratch or distroless image and use https://github.com/static-web-server/static-web-server, a tiny server written in Rust. It is very similar to nginx or httpd, but the static nature of the binary removes clutter, reduces the attack surface (because you can use smaller images), and reduces the size of the image.
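As an illustration, the whole image can be as small as this (the tag and the default document root are assumptions from memory; check the static-web-server docs for the exact published image):

FROM ghcr.io/static-web-server/static-web-server:2
# the generated site goes into the image's default document root
COPY ./public /public

With compose, restart: unless-stopped on that service then gives you the autostart mentioned above.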
I don’t think this is needed to implement censorship. It’s Italy; I know better than to think something is done out of malice when it can be the result of incompetence. I completely believe this is some idiotic implementation of what football and TV economic powers wanted. Either way, this idea that everything bad happens because of “old white men” is bs. We don’t even need Meloni; we can use the dear Iron Lady as an example… class and economic position count way more than age and gender, ultimately.
Thanks (grazie?)! I was looking for something similar, and Kanidm looks great feature-wise and simple to deploy!