• 0 Posts
  • 287 Comments
Joined 2 years ago
Cake day: June 30th, 2023




  • especially once a service does fail or needs any amount of customization.

    A failed service gets killed and restarted. It should then work correctly.
    If it fails to recover after being killed, then it’s not a service that’s fully ready for containerisation.
    So, either build your recovery process to account for this… or fix it so it can recover.
    That’s often why databases are run separately from the services. Databases can recover from this, and the services stay stateless - it doesn’t matter how many you run or restart.

    As for customisation: if it isn’t exposed via env vars, then it can’t be altered.
    If you need something beyond the env vars, then you use that container as a starting point and make your customisation part of your container build process via a dockerfile (or equivalent) - the compose sketch below shows the env-var side of this.

    It’s a bit like saying “chisels are great. But as soon as you need to cut a fillet steak, you need to sharpen a side of the chisel instead of the tip of the chisel”.
    It’s using a chisel incorrectly.
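
    As a rough illustration of the “stateless service configured via env vars, state lives in the database” split - the image names, variables and ports here are made up, not taken from any particular project:

    ```yaml
    # Illustrative only: a stateless app configured purely through env vars,
    # with the database as a separate service owning the only persistent state.
    services:
      app:
        image: example/webapp:1.2.3          # hypothetical image
        environment:
          DATABASE_URL: postgres://app:secret@db:5432/app
          LOG_LEVEL: info
        restart: unless-stopped              # a failed container just gets restarted
        depends_on:
          - db

      db:
        image: postgres:16
        environment:
          POSTGRES_USER: app
          POSTGRES_PASSWORD: secret
          POSTGRES_DB: app
        volumes:
          - db-data:/var/lib/postgresql/data

    volumes:
      db-data:
    ```

    If the app container dies, it comes back with the same env and carries on; only the db volume holds state. Anything not covered by an env var is where the dockerfile-on-top approach comes in.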


  • I would always run proxmox to set up docker VMs.

    I found Talos Linux, a dedicated distro for kubernetes, which aligned with my desire to learn k8s.
    It was great. I ran it as bare-metal on a 3 node cluster. I learned a lot, I got my project complete, everything went fine.
    I will use Talos Linux again.
    However, next time I’m running proxmox with 2 VMs per node - 3 talos control-plane VMs and 3 talos worker VMs in total.
    I imagine running the 6 Talos machines that way is the way to go. Running them hyperconverged was a massive pain. Separating the control plane and the worker/data plane (or whatever it’s called) makes sense - it’s the way k8s is designed.
    It wasn’t the hardware that had issues, but various workloads. And being able to restart or wipe a control node or a worker node would’ve made things so much easier.

    Also, why wouldn’t I run proxmox?
    The overhead is minimal, I get a nice overview, a nice UI, and I get snapshots and backups.






  • Yeh, but you only need 10 vibe code cleaner-uppers per vibe coder.
    And a vibe coder is a 10x developer.
    You just have to mitigate the increased cost of AI API calls.
    It pretty much balances out, with the obvious 20% efficiency boost - which is where everyone makes their money: companies, developers and shovel AI platforms… All 20% efficiency boost. Which directly relates to profit boosts. 20% line goes up!
    Which also pays for the datacenters, the ~~shovels~~ GPUs, the power, the cooling and the water for the cooling. It’s all cheaper, cause AI is at least 20% more productive.

    Even if your vibe-coder-code-fixers turn into vibe-coder-code-vibe-fixers… That’s just another 20% efficiency boost. Basically printing money! Oh, but you need to buy more ~~shovels~~ GPUs. But that’s also a win because ~~shovels~~ GPUs don’t have unions or require holidays. Think of the profits! They work 24/7.
    And all you need are vibe-coder-code-vibe-fixer-code-fixers.

    …As long as your vibe-coder-code-vibe-fixer-code-fixers don’t turn into vibe-coder-code-vibe-fixer-code-vibe-fixers (I’m so lost, I think that’s right).

    Edit: forgot some shovels



  • I’d still run k8s inside a proxmox VM. Even if basically all the host’s resources are dedicated to that one VM, proxmox gives you a huge amount of oversight and additional tooling.
    Proxmox doesn’t have to do much (or even anything), beyond provide a virtual machine.

    I’ve run Talos Linux (a dedicated k8s distro) bare metal. It was fine, but I wish I had a hypervisor. I was lucky that my project could be wiped and rebuilt with ease. Having a hypervisor would have meant I could just roll back to a snapshot, and separate worker/master nodes without running additional servers.
    This was sorely missed while I was learning both how to deploy k8s and k8s itself.
    For the next project that is similar, I’ll run talos inside proxmox VMs.

    As far as “how does cloudflare work in k8s”… However you want?
    You could manually deploy the example manifests provided by cloudflare.
    Or perhaps there are some helm charts that can make it all a bit easier?

    Or you could install an operator, which watches for custom resources (defined by its Custom Resource Definitions) or for specific metadata on standard resources, then deploys and configures the additional resources needed to make it all work.
    https://github.com/adyanth/cloudflare-operator seems popular?

    I’d look to reduce the amount of yaml you have to write/configure by hand, which is why I like operators - rough sketch of the idea below.
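
    The general shape, if you go the operator route, is that you apply one small custom resource and the operator reconciles it into the deployments, secrets and config behind it. This is a purely illustrative sketch - the apiVersion, kind and field names are invented, not the real adyanth/cloudflare-operator schema, so check that project’s docs for its actual CRDs:

    ```yaml
    # Hypothetical custom resource, just to show the operator pattern.
    # None of these names come from a real CRD.
    apiVersion: tunnels.example.com/v1alpha1
    kind: Tunnel
    metadata:
      name: my-tunnel
      namespace: default
    spec:
      domain: app.example.com        # public hostname the tunnel should serve
      credentialsSecret: cf-creds    # k8s Secret holding the Cloudflare token
      target:
        service: my-app              # in-cluster Service to expose
        port: 80
    ```

    The operator turns that into the actual tunnel deployment and config, which is a lot less yaml than hand-writing every manifest.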




  • It’s also easier to share vulnerability fixes between different projects.

    Say “Y” uses a similar memory-management approach to “T”. T gets hacked due to whatever, and people who use both Y and T can report to Y that a similar vulnerability might be exploitable.

    Edit:
    In closed source, this might happen if both projects are under the same company.
    But users will never have the ability to tell Y that T was hacked in a way that might affect Y.