Great news! I started my selfhost journey over a year ago, and I’m finding myself needing better hardware. There are so many services I want to run that my NAS can’t handle. And unfortunately I need to add GPU transcoding to my Jellyfin setup.

What’s the best OS for a machine focused on containers and (getting started with) VMs? I’ve heard Proxmox recommended.

What CPU specs should I be concerned about?

I’m willing to buy a pre-built as long as its hardware has sufficient longevity.

  • borari@lemmy.dbzer0.com · 2 days ago

    Depending on how many bays your Synology has, you might be best off getting a NUC or a mini PC for compute and using your Synology just for storage.

    • Lem453@lemmy.ca · 1 day ago (edited)

      I did this when I moved from Unraid because I wanted better infra-as-code for my Docker containers etc. I kept Unraid with all my drives and use NFS mounts from another machine running Proxmox, which runs a VM for my Docker containers.

    • Ebby@lemmy.ssba.com · 2 days ago

      That’s the route I took too: NAS for storage and simple Docker containers, mini PC for compute/GPU.

    • curbstickle@lemmy.dbzer0.com · 2 days ago

      This is precisely what I do with my NAS.

      I have 9…ish tiny/mini/micros for compute and two NAS (locally).

      Solid approach.

      • Zikeji@programming.dev · 2 days ago

        9? That’s quite a bit of compute lol.

        My journey started with 1 server, then 4, then 5 (one functioning as a NAS), then 1 (just the NAS box), then I moved and decided to slim it down to a proper NAS and one mini PC/NUC clone. Now I’m up to two because the first was an Intel N105, which just isn’t up to the challenge lol

        • curbstickle@lemmy.dbzer0.com · 2 days ago

          3 are for the family, 3 are for work stuff, 3 are for me as toys.

          (Plus a Mac mini and a p330 as spare desktops for me, thus the -ish)

      • Bronzie@sh.itjust.works · 1 day ago

        I run a 4-bay and an N100 NUC.

        The Synology is almost a pure storage machine. It works really well with Proxmox on the side. Not a single file has made it kneel yet, and I’ve thrown some high-bitrate bad boys on it.

        Isn’t upgrading the drives an alternative?

        I feel like you sacrifice a lot of practicality removing the NAS, such as automatic backup from phones and very easy remote access.
        Personally I also prefer separating data and software, so I don’t lose it all if a component fails.

        Just my .02

          • Bronzie@sh.itjust.works · 15 hours ago

            Cool, well then I can at least share what I went with that has worked really well: a GMKtec N100 NUC from AliExpress with 16 GB of RAM.

            It’s hosting Jellyfin with transcoding, Pi-hole, Home Assistant, Heimdall, a Valheim server and loads of other small LXCs in Proxmox.
            I don’t think I’ve ever seen it break a sweat.
            The NAS holds the *arr stack and qBittorrent, but that’s it.

            I cannot speak to the longevity of it, but I repasted the CPU once I got it and it’s chilling below 45 degrees all day long, so I expect it to last for many years. I also enabled C-states to get idle consumption as low as possible, around 7-8W.
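
            If you want to chase similar idle numbers, this is roughly what it looks like on my box (just a sketch; the C-states themselves get enabled in the BIOS, the rest is powertop from a shell, and exact tunables will differ on your hardware):

              # see which package C-states the CPU actually reaches while idle
              sudo powertop
              # apply powertop's suggested runtime power tweaks
              sudo powertop --auto-tune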

            Best of luck with whatever setup you end up with mate!

      • borari@lemmy.dbzer0.com · 2 days ago

        I have a 6-bay, so yeah that might be a little limiting. I have all my personal stuff backed up to an encrypted cloud mount, the bulk of my storage space is pirated media I could download again, and I have the Synology using SHR, so I just plug in a bigger drive, expand the array, then plug in another bigger drive and repeat. Because of the space lost to redundancy you might not benefit as much from that method with just 4 bays. Or if you have enough stuff that you can’t feasibly push up to the cloud to give peace of mind during rebuilding, I guess.

  • Scrubbles@poptalk.scrubbles.tech · 2 days ago

    I think at this point I agree with the other commenter. If you’re strapped for storage it’s time to leave Synology behind, but it sounds more like it’s time to separate your app server from your storage server.

    I use Proxmox, and it was my primary when I got started with the same thing. I recommend building out storage in Proxmox directly; that will be for VM images and container volumes. Then set up regular backups to your Synology box. That way you have hot storage for drives and running things, and cold storage for backups.
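
    As a rough sketch of what the backup side can look like (storage name, IP, export path and VMID below are just examples, swap in your own):

      # register a Synology NFS export as backup storage on the Proxmox host
      pvesm add nfs synology-backup --server 192.168.1.50 --export /volume1/proxmox-backups --content backup
      # then point a scheduled backup job (or a one-off vzdump) at it
      vzdump 100 --storage synology-backup --mode snapshot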

    Then, inside your VMs and containers, you can mount things like media and other items from your Synology.

    For you, I would recommend Proxmox, then on top of that a big VM for running Docker containers. In that VM you set up all of your mounts from the Synology, like your Jellyfin media, and pass those mounts into Docker.
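
    Inside that VM it ends up being something like this (IP, paths and container are placeholders for whatever your share and services actually are):

      # mount the Synology share in the Docker VM (needs nfs-common installed)
      sudo mount -t nfs 192.168.1.50:/volume1/media /mnt/media
      # add a matching line to /etc/fstab so it survives reboots, then
      # hand the mount to the container as a plain volume
      docker run -d --name jellyfin -v /mnt/media:/media jellyfin/jellyfin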

    If you ever find yourself needing to stretch beyond the one box, then you can think about Kubernetes or something, but I think that setup would be a good jump for now.

    • Leax@lemmy.dbzer0.com · 2 days ago

      Why not use Proxmox to host the containers directly instead of using a VM? I know it’s easier to use this way, but it kinda misses the point of using Proxmox then.

      • Scrubbles@poptalk.scrubbles.tech · 2 days ago

        Not at all. Proxmox does a great job at hosting VMs and giving you a control plane for them - but it does not do containers well. LXCs are a thing, and it hosts those - but never try to do Docker in an LXC. (I tried so many different ways and guides and there were just too many caveats, and you always end up essentially giving root access to your containers, so it’s not great anyway.) I’d like to see Proxmox offer some sort of Docker-first approach where it will manage volumes at the Proxmox level, but they don’t seem concerned with that, and honestly if you’re doing that then you’re nearing Kubernetes anyway.

        Which is what I ended up doing - k3s on Proxmox VMs. Proxmox handles the instances themselves, spins up a VM on each host to run k3s, and then I run k3s from within there. Same paradigm as the major cloud providers: GKE, AKS, and EKS all run k8s within a VM on their existing compute stack, so this fits right in.
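
        If anyone wants to try the same layout, the quick-install from the k3s docs is about as simple as it gets (server address and token below are placeholders):

          # on the first VM (control plane)
          curl -sfL https://get.k3s.io | sh -
          # on each additional VM, join as an agent; the token lives at
          # /var/lib/rancher/k3s/server/node-token on the first VM
          curl -sfL https://get.k3s.io | K3S_URL=https://<server-vm-ip>:6443 K3S_TOKEN=<node-token> sh -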

        • N0x0n@lemmy.ml · 1 day ago

          Thanks for sharing your experience! I was kinda wondering for my new N300 whether I should install Proxmox + LXC + Docker or Proxmox + VM + Docker!

          Hearing you had a lot of issues and caveats makes my choice easier without even giving it a try! So thanks!

          • Scrubbles@poptalk.scrubbles.tech · 1 day ago

            I really wanted it to work; to me it made the most sense, with as little virtualization as I could get away with. A VM felt like such a heavy layer in between - but it just wasn’t meant to work that way. You have to essentially run your LXC as root, meaning it’s essentially just the host anyway so it can run Docker. Then when you get down to it, you’ve lost all the benefits of the LXC vs just running Docker. Not to mention that anytime there was even a minor update to Proxmox, something usually broke.

            I’m surprised Proxmox hasn’t added straight-up support for containers, whether via Docker, Podman, or even just containerd directly. But we aren’t its target audience either.

            I’m glad you can take my years of struggling to find a way to get it to work well and learn from it.

        • non_burglar@lemmy.world · 1 day ago

          Docker runs fine nested in LXC with uid/gid mapping.

          The difficulties of running Docker in LXC are particular to Proxmox. I ran Docker in LXC on Proxmox for years, but I’m glad I moved to Incus; it’s a much more sensible approach.
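
          For anyone curious, on the Proxmox side it boils down to something like this (200 is an example container ID; custom uid/gid maps then go in the container config as lxc.idmap lines):

            # on an existing unprivileged container, enable nesting so Docker can run inside
            pct set 200 --features nesting=1,keyctl=1
            # Incus handles the same thing with:
            # incus config set <container> security.nesting true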

        • Leax@lemmy.dbzer0.com · 2 days ago

          Thanks for your point of view! I’m still new to Proxmox and I went the LXC route… Seems to be working well so far, but time will tell!

    • LazerDickMcCheese@sh.itjust.works (OP) · 2 days ago

      Thanks, that’s some of the info I need to make the jump over. How’s the learning curve? One of my big concerns is wrapping all of these things under Tailscale. It was easy on Synology, but Proxmox (I imagine) isn’t as straightforward. Eventually I’d like to switch to Headscale, but one thing at a time.

      • Scrubbles@poptalk.scrubbles.tech · 2 days ago

        Just focus on one project at a time, break it out into small victories that you can celebrate. A project like this is going to be more than a single weekend. Just get proxmox up and running. Then a simple VM. Then a backup job. Don’t try to get everything including tailscale working all at once. The learning curve is a bit more than you’re probably used to, but if you take it slow and focus on those small steps you’ll be fine.
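
        When you do get to the Tailscale step, inside that Docker VM it really is just a couple of commands (the subnet below is an example; only add it if you want the VM to relay your LAN, and you’ll still need IP forwarding plus route approval in the admin console):

          # install and bring up Tailscale inside the VM
          curl -fsSL https://tailscale.com/install.sh | sh
          sudo tailscale up
          # optional: advertise the LAN so other hosts are reachable over the tailnet
          sudo tailscale up --advertise-routes=192.168.1.0/24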

  • ragebutt@lemmy.dbzer0.com · 2 days ago

    Define goals. What services can’t be handled?

    If transcoding is a goal, build around Intel. Quick Sync Video is a no-brainer, imo. A discrete GPU is unnecessary power draw (15-25W+ idle depending on the card) and a waste of a PCIe slot unless you want to do LLM stuff. Imo 10th-gen Intel is the sweet spot for Quick Sync unless you desperately need AV1/VP9. If so, you need the much more expensive 13th/14th gen, which use more power and have more considerations for thermal management.
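
    As a sanity check once you have an Intel box, handing the iGPU to Jellyfin is about this much work (paths are examples, and you still switch on QSV in the Jellyfin dashboard afterwards):

      # the iGPU should show up as a DRI render node
      ls /dev/dri    # expect card0 and renderD128
      # pass it through to the Jellyfin container
      docker run -d --device /dev/dri:/dev/dri -v /srv/jellyfin/config:/config -v /srv/media:/media jellyfin/jellyfin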

    OS is an endless debate. Proxmox is fine and free, why not try it? Unraid is easier to get your bearings with, but it does cost money. Debian is also free but a bit more confusing because it’s not purpose-built. TrueNAS as well. All can do containers and VMs, but they approach them in different ways. None is “best,” but some are more “free,” which is nice.

    CPU specs are dependent on goals. For transcoding, as said above, Quick Sync is necessary and is so impressive. I can transcode a 4K remux to one device while transcoding a 1080p remux to another and direct playing a 4K remux, and the CPU sits under 25% load on the Xeon equivalent of a 10700. You don’t need a Xeon btw, I just got a great deal where this was $50 (see next point). Otherwise specs depend wildly on what you plan to do. I can run Windows VMs pretty well with this, though, for the handful of times I need a Windows machine.

    Prebuilt is a waste. Used hardware is cheap, gives more options, and lets you plan ahead. What are you willing to buy now and what do you eventually want? My NAS started as a 36TB array with 16GB RAM and no cache; years later it’s 234TB with a 4TB cache and 32GB ECC RAM. Slowly building up was easier on the wallet, and used hardware, refurb drives, etc. is 100% the way to build. Your goals will likely vary, but figure out your roadmap and go from there.

    Also keep in mind that not every service benefits from running on a NAS. My Home Assistant server runs on a Raspberry Pi, for example. It’s easier to keep it segregated, and I don’t have to worry about getting Z-Wave/Zigbee/MQTT/etc. all working in Docker, plus dealing with any server downtime impacting the home. Tbf literally everything else runs on the NAS though haha

    • LazerDickMcCheese@sh.itjust.works (OP) · 1 day ago

      In hindsight, I didn’t explain myself well enough. My plan is to use my current NAS as a NAS and little more; I’d like a machine with respectable hardware to handle what my NAS is currently running plus more.

      My NAS has Jellyfin, arrs, all the stuff that goes with that, Pi-hole, and Homarr. And that’s pushing its limits: everything has been slow, streams freeze, I’ve had containers quit, etc.

      I’d like to get into other projects like Radicale, Mealie, ErsatzTV (my old PC could handle it, the NAS can’t), CCTV, and more. But judging by my resource usage, the NAS can’t handle it.

      GPU (for the sake of transcoding) isn’t worth it?

      • ragebutt@lemmy.dbzer0.com · 1 day ago (edited)

        This makes sense

        No sense in getting rid of hardware that is working. I’m not familiar with ErsatzTV, but all the other stuff I’m able to handily run on a 10th-gen Intel build that is also handling NAS duties, fwiw. And some of it is not ideal (CCTV is handled via Blue Iris, which runs in a Windows VM; everything else is Docker).

        For the GPU it really depends on your needs. How many users is the big one. If you have at most 2-4 concurrent users, and even that is an uncommon scenario, the GPU is a waste of power, money, and thermal management. The iGPU will sip power and transcode with that user load (depending on library content; again, AV1/VP9 on a 10th gen isn’t happening), assuming you have a decent amount of RAM (I have 32GB, so you don’t need absurd amounts).

        However, if you have a lot of users hitting you, 5-6+ concurrent streams that all transcode, then you need to start evaluating a discrete GPU (and maybe a significant internet connection, bc damn). Alternatively you can suggest your users get something like a Ugoos AM6B+ flashed with CoreELEC, or a similar setup that can just direct play basically anything, but that’s a bit challenging to set up.

        So then it may be as simple as buying some e-waste PC to use as a server and using the NAS for its intended purpose. Frankly this is probably better; it’s worse power-wise, but having the storage separate from the services has advantages.

        • LazerDickMcCheese@sh.itjust.works (OP) · 1 day ago

          So I’m expecting a max of 5 concurrent users, but most wouldn’t need transcoding. The real hiccup (brace yourself) is a 720p CRT and (assuming I get transcoding to work well) a 480p CRT. I’m pretty much a novice on PC specs outside of the “buy whatever you can afford for gaming” mindset, so any suggestions there are welcome. My budget is…whatever it takes to not regret the hardware years from now. My last build was $2k, for reference.

  • some_guy@lemmy.sdf.org · 2 days ago

    My Synology is compatible with an expansion unit and can support two of them. Check if yours can do the same for the storage aspect.

  • Cousin Mose@lemmy.hogru.ch · 2 days ago

    Honestly I just run Alpine Linux on a mini PC (router) or Raspberry Pi (NAS). I don’t like to screw around with outdated, bloated Debian-based distros.