Currently working on an Arch server for my self-hosting needs. I love Arch; in my eyes it’s the perfect platform for self hosting. There is no bloat, making it lightweight and resource-efficient. It’s also very stable if you go down the LTS route and have the time and skills to head off problems before they become catastrophic.

The downsides: for someone who is a semi-noob there is a very steep learning curve. Arch is very well documented, but when you hit a problem or a brick wall it’s very frustrating. My low tolerance for bullshit means I take hours- or days-long breaks from it. There are also time demands in the real world, so needless to say I’ve been going at it for a few weeks now.

Unraid is very appealing - nice clean interface, out-of-the-box solutions for whatever you want to do, easy NAS management… What’s not to like? If it was fully open source I would’ve bought into it from the start. At least once a day I think “I’m done. Sign me up, Unraid.” It’s taking an age to set up the Arch server; if I went for Unraid I could be self-hosting in a matter of hours. Unraid is the antithesis of Arch. Arch is for masochists.

Do you ever look at products like unraid and think “fuck this shit, gimme some of that”? What is your version of this? Have you ever actually done it and regretted it/lived happily ever after?

  • chaotic_disorganizer@lemmy.world · 13 days ago

    Weeeell, since they switched to a semi-subscription model, I’d recommend looking into TrueNAS (inb4 they start locking down their stuff)

    • jobbies@lemmy.zip (OP) · 13 days ago

      TrueNAS was actually the first thing I tried. The NAS side of it is great, but my need to tinker and get my hands dirty got the better of me. And I don’t actually mind paying for good software; it’s the fact that so much of Unraid is closed-source that puts me off.

      • nagaram@startrek.website · 13 days ago

        Are you using TrueNAS as the entire homelab?

        I also love messing with stuff until it breaks and I learn something, but I’ve decided I just want my files to be accessible.

        So I actually have TrueNAS virtualized with a passed-through HBA; that way I can run Proxmox to host all my breakable VMs while leaving TrueNAS alone.

        • jobbies@lemmy.zip (OP) · 13 days ago

          So I actually have TrueNAS virtualized with a passed-through HBA; that way I can run Proxmox to host all my breakable VMs while leaving TrueNAS alone.

          I want to try this eventually. Never used HBAs before. Is it hard to set up? Reliable once it’s up and running?

          • nagaram@startrek.website · 13 days ago

            It was really simple to do in Proxmox.

            You will find no-name HBAs in IT mode on eBay for half the price of Intel, Supermicro, Dell, etc. branded ones. Do not buy the no-names. I spent a week flashing and reflashing a cheap one, cycling through cables, and so on. Nothing.

            My Supermicro-branded one worked with absolutely no issues. And I think it was like $40.

            It probably took a total of 30 minutes to pass it through and build the VM and everything. It took a couple of days to rebuild my data from my previous TrueNAS server, but that was 10 TB of data on 4 drives.

            The only issues I’ve had have been my own reading comprehension in setting up TrueNAS accounts.
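
            If it helps, the rough shape of the passthrough on the Proxmox CLI is something like this (the VM ID and PCI address are placeholders for your own, and IOMMU has to be enabled in the BIOS and on the kernel command line first):

            ```bash
            # Find the HBA's PCI address (look for the SAS controller)
            lspci -nn | grep -i sas

            # Hand the whole card to the TrueNAS VM (VM ID 100 is a placeholder)
            qm set 100 -hostpci0 0000:01:00.0

            # Confirm it's attached in the VM config
            qm config 100 | grep hostpci
            ```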

  • HelloRoot@lemy.lol · 13 days ago

    How close are you to “fck it, im just gonna pay for unraid”?

    Extremely far. Maximum distance. My self-updating Debian box with an SFTPGo container and some RAID HDDs slapped onto it has been rock solid for years.
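
    For scale, that whole stack boils down to a couple of commands on Debian (the ports and host path are illustrative, and I’m assuming the stock drakkan/sftpgo image):

    ```bash
    # Hands-off security updates
    sudo apt install unattended-upgrades
    sudo dpkg-reconfigure -plow unattended-upgrades

    # SFTPGo in a container: SFTP on 2022, web admin on 8080
    docker run -d --name sftpgo \
      -p 2022:2022 -p 8080:8080 \
      -v /srv/sftpgo/data:/srv/sftpgo \
      drakkan/sftpgo:latest
    ```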

    • Overspark@piefed.social · 13 days ago

      Yeah, I wouldn’t call Arch a server OS. I run Arch on my laptop, but Debian on my docker/file/self-hosting server. Best tool for the job, etc. Never even been tempted by Unraid; the whole point of running Linux is that I control what goes where.

      • refreeze@lemmy.world · 13 days ago

        Arch on the desktop, Debian on the server is the way to go. Both are solid, community (non-corporate) distros that fit each use case.

      • hddsx@lemmy.ca · 13 days ago

        I’m not sure what the benefits of Unraid are, but for better or worse, I’ve been running Linux servers since 2007 or so, so……

        I also ran Arch full time for a few years in the 2010s. I like it, but they put in breaking changes occasionally that I don’t want to have to deal with on a server.

        I was on CentOS and switched to Debian because of IBM/RH.

  • glizzyguzzler@piefed.blahaj.zone · 13 days ago

    Reading that is wild

    Why are you doing Arch on a server? Do you want to tinker forever and read the update notes like a hawk, lest the server implode?

    Arch isn’t gonna be noticeably leaner than Debian.

    Get Debian, install Docker and/or Podman, set up unattended upgrades, and then install Incus if you need VMs or containers down the line. You can stick ZFS on it and it’ll be fine, and you already have btrfs for basic mirrors. Install Cockpit and you’ll have a nice GUI. Try not to think you have to fiddle with settings; the maintainers of each package/service have set it up so it works for most people (and we’re most people!). You’ll only need to intervene on a handful of package configs. All set, and it’s not proprietary.
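
    As a minimal sketch of that starting point on a fresh Debian install (these are the stock Debian package names):

    ```bash
    # Containers, hands-off security updates, and a web GUI in three lines
    sudo apt install docker.io podman unattended-upgrades cockpit
    sudo dpkg-reconfigure -plow unattended-upgrades
    sudo systemctl enable --now cockpit.socket   # GUI at https://<host>:9090
    ```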

    • paper_moon@lemmy.world · 13 days ago

      There was a thread yesterday where most people were choosing Arch for their server; I didn’t get it either. Like you, I’d much rather run Debian or something else with smoother updates.

    • Vorpal@programming.dev · 13 days ago

      Agreed. I run Arch on my desktop and laptop because it is more stable (in the sense of fewer bugs; things like suspend/resume work reliably, for example) than any other distro I have used.

      But on my VPS and my Pi I run Debian, because it is more stable (in the sense of fewer upgrades that could break things). I can enable unattended upgrades there, which I would never do on my Arch systems (though it is incredibly rare for those to break).

      Also: if someone says they are a (self-proclaimed) “semi-noob” I would not recommend Arch. I have used Linux since 2002, and as my main OS since 2006. (Furthermore, I’m a software developer in C/C++/Rust.) While Arch is a great distro, don’t start with Arch.

  • Zwuzelmaus@feddit.org · 13 days ago

    Not close at all.

    OK, my current setup is missing some bells and whistles; it’s just a poor man’s NAS made of ZFS and Samba, plus a Nextcloud for convenience.

    But I fell so much in love with ZFS that I would never replace it with Unraid. For my next box I am looking forward to using TrueNAS instead.

    • jobbies@lemmy.zip (OP) · 13 days ago

      ZFS is a bit like Arch: wonderful in theory, but in practice it can be a bitch to work with. I’ve got it working on Arch, but it wasn’t easy, let me tell you.

      • non_burglar@lemmy.world · 13 days ago

        What parts are “a bitch” to work with?

        I’m a bit confused about your approach in general:

        No ZFS because it “breaks”, but you use Arch as a server OS? Sounds like you want to tinker and break things to learn, but virtualization is “overkill”?

        I don’t understand what you’re trying to get from your homelab.

        • jobbies@lemmy.zip (OP) · 13 days ago

          What parts are “a bitch” to work with

          If you’re coming from Windows servers/environments (or Mac, for that matter), configuring ZFS on the CLI (as you do on Arch) is a learning curve and can be tedious.

          No zfs because it “breaks”

          It’s not baked into the Arch kernels, so unless you’ve got your wits about you, running updates can fuck everything up.
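
          For what it’s worth, the workaround I’ve seen suggested (assuming the archzfs packages and the LTS kernel) is pinning the kernel so it can’t update past the ZFS module:

          ```bash
          # /etc/pacman.conf -- hold the kernel back so it never outruns the ZFS module
          IgnorePkg = linux-lts linux-lts-headers

          # When the ZFS packages have caught up, update the kernel deliberately:
          # pacman -S linux-lts linux-lts-headers
          ```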

          virtualization is “overkill”

          Yes. If all you’re looking for is a NAS with some docker containers and you don’t need the segregation, virtualization is overkill.

          I don’t understand what you’re trying to get from your homelab.

          You could just ask questions? There’s no need to be a dick about it.

          • non_burglar@lemmy.world · 13 days ago

            There’s no need to be a dick about it.

            I meant no disrespect; I suppose I should have been more direct.

            I asked about ZFS because it is not really that difficult to set up and there aren’t that many variables involved. You create a pool, then start using it. There isn’t much more to it.
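
            To put numbers on “not that many variables”: a mirrored pool plus a dataset is about four commands (the device paths are placeholders):

            ```bash
            zpool create tank mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2
            zfs create tank/media
            zfs set compression=lz4 tank/media
            zpool status tank
            ```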

            It’s not baked into the Arch kernels, so unless you’ve got your wits about you, running updates can fuck everything up.

            That is an Arch problem, not a ZFS one. An update on Debian with ZFS would almost never behave like this.

            I asked about virtualization because it would allow you to break things intentionally and flip back to a desired state, which seems to fit with your like of solving broken stuff.

            So in the end, you’re obviously free to do what you like, and that’s the great thing about Linux. But you definitely seem to want to do things the hard way.

            Have a better one.

          • AbidanYre@lemmy.world · 13 days ago

            I don’t know what that other dude’s on about.

            The general consensus on ZFS is (or at least was) that you need 1 GB of RAM per terabyte of zpool, especially if you want to run deduplication.

            If you don’t need dedupe the requirements drop significantly.

          • Flamekebab@piefed.social · 13 days ago

            Looks like I angered people by not loving ZFS. I don’t feel like being bagged on further for using it wrong or whatever.

              • Flamekebab@piefed.social · 13 days ago

                I was trying to use it for a mirrored setup with TrueNAS and found it to be flaky to the point of uselessness. I was essentially told that I was using it wrong because I had USB disks. It allowed me to set it up and provided no warnings, but after losing my test data for the fifth time (brand new disks; that wasn’t the issue) I gave up and set up a simple rsync job to mirror data between the two ext4 disks.

                If losing power effectively wipes my data then it’s no damn use to me. I’m sure it’s great in a hermetically sealed data centre or something but if I can’t pull one of the mirrored disks and plug it into another machine for data recovery then it’s no damn good to me.
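
                (For reference, the job really can be that simple; the mount points here are placeholders:)

                ```bash
                # One-way mirror; --delete keeps the copy exact, trailing slashes matter
                rsync -a --delete /mnt/disk1/ /mnt/disk2/
                ```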

                • non_burglar@lemmy.world · 13 days ago

                  Ah, I hear you, and sorry you had that experience. GUI controls of ZFS aren’t usually very intuitive.

                  Also, ZFS assumes it has direct access to the block device, and certain USB implementations (not UAS) use sync operations that sit somewhere between the HAL and userland. So ZFS likes direct-attached storage; it’s a caveat, to be sure.

                  If you ever change your mind, https://klarasystems.com/zfs/ has a ton of reading and tutorials on ZFS.

                • JGrffn@lemmy.world · 13 days ago

                  Wait, so you built a pool using removable USB media and were surprised it didn’t work? Lmao

                  That’s like being angry that a car wash physically hurt you because you drove in on a bike, then using a hose on your bike and claiming that the hose is better than the car wash.

                  ZFS is a low-level system meant for PCIe or SATA, not USB, which sits many layers above SATA and PCIe. Rsync was the right choice for this scenario, since it’s a higher-level program that doesn’t care about anything other than the data and will work over USB, Ethernet, wifi, etc. But you’ve got to understand why it was the right choice instead of just throwing shade at one of the most robust filesystems out there because it wasn’t designed for your specific use case.

                • Andres@social.ridetrans.it · 13 days ago

                  @Flamekebab @non_burglar Sounds like snapraid might be a better fit for your needs. Since it runs on top of the filesystem, if you lose a disk you can still access files from the other disk(s). It’s better than rsync in that it provides regular data validation (‘snapraid scrub’ once per week or so). It is designed more for a raid5-style setup than for mirroring (raid1), however.
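
                  A minimal snapraid layout, assuming two data disks and one parity disk (all paths are placeholders), looks roughly like:

                  ```bash
                  # /etc/snapraid.conf (illustrative)
                  #   parity  /mnt/parity/snapraid.parity
                  #   content /var/snapraid/snapraid.content
                  #   data d1 /mnt/disk1/
                  #   data d2 /mnt/disk2/

                  snapraid sync    # update parity after files change
                  snapraid scrub   # weekly-ish: verify data against parity
                  ```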

  • roofuskit@lemmy.world · 13 days ago

    I already did, no regrets. The way it handles storage is the killer feature for me. Being able to upgrade my drives or add one with very little effort is worth every penny.

    Edit: I was grandfathered in before the subscription

      • jobbies@lemmy.zip (OP) · 13 days ago

        There is the ‘lifetime’ option? I hate subscriptions. I don’t know how much money these companies think I have, but there’s no way I could subscribe to everything.

        • Unforeseen@sh.itjust.works · 13 days ago

          Ahh yes, I see that option now. At $250 CAD it’s pretty steep, but I’m glad they at least have it as an option.

          • jobbies@lemmy.zip (OP) · 13 days ago

            I’m very jealous of everyone who bought in before the subscriptions ha

      • roofuskit@lemmy.world · 13 days ago

        I thought you could still buy the highest tier and get no subscription, just that they raised the price?

  • MudMan@fedia.io · 13 days ago

    I’m sidetracking a bit, but am I alone in thinking self hosting hobbyists are way too into “lightweight and not bloated” as a value?

    I mean, I get it if you have a whole data center worth of servers, but if it’s a cobbled-together home server it’s probably fine, right? My current setup idles at 1.5% of its CPU and 25% of its RAM. If I turned everything off, those values would be close to zero: effectively trivial alongside any one of the apps I’m running in there. Surely any amount of convenience is worth the extra bloat, right?

    • Illecors@lemmy.cafe · 13 days ago

      Gentoo/Arch guy checking in. It’s more about having fewer code paths that can go wrong after some update. At least in my case.

      • MudMan@fedia.io · 13 days ago

        After an OS update? I mean, I guess, but most things are going to be in containers anyway, right?

        The last update that messed me up on any counts was Python-related and that would have got me on any distro just as well.

        Once again, I get it at scale, where you have so much maintenance to manage and want to keep it to a minimum, but for home use it seems to me that being on an LTS/stable update channel would have a much bigger impact than being on a lightweight distro.

        • SayCyberOnceMore@feddit.uk · 13 days ago

          No, even lighter weight - no containers.

          My NAS is mostly plain Arch packages, so I just upgrade and all is well. There’s no additional container software layer to maintain either.

          Btrfs management tools update with the OS, all is good.
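
          If anyone wants the same kind of mirror, the btrfs side is short (the device names are placeholders):

          ```bash
          # Mirror both data and metadata across two disks, then scrub now and then
          mkfs.btrfs -d raid1 -m raid1 /dev/sdX /dev/sdY
          mount /dev/sdX /mnt/nas
          btrfs scrub start /mnt/nas
          ```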

    • jobbies@lemmy.zip (OP) · 13 days ago

      hobbyists are way too into “lightweight and not bloated” as a value?

      Yeah, I get you. I suppose it’s about only installing what you need and knowing exactly what everything is for/does, as well as squeezing every last drop of resource from tired old hardware. But yeah, there is a usability trade-off.

      • atzanteol@sh.itjust.works · 13 days ago

        squeezing every last drop of resource from tired old hardware

        This is such a myth. 99% of the time your hardware is sitting there doing nothing, even when running “bloated” services.

        Nextcloud, for example, uses practically zero CPU and a few tens of MB of RAM when sitting around, yet people avoid it for “bloat”.

      • MudMan@fedia.io · 13 days ago

        I suppose it makes more sense the less you want to do and the older your hardware is. Even when repurposing old laptops and stuff like that, I find the smallest apps I’d want to run are orders of magnitude more costly than any OS overhead. This was even true that one time I got lazy and started running stuff on an older Windows machine without reinstalling the OS, so I’m guessing anything Linux-side would be fine.

  • Creat@discuss.tchncs.de · 13 days ago

    Unraid doesn’t provide anything I’m interested in, at all. Currently running TrueNAS for main storage and Proxmox for virtualization, both ZFS-based. If TrueNAS ever enshittifies, I’d run some bare-metal Linux with ZFS. My workstations also run ZFS as the file system, making backups trivial. VM snapshots and backups of any system are trivial and take seconds (including network transfers).

    I never understood why I’d even consider Unraid for anything.

    • Know_not_Scotty_does@lemmy.world · 13 days ago

      Setup was easy, it just works, it’s stable, and if you want the regular updates, just get the lifetime model on sale. I bought it because I didn’t want to spend time screwing with setup and just wanted to get my data moved and running.

  • Brunette6256@sh.itjust.works · 13 days ago

    I bought Unraid years ago… mostly because I was newly married, had just started my career, and wanted a solution that was already baked. It’s been great. I think it’s helped me understand Docker better, since I’d often want to run a container that’s not in their app store (yet).

    The problem I kept running into is that I wanted to check out and try everything, which often broke things, or something weird would happen… lol. So now I have two: one that’s “production” and another for checking things out.

    • jobbies@lemmy.zip (OP) · 13 days ago

      Ha nice. I actually like it when things break and I manage to fix them. That’s how you learn and finding the solution is satisfying.

  • GottaHaveFaith@fedia.io · 13 days ago

    I think most self-hosters already know/use Linux, so the management side is familiar. As for ease of use, if you manage services with docker it’s really easy to bring them up/down, and if you want a GUI there’s Portainer.
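
    For example (the Portainer line is more or less its stock CE install command):

    ```bash
    # Bring a compose-managed service up or down
    docker compose up -d
    docker compose down

    # Portainer as the GUI, talking to the same Docker socket
    docker volume create portainer_data
    docker run -d -p 9443:9443 --name portainer --restart=always \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -v portainer_data:/data \
      portainer/portainer-ce:latest
    ```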

  • sugar_in_your_tea@sh.itjust.works · 13 days ago

    Nah. I have everything in containers so maintenance is a non-issue, since I can upgrade the host separately from the containers. I’m using openSUSE Leap with a BTRFS mirror for the storage and I never have to think about it. I’ll probably move to openSUSE MicroOS when I get a new boot drive so I don’t have to do the release upgrade every other year.

  • anamethatisnt@sopuli.xyz · 13 days ago

    I’ve never tried Arch, but my Debian server with kvm/qemu/cockpit, running mdraid1 and smb/nfs file sharing, works well enough, and I enjoy the tinkering and setting it all up. I’m writing this from a virtual Fedora KDE workstation on which I’ve set up VFIO and PCIe passthrough of my dGPU and a USB controller (both connected to my monitor, which acts as a USB hub).
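
    (The mdraid1 bit, for anyone curious, is about three commands; the partitions are placeholders:)

    ```bash
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    mkfs.ext4 /dev/md0
    cat /proc/mdstat   # watch the initial resync
    ```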

    A friend runs a Proxmox VE Community Edition with physical disk passthrough to a virtual Nextcloud server and that seems to work well too.
    I guess my answer is no, I don’t look at Unraid and think “fuck this shit, I’m done”; I enjoy the tinkering that makes you frustrated.

    May I ask what kind of brick walls you’re hitting and what software you run on Arch that makes it so frustrating?

    • jobbies@lemmy.zip (OP) · 13 days ago

      I actually gave Debian a go, and I get the hype. Compared to Arch it’s a dream to set up and work with. Somewhere down the road I might go back to it.

      Proxmox looks great, but I think it’s overkill for what I need. I can run most things in Docker; I don’t really need virtualization. At some point in the future I’d like to try it, with TrueNAS virtualized on top to manage the NAS side of things.

      There’s not really a particular thing (or things) that’s insurmountable/unbearable with Arch; it’s more the overall experience. But I love it and hate it in equal measure, ha.

      • anamethatisnt@sopuli.xyz · 13 days ago

        What I like about running a hypervisor and true VMs is that I can fool around with some VMs on my server without risk of disrupting the others.
        I run most of my docker containers in one VM, my game servers in another, and the Jellyfin instance on a third. That allows me to fool around with my Portainer instance or game servers without disrupting Jellyfin, and so on.
        Part of it is that I’m more used to and comfortable managing VMs and their backup/recovery compared to LXCs and docker containers.

        • CmdrShepard49@sh.itjust.works · 13 days ago

          I’m running a similar setup (ZFS pool, Cockpit, two Portainer instances, and a few LXCs for Plex, Frigate, etc.) and it’s been great. Before building it early this year, I’d been running everything on Windows for the decade prior, because I was unfamiliar with Linux and struggled like OP when problems arose. After following a guide to get everything set up, though, it’s been rock solid, and if I screw anything up I can just load a backup. I’d also looked into TrueNAS and Unraid, but this gives me a more flexible setup without any extra cost, plus the ability to tinker without affecting anything else, like you said.

  • hamsda@feddit.org · 13 days ago

    To me it seems like:

    • you want to do a lot of stuff yourself on Arch
    • but there’s quite a lot of complicated stuff to learn and try

    I’d try Proxmox VE and, if you’re also searching for a Backup Server, Proxmox Backup Server.

    I recommend these because:

    • Proxmox VE is a hypervisor; you can just spin up Arch Linux VMs for every task you need
    • Proxmox VE and Proxmox BS are both open source
    • you can buy a license for “stable updates” (you get the same updates, but delayed, so problems are fixed before they reach you)
    • it includes snapshots, rollbacks, full backups, a firewall (which you can turn on or off per VM), …

    I personally run a Proxmox VE + Proxmox BS setup in 3 companies + my own homelab.

    It’s not magic: Proxmox VE is literally Debian 13 + qemu + kvm with a nice web UI. So you know the tech is proven; you just also get an easy-to-use interface instead of virsh console commands or virt-manager.
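
    To give a feel for it, this is roughly what the web UI does under the hood when you spin up one of those Arch VMs (the ID, sizes, and ISO name are placeholders):

    ```bash
    qm create 100 --name arch-vm --memory 4096 --cores 2 \
      --net0 virtio,bridge=vmbr0 \
      --scsi0 local-lvm:32 \
      --cdrom local:iso/archlinux.iso
    qm start 100
    ```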

    I personally like a stable infrastructure to test and run my important and experimental stuff on. That’s why I’m going with this instead of managing even the hypervisor myself with Arch.