Hello there lemmings! I've finally taken up the courage to buy a low-power mini PC to be my first home server (Ryzen 5500U, 16GB RAM, 512GB SSD; I already have a 6TB external HDD tho). I have basically no tangible experience with Debian or Fedora-based systems, since my daily drivers are Arch-based (although I'm planning to switch my laptop over to Fedora).
What are your experiences with Debian and Rocky as a homeserver OS?
Debian stable is a very solid choice for a server OS.
It depends on how you’re going to host your services though. Are you going to use containers (what kind), VMs, a mix of the two, install directly on the host system (and if so where do you plan to source the packages)?
I've kept my Debian system very basic: I installed the latest Docker from the official apt repo, and I run almost every service in a Docker container. The only things installed directly on the host are Docker, SSH, NFS and Avahi.
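In case it helps: "official apt repo" here means Docker's own repository, not Debian's. Setting it up boils down to dropping a source file like this (a sketch — the codename and architecture below assume Debian 12 on amd64, and you still need to fetch Docker's GPG key first; check Docker's install docs for the current steps):

```text
# /etc/apt/sources.list.d/docker.list (assumes Debian 12 "bookworm", amd64)
deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian bookworm stable
```

After that, `apt update` and `apt install docker-ce docker-compose-plugin` pulls in the engine plus the compose plugin.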
I'm going full container mode if possible, or I'll just build the Docker images myself.
- Jellyfin
- Onedrive alternative (probably Nextcloud)
- Personal website + its backend, or just the backend (might not host this tho, since it's a high security risk to my personal data)
- Pi-hole
- Probably other ideas that seem fun to host
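From the docs I've skimmed, Pi-hole in compose would look roughly like this (untested sketch on my part; the port mapping, timezone, volume paths and password are all placeholders — see Pi-hole's own docker docs for the recommended file):

```yaml
# docker-compose.yml — minimal Pi-hole sketch
services:
  pihole:
    image: pihole/pihole:latest
    ports:
      - "53:53/tcp"       # DNS
      - "53:53/udp"
      - "8080:80/tcp"     # web admin UI on host port 8080
    environment:
      TZ: "Europe/Budapest"        # placeholder timezone
      WEBPASSWORD: "changeme"      # placeholder admin password
    volumes:
      - ./etc-pihole:/etc/pihole           # persists settings across recreation
      - ./etc-dnsmasq.d:/etc/dnsmasq.d
    restart: unless-stopped
```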
Make sure you use a docker image that tracks the stable version of Jellyfin. The official image jellyfin/jellyfin tracks unstable. Not all plugins work with unstable, and switching to stable later is difficult. This trips up lots of people and locks them into unstable, because by the time they figure it out they've already customized their collection a lot.
The linuxserver/jellyfin image carries stable versions, but you have to go into the "Tags" tab and filter for "10." to find them (10.8.13, pushed 16 days ago, is the latest right now). To use that version you say "image: linuxserver/jellyfin:10.8.13" in your docker compose instead of "linuxserver/jellyfin:latest".
This approach has the added benefit of letting you control when you want to update Jellyfin, as opposed to :latest which will get updated whenever the container (re)starts if there’s a newer image available.
While upgrading your images constantly sounds good in theory, eventually you will see that new versions sometimes break (especially if they're tracking unstable). When that happens you will want to go back to a known-good version.
What I do is go look for tags every once in a while, and if there's a newer version I comment out the previous "image:" line and add one with the new version, then destroy the container and recreate it on the new version (the data survives because you configure it to live on a mounted volume, not inside the container). If there's any problem I can destroy it, switch back to the old version, and bring it up again.
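Concretely, the pin-and-rollback pattern in docker-compose.yml looks like this (the tags and volume paths here are just examples):

```yaml
services:
  jellyfin:
    # image: linuxserver/jellyfin:10.8.11   # previous known-good, kept for rollback
    image: linuxserver/jellyfin:10.8.13     # current pinned version
    volumes:
      - ./config:/config          # config/data live on the host, so they survive recreation
      - /mnt/media:/data/media:ro
```

After editing the tag, `docker compose pull && docker compose up -d` recreates the container on the new version; to roll back, swap which line is commented and run the same two commands.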
Oh, that explains the two images then, linuxserver and official Jellyfin. It was always kinda strange to me.
Luckily my uni hosted a Docker course and I binge-watched a beginner LinkedIn Learning course about it too, but I'm really grateful for your in-depth guide. Guys like you really make Lemmy the old Reddit we used to have and cherish in our hearts. :3
The official image jellyfin/jellyfin tracks unstable
Why did they make that choice? I am on this version right now and didn't know it was unstable. I found it very difficult to find information regarding the Docker images in general; it's a pity we don't have a few lines explaining what each one contains.
I thought the official jellyfin images on the versioned tags (like “10.8.13”) were stable - are they not?
Oh right, I filtered for "10." and got an unstable image and thought they didn't have them. Yeah, those are stable too.
Debian is a distro of few surprises and stable but slightly out of date packages. Their software repositories are vast and supported across pretty much every architecture you could think of running Linux on.
Meanwhile the world of RHEL has been turned upside down, with Red Hat essentially putting a paywall around their sources. Although Rocky currently promises to continue being bug-for-bug compatible with RHEL, it remains to be seen if they can continue to do so (in my opinion).
Yeah, that's one of the main reasons I'm interested in your experience. The sorta recent source lock is definitely shaky just in general, although I believe in Rocky's message that they won't have to roll their shutters down.
What surprised me with Debian is that it comes as a very minimal installation, so you will have to set up stuff like sudo yourself.
If you don’t set a root password, it’ll add your user created during the install to the sudo group.
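And if the installer didn't do that for you (e.g. because you did set a root password), adding it manually is quick (the user name "alice" is just a placeholder; run as root):

```shell
# install sudo and add your user to the sudo group
apt install sudo
usermod -aG sudo alice
# log out and back in so the new group membership takes effect
```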
That’s 23s of your life wasted, but how would you set it?
NOPASSWD? That's considered insecure by most experts. People do it as a convenience, but say rogue code gets run by your user while sudo is open like that… done, your system belongs to someone else now.
Hmmm interesting, so having no sudo is a security move then?
Sudo is fine, just use a good password. Anyone setting up NOPASSWD has given up on security, it’s not a thing in real practice.
That is a strict position some have, but that's not what I said. Editing /etc/sudoers and giving the sudo or wheel group users NOPASSWD access is what's insecure.
With that set, sudo chmod 1777 /tmp will not ask you for a password; it is like bypassing sudo entirely. If you open sudoers you will see what I'm saying. In Debian/Ubuntu it is the sudo group; in Arch/Void it is the wheel group.
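For reference, the difference in /etc/sudoers (always edit it via visudo) is one word:

```text
# Debian/Ubuntu default: members of group "sudo" can run anything, after entering their password
%sudo   ALL=(ALL:ALL) ALL
# the insecure convenience variant being discussed: never asks for a password
%sudo   ALL=(ALL:ALL) NOPASSWD: ALL
# (on Arch/Void the group is "wheel" instead: %wheel ALL=(ALL:ALL) ALL)
```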
As others have said, Debian is very minimal, so if you would prefer to set up and configure the whole system yourself, Debian is a good choice.
Personally, I prefer fedora server. It comes with more things configured out of the box (zram and sysctl configs for example) as well as better security defaults (selinux included with proper policies) and first class support for container infrastructure. Ultimately you could achieve a similar end result with debian, but for my homeservers I prefer to let the fedora team handle most of the system configuration for me.
I would be careful if they wanna use ZFS though. Fedora can be a bit quick on the kernels, meaning a kernel can come out that isn't yet supported by ZFS. The ZFS module then fails to build for the new kernel, so you lose ZFS on the next boot.
Almost happened to me tracking debian testing a while back.
New Gentoo Linux mascot is looking legit in this one art
Use Debian, make your life easier. Chances are the RHEL copies are going to get frozen out, but there will always be Debian, and it's the most community-supported mainline server distro anyway.
What would you like to do with your home server?
Ahh yeah, I forgot to mention that.
- Jellyfin
- Onedrive alternative (probably Nextcloud)
- Personal website + its backend, or just the backend
- Pi-hole
- Probably other ideas that seem fun to host
Hi! Here’s you, like 2 yrs down the road. I have no opinion on the server OS since I started with ubuntu server but my projects went a similar direction.
One major thing I'd recommend is thinking about security: web-facing servers with your private data on them are a very bad idea. So unless you mean a website for personal use, I'd split the "home server" and the "personal web server"/VPS in two, so the stuff you want others to use unsupervised stays separate from the stuff you use at home and from the road.
Another thought is bandwidth: unless you have insane upload, I'd stay away from web-facing stuff like websites, game servers and social media instances. That works better on a cheap VPS with gigabit bandwidth up and down. Way less hassle and fewer security issues.
I would do Truenas scale + portainer
Honestly yeah, that's the more productive option, but I want to learn to set things up myself.
I use both (and others) for different reasons. However, my primary homelab server is based on Debian: Proxmox. It runs on the machine hardware you have, but then you can run a few "fake" computers (virtual machines) on top of that host OS; this is called a hypervisor. So when running Proxmox on the host, you could run a virtual machine (guest) that is running Rocky and play around with that. Or Fedora, or Gentoo… or Arch. That really would be the avenue to go to learn about different distros and their nuances without having to break down and rebuild everything every time.
My experience is that both Debian and Rocky are stable and very useful for what you need them to do. Debian favors stability, whereas Rocky favors being a RHEL-compatible OS. It's easier to do some things on Debian, but you may learn more enterprise aspects using Rocky.
I run Debian on my server and while it’s sometimes annoying how old a lot of packages are, it’s ridiculously stable.
How annoying do you find the outdated packages?
Mostly not at all but sometimes I want to try some new features and that’s when it gets annoying. Right now, I’d like to try passing encoding capability from my APU to a VM I’m hosting but it requires Mesa 23 and Debian is on 22.
You can use the backports repository fairly easily. I did for the kernel and had no issues.
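For reference, enabling it is roughly this (assumes Debian 12 "bookworm"; the kernel package is just the example I used — check the backports docs for your release's codename):

```shell
# add the backports repo
echo 'deb http://deb.debian.org/debian bookworm-backports main' | sudo tee /etc/apt/sources.list.d/backports.list
sudo apt update
# backports are never installed by default; you opt in per package with -t
sudo apt install -t bookworm-backports linux-image-amd64
```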
Thanks for the tip but Mesa is not in the backports repo.
I was running CentOS then migrated to Rocky. It handles various VMs and containers great and has been trouble free for years. 10 core Haswell-era Xeon with 64 GB RAM and a lot of ZFS storage.
I moved from Arch to Fedora on my desktop/laptop as well. Really helps my mental state not having to keep up with different distro-specific knowledge between hosts.
Did you get bored of dealing with package dependencies and always relying on the AUR when you wanted to download some corpo software? I'm planning to take the Arch-to-Fedora pill too, tbf.
Somewhat, but it was driven more by the server-side decision. I wanted something that I could set and forget, that didn't have a ton of updates but prioritized stability/security patches.
Of course, speaking of packages, I do regularly use RPM Fusion and EPEL for the extra stuff the normal repos don't have, but I understand why.
Also being a heavy user of KVM, PCIe and GPU passthrough I found the experience easier and less likely to break between updates. A lot of Red Hat devs work on these subsystems so I assume it’s better QA’d.
I use Debian for everything; from games to servers! The best distro, by far!
My experience with Debian is good.
I have a home lab consisting of 9 mini PCs running Docker Swarm. They’re from various manufacturers, Intel, ASRock, Minisforum, etc. I originally tried to use Debian to build out the environment but it couldn’t find the network interfaces, or storage, or whatever else. So I made a Rocky 9 install drive and tried that. Every machine came up with all hardware recognized on the first try. So, that’s what I’ve been running for just about two years now. No complaints.
Good to hear that. How many containers do you run if you need 9 mini PCs for those?
I use three systems for manager nodes so they don’t get much work. Mostly Traefik and a few other administrative services. I have about 80 containers running on the six worker nodes.
I'm using Rocky on my main server at the moment. I was/am used to Debian-based operating systems, but wanted to learn the Red Hat world without dealing with Oracle directly.
It was definitely a steep curve getting to understand the OS, but I'm quite happy with the stability of Rocky, and it does everything I need and more. I think the real question is which you would get more enjoyment out of as far as learning goes, and personally I don't think the learning curve is as steep with Debian.
The best advice I can give is to back up your data regularly, and if you're not vibing or something breaks, don't be afraid to change to something different. Though as an Arch user, I'm sure you're used to things breaking.