I dunno when it happened, but I swear SBCs were the new best thing in the universe for a while, and everyone was building cool little servers with their RockPis and OrangePis.

Now it’s all gone x86 and Proxmox with everyone shitting on Arm. What happened? What gives?

Is my small army of xPis pointless? What about my two EdgeRouters?

I’ve got about 6 xPis scattered round my flat - is there anything worth doing with them or should I just bin them?

All thoughts, feelings and information welcome. Thank you.

  • Handles@leminal.space

    So SBCs are shit now?

    Nothing changed; the hardware is the same as before. Your little Pi servers are still doing the exact same work they did before. The only variables are the prices of SBCs vs used small form factor x86 boxes, and the short, short attention span of terminally online hobbyists.

    Use whatever you like, no need to race after others’ subjective (and often hyperbolic) judgment.

    • Bizarroland@kbin.social

      Very much this. The allure of Raspberry Pis was that they were $30 toys that could actually do things equivalent to much more expensive computers and computer control systems.

      Somewhere along the way they lost the plot, probably when supply chain issues drove their prices sky high along with the compute modules being used for home lab servers. Now cheap knockoffs based on Rockchip silicon or the ESP32 are just as capable as Raspberry Pis for a fraction of the cost, and at the same time actual desktop computers in miniature form factors have become so cheap on the second-hand market that they are incredibly competitive with the Raspberry Pi.

      Don’t get me wrong, the Pi is a great platform. But the use cases in which it leads the pack have become incredibly narrow.

      Actually I can’t think of anything that raspberry pi does that can’t be done better by a less expensive alternative.

      Even the Pi 5 with the NVMe HAT is not currently price competitive with a 4-year-old HP ultra small form factor, as far as I know.

      • Valmond@lemmy.mindoki.com

        Yeah, make a Pi with 1GB RAM, video & ethernet for like 20-30€ and you’d ruin me.

        I know about the Banana, Orange, whatever-Pis, but in my experience they always needed lots of extra stuff to work (like fixing and recompiling libraries). The Pi “just worked” IMO.

      • aard@kyu.de

        Actually I can’t think of anything that raspberry pi does that can’t be done better by a less expensive alternative.

        That was true even before the price increase. What still makes me use Pis now and then is that so many people are familiar with them, the standardized form factor with lots of extension modules, and the software support - pretty much any software targeting this kind of use has been tested on Pi variants.

        Nowadays I’d go for the compute modules, though - they’re smaller, and you can get them with onboard flash, eliminating the SD card problem many Pis had. You can get carrier boards for the compute modules in the classic Pi form factor, so you can have the best of both worlds.

        • Valmond@lemmy.mindoki.com

          What are the benefits of compute modules, besides the SD card? Don’t they need hardware support to work?

          • aard@kyu.de

            A small form factor and a small, high-density connector. Most interfaces are not populated as they are on the regular Pis, but just led out via the connector, so you can decide what to expose on your compute module carrier. It has a gigabit ethernet chip on board, plus PCIe - the RPi4 also has PCIe, but there it is hooked up to USB3. With the compute module you can decide what to do with it.

      • const_void@lemmy.ml

        price competitive with a 4-year-old HP ultra small form factor

        What’s the model number for that?

  • constantokra@lemmy.one

    People are shitting on them because the price point for ARM SBCs has risen, while the price point for small x86 computers has come down. Also, x86 availability is high while ARM SBC availability has become unreliable. They generally aren’t supported nearly as well, either. If you don’t need more power and you already have them on hand, there’s no reason not to use them.

    • TrickDacy@lemmy.world

      I’m curious, what’s an example of a mini x86 machine comparable to a Raspberry Pi? I just did some research and ended up buying an RPi 5. I may not have known what to look for, but what I found in the x86 space was $200+ and seemed pretty underwhelming compared to an $80 ARM SBC.

      • FailBait@lemmy.world

        In 2022, when Pi 4s were going for $150-200, I managed to get a 7th-gen NUC for about $150. I was looking to start with Home Assistant, so both were viable options, but with even the Pi 5 coming close to $100 retail, spending 50% more gets you a lot more performance: a 7th-gen Intel i5/i7 mobile chip, 16GB of RAM and a 256GB NVMe.

      • constantokra@lemmy.one

        You’d be looking at used mini PCs. I’ve heard really good things about Lenovo. It’s not necessarily exactly comparable in price, but the reason people are souring on ARM SBCs, and especially Pis, is that a more powerful Lenovo costs only a little more, and there are never any supply issues.

      • Grippler@feddit.dk

        I bought an old Intel NUC with a 2.x GHz i3, 8GB RAM and a 120GB NVMe used for $65, then upgraded it to 16GB of RAM and a 1TB NVMe for another $50. I run everything from that in either VMs or LXCs (HA, Jellyfin, NAS, CCTV, Pi-hole) and it draws about 10W.

    • Toribor@corndog.social

      This exactly. If you already have Pis, they are still great. Back when they were $35 it was a pretty good value proposition, with none of the power or space requirements of a full-size x86 PC. But at $80-$100 it’s really only worth it if you actually need something small, or if you plan to actually use the GPIO pins for a project.

      If you’re just hosting software a several year old used desktop will outperform it significantly and cost about the same.

        • Toribor@corndog.social

          True. I did some rough math when I needed to right-size a UPS for my home server rack and estimated that running a Pi 4 for a year would cost me about $8 worth of electricity, while running an x86 desktop would cost me about $40. Not insignificant if you’re not going to use the extra performance an x86 PC can offer.
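
          (A rough version of that math in Python; the wattages and rate are assumed round numbers, not my measurements:)

              HOURS_PER_YEAR = 24 * 365

              def yearly_cost(watts, usd_per_kwh=0.15):
                  # kWh drawn over a year of 24/7 operation, times the rate
                  return watts / 1000 * HOURS_PER_YEAR * usd_per_kwh

              print(f"Pi 4 (~6 W):  ${yearly_cost(6):.2f}/yr")    # ~$7.88
              print(f"x86 (~30 W):  ${yearly_cost(30):.2f}/yr")   # ~$39.42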

      • Altima NEO@lemmy.zip

        And then there’s still all the extra crap it needs to work, if you don’t already have it: power supply, adapters, storage, case, HATs, etc.

  • MigratingtoLemmy@lemmy.world

    The only reason SBCs were ever relevant is their excellent pricing, which has now been matched by used x86 computers. That, and if the SBC had an open-source design/implementation (open schematics on RISC-V).

      • Djtecha@lemm.ee

        Low power too. I replaced an x86 server with 3 Pis in a k8s setup for about half the wattage.

  • phanto@lemmy.ca

    I have an x86 Proxmox setup. I stuck a Kill A Watt on it. Keep your Pi setup if it does what you want, and realize that there’s someone out there who is jealous of your power bill.

    • BearOfaTime@lemm.ee

      How bad is it?

      My current file server, an old gaming rig, consumes 100W at idle.

      I’m considering a TrueNAS box running either 2.5" SSDs or NVMe sticks (my storage target is under 8TB, and that includes 3 years of projected growth).

      • stevehobbes@lemy.lol

        Go tweak your power and fan settings. 100W at idle is way too much unless it’s 15 years old.

        Fans, especially small ones, are very sneaky energy hogs. Turn them waaay down.

        • nezbyte@lemmy.world

          Depends on what your server is running. Multiple GPUs, HDDs, and other fun items quickly add up to well over 100W. I justify it by using the heat to keep my 3D printer filament dry.

          • stevehobbes@lemy.lol

            If you have multiple GPUs in your home server you’re probably doing it wrong. But even then, at idle with no displays connected, the draw will be surprisingly low.

            Most systems with some SSD/NVMe, 2-4 DIMMs and maybe a drive or two should idle closer to 50-60W.

            • nezbyte@lemmy.world

              Agreed, don’t do what I do if you value your power bill. To be fair, my network switch pulls more power than my cobbled together server anyhow.

            • ddh@lemmy.sdf.org

              If you’re getting two gaming PCs out of one hypervisor, you might be doing it right.

        • fuckwit_mcbumcrumble@lemmy.world

          Newer CPUs tend to use a good chunk more power under low loads than some older ones. Going from 1st-gen Ryzen to 2nd-gen got me about 20 watts higher total system power draw with my use case, and 3rd-gen is even worse.

          Intel is MUCH worse at this than AMD, but every generation AMD keeps cranking up the boost clocks and power draw, and it really can make a difference at low to mid-range loads.

          My Ryzen 3000-based system uses about 90 watts at “idle” with all my stuff running and the hard drives spinning.

          • stevehobbes@lemy.lol

            It’s probably more about aggressive default BIOS settings. Tweak your C-states / BIOS overclocking / PCIe power management / Windows power management features. Idle power has gone down on most chips.

            A Ryzen 3000 should truly idle closer to 20-30W.

            • fuckwit_mcbumcrumble@lemmy.world

              That is after tweaking BIOS settings. Originally I was at around 100 watts; now I’m closer to 80.

              Keep in mind that’s with a bunch of hard drives, and it’s not 100% idle - more like 90% idle, which is where modern “race to idle” CPUs struggle the most.

        • BearOfaTime@lemm.ee

          Nothing to be done. It’s old. The only fan to adjust is the CPU’s, and I can tell when the cooler is getting dirty because the fan stays at higher speeds.

          Otherwise there’s one large, low-RPM fan in the case, always on low speed.

      • helenslunch@feddit.nl

        How bad is it? My current file server, an old gaming rig, consumes 100W at idle.

        That’s very bad haha. Most home servers for personal use draw 7-10W.

        Although you’ll have to do the math with your local energy prices to determine how important that is. It’s probably not.

          • saiarcot895@programming.dev

            $1/day? At 100W average power usage, that’s 2.4 kWh per day, suggesting that where you live the price is 41.67 cents per kWh, roughly double that of California.

            Is electricity that expensive where you live?

            Edit: it’s been a while since I lived in the Bay Area; I hadn’t realized that electricity prices now range from 38-62 cents per kWh, depending on rate plan and time of day.

      • krash@lemmy.ml

        Holy crap! I have an N100 SFF box that consumes 5-6W at idle (with WiFi on), and an old i5 (6th gen, I think) that consumes 30W at idle. Your rig is definitely not meant to act as a server (unless you want to mine bitcoins or run BOINC…).

        • BearOfaTime@lemm.ee

          Lol, yea, it’s old, was built for performance, and hasn’t run right in a while.

          I’m looking to set up a NAS and turn that thing off.

      • CumBroth@discuss.tchncs.de

        That’s so much! With current energy prices, this would cost me €236.52 a year. My current rate is €0.27/kWh. Calculation:

        (100 W / 1000 W/kW) * 24 hours/day = 2.4 kWh/day

        2.4 kWh/day * 365 days/year = 876 kWh/year

        876 kWh/year * 0.27 Euros/kWh = 236.52 Euros/year

        That’s more than what I pay for powering my AC an entire summer.

    • chunkystyles@sopuli.xyz

      My x86 Proxmox box consumes about 0.3 kWh a day at around 15% average load. I’ve only had the Kill A Watt on it for a day, so I don’t know how accurate that is, but it shouldn’t be too far off.

  • JackbyDev@programming.dev

    I don’t understand this post. Whatever you bought them for, they’re still good for. People’s opinions don’t make them less useful.

    • R0cket_M00se@lemmy.world

      Sir, this is Lemmy. People treat the applications and hardware you use as matters of ethical alignment, and switching to FOSS gets approval on the level of religious conversion.

      It’s no wonder people around here care so much about random people’s opinions; the place practically filters for it.

  • TCB13@lemmy.world

    What happened is that people realized what I’ve been saying all along - that the RPi and others are a money grab because of all the required accessories, while a MiniPC gets you way more power, stable hardware, a case, a power supply and everything in between for the same price (if you go second hand). Here are examples of such posts: https://lemmy.world/comment/5357961 , https://lemmy.world/comment/4696545

    E.g. for 100€ you can find an HP Mini with an 8th-gen i5 + 16GB of RAM + a 256GB NVMe that obviously has a case and a LOT of I/O, has PCIe (M.2), comes with a power adapter, and outperforms the RPi5 in all possible ways. Meanwhile the 8GB RPi5 will cost you 80€ + case + power adapter + cable + bullshit adapter + SD card + whatever other money grab - the Pi just isn’t a good option.

    Either way, Pis have their use cases, but in my opinion it was an overhyped product that sits in the middle of the market:

    • They tried to make the Arduino easy by adding an operating system and high-level programming languages such as Python. It never made much sense - why would you want GPIOs directly on a “computer”? Not reasonable at all. Nowadays we’re seeing a rise of ESP32 devices that have 30-40 GPIOs and WiFi for $2 each: cheap, easy to develop for and deploy, and eating away at the Pi’s market (see the sketch after this list);
    • Another typical use case for a Pi is a low-power server, but while that’s great in theory, it lacks the CPU performance required for the container-based absurdities people want to run, and the I/O sucks. USB was never a good way to connect storage, let alone the shared USB/network bus we had in the past. The new PCIe is questionable (look at the NanoPi M4v2 from 2018) and requires… more adapters;
    • Price-wise it doesn’t make much sense either, because a second-hand x86 will be 10x faster at the same price point… and way more stable, with more expansion.
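
    To give a sense of how little is needed for the GPIO use case, here is a minimal MicroPython sketch for an ESP32 (the pins and threshold are illustrative assumptions, not anything from this thread):

        # Toggle a relay on GPIO 26 whenever an analog sensor on GPIO 34
        # crosses a threshold; pin choices and threshold are assumptions.
        from machine import Pin, ADC
        import time

        sensor = ADC(Pin(34))        # 12-bit ADC input, reads 0..4095
        sensor.atten(ADC.ATTN_11DB)  # use the full 0-3.3V input range
        relay = Pin(26, Pin.OUT)

        while True:
            relay.value(1 if sensor.read() > 2048 else 0)
            time.sleep(0.5)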

    Now it’s all gone x86 and Proxmox

    Proxmox isn’t a new thing; in fact it’s a pile of crap and questionable open source that people still run because they haven’t discovered LXC/LXD yet. Read more here: https://lemmy.world/comment/6507871. FYI, you can run LXD on your Pis and get both containers and virtual machines with it, the same way the Proxmox people do on x86.

    The irony of this comment is that people will shit on me about replacing Proxmox with LXD in the same way they used to when I said that Pis were a money grab and x86 MiniPCs were way better.

    • akrot@lemmy.world

      The main issue with mini/used PCs is power efficiency. Idle draw is just wasted wattage, and performance per watt is very bad, especially at idle.

    • jkrtn@lemmy.ml

      Do you think the used server market is worth the cost? It looks like I could have a giant chunk of DDR3 for not so much.

      • TCB13@lemmy.world

        I don’t (especially DDR3-era stuff), because old server hardware is way more expensive, won’t give you any particular advantage, and, compared to new stuff, will use a LOT of power.

        Instead, use regular desktop/laptop machines, as they’ll probably be more than enough for a homelab. You can get a good 9th/10th-gen Intel CPU and motherboard that is perfect for running servers (very high performance) but that people don’t want because it isn’t good for playing the latest games. Modern hardware = less power consumption, cheaper, more performance.

        If you go really low end, say an i5-6500, it will probably cost around 80€ second hand with RAM. You can use https://www.cpubenchmark.net/compare/ to compare the server hardware you could get against modern hardware, if you’re interested.

        Most DDR3-era server hardware comes with RAID controllers/cards and other things nobody uses anymore; people have moved on to software RAID, be it BTRFS or ZFS, and you’ll want to do the same. Servers make a lot of noise - impractical for a home - and a CPU from that era will draw around 150-200W, while a recent i5 with more performance runs around 50W.

        Another thing to consider: if you’re trying to build a NAS, get a basic motherboard with 4 SATA ports and add a PCIe card with 5 more SATA ports - it will be much cheaper than any of that server hardware. Use BTRFS as your filesystem, with its RAID if needed. Now you may be thinking something like “I want a faster CPU in order to have fast SMB”, but just don’t: your gigabit network will saturate before an i5-6500 or any mechanical drive does, and when that happens you’ll be at something like 10-20% CPU usage. Don’t waste your money.
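
        A rough sanity check of that claim in Python (the disk figure is an assumed typical value for a 7200rpm drive):

            # Gigabit Ethernet vs. a single mechanical drive, back of the envelope.
            wire_mb_s = 1_000_000_000 / 8 / 1e6   # 1 Gbit/s -> 125 MB/s line rate
            usable_mb_s = wire_mb_s * 0.94        # minus rough TCP/SMB overhead
            hdd_mb_s = 180                        # assumed sequential read speed

            print(f"network ceiling ~{usable_mb_s:.0f} MB/s, disk ~{hdd_mb_s} MB/s")
            # -> network ceiling ~118 MB/s, disk ~180 MB/s: the NIC saturates first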

        • jkrtn@lemmy.ml

          Thank you, I really appreciate your advice. I was just struggling to install Proxmox on a new machine, and you made me take a step back. The kernel is messed up - do I really want this? Why am I jumping through hoops when Debian installs with zero issues? I’ll be trying the container software you mentioned instead.

          • 1371113@lemmy.world

            I’ve done the same thing the person you replied to is suggesting for around 10 years now. It works very well for a home user because parts etc. are readily available. Most hypervisors will run on x86/amd64 hardware without issue. Check out something other than Proxmox; LXC is one suggestion. If you’re going to stick with Debian, look into Samba with BIND to ensure ease of sharing and cross-platform integration.

            Another reason not to get an old server is power, noise and thermals. They’re designed to live in an air-conditioned room. Anyone who has worked in server rooms for any length of time will tell you to wear ear protection.

    • chunkystyles@sopuli.xyz

      people will shit on me about replacing Proxmox with LXD

      From reading your comments I understand why. It’s in your delivery: you’re abrasive and you don’t explain why. You’re also telling people not to use something they know, to use something they don’t know, without explaining how that would be beneficial. As far as I can see, you’ve only explained how LXD, when set up correctly, can do what Proxmox does.

      You’re essentially telling people to use something that is at best a side grade for reasons, and being salty about it.

      • TCB13@lemmy.world

        Ahaha I don’t explain why 😂😂

        I wrote dozens of posts replying to every single question people had about LXD/Incus. I gave out screenshots, explained how it works and what it does, described useful features, and pointed out multiple issues with Proxmox. I can show you what roads you can take and why, but you must do the work yourself.

        The same applies to the MiniPC vs Raspberry discussion: my price, performance and feature breakdowns proved countless times that for a large number of use cases a MiniPC is better. Unsurprisingly, this is the first of those breakdowns to get upvotes, and do you know why? Because a known YouTuber in this space recently came out with a video saying the exact same things I’ve been saying, and now it has become “acceptable” to criticize the Raspberry Pi money grab.

        to use something they don’t know, without explaining how that would be beneficial

        you’ve only explained how LXD, when set up correctly, can do what Proxmox does

        Even if that were true, what’s the issue then? Isn’t it obvious that a truly open-source solution, available in Debian’s repos from a fresh install, is better than a half-proprietary solution that asks you to buy a license at every turn? Use your common sense.

        Besides, my comments aren’t a marketing campaign. There’s no “LXD will make you rich today and solve all your family drama as soon as you complete our three-step formula”:

        1. apt install lxd
        2. lxd init
        3. lxc launch images:debian/12 debian-container

        The advantages of using LXD/Incus are in the details, not in one flashy, shiny feature. It’s about running a clean Debian system with a kernel that isn’t twisted and mangled until it conflicts with everything and won’t run stuff like OVPN properly; it’s about the license, the tools, not depending on a company, not waiting 3x as long for your cluster to come online. It’s about having a decent API for once, and so many other things.
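
        (For a taste of that API, a minimal sketch using the pylxd Python client; the image alias and remote here are assumptions, adjust to your setup:)

            # Create and start a Debian container over the LXD REST API.
            from pylxd import Client

            client = Client()  # talks to the local LXD unix socket

            config = {
                "name": "debian-container",
                "source": {
                    "type": "image",
                    "mode": "pull",
                    "protocol": "simplestreams",
                    "server": "https://images.linuxcontainers.org",
                    "alias": "debian/12",   # assumed alias on that remote
                },
            }

            container = client.containers.create(config, wait=True)
            container.start(wait=True)
            print(container.name, container.status)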

        Most people say they don’t want to be put in the same situation they were with the CentOS/RedHat licensing change, but then they proceed to replace CentOS with Ubuntu and still use Proxmox - all questionable open source that is as likely to fuck you over as RedHat did.

        So eventually there will be a video from some YouTuber stating that LXD/Incus is much better than Proxmox, and people will flock to it without questioning anything. :)

  • tburkhol@lemmy.world

    Pi 4s were hard to get there for a while. Pi 5s are expensive. Lots of other SBCs are also expensive - as in, not all that much cheaper than a 2-3 generations old low-end x86. That makes them less attractive for special-purpose computing, especially among people who have a lot of old hardware lying around.

    Any desktop from the last decade can easily host multiple single-household computer services, and it’s easier to maintain just one box than a half dozen SBCs, with a half dozen power supplies, a half dozen network connections, etc. Selfhosters often have a ‘real’ computer running 24/7 for video transcoding or something, so hosting a bunch of minimal-use services on it doesn’t even increase the electric bill.

    For me, the most interesting aspect of those SBCs was GPIO and access to raw sensor data. In the last few years, ‘smart home’ technology seems to have really exploded, to where many of the sensors I was interested in 10 years ago are now available with zigbee, bluetooth or even wifi connectivity, so you don’t need that GPIO anymore. There are still some specific control applications where, for me, Pi’s make sense, but I’m more likely to migrate towards Pi-0 than Pi-5.

    SBCs were also an attractive solution for media/home theater displays, as clients for plex/jellyfin/mythtv servers, but modern smart-TVs seem mostly to have built-in clients for most of those. Personally, I’m still happy with kodi running on a pi-4 and a 15 year old dumb TV.

    • brygphilomena@lemmy.world

      This is how I feel.

      I would much rather have a single machine running VMs, which I can easily snapshot and back up, than a dozen small machines with a dozen power supplies and network connections to deal with.

      SBCs have specific use cases, usually where they need to interact with hardware. That’s what made the RPi so great, with its GPIO and HATs. But that’s a rather small use case.

      • BCsven@lemmy.ca

        I have a Pi 4 with OpenMediaVault for SMB shares and serving videos to the TV; it has Docker and Portainer add-ons, so that single Pi runs CUPS, Trilium Notes, Paperless-ng, Home Assistant, Kanboard, a pdftk converter and Syncthing. It could run more - I just ran out of applications I might need. No issues with performance.

    • JustUseMint@lemmy.world

      My Pi 4 8GB is awful as a Jellyfin client - am I doing something wrong? Pi OS, just using Firefox to watch. CPU/GPU were maxed out, RAM usage only around 1GB.

      • tburkhol@lemmy.world

        My guess is Firefox. I’m using Kodi (OSMC/LibreELEC) and it coasts along at 1080p, with plenty of spare CPU to run Pi-hole and some environmental monitors. I haven’t tried anything 4K, but supposedly the Pi 4 offloads that to hardware decoding and handles it just fine, as long as the codec is supported.

      • JASN_DE@lemmy.world

        Which codecs do you have in your library? Also which resolution/bitrate?

        Also, have a look at Kodi as a client.

    • loki@lemmy.ml

      man reads a few comments on the internet.

      man takes them literally.

      Anxiety sets in.

      ㄟ(ツ)ㄏ

  • BearOfaTime@lemm.ee

    2-8 watts of power for a Pi vs 9-150 watts for an x86 system. There are definitely use cases.

    I use a Pi for DHCP, DNS with Pi-hole, a Tailscale subnet router, a Rustdesk server, Vaultwarden, Syncthing (it connects to local device shares, rather than running ST on each device), ArchiveBox, and I’m working on instant messaging (maybe SimpleX, not sure yet). It’s kind of maxed out.

    But all this runs under 8 watts (actually it’s so low my smart switch doesn’t even register the consumption).

    • arglebargle@lemm.ee

      Uh, my server is x86, is fanless, and the CPU idles at 9 watts and maxes at 12. It’s much faster than my Pi and has QuickSync.

      I run Plex, Jellyfin, SMB shares, Mealie, Tailscale and rerouting, notes, and books.

      I like my Pi, but the performance-per-watt gap isn’t as drastic with x86 if you build for it. Did I mention it’s also fanless? Passive cooling that just works on the CPU.

      • BearOfaTime@lemm.ee

        Nice!

        Yea, I’ve been eyeing a box like that, looks like it could be useful.

        Yep, it’s all tradeoffs; gotta know what you’re shooting for. My Pi cost $5, I’m using an old phone charger (I have many) and an old microSD. If anything fails, I just grab another from the junk box.

        All I know with my current use case is that I can’t measure its power consumption with the tools I use. I imagine that means under 5W of draw (not really sure what my meter is capable of measuring).

  • ikidd@lemmy.world

    I’m just going to say it: I shit on them all along. ARM is relatively expensive and bespoke, and difficult to compile for because of that. Anyone can puke out a binary for amd64 that works everywhere, and it’s way, way faster than some sad little SoC. Especially weird was spending $1000 on a clusterboard with CMs that had half the power of a 5-year-old x86 SFF desktop you could pick up for $75 and attach some actual storage to.

    Maybe RISC-V will change all that, but I doubt it. Sure hope so though. The price factor has already leaned the right way to make it worthwhile.

    • RBG@discuss.tchncs.de

      Out of interest, from someone with an RPi4 and Immich: did you deactivate the machine learning? I did, since I was worried it would be too much for the Pi. Just curious to hear whether it’s doable after all.

      • spez_@lemmy.world

        I didn’t deactivate the machine learning. It’s definitely doable

      • owenfromcanada@lemmy.world

        Not sure what kind of tinker board you’re working with, but the power of Pis has increased dramatically across generations. Tasks that would run slowly on a dedicated Pi 2 run easily on a Pi 4 alongside a half dozen other things.

        The older ones can still be useful, just for less intensive tasks.

  • helenslunch@feddit.nl

    SBCs (specifically RPis) got more expensive. x86 got more powerful, more importantly more efficient, and cheaper. x86 also has more software built for it than ARM.

    There are a few x86 SBCs now, though.

    If you already have SBCs and they’re doing what you need, I see no reason to switch.

  • ninjan@lemmy.mildgrim.com

    A lot of stuff runs great on SBCs; they’re just not as smooth to manage as a Proxmox server running containers or VMs. You also need several SBCs to reach the scale of what many do here on selfhosted, and once you reach 4+ SBCs the old x86 server starts looking cost-effective all of a sudden. The biggest benefit, though, is the lack of noise and very low power consumption, which is great for stuff that will be powered on 24/7/365.

    Really a mix is ideal, so you can get the benefits of cheap running costs of SBCs and the power and versatility of x86 for the tasks that require it.

  • DeltaTangoLima@reddrefuge.com

    It’s about fitness for purpose, IMO.

    I recently migrated most of my homelab to Proxmox running on a pair of x86 boxes. I did it because I was cutting the streaming cord, and wanted to build a beefy Plex capability for myself. I also wanted to virtualise my router/firewall with OPNsense.

    Once I mastered Proxmox, and truly came to appreciate both the clean separation of services and the rapid prototyping capability it gave me, I migrated a lot of my homelab over.

    But, I still use RasPis for a few purposes: Frigate server, second Pi-hole instance, backup Wireguard server. I even have one dedicated to hosting temperature sensors, reed switches, and webcams for our pet lizard’s enclosure.

    Each has their place for me.

    • dave@hal9000@lemmy.world

      Same feeling, except that rather than a lizard enclosure, I’m waiting to see how long that Pi will last in the heat and dust of a chicken coop while serving the sole purpose of a “do we have eggs?” and/or “WTF happened / WTF did the chickens do?” web stream.