• 1 Post
  • 12 Comments
Joined 2 years ago
Cake day: June 23rd, 2023


  • You mention Frigate specifically. Were you running it on this system when the drive failed, or is this a future endeavour?

    I bring this up because I also use Frigate, and for some time I was running with a misconfigured Docker Compose file that drove my SSD wearout to 40% in a matter of months.

    Make sure that the tmpfs is configured per the Frigate documentation and example config (see the compose sketch below). If it's misconfigured like mine was, all of that I/O lands on the disk instead. I believe the ramdisk is used for temporary storage of the camera streams until an event occurs and the corresponding clip is committed to disk.

    Good luck!
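
    In case it helps, this is roughly what the cache mount looks like in the example compose file from the Frigate docs. The image tag, paths and sizes here are illustrative only; check the current docs for your version:

    ```yaml
    services:
      frigate:
        image: ghcr.io/blakeblackshear/frigate:stable
        shm_size: "128mb"             # sized to camera count/resolution
        volumes:
          - ./config:/config
          - ./storage:/media/frigate  # recordings/clips are committed to disk here
          - type: tmpfs               # RAM-backed cache so stream segments don't hit the SSD
            target: /tmp/cache
            tmpfs:
              size: 1000000000        # ~1 GB; tune to your camera count
    ```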


  • Pulling around 200W on average.

    • 100W for the server: Xeon E3-1231v3 with 8 spinning disks + HBA, and a couple of SATA SSDs
    • ~80W for the UniFi Pro 48 PoE switch. Most of this is PoE power for half a dozen cameras, downstream switches and APs, and a couple of Raspberry Pis
    • ~20W for the Protectli Vault running OPNsense
    • Total usage measured via an Eaton UPS
    • Subsidised during the day with solar power (Enphase)
    • Tracked in Home Assistant (rough template sketch below)
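
    If it's useful, here's a minimal sketch of how the UPS reading can be netted against solar production in Home Assistant. The entity IDs (sensor.eaton_ups_output_power, sensor.enphase_power_production) are hypothetical placeholders, not my actual config:

    ```yaml
    # configuration.yaml - hypothetical template sensor; entity IDs are placeholders
    template:
      - sensor:
          - name: "Rack power from grid"
            unit_of_measurement: "W"
            device_class: power
            state_class: measurement
            state: >
              {{ [states('sensor.eaton_ups_output_power') | float(0)
                  - states('sensor.enphase_power_production') | float(0), 0] | max }}
    ```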


  • thumdinger@lemmy.world to Selfhosted@lemmy.world · “Help me decide?!” (edited, 18 days ago)

    For storage redundancy, RAID 5 is not recommended, particularly as you get to high-capacity drives (think >8TB). I think the rating to consider is the URE rate (unrecoverable read error, usually 1 in 10^14 bits read).

    Once a drive inevitably fails, you are forced to resilver the array to avoid data loss. During the resilver the healthy disks run at 100%, reading every bit of data they hold to complete the parity calculation and determine what data is missing. At high capacities the total number of bits read exceeds the URE rating, so encountering a URE on another drive is a near certainty. As a result the resilver would fail and the array would be lost.
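
    To put rough numbers on it (illustrative only): rebuilding a failed drive in a 4x 10TB RAID 5 means reading ~30TB ≈ 2.4x10^14 bits from the surviving disks. At a 1-in-10^14 URE rate that's an expected ~2.4 read errors during the rebuild, so hitting at least one is close to certain: roughly 1 - (1 - 10^-14)^(2.4x10^14) ≈ 1 - e^-2.4 ≈ 91%, if errors were independent.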

    Go with RAID 6 as a minimum (two-drive redundancy), although a popular option now (and the layout I use) is mirrored vdevs.

    Edit: Consider TrueNAS for NAS software. I have been using it for 10 years and it is absolutely rock solid. 25TB usable storage across 4x mirrored vdevs. I run it as a VM inside Proxmox with 4 logical cores on a 10-year-old Xeon and 16GB of RAM for the VM (I run ECC as was recommended at the time, but whether it’s still considered necessary I’m not certain).

    I would also recommend getting an LSI HBA (host bus adapter) like the 9207-8i flashed to IT mode (it must not be in RAID mode; let TrueNAS manage the disks directly). This simplifies passing all of the disks through to the VM.




  • Thanks, I’ll need to have a look at how the chipset link works, and how the southbridge combines incoming PCIe lanes to reduce the number of connections from the 24 in my example down to the 4 available. Even so, given these devices are typically PCIe 3.0, running them at full spec could swamp the link with roughly 3x the data it has bandwidth for (24 lanes of PCIe 3.0 is about 23.64GB/s, vs 4 lanes of PCIe 4.0 at about 7.88GB/s).
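
    For anyone checking the arithmetic (approximate per-lane throughput after encoding overhead): PCIe 3.0 is roughly 0.985GB/s per lane and PCIe 4.0 roughly 1.97GB/s per lane, so 24 x 0.985 ≈ 23.6GB/s of downstream demand against 4 x 1.97 ≈ 7.9GB/s of chipset link, about a 3:1 oversubscription.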




  • I hadn’t considered AMD, mostly because of the high praise I’m seeing around the web for QuickSync, and because AMD falls behind both Intel and NVIDIA in hardware-accelerated transcoding. I will certainly consider it if there’s no viable option with QS anyway.

    And you’re right, the southbridge provides additional PCIe connectivity (on both AMD and Intel), but bandwidth has to be considered. Connecting an HBA (x8), 2x M.2 SSDs (x8), and a 10Gb NIC (x8) over the same x4 link for something like a TrueNAS VM (ignoring other VM I/O requirements), you’re going to be hitting the NIC and the HBA and/or SSDs (think ZFS cache/logging) at max simultaneously, saturating the link and creating a significant bottleneck, no?
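
    As a back-of-the-envelope illustration (ballpark figures only): a 10Gb NIC tops out around 1.25GB/s, eight spinning disks behind the HBA might stream roughly 1.5-2GB/s combined, and a single PCIe 3.0 NVMe drive can read around 3-3.5GB/s. Two NVMe drives plus the NIC alone already exceed the ~7.9GB/s a PCIe 4.0 x4 chipset link provides, before the HBA does anything.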


  • Thanks. I’ll be the first to admit a lack of knowledge with respect to CPU architecture - very interesting. I think you’ve answered my question - I can’t have QuickSync AND lanes.

    Given I can’t have both, I suppose the question pivots to a comparison of performance-per-watt and number of simultaneous streams for an iGPU with QuickSync vs. a discrete GPU (likely either NVIDIA or Intel Arc), considering a dGPU will increase power usage by 200W+ under load (27c/kWh here). Strong chance I am mistaken though, and have misunderstood QuickSync’s impressive capabilities. I will keep reading.
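
    For a rough sense of the running cost (assuming, purely for illustration, a sustained 200W of extra draw): 200W x 24h ≈ 4.8kWh/day, which at 27c/kWh is about $1.30/day, or roughly $470/year. Real transcoding loads are intermittent, so the actual cost would be well below that.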

    I think the additional lanes are of greater value for future-proofing. I can just lean on the CPU without hardware acceleration. Thanks again!