I have a ZFS RAIDZ2 array made of 6x 2TB disks with power-on hours between 40,000 and 70,000. This is used just for data storage of photos and videos, not OS drives. Part of me is a bit concerned at those hours considering they’re a right old mix of desktop drives and old WD Reds. I keep them on 24/7 so they’re not too stressed in terms of power cycles, but they have in the past been through a few RAID5 rebuilds.

Considering swapping to 2x ‘refurbed’ 12TB enterprise drives and running ZFS RAIDZ1. So even though they’d have a decent amount of hours on them, they’d be better quality drives, and fewer disks means less chance of any one failing (I have good backups).
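To put rough numbers on that “fewer disks” intuition, here’s a minimal sketch using a binomial model. The 5% annual per-drive failure rate is a made-up assumption, and the model ignores rebuild windows, drive age, and correlated failures, so treat it as illustrative only:

```python
from math import comb

def loss_prob(n, parity, p):
    """P(data loss) for an n-disk array that tolerates `parity` drive
    failures, assuming independent per-drive failure probability p.
    Crude model: ignores rebuild windows and correlated failures."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(parity + 1, n + 1))

p = 0.05  # hypothetical 5% annual failure rate per drive

raidz2_6 = loss_prob(6, 2, p)  # current: 6 disks, survives 2 failures
raidz1_2 = loss_prob(2, 1, p)  # proposed: 2 disks, survives 1 failure

print(f"RAIDZ2 6x: {raidz2_6:.4f}  RAIDZ1 2x: {raidz1_2:.4f}")
# → RAIDZ2 6x: 0.0022  RAIDZ1 2x: 0.0025
```

Under this (very rough) model the two layouts come out in the same ballpark, so the real wins from the swap are drive quality, capacity, and cabling rather than a dramatic drop in loss probability.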

I don’t feel like staying with my current setup is worth it past the next drive failure, so may as well change over now before it happens?

Also, the 6x disks I have at the moment are really crammed into my case in a hideous way, so from an aesthetic POV (not that I can actually see inside the solid case in a rack in the garage), it’ll be nicer.

  • Jondar@lemmy.world · 6 hours ago

    I just went all refurbished on my new drives. Time will tell. Oldest one has about 8 months runtime on it.

    I went with 5x recertified Seagate Exos 20TB and one recertified IronWolf Pro 20TB.

    • thejml@lemm.ee · 5 hours ago

      Nice, we’ll all look out for an update in a year!

      I try to mix brands and lots (buy a few from one retailer and some from another). I used to work for a storage/NAS company, and we had many incidents where we’d fill a 12- or 24-drive RAID with drives straight from the same order and have multiple drives die within hours of each other, which usually isn’t enough time for replacement/resilvering.