I posted a few days ago asking how to set up my storage for Proxmox on my Lenovo M90q, which I had since settled. Or so I thought. The Lenovo has space for two NVMe drives and one SATA SSD.

There seems to be a general consensus that you shouldn’t use consumer SSDs (even NAS SSDs like the WD Red) for ZFS, since ZFS produces a lot of writes, which in turn wear out the SSD quickly.

The information out there is conflicting: some say it’s fine and you’ll only see a few GB of writes per day, while others warn of several TB of writes per day.

I plan on using Proxmox as a hypervisor for homelab use, with one or two VMs running Docker, Nextcloud, Jellyfin, an Arr-Stack, TubeArchivist, PiHole and such. Static data (files, videos, music) will not be stored on ZFS, just the VM images themselves.

I did some research and found a few SSDs with good write endurance (see the table below; a rough command sketch of my plan follows it) and settled on two WD Red SN700 2TB drives in a ZFS mirror. Those drives are rated for 2500TBW. For file storage, I’ll just use a 4TB Samsung 870 EVO with 2400TBW.

SSD       Capacity   TBW (TB)   Price (€)
980 PRO   1TB        600        68
980 PRO   2TB        1200       128
SN700     500GB      1000       48
SN700     1TB        2000       70
SN700     2TB        2500       141
870 EVO   2TB        1200       117
870 EVO   4TB        2400       216
SA500     2TB        1300       137
SA500     4TB        2500       325
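
For reference, the mirror creation I have in mind would look roughly like this. Only a sketch: the pool name ("tank") and the device paths are placeholders for whatever the drives show up as, and ashift=12 assumes 4K-sector drives.

    # List the stable device paths first: ls -l /dev/disk/by-id/
    zpool create -o ashift=12 \
        -O compression=lz4 -O atime=off \
        tank mirror \
        /dev/disk/by-id/nvme-WD_Red_SN700_2TB_SERIAL1 \
        /dev/disk/by-id/nvme-WD_Red_SN700_2TB_SERIAL2
    zpool status tank

(As far as I can tell, Proxmox can also do this from the web UI under Disks → ZFS.)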

Is that good enough? Would you rather recommend enterprise-grade SSDs? And if so, which M.2 NVMe models would you recommend? Or should I just stick with ext4 as a file system, losing the data-integrity guarantees and the ability to take snapshots?

I’d love to hear your thoughts about this, thanks!

    • Pete90@feddit.deOP · 2 years ago

      Thank you so much for this explanation. I am just a beginner, so those horror stories did scare me a bit. I also read that you can fine-tune ZFS to reduce write amplification, so I’ll read up on that subject a bit more.
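
      From what I’ve gathered so far, the knobs people mention most often look something like this; just notes to research, not a recommendation, and "tank" is a placeholder pool name:

          zfs set atime=off tank            # skip the access-time write on every read
          zfs set compression=lz4 tank      # compressible data means fewer blocks hit the SSD
          zfs set recordsize=1M tank/media  # bigger records for large sequential files
          # For VM disks on Proxmox, the zvol block size is chosen in the storage settings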

      I thought ZFS without redundancy gave no benefits, but I must have gotten that wrong. Thanks again!

        • lemmyvore@feddit.nl · 2 years ago

          Where can I read more about good ZFS settings for a filesystem on a new RAID6 array? I don’t want to manage disks or volumes with ZFS, I’ll be doing that with mdadm; I just want ZFS as the filesystem instead of ext4. I assume a ZFS filesystem can grow if the available space expands later?
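
          Concretely, what I have in mind is roughly this (just a sketch with placeholder names, assuming the mdadm array already exists as /dev/md0):

              # Single-device pool on top of the mdadm array; ZFS only sees /dev/md0
              zpool create -o ashift=12 -o autoexpand=on tank /dev/md0
              # Later, after mdadm --grow has finished reshaping the array:
              zpool online -e tank /dev/md0   # let ZFS expand into the new space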

            • lemmyvore@feddit.nl · 2 years ago

              > I assume you don’t like the inflexibility of RAIDZ resizing

              Right, I’d like to be able to add another disk and then grow the filesystem and be done with it.

              > my guess is that with mdadm+ZFS, features like self-healing won’t work because ZFS isn’t aware of the RAID at a low-level

              Really, I’ll have to look into that then, because health checks are my main reason for using ZFS over ext4.

              mdadm RAID should be a transparent layer for ZFS: it manages the array and exposes a raw storage device. I’m not sure why ZFS wouldn’t like that, but I don’t want to experiment if it’s not a reliable combination. I was under the impression that ZFS as a filesystem can be used without caring about the underlying disk layer, but if it’s too opinionated and requires its own disk management then too bad…
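
              From what I’ve read so far, the checksums should at least still detect corruption on a single mdadm-backed device, and copies=2 can supposedly give some limited self-repair even without ZFS-level redundancy, at the cost of doubling the space used by that dataset. A sketch with placeholder names:

                  zfs set copies=2 tank/important   # keep two copies of every block in this dataset
                  zpool scrub tank                  # read back and verify everything
                  zpool status -v tank              # shows any files with unrecoverable errors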

                • turbo_scanning@feddit.de · 2 years ago

                  Today, growing a pool is possible by adding a vdev, right?

                  So, instead of RAIDZ2, one could set up their pool with mirrored vdevs.

                  However, I’m not sure about the self-healing part. Would it still work with mirrored vdevs, especially when each of my vdevs consists of only two physical drives?
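
                  For reference, growing the pool that way should just be something like this (a sketch; the device paths are placeholders):

                      zpool add tank mirror /dev/disk/by-id/ata-DISK_C /dev/disk/by-id/ata-DISK_D
                      zpool list tank   # the added capacity shows up right away

                  As I understand it, every block still has a second copy on its mirror partner, so a scrub should still be able to repair bad blocks, but I’d be happy to be corrected.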

        • Pete90@feddit.deOP · 2 years ago

          I barely scratched the surface with ZFS, so I’m not going to touch another file system for a while now. I’m fine with only detecting data corruption, since those files (on the static data storage) can be replaced easily and hold no real value for me. All other data will either be on the redundant pool or saved to several other media, including one off-site copy.

          I already wrote down ashift=12 in my notes for when I set it up.
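
          I also noted one way to double-check the value after creating the pool (the pool name is a placeholder, and I haven’t verified this myself yet):

              zdb -C tank | grep ashift   # should report ashift: 12 for each vdev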

          In general, I found there is a lot of FUD out there when it comes to data security. One claim I liked a lot was that ECC RAM is mandatory for ZFS. Then one of its creators basically said: “Nah, it’s not needed any more than for any other file system.”

  • SayCyberOnceMore@feddit.uk · 2 years ago

    I’m kinda repeating things already said here, but there’s a couple of points I wanted to highlight…

    Monitor the SMART health: enterprise and consumer drives both fail; it’s good to know in advance.
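
    A rough sketch of that with smartmontools (device paths are examples; the exact service name differs between distros):

        smartctl -a /dev/nvme0   # full SMART / health report for an NVMe drive
        smartctl -a /dev/sda     # same for a SATA drive
        # For ongoing checks, configure /etc/smartd.conf and enable the daemon:
        systemctl enable --now smartd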

    Plan for failure: something will go wrong… might be a drive failure, might be you wiping it by accident… just do backups.

    Use redundancy; several cheapo rubbish drives in a RAID / ZFS / BTRFS pool are always better than one “good” drive on its own.

    Main point: build something and destroy it to see what happens, before you build your “final” setup - experience is always better than theory.

    I built my own NAS and was going with ZFS until I messed around with it… for me, BTRFS just made more sense because of my skills, the tools I use, etc… so I know I can repair it.

    And test your backups 🎃

    • Pete90@feddit.deOP · 2 years ago

      I’m currently playing around in VMs, even before I order my hard drives, just to see what I can do. Next up is simulating a root drive failure and how to replace that drive. I also want to test rolling back from snapshots.
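
      The two operations I want to practice look roughly like this. Only a sketch: the pool, disk and snapshot names are placeholders, and for a ZFS root on Proxmox the bootloader apparently also has to be set up on the replacement disk (proxmox-boot-tool):

          zpool replace rpool /dev/disk/by-id/OLD_DISK /dev/disk/by-id/NEW_DISK
          zpool status rpool                                 # watch the resilver finish
          zfs rollback rpool/data/vm-100-disk-0@known-good   # revert a VM disk to a snapshot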

      The data that I really do need and can’t replace is redundant anyway: one copy on my PC, one on my external HDD, one on my NAS and one on a system at my sister’s place. That’s four copies on several media (one cold), with one at another location. :)

  • AnonStoleMyPants@sopuli.xyz · 2 years ago

    Don’t sweat it.

    I remember looking into this as well, about a year ago. I found the same information and started looking into SSDs, consumer and enterprise grade, and after all that I realised most of it is just useless fussing about. Yes, it is an interesting rabbit hole, in which I probably spent a week. In the end, one simple thing nullifies most of it: you can track writes per day and SSD health. It is not like you need to somehow guess when the drives will fail. You do not. Keep track of the health and the writes per day and you will get a good sense of how your system behaves. Run that for six months and you are infinitely wiser when it comes to this stuff.
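
    As a sketch of what that tracking can look like (device paths and attribute names vary by drive):

        smartctl -A /dev/nvme0 | grep -i 'data units written'   # NVMe: one unit = 512,000 bytes
        smartctl -A /dev/sda | grep -i 'total_lbas_written'     # many SATA SSDs: multiply by the sector size
        # Log the value daily (e.g. via cron) and diff consecutive days to get writes per day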

    • Pete90@feddit.deOP · 2 years ago

      That rabbit hole is interesting, but also deep and scary. I’m trying to challenge myself by setting up Proxmox, as so far I’ve only used Raspberry Pis and OpenMediaVault. So when I saw those stories about drives dying after 6 months, I was a bit concerned, especially because I can’t yet verify the truth of those stories, since I’d call myself an advanced novice if I’m being generous.

      I’ll track drive usage and wear and see what my system does. Good point, then I can get rid of the guesswork. Thank you a lot!

  • NAK@lemmy.world · 2 years ago

    I’ll agree with the other commenter here.

    Also, there may not be much difference between the consumer and enterprise drives. The reason the enterprise ones cost more is the better warranty, not because they have different components.

    Monitor the drives; modern drives are pretty good at predicting when they are dying. Replace them if necessary.

    • Pete90@feddit.deOP · 2 years ago

      Yeah, concerning TBW there wasn’t a huge difference between the consumer and enterprise drives that I looked at, something like 2500TBW vs. 3500TBW (unless you go with those unaffordable drives, then yes). I’ll monitor the drives, and if I see rapidly increasing wear, I can still switch to another file system. The whole reason I bought the Lenovo is to set up a second machine and experiment while I still have a running “production” system. Thank you!

  • Decronym@lemmy.decronym.xyz (bot) · 2 years ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters   More Letters
    NAS             Network-Attached Storage
    RAID            Redundant Array of Independent Disks for mass storage
    SSD             Solid State Drive mass storage

    3 acronyms in this thread; the most compressed thread commented on today has 4 acronyms.

    [Thread #259 for this sub, first seen 2nd Nov 2023, 14:30] [FAQ] [Full list] [Contact] [Source code]