Seeking advice about BTRFS RAID

Hi, I got a tiny Lenovo M720q (i5-8400T / 8 GB RAM / 128 GB NVMe / 1 TB 2.5″ HDD) that I want to set up as my home server, with the ability to add two more drives later (for RAID5, if possible) using its two USB 3.1 Gen 2 (10 Gbps) ports.

  • The OS (Debian 12 + Docker) will be exclusive to the NVMe; I will mostly use 40 of its 128 GB capacity and have no idea how to make use of the rest.
  • My data (media, documents and ISO files) will reside on the HDD pool, while I keep a copy of my docs on my home PC.

I read a bit about BTRFS RAID and even experimented with it in a VM, and it really got me interested: I like its flexibility in rebalancing between RAID levels and in hot-swapping unequally sized drives in both striped and mirrored arrays. However, most of what I read online predates kernel 6.2 (which improved BTRFS RAID56 reliability). So here I am, asking whether anyone here uses BTRFS RAID and whether it is stable enough for a mostly idle server, or whether I should stick with LVM instead. Which good practices should I follow, and which bad ones should I avoid?
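For context, the rebalancing flexibility I mean is the `btrfs balance` convert filters. A rough sketch of what I tried in the VM (device names are hypothetical; adjust to your setup):

```shell
# Create a pool on two drives; data unreplicated for now, metadata mirrored
mkfs.btrfs -d single -m raid1 /dev/sdb /dev/sdc
mount /dev/sdb /mnt/pool

# Later: convert data to RAID1 online, no reformat needed
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/pool

# Add a third, differently sized drive and spread existing chunks across it
btrfs device add /dev/sdd /mnt/pool
btrfs balance start /mnt/pool

# See how chunks are allocated per device
btrfs filesystem usage /mnt/pool
```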

Thank you.

Atemu,
@Atemu@lemmy.ml avatar

I got a tiny Lenovo M720q (i5-8400T / 8 GB RAM / 128 GB NVMe / 1 TB 2.5″ HDD) that I want to set up as my home server, with the ability to add two more drives later (for RAID5, if possible) using its two USB 3.1 Gen 2 (10 Gbps) ports.

Do not use USB drives in a multi-device scenario. Best avoid actively using them at all. Use USB drives for at most daily backups.

I wouldn’t advocate for RAID5. I’d also advocate against RAID to begin with in a homelab setting unless you have special uptime requirements (e.g. often away from home for prolonged periods) or an insane amount of drives.

I will mostly use 40 of its 128 GB capacity and have no idea how to make use of the rest.

I use spare SSD space for write-through bcache. You need to make the decision to use it early on because you need to format the HDDs with bcache beneath the FS and post-formatting conversions are hairy at best.
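A minimal sketch of that layout, assuming hypothetical device names (`/dev/sdb` for the HDD, `/dev/nvme0n1p3` for the spare SSD space):

```shell
# Format the HDD as a bcache backing device, the spare SSD partition as cache
make-bcache -B /dev/sdb
make-bcache -C /dev/nvme0n1p3

# Attach the backing device to the cache set (UUID from bcache-super-show)
echo <cset-uuid> > /sys/block/bcache0/bcache/attach

# Write-through is the safe default; set it explicitly to be sure
echo writethrough > /sys/block/bcache0/bcache/cache_mode

# The filesystem goes on top of the bcache device, not the raw HDD
mkfs.btrfs /dev/bcache0
```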

most of what I read online predates kernel 6.2 (which improved BTRFS RAID56 reliability).

Still unstable and only for testing purposes. Assume it will eat your data.

yote_zip,
@yote_zip@pawb.social avatar

You can also use MergerFS+SnapRAID over individual BTRFS disks which will give you a pseudo-RAID5/6 that is safe. You dedicate one or more disks to hold parity, and the rest will hold data. At a specified time interval, parity will be calculated by SnapRAID and stored on the parity disk (not realtime). MergerFS will scatter your files across the data disks without using striping, and present them under one mount point. Speed will be limited to the disk that has the file. Unmitigated failure of a disk will only lose the files that were assigned to that disk, due to lack of striping. Disks can be pulled and plugged in elsewhere to access the files they are responsible for.

It’s a bit of a weird-feeling solution if you’re used to traditional RAID but it’s very flexible because you can add and remove disks and they can be any size, as long as your parity disks are the largest.
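As a rough example of what such a setup looks like (all paths and disk names here are made up): a snapraid.conf with one parity disk and two data disks, plus a MergerFS fstab entry pooling the data disks under one mount point:

```shell
# /etc/snapraid.conf (hypothetical layout)
#   parity  /mnt/parity1/snapraid.parity
#   content /mnt/disk1/snapraid.content
#   content /mnt/disk2/snapraid.content
#   data d1 /mnt/disk1/
#   data d2 /mnt/disk2/

# /etc/fstab entry: pool the data disks under /mnt/storage
#   /mnt/disk1:/mnt/disk2  /mnt/storage  fuse.mergerfs  defaults,category.create=mfs,moveonenospc=true  0 0

# Periodic parity update (e.g. from cron):
snapraid sync
# Occasional integrity check against the stored parity:
snapraid scrub
```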

poVoq,
@poVoq@slrpnk.net avatar

There is little benefit to using raid5/6 over raid1 IMHO, since you can quite easily match disks to utilize all the space, as others have already mentioned.

Moonrise2473,

I understood that software RAID over USB is dangerous: current fluctuations can take a drive offline for a few seconds, and the array then falls out of sync. Maybe it’s OK for files that aren’t accessed too often, like video file backups.

poVoq,
@poVoq@slrpnk.net avatar

In my experience there are often issues with SATA SSDs over USB, but slower HDDs seem to work fine. With btrfs I would set up a regular scrub job to find and fix possible data errors automatically.
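For example, a monthly scrub via cron (the mount point is hypothetical):

```shell
# /etc/cron.d/btrfs-scrub — run a scrub on the 1st of each month at 03:00
#   0 3 1 * * root /usr/bin/btrfs scrub start -B /mnt/pool

# Check the result of the most recent scrub:
btrfs scrub status /mnt/pool
```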

Atemu,
@Atemu@lemmy.ml avatar

With btrfs I would set up a regular scrub job to find and fix possible data errors automatically.

This only works for minor errors caused by tiny physical changes. A buggy USB drive dropping out and losing writes it claimed to have written can kill a btrfs (sometimes unfixably so) especially in a multi-device scenario.

mee,

I would still say don’t use RAID56. You can use RAID1, which gives you roughly the sum of all your drives divided by two in usable space, as long as you’re not mixing something like one 4TB drive with 2x1TB. It’s called RAID1, but it really writes every piece of data to two separate drives; that’s why in the 4TB + 2x1TB example you can’t write more than 2TB, since beyond that there’s no room for a second copy on a separate drive. www.carfax.org.uk/btrfs-usage/ is a calculator you can play with.
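That rule of thumb can be computed directly: btrfs RAID1 usable space is half the total capacity, capped by the fact that every chunk needs a copy on a second device, so the largest drive can’t contribute more than the sum of the others. A small sketch:

```shell
# Usable btrfs RAID1 space = min(total / 2, total - largest)
raid1_usable() {
    total=0
    largest=0
    for size in "$@"; do
        total=$((total + size))
        if [ "$size" -gt "$largest" ]; then largest=$size; fi
    done
    half=$((total / 2))
    rest=$((total - largest))
    if [ "$half" -lt "$rest" ]; then echo "$half"; else echo "$rest"; fi
}

raid1_usable 4 1 1   # 4TB + 2x1TB -> prints 2
raid1_usable 2 2 2   # 3x2TB      -> prints 3
```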

btrfs.readthedocs.io/en/latest/Status.html#block-… They still list RAID56 as unstable in the docs.
