About a year ago I switched to ZFS for Proxmox so that I wouldn’t be running a technology preview.
Btrfs gave me no issues for years and I even replaced a dying disk without trouble. I use raid 1 for my Proxmox machines. Anyway, I moved to ZFS and it has been a less than ideal experience. The separate kernel modules mean that I can’t downgrade the kernel, plus the performance on my hardware is abysmal. I get only like 50-100 MB/s vs the several hundred I would get with btrfs.
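For anyone who wants to compare on their own hardware, something like this rough sequential write test works (the /tank/bench path is just a placeholder; /dev/zero-style tests look inflated if compression is on, hence urandom):

```
# quick sequential write test against a placeholder dataset mount
fio --name=seqwrite --directory=/tank/bench --rw=write --bs=1M --size=4G \
    --ioengine=psync --end_fsync=1
# or the low-tech version
dd if=/dev/urandom of=/tank/bench/testfile bs=1M count=4096 conv=fdatasync
```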
Any reason I shouldn’t go back to btrfs? There seems to be a community fear of btrfs eating data or having unexplainable errors. That is sad to hear as btrfs has had lots of time to mature in the last 8 years. I would never have considered it 5-6 years ago but now it seems like a solid choice.
Anyone else pondering or using btrfs? It seems like a solid choice.
btrfs has been the default file system for Fedora Workstation since Fedora 33, so there’s not much reason not to use it.
Btrfs came default with my new Synology, where I have it in Synology’s raid config (similar to raid 1 I think) and I haven’t had any problems.
I don’t recommend the btrfs drivers for Windows 10. I had a drive using them and it would often become unreachable under load, but that’s more a Windows problem than a problem with btrfs.
Didn’t have any btrfs problems yet; in fact, CoW saved me a few times on my desktop.
Can you elaborate for the curious among us?
btrfs + timeshift saved me multiple times, when updates broke random stuff.
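For anyone who hasn’t set that up: Timeshift in its btrfs mode just takes read-only snapshots before changes; doing roughly the same by hand looks like this (paths depend on your subvolume layout):

```
# read-only snapshot of the root subvolume before an update
sudo btrfs subvolume snapshot -r / /.snapshots/root-pre-update
# or let timeshift do it
sudo timeshift --create --comments "pre-update"
```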
If it didn’t give you problems, go for it. I’ve run it for years and never had issues either.
A bit off topic; am I the only one that pronounces it “butterface”?
Similarly, I read bcachefs as BCA Chefs 😅
I call it butter fuss. Yours is better.
Not anymore.
You son of a bitch, I’m in.
Isn’t it meant to be like “better FS”? So you’re not too far off.
i call it “butter FS”
Ah feck. Not any more.
Related, and I cannot help but read “bcachefs” as “bitch café”
Raid 5/6, only bcachefs will solve it
Btrfs Raid 5 and raid 6 are unstable and dangerous
Bcachefs is cool but it is way too new and isn’t even part of the kernel yet.
https://en.wikipedia.org/wiki/Bcachefs it was added as of Linux 6.7
Edit: and I said raid 5/6 is exactly what troubles btrfs, so you proved my point while trying to explain to me that I’m wrong.
I thought it was then removed later after a disagreement between Linus and the bcachefs dev.
Yeah, I remember something like that. I don’t remember exactly which kernel version it was when they removed it.
Pretty sure it’s not removed, they just aren’t accepting any changes from the developer for the 6.13 cycle
Using it here. Love the flexibility and features.
I run it now because I wanted to try it. I haven’t had any issues. A friend recommended it as a stable option.
One time I had a power outage and one of the btrfs HDDs (not in a raid) couldn’t be read anymore after reboot. Even with help from the (official) btrfs mailing list it was impossible to repair the file system. After a lot of low-level tinkering I was able to retrieve the files, but the file system itself was absolutely broken; no repair process was possible. I have since switched to ZFS; the emergency options are much more capable.
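For anyone who ends up in the same spot, the usual last-resort tool is btrfs restore, which copies whatever it can still read off an unmountable filesystem without touching it (device and target path here are placeholders):

```
# read-only recovery of files onto another disk
sudo btrfs restore -v /dev/sdX1 /mnt/recovery
```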
Was that less than 2 years ago? Were you using kernel 5.15 or newer?
Yes, that was May/June '23 and I was on a 6.x kernel.
You shouldn’t have abysmal performance with ZFS. Something must be up.
What’s up is ZFS. It is solid but the architecture is very dated at this point.
There are about a hundred different settings I could try to change but at some point it is easier to go btrfs where it works out of the box.
You’ve been downvoted, but I’ve seen a fair share of ZFS implementations confirm your assessment.
E.g. “Don’t use ZFS if you care about performance, especially on SSD” is a fairly common refrain in response to anyone asking about how to get the best performance out of their solution.
Since most people with decently simple setups don’t have the described problem, likely something’s up with your setup.
Yes it’s old and yes it’s complicated, but it doesn’t have to be to get decent performance.
I have been trying to get ZFS working well for months. Also I am not the only one having issues as I have seen lots of other posts about similar problems.
I don’t doubt that you have problems with your setup. Given the large number of (simple) ZFS setups that are working flawlessly, there are bound to be a large number of issues to be found on the Internet. People who are discontent voice their opinion more often and more loudly than people who are satisfied.
What seems dated in its architecture? Last time I looked at it, it struck me as pretty modern compared to what’s in use today.
It doesn’t share well. Anytime anything IO heavy happens the system completely locks up.
That doesn’t happen on other systems
That doesn’t speak much of the architecture. Also it’s really odd. Not denying what you’re seeing is happening, just that it seems odd based on the setups I run with ZFS. My main server is in fact a shared machine that I use as a workstation and gaming machine as well as a server. It all works in parallel. I used to have a mirror, then a 4-disk RAIDz and now an 8-disk RAIDz2. I have multiple applications constantly using the pool. I don’t notice any performance slowdowns on the desktop, or in-game when IO goes high. The only time I notice anything is when something like multiple Plex transcoders hit the CPU hard. Sequential performance is around 1.3 GB/s, which is limited by the data bus speeds (USB DAS boxes). Random performance is very good, although I don’t have any numbers off the top of my head. I’m using mostly WD Elements shucked disks and a couple of IronWolfs. No enterprise-grade disks on this system.
I’m also not saying that you have to keep fucking around with it instead of going Btrfs. Simply adding another anecdote to the picture. If I had a serious problem like that and couldn’t figure it out, I’d be on LVMRAID+Ext4, which is what I used prior to ZFS.
Yeah maybe my machines are cursed
That is totally possible. I spent a month changing boards and CPUs to fix a curse on my main, unrelated to storage. In case you’re curious.
@avidamoeba @possiblylinux127 Does your ZFS not print on Tuesdays? https://bugs.launchpad.net/ubuntu/+source/cupsys/+bug/255161/
I doubt that. Some options:
- bad memory
- failing drives
- silent CPU faults
- poor power delivery
The list is endless. Maybe BTRFS is more tolerant of the problems you’re facing, but that doesn’t mean the problems are specific to ZFS. I recommend doing a bit of testing to see if everything looks fine on the HW side of things (memtest, smart tests, etc).
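A minimal pass at that, with placeholder device names:

```
# SMART long self-test, then look at the attributes and error log
sudo smartctl -t long /dev/sda
sudo smartctl -a /dev/sda
# quick userspace RAM check; boot memtest86+ for a proper run
sudo memtester 2G 1
```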
I set the ARC cache to 4 GB and it is working better now.
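For reference, capping the ARC on Proxmox/Debian is just a module option; something like this, assuming the default locations (value is in bytes):

```
# /etc/modprobe.d/zfs.conf — cap the ARC at 4 GiB
options zfs zfs_arc_max=4294967296
```

Apply it live with `echo 4294967296 | sudo tee /sys/module/zfs/parameters/zfs_arc_max`, and run `update-initramfs -u` so it sticks across reboots.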
You have angered the zfs gods!
I have gotten a ton of people to help me. Sometimes it is easier to piss people off to gather info and usage tips.
btrfs raid subsystem hasn’t been fixed and is still buggy, and does weird shit on scrubs. But fill your boots, it’s your data.
One day I had a power outage and I wasn’t able to mount the btrfs system disk anymore. I could mount it in another Linux but I wasn’t able to boot from it anymore. I was very pissed, lost a whole day of work
ACID go brrr
When did this happen?
I think 5 years ago, on Ubuntu
For my JBOD array, I use ext4 on GPT partitions. Fast, efficient, mature.
For anything else I use ext4 on lvm thinpools.
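In case it helps anyone, the thinpool part is just this (VG name and sizes are examples):

```
# thin pool inside an existing VG, a thin LV on top, ext4 on that
sudo lvcreate --type thin-pool -L 500G -n tp0 vg0
sudo lvcreate --type thin -V 1T --thinpool tp0 -n data vg0
sudo mkfs.ext4 /dev/vg0/data
```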
That doesn’t do error detection and correction nor does it have proper snapshots.
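For comparison, on btrfs or ZFS a scrub re-reads everything against its checksums and repairs from the mirror copy when it can (mount point and pool name are examples):

```
# btrfs: -B runs in the foreground and prints a summary
sudo btrfs scrub start -B /mnt/pool
# zfs equivalent
sudo zpool scrub tank
```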
No reason not to. Old reputations die hard, but it’s been many many years since I’ve had an issue.
I also like that btrfs is a lot more flexible than ZFS, which is pretty strict about the size and number of disks, whereas you can upgrade a btrfs array ad hoc.
I’ll add to avoid RAID5/6 as that is still not considered safe, but you mentioned RAID1 which has no issues.
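E.g. growing an existing filesystem and converting it to raid1 is just this (mount point and device are examples):

```
# add a disk to a mounted filesystem, then rebalance data + metadata onto raid1
sudo btrfs device add /dev/sdd /mnt/pool
sudo btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/pool
```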
I’ve been vaguely planning on using btrfs in raid5 for my next storage upgrade. Is it really so bad?
Check status here. It looks like it may be a little better than the past, but I’m not sure I’d trust it.
An alternative approach I use is mergerfs + snapraid + snapraid-btrfs. This isn’t the best idea for a system drive, but if it’s something like a NAS it works well, and snapraid-btrfs doesn’t have the write-hole issues that normal snapraid does, since it operates on r/o snapshots instead of raw data.
It’s affected by the write-hole phenomenon. In btrfs’s case that can mean that perfectly good old data might corrupt without any notice.
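To make the mergerfs + snapraid-btrfs suggestion above concrete, a minimal sketch with placeholder paths:

```
# /etc/fstab — pool the data disks into one mount point with mergerfs
/mnt/disk*  /mnt/storage  fuse.mergerfs  defaults,allow_other,category.create=mfs  0 0
```

```
# /etc/snapraid.conf — parity on its own disk, data disks listed individually
parity  /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/
```

snapraid-btrfs then wraps `snapraid sync` so parity is computed against read-only snapshots of each data disk rather than the live data.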
I’ve been using btrfs on raid1 for a few years now with no major issues.
It’s a bit annoying that a system with a degraded raid doesn’t boot up without manual intervention though.
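For anyone hitting that, the one-off escape hatch is mounting with the degraded option (device is a placeholder):

```
# mount a raid1 that is missing a member, just for this boot
sudo mount -o degraded /dev/sdb2 /mnt
```

For a root filesystem you can add `rootflags=degraded` to the kernel command line in GRUB for a single boot.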
Also, not sure why, but I recently broke a system installation on btrfs by taking out the drive and accessing it (and writing to it) from another PC via a USB adapter. But I guess that is not a common scenario.