
twistypencil, to steamdeck in [PSA] Swapping your Deck's filesystem to Btrfs is easy to do, and can give you more space for free

I’ve tried btrfs twice, and both times I regretted it. I also regret XFS and ReiserFS, but I had to use those because ext2 just could not deal with the very large, deep, and multitudinous trees of files I had to manage. Oh, and JFS, also regret.

Nagairius,

Really? I have run BTRFS for the last 3 years on my desktop and my laptop, and it has saved me quite a few times now. I have yet to have any issue traced back to my filesystem.

twistypencil,

Maybe I used it too early, dunno.

yote_zip,
@yote_zip@pawb.social avatar

How exactly did the data get lost? Nowadays BTRFS stores 2 copies of its metadata by default (this wasn’t always the case), and since it’s Copy-On-Write (no corruption during power loss) it should be basically bulletproof for filesystem integrity. Running RAID5/6 (which are known to have bugs) or trying to perform filesystem repair without reading the manual is about the only thing I can think of that could cause actual issues.

Scrubs need to be run ~monthly to detect bitrot for normal data. Note that BTRFS actually has checksums for data, so you can detect data loss - with something like Ext4 you can only detect if the metadata/filesystem is corrupt. Bitrot happens naturally and should be protected against with either backups or RAID. SnapRAID is a good replacement for RAID5/6 if you’re trying to run BTRFS on a NAS, or you can easily run two drives in RAID1 so they self-heal each other. If data integrity is of utmost importance and you only have one drive, you can actually run btrfs balance start -dconvert=dup /path/to/btrfs/mount to tell BTRFS to keep 2 copies of your data on your drive, halving total available space and write speed. -mconvert=dup is used to keep two copies of metadata, but that’s already enabled by default.
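
For reference, here is a minimal sketch of the maintenance commands described above (the mount point is a placeholder; adjust it to your setup):

    # Start a manual scrub to verify checksums on all data and metadata (run ~monthly)
    sudo btrfs scrub start /path/to/btrfs/mount
    # Check scrub progress and any detected/corrected errors
    sudo btrfs scrub status /path/to/btrfs/mount
    # Convert the data profile to "dup": two copies of every data block on one drive
    # (halves usable space and write throughput, as noted above)
    sudo btrfs balance start -dconvert=dup /path/to/btrfs/mount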

twistypencil,

I couldn’t say how, when I got to that point, my goal was recovery, and stabilizing, and moving on. Trying to figure out how it failed was beyond my capabilities and scope

anlumo,

You should try ZFS (not on the SD, though). It’s pretty solid and very commonly used in NAS setups.

vagrantprodigy,

I’ve had great luck with xfs and zfs, but btrfs has lost data for me more than once.

scrubbles, to steamdeck in [PSA] Swapping your Deck's filesystem to Btrfs is easy to do, and can give you more space for free
@scrubbles@poptalk.scrubbles.tech avatar

As someone who fell for the “Swap over to FAT32 and you’ll gain so much space” trick back in the day, I feel like I need to point out to newbies here: anything done to your file system carries quite a bit of risk. Things can go wrong in ways that are unrecoverable unless you fully reset your device. I’m not saying this project is unstable, but there is a high amount of risk involved with this.

If you decide to do this, back up all data that you may need saved and then mentally prepare that you may ultimately end up resetting your device in the end. These are real possibilities when messing with file systems.

anlumo,

The real risk is losing a bit of time with this. Since everything is backed up anyways, the data is just a restore away.

If you don’t have a backup, that’s the risk and has nothing to do with this procedure.

For the Steam Deck the risk is even less, since Steam backs up savegames automatically and the games can be re-downloaded at any point for free (except for Unity developers, who have to pay 20 cents for this).

skullgiver, (edited ) to steamdeck in [PSA] Swapping your Deck's filesystem to Btrfs is easy to do, and can give you more space for free
@skullgiver@popplesburger.hilciferous.nl avatar

deleted_by_author

    Fubarberry,
    @Fubarberry@sopuli.xyz avatar

    I know this gitlab project sets some downloading/temp folders to have COW disabled, possibly for this very reason.
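
    For anyone curious, disabling copy-on-write on a directory is done with chattr; a minimal sketch, assuming the usual e2fsprogs tools are installed (the path is a placeholder, and the +C flag only affects files created after it is set):

        # Disable copy-on-write for files subsequently created in this directory
        sudo chattr +C /path/to/downloads
        # Verify the attribute is set ("C" should appear in the flags)
        lsattr -d /path/to/downloads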

    anlumo,

    Yeah, Linus Torvalds has been pushing for ECC RAM everywhere for just this reason.

    yote_zip,
    @yote_zip@pawb.social avatar

    The filesystem metadata comes with 2 copies that can heal each other, and Copy-on-Write protects against power loss. The filesystem itself should be bulletproof.

    I feel like people reporting data loss on BTRFS are unaware that at least BTRFS is actually measuring the data loss. Bitrot is not rare, especially with how big our drives are getting. If you care about your data it should be backed up and/or RAIDed. Ext4 has no idea if your data is still intact - that’s not the same as no data loss.

    skullgiver, (edited )
    @skullgiver@popplesburger.hilciferous.nl avatar

    deleted_by_author

    yote_zip,
    @yote_zip@pawb.social avatar

    What deduplication program did you use? Deduplication is not technically an end-to-end supported feature, and depending on how the third-party program implemented it there could be issues earlier in the pipeline. I’m also not sure how a RAM bit flip would interact in this scenario - I know ZFS checks the file checksum several times during transaction but I don’t know how often BTRFS does.

    The problem is that there are a lot of people online reporting vague problems with BTRFS, but the reports have little info on how the problems were actually caused and cannot be reproduced. There is no solution if we’re operating under these rules, other than to completely stop using BTRFS out of pure superstition. If there are bugs, we need to be able to point to them in order to fix them. As I said before, the problem you had would not even have been detected by Ext4, so I think error reporting is biased against a FS that actually checks its work. W/r/t checking work, I think ZFS gets away with a lot more because it’s normally run in RAID setups, where healing happens automatically. BTRFS, lacking stable RAID5/6 support, is usually just run on a single drive, and any data integrity error becomes a target of frustration as soon as it happens.

    skullgiver, (edited )
    @skullgiver@popplesburger.hilciferous.nl avatar

    deleted_by_author

    yote_zip,
    @yote_zip@pawb.social avatar

    I’m interested to see that reported somewhere - the duperemove repo might be a good starting point as that’s generally the standard BTRFS dedupe solution. I don’t currently see any issues on the GitHub repo about corruption (or at least the last one was 7 years ago). Again, I’m not sure if a RAM bit flip could cause this during a dedupe. Just because you scrubbed, deduped, and scrubbed again doesn’t mean there wasn’t a bit flip during the dedupe.

    As for btrfs check vs fsck, there are just way fewer things that need to be repaired in BTRFS and ZFS because they are copy-on-write (ZFS doesn’t even have an fsck at all!). Because Ext4 is not copy-on-write, it’s highly vulnerable to power-loss events, and an fsck is required to replay the journal when this happens. BTRFS and ZFS make atomic COW transactions and will never be in a state of corruption on power loss. The other part of fsck is repairing the filesystem, which BTRFS and ZFS do through scrub and/or auto-heal on read instead. ZFS and BTRFS keep multiple copies of the filesystem metadata so that they can auto-repair themselves while online. btrfs check is not something that should be used lightly, and I’ve seen a lot of people just run btrfs check --repair expecting the same behavior as fsck, then wonder why they ended up with a broken filesystem.
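
    As a rough sketch of the order of operations implied above (device and mount paths are placeholders; btrfs check is read-only unless --repair is passed):

        # Prefer an online scrub first - it verifies checksums and heals from the
        # duplicate metadata copy while the filesystem stays mounted
        sudo btrfs scrub start /mnt/data
        # If the filesystem will not mount, inspect it read-only first (no changes made)
        sudo btrfs check /dev/sdX1
        # Only as a last resort, ideally after imaging/backing up the device:
        # sudo btrfs check --repair /dev/sdX1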

    yote_zip, to steamdeck in [PSA] Swapping your Deck's filesystem to Btrfs is easy to do, and can give you more space for free
    @yote_zip@pawb.social avatar

    I noticed this guide recommends compress-force=zstd, which sets the ZSTD compression to level 3. There’s a BTRFS benchmark and ZFS benchmark of the ZSTD levels which can give a rough idea of how ZSTD performs for transparent filesystem compression. Note that almost all of ZSTD’s compression gains happen starting at level 1, and levels after that have very minor improvements.

    Also keep in mind that ZSTD levels only affect how long it takes to write new data to the filesystem. ZSTD is somewhat unique as a compression algorithm in that as you increase compression effort, the decompression effort stays the same. You could compress everything with level 15 and it will decompress just as fast as level 1 (~generally). Setting higher ZSTD levels could arguably make more sense for a gaming drive because the data is usually write-once, read-many. I don’t know at what level the Steam Deck CPU will start limiting your I/O though.

    BTRFS compression is enabled per-file, so you can change ZSTD levels at any time and old data will still be compressed with your previous algorithm/level. To recompress using a new level, change the ZSTD level in /etc/fstab and remount the partition, then run a defrag to poke the data into recompression.
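
    A minimal sketch of that workflow, assuming you want level 15 (the device, mount point, and level are placeholders):

        # /etc/fstab entry - force zstd compression at level 15 for this mount
        UUID=xxxx-xxxx  /path/to/mount  btrfs  compress-force=zstd:15,noatime  0 0

        # Re-apply the fstab options without rebooting
        sudo mount -o remount /path/to/mount
        # Rewrite existing files recursively so they get recompressed with zstd
        sudo btrfs filesystem defragment -r -czstd /path/to/mount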

    flux,

    Here’s one sharp edge: defrag will unshare file contents (breaking reflinks), so sometimes it’s just not feasible to do it.

    yote_zip,
    @yote_zip@pawb.social avatar

    Run duperemove on the partition after defragging to get those reflinks back. Duperemove is usually a good idea anyway unless you’re running on a HDD - reflinks are almost identical to fragmentation in nature so you might prefer to have less fragmentation on a mechanical drive instead of easy de-duping.
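
    A minimal sketch of that, assuming duperemove is installed (paths are placeholders):

        # Scan recursively (-r) and submit duplicate extents for deduplication (-d)
        sudo duperemove -rd /path/to/mount
        # Optional: keep a hash database so later runs only rescan changed files
        sudo duperemove -rd --hashfile=/var/tmp/duperemove.hash /path/to/mount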

    flux,

    If you can do that, you already had enough space for reflinking not to matter in the first place, right? Or you can carefully do the defragmenting in parts, running duperemove incrementally? Seems like a lot of wasted time :).

    yote_zip,
    @yote_zip@pawb.social avatar

    Free storage is free storage, and storage is at a premium on the Steam Deck - I would gladly trade time for storage, considering that time passes regardless. This defrag scenario would only happen if you want to change ZSTD levels, so if you pick your level at the start, copy your data on, then run duperemove you’ll save the most space possible without needing to defrag. Deduplication is probably the difference between being able to fit 1-2 more games on your Steam Deck - Wine prefixes are prime targets for deduplication.

    Luci, to steamdeck in [PSA] Swapping your Deck's filesystem to Btrfs is easy to do, and can give you more space for free

    To get around the case-folding issue, you could mount your steamapps/common folder as a loop device formatted as ext4 with case-folding support. The virtual device should follow the compression settings without too much issue, but deduplication won’t be an option and snapshots may be larger.
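
    A rough sketch of that idea, assuming your kernel and e2fsprogs support ext4 casefolding (the file name, size, and mount paths are placeholders):

        # Create a sparse backing file on the BTRFS volume and format it as ext4
        # with the casefold feature enabled
        truncate -s 64G /path/to/btrfs/steamapps.img
        mkfs.ext4 -O casefold /path/to/btrfs/steamapps.img
        # Mount it via a loop device
        sudo mount -o loop /path/to/btrfs/steamapps.img /mnt/steamapps
        # Casefolding is enabled per (empty) directory with the +F attribute
        sudo mkdir /mnt/steamapps/common && sudo chattr +F /mnt/steamapps/common
        # Bind-mount the casefolded directory over the Steam library path
        sudo mount --bind /mnt/steamapps/common ~/.local/share/Steam/steamapps/common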

    Sh1nyM3t4l4ss, to linux in WORDY: A TUI WORD PUZZLE

    Pretty cool in principle, although the default word list on my system is awful for wordle.

    christos,
    @christos@lemmy.world avatar

    Oh, the word list issue is a matter of great debate. I opted for the default word list just to avoid this matter (thinking that this list is something fixed and undebatable). It might be a good idea to make the word list configurable, so that each user may use their preferred word list. Stay tuned, I might do it in the very near future.

    njinx,

    Have you thought of using the Wordle word list by default and letting the user override it via a config option? This would cut down on the per-distro variability. IIRC the word list itself is around 14KB, so not terribly large. You could even pull it from GitHub via curl during installation if you’d like. I don’t think I’d mind implementing this if you’re interested.

    christos,
    @christos@lemmy.world avatar

    If someone wishes to play the game using a different word list, they can do so by editing line 17 of wordy.sh:

    
        WORD_LIST="/usr/share/dict/words"
    

    change to

    
        WORD_LIST="/path/to/preferred/wordlist"
    

    I have already considered this before. I know that what you propose is quite easy. However, I do not want to impose any other word list (one that might even be considered “proprietary”) on anyone, when the majority of users’ systems already have the default word list. So, any user who wants another list can easily edit that line and go on enjoying the game with the list they like.

    Another solution is a config file instead of editing line 17. It is just as easy, but having a config file to configure just one parameter doesn’t sound great.
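
    A possible middle ground, sketched below: keep the system default but allow an optional override via an environment variable or a tiny config file. The variable name and config path here are purely hypothetical:

        # Hypothetical: read an optional config file, then an env var, then the default
        CONFIG_FILE="${XDG_CONFIG_HOME:-$HOME/.config}/wordy/wordy.conf"
        [ -f "$CONFIG_FILE" ] && . "$CONFIG_FILE"
        WORD_LIST="${WORDY_WORD_LIST:-${WORD_LIST:-/usr/share/dict/words}}"

    That keeps /usr/share/dict/words as the default while letting users point at any list without editing the script.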

    christos,
    @christos@lemmy.world avatar

    Another alternative is to provide the user with a few word lists (as you mentioned, they should be quite light), and from then on everyone is free to choose their preferred one.

    It could be done with curl as well, but then we would have another dependency: curl.

    I forgot to thank you for the feedback, and for your willingness to help; it is appreciated.

    christos,
    @christos@lemmy.world avatar
    • As mentioned above, this script is using the word list contained in /usr/share/dict/words.

      If your distro doesn’t have it installed, you can install the respective package (words, wordlist, etc.) using your package manager (apt, pacman, etc.).

    • ADDITIONALLY, if someone wishes to play the game using a different word list, they can do so by editing line 17 of wordy.sh

    
        WORD_LIST="/usr/share/dict/words"
    

    change to

    
        WORD_LIST="/path/to/preferred/wordlist"
    
    irmoz, to linux in WORDY: A TUI WORD PUZZLE

    Pretty cool dude, nice work :)

    christos,
    @christos@lemmy.world avatar

    Thanks!

    copylefty, to linux in Happy Birthday, Linux!
    @copylefty@lemmy.fosshost.com avatar

    Why does Tux look sad? :(

    Osama,

    Because we are not celebrating him on time

    lowleveldata, to linux in Happy Birthday, Linux!

    Ah off-by-1 error. Typical.

    pewgar_seemsimandroid, to fediverse in GitLab plans support for ActivityPub

    gitlab is allowed

    PlexSheep, to fediverse in GitLab plans support for ActivityPub
    @PlexSheep@feddit.de avatar

    This feels like such a great step in the right direction. I self-host Gitea, which also has this planned and is working on it, but it’s taking a lot of time. I might consider switching to GitLab if they are faster.

    gondwana, to fediverse in GitLab plans support for ActivityPub

    Looking at their epic and list of tickets, they look serious about it.

    This is awesome.

    Anonymousllama, to fediverse in GitLab plans support for ActivityPub

    It’s a great use case that they’ve defined. Super keen to see how it turns out for them.

    nils, to fediverse in GitLab plans support for ActivityPub
    @nils@feddit.de avatar

    This is amazing! Honestly a no-brainer feature. Having to create an account just to contribute to a single project’s instance is not a great experience currently, and is the reason I mainly stick to GitHub.

    skymtf, to fediverse in GitLab plans support for ActivityPub
    @skymtf@pricefield.org avatar

    This is so based!
