I’ve messed up my system so many times over the years that now I think I secretly get excited when it accidentally happens. Maybe I’m a masochist, but I actually enjoy trying to understand what went wrong. With a USB stick carrying a lightweight Linux distro and chroot, you can usually get back in there and look around at the damage.
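For anyone new to that dance, the live-USB-plus-chroot recovery usually looks roughly like this (a sketch only; the device names are placeholders for your actual partitions, so check `lsblk -f` first):

```shell
# From the live USB session. /dev/sda2 and /dev/sda1 are examples;
# run `lsblk -f` to find your real root and EFI partitions.
sudo mount /dev/sda2 /mnt              # the installed root filesystem
sudo mount /dev/sda1 /mnt/boot/efi     # EFI system partition, if UEFI

# Bind-mount the virtual filesystems so tools inside the chroot behave
for d in /dev /proc /sys /run; do
  sudo mount --bind "$d" "/mnt$d"
done

# Poke around the broken install as if you'd booted it
sudo chroot /mnt /bin/bash
```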
Although I think having to fix a borked bootloader is a good bit of experience, it's probably not something you are always going to want to spend time on. I have used boot-repair only once, but it was like magic. Just throwing it out there for your future use and a general recommendation. :)
Sweet, welcome! :) I know the feeling. I just finished reinstalling Nobara after being dumb and goofing up patching. Then I tried to fix it and made the system totally unusable and I gave up.
A while ago I jacked my grub config and decided to try to fix it manually. I managed to stumble through it and learned some stuff, though I am still fuzzy on some details.
I mostly want to just use the computer without a lot of headache and both Mint and Nobara have been great for coding (various), electronics design, 3d modeling and printing, graphics, photo editing, and such.
This is why I gave up on fixing it yesterday lol. I spent a few days setting it up; I didn’t wanna spend a few more trying to figure out exactly what the issue was when I could just give in and actually use it.
Totally valid! Theoretically with more experience it may be easier / faster to fix but…idk
See, this is why I keep /home on a separate partition (or drive in some cases). I can reinstall or switch distros anytime without worrying about all my files (they’re backed up anyway, but doing a restore is a PITA).
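For anyone wanting to copy that setup, a separate /home is just its own partition plus one fstab line (sketch only; the UUID and filesystem below are made up, grab the real UUID with `blkid`):

```
# /etc/fstab entry, example only; replace the UUID with the output of
# `sudo blkid` for your /home partition.
UUID=0d6c1a5e-1111-2222-3333-444455556666  /home  ext4  defaults  0  2
```

Most distro installers will also let you point /home at an existing partition during a reinstall, as long as you don’t tick the format box.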
If you keep around a bootable rescue stick like System Rescue, it has a boot menu entry that will boot the Linux installed on your machine. Once you do that, you can run a command or two to reinstall the bootloader. You can search the net or whatever at leisure, since the system will be fully working.
Alternatively, if your system Linux is borked harder, you can boot the rescue Linux and use more advanced methods, depending on what’s wrong. The rescue Linux also has a graphical environment with a browser if you need it.
At the very least sometimes you can figure out what went wrong. It may not be much comfort if you lost your system but at least you learn what not to do in the future. Too many people just say “oh, it just broke” and leave it at that.
I think I know what the issue was. I modified the grub.cfg file and ran grub2-mkconfig, and I think it detected a Linux install on my root partition but didn’t recognize my /boot or /boot/efi partitions, and I couldn’t figure out how to fix that via the grub CLI. If that wasn’t the case, then that’s okay. I’ll make sure to teach myself a bit more about the bootloader before trying to edit it again.
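For reference next time (a sketch, assuming a Fedora-family distro like Nobara on UEFI): grub2-mkconfig only sees partitions that are actually mounted, so the classic gotcha is running it in a chroot where /boot and /boot/efi aren’t mounted yet.

```shell
# Inside a chroot of (or booted into) the installed system.
# Mount the boot partitions first; grub2-mkconfig can't scan
# filesystems that aren't mounted.
mount /boot        # if /boot is a separate partition (listed in /etc/fstab)
mount /boot/efi    # the EFI system partition

# Regenerate the config where the firmware's boot entry expects it
grub2-mkconfig -o /boot/grub2/grub.cfg

# On UEFI Fedora-family systems the EFI binaries are reinstalled via the
# package manager rather than grub2-install:
dnf reinstall shim-x64 grub2-efi-x64
```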
A few days ago I downgraded glibc (I’m a dumdum) because it was recommended in a reddit thread for a problem I was having. I couldn’t even chroot. Fortunately I could update with pacman --root.
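That trick is worth spelling out: when the installed system is too broken to chroot into, pacman from the live environment can operate on it directly. A sketch, with /mnt and the device name standing in for wherever your broken root actually lives:

```shell
# From an Arch-based live/rescue environment, broken system at /mnt.
mount /dev/sda2 /mnt   # placeholder device; check lsblk

# Point pacman's root, database, and cache at the mounted system so the
# host's (working) glibc does the heavy lifting:
pacman --root /mnt \
       --dbpath /mnt/var/lib/pacman \
       --cachedir /mnt/var/cache/pacman/pkg \
       -Syu glibc
```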
There is some stuff that I hate, but I tend to come back to it for my home server just because of livepatch, which is nice for minimizing the number of reboots needed, and having a patched kernel for all my LXCs means they’re automatically protected too.
People don’t hate on Ubuntu because it’s inherently bad. They hate on it because it’s a corporate distro and they do some questionable stuff sometimes. The OS runs fine.
Why not Debian unstable? It’s better than Ubuntu in pretty much every way IMO. Somewhat less user-friendly, I guess.
Debian unstable is not really unstable, but it’s also not as stable as Ubuntu. I’m told that when bugs appear they are fixed fast.
I ran Debian testing for years. That is a rolling release where package updates are a few weeks behind unstable. The delay gives unstable users time to hit bugs before they get into testing.
When I wanted certain packages to be really up-to-date I would pin those select packages to unstable or to experimental. But I never tried running full unstable myself so I didn’t get the experience to know whether that would be less trouble overall.
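For anyone curious, that pinning is done with an apt preferences file; a minimal sketch, where emacs is just a stand-in for whichever package you want fresher (unstable also has to be listed in your sources):

```
# /etc/apt/preferences.d/99pin-unstable (example only)
# Keep everything on testing by default...
Package: *
Pin: release a=testing
Pin-Priority: 900

# ...but let this one package (hypothetical choice) track unstable.
Package: emacs
Pin: release a=unstable
Pin-Priority: 990
```

A one-off `apt install -t unstable emacs` does roughly the same thing without the permanent pin.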
It’s relatively alright for something that’s called unstable. There is also testing which is tested for at least 10 days. And you can mix and match, but that’s not recommended either.
I wouldn’t put it on my server. And I wouldn’t recommend it to someone who isn’t okay with fixing the occasional hiccup. But I’ve been using it for years and I like it.
However, mind that it’s not supported and they do not pay attention to security fixes.
I used to run Debian testing on my servers. These days I don’t have much free time to mess with them, so they’re all running the stable release with unattended-upgrades.
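That combo is nicely low-effort to set up; on Debian stable it’s roughly this (standard Debian paths, default config):

```shell
# Install and enable automatic (security) updates on Debian stable
apt install unattended-upgrades
dpkg-reconfigure -plow unattended-upgrades  # writes /etc/apt/apt.conf.d/20auto-upgrades

# The defaults in /etc/apt/apt.conf.d/50unattended-upgrades already allow
# the security origin; to also reboot automatically when a kernel update
# needs it, set:
#   Unattended-Upgrade::Automatic-Reboot "true";
```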
To be clear, it can still get security updates, but it’s the package maintainer’s responsibility to upload them. Some maintainers are very responsive while others take a while. On the other hand, Debian stable has a security team that quickly uploads patches to all officially supported packages (just the “main” repo, not contrib, non-free, or non-free-firmware).
Thanks for clarifying. Yeah I implied that but didn’t explain all the nuances. I’ve been scolded before for advertising the use of Debian testing. I’m quite happy with it. But since I’m not running any cutting edge things on my server and Docker etc have become quite stable… I don’t see any need to put testing on the server. I also use stable there and embrace the security fixes and stability / low maintenance. I however run testing/unstable on my laptop.
It’s not actually unstable; more accurately, it’s tested and verified as much as Debian stable, meaning it’s fine for desktop use. But I wouldn’t use it for a server or critical system I plan on running 24/7 without interruption, both because it may develop bugs after long-term use and because the more frequent updates will be missed and leave it out of date quickly if it’s running constantly.
Unstable is pretty damn stable; it feels Arch-y to me, and Arch rarely has issues. If there are issues, they’re fixed fast.
Testing is the middle ground. Tested for a bit by the unstable peeps, but that’s it.
It’s unstable in the sense that it doesn’t stay the same for a long time. Stable is the release that will essentially stay the same until you install a different release.
Sid is the kid next door (IIRC) from Toy Story who would melt and mutilate toys for fun. He may have been a different kind of unstable.
Side question on this: why do people suggest Debian, a stable but “old” distro, but never mention RHEL / Rocky? They’re on par in stability (and quite possibly RHEL wins there). Did you know that you can get a free licence if you register as a developer?
If we pretend the issue is just the corporate aspects of Ubuntu/Canonical, Red Hat and RHEL have all of those and then some. People just try not to think about that because Fedora is so nice.
As for Rocky: its status is pretty much in massive flux, since Red Hat bounces between tolerating it and wanting it even deader than CentOS depending on the day.
Are we really back to the 00s? Are we going to start calling it Micro$hill next?
And “Legally it can’t be stopped” doesn’t really bode well for long term support in the context of contributors and so forth. It won’t prevent me from using Rocky (I actually really like it for servers I will likely re-image sooner than later) but it also means I am not going to recommend it to people looking for a distro.
When looking at the 8.x and 9.x releases, Rocky is the most popular enterprise Linux distro. Even more popular than R hell, and yes, I’m still bitter about what they did to CentOS.
As the other reply said, Fedora and RHEL harbor the same problem as Ubuntu in terms of corporate backing.
They’re all about as stable as it gets when it comes to Linux distros, all those “server distributions”.
I guess people recommend Debian because that’s what they know. It’s got the biggest community, so the most support.
Nothing against Rocky, but I won’t recommend something I’ve never used.
I prefer software with defaults that are in line with my preferences. I’d rather have sensible defaults and a nice OOTB experience instead of fighting my distro and its packages.
I loved Unity. Also, I would argue that both Snap and Flatpak are bad. That said, be happy with whatever works for you. Ubuntu always gives me problems, whereas Fedora runs smooth. Then again, Ubuntu can read my old Passports and Fedora can’t. They each have their benefits.
The beauty of Linux, at least for me, is the interdependence: you can run apps using less space than you would on Windows. Linux is like a metaphor for society: if your neighbour has something you need, they should share, and vice versa. But alas, some twats with a Windows fetish decided to introduce the likes of Flatpak and Snap 🤮
Snap is a steaming pile of excrement. So much of the crap on the Snap Store is obsolete and out of date. Anyone and their monkey can post a snap on snapcraft, and… they do. Canonical is just as bad. They took it upon themselves to package up a lot of commercial-level open-source software 3 or 4 years ago… and have done fuck all with it ever since. Zero updates to the original snaps they put there in the initial population of the Snap Store (yes, they do maintain a select few things, but only a small percentage of the flood of obsolete software). The result is people looking to install apps who poke the Snap Store, go “oh hey, the application I want is there”, install it, and then get all pissy with the vendor… who looks about in surprise wondering how a potential customer managed to find such an old version (this happened with at least 2 of my employers, and I’ve come across many more). Go search Reddit (or Google) for obsolete snap discussions. There’s no shortage of people pointing at the same issue.
This doesn't seem to be a problem with snap itself. Canonical probably tried to show vendors a way to distribute software commercially. But vendors are on the level of cavemen and don't know shit about Linux even when handed a solution. Or they simply don't care about building up a market opportunity.
I don't want to defend Ubuntu. I don't like Ubuntu especially, but it might be a simple explanation.
It’s a problem with Canonical. They stepped up and created the snaps and then abandoned them instead of maintaining them. They still maintain the core that they include with the distro… it’s all the extras they created to pad out the store… and then abandoned. “Look the snap store has so many packages”… yeah… no… it doesn’t.
Why would a company who makes a commercial-level open source package want to add snaps to their already broad Linux offering? They typically already build RPM (covering RHEL, Fedora, openSUSE, Mandriva, etc.) and DEB (covering Debian, Ubuntu, all Ubuntu derivatives, etc.)… and have a tar.gz to cover anything they missed. Why should they add the special-snowflake snap just to cover Ubuntu, which is already well covered by the DEB they already make?
Sure, show vendors what’s possible, but if Canonical stepped up to make the snaps, then they should still be maintaining them. It’s not a business opportunity… it’s more bullshit from Canonical that no one wants.