Just some Internet guy

He/him/them 🏳️‍🌈


Max_P,
@Max_P@lemmy.max-p.me avatar

The only reason it wouldn’t work is if OP has manually configured stuff in /etc/X11 in some way. You can even have both in the system at the same time (which does require a little bit of extra configuration). Absolute worst case, you check /var/log/Xorg.0.log and it tells you the config you forgot in /etc/X11/xorg.conf.d/20-nvidia.conf 5 years ago doesn’t work because the GPU is gone; you delete it, restart Xorg and you’re good to go.

Even on Windows it’s kind of a myth. Some people insist you need to DDU the old driver in safe mode before swapping them out. You can really have them both installed; it’s just going to be weird, because on Windows both vendors come with ridiculous amounts of bloat.

AMD cards just work as long as your distro is reasonably up to date. No extra drivers needed; in fact, installing AMDGPU-PRO is usually worse unless you fit some specific use cases.

Max_P,
@Max_P@lemmy.max-p.me avatar

Basically the idea is that if you have a lot of data, HDDs have much bigger capacities for the price, whereas large SSDs can be expensive. SSDs have gotten cheap, but you can get used enterprise HDDs on eBay with huge capacities for incredibly cheap. There are 12TB HDDs for like $100. 12TB of SSDs would run you several hundred.

You can slap bcache on a 512GB NVMe backed by an 8TB HDD, and you get 8TB worth of storage, 512GB of which will be cached on the NVMe and thus really fast. But from the user’s perspective, it’s just one big 8TB drive. You don’t have to think about what is where, you just use it. You don’t have to think, I’m going to use this VM, so I’ll move it to the SSD and back to the HDD when done. The first time might be super slow but subsequent use will be very fast. It caches writes too, so you can write up to 512GB really fast in this example and it’ll slowly get flushed to the HDD in the background. But from your perspective, as soon as it’s written to the SSD, the data is effectively committed to disk: if the application calls fsync to ensure data is written to disk, the fsync completes once it’s fully written to the SSD. You get NVMe read/write speeds and the space of an HDD.

So with one big disk for your Steam library, whatever you play might be slow on the first load, but as you play, the game files get promoted to the NVMe cache and perform mostly at NVMe speeds, and your loading screens are much shorter.

Max_P,
@Max_P@lemmy.max-p.me avatar

I don’t know, it’s going to depend a lot on usage pattern and cache hit ratio. It will probably do a lot more writes than normal to the cache drive as it evicts older stuff and replaces it. Everything has tradeoffs in the end.

Another big tradeoff, depending on the cache mode (i.e. writeback): if the SSD dies, you can lose a fair bit of data. Not as catastrophic as a RAID0 failure, but pretty bad. And you probably want writeback for the fast writes.

Thus I had 2 SSDs and 2 HDDs in RAID1, with the SSDs caching the HDDs. But it turns out my SSDs are kinda crap (they’re about as fast as the HDDs for sequential read/writes) and I didn’t see as much benefit as I hoped so now they’re independent ZFS pools.

Ubuntu 24.04 LTS Committing Fully To Netplan For Network Configuration (www.phoronix.com)

The Canonical-developed Netplan has served for Linux network configuration on Ubuntu Server and Cloud versions for years. With the recent Ubuntu 23.10 release, Netplan is now being used by default on the desktop. Canonical is committing to fully leveraging Netplan for network configuration with the upcoming Ubuntu 24.04 LTS...

Max_P,
@Max_P@lemmy.max-p.me avatar

What is even the value of Netplan on… desktop? Most people just pick their WiFi in the menu in Gnome. That sounds like a lot of unnecessary complexity.

For servers, sure, it’s fairly nice. But, desktop? Why?

Max_P,
@Max_P@lemmy.max-p.me avatar

Netplan’s been the default since 20.04 on the server side and the article says it’s coming to the desktop release with 24.04.

Max_P,
@Max_P@lemmy.max-p.me avatar

If you’re just using DHCP, you won’t notice it. What Netplan does is take a YAML input file and render it as a systemd-networkd or NetworkManager configuration file. It’s a very quick and easy way to configure your network, and it even has a try command that auto-reverts in case you get kicked out of your SSH session.
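For example, a minimal static-IP Netplan file might look like this (the interface name and addresses are placeholders, purely for illustration; the renderer key picks which backend the YAML gets rendered to):

```yaml
# /etc/netplan/01-static.yaml (hypothetical example)
network:
  version: 2
  renderer: networkd        # or NetworkManager
  ethernets:
    enp3s0:
      dhcp4: false
      addresses: [192.168.1.50/24]
      routes:
        - to: default
          via: 192.168.1.1
      nameservers:
        addresses: [1.1.1.1, 9.9.9.9]
```

Running sudo netplan try applies it and rolls it back after a timeout unless you confirm, which is what saves you over SSH.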

It seems like what they’re doing for the desktop is hacking up NetworkManager so that it saves back its config as Netplan configs instead of regular NetworkManager configs. That’s the part I’m confused about, because NetworkManager is huge and Netplan doesn’t support close to every option. Their featuresets are wildly different. And last time I checked, the NetworkManager renderer was the least polished one, with the systemd-networkd one being close to a 1:1 match and more reliable.

It made a lot more sense when it was one way only. Two way sounds like an absolute mess.

Would it be possible to have more algorithms?

I mean we can sort by new or hot or active etc. But can we add more algorithms that people can then choose? For example I’d love to have an algorithm that allows me to rate communities, so I could rate a meme community 1 and a chess community 10 so I only see the best of the meme community and most of the chess...

Max_P,
@Max_P@lemmy.max-p.me avatar

It’s supposed to come out soon but there’s some compatibility issues with apps still which may delay instances that want to remain as compatible as possible. It’s a pretty big change.

Max_P,
@Max_P@lemmy.max-p.me avatar

And like and comment as appropriate about the same as I would on Lemmy and used to on Reddit.

Not because they asked for it, but because I genuinely liked it and want to boost it, and because I genuinely have something useful to say/add to it in the comments.

Am I running the risk of getting my Google account banned for logging into the Aurora Store or a custom rom like GrapheneOS?

I guess there is no need to introduce what a degoogled phone is (or a custom ROM without Google services, like GrapheneOS), and the Aurora Store is, to put it crudely, basically the Google Play Store without the need to log in to your Google account. Quite useful in my opinion....

Max_P,
@Max_P@lemmy.max-p.me avatar

Yes, but only if you use their GApps packages unmodified or don’t use their services at all. They don’t look as kindly on microG and Aurora, or even ReVanced, and they still fight to make sure Google Pay doesn’t work through SafetyNet and Play Integrity, and you’ll only find out at checkout too.

People have been banned for using Aurora. You can mess with the OS but they don’t want you to mess with their apps and especially not if it affects how much money they make off you.

Max_P,
@Max_P@lemmy.max-p.me avatar

Android apps are still heavily based on Java. But running Java bytecode at runtime has proven to be fairly slow.

So modern Android does something called ahead-of-time (AOT) compilation, where it compiles the Java bytecode to native code at installation.

After an OTA, those are all invalidated because system libraries may have changed, so it needs to redo that. That’s what it’s doing.

If you adb shell into the device while it’s doing that, you’ll see it’s running dex2oat on all your apps.

Yes it’ll do it if you flash it directly too. For the update process purposes, there’s very little difference between regular OTAs and flashes in recovery. You just don’t see it on first boot because you’re busy doing the initial setup screens and system apps are usually precompiled and baked into the ROM anyway. No third-party apps installed, no optimization to do.

They could probably just do it in the background silently, but it does give an indication to the user that heavy stuff is going on in the background and thus explains why the device is running a bit slower, a bit hotter and using more battery than normal. A lot of things a regular user could be concerned about immediately after an update, so you’re better off letting the user know stuff is happening and that it’ll settle down.

Max_P,
@Max_P@lemmy.max-p.me avatar

It does it post reboot in both cases.

I don’t know why OTAs are so slow to install. Maybe it’s slow to conserve battery or not affect phone performance too much? No idea.

It does the same thing in the end.

Max_P,
@Max_P@lemmy.max-p.me avatar

It’s been slow for quite a while for me too, and it can’t even be the ads because I have Premium.

It’s usually fine when it’s loaded but it does take quite a while to load for some reason, and I’ve got gigabit fiber and 16 cores to process it.

I heard YouTube falls back to a very slow path on Firefox because it uses features that Chrome implemented but that never made it into the standard; something else was adopted instead.

Max_P,
@Max_P@lemmy.max-p.me avatar

Similarly, the high availability of source code may lead to malicious instances, actors, and/or back-end modifications that would favor specific instances, with resounding consequences throughout the Fediverse.

That’s ultimately just the Internet being the Internet.

On the fediverse, any instance shouldn’t blindly trust any other instance for that exact reason. That’s part of the game. Instances share the data over ActivityPub, and it’s up to you to process and make use of that data. That includes spam filtering and whatnot. Some instances have CSAM detection for example.

Every instance that’s subscribed to a user or community gets the full set of data: every vote, from every user, from every instance involved. We have the data, we can analyze it. And that’s what really matters.

It doesn’t matter if there are rogue instances trying to manipulate votes. Everyone has the data to detect and filter out the noise. Maybe one day it’ll be like email, where the majority of the traffic is spam. But just like email, we’ll build filters and make it work. If all else fails, there’s always the allowlist method: only see content from sources you trust not to be spammy. You can even run AI models on it to filter the data if you want. You have the data; you can do whatever you want with it to make it useful for you.

I have faith in the protocol and its openness, not the software that runs it.

Max_P,
@Max_P@lemmy.max-p.me avatar

Kind of, but not really? You’d have to federate out every vote individually. There are no upvote totals anywhere; there’s a vote table that contains who voted up/down on what, and it’s counted as needed. So if you want to send out 1000 votes, you need 1000 valid users, and you also have to send 1000 different activities to at least one instance.

You can make it display 100000 votes on your own instance if you want, but it’s not going to alter the rating on other instances because they run their own tally.

If you really want this to work long term, you need a credible looking instance with credible looking users that are ideally actually subscribed to the target community, and credible activity patterns too. Otherwise, the community can detect what you’re doing and defederate you and purge all the activities from your instance, and also revert all those votes as a side effect.

Remember, all votes are individual activities, and all votes are replicated individually to every instance. On Kbin, you can even see all the votes right from the UI; they don’t even hide it! You can count them yourself if you want. So anyone with the dataset can analyze it and sound the alarm. And each instance can potentially have its own algorithm for that, so instead of having just one target to game, like Reddit and a subreddit, you have hundreds of instances to fool. There are so many signals I could use to fight spam: instance age, instance user growth, the frequency and timing of the votes, whether the users are seemingly active 24/7, what other communities those users post in, what they’re voting for, whether they all vote in agreement with each other, and on and on.

So, you technically can manipulate votes but it takes a lot of effort and care to make it as hard as possible to detect in practice. We play the same cat and mouse game as Reddit, but distributed and with many more eyes on it.

Max_P,
@Max_P@lemmy.max-p.me avatar

It’s mostly better, but not in every way. It has a lot of useful features, at a performance cost sometimes. A cost that historically wasn’t a problem with spinning hard drives and relatively slow SATA SSDs but will show up more on really fast NVMes.

For snapshots, it has to keep track of what’s been modified. Depending on the block size, an update of just a couple bytes can end up as a few 4k writes, because it’s copy-on-write and it has to update a journal and the block list of the file. But at the same time, copying a 50GB file is instantaneous on btrfs because of the same CoW feature. Most people find the snapshots more useful than eking out every last bit of performance out of your drive.

Even ZFS, often considered to be the gold standard of filesystems, is actually kinda slow. But its purpose isn’t to be the fastest, its purpose is throwing an array of 200 drives at it and trusting it to protect you even against some media degradation and random bit flips in your storage with regular scrubs.

Max_P,
@Max_P@lemmy.max-p.me avatar

People have been running Ext4 systems for decades pretending that if Ext4 does not see the bitrot, the bitrot does not exist. (then BTRFS picks up a bad checksum and people scold it for being a bad filesystem)

ZFS made me discover that my onboard SATA controller sucks and returns bad data occasionally under heavy load. My computer cannot complete a ZFS scrub without finding errors, every single time.

Ext4, bcache and mdadm never complained about it, ever. There was never any sign that something was wrong at all.

100% worth it if you care about your data. I can definitely feel the hit on my NVMe but it’s also doing so much more.

Max_P,
@Max_P@lemmy.max-p.me avatar

RAID doesn’t checksum and heal the rotten data. It’s game over before you even have a filesystem on top of it, because said filesystem can’t directly access the underlying disks because of the RAID layer.

Errors will occur, and RAID has no way of handling them. You have a RAID1, disk 1 says it’s a 0, disk 2 says it’s a 1. Who’s right? RAID can’t tell; btrfs and ZFS can. RAID won’t even notice there’s a couple flipped bits, it’ll just pass them along. ZFS will retry the read on both disks, pick the block that matches the checksum, and write the correct data back on the other disk. That’s why people with lots of data love ZFS and RAIDZ.

The solution isn’t more reliable hardware; the solution is software that can detect and recover from your failing hardware.

Max_P,
@Max_P@lemmy.max-p.me avatar

Precisely. It’s not just “it works”, it’s third-party hardware that Canonical tests, certifies and commits to support as fully compatible. They’ll do the work to make sure everything works perfectly, not just when upstream gets around to it. They’ll patch whatever is necessary to make it work. The use case is “we bought 500 laptops from Dell and we’re getting a support contract from Canonical that Ubuntu will run flawlessly on it for the next 5 years minimum”.

RedHat has the exact same: catalog.redhat.com/hardware

Otherwise, most Linux OEMs just focus on first party support for their own hardware. They all support at least one distro where they ensure their hardware runs. Some may or may not also have enterprise support where they commit to supporting the hardware for X years, but for an end user, it just doesn’t matter. As a user, if an update breaks your WiFi, you revert and it’s okay. If you have 500 laptops and an update breaks WiFi, you want someone to be responsible for fixing it and producing a Root Cause Analysis to justify the downtime, lost business and whatnot.

Max_P,
@Max_P@lemmy.max-p.me avatar

What Bluetooth controllers are you using? Is the Linux/Windows machine the same machine?

Not all Bluetooth chips are equal. My phone will do Bluetooth all the way at the end of my backyard, but my desktop’s Bluetooth doesn’t even reliably reach the next room over.

I doubt it’s the headset, unless it’s defective and you need a replacement, those are pretty well regarded. I have a cheaper model and it’s been a flawless experience for years.

Could Lemmy be used as a classroom tool (like having a classroom's own instance)

I feel like it would be an interesting learning tool cuz I learn a ton on here and it gets me writing without anyone having to hold a gun to my head. I mean like even essay-length or at least essay-worthy treatments of things I respond to in longer-form, and even for the shorter-form stuff

Max_P,
@Max_P@lemmy.max-p.me avatar

There’s a “Private instance” checkbox in the admin that seems like it would do exactly what you’re thinking. No need to defederate, you can turn federation off entirely.

Max_P,
@Max_P@lemmy.max-p.me avatar

You certainly do lose a lot of features, but you still have the advantages of offering users a somewhat familiar platform (they’re likely familiar with Reddit already), and all of the third-party apps we have for Lemmy already. So even though you could just as well host a Facebook group or a phpBB forum or whatever, I’d probably prefer that as a student because I can log into my favorite app and use it seamlessly. And a single-node instance like that would be very privacy friendly as well. So if OP wants user engagement on a private platform, it’s not as bad an idea as it seems, even though without federation you’re not getting the most of it.

Max_P,
@Max_P@lemmy.max-p.me avatar

They claim they didn’t ruin the Internet, and yet every single one I’ve worked with very aggressively keyword-stuffed the shit out of the sites, even running a blog with fake authors and carefully written junk top-10 blog posts to bring in as much traffic as possible. I’ve even discovered they exploited WordPress instances to stuff links to our site into them, when they weren’t just leaving junk comments with a link to the website.

They’re the very reason so many sites have so many fucking useless tutorials and top 10s and whatnot. They go after search engines, and in that process you gotta make your site appear to have loads of articles and content about a topic so it gets favored in search engines.

Max_P,
@Max_P@lemmy.max-p.me avatar

I think it’s just not implemented because the devs didn’t know it was an important feature to prioritize. It does support saved posts, which I imagine is what the devs anticipated would be most useful for users.

Gotta keep in mind we’re just about to get 0.19.0, it’s not even v1 yet so a lot of things are still missing.

But yes the data exists, in fact I have the vote history of everyone my instance has seen in the communities I’m subscribed to.

Max_P,
@Max_P@lemmy.max-p.me avatar

All the admins with database access can, yes.

This is the fediverse, everything has to be public and auditable otherwise vote manipulation would be easy.

If you want to really be anonymous, your best bet is to use multiple accounts on multiple instances. If you upvote sketchy stuff, use the anonymous account, and otherwise, use your regular more public account.

Max_P, (edited )
@Max_P@lemmy.max-p.me avatar

Boost for Lemmy doesn’t appear to be, last update was in September.

I’ve been using a custom proxy to rewrite auth for Boost to work since my accidental 0.19 update. It’s been working great considering how quick and dirty it is.

My partner also reports that Mlem chokes completely on 0.19’s JSON response. I suspect fields were added and it’s not coded to ignore extra fields when deserializing. I haven’t confirmed it’s still present on rc4; I’ll update my instance and report back. Edit: confirmed Mlem still doesn’t work even with the proxy hack. Without the hack it can’t authenticate at all. The latest git main branch might work; this is based on the current 1.1 release, which is about a month old.

Max_P,
@Max_P@lemmy.max-p.me avatar

The mesh proxy would work, but it’s not easy to configure and for somewhat little benefit, especially if they’re all running on the same box. The way that’d work is, NGINX would talk to the mesh proxy which would encrypt it to the other mesh proxy for the target container, and then it would talk to the container unencrypted again. You talk to 3 containers and still end up unencrypted.

Unless you want TLS between nodes and containers, you can skip the intermediate step and have NGINX talk directly to the containers plaintext. That’s why it’s said to do TLS termination: the TLS session ends at that reverse proxy.

Max_P,
@Max_P@lemmy.max-p.me avatar

I got hit by that, basically forced me to make a Google account and add all my sites to it even though I couldn’t care less about SEO and indexing. Now it keeps sending me spam emails about “problems” with my websites. No, I’m intentionally not letting you index this.

What seems to be going on is it’s flagging random widespread open-source software as impersonation/phishing login page because it’s seen it on a bigger site and assumes you’re doing some phishing.

Filed an appeal and it thankfully promptly got resolved. Google ain’t known to be friendly to developers.

I want to like that feature because I’m sure it’s helpful for the less technically savvy. But I hate that Google can just decide my site is unsafe and essentially cut my sites off the Internet for most people. If Google denies your appeal you have basically zero recourse.

Max_P,
@Max_P@lemmy.max-p.me avatar

I’d go Docker for the maturity. Podman is nice but I’ve definitely had some issues, and Buildah lacks any sort of caching and does unnecessary intermediate copies of the layers when pushing to a repository that really slows things down on larger apps/images.

Max_P,
@Max_P@lemmy.max-p.me avatar

Maybe they changed it since last year, but it wouldn’t cache layers for me. Every time I’d rebuild the app, it would re-run all the actions from the Containerfile. So a whole npm install on each build even though I only changed a source file. Building the exact same file with Docker cached every layer as expected, so a config change would only change the last layer and be basically instant vs 5 minutes.

The other issue with pushing to a registry was that it made a whole temporary tar of the image, then gzip it to disk again before starting to upload it. It blew up the disk space I had allocated to my VM really fast, and made uploading those images take minutes instead of seconds. Docker again seemingly does it all in a streaming fashion as it uploads, making it much faster.

This could have changed though, it’s evolving fast. Just didn’t fit my use case then. But because of those experiences, I’d say it’s probably a safer bet to learn Docker first since documentation is abundant, and there are no little “oh I’m using Podman and have to use a slightly different syntax” gotchas to run into that make it hard for you.

Max_P,
@Max_P@lemmy.max-p.me avatar

They really should unbundle YouTube Music, or make a plan for people who want nothing more than ad-free and price that tier at basically what they’d get from serving you ads.

They’re trying too hard to sell Premium as having all of those perks and extras that not everyone wants.

Does federation connect to a single lemmy network, or can there be multiple?

When a lemmy instance federates, does it connect to one big lemmy network, or can there be multiple disconnected, yet locally federated instances? What I’d like to know is, can I simply join any Lemmy server and choose “All” to view everything Lemmy has to offer, or is there still hidden content?...

Max_P,
@Max_P@lemmy.max-p.me avatar

and then it will reach out to other instances to grab content from every external community that at least one local user has subscribed to

It’s the other way around. The local user subscribes to the community on the remote instance, which causes the remote instance to then push you every action that occurs in that community as it happens. The pull method is only used once and doesn’t bring in comments; it’s meant as a preview for when a remote community is seen for the first time.

And this is why their content won’t make it to your instance: it expects the other instance to send it to you, but they’re refusing to. Similarly, they won’t accept content from your instance, even though it’s trying to.

Local and remote communities are pretty similar internally, federation happens as a separate process in a queue system.

This leads to this:

you can still subscribe to subs on defederated instances, it’s just the interactions that don’t get passed back and forth.

Max_P,
@Max_P@lemmy.max-p.me avatar

There are a few WebSocket solutions out there. It basically looks like a normal HTTPS connection to something that uses a WebSocket, so unless they’re decoding TLS and the HTTP session and somehow recognize that this is a VPN and block it, it’ll likely go through.

Elon Musk gives X employees one year to replace your bank - ‘You won’t need a bank account... it would blow my mind if we don’t have that rolled out by the end of next year.’ (www.theverge.com)

“If it involves money. It’ll be on our platform. Money or securities or whatever. So, it’s not just like send $20 to my friend. I’m talking about, like, you won’t need a bank account.”...

Max_P,
@Max_P@lemmy.max-p.me avatar

Let’s put all our money on a platform rushed out in under a year by an exhausted and abused team of engineers. That sounds like a great idea. Definitely will be well thought out and reliable.

Max_P, (edited )
@Max_P@lemmy.max-p.me avatar

```c
fp = fopen("data.json", "r");
fread(buffer, 1024, 1, fp);
fclose(fp);
```

You’re only reading the first 1024 bytes, and that document is much larger than 1024 bytes, so the JSON library gets an incomplete JSON document that ends unexpectedly. You need a much larger buffer than 1024 bytes (ideally dynamically allocated to the right size, or expandable). fread can also fail or read less than the specified amount of bytes. The correct way to use fread is to keep calling it until it returns 0 and feof returns true, advancing the pointer into the buffer by whatever fread returns, since you can’t assume it will always return the full requested amount of data. If the OS has 256 bytes of the file available and you request 1024, it can return you the 256 immediately while it goes and fetches the other 768 from disk. It usually doesn’t, but it can:


```
RETURN VALUE
       On success, the number of bytes read is returned (zero indicates end of
       file), and the file position is advanced by this number.  It is not an
       error if this number is smaller than the number of bytes requested;
       this may happen for example because fewer bytes are actually available
       right now (maybe because we were close to end-of-file, or because we
       are reading from a pipe, or from a terminal), or because read() was
       interrupted by a signal.  See also NOTES.
```

Read man 3 fread and man 2 read for more details, or look it up online.

Any particular reason it has to be C? That would be much much easier in Python or JS since you don’t have to worry about that kind of memory management. It’s a good learning experience though, C is useful to know even if you never use it, since everything ends up in libc.

Max_P,
@Max_P@lemmy.max-p.me avatar

seeing what fread does, that is a nightmare.

It’s really not all that bad considering you’re pretty close to talking directly to the kernel. The C language itself is pretty simple: it doesn’t come with any built-in functions or anything; it’s code, and it gets compiled to a binary. But we need a way to talk to the operating system, and that’s where the C standard library, or libc, comes in. It contains a standard set of operating system utilities and functions that you can expect most operating systems to implement. The C standard library is pretty old, from an era when a megabyte of RAM was a lot of RAM and every CPU cycle counted. So it doesn’t do a whole lot: it’s supposed to be the building block for everything, so it needs to be fast, flexible and as small as possible.

What you want is nicer C libraries that make those things easier to work with. Or write a function whenever you encounter something repetitive; in this case it’s probably 20-25 lines to properly implement the necessary malloc, fopen and fread, and then you’re done with it forever. Copy-paste it into your next project, or build yourself a library of C gadgets you accumulate over time.

If you’re looking for an experience close to C but a little more batteries-included, you might want to consider C++, which still gets modern new features. Most C code is valid C++ code, so it’s not like learning an entirely new language. Reading a whole file, for example, is a lot more straightforward (source):


```cpp
std::ifstream t("file.txt"); // input file stream of "file.txt"
std::stringstream buffer;    // a buffer for an arbitrarily long string
buffer << t.rdbuf();         // send everything from the file's read buffer into the string buffer
```

You can, however, appreciate how close you’re working with the hardware and kernel, just like the Arduino: the interface with the kernel (on Linux) to read a file is basically:

  • Program: "please read up to N bytes at address X for this file descriptor"
  • Kernel: puts N bytes at X “here’s how many bytes I was able to get for you”

That’s the whole, unabridged thing. From your perspective, you made a syscall and the data magically appeared in your memory. The kernel has no idea what your intentions are; you just ask it to read stuff and it gives it to you. If you misplace it or tell it to read too much or not enough, it doesn’t know, it obliges. If you want to read the whole file, it’s your job to keep asking the kernel for more, just as the kernel keeps asking the hard drive for more. That’s exactly what read(fd, buffer, bufsize) does, nothing more, nothing less. It’s the smallest unit of work you can ask for, and it’s efficient. The data possibly went straight from your NVMe to the memory of your program, copied exactly once.

And this is why fread is the way it is, as are most libc things. Interestingly, even that has become too inefficient, and now we have things like io_uring, that’s even more complicated to use but really really fast.

It’s a whole rabbit hole and that’s why we use libraries and avoid using those too much.

Max_P,
@Max_P@lemmy.max-p.me avatar

Updated, good catch!

Max_P,
@Max_P@lemmy.max-p.me avatar
  • Flip phone
  • HTC Legend
  • Galaxy Nexus
  • HTC One M8
  • Nexus 5
  • Alcatel OneTouch Idol 3 (boy that one sucked)
  • HTC One M8 (same device, just finally got S-OFF on it to use it with my carrier despite “incompatibility”)
  • Galaxy S7
  • OnePlus 8T
Max_P,
@Max_P@lemmy.max-p.me avatar

I think 0.19 is reverting that behaviour, because it was indeed a certified bad idea.

I think the idea was to attempt to bulletproof potentially crappy clients, especially after the XSS incident, but the problem is that it’s simply not always rendered in a web context, which makes the processing kind of a pain.

Wouldn’t surprise me if it becomes double and triple encoded too at times because of the federation. Do you encode again or trust that the remote sent you urlencoded data already?

Best format is the original format and transform as late as possible, ideally in clients where there’s awareness of what characters are special. It is in web, not so much in an Android or terminal app.

I don’t think the Lemmy devs are particularly experienced web developers in general. There’s been a fair amount of dubious API design decisions like passing auth as a GET parameter… Thankfully they also fixed that one in 0.19.

Max_P,
@Max_P@lemmy.max-p.me avatar

Because then you need to take care everywhere to decode it as needed and also make sure you never double-encode it.

For example, do other servers receive it pre-encoded? What if the remote instance doesn’t do that, how do you ensure what other instances send you is already encoded correctly? Do you just encode whatever you receive, at risk of double encoding it? And generally, what about use cases where you don’t need it, like mobile apps?

Data should be transformed where it needs it, otherwise you always add risks of messing it up, which is exactly what we’re seeing. That encoding is reversible, but then it’s hard to know how many times it may have been encoded. For example, if I type &amp; which is already an entity, do you detect that and decode it even though I never intended to because I’m posting an HTML snippet?
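Python’s html module makes that ambiguity easy to demonstrate (hypothetical strings, just to show the failure mode):

```python
from html import escape, unescape

# Escaping once is correct for HTML display...
assert escape("AT&T") == "AT&amp;T"

# ...but escape what was already escaped and you get the double-encoding bug:
assert escape("AT&amp;T") == "AT&amp;amp;T"

# Decoding is ambiguous too: did the user mean "&", or did they
# literally type the snippet "&amp;"? Both decode the same way.
assert unescape("&amp;") == "&"
```

There’s no reliable way to tell those apart after the fact, which is why transforming as late as possible, in the renderer that knows it’s producing HTML, is the safe default.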

Right now it’s so broken that if you edit a post, you get an editor… with escaped HTML entities. What happens if you save your post after that? It’s double encoded! Now everyone and every app has to make sure to decode HTML entities and it leads to more bugs.

There is exactly one place where it needs to encode, and that’s in web clients, more precisely, when it’s being displayed as HTML. That’s where it should be encoded. Mobile apps don’t care they don’t even render HTML to begin with. Bots and most things using the API don’t care. They shouldn’t have to care because it may be rendered as HTML somewhere. It just creates more bugs and more work for pretty much everyone involved. It sucks.

Now we have an even worse problem: we don’t know which posts are encoded which way, so once 0.19 rolls out and there are version mismatches it’s going to be a shitshow and may very well lead to another XSS incident.

Max_P,
@Max_P@lemmy.max-p.me avatar

It still leads to unsolvable problems like: what is expected when two instances federate content with each other? What if you use a web app on a third-party instance and it spits out unsanitized data?

If you assume it’s part of the API contract, then an evil instance can send you unescaped content and you got an exploit. If you escape it you’ll double escape it from well behaved instances. This applies to apps too: now if Voyager for example starts expecting pre-sanitized data from the API, and it makes an API call to an evil instance that doesn’t? Bam, you’ve got yourself potential XSS. There’s nothing they can do to prevent it. Either it’s inherently unsafe, or safe but will double-escape.

You end up making more vulnerabilities through edge cases than you solve by doing that. Now all an attacker needs to do is find a way to trick you into thinking they have sanitized data when it’s not.

The only safe transport for user data is raw. You can never assume any user/remote input is pre-sanitized. Apps, even web ones, shouldn’t assume the data is sanitized, they should sanitize it themselves because only then you can guarantee that it will come out correctly, and safely.

This would only work if you own both the server and the UI that serves it. It immediately falls apart when you don’t control the entire pipeline from submission to display, and on the fediverse with third party clients and apps and instances, you inherently can’t trust anything.

How do you understand federation?

I don’t really want a definition of what the fediverse should be or was initially envisioned to be. I just want to understand how people actually use it. I started wondering because I felt the talks about its current state and growth stumble in invisible misunderstandings about the basic nature of what we are using or how we...

Max_P,
@Max_P@lemmy.max-p.me avatar

I’m a software engineer so I understand federation in what it really does.

But the common explanation for users is, it works like email: you can have a Gmail and send an email to someone using Yahoo, and it just works. You don’t have to make a Yahoo account to email people still using Yahoo.

That prevents companies from taking the majority of the users and locking it down to outsiders and force people to use a particular instance. People like Elon can’t buy Mastodon as a whole, it’s simply not possible. And if they buy a big instance like mastodon.social, and start charging $10/mo to use it, people can just move to another instance and it’s as if nothing happened.

It guarantees a minimal level of interoperability and resilience, at least for a while. There’s no single Lemmy or Mastodon that can be bought or go under and close down. The content is replicated, so even if an instance goes poof, most of the content will remain on other instances. It can’t become paywalled, or if it does, people would be actively choosing to post their stuff behind a paywall.

Unlike Twitter and Reddit, Lemmy instances don’t have to worry about appeasing all jurisdictions at once. Americans can use instances that abide by US laws, European instances abide by European laws, Australian instances abide by Australian laws. There might be some defederation going on for legal reasons, but at least you’re not being cut off from the whole network, just bits of it. It doesn’t have to push you to an entirely different service. You can still talk to worldwide communities that are legal for your instance to federate with. There’s no single company there to force you to abide by US/EU/AUS laws even if you and your community members are on the opposite end of the globe. If anything, it prevents a single country from dictating what is allowed on social media.

Max_P,
@Max_P@lemmy.max-p.me avatar

You can also chain rEFInd to GRUB if you don’t want to mess with that.

Max_P,
@Max_P@lemmy.max-p.me avatar

The key is understanding how divisions between 0 and 1 work. Say you take 2/0.5, you end up with 4. 2/0.25 you end up with 8. As you can see, those numbers get big fast. 1/0.0001 is 10,000.

As you approach 0, you get increasingly large numbers. If you flip it negative, again as you approach -0, you have increasingly big negative numbers. As you approach 0 from both sides, you approach positive and negative infinity. But what goes in the middle, at exactly zero? We don’t know; there’s no sensible value there, so it’s considered to be undefined.

In computers, it’s usually either an error, or represented with NaN (Not a Number) when you want to avoid throwing an error condition. NaN is defined so that any operation involving NaN is also NaN, so your entire equation becomes NaN.

Easiest way to visualize that is to input y = 1/x in a graph calculator (Desmos is nice). You clearly see it shooting off to negative infinity on one side of zero and positive infinity on the other.

With some other operations like negative square roots, we’ve made up the imaginary number i, which is defined to be the square root of -1, and we can make it do useful things. But what can we reasonably do with the result of dividing by zero? It behaves like infinities: you can’t really add or multiply them, or divide them. You’re just stuck with it. It’s impossible to represent, it’s Not a Number.
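IEEE 754 floats, which Python uses under the hood, show all of this directly (a small illustrative snippet):

```python
import math

# Divisors between 0 and 1 make the result grow, as in the 2/0.5 example:
assert 2 / 0.5 == 4
assert 2 / 0.25 == 8

# Exactly zero has no sensible answer; Python raises instead:
try:
    1 / 0
except ZeroDivisionError:
    pass

# IEEE 754 can represent the limits and the "undefined" case directly:
inf = float("inf")
nan = float("nan")
assert math.isnan(inf - inf)  # infinities don't subtract meaningfully
assert math.isnan(nan + 1)    # NaN swallows the whole expression
assert nan != nan             # NaN isn't even equal to itself
```

That last line is the formal version of “you’re just stuck with it”: once NaN appears, every operation and even equality refuses to give a useful answer.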

why cant Steam Deck detect display resolution?

Every PC I’ve ever used automatically detects and adjusts resolution to the display you connect to it. Even Nintendo Switch will detect when it’s docked and automatically adjust the display resolution. But on Steam Deck you literally have to adjust the display resolution for every game, every time you switch displays....

Max_P,
@Max_P@lemmy.max-p.me avatar

Some games will also detect the resolution when generating their default graphics settings, then save and reuse it. So if you then plug in an external display, the game doesn’t even look for that; it loads the last saved settings, which would be 720p.

Also, the Deck runs games under gamescope, which has its own upscaling. So it probably sets the virtual screen to the Deck’s native resolution regardless of what’s plugged in, and hopes FSR is good enough, minimizing the need to also switch other graphical settings for the game to run properly at a higher resolution.

Not sure why it would do that for streamed games though. How’s the game on the PC even aware of the resolution of the Deck?

Max_P,
@Max_P@lemmy.max-p.me avatar

I was gonna call antialiasing and smoothing settings, but

With the same font smoothing/anti-aliasing settings

Where did you set those? Do other fonts like Noto / Liberation also look different on Gnome? Is it a difference between GTK vs Qt rather than which DE it runs on?

I don’t have Gnome so I can’t compare directly, but on KDE fonts look identical between GTK and Qt applications, and the compositor isn’t involved with font rendering. Which leads me to, some settings have to be different on Gnome vs KDE.

Max_P,
@Max_P@lemmy.max-p.me avatar

Can you share the exact settings and screenshots to compare?

Max_P,
@Max_P@lemmy.max-p.me avatar

Ultimately this is what it’s running in the background: github.com/ValveSoftware/Fossilize

The idea is to make sure your graphics card’s shader cache is filled with everything the game may use at some point, enabling smoother play and less hitching.

I think on NVIDIA, the cache ain’t that big by default so it may be recompiling everything from scratch, whereas it’s less noticeable on AMD systems because it’s already compiled it so only compiles what’s changed/new.

This issue suggests it’s pretty broken on NVIDIA right now: github.com/ValveSoftware/steam-for-linux/…/9803

Max_P,
@Max_P@lemmy.max-p.me avatar

Nope. I tried it as a stopgap solution and it’s basically unusable. Literally unusable: sometimes after opening it from a deeplink from Google, the app can’t launch even after a force stop. It goes to a splash screen and calls itself “Popular” instead of Reddit, the splash icon is some random community or user icon, and then it crashes to the home screen. Clearing the cache doesn’t get you out of it; you have to clear data and sign in again. Not to mention the horrible lag and sluggishness.

They can’t fix theirs so instead of competing fairly, they shut down the API so you have no other option.
