Just some Internet guy

He/him/them 🏳️‍🌈


Max_P,
@Max_P@lemmy.max-p.me avatar

My instance exists for me and my friends to use. It’s not meant to attract anybody, it’s meant to serve me.

It costs me nothing and I’m permanently in control of my data, and it’ll live however long I want it to live, it updates when I decide I want to update it, if I want features I can just patch them in. When I make a PR, it goes on my instance first to try it out properly. I can post 10GB files from my instance if I want to, I’m the one that will pay for the bandwidth in the end.

I bet if you look at the profile of the admin of those “abandoned” instances, you’ll find they’re active on Lemmy. They just have their own private instance just for themselves.

Doesn’t matter if lemmy.world or lemmy.ml or beehaw.org goes down: I still got all the content and they’ll eventually federate out when they come back up.

Max_P,
@Max_P@lemmy.max-p.me avatar

Guess I should have said it costs me nothing extra because I already own the server.

Although Oracle’s free tier exists.

dunglas, to random French
@dunglas@mastodon.social avatar
Max_P,
@Max_P@lemmy.max-p.me avatar

That’s just how poorly Mastodon and Lemmy integrate with each other.

Max_P,
@Max_P@lemmy.max-p.me avatar

I wish I did that, at this point my TypeScript template errors are as long as C++'s ._.

Max_P,
@Max_P@lemmy.max-p.me avatar

Apart from Debian, I guess Alpine. It's quite popular in containers for its small size. Even Arch will be much bigger in that case because its packages are much less granular and install development libraries and headers for just about everything.

Max_P,
@Max_P@lemmy.max-p.me avatar

Downvotes do federate, but they use protocol extensions to do it. So downvotes won't federate to Mastodon, but they do for Lemmy and I think Kbin too

Max_P,
@Max_P@lemmy.max-p.me avatar

My network randomly drops. A restart fixes it, but I can't even download Cyberpunk with my 1GB connection before it crashes. Klogs showed something about the network manager successfully shutting down but I can't find much else.

Share the output of sudo dmesg as well as sudo journalctl -u NetworkManager | cat. The first is the kernel log, which shows what's going on with your connection, and the second one is from the utility that manages networking on most systems (there's alternatives but pretty sure Manjaro uses NM). It should give us more info as to the reason for the disconnections.

No Radeon software. I sometimes need to record clips/stream, so ReLive is nice, but the biggest problem is my second 1080p monitor, which I use Super Resolution on to fit more programs on it. I can't find a way to replicate that functionality. I also do not know how to control Radeon anti-lag, chill, Smart Memory Access, etc.

Most of these things are more deeply integrated on Linux, so you don't need to worry about them for the most part. Some of them are also buzzwords for marketing purposes for features that really should be default on, which on Linux, when it's reasonable, do default to on. For example, you don't turn Smart Memory Access on: if it can use it, it will use it. Same with VRR, at least on Wayland: just on by default on KDE.

  • ReLive: you can use any screen recorder that will work on any GPU. Right now with the Wayland transition it's a bit weird and OBS is the better choice there, but on an Xorg session you can just use something like Simple Screen Recorder. On KDE, Spectacle, the default screenshot utility, also has the ability to record short video clips, but it can be a little buggy.
  • Super Resolution: just set the monitor's scaling to less than 100% in the display settings. It's technically probably better than Super Resolution for apps that support <100% scaling, because instead of making a fake 4K display, for example, it'll still render everything at 1080p but cause apps to render smaller, achieving the same result with the potential of remaining pixel perfect. It won't be doing any AI scaling though, so YMMV.
  • Anti-lag: it's kind of a hack, and on Linux we're trying to get things right for the graphics stack with Wayland. But if you're running Wayland, KWin is already doing what it can to reduce lag on the desktop, and individual applications have to implement similar methods if they want to. Have you run into specific things where it's noticeable? Linux is generally pretty good when it comes to input lag already.
  • Chill: you can run games in Valve's gamescope wrapper to limit framerate. That's exactly how they do it on the Steam Deck. You can also use CoreCtrl to underclock the GPU.
  • Smart Memory Access: it's just marketing for Resizable BAR, and it's on by default. You can check with sudo dmesg | grep BAR=, if it's greater than 256M and equal to your GPU's memory size, it's working.

    [    7.139260] [drm] Detected VRAM RAM=8176M, BAR=8192M
    [    7.576782] [drm] Detected VRAM RAM=4096M, BAR=4096M

HDR controls. Nothing in the display settings so I'm lost

Yeah that one's still WIP unfortunately. It's technically possible on Xorg but you have to run everything HDR all the time and things break. It's coming along fairly well!

Alternative software: I haven't spent a lot of time looking, but things like Wallpaper Engine, Rainmeter, PowerToys.

  • Wallpaper Engine -> KDE's desktop backgrounds have a lot of options to do similar stuff, including animated wallpapers. Go to change your wallpaper, there's a button to download new modules and new backgrounds. For example: store.kde.org/p/1413010
  • Rainmeter -> Conky, or KDE's desktop widgets. Right-click on your desktop, add graphical component.
  • PowerToys -> A lot of those have built-in and better equivalents. Fancy Zones: we've had that as standard for a good decade here. You can also fairly easily make your own or use other people's KWin scripts, which let you manipulate the desktop however you want. Here are some examples: store.kde.org/browse?cat=210&ord=latest

You can even download desktop effects, if you like your windows to burn down or have a glitch effect or whatever: store.kde.org/browse?cat=209&ord=latest


It takes some time to adjust, but welcome aboard! Depending on how much you customize, you may find it difficult to go back to Windows!

Max_P,
@Max_P@lemmy.max-p.me avatar

How are you currently accessing those services?

If you're using Cloudflare tunnels already, then you're good. It already acts as a secure VPN between you and Cloudflare, and they handle the TLS certificates for you already.

TLS is what puts the S in HTTPS: it provides encryption and security of the connection. If you didn't use Cloudflare tunnels, you'd be port forwarding and serving the content directly from your public IP at home. To secure those connections, you'd need a reverse proxy. That's usually NGINX these days, and its purpose is to serve as a hub to reach all of your services. It would go Internet -> your router -> your server -> NGINX -> whatever container it needs to go to. As you can see, it's basically the entry point of your stuff.

To securely access it from the outside, you can either use a TLS certificate handled by NGINX (Let's Encrypt is easy to use and provides them for free), or you set up a VPN (that's what Tailscale would do) so that it doesn't matter if you access your server over plain-text HTTP.

The key here is really just that you want your traffic to be encrypted in some way when it goes over the Internet, as otherwise, it doesn't matter that you have a strong password, everyone could see it anyway.

So, you usually want one of the 3 options: CF tunnels, self managed NGINX that you access directly over the Internet with a TLS certificate, or a VPN to your home network which automatically secures traffic between your device and your home network over the Internet.

Since you use CF tunnels, you used the first option and you're all good out of the box!

Max_P,
@Max_P@lemmy.max-p.me avatar

The default Lemmy UI is pretty lightweight. Do you have extensions, especially Lemmy home instance redirection extensions?

I've definitely used one of those that made browsing lemmy incredibly slow as it was busy rewriting all the links.

Lemmy loads basically instantly on Firefox on my desktop, and same on my phone with Chrome.

Max_P,
@Max_P@lemmy.max-p.me avatar

None - why go out of your way to block communities when you can just subscribe to the ones you want and only see those?

Max_P,
@Max_P@lemmy.max-p.me avatar

whereas png is better for graphics type elements with defined colors and edges?

The reason for that is rather surprising, but PNGs are basically zipped BMPs, with an optional filter step that encodes each pixel relative to its neighbours so the data compresses better.

And that's why if you give it a photo with lots of details, it's not very effective and just gives you a rather big file. PNG barely does anything compared to JPEG and other formats. That's also why it's great for small things like icons: it decompresses fast and still manages a fairly good compression ratio when a good chunk of the image is transparent or flat background.
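If you want to see that effect for yourself, here's a rough sketch (assuming Python 3) that runs zlib, the same DEFLATE compressor PNG uses internally, on a flat background versus photo-like noise:

    import os
    import zlib

    # Two fake "images", both 256x256 pixels at 3 bytes per pixel:
    flat = bytes([230, 230, 230]) * (256 * 256)   # solid light-grey background
    noisy = os.urandom(256 * 256 * 3)             # stand-in for fine photo-like detail

    print(len(flat), "->", len(zlib.compress(flat)))    # shrinks to almost nothing
    print(len(noisy), "->", len(zlib.compress(noisy)))  # barely changes, may even grow a bit

The flat buffer shrinks to almost nothing while the noisy one stays roughly the same size, which is basically the PNG-vs-photo situation. Real PNG filters help a bit more than raw zlib on photos, but the gap to JPEG stays huge.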

Max_P,
@Max_P@lemmy.max-p.me avatar

Especially the safety aspect. The big platforms have extensive moderation tools to keep the creepers out. And most dating apps are still widely unappealing to women because most men on them are creepy as fuck. That's also why you can't send pictures or even links on Tinder: guys just won't stop sending dick pics as if that's appealing.

I can't imagine how fucking awful the experience would be, especially for women and minorities, if you could basically make a fully anonymous profile on an instance like exploding-heads or hexbear. And who do you trust to moderate your personal conversations when they go toxic, especially if the report propagates back to the original, equally toxic-enabling instance?

Or the sheer amount of spam, OnlyFans baits and escort services that would dominate such a platform.

Max_P,
@Max_P@lemmy.max-p.me avatar

Electron isn't all that bad honestly. The bad part is that people slap in the same pile of massive, bloated node modules and frameworks that are the same reason the modern web is so horrible.

A well written web app in Electron can feel quite good and snappy. It's just that the companies that own most of those apps don't care and won't give the developers time to build an optimized app, because that doesn't bring in money, but new features do.

Especially if you share a system Electron runtime between apps, the memory overhead isn't all that bad even compared to modern toolkits like GTK4 and Qt 5/6.

But then you load like 5MB of poorly written CSS and a 10MB JS bundle plus all the assets and full screen background image and yeah, it'll chew through resources fast.


Sometimes when I have to debug a modern website, I'm amazed at the amount of crap that's there. Just checking the inspector in the browser, half the elements have hundreds of overridden CSS rules and hacks to make them display correctly instead of writing the CSS properly. A boatload of unnecessary divs and whatnot everywhere. That strains any layout engine.

The profiler in the browser console? Yeah, nobody uses it, or even knows it exists and how to use it. I've wowed a lot of people just by making a quick flame graph and speeding up the code 10x like it's nothing.

We have the tools, but not the will to optimize.

Max_P,
@Max_P@lemmy.max-p.me avatar

If they don't hang outside the window they'd have to hang inside the window, and would need a more complicated ventilation system to take air from outside, heat it up and vent it back outside. At that point you'd have a window mounted two hose AC anyway.

So yes, your next best option is going to be a two hose portable AC. One hose takes air from the outside to cool the condenser, one hose to throw that hot air outside.

Single hose works too, but they're less efficient: they take cold inside air, cool the condenser with it, and vent it outside. That wastes some of the air you just paid to cool, and it creates negative air pressure inside, which pulls hot outside air in through any cracks and holes in the house.

Max_P,
@Max_P@lemmy.max-p.me avatar

It is buzzword bullshit.

And a fad, probably. Everyone's trying to capitalize on the wow effect of ChatGPT.

Before AI it was neural networks, and before that it was machine learning.

The sooner Android accepts RCS is dead, the sooner we can choose the next messaging platform that matters (www.androidpolice.com)

I’m just sitting here frustrated because I’m wanting my family to move away from messaging me over SMS (they mainly use iOS), but they refuse to download any extra apps. But Google’s RCS really doesn’t look like a solution either since it mainly just seems to be a way of enforcing Android as an ecosystem, and they...

Max_P,
@Max_P@lemmy.max-p.me avatar

But Google's RCS really doesn't look like a solution either since it mainly just seems to be a way of enforcing Android as an ecosystem

Not really. It's not even tied to Google; it just happens that most carriers don't care because they can't monetize it like they did with SMS, and Google was getting fed up with slow adoption, so they started becoming the de facto provider for RCS. But it's always been a hack.

Anyway, it's an open standard that anyone can implement if they want, and it even reuses a lot of the signaling from existing SMS technology. In fact the first release of RCS was in 2008.

The problem is not technological, it's that a whole bunch of companies like Apple and the carriers, and even Google to some extent, would rather keep all the control. Apple doesn't want to implement RCS or open up iMessage, because then they can't weaponize their users against Android users and peer pressure you into getting an iPhone. None of those companies want to implement an open standard, because that would kill off the era of proprietary messaging apps.

And to top it off, a lot of users also just don't care. They already use Snapchat and Discord, and standard SMS has been free and unlimited for a good decade, so it's not even inconvenient to fall back to SMS. Works well enough to exchange Instagram or Twitter handles or whatever. Without users demanding a standard interoperable protocol like RCS, it won't happen.

I don't even think email as we know it today would have a chance to exist if it hadn't been made interoperable from the start, thanks to the young Internet being more academic and interoperability-focused, before companies got interested in heavily commercializing it and enshittifying it all for profits.

There's practically negative profit to be had by implementing RCS or any other sort of interoperable federated protocol. Even Signal, despite being open source, essentially forces you to use their servers for some reason.

Max_P,
@Max_P@lemmy.max-p.me avatar

I'm struggling to think of a use case where going through XWayland is preferable over direct Wayland. It'll just go through Wayland anyway but with extra X11 hacks to convert between the protocols…

What can it possibly fix that running the game under gamescope wouldn't?

Max_P,
@Max_P@lemmy.max-p.me avatar

It’s probably leaving around DNS entries that only work while on the VPN, and depending on which program is making the DNS query, it may or may not fall back to a public server.

Max_P,
@Max_P@lemmy.max-p.me avatar

Pretty sure the only reason Discord went with the server terminology is because they were competing with TeamSpeak, Ventrilo and Mumble, which were all actual servers people would host.

They completely murdered the meaning of the term just so they could advertise as “look how easy it is to make a Discord server! You don’t even need a server or mess with firewalls and IP forwarding and Hamachi, it just works!”

I fucking hate it.

Max_P,
@Max_P@lemmy.max-p.me avatar

If we were to use a different term, I’d probably go with something like “provider”, like “email provider” or “internet service provider” or “phone service provider”, you’d have a Lemmy provider. Providers tend to be associated with mostly interoperable things so it fits alright.

Anything else is a wild misuse and abuse of technical terms that have actual meanings. Discord already murdered the definition of “server” by calling their spaces “servers”, banking on user ignorance and familiarity with the term to sell them a completely different kind of service.

Bigger instances run on several servers, maybe even distributed geographically. They’re definitely not proxies. Host could work but it’s a bit nondescript. Hub doesn’t really sell the decentralization aspect. Even “server” doesn’t quite do it, since thanks to Discord people now associate that with isolated spaces anyway.

Max_P,
@Max_P@lemmy.max-p.me avatar

SSO - Single Sign-On

The idea is that you use one identity from any provider to log in everywhere. It’s also used in the enterprise world to centrally manage every app you can log into. So they assign you an email address, and you can use their SSO service to get into Slack/Teams/Salesforce/Figma/admin panels and whatever else you might need. When you quit, they turn off your email, and by doing that you also lose access to all those other apps and accounts as well automatically.

It’s also widely used by regular people often in the form of login with Facebook/Google/AppleID/Github and others.

The idea behind it is, you can focus on having one account that you keep very safe with a strong password and 2FA. And you don’t have to remember any other password or username or whatever.

Not all SSO systems are compatible with each other: some use SAML2, some use OpenID, some use a fairly standard OAuth flow, and some are specific to that platform, like Facebook and Google. OpenID in particular is federated, because in theory you can use any OpenID provider to log into anything that accepts OpenID. Email is also federated, because it is also an open and interoperable standard: you can send an email from Gmail to a Yahoo account, or, in my case, my own personal server. Or how Lemmy does it: I have my own Lemmy server, but it talks to all the other Lemmy instances so I can subscribe, vote and comment.

The way it works is: you tell the site you would like to log in with a third-party provider, and the website redirects you to your SSO provider (which you trust, and where you can check that you’re indeed on google.com, for example, if you chose to log in with Google). Then you log in there (if you aren’t already). Then it confirms that you’re about to log in to whatever app, and what information about you will be shared, such as your name, email address, picture, sometimes more. You approve, and your SSO provider sends you back to the website with a secret token that contains all that information, and voilà, you’re logged in to the website without ever making an account or entering your details. No password or security questions to remember for that site!

It doesn’t have to be an email, but since a lot of SSO providers are also email providers or use emails as your login there, it’s nearly always an email.
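As a concrete illustration, here’s a minimal sketch of that redirect dance as a standard OAuth2/OpenID Connect authorization-code flow. Every URL, client ID and secret below is a made-up placeholder rather than any specific provider’s API:

    import json
    import secrets
    import urllib.parse
    import urllib.request

    # Placeholder endpoints and credentials for an imaginary SSO provider.
    AUTHORIZE_URL = "https://sso.example.com/authorize"
    TOKEN_URL = "https://sso.example.com/token"
    CLIENT_ID = "my-app"                                 # registered with the provider
    REDIRECT_URI = "https://myapp.example.com/callback"  # where the provider sends you back

    # Step 1: the site redirects your browser to the SSO provider,
    # with a random state value to tie the response back to this attempt.
    state = secrets.token_urlsafe(16)
    login_url = AUTHORIZE_URL + "?" + urllib.parse.urlencode({
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": "openid email profile",   # what info about you will be shared
        "state": state,
    })
    print("Send the user to:", login_url)

    # Step 2: after you log in and approve, the provider redirects back to
    # REDIRECT_URI?code=...&state=...; the site trades that one-time code for
    # tokens containing your name, email, picture, etc.
    def exchange_code(code: str) -> dict:
        data = urllib.parse.urlencode({
            "grant_type": "authorization_code",
            "code": code,
            "redirect_uri": REDIRECT_URI,
            "client_id": CLIENT_ID,
            "client_secret": "...",  # kept server-side, never exposed to the browser
        }).encode()
        with urllib.request.urlopen(TOKEN_URL, data=data) as resp:
            return json.load(resp)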

Max_P,
@Max_P@lemmy.max-p.me avatar

It’s good but it’s not on iOS

Premier Smith says Alberta preparing Sovereignty Act motion over federal emissions plans (www.cbc.ca)

Hours after the operators of the province’s power grid warned that new federal electricity regulations could lead to blackouts, Alberta Premier Danielle Smith said her government is preparing for the possibility of enacting her signature legislation in an effort to push back against Ottawa’s planned emissions reductions....

Max_P,
@Max_P@lemmy.max-p.me avatar

We’ve been headed that way for at least two decades, they have another decade to figure it out, and they’re acting all shocked Pikachu that they finally will have to care about their emissions?

The more you postpone acting, the more it’s gonna hurt when we have no choice but to cut emissions to survive.

Questions about Rooting

I use my phone a lot, and a huge thing that bugs me about it is Moto’s airtight grip on everything I do. So many apps I never use are flat out necessary, every update comes with 2 or 3 shitty mobile games I never play, and I am just sick of it. So I am looking at rooting and would like some advice from my comrades on lemdroid....

Max_P,
@Max_P@lemmy.max-p.me avatar
  1. Yes unless it’s a Samsung. If it runs a Qualcomm modem, it usually pretty much just works
  2. Depends on the model. LineageOS is a pretty good one to start with: it’s pretty vanilla Android with extra features, and if your device is officially supported, their policy is that official devices and builds must support every feature of the phone so no broken modems or fingerprints.
  3. Sort of, yes. If you root and do nothing with it, it’s not really much of a risk. The question is: do you absolutely trust whatever app you’re going to run as root? Root is full control, so if you run malware as root there’s nothing at all to stop it; it can do whatever it wants. Never had any issues personally, I’m very careful and only run FOSS apps as root so I can validate what they do.
  4. It makes for at least a good practice run.

I personally have been daily driving rooted phones with custom ROMs since the 2.2 Froyo days, always liked the experience more than stock.

Also ROMs like LineageOS don’t come rooted by default: you’ll need an unlocked bootloader in both cases, but if you don’t need root, you can simply not root your custom ROM. So security-wise, running LineageOS is about the same as running stock on a Pixel, if not a touch better. Root is specifically about getting root access, you don’t necessarily need that.

You can also root your stock ROM and debloat it as well, that’s another option and doesn’t involve replacing the entire operating system.

Max_P,
@Max_P@lemmy.max-p.me avatar

It’s not for everyone for sure especially if you’re looking for something FOSS. I’d rather use a FOSS app, but it was my Reddit app of choice for a really long time and I’m happy to give the developer a one-time $3.50 purchase to have that experience back. I can barely tell I’m not on Reddit.

Even if it’s just a stopgap solution, I’m all for Reddit app developers porting to Lemmy. Right now the open-source Lemmy apps all lack a lot of polish, so having paid options that offer a more polished experience for people coming over from Reddit is a plus for the platform’s engagement as a whole.

Max_P,
@Max_P@lemmy.max-p.me avatar

I think it’s the other way around: Jerboa takes heavy inspiration from how BaconReader and Boost worked. I remember a while back switching to Boost because it was basically BaconReader but better and more modern and more reliable. I was using BaconReader in like 2012. Boost for Reddit itself was launched in 2016 iirc.

Max_P,
@Max_P@lemmy.max-p.me avatar

I can barely tell them apart, if it wasn’t for the @instance.tld they’re visually identical!

Max_P,
@Max_P@lemmy.max-p.me avatar

Why was that a certified cheque though? That’s literally like sending 300k in cash… by mail.

I typically see those used when buying a car off Kijiji or Facebook, where you can’t really trust a random person’s cheque to go through, or trust the seller to give you the car once the cheque clears. Hand over a certified cheque, and you’re safe to deposit it and hand over the keys.

I don’t see how that kind of guarantee was needed to deposit an inheritance cheque. There’s plenty of legal ways to approach this if it bounces or whatever.

Max_P,
@Max_P@lemmy.max-p.me avatar

My vote goes to whatever makes you feel like you’re offering the best experience to your users. People that don’t like it can always switch to the old one.

I think it makes sense as a mobile-focused instance. Wouldn’t surprise me most of us are using mobile apps anyway…

Max_P,
@Max_P@lemmy.max-p.me avatar

Eyes don’t really have a concept of FPS because we don’t have shutters in the first place. The brain is just continuously interpreting what we see. And it fills in a lot of gaps: for example, each eye technically has a sizable blind spot where the optic nerve attaches to the retina, and our side vision is tuned more for detecting movement than detail.

Cats see just fine in the dark; our eyes are just not sensitive enough to low light to be all that useful for us, but we could see it if the eyes provided that input. Evolution just made it so we favored speedy, sharp vision in daylight rather than night vision, in part because we quickly developed technology (fire) to keep our areas lit as needed.

Max_P,
@Max_P@lemmy.max-p.me avatar

No, we have a spot in each eye that is not sensitive to light at all because the space is used up by the optic nerves: scientificamerican.com/…/find-your-blind-spot/

Ask Lemmy: Traditional vs natural mouse scrolling; which do you use?

Despite being a heavy cell phone user for more than 25 years, it only recently occurred to me that vertical navigation on most phones is inverted when compared to traditional computers. You swipe down to navigate upward, and up to navigate downward. I recently spent time using a MacBook, which apparently defaults to this...

Max_P,
@Max_P@lemmy.max-p.me avatar

I think the reason Apple also went with natural scrolling for mice is because of their Magic Mouse which attempts to act like it’s a trackpad. The gestures are similar to how they are on their trackpads, so it’s consistent.

Touchscreens and trackpads? Natural scrolling all the way, we’re directly moving the content. It works the same as if your two fingers were click and dragging the content, it does feel pretty natural.

With a traditional mouse, I see the wheel as already inverting the movement: imagine the content is the mousepad; with traditional scrolling, rolling the wheel down pushes the content under the mouse upward. Although I think the real reasoning is probably just that you’re controlling the scroll bar, or the engineers simply thought that’s what felt natural and intuitive at the time. It was probably born as basically a more granular page up/down button that became a wheel.

Max_P,
@Max_P@lemmy.max-p.me avatar

It’s better and worse at the same time: it just doesn’t bother with it for the most part. If you have files named with UTF-8 characters, and run it with a locale that uses an ISO-whatever charset, it just displays them wrong. As long as the byte is not a zero or an ASCII forward slash, it’ll take it.

There’s still a path length limit but it’s bigger: 255 bytes for filenames and 4096 bytes for a whole path. That’s bytes, not characters. So if you use UTF-16 like on Windows, those numbers are halved.

That said, it’s assumed to be UTF-8 these days and should be interpreted as UTF-8, nobody uses non-UTF-8 locales anymore. But you technically can.
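If you’re curious, here’s a quick sketch (Python, on a Linux box; the filename is just an example) that queries those limits and shows why bytes and characters aren’t the same thing:

    import os

    # Filesystem limits are reported in bytes, not characters.
    print(os.pathconf("/", "PC_NAME_MAX"))  # typically 255
    print(os.pathconf("/", "PC_PATH_MAX"))  # typically 4096

    name = "héllo_wörld.txt"
    print(len(name))                  # 15 characters
    print(len(name.encode("utf-8")))  # 17 bytes, which is what the filesystem counts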

Max_P,
@Max_P@lemmy.max-p.me avatar

That would be misconfiguration on the remote instance’s end. Are you sure it works if you’re not on Tor in the first place?

That’s definitely server-to-server traffic; it basically goes client->home instance->remote instance, and AFAIK it’s not a passthrough request, it makes its own request.

Max_P,
@Max_P@lemmy.max-p.me avatar

Clone over HTTPS, not git.

Max_P,
@Max_P@lemmy.max-p.me avatar

A lot of bundling in the JS world is also either because of TypeScript, or transpiling to old JS so that it’s more compatible with older Node and browser versions. JS has gone through quite drastic changes in syntax, from vars and prototypes to let/const, ESM imports, classes, Promises, async/await, a lot of which may not run in an old browser. It also helps runtime speed slightly, but that’s not something that matters all that much on a server, because you just wait a second or two for it to load.

JS is also kind of wild with how many libraries a given project may pull in, and how many minuscule files those tend to use, especially since each library also gets its own version of every dependency too.

Python uses much fewer libraries and has a code cache. PHP has code caching and preloading built in, so filesystem accesses are reduced. Bash usually doesn’t grow that big. Ruby probably just accepts a second or two to load up for the simplicity of the developer experience. Typically there’s one fairly large framework library and a few plugins and utilities, whereas a big Next.js project will pull in hundreds of libraries and tools.

A JS solution to a JS problem really. It needs to run in potentially ancient browsers, so we just make a giant JS file. For the other languages, you can pretty much just add it right to the runtime. If bundling was that big of a deal we’d read libraries right off a zip file like Java does with its jar files by default.

Plus, if you really care, you can turn on filesystem compression on your project directory and get the same benefits as bundling.

Max_P,
@Max_P@lemmy.max-p.me avatar

AFAIK the app doesn’t even see the finger or anything, it basically just asks the OS to try to do fingerprint unlock and gets back a yes/no answer. Optionally it can decrypt a key using the hardware keystore (the phone’s equivalent of a TPM) so the app can store secrets such as a password securely.

Apps can use your lock screen PIN or password too, using the same-ish API.

developer.android.com/training/…/biometric-auth

Max_P,
@Max_P@lemmy.max-p.me avatar

Yep, it just works out of the box, that’s how nice the drivers are! You get updates to them as part of your Kubuntu updates, although there’s a PPA to install newer Mesa if you really want to. But for the most part, unless you need specific features of newer versions like for a new game release or a just released GPU, you can use the one that you already have just fine.

There’s no control panel because it all uses generic interfaces that also work for Intel and all other open-source drivers. For example, monitor configuration is done from your DE’s display settings. You do need a third-party GUI for overclocking.

If using Wayland, things like variable refresh rate are enabled by default and work out of the box. When HDR is ready, that will most likely be turned on by default too.

How to inspect a log file?

I have matlog and I let it record for 10-12 hours. Now I would like to inspect it for any suspicious/irregular entries. The problem is that I have end up with a huge log file with something like 3.5M entries. I found some repetitive entries from an UpdateManager which by removing them I’m now down to 95K entries. However it is...

Max_P,
@Max_P@lemmy.max-p.me avatar

What’s a suspicious or irregular entry? It’s hard to inspect a log without at least a reference to what a good log might look like. Every device has fairly unique logs, so I doubt there’s an Android log analyzer that can tell you immediately if something is abnormal.

You can always collect a log file yourself that you deem to be normal, and write some processing code to automatically remove the lines both logs have in common.

But for what you’re doing, you pretty much have to keep doing what you’ve been doing: find big repetitive offenders, remove them, rinse and repeat until it’s all unique lines and hopefully only stuff worth looking at.

Max_P,
@Max_P@lemmy.max-p.me avatar

There’s a plugin for that: github.com/ccrisan/xfce4-netspeed-plugin

For a command, it’s kind of complicated because you only have totals; there’s no built-in instantaneous network speed. The network speed at a single instant is always either zero or full bore: either data is being transmitted/received or it isn’t. You need to average over time with multiple samples. So if you want bits per second, such a command would need to at least take a sample, wait a second, take another sample, subtract the two, then return the result. Ideally you want to sample over a few seconds, otherwise it’ll still be pretty much all over the place. So the panel plugin is a better idea for that, as it’ll do the math and averaging for you.
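That’s essentially what the plugin does behind the scenes. For the curious, here’s a minimal sketch of that sample-wait-sample approach, reading Linux’s /proc/net/dev counters in Python (the interface name is just an example):

    import time

    def rx_tx_bytes(iface: str) -> tuple[int, int]:
        """Read the total received/transmitted byte counters for one interface."""
        with open("/proc/net/dev") as f:
            for line in f:
                if line.strip().startswith(iface + ":"):
                    fields = line.split(":", 1)[1].split()
                    return int(fields[0]), int(fields[8])  # rx bytes, tx bytes
        raise ValueError(f"interface {iface!r} not found")

    iface = "eth0"   # replace with your interface name
    interval = 3     # average over a few seconds so it isn't all over the place
    rx1, tx1 = rx_tx_bytes(iface)
    time.sleep(interval)
    rx2, tx2 = rx_tx_bytes(iface)
    print(f"down: {(rx2 - rx1) / interval / 1024:.1f} KiB/s, "
          f"up: {(tx2 - tx1) / interval / 1024:.1f} KiB/s")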

Why fediverse clients reinvent the C2S APIs and don't use ActivityPub?

I’m reading the ActivityPub spec here and it seems pretty fit for client-to-server communications. Yeah, it might be somewhat bulkier than your typical rest api, but it’s more universal, which begs the question: why do mastodon and lemmy both decided to implement custom (and incompatible) APIs for their clients to talk to...

Max_P,
@Max_P@lemmy.max-p.me avatar

It could probably work but would quickly turn into a mess of custom extensions.

For example, ActivityPub has no concept of sorting by hot or active or new. ActivityPub also doesn’t specify how a client would authenticate to a server to post on your behalf. There’s definitely no ActivityPub message for registering a user account.

So it makes sense that S2S only does the bare minimum for the purpose of federation of content, while instances with varying implementations can implement whatever C2S protocol makes the most sense.

For example, should ActivityPub expose a Lemmy post as a nested thread, or a Mastodon microblogging-like format and let the clients reassemble the thread? How should a Mastodon client present a Lemmy community and threads? How about a Lemmy client connecting to a Mastodon server?

If we put that in ActivityPub, you’re pretty much bound to supporting it forever because other servers will eventually expect those protocol extensions, whereas it’s much safer to change a C2S protocol.

Keeping ActivityPub simple has a lot of benefits if we want the fediverse to remain really interoperable. A Lemmy client can reasonably expect that a given server supports a given set of features, providing a much more reliable experience than a spaghetti of supporting every possible feature and presenting the data weirdly.

Max_P,
@Max_P@lemmy.max-p.me avatar

In that case, why doesn’t everyone just use a generic fediverse server and let the clients make it into Mastodon/Lemmy/Kbin/Pixelfed/Friendica/Firefish/PeerTube/whatever else?

The reason is all those server implementations work differently, have different features, different goals, even different cultures and etiquette, or exist just for the heck of it. That’s the point of the fediverse: it’s interoperable, but you’re also not limited by one single standard as to how you want to expand. Let’s say we settle for Mastodon’s implementation. Cool, now people want downvotes. Mastodon doesn’t do downvotes, but it’s the reference implementation. You can’t have downvotes, you’ll never have downvotes unless you convince everyone to implement downvotes. Nobody will want to make a server that does everything. Maybe the Mastodon guys don’t want to implement downvotes because it promotes negativity and they’d rather posts just stay at zero likes. Maybe a site is implementing an additional score for funny/serious. What do we do, just allow clients to include any data they want?

Even if it worked that way, eventually, people would still make servers with proprietary UIs and proprietary features and APIs. You just can’t stop it, developers gonna develop.

Also, if every client supported every format, it would be a nightmare to make clients. And still, there would always be clients with different takes on how to present the data anyway, because again, developers gonna develop. People get creative and do their own thing, and it’s how cool stuff gets born.

If you make a rigid spec outlining every possible feature, then you need a group of people to decide what the spec is, and eventually, people are gonna get tired and make an entirely different protocol anyway, but this time it may not be interoperable.

You’re always gonna end up with specialized servers, and matching specialized clients.

If you want to make a superserver and superclient that supports everything, go ahead and make one. It may take off, it may not.

Ultimately, the fediverse will grow organically, and custom implementations will happen, if only as personal toys, because there’s no governing body making a hard spec. And that’s a good thing if we want things to have a chance to last, or inevitably there will be disagreements that lead to forks.

How is pushing for a single megaimplementation a good thing?

Why is 60fps a big deal for games?

My background is in telecommunications (the technical side of video production), so I know that 30fps is (or was?) considered the standard for a lot of video. TV and movies don’t seem choppy when I watch them, so why does doubling the frame rate seem to matter so much when it comes to games? Reviewers mention it constantly,...

Max_P,
@Max_P@lemmy.max-p.me avatar

We’re better at seeing in the center, and yet each eye technically has a big blind spot where the optic nerve connects. The brain just fills in the gaps.

Max_P,
@Max_P@lemmy.max-p.me avatar

Lots of good answers already, but I’d also add: if you have the opportunity to go to a computer store like a Micro Center or anywhere they have gaming monitors on demo, try one out for a few minutes. Run a first-person game if you can (there are plenty of basic demos of them on the internet through WebGL), or run TestUFO in a browser.

It’s really hard to imagine the smoothness without experiencing it, and it’s why a lot of people say once they experienced it, they can’t unsee it.

24 is all you need to create the illusion of motion, and the brain fills in the gaps, but when you control the motion, especially with a high-precision mouse, it really breaks the illusion. Your brain can’t fill the gaps anymore; the motion can go anywhere at any time, extremely fast. Like, even just dragging windows around on the desktop you can feel the difference. I instantly know when my 144Hz monitor isn’t running at 144. It also becomes a matter of responsiveness, as the others said.

High refresh rates are also more effective at higher resolutions, because at 30fps maybe the object will need to travel 100px per frame, but at 240fps that same object only moves about 12px per frame, 8 times as often. It’s probably fine when you’re watching a TV somewhat far away, but when the monitor is 32 inches and 3 feet in front of you, you notice a lot more.
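To put rough numbers on it, a quick back-of-the-envelope calculation (the 3000 px/s speed is just an arbitrary example of a fast pan or mouse flick):

    # How far something moving at 3000 px/s travels between consecutive frames.
    speed = 3000  # pixels per second
    for fps in (30, 60, 144, 240):
        print(f"{fps:>3} fps -> {speed / fps:6.1f} px per frame")  # 100.0, 50.0, 20.8, 12.5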

Linux can be used at your workplaces (lemmy.ml)

I’m just tired. On the last post about having Linux at our work, many people that seems to be an IT worker said there have been several issues with Linux that was not easy to manipulate or control like they do with Windows, but I think they just are lazy to find out ways to provide this support. Because Google forces all their...

Max_P,
@Max_P@lemmy.max-p.me avatar

I mean yeah it’s possible, but the reality is that most people in the company will likely want Windows anyway, and use things like Microsoft Office and a heap of other Windows-only software. Probably not the developers, but accounting, HR, and so on. There’s also sales, but nowadays they demand MacBooks as a status symbol, and apparently it sorta matters, at least according to sales.

As an IT department, if you can get away with supporting only one platform and even one model/brand of computer, it’s much easier. Maybe two so sales and devs get their MacBooks. Adding a third is asking a fair bit from the IT department, and it starts adding up to a really rare skillset. I know very few that are absolutely proficient in all three main OSes.

There’s also the compliance aspect. The reason my current company can’t support Linux users is InfoSec/compliance. Not because Linux is insecure, but because all the standards are written for Windows. You can argue all you want about how Linux doesn’t need an antivirus, tough luck: SOC 2, ISO and also insurance policies all explicitly require “controls against malware” and firewalls, with every OS held to the swiss cheese security of Windows. So each OS basically requires the InfoSec and IT departments to write out unnecessarily detailed procedures and policies about all the security measures, for every OS in use. What antivirus runs, is it a reputable brand, how do you validate that it runs, how do you test that it detects malware, how do you validate and ensure that the incident gets reported, what tooling the software gives you to establish the root cause and entry point, what exact user action led to the exploit chain, what was the exploit chain, how you’re going to mitigate and clean up after exploitation, how do you know exactly what data was compromised, and so on and on and on.

Right now most vendors barely support the current version of Windows and macOS (especially macOS, I swear the AV software is always holding back major updates for several months every release). Very few support Linux. So either you have an entirely separate policy and audit for Linux, or you just don’t support Linux.

We’ll see companies open up to Linux when all the vendors also start supporting Linux, and even then, with those that do, it’s a shitshow of only supporting the last version of Ubuntu or RHEL, with pinned kernel versions and blatant GPL violations and GPL condoms and binary-only kernel modules with no hope of recompiling/adapting them to the current version. The ClamAV trick no longer works; auditors now want real AV software with the whole exploit chain tracking I described. Which is also why those company computers are so damn slow, much slower than you’d expect. They’re scanning everything and tracking everything, every process tree, what spawned it, what user action led to it. My MacBook started feeling like a Dell Latitude from 7 years ago once they loaded up all the crapware on it. We had to reserve a whole bunch of extra capacity on the Linux servers just for AV to exist and do nothing, because it’s all locked up in containers and SELinux policies and it takes a pretty bad 0day to pwn those.

If I was the IT guy, I would also struggle to even begin to make a case for supporting Linux and justifying the time and cost. I don’t like my OS, but I do my work on it, cash my paycheck and move on to enjoy my Linux machines off work.

Max_P,
@Max_P@lemmy.max-p.me avatar

For live streaming, object storage won’t do much for you. It’s useful for VODs I guess, but for the live stream itself there ain’t much you can do.

You can use a cheap VPS as a proxy to your home server for the IP hiding aspect. Oracle’s free tier might do okay for that purpose, just make sure you keep local backups. Or just Cloudflare Tunnels. A VPS is nice though because you can upload once to the VPS, and it’ll redistribute to the viewers and will have much more bandwidth available.

But self hosting, especially video, generally ain’t cheap. That’s why the big guys like Twitch and YouTube are so invasive with ads and subscriptions that subsidize all the free users.

Max_P,
@Max_P@lemmy.max-p.me avatar

The latency for that is probably horrible, but if that works for your use case and the fees seem reasonable to you, go for it.

Object storage is usually cheap, but the API calls and bandwidth cost tend to add up, at least on AWS. At work we have Cloudflare in front anyway, because the storage is cheap but continuously serving files from it gets expensive.

Max_P,
@Max_P@lemmy.max-p.me avatar

Probably selected secure erase or something, which would write a bunch of random data and then zero it out.

Super common in the enterprise world, probably overkill for OP.
