I deleted my almost 11-year-old account and moved here because of this. I used their shitty app for way too long, and after switching to Apollo I suddenly saw all the old subreddits I subscribed to become more prominent in my feed. On their app it felt like I was getting fed rage bait.
Same! I used nothing but Boost; the idea of going to their main app was atrocious. Why actively alienate your userbase? They’ve been falling into a corporate shit-hole for too long.
I tried to use new Reddit over old.reddit and I was OK with it for a while, but I gave up when topics with barely any engagement would show up at the “top” of my feed and I would get suggestions from other subreddits that I wasn’t a part of.
I can adapt to a UI given time, and I did like some aspects of their new layout. I’m not on board with it desperately trying to fill my feed with “something new” every time I visit the site, though, because sometimes I want to follow up on a topic from earlier. It just kept burying things, and I switched back to old.reddit after maybe six months of trying the new one.
For the sake of the app developers, I hope Reddit reverses course or sets a more reasonable cost, or that the devs find ways to hook into something like Lemmy so they can keep doing what they do best. That said, I’m happy to have found a much better community in the whole process :)
It seems to me like StackOverflow is really shooting themselves in the foot by allowing AI-generated answers. Even if we assume that all AI-generated answers are “correct”, doesn’t that completely destroy the purpose of the site? Like, if I were seeking an answer to some Python-related problem, why wouldn’t I just go straight to ChatGPT or a similar language model instead? That way I also don’t have to deal with some of the other issues that plague StackOverflow, such as “this question is a duplicate of <insert unrelated question> - closed!”.
I think what sites have been running into is that it’s difficult to tell what is and is not AI-generated, so enforcement of a ban is difficult. Some would say that it’s better to have an AI-generated response out there in the open, which can then be verified and prioritized appropriately based on user feedback. If there’s a human-generated response that’s higher-quality, then that should win anyway, right? (Idk tbh)
Yeah that’s a good point. I have no idea how you’d go about solving that problem. Right now you can still sort of tell sometimes when something was AI generated. But if we extrapolate the past few years of advances in LLMs, say, 10 years into the future… There will be no telling what’s AI and what’s not. Where does that leave sites like StackOverflow, or indeed many other types of sites?
This then also makes me wonder how these models are going to be trained in the future. What happens when for example half of the training data is the output from previous models? How do you possibly steer/align future models and prevent compounding errors and bias? Strange times ahead.
Between this and the “deep fake” tech I’m kinda hoping for a light Butlerian jihad that gets everyone to log tf off and exist in the real world, but that’s kind of a hot take
I run everything on a lean Ubuntu server install. My Ansible playbooks then take over and set up ZFS and docker. All of my hosted services are in docker, and their data and configs are contained, regularly snapshotted, and backed up in ZFS.
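For anyone curious what the snapshot side of a setup like that can look like, here’s a minimal sketch. The dataset names (`tank/appdata`, `backup`) and the schedule are made-up placeholders, not a description of the setup above, and `DRY_RUN=1` (the default here) only prints the commands instead of running them:

```shell
#!/bin/sh
# Hypothetical sketch -- dataset/pool names are placeholders.
# DRY_RUN=1 (default) prints the zfs commands rather than executing them.
set -eu

DATASET="${DATASET:-tank/appdata}"   # assumed parent dataset, one child per service
STAMP="$(date +%Y%m%d-%H%M%S)"
DRY_RUN="${DRY_RUN:-1}"

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

# -r recursively snapshots every child dataset in one atomic step, so each
# service's data and config get a consistent point-in-time copy.
run zfs snapshot -r "${DATASET}@auto-${STAMP}"

# Replication to another pool or host would then be a zfs send | zfs recv
# pipeline; shown here only in dry-run form.
run sh -c "zfs send -R '${DATASET}@auto-${STAMP}' | zfs recv -Fdu backup"
```

You’d hook something like this into cron or a systemd timer; tools like sanoid/syncoid do the same job with retention policies built in if you’d rather not roll your own.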
I run basically all of the Arr stack, Plex (more friendly to my less tech-savvy family than my preferred solution Jellyfin), HAss, Frigate NVR, Obsidian LiveSync, a few Minecraft worlds, Docspell, Tandoor recipes, gitea, Nextcloud, FoundryVTT, an internet radio station, syncthing, Wireguard, ntfy, calibre, Wallabag, Navidrome, and a few pet projects.
I also store or back up all of the important family documents and photos, though I haven't implemented Immich just yet; I'm waiting for a few features and a little more development maturity.
Certainly. Mostly it started as a way to keep tax documents and receipts safe and easily findable.
It's grown into a "huh, maybe this letter from <bank, school, insurance, charity, etc> is important, but it clutters the house less as ones and zeros", so we scan it in.
Then when we need info, we can just search for the name of the sender, the date, account numbers, literally anything remotely legible in the document and get lightning fast results.
I think there was a phone called the Cosmo Communicator or something from a UK company that had a physical keyboard and a touchscreen that seemed cool.
They also have a newer model, the Astro Slide, but it’s significantly more expensive and has availability problems. I’ve had one since Christmas or so, though, and I’m happy with it. The CC is fine as well, and the sale price does seem quite good.
(I used to have their first phone as well, but the AS truly would have been a much better first device to bring to market; there really aren’t any fundamental flaws in it, in my view.)
What do you think are the chances of them being around in 5 years' time? I really want to get this as my next phone, but for now I’m happy with my current phone (King Kong Mini 2).
When you are creating something like Lemmy, where you want wide uptake, you need to pander to the masses.
The /r/selfhosted surveys show around half of self-hosters mostly or exclusively use docker. A significant portion of the rest can use docker if needed.
If you’re in the 20% that isn’t covered by the most common setup, then it can be frustrating. But supporting that 20% takes as much effort as supporting the other 80% (see 80/20 rule), and when things are new it’s just not where the effort should be focused.
So you have all those servers, but why can’t you install Debian or Ubuntu Server on one of them?
You could also get a $2/month VPS and run it on that. Beehaw runs on something similar (though apparently theirs is $12 a month, with a lot more users).
Imagine believing that nobody really has to understand code anymore. Fine, give it a new name, but people aren’t programmers if they mostly use AI and don’t really understand how it works.
If they understand how it works, it’s just like googling how to program, which really is just normal programming.
the issue of "too big to block" is an interesting problem for federation that i've seen no particularly good answers to yet (probably because it hasn't really been an issue up until recently). feels like there's a tightrope act nobody's mastered yet of balancing the desire to be where everyone is with the need to keep the whole system decentralized, while simultaneously ensuring everything can both interoperate as needed and moderate as needed without tearing the system apart.
@rysiek Maybe we could suggest server alternatives to people that complain about stuff.
e.g.: when someone says "hey, Mastodon is cool but I wish I could have quote-toots etc." we could say "hey, come to libranet.de, we have this, but also that&this&etc. And you can get to keep your followers and follows"
Yup. The problem is that these users will have trouble understanding how it can be "Mastodon" without being Mastodon, if you get my drift. Plus, ideally this would also be done by the Mastodon-the-software project: "if you want functionality X, check out instances of this compatible-but-different software project."
But absolutely, doing so yourself in such cases makes perfect sense.
it decentralizes the central authority's costs by pushing the data load onto volunteers
the sad reality is that people will buy the hype
I was discussing BlueSky with a friend of mine some time ago, and we soon agreed on exactly these two things. This is an excellent article, thanks for sharing it.
AP does push the data load onto volunteers (the operators of servers), but those volunteers gain some autonomy in doing so. The important part of that quoted segment is that bluesky has distributed the costs but not the authority; in other words, taxation without representation.
The longest part of the Windows install process is not the installation itself. It’s removing all the pre-installed bloatware, and removing or disabling all the telemetry and other undesired features that are on by default.
The start menu entries are stored in an encrypted file somewhere beneath a thousand different folders. But it's possible to copy a cleaned start menu file and paste it into the correct directory in the default user folder to give new users an ad-free start menu.
Doesn’t Edge send all URLs you visit to Microsoft through browser.events.data.msn.com? Microsoft has been tracking every site you visit since the start.