
DrMux,

We’re tired, and we’re scared, I think.

Worse, we're used to being tired and scared. We're apathetic to our own anxieties and exhaustion. The only thing to fear is not fear itself. It's complacency toward fear.

DrMux,

"Hmm... we really only wanted to rule over you to harvest your species' brain power through an interface with our computational networks. This... just won't do. Later losers!"

DrMux,

My guess is that it's more a result of overfitting for alignment — fine-tuning for "safety" (or rather, for more corporate-friendly outputs).

That is, by focusing training on that specific outcome, they've compromised the model's ability to give well-"reasoned," "intelligent"-sounding answers. A tradeoff between aspects of the model.

It's something that can happen even in simple statistical models. Say you have a scatter plot of data that loosely follows some trend, and you come up with two equations to describe it. One is a simple equation that only loosely follows the trend but makes a good general approximation; the other is a more complicated equation that fits the existing data very tightly. Then you use both to predict future data. You find that the complicated equation makes predictions way off the mark that no longer fit the trend, while the simple one still has a wide error (how far its predictions land from the actual data) but more or less follows the general trend. With the complicated equation, you've traded predictive power for explanatory power: it describes the data you originally had, but it's not useful for forecasting the data that follows.

That's an example of overfitting. It can happen in super-advanced statistical models like GPT, too: training the "equation" (or, as it's been called, spicy autocorrect) to favor "safe" outputs can cost the model some of its power to produce accurate, "well-reasoned" ones.
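The scatter-plot story above can be sketched in a few lines of NumPy. This is a toy illustration, not anything specific to GPT: the linear trend, the noise level, and the polynomial degrees are all made-up assumptions chosen to make the effect visible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "scatter plot": a simple linear trend plus noise
x_train = np.linspace(0.0, 1.0, 10)
y_train = 2.0 * x_train + rng.normal(0.0, 0.2, size=x_train.size)

# Two candidate "equations": a loose straight line vs. a degree-9
# polynomial that passes almost exactly through all ten training points
simple = np.polynomial.Polynomial.fit(x_train, y_train, deg=1)
wiggly = np.polynomial.Polynomial.fit(x_train, y_train, deg=9)

def mse(model, x, y):
    """Mean squared error: how far predictions land from the data."""
    return float(np.mean((model(x) - y) ** 2))

# "Future" data from the same underlying trend, extending slightly
# past the training range, where the overfit polynomial swings wildly
x_test = np.linspace(0.0, 1.2, 50)
y_test = 2.0 * x_test + rng.normal(0.0, 0.2, size=x_test.size)

print("train error:", mse(simple, x_train, y_train), mse(wiggly, x_train, y_train))
print("test error: ", mse(simple, x_test, y_test), mse(wiggly, x_test, y_test))
```

The wiggly fit wins on the data it was trained on and loses badly on the data that follows — the whole tradeoff in one scatter plot.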

If that makes any sense.

I'm not a ML researcher or statistician (I just went through a phase in college), so if this is inaccurate I'm open to corrections.

DrMux,

Seven passengers boarded. The rest are still trying to decipher their boarding pass.

DrMux,

I think you're right; it will probably never have particularly wide reach, but it will have (and to some extent already has) deep appeal.

What I mean is that people who are attracted to a platform like Lemmy are the kind of people who are likelier to have those niche passions and knowledge on those topics. And they are the kind of people who are also likelier to participate in communities around those things. No, not everyone, and yes there are still communities with a broader appeal and less depth, but I think my point is clear enough. It's just kind of intrinsic to how the platform works and how it is positioned in the broader internet space.

DrMux,

There's a huge difference between "lol le dum fat burger chez merica" and commentary about the history of the country and the patterns, systems, and dark truths that made it what it is today. Is there any one element in this meme that you'd argue is false?

DrMux,

Neither is very useful until you can correlate it with something else. Like, it takes a lot more effort to find someone in meatspace than in a database. Though, the number of databases with your face in them goes up faster every year.

DrMux,

Ceci n'est pas une meme.
