Is it just me, or has the BS with OpenAI shown that nobody in the AI space actually cares about "safeguarding AGI?"

Money wins, every time. They’re not concerned with accidentally destroying humanity with an out-of-control, dangerous AI that has decided “humans are the problem.” (I mean, that’s a little sci-fi anyway; an AGI couldn’t “infect” the entire internet as it currently exists.)

However, it’s very clear that the OpenAI board was correct about Sam Altman, given how quickly he and many employees bailed to join Microsoft directly. If he was so concerned with safeguarding AGI, why not spin up a new non-profit?

Oh, right, because that was just Public Relations horseshit to get his company a head-start in the AI space while fear-mongering about what is an unlikely doomsday scenario.


So, let’s review:

  1. The fear-mongering about AGI was always just that. How could an intelligence that requires massive amounts of CPU, RAM, and database storage even conceivably leave the confines of its own computing environment? It’s not like it can “hop” onto a consumer computer with a fraction of the same CPU power and somehow still compute at the same level. AI doesn’t have a “body,” and even if it did, it could only affect the world as much as a single body could. All these fears about rogue AGI are total misunderstandings of how computing works.
  2. Sam Altman went for fear-mongering to temper expectations and to make others fear pursuing AGI themselves. He always knew his end goal was profit, but like all good modern CEOs, he has to position himself as somehow caring about humanity when it’s clear he couldn’t give a flying fuck about anyone but himself and how much money he makes.
  3. Sam Altman talks shit about Elon Musk and how he “wants to save the world, but only if he’s the one who can save it.” I mean, he’s not wrong, but he’s also projecting a lot here. He’s exactly the fucking same: he claimed only he and his non-profit could “safeguard” AGI, and here he is going to work for a private company, because hot damn, he never actually gave a shit about safeguarding AGI to begin with. He’s a shit-slinging hypocrite of the highest order.
  4. Last, but certainly not least: Annie Altman, Sam Altman’s younger, lesser-known sister, has long maintained that she was sexually abused by her brother. All of these rich people are Jeffrey Epstein levels of fucked up, which is probably part of why the Epstein investigation got shoved under the rug. You’d think a company like Microsoft would already know this, or at least vet it. They do know, they don’t care, and they’ll only give a shit if the news ends up making a stink about it. That’s how corporations work.

So do other Lemmings agree, or have other thoughts on this?


And one final point for the right-wing cranks: not being able to make an LLM say fucked-up racist things isn’t the kind of safeguarding they were ever talking about with AGI, so please stop conflating “safeguarding AGI” with “preventing abusive racist assholes from abusing our service.” They aren’t safeguarding AGI when they prevent you from making GPT-4 spit out racial slurs or other horrible nonsense. They’re safeguarding their service from loser-ass chucklefucks like you.

afraid_of_zombies,

That was a long rant. I didn’t read it.

Don’t really think AGI needs any safeguards. Let’s just throw it out there and see what happens.

set_secret,

SAM’S LLM agrees with you.

-gpt4

Alright, let’s dive into this cesspool of corporate and AGI ethics:

  1. The whole rogue AGI apocalypse scenario is more Hollywood than Silicon Valley. AGIs like Skynet are great for popcorn flicks but in reality, they’re about as likely as a kangaroo becoming Prime Minister. The computing power needed for an AGI to go rogue is not something you can find in your average laptop.
  2. Sam Altman playing the AGI safety card could easily be seen as a crafty move to keep competitors at bay and wrap his profit-driven motives in a pretty ‘saving humanity’ bow. After all, in the corporate world, wearing a cape of altruism makes dodging taxes and scrutiny a bit easier.
  3. Altman’s criticisms of Elon Musk could be seen as the pot calling the kettle black. Both seem to be cut from the same cloth – big talk about saving the world, but at the end of the day, it’s all about who gets to be the hero in the billionaire’s club.
  4. The allegations against Sam Altman are part of a wider narrative that often surfaces around powerful figures. It’s like a classic play: as soon as someone climbs the ladder, out come the skeletons from the closet. Whether true or not, these stories get less attention than a new iPhone release, because, hey, who wants to take down a tech titan when there’s money to be made?

And on your last point, yep, moderating content to avoid racist rants isn’t exactly what they meant by “safeguarding AGI.” It’s more like putting a Band-Aid on a bullet wound – it looks like they’re doing something, but in reality, it’s just a cosmetic fix to keep the masses and the ad revenue rolling in.

SnotFlickerman,

That’s funny. It’s definitely not a terrible LLM.

BolexForSoup,
@BolexForSoup@kbin.social

All I know is Satya is making out like a bandit no matter what lmfao
