anewbeginning,

Calling LLMs intelligent is what caused this mass hysteria.

xapr,

Another good article to read for balance and more background on why some people may be trying to restrict “AI”: theconversation.com/no-ai-probably-wont-kill-us-a…

Hrafyn,
@Hrafyn@kbin.social avatar

Thou shalt not make a machine in the likeness of a human mind.

Raphael,
@Raphael@lemmy.world avatar

As long as AI doesn’t go around financing wars around the globe and sanctioning or outright bombing opposing countries, I’m fine with it.

Nikku772,
@Nikku772@kbin.social avatar

I’m not worried at all. I look forward to our AGI overlords.

CaptainFlintlockFinn,

Covering your bases I see

Spacebar,
@Spacebar@lemmy.world avatar

Large Language Models are nothing but very advanced regurgitation machines. That’s the AI these articles are hand-wringing about - not a real Artificial Intelligence.

These articles remind me of the bitcoin articles we used to see.

nanoobot,
@nanoobot@kbin.social avatar

What ability do you think that they are currently missing that makes them 'regurgitation machines' rather than just limited and dumb but genuine early AI?

AFaithfulNihilist,
@AFaithfulNihilist@lemmy.world avatar

Discrete object recognition. Right now they can’t answer the simplest questions that require counting discrete objects, which to me implies they have no discrete object sense at all. They’re just looking for word patterns.

I’ll give you an example. If you were to ask one of these, “I was on my way to the store when I saw a sow with six piglets, how many feet do we have?”

We could have a lively debate about what answers would be acceptable. Maybe there are only 2 feet because the rest are hooves, maybe there are 30 because that’s how many foot-like appendages there are in total, but the answer it gives you will make absolutely no sense.

ChatGPT will be like, “11” or “15”, and if you ask any follow-up questions it genuinely has no answer for how any of these objects could be discretely counted or partitioned. It can try to explain itself but quickly starts babbling nonsense.

nanoobot,
@nanoobot@kbin.social avatar

I think that might be a ChatGPT-specific thing. I tried with Bing in precise mode and it responded with this:

"A sow is an adult female pig and piglets are baby pigs. Pigs have four feet, so a sow with six piglets would have a total of 28 feet (4 feet for the sow + 6 piglets * 4 feet each). Is that what you were asking?"

m532,

Sentience

nanoobot,
@nanoobot@kbin.social avatar

Why does an AI have to be sentient to be intelligent?

ShoePaste,
@ShoePaste@lemmy.ml avatar

Shit, I dunno, everyone dying the same instant doesn’t sound so bad. Quick and painless is certainly better than the options most of us face ¯\_(ツ)_/¯

Rhoeri,
@Rhoeri@lemmy.world avatar

AI is the be-all-end-all worst idea humans ever conceived.

Potato,

It is both the best and worst idea humans ever conceived.

orclev, (edited )

And monkeys could fly out of my ass. Before we start hand-wringing about AI, someone would probably need to actually invent one. We’re probably closer to actual room-temperature fusion at this point than we are to an actual general-purpose AI.

Instead of wasting time worrying about a thing that doesn’t even exist and probably won’t in any of our lifetimes, we should probably do something about the things actually killing us, like global warming and unchecked corporate greed.

xapr,

Exactly. There was an article floating around just a couple of days ago that, from what I recall, said billionaires were funding these AI-scare studies at top universities, I presume to distract the public from the very real and near-term threats of climate disaster, economic inequality, etc. Here, unfortunately paywalled: washingtonpost.com/…/ai-apocalypse-college-studen…

fubo,

A lot of the folks worried about AI x-risk are also worried about climate, and pandemics, and lots of other things too. It’s not like there’s only one threat.

roq,

@fubo @xapr I don’t doubt that, but that begs the question of whether the unrealistic concerns raised by those folks outweigh the realistic ones that need more actual attention and funding. For example, how much money are the billionaires and top elites putting in to solve climate change and past/future pandemics compared to studying AI-driven doom? I don’t know the answer, and I welcome you to find out.

Dreyns,

It’s all about risks: if you worry about being run over, OK, that’s reasonable, but if you worry about shark attacks when you live in the forest, it’s ludicrous and a waste of time.

Vittelius,

There is this concept called “criti-hype”. It’s a type of marketing masquerading as criticism. “Careful, AI might become too powerful” is exactly that.

HumanPenguin,
@HumanPenguin@feddit.uk avatar

like global warming and unchecked corporate greed.

And the unnecessary cruelty @orclev puts poor monkeys through.

Come on, man, let the poor things out. No matter what they did to you, they don’t deserve that.

mrnotoriousman,

I absolutely hate this craze. Most of the questions I get about AI are just facepalm-inducing, because everyone is feeding off each other with these absurd things that could hypothetically happen. Clearly because actually explaining it doesn't generate clicks and controversy.

orclev,

Clearly because actually explaining it doesn’t generate clicks and controversy

Solving real problems is hard (if it wasn’t, they would be solved already), but making up fake problems is really easy.
