bappity,
@bappity@lemmy.world avatar

if A.I. dies out because capitalism I will wheeze

WrittenWeird, (edited )

The current breed of generative “AI” won’t ‘die out’. It’s here to stay. We are just in the early Wild-West days of it, where everyone’s rushing to grab a piece of the pie, but the shine is starting to wear off and the hype is juuuuust past its peak.

What you’ll see soon is the “enshittification” of services like ChatGPT as the financial reckoning comes, startup variants shut down by the truckload, and the big names put more and more features behind paywalls. We’ve gone past the “just make it work” phase, now we are moving into the “just make it sustainable/profitable” phase.

In a few generations of chips, the silicon will have caught up with the compute workload, and cost per task will drop. That’s the innovation to watch now: who will dethrone Nvidia and its H100?

lurch,

It can totally die out though. If people stop using it, it will fade into nothingness, like a Flash browser game.

iwenthometobeafamilyman, (edited )

Deleted comment

deranger,

It doesn’t even exist in the medical field, stop lying

iwenthometobeafamilyman, (edited )

Deleted comment

deranger, (edited )

devices marketed in the United States

Not a list of devices in use

None of this image reading tech you refer to exists in actual hospitals yet

iwenthometobeafamilyman, (edited )

Deleted comment

Sanctus,
@Sanctus@lemmy.world avatar

Flash games did not die out because people stopped playing them. The smartphone was created, and that changed the entire landscape of small game development.

o0joshua0o,

Steve Jobs killed Flash. It was premeditated.

bitteorca,
@bitteorca@artemis.camp avatar

Flash deserved to die

Sanctus,
@Sanctus@lemmy.world avatar

It was atrocious compared to what we have now. But god fucking dammit I love those games. They mean more to me than a lot of AAA studios.

FaceDeer, (edited )
@FaceDeer@kbin.social avatar

If it had been killed without an adequate replacement (e.g. mobile gaming), people wouldn't have let Flash die. There are open-source Flash players.

Szymon,

Flash games didn’t die on their own; the technology was purposefully killed off by similar corporate pressures to maximize profits.

kirklennon,

It died because Safari for iPhone supported only open web standards. Flash was also the leading cause of crashes on the Mac because it was so poorly-written. It was also a huge security vulnerability and a leading vector for malware, and Adobe just straight up wasn't able to get it running well on phones. Flash games were also designed with the assumption of a keyboard and mouse so many could never work right on touchscreen devices.

Szymon,

There you go, lots of reasons, courtesy of this person here.

GenderNeutralBro,

This is why I, as a user, am far more interested in open-source projects that can be run locally on pro/consumer hardware. All of these cloud services are headed down the crapper.

My prediction is that in the next couple years we’ll see a move away from monolithic LLMs like ChatGPT and toward programs that integrate smaller, more specialized models. Apple and even Google are pushing for more locally-run AI, and designing their own silicon to run it. It’s faster, cheaper, and private. We will not be able to run something as big as ChatGPT on consumer hardware for decades (it takes hundreds of gigabytes of memory at minimum), but we can get a lot of the functionality with smaller, faster, cheaper models.

WrittenWeird,

Definitely. I have experimented with image generation on my own mid-range RX GPU and though it was slow, it worked. I have not tried the latest driver update that’s supposed to accelerate those tools dramatically, but local AI workstations with dedicated silicon are the future. CPU, GPU, AIPU?

nodsocket,

Wait, you guys don’t already have hundreds of gigabytes of memory?

GenderNeutralBro,

Technically I could upgrade my desktop to 192GB of memory (4x48). That’s still only about half the amount required for the largest BLOOM model, for instance.

To go beyond that today, you’d need to move beyond the Intel Core or AMD Ryzen platforms and get something like a Xeon. At that point you’re spending 5 figures on hardware.

I know you’re just joking, but figured I’d add context for anyone wondering.

p03locke,
@p03locke@lemmy.dbzer0.com avatar

Don’t worry about the RAM. Worry about the VRAM.

nodsocket,

Google drive is my swap space

FaceDeer,
@FaceDeer@kbin.social avatar

Hundreds of gigabytes of memory in consumer PCs is not decades away. There are already motherboards that accept 128 GB.

GenderNeutralBro,

You’re right, I shouldn’t say decades. It will be decades before that’s standard or common in the consumer space, but it could be possible to run on desktops within the next generation (~5 years). It’d just be very expensive.

High-end consumer PCs can currently support 192GB, and that might increase to 256 within this generation when we get 64GB DDR5 modules. But we’d need 384 to run BLOOM, for instance. That requires a platform that supports more than 4 DIMMs, e.g. Intel Xeon or AMD Threadripper, or 96GB DIMMs (not yet available in the consumer space). Not sure when we’ll get consumer mobos that support that much.
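A quick back-of-envelope check on those numbers (a sketch only; it counts just the weights and ignores activations, KV cache, and OS overhead):

```python
# Memory needed just to hold a model's weights in RAM.
def weight_memory_gb(n_params: float, bytes_per_param: int) -> float:
    """GB required for weights alone (ignores activations and runtime overhead)."""
    return n_params * bytes_per_param / 1e9

bloom_params = 176e9  # BLOOM has ~176B parameters

print(weight_memory_gb(bloom_params, 2))  # fp16 weights: 352.0 GB
print(weight_memory_gb(bloom_params, 1))  # int8-quantized: 176.0 GB
```

Which is why ~384GB of installed RAM is the realistic floor for loading the fp16 weights with any headroom.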

_number8_,

GPT already got way shittier, going from the version we all saw when it first came out to the heavily curated, walled-garden version now in use

OldWoodFrame,

Yeah, so far. It’s super early in the modern incarnation of AI that actually has the chance to pay off, LLMs.

This isn’t like Bitcoin where there’s huge hype for a pretty small market opportunity. We all realize the promise, we are just still figuring out how to get rid of hallucinations and making it consistent and tuned to a certain business usage.

iwenthometobeafamilyman, (edited )

Deleted comment

deranger,

It’s not “paying off” as this isn’t implemented anywhere, thus not making money.

I think you’re way off the mark and buying into the hype. That’s my opinion as an electronic medical record software analyst.

iwenthometobeafamilyman, (edited )

Deleted comment

deranger, (edited )

I’m in cardiology - radiology is right across the hall. We’re both under ancillary services. Lab techs are not rad techs, and rad techs don’t read films. You don’t work in this field, do you?

None of this is implemented so none of it is paying off.

Also nobody thinks this will take their jobs, because it looks like Theranos to us, in that it’s very hyped in tech and ridiculous to those of us in the medical field.

iwenthometobeafamilyman, (edited )

Deleted comment

deranger, (edited )

Dude, I am an Epic analyst. We’re a 10-star organization (i.e. cutting-edge adoption of features).

I don’t know how to tell you how wrong you are. None of this is even remotely near production.

Helping with SlicerDicer queries is not reading a film. This is a ridiculous comparison.

Again, even language processing features are not remotely near production. It’s not even in any proof of concept environments.

iwenthometobeafamilyman, (edited )

Deleted comment

Kbin_space_program, (edited )

Well, and also navigating the minefield of LLMs absolutely containing copyrighted material that wasn't paid for or licensed. E.g. DALL-E can produce a full image of Fresh Cut Grass, a character owned by Critical Role.

And that the stuff they produce isn't copyright-able.

FaceDeer,
@FaceDeer@kbin.social avatar

And that the stuff they produce isn't copyright-able.

Even if that were true, is there no value in public domain art resources?

Kbin_space_program,

Not to the companies looking to use AI.

FaceDeer,
@FaceDeer@kbin.social avatar

Exhibit A, Disney, a giant megacorp whose most famous works are literally founded on public domain material.

Bear in mind that public domain is not like a copyleft license, it's not "viral." If I make a movie and the Mona Lisa shows up in it, that movie is still copyright to me even though there's a public domain element in it. It's even easier with unique AI-generated stuff because you can't even tell what's public domain and what isn't.

Kbin_space_program,

Something has to be ownable to be public domain. AI-produced items are un-ownable: the AI would be the owner, but it can't own them since it's legally a "tool".

FaceDeer,
@FaceDeer@kbin.social avatar

You are deeply confused about what "public domain" means. Something that is un-ownable (in an intellectual property sense) is public domain.

You may be referring to the Thaler v. Perlmutter case when you say "AI is the owner?" That's a widely misunderstood case that's gone through quite the game of telephone in the media. The judge in it ruled that an AI cannot own copyright, but that doesn't mean that AI-produced art is uncopyrightable. Just that AIs aren't people, from a legal perspective, and you need to be a legal person to own copyright. If Thaler had claimed copyright for himself, as a person, things might have gone differently. But he didn't.

iwenthometobeafamilyman, (edited )

Deleted comment

Potatos_are_not_friends, (edited )

So much fucking this.

Every cash grab right now around AI is just a frontend for a chatGPT API. And every investor who throws money at them is the mark. And now they’re crying a river.

Nougat,

Never mind that LLMs are a far cry from AI.

Lmaydev, (edited )

They are literally AI, neural networks specifically. As are path finding algorithms for games.

People just don’t get what AI is. Any program that simulates intelligence is AI.

You’re likely thinking of general AI from sci-fi.

QuaternionsRock,

the capability of computer systems or algorithms to imitate intelligent human behavior

I don’t know about you, but I would consider writing papers/books/essays/etc. (even bad ones) and code (even with mistakes) intelligent human behavior, and they’re pretty good at imitating it.

trashgirlfriend,

IRL, people are doing some amazing things with generative AI, esp in 2D graphic art.

Woah, shiny bland images that are a regurgitation of stolen artwork!!!

ComradeBunnie,
@ComradeBunnie@aussie.zone avatar

It’s also helped me find the names of several books and films that have been rattling around in my mind, some for decades, which actually made me very happy because not remembering that sort of thing drives me a little mad.

I’m stuck on two books that it can’t work out - both absolute trash pulp fiction, one that I stopped reading because it was so terrible and the other that was so bad but I actually wouldn’t mind reading again.

Oh well, can’t have it all.

FLX,

people are doing

No they ain’t doing shit, they just prompt

TWeaK,

Sounds like the internet in the 90s.

1bluepixel, (edited )
@1bluepixel@lemmy.world avatar

It also reminds me of crypto. Lots of people made money from it, but the reason why the technology persists has more to do with the perceived potential of it rather than its actual usefulness today.

There are a lot of challenges with AI (or, more accurately, LLMs) that may or may not be inherent to the technology. And if issues cannot be solved, we may end up with a flawed technology that, we are told, is just about to finally mature enough for mainstream use. Just like crypto.

To be fair, though, AI already has some very clear use cases, while crypto is still mostly looking for a problem to fix.

iopq,

I’m still trying to transfer $100 from Kazakhstan to me here. By far the lowest fee option is actually crypto since the biggest difference is the currency conversion. If you have to convert anyway, might as well only pay 0.30% on both ends

demesisx, (edited )
@demesisx@infosec.pub avatar

Look into DJED on Cardano. It’s WAY cheaper than ETH (but perhaps not cheaper than some others). A friend of mine sent $10,000 to Thailand for less than a dollar in transaction fees. To 1bluepixel: Sounds like a use-case to me!

FaceDeer,
@FaceDeer@kbin.social avatar

Layer-2 rollups for Ethereum are also way cheaper than the base layer, this page lists the major ones.

demesisx, (edited )
@demesisx@infosec.pub avatar

Hmm.

You still have to deal with ETH fees just to get the funds into the rollup. I admit that ETH was revolutionary when it was invented, but the insane fee market makes it a non-starter, and the accounts model is a preposterously bad (and actually irreparably broken) design decision for a decentralized network. It makes Ethereum near impossible to parallelize, since the main chain is required for state and the contracts that run on it are non-deterministic.

FaceDeer,
@FaceDeer@kbin.social avatar

There are exchanges where you can buy Ether and other tokens directly on a layer 2, once it's on layer 2 there are no further fees to get it there.

Layer 2 rollups are a way to parallelize things, the activity on one layer 2 can proceed independently of activity on a different layer 2.

I have no idea why you think contracts on Ethereum are nondeterminstic, the blockchain wouldn't work at all if they were.

demesisx, (edited )
@demesisx@infosec.pub avatar

I think that because it’s true. Smart contracts on Ethereum can fail and still charge the wallet. Because of the open-ended nature of Ethereum’s design, a wallet can be empty when the contract finally executes, causing a failure. This doesn’t happen in Bitcoin and other UTXO chains like Ergo and Cardano (where all transactions must have both inputs and outputs accounted for FULLY to execute). UTXO boasts determinism, while the accounts model can fail due to an empty wallet. Determinism makes concurrency harder for sure… but at least your entire chain isn’t one gigantic unsafe state machine. Ethereum literally is, by definition, non-deterministic.

oroboros,

If only I had some money to transfer somewhere :(

demesisx, (edited )
@demesisx@infosec.pub avatar

Crypto found a problem to fix. The reason the problem remains: everything is run by the people who are the problem, so crypto was astroturfed to death by the parties that run the current financial system and by the enemy of their enemy (who’s a friend): opportunistic scammers like SBF and Do Kwon.

p03locke, (edited )
@p03locke@lemmy.dbzer0.com avatar

No, this isn’t crypto. Crypto and NFTs were offering worse solutions to problems that already had solutions, and hidden in the messaging was that rich people wanted poor people to freely gamble away their money in an unregulated market.

AI has real, tangible benefits that are already being realized by people who aren’t part of the emotion-driven ragebait engine. Stock images are going to become extinct within several years. People can make at least a baseline image of what they want, regardless of artistic ability. Musicians are starting to use AI tools. ChatGPT makes it easy to generate low-effort but time-consuming letters and responses: item descriptions, HR responses, and other common drafts. Code AI engines let programmers present reviewable solutions in real time, or at least something to generate and tweak. None of this is perfect, but it’s good enough for 80% of the work, which can then be modified after the initial pass.

Things like chess AI have existed for decades, and LLMs are just extensions of the existing generative AI technology. I dare you to tell Chess.com that “AI is a money pit that isn’t paying off”; they would laugh their fucking asses off, as they are actively pouring even more money and resources into Torch.

The author here is a fucking idiot. And he didn’t even bother to change the HTML title (“Microsoft’s Github Copilot is Losing Huge Amounts of Money”) from its original focus of just Github Copilot. Clickbait bullshit.

Revonult,

I totally agree. However, I do feel like the market around AI is inflated like NFTs and Crypto. AI isn’t a bust, there will be steady progress at universities, research labs, and companies. There is too much hype right now, slapping AI on random products and over promising the current state of technology.

p03locke, (edited )
@p03locke@lemmy.dbzer0.com avatar

slapping [Technology X] on random products and over promising the current state of technology

A tale as old as time…

Still waiting on those “self-driving” cars.

instamat,

Self driving will be available next year.*

*since 2014

DudeDudenson,

I love how companies suddenly started advertising things as AI that would have been called a chatbot a year ago. I saw a news article headline the other day that said judges were going to significantly improve the time they took to render judgments by using AI.

Reading the content of the article, they went on to explain that they would use it to draft the documents. It's like they've never heard of templates.

thecrotch,

Let’s combine AI and crypto, and migrate it to the cloud. Imagine the PowerPoints middle managers will make about that!

Lmaydev,

Or computers decades before that.

Many of these advances are incredibly recent.

And also many of the things we use in our day to day are ai powered without people even realising.

elbarto777,

AI powered? Like what?

Lmaydev, (edited )

fusionchat.ai/…/10-everyday-ai-applications-you-d…

Some good examples here.

Most social media uses it. Video and music streaming services. SatNav. Speech recognition. OCR. Grammar checks. Translations. Banks. Hospitals. Large chunks of internet infrastructure.

The list goes on.

elbarto777,

Got it. Thanks.

TWeaK,

The key fact here is that it’s not “AI” as conventionally thought of in all the sci-fi media we’ve consumed over our lifetimes, but AI in the form of a product that the tech companies of the day are marketing. It’s really just a complicated algorithm based on an expansive dataset, rather than something that “thinks”. It can’t come up with new solutions, only re-use previous ones; it wouldn’t be able to take one solution for one thing and apply it to a different problem. It still needs people to steer it in the right direction, and to verify its results are even accurate. However, AI is now probably better than people at identifying previous problems and remembering the solution.

So, while you could say that lots of things are “powered by AI”, you can just as easily say that we don’t have any real form of AI just yet.

elbarto777,

Oh but those pattern recognition examples are about machine learning, right? Which I guess it’s a form of AI.

TWeaK,

Perhaps, but at best it’s still a very basic form of AI, and maybe shouldn’t even be called AI. Before things like ChatGPT, the term “AI” meant a full blown intelligence that could pass a Turing test, and a Turing test is meant to prove actual artificial thought akin to the level of human thought - something beyond following mere pre-programmed instructions. Machine learning doesn’t really learn anything, it’s just an algorithm that repeatedly measures and then iterates to achieve an ideal set of values for desired variables. It’s very clever, but it doesn’t really think.
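That “repeatedly measures and then iterates” loop is, at its core, gradient descent. A toy sketch (made-up data, one weight being fitted to y = 2x):

```python
# Minimal "measure, then iterate" loop: gradient descent nudging one weight
# toward the value that minimizes squared error on toy data.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # points on the line y = 2x
w = 0.0    # initial guess for the weight
lr = 0.05  # learning rate

for _ in range(200):
    # Measure: average gradient of the squared error (w*x - y)^2 w.r.t. w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    # Iterate: nudge w against the gradient
    w -= lr * grad

print(round(w, 3))  # → 2.0
```

It’s very clever, but the “learning” is just this measure-and-adjust loop run many times.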

elbarto777,

I have to disagree with you in the machine learning definition. Sure, the machine doesn’t think in those circumstances, but it’s definitely learning, if we go by what you describe what they do.

Learning is a broad concept, sure. But say a kid is learning to draw apples, and later succeeds in drawing apples without help; we could say that the kid achieved “that ideal set of values.”

TWeaK,

Machine learning is a simpler type of AI than an LLM, like ChatGPT or AI image generators. LLM’s incorporate machine learning.

In terms of learning to draw something, after a child learns to draw an apple they will reliably draw an apple every time. If AI “learns” to draw an apple it tends to come up with something subtly unrealistic, e.g. the apple might have multiple stalks. It fits the parameters it’s learned about apples, parameters which were prescribed by its programming, but it hasn’t truly understood what an apple is. Furthermore, if you applied the parameters it learned about apples to something else, it might completely fail to understand it altogether.

A human being can think and interconnect their thoughts much more intricately; we go beyond our basic programming and often apply knowledge learned in one thing to something completely different. Our understanding of things is much more expansive than AI’s. AI currently has the basic building blocks of understanding, in that it can record and recall knowledge, but it lacks the full web of interconnections between different pieces and types of knowledge that human beings develop.

elbarto777, (edited )

Thanks. I understood all that. But my point is that machine learning is still learning, just like machine walking is still walking. Can a human being be much better at walking than a machine? Sure. But that doesn’t mean that the machine isn’t walking.

Regardless, I appreciate your comment. Interesting discussion.

Aceticon,

Automated mail sorting has been using AI to read post codes from envelopes for decades, only back then, pre-hype, it was just called neural networks.

That tech is almost 3 decades old.

elbarto777,

But was it using neural networks or was it using OCR algorithms?

DudeDudenson,

I love people who talk about AI that don’t know the difference between an LLM and a bunch of if statements

Aceticon, (edited )

At the time I learned this at Uni (back in the early 90s) it was already NNs, not algorithms.

(This was maybe a decade before OCR became widespread)

In fact, a coursework project I did there was recognition of handwritten numbers with a neural network. The thing was amazingly good (our implementation actually had a bug and still managed to be almost 90% correct on a test data set, so it somehow mostly worked its way around the bug), and it was a small NN with no need for massive training sets (which is the main difference between Large Language Models and more run-of-the-mill neural networks), at a time when algorithmic number and character recognition were considered a very difficult problem.

Back then Neural Networks (and other stuff like Genetic Algorithms) were all pretty new and using it in automated mail sorting was recent and not yet widespread.

Nowadays you have it doing stuff like face recognition, built-in on phones for phone unlocking…

elbarto777,

Very interesting. Thanks for sharing!

Potatos_are_not_friends,

These are the same kind of people who go, “We spent money on Timmy’s clothes for over two years and it’s not paying off.”

Bro, AI is an investment.

bluGill,
@bluGill@kbin.social avatar

It is a risky investment. Taking care of your kid is something we have done enough that we understand the risks and payoffs, and most parents can make a reasonable prediction. (A few kids will "turn 21 in prison doing life without parole", but most turn out okay, return love to their parents, and attempt to improve society - though you may not agree with their definition of improving society.)

I have no idea if the current faults with AI will be solved or not. That is a risk you are taking. It is useful for some things, but we don't know how useful.

iopq,

There’s also the “not in prison, but mostly just lives at home and smokes weed” money pit of children

My childhood friend ended up this way and I’ve given up on him

treadful,
@treadful@lemmy.zip avatar

People are literally paying monthly subscriptions for access to a bunch of these things.

ripcord, (edited )
@ripcord@kbin.social avatar

Did you read the article? The problem hasn't been getting some people to pay for some things, it's that the things that are available so far are losing loads of money. Or at least, that's the premise.

alienanimals,

AI isn’t paying off if you’re too dumb to figure out how to use the many amazing tools that have come about.

BolexForSoup, (edited )
@BolexForSoup@kbin.social avatar

I was going to say... I use AI transcription tools for video editing and AI upscaling, and Resolve dropped an incredible AI green screen tool that makes keying effortless. I also started using AI to repair audio about 6 months ago lol. I don't think I've gone more than 48 hours without using an AI tool professionally.

NegativeLookBehind,
@NegativeLookBehind@kbin.social avatar

I wonder if “AI not paying off” in the context of this article actually means “Companies haven’t been able to lay off a bunch of their staff yet, like they’re hoping to do”

ripcord,
@ripcord@kbin.social avatar

If anyone read the article you'd know what they meant, and it wasn't either of the things you two mentioned.

NegativeLookBehind,
@NegativeLookBehind@kbin.social avatar

Yea I didn’t read it. But isn’t it safe to assume that this is a major goal for many companies?

ripcord,
@ripcord@kbin.social avatar

Read the article

NegativeLookBehind,
@NegativeLookBehind@kbin.social avatar

I can’t read :(

ripcord,
@ripcord@kbin.social avatar

I'm sorry to - hey wait a minute

alienanimals, (edited )

I know exactly what the journalist meant. They meant to get more clicks with some click bait headline and a bad article that will make them look extremely stupid in the future.

TimewornTraveler,

It’s about MASSIVE CARBON FOOTPRINT and a waning userbase

Semi-Hemi-Demigod,
@Semi-Hemi-Demigod@kbin.social avatar

AI is a lot more like the Internet than it is like Facebook. It's a set of techniques you can use to create tools. These are incredibly useful tools, but you're not going to make Facebook money off of them because the techniques are pretty easy to replicate and the genie is out of the bottle.

What the tech bros are looking for is a way to control access to AI so they can be a chokepoint. Like if Craftsman could charge for every single time you used their tool to make something. For one very recent example, see what happened to Unity. Creating chokepoints and then collecting rent is the modern corporate feudal strategy, but that won't work if everybody with an AWS account and enough money can spin up an LLM and start training it.

RickyRigatoni,
@RickyRigatoni@lemmy.ml avatar

What is this AI tool to repair audio? Would it be able to fix poorly compressed audio?

BolexForSoup, (edited )
@BolexForSoup@kbin.social avatar

Yes I use it all the time. Adobe Audio Enhance. It’s the flagship feature of their upcoming podcast app, but you can use it in browser currently. If you have an adobe subscription, it doesn’t charge extra or anything. It’s only for spoken word though, not music. If you throw music on it, though, you get some pretty wild stuff as it tries to create words out of the sounds.

To further answer your question: yes, it is actually very good with highly compressed audio. I regularly feed it Zoom audio to make it more intelligible. Obviously there are always limits, but I assure you it can do more than you can manually 85% of the time, and by a large margin. My only frustration is that it’s a simple slider; you can’t really fine-tune it. But it’s still incredibly effective, and I often use it as a first pass on the original audio file before I even start editing.

_number8_,

AI stem splitting for songs is magical as well

mPony,

@BolexForSoup can you recommend a good quality Upscaler ?

BolexForSoup, (edited )
@BolexForSoup@kbin.social avatar

Topaz Labs makes a decent one. You’ll need to do a lot of trial and error to kind of find your own favorite settings for baking, but as far as cost and efficacy go, there aren’t a lot better out right now.

They do a watermark-free version you can test with. I think it also only lets you do a couple of minutes of video at a time. But frankly it’s incredibly processor-intensive, so you’ll only want to test a 15-20s clip at a time anyway.

stealth_cookies,

The problem here is that AI in the media has become synonymous with generalized LLMs, while other “AI” applications have been in place for many years doing more specific things that have more obvious use cases that can be more easily commercialised.

usualsuspect191,

Can something be a money pit and pay off? I feel like not paying off is part of the definition of a money pit… Or was the headline written by AI?

art,
@art@lemmy.world avatar

I think they mean that it’s costing companies a lot of money to operate but their returns aren’t high enough to justify the costs.

Lugh,
@Lugh@futurology.today avatar

It should also worry investors that open-source AI is only months behind the big tech leaders. I looked into AI voice cloning lately. There are a few really pricey options, like $25 a month for a couple of hours of voice cloning.

However, there’s already an open-source version of what they’re selling.

RanchOnPancakes,
@RanchOnPancakes@lemmy.world avatar

That’s how this works. Blow through VC money trying to “strike gold,” fail, change the model to “become profitable,” move on to the next scam.

InternetTubes,

Well, they don’t want to do the one thing needed to make it successful: transparency. Maybe it can’t be.

btaf45, (edited )

So far what I’ve seen from AI is that it lies and lies and lies. It lies about history. It lies about science. It lies about politics. It lies about case law. It lies about programming libraries. Maybe this will all be fixed some day, or maybe it will just get worse. Until then, the only thing I would trust it with is something for which there is no wrong answer.

RagingRobot,

I never ask it things I don’t know. I don’t think that’s really what’s it’s useful for. It’s really good at combining words though. So it can write a better sentence than I could. Better in a sense that it’s easier for others to understand what my thoughts are if I feed them in as input. Since they were my thoughts originally I can spot the bullshit pretty fast.

Nobody,

Who could have predicted writing bullshit-y papers for kids in school wasn’t a billion dollar business?

eltrain123,

Do people really not understand that we are in the early stages of ai development? The first time most people were made aware of LLMs was, like, 6 months ago. What ChatGPT can do is impressive for a self contained application, but is far from mature enough to do the things people are complaining it can’t do.

The point the industry is trying to warn about is that this technology is past its infancy and moving into, from a human comparison standpoint, childhood or adolescence. But, it iterates significantly faster than humans, so the time it can do the type of things people are bitching about is years, not decades, away.

If you think businesses have sunk this much money and effort into AI and didn’t do a cost-benefit analysis that stretched out decades, you are being naive or disingenuous.

flumph,
@flumph@programming.dev avatar

If you think businesses have sunk this much money and effort into AI and didn’t do a cost-benefit analysis that stretched out decades, you are being naive or disingenuous.

Are you kidding? We literally just watched the same bubble form and burst with companies that rushed to get their piece of the Metaverse and NFT cash grabs. I worked at a SaaS company that decided to add AI features because it was in the news and Azure offered it as a service. There was zero financial analysis done, just like for every other feature they added.

I’m sure Microsoft has a plan since they invested heavily. But even Google is playing catch-up like they did with GCP.

atetulo,

AI is actually useful.

The metaverse and NFTs aren’t.

Your analogy is not a 1:1 representation of the situation and only serves to distract from the topic at hand.

jj4211,

But there is a similarity, the hype pulls in all sorts of companies to blindly add buzzwords without even knowing how it might possibly apply to their product, even if it were the perfect realization of the ideal.

Yes AI techniques obviously have utility. 90% of the spend is by companies that don’t even know what that utility might be. With that much noise, it’s hard to keep track of the value.

atetulo,

But there is a similarity, the hype pulls in all sorts of companies to blindly add buzzwords without even knowing how it might possibly apply to their product

Yes, I see what you are saying. I guess we can add ‘blockchain’ to that list, then.

atetulo,

Do people really not understand that we are in the early stages of ai development?

Yes. Top post in this thread is someone cheering that AI won’t replace people in hollywood.

Just give it time. Remember how poor voice recognition and translation software was at first?

MargotRobbie,
@MargotRobbie@lemmy.world avatar

Top post in this thread is someone cheering that AI won’t replace people in hollywood.

I really like how I’m just “someone” here now.

Stabbitha,

To be fair, who pays attention to user names?

MargotRobbie,
@MargotRobbie@lemmy.world avatar

I do. 🥺

vrighter,

Pretty much all improvements aren’t “better tech”, just “bigger tech”. Reducing their footprint is an unsolved problem (just as it has always been with neural networks, for decades).

WhiteHawk,

Optimization is a problem that cannot be “solved” by definition, but a lot of work is being done on it with some degree of success
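One concrete, partially solved piece of the footprint problem is quantization: storing weights in 8 bits instead of 16 or 32. A bare-bones sketch of a symmetric int8 scheme on toy weights (real implementations are per-channel and more careful):

```python
# Post-training quantization at its simplest: map float weights to int8 and back.
weights = [-0.42, 0.07, 0.31, -0.88, 0.55]  # toy weight values

scale = max(abs(w) for w in weights) / 127  # one scale for the whole tensor

quantized = [round(w / scale) for w in weights]  # stored as int8: 1 byte each
dequantized = [q * scale for q in quantized]    # reconstructed at inference time

max_err = max(abs(w - d) for w, d in zip(weights, dequantized))
print(quantized)
print(max_err <= scale / 2)  # rounding error bounded by half a step → True
```

Half the memory (vs fp16) for a bounded per-weight error, which is why quantized models dominate the local-inference scene.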

macallik,

What I don't like about the article is that the phrasing 'paying off' can apply to making investors money OR having worthwhile use cases. AI has created plenty of use cases from language learning to code correction to companionship to brainstorming, etc.

It seems ironic that a consumer-facing website is framing things from a skeptical "But is it making rich people richer?" perspective

xantoxis,

In my case, I still want to know if it’s not making rich people richer, because a) fuck rich people, and b) I don’t want to buy into things that will disappear in a year when the hype dies down. As a “consumer” my purchasing decisions impact my life, and the actions of the wealthy affect that more than you’d like.

kromem,

Great, now factor in the value of the data collection; the usage they’re subsidizing is effectively getting them free RLHF…

The one thing that’s been pretty much a guarantee over the last 6 months is that if there’s a mainstream article with ‘AI’ in the title, there’s going to be idiocy abound in the text of it.
