efrique,

Just need to get AI on that.

nucleative,

We need to embrace AI written content fully. Language is just a protocol for communication. If AI can flesh out the “packets” for us nicely in a way that fits what the receiving humans need to understand the communication then that’s a major win. Now I can ask AI to write me a nice letter and prompt it with a short bulleted list of what I want to say. Boom! Done, and time is saved.

The professional writers who used to slave over a blank Word document are now obsolete, just like the slide rule “computers” of old (the people who could solve complicated mathematics and engineering problems on paper).

Teachers who thought a handwritten report could be used to prove that “education” has happened are now realizing that the idea was a crutch (it was 25 years ago too, when we could copy/paste Microsoft Encarta articles and use them as our research papers).

The technology really just shows us that our language capabilities are a means to an end. If a better means arises, we should figure out how to maximize it.

ram,
@ram@lemmy.ca avatar

Huh?

Shameless,

I just realised that, especially in teaching, people are treating these LLMs the same way that I remember teachers in school treating computers and later the internet.

“Now class, you need a 5 page essay on Hamlet by next Friday. It should be handwritten and no copying from the internet!! It needs to be handwritten because you can’t always rely on computers to be there…”

Turun,

Or, because you can’t rely on computers to tell you the truth. Which is exactly the issue with LLMs as well.

sfgifz,

You can’t rely on books or people to tell you the truth either.

atrielienz,

Which is why bibliographies exist.

Turun,

I was mostly referring to the top comment. If you need to write an essay on Hamlet, the book can in fact not lie, because the entire exercise is to read the book and write about the contents of it.

But in general, you are right. (Which is why it is proper journalistic procedure to talk to multiple experts about a topic you write about. Also, a good article does not present a foregone conclusion, but instead lets readers form their own opinion on a topic by providing the necessary context and facts without the author’s judgement. LLMs as a one-stop shop do not provide this, and are less reliable than listening to a single expert would be.)

Boddhisatva,

OpenAI discontinued its AI Classifier, which was an experimental tool designed to detect AI-written text. It had an abysmal 26 percent accuracy rate.

If you ask this thing whether or not some given text is AI generated, and it is only right 26% of the time, then I can think of a real quick way to make it 74% accurate.

Leate_Wonceslace,
@Leate_Wonceslace@lemmy.dbzer0.com avatar

I feel like this must stem from a misunderstanding of what 26% accuracy means, but for the life of me, I can’t figure out what it would be.

dartos,

Looks like they got that number from this quote from another Ars Technica article: “…OpenAI admitted that its AI Classifier was not ‘fully reliable,’ correctly identifying only 26 percent of AI-written text as ‘likely AI-written’ and incorrectly labeling human-written works 9 percent of the time.”

Seems like it mostly wasn’t confident enough to make a judgement: 26% of the time it correctly detected AI text, and 9% of the time it incorrectly identified human text as AI text. It doesn’t tell us how often it labeled AI text as human text, or how often it was just unsure.
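To put those two reported numbers together, here’s a rough sketch, assuming (as the quote suggests) that 26% is the true positive rate on AI text and 9% is the false positive rate on human text — the article doesn’t break it down any further:

```python
# Rough sketch of the reported classifier behaviour, assuming the 26%
# figure is the true positive rate on AI text and the 9% figure is the
# false positive rate on human-written text.
def classifier_outcomes(n_ai: int, n_human: int,
                        tpr: float = 0.26, fpr: float = 0.09):
    """Expected counts for a batch of AI-written and human-written texts."""
    true_positives = n_ai * tpr           # AI text correctly flagged as AI
    false_positives = n_human * fpr       # human text wrongly flagged as AI
    not_flagged = n_ai * (1 - tpr)        # AI text missed, or marked "unsure"
    return true_positives, false_positives, not_flagged

tp, fp, missed = classifier_outcomes(n_ai=100, n_human=100)
print(tp, fp, missed)
```

So out of 100 AI-written essays, roughly 74 would slip through unflagged, while about 9 of every 100 honest students would be wrongly accused.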

EDIT: this article arstechnica.com/…/openai-discontinues-its-ai-writ…

schzztl,

Specificity vs sensitivity, no?

cmfhsu,

In statistics, everything is based on probability/likelihood, even binary yes-or-no decisions. For example, you might say “this predictive algorithm must be at least 95% statistically confident of an answer, else it defaults to unknown or another safe answer”.

What this likely means is that only 26% of the answers were confident enough to say “yes” (because falsely accusing somebody of cheating is much worse than giving the benefit of the doubt) and were correct.

There is likely a large portion of answers which could have been predicted correctly if the company had been willing to chance more false positives (potentially getting students mistakenly expelled).
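A toy illustration of that trade-off (the scores below are invented, not from OpenAI’s tool): raising the confidence threshold cuts false accusations, but it also cuts detections.

```python
# Invented detector confidence scores, purely for illustration.
ai_scores = [0.97, 0.91, 0.85, 0.62, 0.55]     # scores for AI-written texts
human_scores = [0.88, 0.40, 0.30, 0.20, 0.10]  # scores for human-written texts

def evaluate(threshold: float):
    """Count AI texts caught and human texts falsely accused at a threshold."""
    caught = sum(s >= threshold for s in ai_scores)
    falsely_accused = sum(s >= threshold for s in human_scores)
    return caught, falsely_accused

print(evaluate(0.95))  # strict: (1, 0) — almost no accusations, most AI text missed
print(evaluate(0.50))  # lax:    (5, 1) — catches everything, accuses a real student
```

With these made-up numbers, the strict threshold never accuses a human but misses four of five AI texts; the lax one catches all five but wrongly flags a human. That’s the dial the company chose to keep turned toward “safe”.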

notatoad,

it seemed like a really weird decision for OpenAI to have an AI classifier in the first place. Their whole business is to generate output that’s good enough that it can’t be distinguished from what a human might produce, and then they went and made a tool to try to point out where they failed.

Boddhisatva,

That may have been the goal. Look how good our AI is, even we can’t tell if its output is human generated or not.

Matriks404,

Did human-generated content really become so low quality that it is indistinguishable from AI-generated content?

technicalogical,

Should I be able to detect whether or not this is an AI generated comment?

nodsocket,

As an AI language model, I am unable to confirm whether or not the above post was written by an AI.

Arsenal4ever,

have you seen exTwitter?

DogMuffins,

Not necessarily. It’s just that AI’s can’t tell the difference.

Although I don’t know whether humans can.

funktion,

People kind of just suck at writing in general. It’s not a skill that’s valued so much, otherwise writers, editors, and proofreaders would be paid more.

Jargus,

So democracy is basically fucked, and countries without freedom of expression/speech have an advantage, while our social media will be a cesspool that divides and weakens our societies. The future looks bright /s

robbotlove,

this comment could have been written in 2005 and still have been true.

SpaceCowboy,
@SpaceCowboy@lemmy.ca avatar

AI might democratize grifting. You will no longer need the resources that Russia and China have devoted to this kind of thing. Anyone will be able to generate vast amounts of fake inflammatory rhetoric.

Then once there’s a 99.9% chance that the person you’re talking to on social media is an AI, people might realize how stupid it is to believe anything they read on the internet.

Blackmist,

The only thing AI writing seems to be useful for is wasting real people’s time.

itsmaxyd,

True -

  1. Write points/summary
  2. Have AI expand it into many words
  3. Post
  4. Reader uses AI to summarize the post, preferably back into points
  5. Profit??

driving_crooner,
@driving_crooner@lemmy.eco.br avatar

Terence Tao just did a thread on Mathstodon talking about how ChatGPT helped him program an algorithm for searching for numbers.

Absolutemehperson,

mfw just asking ChatGPT to write an undetectable essay.

Later, losers!

m0darn,

Aren’t there very few student-priced AI writers? And isn’t the writing done on their servers? And aren’t they saving all the outputs?

Can’t the ai companies sell to schools the ability to check paper submissions against recent outputs?

dyc3,

ChatGPT 3.5 is free. Can’t get more student-priced than that.

Regarding the second part about outputs: that’s not practical. Suppose you ignore students running their own LLMs offline on their gaming GPUs, where these corps wouldn’t have access to the info. It’s still wildly impractical, because students can paraphrase LLM output into something that doesn’t look like the original output.

m0darn,

Chatgpt 3.5 is free. Can’t get more student priced than that.

Yeah, my point was I don’t think there are many offering the service for free. And they are probably looking for revenue streams.

Suppose you ignore students running their own LLMs offline on their gaming gpus

I actually feel like this is the one that shouldn’t be ignored. But I don’t have a good sense of the computational power vs quality output.

It’s still wildly impractical because students can paraphrase LLM output into something that doesn’t look like the original output.

At least doing that is likely to result in the student internalizing the information to some degree. It’s also not so different (not at all different?) from the most benign academic dishonesty that existed when I was a student.

One issue with the approach I suggested is the copyright issue of profs submitting students’ original work for AI processing without understanding/caring about copyright implications.

dyc3,

And they are probably looking for revenue streams.

Yeah, of course. As it stands right now GPT-3.5 is free, but GPT-4, which has been demonstrated to produce better output and do more, costs a monthly subscription.

At least doing that is likely to result in the student internalizing the information to some degree.

This is a good point, and I agree.

irotsoma,
@irotsoma@lemmy.world avatar

A lot of these relied on common mistakes that “AI” algorithms make but humans generally don’t. As language models are improving, it’s harder to detect.

Cethin,

They’re also likely training on the detector’s output. That’s why they build detectors. It isn’t for the good of other people; it’s to improve their assets. A detector is used to discard inputs it knows are written by AI so the model doesn’t train on that data, which leads to it outcompeting the detection AI.

Nioxic,

I have to hand in a short report

I wrote parts of it and asked chatgpt for a conclusion.

So i read that, adjusted a few points. Added another couple points…

Then rewrote it all in my own wording. (Chatgpt gave me 10 lines out of 10 pages)

We are allowed to use chatgpt though. Because we would always have internet access for our job anyway. (Computer science)

TropicalDingdong,

I found out on the last screen of a travel grant application that I needed a cover letter.

I pasted in the requirements for the cover letter and what I had put in my application.

I pasted the results in as the cover letter without review.

I got the travel grant.

Blurrg,

Who reads cover letters? At most they are skimmed over.

TropicalDingdong,

Exactly. But they still need to exist. That’s what chat gpt is for. Letters, bullshit emails, applications. The shit that’s just tedious.

ReallyKinda,

I know a couple of teachers (college level) that have caught several GPT papers over the summer. It’s a great cheating tool, but as with all cheating in the past, you still have to basically learn the material (at least for narrative papers) to proof GPT output properly. It doesn’t get jargon right, it makes things up, and it makes no attempt to adhere to reason when making an argument.

Using translation tools is extra obvious—have a native speaker proof your paper if you attempt to use an AI translator on a paper for credit!!

pc_admin,

Any teacher still issuing out of class homework or assignments is doing a disservice IMO.

Of course people will just GPT it… you need to get them off the computer and into an exam room.

ReallyKinda,

Even in college? I never had a college course that allowed you to work on assignments in class

Muffi,

I studied engineering. Most classes were split into 2 hours of theory, followed by 2 hours of practical assignments. Both within the official class hours, so teachers could assist with the assignments. The best college-class structure by far imo.

SmoothLiquidation,

GPT is a tool that students will have access to for their entire professional lives. It should be treated as such and worked into the curriculum.

Forbidding it would be like saying you can’t use Photoshop in a photography class.

Neve8028,

It can definitely be a good tool for studying or for organizing your thoughts but it’s also easily abused. School is there to teach you how to take in and analyze information and chat AIs can basically do that for you (whether or not their analysis is correct is another story). I’ve heard a lot of people compare it to the advent of the calculator but I think that’s wrong. A calculator spits out an objective truth and will always say the same thing. Chat GPT can take your input and add analysis and context in a way that circumvents the point of the assignment which is to figure out what you personally learned.

ComicalMayhem,

This is such a great analysis.

Benj1B,

Where it gets really challenging is that LLMs can take the assignment input and generate an answer that is actually more educational for the student than what they learned in class. A good education system would instruct students in how to structure their prompts in a way that helps them learn the material. Because the LLMs can construct virtually limitless examples and analogies and write in any kind of style, you can tailor them to each student with the correct prompts and get a level of engagement equal to a private tutor for every student.

So the act of using the tool to generate an assignment response could, if done correctly and with guidance, be more educational than anything the student picked up in class - but if its not monitored, if students don’t use the tool the right way, it is just going to be seen as a shortcut for answers. The education system needs to move quickly to adapt to the new tech but I don’t have a lot of hope - some individual teachers will do great as they always have, others will be shitty, and the education departments will lag behind a decade or two as usual.

Neve8028,

Where it gets really challenging is that LLMs can take the assignment input and generate an answer that is actually more educational for the student than what they learned in class.

That’s if the LLM is right. If you don’t know the material, you have no idea if what it’s spitting out is correct or not. That’s especially dangerous once you get to undergrad level when learning about more specialized subjects. Also, how can reading a paper be more informative than doing research and reading relevant sources? The paper is just the summary of the research.

and get a level of engagement equal to a private tutor for every student.

Eh. Even assuming it’s always 100% correct, there’s so much more value to talking to a knowledgeable human being about the subject. There’s so much more nuance to in person conversations than speaking with an AI.

Look, again, I do think that LLMs can be great resources and should be taken advantage of. Where we disagree is that I think the point of the assignment is to gain the skills to do research, analysis, and generally think critically about the material. You seem to think that the goal is to hand something in.

ReallyKinda,

Depends on how it’s used of course. Using it to help brainstorm phrasing is very useful. Asking it to write a paper and then editing and turning it in is no different than regular plagiarism imo. Bans will apply to the latter case and the former case should be undetectable.

MrMcGasion,

I’ve been in photography classes where Photoshop wasn’t allowed, although it was pretty easily enforced because we were required to use school provided film cameras. Half the semester was 35mm film, and the other half was 3x5 graphic press cameras where we were allowed to do some editing - providing we could do the edits while developing our own film and prints in the lab. It was a great way to learn the fundamentals and learning to take better pictures in the first place. There were plenty of other classes where Photoshop was allowed, but sometimes restricting which tools can be used, can help push us to be better.

pinkdrunkenelephants,

No it won’t. People will get it banned and they ought to.

SpikesOtherDog,

it makes things up, it makes no attempt to adhere to reason when it’s making an argument.

It hardly understands logic. I’m using it to generate content, and it continuously asserts information in ways that don’t make sense, relates things that aren’t connected, and forgets facts that don’t flow into the response.

mayonaise_met,

As I understand it, as a layman who uses GPT-4 quite a lot to generate code and formulas, it doesn’t understand logic at all. AFAIK, there is currently no rational process which considers whether what it’s about to say makes sense and is correct.

It just sort of bullshits its way to an answer based on whether words seem likely according to its model.

That’s why you can point it in the right direction and it will sometimes appear to apply reasoning and correct itself. But you can just as easily point it in the wrong direction and it will do that just as confidently too.

Aceticon,

It has no notion of logic at all.

It roughly works by piecing together sentences based on the probability of the various elements (mainly words, but also more complex structures) appearing in various relations to each other, the “probability curves” (not quite probability curves, but that’s a good enough analogy) having been derived from the very large language training sets used to train them (hence LLM: Large Language Model).

This is why you might get things like pieces of argumentation which are internally consistent (or merely familiar segments from actual human posts where people are making an argument) but not consistent with each other: the thing is not building an argument following a logical thread, it’s just putting together language tokens in the common ways which, in its training set, were found associated with each other and with token structures similar to those in your question.
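A toy caricature of that token-by-token process — nothing like a real transformer, and the word table is invented, but it shows the “pick a statistically likely next word” idea operating with no notion of logic at all:

```python
import random

# Invented toy "training data": which words tend to follow which.
# A real LLM learns something analogous (far richer) from huge text corpora.
bigrams = {
    "the": ["cat", "dog", "argument"],
    "cat": ["sat", "ran"],
    "argument": ["is", "fails"],
    "is": ["valid", "wrong"],
}

def continue_text(start: str, length: int = 4, seed: int = 0) -> str:
    """Extend text by repeatedly picking a plausible next word."""
    random.seed(seed)  # fixed seed so the sketch is repeatable
    words = [start]
    for _ in range(length):
        choices = bigrams.get(words[-1])
        if not choices:
            break
        words.append(random.choice(choices))
    return " ".join(words)

print(continue_text("the"))
```

Every transition is locally plausible, but nothing ever checks whether the resulting sentence is true or even coherent — which is exactly the failure mode being described above, just at a vastly smaller scale.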

CosmicCleric,
@CosmicCleric@lemmy.world avatar

That’s a great summary of how it works. Well done.

cheese_greater, (edited )

I would be in trouble if this was a thing. My writing naturally resembles the output of a ChatGPT prompt when I’m not joke answering or shitposting.

Steeve,

We found the source

BananaOnionJuice,
@BananaOnionJuice@lemmy.dbzer0.com avatar

Do you also need help from a friend to prove you are not a robot?

cheese_greater,

I need a lotta help, just not from a friend and about anything robot-related 😮‍💨

BananaOnionJuice,
@BananaOnionJuice@lemmy.dbzer0.com avatar

Hope you have some good friends and family that can help.

TropicalDingdong,

I would be in trouble if this was a thing. My writing naturally resembles the output of a ChatGPT prompt when I’m not joke answering.

It’s not unusual for well-constructed human writing to resemble the output of advanced language models like ChatGPT. After all, language models like GPT-4 are trained on vast amounts of human text, and their main goal is to replicate and generate human-like text based on the patterns they’ve observed.

/gpt-4

cheese_greater, (edited )

Be me

well-constructed human writing

You guys?! 🤗

doublejay1999,
@doublejay1999@lemmy.world avatar

AI company says their AI is smart, but other companies are selling snake oil.

Gottit

canihasaccount,

They tried training an AI to detect AI, too, and failed

learningduck,

That’s typical for generative AI. During training, they likely developed another model to detect whether GPT output reads like natural language. That detector model may have reached the point where it couldn’t flag AI text with an acceptable false positive rate.

hellothere,

Regardless of if they do or don’t, surely it’s in the interests of the people making the “AI” to claim that their tool is so good it’s indistinguishable from humans?

stevedidWHAT,
@stevedidWHAT@lemmy.world avatar

Depends on whether they’re more researchers or a business, IMO. Scientists, generally speaking, are very cautious about making shit claims, because if they get called out, that’s their career, really.

hellothere,

It’s literally a marketing blog posted by OpenAI on their site, not a study in a journal.

Zeth0s,

A few decades ago, probably. Nowadays “scientists” make a lot of BS claims to get published. I was in the room when a “scientist” who published several Nature papers per year asked her student to write up research without any results in a way that made it look like it had something important, for a relatively high-impact-factor publication.

That day I decided I was done with academia. I had seen enough.

pc_admin,

Cool story bro

stevedidWHAT,
@stevedidWHAT@lemmy.world avatar

You did not just drop arguably one of the most stale, dead memes of all time to try and look fucking cool

Thanks for the laugh

pc_admin,

RIP Harambe

BetaDoggo_,

OpenAI hasn’t been focused on the science since the Microsoft investment. A science-focused company doesn’t release a technical report that doesn’t contain any of the specs of the model they’re reporting on.

stevedidWHAT,
@stevedidWHAT@lemmy.world avatar

:(

Kolrami,

Yes, but it’s such a falsifiable claim that anyone is more than welcome to prove them wrong. There are a lot of slightly different LLMs out there. If you or anyone else can definitively show there’s a machine that can identify AI writing vs human writing, it will either result in better AI writing or be an amazing breakthrough in understanding the limits of AI.

hellothere,

People like to view the problem as a paradox (can an all-powerful God create a rock they cannot lift?), but I feel that’s too generous; it’s more like marking your own homework.

If a system can both write text, and detect whether it or another system wrote that text, then “all” it needs to do is change that text to be outside of the bounds of detection. That is to say, it just needs to convince itself.

I’m not wanting to imply that that is easy, because it isn’t, but it’s a very different thing to convincing someone else, especially a human, that understands the topic.

There is also a false narrative involved here, that we need an AI to detect AI which again serves as a marketing benefit to OpenAI.

We don’t, because they aren’t that good, at least, not yet anyway.
