dragfyre,
@dragfyre@mastodon.sandwich.net

Summoning my folks to weigh in: What should the attitude of teachers and educators be towards the use of AI by themselves and by students? (Yes, you can assume this mainly concerns the use of LLMs.) @edutooters

Research_FTW,
@Research_FTW@sciences.social

@dragfyre @edutooters
This is a well-thought-out lecture on LLM usage: https://youtu.be/9DpM_TXq2ws?si=QIH1QL9fAxH2u6la

Or Philosophy Tube has a great talk on ethical AI: https://youtu.be/AaU6tI2pb3M?si=VyJmdkQGGwhb_LOW

FantasticalEconomics,
@FantasticalEconomics@geekdom.social

@dragfyre @edutooters

When I was in school, teachers banned us from using Wikipedia. As a professional, it's the first place I go for background info.

Let's not make the same mistake. Teach students how to use the ever more powerful tools at their disposal, rather than preventing their use because it's new or has some downsides.

If AI makes it challenging to assess your students, it's your assessment that is broken/outdated.

AI is here to stay. Don't fight a losing battle.

dragfyre,
@dragfyre@mastodon.sandwich.net

@FantasticalEconomics @edutooters Besides the potential for plagiarism, would you still encourage students to use LLMs given their exorbitant energy usage (said to be on par with that of small countries)?

mensrea,
@mensrea@freeradical.zone

@dragfyre @FantasticalEconomics @edutooters I assume you're talking about the likes of LLMs and other synthetic media tools. In that case, educators should take the time to show how these systems cannot be trusted: they can only output content that resembles reality, not content that reflects reality.

FantasticalEconomics,
@FantasticalEconomics@geekdom.social

@mensrea @dragfyre @edutooters

I would frame it as teaching them how to use it effectively.

LLMs can create a good starting point, but their output needs to be fact-checked in order to be useful, just like Wikipedia's.

mensrea,
@mensrea@freeradical.zone

@FantasticalEconomics More important is what they can actually be useful for: basic translation, basic transcription, rephrasing something in a different style. All of these start from the point of the user being the subject matter expert. The key point is that LLMs don't know anything; they're not even supposed to output their training data. All they can produce is something statistically related to the content their model was built on. @dragfyre @edutooters

lewriley,
@lewriley@mas.to

@mensrea @FantasticalEconomics @dragfyre @edutooters This. And furthermore, students need to develop the ability to generate a good starting point on their own before relying on automated systems. Writing is thinking.

dragfyre,
@dragfyre@mastodon.sandwich.net

@lewriley @mensrea @FantasticalEconomics @edutooters That's why I wouldn't use LLMs to teach writing (not that I teach writing). I feel like LLMs would be good to teach critical thinking skills: generate a text and call on students to examine it for errors or bias. Still, the reckless waste of resources they entail makes me not want to use them even for that 😕

lewriley,
@lewriley@mas.to

@dragfyre @mensrea @FantasticalEconomics @edutooters I wonder if LLMs are well-suited even to this. A statistical "plausible language extruder" is not skilled in subtle manipulation. We'd be training students on weak material. Will that prepare them for the real thing?

FantasticalEconomics,
@FantasticalEconomics@geekdom.social

@dragfyre @edutooters

Honestly, no. I stopped using LLMs once I discovered their huge environmental costs, and I pass that info on to my students.

But I don't think for a second that this pushes many students away from them. And if they are going to use them, I hope they at least use them well.

dsmith,
@dsmith@mstdn.social

@FantasticalEconomics @dragfyre @edutooters

But it’s not the same mistake.

The concern with Wikipedia was that Ss wouldn't apply critical thinking to the quality of user-generated content.

The concern with AI is access to a higher grade of better-disguised plagiarism: Ss can generate products without applying critical or creative thinking anywhere in their process.

There's nothing broken about how educators assess student thinking. The biggest challenge: ensuring Ss are actually thinking.

FantasticalEconomics,
@FantasticalEconomics@geekdom.social

@dsmith @dragfyre @edutooters

I'm of the opinion that if a student can complete an assignment solely using AI, it's a bad assignment. The challenge for teachers becomes creating projects that can't be answered easily with a quick LLM query. That way, if students choose to use AI, they still have to critically evaluate the output and apply the portions that work.

Though that's at the college level; I doubt it applies at all levels.

languageservicesco,
@languageservicesco@mastodon.social

@FantasticalEconomics @dsmith @dragfyre @edutooters This was my first thought when ChatGPT became mainstream: this may be an opportunity for a lot of bad assessments to be improved. However, it won't work for everything. If you are assessing a person's ability to write an academic text in a foreign language, it is difficult to see how to do that without them writing said text. But that kind of assessment is probably a small minority.

FantasticalEconomics,
@FantasticalEconomics@geekdom.social

@languageservicesco @dsmith @dragfyre @edutooters

This new article tackles that question. AI is already better at writing than students. Time to adapt!

"While she previously required her students to write argumentative essays, instead, she now asks her students to have ChatGPT generate them. Students are then tasked with editing the work and assessing the AI’s arguments, considering their effectiveness on specific audiences. Students then turn in a rewrite."

https://bigthink.com/thinking/students-calculators-math-chatgpt-writing-essays/

dragfyre,
@dragfyre@mastodon.sandwich.net

@FantasticalEconomics @languageservicesco @dsmith @edutooters I can't help but feel some cognitive dissonance here, though. If we reject the use of LLMs for their exorbitant use of resources and attendant environmental effects, then how does it make sense to encourage students (who are much more numerous than we are) to use them? :thonking:

FantasticalEconomics,
@FantasticalEconomics@geekdom.social

@dragfyre

I guess it depends on how much we think it'll impact usage. If the majority of students are already using AI (estimates vary drastically; I've seen them range from 30% to 70%), then having AI as part of your assignment doesn't have much negative impact.

Alternatively, we can adapt by designing assignments that AI is less good at answering, to try to reduce its use.

https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2023/10/31/most-students-outrunning-faculty-ai-use

https://www.teachingtimes.com/two-thirds-of-secondary-school-students-use-ai-to-do-their-school-work

@languageservicesco @dsmith @edutooters

languageservicesco,
@languageservicesco@mastodon.social

@dragfyre @FantasticalEconomics @dsmith @edutooters I don't think it's about encouragement or not. There will always be a proportion of students who will (need to) use anything that will help them get better grades, so I think it's about making sure that assessment is actually effective. What do we want to test? Can we have confidence in the results?
