seanfobbe,
@seanfobbe@fediscience.org avatar

@krisnelson @law @legaltech @sociology @politicalscience Thanks! Yeah, there's a whole other post on LLM risks waiting to be written.

Data leakage/exfiltration is one, then there's the significant environmental footprint, such as through water usage: https://arxiv.org/pdf/2304.03271.pdf

LLMs also pose a cyber security risk, since one can "poison" the model during fine-tuning, esp. if you use user input for training: https://softwarecrisis.dev/letters/the-poisoning-of-chatgpt/ Internet-enabled LLMs have additional vulnerabilities.

  • All
  • Subscribed
  • Moderated
  • Favorites
  • random
  • uselessserver093
  • Food
  • aaaaaaacccccccce
  • [email protected]
  • test
  • CafeMeta
  • testmag
  • MUD
  • RhythmGameZone
  • RSS
  • dabs
  • Socialism
  • KbinCafe
  • TheResearchGuardian
  • Ask_kbincafe
  • oklahoma
  • feritale
  • SuperSentai
  • KamenRider
  • All magazines