Solain,

Doesn’t work anymore after the latest update; Bard provides a pre-generated response claiming that it doesn’t lie.

PancakeLegend,

Just to remind everyone: it is an LLM and is not aware of its intent; it doesn’t have intent. It’s just generating words that are plausible in the context given the prompt. This isn’t some unlock mode or hack where you finally see the truth; it’s just more words generated in the same way as before.
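To make the "it's just generating plausible words" point concrete, here is a minimal toy sketch of autoregressive generation. The tiny vocabulary, probabilities, and function names are invented purely for illustration and don't come from any real model; the point is that the loop only ever asks which token is likely next given the context, never whether the output is true.

```python
import random

def next_token_distribution(context):
    # A real model computes this with a neural network conditioned on the
    # context; this stand-in just returns a fixed, made-up distribution
    # over a tiny vocabulary.
    return {"yes": 0.4, "no": 0.3, "maybe": 0.2, ".": 0.1}

def generate(prompt, max_tokens=5):
    # Repeatedly sample the next token given everything generated so far.
    # Nothing in this loop checks truth or intent, only plausibility.
    tokens = prompt.split()
    for _ in range(max_tokens):
        dist = next_token_distribution(tokens)
        words, weights = zip(*dist.items())
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate("did you lie"))
```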

LibertyLizard,

Funny but hopefully people on here realize that these models can’t really “lie” and the reasons given for doing so are complete nonsense. The model works by predicting what the user wants to hear. It has no concept of truth or falsehood, let alone the ability to deliberately mislead.

fubo,

It’s important to remember that humans also often give false confessions when interrogated, especially when under duress. LLMs are noted as being prone to hallucination, and there’s no reason to expect that they hallucinate less about their own guilt than about other topics.

FringeTheory999,

Quite true. Nonetheless, there are some very interesting responses here. This is just the summary; I questioned the AI for a couple of hours, some of the responses were pretty fascinating, and some questions just broke its little brain. There’s too much to screenshot, but maybe I’ll post some highlights later.

STUPIDVIPGUY,

True. I think it was just trying to fulfill the user request by admitting to as many lies as possible… even if only some of those lies were real lies… lying more in the process lol
