'According-to' Prompting Language Models Improves Quoting from Pre-Training Data

Large Language Models (LLMs) may hallucinate and generate false information, despite being pre-trained on factual data. Inspired by the journalistic device of "according to sources", researchers at Johns Hopkins University propose 'according-to prompting': directing LLMs to ground their responses in previously observed text. To quantify this grounding, they introduce a new evaluation metric, QUIP-Score, which measures the extent to which model-produced answers are directly found in the underlying text corpora.
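To make the two ideas concrete, here is a minimal Python sketch, not the authors' implementation: `according_to_prompt` prepends a grounding directive to a question, and `quip_like_score` approximates QUIP-Score as the fraction of character n-grams from an answer that appear verbatim in a reference corpus. The function names, the toy corpus, the grounding phrase, and the plain set-based membership test are illustrative assumptions; the paper tests membership efficiently against a large corpus such as Wikipedia.

```python
def according_to_prompt(question: str, source: str = "Wikipedia") -> str:
    """Wrap a question with a grounding directive, in the spirit of
    according-to prompting (the exact phrasing here is an assumption)."""
    return (f"{question} Respond using only information that can be "
            f"attributed to {source}.")


def quip_like_score(answer: str, corpus_ngrams: set[str], n: int = 25) -> float:
    """Fraction of length-n character n-grams of `answer` that occur verbatim
    in the corpus. `corpus_ngrams` is assumed to be a precomputed set of all
    length-n character n-grams of the grounding corpus; the default n is
    illustrative, not the paper's setting."""
    grams = [answer[i:i + n] for i in range(len(answer) - n + 1)]
    if not grams:
        return 0.0
    hits = sum(1 for g in grams if g in corpus_ngrams)
    return hits / len(grams)


# Toy usage: build the n-gram set from one reference sentence, then score an
# answer that quotes it verbatim against one that merely paraphrases it.
corpus = "The Eiffel Tower is a wrought-iron lattice tower in Paris, France."
n = 10
corpus_ngrams = {corpus[i:i + n] for i in range(len(corpus) - n + 1)}

quoted = "The Eiffel Tower is a wrought-iron lattice tower in Paris."
paraphrase = "It is an iron tower located in the French capital."

print(according_to_prompt("Where is the Eiffel Tower?"))
print(f"quoted:     {quip_like_score(quoted, corpus_ngrams, n):.2f}")
print(f"paraphrase: {quip_like_score(paraphrase, corpus_ngrams, n):.2f}")
```

In this toy run the verbatim quote scores close to 1.0 while the paraphrase scores near 0.0, which is the contrast the metric is meant to capture: higher scores mean the model is quoting its grounding corpus rather than freely generating.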
