"According to..." Prompting Language Models Improves Quoting from Pre-Training Data
Large Language Models (LLMs) may hallucinate and generate false information, despite being pre-trained on factual data. Inspired by the journalistic device of "according to sources", researchers at Johns Hopkins University propose "according-to prompting": directing LLMs to ground responses against previously...
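The core idea is simple to try: append a grounding directive to the user's question before sending it to the model. The sketch below illustrates this under assumptions of my own; the function name and the exact directive wording are illustrative, not the paper's verbatim prompt.

```python
def according_to_prompt(question: str, source: str = "Wikipedia") -> str:
    """Build an 'according-to' style prompt (illustrative sketch).

    Appends a directive steering the model to answer using only
    information attributable to the named source. The wording here
    is an assumption, not the authors' exact prompt.
    """
    return (
        f"{question} Respond to this question using only information "
        f"that can be attributed to {source}."
    )

# Example: wrap a factual question before passing it to any LLM API.
prompt = according_to_prompt("What part of the body does glaucoma affect?")
print(prompt)
```

Because the technique operates purely on the prompt text, it works with any chat or completion API without model changes; only the source name (here `Wikipedia`) needs to match a corpus the model plausibly saw during pre-training.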