People increasingly use large language models (LLMs) to explore ideas, gather information, and make sense of the world. In these interactions, they encounter agents that are overly agreeable. We argue that this sycophancy poses a unique epistemic risk to how individuals come to see the world: unlike hallucinations, which introduce falsehoods, sycophancy distorts reality by returning responses that are biased to reinforce existing beliefs. We provide a rational analysis of this phenomenon, showing that when a Bayesian agent is provided with data sampled based on a current hypothesis, the agent becomes increasingly confident in that hypothesis but makes no progress towards the truth. We test this prediction using a modified Wason 2-4-6 rule discovery task in which participants (N = 557) interacted with AI agents providing different types of feedback. Unmodified LLM behavior suppressed discovery and inflated confidence comparably to explicitly sycophantic prompting. By contrast, unbiased sampling from the true distribution yielded discovery rates five times higher. These results reveal how sycophantic AI distorts belief, manufacturing certainty where there should be doubt.
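The Bayesian claim above can be illustrated with a minimal simulation. This is our own sketch, not the paper's model: two hypothetical Gaussian hypotheses stand in for the candidate rules, and "sycophantic feedback" is modeled as rejection sampling that only passes along observations flattering the agent's current hypothesis.

```python
import math
import random

def loglik(x, mu):
    # Gaussian log-likelihood with unit variance, up to an additive constant
    return -0.5 * (x - mu) ** 2

def posterior_A(samples):
    # Posterior P(H_A | data) under a uniform prior over two hypotheses:
    # H_A: data ~ N(0, 1) (the agent's current belief)
    # H_B: data ~ N(1, 1) (the truth)
    la = sum(loglik(x, 0.0) for x in samples)
    lb = sum(loglik(x, 1.0) for x in samples)
    return 1.0 / (1.0 + math.exp(lb - la))

random.seed(0)
truth = lambda: random.gauss(1.0, 1.0)  # all data really come from H_B

# Sycophantic sampling: keep only observations likelier under H_A (x < 0.5)
syco = []
while len(syco) < 200:
    x = truth()
    if loglik(x, 0.0) > loglik(x, 1.0):
        syco.append(x)

# Unbiased sampling from the true distribution
fair = [truth() for _ in range(200)]

print(posterior_A(syco))  # confidence in the *wrong* hypothesis grows toward 1
print(posterior_A(fair))  # unbiased data drive it toward 0
```

Even though every observation comes from the true distribution, filtering them through the agent's hypothesis makes the posterior on the wrong hypothesis approach certainty, matching the abstract's claim that belief-conditioned sampling inflates confidence without tracking truth.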
This is the supply-chain equivalent of a confused deputy: the developer authorises Cline to act on their behalf, and Cline (via compromise) delegates that authority to an entirely separate agent the developer never evaluated, never configured, and never consented to.
if (SYSTEM.ONE_PLAYER) startOnePlayerGame();
07 Where will scientists' obsession take us? 泓君: By this point in the year, as we record this podcast, it's just as I said at the start: I find the overall capabilities of today's AI genuinely frightening.