General agents contain world models
Jonathan Richens and 3 other authors
Abstract: Are world models a necessary ingredient for flexible, goal-directed behaviour, or is model-free learning sufficient? We provide a formal answer to this question, showing that any agent capable of generalizing to multi-step goal-directed tasks must have learned a predictive model of its environment. We show that this model can be extracted from the agent's policy, and that increasing the agent's performance, or the complexity of the goals it can achieve, requires learning increasingly accurate world models. This has a number of consequences: from developing safe and general agents, to bounding agent capabilities in complex environments, to providing new algorithms for eliciting world models from agents.
Submission history
From: Jonathan Richens [view email]
[v1] Mon, 2 Jun 2025 13:01:13 UTC (1,469 KB)
[v2] Mon, 16 Jun 2025 12:07:32 UTC (1,470 KB)
[v3] Wed, 27 Aug 2025 13:09:51 UTC (1,470 KB)