[2512.07404] On LLMs’ Internal Representation of Code Correctness

Authors: Francisco Ribeiro and 3 other authors


Abstract: Despite the effectiveness of large language models (LLMs) for code generation, they often output incorrect code. One reason is that model output probabilities are not well correlated with correctness and reflect only the final output of the generation process. Inspired by findings that LLMs internally encode concepts like truthfulness, this paper explores whether LLMs similarly represent code correctness. Specifically, we identify a correctness representation inside LLMs by contrasting the hidden states of pairs of correct and incorrect code for the same programming tasks. Experiments on four LLMs show that exploiting this extracted correctness representation outperforms both standard log-likelihood ranking and verbalized model confidence. Furthermore, we explore how this internal correctness signal can be used to select higher-quality code samples without requiring test execution. Ultimately, this work demonstrates how leveraging internal representations can enhance code generation systems and make LLMs more reliable, thus improving confidence in automatically generated code.
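The contrastive procedure the abstract describes can be sketched as a difference-in-means probe over paired hidden states: average the activation difference between correct and incorrect solutions to get a candidate "correctness direction", then score new samples by projecting onto it. The sketch below uses synthetic NumPy arrays in place of real model activations; the layer choice, token position, probe construction, and all names (h_correct, correctness_score, etc.) are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

# Hypothetical setup: hidden states (e.g., last-token activations from one
# transformer layer) for paired correct/incorrect solutions to the same tasks.
rng = np.random.default_rng(0)
d = 256        # hidden-state dimensionality (placeholder)
n_pairs = 100  # number of (correct, incorrect) code pairs

# Synthetic stand-ins for real activations; the offset simulates a signal.
h_correct = rng.normal(size=(n_pairs, d)) + 0.5
h_incorrect = rng.normal(size=(n_pairs, d))

# Contrast the pairs: the mean activation difference gives a candidate
# "correctness direction" in hidden-state space.
direction = (h_correct - h_incorrect).mean(axis=0)
direction /= np.linalg.norm(direction)

def correctness_score(hidden_state: np.ndarray) -> float:
    """Project a sample's hidden state onto the correctness direction."""
    return float(hidden_state @ direction)

# Rank candidate samples by the internal signal, with no test execution:
candidates = rng.normal(size=(10, d))
best = max(range(len(candidates)), key=lambda i: correctness_score(candidates[i]))
print(f"selected candidate: {best}")
```

With real activations in place of the synthetic arrays, the same scoring function would rank multiple generated samples for a task and select the one the model internally "believes" most likely to be correct, as an alternative to log-likelihood ranking.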

Submission history

From: Francisco Ribeiro
[v1] Mon, 8 Dec 2025 10:38:03 UTC (293 KB)
[v2] Mon, 5 Jan 2026 11:52:55 UTC (293 KB)
[v3] Wed, 21 Jan 2026 12:24:23 UTC (228 KB)
