Addressing divergent representations from causal interventions on neural networks
Satchel Grant and 3 other authors
Abstract: A common approach to mechanistic interpretability is to causally manipulate model representations via targeted interventions in order to understand what those representations encode. Here we ask whether such interventions create out-of-distribution (divergent) representations, and whether this raises concerns about how faithful the resulting explanations are to the target model in its natural state. First, we demonstrate theoretically and empirically that common causal intervention techniques often do shift internal representations away from the natural distribution of the target model. Then, we provide a theoretical analysis of two cases of such divergence: “harmless” divergences that occur in the behavioral null-space of the layer(s) of interest, and “pernicious” divergences that activate hidden network pathways and cause dormant behavioral changes. Finally, in an effort to mitigate the pernicious cases, we apply and modify the Counterfactual Latent (CL) loss from Grant (2025), allowing representations from causal interventions to remain closer to the natural distribution, reducing the likelihood of harmful divergences while preserving the interpretive power of the interventions. Together, these results highlight a path towards more reliable interpretability methods.
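To make the kind of intervention the abstract discusses concrete, below is a minimal, hypothetical sketch of a targeted causal intervention (activation patching) on a toy PyTorch model. It is not the authors' method or code; the model, hook, and patched representation are illustrative assumptions only, showing how an intervention overwrites a hidden representation and can thereby push it off the model's natural distribution.

```python
# Hypothetical illustration of a causal intervention via activation patching.
# Not the paper's implementation; model and shapes are made up for clarity.
import torch
import torch.nn as nn

torch.manual_seed(0)

# A toy two-layer model standing in for the "target model".
model = nn.Sequential(
    nn.Linear(8, 16),
    nn.ReLU(),
    nn.Linear(16, 4),
)

source_input = torch.randn(1, 8)       # input providing the "donor" representation
base_input = torch.randn(1, 8)         # input whose forward pass we intervene on

# 1) Record the hidden representation from the source run.
captured = {}

def capture_hook(module, inputs, output):
    captured["h"] = output.detach().clone()

handle = model[1].register_forward_hook(capture_hook)
_ = model(source_input)
handle.remove()

# 2) Patch that representation into the base run. This overwrite is the
#    causal intervention; the resulting hidden state may lie outside the
#    distribution the downstream layers were trained on (a "divergent"
#    representation in the paper's terminology).
def patch_hook(module, inputs, output):
    return captured["h"]

handle = model[1].register_forward_hook(patch_hook)
patched_output = model(base_input)
handle.remove()

natural_output = model(base_input)
print("natural:", natural_output)
print("patched:", patched_output)
```

Comparing the natural and patched outputs shows the behavioral effect attributed to the patched representation; the paper's concern is whether such overwritten hidden states stay close enough to the model's natural activation distribution for that attribution to be faithful.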
Submission history
From: Satchel Grant
[v1] Thu, 6 Nov 2025 18:32:34 UTC (6,122 KB)
[v2] Sun, 9 Nov 2025 20:35:15 UTC (6,122 KB)
[v3] Tue, 25 Nov 2025 05:01:44 UTC (6,972 KB)
[v4] Sun, 30 Nov 2025 02:59:19 UTC (6,975 KB)