STAPO: Stabilizing Reinforcement Learning for LLMs by Silencing Rare Spurious Tokens
Shiqi Liu and 12 other authors
Abstract: Reinforcement Learning (RL) has significantly improved large language model reasoning, but existing RL fine-tuning methods rely heavily on heuristic techniques such as entropy regularization and reweighting to maintain stability. In practice, they often suffer from late-stage performance collapse, leading to degraded reasoning quality and unstable training. Our analysis shows that the magnitude of token-wise policy gradients in RL is negatively correlated with token probability and local policy entropy. We find that training instability can be caused by a tiny fraction of tokens, approximately 0.01\%, which we term \emph{spurious tokens}. When such tokens appear in correct responses, they contribute little to the reasoning outcome but inherit the full sequence-level reward, leading to abnormally amplified gradient updates. To mitigate this instability, we design the S2T (Silencing Spurious Tokens) mechanism, which efficiently identifies spurious tokens through their characteristic signature of low probability, low entropy, and positive advantage, and then suppresses their gradient perturbations during optimization. Incorporating this mechanism into a group-based objective, we propose Spurious-Token-Aware Policy Optimization (STAPO), which promotes stable and effective large-scale model refinement. Across six mathematical reasoning benchmarks using Qwen 1.7B, 8B, and 14B base models, STAPO consistently demonstrates superior entropy stability and achieves average performance improvements of 7.13\% ($\rho_{\mathrm{T}}$=1.0, top-p=1.0) and 3.69\% ($\rho_{\mathrm{T}}$=0.7, top-p=0.9) over GRPO, 20-Entropy, and JustRL.
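The S2T mechanism described in the abstract can be read as a token-level mask applied inside a group-based clipped objective: tokens that are simultaneously low probability, low local entropy, and positive advantage are excluded from the gradient. Below is a minimal PyTorch sketch under that reading; the function names (s2t_mask, stapo_token_loss), the threshold parameters (prob_thresh, entropy_thresh), and their values are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch of the S2T masking idea; thresholds and names are assumptions.
import torch

def s2t_mask(logprobs, entropies, advantages,
             prob_thresh=1e-4, entropy_thresh=0.05):
    """Return a 0/1 mask that silences tokens flagged as spurious:
    low probability, low local entropy, and positive advantage."""
    probs = logprobs.exp()
    spurious = (probs < prob_thresh) & (entropies < entropy_thresh) & (advantages > 0)
    return (~spurious).float()

def stapo_token_loss(logprobs, old_logprobs, entropies, advantages, clip_eps=0.2):
    """GRPO-style clipped surrogate with spurious tokens masked out of the gradient."""
    ratio = (logprobs - old_logprobs).exp()
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    per_token = -torch.min(unclipped, clipped)
    mask = s2t_mask(logprobs, entropies, advantages)
    return (per_token * mask).sum() / mask.sum().clamp(min=1.0)

All inputs are per-token tensors over a batch of sampled responses; in a GRPO-style setup the advantage is the group-normalized sequence reward broadcast to every token, which is exactly how a spurious token in a correct response inherits a positive advantage.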
Submission history
From: Shiqi Liu
[v1] Tue, 17 Feb 2026 14:46:48 UTC (6,890 KB)
[v2] Wed, 18 Feb 2026 14:13:03 UTC (6,922 KB)