Statistical Inference for Temporal Difference Learning with Linear Function Approximation, by Weichen Wu and 3 other authors
Abstract: We investigate the statistical properties of Temporal Difference (TD) learning with Polyak-Ruppert averaging, arguably one of the most widely used algorithms in reinforcement learning, for the task of estimating the parameters of the optimal linear approximation to the value function. Assuming independent samples, we make three theoretical contributions that improve upon the current state-of-the-art results: (i) we establish refined high-dimensional Berry-Esseen bounds over the class of convex sets, achieving faster rates than the best known results; (ii) we propose and analyze a novel, computationally efficient online plug-in estimator of the asymptotic covariance matrix; and (iii) we derive sharper high-probability convergence guarantees that depend explicitly on the asymptotic variance and hold under weaker conditions than those adopted in the literature. These results enable the construction of confidence regions and simultaneous confidence intervals for the linear parameters of the value function approximation, with guaranteed finite-sample coverage. We demonstrate the applicability of our theoretical findings through numerical experiments.
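The procedure the abstract studies can be illustrated with a minimal sketch: TD(0) with linear function approximation, driven by independent samples, with a Polyak-Ruppert running average of the iterates. This is not the paper's exact setup; all names (`phi`, `gamma`, the step-size schedule, the synthetic MDP) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_states, gamma = 4, 10, 0.9

phi = rng.standard_normal((n_states, d))             # fixed linear feature map
P = rng.dirichlet(np.ones(n_states), size=n_states)  # synthetic transition matrix
r = rng.standard_normal(n_states)                    # reward per state

theta = np.zeros(d)
theta_bar = np.zeros(d)                              # Polyak-Ruppert average
T = 5000
for t in range(1, T + 1):
    s = rng.integers(n_states)                       # independent sample of a state
    s_next = rng.choice(n_states, p=P[s])
    alpha = 1.0 / t**0.7                             # polynomially decaying step size
    # TD(0) semi-gradient update on the linear parameters
    td_error = r[s] + gamma * phi[s_next] @ theta - phi[s] @ theta
    theta = theta + alpha * td_error * phi[s]
    theta_bar += (theta - theta_bar) / t             # online running average

print(theta_bar)
```

The averaged iterate `theta_bar` is the quantity whose asymptotic normality (Berry-Esseen bounds) and asymptotic covariance the paper's confidence regions are built around; the plug-in covariance estimator itself is not shown here.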
Submission history
From: Weichen Wu [view email]
[v1] Mon, 21 Oct 2024 15:34:44 UTC (3,489 KB)
[v2] Thu, 13 Feb 2025 13:11:46 UTC (3,503 KB)
[v3] Wed, 28 May 2025 00:49:57 UTC (3,443 KB)
[v4] Fri, 3 Oct 2025 14:41:02 UTC (2,139 KB)
[v5] Tue, 24 Feb 2026 12:51:18 UTC (2,206 KB)