[2410.24206] Understanding Optimization in Deep Learning with Central Flows

Authors: Jeremy M. Cohen, Alex Damian, Ameet Talwalkar, J. Zico Kolter, Jason D. Lee

Abstract: Traditional theories of optimization cannot describe the dynamics of optimization in deep learning, even in the simple setting of deterministic training. The challenge is that optimizers typically operate in a complex, oscillatory regime called the “edge of stability.” In this paper, we develop theory that can describe the dynamics of optimization in this regime. Our key insight is that while the *exact* trajectory of an oscillatory optimizer may be challenging to analyze, the *time-averaged* (i.e. smoothed) trajectory is often much more tractable. To analyze an optimizer, we derive a differential equation called a “central flow” that characterizes this time-averaged trajectory. We empirically show that these central flows can predict long-term optimization trajectories for generic neural networks with a high degree of numerical accuracy. By interpreting these central flows, we are able to understand how gradient descent makes progress even as the loss sometimes goes up; how adaptive optimizers “adapt” to the local loss landscape; and how adaptive optimizers implicitly navigate towards regions where they can take larger steps. Our results suggest that central flows can be a valuable theoretical tool for reasoning about optimization in deep learning.
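The abstract's key idea, that an oscillatory trajectory can be hard to analyze while its time average is tractable, can be illustrated on a toy problem. The sketch below is not the paper's central flow derivation; it is a hypothetical two-parameter example, f(x, y) = ½·y·x², whose curvature along x equals y, so gradient descent with step size η oscillates along x whenever y exceeds the stability threshold 2/η, while y drifts smoothly downward.

```python
# Toy "edge of stability" dynamics on f(x, y) = 0.5 * y * x**2 (illustrative
# example, not the paper's setup). Curvature along x is y; gradient descent
# with step size eta is oscillatory along x while y > 2 / eta.

eta = 0.4          # step size; stability threshold for curvature is 2/eta = 5
x, y = 0.01, 6.0   # start with curvature y above the threshold
xs, ys = [], []
for _ in range(200):
    gx, gy = y * x, 0.5 * x**2      # gradient of f at (x, y)
    x, y = x - eta * gx, y - eta * gy
    xs.append(x)
    ys.append(y)

# The raw x-iterates flip sign every step, but a simple moving average
# (a crude stand-in for the "time-averaged" trajectory) is far smoother:
w = 10
avg_x = [sum(xs[i:i + w]) / w for i in range(len(xs) - w + 1)]
```

Running this, x alternates in sign and grows until the oscillation drives y below the 2/η threshold, after which the oscillation dies out; the smoothed sequence `avg_x` and the y-trajectory both evolve gradually, which is the kind of averaged behavior a central flow is designed to capture.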

Submission history

From: Jeremy Cohen
[v1] Thu, 31 Oct 2024 17:58:13 UTC (24,837 KB)
[v2] Thu, 25 Sep 2025 14:29:29 UTC (37,746 KB)



