
Anbu Huang
Research Scientist, Author, AI Enthusiast, THU
- ShenZhen, China
You May Also Enjoy
• A General Discussion of Flow Matching
5 minute read
Published:
Flow matching is a continuous-time generative framework in which you learn a time-dependent vector field $v_{\theta}$ whose flow transports samples from a simple prior distribution (usually a standard Gaussian) at $t=0$ to your target data distribution at $t=1$.
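A minimal sketch of one common flow-matching training objective, assuming a PyTorch model `v_theta(x_t, t)` that predicts the velocity field and the straight-line (optimal-transport) interpolation path; the actual post may use a different path or parameterization.

```python
import torch

def flow_matching_loss(v_theta, x1):
    """Conditional flow-matching loss for one batch.

    x1: data samples at t = 1; x0 is drawn from the standard Gaussian prior at t = 0.
    v_theta: assumed callable v_theta(x_t, t) returning a predicted velocity.
    """
    x0 = torch.randn_like(x1)                      # prior sample at t = 0
    t = torch.rand(x1.shape[0], device=x1.device)  # uniform time in [0, 1]
    t_ = t.view(-1, *([1] * (x1.dim() - 1)))       # broadcast t over data dims
    x_t = (1.0 - t_) * x0 + t_ * x1                # point on the straight-line path
    target_v = x1 - x0                             # velocity of that path
    return ((v_theta(x_t, t) - target_v) ** 2).mean()
```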
• PF-ODE Sampling in Diffusion Models
37 minute read
Published:
Diffusion sampling can be cast as integrating the probability flow ODE (PF-ODE), but dropping it into a generic ODE toolbox rarely delivers the best speed–quality trade-off. This post first revisits core numerical-analysis ideas. It then explains why vanilla integrators underperform on the semi-linear, sometimes stiff PF-ODE in low-NFE (few function evaluations) regimes, and surveys families that exploit diffusion-specific structure: pseudo-numerical samplers (PLMS/PNDM) and semi-analytic/high-order solvers (DEIS, DPM-Solver/++/UniPC). The goal is a practical, unified view of when and why these PF-ODE samplers work beyond “just use RK4.”
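For reference, a minimal sketch of the generic baseline the post argues against: a fixed-step explicit Euler integrator for the PF-ODE, assuming a hypothetical `velocity(x, t)` wrapper that returns $dx/dt$ from the trained score/denoiser. Diffusion-specific solvers (PLMS/PNDM, DEIS, DPM-Solver, UniPC) replace this naive step with updates that exploit the semi-linear structure.

```python
import torch

@torch.no_grad()
def euler_pf_ode_sampler(velocity, x_T, t_start=1.0, t_end=0.0, num_steps=50):
    """Integrate x' = velocity(x, t) from t_start (pure noise) down to t_end (data)."""
    x = x_T
    ts = torch.linspace(t_start, t_end, num_steps + 1)
    for i in range(num_steps):
        t, t_next = ts[i], ts[i + 1]
        dt = t_next - t              # negative: integrating backward in time
        x = x + dt * velocity(x, t)  # one explicit Euler step
    return x
```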
• A Panoramic View of Diffusion Model Sampling: From Classic Theory to Frontier Research
34 minute read
Published:
This article takes a deep dive into the evolution of diffusion model sampling techniques, tracing the progression from early score-based models with Langevin Dynamics, through discrete and non-Markov diffusion processes, to continuous-time SDE/ODE formulations, specialized numerical solvers, and cutting-edge methods such as consistency models, distillation, and flow matching. Our goal is to provide both a historical perspective and a unified theoretical framework to help readers understand not only how these methods work but why they were developed.
• Analysis of the Stability of Diffusion Model Training
19 minute read
Published:
While diffusion models have revolutionized generative AI, their training challenges stem from a combination of resource intensity, optimization intricacies, and deployment hurdles. A stable training process ensures that the model produces good-quality samples and converges efficiently over time without suffering from numerical instabilities.