A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.
This is a page not in the main menu.
Published:
Flow matching [1, 2] is a continuous-time generative framework in which you learn a time-dependent vector field $v_{\theta}$ whose flow transports samples from a simple prior distribution (usually a standard Gaussian) at $t=0$ to your target data distribution at $t=1$.
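To make the transport concrete, here is a minimal, self-contained sketch (not code from the post; the network `v_theta` and its `(x, t)` call signature are assumptions for illustration): it trains the vector field with a conditional flow-matching objective on straight-line paths between prior and data samples, then generates by Euler-integrating the learned field from $t=0$ to $t=1$.

```python
# Minimal sketch, assuming a PyTorch model `v_theta(x, t)` that predicts a velocity.
# Training uses linear interpolation paths x_t = (1 - t) * x0 + t * x1,
# whose exact velocity is x1 - x0 (the conditional flow-matching target).
import torch

def cfm_loss(v_theta, x1):
    """One training loss: regress v_theta(x_t, t) onto the path velocity."""
    x0 = torch.randn_like(x1)                          # prior sample at t = 0
    t = torch.rand(x1.shape[0], *([1] * (x1.dim() - 1)), device=x1.device)
    x_t = (1 - t) * x0 + t * x1                        # point on the straight-line path
    target = x1 - x0                                   # velocity of that path
    return ((v_theta(x_t, t) - target) ** 2).mean()

@torch.no_grad()
def sample(v_theta, shape, steps=50, device="cpu"):
    """Transport prior samples to data by Euler-integrating dx/dt = v_theta(x, t)."""
    x = torch.randn(shape, device=device)              # start at the prior, t = 0
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((shape[0], *([1] * (len(shape) - 1))), i * dt, device=device)
        x = x + dt * v_theta(x, t)                     # Euler step toward t = 1
    return x
```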
Published:
This post will show up by default. To disable scheduling of future posts, edit config.yml and set future: false.
Published:
Diffusion sampling can be cast as integrating the probability flow ODE (PF-ODE), but dropping it into a generic ODE toolbox rarely delivers the best speed–quality trade-off. This post first revisits core numerical-analysis ideas. It then explains why vanilla integrators underperform on the semi-linear, sometimes stiff PF-ODE in low-NFE regimes, and surveys families that exploit diffusion-specific structure: pseudo-numerical samplers (PLMS/PNDM) and semi-analytic/high-order solvers (DEIS, DPM-Solver/++/UniPC). The goal is a practical, unified view of when and why these PF-ODE samplers work beyond “just use RK4.”
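As a concrete illustration of why the semi-linear structure matters, here is a hedged sketch (the names `eps_theta`, `alpha`, and `sigma` are assumptions, not any library's actual API) of a first-order DPM-Solver step, which integrates the linear signal-scaling part of the PF-ODE exactly in the log-SNR variable and only approximates the noise-prediction term.

```python
# Minimal sketch, not a reference implementation. Assumes a VP diffusion with
# noise-prediction network `eps_theta(x, t)` and schedule functions `alpha(t)`,
# `sigma(t)` returning tensors; lambda = log(alpha / sigma) is the log-SNR.
import torch

def dpm_solver_1_step(x_s, s, t, eps_theta, alpha, sigma):
    """Advance the PF-ODE from time s to time t (t < s) with one exponential-integrator step."""
    lam_s = torch.log(alpha(s) / sigma(s))
    lam_t = torch.log(alpha(t) / sigma(t))
    h = lam_t - lam_s                                   # step size in log-SNR
    # Linear (signal-scaling) part is solved exactly; only the noise term is approximated.
    return (alpha(t) / alpha(s)) * x_s - sigma(t) * torch.expm1(h) * eps_theta(x_s, s)

@torch.no_grad()
def sample(eps_theta, x_T, timesteps, alpha, sigma):
    """Run the sampler over a decreasing list of timesteps (e.g. 10-20 NFEs)."""
    x = x_T
    for s, t in zip(timesteps[:-1], timesteps[1:]):
        x = dpm_solver_1_step(x, s, t, eps_theta, alpha, sigma)
    return x
```

A plain Euler step, by contrast, discretizes both the linear and nonlinear parts and typically needs far more function evaluations for comparable quality.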
Published:
This article takes a deep dive into the evolution of diffusion model sampling techniques, tracing the progression from early score-based models with Langevin Dynamics, through discrete and non-Markov diffusion processes, to continuous-time SDE/ODE formulations, specialized numerical solvers, and cutting-edge methods such as consistency models, distillation, and flow matching. Our goal is to provide both a historical perspective and a unified theoretical framework to help readers understand not only how these methods work but why they were developed.
Published:
While diffusion models have revolutionized generative AI, their training challenges stem from a combination of resource intensity, optimization intricacies, and deployment hurdles. A stable training process ensures that the model produces good-quality samples and converges efficiently over time without suffering from numerical instabilities.
Published:
Diffusion models have been shown to be a highly promising approach in the field of image generation. They treat image generation as two independent processes: the forward process, which transforms a complex data distribution into a known prior distribution (typically a standard normal distribution) by gradually injecting noise; and the reverse process, which transforms the prior distribution back into the complex data distribution by gradually removing the noise.
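As a minimal illustration of the forward process and how the reverse process is typically learned, here is a hedged, DDPM-style sketch (the names `eps_theta` and `make_alpha_bar` are assumptions, not the article's own code) that noises clean samples in closed form and regresses the injected noise.

```python
# Minimal sketch: closed-form forward process x_t = sqrt(abar_t) x_0 + sqrt(1 - abar_t) eps
# and the standard noise-prediction loss used to learn the reverse process.
import torch

def make_alpha_bar(T=1000, beta_start=1e-4, beta_end=0.02):
    """Linear beta schedule and its cumulative product alpha_bar."""
    betas = torch.linspace(beta_start, beta_end, T)
    return torch.cumprod(1.0 - betas, dim=0)

def ddpm_loss(eps_theta, x0, alpha_bar):
    """Sample a timestep, noise x0 with the forward process, and regress the injected noise."""
    B = x0.shape[0]
    t = torch.randint(0, alpha_bar.shape[0], (B,), device=x0.device)
    a_bar = alpha_bar.to(x0.device)[t].view(B, *([1] * (x0.dim() - 1)))
    eps = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * eps   # forward process in one shot
    return ((eps_theta(x_t, t) - eps) ** 2).mean()
```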
• Authors: Jiahuan Luo, Xueyang Wu, Yun Luo, Anbu Huang, Yunfeng Huang, Yang Liu, Qiang Yang
• Published in NeurIPS 2019 Workshop on Federated Learning for Data Privacy and Confidentiality, 2019
• Authors: Dashan Gao, Yang Liu, Anbu Huang, Ce Ju, Han Yu, Qiang Yang
• Published in 2019 IEEE International Conference on Big Data (Big Data), 2019
• Authors: Anbu Huang, Yuanyuan Chen, Yang Liu, Tianjian Chen, Qiang Yang
• Published in the 24th European Conference on Artificial Intelligence, 2020
• Authors: Yang Liu, Anbu Huang, Yun Luo, He Huang, Youzhi Liu, Yuanyuan Chen, Lican Feng, Tianjian Chen, Han Yu, Qiang Yang
• Published in the Proceedings of the AAAI Conference on Artificial Intelligence, 2020
• Authors: Xin Hou, Biao Wang, Wanqi Hu, Lei Yin, Anbu Huang, Haishan Wu
• Published in ICLR 2020 Workshop on Tackling Climate Change with Machine Learning, 2020
• Authors: Anbu Huang
• Published in arXiv preprint, 2020
• Authors: Anbu Huang, Yang Liu, Tianjian Chen, Yongkai Zhou, Quan Sun, Hongfeng Chai, Qiang Yang
• Published in ACM Transactions on Intelligent Systems and Technology (ACM TIST), 2021
• Authors: Yang Liu, Anbu Huang, Yun Luo, He Huang, Youzhi Liu, Yuanyuan Chen, Lican Feng, Tianjian Chen, Han Yu, Qiang Yang
• Published in AI Magazine, 2021
Published:
This is a description of your talk, which is a markdown file that can be markdown-ified like any other post. Yay markdown!
Published:
This is a description of your conference proceedings talk; note the different field in type. You can put anything in this field.