Analysis of score-based generative models

Jianfeng Lu, Duke University
Fine Hall 214

In-Person Talk 

Score-based generative modeling (SGM) is a highly successful approach for learning a probability distribution from data and generating further samples. It is based on learning the score function (the gradient of the log-density) and then using it to simulate a stochastic differential equation that transforms white noise into the data distribution.
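For context, one common way to write the dynamics underlying SGM uses the Ornstein--Uhlenbeck process as the noising SDE; the notation below is standard background and is not taken from the abstract:
$$ dX_t = -X_t\,dt + \sqrt{2}\,dB_t, \qquad X_0 \sim p_{\mathrm{data}}, $$
$$ dY_t = \big( Y_t + 2\,\nabla \log p_{T-t}(Y_t) \big)\,dt + \sqrt{2}\,d\bar{B}_t, \qquad Y_0 \sim \mathcal{N}(0, I), $$
where $p_t$ denotes the law of $X_t$. Running the second (time-reversed) SDE from Gaussian noise recovers approximately the data distribution at time $T$; in practice the score $\nabla \log p_t$ is replaced by a learned estimate $s_\theta(\cdot, t)$.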

In this talk, we will discuss some recent works on the convergence analysis of SGM. In particular, we established convergence of SGM applied to any distribution with a bounded second moment, relying only on an $L^2$-accurate score estimate, with polynomial dependence on all parameters and no reliance on smoothness or functional inequalities.
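For concreteness, an $L^2$-accurate score estimate is typically understood as a bound of the form (the symbols $s_\theta$, $\varepsilon$, and the times $t_k$ are illustrative notation, not taken from the abstract):
$$ \mathbb{E}_{x \sim p_{t_k}} \big[ \| s_\theta(x, t_k) - \nabla \log p_{t_k}(x) \|^2 \big] \le \varepsilon^2 \quad \text{for each time } t_k, $$
i.e., the learned score only needs to be close to the true score on average over the noised data distribution, rather than pointwise.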

Based on joint works with Hongrui Chen (Peking University), Holden Lee (Johns Hopkins), and Yixin Tan (Duke).