Local laws of sample covariance matrices beyond the separable case
Sample covariance matrices are among the most fundamental objects in random matrix theory and statistics. In this talk, I'll discuss recent work identifying the assumptions on random vectors under which local laws hold for their sample covariance matrices, i.e. matrices with iid rows sampled from a fixed distribution. A local law asserts that the empirical eigenvalue distribution converges to its deterministic limit (here, the deformed Marchenko–Pastur law) not just globally, but on short intervals that still contain only a small power of the dimension many eigenvalues. This fine-grained control is essential for many applications, including universality of the local eigenvalue statistics. The classical approach assumes the data vectors take the separable form g = Xw, where w has independent entries, but this excludes many natural examples. We ask: what assumptions on g are really needed? It turns out that concentration of quadratic forms suffices for an optimal averaged local law, while a structural condition on cumulant tensors, interpolating between independence and generic dependence, suffices for the full anisotropic local law. I'll discuss key examples where our assumptions can be verified: sign-invariant vectors, the "random features model" from machine learning, and some examples of spin-glass type. I'll also give a short overview of the proof, which introduces a tensor network framework for fluctuation averaging in the presence of higher-order cumulant structure. Joint work with Jack Ma (Yale), Zhou Fan (Yale), and Zhichao Wang (Berkeley).
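To make the two key notions concrete, here is a minimal sketch in standard random-matrix notation; the symbols below (g, Sigma, m_N, m, z = E + i*eta) are assumed shorthand of mine and are not quoted from the talk.

% A minimal sketch, assuming standard (not quoted) notation.
% Concentration of quadratic forms: for deterministic $A$ with $\|A\| \le 1$,
% the data vector $g$ should satisfy, with high probability,
\[
  g^* A g \approx \operatorname{tr}(A \Sigma),
  \qquad \Sigma = \mathbb{E}[g g^*].
\]
% Averaged local law: writing $m_N(z)$ for the Stieltjes transform of the
% empirical eigenvalue distribution and $m(z)$ for that of the deformed
% Marchenko--Pastur law, the optimal averaged bound reads, roughly,
\[
  \bigl| m_N(z) - m(z) \bigr| \prec \frac{1}{N\eta},
  \qquad z = E + i\eta, \quad \eta \gg N^{-1},
\]
% i.e. control holds down to spectral scales $\eta$ just above $1/N$, which is
% what "intervals containing a small power of the dimension many eigenvalues"
% means on the spectral side.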