High-dimensional Regression and Dictionary Learning: Some Recent Advances for Tensor Data

Waheed U. Bajwa, Rutgers University
Fine Hall 214

Please note that only Princeton University ID holders will be admitted. All attendees must be masked and must sign in upon entry.

Data in many modern signal processing, machine learning, and statistics problems tend to have tensor (i.e., multidimensional or multiway array) structure. While traditional approaches to processing such data involve 'flattening' data samples into vectors, it has long been recognized that explicit exploitation of the tensor structure of data can lead to improved performance. Recent years, in particular, have witnessed a flurry of research activity centered on the development of computational algorithms for improved processing of tensor data. Despite the effectiveness of such algorithms, an explicit theoretical characterization of the benefits of exploiting tensor structure remains unknown in the high-dimensional setting for several problems.
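As a back-of-the-envelope illustration of the point above (a sketch, not the talk's algorithms; the dimensions are arbitrary and numpy is assumed), flattening a tensor into a vector hides low-dimensional multiway structure: a rank-1 three-way tensor of size 10 x 10 x 10 has 1,000 entries but is described by only 30 underlying parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10

# A rank-1 three-way tensor X = a outer b outer c: n^3 entries, 3n parameters.
a, b, c = rng.standard_normal(n), rng.standard_normal(n), rng.standard_normal(n)
X = np.einsum('i,j,k->ijk', a, b, c)

# Flattening turns X into a generic 1000-dimensional vector; the multiway
# structure (e.g., the rank-1 factorization) is no longer explicit.
x_flat = X.reshape(-1)

print(X.shape, x_flat.shape)       # tensor vs. flattened shapes
print(X.size, 3 * n)               # 1000 entries vs. 30 parameters
```

Methods that operate on `x_flat` alone must work in the ambient 1,000-dimensional space, whereas tensor-aware methods can exploit the 30-parameter description.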

In this talk, we focus on two such high-dimensional problems for tensor data, namely, high-dimensional regression and high-dimensional dictionary learning. The basic assumption in this talk for both of these problems is that the dimensionality of the data far exceeds the number of available data samples, so much so that existing approaches to regression (e.g., sparse regression) and dictionary learning (e.g., K-SVD) may fail to produce meaningful results. Under this high-dimensional setting, we discuss algorithms capable of exploiting certain low-dimensional structures underlying tensor data for effective regression and dictionary learning. In addition, we present sample complexity results for both high-dimensional problems that highlight the usefulness of the latent tensor structures exploited by the presented algorithms, in relation to existing works in the literature.
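To give a flavor of why low-dimensional tensor structure can help sample complexity in dictionary learning (an illustrative sketch with arbitrary dimensions, not the specific models from the talk), consider a Kronecker-structured dictionary for three-way tensor data: the full dictionary over flattened data is the Kronecker product of three small per-mode dictionaries, so the number of free parameters drops from the size of the full matrix to the sum of the sizes of the factors.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-mode dictionaries for 8 x 8 x 8 tensor data,
# each mapping an 8-dim mode to 12 atoms (overcomplete per mode).
D1 = rng.standard_normal((8, 12))
D2 = rng.standard_normal((8, 12))
D3 = rng.standard_normal((8, 12))

# Equivalent unstructured dictionary acting on flattened (512-dim) data.
D_big = np.kron(np.kron(D3, D2), D1)

unstructured_params = D_big.size                  # 512 * 1728 entries
structured_params = D1.size + D2.size + D3.size   # 3 * 96 entries

print(D_big.shape)                                # (512, 1728)
print(unstructured_params, structured_params)
```

Learning the three small factors instead of the full 512 x 1728 matrix is the kind of parameter reduction that underlies improved sample complexity bounds for structured dictionary learning.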