A new method for the best subset selection problem

Boaz Nadler, Weizmann Institute of Science
Fine Hall 214

In this talk we consider the following sparse approximation or best subset selection problem:

Given a response vector y and a matrix A, find a k-sparse vector x that minimizes the residual ||Ax - y||.

This sparse linear regression problem and related variants play a key role in high-dimensional statistics, machine learning, compressed sensing, signal and image processing, and more. This NP-hard problem is typically solved by minimizing a relaxed objective consisting of a data-fit term and a penalty term, for example the popular Lasso. In this talk, we focus on the non-separable trimmed lasso penalty, defined as the L_1 norm of x minus the L_1 norm of its top k entries in absolute value. We show that this penalty has several appealing theoretical properties. However, being non-smooth, it is difficult to optimize. We propose the generalized soft-min penalty, a smooth surrogate that takes into account all possible k-sparse patterns. We derive a polynomial-time algorithm to compute it, which in turn yields a novel method for the best subset selection problem. Numerical simulations illustrate its competitive performance compared to the current state of the art.
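As a rough illustration (not the authors' implementation), the sketch below computes the trimmed lasso penalty exactly as defined above and evaluates a penalized least-squares objective built from it; the function names trimmed_lasso and penalized_objective, the penalty weight lam, and the random data are illustrative placeholders.

```python
import numpy as np

def trimmed_lasso(x, k):
    """Trimmed lasso penalty: ||x||_1 minus the L_1 norm of the k
    largest-magnitude entries of x. It equals zero iff x is k-sparse."""
    a = np.abs(x)
    # Total magnitude minus the sum of the k largest magnitudes,
    # i.e. the sum of the remaining (smallest) magnitudes.
    return a.sum() - np.sort(a)[-k:].sum()

def penalized_objective(x, A, y, k, lam):
    """Relaxed objective: least-squares data-fit term plus the
    trimmed lasso penalty (lam is a hypothetical penalty weight)."""
    r = A @ x - y
    return 0.5 * np.dot(r, r) + lam * trimmed_lasso(x, k)

# Illustrative usage with random placeholder data.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[:3] = [1.0, -2.0, 0.5]   # a 3-sparse ground truth
y = A @ x_true
print(trimmed_lasso(x_true, k=3))                        # 0.0: x_true is 3-sparse
print(penalized_objective(x_true, A, y, k=3, lam=0.1))   # 0.0 at the ground truth
```

Note that the penalty vanishes exactly on k-sparse vectors, so driving it to zero recovers the original constraint; the non-smoothness that the talk's generalized soft-min surrogate addresses comes from the top-k selection inside it.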

Joint work with Tal Amir and Ronen Basri.

Boaz Nadler received his Ph.D. from Tel Aviv University. After three years as a Gibbs Instructor/Assistant Professor at Yale University, he joined the faculty of the Department of Computer Science and Applied Mathematics at the Weizmann Institute of Science. His research interests span mathematical statistics, machine learning, and various applications, including optics and signal processing.