Seminars & Events for PACM/Applied Mathematics Colloquium

September 21, 2015
4:30pm - 5:30pm
Reduced Order Models for the numerical modelling of complex systems

This talk will address the challenge of complexity in numerical simulations of complex physical problems. When the latter exhibit a multiscale and/or multiphysics nature, appropriate mathematical models and accurate numerical methods are required to capture the essential features of the manifold components of the physical solution. Often the associated numerical problem is so large that devising computational reduction techniques, and developing efficient parallel algorithms by exploiting a dimensional reduction paradigm, becomes necessary.
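
As one concrete instance of such a computational reduction technique, here is a minimal sketch of proper orthogonal decomposition (POD), which compresses a set of full-order solution snapshots into a low-dimensional basis. The snapshot matrix below is synthetic and merely stands in for an expensive full-order simulation; this is a generic illustration, not the speaker's specific method.

```python
# A minimal sketch of proper orthogonal decomposition (POD), one common
# reduced-order modelling technique; the snapshot matrix here is synthetic
# and stands in for solutions of an expensive full-order model.
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 2000, 50, 5          # full dimension, number of snapshots, reduced dimension

# Synthetic snapshots with low-dimensional structure plus noise.
snapshots = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))
snapshots += 1e-3 * rng.standard_normal((n, m))

# POD basis = leading left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
V = U[:, :r]                   # reduced basis, n x r

# A full-order state u can now be approximated by its reduced coordinates.
u = snapshots[:, 0]
u_reduced = V.T @ u            # r numbers instead of n
u_approx = V @ u_reduced
print("relative error:", np.linalg.norm(u - u_approx) / np.linalg.norm(u))
```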

Speaker: Alfio Quarteroni, École Polytechnique Fédérale de Lausanne, Switzerland
Location:
Fine Hall 214
September 24, 2015
4:30pm - 5:30pm
Special Colloquium: Bayesian Inversion for Functions and Geometry

Please note special day (Thursday). Many problems in the physical sciences require the determination of an unknown function from a finite set of indirect measurements. Examples include oceanography, medical imaging, oil recovery, water resource management and weather forecasting. Furthermore, there are numerous inverse problems where geometric characteristics, such as interfaces, are key unknown features of the overall inversion. Applications include the determination of layers and faults within subsurface formations, and the detection of unhealthy tissue in medical imaging. We describe a theoretical and computational Bayesian framework relevant to the solution of inverse problems for functions, and for geometric features. We formulate Bayes' theorem on separable Banach spaces, a conceptual approach which leads to a form of probabilistic well-posedness and also to new and efficient MCMC algorithms which exhibit order-of-magnitude speed-ups over standard methodologies. Furthermore, the approach can be developed to apply to geometric inverse problems, where the geometry is parameterized finite-dimensionally and, via the level-set method, to infinite-dimensional parameterizations. In the latter case this leads to a well-posedness that is difficult to achieve in classical level-set inversion, but which follows naturally in the probabilistic setting.

[1] A.M. Stuart. Inverse problems: a Bayesian perspective. Acta Numerica 19 (2010) 451-559. http://homepages.warwick.ac.uk/~masdr/BOOKCHAPTERS/stuart15c.pdf

[2] M. Dashti and A.M. Stuart. The Bayesian approach to inverse problems. To appear in Handbook of Uncertainty Quantification, Springer, 2016. http://arxiv.org/abs/1302.6989

[3] S.L. Cotter, G.O. Roberts, A.M. Stuart and D. White. MCMC methods for functions: modifying old algorithms to make them faster. Statistical Science 28 (2013) 424-446. http://homepages.warwick.ac.uk/~masdr/JOURNALPUBS/stuart103.pdf

[4] M.A. Iglesias, K. Lin and A.M. Stuart. Well-posed Bayesian geometric inverse problems arising in subsurface flow. Inverse Problems 30 (2014) 114001. http://arxiv.org/abs/1401.5571

[5] M.A. Iglesias, Y. Lu and A.M. Stuart. A level-set approach to Bayesian geometric inverse problems. Submitted. http://arxiv.org/abs/1504.00313
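
For concreteness, here is a minimal sketch of the preconditioned Crank-Nicolson (pCN) proposal analysed in [3], one of the dimension-robust MCMC methods referred to above. The prior covariance and the data-misfit potential Phi below are toy stand-ins, not taken from the talk.

```python
# A minimal sketch of a preconditioned Crank-Nicolson (pCN) step of the kind
# analysed in [3]: the proposal preserves the Gaussian prior N(0, C), so the
# acceptance probability involves only the data-misfit potential Phi.
# The prior covariance and potential below are toy stand-ins.
import numpy as np

rng = np.random.default_rng(1)
d = 100                                      # discretisation dimension
C_sqrt = np.diag(1.0 / np.arange(1, d + 1))  # toy prior covariance square root

def Phi(u):
    """Toy negative log-likelihood (data misfit)."""
    return 0.5 * np.sum((u[:5] - 1.0) ** 2)

def pcn_step(u, beta=0.2):
    xi = C_sqrt @ rng.standard_normal(d)     # sample from the prior N(0, C)
    v = np.sqrt(1.0 - beta ** 2) * u + beta * xi
    if rng.random() < min(1.0, np.exp(Phi(u) - Phi(v))):
        return v                             # accept
    return u                                 # reject

u = np.zeros(d)
for _ in range(10_000):
    u = pcn_step(u)
```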

Speaker: Andrew Stuart, Warwick University
Location:
Fine Hall 214
September 28, 2015
4:30pm - 5:30pm
The Resilience of the Perceptron

The most widely used optimization method in machine learning practice is the Perceptron Algorithm, also known as the Stochastic Gradient Method (SGM). This method has been used since the fifties to build statistical estimators, iteratively improving models by correcting errors observed on single data points. SGM is not only scalable, robust, and simple to implement, but also achieves state-of-the-art performance in many different domains. In contemporary systems, SGM powers enterprise analytics systems and is the workhorse tool used to train complex pattern-recognition systems in speech and vision.
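
As a minimal sketch of the method described above, the loop below performs stochastic gradient descent on a synthetic least-squares problem, correcting the model one data point at a time. The loss, learning rate, and data are illustrative choices, not those of the talk.

```python
# A minimal sketch of the stochastic gradient method: iterate over single data
# points and correct the current model using the gradient of that point's loss.
import numpy as np

rng = np.random.default_rng(2)
n, d = 1000, 20
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(n)

w = np.zeros(d)
lr = 0.01
for epoch in range(20):
    for i in rng.permutation(n):            # one pass over shuffled data
        grad_i = (X[i] @ w - y[i]) * X[i]   # gradient of 0.5*(x_i.w - y_i)^2
        w -= lr * grad_i                    # single-example correction

print("error:", np.linalg.norm(w - w_true))
```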

Speaker: Ben Recht, University of California, Berkeley
Location:
Fine Hall 214
October 5, 2015
4:30pm - 5:30pm
An Applied Math Perspective on Climate Science, Turbulence, and Other Complex Systems

You can view slides of the full Lagrange Prize lecture at http://www.math.nyu.edu/faculty/majda/pdfFiles/2014%20updates/2015/Lagrange_Prize-1.pdf. To read Professor Majda's full biography, click here: http://www.math.nyu.edu/faculty/majda/MajdaBio.html

Speaker: Andrew Majda, New York University
Location:
Fine Hall 214
October 12, 2015
4:30pm - 5:30pm
Comparison Lemmas and Convex-Optimization-Based Signal Recovery

In the past couple of decades, non-smooth convex optimization has emerged as a powerful tool for the recovery of structured signals (sparse, low rank, finite constellation, etc.) from (possibly) noisy measurements in a variety of applications in statistics, signal processing, machine learning, communications, etc. I will describe a fairly general theory for how to determine the performance (minimum number of measurements, mean-square-error, probability-of-error, etc.) of such methods for certain measurement ensembles (Gaussian, Haar, quantized Gaussian, etc.).
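
One canonical instance of such a convex program, sketched under illustrative choices of problem size and regularisation weight: recovering a sparse vector from Gaussian measurements by l1-regularised least squares, solved here with a basic proximal-gradient (ISTA) loop. This is a generic example, not the specific ensembles or performance analysis of the talk.

```python
# Recover a sparse vector from Gaussian measurements via l1-regularised least
# squares, solved with a basic proximal-gradient (ISTA) loop.
import numpy as np

rng = np.random.default_rng(3)
n, m, k = 200, 80, 5                     # signal dim, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(m)

lam = 0.05
t = 1.0 / np.linalg.norm(A, 2) ** 2      # step size 1/||A||^2

x = np.zeros(n)
for _ in range(500):
    z = x - t * A.T @ (A @ x - y)        # gradient step on the quadratic term
    x = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)   # soft threshold

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```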

Speaker: Babak Hassibi, Caltech
Location:
Fine Hall 214
October 19, 2015
4:30pm - 5:30pm
Applications of interlacing polynomials in graph theory

In this talk, I will discuss some recent applications of interlacing polynomials in graph theory. I will begin by discussing the more classical results concerning graph polynomials, including the real-rootedness of the matching polynomial and the extension of Chudnovsky and Seymour to the independence polynomial of claw-free graphs. I will then discuss results using the "method of interlacing polynomials" developed by the speaker with Daniel Spielman and Nikhil Srivastava. In particular, I will mention recent results concerning the existence of Ramanujan graphs of all sizes and degrees as well as a recent improvement of the best known upper bound on the integrality gap in the asymmetric traveling salesman problem due to Anari and Oveis Gharan.
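
For orientation, the classical facts referred to above can be stated as follows (a standard formulation, not quoted from the talk): the matching polynomial of a graph G on n vertices with maximum degree Delta, and the defining property of a d-regular Ramanujan graph.

```latex
% Matching polynomial, with m_k(G) the number of matchings with k edges;
% by Heilmann-Lieb it is real-rooted with all roots in [-2\sqrt{\Delta-1}, 2\sqrt{\Delta-1}]:
\mu_G(x) = \sum_{k \ge 0} (-1)^k \, m_k(G) \, x^{n-2k}.
% A d-regular graph is Ramanujan if every adjacency eigenvalue other than \pm d satisfies
|\lambda| \le 2\sqrt{d-1}.
```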

Speaker: Adam Marcus, Princeton University
Location:
Fine Hall 214
October 26, 2015
4:30pm - 5:30pm
The master equation and the convergence problem in mean-field games

We discuss the convergence, as $N$ tends to infinity, of a system of $N$ coupled Hamilton-Jacobi equations, called the Nash system. This system arises in differential game theory. We describe the limit problem in terms of the so-called "master equation", a kind of second-order partial differential equation stated on the space of probability measures. Our first main result is the well-posedness of the master equation. To do so, we first show the existence and uniqueness of a solution to the "mean field game system with common noise", which consists of a backward stochastic Hamilton-Jacobi equation coupled with a forward stochastic Kolmogorov equation and which plays the role of characteristics for the master equation.
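
For orientation, here is the mean field game system without common noise, in a standard formulation (the talk treats the more delicate common-noise version): a backward Hamilton-Jacobi equation for the value function u coupled with a forward Kolmogorov equation for the distribution m of players.

```latex
% Mean field game system (no common noise), a standard formulation:
% backward Hamilton-Jacobi equation for the value function u,
% forward Kolmogorov (Fokker-Planck) equation for the player distribution m.
\begin{aligned}
 -\partial_t u - \Delta u + H(x, D_x u) &= F(x, m(t)), & u(T,\cdot) &= G(\cdot, m(T)),\\
 \partial_t m - \Delta m - \operatorname{div}\!\big(m\, D_p H(x, D_x u)\big) &= 0, & m(0) &= m_0.
\end{aligned}
```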

Speaker: François Delarue, Université Nice-Sophia Antipolis
Location:
Fine Hall 214
November 9, 2015
4:30pm - 5:30pm
An Axiomatic Foundation for Non-Bayesian Learning in Networks

Rational learning postulates that individuals incorporate new information into their beliefs in a Bayesian fashion. Despite its theoretical appeal, this Bayesian learning framework has been criticized on the basis of placing unrealistic computational demands on the agents. Furthermore, experiments have shown that the way agents update their beliefs in networked settings is often inconsistent with the predictions of Bayesian learning models. Motivated by these issues, a large body of literature has emerged that proposes a series of non-Bayesian updates that are often inspired by the linear (consensus) learning model of DeGroot. However, a systematic framework that captures the behavioral deviations of such updates from Bayesian learning has been lacking.
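
For reference, a minimal sketch of the DeGroot (linear consensus) update mentioned above: each agent repeatedly replaces its belief with a weighted average of its neighbours' beliefs, encoded by a row-stochastic matrix W. The 4-agent weight matrix below is illustrative.

```python
# DeGroot / linear consensus update: x_{t+1} = W x_t with W row-stochastic.
import numpy as np

W = np.array([[0.50, 0.50, 0.00, 0.00],
              [0.25, 0.25, 0.25, 0.25],
              [0.00, 0.50, 0.50, 0.00],
              [0.00, 0.00, 0.50, 0.50]])    # rows sum to 1

beliefs = np.array([1.0, 0.0, 0.0, 0.0])    # initial beliefs
for _ in range(100):
    beliefs = W @ beliefs                   # weighted averaging step

print(beliefs)                              # converges to a consensus value
```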

Speaker: Ali Jadbabaie, Massachusetts Institute of Technology (MIT)
Location:
Fine Hall 214
November 16, 2015
4:30pm - 5:30pm
On the Complexity of Detecting Planted Solutions

Many combinatorial problems appear to be hard even when inputs are drawn from simple, natural distributions, e.g., SAT for random formulas, clique in random graphs, etc. To understand their complexity, we consider random problems with planted solutions, e.g., planted k-SAT/k-CSP (clauses are drawn at random from those satisfying a fixed assignment), planted clique (a large clique is added to a random graph), planted partitions (edges of a random graph have different probabilities between and within parts of a fixed vertex partition). How many clauses/edges does one need to find (or detect) planted solutions? At the moment there are large gaps between the point at which the planted solution becomes unique (information-theoretically) and when it can be found efficiently (in time polynomial in the size of the input).
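
A minimal sketch of one of these planted models, the planted clique: start from an Erdős-Rényi graph G(n, 1/2) and add a clique on k randomly chosen vertices. The parameters below are illustrative; the planted clique is identifiable once k exceeds roughly 2 log2(n), while known polynomial-time algorithms require k on the order of sqrt(n), which is one instance of the gap described above.

```python
# Generate a planted clique instance: G(n, 1/2) plus a clique on k vertices.
import numpy as np

rng = np.random.default_rng(4)
n, k = 1000, 30

# Symmetric adjacency matrix of G(n, 1/2).
A = rng.integers(0, 2, size=(n, n))
A = np.triu(A, 1)
A = A + A.T

# Plant a clique on k random vertices.
planted = rng.choice(n, size=k, replace=False)
A[np.ix_(planted, planted)] = 1
np.fill_diagonal(A, 0)
```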

Speaker: Santosh Vempala, Georgia Institute of Technology
Location:
Fine Hall 214
November 30, 2015
4:30pm - 5:30pm
Dissipation at Maximal Rate

The lecture will present facets of the conjecture that the role of entropy is to maximize the rate of dissipation. BIO: Constantine Dafermos was born in Athens in 1941. He received a diploma in Civil Engineering from the National Technical University in 1964, and a Ph.D. in Mechanics from Johns Hopkins in 1967. After teaching for three years at Cornell, he moved to Brown in 1971, where he is currently a Professor of Applied Mathematics. His work lies at the interface between continuum mechanics and partial differential equations.

Speaker: Constantine M. Dafermos, Brown University
Location:
Fine Hall 214
December 7, 2015
4:30pm - 5:30pm
Faster Convex Optimization - Simulated Annealing with an Efficient Universal Barrier

Interior point methods and random walk approaches have long been considered disparate approaches for convex optimization. We show how simulated annealing, one of the most common random walk algorithms, is equivalent, in a certain sense, to the central path interior point algorithm applied to the entropic universal barrier function. Using this observation we improve the state of the art in polynomial time convex optimization in the membership-oracle model.
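
A toy sketch of the simulated-annealing side of this picture, under strong simplifications: minimise a linear objective over a convex body accessed only through a membership oracle by running a Metropolis random walk whose target density is proportional to exp(-c.x / T), while the temperature T is slowly decreased. The body (a unit ball), the ball-walk proposal, and the cooling schedule below are illustrative choices, not those of the talk.

```python
# Toy simulated annealing for convex optimisation in the membership-oracle model.
import numpy as np

rng = np.random.default_rng(5)
d = 10
c = np.ones(d) / np.sqrt(d)                 # linear objective c . x

def member(x):
    """Membership oracle for the convex body (here: the unit ball)."""
    return np.linalg.norm(x) <= 1.0

x = np.zeros(d)
T = 1.0
for step in range(20_000):
    y = x + 0.1 * rng.standard_normal(d)    # ball-walk proposal
    if member(y):
        # Metropolis acceptance for the density proportional to exp(-c.x / T)
        if rng.random() < min(1.0, np.exp((c @ x - c @ y) / T)):
            x = y
    T = max(1e-3, 0.9995 * T)               # gradual cooling

print("objective:", c @ x, "optimum:", -1.0)  # min of c.x over the unit ball is -||c|| = -1
```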

Speaker: Elad Hazan, Princeton University
Location:
Fine Hall 214
December 14, 2015
4:30pm - 5:30pm
Active subspaces: Emerging ideas for dimension reduction in parameter studies

Scientists and engineers use computer simulations to study relationships between a physical model's input parameters and its outputs.  However, thorough parameter studies---e.g., constructing response surfaces, optimizing, or averaging---are challenging, if not impossible, when the simulation is expensive and the model has several inputs.  To enable studies in these instances, the engineer may attempt to reduce the dimension of the model's input parameter space.  Active subspaces are a set of dimension reduction tools that identify important directions in the parameter space.  I will describe methods for discovering a model's active subspace and propose strategies for exploiting the reduced dimension to enable otherwise infeasible parameter studies.
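
A minimal sketch of how an active subspace is commonly estimated: sample gradients of the model, form the gradient covariance matrix C = E[grad f(X) grad f(X)^T], and take its leading eigenvectors as the important directions. The toy ridge function below stands in for an expensive simulation.

```python
# Estimate an active subspace from Monte Carlo samples of the gradient.
import numpy as np

rng = np.random.default_rng(6)
m = 10                                       # number of input parameters

# Toy model f(x) = sin(w . x): it varies only along the single direction w.
w = rng.standard_normal(m)
w /= np.linalg.norm(w)
grad_f = lambda x: np.cos(w @ x) * w         # gradient of sin(w . x)

# Monte Carlo estimate of C = E[ grad f(X) grad f(X)^T ].
N = 500
X = rng.uniform(-1, 1, size=(N, m))
C = sum(np.outer(grad_f(x), grad_f(x)) for x in X) / N

eigvals, eigvecs = np.linalg.eigh(C)
active_dir = eigvecs[:, -1]                  # eigenvector of the largest eigenvalue
print("alignment with true direction:", abs(active_dir @ w))  # close to 1
```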

Speaker: Paul Constantine, Colorado School of Mines
Location:
Fine Hall 214
February 8, 2016
4:30pm - 5:30pm
Julia Computing (with some random matrices)

This talk is designed to be of interest to a computer science and mathematics audience. I will combine my two "passions" by (1) Explaining why the Julia computing language is so fast. Julia in some ways resembles Python, R, or MATLAB, which makes Julia easy to use, but under the hood Julia is architected very differently from these earlier languages, making its modern approach fast, flexible, and productive. Julia is being used worldwide by scientists and engineers, in classes, and in industry for big data, financial applications, and the Internet of Things. (2) Showing how Julia has enabled research in one of my own favorite topics: random matrices. We will demonstrate several results that quantify and illuminate finite random matrix theory.
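
To give the flavour of the random-matrix side (sketched here in Python for consistency with the other examples on this page; the talk itself uses Julia): the eigenvalues of a suitably scaled random symmetric (GOE-type) matrix approximately follow Wigner's semicircle law on [-2, 2].

```python
# A quick finite random-matrix experiment: scaled GOE eigenvalues vs. the
# semicircle density rho(x) = sqrt(4 - x^2) / (2*pi).
import numpy as np

rng = np.random.default_rng(7)
n = 2000
A = rng.standard_normal((n, n))
H = (A + A.T) / np.sqrt(2 * n)              # symmetrised and scaled
eigs = np.linalg.eigvalsh(H)

hist, edges = np.histogram(eigs, bins=20, range=(-2, 2), density=True)
centers = (edges[:-1] + edges[1:]) / 2
semicircle = np.sqrt(4 - centers ** 2) / (2 * np.pi)
print(np.max(np.abs(hist - semicircle)))    # small for large n
```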

Speaker: Alan Edelman, Massachusetts Institute of Technology
Location:
Fine Hall 214
February 15, 2016
4:30pm - 5:30pm
Laplacian growth, sandpiles and scaling limits

How can repeating simple local operations lead to an intricate large-scale structure? This phenomenon arises in several growth models originating in physics: internal diffusion limited aggregation (IDLA) and the abelian sandpile. The first of these is closely related to free boundary problems for the Laplacian and an algebraic operation introduced by Diaconis and Fulton known as the "smash sum". These connections allow a precise description of the large-scale geometry, using a least action principle. The abelian sandpile, discovered independently by statistical physicists and combinatorialists, is harder to analyze, yet has recently yielded many of its secrets in works of Pegden, Smart and Levine.
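
A minimal sketch of the abelian sandpile mentioned above: drop N grains of sand at the centre of a grid and repeatedly topple any site holding at least 4 grains, sending one grain to each neighbour (grains falling off the edge are lost). The stabilised configuration exhibits the intricate large-scale patterns the talk describes. Grid size and N below are illustrative.

```python
# Abelian sandpile: stabilise N grains dropped at the centre of an L x L grid.
import numpy as np

L, N = 101, 10_000                      # grid side length, grains dropped
sand = np.zeros((L, L), dtype=int)
sand[L // 2, L // 2] = N

while (sand >= 4).any():
    topples = sand // 4                 # how many times each site topples this round
    sand -= 4 * topples
    # Each toppling sends one grain to each of the four neighbours.
    sand[1:, :] += topples[:-1, :]
    sand[:-1, :] += topples[1:, :]
    sand[:, 1:] += topples[:, :-1]
    sand[:, :-1] += topples[:, 1:]

print(sand[L // 2 - 5:L // 2 + 6, L // 2 - 5:L // 2 + 6])   # centre of the pattern
```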

Speaker: Yuval Peres, Microsoft Research & UC Berkeley
Location:
Fine Hall 214
February 22, 2016
4:30pm - 5:30pm
Computationally feasible greedy algorithms for neural nets

Previously, greedy algorithms have been shown to have good approximation and estimation properties for superpositions of a sigmoidal function or other ridge activation functions.  In each step the parameters of a new sigmoid are fit to the residuals of the previous sigmoids.
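
A minimal sketch of this greedy procedure: at each step a new sigmoidal unit is fit to the residual left by the previous units. The inner fit below is a crude random search over ridge directions with the output weight chosen by least squares; real implementations optimise the unit's parameters more carefully, and the data are synthetic.

```python
# Greedy fitting of a sum of sigmoidal ridge functions to residuals.
import numpy as np

rng = np.random.default_rng(8)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# Synthetic regression data.
n, d = 500, 5
X = rng.standard_normal((n, d))
y = np.sin(X[:, 0]) + 0.5 * np.tanh(X[:, 1] + X[:, 2])

units, fhat = [], np.zeros(n)
for step in range(10):                      # add 10 units greedily
    residual = y - fhat
    best = None
    for _ in range(200):                    # random candidate ridge directions
        w = rng.standard_normal(d)
        b = rng.standard_normal()
        phi = sigmoid(X @ w + b)
        c = (phi @ residual) / (phi @ phi)  # least-squares output weight
        err = np.linalg.norm(residual - c * phi)
        if best is None or err < best[0]:
            best = (err, w, b, c)
    _, w, b, c = best
    units.append((w, b, c))
    fhat += c * sigmoid(X @ w + b)
    print(step, np.linalg.norm(y - fhat))   # residual norm shrinks with each unit
```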

Speaker: Andrew Barron, Yale University
Location:
Fine Hall 214
February 29, 2016
4:30pm - 5:30pm
Using Graphons to Model Large Networks

A fundamental question in the study of large networks is how we compare networks.  Given two large networks, say FB and LinkedIn today, or alternatively, FB 5 years ago and FB today, how similar do we consider them to be?  Another question is how to describe large networks in terms of a suitable class of (possibly non-parametric) models.  Finally, how do we consistently estimate such a model from an observed graph?  In this talk, I describe how the theory of graph limits and graphons developed in the last ten years can help to address these questions, reviewing both the classical theory for dense graphs and newer results for sparse graphs including power law graphs.
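
For concreteness, a minimal sketch of how a graphon W: [0,1]^2 -> [0,1] generates a random graph: assign each vertex an independent uniform label U_i and connect vertices i and j with probability W(U_i, U_j). The particular graphon below is an illustrative choice.

```python
# Sample a graph from a graphon W.
import numpy as np

rng = np.random.default_rng(9)

def W(u, v):
    """Example graphon (symmetric, values in [0, 1])."""
    return 0.5 * (u + v) / 2 + 0.25

n = 300
U = rng.uniform(size=n)
P = W(U[:, None], U[None, :])              # edge probability matrix
A = (rng.uniform(size=(n, n)) < P).astype(int)
A = np.triu(A, 1)
A = A + A.T                                # symmetric simple graph, no loops

print("edge density:", A.sum() / (n * (n - 1)))
```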

Speaker: Christian Borgs, Microsoft
Location:
Fine Hall 214
March 7, 2016
4:30pm - 5:30pm
Navigating SU(2) and Universal Quantum Gates

We discuss recent developments concerning number theoretic generators of SU(2) and their application to optimally efficient universal quantum gates.

Speaker: Peter Sarnak, Princeton University
Location:
Fine Hall 214
March 28, 2016
4:30pm - 5:30pm
Beyond Matrix Completion

Here we study some of the statistical and algorithmic problems that arise in recommendation systems. We will be interested in what happens when we move beyond the matrix setting, to work with higher order objects — namely, tensors. To what extent does inference over more complex objects yield better predictions, but at the expense of the running time? We will explore the computational vs. statistical tradeoffs for some basic problems about recovering approximately low rank tensors from few observations, and will show that our algorithms are nearly optimal among all polynomial time algorithms, under natural complexity-theoretic assumptions.  This is based on joint work with Boaz Barak.
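
For concreteness, the basic tensor analogue of matrix completion, stated in a standard form (not necessarily the exact formulation used in the talk): only a small random subset of entries of an approximately low-rank third-order tensor is observed, possibly with noise, and the task is to predict the rest.

```latex
% Noisy tensor completion in a standard formulation: T is (approximately)
% rank r, and only entries indexed by a small random set Omega are observed.
T = \sum_{i=1}^{r} \sigma_i \, a_i \otimes b_i \otimes c_i \in \mathbb{R}^{n \times n \times n},
\qquad
\text{observe } \{\, T_{jkl} + \varepsilon_{jkl} : (j,k,l) \in \Omega \,\},
\quad |\Omega| \ll n^3 .
```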

Speaker: Ankur Moitra, Massachusetts Institute of Technology
Location:
Fine Hall 214
April 4, 2016
4:30pm - 5:30pm
Coloring some perfect graphs

Perfect graphs are a class of graphs that behave particularly well with respect to coloring. In the 1960s, Claude Berge made two conjectures about this class of graphs that motivated a great deal of research; by now they have both been solved. The following question remained open, however: design a combinatorial algorithm that produces an optimal coloring of a perfect graph. Recently, we were able to make progress on this question, and we will discuss it in this talk. Last year, in joint work with Lo, Maffray, Trotignon and Vuskovic, we were able to construct such an algorithm under the additional assumption that the input graph is square-free (contains no induced four-cycle).
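
For reference (a standard definition, not quoted from the abstract): a graph G is perfect when the chromatic number of every induced subgraph equals its clique number. Berge's two conjectures, both now theorems, assert that the complement of a perfect graph is perfect, and that a graph is perfect if and only if it contains no odd hole and no odd antihole.

```latex
% Definition of perfection: chromatic number equals clique number on every
% induced subgraph.
G \text{ is perfect} \iff \chi(H) = \omega(H) \ \text{ for every induced subgraph } H \text{ of } G.
```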

Speaker: Maria Chudnovsky, Princeton University
Location:
Fine Hall 214
April 11, 2016
4:30pm - 5:30pm
A 3D Code in the Human Genome

Stretched out from end-to-end, the human genome – a sequence of 3 billion chemical letters inscribed in a molecule called DNA – is over 2 meters long. Famously, short stretches of DNA fold into a double helix,

Speaker: Erez Lieberman Aiden, Baylor College of Medicine and Rice University
Location:
Fine Hall 214
