Exact and approximate symmetries in machine learning

Soledad Villar, Johns Hopkins
Fine Hall 214


There has been enormous progress in the last few years in designing neural networks that respect the fundamental symmetries and coordinate freedoms of physical law. Some of these frameworks use irreducible representations, some use high-order tensor objects, and some apply symmetry-enforcing constraints. Different physical laws obey different combinations of fundamental symmetries, but a large fraction (possibly all) of classical physics is equivariant to translations, rotations, reflections (parity), boosts (relativity), and permutations. In this talk, we give an overview of the use of exact and approximate symmetries in machine learning, from graph neural networks to self-supervised learning.
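As a small illustration of the equivariance property the abstract refers to (this sketch is not the speaker's code, just a minimal NumPy example): a linear message-passing layer f(A, X) = AX satisfies f(PAPᵀ, PX) = P f(A, X) for any permutation matrix P, which is the permutation equivariance exploited by graph neural networks.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 3
A = rng.random((n, n))   # adjacency-like matrix of a 5-node graph
X = rng.random((n, d))   # node features

def layer(A, X):
    # minimal GNN-style update: aggregate neighbor features
    return A @ X

# random permutation matrix P
P = np.eye(n)[rng.permutation(n)]

lhs = layer(P @ A @ P.T, P @ X)  # relabel the graph, then apply the layer
rhs = P @ layer(A, X)            # apply the layer, then relabel the output

assert np.allclose(lhs, rhs)     # equivariance holds exactly
```

Relabeling the nodes before or after the layer gives the same answer, so the layer cannot depend on an arbitrary node ordering; analogous checks apply to rotation and reflection equivariance.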