Mathematics of Machine Learning Summer School

Learning theory is a rich field at the intersection of statistics, probability, computer science, and optimization. Over the last few decades, the statistical learning approach has been successfully applied to many areas of great interest, such as bioinformatics, computer vision, speech processing, robotics, and information retrieval. These impressive successes relied crucially on the mathematical foundations of statistical learning.

Recently, deep neural networks have demonstrated stunning empirical results across many applications like vision, natural language processing, and reinforcement learning. The field is now booming with new mathematical problems, and in particular, the challenge of providing theoretical foundations for deep learning techniques is still largely open. On the other hand, learning theory already has a rich history, with many beautiful connections to various areas of mathematics (e.g., probability theory, high dimensional geometry, game theory). The purpose of the summer school is to introduce graduate students (and advanced undergraduates) to these foundational results, as well as to expose them to the new and exciting modern challenges that arise in deep learning and reinforcement learning.

We are excited to have five wonderful lecturers: Joan Bruna (NYU), Emma Brunskill (Stanford University), Sebastien Bubeck (Microsoft Research), Kevin Jamieson (University of Washington), and Robert Schapire (Microsoft Research).

General Information

Organizers: Sebastien Bubeck (Microsoft Research), Anna Karlin (University of Washington), and Adith Swaminathan (Microsoft Research).

Brought to you through the funding support of the academic sponsoring institutions of the Mathematical Sciences Research Institute (Berkeley, CA), Microsoft Research, and the Paul G. Allen School of Computer Science and Engineering at the University of Washington, in cooperation with the Algorithmic Foundations of Data Science Institute at the University of Washington.