This winter program provides 5 days of concentrated study of topics in the mathematical foundations of Machine Learning at the graduate level.
The school will offer 2 courses on topics at the forefront of current research in Machine Learning. Lectures will be paired with discussion sections and programming projects. During discussion sessions, students will work in small groups on problem sets related to the courses' topics.
The registration form can be found here. Deadline: 12/01/2023. Funding is available for graduate students who are US citizens or permanent residents. International students are still encouraged to participate in the school; however, we do not expect funding to become available for them. For any further inquiries, please contact the organizers via email at email@example.com.
Organizers: Matias Delgadino, Joe Kileel, and Richard Tsai.
Support: NSF RTG 1840314, NSF DMS 220593
Lecturers (advanced courses):
Nicolas Garcia-Trillos (University of Wisconsin-Madison)
Joe Kileel (UT Austin)
Titles and Abstracts:
Adversarial machine learning is an area of modern machine learning whose main goal is to study and develop methods for designing learning models that are robust to adversarial perturbations of data. It became a prominent research field less than a decade ago, not long after neural networks became the state-of-the-art technology for image processing and natural language processing tasks. Around that time, it was noticed that neural network models, as well as other learning models, although highly effective at making accurate predictions on clean data, were quite sensitive to adversarial attacks.
This mini-course explores the mathematical underpinnings of this active and vibrant field. We will be particularly interested in analytic and geometric perspectives and in connections with topics such as regularization theory, game theory, optimal transport, geometry, and distributionally robust optimization. The mini-course presents adversarial machine learning within the bigger objective of designing safe, secure, and trustworthy AI models.
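As a toy illustration of the sensitivity the abstract refers to (this sketch is not part of the course material), consider a fixed linear classifier and a fast-gradient-sign-style perturbation: a small, carefully signed shift of the input flips the prediction even though the clean input is classified with a comfortable margin. The weights, input, and step size below are made-up numbers chosen only for illustration.

```python
import numpy as np

# Hypothetical linear classifier: predict sign(w . x).
w = np.array([2.0, -1.0])
x = np.array([1.0, 0.5])   # clean input, true label y = +1
y = 1.0

def margin(x):
    # Positive margin => the classifier agrees with the true label.
    return y * w.dot(x)

# Gradient of the logistic loss log(1 + exp(-y * w.x)) with respect to x.
grad_x = -y * w / (1.0 + np.exp(y * w.dot(x)))

# FGSM-style attack: take a step of size eps along sign(grad_x),
# the direction that increases the loss fastest in the sup-norm ball.
eps = 0.6
x_adv = x + eps * np.sign(grad_x)

print(margin(x))      # 1.5  -> clean input correctly classified
print(margin(x_adv))  # -0.3 -> misclassified after the perturbation
```

Adversarial training and the distributionally robust formulations discussed in the course can be viewed as defenses against exactly this kind of worst-case perturbation.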
Machine learning is currently being applied to analyze large collections of high-dimensional data, in domains ranging from scientific computing to geometric data processing. A case of special interest is when data points themselves carry geometric information, for example, when they represent images or three-dimensional volumes. In such settings it becomes critical for computational methods to exploit the underlying geometric structure.
This mini-course is devoted to numerical analysis and data science techniques for analyzing such data sets — in short, shape spaces. Important concepts include manifold learning, group equivariance, optimal transport and fast numerical transforms. In particular, I will draw motivation from an imaging problem known as cryo-electron microscopy, although much of the discussion is applicable very broadly.
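As a minimal sketch of the manifold-learning idea mentioned above (not taken from the course itself), the following builds a Gaussian-kernel graph Laplacian on points sampled from a circle and embeds them with its first nontrivial eigenvectors, a Laplacian-eigenmaps-style construction. The sample size and kernel bandwidth are arbitrary choices for illustration.

```python
import numpy as np

# Sample points on the unit circle; the "shape" here is a 1D manifold in 2D.
rng = np.random.default_rng(0)
n = 200
theta = np.sort(rng.uniform(0.0, 2.0 * np.pi, n))
X = np.column_stack([np.cos(theta), np.sin(theta)])

# Gaussian-kernel affinity matrix and unnormalized graph Laplacian L = D - W.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / 0.05)
D = np.diag(W.sum(axis=1))
L = D - W

# eigh returns eigenvalues in ascending order; the first eigenvector is
# constant (eigenvalue ~ 0), so columns 1 and 2 give the 2D embedding.
vals, vecs = np.linalg.eigh(L)
embed = vecs[:, 1:3]

# On a circle these eigenvectors approximate cos(theta) and sin(theta),
# so the embedded points lie on a (rescaled) circle: geometry is recovered.
radii = np.linalg.norm(embed, axis=1)
print(radii.std() / radii.mean())  # small ratio: nearly constant radius
```

Spectral constructions like this one are a standard entry point to the manifold-learning and group-equivariance themes of the course, where the same geometry-preserving principle is applied to far richer shape spaces such as cryo-EM volumes.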