Joint IFML/MPG Symposium at Simons Institute

Researchers from the NSF AI Institute for Foundations of Machine Learning (IFML) and the Simons Institute fall programs on Modern Paradigms in Generalization and on LLMs and Transformers hosted a joint symposium. The workshop focused on the relationships between learning algorithms, generalization, and advanced architectures in LLMs. To that end, the symposium gathered speakers from these areas to foster collaborations that cut across these themes.

Organizers
Matus Telgarsky (NYU)

Parameter-Free (Second-Order) Methods for Min-Max Optimization
Ali Kavis (UT Austin)

Testing Noise Assumptions of Learning Algorithms
Arsen Vasilyan (Simons Institute)

Looking at the Problem—Let X and Y Be Two Sample Spaces
Zaid Harchaoui (UW)

On the Role of Attention-Prompt Tuning with Tangents to Particle Picking in cryo-ET
Mahdi Soltanolkotabi (USC)

Learning General Gaussian Mixtures With Efficient Score Matching
Vasilis Kontonis (UT Austin)

Robust Mixture Learning when Outliers Overwhelm Small Groups
Fanny Yang (ETH Zurich)

Beyond Decoding: Meta-Generation Algorithms for Large Language Models
Sean Welleck (Carnegie Mellon)

Omnipredicting Single-Index Models with Multi-Index Models
Kevin Tian (UT Austin)

Understanding Contrastive Learning and Self-training
Sujay Sanghavi (UT Austin)

Revisiting Scalarization in Multi-Task Learning
Han Zhao (University of Illinois Urbana-Champaign)

Bypassing the Impossibility of Online Learning Thresholds: Unbounded Losses and Transductive Priors
Nikita Zhivotovskiy (UC Berkeley)

DataComp: Creating Large Public Datasets for the AI Open-Source Community
Alex Dimakis (UT Austin)

Some Easy Optimization Problems Have the Overlap-Gap Property
Tselil Schramm (Stanford)

The Truth about your Lying Calibrated Forecaster
Nika Haghtalab (UC Berkeley)

Learning from Dynamics
Ankur Moitra (MIT)