Publications
-
Towards Long-Form Video Understanding
Chao-Yuan Wu, Philipp Krähenbühl
CVPR, 2021
-
Learning to Set Waypoints for Audio-Visual Navigation
C. Chen, S. Majumder, Z. Al-Halah, R. Gao, S. Ramakrishnan, K. Grauman
ICLR, 2021
-
A Robust Spectral Clustering Algorithm for Sub-Gaussian Mixture Models with Outliers
Prateek R. Srivastava, Purnamrita Sarkar, Grani A. Hanasusanto
arXiv, 2021
-
Dexterous Robotic Grasping with Object-Centric Visual Affordances
P. Mandikal and K. Grauman
ICRA, 2021
-
SPECTRE: Defending Against Backdoor Attacks Using Robust Covariance Estimation
Jonathan Hayase, Weihao Kong, Raghav Somani, Sewoong Oh
arXiv, 2021
-
Robust and Differentially Private Mean Estimation
Xiyang Liu, Weihao Kong, Sham Kakade, Sewoong Oh
arXiv, 2021
-
VisualVoice: Audio-Visual Speech Separation with Cross-Modal Consistency
R. Gao and K. Grauman
CVPR, 2021
-
A Spectral Analysis of Dot-product Kernels
Meyer Scetbon, Zaid Harchaoui
AISTATS, 2021
-
The PETLON Algorithm to Plan Efficiently for Task-Level-Optimal Navigation
Shih-Yun Lo, Shiqi Zhang, and Peter Stone
JAIR, 2020
-
A Penny for Your Thoughts: The Value of Communication in Ad Hoc Teamwork
Reuth Mirsky, William Macke, Andy Wang, Harel Yedidsion, and Peter Stone
IJCAI, 2020
-
Is a Good Representation Sufficient for Sample Efficient Reinforcement Learning?
Simon S. Du, Sham M. Kakade, Ruosong Wang, Lin F. Yang
ICLR, 2020
-
Curriculum Learning for Reinforcement Learning Domains: A Framework and Survey
Sanmit Narvekar, Bei Peng, Matteo Leonetti, Jivko Sinapov, Matthew E. Taylor, and Peter Stone
JMLR, 2020
-
End-to-End Learning for Retrospective Change-Point Estimation
Corinne Jones, Zaid Harchaoui
MLSP, 2020
-
Good Systems, Bad Data? Interpretations of AI Hype and Failures
Stephen C. Slota, Kenneth R. Fleischmann, Sherri Greenberg, Nitin Verma, Brenna Cummings, Lan Li, Chris Shenefiel
ASIS&T, 2020
-
Generalizing Curricula for Reinforcement Learning
Sanmit Narvekar and Peter Stone
ICML, 2020
-
First-order Optimization for Superquantile-based Supervised Learning
Yassine Laguel, Jérôme Malick, Zaid Harchaoui
MLSP, 2020
-
Is Reinforcement Learning More Difficult Than Bandits? A Near-optimal Algorithm Escaping the Curse of Horizon
Zihan Zhang, Xiangyang Ji, Simon S. Du
arXiv, 2020
-
Safe Imitation Learning via Fast Bayesian Reward Inference from Preferences
Daniel S. Brown, Russell Coleman, Ravi Srinivasan, Scott Niekum
ICML, 2020