Aravind Rajeswaran

Research Scientist at Facebook AI Research (FAIR)

Visiting Postdoc / Collaborator,
Berkeley AI Research Lab (BAIR)

Contact:
Google Scholar | Bio | CV | GitHub | Twitter

I am a research scientist in the FAIR labs division of Meta AI. I also collaborate closely with Prof. Pieter Abbeel's lab at UC Berkeley and Prof. Abhinav Gupta's lab at CMU.

I work on the algorithmic foundations of deep learning and reinforcement learning. My current research focuses on pretraining and foundation models for decision making and embodied intelligence. Relevant topics include self-supervised representation learning, sequence models for decision making, and offline RL.

I received my PhD in CSE from the University of Washington, working with Profs. Sham Kakade and Emo Todorov. During this time, I also worked closely with Sergey Levine and Chelsea Finn, and spent time as a student researcher at Google Brain and OpenAI. Before that, I received my Bachelor's degree, along with the best undergraduate thesis award, from IIT Madras.

Representative Papers

Representation Learning for Decision Making

The (Un)Surprising Effectiveness of Pre-Trained Vision Models for Control
Simone Parisi*, Aravind Rajeswaran*, Senthil Purushwalkam, Abhinav Gupta
International Conference on Machine Learning (ICML) 2022 | (Long Oral)
Project Webpage

R3M: A Universal Visual Representation for Robot Manipulation
Suraj Nair, Aravind Rajeswaran, Vikash Kumar, Chelsea Finn, Abhinav Gupta
Scaling Robot Learning Workshop at ICRA 2022 | (Best Paper Award)
Project Webpage

Can Foundation Models Perform Zero-Shot Task Specification For Robot Manipulation?
Yuchen Cui, Scott Niekum, Abhinav Gupta, Vikash Kumar, Aravind Rajeswaran
Learning for Dynamics and Control (L4DC) 2022
Scaling Robot Learning Workshop at RSS 2022 | (Finalist for Best Paper Award)
Project Webpage

Offline Reinforcement Learning

Decision Transformer: Reinforcement Learning via Sequence Modeling
Lili Chen*, Kevin Lu*, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin,
Pieter Abbeel, Aravind Srinivas, Igor Mordatch
Neural Information Processing Systems (NeurIPS) 2021 | Project Website

MOReL: Model-Based Offline Reinforcement Learning
Rahul Kidambi*, Aravind Rajeswaran*, Praneeth Netrapalli, Thorsten Joachims
Neural Information Processing Systems (NeurIPS) 2020; Project Webpage

Meta-Learning: Theoretical & Algorithmic Foundations

Meta Learning with Implicit Gradients
Aravind Rajeswaran, Chelsea Finn, Sham Kakade, Sergey Levine
Neural Information Processing Systems (NeurIPS) 2019; Project Webpage

Online Meta-Learning
Chelsea Finn, Aravind Rajeswaran, Sham Kakade, Sergey Levine
International Conference on Machine Learning (ICML) 2019; arXiv:1902.08438

Applications in Robotics

Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations
Aravind Rajeswaran, Vikash Kumar, Abhishek Gupta, John Schulman, Emanuel Todorov, Sergey Levine
Robotics: Science and Systems (RSS) 2018; Project Webpage

Reinforcement learning for non-prehensile manipulation: Transfer from simulation to physical system
Kendall Lowrey, Svetoslav Kolev, Jeremy Dao, Aravind Rajeswaran, Emanuel Todorov
IEEE SIMPAR 2018; arXiv:1803.10371 | (Best Paper Award)


I enjoy collaborating with a diverse set of students and researchers. I have had the pleasure of mentoring some highly motivated students at both the undergraduate and PhD levels. List of current students and alumni.

All Publications and Preprints

See this publication page or Google Scholar.


CSE599G: Deep Reinforcement Learning (Instructor)
I designed and co-taught a course on deep reinforcement learning at UW in Spring 2018. The course presents a rigorous mathematical treatment of various RL algorithms along with illustrative applications in robotics. Deep RL courses at UW, MIT, and CMU have borrowed and built upon the material I developed for this course.

CSE547: Machine Learning for Big Data (Teaching Assistant)
This is an advanced graduate-level course on machine learning, with an emphasis on machine learning at scale and distributed algorithms. Topics covered include hashing, sketching, streaming, large-scale distributed optimization, federated learning, and contextual bandits. I was the lead TA for this class.

CSE546: Machine Learning (Teaching Assistant)
This is the introductory graduate-level machine learning class at UW.