Aravind Rajeswaran

Research Scientist at Facebook AI Research (FAIR)

Visiting Postdoc / Collaborator,
Berkeley AI Research Lab (BAIR)

Contact: aravraj@fb.com or aravraj@cs.uw.edu
Google Scholar | Bio | CV | GitHub | Twitter


I am a research scientist at FAIR, working with the Embodied AI and RL teams. I also collaborate with and advise students in Prof. Abhinav Gupta's lab at FAIR/CMU and Prof. Pieter Abbeel's lab at UC Berkeley.

I work on algorithmic foundations of deep learning and reinforcement learning. My recent focus areas include learning from passive or offline experience, representation learning, and learning generative models for decision making. I apply these algorithmic tools to applications such as robotics, personalized recommendation systems, and character animation.

I received my PhD in CSE from the University of Washington, working with Profs. Sham Kakade and Emo Todorov. During this time, I also worked closely with Sergey Levine and Chelsea Finn, and spent time as a student researcher at Google Brain and OpenAI. Before that, I received my Bachelor's degree from IIT Madras, along with the best undergraduate thesis award.


Representative Papers

A Game Theoretic Framework for Model Based Reinforcement Learning
Aravind Rajeswaran, Igor Mordatch, Vikash Kumar
International Conference on Machine Learning (ICML) 2020; Project Webpage

MOReL: Model-Based Offline Reinforcement Learning
Rahul Kidambi, Aravind Rajeswaran, Praneeth Netrapalli, Thorsten Joachims
Neural Information Processing Systems (NeurIPS) 2020; Project Webpage

Meta Learning with Implicit Gradients
Aravind Rajeswaran, Chelsea Finn, Sham Kakade, Sergey Levine
Neural Information Processing Systems (NeurIPS) 2019; Project Webpage

Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations
Aravind Rajeswaran, Vikash Kumar, Abhishek Gupta, John Schulman, Emanuel Todorov, Sergey Levine
Robotics: Science and Systems (RSS) 2018; Project Webpage

Plan Online, Learn Offline: Efficient Learning and Exploration via Model-Based Control
Kendall Lowrey, Aravind Rajeswaran, Sham Kakade, Emanuel Todorov, Igor Mordatch
International Conference on Learning Representations (ICLR) 2019; Project Webpage


Mentoring

I enjoy collaborating with a diverse set of students and researchers. I have had the pleasure of mentoring some highly motivated students at both the undergraduate and PhD levels.


All Publications and Preprints

See this publication page or Google Scholar.


Teaching

CSE599G: Deep Reinforcement Learning (Instructor)
I designed and co-taught a course on deep reinforcement learning at UW in Spring 2018. The course presents a rigorous mathematical treatment of various RL algorithms along with illustrative applications in robotics. Deep RL courses at UW, MIT, and CMU have borrowed and built upon the material I developed for this course.

CSE547: Machine Learning for Big Data (Teaching Assistant)
This is an advanced graduate-level course on machine learning, with an emphasis on machine learning at scale and distributed algorithms. Topics covered include hashing, sketching, streaming, large-scale distributed optimization, federated learning, and contextual bandits. I was the lead TA for this class.

CSE546: Machine Learning (Teaching Assistant)
This is the introductory graduate-level machine learning class at UW.