I lead a research group, RPAD: Robots Perceiving And Doing, at Carnegie Mellon University.
You can check out my lab website:

[RPAD Website][Publications][Lab Members]

About Me

I am an assistant professor at CMU in the Robotics Institute. Prior to my appointment at CMU, I worked as a post-doc at UC Berkeley with Pieter Abbeel on deep reinforcement learning for object manipulation. I completed my Ph.D. in computer science at Stanford working with Sebastian Thrun and Silvio Savarese on perception for self-driving cars. I also have a B.S. and M.S. in mechanical engineering from MIT.

You can also download my CV.

Joining my Group

If you are interested in coming to CMU to join my group as a Ph.D. student, there is no need to email me. Just apply to CMU's Ph.D. program! You should apply to either the Robotics Institute Ph.D. program or the Machine Learning Ph.D. program and mention my name in your research statement. After you get accepted, you should contact me to discuss the possibility of working in my group.

Teaching

Spring 2018: 16-831: Statistical Techniques in Robotics
Spring 2019: 16-881: Seminar: Deep Reinforcement Learning for Robotics
Fall 2019: 16-831: Statistical Techniques in Robotics
Spring 2020: 16-881: Seminar: Deep Reinforcement Learning for Robotics
Fall 2020: 16-831: Statistical Techniques in Robotics
Spring 2021: 16-881: Seminar: Deep Reinforcement Learning for Robotics

Research Interests

My research lies at the intersection of robotics, machine learning, and computer vision.

I am interested in developing methods for robotic perception and control that allow robots to operate in the messy, cluttered environments of our daily lives. My approach is to design new machine learning algorithms that understand environmental changes: how dynamic objects in the environment can move, and how a robot can affect the environment to achieve a desired task.

I have applied this idea of learning to understand environmental changes to improve a robot's capabilities in two domains: object manipulation and autonomous driving. I am currently working on learning to control indoor robots for various object manipulation tasks, addressing questions of multi-task learning, robust learning, simulation-to-real-world transfer, and safety. Within autonomous driving, I have shown how modeling object appearance changes can improve every part of the robot perception pipeline: segmentation, tracking, velocity estimation, and object recognition. By teaching robots to understand and affect environmental changes, I hope to open the door to many new robotics applications, such as robots for our homes, assisted living facilities, schools, hospitals, or disaster relief areas.

To find out more, check out my lab website: [RPAD Website][Publications][Lab Members]

Elliot Dunlap Smith Hall (EDSH), Room 213