Bio
I’m a recent PhD graduate of CMU’s Robotics Institute. My research combines machine learning with model-based planning and control for structured, scalable autonomy. In the past, I’ve worked on planning for contact-rich manipulation, safe control for agile aerial robots, and adaptive control for locomotion under disturbances. At CMU, I was advised by Changliu Liu and John Dolan, and I was a recipient of the Qualcomm Graduate Fellowship. I have also spent time at the Robotics and AI Institute (previously the Boston Dynamics AI Institute).
Prior to CMU, I did my undergrad at UC Berkeley in EECS and Math, where I worked with Sergey Levine on deep RL for robotics.
You can reach me at simin.liu.1314 -at- gmail dot com
I am on the job market — please reach out if you have a relevant role!
News
- [Fall 2025] Passed my thesis defense!
- [April 2025] Paper accepted at ACM Transactions on Cyber-Physical Systems
- [Sept 2024–May 2025] Research internship at the Robotics and AI Institute, with Tao Pang
- [Jun 2024] Paper accepted at ECC
- [May 2023] Selected for Qualcomm Graduate Fellowship
- [April 2023] Paper accepted at ICLR
- [Sept 2022] Paper accepted at CoRL
Portfolio
High-Performance Planning for Contact-Rich Manipulation
Sampling-based planners for contact-rich manipulation are common, but they produce circuitous, inefficient trajectories. Improving on them is hard because the action space is combinatorial and cannot be exhaustively searched. Our insight is to reduce the action space to higher-level, algorithmically generated reachable-set primitives, enabling optimal search over this space in under a minute for bimanual manipulation.
Our method generates short, direct plans that leverage all manipulator surfaces, not just end-effectors.
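Once the action space is reduced to a finite set of primitives, optimal search becomes ordinary shortest-path search. As a toy illustration (not the actual planner; the states, transitions, and costs below are entirely made up), Dijkstra's algorithm over primitive transitions looks like this:

```python
import heapq

def plan(start, goal, primitives):
    """Dijkstra over primitive transitions.

    primitives maps a state label to a list of (next_state, cost) pairs,
    each reachable by applying one higher-level primitive.
    Returns (total_cost, path) or None if the goal is unreachable.
    """
    frontier = [(0.0, start, [start])]  # (cost so far, state, path)
    best = {}                           # cheapest known cost per state
    while frontier:
        cost, s, path = heapq.heappop(frontier)
        if s == goal:
            return cost, path
        if s in best and best[s] <= cost:
            continue                    # already expanded more cheaply
        best[s] = cost
        for s_next, c in primitives.get(s, []):
            heapq.heappush(frontier, (cost + c, s_next, path + [s_next]))
    return None

# Toy primitive graph: going A -> B -> C beats the direct A -> C primitive.
toy = {"A": [("B", 1.0), ("C", 4.0)], "B": [("C", 1.0)]}
```

Because the primitive set is finite, this search is exhaustive and optimal over the reduced space, which is what makes the sub-minute planning times possible.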
Safe Control
We build reactive safety filters that wrap a nominal controller, modifying its commands only when safety is at risk. A good filter is minimally invasive while respecting input bounds and system dynamics that limit how quickly safety maneuvers can be executed.
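For intuition, here is a minimal sketch of the filtering idea on a hypothetical 1D single integrator (dynamics x' = u, requirement x ≤ x_max). This is an illustrative example, not our method: the bounds and gain below are made up, and in this simple case the control barrier function condition reduces to a closed-form clamp rather than an optimization.

```python
def safety_filter(x, u_nom, x_max=1.0, u_max=2.0, alpha=5.0):
    """Minimally modify u_nom so the barrier h(x) = x_max - x stays nonnegative.

    Along x' = u we have h' = -u, so the CBF condition h' >= -alpha * h
    reduces to u <= alpha * (x_max - x): the closest safe input to the
    nominal command is a simple clamp.
    """
    u_bound = alpha * (x_max - x)            # CBF condition: u <= u_bound
    u_safe = min(u_nom, u_bound)             # intervene only when needed
    return max(-u_max, min(u_max, u_safe))   # respect input bounds
```

Far from the boundary the nominal command passes through untouched; near the boundary the filter overrides it just enough to stay safe. For real dynamics the clamp becomes a small quadratic program solved at every control step.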
Safe Control for Uncertain Systems
Most safety-filter synthesis approaches assume an exactly known dynamics model, which is rarely available in practice. We consider systems with uncertain model parameters and devise a sum-of-squares programming algorithm for synthesis. We generate a geofencing (stay-within-region) safety filter for a drone with unknown drag in minutes on a standard laptop CPU.
Our safety filter keeps the drone inside the geofence despite unknown wind gusts.
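In standard control barrier function notation, the kind of requirement such a synthesis must certify (a sketch, not our exact formulation) is robustness over the uncertain parameters θ ∈ Θ:

```latex
% For every state and every admissible parameter, some bounded input
% must keep the barrier h nonnegative along the dynamics f:
\forall x \in \mathcal{X},\ \forall \theta \in \Theta:\qquad
\max_{u \in \mathcal{U}} \, \nabla h(x)^{\top} f(x, u, \theta) \;\ge\; -\alpha\, h(x).
```

Sum-of-squares programming can certify conditions of this polynomial form over the entire parameter set at once, rather than checking sampled parameter values.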
Safe Control for High-Dimensional Systems
Grid-based RL can synthesize safety filters via an optimal control formulation, but it quickly becomes intractable beyond ~6D. We take inspiration from deep RL and nonlinear control, posing this problem as training a neural function to satisfy control barrier function (CBF) conditions. We synthesize a safety filter for a 10D system with <2 hours of training, and it triggers orders of magnitude less often than model predictive control (MPC).
Our safety filter prevents the pendulum from falling while the nominal controller stabilizes the quadrotor (10D quadrotor–pendulum system).
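The training signal for such a neural barrier can be as simple as a hinge penalty on the CBF decrease condition over sampled states. A minimal, framework-agnostic sketch of that loss (hypothetical, not our training code; in practice h would be a neural network trained by gradient descent on this penalty):

```python
import numpy as np

def cbf_violation_loss(h, grad_h, f, states, controls, alpha=1.0):
    """Hinge penalty on the CBF condition grad_h(x) . f(x, u) >= -alpha * h(x),
    averaged over a batch of sampled (state, control) pairs.

    h: barrier value at a state; grad_h: its gradient; f: the dynamics.
    A positive residual means the condition is violated at that sample.
    """
    residuals = []
    for x, u in zip(states, controls):
        lhs = float(grad_h(x) @ f(x, u))   # h_dot along the dynamics
        rhs = -alpha * h(x)
        residuals.append(max(0.0, rhs - lhs))
    return float(np.mean(residuals))
```

Driving this loss to zero on a dense sampling of the state space is what replaces the exhaustive grid, which is why the approach scales past the ~6D limit of grid-based methods.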
Model-Based RL for Locomotion Under Disturbances
We study adaptive locomotion under a broad range of previously unseen disturbances (external forces, state-estimation error, and unmodeled effects), where both purely model-based methods and standard RL can struggle to generalize. We combine adaptive control with meta-learning: we estimate a neural dynamics model online and apply the adapted model inside a sampling-based controller. We pre-train dynamics features offline on 1–2 hours of disturbance data, and at deployment the controller tracks a path closely despite unseen disturbances.
The robot closely tracks the path despite leg loss, terrain changes, payload variation, and state-estimation error.
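The adapt-then-plan loop can be illustrated on a hypothetical 1D system with a constant unknown disturbance. Everything below is illustrative rather than the actual system: the "model" has a single parameter, the controller is random shooting, and the bounds and horizon are made up.

```python
import numpy as np

def estimate_disturbance(xs, us, dt):
    """Online model estimation, reduced to its simplest case: fit a
    constant additive disturbance d in x' = u + d from recent transitions."""
    preds = np.array([x + u * dt for x, u in zip(xs[:-1], us)])
    return float(np.mean((np.array(xs[1:]) - preds) / dt))

def sampling_controller(x, goal, d_hat, dt, n_samples=256, horizon=5, rng=None):
    """Random-shooting MPC on the adapted model x' = u + d_hat: sample
    action sequences, roll each out, keep the first action of the best."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_u, best_cost = 0.0, np.inf
    for _ in range(n_samples):
        u_seq = rng.uniform(-1.0, 1.0, horizon)
        x_sim, cost = x, 0.0
        for u in u_seq:
            x_sim = x_sim + (u + d_hat) * dt   # roll out adapted model
            cost += (x_sim - goal) ** 2        # penalize tracking error
        if cost < best_cost:
            best_cost, best_u = cost, float(u_seq[0])
    return best_u
```

In the real system, the single parameter is replaced by a feature-space neural dynamics model whose features are meta-learned offline, and the sampling-based controller plans over the full robot dynamics.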
