Yuxiang Yang

I'm an AI Resident working in Robotics at Google. My research focuses on applying data-driven methods to solve complex robot control problems in the real world. More specifically, I currently work on reinforcement learning algorithms for legged locomotion on our quadruped platforms.

Prior to joining Google, I was an undergraduate student at UC Berkeley. I was honored to work with Professor Ronald Fearing on the OpenRoACH project, where I got my first exposure to the full spectrum of robotics, from soldering circuits to designing leg controllers.

profile photo

Email  /  GitHub  /  Google Scholar  /  LinkedIn  /  CV

Research

I'm generally interested in robotics, control theory, and machine learning. Better yet, I love seeing them combined to solve complex, real-world problems.

project image

ES-MAML: Simple Hessian-Free Meta Learning


Xingyou Song, Wenbo Gao, Yuxiang Yang, Krzysztof Choromanski, Aldo Pacchiano, Yunhao Tang
NeurIPS Workshop on Meta Learning, 2019
arxiv

We introduce ES-MAML, a new framework for solving the model-agnostic meta-learning (MAML) problem based on Evolution Strategies (ES). We show how ES can be applied to MAML to obtain an algorithm which avoids the problem of estimating second derivatives, and is also conceptually simple and easy to implement. Moreover, ES-MAML can handle new types of nonsmooth adaptation operators, and other techniques for improving performance and estimation of ES methods become applicable. We show empirically that ES-MAML is competitive with existing methods and often yields better adaptation with fewer queries.
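As a rough sketch of the core idea (a zeroth-order estimate of the meta-objective, not the paper's exact estimator), the snippet below perturbs the meta-parameters with Gaussian noise, adapts each perturbed policy on a sampled task, and weights the perturbations by the adapted return. `sample_task`, `adapt`, and `episode_return` are hypothetical placeholders for a task sampler, an adaptation operator, and a policy rollout.

```python
import numpy as np

def es_maml_step(theta, sample_task, adapt, episode_return,
                 num_perturbations=50, sigma=0.1, lr=0.01):
    """One zeroth-order meta-update: no first- or second-order gradients
    of the policy are required, and `adapt` may itself be nonsmooth."""
    grad_estimate = np.zeros_like(theta)
    for _ in range(num_perturbations):
        eps = np.random.randn(*theta.shape)
        task = sample_task()
        # Adapt the perturbed meta-parameters on the sampled task.
        adapted = adapt(theta + sigma * eps, task)
        # Weight the perturbation by the return of the adapted policy.
        grad_estimate += episode_return(adapted, task) * eps
    grad_estimate /= (num_perturbations * sigma)
    return theta + lr * grad_estimate
```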

project image

Data Efficient Reinforcement Learning for Legged Robots


Yuxiang Yang, Ken Caluwaerts, Atil Iscen, Tingnan Zhang, Jie Tan, Vikas Sindhwani
Conference on Robot Learning (CoRL), 2019
arxiv / video

We apply model-based reinforcement learning to the Minitaur quadruped robot. With a novel loss function that ensures long-horizon accuracy, careful handling of planning latency, and safe exploration, our algorithm allows the robot to learn to walk within 5 minutes, which is orders of magnitude more sample efficient than model-free methods. Since we model the dynamics rather than the policy, the algorithm also generalizes zero-shot to unseen tasks.
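To illustrate the long-horizon idea (a simplified multi-step prediction loss, not necessarily the exact loss used in the paper), the sketch below unrolls a learned one-step dynamics model along a recorded trajectory and penalizes accumulated drift; `model` is a hypothetical one-step predictor.

```python
import numpy as np

def multi_step_loss(model, states, actions, horizon=10):
    """Unroll a learned one-step dynamics model from the start of a
    trajectory segment and penalize drift from the observed states,
    instead of scoring only single-step predictions."""
    steps = min(horizon, len(actions))
    predicted = states[0]
    loss = 0.0
    for t in range(steps):
        predicted = model(predicted, actions[t])            # predicted next state
        loss += np.mean((predicted - states[t + 1]) ** 2)   # compare to observed state
    return loss / steps
```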

project image

NoRML: No-Reward Meta Learning


Yuxiang Yang, Ken Caluwaerts, Atil Iscen, Jie Tan, Chelsea Finn
International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2019
arxiv / code / website

We introduce a new algorithm for meta reinforcement learning that is more effective at adapting to changes in dynamics. The key idea is a learned advantage function, which allows adaptation without a ground-truth reward signal. The resulting algorithm adapts more effectively than prior methods and remains effective in sparse-reward settings.
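Roughly, the inner loop replaces environment rewards with a meta-learned advantage estimate when computing the policy-gradient adaptation step. The sketch below is a minimal PyTorch-style version under that assumption; `policy`, `advantage_net`, and the `(state, action, next_state)` trajectory format are hypothetical stand-ins, not the released code.

```python
import torch

def inner_adaptation_step(policy, advantage_net, trajectory, inner_lr=0.1):
    """One reward-free adaptation step: weight policy log-probabilities by a
    learned advantage instead of environment reward, then take a gradient
    step on the policy parameters (kept differentiable for the outer loop)."""
    params = list(policy.parameters())
    loss = 0.0
    for state, action, next_state in trajectory:
        # The learned advantage replaces the reward; keeping it in the graph
        # lets the outer (meta) loss also train the advantage network.
        advantage = advantage_net(state, action, next_state)
        log_prob = policy.log_prob(state, action)
        loss = loss - log_prob * advantage
    grads = torch.autograd.grad(loss, params, create_graph=True)
    return [p - inner_lr * g for p, g in zip(params, grads)]
```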

project image

OpenRoACH: A Durable Open-Source Hexapedal Platform


Liyu Wang, Yuxiang Yang, Gustavo Correa, Konstantinos Karydis, Ronald S. Fearing
IEEE International Conference on Robotics and Automation (ICRA), 2019
arxiv / video / website

We present an open-source, ROS-enabled legged robot platform for research and education. The robot costs less than $200 to build and survived a 24-hour burn-in test. The ROS interface makes it easy for the robot to interact with existing sensors, and we demonstrate several tasks including path following and AR tag tracking.
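For example, reading sensor data and sending commands goes through standard ROS topics. Here is a minimal sketch with hypothetical topic names such as /openroach/imu and /openroach/cmd_vel; see the project website for the actual interface.

```python
import rospy
from sensor_msgs.msg import Imu
from geometry_msgs.msg import Twist

def on_imu(msg):
    # Log orientation feedback; a real controller would act on it.
    rospy.loginfo("orientation: %s", msg.orientation)

rospy.init_node("openroach_demo")
rospy.Subscriber("/openroach/imu", Imu, on_imu)                      # hypothetical topic
cmd_pub = rospy.Publisher("/openroach/cmd_vel", Twist, queue_size=1)  # hypothetical topic
rospy.spin()
```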





Design and source code from Jon Barron's website