Learning Safe-Falling Strategies for Humanoids
We develop an off-policy reinforcement learning algorithm with a mixture of actor-critic experts to teach humanoid robots how to fall while minimizing impact impulse.
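A minimal sketch of the impulse-minimizing objective described above. The function name, weighting, and force representation are hypothetical illustrations, not the project's actual reward:

```python
import numpy as np

def fall_reward(contact_forces, dt, w_impulse=1.0):
    # Hypothetical reward term: penalize the impulse accumulated over a fall,
    # approximated as the time integral of net contact-force magnitude.
    # contact_forces: per-step net contact force magnitudes (N); dt: step (s).
    impulse = np.sum(np.abs(contact_forces)) * dt
    return -w_impulse * impulse
```

A policy trained against such a term is driven toward contact sequences that spread the landing over softer, longer contacts rather than a single hard impact.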
Improving Humanoid Safety by Combining Model-Based Control with Reinforcement Learning
We develop an algorithm that teaches robots to balance without falling when an external push is applied. We combine model-based control inputs with model-free policy learning to improve performance, and present a curriculum that enables efficient learning.
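One common way to combine model-based control with model-free learning is a residual formulation; this is a sketch under that assumption (the function names and blend parameter are hypothetical), not necessarily the exact combination used here:

```python
import numpy as np

def combined_action(u_model, policy_residual, obs, blend=1.0):
    # Hypothetical residual scheme: a model-based controller supplies a
    # nominal command u_model(obs); the learned policy contributes a
    # correction that is blended on top of it.
    return u_model(obs) + blend * policy_residual(obs)
```

With blend = 0 the system falls back to the pure model-based controller, which can make early training safer.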
Learning Joint-Space Control of Robot Manipulators
We develop a curriculum learning algorithm for joint-space control of robot manipulators that achieves low end-effector error (<1 cm), generates smooth control comparable to operational-space control, and avoids obstacles.
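A curriculum for such a task might gradually tighten the end-effector error tolerance as training progresses. The schedule below is a hypothetical linear example, not the project's actual curriculum:

```python
def curriculum_tolerance(stage, start=0.10, final=0.01, num_stages=10):
    # Hypothetical curriculum schedule: shrink the acceptable end-effector
    # error (meters) linearly from `start` to `final` over `num_stages`.
    frac = min(stage, num_stages) / num_stages
    return start + frac * (final - start)
```

Early stages accept coarse reaching (10 cm), and the final stage demands the sub-centimeter accuracy cited above.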
Learning to Prevent Falls with an Assistive Device
We develop a reinforcement learning algorithm to learn a fall-prevention control policy for an assistive device such as a lower-limb exoskeleton.
Visuo-Tactile Control Policy for a Multi-Fingered Robot Hand
We investigated how incorporating different sensing modalities, such as vision and tactile sensing, affects policy performance on tasks such as grasping, using a real-world Kuka-Allegro robot system.
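A simple way to study such modality ablations is to build the policy observation by concatenating per-modality feature vectors and dropping the ones under test. This sketch is a hypothetical illustration of that setup, not the project's actual encoder:

```python
import numpy as np

def fuse_observations(vision_feat, tactile_feat, proprio_feat):
    # Hypothetical fusion: concatenate per-modality feature vectors into a
    # single observation; pass None to ablate a modality from the policy input.
    parts = [f for f in (vision_feat, tactile_feat, proprio_feat) if f is not None]
    return np.concatenate(parts)
```

Comparing policies trained with and without, say, the tactile slot isolates that modality's contribution to grasping performance.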
Learning to Walk on a Treadmill
We learn a control policy that enables simulated human agents to walk on a treadmill. The biomechanical gait characteristics of the agent are similar to real-world human walking.
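Matching biomechanical gait characteristics is often encouraged with an imitation-style reward that scores similarity to a reference human gait. The term below is a hypothetical sketch of that idea (the names, error metric, and sigma are assumptions, not the project's actual formulation):

```python
import numpy as np

def gait_imitation_reward(sim_joint_angles, ref_joint_angles, sigma=0.5):
    # Hypothetical imitation term: reward the simulated agent for tracking
    # a reference human gait, via a Gaussian kernel on joint-angle error.
    err = np.sum((sim_joint_angles - ref_joint_angles) ** 2)
    return np.exp(-err / sigma ** 2)
```

The reward is 1 when the simulated pose matches the reference exactly and decays smoothly as the gait deviates.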