Object Manipulation

Object manipulation in unstructured environments such as homes and offices requires decision making under uncertainty. We investigate how to perform general, fast, and robust object manipulation under uncertainty in a principled manner.

Learning To Grasp Under Uncertainty Using POMDPs

N. P. Garg, D. Hsu, and W. S. Lee. Learning To Grasp Under Uncertainty Using POMDPs. In Proc. IEEE Int. Conf. on Robotics & Automation 2019.

Robust object grasping under uncertainty is an essential capability of service robots. Many existing approaches rely on far-field sensors, such as cameras, to compute a grasp pose and then execute an open-loop grasp after placing the gripper at that pose. This often fails as a result of sensing or environment uncertainty. This paper presents a principled, general, and efficient approach to adaptive grasping that uses both tactile and visual sensing as feedback. We first model adaptive grasping as a partially observable Markov decision process (POMDP), which handles uncertainty naturally. We solve the POMDP for objects sampled from a set in order to generate data for learning. Finally, we train a grasp policy, represented as a deep recurrent neural network (RNN), in simulation through imitation learning. By combining model-based POMDP planning and imitation learning, the proposed approach achieves robustness under uncertainty, generalization over many objects, and fast execution. In particular, we show that modeling only a small sample of objects enables us to learn a robust strategy to grasp previously unseen objects of varying shapes and to recover from failure over multiple steps. Experiments on the G3DB object dataset in simulation and on a smaller object set with a real robot indicate promising results.
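The reason a POMDP "handles uncertainty naturally" is its belief update: each observation refines a probability distribution over hidden states. The sketch below is a hypothetical discrete Bayes filter over candidate object poses, with made-up state names and observation probabilities; it illustrates how tactile feedback sharpens the belief, and is not the paper's actual model.

```python
# Illustrative sketch only: a discrete Bayes filter, the belief-update step
# at the heart of a POMDP formulation of grasping. All state names and
# probabilities below are hypothetical, not taken from the paper.

def update_belief(belief, observation, obs_model):
    """One POMDP belief update: b'(s) is proportional to P(o | s) * b(s)."""
    posterior = {s: obs_model[s].get(observation, 0.0) * p
                 for s, p in belief.items()}
    total = sum(posterior.values())
    if total == 0.0:
        return belief  # observation impossible under all states; keep prior
    return {s: p / total for s, p in posterior.items()}

# Hypothetical example: two candidate object poses, one tactile observation.
# If the object is shifted left, a left-finger contact is far more likely.
obs_model = {
    "centered":     {"contact_left": 0.2, "contact_right": 0.2, "no_contact": 0.6},
    "shifted_left": {"contact_left": 0.7, "contact_right": 0.1, "no_contact": 0.2},
}
belief = {"centered": 0.5, "shifted_left": 0.5}
belief = update_belief(belief, "contact_left", obs_model)
```

After a left-finger contact, the belief shifts toward the "shifted_left" pose, and a planner can adapt the next gripper motion accordingly.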

Push-Net: Deep Planar Pushing for Objects with Unknown Physical Properties

J.K. Li, D. Hsu, and W.S. Lee. Push-Net: Deep planar pushing for objects with unknown physical properties. In Proc. Robotics: Science & Systems, 2018.

We introduce Push-Net, a deep recurrent neural network model that enables a robot to push objects of unknown physical properties for repositioning and reorientation, using only visual camera images as input. Unknown physical properties pose a major challenge for pushing. Push-Net overcomes this challenge by tracking a history of push interactions with an LSTM module and by training with an auxiliary objective that estimates an object's center of mass. We trained Push-Net entirely in simulation and tested it extensively on many different objects, both in simulation and on two real robots, a Fetch arm and a Kinova MICO arm. Experiments suggest that Push-Net is robust and efficient: it achieved an average success rate of over 97% in simulation and succeeded in all real-robot experiments with a small number of pushes.
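The auxiliary-objective idea amounts to multi-task training: the network is penalized both for mispredicting the push outcome and for misestimating the object's center of mass. The sketch below shows a minimal combined loss of this kind, assuming a simple squared-error form for both terms; the function name, the weight `w_com`, and the error forms are our assumptions, not the paper's actual losses.

```python
# Hypothetical sketch of a multi-task loss in the spirit of Push-Net:
# a main push-outcome prediction term plus an auxiliary center-of-mass
# (COM) estimation term. The squared-error forms and the weight w_com
# are illustrative assumptions, not the paper's actual formulation.

def push_net_loss(pred_mask, true_mask, pred_com, true_com, w_com=0.5):
    """Total loss = outcome prediction error + w_com * auxiliary COM error."""
    mask_loss = sum((p - t) ** 2 for p, t in zip(pred_mask, true_mask)) / len(true_mask)
    com_loss = sum((p - t) ** 2 for p, t in zip(pred_com, true_com)) / len(true_com)
    return mask_loss + w_com * com_loss

# Usage with made-up predictions: outcome is half wrong, COM is exact.
loss = push_net_loss(pred_mask=[1.0, 0.0], true_mask=[1.0, 1.0],
                     pred_com=[0.5, 0.5], true_com=[0.5, 0.5])  # 0.5
```

The design point is that the COM term never has to be accurate at test time; it simply forces the LSTM's hidden state to encode physical properties that make the main push predictions more reliable.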

The code for Push-Net is available on GitHub.


[Thesis 2019] Act to See and See to Act: A Robotic System for Object Retrieval in Clutter — Li Juekun

Object retrieval in clutter is an extraordinary challenge for robots. The challenge stems from incomplete knowledge of the environment: a robot's sensing is imperfect due to occlusion among objects, and at the same time it must physically interact with objects of unknown physical properties.

We hypothesize that humans adopt a strategy of Act to See and See to Act to retrieve objects in clutter: we may rearrange (Act) objects to better understand (See) the scene, which in turn guides us to select better actions (Act) toward achieving the goal. This thesis adopts the same strategy to enable a robotic system to retrieve objects in clutter robustly and efficiently under uncertainty in sensing due to occlusion and uncertainty in control due to objects' unknown physical properties, such as their centers of mass.

To alleviate uncertainty in sensing, we formulate the problem of object search in clutter as a partially observable Markov decision process (POMDP) with large state, action, and observation spaces. Using insights into the spatial constraints of the problem, we improve a state-of-the-art POMDP solver, DEterminized Sparse Partially Observable Tree (DESPOT), to solve the POMDP efficiently. Through experiments in simulation, we show that the proposed planner selects actions that remove occlusion and reveal the target object efficiently. We further conclude that POMDP planning is effective for problems that require multi-step lookahead search.
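DESPOT's central trick, determinizing the search by fixing a small set of sampled "scenarios" up front, can be illustrated on a toy problem. The sketch below is not the thesis's solver: the one-dimensional search domain, the 0.8 success probability, and all names are invented for illustration, and observation branching is omitted for brevity. Each scenario carries its own pre-sampled random numbers, so the same action sequence is always evaluated on identical randomness.

```python
# Toy determinized lookahead search in the spirit of DESPOT. A "scenario"
# is (state, pre-sampled random numbers); search depth must not exceed the
# number of pre-sampled numbers. This is an illustrative sketch only.

def simulate(state, action, rnd):
    """Deterministic step given a scenario's pre-sampled random number."""
    step = action if rnd < 0.8 else 0     # move succeeds with prob 0.8
    new_state = state + step
    reward = 1.0 if new_state == 0 else -0.1  # goal: reach position 0
    return new_state, reward

def despot_like_value(nodes, depth, actions=(-1, 1)):
    """Pick ONE action per node, maximizing the average return over all
    scenarios in the node -- the determinized sparse tree search idea."""
    if depth == 0 or not nodes:
        return 0.0
    best = float("-inf")
    for a in actions:
        children, total = [], 0.0
        for state, rnds in nodes:
            next_state, reward = simulate(state, a, rnds[0])
            total += reward
            if next_state != 0:  # stop expanding scenarios that reached the goal
                children.append((next_state, rnds[1:]))
        value = total / len(nodes)
        value += (len(children) / len(nodes)) * despot_like_value(children, depth - 1, actions)
        best = max(best, value)
    return best

# One scenario starting at position 1, with random numbers that favor success:
value = despot_like_value([(1, [0.5, 0.5])], depth=2)  # best plan moves -1 to the goal
```

Because every action sequence is scored on the same fixed scenarios, comparisons between plans are noise-free, which is what makes the sparse tree both deterministic and cheap relative to full-width belief-tree search.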

To handle uncertainty in control, we devise Push-Net, a deep recurrent neural network that enables a robot to push an object from one configuration to another robustly and efficiently. Capturing the history of push interactions enables Push-Net to push novel objects robustly. Experiments in simulation and on a real robot show that embedding physical understanding (center of mass) of objects in Push-Net helps select more effective push actions.

Finally, we improve and integrate the POMDP planner and Push-Net into a real robotic system and evaluate it on a set of challenging scenarios. The results demonstrate that the proposed system retrieves the target object robustly and efficiently in clutter. The system's success is attributed to 1) its ability to handle perceptual uncertainty due to occlusion; 2) its ability to push objects of unknown physical properties in clutter; and 3) its ability to perform multi-step lookahead planning for efficient object search in complex environments.