Published work

W. Bejjani, W. C. Agboh, M. R. Dogar and M. Leonetti, "Occlusion-Aware Search for Object Retrieval in Clutter". In: IEEE/RSJ IROS, 2021.

Abstract: We address the manipulation task of retrieving a target object from a cluttered shelf. When the target object is hidden, the robot must search through the clutter to retrieve it. Solving this task requires reasoning over the likely locations of the target object, as well as physics reasoning over multi-object interactions and future occlusions. In this work, we present a data-driven hybrid planner for generating occlusion-aware actions in closed loop. The hybrid planner explores likely locations of the occluded target object, as predicted by a distribution learned from the observation stream. The search is guided by a heuristic trained with reinforcement learning to act on observations with occlusions. We evaluate our approach in different simulation and real-world settings (video available at https://youtu.be/dY7YQ3LUVQg). The results validate that our approach can search for and retrieve a target object in near real time in the real world while being trained only in simulation.

Click here for the full paper.
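The core idea of reasoning over likely target locations can be illustrated with a minimal belief-update sketch. All names (the shelf cells, `update_belief`, `next_search_cell`) are hypothetical illustrations, not the paper's implementation:

```python
def update_belief(belief, observed_empty):
    """Cells observed to be free of the target get zero probability;
    the remaining mass is renormalized over still-occluded cells."""
    post = {c: (0.0 if c in observed_empty else p) for c, p in belief.items()}
    total = sum(post.values())
    return {c: p / total for c, p in post.items()}

def next_search_cell(belief):
    """Greedy search heuristic: look where the target is most likely."""
    return max(belief, key=belief.get)

# Four shelf cells with a uniform prior; the front cells are seen to be empty,
# so the belief concentrates on the occluded back cells.
belief = {"front-left": 0.25, "front-right": 0.25,
          "back-left": 0.25, "back-right": 0.25}
belief = update_belief(belief, {"front-left", "front-right"})
print(next_search_cell(belief))
```

The paper learns such a distribution from the observation stream rather than applying a hand-written rule, but the planner's use of it — directing the search toward the most probable occluded region — follows the same pattern.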

W. Bejjani, M. Leonetti and M. R. Dogar, "Learning Image-Based Receding Horizon Planning for Manipulation in Clutter". In: Robotics and Autonomous Systems, 2021.

Abstract: Manipulating an object into a desired location in a cluttered and restricted environment requires reasoning over the long-term consequences of an action while reacting locally to multiple physics-based interactions. We present Visual Receding Horizon Planning (VisualRHP), a framework that interleaves real-world execution with look-ahead planning to efficiently solve a short-horizon approximation of a multi-step sequential decision-making problem. VisualRHP is guided by a learned heuristic that acts on an abstract, colour-labelled, image-based representation of the state. With this representation, the robot can generalize its behaviours to different environment setups, that is, different numbers and shapes of objects, while also acquiring transferable manipulation skills that can be applied to a multitude of real-world objects. We train the heuristic with imitation and reinforcement learning in discrete and continuous action spaces. We detail our heuristic learning process for environments with sparse rewards and non-linear, non-continuous dynamics. In particular, we introduce the changes necessary for improving the stability of existing reinforcement learning algorithms that use neural networks with shared parameters. In a series of simulation and real-world experiments, we show the robot performing prehensile and non-prehensile actions in synergy to successfully manipulate a variety of real-world objects in real time.

Click here for the full paper.

W. Bejjani, M. R. Dogar and M. Leonetti, "Learning Physics-Based Manipulation in Clutter: Combining Image-Based Generalization and Look-Ahead Planning". In: IEEE/RSJ IROS, 2019.

Abstract: Physics-based manipulation in clutter involves complex interaction between multiple objects. In this paper, we consider the problem of learning, from interaction in a physics simulator, manipulation skills to solve this multi-step sequential decision making problem in the real world. Our approach has two key properties: (i) the ability to generalize and transfer manipulation skills (over the type, shape, and number of objects in the scene) using an abstract image-based representation that enables a neural network to learn useful features; and (ii) the ability to perform look-ahead planning in the image space using a physics simulator, which is essential for such multi-step problems. We show, in sets of simulated and real-world experiments (video available on https://youtu.be/EmkUQfyvwkY), that by learning to evaluate actions in an abstract image-based representation of the real world, the robot can generalize and adapt to the object shapes in challenging real-world environments.

Click here for the full paper.

D. Leidner, G. Bartels, W. Bejjani, A. Albu-Schäffer and M. Beetz, "Cognition-enabled robotic wiping: Representation, planning, execution, and interpretation". In: Robotics and Autonomous Systems, 2019.

Abstract: Advanced cognitive capabilities enable humans to solve even complex tasks by representing and processing internal models of manipulation actions and their effects. Consequently, humans are able to plan the effect of their motions before execution and validate the performance afterwards. In this work, we derive an analogous approach for robotic wiping actions, which are fundamental to some of the most frequent household chores, including vacuuming the floor, sweeping dust, and cleaning windows. We describe wiping actions and their effects based on a qualitative particle distribution model. This representation enables a robot to plan goal-oriented wiping motions for the prototypical wiping actions of absorbing, collecting and skimming. The particle representation is utilized to simulate the task outcome before execution and to infer the real performance afterwards based on haptic perception. This way, the robot is able to estimate the task performance and schedule additional motions if necessary. We evaluate our methods in simulated scenarios, as well as in real experiments with the humanoid service robot Rollin' Justin.

Click here for the full paper.
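The qualitative particle model at the heart of this line of work — the medium represented as particles on a surface, a wipe as a desired state change of those particles, and task performance estimated from the resulting distribution — can be sketched as follows. The geometry and function names here are illustrative assumptions, not the authors' code:

```python
import random

def collecting_wipe(particles, path, tool_radius):
    """Qualitative model of a 'collecting' wipe: particles within
    tool_radius of any point on the tool path are swept along the
    stroke and deposited at its end point."""
    end = path[-1]
    def hit(p):
        return any((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= tool_radius ** 2
                   for q in path)
    return [end if hit(p) else p for p in particles]

def performance(particles, goal, goal_radius):
    """Estimated task outcome: fraction of particles gathered at the goal."""
    near = sum(1 for p in particles
               if (p[0] - goal[0]) ** 2 + (p[1] - goal[1]) ** 2 <= goal_radius ** 2)
    return near / len(particles)

# Crumbs scattered on a unit table top; one straight stroke toward the edge.
random.seed(0)
crumbs = [(random.random(), random.random()) for _ in range(200)]
stroke = [(x / 20, 0.5) for x in range(21)]   # dense path along y = 0.5
goal = stroke[-1]
before = performance(crumbs, goal, 0.05)
after = performance(collecting_wipe(crumbs, stroke, 0.15), goal, 0.05)
print(before, after)
```

Simulating the wipe before execution and comparing the predicted and measured distributions afterwards is what lets the robot decide whether additional strokes are needed.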

W. Bejjani, R. Papallas, M. Leonetti and M. R. Dogar, "Planning with a Receding Horizon for Manipulation in Clutter Using a Learned Value Function". In: IEEE-RAS Humanoids, 2018.

Abstract: Manipulation in clutter requires solving complex sequential decision-making problems in an environment rich with physical interactions. The transfer of motion planning solutions from simulation to the real world, in open loop, suffers from the inherent uncertainty in modelling real-world physics. We propose interleaving planning and execution in real time, in a closed-loop setting, using a Receding Horizon Planner (RHP) for pushing manipulation in clutter. In this context, we address the problem of finding a suitable value-function-based heuristic for efficient planning, and for estimating the cost-to-go from the horizon to the goal. We estimate such a value function first by using plans generated by an existing sampling-based planner. Then, we further optimize the value function through reinforcement learning. We evaluate our approach and compare it to state-of-the-art planning techniques for manipulation in clutter. We conduct experiments in simulation with artificially injected uncertainty on the physics parameters, as well as in real-world tasks of manipulation in clutter. We show that this approach enables the robot to react to the uncertain dynamics of the real world effectively.

Click here for the full paper.
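The receding-horizon scheme described in the abstract — depth-limited look-ahead in a simulator, with a learned value function estimating the cost-to-go beyond the horizon — can be sketched generically. The toy dynamics and all function names below are illustrative assumptions, not the paper's implementation:

```python
import itertools

def receding_horizon_plan(state, actions, simulate, cost, value_fn, horizon):
    """Roll out every action sequence up to `horizon` in a simulator,
    score it by accumulated cost plus a learned estimate of the
    cost-to-go at the horizon, and return the best first action.
    Executing that action and replanning closes the loop."""
    best_action, best_score = None, float("inf")
    for seq in itertools.product(actions, repeat=horizon):
        s, total = state, 0.0
        for act in seq:
            s = simulate(s, act)
            total += cost(s, act)
        total += value_fn(s)   # stand-in for the trained value network
        if total < best_score:
            best_action, best_score = seq[0], total
    return best_action

# Toy 1-D pushing example: drive the state toward goal position 5.
simulate = lambda s, a: s + a
cost = lambda s, a: abs(5 - s)
value_fn = lambda s: abs(5 - s)   # hypothetical learned cost-to-go
a = receding_horizon_plan(0, [-1, 0, 1], simulate, cost, value_fn, horizon=3)
print(a)
```

The exhaustive enumeration here is only for clarity; the paper's contribution lies in learning the value function (first from a sampling-based planner's solutions, then refined with reinforcement learning) so that short horizons suffice.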

D. Leidner, W. Bejjani, A. Albu-Schäffer and M. Beetz, "Robotic agents representing, reasoning, and executing wiping tasks for daily household chores". In: AAMAS, 2016.

Abstract: Universal robotic agents are envisaged to perform a wide range of manipulation tasks in everyday environments. A common action observed in many household chores is wiping, such as the absorption of spilled water with a sponge, skimming breadcrumbs off the dining table, or collecting shards of a broken mug using a broom. To cope with this versatility, the agents have to represent the tasks at a high level of abstraction. In this work, we propose to represent the medium in wiping tasks (e.g. water, breadcrumbs, or shards) as a generic particle distribution. This representation enables us to describe wiping tasks as the desired state change of the particles, which allows the agent to reason about the effects of wiping motions in a qualitative manner. Based on this, we develop three prototypical wiping actions for the generic tasks of absorbing, collecting and skimming. The Cartesian wiping motions are resolved to joint motions by exploiting the free degree of freedom of the involved tool. Furthermore, the workspace of the robotic manipulators is used to reason about the reachability of wiping motions. We evaluate our methods in simulated scenarios, as well as in a real experiment with the robotic agent Rollin' Justin.

Click here for the full paper.

W. Bejjani, "Automated Planning of Whole-Body Motions for Everyday Household Chores with a Humanoid Service Robot", Master's thesis, Technical University of Dortmund, 2015.

Wiping actions are required in many everyday household activities. Whether for skimming bread crumbs off a table top with a sponge or for collecting shards of a broken mug from the floor with a broom, future service robots are expected to master such manipulation tasks with a high level of autonomy. In contrast to actions such as pick-and-place, where immediate effects are observed, wiping tasks require a more elaborate planning process to achieve the desired outcome. The wiping motions have to autonomously adapt to different environment layouts and to the specifications of the robot and the tool involved.

The work presented in this report proposes a strategy, called the extended Semantic Directed Graph (eSDG), for mapping wiping-related semantic commands to joint motions of a humanoid robot. The medium (e.g. bread crumbs, water, or shards of a broken mug) and the physical interaction parameters are represented in a qualitative model as particles distributed on a surface. The eSDG combines information from the qualitative model, the geometric state of the robot and the environment, and the specific semantic goal of the wiping task in order to generate executable, goal-oriented Cartesian paths. The robot's reachable workspace is used to reason about partitioning the task into smaller sub-tasks, each of which can be executed from a static position of the robot base. For the eSDG path-following problem, a cascading structure of inverse kinematics solvers is developed, in which the free degree of freedom of the involved tool is exploited in favour of wiping quality. The proposed approach is evaluated in a series of simulated scenarios, and the results are validated by a real experiment on the humanoid robot Rollin' Justin.

Click here for the full paper.