
Humanoids15 Workshop on Proprioceptive and Exteroceptive Data Fusion for State Estimation and Whole-Body Control of Humanoid Robots


 

Organizers:

Federico L. Moro, Consiglio Nazionale delle Ricerche (CNR), Italy

Dimitrios Kanoulas, Istituto Italiano di Tecnologia (IIT), Italy

Jaeheung Park, Seoul National University (SNU), South Korea

Luis Sentis, University of Texas at Austin, USA

 

 

Abstract:

Humanoid robots need to locomote and manipulate reliably and continuously in highly uncertain environments. To complete challenging tasks, sensor data must be used to estimate the robot's own state with respect to the environment. Simultaneous localization and mapping (SLAM) has been studied extensively over the last few years, but there is still no commonly accepted solution to the problem, especially since different tasks may require different estimation accuracy. On one side, exteroception, such as visual perception, has been used to build either sparse or dense maps of the environment while continuously localizing the robot within them, but with potentially large errors and drift over time. On the other side, proprioception, such as the robot kinematics, can provide a more accurate estimate of the joint state, but only under certain assumptions, for instance that the robot maintains non-slipping or static contacts with the environment. Sensor data fusion is required for more accurate and continuous state estimation, in order to allow reliable whole-body control. The problem becomes even more challenging when the sensors not only provide uncertain measurements but also provide them on a non-continuous basis.
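As a toy illustration of the proprioceptive side of this trade-off, the sketch below (not part of the workshop material; all names are illustrative) shows how the non-slip assumption lets leg kinematics recover the base pose: the stance foot's world pose is held fixed while joint encoders and forward kinematics give the foot pose in the base frame.

import numpy as np

def base_from_stance_foot(T_world_foot, T_base_foot):
    """T_world_foot: 4x4 world pose of the non-slipping stance foot,
    assumed fixed while the foot stays in contact.
    T_base_foot: 4x4 pose of the foot in the base frame, obtained from
    joint encoders and forward kinematics.
    Returns the 4x4 world pose of the base."""
    return T_world_foot @ np.linalg.inv(T_base_foot)

If the contact slips, T_world_foot is no longer fixed and the estimate drifts, which is exactly where exteroceptive corrections become necessary.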

This workshop will provide a platform for researchers working on SLAM from the proprioception and/or exteroception point of view to exchange ideas on sensor data fusion methods for better state estimation. The aim is to foster collaboration among researchers working on SLAM and sensor fusion, whether on the control, planning, or perception side of the problem, to advance the state of the art in robot locomotion and manipulation in challenging and uncertain environments. We propose a full-day workshop consisting of a mixture of presentations on topics including SLAM, sensing, and data fusion, with the goal of applying them to humanoids. Moreover, we will allocate adequate time for questions and discussion to make the workshop as interactive as possible.

 

Keywords: data fusion; state estimation; whole-body control; integration of perception and control.

 

Program:

Session 1
10:00 - 10:05 Welcome
10:05 - 10:35 Nikolaus Vahrenkamp, Mirko Wächter, Tamim Asfour, Karlsruhe Institute of Technology, Germany
10:40 - 11:10 Siyuan Feng, Chris Atkeson, Carnegie Mellon University, USA
11:15 - 11:45 Coffee Break
Session 2
11:45 - 12:15 Dimitrios Kanoulas, IIT, Italy
12:20 - 12:50 Marilena Vendittelli, Università di Roma “La Sapienza”, Italy
12:55 - 13:30 Lunch Break
Session 3
13:30 - 14:00 Ludovic Righetti, Max Planck Institute, Germany
14:05 - 14:35 Luis Sentis, University of Texas at Austin, USA
14:40 - 15:10 Jaeheung Park, Seoul National University, South Korea
15:15 - 16:15 Discussion
16:15 - 16:20 Closing Remarks

 

List of speakers:

1. Nikolaus Vahrenkamp, Mirko Wächter, Tamim Asfour
Title: "Memory-based robot architectures for whole-body motion control"
Abstract:
Providing a consistent internal representation of the robot's state together with the perceived environment is an essential feature of component-based robot software frameworks. In order to provide such representations, a robot software architecture should offer consistent memory concepts for storing and fusing a wide variety of sensor data, ranging from low-level sensor readings to inferred entities on a symbolic level. Based on this representation, the robot's high-level control system can operate on seamless data while being decoupled from sensor data processing. We present our component-based robot software framework ArmarX, which has been developed to inherently support memory-based robot software architectures. ArmarX provides two memory structures: a persistent memory, which supports long-term memory and the storage of prior knowledge, and the robot's working memory, a central component for representing the robot's current state and for building a representation of the world through continuous fusion of all sensor modalities. Here, probabilistic concepts ensure that information about the certainty of perceived entities is available in both the temporal and the spatial domain. Based on these memories, robot skills are developed and encoded as event-driven, reusable statecharts.
In this talk, we will show how motion control skills can be realized by incorporating memory structures. These include visual-servoing-based grasping and pick-and-place tasks. The execution in the ArmarX simulation environment will be presented, together with an evaluation and a discussion of its limitations. Further, we will present ongoing work on ControlX, the low-level controller of ArmarX, which aims to reduce the effort of porting control strategies.
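As a hypothetical sketch of the working-memory idea (an illustration of the concept, not the ArmarX API), entities in memory can carry a confidence value, and repeated observations of the same entity are fused by confidence weighting:

import time

class WorkingMemory:
    def __init__(self):
        # entity name -> (estimated value, confidence, last update time)
        self.entities = {}

    def observe(self, name, value, confidence):
        """Fuse a new observation with the stored estimate, weighting
        both by their confidences."""
        if name in self.entities:
            old_value, old_conf, _ = self.entities[name]
            total = old_conf + confidence
            value = (old_conf * old_value + confidence * value) / total
            confidence = min(1.0, total)
        self.entities[name] = (value, confidence, time.time())

A real implementation would also age out stale entities and keep spatial uncertainty, along the lines of the probabilistic concepts the abstract mentions.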
 
2. Siyuan Feng, Chris Atkeson:
Title: "Controller and State Estimator Design and Implementation for the Atlas Robot"
Abstract:
In this talk, I will present the controller and state estimator we have implemented on the Atlas robot. For the first half of the talk, I will present an overview of the system in the context of the DARPA Robotics Challenge. For the second half, I will focus on our recent work on dynamic walking and improvements in state estimation by incorporating multiple IMUs.
Bio:
Siyuan Feng is a Ph.D. student at the Robotics Institute at Carnegie Mellon University. He received an M.S. degree in Robotics and a B.S. degree in Computer Science from Carnegie Mellon University in 2014 and 2010, respectively.
 
3. Dimitrios Kanoulas
Title: "Towards Rough Terrain Perception for Localization and Mapping"
Abstract:
One of the advantages of legged locomotion over other forms of locomotion is the ability to use sparse foothold affordances, especially when dealing with rough outdoor environments. Recent advancements in robotics have enabled bipeds to walk mainly on flat surfaces, leaving the problem of locomotion on unstructured rough (e.g. rocky) terrain open. We present a 3D perceptual system for identifying contact areas for bipedal locomotion in rough terrain, by modeling, localizing, and mapping sparse local surfaces using 3D curved patches. Range and IMU sensing are used to automatically find foot-sized patches in the environment using a bio-inspired approach. This set of potential contact patches can later be refined to actual contacts in a higher-level selection and planning process. Using the robot's kinematic chain, we also introduce the concept of assembling these patches into a map, where patches locally approximate both the environment surfaces and key contact surfaces on the robot, for reasoning about contact. We will then describe a way to integrate a dense volumetric range data fusion system that keeps the detected patches mapped around the robot in real time. Finally, we present a real-time experiment in which a mini-biped robot (RPBP) uses the introduced algorithm for foot placement on rocks.
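As a simplified stand-in for the patch-fitting step (a least-squares plane rather than the curved patches of the talk; all names are assumptions), one can estimate a foot-sized patch from a neighborhood of range points via the eigendecomposition of their covariance:

import numpy as np

def fit_planar_patch(points):
    """points: (N, 3) array of 3D points around a candidate foothold.
    Returns (centroid, unit normal, RMS flatness residual)."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    normal = eigvecs[:, 0]                  # direction of least variance
    residual = np.sqrt(eigvals[0])          # RMS point-to-plane distance
    return centroid, normal, residual

A small residual and a near-vertical normal would mark the patch as a plausible foothold; curved patch models additionally capture local curvature.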
Bio:
Dimitrios Kanoulas is a postdoctoral researcher at the Istituto Italiano di Tecnologia (IIT) in Genoa, working on perception for robotics, in particular on detecting foothold and handhold affordances in uncertain environments. He was the perception team leader of the WALK-MAN team at the 2015 DARPA Robotics Challenge. He graduated from Northeastern University in Boston in 2014, advised by Prof. Marsette Vona.
 
4. Marilena Vendittelli
Title: "Data fusion and sensing for humanoids locomotion and physical interaction"
Abstract:
This talk will present data fusion and sensing techniques applied to humanoid robots to achieve tasks involving locomotion and physical interaction. First, we will focus on the EKF-based integration of kinematic, inertial, and visual information for odometric localization. Then, we will present a vision-based controller, with formal convergence properties, for the navigation of humanoids in indoor environments made of networks of corridors connected through curves and junctions.
Finally, we will report on recent results on force reconstruction during the physical interaction of humanoids with humans or the environment. The presented sensing techniques are based on the perception of the equilibrium perturbation or on measurements of joint positions and motor currents.
In illustrating the experimental results, we will briefly discuss the challenges that arise when applying the developed methods to the small humanoid NAO.
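A minimal EKF sketch of this kind of integration, reduced to a scalar toy state for brevity (the noise levels and measurement model are illustrative assumptions, not the talk's implementation):

def ekf_step(x, P, u, z=None, Q=0.01, R=0.05):
    """x, P: state estimate and its variance; u: odometric displacement
    from kinematic/inertial data; z: optional visual position fix."""
    # Predict with odometry; process noise Q accounts for slip and drift.
    x, P = x + u, P + Q
    # Correct with the visual measurement when one is available.
    if z is not None:
        K = P / (P + R)       # Kalman gain (identity measurement model)
        x = x + K * (z - x)   # innovation-weighted correction
        P = (1.0 - K) * P
    return x, P

The same predict/update structure carries over to the full pose case, with Jacobians replacing the scalar gains.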
Bio:
Marilena Vendittelli received the Ph.D. in Systems Engineering in 1997 from Sapienza University of Rome. She held a two-year post-doc position at LAAS-CNRS in Toulouse (France), funded through a Marie Curie fellowship.
Since 1998 she has been with the Department of Computer, Control, and Management Engineering of Sapienza University of Rome, where she is a member of the Robotics Laboratory. From January 2010 to December 2013 she was an Associate Editor for the IEEE Transactions on Robotics.
 
5. Ludovic Righetti 
Title: "Momentum estimation and planning for legged robots."
Abstract:
Recently there has been growing interest in optimization-based inverse dynamics approaches for the control of legged robots. In particular, this framework has been successfully used to regulate the robot's (linear plus angular) momentum, which naturally relates interaction forces to robot motion, in order to create more stable and dynamic behaviors. However, this approach is limited in at least two respects. First, the planning of momentum over multiple contact sequences, as well as the control of the interaction forces required to generate it, is generally reduced to linear momentum planning (e.g. preview control with a LIPM model) where the angular momentum is regulated to zero. While this approach works well on flat ground, it does not generalize to more complicated contact scenarios or to more dynamic motions involving non-zero angular momentum. Second, there is still a large gap between simulation and real robot performance, and one reason lies in the inaccurate estimation of important quantities that cannot be measured directly, such as the robot's pose in space, its overall momentum, and possible (time-varying) biases coming from noisy indirect sensor measurements, inaccurate nonlinear process models, and external disturbances. An inaccurate and noisy estimate of these quantities severely limits the control bandwidth available on a real robot and therefore its performance. In this presentation, I will discuss our recent work addressing these two limitations. First, I will discuss our recent results on trajectory optimization for momentum during multi-contact non-coplanar tasks. Then I will highlight our theoretical and experimental results on the estimation of a legged robot's pose and overall momentum through the fusion of force, inertial, and position measurements. Finally, I will show how these results, together with our work on optimization-based inverse dynamics, provide a consistent planning, control, and estimation framework for generating whole-body behaviors on real robots.
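For reference, the centroidal momentum dynamics underlying this planning problem follow from standard Newton-Euler relations (textbook material, not the talk's specific method): the contact forces determine the rates of linear and angular momentum.

import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])

def momentum_rates(mass, com, contact_points, contact_forces):
    """com: (3,) center of mass; contact_points, contact_forces: lists of
    (3,) arrays, one per active contact.
    Returns (linear momentum rate, angular momentum rate about the CoM)."""
    l_dot = mass * GRAVITY + sum(contact_forces)
    k_dot = sum(np.cross(p - com, f)
                for p, f in zip(contact_points, contact_forces))
    return l_dot, k_dot

Planning over multiple non-coplanar contacts amounts to choosing forces and contact locations so that these momentum rates realize the desired motion.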
Bio:
Ludovic Righetti has led the Movement Generation and Control group at the Max Planck Institute for Intelligent Systems (Tübingen, Germany) since September 2012 and has held a W2 Research Group Leader position since October 2014. Before that, he was a postdoctoral fellow at the Computational Learning and Motor Control Lab (University of Southern California) between March 2009 and August 2012. He studied at the Ecole Polytechnique Fédérale de Lausanne, where he received a diploma in Computer Science in 2004 and a Doctorate in Science in 2008. His doctoral thesis was awarded the 2010 Georges Giralt PhD Award, given by the European Robotics Research Network (EURON) for the best robotics thesis in Europe. His research focuses on the generation and control of movements for autonomous robots, with a special emphasis on legged locomotion and manipulation.
 
6. Luis Sentis: TBD
 
7. Jaeheung Park:
Title: "Active sensing strategies for contact using constraints between the robot and environment"
Abstract:
When robots operate in complex human environments, they often need to deal with contact. The contact between the robot and the environment inevitably introduces uncertainties, because contact sensing technology is relatively imprecise and the environment model contains errors. How to perform tasks in contact situations under such uncertainties is therefore an important issue. In this talk, first, the basic concept of active sensing is explained through a simple example. Then, we demonstrate the use of the "see and touch" concept in a peg-in-hole task and a box-packing task using a dual-arm robot. In the peg-in-hole task, the peg and hole are located using a vision sensor, but the positions of the objects are not precise enough for the task once they are grasped by the robot's hands. Therefore, we use active motions to locate the contact position or to settle the robot into a desired state of the task. Finally, the active sensing strategy is applied to locate the contact position of an unknown object on the ground during walking. This can be especially effective when the lower body occludes the vision sensors. The experimental results demonstrate its performance and its potential for other applications.
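A hypothetical guarded-move sketch of the active-sensing idea (the robot interface here is an assumption for illustration, not the speakers' system): advance slowly along a direction until a force threshold signals contact, then report where contact occurred.

def locate_contact(robot, direction, speed=0.01, force_threshold=5.0):
    """Advance the end effector along `direction` until the sensed force
    exceeds the threshold; return the position at first contact, or None
    if the workspace limit is reached without touching anything."""
    while robot.in_workspace():
        robot.move(direction, speed)  # small compliant step
        if robot.sensed_force() > force_threshold:
            return robot.end_effector_position()
    return None

The detected contact position can then update the estimate of the object or terrain pose, which is the essence of active sensing under contact uncertainty.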
Bio:
Jaeheung Park is an associate professor at Seoul National University, Korea. He was the team leader of TEAM SNU at the DRC Finals. He received the B.S. and M.S. degrees from Seoul National University, Korea, in 1995 and 1999, respectively, and the Ph.D. degree from Stanford University, U.S., in 2006. From 2006 to 2009, he was a post-doctoral researcher and later a research associate at the Stanford Artificial Intelligence Laboratory. From 2007 to 2008, he also worked part-time at Hansen Medical Inc., a medical robotics company in the U.S. Since 2009, he has been a professor in the Department of Transdisciplinary Studies at Seoul National University, Korea. His research interests lie in the areas of robot-environment interaction, contact force control, robust haptic teleoperation, multicontact control, whole-body dynamic control, biomechanics, and medical robotics.
 