We propose an imitation learning methodology that allows robots to seamlessly retrieve and pass objects to and from human users. Instead of hand-coding interaction parameters, we extract relevant information such as joint correlations and spatial relationships from a single task demonstration of two humans. At the center of our approach is an interaction model that enables a robot to generalize an observed demonstration spatially and temporally to new situations. To this end, we propose a data-driven method for generating interaction meshes that link both interaction partners to the manipulated object. The feasibility of the approach is evaluated in a within-subjects user study, which shows that human-human task demonstration can lead to more natural and intuitive interactions with the robot. Keywords: Human-human demonstration · Human-robot interaction · Handover · Interaction mesh. This is one of several papers published in Autonomous Robots as part of the Special Issue on Learning for Human-Robot Collaboration.
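The interaction-mesh idea can be made concrete with a small numerical sketch: vertices taken from both partners and the manipulated object are linked in a mesh, and the mesh's Laplacian coordinates encode their spatial relationships so that they can be preserved when the scene changes. The uniform-weight Laplacian, the toy vertex set and the function name below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def laplacian_coordinates(points, neighbors):
    """Uniform-weight Laplacian coordinates: each vertex minus the centroid
    of its mesh neighbors. Encodes the spatial relations between the two
    partners' joints and the manipulated object."""
    lap = np.zeros_like(points)
    for i, nbrs in neighbors.items():
        lap[i] = points[i] - points[nbrs].mean(axis=0)
    return lap

# Toy example: three "vertices" -- a human wrist, a robot wrist and the object.
points = np.array([[0.0, 0.0, 1.0],   # human wrist
                   [0.6, 0.0, 1.0],   # robot wrist
                   [0.3, 0.0, 1.0]])  # object
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}

demo_lap = laplacian_coordinates(points, neighbors)

# At runtime, the human wrist moves; the robot-side vertices would then be
# optimized so that the new Laplacian coordinates stay close to demo_lap,
# preserving the demonstrated spatial relationship.
points_new = points.copy()
points_new[0] = [0.1, 0.2, 1.1]
print(laplacian_coordinates(points_new, neighbors) - demo_lap)
```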
State Machine for Arbitrary Robots for Exploration and Inspection Tasks
In this paper, a novel state machine for mobile robots is described that enables direct use for exploration and inspection tasks. It offers a graphical user interface (GUI) to supervise the process and to issue commands if necessary. The state machine was developed for the open-source framework Robot Operating System (ROS) and can interface arbitrary algorithms for navigation and exploration. Interfaces to the commonly used ROS navigation stack and the explore_lite package are already included and can be utilized. In addition, routines for mapping and inspection can be added freely to adapt to the area of application. The state machine features a teleoperation mode, to which it switches as soon as a respective command is issued. It also implements a software emergency stop and multiplexes all movement commands to the motor controller. To show the state machine's capabilities, several simulations and real-world experiments in which it was used are described.
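The teleoperation override, software emergency stop and command multiplexing described above can be sketched as a small, framework-agnostic state machine. The state names, priority scheme and the (linear, angular) velocity tuple are illustrative assumptions rather than the package's actual interface.

```python
from enum import Enum, auto

class Mode(Enum):
    EXPLORATION = auto()
    TELEOPERATION = auto()
    EMERGENCY_STOP = auto()

class CommandMux:
    """Multiplexes motion commands to the motor controller: an emergency stop
    overrides everything, teleoperation overrides autonomous exploration,
    otherwise autonomous commands pass through."""

    def __init__(self):
        self.mode = Mode.EXPLORATION

    def on_teleop_command(self, cmd):
        # Switch to teleoperation as soon as a manual command arrives.
        if self.mode is not Mode.EMERGENCY_STOP:
            self.mode = Mode.TELEOPERATION
        return self._forward(cmd, source=Mode.TELEOPERATION)

    def on_autonomous_command(self, cmd):
        return self._forward(cmd, source=Mode.EXPLORATION)

    def emergency_stop(self):
        self.mode = Mode.EMERGENCY_STOP
        return (0.0, 0.0)  # zero linear and angular velocity

    def _forward(self, cmd, source):
        if self.mode is Mode.EMERGENCY_STOP:
            return (0.0, 0.0)
        if self.mode is Mode.TELEOPERATION and source is not Mode.TELEOPERATION:
            return None  # autonomous commands are suppressed during teleoperation
        return cmd

mux = CommandMux()
print(mux.on_autonomous_command((0.5, 0.1)))  # forwarded
mux.on_teleop_command((0.2, 0.0))             # switches to teleoperation
print(mux.on_autonomous_command((0.5, 0.1)))  # suppressed -> None
```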
A system for learning continuous human-robot interactions from human-human demonstrations
We present a data-driven imitation learning system for learning human-robot interactions from human-human demonstrations. During training, the movements of two interaction partners are recorded through motion capture and an interaction model is learned. At runtime, the interaction model is used to continuously adapt the robot's motion, both spatially and temporally, to the movements of the human interaction partner. We show the effectiveness of the approach on complex, sequential tasks by presenting two applications involving collaborative human-robot assembly. Experiments with varied object hand-over positions and task execution speeds confirm the capability for spatio-temporal adaptation of the demonstrated behavior to the current situation.
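To illustrate what spatio-temporal adaptation means in the simplest possible terms, the sketch below estimates the current phase of the interaction by a nearest-neighbor lookup in the demonstrated human trajectory and shifts the corresponding demonstrated robot pose accordingly. This is a deliberately simplified stand-in for the learned interaction model; the trajectories, the function name and the offset rule are assumptions made purely for illustration.

```python
import numpy as np

def adapt_robot_pose(human_pos, demo_human, demo_robot):
    """Nearest-neighbor phase estimate: find the demonstration frame whose
    recorded human position is closest to the currently observed one, and
    return the robot pose recorded at that frame. This lets the robot speed
    up or slow down with the human (temporal adaptation); the positional
    offset provides a crude form of spatial adaptation."""
    phase = np.argmin(np.linalg.norm(demo_human - human_pos, axis=1))
    offset = human_pos - demo_human[phase]
    return demo_robot[phase] + offset, phase

# Toy demonstration: straight-line hand-over recorded over 100 frames.
t = np.linspace(0.0, 1.0, 100)[:, None]
demo_human = np.hstack([t, np.zeros_like(t), np.full_like(t, 1.0)])
demo_robot = np.hstack([1.0 - t, np.zeros_like(t), np.full_like(t, 1.0)])

pose, phase = adapt_robot_pose(np.array([0.42, 0.05, 1.0]), demo_human, demo_robot)
print(phase, pose)
```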
This survey, compiled from our recent research reports, describes research accomplishments in interactive design and assembly with 3D computer graphics environments, carried out in the AI & Computer Graphics Lab at the University of Bielefeld. As a means of communicating with such environments, agent techniques and dynamic knowledge representations are used to translate qualitative verbal instructions into quantitative scene changes. A key idea is to exploit situated 'perceptive' information by inspecting the computer graphics scene models.
In this paper we present XSAMPL3D, a novel language for the high-level representation of actions performed on objects by (virtual) humans. XSAMPL3D was designed to serve as the action representation language in an imitation-based approach to character animation: First, a human demonstrates a sequence of object manipulations in an immersive Virtual Reality (VR) environment. From this demonstration, an XSAMPL3D description is automatically derived that represents the actions in terms of high-level action types and involved objects. The XSAMPL3D action description can then be used for the synthesis of animations where virtual humans of different body sizes and proportions reproduce the demonstrated action. Actions are encoded in a compact and human-readable XML format. Thus, XSAMPL3D descriptions are also amenable to manual authoring, e.g. for rapid prototyping of animations when no immersive VR environment is at the animator's disposal. However, when XSAMPL3D descriptions are derived from VR interactions, they can accommodate many details of the demonstrated action, such as motion trajectories, hand shapes and other hand-object relations during grasping. Such detail would be hard to specify with manual motion authoring techniques only. Through the inclusion of language features that allow the representation of all relevant aspects of demonstrated object manipulations, XSAMPL3D is a suitable action representation language for the imitation-based approach to character animation. First presented at the 6th Workshop 'Virtuelle und Erweiterte Realität', GI-Fachgruppe VR/AR 2009; extended and revised for JVRB.
Underground mines are a dangerous working environment and, therefore, robots could help put fewer humans at risk. Traditional robots, sensors, and software often do not work reliably underground due to the harsh environment. This paper analyzes requirements and presents a robot design capable of navigating autonomously underground and manipulating objects with a robotic arm. The robot's base is a robust four-wheeled platform powered by electric motors and able to withstand the harsh environment. It is equipped with color and depth cameras, lighting, laser scanners, an inertial measurement unit, and a robotic arm. We conducted two experiments testing mapping and autonomous navigation. Mapping a 75-meter-long route including a loop closure results in a map that qualitatively matches the original map to a good extent. Testing autonomous driving on a previously created map of a second, straight, 150-meter-long route was also successful. However, without loop closure, rotation errors cause apparent deviations in the created map. These first experiments showed the robot's operability underground.
Numerical Assessment of the Immersion Process of a Ceramic Foam Filter in a Steel Melt
Advanced Engineering Materials, Feb 1, 2022
Herein, the immersion process of a ceramic foam filter in a steel melt is investigated by means of numerical simulations, which are mainly based on the volume-of-fluid approach. The geometry of the used filters is modeled using an artificially generated beam model, which is convolved with a Gaussian kernel. The modeling approach enables the generation of filter geometries with, e.g., pore density and strut thickness similar to real ceramic foam filters. The main scope of the article is to show the effect of the immersion velocity of the filter on the formation of gas bubbles inside the pore cavities of the ceramic filter. Moreover, the influence of the contact angle on the volume fraction of gas bubbles that remain in the filter is investigated. For better understanding, the numerical results are illustrated using 3D visualization and virtual reality.
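The geometry-generation idea (a beam skeleton convolved with a Gaussian kernel) can be sketched as follows; the voxel grid size, the toy axis-aligned beams, the kernel width and the threshold are assumed values chosen purely for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Rasterize an artificial beam (strut) skeleton onto a voxel grid, convolve
# it with a Gaussian kernel, and threshold the result to obtain struts of
# finite thickness -- the basic recipe for an artificial foam-filter geometry.
n = 64
grid = np.zeros((n, n, n))

# A few straight "beams" along the grid axes as a stand-in skeleton.
grid[n // 2, n // 2, :] = 1.0
grid[n // 2, :, n // 2] = 1.0
grid[:, n // 2, n // 2] = 1.0

smoothed = gaussian_filter(grid, sigma=2.0)   # Gaussian convolution
solid = smoothed > 0.05                       # threshold -> strut voxels

print("solid volume fraction:", solid.mean())
```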
In this paper, a novel approach is introduced which utilizes a Rapidly-exploring Random Graph to improve sampling-based autonomous exploration of unknown environments with unmanned ground vehicles compared to the current state of the art. Its intended usage is in rescue scenarios in large indoor and underground environments with limited teleoperation ability. Local and global sampling are used to improve the exploration efficiency for large environments. Nodes are selected as the next exploration goal based on a gain-cost ratio derived from the assumed 3D map coverage at the particular node and the distance to it. The proposed approach features a continuously built graph with a decoupled calculation of node gains using a computationally efficient ray-tracing method. The Next-Best View is evaluated while the robot is pursuing a goal, which eliminates the need to wait for the gain calculation after reaching the previous goal and significantly speeds up the exploration. Furthermore, a grid map is used to determine the traversability between the nodes in the graph while also providing a global plan for navigating towards selected goals. Simulations compare the proposed approach to state-of-the-art exploration algorithms and demonstrate its superior performance.
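A minimal sketch of goal selection by gain-cost ratio is shown below. The exponential distance weighting, the node dictionary layout and the constant 0.25 are assumptions for illustration; the paper's exact scoring function and data structures may differ.

```python
import math

def best_exploration_goal(nodes):
    """Pick the graph node with the best gain-cost score. 'gain' stands for
    the expected newly observed 3D map volume at the node (assumed here to be
    precomputed by ray tracing), 'path_length' for the cost of reaching it."""
    def score(node):
        return node["gain"] * math.exp(-0.25 * node["path_length"])
    return max(nodes, key=score)

nodes = [
    {"id": 1, "gain": 12.0, "path_length": 4.0},
    {"id": 2, "gain": 30.0, "path_length": 15.0},
    {"id": 3, "gain": 8.0,  "path_length": 1.5},
]
print(best_exploration_goal(nodes)["id"])
```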
While model-driven approaches are nowadays commonplace in the development of many kinds of software, 3D applications are often still developed in an ad-hoc and code-centric manner. This state of affairs is somewhat surprising, as there are obvious benefits to a more structured 3D development process. E.g., model-based techniques could help to ensure the mutual consistency of the code bases produced by the heterogeneous development groups, i.e. 3D designers and programmers. Further, 3D applications are often developed for multiple platforms in different programming environments, for which some support for synchronization during development iterations is desirable. This paper presents a model-driven approach for the structured development of multi-platform 3D applications based on round-trip engineering. Abstract models of the application are specified in SSIML, a DSL tailored for the development of 3D applications. In a forward phase, consistent 3D scene descriptions and program code are generated from the SSIML model. In a reverse phase, code refinements are abstracted and synchronized to result in an updated SSIML model. And so on in subsequent iterations. In particular, our approach supports the synchronization of multiple target platforms, such as WebGL-enabled web applications with JavaScript and immersive Virtual Reality software using VRML and C++.
Combining SURF and SIFT for Challenging Indoor Localization using a Feature Cloud
Indoor localization for smartphone users enables applications such as indoor navigation or augmented information services. Indoor localization can be achieved by using camera images to resolve the position based on a precomputed training set of images; this technique is widely known as image-based localization. In particular, we create a feature cloud from a Structure-from-Motion (SfM) approach as the training set. At runtime, feature-based matching identifies similarities between a test image and the trained set in order to solve the perspective-n-point (PnP) problem and compute the camera position. Since indoor environments are challenging regarding wall structure, light conditions and glass elements, we combine SIFT and SURF image features to exploit the advantages of both techniques and, thus, provide a highly robust localization technology. We can even show that our novel approach can be used for real-time image-based localization of a smartphone using remote processing.
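The core pipeline (match query-image features against an SfM feature cloud, then solve PnP) can be sketched with OpenCV as follows. Only the SIFT branch is shown; SURF would be used analogously via the opencv-contrib xfeatures2d module, with both correspondence sets concatenated before the PnP step. The function name, the layout of the feature cloud and the ratio-test threshold are assumptions for illustration.

```python
import cv2
import numpy as np

def localize(image_gray, cloud_desc, cloud_points3d, K):
    """Match a query image against a precomputed feature cloud
    (descriptors + triangulated 3D points from SfM) and solve PnP
    to recover the camera position in world coordinates."""
    sift = cv2.SIFT_create()
    kp, desc = sift.detectAndCompute(image_gray, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(desc, cloud_desc, k=2)

    # Lowe's ratio test to keep only distinctive matches.
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]
    if len(good) < 4:
        return None  # PnP needs at least four 2D-3D correspondences

    img_pts = np.float32([kp[m.queryIdx].pt for m in good])
    obj_pts = np.float32([cloud_points3d[m.trainIdx] for m in good])

    ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj_pts, img_pts, K, None)
    if not ok:
        return None

    # Camera center in world coordinates: C = -R^T * t
    R, _ = cv2.Rodrigues(rvec)
    return (-R.T @ tvec).ravel()
```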