Papers by Michael Zillich
Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems (AAMAS '10), May 10-14, 2010
Dora the Explorer is a mobile robot with a sense of curiosity and a drive to explore its world. Given an incomplete tour of an indoor environment, Dora is driven by internal motivations to probe the gaps in her spatial knowledge. She actively explores regions of space which she hasn't previously visited but which she expects will lead her to further unexplored space. She will also attempt to determine the categories of rooms through active visual search for functionally important objects, and through ontology-driven inference on the results of this search.
Computer Vision and Image Understanding, 2009
A Pilot Study on Eye-tracking in 3D Search Tasks
In order to estimate multiple structures without prior knowledge of the noise scale, this paper utilizes the Jensen-Shannon Divergence (JSD), a similarity measure, to represent the relations between pairwise data conceptually. This conceptual representation encompasses the geometrical relations between pairwise data as well as information about whether pairwise data coexist in one model's inlier set. Tests on datasets comprised of noisy inliers and a large percentage of outliers demonstrate that the proposed solution can efficiently estimate multiple models without prior information. Superior performance in both synthetic experiments and pragmatic tests is also demonstrated to validate the proposed approach.
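The abstract's core tool, the Jensen-Shannon Divergence, is a symmetric similarity measure between distributions, bounded in [0, 1] when base-2 logarithms are used. A minimal sketch of JSD itself (the function names and the discrete-distribution setting are illustrative, not taken from the paper):

```python
from math import log2

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) for discrete distributions."""
    return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jsd(p, q):
    """Jensen-Shannon divergence: the mean of the KL divergences of p and q
    to their average distribution m. Symmetric, and in [0, 1] with log base 2."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

# Identical distributions have zero divergence; disjoint ones reach 1.
print(jsd([0.5, 0.5], [0.5, 0.5]))  # 0.0
print(jsd([1.0, 0.0], [0.0, 1.0]))  # 1.0
```

Unlike plain KL divergence, JSD is symmetric and always finite, which is what makes it usable as a pairwise similarity between data relations as described above.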
The EU-funded ROBVISION project develops a vision system that finds and measures the location of 3D structures with respect to a CAD model. The main objective is to build an integrated vision system capable of providing adequate information to navigate a walking robot through a ship structure. The key aspect is the integration of a CAD model with visual measurement and

Semantic visual perception for knowledge acquisition plays an important role in human cognition, as well as in many tasks of cognitive robots. In this paper, we present a vision system designed for indoor mobile robotic systems. Inspired by recent studies on holistic scene understanding, we generate spatial information about the scene by considering plane estimation and stereo line detection coherently within a unified probabilistic framework, and indicate how the resultant spatial information can be used to facilitate more accurate visual perception and reasoning about visual elements in the scene. We also demonstrate how the proposed system facilitates, and increases the robustness of, two robotics applications: visual attention and continuous learning. Experiments demonstrate that our system provides a plausible representation of visual objects as well as an accurate spatial layout of the scene.
Accurate 3D plane estimation in complex environments is an important functionality in many robotics applications such as navigation, manipulation, and human-machine interaction. Following recent research in coherent geometrical contextual reasoning and object recognition, this paper proposes a joint probabilistic model which uses the results of wireframe feature detection to refine supporting-plane estimation. By maximizing the probability of the joint model, our method can simultaneously estimate multiple 3D surfaces. Experiments using both synthetic data and an indoor mobile robot scenario demonstrate the benefits of our coherent model approach.
Fast Tracking of Ellipses Using Edge-Projected Integration of Cues
International Conference on Pattern Recognition, 2000
Commercial applications of ellipse tracking require robustness and real-time capability. The method presented tracks ellipses at field rate using a Pentium PC. Robustness is obtained by integrating gradient and intensity values for the detection of contour edges and by using a RANSAC-like method to find the most likely ellipse. The method adapts to the appearance along the ellipse circumference and
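The RANSAC-like hypothesize-and-verify loop mentioned in the abstract can be sketched as follows. For brevity this fits circles (three points determine the model) rather than full ellipses, and all names, tolerances, and iteration counts are illustrative, not the paper's:

```python
import random
from math import hypot

def circle_from_3pts(p1, p2, p3):
    """Circumcircle of three points; returns (cx, cy, r), or None if collinear."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy, hypot(ax - ux, ay - uy)

def ransac_circle(points, iterations=200, tol=2.0, seed=0):
    """Sample minimal subsets, fit a hypothesis from each, and keep the
    hypothesis supported by the most edge points (inliers)."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iterations):
        model = circle_from_3pts(*rng.sample(points, 3))
        if model is None:
            continue
        cx, cy, r = model
        inliers = sum(abs(hypot(x - cx, y - cy) - r) < tol for x, y in points)
        if inliers > best_inliers:
            best, best_inliers = model, inliers
    return best, best_inliers

# Points sampled exactly on a circle centred at (10, 5) with radius 20:
from math import cos, sin, pi
pts = [(10 + 20 * cos(2 * pi * i / 30), 5 + 20 * sin(2 * pi * i / 30))
       for i in range(30)]
(cx, cy, r), n_inliers = ransac_circle(pts)
```

An ellipse needs five points for a minimal sample instead of three, but the hypothesize-and-verify structure is the same.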
Knowing your limits - self-evaluation and prediction in object recognition
2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2011
Allowing a robot to acquire 3D object models autonomously not only requires robust feature detection and learning methods but also mechanisms for guiding learning and assessing learning progress. In this paper we present probabilistic measures for observed detection success, predicted detection success and the completeness of learned models, where learning is incremental and online. This allows the robot to decide
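One simple way to turn detection counts into the kind of probabilistic success measure described above is a Beta-Bernoulli posterior. This is a sketch under that assumption, not the paper's actual formulation; the threshold and function names are illustrative:

```python
def detection_success(successes, trials, alpha=1.0, beta=1.0):
    """Posterior mean of the detection rate under a Beta(alpha, beta) prior.
    With alpha = beta = 1 (uniform prior) this is Laplace's rule of succession."""
    return (successes + alpha) / (trials + alpha + beta)

def should_keep_learning(successes, trials, target=0.9):
    """Decide whether to gather more views: keep learning until the
    estimated detection rate reaches the target."""
    return detection_success(successes, trials) < target

print(detection_success(8, 10))  # 0.75 with a uniform prior
```

The prior keeps the estimate sensible after very few observations (0 successes in 1 trial gives 1/3, not 0), which matters when learning is incremental and online.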

Advances in real-time object tracking
Journal of Real-Time Image Processing, 2013
The huge amount of literature on real-time object tracking continuously reports good results with respect to accuracy and robustness. However, when it comes to applying these approaches to real-world problems, often no clear statement about the tracking situation can be made. This paper addresses this issue and relies on three novel extensions to Monte Carlo particle filtering. The first, confidence-dependent variation, together with the second, iterative particle filtering, leads to faster convergence and more accurate pose estimation. The third, fixed particle poses, removes jitter and ensures convergence. These extensions significantly increase robustness and accuracy, and further provide a basis for an algorithm we found essential for tracking systems operating in the real world: tracking state detection. Relying on the extensions above, it reports qualitative states of tracking as follows. Convergence indicates whether the pose has been found. Quality gives a statement about the confidence of the currently tracked pose. Loss detects when the algorithm fails. Occlusion determines the degree of occlusion when only parts of the object are visible. Building on tracking state detection, a model completeness scheme is proposed as a measure of which views of the object have already been learned and which areas require further inspection. To the best of our knowledge, this is the first tracking system that explicitly addresses the issue of estimating the tracking state. Our open-source framework is available online, serving as an easy-access interface for use in practice.
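The qualitative tracking states described above could be reported, in a much-simplified form, by thresholding per-frame tracker statistics. The thresholds, state names, and inputs here are illustrative assumptions, not the paper's method (which derives them from particle-filter confidences):

```python
def tracking_state(confidence, visible_fraction,
                   conf_converged=0.7, conf_lost=0.2, occ_threshold=0.8):
    """Map per-frame tracker statistics to a qualitative state.

    confidence       -- pose confidence in [0, 1] (e.g. from the particle filter)
    visible_fraction -- fraction of the object model currently visible
    """
    if confidence < conf_lost:
        return "lost"          # tracker has failed; trigger re-detection
    if visible_fraction < occ_threshold:
        return "occluded"      # pose still plausible but partly hidden
    if confidence >= conf_converged:
        return "converged"     # pose found with high confidence
    return "converging"        # still refining the pose estimate
```

Checking for loss before occlusion matters: a very low confidence should be reported as failure even when visibility is also low.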
Lessons Learnt from Scenario-Based Integration
Cognitive Systems Monographs, 2010
From the very start the CoSy project set out to demonstrate and evaluate its progress in implemented, integrated systems. Chapters 9 & 10 set out both the two scenarios we chose to integrate around, and the contributions we made by studying problems following an ...
3D piecewise planar object model for robotics manipulation
2011 IEEE International Conference on Robotics and Automation, 2011
Man-made environments are abundant with planar surfaces which have attractive properties for robotics manipulation tasks and are a prerequisite for a variety of vision tasks. This work presents automatic on-line 3D object model acquisition, assuming a robot that manipulates the object. Objects are represented with piecewise planar surfaces in a spatio-temporal graph. Planes once detected as homographies are
Anytimeness avoids parameters in detecting closed convex polygons
2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 2008
Many perceptual grouping algorithms depend on parameters one way or another. It is always difficult to set these parameters appropriately for a wide range of input images, and parameters tend to be tuned to a small set of test cases. In particular, certain thresholds often seem unavoidable to limit search spaces in order to obtain reasonable runtime complexity. Furthermore, early pruning
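The anytime idea alluded to in the title, processing hypotheses best-first until a time budget expires instead of pruning them with hard thresholds, can be sketched as follows (illustrative only, not the paper's grouping algorithm):

```python
import time

def anytime_process(hypotheses, score, budget_s):
    """Best-first anytime loop: rank hypotheses by score and process them
    until the time budget runs out. No threshold parameter decides what is
    discarded; a larger budget simply yields more processed hypotheses."""
    accepted = []
    deadline = time.monotonic() + budget_s
    for h in sorted(hypotheses, key=score, reverse=True):
        if time.monotonic() >= deadline:
            break
        accepted.append(h)
    return accepted
```

The key property is graceful degradation: interrupting early returns the best hypotheses found so far, rather than an arbitrarily thresholded subset.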
Self-monitoring to improve robustness of 3D object tracking for robotics
2011 IEEE International Conference on Robotics and Biomimetics, 2011
In robotics, object tracking is needed to steer towards objects, check whether grasping is successful, or investigate objects more closely by poking or handling them. While many 3D object tracking approaches have been proposed in the past, real-world settings pose challenges such as automatically detecting tracking failure, real-time processing, and robustness to occlusion, illumination, and viewpoint changes. This
The PlayMate System
Cognitive Systems Monographs, 2010
Research in CoSy was scenario driven. Two scenarios were created, the PlayMate and the Explorer. One of the integration goals of the project was to build integrated systems that addressed the tasks in these two scenarios. This chapter concerns the integrated system for the PlayMate scenario.
Measuring Scene Complexity to Adapt Feature Selection of Model-Based Object Tracking
Lecture Notes in Computer Science, 2003
In vision-based robotic systems the robust tracking of scene features is a key element of grasping, navigation and interpretation tasks. The stability of feature initialisation and tracking is strongly influenced by ambient conditions, like lighting and background, and their changes over time. This work presents how robustness can be increased, especially in complex scenes, by reacting to a measurement of

Dynamic Aspects of Visual Servoing and a Framework for Real-Time 3D Vision for Robotics
Lecture Notes in Computer Science, 2002
Vision-based control needs fast and robust tracking. The conditions for fast tracking are derived from studying the dynamics of the visual servoing loop. The result indicates how to build the vision system to obtain high dynamic tracking performance. Maximum tracking velocity is obtained when running image acquisition and processing in parallel and using appropriately sized tracking windows. To achieve the second criterion, robust tracking, a model-based tracking approach is enhanced with Edge Projected Integration of Cues (EPIC). EPIC uses object knowledge to select the correct feature in real time. The object pose is calculated from the features at every tracking cycle. The components of the tracking system have been implemented in a framework called Vision for Robotics (V4R). V4R has been used within the EU-funded project RobVision to navigate a robot into a ship section using model data from the CAD design. The experiments show the tracking performance in different parts of the ship mock-up.
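The window-size and pipelining trade-off above can be illustrated with a back-of-the-envelope bound. The simple model (a feature must stay inside half the tracking window within one cycle) and the numbers are assumptions for illustration, not the paper's exact derivation:

```python
def max_tracking_velocity(window_px, acquisition_s, processing_s, parallel=True):
    """Rough upper bound on image-plane target motion (pixels/s) that keeps
    the feature inside its tracking window between pose updates.

    With pipelined (parallel) image acquisition and processing, the cycle
    time is the slower of the two stages; run sequentially, it is their sum.
    """
    if parallel:
        cycle = max(acquisition_s, processing_s)
    else:
        cycle = acquisition_s + processing_s
    return (window_px / 2) / cycle

# Pipelining nearly doubles the admissible velocity when the stages are balanced:
print(max_tracking_velocity(40, 0.02, 0.02, parallel=True))
print(max_tracking_velocity(40, 0.02, 0.02, parallel=False))
```

Larger windows raise the bound too, but at the cost of processing time, which is why the abstract stresses "appropriately sized" windows rather than simply large ones.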
A Software Integration Framework for Cognitive Systems
Communications in Computer and Information Science, 2012
A Framework for Cognitive Vision Systems or Identifying Obstacles to Integration
Lecture Notes in Computer Science, 2006
Cognitive Vision Systems (CVS) attempt to provide solutions for tasks such as exploring the environment, making robots act autonomously, or understanding the actions of people. What these systems have in common is the use of a large number of models and techniques, e.g., perception-action mapping, recognition and categorisation, prediction, reaction and symbolic interpretation, and communication with humans. Within this contribution these