
Visual Learning by Imitation With Motor Representations

2005, IEEE Transactions on Systems, Man and Cybernetics, Part B (Cybernetics)

https://doi.org/10.1109/TSMCB.2005.846654

Abstract

We propose a general architecture for action (mimicking) and program (gesture) level visual imitation. Action-level imitation involves two modules: the Viewpoint Transformation (VPT), which performs a "rotation" to align the demonstrator's body with that of the learner, and the Visuo-Motor Map (VMM), which maps this visual information to motor data.
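
To make the roles of the two modules concrete, below is a minimal Python sketch of an action-level pipeline, not the authors' implementation. It assumes the demonstrator's body is observed as a set of 3-D points and the learner's motor space is a vector of joint angles; the 180-degree yaw used by the VPT (demonstrator and learner facing each other) and the linear least-squares regressor standing in for the VMM are illustrative choices, and the names viewpoint_transformation and VisuoMotorMap are hypothetical.

```python
import numpy as np


def viewpoint_transformation(points_demo, yaw=np.pi):
    """VPT sketch: 'rotate' the demonstrator's body points about the vertical
    axis so they are expressed as if seen from the learner's own viewpoint.
    A 180-degree yaw is assumed here (demonstrator and learner face to face)."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return points_demo @ R.T


class VisuoMotorMap:
    """VMM sketch: a linear least-squares map from visual features
    (flattened, viewpoint-aligned body points) to motor data (joint angles)."""

    def fit(self, visual_features, joint_angles):
        # Append a bias column and solve the regression in closed form.
        X = np.hstack([visual_features, np.ones((len(visual_features), 1))])
        self.W, *_ = np.linalg.lstsq(X, joint_angles, rcond=None)
        return self

    def predict(self, visual_features):
        X = np.hstack([visual_features, np.ones((len(visual_features), 1))])
        return X @ self.W


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: 100 observations of 4 body points (x, y, z) and, as a
    # placeholder, the learner's 3 corresponding joint angles.
    demo_points = rng.normal(size=(100, 4, 3))
    aligned = np.stack([viewpoint_transformation(p) for p in demo_points])
    visual = aligned.reshape(100, -1)      # flatten aligned points to features
    motor = rng.normal(size=(100, 3))      # placeholder joint angles

    vmm = VisuoMotorMap().fit(visual, motor)
    print(vmm.predict(visual[:1]))         # predicted joint angles
```

Any regressor (a neural network, nearest-neighbor lookup, etc.) could replace the linear map; the essential structure is that the VPT removes the viewpoint difference before the VMM converts visual features into motor commands.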

References (32)

  1. S. Schaal, "Is imitation learning the route to humanoid robots?," Trends Cognitive Sci., vol. 3, no. 6, 1999.
  2. J. Yang, Y. Xu, and C. S. Chen, "Hidden Markov model approach to skill learning and its application to telerobotics," IEEE Trans. Robotics Autom., vol. 10, no. 5, pp. 621-631, Oct. 1994.
  3. T. G. Williams, J. J. Rowland, and M. H. Lee, "Teaching from examples in assembly and manipulation of snack food ingredients by robot," in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., Oct. 29-Nov. 03 2001, pp. 2300-2305.
  4. A. D'Souza, S. Vijayakumar, and S. Schaal, "Learning inverse kinematics," in Proc. Int. Conf. Intell. Robots Syst., Maui, HI, 2001.
  5. J. S. Bruner, "Nature and use of immaturity," Amer. Psychol., vol. 27, pp. 687-708, 1972.
  6. M. Asada, Y. Yoshikawa, and K. Hosoda, "Learning by observation without three-dimensional reconstruction," Intell. Auton. Syst., pp. 555-560, 2000.
  7. A. Billard and G. Hayes, "Drama, a connectionist architecture for control and learning in autonomous robots," Adaptive Behavior, vol. 7, no. 1, pp. 35-63, 1999.
  8. M. J. Matarić, "Sensory-motor primitives as a basis for imitation: Linking perception to action and biology to robotics," in Imitation in Animals and Artifacts, C. Nehaniv and K. Dautenhahn, Eds. MIT Press, 2000.
  9. G. Metta, G. Sandini, L. Natale, and F. Panerai, "Sensorimotor interaction in a developing robot," in Proc. First Int. Workshop Epigenetic Robotics: Modeling Cognitive Development Robotic Syst., Lund, Sweden, Sep. 2001.
  10. G. Metta, R. Manzotti, F. Panerai, and G. Sandini, "Development: Is it the right way toward humanoid robotics?," in Proc. IAS, Venice, Italy, Jul. 2000.
  11. M. Asada, K. F. MacDorman, H. Ishiguro, and Y. Kuniyoshi, "Cognitive developmental robotics as a new paradigm for the design of humanoid robots," Robot. Auton. Syst., vol. 37, pp. 185-193, 2001.
  12. V. G. Payne and L. D. Isaacs, Human Motor Development: A Lifespan Approach. Mountain View, CA: Mayfield, 2002.
  13. L. Fadiga, L. Fogassi, V. Gallese, and G. Rizzolatti, "Visuomotor neurons: Ambiguity of the discharge or 'motor' perception?," Int. J. Psychophysiol., vol. 35, 2000.
  14. V. S. Ramachandran, "Mirror neurons and imitation learning as the driving force behind the great leap forward in human evolution," Edge, vol. 69, Jun. 2000.
  15. A. Murata, L. Fadiga, L. Fogassi, V. Gallese, V. Raos, and G. Rizzolatti, "Object representation in the ventral premotor cortex (area f5) of the monkey," J. Neurophysiol., vol. 78, no. 4, pp. 2226-2230, Oct. 1997.
  16. E. Oztop, "Modeling the Mirror: Grasp Learning and Action Recognition," Ph.D. Dissertation, Univ. Southern Calif., Los Angeles, CA, Aug. 2002.
  17. J. J. Gibson, The Ecological Approach to Visual Perception. Boston, MA: Houghton Mifflin, 1979.
  18. L. Fogassi, V. Gallese, G. Buccino, L. Craighero, L. Fadiga, and G. Rizzolatti, "Cortical mechanism for the visual guidance of hand grasping movements in the monkey: A reversible inactivation study," Brain, vol. 124, no. 3, pp. 571-586, Mar. 2001.
  19. J. M. Rehg and T. Kanade, "Visual tracking of high DOF articulated structures: An application to human hand tracking," in Proc. ECCV (2), 1994, pp. 35-46.
  20. Y. Wu and T. S. Huang, "Capturing articulated human hand motion: A divide-and-conquer approach," in Proc. ICCV (1), 1999, pp. 606-611.
  21. M. J. Black and A. D. Jepson, "Eigentracking: Robust matching and tracking of articulated objects using a view-based representation," in Proc. ECCV (1), 1996, pp. 329-342.
  22. D. M. Gavrila, "The visual analysis of human movement: A survey," Comput. Vision Image Understanding, vol. 73, pp. 82-98, 1999.
  23. J. M. Rehg and T. Kanade, "Model-based tracking of self-occluding articulated objects," in Proc. ICCV, 1995, pp. 612-617.
  24. Y. Wu and T. S. Huang, "View-independent recognition of hand postures," in Proc. CVPR, Jun. 2000, pp. 88-94.
  25. R. L. Gregory, Eye and Brain: The Psychology of Seeing. Princeton, NJ: Princeton Univ. Press, 1990.
  26. CyberGlove [Online]. Available: http://www.immersion.com
  27. M. Cabido-Lopes and J. Santos-Victor, "Visual transformations in gesture imitation: What you see is what you do," in Proc. Int. Conf. Robotics Autom., 2003.
  28. P. Rochat, "Ego function of early imitation," in The Imitative Mind, A. N. Meltzoff and W. Prinz, Eds. Cambridge, U.K.: Cambridge Univ. Press, 2002.
  29. C. Taylor, "Reconstruction of articulated objects from point correspondences in a single uncalibrated image," Comput. Vision Image Understanding, vol. 80, 2000.
  30. P. M. Baggenstoss, Statistical Modeling Using Gaussian Mixtures and HMMs With Matlab [Online]. Available: http://www.npt.nuwc.navy.mil/Csf/htmldoc/pdf/
  31. N. Vlassis and A. Likas, "A kurtosis-based dynamic approach to Gaussian mixture modeling," IEEE Trans. Syst., Man, Cybern. A, vol. 29, no. 4, pp. 393-399, Jul. 1999.
  32. M. Schenatti, L. Natale, G. Metta, and G. Sandini, Object Grasping Data-Set. Genova, Italy: Lira Lab., Univ. Genova, 2003.