Beyond Teleoperation: Exploiting Human Motor Skills with MARIOnET
Abstract
Although machine learning has improved the rate and accuracy with which robots can learn, there remain tasks at which humans improve performance significantly faster and more robustly than computers. While some ongoing work considers the role of human reinforcement in intelligent algorithms, the burden of learning is often placed solely on the computer. These approaches neglect the expressive capabilities of humans, especially our ability to quickly refine motor skills.