
Learning domain knowledge for teaching procedural skills

2002, Proceedings of the first international joint conference on Autonomous agents and multiagent systems part 3 - AAMAS '02

https://doi.org/10.1145/545056.545134

Abstract

This paper describes a method for acquiring procedural knowledge for use by pedagogical agents in interactive simulation-based learning environments. Such agents need to be able to adapt their behavior to the changing conditions of the simulated world, and respond appropriately in mixed-initiative interactions with learners. This requires a good understanding of the goals and causal dependencies in the procedures being taught. Our method, inspired by human tutorial dialog, combines direct specification, demonstration, and experimentation. The human instructor demonstrates the skill being taught, while the agent observes the effects of the procedure on the simulated world. The agent then autonomously experiments with the procedure, making modifications to it, in order to understand the role of each step in the procedure. At various points the instructor can provide clarifications, and modify the developing procedural description as needed. This method is realized in a system called Diligent, which acquires procedural knowledge for the STEVE animated pedagogical agent.
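The abstract describes the agent autonomously experimenting with a demonstrated procedure, modifying it in order to understand the role of each step. One simple form of such experimentation is step omission: re-run the procedure with each step left out and observe whether the goal is still achieved. The sketch below illustrates this idea only; it is not the Diligent implementation, and all names (`run`, `necessary_steps`, the toy device domain) are hypothetical.

```python
# Hypothetical sketch of omission-based experimentation over a
# demonstrated procedure (illustrative only, not the Diligent system).

def run(steps, world):
    """Apply each step (a function world -> world) in order."""
    for step in steps:
        world = step(world)
    return world

def necessary_steps(steps, initial_world, goal):
    """Re-run the procedure with each step omitted in turn; a step is
    deemed necessary if the goal predicate fails without it."""
    necessary = []
    for i, step in enumerate(steps):
        trial = steps[:i] + steps[i + 1:]        # omit step i
        if not goal(run(trial, dict(initial_world))):
            necessary.append(step.__name__)
    return necessary

# Toy domain (assumed for illustration): powering on a device requires
# plugging it in and pressing the switch; waving has no effect.
def plug_in(w):  return {**w, "plugged": True}
def wave(w):     return w                        # superfluous step
def press(w):    return {**w, "on": w.get("plugged", False)}

demo = [plug_in, wave, press]
print(necessary_steps(demo, {}, lambda w: w.get("on", False)))
# → ['plug_in', 'press']
```

In this toy run, omitting `wave` still achieves the goal, so the agent can learn that it plays no causal role, while omitting `plug_in` or `press` breaks the procedure.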
