Agents that Learn to Instruct
1997
Abstract
This paper describes a software agent that learns procedural knowledge from a human instructor well enough to teach human students. To teach, the agent needs more than the ability to perform a procedure: it must also monitor human students performing the procedure and articulate the reasons why each action is necessary. Our research concentrates on letting the instructor teach the agent in a natural manner, on reducing the burden on the instructor, and on focusing learning on the procedure being taught. Initially the agent has little domain knowledge. The instructor demonstrates a procedure by directly manipulating a simulated environment, but a single demonstration is not sufficient for understanding the causal relationships among the demonstration's actions. The more demonstrations a procedure requires, the greater the instructor's burden; the agent can get by with fewer demonstrations if it experiments autonomously. These experiments attempt to uncover the causal dependencies between a demonstration's actions by perturbing the order in which those actions are performed.
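To make the perturbation idea concrete, the sketch below shows one simple way such experiments could work; it is an illustration under assumed interfaces, not the paper's implementation. The ToySimulator, its action names, and the dependency test are all hypothetical stand-ins for the authors' simulated environment.

```python
# Minimal sketch: infer ordering dependencies among demonstrated actions by
# perturbing their order and replaying them in a simulated environment.
# Everything here (ToySimulator, the action set, the test) is illustrative.

class ToySimulator:
    """Tiny stand-in for a simulated environment: each action has
    preconditions (flags that must hold) and effects (flags it sets)."""
    ACTIONS = {
        "open_panel":  (set(),           {"panel_open"}),
        "flip_switch": ({"panel_open"},  {"switch_on"}),
        "press_start": ({"switch_on"},   {"running"}),
    }

    def run(self, action_sequence):
        """Execute actions in order; return the index of the first action
        whose preconditions are unmet, or None if the whole sequence works."""
        state = set()
        for i, action in enumerate(action_sequence):
            preconditions, effects = self.ACTIONS[action]
            if not preconditions <= state:
                return i
            state |= effects
        return None

def infer_dependencies(demo, sim):
    """Hypothesize that action b depends on action a if moving b to a
    position before a makes b fail in the simulator."""
    deps = set()
    for i, a in enumerate(demo):
        for j in range(i + 1, len(demo)):
            b = demo[j]
            # Perturbation: move b to just before a, keep the rest in order.
            perturbed = demo[:i] + [b] + [x for x in demo[i:] if x != b]
            failed_at = sim.run(perturbed)
            if failed_at is not None and perturbed[failed_at] == b:
                deps.add((a, b))  # b appears to require a to come first
    return deps

if __name__ == "__main__":
    demo = ["open_panel", "flip_switch", "press_start"]
    print(infer_dependencies(demo, ToySimulator()))
    # {('open_panel', 'flip_switch'), ('open_panel', 'press_start'),
    #  ('flip_switch', 'press_start')}
```

Note that this naive test recovers the transitive closure of the ordering constraints (e.g. press_start is reported as depending on open_panel as well as on flip_switch); a fuller treatment would prune constraints already implied by others.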