Approved for public release: distribution unlimited.
Supplementary notes: Judith Orasanu, contracting officer's representative. This material is to appear in S. Vosniadou and A. Ortony (eds.), Similarity and Analogical Reasoning.
Key words: analogy and similarity, artificial intelligence, cognitive psychology, machine learning.
Abstract: This research note is divided into three main sections. The first distinguishes the different kinds of information that are related by analogy and similarity mappings. The second section discusses the different contexts or tasks that give rise to mappings. The third catalogues different solutions, proposed or possible, for each issue. The note concludes by arguing that the issue is not whether analogies are helpful or harmful, but what determines when they are helpful and when they are harmful.
International Conference on Autonomic and Autonomous Systems, Mar 25, 2012
Today's computer systems are under relentless attack from cyber attackers armed with sophisticated vulnerability search and exploit development toolkits. To protect against such threats, we are developing FUZZBUSTER, an automated system that provides adaptive immunity against a wide variety of cyber threats. FUZZBUSTER reacts to observed attacks and proactively searches for never-before-seen vulnerabilities. FUZZBUSTER uses a suite of fuzz testing and vulnerability assessment tools to find or verify the existence of vulnerabilities. Then FUZZBUSTER conducts additional tests to characterize the extent of the vulnerability, identifying ways it can be triggered. After characterizing a vulnerability, FUZZBUSTER synthesizes and applies an adaptation to prevent future exploits.
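The find-then-characterize cycle described above can be sketched with a minimal mutation-based fuzz loop. All names here (`fuzz`, `fragile_parser`) are invented for illustration; this is not FUZZBUSTER's actual tooling, just the general technique of mutating seed inputs, recording crashes, and then inspecting the crashing inputs for a shared trigger.

```python
import random

def fuzz(target, seeds, rounds=2000, rng=None):
    """Minimal mutation-based fuzz loop: repeatedly mutate a random
    position of a seed input and record every input that makes the
    target crash (i.e., raise an exception)."""
    rng = rng or random.Random(0)
    crashes = []
    for _ in range(rounds):
        base = rng.choice(seeds)
        i = rng.randrange(len(base))
        mutated = base[:i] + chr(rng.randrange(32, 127)) + base[i + 1:]
        try:
            target(mutated)
        except Exception:
            crashes.append(mutated)
    return crashes

def fragile_parser(s):
    # Toy stand-in for a vulnerable program: any '%' triggers the bug.
    if "%" in s:
        raise ValueError("format-string-style bug")
    return s.upper()

crashes = fuzz(fragile_parser, ["hello world", "abc123"])
# Characterization step: every crashing input shares the '%' trigger,
# which is what an adaptation would then filter or patch against.
trigger_shared = all("%" in c for c in crashes)
```

Real fuzzers add coverage feedback, input minimization, and sandboxed execution; the loop above only shows the shape of the search.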
We are developing a prototype of a simulation development tool to aid administrators in the process of redesigning organizational structures. The purpose of the system is to help organization designers to more precisely model their hypothetical designs, and, by simulation, to predict key facets of the overall behavior of their proposed organizational structures. The tool will help them to evaluate the restructured organization's potential for improved efficiency, or spot potential weaknesses in the system during peak loads. With this tool, organizational models are built using a library of simulation components characterizing commonly used coordination structures and communications mechanisms. Our hypothesis is that the structuring of the design tool around a model construction library of coordination mechanisms will allow designers to readily compose existing and proposed organizational structures to effectively evaluate their options.
We examine the issues that arise in extending an estimated-regression planner to find plans for multiagent teams: cooperating agents that take orders but do no planning themselves. An estimated-regression system is a classical planner that searches situation space, using as a heuristic values derived from a backward search through a simplified space, summarized in the regression-match graph. Extending the planner to work with multiagent teams requires it to cope with autonomous processes and with objective functions that go beyond the traditional step count. Although regressing through process descriptions is no more difficult than regressing through standard action descriptions, determining how good an action recommended by the regression-match graph really is requires projecting the subtree suggested by the action. We are in the process of implementing the algorithm.
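The basic shape of a heuristic situation-space planner can be sketched as follows. This is a drastic simplification: the heuristic here just counts unsatisfied goal facts, standing in for the much richer regression-match estimate the paper describes, and the actions and domain are invented for the example.

```python
from heapq import heappush, heappop
from itertools import count

def plan(init, goal, actions):
    """Greedy best-first forward search over STRIPS-like states.
    Each action is (name, preconditions, add-set, delete-set)."""
    h = lambda s: len(goal - s)          # stand-in heuristic
    tie = count()                        # tie-breaker; avoids comparing states
    frontier = [(h(init), next(tie), frozenset(init), [])]
    seen = set()
    while frontier:
        _, _, state, steps = heappop(frontier)
        if goal <= state:
            return steps
        if state in seen:
            continue
        seen.add(state)
        for name, pre, add, delete in actions:
            if pre <= state:
                nxt = (state - delete) | add
                heappush(frontier, (h(nxt), next(tie), nxt, steps + [name]))
    return None

acts = [("boil", {"pot"}, {"hot"}, set()),
        ("brew", {"hot", "leaves"}, {"tea"}, {"hot"})]
result = plan(frozenset({"pot", "leaves"}), {"tea"}, acts)
```

The paper's point is precisely that a goal-fact count like `h` is too weak once autonomous processes and non-step-count objectives enter; the regression-match graph supplies a better-informed estimate.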
International Joint Conference on Artificial Intelligence, Jul 11, 2009
Existing work on workflow mining ignores the dataflow aspect of the problem. This is not acceptable for service-oriented applications that use Web services with typed inputs and outputs. We propose a novel algorithm WIT (Workflow Inference from Traces) which identifies the context similarities of the observed actions based on the dataflow and uses model merging techniques to generalize the control flow and the dataflow simultaneously. We identify the class of workflows that WIT can learn correctly. We implemented WIT and tested it on a real world medical scheduling domain where WIT was able to find a good approximation of the target workflow.
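The model-merging idea can be illustrated in miniature: build a prefix-tree acceptor from observed traces, then collapse states that look alike. The grouping criterion below (identical sets of outgoing action labels) is a crude, invented stand-in for WIT's dataflow-based context similarity; the trace data is likewise hypothetical.

```python
def prefix_tree(traces):
    """Build a prefix-tree acceptor: each state is a trace prefix,
    with outgoing edges labelled by the next observed action."""
    states = {(): set()}
    for trace in traces:
        for i, action in enumerate(trace):
            prefix, extended = tuple(trace[:i]), tuple(trace[:i + 1])
            states.setdefault(extended, set())
            states[prefix].add((action, extended))
    return states

def merge_by_signature(states):
    """Crude stand-in for similarity-based merging: group states whose
    sets of outgoing action labels are identical."""
    classes = {}
    for state, edges in states.items():
        sig = frozenset(action for action, _ in edges)
        classes.setdefault(sig, []).append(state)
    return classes

traces = [["register", "schedule", "confirm"], ["register", "confirm"]]
classes = merge_by_signature(prefix_tree(traces))
# The two terminal states (no outgoing actions) fall into one class,
# generalizing the two traces into a single accepting state.
```

WIT additionally tracks the typed inputs and outputs flowing between actions, so merges must preserve dataflow, not just control flow.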
Cognitive systems face the challenge of pursuing changing goals in an open world with unpredictable collaborators and adversaries. Considerable work has focused on automated planning in dynamic worlds, and even re-planning and plan repair due to unexpected changes. Less work explores how humans and computers can negotiate to define shared goals and collaborate over the fulfillment of those goals. Our work takes a domain-general approach to plan localization, the problem of establishing the set of steps within the plan that are candidates (potentially after some adaptive repair actions) for next actions given the world's unforeseen changes. We use analogical mapping to help agents determine the nearest states in a diverse plan relative to the current world state, identifying both the maximal satisfied states that the world presently conforms to, and the closest desired states adjacent to satisfied states that are achievable by an action and make progress toward the goal. These capabilities are demonstrated in a system called CLiC. The system's overall purpose is to engage in symmetric dialog with human users about goals and recommended actions to achieve those goals. Both the human and the system may choose to take those actions, or describe them to the other party. They may not always do what they are told. Preliminary results indicate that our approach suits collaborative situated agents with flexible goals in open worlds.
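A toy version of plan localization can be sketched as follows. Here each plan step is reduced to a set of facts, and plain set containment stands in for the analogical mapping the paper uses; the step names and world facts are invented for the example.

```python
def localize(plan_states, world):
    """Toy plan localization: find the furthest plan step whose facts
    all hold in the current world (the maximal satisfied state), and
    return it along with the next desired step after it."""
    world = set(world)
    satisfied = [i for i, s in enumerate(plan_states) if set(s) <= world]
    if not satisfied:
        return None, plan_states[0]
    anchor = max(satisfied)
    nxt = plan_states[anchor + 1] if anchor + 1 < len(plan_states) else None
    return plan_states[anchor], nxt

steps = [{"have_blocks"},
         {"have_blocks", "base_built"},
         {"have_blocks", "base_built", "tower_done"}]
# The world contains an unforeseen extra fact ("rain"); localization
# still anchors on the furthest satisfied step and proposes the next.
anchor, nxt = localize(steps, {"have_blocks", "base_built", "rain"})
```

Analogical mapping generalizes the containment test here: it tolerates renamed or structurally similar facts rather than requiring exact matches.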
Proceedings of the AAAI Conference on Artificial Intelligence
We demonstrate an integrated system for building and learning models and structures in both a real and virtual environment. The system combines natural language understanding, planning, and methods for composition of basic concepts into more complicated concepts. The user and the system interact via natural language to jointly plan and execute tasks involving building structures, with clarifications and demonstrations to teach the system along the way. We use the same architecture for building and simulating models of biology, demonstrating the general-purpose nature of the system where domain-specific knowledge is concentrated in sub-modules with the basic interaction remaining domain-independent. These capabilities are supported by our work on semantic parsing, which generates knowledge structures to be grounded in a physical representation, and composed with existing knowledge to create a dynamic plan for completing goals. Prior work on learning from natural language demonstratio...
Assessing the credibility of research claims is a central, continuous, and laborious part of the scientific process. Credibility assessment strategies range from expert judgment to aggregating existing evidence to systematic replication efforts. Such assessments can require substantial time and effort. Research progress could be accelerated if there were rapid, scalable, accurate credibility indicators to guide attention and resource allocation for further assessment. The SCORE program is creating and validating algorithms to provide confidence scores for research claims at scale. To investigate the viability of scalable tools, teams are creating: a database of claims from papers in the social and behavioral sciences; expert and machine generated estimates of credibility; and evidence of reproducibility, robustness, and replicability to validate the estimates. Beyond the primary research objective, the data and artifacts generated from this program will be openly shared and provide...
International Semantic Web Conference, Jul 30, 2001
One vision of the "Semantic Web" of the future is that software agents will interact with each other using formal metadata that reveal their interfaces. We examine one plausible paradigm, where agents provide service descriptions that tell how they can be used to accomplish other agents' goals. From the point of view of these other agents, the problem of deciphering a service description is quite similar to the standard AI planning problem, with some interesting twists. Two such twists are the possibility of having to reconcile contradictory ontologies (conceptual frameworks) used by the agents, and having to rearrange the data structures of a message-sending agent so they match the expectations of the recipient. We argue that the former problem requires human intervention and maintenance, but that the latter can be fully automated.
[1] Original meaning: the philosophical study of being. As used in AI, the word "ontology" has come to mean "what is represented as existing."
[2] We depart from Lisp notation in two contexts: we represent finite sets using braces and tuples using angle brackets. Lisp purists may prefer to read {a, b, c} as (set a b c), and <a b c> as (tuple a b c).
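The division of labor the abstract argues for can be sketched concretely: the field-by-field rearrangement of a message is mechanical, while the mapping between the two vocabularies (the synonym table below) is the ontology-reconciliation part that requires human maintenance. All field names and the `rearrange` helper are invented for illustration.

```python
def rearrange(message, expected_fields, synonyms=None):
    """Remap a sender's message onto the recipient's expected field
    names. The synonym table models the human-maintained ontology
    reconciliation; the field shuffling itself is fully automatic."""
    synonyms = synonyms or {}
    out = {}
    for field in expected_fields:
        if field in message:
            out[field] = message[field]
        elif synonyms.get(field) in message:
            out[field] = message[synonyms[field]]
        else:
            raise KeyError(f"cannot supply required field {field!r}")
    return out

# Hypothetical sender message and recipient schema:
sent = {"dest": "BOS", "depart-date": "2001-07-30"}
received = rearrange(sent, ["destination", "departure"],
                     synonyms={"destination": "dest",
                               "departure": "depart-date"})
```

When a required field has no counterpart at all, the function fails loudly, which is exactly the case where automation ends and human ontology work begins.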
This is the third in a series of workshops related to this topic, the first of which was the AAAI-10 Workshop on Goal-Directed Autonomy, while the second was the Self-Motivated Agents (SeMoA) Workshop, held at Lehigh University in November 2012. Our objective for holding this meeting was to encourage researchers to share information on the study, development, integration, evaluation, and application of techniques related to goal reasoning, which concerns the ability of an intelligent agent to reason about, formulate, select, and manage its goals/objectives. Goal reasoning differs from frameworks in which agents are told what goals to achieve, and possibly how goals can be decomposed into subgoals, but cannot dynamically and autonomously decide what goals they should pursue. That constraint can be limiting for agents that solve tasks in complex environments, where it is not feasible to manually engineer or encode complete knowledge of what goal(s) should be pursued for every conceivable state. Yet, in such environments, states can be reached in which actions can fail, opportunities can arise, and events can otherwise take place that strongly motivate changing the goal(s) that the agent is currently trying to achieve. This topic is not new; researchers in several areas have studied goal reasoning (e.g., in the context of cognitive architectures, automated planning, game AI, and robotics). However, it has infrequently been the focus of intensive study, and (to our knowledge) no other series of meetings has focused specifically on goal reasoning. As shown in these papers, providing an agent with the ability to reason about its goals can increase performance measures for some tasks.
Recent advances in hardware and software platforms (involving the availability of interesting/complex simulators or databases) have increasingly permitted the application of intelligent agents to tasks that involve partially observable and dynamically-updated states (e.g., due to unpredictable exogenous events), stochastic actions, multiple (cooperating, neutral, or adversarial) agents, and other complexities. Thus, this is an appropriate time to foster dialogue among researchers with interests in goal reasoning. Research on goal reasoning is still in its early stages; no mature application of it yet exists (e.g., for controlling autonomous unmanned vehicles or in a deployed decision aid). However, it appears to have a bright future. For example, leaders in the automated planning community have specifically acknowledged that goal reasoning has a prominent role among intelligent agents that act on their own plans, and it is gathering increasing attention from roboticists and cognitive systems researchers. In addition to a survey, the papers in this workshop relate to, among other topics, cognitive architectures and models, environment modeling, game AI, machine learning, meta-reasoning, planning, self-motivated systems, simulation, and vehicle control. The authors discuss a wide range of issues pertaining to goal reasoning, including representations and reasoning methods for dynamically revising goal priorities. We hope that readers will find this theme for enhancing agent autonomy appealing and relevant to their own interests, and that these papers will spur further investigations on this important yet (mostly) understudied topic. Many thanks to the participants and ACS for making this event happen!
Proceedings of the AAAI Conference on Artificial Intelligence
This paper describes a novel combination of Java program analysis with an automated learning and planning architecture, applied to the domain of Java vulnerability analysis. The key feature of our "HACKAR: Helpful Advice for Code Knowledge and Attack Resilience" system is its ability to analyze Java programs at development time, identifying vulnerabilities and ways to avoid them. HACKAR uses an improved version of NASA's Java PathFinder (JPF) to execute Java programs and identify vulnerabilities. The system features new Hierarchical Task Network (HTN) learning algorithms that (1) advance state-of-the-art HTN learners with reasoning about numeric constraints, failures, and more general cases of recursion, and (2) contribute to problem-solving by learning a hierarchical dataflow representation of the program from the inputs of the program. Empirical evaluation demonstrates that HACKAR was able to suggest fixes for all of our test program suites. It also shows that HACKAR can analyze programs with st...
Using a description classifier to enhance knowledge representation
IEEE Expert, 1991
... were the only tools available to discover problems with representational components. KREME, a knowledge acquisition and editing tool, uses a classifier to help knowledge engineers maintain consistency while developing knowledge bases. One of KREME's functions is to ...
Multinational coalitions are increasingly important in military operations. But coalitions today suffer from heterogeneous command systems, labour-intensive information collection and coordination, and different and incompatible ways of representing information. The purpose of Network Enabled Capability (NEC) is to enhance military capability by exploiting information better. The Coalition Agents Experiment (CoAX) was an international collaborative research effort to examine how the emerging technologies of software agents and the semantic web could help to construct coherent command support systems for coalition operations. Technology demonstrations based on a realistic coalition scenario showed how agents and associated technologies facilitated run-time interoperability across the coalition, responded well to unexpected battlespace events, and aided the selective sharing of information between coalition partners. We describe the CoAX experiments, the approaches and technologies us...
The Semantic Web should enable greater access not only to content but also to services on the Web. Users and software agents should be able to discover, invoke, compose, and monitor Web resources offering particular services and having particular properties. As part of the DARPA Agent Markup Language program, we have begun to develop an ontology of services, called DAML-S, that will make these functionalities possible. In this paper we describe the overall structure of the ontology, the service profile for advertising services, and the process model for the detailed description of the operation of services. We also compare DAML-S with several industry efforts to define standards for characterizing services on the Web.
Papers by Mark Burstein