Conference Presentations by Siddhartha Bhattacharyya

Autonomous intelligent agents are often used to command autonomous systems in their mission execution. These intelligent agents can be implemented in a cognitive architecture in order to perform human-like reasoning about the mission. In this design approach, the agent is modeled by associating behaviors with the components of the cognitive architecture. One of the architectures we investigate in our research effort is the rule-based reasoning system Soar. In this framework, agents take in data to form a perception of the situation and then use Soar rules to make decisions and propose new actions. However, the rule-based representations in cognitive architectures like Soar are not amenable to formal, rigorous analysis. Understanding when the cognitive architecture will deviate from the desired behavior, due to the environment or to adaptation, requires creating and proving adherence to a formal specification of acceptability. There is a critical need for verification methods to be integrated with the design and operation of intelligent agents to assure correct execution before deployment in safety-critical missions. In this work we evaluate verification by model transformation, focused on one class of cognitive architectures translated into an analytical domain. This translation enables analysis of the complete set of rules in the cognitive architecture to identify whether there exists any conflict among the rules, violation of safety properties, or reduction in performance characteristics. The rules used in the Soar cognitive architecture were transformed into the format of the Uppaal real-time verification tool. Using Uppaal, we were able to verify temporal logic properties about the behavior of the autonomous agent, such as completion of action sequences and appropriate action selection. This was demonstrated in a prototype system that performed in-flight checklist maintenance. The translation from a rule-based representation to a verifiable representation accomplishes an important first step in achieving verifiable operation of adaptive decision-making computational agents. As adaptive computational agents gain more ground in cooperative and autonomous driving cars, unmanned aerial vehicles, and production equipment, the ability to constrain the operation of these agents within a formally specified and well-understood set of performance and…
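A minimal Python sketch (with hypothetical rule names and a simplified attribute=value encoding, not the actual Soar-to-Uppaal translator) of the kind of exhaustive rule-conflict check that such a translation makes possible:

```python
from itertools import combinations

# Hypothetical encoding of Soar-style productions: each rule fires when all
# of its conditions hold in working memory and proposes a single operator.
RULES = [
    ("propose-checklist", frozenset({"phase=climb", "checklist=pending"}), "run-checklist"),
    ("propose-hold",      frozenset({"phase=climb", "checklist=pending"}), "hold-items"),
    ("propose-complete",  frozenset({"checklist=done"}),                   "report-complete"),
]

MUTUALLY_EXCLUSIVE = {frozenset({"run-checklist", "hold-items"})}

def conflicting_pairs(rules):
    """Return rule pairs that can fire on the same state yet propose
    operators declared mutually exclusive -- the kind of conflict an
    exhaustive model-checking pass over the rule set exposes."""
    conflicts = []
    for (n1, c1, a1), (n2, c2, a2) in combinations(rules, 2):
        # Naive consistency test: conditions contradict only if they bind
        # the same attribute to different values.
        same_state_possible = not any(
            cond.split("=")[0] == other.split("=")[0] and cond != other
            for cond in c1 for other in c2)
        if same_state_possible and frozenset({a1, a2}) in MUTUALLY_EXCLUSIVE:
            conflicts.append((n1, n2))
    return conflicts

print(conflicting_pairs(RULES))  # [('propose-checklist', 'propose-hold')]
```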
Papers by Siddhartha Bhattacharyya

arXiv (Cornell University), Oct 21, 2021
As aircraft systems become increasingly autonomous, the human-machine role allocation changes and opportunities for new failure modes arise. This necessitates an approach to identify the safety requirements for the increasingly autonomous system (IAS) as well as a framework and techniques to verify and validate that an IAS meets its safety requirements. We use Crew Resource Management techniques to identify requirements and behaviors for safe human-machine teaming, and we provide a methodology to verify that an IAS meets its requirements. We apply the methodology to a case study in Urban Air Mobility, which includes two contingency scenarios: an unreliable sensor and an aborted landing. For this case study, we implement an IAS agent in the Soar language that acts as a copilot for the selected contingency scenarios and performs takeoff and landing preparation, while the pilot maintains final decision authority. We develop a formal human-machine team architecture model in the Architecture Analysis and Design Language (AADL), with operator and IAS requirements formalized in the Assume Guarantee REasoning Environment (AGREE) Annex to AADL. We formally verify safety requirements for the human-machine team given the requirements on the IAS and operator. We develop an automated translator from Soar to the nuXmv model-checking language and formally verify that the IAS agent satisfies its requirements using nuXmv. We share the design and requirements errors found in the process as well as our lessons learned. * We would like to thank Natasha Neogi and Paul Miner of NASA LaRC for their input on Urban Air Mobility scenarios of interest and for their feedback on safety assessment and verification approaches.
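A toy Python sketch of the Soar-to-nuXmv translation idea (not the authors' translator): each production becomes a guarded case in the next-state function of an SMV variable. The state variables, rule guards, and the LTL property in the emitted text are all hypothetical placeholders.

```python
# Illustrative sketch: a "translator" that emits nuXmv (SMV) text from a
# rule table. Variable and value names are invented for illustration.

RULES = [
    # (guard over current state, next value of `task`)
    ("phase = takeoff & sensor = unreliable", "crosscheck_sensors"),
    ("phase = approach & landing = aborted",  "go_around_prep"),
    ("TRUE",                                  "task"),  # default: hold value
]

def to_smv(rules):
    cases = "\n".join(f"      {cond} : {nxt};" for cond, nxt in rules)
    return f"""MODULE main
VAR
  phase   : {{takeoff, cruise, approach}};
  sensor  : {{ok, unreliable}};
  landing : {{normal, aborted}};
  task    : {{idle, crosscheck_sensors, go_around_prep}};
ASSIGN
  init(task) := idle;
  next(task) :=
    case
{cases}
    esac;
-- Example requirement to check (nuXmv may return a counterexample):
LTLSPEC G (landing = aborted -> X task = go_around_prep)
"""

print(to_smv(RULES))
```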

IEEE Access
Social media is used by many as a source of information for current world events, with users then publicly sharing their sentiment about these events. However, when the shared information is not trustworthy and receives a large number of interactions, it alters the public's perception of authentic and false information, particularly when these stories originate from malicious sources. Over the past decade, there has been an influx of users on the Twitter social network, many of them automated bot accounts participating in misinformation campaigns that heavily influence user susceptibility to fake information. This can affect public opinion on real-life matters, as previously seen in the 2020 U.S. presidential election and the COVID-19 pandemic, both plagued by misinformation. In this paper, we propose an agent-based social simulation environment modeled on the social network Twitter, with the objective of evaluating how the beliefs of agents representing regular Twitter users can be influenced by malicious users scattered throughout Twitter with the sole purpose of spreading misinformation. We applied two scenarios to compare how these regular agents behave in the Twitter network, with and without malicious agents, to study how much influence malicious agents have on the general susceptibility of the regular users. To achieve this, we implemented a belief value system to measure how impressionable an agent is when encountering misinformation and how its behavior is affected. The results indicated similar outcomes in the two scenarios as the affected belief value changed for these regular agents, exhibiting belief in the misinformation. Although the change in belief value occurred slowly, it had a profound effect when the malicious agents were present, as many more regular agents came to believe the misinformation. Index terms: agent-based modeling, agent-based social simulation, multi-agent systems, social media, Twitter, Twitter bot.
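A toy Python sketch of the belief-value idea: each regular agent holds a belief in [0, 1] toward a false story, nudged upward on exposure. The update rule, weights, and exposure rates below are hypothetical, not the paper's actual belief value system.

```python
import random

class RegularAgent:
    def __init__(self, susceptibility):
        self.belief = 0.0                    # 1.0 = full belief in the story
        self.susceptibility = susceptibility  # in [0, 1]

    def observe(self, author_is_malicious):
        # Malicious (bot) accounts post more persuasively and persistently.
        weight = 0.10 if author_is_malicious else 0.03
        self.belief += self.susceptibility * weight * (1.0 - self.belief)

def run(n_agents=1000, n_steps=200, malicious_fraction=0.05, seed=0):
    rng = random.Random(seed)
    agents = [RegularAgent(rng.random()) for _ in range(n_agents)]
    for _ in range(n_steps):
        for agent in agents:
            if rng.random() < malicious_fraction:   # sees a bot post
                agent.observe(author_is_malicious=True)
            elif rng.random() < 0.02:               # organic misinformation
                agent.observe(author_is_malicious=False)
    return sum(a.belief > 0.5 for a in agents) / n_agents

print(f"believing: {run():.1%} with bots vs {run(malicious_fraction=0.0):.1%} without")
```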

Assurance for Integrating Advanced Algorithms in Autonomous Safety-Critical Systems
IEEE Systems Journal, 2021
Although advanced algorithms are needed to enable increasingly autonomous civil aviation applications, there are limitations in assurance technologies that must be addressed to gain trust in the performance of these algorithms. This gap emphasizes the need to guarantee safety by capturing performance boundaries as these algorithms are integrated. Additionally, multiple similar algorithms might need to be executed sequentially or concurrently to accomplish a mission or provide guidance for safety-critical operations. The selection among algorithm functionalities is a complex and critical activity that needs to be systematically designed and analyzed before actual implementation. Toward this end, we discuss our proposed process, which includes formally modeling abstractions of the algorithms in an architectural framework, identifying the key performance parameters, and then verifying the composition of these algorithms with formal contracts based on assumptions and guarantees. Finally, to reduce the gap between design and implementation, an automated translation from the architectural model to source code has been developed, producing a Java-based outline of the implementation. We demonstrate our compositional approach to assuring the behavior of an autonomous aerial system via a collision avoidance case study with advanced algorithms that handle critical emerging situations.
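A minimal Python sketch of the assume-guarantee idea behind such contracts (the predicates and signal names are hypothetical stand-ins, not AGREE itself): a component's contract is usable in a composition only when upstream guarantees support its assumptions.

```python
from dataclasses import dataclass
from typing import Callable

State = dict  # e.g. {"separation_m": 160.0, "sensor_valid": True}

@dataclass
class Contract:
    name: str
    assume: Callable[[State], bool]
    guarantee: Callable[[State], bool]

def check_composition(contracts, test_states):
    """Sampled check (a real tool proves this over all states): wherever a
    component's upstream guarantees hold, its assumption must also hold."""
    for i, c in enumerate(contracts):
        upstream = contracts[:i]
        for s in test_states:
            if all(u.guarantee(s) for u in upstream) and not c.assume(s):
                return f"assumption of {c.name} unsupported in state {s}"
    return "no violations on sampled states"

sensor = Contract("sensor",  lambda s: True,
                             lambda s: s["sensor_valid"])
avoid  = Contract("avoider", lambda s: s["sensor_valid"],
                             lambda s: s["separation_m"] > 150)

states = [{"sensor_valid": True,  "separation_m": 160.0},
          {"sensor_valid": False, "separation_m": 90.0}]
print(check_composition([sensor, avoid], states))
```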
Autonomous systems are designed and deployed in different modeling paradigms, each of which focuses on specific concepts in designing the system. We focus on the use of cognitive architectures to design autonomous agents that collaborate with humans to accomplish tasks in a mission. Our research introduces formal assurance methods to verify the behavior of agents designed in Soar by translating the agent to the formal verification environment Uppaal.

Electronic Proceedings in Theoretical Computer Science, 2021
As aircraft systems become increasingly autonomous, the human-machine role allocation changes and opportunities for new failure modes arise. This necessitates an approach to identify the safety requirements for the increasingly autonomous system (IAS) as well as a framework and techniques to verify and validate that an IAS meets its safety requirements. We use Crew Resource Management techniques to identify requirements and behaviors for safe human-machine teaming, and we provide a methodology to verify that an IAS meets its requirements. We apply the methodology to a case study in Urban Air Mobility, which includes two contingency scenarios: an unreliable sensor and an aborted landing. For this case study, we implement an IAS agent in the Soar language that acts as a copilot for the selected contingency scenarios and performs takeoff and landing preparation, while the pilot maintains final decision authority. We develop a formal human-machine team architecture model in the Architecture Analysis and Design Language (AADL), with operator and IAS requirements formalized in the Assume Guarantee REasoning Environment (AGREE) Annex to AADL. We formally verify safety requirements for the human-machine team given the requirements on the IAS and operator. We develop an automated translator from Soar to the nuXmv model-checking language and formally verify that the IAS agent satisfies its requirements using nuXmv. We share the design and requirements errors found in the process as well as our lessons learned. * We would like to thank Natasha Neogi and Paul Miner of NASA LaRC for their input on Urban Air Mobility scenarios of interest and for their feedback on safety assessment and verification approaches.
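One kind of requirement such a team architecture model makes checkable is a temporal property over the human-machine interaction. A hypothetical example (not taken from the paper) stating that the pilot's final decision authority is preserved:

```latex
\mathbf{G}\,\big(\mathit{ias\_proposes\_action} \rightarrow
    (\neg\,\mathit{action\_executed} \;\mathbf{U}\; \mathit{pilot\_approves})\big)
```

Read: globally, whenever the IAS proposes an action, that action is not executed until the pilot approves it.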

Assuring Intelligent Systems: Contingency Management for UAS
IEEE Transactions on Intelligent Transportation Systems, 2021
Unmanned aircraft systems (UAS) collaborate with humans to operate in diverse, safety-critical applications. However, assurance technologies need to be integrated into the design process in order to guarantee safe behavior, thereby enabling UAS operations in the National Airspace System (NAS). In this paper, formal methods are integrated with learning-enabled system representations. The generation and representation of knowledge are captured via monadic second-order logic rules in the cognitive architecture Soar. These rules are translated into timed automata, and a proof of correctness for the translation is provided so that safety and liveness properties can be checked in the formal verification environment Uppaal. This approach is agnostic to the learning mechanism used to generate the learned rules (e.g., chunking). An example of a fault-tolerant, learning-enabled UAS deciding which of four contingency procedures to execute under a lost-link scenario while overflying an urban area illustrates the approach.
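A hypothetical Python sketch of the decision being verified: under lost link, select exactly one of four contingency procedures from vehicle state. The guards and procedure names are illustrative, not the paper's rule set; the point is that a total, deterministic selection function supports both liveness ("some procedure is always chosen") and safety ("never loiter over an urban area") checks after translation to timed automata.

```python
def select_contingency(fuel_min, over_urban_area, gps_ok, altitude_ft):
    """Return exactly one of four procedures for a lost-link event."""
    if fuel_min < 10:
        return "LAND_IMMEDIATELY_AT_NEAREST_SAFE_SITE"
    if not gps_ok:
        return "CLIMB_AND_HOLD_LAST_HEADING"
    if over_urban_area:
        return "RETURN_TO_LAUNCH"          # safety: never loiter over urban area
    return "LOITER_AT_RALLY_POINT" if altitude_ft > 400 else "RETURN_TO_LAUNCH"

assert select_contingency(5, True, True, 500).startswith("LAND")
assert select_contingency(60, True, True, 500) == "RETURN_TO_LAUNCH"
```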

International Journal of Advanced Computer Science and Applications, 2020
Advancements in artificial intelligence, the Internet of Things, and information technology have enabled the delegation of autonomous services to autonomous systems for civil applications. It is envisioned that, with increasing demand for autonomous systems, the decision making associated with executing these services will be distributed, with some of the decision-making responsibility shifted to the autonomous systems themselves. It is therefore of utmost importance to assure the correctness of the distributed protocols that multiple autonomous systems follow as they interact with each other in providing a service. Toward this end, we discuss our proposed framework to model, analyze, and assure the correctness of distributed protocols executed by autonomous systems. We demonstrate our approach by formally modeling the behavior of autonomous systems involved in providing services in the Urban Air Mobility framework, which enables air taxis to transport passengers.
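A toy Python sketch of what "assuring a distributed protocol" means operationally: exhaustively explore the joint state space and check that a bad state is unreachable. The two-taxi vertipad handshake below is hypothetical; real models would use a formal verification tool rather than hand-rolled search.

```python
from collections import deque

def steps(state):
    """Yield successor states. State: (taxi_a, taxi_b), each progressing
    'idle' -> 'requested' -> 'granted' -> 'departed'; the protocol refuses
    to grant the pad while the other taxi holds a grant."""
    a, b = state
    nxt = {"idle": "requested", "requested": "granted", "granted": "departed"}
    if a in nxt and not (nxt[a] == "granted" and b == "granted"):
        yield (nxt[a], b)
    if b in nxt and not (nxt[b] == "granted" and a == "granted"):
        yield (a, nxt[b])

def unsafe_reachable(init=("idle", "idle")):
    seen, frontier = {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        if s == ("granted", "granted"):     # mutual-exclusion violation
            return True
        for t in steps(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return False

print("protocol safe:", not unsafe_reachable())  # protocol safe: True
```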

Lecture Notes in Computer Science, 2018
Developing trust in intelligent agents requires understanding the full capabilities of the agent, including the boundaries beyond which the agent is not designed to operate. This paper focuses on applying formal verification methods to identify these boundary conditions in order to ensure the proper design for the effective operation of the human-agent team. The approach involves creating an executable specification of the human-machine interaction in a cognitive architecture, which incorporates the expression of learning behavior. The model is then translated into a formal language, where verification and validation activities can occur in an automated fashion. We illustrate our approach through the design of an intelligent copilot that teams with a human in a takeoff operation in which a contingency scenario involving an engine-out may arise. The formal verification and counterexample generation enable increased confidence in the designed procedures and behavior of the intelligent copilot system.
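A small Python sketch of counterexample generation for boundary discovery: enumerate every discrete situation and report states where a (hypothetical, deliberately flawed) copilot procedure violates the requirement "never abort a takeoff at or above V1". Names and behavior are invented for illustration.

```python
from itertools import product

def copilot_action(engine_out, at_or_above_v1):
    # Hypothetical designed behavior with a seeded flaw: it aborts on any
    # engine-out, ignoring whether the aircraft has already passed V1.
    return "abort" if engine_out else "continue"

def counterexamples():
    for engine_out, at_or_above_v1 in product([False, True], repeat=2):
        action = copilot_action(engine_out, at_or_above_v1)
        if at_or_above_v1 and action == "abort":
            yield {"engine_out": engine_out, "at_or_above_V1": at_or_above_v1}

print(list(counterexamples()))
# [{'engine_out': True, 'at_or_above_V1': True}]
# -> the counterexample shows the design must continue the takeoff past V1
```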
Apparatus and Method for Monitoring a Liquid Product in a Sealed Vessel
Visualization of Kentucky Lake
Certification Considerations for Adaptive Systems
2006 American Control Conference, 2006
We present a systematic method of verification for a hierarchical hybrid system. The method is developed using a bottom-up approach, in which the bottom level of the hybrid system hierarchy is verified first, and each higher level is subsequently verified under the assumption that all lower levels are correct. At each step in the verification process, levels lower and higher than the one currently being verified may be abstracted, thus reducing the complexity of verification. This method is algorithmically developed and integrated into the design of a hierarchical hybrid mission-level controller for an autonomous underwater vehicle.
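A very small Python sketch of the bottom-up idea (controllers and specs are hypothetical): first check the low-level controller against its interface property, then check the mission level against an abstraction that assumes only that property.

```python
def low_level_depth_step(depth, target):
    """Concrete low-level controller: move at most 1 m toward target depth."""
    return depth + max(-1.0, min(1.0, target - depth))

def low_level_meets_spec():
    """Interface spec: each step strictly shrinks the distance to target."""
    cases = [(0.0, 10.0), (10.0, 0.0), (5.0, 5.5)]
    return all(abs(low_level_depth_step(d, t) - t) < abs(d - t)
               for d, t in cases if d != t)

def abstract_low_level(depth, target):
    # Abstraction used at the mission level: once convergence is verified
    # below, the only retained behavior is "the target depth is reached".
    return target

def mission_reaches_final_waypoint():
    depth = 0.0
    for waypoint in [10.0, 30.0, 5.0]:
        depth = abstract_low_level(depth, waypoint)
    return depth == 5.0

assert low_level_meets_spec() and mission_reaches_final_waypoint()
```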
Hybrid-model based hierarchical mission control architecture for autonomous underwater vehicles
Proceedings of the 2005, American Control Conference, 2005., 2005
The middle level of the mission control hierarchy consists of Operation Controllers, where an operation represents a mission segment or phase that is integral to the completion of the overall AUV mission and is user/mission-centric. [Figure 2: Survey AUV Mission Controller]
ABSTRACT OF DISSERTATION: Hierarchical Hybrid-Model Based Design, Verification, Simulation, and Synthesis of Mission Control for Autonomous Underwater Vehicles
We study diagnosis of discrete-event systems (DESs) modeled in the rules-based modeling formalism introduced in [6, 7] and applied to model failure-prone systems. An attractive feature of the rules-based model is its compactness (its size is polynomial in the number of signals). A motivation for the work presented here is to develop failure diagnosis techniques that are able to exploit this compactness. In this regard, we develop symbolic techniques for testing diagnosability and for online diagnosis. The diagnosability test is shown to be an instance of first-order temporal logic model checking, and an algorithm for online diagnosis is obtained by using predicates and their transformers.
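A compact Python sketch of online diagnosis for a DES: track the set of (state, fault-status) pairs consistent with the observations so far. The tiny plant is hypothetical; symbolic versions replace these explicit sets with predicates and predicate transformers.

```python
# Transitions: (state, event) -> state; the event 'fail' is unobservable.
T = {("ok", "start"): "running", ("running", "fail"): "degraded",
     ("running", "beep"): "running", ("degraded", "beep"): "degraded",
     ("degraded", "stall"): "halted"}
UNOBSERVABLE = {"fail"}

def diagnose(observations, init="ok"):
    belief = {(init, False)}  # (state, fault has occurred?)
    for obs in observations:
        # Close under unobservable fault moves (one pass suffices for this
        # plant; a fixpoint loop is needed in general), then apply the event.
        closed = set(belief)
        for s, f in belief:
            for (src, ev), dst in T.items():
                if src == s and ev in UNOBSERVABLE:
                    closed.add((dst, True))
        belief = {(T[(s, obs)], f) for s, f in closed if (s, obs) in T}
    verdicts = {f for _, f in belief}
    return ("fault certain" if verdicts == {True}
            else "no fault" if verdicts == {False} else "ambiguous")

print(diagnose(["start", "beep"]))           # ambiguous
print(diagnose(["start", "beep", "stall"]))  # fault certain
```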