Adaptive Model Context Protocols for Multi-Agent Collaboration
2025, Journal of Information Systems Engineering and Management
https://doi.org/10.52783/JISEM.V10I50S.10365
7 pages
Abstract
This study presents a new framework for adaptive model context protocols that improves multi-agent cooperation in distributed environments. The proposed method uses dynamic context-sharing mechanisms that adjust to task complexity, communication bandwidth, and computational limitations. By implementing a hierarchical context model with bidirectional context flow, the framework allows agents to negotiate optimal parameters for information exchange. Experimental evaluation in distributed sensor networks, autonomous vehicle coordination, and collaborative problem-solving shows that, compared with static approaches, the adaptive protocol lowers communication overhead while preserving task performance. To filter information exchange intelligently, the framework introduces context relevance scoring and selective propagation techniques. By providing solutions for autonomous systems operating under fluctuating resource constraints, this research bridges multi-agent collaboration and distributed systems optimization.
Related papers
Lecture Notes in Computer Science, 2014
In most multiagent-based simulation (MABS) frameworks, a scheduler activates the agents, who compute their context and decide which action to execute. This context computation is based on information about the agents themselves, the other agents, and the objects of the environment accessible to them. The issue here is identifying the information subsets that are relevant for each agent. This process is time-consuming and is one of the barriers to wider use of MABS for large simulations. Moreover, the process is hidden in the agent behavior, and no algorithm has been designed to decrease its cost. We propose a new context model where each subset of information identifying a context is formalized by a so-called "filter", and where the filters are clustered in ordered trees. Based on this context model, we also propose an algorithm to efficiently find, for each agent, the filters that match its perceptible information. The agents receive perceptible information, execute our algorithm to determine their context, and decide which action to execute. Our algorithm is compared to a "classic" one, in which context identification uses no special data structure. Promising results are presented and discussed.
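The filter-tree idea in this related paper can be illustrated with a small sketch. The key property assumed here (consistent with the abstract, though the paper's exact formalization may differ) is that a child filter only adds requirements to its parent, so a failed parent lets the whole subtree be pruned; all names are illustrative.

```python
# Hedged sketch of context identification over a tree of "filters": each node
# requires a set of facts, and children refine (strictly extend) their parent's
# requirements, so a non-matching node prunes its entire subtree.
from dataclasses import dataclass, field

@dataclass
class Filter:
    name: str
    required: frozenset              # facts this context requires
    children: list = field(default_factory=list)

def matching_contexts(node, percepts):
    """Return the names of all filters satisfied by the agent's percepts,
    skipping subtrees whose root filter already fails."""
    if not node.required <= percepts:
        return []                    # prune: children only add requirements
    out = [node.name]
    for child in node.children:
        out.extend(matching_contexts(child, percepts))
    return out
```

Compared with testing every filter independently (the "classic" approach the abstract mentions), the tree lets the lookup cost scale with the number of matching branches rather than the total number of filters.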
On many occasions in our daily lives, we are willing to spontaneously interact or collaborate with nearby people to share ideas, chat, save time or money, or help each other. For that purpose, it is often necessary to identify and reason about shared context situations based on distributed sources of local context. So far, most work investigating mechanisms to support spontaneous discovery and interaction among mobile users has not thoroughly explored means of automatically detecting common Global Context States (GCS). In this paper, we discuss a distributed reasoning approach and algorithm that determines a distributed Global Context State among potentially interacting agents. We also evaluate the complexity of the algorithm through simulation and identify how its convergence is influenced by users' mobility patterns, the minimum number of contributing agents required to conclude a reasoning process, and the volatility of each agent's local context.
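The termination condition this abstract describes (conclude a shared state once enough nearby agents agree) can be sketched in a few lines. The representation of a local context as a single label, and the majority-style rule, are assumptions for illustration; the paper's reasoning algorithm is distributed and richer than this.

```python
# Hedged sketch: conclude a Global Context State (GCS) once at least
# `min_agents` agents report the same local context label.
from collections import Counter

def conclude_gcs(local_contexts: dict, min_agents: int):
    """local_contexts maps agent id -> its current local context label.
    Returns the shared state if enough agents agree, else None."""
    if not local_contexts:
        return None
    state, n = Counter(local_contexts.values()).most_common(1)[0]
    return state if n >= min_agents else None
```

Raising `min_agents` makes the conclusion more robust but, as the abstract's simulation results suggest, slows convergence under high mobility or volatile local contexts.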
ACADIA 2014 Proceedings
This paper presents research and experimentation with context-aware multi-agent based design systems to simulate and propose urban schemes that specifically utilize fields of differentiated intensity data in order to propose an infrastructure to support urban revitalization.
International Journal of Computer Vision, 2007
Applying multi-agent systems in real-world scenarios requires several essential research questions to be answered. Agents have to perceive their environment in order to take useful actions. In a multiagent system this results in a distributed perception of partial information, which has to be fused. Based on the perceived environment, the agents have to plan and coordinate their actions. The relation between action and perception, which forms the basis for planning, can be learned by perceiving the result of an action. In this paper we focus on these three major research questions. First, we investigate distributed world models that describe the aspects of the world relevant to the problem at hand; Distributed Perception Networks are introduced to fuse observations and obtain robust, efficient situation assessments. Second, we show how coordination graphs can be applied to multi-robot teams to allow for efficient coordination. Third, we present techniques for agent planning in uncertain environments, in which the agent receives only partial information (through its sensors) regarding the true state of the environment.
2015
Mobile robots are used in a variety of applications including manufacturing, logistics, and disaster recovery. In these domains, there is often a requirement to act autonomously but cooperatively. In this case, autonomous agents and multi-agent systems are useful approaches for representing and executing individual robot decision-making as well as robot coordination and cooperation. A central problem in multi-robot systems is how to store and organize the knowledge individual robots acquire from their sensors or from other robots, and how to create an adequate common representation of the environment (including robots' beliefs, goals, and commitments). In this paper, we present the architecture of a Distributed Common Information Model (dCIM), which can be used as a knowledge base for intelligent agents controlling mobile robots. We describe the concept and use of the so-called Information Integration Interface (3I); the architecture also covers methods for reliable communication. Additionall...
This paper presents an approach to multi-robot coordination based on both coordinated navigation and a task allocation method. An ad hoc agent-based architecture is defined in order to implement the robot control system in both simulation and real applications. Coordination of the multi-robot system is based on agent interaction and negotiation, and a communication infrastructure based on open web standards is provided. The system employs RFID technology to build a context-aware information system, which is the basis of the coordination strategies.
An Application science for …, 2004
2004
Situation Awareness requires teammates to share data with limited network bandwidth and computing power. These limitations require an intelligent method of selecting data for dissemination. Lockheed Martin Advanced Technology Laboratories (LM ATL) has created a selection process to track data and has extended this process for sending inferred and other relevant data, including semantic relationships, threat aggregates and enemy courses of action (ECOAs). With a wide variety of data to choose from, the selection process requires a rich set of criteria based on the needs of the teammates. These needs are captured in the context of each teammate. Context is made up of a role, current state, capabilities, and explicit or implied needs. The context is used to select data for dissemination in a threefold process: filtering, prioritizing, and sending the data. This paper will describe the evolution of the Shared Understanding technology developed at LM ATL.
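The threefold dissemination process this abstract names (filtering, prioritizing, sending) maps naturally onto a short sketch. The data model here (tagged items with a priority and a transmission cost, a teammate's needs as a tag set, a bandwidth budget) is an assumption for illustration, not LM ATL's Shared Understanding implementation.

```python
# Hedged sketch of need-driven data dissemination under limited bandwidth:
# filter by the teammate's context-derived needs, prioritize, then send
# greedily within a budget.

def disseminate(items, needs, budget):
    """items: list of (tag, priority, cost) tuples, e.g. tracks, threat
    aggregates, ECOAs; needs: set of tags the teammate's context implies;
    budget: maximum total cost to transmit. Returns the tags sent."""
    relevant = [it for it in items if it[0] in needs]       # 1. filtering
    relevant.sort(key=lambda it: it[1], reverse=True)       # 2. prioritizing
    sent, used = [], 0
    for tag, _prio, cost in relevant:                       # 3. sending
        if used + cost <= budget:
            sent.append(tag)
            used += cost
    return sent
```

A teammate's context (role, state, capabilities, needs) would determine the `needs` set; under this sketch, tightening the budget simply drops the lowest-priority relevant items first.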
2003
We propose a form of group communication, called channeled multicast, for active rooms and other scenarios featuring strict real-time requirements, inherently unreliable communication, and a continuously changing set of context-aware autonomous systems. In our approach, rooted in multi-agent and team programming, coordination and cooperation are supported via "social awareness" and overhearing. Overhearing also allows the collection of contextual information without interfering with running systems. We introduce the concept of implicit organization for coordinating agents, outline a general architecture, describe some of the protocols in use in our applications (interactive museums), and report on some initial experimental results.
IEEE Access
Autonomous Vehicles are becoming a reality in places with advanced infrastructure to support their operations. In crowded places, harsh environments, missions that require these vehicles to be aware of the context in which they are operating, and situations requiring continuous coordination with humans such as in disaster relief, Advanced-Vehicle Systems (AVSs) need to be better contextually aware. The vast literature referring to "context-aware systems" is still sparse, focusing on very limited forms of contextual awareness. It requires a structured approach to bring it together to truly realise contextual awareness in AVSs. This paper uses a Human-AVSs (HAVSs) lens to polarise the literature in a coherent form suitable for designing distributed HAVSs. We group the relevant literature into two categories: contextual awareness related to the vehicle infrastructure itself that enables AVSs to operate, and contextual awareness related to HAVSs. The former category focuses on the communication backbone for AVSs including ad-hoc networks, services, wireless communication, radio systems, and the cyber security and privacy challenges that arise in these contexts. The latter category covers recommender systems, which are used to coordinate the actions that sit at the interface of the human and AVSs, human-machine interaction issues, and the activity recognition systems as the enabling technology for recommender systems to operate autonomously. The structured analysis of the literature has identified a number of open research questions and opportunities for further research in this area.
