Networking is IPC
Proceedings of the 2008 ACM CoNEXT Conference (CoNEXT '08), 2008
https://doi.org/10.1145/1544012.1544079
Abstract
This position paper outlines a new network architecture that is based on the fundamental principle that networking is interprocess communication (IPC). In this model, application processes (APes) communicate via an IPC facility. The IPC processes that make up this facility provide a protocol that implements an IPC mechanism, and a protocol for managing distributed IPC (routing, security and other management tasks). Our architecture is recursive in that the IPC processes can themselves be APes requesting services from lower IPC facilities. We present the repeating patterns and structures in our architecture, and show how the proposed model would cope with the challenges faced by today's Internet (and that of the future).
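As a rough illustration of the recursion described in the abstract, here is a minimal sketch, assuming a purely illustrative in-memory representation: an IPC facility at each rank simply acts as an application process of the facility below it. The type and function names (ipc_facility, ipc_send) are invented for this example and are not the paper's API.

```c
/*
 * Hypothetical sketch of the recursive structure described in the abstract:
 * every IPC process is itself an application process (AP) of the layer below,
 * so an IPC facility at rank N simply uses the IPC facility at rank N-1.
 */
#include <stdio.h>

struct ipc_facility {
    int rank;                       /* depth in the recursion (0 = lowest layer) */
    struct ipc_facility *lower;     /* the facility this layer itself uses as an AP */
};

/* Deliver a message on behalf of an AP; each layer recurses into the one below. */
static void ipc_send(struct ipc_facility *f, const char *src, const char *dst,
                     const char *msg)
{
    printf("rank %d: carrying \"%s\" from %s to %s\n", f->rank, msg, src, dst);
    if (f->lower)                   /* this IPC process is itself an AP of a lower facility */
        ipc_send(f->lower, "ipc-process", "peer-ipc-process", msg);
    else
        printf("rank %d: physical medium reached, frame transmitted\n", f->rank);
}

int main(void)
{
    struct ipc_facility wire    = { .rank = 0, .lower = NULL };
    struct ipc_facility net     = { .rank = 1, .lower = &wire };
    struct ipc_facility app_ipc = { .rank = 2, .lower = &net };

    /* Two application processes communicate through the top-level IPC facility. */
    ipc_send(&app_ipc, "AP-A", "AP-B", "hello");
    return 0;
}
```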
Related papers
2020
Nowadays, with advances in computer science, distributed systems have attracted remarkable attention and have increasingly become an indispensable part of our lives. Massive-scale data processing, weather forecasting, industrial control systems, medical science, multi-tier architectures in enterprise applications, and aerospace, to name but a few, are cases in point where distributed systems play a notable role. Inter-Process Communication, or IPC for short, is the heart of all distributed systems; without IPC they cannot be built. Numerous IPC methods have been proposed so far and are utilized in diverse circumstances. Depending on the physical location of the communicating processes, IPC can be established either among multiple processes on the same computer or across several computers over a network. From the communication pattern's perspective, these IPC methods can be classified into two broad groups: shared memory and message passing. Although processes executed on different systems inevitably communicate through message passing, it is not true that processes running on the same computer necessarily employ shared memory; by way of illustration, pipes use a message-passing pattern to connect processes that are all carried out on the same system. The aim of this research is to present a categorization of some IPC methods, give a brief description of them, and assess their performance in terms of transfer rate by sending multiple files of different sizes between a client and a server. As we expected, the socket, as the most basic IPC mechanism, had desirable performance compared to the others, since it performs no extra operations on the input data to be sent; however, achieving capabilities such as platform independence and asynchronous communication requires additional layers, which hurt performance.
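As a rough sketch of the kind of throughput measurement this abstract describes, the following assumes a local stream socket standing in for the client/server pair; the chunk size, total volume, and timing method are arbitrary choices for illustration, not the paper's benchmark parameters.

```c
/*
 * Simplified stand-in for a socket transfer-rate measurement: push a fixed
 * volume of data through a local stream socket between a "client" (parent)
 * and a "server" (child) and time it.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>

#define CHUNK  (64 * 1024)          /* bytes per write (arbitrary)   */
#define CHUNKS 1024                 /* 64 MiB in total (arbitrary)   */

int main(void)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) { perror("socketpair"); return 1; }

    char *buf = malloc(CHUNK);
    memset(buf, 'x', CHUNK);

    if (fork() == 0) {              /* "server": drain everything it receives */
        close(sv[0]);
        char rbuf[CHUNK];
        while (read(sv[1], rbuf, sizeof rbuf) > 0)
            ;
        _exit(0);
    }

    close(sv[1]);
    struct timeval t0, t1;
    gettimeofday(&t0, NULL);
    for (int i = 0; i < CHUNKS; i++)          /* "client": send the payload */
        write(sv[0], buf, CHUNK);
    close(sv[0]);                             /* EOF lets the server finish */
    wait(NULL);                               /* wait until it has read everything */
    gettimeofday(&t1, NULL);

    double mib  = (double)CHUNK * CHUNKS / (1 << 20);
    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    printf("transferred %.1f MiB in %.3f s (%.1f MiB/s)\n", mib, secs, mib / secs);
    free(buf);
    return 0;
}
```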
IEEE Network, 2014
In recent years, many new Internet architectures have been proposed to solve shortcomings in the current Internet. Many of these new architectures merely extend the current TCP/IP architecture and hence do not address the fundamental cause of these problems. The Recursive InterNet Architecture (RINA) is a true new network architecture, developed from scratch and building on lessons learned in the past. RINA prototyping efforts have been ongoing since 2010, but a prototype upon which a commercial RINA implementation can be built has not been developed yet. The goal of the IRATI research project is to develop and evaluate such a prototype in Linux/OS. This article focuses on the software design required to implement a network stack in Linux/OS. We motivate the placement of, and communication between, the different software components in either kernel or user space. A first open source prototype of the IRATI implementation of RINA will be available in June 2014 for researchers, developers and early adopters.
XRDS: Crossroads, The ACM Magazine for Students, 2000
The Connector column will cover various computer networking concepts and Internet-specific technologies. Our first columnist is Shvetima Gulati, and we look forward to learning about this hot topic each month... so make sure you come back again and again to see what's new! If you would like an email reminder about the release of new articles on the Crossroads website, simply subscribe to xrds-announce, our announcements mailing list. Basic concepts: Most network applications are described as either client-side applications or server-side applications. A Web browser is an example of a client-side application. It receives its data from a remote program known as a Web server application. In general, a server-side application provides the services that a client-side application demands. The term server might also refer to the powerful hardware devices on which software-server applications are executed. To clarify this distinction, I will use the term server machine for the hardware platform and either server-side application or just server for a software program running on that platform. A protocol is a set of rules and conventions used to impose a standardized, structured language for the communication between multiple parties. For example, a protocol might define the order in which information is exchanged between two parties. In fact, a data exchange can only take place between two computers using the same protocol.
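To make the protocol idea concrete, here is a small, self-contained sketch in which the "protocol" is simply a fixed order of two messages exchanged between a client and a server over TCP on the loopback interface; the port number and message grammar are invented for the example.

```c
/*
 * Minimal client/server pair: the protocol is "client sends HELLO <name>,
 * server replies WELCOME <name>", exchanged in that fixed order.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

#define PORT 5555                        /* arbitrary loopback port for the demo */

int main(void)
{
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    int on = 1;
    setsockopt(srv, SOL_SOCKET, SO_REUSEADDR, &on, sizeof on);

    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(PORT) };
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    bind(srv, (struct sockaddr *)&addr, sizeof addr);
    listen(srv, 1);

    if (fork() == 0) {                   /* client-side application */
        close(srv);
        int c = socket(AF_INET, SOCK_STREAM, 0);
        connect(c, (struct sockaddr *)&addr, sizeof addr);
        dprintf(c, "HELLO crossroads\n");         /* step 1 of the protocol */
        char reply[128] = "";
        read(c, reply, sizeof reply - 1);         /* step 2: wait for the answer */
        printf("client got: %s", reply);
        close(c);
        _exit(0);
    }

    /* server-side application: accept one client and follow the same protocol */
    int conn = accept(srv, NULL, NULL);
    char req[128] = "", name[64] = "";
    read(conn, req, sizeof req - 1);
    sscanf(req, "HELLO %63s", name);
    dprintf(conn, "WELCOME %s\n", name);
    close(conn);
    close(srv);
    wait(NULL);
    return 0;
}
```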
IEICE Transactions on Communications, 2006
Today's Internet remains faithful to its original design, which dates back more than two decades. In spite of the tremendous diversity in users, as well as the sheer variety of applications it supports, it still provides a single, basic service offering: unicast packet delivery. While this legacy architecture seemed adequate until recently, it cannot support the requirements of newer services and applications demanded by the growing, and increasingly sophisticated, user population. The traditional way to break this impasse has been to use overlay networks to address individual requirements. This does not address the fundamental, underlying problem, i.e., the ossification of the Internet architecture. In this paper, we describe the design of a new Service Oriented Internet framework that enables the flexible and effective deployment of new applications and services. The framework we describe utilizes the existing IP network and presents the abstraction of a service layer that enables communication between service end-points and can better support requirements such as availability, robustness, mobility, etc., that are demanded by newly emerging applications and services.
Lecture Notes in Computer Science, 2012
Design principles play a central role in the architecture of the Internet, driving most engineering decisions at the conception and operational levels. This paper is based on the results of the EC Future Internet Architecture (FIArch) Group and identifies some of the design principles that we expect to govern the future architecture of the Internet. We believe that it may serve as a starting point and comparison for most research and development projects that target the so-called Future Internet Architecture. We define architecture as the set of functions, states, and objects/information together with their behavior, structure, composition, relationships and spatio-temporal distribution. The specification of the associated functional, object/informational and state models leads to an architectural model comprising a set of components (i.e., procedures, data structures, state machines) and the characterization of their interactions (i.e., messages, calls, events, etc.). Design principles refer to agreed structural and behavioral rules on how a designer/architect can best structure the various architectural components, and describe the fundamental, time-invariant laws underlying an engineered artefact (i.e., an object formed/produced by engineering). By "structural and behavioral rules" we refer to the set of commonly accepted and agreed rules serving to guide, control, or regulate a proper and acceptable structure of a system at design time and a proper and acceptable behavior of a system at run time. Time invariance refers to a system whose output does not depend explicitly on time (this time invariance is to be understood within a given set of initial conditions, allowing for technological change and paradigm shifts, economic constraints, etc.). We use the term data to refer to any organized group of bits, e.g., packets, traffic, information, etc., and service to refer to any action or set of actions performed by a provider in fulfillment of a request, which occurs through the Internet (i.e., by exploiting data communication, as defined below) with the aim of creating and/or providing added value or benefits to the requester(s). A resource is any fundamental element (i.e., physical, logical or abstract) that can be identified. This paper refers to communication as the exchange of data (including both control messages and data) between a physical or logical source and sink referred to as communication end-points; when end-points sit at the same physical or logical functional level, communication is qualified as "end-to-end". Security is a process of taking into account all major constraints, encompassing robustness, confidentiality and integrity. Robustness is the degree to which a system operates correctly in the presence of exceptional inputs or stressful environmental conditions. Confidentiality is the property that ensures that information is accessible only to those authorized to have access, and integrity includes both "data integrity" and "system integrity". The term complexity refers to architectural complexity (i.e., proportional to the needed number of components and interactions among components) and communication complexity (i.e., proportional to the needed number of messages for proper operation). Finally, scalability refers to the ability of a computational system to continue to function, without changes to the system, as its size or load grows.
Computer Communication Review, 1990
The current generation of protocol architectures, such as TCP/IP or the ISO suite, seems successful at meeting the demands of today's networks.
2009
With the increasingly widespread application of networks, distributed operating systems based on Linux clusters have been developing rapidly. This paper presents a kernel-level distributed interprocess communication system model with support for distributed process synchronization and communication. The model uses the System V interprocess communication programming interface and enhances it to provide functionality in distributed environments. A key feature of this model is the use of the semaphore interface to support distributed synchronization, with the implementation done at the Linux kernel level to reduce overhead. Finally, we realized the model introduced in this paper on a Red Hat Linux cluster, and test results show that it is a good solution for distributed process communication. Compared with other similar systems, the advantages of this system are that the implementation is done at the Linux kernel level to reduce overhead and that the System V IPC API is easy to extend. Users can easily call the application interface of the distributed operating system without considering any details of the network.
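For readers unfamiliar with the interface the model extends, the sketch below shows ordinary single-machine use of the System V semaphore calls (semget, semctl, semop) to guard a critical section; the paper's distributed, kernel-level extension of this API across a cluster is not reproduced here.

```c
/*
 * Plain System V semaphore usage on one machine: a single semaphore
 * initialised to 1 acts as a mutex between a parent and child process.
 */
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/sem.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* One semaphore, initialised to 1 (a simple mutex). */
    int sem = semget(IPC_PRIVATE, 1, IPC_CREAT | 0600);
    union semun { int val; } arg = { .val = 1 };   /* caller-defined, as Linux requires */
    semctl(sem, 0, SETVAL, arg);

    struct sembuf lock   = { .sem_num = 0, .sem_op = -1, .sem_flg = 0 };
    struct sembuf unlock = { .sem_num = 0, .sem_op = +1, .sem_flg = 0 };

    if (fork() == 0) {                    /* child and parent contend for the lock */
        semop(sem, &lock, 1);
        printf("child:  in critical section\n");
        sleep(1);
        semop(sem, &unlock, 1);
        _exit(0);
    }

    semop(sem, &lock, 1);
    printf("parent: in critical section\n");
    sleep(1);
    semop(sem, &unlock, 1);

    wait(NULL);
    semctl(sem, 0, IPC_RMID);             /* remove the semaphore set */
    return 0;
}
```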
A large fraction of today's Internet applications are internally publish/subscribe in nature; the current architecture makes it cumbersome and inept to support them. In essence, supporting efficient publish/subscribe requires data-oriented naming, efficient multicast, and in-network caching. Deployment of native IP-based multicast has failed, and overlay-based multicast systems are inherently inefficient. We surmise that scalable and efficient publish/subscribe will require substantial architectural changes, such as moving from endpoint-oriented systems to information-centric architectures.
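A toy sketch of the three ingredients named above, under the assumption that an in-memory topic table can stand in for the network: subscribers register a data name rather than an endpoint address, publications fan out to all subscribers, and the last item per name is cached so a late joiner is served without contacting the publisher. All structures are invented for illustration.

```c
/*
 * Data-oriented naming, "multicast" fan-out, and a one-item in-network cache,
 * all modelled in memory for illustration only.
 */
#include <stdio.h>

#define MAX_SUBS 8

struct topic {
    const char *name;                 /* data-oriented name, not a host address */
    const char *cached;               /* in-network cache: last published item  */
    const char *subs[MAX_SUBS];       /* subscriber identifiers                 */
    int nsubs;
};

static void subscribe(struct topic *t, const char *who)
{
    t->subs[t->nsubs++] = who;
    if (t->cached)                    /* serve a late joiner from the cache */
        printf("%s <- %s (from cache): %s\n", who, t->name, t->cached);
}

static void publish(struct topic *t, const char *item)
{
    t->cached = item;                 /* update the cache */
    for (int i = 0; i < t->nsubs; i++)        /* fan out to all subscribers */
        printf("%s <- %s: %s\n", t->subs[i], t->name, item);
}

int main(void)
{
    struct topic weather = { .name = "/weather/boston", .cached = NULL, .nsubs = 0 };
    subscribe(&weather, "alice");
    publish(&weather, "snow");
    subscribe(&weather, "bob");       /* joins late, gets the cached item */
    publish(&weather, "sun");
    return 0;
}
```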
Proceedings of the 2018 Workshop on Networking for Emerging Applications and Technologies, 2018
Future networking applications place demands on networking services that become increasingly difficult to address using existing internetworking technology. This paper presents a new framework and protocol that is designed to meet this challenge, BPP (Big Packet Protocol). BPP is intended as an enabler for a new generation of networking services that depend on the ability to provide precise service level guarantees while facilitating operations. In addition, BPP allows users to define and customize networking behavior from the network edge for their flows in isolation from other users and without needing to rely on lengthy vendor or network operator product cycles.
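The abstract does not give BPP's encoding, so the following is a purely hypothetical sketch of the general idea it describes: a packet that carries per-flow directives (here, a made-up latency budget) that a supporting node can inspect and act on at each hop. It should not be read as the actual BPP wire format.

```c
/*
 * Hypothetical packet layout: a small block of directives travels with the
 * payload, and each node charges its queueing delay against a latency budget.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct directive {
    uint8_t  kind;            /* hypothetical: 1 = latency budget */
    uint32_t value;           /* e.g. remaining budget in microseconds */
};

struct big_packet {
    uint8_t  n_directives;
    struct directive dirs[4]; /* per-flow guidance carried in the packet itself */
    uint16_t payload_len;
    uint8_t  payload[64];
};

/* A node honouring the (hypothetical) latency-budget directive. */
static int node_forward(struct big_packet *p, uint32_t queue_delay_us)
{
    for (int i = 0; i < p->n_directives; i++) {
        if (p->dirs[i].kind == 1) {
            if (p->dirs[i].value < queue_delay_us)
                return -1;                        /* budget exhausted: drop/report */
            p->dirs[i].value -= queue_delay_us;   /* charge this hop's delay */
        }
    }
    return 0;                                     /* forward as usual */
}

int main(void)
{
    struct big_packet p = { .n_directives = 1,
                            .dirs = { { .kind = 1, .value = 500 } },
                            .payload_len = 5 };
    memcpy(p.payload, "hello", 5);

    printf("hop 1: %s\n", node_forward(&p, 200) == 0 ? "forwarded" : "dropped");
    printf("hop 2: %s\n", node_forward(&p, 400) == 0 ? "forwarded" : "dropped");
    return 0;
}
```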
System Sciences, 2000. …, 2000
In recent years, business on the Internet has increased exponentially. Consequently, the deployment and management of business applications on the Internet is becoming more and more complex, which requires the development of new Internet architectures suitable for running these business applications efficiently. In this paper, we present and evaluate several computing models for application service providers and introduce the server-based model and the corresponding Internet architecture. Two case studies, which use the proposed architecture for application deployment, are also described in the paper.
