Elba Flow Table Synchronization Across Network Switches
Abstract
The Aruba CX 10000 Series Switch is a data center switch that combines high-performance switching with built-in security and services, powered by AMD Pensando technology. Its architecture embeds two Pensando Distributed Services Modules (DSMs) per switch, forming a Distributed Services Switch (DSS). The switch is well suited to data center and edge deployments that require high-performance switching, built-in firewall and telemetry capabilities, and reduced appliance sprawl through services integrated directly into the switch. One limitation of the CX 10000 is that flow-table synchronization between a switch pair is handled by VSX, a proprietary solution, and takes place over the Inter-Switch Link (ISL); this poses challenges in non-VSX environments, particularly standards-based EVPN multihoming. This paper addresses that issue by presenting an alternative method for flow synchronization that operates independently of VSX, enabling wider applicability in modern data center architectures.
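To make the idea of VSX-independent flow synchronization concrete, the minimal Python sketch below illustrates one possible approach, assuming a switch can push firewall flow-table entries to its EVPN multihoming peer over an ordinary routed UDP channel rather than over the ISL. The message format, UDP port, peer address, and field names are illustrative assumptions for this sketch, not the mechanism specified in this paper.

# Illustrative sketch only: synchronizing firewall flow-table entries between
# two peer switches over a routed UDP channel instead of the VSX ISL.
# The port number, peer address, JSON encoding, and field names are assumptions.
import json
import socket
from dataclasses import dataclass, asdict

SYNC_PORT = 4790              # hypothetical UDP port for flow-sync messages
PEER_LOOPBACK = "10.0.0.2"    # hypothetical loopback address of the peer switch

@dataclass
class FlowEntry:
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: int             # IP protocol number, e.g. 6 for TCP
    state: str                # e.g. "ESTABLISHED"

def send_flow_update(entry: FlowEntry, peer: str = PEER_LOOPBACK) -> None:
    """Push one flow-table entry to the peer so it holds identical state."""
    msg = json.dumps({"op": "add", "flow": asdict(entry)}).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(msg, (peer, SYNC_PORT))

def receive_flow_updates(flow_table: dict) -> None:
    """Listen for sync messages from the peer and install them locally."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("0.0.0.0", SYNC_PORT))
        while True:
            data, _ = sock.recvfrom(65535)
            update = json.loads(data)
            f = update["flow"]
            key = (f["src_ip"], f["dst_ip"], f["src_port"], f["dst_port"], f["protocol"])
            if update["op"] == "add":
                flow_table[key] = f

if __name__ == "__main__":
    # Example: advertise a newly established TCP flow to the peer switch.
    send_flow_update(FlowEntry("192.0.2.10", "198.51.100.5", 49152, 443, 6, "ESTABLISHED"))

In a real deployment such a channel could run over the routed underlay (for example a tunnel between the multihoming peers), removing the dependency on a VSX ISL; reliability and ordering of the sync messages would also need to be addressed.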
Related papers
Server-based architectures have recently generated considerable interest. They provide an effective means to support composability, i.e., the integration of diverse components while guaranteeing the required service levels to each one. While common in CPU scheduling, support for server-oriented architectures in the domain of real-time communication protocols is more limited, due to distribution and to the specific medium access control and queue-management policies within network controllers, network devices and protocol stacks. Consequently, server-based traffic scheduling is either not supported or supported in a limited and inefficient way, e.g., only basic servers, no hierarchical composition, static configuration. To overcome such limitations, the authors recently proposed the Server-SE protocol, which supports unconstrained server-based traffic scheduling over switched Ethernet, using the FTT-SE protocol and common off-the-shelf (COTS) switches as platform. This paper extends that work by bringing the servers inside a customized Ethernet switch. This option provides a high level of determinism, robustness and flexibility, being particularly suited to open systems since servers can easily be added, composed, adapted and removed at run-time. The proposal is validated with a prototype implementation and experimental results that show its effectiveness in enforcing correct resource reservations.
Computer Communication Review, 2009
To be agile and cost effective, data centers should allow dynamic resource allocation across large server pools. In particular, the data center network should enable any server to be assigned to any service. To meet these goals, we present VL2, a practical network architecture that scales to support huge data centers with uniform high capacity between servers, performance isolation between services, and Ethernet layer-2 semantics. VL2 uses (1) flat addressing to allow service instances to be placed anywhere in the network, (2) Valiant Load Balancing to spread traffic uniformly across network paths, and (3) end-system based address resolution to scale to large server pools, without introducing complexity to the network control plane. VL2's design is driven by detailed measurements of traffic and fault data from a large operational cloud service provider. VL2's implementation leverages proven network technologies, already available at low cost in high-speed hardware implementations, to build a scalable and reliable network architecture. As a result, VL2 networks can be deployed today, and we have built a working prototype. We evaluate the merits of the VL2 design using measurement, analysis, and experiments. Our VL2 prototype shuffles 2.7 TB of data among 75 servers in 395 seconds, sustaining a rate that is 94% of the maximum possible.
2019
Building scalable data centers, and network devices that fit within these data centers, has become increasingly hard. With modern switches pushing at the boundary of manufacturing feasibility, being able to build suitable, and scalable network fabrics becomes of critical importance. We introduce Stardust, a fabric architecture for data center scale networks, inspired by network-switch systems. Stardust combines packet switches at the edge and disaggregated cell switches at the network fabric, using scheduled traffic. Stardust is a distributed solution that attends to the scale limitations of network-switch design, while also offering improved performance and power savings compared with traditional solutions. With ever-increasing networking requirements, Stardust predicts the elimination of packet switches, replaced by cell switches in the network, and smart network hardware at the hosts.
2009
Network virtualization has long been a goal of the network research community. With it, multiple isolated logical networks, each with potentially different addressing and forwarding mechanisms, can share the same physical infrastructure. Typically this is achieved by taking advantage of the flexibility of software (e.g. [20, 23]) or by duplicating components in (often specialized) hardware [19]. In this paper we present a new approach to switch virtualization in which the same hardware forwarding plane can be shared among multiple logical networks, each with distinct forwarding logic. We use this switch-level virtualization to build a research platform which allows multiple network experiments to run side-by-side with production traffic while still providing isolation and hardware forwarding speeds. We also show that this approach is compatible with commodity switching chipsets and does not require the use of programmable hardware such as FPGAs or network processors. We build and deploy this virtualization platform on our own production network and demonstrate its use in practice by running five experiments simultaneously within a campus network. Further, we quantify the overhead of our approach and evaluate the completeness of the isolation between virtual slices.
Safety and security are two reliability properties of a system. A "Safe" system provides protection against errors of trusted users, while a "Secure" system protects against errors introduced by untrusted users. There is considerable overlap between mechanisms to support each property.
Intelligent Computing. SAI 2018. Advances in Intelligent Systems and Computing, 2018
Developers of Software Defined Networks (SDN) face a lack of, or difficulty in obtaining, a physical environment in which to test their inventions and developments, which drives them to use virtual environments for their experiments. This work addresses the differences between the SDN virtual environment and physical SDN switches, with the aim of producing a more realistic SDN virtual environment. Consequently, this paper presents a precise performance evaluation and comparison of an off-the-shelf SDN device, the HP Aruba 3810M, with Open Virtual Switch (OVS) inside the Mininet emulator. The work examines the variability of path delay, throughput, packet loss and jitter of SDN for different packet window sizes and network background loads. Our experiments consider a number of protocols, such as ICMP, TCP and UDP. In order to evaluate network latency accurately, a new asynchronous latency measurement technique is proposed; the developed technique shows more precise results in comparison to other techniques. Furthermore, the work focuses on extracting the flow-setup latency caused by the external SDN controller when setting flow rules into the switch. The comparison of results shows a dissimilarity in the behaviour of SDN hardware and the Mininet emulator: the SDN hardware exhibited higher latency and flow-setup time due to additional sources of delay, which the emulator does not possess.
2010
Systems are steadily increasing in quantity, size, complexity and heterogeneity, with growing requirements for arbitrary arrival patterns and guaranteed QoS. One of the networking protocols that is becoming more common in such systems is Ethernet and its real-time Ethernet variants. However, they hardly support all the referred requirements in an efficient manner, since they either favour determinism or throughput, but not both. A potential solution recently proposed by the authors is the Server-SE protocol, which uses servers to confine traffic associated to specific applications or subsystems. Such an approach is dynamically reconfigurable and adaptive, being more bandwidth efficient while providing composability in the time domain. This paper proposes integrating the servers inside the Ethernet switch, boosting both the flexibility and the robustness of Server-SE and allowing, for example, the seamless connection of any Ethernet node. The switch is an FTT-enabled Ethernet switch, and the paper discusses two specific ways of integrating the servers, namely in software or in hardware. These options are described and compared analytically and experimentally: the former favours flexibility in the servers' design and management while the latter provides lower latency.
2010
Today's data centers offer tremendous aggregate bandwidth to clusters of tens of thousands of machines. However, because of limited port densities in even the highest-end switches, data center topologies typically consist of multi-rooted trees with many equal-cost paths between any given pair of hosts. Existing IP multipathing protocols usually rely on per-flow static hashing and can cause substantial bandwidth losses due to long-term collisions.
The OpenFlow protocol allows production networking environments, such as campus networks, metropolitan networks or R&D networks, to be used as experimental infrastructure hosting future internet architectures, software and protocols in isolation from the production traffic. During rollout, one practical problem arises with legacy switches that do not support the OpenFlow protocol and need to be replaced, upgraded or worked around by means of costly network re-engineering. This poster proposes a new OpenFlow datapath which is able to interact with non-OpenFlow legacy equipment, creating a new approach to hybrid OpenFlow networks.
2020
Integrating optical circuit switches in data centers is an ongoing research challenge. In recent years, state-of-the-art solutions have introduced hybrid packet/circuit architectures for different optical circuit switch technologies, control techniques, and traffic re-routing methods. These solutions are based on separated packet and circuit planes that cannot utilize an optical circuit with flows that do not arrive from, or are not delivered to, switches directly connected to the circuit's end-points. Moreover, current SDN-based elephant flow re-routing methods require a forwarding rule for each flow, which raises scalability issues. In this paper, we present C-Share, a scalable SDN-based circuit sharing solution for data center networks. C-Share inherently enables elephant flows to share optical circuits by exploiting a flat top-of-rack tier network topology. C-Share is based on a scalable and decoupled SDN-based elephant flow re-routing method comprised of elephant flow detection, tagging and i...
