PRICING MECHANISMS IN THE CONTROL OF LINEAR DYNAMIC SYSTEMS
A necessary and sufficient condition is given for the existence of prices that induce coordination in coupled linear dynamic systems. The condition is shown to be equivalent to the servomechanism controllability of an adjoint system.
For the engineering, operation and administration of switching systems, it is desirable to be able to quickly and accurately estimate the grade of service for different traffic loadings and for hypothetical scenarios and thus to be able to answer what-if questions. This paper presents a probabilistic model that meets this need for a class of overload controls in distributed switching systems. The model is modular and can capture the salient features of a variety of throttle and monitor designs. The model accurately calculates the probability a call is blocked given hypothetical traffic mixes, customer retry probabilities, load imbalances and load variations during the busy hour.
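As a toy illustration of the throttle/monitor idea, the sketch below simulates a simple token-based call throttle with customer retries and estimates the blocking probability by Monte Carlo. It is not the paper's model: the rates, retry rule, and all names are assumptions made for the example.

```python
# Toy Monte Carlo sketch of a token-based call throttle with customer retries.
# Illustrative only: the rates, retry rule, and names below are assumptions,
# not the model in the paper.
import heapq
import random

def blocking_probability(arrival_rate=20.0, token_rate=15.0, bucket_size=5.0,
                         retry_prob=0.5, retry_delay=10.0,
                         horizon=3600.0, seed=1):
    random.seed(seed)
    # Pre-generate fresh (first-attempt) arrivals as a Poisson process.
    attempts, t = [], 0.0
    while True:
        t += random.expovariate(arrival_rate)
        if t > horizon:
            break
        heapq.heappush(attempts, t)

    tokens, last, offered, blocked = bucket_size, 0.0, 0, 0
    while attempts:
        now = heapq.heappop(attempts)
        # Refill tokens at a constant rate, capped at the bucket size.
        tokens = min(bucket_size, tokens + (now - last) * token_rate)
        last = now
        offered += 1
        if tokens >= 1.0:
            tokens -= 1.0                      # attempt admitted
        else:
            blocked += 1                       # attempt throttled
            if random.random() < retry_prob and now + retry_delay < horizon:
                heapq.heappush(attempts, now + retry_delay)
    return blocked / offered

if __name__ == "__main__":
    print(f"estimated blocking probability: {blocking_probability():.3f}")
```

Varying the offered load, retry probability, or token rate in such a simulation is one crude way to explore the kind of what-if questions the paper's analytic model is designed to answer quickly.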
This volume is recommended to researchers and practitioners who are interested in the state of the art in performance analysis of queuing networks with blocking.
Proceedings of the 2015 Internet Measurement Conference, 2015
There is a striking volume of World Wide Web activity on IPv6 today. In early 2015, one large Content Distribution Network handles 50 billion IPv6 requests per day from hundreds of millions of IPv6 client addresses; billions of unique client addresses are observed per month. Address counts, however, obscure the number of hosts with IPv6 connectivity to the global Internet. There are numerous address assignment and subnetting options in use; privacy addresses and dynamic subnet pools significantly inflate the number of active IPv6 addresses. As the IPv6 address space is vast, it is infeasible to comprehensively probe every possible unicast IPv6 address. Thus, to survey the characteristics of IPv6 addressing, we perform a year-long passive measurement study, analyzing the IPv6 addresses gleaned from activity logs for all clients accessing a global CDN. The goal of our work is to develop flexible classification and measurement methods for IPv6, motivated by the fact that its addresses are not merely more numerous; they are different in kind. We introduce the notion of classifying addresses and prefixes in two ways: (1) temporally, according to their instances of activity, to discern which addresses can be considered stable; (2) spatially, according to the density or sparsity of aggregates in which active addresses reside. We present measurement and classification results numerically and visually that: provide details on IPv6 address use and structure in global operation across the past year; establish the efficacy of our classification methods; and demonstrate that such classification can clarify dimensions of the Internet that otherwise appear quite blurred by current IPv6 addressing practices.
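To make the two classification dimensions concrete, here is a greatly simplified sketch, not the paper's method: it bins observed (address, timestamp) pairs into daily windows to flag addresses seen across many days as "stable", and counts distinct active addresses per /64 as a crude density measure. The /64 aggregate, the thresholds, and all names are assumptions.

```python
# Simplified illustration of temporal ("stable vs. transient") and spatial
# (per-prefix density) classification of observed IPv6 client addresses.
# The /64 aggregate, thresholds, and names are assumptions, not the paper's
# exact method.
import ipaddress
from collections import defaultdict

DAY = 86400  # seconds

def classify(observations, stable_days=7):
    """observations: iterable of (ipv6_address_string, unix_timestamp)."""
    days_seen = defaultdict(set)       # address -> set of day indices
    per_prefix = defaultdict(set)      # /64 prefix -> set of active addresses
    for addr, ts in observations:
        ip = ipaddress.IPv6Address(addr)
        days_seen[ip].add(int(ts // DAY))
        prefix = ipaddress.IPv6Network((int(ip) >> 64 << 64, 64))
        per_prefix[prefix].add(ip)

    stable = {ip for ip, days in days_seen.items() if len(days) >= stable_days}
    density = {prefix: len(addrs) for prefix, addrs in per_prefix.items()}
    return stable, density

obs = [("2001:db8::1", 0), ("2001:db8::1", 8 * DAY), ("2001:db8::2", 3 * DAY)]
stable, density = classify(obs, stable_days=2)
print(stable, density)
```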
This article summarises a 2.5 day long Dagstuhl seminar on Global Measurements: Practice and Experience held in January 2016. This seminar was a follow-up of the seminar on Global Measurement Frameworks held in 2013, which focused on the development of global Internet measurement platforms and associated metrics. The second seminar aimed at discussing the practical experience gained with building these global Internet measurement platforms. It brought together people who are actively involved in the design and maintenance of global Internet measurement platforms and who do research on the data delivered by such platforms. Researchers in this seminar have used data derived from global Internet measurement platforms in order to manage networks or services or as input for regulatory decisions. The entire set of presentations delivered during the seminar is made publicly available at [1].
Proceedings of the ACM Internet Measurement Conference, 2020
This work presents a large-scale, longitudinal measurement study on the adoption of application updates, enabling continuous reporting of potentially vulnerable software populations worldwide. Studying the factors impacting software currentness, we investigate and discuss the impact of the platform and its updating strategies on software currentness, device lock-in effects, as well as user behavior. Utilizing HTTP User-Agent strings from end-hosts, we introduce techniques to extract application and operating system information from myriad structures, infer version release dates of applications, and measure population adoption, at a global scale. To deal with loosely structured User-Agent data, we develop a semi-supervised method that can reliably extract application and version information for some 87% of requests served by a major CDN every day. Using this methodology, we track release and adoption dynamics of some 35,000 applications. Analyzing over three years of CDN logs, we sho...
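As a greatly simplified illustration of the kind of extraction involved, and not the paper's semi-supervised method, a single regex can pull product/version tokens from a well-formed User-Agent string; real User-Agent data is far messier, and the pattern and example string below are assumptions.

```python
# Minimal regex-based sketch of pulling product/version tokens out of an HTTP
# User-Agent string. Illustration only; the paper's semi-supervised extraction
# is more involved, and the pattern below is our assumption.
import re

TOKEN = re.compile(r"([A-Za-z][\w .-]*?)/(\d+(?:\.\d+)*)")

def extract(user_agent: str):
    """Return a list of (product, version) pairs found in the string."""
    return TOKEN.findall(user_agent)

ua = "ExampleApp/4.2.1 (Linux; Android 11) AppleWebKit/537.36"
print(extract(ua))
# [('ExampleApp', '4.2.1'), ('AppleWebKit', '537.36')]
```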
Proceedings of the 2017 Internet Measurement Conference, 2017
The Border Gateway Protocol (BGP) has been used for decades as the de facto protocol to exchange reachability information among networks in the Internet. However, little is known about how this protocol is used to restrict reachability to selected destinations, e.g., that are under attack. While such a feature, BGP blackholing, has been available for some time, we lack a systematic study of its Internet-wide adoption, practices, and network efficacy, as well as the profile of blackholed destinations. In this paper, we develop and evaluate a methodology to automatically detect BGP blackholing activity in the wild. We apply our method to both public and private BGP datasets. We find that hundreds of networks, including large transit providers, as well as about 50 Internet exchange points (IXPs) offer blackholing service to their customers, peers, and members. Between 2014 and 2017, the number of blackholed prefixes increased by a factor of 6, peaking at 5K concurrently blackholed prefixes by up to 400 Autonomous Systems. We assess the effect of blackholing on the data plane using both targeted active measurements as well as passive datasets, finding that blackholing is indeed highly effective in dropping traffic before it reaches its destination, though it also discards legitimate traffic. We augment our findings with an analysis of the target IP addresses of blackholing. Our tools and insights are relevant for operators considering offering or using BGP blackholing services as well as for researchers studying DDoS mitigation in the Internet.
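One common signal for blackholing, though not necessarily the full methodology of the paper, is an announcement carrying the well-known BLACKHOLE community 65535:666 (RFC 7999) or an operator-specific blackhole community, typically for a very specific prefix such as a /32. A minimal sketch of flagging such updates follows; the operator community value and the update format are assumptions.

```python
# Minimal sketch: flag BGP updates that look like blackholing requests, based on
# the well-known BLACKHOLE community (RFC 7999) or operator-specific communities
# and on host-route prefix lengths. The operator community value and the update
# format here are assumptions for illustration.
import ipaddress

BLACKHOLE_COMMUNITIES = {"65535:666"}   # RFC 7999 well-known value
OPERATOR_COMMUNITIES = {"64500:9999"}   # hypothetical provider-specific tag

def looks_like_blackhole(update):
    """update: dict with 'prefix' (str) and 'communities' (list of 'ASN:value')."""
    communities = set(update.get("communities", []))
    tagged = bool(communities & (BLACKHOLE_COMMUNITIES | OPERATOR_COMMUNITIES))
    net = ipaddress.ip_network(update["prefix"])
    host_route = net.prefixlen == net.max_prefixlen  # /32 (IPv4) or /128 (IPv6)
    return tagged and host_route

print(looks_like_blackhole({"prefix": "192.0.2.1/32",
                            "communities": ["65535:666", "64500:100"]}))  # True
```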
IEEE Global Telecommunications Conference and Exhibition 'Communications Technology for the 1990s and Beyond', 1989
The Digital Private Network Switching System (DPNSS) layer 2 protocol specification does not explicitly indicate methods of layer 2 overload control or link stability verification. This paper describes how AT&T's 5ESS-PRX Switch has implemented these features. The key to preventing and controlling overload is to explicitly ignore some received retransmissions of incoming frames. This strategy reliably handles the range of frame transmission strategies that are consistent with the DPNSS protocol. Periodic transmission of specialized layer 2 test frames over the DPNSS link assures link stability.
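As a rough illustration of the "ignore some received retransmissions" idea only: the DPNSS frame format, sequence numbering, and the definition of overload are not specified here, and everything in the sketch below is an illustrative assumption rather than the switch's actual implementation.

```python
# Rough sketch of dropping retransmitted layer-2 frames while the receiver is
# overloaded. DPNSS specifics (frame format, sequence numbering, what counts as
# overload) are not modeled; everything below is an illustrative assumption.
class FrameReceiver:
    def __init__(self, queue_limit=100):
        self.queue = []
        self.queue_limit = queue_limit
        self.seen_sequence_numbers = set()

    def overloaded(self):
        return len(self.queue) >= self.queue_limit

    def receive(self, seq_no, payload):
        """Return True if the frame is queued for processing, False if ignored."""
        if self.overloaded() and seq_no in self.seen_sequence_numbers:
            return False              # ignore the retransmission under overload
        self.seen_sequence_numbers.add(seq_no)
        self.queue.append((seq_no, payload))
        return True
```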
We compare Web traffic characteristics of mobile- versus fixed-access end-hosts, where herein the term "mobile" refers to access via cell towers, using for example the 3G/UMTS standard, and the term "fixed" includes Wi-Fi access. It is well known that connection speeds are in general slower over mobile-access networks, and also that there is often higher packet loss. We were curious whether this leads mobile-access users to have smaller connections. We examined the distribution of the number of bytes per connection, and packet loss, from a sampling of logs from servers of Akamai Technologies. We obtained 149 million connections, across 57 countries. The mean bytes per connection was typically larger for fixed access: for two-thirds of the countries, it was at least one-third larger. Regarding distributions, we found that the difference between the bytes per connection for mobile- versus fixed-access, as well as the packet loss, was statistically significant for each of the countries; however, the visual difference in plots is typically small. For some countries, mobile access had the larger connections. As expected, mobile access often had higher loss than fixed access, but the reverse pertained for some countries. Typically packet loss increased during the busy period of the day, when mobile access had a larger increase. Comparing our results from 2010 to those from 2009 for the same time period, we found that connections have become a bit smaller.
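A minimal sketch of this kind of distributional comparison, assuming per-connection byte counts are available as two arrays: the two-sample Kolmogorov–Smirnov test used here is one standard choice of significance test, not necessarily the test used in the paper, and the data are synthetic placeholders.

```python
# Sketch: compare bytes-per-connection for mobile- vs. fixed-access samples.
# The data here are synthetic placeholders; the KS test is one standard choice
# of two-sample test, not necessarily the one used in the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mobile_bytes = rng.lognormal(mean=9.5, sigma=1.5, size=50_000)   # synthetic
fixed_bytes = rng.lognormal(mean=10.0, sigma=1.5, size=50_000)   # synthetic

print("mean mobile:", mobile_bytes.mean())
print("mean fixed :", fixed_bytes.mean())
statistic, p_value = stats.ks_2samp(mobile_bytes, fixed_bytes)
print(f"KS statistic={statistic:.4f}, p-value={p_value:.2g}")
```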
Probability in the Engineering and Informational Sciences, 1995
Motivated by extreme-value engineering in service systems, we develop and evaluate simple approximations for the distributions of maximum values of queueing processes over large time intervals. We provide approximations for several different processes, such as the waiting times of successive customers, the remaining workload at an arbitrary time, and the queue length at an arbitrary time, in a variety of models. All our approximations are based on extreme-value limit theorems. Our first approach is to approximate the queueing process by one-dimensional reflected Brownian motion (RBM). We then apply the extreme-value limit for RBM, which we derive here. Our second approach starts from exponential asymptotics for the tail of the steady-state distribution. We obtain an approximation by relating the given process to an associated sequence of i.i.d. random variables with the same asymptotic exponential tail. We use estimates of the asymptotic variance of the queueing process to determine ...
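To make the object of study concrete, a small simulation sketch (not the paper's approximation): it simulates reflected Brownian motion with negative drift on a grid and records the maximum over [0, T]; repeated runs give an empirical distribution against which an extreme-value approximation could be checked. The drift, variance, horizon, and step size are arbitrary choices.

```python
# Simulation sketch: empirical distribution of the maximum of one-dimensional
# reflected Brownian motion (RBM) with negative drift over [0, T]. Parameters
# are arbitrary; this only illustrates the quantity being approximated.
import numpy as np

def rbm_maximum(drift=-1.0, sigma=1.0, horizon=1000.0, dt=0.01, rng=None):
    rng = rng or np.random.default_rng()
    steps = int(horizon / dt)
    increments = drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(steps)
    brownian = np.cumsum(increments)
    # Skorokhod reflection at zero: R(t) = B(t) - min(0, min_{s<=t} B(s)).
    reflected = brownian - np.minimum(np.minimum.accumulate(brownian), 0.0)
    return reflected.max()

rng = np.random.default_rng(42)
maxima = np.array([rbm_maximum(rng=rng) for _ in range(200)])
print("empirical mean and 95th percentile of the maximum:",
      maxima.mean(), np.quantile(maxima, 0.95))
```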
Journal of Applied Mathematics and Stochastic Analysis, 1994
An open-loop window flow-control scheme regulates the flow into a system by allowing at most a specified window size W of flow in any interval of length L. The sliding window considers all subintervals of length L, while the jumping window considers consecutive disjoint intervals of length L. To better understand how these window control schemes perform for stationary sources, we describe for a large class of stochastic input processes the asymptotic behavior of the maximum flow in such window intervals over a time interval [0,T] as T and L get large, with T substantially bigger than L. We use strong approximations to show that when T ≫ L ≫ log T an invariance principle holds, so that the asymptotic behavior depends on the stochastic input process only via its rate and asymptotic variability parameters. In considerable generality, the sliding and jumping windows are asymptotically equivalent. We also develop an approximate relation between the two maximum window sizes. We apply the asympt...
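To illustrate the two window definitions on discretized data, a sketch under the assumption that the input has been binned into per-slot counts (not the asymptotic analysis itself; data and names are placeholders):

```python
# Sketch: maximum flow observed in any window of L slots for a discretized
# input given as per-slot counts. "Sliding" looks at every window of L
# consecutive slots; "jumping" looks only at consecutive disjoint windows.
def max_sliding_window(counts, window_len):
    current = sum(counts[:window_len])
    best = current
    for i in range(window_len, len(counts)):
        current += counts[i] - counts[i - window_len]   # slide the window by one slot
        best = max(best, current)
    return best

def max_jumping_window(counts, window_len):
    return max(sum(counts[i:i + window_len])
               for i in range(0, len(counts), window_len))

counts = [0, 0, 3, 5, 4, 0, 0, 0]
print(max_sliding_window(counts, 3))   # 12, from the window [3, 5, 4]
print(max_jumping_window(counts, 3))   # 9, from the disjoint window [5, 4, 0]
```

The sliding maximum always dominates the jumping maximum, since every jumping window is also a sliding window; the paper's asymptotic result says the two become equivalent in the regime it studies.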
ATM switches are now being designed to allow connections to be partitioned into priority classes, with packets being emitted for higher priority classes before packets are emitted for lower priority classes. Accordingly, allocation of network resources based on different priority levels is becoming a realistic possibility. Thus we need new methods to do connection admission control and capacity planning that take account of the priority structure. In this paper we show that the notion of effective bandwidths can be used for these purposes when appropriately extended. The key is to have admissibility of a set of connections determined by a linear constraint for each priority level, involving a performance criterion for each priority level. For this purpose, connections are assigned more than one effective bandwidth, one for its own priority level and one for each lower priority level. Candidate effective bandwidths for each priority level can be determined by using previous methods associated with the first-in first-out discipline, including the method based on large-buffer asymptotics. The proposed effective-bandwidth structure makes it possible to apply product-form stochastic loss network models to do dimensioning.
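A schematic version of the admission rule described above, as a sketch rather than the paper's exact formulation: each connection carries an effective bandwidth for its own priority level and for each lower level, and a set of connections is admissible if, at every level, the sum of the relevant effective bandwidths satisfies that level's linear constraint. The priority convention, capacities, and numbers below are placeholders.

```python
# Schematic admission check based on per-priority effective bandwidths.
# Convention assumed here: priority 0 is highest; a connection of priority j
# carries an effective bandwidth for its own level and every lower level, and
# level p's linear constraint sums contributions from connections of priority
# j <= p. Capacities and numbers are placeholders, not values from the paper.
def admissible(connections, capacities):
    """
    connections: list of dicts like {"priority": 0, "eff_bw": {0: 1.5, 1: 1.2}}
      where eff_bw maps each level p >= priority to that connection's
      effective bandwidth at level p.
    capacities: dict mapping level p -> capacity used in level p's constraint.
    """
    for level, capacity in capacities.items():
        load = sum(conn["eff_bw"][level]
                   for conn in connections
                   if conn["priority"] <= level)
        if load > capacity:
            return False
    return True

conns = [
    {"priority": 0, "eff_bw": {0: 2.0, 1: 1.5, 2: 1.2}},
    {"priority": 1, "eff_bw": {1: 3.0, 2: 2.5}},
    {"priority": 2, "eff_bw": {2: 4.0}},
]
print(admissible(conns, capacities={0: 10.0, 1: 10.0, 2: 10.0}))  # True
```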
End-to-End (E2E) packet delivery in the Internet is achieved through a system of interconnections between heterogeneous entities called Autonomous Systems (ASes). As of March 2007, there were over 26,000 ASes in use [ASN07]. Most ASes are ISPs, but they also include enterprises, governmental or educational institutions, and increasingly large content providers with mostly outbound traffic, such as Google, Yahoo, and YouTube, as well as overlay content distribution networks such as Akamai and Limelight [CLA05]. Each AS controls or administers its own domain of addresses, but ASes must physically interconnect to provide end-to-end connectivity across the Internet. Interconnection is important not only from a reachability perspective but also from a quality and performance perspective, because how ASes interconnect, both physically and contractually, determines how packets are routed and impacts the quality and choice of services that may be supported.