Journal of the Royal Statistical Society: Series B (Methodological), 1978
Identical components are available for use in a piece of machinery. The number of components needed to operate the machine is a function of time and the lifetime of each component is described by a known probability distribution. Once a certain number of components have failed there will not be enough left to operate the machine. We find a strategy which for certain lifetime distributions delays this occurrence for as long as possible.
Applied Probability— Computer Science: The Interface, 1982
A number of identical machines operating in parallel are to be used to complete the processing of a collection of jobs so as to minimize the jobs' makespan or flowtime. The amounts of processing required to complete the jobs have known probability distributions. It has been established by several researchers that when the required amounts of processing are all distributed as exponential random variables, then the strategy (LEPT) of always processing jobs with the longest expected processing times …
In this paper, stochastic shop models with m machines and n jobs are considered. A job has to be processed on all m machines, while certain constraints are imposed on the order of processing. The effect of the variability of the processing times on the expected completion time of the last job (the makespan) and on the sum of the expected completion times of all jobs (the flow time) is studied. Bounds are obtained for the expected makespan when the processing time distributions are New Better (Worse) than Used in Expectation.
The mean queueing time in a G/GI/m queue is shown to be a nonincreasing and convex function of the number of servers, m. This means that the marginal decrease in mean queueing time brought about by the addition of two extra servers is always less than twice the decrease brought about by the addition of one extra server. As a consequence, a method of marginal analysis is optimal for allocating a number of servers amongst several service facilities so as to minimize the sum of the mean queueing times at the facilities.
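The convexity result is what makes a greedy procedure optimal: adding servers one at a time, always to the facility whose mean queueing time drops the most, can never be beaten by any other allocation. A minimal sketch of this marginal analysis, assuming hypothetical per-facility queueing-time functions W_j(m) supplied by the caller (the paper's result guarantees optimality whenever each W_j is nonincreasing and convex in m):

```python
def marginal_allocation(queueing_time_fns, total_servers):
    """Greedily allocate servers among facilities.

    queueing_time_fns: list of functions W_j(m), each giving the mean
    queueing time at facility j with m servers. Because each W_j is
    nonincreasing and convex in m, assigning servers one at a time to
    the facility with the largest marginal improvement is optimal.
    """
    n = len(queueing_time_fns)
    alloc = [1] * n  # start with one server per facility
    for _ in range(total_servers - n):
        # marginal decrease in mean queueing time from one more server
        gains = [W(alloc[j]) - W(alloc[j] + 1)
                 for j, W in enumerate(queueing_time_fns)]
        best = max(range(n), key=lambda j: gains[j])
        alloc[best] += 1
    return alloc

# Illustrative convex queueing-time curves (not from the paper):
# W_0(m) = 4/m at a busy facility, W_1(m) = 1/m at a quiet one.
print(marginal_allocation([lambda m: 4 / m, lambda m: 1 / m], 5))
```

Convexity is essential here: without it, a locally best single-server move could miss a better joint allocation of two or more servers.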
Low-order polynomial time algorithms for near-optimal solutions to the problem of bin packing are studied. The previously analyzed FIRST FIT and BEST FIT packing rules are shown to be members of a more generalized class of packing rules all of which have the same worst case behavior. If the input list is in decreasing order, the worst case behavior of the packing rules in the class is considerably improved and, if not the same for all, at least restricted to a narrow range of possibilities. Finally, after showing that any implementation of a packing rule in the class requires at least O(n log n) comparisons, we present linear-time approximations to these packing rules whose worst case behavior is as good as that of FIRST FIT under a large variety of restrictions on the input. 1. INTRODUCTION. The bin packing problem has recently received attention [5, 11] as a model for such problems as table formatting, packing of tracks on a disk, and prepaging. It also is a simplified form of many "stock cutting" problems encountered in industry [2]. Suppose we are given a finite list L = (a1, a2, …, an) of real numbers in the range (0, 1], and a sequence of unit-capacity bins, BIN1, BIN2, …, extending from left to right. The problem is to find an assignment or packing of the numbers into the bins so that no bin has contents totaling more than one, and yet the number of bins used, i.e., nonempty, is minimized. For a given list L we will denote this minimum by L*. In this paper we consider simple heuristic algorithms for producing packings which are guaranteed to use no more than a fixed percentage of bins in excess of this minimum number. More formally, if S is an algorithm which generates packings, let S(L) be the number of bins used in the packing resulting when S is applied to list L.
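The FIRST FIT rule mentioned above can be sketched in a few lines: each item goes into the leftmost open bin with enough residual room, and a new bin is opened only when no existing bin fits. This is a minimal illustration of the rule, not the paper's optimized implementation (which is concerned with doing the bin search in O(log n) time per item):

```python
def first_fit(items, capacity=1.0):
    """Pack items (reals in (0, 1]) into unit-capacity bins by FIRST FIT.

    Each item is placed in the leftmost bin whose contents still leave
    room for it; if none fits, a new bin is opened on the right.
    Returns the list of bins, each bin a list of its items.
    """
    bins = []
    for a in items:
        for b in bins:
            if sum(b) + a <= capacity:
                b.append(a)
                break
        else:  # no existing bin had room
            bins.append([a])
    return bins

print(first_fit([0.5, 0.7, 0.5, 0.2, 0.4, 0.2]))
```

Sorting the list into decreasing order before packing gives FIRST FIT DECREASING, the variant whose improved worst-case behavior the paper analyzes.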
A number of multi-priority jobs are to be processed on two heterogeneous processors. Of the jobs waiting in the buffer, jobs with the highest priority have the first option of being dispatched for processing when a processor becomes available. On each processor, the processing times of the jobs within each priority class are stochastic, but have known distributions with decreasing mean residual (remaining) processing times. Processors are heterogeneous in the sense that, for each priority class, one has a lesser average processing time than the other. It is shown that the non-preemptive scheduling strategy for each priority class to minimize its expected flowtime is of threshold type. For each class, the threshold values, which specify when the slower processor is utilized, may be readily computed. It is also shown that the social and the individual optimality coincide.
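The shape of such a threshold policy is easy to state in code: the fast processor is used whenever it is free, and the slow processor is used only once the queue is long enough that waiting for the fast one is no longer worthwhile. The sketch below is a generic illustration of this structure; the actual threshold values in the paper are computed per priority class from the processing-time distributions, which is not shown here:

```python
def dispatch(num_waiting, fast_free, slow_free, threshold):
    """Threshold-type dispatch rule for two heterogeneous processors.

    Always use the fast processor when it is free. Use the slow
    processor only when at least `threshold` jobs are waiting, i.e.
    when the queue is long enough that idling the slow processor and
    waiting for the fast one would cost more expected flowtime.
    Returns "fast", "slow", or "wait".
    """
    if fast_free:
        return "fast"
    if slow_free and num_waiting >= threshold:
        return "slow"
    return "wait"

print(dispatch(num_waiting=5, fast_free=False, slow_free=True, threshold=3))
```

The coincidence of social and individual optimality noted in the abstract means a job acting in its own interest would follow the same rule as a central scheduler.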
We model a selection process arising in certain storage problems. A sequence (X1, ···, Xn) of non-negative, independent and identically distributed random variables is given. F(x) denotes the common distribution of the Xi's. With F(x) given we seek a decision rule for selecting a maximum number of the Xi's subject to the following constraints: (1) the sum of the elements selected must not exceed a given constant c > 0, and (2) the Xi's must be inspected in strict sequence with the decision to accept or reject an element being final at the time it is inspected. We prove first that there exists such a rule of threshold type, i.e. the ith element inspected is accepted if and only if it is no larger than a threshold which depends only on i and the sum of the elements already accepted. Next, we prove that if F(x) ~ Ax^α as x → 0 for some A, α > 0, then for fixed c the expected number, En(c), selected by an optimal threshold is characterized by asymptotics as c → ∞ and n → ∞ with c/n he…
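The threshold-type structure of the optimal rule is simple to simulate: each element is accepted if and only if it falls below a threshold that depends only on its index and on the budget already consumed. In the sketch below, the threshold function is a hypothetical stand-in (spreading the remaining budget evenly over the remaining elements), not the optimal thresholds of the paper, which come from a dynamic programming recursion:

```python
def threshold_select(xs, c, threshold):
    """Sequentially select elements of xs under a sum constraint c.

    Elements are inspected in strict sequence; x_i is accepted iff it
    is no larger than threshold(i, used), where `used` is the sum of
    the elements already accepted. Decisions are final.
    """
    accepted = []
    used = 0.0
    for i, x in enumerate(xs):
        if x <= threshold(i, used) and used + x <= c:
            accepted.append(x)
            used += x
    return accepted

n, c = 4, 1.0
# Hypothetical heuristic threshold: remaining budget spread evenly
# over the remaining inspections (for illustration only).
heuristic = lambda i, used: (c - used) / (n - i)
print(threshold_select([0.5, 0.1, 0.4, 0.2], c, heuristic))
```

With the heuristic above, the large first element is rejected to save budget for the smaller ones that follow, which is exactly the behavior the threshold structure is designed to produce.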
We show that the fluid approximation to Whittle's index policy for restless bandits has a globally asymptotically stable equilibrium point when the bandits move on just three states. It follows that in this case the index policy is asymptotically optimal.
Twenty papers are presented from a workshop held in 1987. A survey opens the collection, and the remaining papers in that section concentrate on optimal cyclical policies. Three papers deal with theoretical aspects, and the final part is concerned with the dynamics of the firm. The book does not appear to be of immediate benefit to practical OR and will appeal to research workers in the theory of optimal control. Gittins, J.C.
For certain scheduling problems with pre-emptive processing, a dynamic programming formulation reduces the problem to a sequence of deterministic optimal control problems. Simple necessary and sufficient optimality conditions for these deterministic problems are obtainable from the standard results of optimal control theory, and sometimes lead to analytic solutions. Where this does not happen, then as with many dynamic programming formulations, computational solution is possible in principle, but infeasible in practice. After a survey of this approach to scheduling problems, this paper discusses a simplification of the method which leads to computationally tractable problems which can be expected to yield good, though sub-optimal, scheduling strategies. This new approach is based on the notion of sequential open-loop control, sometimes used in control engineering to solve stochastic control problems by deterministic means, and is not based on dynamic programming. … In M. A. H. Dempster et al. (eds.), Deterministic and Stochastic Scheduling, pp. 385–397.
Wiley-Interscience Series in Systems and Optimization
Throughout this book we have considered optimization problems that were subject to constraints. These include the problem of allocating a finite amount of bandwidth to maximize total user benefit, the social welfare maximization problem, and the time-of-day pricing problem. We make frequent use of the Lagrangian method to solve these problems.
Proceedings of the 13th Workshop on Economics of Networks, Systems and Computation, 2018
In this paper, we analyse how a peer-to-peer sharing platform should price its service (when imagined as an excludable public good) to maximize profit, when each user's participation adds value to the platform service by creating a positive externality to other participants. To characterize network externalities as a function of the number of participants, we consider different bounded and unbounded user utility models. The bounded utility model fits many infrastructure sharing applications with bounded network value, in which complete coverage has a finite user valuation (e.g., WiFi or hotspot). The unbounded utility model fits the large scale data sharing and explosion in social media, where it is expected that the network value follows Metcalfe's or Zipf's law. For both models, we analyze the optimal pricing schemes to select heterogeneous users in the platform under complete and incomplete information of users' service valuations. We propose the concept of price …
Wiley-Interscience Series in Systems and Optimization
These notes are about prices that are directly related to cost. They consist of material extracted from Chapter 7 of the book Pricing Communication Networks, by C. Courcoubetis and R. Weber. We explain the distinction between cost-based prices that are based upon accounting ideas and those that are motivated by concepts of stability and fairness, and related to notions of entry and bypass. The text in blue may be skipped on a first reading.
29th IEEE Conference on Decision and Control, 1990
Petri nets with finitely many transitions and places are considered. A transition process is associated with each transition that describes the production and consumption of tokens when the transition is fired. Under certain assumptions about the fluctuation of the above processes and for various models of the underlying Petri net, we derive conditions for the existence of firing policies under which the number of tokens in the net satisfies some stability conditions. 1. Petri nets with fluctuating transition processes. Petri nets (PNs) have been widely used to model systems involving coordination among various components like multiprocessor systems and manufacturing facilities. In this paper we consider the usual specification of a PN, consisting of m transitions and n places, each with enough space to hold an arbitrary number of tokens. In this paper we speak of running the net in a certain mode. We suppose that running the net for one period in mode k is equivalent to performing transition k once, and changes the inventory level in place i by S(k,i) tokens. It is a key idea in this paper that S(k,i) may be a random variable. For example, when modelling a manufacturing system this corresponds to there being unpredictable variations in the processes of demand and production. Let x_i(t) denote the number of tokens in place i at time t. Allowing x_i(t) to assume negative values corresponds to allowing a backlog of tokens in place i. This models situations in which borrowing tokens of a certain type is possible provided there is a compensating production of tokens at a later time. A negative value of S(k,i) indicates that transition k consumes tokens from place i and, in the case that x_i(t) is negative, it increases the backlog of tokens in that place.
Imagine that at each discrete time t (t = 0, 1, …), one mode of transition must be selected and the net run in that mode for the next time period. The central notion of the paper is that if the cost of carrying inventory of tokens is to be finite then the total number of tokens …
We present a methodology for the on-line estimation of the cell-loss probability of an ATM link. It is particularly suitable for estimating small probabilities of the order 10^-6 to 10^-9, with variance orders of magnitude smaller than traditional estimators. The method is justified by the theory of large deviations and the information it requires is based on the actual traffic flows rather than the analysis of some specific traffic models. The method is effective when there is a large degree of statistical multiplexing; in other words, when the number of input traffic sources is large. The statistical properties we require for the traffic are very general and are met by most real-time traffic source models. Implementation issues of this methodology, which demonstrate its simplicity, are also discussed.
Papers by Richard Weber