Papers by Fabrizio Grandoni

Proceedings of the Seventeenth Annual ACM-SIAM Symposium on Discrete Algorithms - SODA '06, 2006
For more than 30 years Davis-Putnam-style exponential-time backtracking algorithms have been the most common tools used for finding exact solutions of NP-hard problems. Despite this, the way such recursive algorithms are analyzed is still far from producing tight worst-case running time bounds. The "Measure and Conquer" approach is one of the recent attempts to step beyond such limitations. The approach is based on the choice of the measure of the subproblems recursively generated by the algorithm considered; this measure is used to lower bound the progress made by the algorithm at each branching step. A good choice of the measure can lead to a significantly better worst-case time analysis. In this paper we apply "Measure and Conquer" to the analysis of a very simple backtracking algorithm solving the well-studied maximum independent set problem. The result of the analysis is striking: the running time of the algorithm is O(2^{0.288 n}), which is competitive with the current best time bounds obtained with far more complicated algorithms (and naive analysis). Our example shows that a good choice of the measure, made in the very first stages of exact algorithm design, can have a tremendous impact on the running time bounds achievable.
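To make the object of study concrete, here is a minimal sketch (ours, not the paper's exact algorithm) of the kind of simple branching algorithm being analyzed: reduce away degree-at-most-one vertices, then branch on a maximum-degree vertex, either discarding it or taking it and discarding its neighborhood.

```python
# A sketch of a simple branching algorithm for maximum independent
# set, of the kind the paper analyzes; names and reductions are ours.
def mis(graph):
    """Size of a maximum independent set; graph: vertex -> set of neighbors."""
    if not graph:
        return 0
    # Standard reduction: a vertex of degree <= 1 can always be taken.
    for v, nbrs in graph.items():
        if len(nbrs) <= 1:
            return 1 + mis(remove(graph, {v} | nbrs))
    # Branch on a maximum-degree vertex v: either v is discarded,
    # or v is taken and its whole neighborhood is discarded.
    v = max(graph, key=lambda u: len(graph[u]))
    return max(mis(remove(graph, {v})),
               1 + mis(remove(graph, {v} | graph[v])))

def remove(graph, vertices):
    """Copy of graph with the given vertices (and incident edges) deleted."""
    return {u: nbrs - vertices
            for u, nbrs in graph.items() if u not in vertices}
```

A measure-and-conquer analysis replaces the naive measure n with, say, a sum of degree-dependent vertex weights, so that each branching step provably removes more weight than the one vertex it decides on.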
Algorithms and Constraint Programming
Lecture Notes in Computer Science, 2006
Graph-Theoretic Concepts in Computer Science, 2004
The clique problem consists in determining whether an undirected graph G of order n contains a clique of order ℓ. In this paper we are concerned with the decremental version of the clique problem, where the property of containing an ℓ-clique is dynamically checked during deletions of nodes. We provide an improved dynamic algorithm for this problem for every fixed value of ℓ ≥ 3. Our algorithm naturally applies to filtering for the constraint satisfaction problem. In particular, we show how to speed up the filtering based on an important local consistency property: inverse consistency.
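For reference, the naive baseline that a dynamic algorithm must beat simply rechecks for an ℓ-clique from scratch after every deletion; the sketch below (ours, purely illustrative) only fixes the interface being sped up.

```python
from itertools import combinations

# Naive baseline for the decremental clique problem: recheck from
# scratch after each deletion.  The paper's dynamic algorithm is
# designed precisely to avoid this recomputation.
def has_clique(graph, l):
    """graph: vertex -> set of neighbors; True iff an l-clique exists."""
    return any(all(v in graph[u] for u, v in combinations(cand, 2))
               for cand in combinations(graph, l))

def delete_node(graph, v):
    """Delete node v and its incident edges in place, then re-query."""
    graph.pop(v, None)
    for nbrs in graph.values():
        nbrs.discard(v)
```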
Computer Science Review, 2007
Some of today's applications run on computer platforms with large and inexpensive memories, which are also error-prone. Unfortunately, the appearance of even very few memory faults may jeopardize the correctness of the computational results. An algorithm is resilient to memory faults if, despite the corruption of some memory values before or during its execution, it is nevertheless able to get a correct output at least on the set of uncorrupted values. In this paper we will survey some recent work on reliable computation in the presence of memory faults.

Integer Programming and Combinatorial Optimization, 2013
The Unsplittable Flow Problem on a Path (UFPP) is a core problem in many important settings such as network flows, bandwidth allocation, resource constrained scheduling, and interval packing. We are given a path with capacities on the edges and a set of tasks, each task having a demand, a profit, a source and a destination vertex on the path. The goal is to compute a subset of tasks of maximum profit that does not violate the edge capacities. In practical applications generic approaches such as integer programming (IP) methods are desirable. Unfortunately, no IP formulation of the problem is known whose LP relaxation has a provably constant integrality gap. For the unweighted case, we show that adding a few constraints to the standard LP of the problem is sufficient to make the integrality gap drop from Ω(n) to O(1). This positively answers an open question in [Chekuri et al., APPROX 2009]. For the general (weighted) case, we present an extended formulation with integrality gap bounded by 7 + ε. This matches the best known approximation factor for the problem [Bonsma et al., FOCS 2011]. This result crucially exploits a technique for embedding dynamic programs into linear programs. We believe that this method could be useful to strengthen LP formulations for other problems as well and might eventually speed up computations due to stronger problem formulations.
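For concreteness, the standard LP relaxation referred to above can be written as follows (notation ours): x_i is the fractional selection variable of task i, with demand d_i, profit p_i, and subpath P_i, and u_e is the capacity of edge e.

```latex
\begin{align*}
  \max\;        & \sum_{i} p_i \, x_i \\
  \text{s.t.}\; & \sum_{i :\, e \in P_i} d_i \, x_i \le u_e
                  && \text{for every edge } e, \\
                & 0 \le x_i \le 1 && \text{for every task } i.
\end{align*}
```

The unweighted result says that adding a few constraint families to exactly this LP brings its integrality gap from Ω(n) down to O(1).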
Algorithms and Computation, 2006
In the single-sink buy-at-bulk network design problem we are given a subset of source nodes in a weighted undirected graph: each source node wishes to send a given amount of flow to a sink node. Moreover, a set of cable types is given, each characterized by a cost per unit length and by a capacity; the cost/capacity ratio decreases from small to large cables by economies of scale. The problem is to install cables on edges at minimum cost, such that the flow from each source to the sink can be routed simultaneously. The approximation ratio of this NP-hard problem was gradually reduced from O(log^2 n) to 65.49 by a long series of papers. In this paper, we design an improved 24.92-approximation algorithm for this problem.
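In symbols (notation ours): if cable type k costs σ_k per unit length and has capacity u_k, with σ_1/u_1 > σ_2/u_2 > ⋯ by economies of scale, and n_e^k copies of type k are installed on an edge e of length ℓ_e carrying total flow f_e, the problem reads

```latex
\min \; \sum_{e \in E} \ell_e \sum_{k} \sigma_k \, n_e^k
\qquad \text{s.t.} \qquad
f_e \le \sum_{k} u_k \, n_e^k \quad \text{for every edge } e \in E,
```

where the flows f_e must simultaneously route each source's demand to the sink.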
Fast Low Degree Connectivity of Ad-Hoc Networks Via Percolation
Lecture Notes in Computer Science, 2007
SIAM Journal on Computing, 2007

SIAM Journal on Computing, 2013
Given a universe U of n elements and a weighted collection S of m subsets of U, the universal set cover problem is to a priori map each element u ∈ U to a set S(u) ∈ S containing u, such that any set X ⊆ U is covered by S(X) = ∪_{u∈X} S(u). The aim is to find a mapping such that the cost of S(X) is as close as possible to the optimal set-cover cost for X. (Such problems are also called oblivious or a priori optimization problems.) Unfortunately, for every universal mapping, the cost of S(X) can be Ω(√n) times larger than optimal if the set X is adversarially chosen. In this paper we study the performance on average, when X is a set of randomly chosen elements from the universe: we show how to efficiently find a universal map whose expected cost is O(log mn) times the expected optimal cost. In fact, we give a slightly improved analysis and show that this is the best possible. We generalize these ideas to weighted set cover and show similar guarantees for (non-metric) facility location, where we have to balance the facility opening cost with the cost of connecting clients to the facilities. We show applications of our results to universal multi-cut and disc-covering problems, and show how all these universal mappings give us algorithms for the stochastic online variants of the problems with the same competitive factors.
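One natural way to build such a mapping, sketched below under our own naming, is to run the greedy set-cover algorithm on the whole universe and map each element to the set that first covers it; this conveys the flavor of the construction, though the paper's algorithm and its O(log mn) analysis differ in the details.

```python
# Hedged sketch of a universal mapping via greedy set cover; assumes
# every element of the universe is covered by at least one set.
def universal_mapping(universe, sets, cost):
    """sets: name -> frozenset of elements; cost: name -> positive float."""
    uncovered = set(universe)
    assignment = {}
    while uncovered:
        # Most cost-effective set: minimum cost per newly covered element.
        best = min((s for s in sets if sets[s] & uncovered),
                   key=lambda s: cost[s] / len(sets[s] & uncovered))
        for u in sets[best] & uncovered:
            assignment[u] = best
        uncovered -= sets[best]
    return assignment  # S(X) is then {assignment[u] for u in X}
```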
Operations Research Letters, 2008
Only recently, Hurkens, Keijsper, and Stougie proved the VPN Tree Routing Conjecture for the special case of ring networks. We present a short proof of a slightly stronger result which might also turn out to be useful for proving the VPN Tree Routing Conjecture for general networks.
Operations Research Letters, 2010
The Spanning Tree Protocol routes traffic on shortest path trees. If some edges fail, the traffic has to be rerouted accordingly, setting up alternative trees. In this paper we design efficient algorithms to compute polynomial-size integer weights that enforce the following stability property: if q = O(1) edges fail, traffic demands that are not affected by the failures are not redirected. Stability is a goal pursued by network operators in order to minimize transmission delays due to the restoration process.
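A classical ingredient related to (but distinct from) the paper's construction is to make shortest paths unique with small random integer perturbations, since stable rerouting presupposes well-defined trees; the sketch below is ours and only illustrates that flavor.

```python
import random

# By the isolation lemma, adding an independent random perturbation
# from {1, ..., 2m} to each of the m edges makes the minimum-weight
# path between a fixed pair unique with probability at least 1/2.
# Scaling the original weights by 2m^2 first ensures the perturbations
# (at most 2m^2 in total on any simple path) never change which paths
# are shortest, and all weights stay polynomially bounded integers.
def perturb(weights):
    """weights: dict edge -> positive int; returns a perturbed copy."""
    m = len(weights)
    return {e: w * 2 * m * m + random.randint(1, 2 * m)
            for e, w in weights.items()}
```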

ACM Journal of Experimental Algorithmics, 2013
We address the problem of implementing data structures resilient to memory faults, which may arbitrarily corrupt memory locations. In this framework, we focus on the implementation of dictionaries and perform a thorough experimental study using a testbed that we designed for this purpose. Our main discovery is that the best-known (asymptotically optimal) resilient data structures have very large space overheads. More precisely, most of the space used by these data structures is not due to key storage. This might not be acceptable in practice, since resilient data structures are meant for applications where a huge amount of data (often of the order of terabytes) has to be stored. Exploiting techniques developed in the context of resilient (static) sorting and searching, in combination with some new ideas, we designed and engineered an alternative implementation, which, while still guaranteeing optimal asymptotic time and space bounds, performs much better in terms of memory without c...

Journal of Computer and System Sciences, 2010
We present a simple randomized algorithmic framework for connected facility location problems. The basic idea is as follows: we run a black-box approximation algorithm for the unconnected facility location problem, randomly sample the clients, and open the facilities serving sampled clients in the approximate solution. Via a novel analytical tool, which we term core detouring, we show that this approach significantly improves over the previously best known approximation ratios for several NP-hard network design problems. For example, we reduce the approximation ratio for the connected facility location problem from 8.55 to 4.00 and for the single-sink rent-or-buy problem from 3.55 to 2.92. These results can be derandomized at the expense of a slightly worse approximation ratio. The versatility of our framework is demonstrated by devising improved approximation algorithms also for other related problems.
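Schematically, the framework described above looks as follows; ufl_approx, steiner_tree_approx, and the sampling rate p are black boxes or parameters, and the core-detouring analysis that bounds the cost of this scheme is not reproduced here.

```python
import random

# Schematic rendering (ours) of the random-sampling framework for
# connected facility location described in the abstract.
def connected_facility_location(clients, p, ufl_approx, steiner_tree_approx):
    # Step 1: black-box approximation for the unconnected problem,
    # returning the facility serving each client.
    facility_of = ufl_approx(clients)
    # Step 2: sample the clients independently with probability p.
    sampled = [c for c in clients if random.random() < p]
    # Step 3: open the facilities serving sampled clients and connect
    # them with an approximate Steiner tree; every client is then
    # served by its nearest opened facility.
    opened = {facility_of[c] for c in sampled}
    return opened, steiner_tree_approx(opened)
```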
Information Processing Letters, 2005
Memorization is a technique that speeds up exponential recursive algorithms at the cost of an exponential space complexity. This technique already leads to the currently fastest algorithm for fixed-parameter vertex cover, whose time complexity is O(1.2832^k k^{1.5} + kn), where n is the number of nodes and k is the size of the vertex cover. Via a refined use of memorization, we obtain an O(1.2759^k k^{1.5} + kn) algorithm for the same problem. We moreover show how to further reduce the complexity to O(1.2745^k k^4 + kn).
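The principle, stripped of the paper's refinements, is easy to state in code: cache the answers to recursive subproblems so that a subgraph reached along different branching paths is solved only once. The sketch below (ours) applies it to a deliberately simple branching rule.

```python
from functools import lru_cache

# Toy illustration of memorization for vertex cover on a simple graph:
# branch on an arbitrary edge (one endpoint must be in the cover) and
# cache subproblem results.  The paper caches only small subproblems
# and uses a far more refined branching.
def min_vertex_cover_size(edges):
    def drop(frozen_edges, w):
        """Remove all edges incident to w."""
        return frozenset(e for e in frozen_edges if w not in e)

    @lru_cache(maxsize=None)
    def solve(frozen_edges):
        if not frozen_edges:
            return 0
        u, v = next(iter(frozen_edges))
        return 1 + min(solve(drop(frozen_edges, u)),
                       solve(drop(frozen_edges, v)))

    return solve(frozenset(frozenset(e) for e in edges))
```

The cache is exactly where the exponential space cost comes from: the speed-up is paid for by storing solved subinstances.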
Electronic Notes in Discrete Mathematics, 2005
Discrete Mathematics, 2006
Kumar and Madhavan [Minimal vertex separators of chordal graphs, Discrete Appl. Math. 89 (1998) 155-168] gave a linear time algorithm to list all the minimal separators of a chordal graph. In this paper we give another linear time algorithm for the same purpose. While the algorithm of Kumar and Madhavan requires that a specific type of PEO, namely the MCS PEO, is computed first, our algorithm works with any PEO. This is interesting when we consider the fact that there are other popular methods, such as Lex-BFS, to compute a PEO for a given chordal graph.
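Both algorithms hinge on the fact that, given a PEO, every minimal separator of a chordal graph appears as some madj(v), the set of neighbors of v occurring later in the ordering. The quadratic-time sketch below (ours) only enumerates these candidates; deciding which candidates are actually minimal separators, within an overall linear time bound, is the technical content of the two papers.

```python
# Enumerate the candidate sets madj(v) induced by a PEO; every minimal
# separator of a chordal graph is among them, but not conversely.
def madj_sets(graph, peo):
    """graph: vertex -> set of neighbors; peo: perfect elimination ordering."""
    pos = {v: i for i, v in enumerate(peo)}
    candidates = set()
    for v in peo:
        later = frozenset(u for u in graph[v] if pos[u] > pos[v])
        if later:
            candidates.add(later)
    return candidates
```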
Multi-Commodity Connected Facility Location
ACM Transactions on Algorithms, 2008
In this article, we consider the problem of computing a minimum-weight vertex cover in an n-node, weighted, undirected graph G = (V, E). We present a fully distributed algorithm for computing vertex covers of weight at most twice the optimum, in the case of integer weights. Our algorithm runs in an expected number of O(log n + log Ŵ) communication rounds, where Ŵ is the average vertex weight. The previous best algorithm for this problem requires O(log n (log n + log Ŵ)) rounds and it is not fully distributed. For a maximal matching M in G, it is a well-known fact that any vertex cover in G needs to have at least |M| vertices. Our algorithm is based on a generalization of this combinatorial lower bound to the weighted setting.
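The unweighted, sequential version of that lower-bound argument fits in a few lines and may help fix ideas (the distributed, weighted algorithm of the article generalizes it): the endpoints of any maximal matching M form a vertex cover of size 2|M|, and every cover must contain at least one endpoint of each matched edge, so the cover found is at most twice optimal.

```python
# Sequential, unweighted illustration of the maximal-matching bound;
# the article's algorithm is distributed and handles vertex weights.
def matching_based_cover(edges):
    cover = set()
    for u, v in edges:
        # Greedily add edges to a matching; take both endpoints.
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover  # |cover| = 2|M| <= 2 * OPT
```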
Sharp Separation and Applications to Exact and Parameterized Algorithms
Algorithmica, 2011

Theoretical Computer Science, 2009
We investigate the problem of reliable computation in the presence of faults that may arbitrarily corrupt memory locations. In this framework, we consider the problems of sorting and searching in optimal time while tolerating the largest possible number of memory faults. In particular, we design an O(n log n) time sorting algorithm that can optimally tolerate up to O(√(n log n)) memory faults. In the special case of integer sorting, we present an algorithm with linear expected running time that can tolerate O(√n) faults. We also present a randomized searching algorithm that can optimally tolerate up to O(log n) memory faults in O(log n) expected time, and an almost optimal deterministic searching algorithm that can tolerate O((log n)^{1−ε}) faults, for any small positive constant ε, in O(log n) worst-case time. All these results improve over previous bounds.
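A toy example of the faulty-memory model's most basic trick, replication with majority voting, may help fix the setting (it is emphatically not the paper's algorithm, whose point is to achieve resilience with far less redundancy): a value stored in 2δ + 1 cells survives up to δ corruptions.

```python
# Toy illustration of the faulty-memory model: a value replicated in
# 2*delta + 1 cells withstands up to delta adversarial corruptions.
class ResilientValue:
    def __init__(self, value, delta):
        self.cells = [value] * (2 * delta + 1)

    def read(self):
        # Boyer-Moore majority vote; correct because at least
        # delta + 1 of the 2*delta + 1 copies are uncorrupted.
        candidate, count = None, 0
        for x in self.cells:
            if count == 0:
                candidate, count = x, 1
            else:
                count += 1 if x == candidate else -1
        return candidate
```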