In the metric distortion problem there is a set of candidates and a set of voters, all residing in the same metric space. The objective is to choose a candidate with minimum social cost, defined as the total distance of the chosen candidate from all voters. The challenge is that the algorithm receives only ordinal input from each voter, in the form of a ranked list of candidates in non-decreasing order of their distances from the voter, whereas the objective function is cardinal. The distortion of an algorithm is its worst-case approximation factor with respect to the optimal social cost. A series of papers culminated in a 3-distortion algorithm, which is tight with respect to all deterministic algorithms. Aiming to overcome the limitations of worst-case analysis, we revisit the metric distortion problem through the learning-augmented framework, where the algorithm is provided with some (machine-learned) prediction regarding the optimal candidate. The quality of this prediction is unknown, and the goal is to evaluate the performance of the algorithm under a perfectly accurate prediction (known as consistency), while simultaneously providing worst-case guarantees even for arbitrarily inaccurate predictions (known as robustness). For our main result, we characterize the robustness-consistency Pareto frontier for the metric distortion problem. We first identify an inevitable trade-off between robustness and consistency. We then devise a family of learning-augmented algorithms that achieves any desired robustness-consistency pair on this Pareto frontier. Furthermore, we provide a more refined analysis of the distortion bounds as a function of the prediction error (with consistency and robustness being two extremes). Finally, we also prove distortion bounds that integrate the notion of α-decisiveness, which quantifies the extent to which a voter prefers her favorite candidate relative to the rest.
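To make the social cost and distortion notions concrete, here is a minimal sketch on a tiny one-dimensional instance. The positions and the use of plurality as the ordinal rule are illustrative assumptions, not taken from the paper.

```python
# Illustrative only: a tiny 1-D metric instance (positions and the voting
# rule are assumptions, not from the paper). The social cost of a candidate
# is the sum of its distances to all voters; the distortion of a choice is
# its social cost divided by the optimal social cost.
voters = [0.4, 0.4, 1.0]
candidates = {"a": 0.0, "b": 1.0}

def social_cost(c):
    return sum(abs(candidates[c] - v) for v in voters)

costs = {c: social_cost(c) for c in candidates}
optimal = min(costs.values())

# An ordinal rule only sees each voter's ranking of candidates by distance.
rankings = [sorted(candidates, key=lambda c: abs(candidates[c] - v)) for v in voters]
winner = max(candidates, key=lambda c: sum(r[0] == c for r in rankings))  # plurality

print(f"social costs: {costs}, distortion of plurality here: {costs[winner] / optimal:.2f}")
```

On this instance plurality picks the candidate preferred by two of the three voters and incurs a distortion of 1.5, illustrating how an ordinal rule can lose cardinal information.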
Proceedings of the 23rd ACM Conference on Economics and Computation, Jul 12, 2022
A central goal in algorithmic game theory is to analyze the performance of decentralized multiagent systems, like communication and information networks. In the absence of a central planner who can enforce how these systems are utilized, the users can strategically interact with the system, aiming to maximize their own utility, possibly leading to very inefficient outcomes, and thus a high price of anarchy. To alleviate this issue, the system designer can use decentralized mechanisms that regulate the use of each resource (e.g., using local queuing protocols or scheduling mechanisms), but with only limited information regarding the state of the system. These information limitations have a severe impact on what such decentralized mechanisms can achieve, so most of the success stories in this literature have had to make restrictive assumptions (e.g., by either restricting the structure of the networks or the types of cost functions). In this paper, we overcome some of the obstacles that the literature has imposed on decentralized mechanisms, by designing mechanisms that are enhanced with predictions regarding the missing information. Specifically, inspired by the big success of the literature on "algorithms with predictions", we design decentralized mechanisms with predictions and evaluate their price of anarchy as a function of the prediction error, focusing on two very well-studied classes of games: scheduling games and multicast network formation games. CCS Concepts: • Theory of computation → Quality of equilibria; Network games.
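As a concrete illustration of the price of anarchy in a scheduling (load-balancing) game, the sketch below enumerates all pure-strategy profiles of a toy instance, identifies the pure Nash equilibria, and compares the worst equilibrium makespan to the optimum. The instance is an assumption chosen for illustration, not one from the paper.

```python
from itertools import product

# Toy load-balancing game (job weights and machine count are assumptions):
# each job picks a machine, a job's cost is its machine's load, and the
# social objective is the makespan (maximum load).
weights = [2, 2, 1, 1]
machines = range(2)

def loads(profile):
    load = [0] * len(machines)
    for job, m in enumerate(profile):
        load[m] += weights[job]
    return load

def is_nash(profile):
    load = loads(profile)
    for job, m in enumerate(profile):
        for m2 in machines:
            # A deviating job would face the other machine's load plus its own weight.
            if m2 != m and load[m2] + weights[job] < load[m]:
                return False
    return True

profiles = list(product(machines, repeat=len(weights)))
opt = min(max(loads(p)) for p in profiles)
worst_eq = max(max(loads(p)) for p in profiles if is_nash(p))
print(f"optimal makespan = {opt}, worst equilibrium = {worst_eq}, PoA = {worst_eq / opt:.2f}")
```

Here the optimum has makespan 3, but placing both heavy jobs on one machine is also an equilibrium with makespan 4, so the price of anarchy of this instance is 4/3.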
We study the problem of allocating indivisible items to budget-constrained agents, aiming to provide fairness and efficiency guarantees. Specifically, our goal is to ensure that the resulting allocation is envy-free up to any item (EFx) while minimizing the amount of inefficiency that this needs to introduce. We first show that there exist two-agent problem instances for which no EFx allocation is Pareto-efficient. We, therefore, turn to approximation and use the (Pareto-efficient) maximum Nash welfare allocation as a benchmark. For two-agent instances, we provide a procedure that always returns an EFx allocation while achieving the best possible approximation of the optimal Nash social welfare that EFx allocations can achieve. For the more complicated case of three-agent instances, we provide a procedure that guarantees EFx, while achieving a constant approximation of the optimal Nash social welfare for any number of items.
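The brute-force sketch below makes the two quantities in this trade-off concrete for a small two-agent instance with additive valuations: it searches for the EFx allocation with the highest Nash social welfare. The valuations are assumptions, the budget constraints are omitted for brevity, and exhaustive search is used purely for illustration rather than the paper's procedure.

```python
from itertools import product
from math import prod

# Brute-force illustration (valuations are assumptions, not from the paper):
# find the EFx allocation of indivisible items between two agents with
# additive valuations that maximizes the Nash social welfare.
values = {0: [6, 4, 2, 1], 1: [1, 3, 5, 5]}  # values[agent][item]

def bundle_value(agent, bundle):
    return sum(values[agent][g] for g in bundle)

def is_efx(bundles):
    for i, j in [(0, 1), (1, 0)]:
        for g in bundles[j]:
            # i must not envy j's bundle even after removing any single item g.
            if bundle_value(i, bundles[i]) < bundle_value(i, bundles[j]) - values[i][g]:
                return False
    return True

best = None
for assignment in product([0, 1], repeat=len(values[0])):
    bundles = {a: [g for g, owner in enumerate(assignment) if owner == a] for a in (0, 1)}
    if is_efx(bundles):
        nsw = prod(bundle_value(a, bundles[a]) for a in (0, 1))
        if best is None or nsw > best[0]:
            best = (nsw, bundles)

print(f"best EFx allocation: {best[1]} with Nash welfare {best[0]}")
```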
In their seminal paper that initiated the field of algorithmic mechanism design, Nisan and Ronen [27] studied the problem of designing strategyproof mechanisms for scheduling jobs on unrelated machines aiming to minimize the makespan. They provided a strategyproof mechanism that achieves an n-approximation and they made the bold conjecture that this is the best approximation achievable by any deterministic strategyproof scheduling mechanism. After more than two decades and several efforts, n remains the best known approximation and very recent work by Christodoulou et al. [13] has been able to prove an Ω(√n) approximation lower bound for all deterministic strategyproof mechanisms. This strong negative result, however, heavily depends on the fact that the performance of these mechanisms is evaluated using worst-case analysis. To overcome such overly pessimistic, and often uninformative, worst-case bounds, a surge of recent work has focused on the "learning-augmented framework", whose goal is to leverage machine-learned predictions to obtain improved approximations when these predictions are accurate (consistency), while also achieving near-optimal worst-case approximations even when the predictions are arbitrarily wrong (robustness). In this work, we study the classic strategic scheduling problem of Nisan and Ronen [27] using the learning-augmented framework and give a deterministic polynomial-time strategyproof mechanism that is 6-consistent and 2n-robust. We thus achieve the "best of both worlds": an O(1) consistency and an O(n) robustness that asymptotically matches the best-known approximation. We then extend this result to provide more general worst-case approximation guarantees as a function of the prediction error. Finally, we complement our positive results by showing that any 1-consistent deterministic strategyproof mechanism has unbounded robustness.
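The short numerical sketch below only grounds the terminology (makespan, and the approximation ratios behind consistency and robustness) on a toy unrelated-machines instance. The processing times and the two candidate assignments are assumptions for illustration; it does not implement the paper's mechanism, which additionally has to be strategyproof.

```python
from itertools import product

# Illustration of the terminology only (matrix and assignments are assumptions):
# t[i][j] is machine i's processing time for job j, the makespan of an
# assignment is the maximum machine load, and an approximation factor compares
# a mechanism's makespan to the optimal one.
t = [[1, 4, 2],
     [3, 1, 5]]

def makespan(assignment):  # assignment[j] = machine chosen for job j
    loads = [0] * len(t)
    for j, i in enumerate(assignment):
        loads[i] += t[i][j]
    return max(loads)

opt = min(makespan(a) for a in product(range(len(t)), repeat=len(t[0])))

predicted = (0, 1, 0)  # a hypothetical prediction that happens to be optimal
wrong = (1, 0, 1)      # a hypothetical, arbitrarily inaccurate prediction
print(f"optimal makespan = {opt}")
print(f"ratio when the prediction is accurate (consistency-style): {makespan(predicted) / opt:.2f}")
print(f"ratio when blindly following a wrong prediction: {makespan(wrong) / opt:.2f}")
```

The last line shows why robustness requires a fallback: trusting the prediction unconditionally can be arbitrarily far from the optimum.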
We study fair resource allocation with strategic agents. It is well-known that, across multiple f... more We study fair resource allocation with strategic agents. It is well-known that, across multiple fundamental problems in this domain, truthfulness and fairness are incompatible. For example, when allocating indivisible goods, there is no truthful and deterministic mechanism that guarantees envyfreeness up to one item (EF1), even for two agents with additive valuations. Or, in cake-cutting, no truthful and deterministic mechanism always outputs a proportional allocation, even for two agents with piecewise-constant valuations. Our work stems from the observation that, in the context of fair division, truthfulness is used as a synonym for Dominant Strategy Incentive Compatibility (DSIC), requiring that an agent prefers reporting the truth, no matter what other agents report. In this paper, we instead focus on Bayesian Incentive Compatible (BIC) mechanisms, requiring that agents are better off reporting the truth in expectation over other agents' reports. We prove that, when agents know a bit less about each other, a lot more is possible: using BIC mechanisms we can overcome the aforementioned barriers that DSIC mechanisms face in both the fundamental problems of allocation of indivisible goods and cake-cutting. We prove that this is the case even for an arbitrary number of agents, as long as the agents' priors about each others' types satisfy a neutrality condition. En route to our results on BIC mechanisms, we also strengthen the state of the art in terms of negative results for DSIC mechanisms.
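For contrast with the EFx check shown earlier, here is a minimal EF1 check on an illustrative instance (the valuations and allocation are assumptions, not from the paper): EF1 only requires that envy vanishes after removing *some* single item from the envied bundle, rather than every item.

```python
# EF1 check for additive valuations (values and bundles are assumptions).
# This allocation is EF1 but not EFx: agent 0's envy disappears only when the
# item it values at 5 is removed, not when the item it values at 1 is removed.
values = {0: [2, 2, 5, 1], 1: [3, 3, 2, 2]}
bundles = {0: [0, 1], 1: [2, 3]}

def v(agent, bundle):
    return sum(values[agent][g] for g in bundle)

def is_ef1(bundles):
    for i, j in [(0, 1), (1, 0)]:
        if bundles[j] and v(i, bundles[i]) < v(i, bundles[j]) - max(values[i][g] for g in bundles[j]):
            return False
    return True

print(is_ef1(bundles))  # True
```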
Proceedings of the 23rd ACM Conference on Economics and Computation, Jul 12, 2022
In this work we introduce an alternative model for the design and analysis of strategyproof mechanisms that is motivated by the recent surge of work in "learning-augmented algorithms". Aiming to complement the traditional approach in computer science, which analyzes the performance of algorithms based on worst-case instances, this line of work has focused on the design and analysis of algorithms that are enhanced with machine-learned predictions regarding the optimal solution. The algorithms can use the predictions as a guide to inform their decisions, and the goal is to achieve much stronger performance guarantees when these predictions are accurate (consistency), while also maintaining near-optimal worst-case guarantees, even if these predictions are very inaccurate (robustness). So far, these results have been limited to algorithms, but in this work we argue that another fertile ground for this framework is in mechanism design. We initiate the design and analysis of strategyproof mechanisms that are augmented with predictions regarding the private information of the participating agents. To exhibit the important benefits of this approach, we revisit the canonical problem of facility location with strategic agents in the two-dimensional Euclidean space. We study both the egalitarian and utilitarian social cost functions, and we propose new strategyproof mechanisms that leverage predictions to guarantee an optimal trade-off between consistency and robustness guarantees. This provides the designer with a menu of mechanism options to choose from, depending on her confidence regarding the prediction accuracy. Furthermore, we also prove parameterized approximation results as a function of the prediction error, showing that our mechanisms perform well even when the predictions are not fully accurate.
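The sketch below only makes the two objectives concrete on a toy two-dimensional instance: the egalitarian (maximum distance) and utilitarian (total distance) social cost of a facility location, comparing a predicted location against a prediction-free baseline. All coordinates and both candidate rules are illustrative assumptions, not the paper's mechanisms; consistency corresponds to the approximation achieved when the predicted point is indeed good, and robustness to the guarantee retained when it is not.

```python
import math

# Toy 2-D instance (all coordinates are assumptions, not from the paper).
# Egalitarian cost = max distance to the agents; utilitarian cost = sum of distances.
agents = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]

def egalitarian(facility):
    return max(math.dist(facility, a) for a in agents)

def utilitarian(facility):
    return sum(math.dist(facility, a) for a in agents)

prediction = (1.0, 1.0)   # a hypothetical prediction of a good location
baseline = agents[0]      # a prediction-free fallback: one agent's reported location

for name, cost in [("egalitarian", egalitarian), ("utilitarian", utilitarian)]:
    print(f"{name}: prediction {cost(prediction):.2f} vs baseline {cost(baseline):.2f}")
```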
We revisit the well-studied problem of budget-feasible procurement, where a buyer with a strict budget constraint seeks to acquire services from a group of strategic providers (the sellers). During the last decade, several strategyproof budget-feasible procurement auctions have been proposed, aiming to maximize the value of the buyer, while eliciting each seller's true cost for providing their service. These solutions predominantly take the form of randomized sealed-bid auctions: they ask the sellers to report their private costs and then use randomization to determine which subset of services will be procured and how much each of the chosen providers will be paid, ensuring that the total payment does not exceed the buyer's budget. Our main result in this paper is a novel method for designing budget-feasible auctions, leading to solutions that outperform the previously proposed auctions in multiple ways. First, our solutions take the form of descending clock auctions, and thus satisfy a list of very appealing properties, such as obvious strategyproofness, group strategyproofness, transparency, and unconditional winner privacy; this makes these auctions much more likely to be used in practice. Second, in contrast to previous results that heavily depend on randomization, our auctions are deterministic. As a result, we provide an affirmative answer to one of the main open questions in this literature, asking whether a deterministic strategyproof auction can achieve a constant approximation when the buyer's valuation function is submodular over the set of services. In addition to this, we also provide the first deterministic budget-feasible auction that matches the approximation bound of the best-known randomized auction for the class of subadditive valuations. Finally, using our method, we improve the best-known approximation factor for monotone submodular valuations, which has been the focus of most of the prior work.
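To illustrate the descending clock format in its simplest form, here is a generic sketch with a uniform price clock: offers only ever decrease, sellers drop out once the offer falls below their private cost, and the clock stops when the surviving sellers fit within the budget. The costs, budget, price schedule, and stopping rule are all assumptions for illustration; this is not the auction designed in the paper and no approximation guarantee is claimed.

```python
# Generic descending clock sketch (all parameters are illustrative assumptions).
# Because offers only decrease, a seller's dominant strategy is simply to stay
# in the auction while the current offer still covers her cost.
costs = {"s1": 3.0, "s2": 5.0, "s3": 9.0}   # sellers' private costs
budget = 12.0
price = 8.0                                  # opening uniform clock price

active = {s for s, c in costs.items() if c <= price}
while active and price * len(active) > budget:
    price *= 0.9                                        # descend the clock
    active = {s for s in active if costs[s] <= price}   # sellers drop out voluntarily

print(f"winners = {sorted(active)}, each paid {price:.2f}, total = {price * len(active):.2f}")
```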
A set of divisible resources becomes available over a sequence of rounds and needs to be allocated immediately and irrevocably. Our goal is to distribute these resources to maximize fairness and efficiency. Achieving any non-trivial guarantees in an adversarial setting is impossible. However, we show that normalizing the agent values, a very common assumption in fair division, allows us to escape this impossibility. Our main result is an online algorithm for the case of two agents that ensures the outcome is envy-free while guaranteeing 91.6% of the optimal social welfare. We also show that this is near-optimal: there is no envy-free algorithm that guarantees more than 93.3% of the optimal social welfare.
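The sketch below only checks the two guarantees referred to above on a toy fractional allocation with normalized values: envy-freeness for divisible resources and the fraction of the optimal utilitarian social welfare achieved. The values and the allocation are assumptions, not the output of the paper's online algorithm.

```python
# Toy check (values and the fractional allocation are assumptions).
# values[a][r]: agent a's normalized value for resource r;
# alloc[a][r]: the fraction of resource r given to agent a.
values = [[0.7, 0.3], [0.2, 0.8]]
alloc  = [[1.0, 0.1], [0.0, 0.9]]

def utility(a, bundle):
    return sum(f * v for f, v in zip(bundle, values[a]))

envy_free = all(utility(a, alloc[a]) >= utility(a, alloc[b])
                for a in range(2) for b in range(2))

welfare = sum(utility(a, alloc[a]) for a in range(2))
# Optimal utilitarian welfare: give each resource entirely to whoever values it most.
optimal = sum(max(values[a][r] for a in range(2)) for r in range(2))

print(f"envy-free: {envy_free}, welfare ratio: {welfare / optimal:.3f}")
```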