2nd Research Workshop on Flexible Network Design
2-6 October 2006
Network Design, with its many variants, is one of the most active mathematical research areas, involving researchers from Theoretical Computer Science, Graph Theory, Operations Research, Discrete Optimization, Game Theory and Information Theory. In addition, new problems in this area are constantly propounded by practitioners working in various aspects of network design, such as construction, routing and staged deployment. Furthermore, many new design paradigms such as ATM, ad-hoc and wireless networking add rich new flavors to existing problems. The goal of this workshop is to focus on this active area of applications of algorithms, in order to understand current trends, identify understudied areas, and formulate new directions for further investigation.
The first edition of this workshop was organized in Princeton, NJ, on November 4-5, 2005, by the ALADDIN research project. You can find out more about the previous workshop at http://www.aladdin.cs.cmu.edu/workshops/netdes/index.html
The workshop will consist of talks from the participants on their current research, time for informal discussions and collaboration, and ample opportunity to sample fine Italian cuisine.
| October | Mon 2 | Tue 3 | Wed 4 | Thu 5 | Fri 6 |
|---|---|---|---|---|---|
| 8:00-9:00 | arrivals / breakfast (Center Canteen) | breakfast | breakfast | breakfast | breakfast |
| 9:00-9:45 | Rajaraman | Even | Oriolo | Reed | Hajiaghayi |
| 9:45-10:30 | Talwar | Naor | Stougie | Räcke | Erlebach |
| 10:30-11:00 | coffee | coffee | coffee | coffee | coffee |
| 11:00-11:45 | Ravi | Voecking | Schaefer | Schindelhauer | Sankowski |
| 11:45-12:30 | Olonetsky | Krysta | Roughgarden | Caprara / Halldórsson | Dinitz |
| 12:45-13:45 | lunch (Center Canteen) | lunch | lunch | lunch | lunch |
| 13:45-14:45 | Free | Free | Excursion: Ravenna (meet at Cantina at 2:15 PM) | Free | Free / departures |
| 14:45-15:30 | Scheideler | Khuller | (excursion) | Grandoni | |
| 15:30-16:15 | Albers | König | (excursion) | Suri | |
| 16:15-16:45 | coffee | coffee | (excursion) | coffee | |
| 16:45-17:30 | Gavoille | Fraigniaud | (excursion) | Goyal | |
| 17:30-18:15 | Gavoille / Malkhi / Open Problems | Fleischer | (excursion) | Mirrokni / Immorlica | |
| 18:15-20:00 | Reception & dinner (6-9 PM) | Open Problems (contd.) | (excursion) | Free | Free |
| 20:00- | Belvedere | Osteria della Serafina | Capello (Ravenna) | La Grotta | dinner at Cantina |
Notes: All breakfasts, lunches, and Friday's dinner will be at Center Canteen. Monday, Tuesday and Thursday dinners will be at the restaurants in Bertinoro mentioned above, and Wednesday's restaurant is in Ravenna.
Arrival: | Sunday, October 1, 2006 |
---|---|
Workshop: | Monday-Friday, October 2-6, 2006 |
Departure: | Friday & Saturday, October 6-7, 2006 |
Bertinoro itself is picturesque, with many narrow streets and walkways winding around the central peak. The meeting will be held in a redoubtable ex-Episcopal fortress that has been converted by the University of Bologna into a modern conference center with computing facilities and Internet access. From the fortress you can enjoy a beautiful vista that stretches from the Tuscan Apennines to the Adriatic coast.
The Bertinoro Center provides good instructions for reaching Bertinoro.
Check out the ride sharing board at http://groups.google.com/group/FlexibleND-Workshop.
Abstract TBA.
Abstract TBD.
We study the impact of combinatorial structure in congestion games on the complexity of computing pure Nash equilibria and the convergence time of best response sequences. In particular, we investigate which properties of the strategy spaces of individual players ensure a polynomial convergence time. We show that if the strategy space of each player consists of the bases of a matroid over the set of resources, then the lengths of all best response sequences are polynomially bounded in the number of players and resources. We also prove that this result is tight: the matroid property is a necessary and sufficient condition on the players' strategy spaces for guaranteeing polynomial-time convergence to a Nash equilibrium. In addition, we present an approach that enables us to devise hardness proofs for various kinds of combinatorial games, including first results on the hardness of market sharing games and of congestion games for overlay network design. Our approach also yields a short proof of the PLS-completeness of network congestion games. In particular, we show that network congestion games are PLS-complete for directed and undirected networks, even in the case of linear latency functions.
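As a toy illustration of the dynamics studied here, the sketch below runs best-response dynamics in a singleton congestion game, where each strategy is a single resource (the rank-1 uniform-matroid special case); the linear latency functions and all parameters are illustrative choices, not taken from the talk.

```python
# Hedged sketch: best-response dynamics in a toy singleton congestion game.
# Each of n players occupies one resource; a resource's latency depends on
# its load. All latencies and starting strategies are illustrative.

def best_response_dynamics(n_players, latencies):
    """latencies: list of per-resource latency functions of the load."""
    choice = [0] * n_players              # everyone starts on resource 0
    changed, steps = True, 0
    while changed:
        changed = False
        for i in range(n_players):
            load = [choice.count(r) for r in range(len(latencies))]
            def cost(r):
                # cost of resource r for player i (i leaves its current one)
                extra = 0 if r == choice[i] else 1
                return latencies[r](load[r] + extra)
            best = min(range(len(latencies)), key=cost)
            if cost(best) < cost(choice[i]):
                choice[i] = best          # improving best response
                changed = True
                steps += 1
    return choice, steps

# Two resources with identical linear latencies: players split evenly.
final, steps = best_response_dynamics(4, [lambda x: x, lambda x: x])
```

With matroid (here: singleton) strategy spaces, such sequences reach a pure Nash equilibrium after polynomially many improving moves, matching the talk's positive result.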
This is joint work with Heiner Ackermann and Heiko Roeglin.
In this talk I will study the problem of how to keep honest and adversarial nodes well-spread in an overlay network. More precisely, I will focus on overlay networks based on the space [0,1) of real numbers. At any time, every node in the system is assigned to some point in that space, and nodes may join or leave the system as they like. The problem is to find simple and efficient join and leave protocols for the nodes that are oblivious to their state yet preserve the following property: given n nodes in the system, any interval I of size (c \log n)/n, where c is a sufficiently large constant, contains Theta(|I| n) nodes, and the honest nodes in I are in the majority.
First, I consider the case that the adversary can only issue adaptive join/leave requests for adversarial nodes and then the case that the adversary can issue adaptive join/leave requests for all nodes. For both cases, simple join and leave protocols can be constructed that preserve the conditions above, with high probability. This has interesting applications for peer-to-peer systems.
We consider the problem of route discovery in a mesh network with faulty nodes. The number and the positions of the faulty nodes are unknown. It is known that a flooding strategy like expanding ring search can route a message in a number of time steps linear in the minimum number of steps d, while causing traffic (i.e., a total number of messages) of O(d^2). For optimizing traffic, a single-path strategy is optimal, producing traffic O(d + p), where p is the number of nodes that are adjacent to faulty nodes. We present a deterministic multi-path online routing algorithm that delivers a message within O(d) time steps, causing traffic O(d + p log^2 d). This algorithm is asymptotically as fast as flooding and nearly traffic-optimal, up to a polylogarithmic factor.
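For intuition, here is a minimal sketch of the expanding-ring-search baseline mentioned above, assuming a 2-D mesh encoded as a dictionary of non-faulty cells; the function name and encoding are hypothetical, and the sketch only illustrates why flooding trades extra traffic for speed.

```python
from collections import deque

def expanding_ring_search(ok, src, dst):
    """Flood outwards with doubling hop radius until dst is reached.
    ok: dict mapping mesh cells (x, y) -> True for non-faulty nodes.
    Returns (hop distance to dst, total messages over all rounds)."""
    r, traffic = 1, 0
    while True:
        dist = {src: 0}                   # bounded BFS up to radius r
        queue = deque([src])
        while queue:
            u = queue.popleft()
            if dist[u] == r:
                continue                  # do not expand beyond the ring
            x, y = u
            for v in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if ok.get(v) and v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        traffic += len(dist)              # every reached node got the message
        if dst in dist:
            return dist[dst], traffic
        r *= 2                            # expand the ring and retry

# Fault-free 3x3 mesh: rounds of radius 1, 2 and 4 are needed.
steps, msgs = expanding_ring_search(
    {(x, y): True for x in range(3) for y in range(3)}, (0, 0), (2, 2))
```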
This is joint work with Stefan Rührup, University of Paderborn.
After a brief presentation of the compact routing problem (whose goal is to design schemes with the best trade-off between the routing table size per node and the stretch on the route length achieved by the scheme), we survey the state of the art of the field. This includes the best current upper and lower bounds for the problem and all its variants (the undirected and directed graph models, the metric model, etc.). A list of open questions and further directions for this field is also presented.
Abstract TBD.
Consider the following classical problem in ad-hoc networks: n devices are distributed uniformly at random in a given region. Each device is allowed to choose its own transmission radius, and two devices can communicate if and only if they are within the transmission radius of each other. The aim is to (quickly) establish a connected network of low average and maximum degree.
In this talk we present the first efficient distributed protocol that, in poly-logarithmically many rounds and with high probability, sets up a connected network with O(1) average degree and O(log n) maximum degree. This is asymptotically the best possible.
Our algorithm is based on the following result, which is a non-trivial consequence of classical percolation theory: suppose that all the devices set up their transmission radius in order to reach the K closest devices. There exists a universal constant K (independent of n) such that, with high probability, there will be a unique giant component, i.e. a connected component of size Theta(n). Furthermore, all the remaining components will be of size O(log^2 n). This leads to an efficient distributed probabilistic test for membership in the giant component, which can be used in a second phase to achieve full connectivity. Preliminary experiments suggest that our approach might very well lead to efficient protocols in real wireless applications.
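The underlying communication model (each device raises its transmission radius to reach its K closest devices; two devices are linked iff each lies within the other's radius) can be checked on a small deterministic example; the sketch below uses a 1-D line metric purely for reproducibility, not the random 2-D placement of the talk.

```python
from collections import Counter

def k_closest_graph(points, K):
    """Edge (i, j) iff each endpoint is within the other's radius, where a
    node's radius is the distance to its K-th closest node (line metric)."""
    n = len(points)
    radius = []
    for i in range(n):
        dists = sorted(abs(points[i] - points[j]) for j in range(n) if j != i)
        radius.append(dists[K - 1])       # just enough to reach the K closest
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if abs(points[i] - points[j]) <= min(radius[i], radius[j])}

def components(n, edges):
    """Sizes of connected components, largest first (union-find)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, j in edges:
        parent[find(i)] = find(j)
    return sorted(Counter(find(i) for i in range(n)).values(), reverse=True)

# The isolated point at 8 stays outside the large component.
comps = components(5, k_closest_graph([0, 1, 2, 4, 8], 2))
```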
The optimization of traffic flow in congested urban road networks faces a well-known dilemma: optimizing system performance is unfair with respect to the individual drivers' travel times, while a fair user equilibrium may result in bad system performance. As a remedy, computing a system optimum with fairness conditions, realized by length constraints on the routes actually used by drivers, has been suggested. The resulting Constrained System Optimum Problem (CSO) poses interesting mathematical challenges, namely the non-linearity of the objective function and the necessity to deal with path constraints in large networks.
We present a Lagrangian relaxation of the CSO problem for which the Lagrangian dual function can be evaluated by a decomposition into constrained shortest path problems which we solve exactly employing state-of-the-art acceleration techniques. The Lagrangian dual problem is then solved by a special cutting plane method.
Finally, we obtain test results which suggest that this approach outperforms previously described solution schemes for the CSO problem.
A crucial assumption in many network design problems is that of knowing the traffic demand in advance. Unfortunately, measuring and predicting traffic demands are difficult problems. Moreover, in several applications, communication patterns change over time, and therefore we are not given a single static traffic matrix demand, but a set of non-simultaneous traffic demands. Still, we would like to design a min-cost network that is able to support any traffic demand that is from the given set.
In the talk I will discuss several strategies for approaching this problem when the set is either convex or finite, focusing in particular on single-source cases.
In the prize-collecting Steiner forest problem (PCSF) we are given an undirected graph G with non-negative edge costs c(e) for all edges, a set of k terminal pairs R and penalties \pi_1, \dots, \pi_k. A feasible solution (F,Q) consists of a forest F and a subset Q of terminal pairs such that for each terminal pair (s,t) in R either s and t are connected by F or (s,t) is contained in Q. The objective is to compute a feasible solution of minimum cost c(F) + \pi(Q).
A game-theoretic version of the above problem has k players, one for each terminal pair in R. Player i's ultimate goal is to connect s_i and t_i, and the player derives a privately held utility u_i from being connected. A service provider can connect the terminals s_i and t_i of player i in two ways: (1) by buying the edges of an s_i,t_i-path in G, or (2) by buying an alternate connection between s_i and t_i (maybe from some other provider) at a cost of \pi_i.
In this paper, we present a simple $3$-budget-balanced and group-strategyproof mechanism for the above problem. We also show that our mechanism computes client sets whose social cost is at most O(\log^2 k) times the minimum social cost of any player set. This matches a lower-bound that was recently given by Roughgarden and Sundararajan (STOC '06).
This is joint work with Anupam Gupta (Carnegie Mellon), Jochen Könemann (University of Waterloo), Stefano Leonardi (Università di Roma), and R. Ravi (Carnegie Mellon).
Approximation algorithms for the following scheduling problem are presented. A set of jobs is given where each job has a length that specifies the amount of time needed to process it. In addition, the input contains a conflict graph over the jobs. An edge between two jobs means that the intervals when these jobs are processed may not overlap. Given $m$ identical machines, the goal is to find a schedule that minimizes the makespan.
This is joint work with Lotem Kaplan and Dana Ron.
In the sorting buffer problem we are given an input sequence of points in a metric space. The task is to visit all points while minimizing the total distance that is traveled. For this, a sorting buffer can be used to rearrange/reorder the input sequence into an output sequence with improved travel distance. At each step the sorting buffer holds the first $k$ yet unvisited points of the input sequence. An online algorithm has to decide which of these points should be visited next; upon this decision, the corresponding point is removed from the buffer and the slot is filled with the next point from the input sequence.
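A minimal sketch of the buffer mechanics, using a naive nearest-point heuristic on the line metric; this only illustrates the model and is not the competitive algorithm of the talk.

```python
def nearest_first(seq, k, start=0):
    """Serve a sequence of points on the line with a sorting buffer of
    size k: always visit the buffered point nearest the current position.
    (Illustrative heuristic; the buffer refills from the input stream.)"""
    buf, rest = list(seq[:k]), list(seq[k:])
    pos, travel, order = start, 0, []
    while buf:
        nxt = min(buf, key=lambda p: abs(p - pos))
        buf.remove(nxt)
        travel += abs(nxt - pos)
        pos = nxt
        order.append(nxt)
        if rest:                          # refill the freed buffer slot
            buf.append(rest.pop(0))
    return order, travel

# A buffer of size 2 reorders [5, 1, 6, 2] into a cheaper tour than FIFO.
order, travel = nearest_first([5, 1, 6, 2], 2)
```

Serving the sequence in FIFO order (buffer size 1) from position 0 would travel 5 + 4 + 5 + 4 = 18; the reordering above travels 10, showing how the buffer helps.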
We present an online algorithm that obtains a competitive ratio of $O(\log n\cdot \log^3k)$ for arbitrary metric spaces and $O(\log^2\Delta\cdot\log k)$ for metric spaces of aspect ratio $\Delta$.
This is joint work with Matthias Englert and Matthias Westermann.
Given a graph G, we consider the problem of finding a minimum spanning tree (MST) that minimizes the maximum degree in the tree. This problem generalizes the Hamiltonian path problem and is NP-hard. For the unweighted case, Fürer and Raghavachari gave an algorithm with an additive error of one; i.e., it outputs a tree of degree D_{OPT}+1, where D_{OPT} is the maximum degree of the optimal tree. For the weighted case, Fischer gave a local-search algorithm that outputs a tree of maximum degree at most 2D_{OPT} + \log n.
We give the first constant-factor approximation for this problem. For a weighted graph G, our algorithm outputs a tree of max degree 2D_{OPT}+o(D_{OPT}). We use the push-relabel framework that has previously been used for fast, exact algorithms for the maximum flow problem by Goldberg and Tarjan.
We note that more recent work by Goemans improves the approximability of the problem to D_{OPT}+2, almost matching the lower bound.
This is joint work with Kamalika Chaudhuri, Satish Rao and Samantha Riesenfeld.
A basic question in Virtual Private Network (VPN) design is whether the symmetric version of the problem always has an optimal solution that is a tree network. An affirmative answer would imply that the symmetric VPN problem is solvable in polynomial time. We give an affirmative answer in case the communication network, within which the VPN must be created, is a circuit. This seems to be an important step towards an answer to the general question. The proof relies on a dual pair of linear programs and actually implies an even stronger property of VPNs. In this lecture I would like to give an interpretation of the dual problem (if we see the primal problem as the VPN problem). This interpretation is not restricted to the cycle but is valid for any graph.
This is joint work with Cor Hurkens and Judith Keijsper.
While much recent attention has focused on the computational tractability of computing equilibria in various game-theoretic settings, less attention has been paid to how the players themselves might converge to an equilibrium while making individually rational decisions.
We consider this question in the context of the classical market problem as put forth by Walras: There is a market consisting of a set of infinitely divisible goods; and a set of agents with an initial allocation of the goods and a utility function over combinations of goods. Trade is driven by the prices of the goods. Each agent can buy at most the worth of their initial allocation. Prices provide an equilibrium if, in addition, the demand for each good is bounded by the supply.
In 1874, Walras hypothesized that some form of natural price adjustment should lead to equilibrium prices for the market problem. Much later, Arrow et al. showed that a differential process specified by Samuelson did indeed converge to equilibrium in markets of gross substitutes. Shortly afterward, Uzawa described a discrete process that also converges. However, Uzawa's process depends on global knowledge of the market -- information that market players are unlikely to consistently share.
We describe a discrete and distributed process in which player actions depend on reasonable local knowledge only. We show not only that this converges in many markets of gross substitutes, but also that it does so quickly -- in polynomial time.
This is joint work with Richard Cole of NYU.
Abstract TBA.
Abstract TBA.
We consider approximation algorithms for non-uniform buy-at-bulk network design problems. Buy-at-bulk network design problems arise in settings where economies of scale and/or the availability of capacity in discrete units result in concave or sub-additive cost functions on the edges. One of the main application areas is the design of telecommunication networks. The typical scenario is that capacity (or bandwidth) on a link can be purchased in some discrete units u_1 < u_2 < ... < u_r with costs c_1 < c_2 < ... < c_r such that the cost per unit of bandwidth decreases: c_1/u_1 > c_2/u_2 > ... > c_r/u_r. The capacity units are sometimes referred to as cables or pipes. A basic problem that needs to be solved in this setting is the following: given a set of bandwidth demands, install sufficient capacity on the links of an underlying network topology so as to be able to route the demands. The goal is to minimize the total cost. Alternatively, we can think of the following scenario. We have two independent cost functions on the links of the network: a buy cost b(e) and a rent cost r(e). We are given a set of demands between some pairs of nodes. A feasible solution for the non-uniform multicommodity buy-at-bulk instance is to buy some links and rent some other ones such that all the demands can be routed and the total cost is minimized; the cost of every bought edge is b(e), and for rented edges it is r(e) per unit of flow (bandwidth) over that link.
This problem generalizes several classical problems, such as minimum cost flow, to the setting in which the cost of each edge is a concave monotone function. It is known that the problem is hard to approximate within a factor of \Omega(log^{1/4-\eps} n) for any \eps>0. We give the first poly-logarithmic approximation algorithm for this problem.
Time permitting we will mention the applications of our approach for stochastic two-stage network design and network design with vertex costs.
Abstract TBA.
Augmenting a network G consists of adding links (at random) so that the resulting augmented network H satisfies specific properties. Following the seminal work on small worlds by Kleinberg [STOC 2000], this talk will focus on greedy routing (GR) in augmented networks: nodes are identified by their IDs in the underlying graph G, and every intermediate node forwards the message to its neighbor in H that is the closest to the destination -- according to the metric of G. It is known [Slivkins, PODC'05] that if G has doubling dimension O(loglog n), then it can be augmented by adding one link to every node so that GR in the augmented network performs in a polylogarithmic expected number of steps. We have shown [ESA 2006] that this bound is the best possible, in the sense that there are networks of doubling dimension >> loglog n that cannot be augmented so that GR performs in a polylogarithmic expected number of steps. This talk will survey all these results and present some open problems related to the navigability of augmented networks.
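A small sketch of greedy routing in an augmented ring, in the spirit of Kleinberg's model; the power-of-two long-range links and all parameters below are illustrative choices only (the talk concerns general doubling metrics).

```python
import random

def greedy_route(n, source, target, long_contact):
    """Greedy routing on a ring of n nodes augmented with one long-range
    link per node (long_contact[i]): always forward to the neighbour
    closest to the target in the ring metric."""
    dist = lambda a, b: min((a - b) % n, (b - a) % n)
    cur, hops = source, 0
    while cur != target:
        nbrs = [(cur - 1) % n, (cur + 1) % n, long_contact[cur]]
        cur = min(nbrs, key=lambda v: dist(v, target))
        hops += 1
    return hops

random.seed(0)
n = 1024
# One long-range link per node, jumping a random power-of-two distance
# (so every distance scale is equally likely -- a harmonic-like choice).
contacts = [(i + random.choice([2 ** k for k in range(1, 10)])) % n
            for i in range(n)]
hops = greedy_route(n, 0, 517, contacts)
```

Greedy routing always terminates here, since a ring neighbour reduces the distance to the target by one at every step; the long-range links only speed this up.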
This is joint work with Emmanuelle Lebhar (ENS Lyon) and Zvi Lotker (CWI).
Suppose a seller wants to sell a finite collection of goods, which can be available in limited or unlimited supply. We have a set of potential customers, and each customer specifies a single subset of goods she is interested in and the maximum price she is willing to pay for that subset. If the goods are the edges of a graph and customers are requesting to purchase paths in this graph, then we can think of the problem as pricing computer network connections or transportation links. We call such customers single-minded, as each is interested in a single subset of goods as a whole. The problem is to compute the prices for the goods so as to maximize the overall revenue of the seller.
In another setting we will consider, called unit-demand, each customer also declares a single subset of goods together with non-zero budgets for each single good, and a ranking of all the goods the customer is interested in. Once prices are fixed, each customer chooses to buy one of the goods she can afford based on some predefined selection rule, such as min-buying, max-buying, and rank-buying. Again, the objective is to find the prices of goods to maximize the revenue of the seller.
In this talk we will consider the approximability of such problems, and will discuss both approximation algorithms and non-approximability results for many variants of these problems.
This is joint work with Patrick Briest.
We study the problem of dynamically maintaining information about the vertex connectivity of a graph under updates.
In this talk, I will describe some recent results on finding near-perfect binary phylogenies and a related result on imperfect phylogeny haplotyping. The main problem addressed can be recast as finding a minimum Steiner tree of n points in d-dimensional Hamming space; in particular, the goal is to find a tree of length (d+q) in time O(q^q poly(n,d)). Such fixed-parameter tractable algorithms can potentially break practical barriers to solving these problems on current datasets.
In this talk, we show how to obtain such a result in (binary) Hamming space. We then present the first results for a related problem of haplotyping that arises in bioinformatics.
This talk describes joint work with Srinath Sridhar, Guy Blelloch, and Russell Schwartz, (all from Carnegie Mellon), Kedar Dhamdhere (Google), and Eran Halperin (ICSI).
Consider a network in which a collection of source nodes maintain and periodically update data objects for a collection of sink nodes, each of which periodically accesses the data originating from some specified subset of the source nodes. We consider the task of efficiently relaying the dynamically changing data objects to the sinks from their sources of interest. Our focus is on the following "push-pull" approach for this data dissemination problem. Whenever a data object is updated, its source relays the update to a designated subset of nodes, its push set; similarly, whenever a sink requires an update, it propagates its query to a designated subset of nodes, its pull set. The push and pull sets need to be chosen such that every pull set of a sink intersects the push sets of all its sources of interest. We study the problem of choosing push sets and pull sets to minimize total global communication while satisfying all communication requirements.
We formulate and study several variants of the above data dissemination problem that take into account different paradigms for routing between sources (resp., sinks) and their push sets (resp., pull sets) -- multicast, unicast, and controlled broadcast -- as well as the aggregability of the data objects. Under the multicast model, we present an optimal polynomial-time algorithm for tree networks, which yields a randomized logarithmic-approximation algorithm for general networks, for which the problem is hard to approximate within a constant factor. Under the unicast model, we present a randomized logarithmic-approximation algorithm for non-metric costs and a matching hardness result. For metric costs, we present a constant-factor approximation and a hardness result for the case where the interests of any two sinks are either disjoint or identical. Finally, under the controlled broadcast model, we present optimal polynomial-time algorithms.
This is joint work with R. Chakinala, A. Kumarasubramanian, K. Laing, R. Manokaran, and P. Rangan.
The K-center problem is a fundamental clustering question. How do we cluster n points into K clusters, so that we minimize the radius of the largest cluster? This problem is NP-hard, and an optimal approximation factor of 2 can be obtained quite easily. Our focus will be on studying generalizations of this for the case where we have to cluster only a certain fraction of the points, rather than all the points. We also study generalizations where there are upper and lower bounds on the sizes of the clusters.
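The factor-2 approximation that "can be obtained quite easily" is the classical farthest-point heuristic (due to Gonzalez); a minimal sketch on 1-D points, with the distance function and data purely illustrative:

```python
# Hedged sketch of the farthest-point heuristic for K-center: repeatedly
# add as a new center the point farthest from all chosen centers.

def k_center(points, k, dist):
    centers = [points[0]]                 # arbitrary first center
    while len(centers) < k:
        # the point currently worst-served becomes the next center
        far = max(points, key=lambda p: min(dist(p, c) for c in centers))
        centers.append(far)
    radius = max(min(dist(p, c) for c in centers) for p in points)
    return centers, radius

# Two natural clusters on the line; optimal radius is 1, heuristic gets 2,
# exactly matching the factor-2 guarantee.
d = lambda a, b: abs(a - b)
centers, radius = k_center([0, 1, 2, 10, 11, 12], 2, d)
```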
This is a survey talk with results from several papers.
A unified approach, based on the primal-dual method, is discussed for a wide range of online covering and packing problems with various objective functions. This approach has led to a simple alternative view and analysis of many previously suggested algorithms, as well as to new results.
Sensor networks are a promising technology for achieving sensing coverage over large geographical areas inexpensively. Because of the lightweight nature of sensor nodes, however, it is essential to explore a minimalistic architecture for these networks. In this talk, I will discuss several algorithmic problems motivated by minimalism in the design of sensor networks.
We study a network creation game recently proposed by Fabrikant, Luthra, Maneva, Papadimitriou and Shenker. In this game, each player (vertex) can create links (edges) to other players at a cost of alpha per edge. The goal of every player is to minimize the sum consisting of (a) the cost of the links he has created and (b) the sum of the distances to all other players. Fabrikant et al. proved an upper bound on the price of anarchy of O(sqrt(alpha)). They also formulated a tree conjecture stating that there exists a constant A such that, for any alpha > A, all non-transient Nash equilibria form a tree. They showed that if the tree conjecture holds, the price of anarchy is constant, for all alpha.
We disprove the tree conjecture, i.e., we construct a family of arbitrarily large graphs that contain cycles and form non-transient Nash equilibria for non-constant alpha. Furthermore, we give improved upper bounds on the price of anarchy. We prove constant upper bounds for alpha in O(sqrt(n)) and for alpha >= 12n log n. For the intermediate values we show an improved bound of O(1 + min{ alpha^2/n, n^2/alpha }^{1/3}). Additionally, we present characterizations of Nash equilibria and extend our results to a weighted network creation game as well as to scenarios with cost sharing.
This is joint work with Stefan Eilts (Freiburg) and Eyal Even-Dar, Yishay Mansour and Liam Roditty (Tel-Aviv).
In this paper we address the open problem, posed by Anshelevich et al. in "The Price of Stability for Network Design with Fair Cost Allocation", of bounding the price of stability for network design with fair cost allocation in undirected graphs. We consider the case where there is an agent at every vertex. We show that the price of stability is O(loglog n). We prove this by defining a particular improving dynamics in a related graph. This proof technique may have other applications and is of independent interest.
This is joint work with Amos Fiat, Haim Kaplan, Meital Levy, and Ronen Shabo.
Connected dominating sets have been proposed as routing backbones for wireless ad-hoc networks. In networks with heterogeneous nodes, the weighted version of the dominating set problem in unit disk graphs arises: For a given unit disk graph with weighted vertices, compute a vertex subset of smallest weight such that each vertex of the graph is contained in the subset or has a neighbor in the subset. We present the first constant-factor approximation algorithm for this problem. The algorithm is obtained in two steps: First, the problem is reduced to the problem of covering a set of points located in a small square using a minimum-weight set of unit disks. Then, a constant-factor approximation algorithm for the latter problem is obtained using enumeration and dynamic programming techniques exploiting the geometry of unit disks. We also show how to extend our algorithm to get a constant-factor approximation algorithm for the minimum-weight connected dominating set problem in unit disk graphs.
This is joint work with C. Ambuhl, M. Mihalak and M. Nunkesser.
We study a class of network cost-sharing games, where the set of Nash equilibria depends fundamentally on the choice of the underlying edge cost-sharing protocols. Our goal is to design cost-sharing protocols that minimize the inefficiency of equilibria in the resulting network design game.
We differentiate between oblivious cost-sharing protocols, where each edge computes cost shares using only "local" information, and the more powerful but less practical class of non-oblivious protocols, where the cost-sharing method of an edge can be informed by the global structure of the network. In undirected networks, we give nearly matching upper and lower bounds on the worst-case price of anarchy achievable by both oblivious and non-oblivious cost-sharing. In directed networks, simple examples show that non-trivial positive results are possible only for the price of stability. We show that among oblivious cost-sharing protocols, the Shapley cost-sharing scheme minimizes the worst-case price of stability.
This is joint work with Ho-Lin Chen and Gregory Valiant.
Following the well-studied two-stage optimization framework for stochastic optimization, we study approximation algorithms for robust two-stage optimization problems with an exponential number of scenarios. Prior to this work, Dhamdhere et al. introduced approximation algorithms for two-stage robust optimization problems with a polynomial number of scenarios. To model an exponential number of scenarios, we assume the set of possible scenarios is given implicitly, for example by an upper bound on the number of active clients. In two-stage robust optimization, we need to pre-purchase some resources in the first stage before the adversary's action. In the second stage, after the adversary chooses the clients that need to be covered, we need to complement our solution by purchasing additional resources at an inflated price. The goal is to minimize the cost in the worst-case scenario. We give a general approach for solving such problems using LP rounding. Our approach uncovers an interesting connection between robust optimization and online competitive algorithms. We use this approach, together with known online algorithms, to develop approximation algorithms for several robust covering problems, such as set cover, vertex cover, and edge cover.
We also study a simple buy-at-once algorithm that either covers all items in the first round or does nothing in the first round and waits to build the complete solution in the second round. We show that this algorithm gives tight approximation factors for unweighted variants of these covering problems, but performs poorly for general weighted problems.
This is joint work with Uri Feige, Kamal Jain, and Mohammad Mahdian.
Robust optimization has traditionally focused on uncertainty in data and costs in optimization problems, formulating models whose solutions are optimal in the worst case among the various uncertain scenarios in the model. While these approaches may be thought of as defining data- or cost-robust problems, we formulate a new "demand-robust" model motivated by recent work on two-stage stochastic optimization problems. We propose this in the framework of general covering problems and prove a general structural lemma about special types of first-stage solutions for such problems: there exists a first-stage solution that is feasible for the union of the demands of some subset of the scenarios and whose objective function value is no more than twice the optimal. We then provide approximation algorithms for a variety of standard discrete covering problems in this setting, including minimum cut, minimum multicut, shortest paths, Steiner trees, vertex cover and uncapacitated facility location.
In this talk, I will mainly focus on the demand-robust minimum cut problem and describe two approximation algorithms: an O(log n)-approximation and a constant-factor approximation. These algorithms can easily be adapted to give similar approximation guarantees for the stochastic version as well.
These results are from two papers and are joint work with Kedar Dhamdhere, Daniel Golovin, R. Ravi, and Mohit Singh.
Scientific Organizing Committee:
- Matthew Andrews, Lucent Bell Labs
- Moses Charikar, Princeton University
- Anupam Gupta, Carnegie Mellon University
- Stefano Leonardi, Università di Roma "La Sapienza"
- R. Ravi, Carnegie Mellon University

Local Organization:
- Eleonora Campori, Centro Congressi di Bertinoro

Sponsored by:
- BICI, Bertinoro International Center for Informatics
- ALADDIN, Center for ALgorithm ADaptation Dissemination and INtegration