Abstracts 2012

Networks that fix themselves aka Self-healing Networks

by Amitabh Trehan (Technion)

Given a connected graph, two players play a turn-based game: first, the red guy removes a node (and therefore its adjoining edges too); then the blue guy adds edges between the remaining nodes. What edges should the blue guy add so that, over a whole run of the game, the network remains connected, no node gets too many new edges, and the distance between any pair of nodes (i.e. the network stretch) does not blow up by much? Now, imagine that the nodes in the graph are computers and the graph is a distributed network; the nodes themselves are the blue guys, but they do not know anybody beyond the nodes they share an edge with. Solving such problems is the essence of self-healing distributed networks.

We shall present the distributed self-healing model, which is especially applicable to reconfigurable networks such as peer-to-peer and wireless mesh networks, and present fully distributed algorithms that can 'heal' certain global and topological properties using only local information. Forgiving Graph [PODC 2009; DC 2012] uses a 'virtual graphs' approach to maintain connectivity, low degree increase, and closeness of nodes (i.e. low stretch). Xheal [PODC 2011; Xheal: localized self-healing using expanders] further maintains the expansion and spectral properties of the network. We present a fully distributed implementation in the LOCAL message-passing model. However, we are working on ideas to allow even more efficient implementations and stronger guarantees.
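
As a warm-up, here is a minimal sketch of one naive strategy for the blue player (emphatically not the Forgiving Graph or Xheal algorithm): when a node is deleted, reconnect its surviving neighbors in a cycle. This preserves connectivity and raises each neighbor's degree by at most two per deletion, but it makes no attempt to control stretch or the degree increase accumulated over a long run of the game. The adjacency-dict representation is just an illustrative assumption.

def heal_after_deletion(adj, v):
    """One round of the repair game: delete v and reconnect its surviving
    neighbors in a cycle. adj maps each node to the set of its neighbors."""
    neighbors = sorted(adj.pop(v))
    for u in neighbors:
        adj[u].discard(v)
    # reconnect the orphaned neighbors in a cycle (at most 2 new edges each)
    for u, w in zip(neighbors, neighbors[1:] + neighbors[:1]):
        if u != w:                      # skip the degenerate one-neighbor case
            adj[u].add(w)
            adj[w].add(u)
    return adj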


Fast Parallel Matrix Multiplication

by Oded Schwartz (UC Berkeley)

Faster algorithms can be obtained by minimizing communication, that is,
by reducing the amount of data sent across the memory hierarchy and
between processors.
The communication costs of algorithms, in terms of time or energy, are
typically much higher than the arithmetic costs.
We have computed lower bounds on these communication costs by analyzing
geometric and expansion properties of the underlying computation DAG of
algorithms. These techniques (honored by the SIAG-LA prize for 2009-2011
and the SPAA'11 best paper award) have inspired many new algorithms in
cases where existing ones proved not to be communication-optimal.

Parallel matrix multiplication is one of the most studied fundamental
problems in parallel computing. We obtain a new parallel algorithm
based on Strassen's fast matrix multiplication that is communication-optimal.
It exhibits perfect strong scaling, within the maximum possible range.
The algorithm asymptotically outperforms all known parallel matrix
multiplication algorithms, classical and Strassen-based. It also
demonstrates significant speedups in practice, as benchmarked on
several supercomputers (Cray XT4, Cray XE6, and IBM BG/P).
Our parallelization approach is simple to implement, and extends to other
algorithms. Both the lower bounds and the new algorithms have an
immediate impact on saving power and energy at the algorithmic level.
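
For reference, here is a minimal sequential sketch of Strassen's recursion itself (seven recursive multiplications per level). The talk's contribution, the communication-optimal parallelization of this recursion and its analysis, is not reproduced here; the cutoff parameter and the power-of-two assumption are illustrative simplifications.

import numpy as np

def strassen(A, B, cutoff=64):
    """Classic Strassen recursion for square matrices whose dimension is a
    power of two; falls back to ordinary multiplication below the cutoff."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C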

Based on joint work with
Grey Ballard, James Demmel, Olga Holtz, and Ben Lipshitz


Predecessor Queries on Dynamic Subsets of an Ordered List, with Applications

by Tsvi Kopelowitz (Weizmann Institute)

In the famous order maintenance problem, one wishes to maintain a dynamic list L of size n under insertions, deletions, and order queries. In an order query, one is given two nodes from L and must determine which node precedes the other in L. In an extension of this problem, named the Predecessor search on Dynamic Subsets of an Ordered Dynamic List problem (POLP for short), it is also necessary to maintain dynamic subsets S_1, ..., S_k \subset L, such that, given some u in L, it is possible to quickly locate the predecessor of u in any S_i.

The POLP has some interesting applications, such as the fastest algorithm for on-line text indexing (suffix tree construction) and improvements on maintaining fully persistent arrays. The talk will focus both on the applications and on an efficient solution for the POLP. In addition, as part of the solution for the POLP, a new and simple solution is provided for the order maintenance problem, in contrast to the highly complex and incomplete solutions that were previously known.
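
To fix the interface (and only the interface, not the efficient data structure from the talk), here is a naive sketch in which every operation may take linear time; the class and method names are made up for illustration.

class NaivePOLP:
    """Naive, linear-time-per-operation sketch of the POLP interface."""
    def __init__(self):
        self.L = []          # the ordered dynamic list
        self.subsets = {}    # subset name -> set of elements

    def insert_after(self, x, y=None):
        """Insert x right after y (or at the front if y is None)."""
        pos = 0 if y is None else self.L.index(y) + 1
        self.L.insert(pos, x)

    def delete(self, x):
        self.L.remove(x)
        for s in self.subsets.values():
            s.discard(x)

    def order(self, u, v):
        """Return True iff u precedes v in L."""
        return self.L.index(u) < self.L.index(v)

    def add_to_subset(self, name, x):
        self.subsets.setdefault(name, set()).add(x)

    def predecessor(self, name, u):
        """The last element before u in L that belongs to the named subset."""
        s = self.subsets.get(name, set())
        for x in reversed(self.L[:self.L.index(u)]):
            if x in s:
                return x
        return None

For example, after inserting a, b, c in that order and placing only a in S_1, predecessor('S_1', 'c') returns a.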


Strategy iteration is strongly polynomial for 2-player turn-based stochastic games with a constant discount factor

by Thomas Dueholm Hansen (TAU)

A fundamental model of operations research is the finite, but
infinite-horizon, discounted Markov Decision Process. Ye showed
recently that the simplex method with the Dantzig pivoting rule, as well
as Howard's policy iteration algorithm, solve discounted Markov
decision processes, with a constant discount factor, in strongly
polynomial time. More precisely, Ye showed that for both algorithms
the number of iterations required to find the optimal policy is
bounded by a polynomial in the number of states and actions. We
improve Ye's analysis in two respects. First, we show a tighter bound
for Howard's policy iteration algorithm. Second, we show that the same
bound applies to the number of iterations performed by the strategy
iteration (or strategy improvement) algorithm used for solving
2-player turn-based stochastic games with discounted zero-sum
rewards. This provides the first strongly polynomial algorithm for
solving these games.
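
For orientation, here is a minimal sketch of Howard's policy iteration for a single-player discounted MDP, the algorithm whose iteration bound Ye proved and this work tightens; the two-player strategy iteration discussed in the talk extends this improvement scheme to zero-sum turn-based stochastic games. The array-based interface is an assumption for illustration.

import numpy as np

def policy_iteration(P, r, gamma):
    """Howard's policy iteration. P[a] is the |S|x|S| transition matrix of
    action a, r[a] its reward vector, and gamma in (0,1) the discount factor."""
    n_actions, n_states = len(P), P[0].shape[0]
    policy = np.zeros(n_states, dtype=int)
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) v = r_pi
        P_pi = np.array([P[policy[s]][s] for s in range(n_states)])
        r_pi = np.array([r[policy[s]][s] for s in range(n_states)])
        v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
        # Policy improvement: act greedily with respect to v in every state
        q = np.array([r[a] + gamma * P[a] @ v for a in range(n_actions)])
        new_policy = q.argmax(axis=0)
        if np.array_equal(new_policy, policy):
            return policy, v
        policy = new_policy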

Joint work with Peter Bro Miltersen and Uri Zwick


A Polylogarithmic-Competitive Algorithm for the k-Server Problem

by Niv Buchbinder (TAU)

The k-server problem is one of the most fundamental and extensively studied problems in online computation. Suppose there is an n-point metric space and k servers are located at some of the points of the metric space. At each time step, an online algorithm is given a request at one of the points of the metric space, and this request is served by moving a server to the requested point (if there is no server there already). The cost of serving a request is defined to be the distance traveled by the server. Given a sequence of requests, the task is to devise an online strategy minimizing the sum of the costs of serving the requests.
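
To make the movement-cost objective concrete, here is a sketch of the trivial greedy strategy (serve each request with the nearest server). It is known not to be competitive in general and is not the algorithm of this talk; the metric is assumed to be given as a distance table.

def greedy_k_server(dist, initial, requests):
    """Serve each request with the closest server; returns the total distance
    traveled. dist[u][v] is the metric, `initial` the starting positions."""
    servers = list(initial)
    total_cost = 0
    for req in requests:
        if req in servers:
            continue                      # already covered, cost 0
        i = min(range(len(servers)), key=lambda j: dist[servers[j]][req])
        total_cost += dist[servers[i]][req]
        servers[i] = req                  # move that server to the request
    return total_cost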

We give the first polylogarithmic-competitive randomized online algorithm for the k-server problem on an arbitrary finite metric space. In particular, our algorithm achieves a competitive ratio of O(\log^3 n \log^2 k) for any metric space on $n$ points. Our algorithm improves upon the deterministic (2k-1)-competitive algorithm of Koutsoupias and Papadimitriou whenever n is sub-exponential in k.

Joint work with Nikhil Bansal, Aleksander Madry and Seffi Naor.


Representation of Finite Games as Network Congestion Games

by Igal Milchtaich (BIU)

Weighted network congestion games are a natural model for interactions involving finitely many non-identical users of network resources, such as road segments or communication links. However, in spite of their special form, these games are not fundamentally special: every finite game can be represented as a weighted network congestion game. Therefore, these games cannot possibly have any special properties. The same is true for the class of (unweighted) network congestion games with player-specific costs, in which the players differ in their cost functions rather than their weights. In the first representation, different players may contribute differently to congestion, and in the second, they may be differently (negatively) affected by it. The intersection of the two classes consists of the unweighted network congestion games. These games are special: a finite game can be represented in this form as a network congestion game where both the players’ weights and their cost functions are identical if and only if it is an exact potential game.
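
To recall the exact-potential property invoked in the last sentence, here is a small sketch of Rosenthal's potential for unweighted congestion games; the data representation (strategies as sets of resources, per-resource cost functions of the load) is an assumption, and the representation constructions of the talk are not shown.

def loads(profile):
    """Number of players using each resource; profile[i] is player i's set of resources."""
    load = {}
    for strategy in profile:
        for res in strategy:
            load[res] = load.get(res, 0) + 1
    return load

def player_cost(i, profile, cost):
    """Player i pays, on each resource it uses, the cost at the current load."""
    load = loads(profile)
    return sum(cost[res](load[res]) for res in profile[i])

def rosenthal_potential(profile, cost):
    """Rosenthal's exact potential: for any unilateral deviation by one player,
    the change in this quantity equals the change in that player's cost."""
    load = loads(profile)
    return sum(cost[res](k) for res, x in load.items() for k in range(1, x + 1))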


Learning patterns in Big data from small data using core-sets

by Dan Feldman (MIT, The Distributed Robotics Lab)

When we need to solve an optimization problem we usually use the best
available algorithm/software or try to improve it. In recent years we
have started exploring a different approach: instead of improving the
algorithm, reduce the input data and run the existing algorithm on the
reduced data to obtain the desired output much faster.

A core-set for a given problem is a semantic compression of its input,
in the sense that a solution for the problem with the (small) core-set
as input yields a provably approximate solution to the problem with the original (Big) Data.
A core-set can usually be computed in one pass over a streaming input, with a manageable amount of memory, and in parallel.
For real-time performance we use Hadoop, clouds, and GPUs.
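
As a toy illustration of the paradigm (not one of the provable core-set constructions behind this work): reduce the data to a small weighted set, then run the existing solver on that set instead of the full input.

import random

def uniform_sample_coreset(points, size):
    """Toy 'core-set' by uniform sampling: each sampled point is weighted to
    stand in for len(points)/size original points."""
    sample = random.sample(points, size)
    return sample, [len(points) / size] * size

def weighted_mean(points, weights):
    """The 'existing algorithm' run on the reduced data (here, the 1-mean)."""
    return sum(w * p for p, w in zip(points, weights)) / sum(weights)

# weighted_mean(*uniform_sample_coreset(data, 100)) approximates the mean of
# the full data set, with the error shrinking as the sample size grows.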

In this talk I will describe how we applied this magical paradigm to
obtain algorithmic achievements with performance guarantees in iDiary:
a system that combines sensor networks, robotics, differential privacy,
and text mining. It turns large signals collected from smartphones or autonomous robots into maps and textual
descriptions of their trajectories. The system features a user interface
similar to Google Search that allows users to type text queries on
their activities (e.g., "Where did I have dinner last time I visited Paris?")
and receive textual answers based on their signals.


Minimal Indices for Successor Search

by Sarel Cohen (TAU)

We give a new successor data structure which improves upon the index size of the P\v{a}tra\c{s}cu-Thorup data structures, reducing the index size from $O(n w^{4/5})$ bits to $O(n \log w)$ bits, with optimal probe complexity. Alternatively, our new data structure can be viewed as matching the space complexity of the (probe-suboptimal) $z$-fast trie of Belazzougui et al. Thus, we get the best of both approaches with respect to both probe count and index size. The penalty we pay is an extra $O(\log w)$ inter-register operations. Our data structure can also be used to solve the weak prefix search problem; an index size of $O(n \log w)$ bits is known to be optimal for any such data structure.

The technical contributions include highly efficient single-word indices, with out-degree $w/\log w$ (compared to the $w^{1/5}$
out-degree of fusion-tree-based indices). To construct such high-efficiency single-word indices we devise highly efficient bit
extractors which, we believe, are of independent interest.
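
To pin down what a bit extractor computes, here is a naive bit-at-a-time sketch; the extractors devised in the talk are designed to do this far more efficiently with word-level operations, which is exactly the technical content omitted here.

def extract_bits(x, positions):
    """Gather the bits of the word x found at the given positions, packed into
    the low-order bits of the result (least significant first)."""
    out = 0
    for i, p in enumerate(positions):
        out |= ((x >> p) & 1) << i
    return out

# e.g. extract_bits(0b101100, [2, 3, 5]) == 0b111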

Joint work with Amos Fiat, Moshik Hershcovitch and Haim Kaplan.


Lower Bounds on Individual Sequence Regret

by Eyal Gofer (TAU)

In this work, we lower bound the individual sequence anytime regret of a large family of online algorithms. This bound depends on the quadratic variation of the sequence, Q_T, and the learning rate. Nevertheless, we show that any learning rate that guarantees a regret upper bound of O(sqrt{Q_T}) necessarily implies an Omega(sqrt{Q_T}) anytime regret on any sequence with quadratic variation Q_T.
The algorithms we consider are linear forecasters whose weight vector at time t+1 is the gradient of a concave potential function of cumulative losses at time t. We show that these algorithms include all linear Regularized Follow the Leader algorithms. We prove our result for the case of potentials with negative definite Hessians, and potentials for the best expert setting satisfying some natural regularity conditions. In the best expert setting, we give our result in terms of the translation-invariant relative quadratic variation.
We apply our lower bounds to Randomized Weighted Majority and to linear cost Online Gradient Descent.
We show that bounds on anytime regret imply a lower bound on the price of "at the money" call options in an arbitrage-free market. Given a lower bound Q on the quadratic variation of a stock price, we give an Omega(sqrt{Q}) lower bound on the option price, for Q<0.5. This lower bound has the same asymptotic behavior as the Black-Scholes pricing and improves a previous Omega(Q) result given by DeMarzo et al.
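
For concreteness, here is a compact sketch of Randomized Weighted Majority in its multiplicative-weights (Hedge) form, one of the algorithms the lower bounds are applied to; the learning rate eta is exactly the parameter whose choice the result constrains, and the loss-matrix interface is an assumption for illustration.

import math

def hedge_regret(losses, eta):
    """Run Hedge / Randomized Weighted Majority on losses[t][i] (loss of expert
    i at round t) and return the expected regret against the best expert."""
    n = len(losses[0])
    weights = [1.0] * n
    alg_loss, cumulative = 0.0, [0.0] * n
    for round_losses in losses:
        total_w = sum(weights)
        probs = [w / total_w for w in weights]
        alg_loss += sum(p * l for p, l in zip(probs, round_losses))
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, round_losses)]
        cumulative = [c + l for c, l in zip(cumulative, round_losses)]
    return alg_loss - min(cumulative)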

Joint work with Yishay Mansour (ALT 2012)


Robust Domain Adaptation

by Mariano Schain (TAU)

We derive a generalization bound for domain adaptation by using the properties of robust algorithms.

Our new bound depends on $\lambda$-shift, a measure of prior knowledge regarding the similarity of source and target domain distributions.

Based on the generalization bound, we design SVM variants for binary classification and regression domain adaptation algorithms.

Joint work with Yishay Mansour

[ISAIM 2012]


How Bad is Forming Your Own Opinion?

by Sigal Oren (Cornell, Microsoft)

A long-standing line of work in economic theory has studied models by which a group of people in a social network, each holding a numerical opinion, can arrive at a shared opinion through repeated averaging with their neighbors in the network. Motivated by the observation that consensus is rarely reached in real opinion dynamics, we study a related sociological model in which individuals’ intrinsic beliefs counterbalance the averaging process and yield a diversity of opinions.

By interpreting the repeated averaging as best-response dynamics in an underlying game with natural payoffs, and the limit of the process as an equilibrium, we are able to study the cost of disagreement in these models relative to a social optimum. We provide a tight bound on the cost at equilibrium relative to the optimum; our analysis draws a connection between these agreement models and extremal problems for generalized eigenvalues. We also consider a natural network design problem in this setting, where adding links to the underlying network can reduce the cost of disagreement at equilibrium.
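
A minimal sketch of the repeated-averaging best-response dynamics, assuming the natural quadratic payoffs in which player i pays (z_i - s_i)^2 plus the weighted squared disagreement with its neighbors; weights are taken symmetric here, and the exact normalization in the talk may differ.

def best_response_dynamics(s, w, iterations=1000):
    """s[i] is player i's intrinsic belief, w[i][j] the (symmetric) edge weight.
    Each player repeatedly moves to the minimizer of
    (z_i - s_i)^2 + sum_j w[i][j] * (z_i - z_j)^2, i.e. a weighted average
    of its own belief and its neighbors' current opinions."""
    z = list(s)
    n = len(s)
    for _ in range(iterations):
        for i in range(n):
            num = s[i] + sum(w[i][j] * z[j] for j in range(n) if j != i)
            den = 1 + sum(w[i][j] for j in range(n) if j != i)
            z[i] = num / den
    return z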

This is joint work with David Bindel and Jon Kleinberg.


Testing Properties of Collections of Distributions: Equivalence and Similar Means

by Reut Levi (TAU)

We propose a framework for studying property testing of collections of distributions, where the number of distributions in the collection is a parameter of the problem. Previous work on property testing of distributions considered single distributions or pairs of distributions. We suggest two models that differ in the way the algorithm is given access to samples from the distributions: in one model the algorithm may ask for a sample from any distribution of its choice, and in the other the distributions from which it gets samples are selected randomly. In these models we have studied two basic problems:

1) Distinguishing between the case that all the distributions in the collection are the same (or very similar), and the case that the collection is relatively far from having this property. We give almost tight upper and lower bounds for this testing problem, as well as study an extension to a clusterability property. One of our lower bounds directly implies a lower bound on testing independence of a joint distribution, a result which was left open by previous work.

2) In a more recent work, we consider the problem of testing another basic property of collections of distributions: having similar means. Namely, the algorithm should accept collections of distributions in which all distributions have means that do not differ by more than some given parameter, and should reject collections that are relatively far from having this property. We provide upper and lower bounds in both models. In particular, in the query model, the complexity of the problem is polynomial in $1/\epsilon$ (where $\epsilon$ is the given distance parameter). On the other hand, in the sampling model, the complexity grows roughly as $m^{1-\poly(1/\epsilon)}$, where $m$ is the number of distributions.
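
To make the two access models concrete, here is a small sketch of their interfaces; the class names are made up, and the random selection is taken to be uniform here, a detail the abstract does not fix.

import random

class QueryAccess:
    """Query model: the tester chooses which distribution to sample from."""
    def __init__(self, samplers):
        self.samplers = samplers          # list of m zero-argument samplers
    def sample(self, i):
        return self.samplers[i]()

class RandomSelectionAccess:
    """Sampling model: the distribution sampled from is selected at random."""
    def __init__(self, samplers):
        self.samplers = samplers
    def sample(self):
        i = random.randrange(len(self.samplers))
        return i, self.samplers[i]()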

Joint work with Prof. Dana Ron and Prof. Ronitt Rubinfeld.


Joint cache partition and job assignment on multicore processors

by Omry Tuval (TAU)

Multicore shared cache processors pose a challenge for designers of embedded systems who try to achieve minimal and predictable execution time of workloads consisting of several jobs. One way in which this challenge was addressed is by statically partitioning the cache among the cores and assigning the jobs to the cores with the goal of minimizing the makespan. Several heuristic algorithms have been proposed that jointly decide how to partition the cache among the cores and how to assign the jobs. We initiate a theoretical study of this problem which we call the joint cache partition and job assignment problem.

In this problem the input is a set of jobs, where each job is specified by a function that gives the running time of the job for each possible cache allocation. The goal is to statically partition a cache of size K among c cores and assign each job to a core such that the makespan is minimized.
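
To pin the objective down, here is a brute-force sketch that enumerates all cache partitions and job assignments; it runs in exponential time and is only meant to define the problem (the representation of jobs as Python callables is an assumption), whereas the talk gives approximation algorithms.

from itertools import product

def brute_force_makespan(jobs, K, c):
    """jobs[j](x) is the running time of job j given a cache allocation of x;
    a cache of size K is split among c cores and each job is assigned to one
    core. Returns the minimum possible makespan."""
    best = float('inf')
    # all cache partitions (x_1, ..., x_c) with nonnegative parts summing to K
    for part in product(range(K + 1), repeat=c):
        if sum(part) != K:
            continue
        # all assignments of jobs to cores
        for assign in product(range(c), repeat=len(jobs)):
            load = [0] * c
            for j, core in enumerate(assign):
                load[core] += jobs[j](part[core])
            best = min(best, max(load))
    return best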

By a careful analysis of the space of possible cache partitions we obtain a constant approximation algorithm for this problem. We give better approximation algorithms for a few important special cases. We also provide lower and upper bounds on the improvement that can be obtained by allowing dynamic cache partitions and dynamic job assignments.

We show that our joint cache partition and job assignment problem generalizes an interesting special case of the problem of scheduling on unrelated machines that is still more general than scheduling on related machines. In this special case the machines are ordered by their "strength" and the running time of each job decreases when it is scheduled on a stronger machine. We call this problem the ordered unrelated machines scheduling problem. We give a polynomial time algorithm for scheduling on ordered unrelated machines for instances where each job has only two possible load values and the sets of load values for all jobs are of constant size.
