Abstracts: 2014 Spring semester

Greedy-like algorithms and the non-monotone submodular maximization problem

by Allan Borodin (University of Toronto)

We are interested in the following ill-defined problem: what is a conceptually simple algorithm, and what are the power and limitations of such algorithms? In particular, what is a greedy algorithm, or more generally a myopic algorithm, for a combinatorial optimization problem? And, to be even more specific, motivated by the Buchbinder et al. "online double sided greedy algorithm" for the unconstrained non-monotone submodular maximization problem, what are (if any) the limitations of algorithms "of this genre" for the general unconstrained problem and for specific instances of the problem, such as Max-Di-Cut?
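
For readers less familiar with the algorithm being referenced, the following is a minimal sketch of a deterministic "double greedy" pass in the spirit of Buchbinder et al.; the interface is illustrative. The randomized variant, which chooses between the two updates with probabilities proportional to the two marginal gains, achieves the optimal 1/2 approximation, while the deterministic pass below guarantees 1/3.

```python
def double_greedy(ground_set, f):
    """Deterministic 'double greedy' pass for unconstrained non-monotone
    submodular maximization (in the spirit of Buchbinder et al.).
    `f` maps a set of elements to a real value; for non-negative submodular f,
    the returned set has value at least 1/3 of the optimum."""
    X, Y = set(), set(ground_set)
    for e in ground_set:                 # a single myopic, online pass
        a = f(X | {e}) - f(X)            # marginal gain of adding e to X
        b = f(Y - {e}) - f(Y)            # marginal gain of removing e from Y
        if a >= b:
            X.add(e)                     # commit to keeping e
        else:
            Y.discard(e)                 # commit to dropping e
    return X                             # at the end X == Y
```

Each element is inspected exactly once and the decision is irrevocable, which is precisely the kind of myopic, online behaviour whose power the talk aims to delimit.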

Joint work with Norman Huang


Playing Non-linear Games with Linear Oracles

by Dan Garber (Technion)

Online Learning deals with playing repeated games against an adversary with the aim of minimising a quantity known as regret, and has attracted much attention in recent years due to its applications in algorithm design and machine learning. The computational bottleneck in the application of online learning algorithms is the computation of a projection of a point onto a convex set, which for many problems of interest, such as those that arise in combinatorial optimization and matrix optimization, is prohibitive. An alternative approach is to design algorithms for online learning that trade expensive projection steps in favour of linear optimization steps over the feasible domain, which for many problems are much more efficient and simple to implement.

In this talk we will present an algorithm for online learning that uses only linear optimization steps over the feasible domain (one per iteration of the game) and attains optimal regret when the feasible domain is a polytope, resolving an open question posed by Vempala and Kalai (COLT 2003) and by Hazan and Kale (ICML 2012).

Our method is based on a novel variant of the conditional gradient method that reduces the task of minimizing a smooth convex function over a domain to that of minimizing a linear objective. Whereas previous variants of this method give rise to approximation algorithms, we give such an algorithm that converges exponentially faster.
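
As background, here is a minimal sketch of the classical conditional gradient (Frank-Wolfe) iteration, which touches the feasible domain only through a linear optimization oracle; this is the textbook method the talk's variant builds on, not the new algorithm itself, and the oracle interface and step-size rule are standard illustrative choices.

```python
import numpy as np

def conditional_gradient(grad_f, linear_oracle, x0, num_iters=100):
    """Classical Frank-Wolfe / conditional gradient method: minimize a smooth
    convex f over a convex set D using only calls to
        linear_oracle(c) = argmin_{v in D} <c, v>.
    (This is the textbook O(1/t)-rate method, not the faster variant from the talk.)"""
    x = np.asarray(x0, dtype=float)
    for t in range(1, num_iters + 1):
        g = grad_f(x)                        # gradient at the current iterate
        v = linear_oracle(g)                 # best feasible point in direction -g
        gamma = 2.0 / (t + 2.0)              # standard diminishing step size
        x = (1.0 - gamma) * x + gamma * v    # convex combination stays inside D
    return x

# Example: minimize ||x - y||^2 over the probability simplex, where the
# linear oracle just returns the cheapest vertex (a standard basis vector).
y = np.array([0.1, 0.7, 0.2])
oracle = lambda c: np.eye(len(c))[np.argmin(c)]
x_star = conditional_gradient(lambda x: 2 * (x - y), oracle, np.ones(3) / 3)
```

Over the simplex, for instance, the oracle is a single argmin over coordinates, which is far cheaper than a Euclidean projection.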

The talk is based on a joint work with Elad Hazan (FOCS '13).


Fast Pseudorandomness for Independence and Load Balancing

by Ron Rothblum (Weizmann)

We provide new constructions of several fundamental pseudorandom
objects. Loosely speaking, these constructions obtain exponential
improvements in efficiency compared to previous constructions
with comparable randomness complexity. Our measure of efficiency
is the number of word operations, as captured by the
well-established unit-cost RAM model.

Our main results are constructions of fast:
1. k-wise (almost) independent hash functions.
2. Epsilon-biased sequences.
3. Hash functions that guarantee good load balancing.

These constructions are all simultaneously within loglog(n)
factors of the optimal seed length, and within (loglog(n))^2
factors of the optimal computational efficiency.
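
For context, the textbook way to obtain a k-wise (almost) independent hash family is to evaluate a random polynomial of degree k-1 over a prime field; the sketch below shows that classical construction, not the fast construction from the talk.

```python
import random

def make_kwise_hash(k, prime, range_size):
    """Textbook k-wise independent hash family:
        h(x) = (a_{k-1} x^{k-1} + ... + a_1 x + a_0 mod p) mod m.
    Over the field Z_p, the values of h on any k distinct inputs are uniform and
    independent; the final reduction mod m makes them only *almost* independent.
    (Classical construction, not the fast one from the talk.)"""
    coeffs = [random.randrange(prime) for _ in range(k)]   # the seed: k random field elements
    def h(x):
        acc = 0
        for a in reversed(coeffs):          # Horner evaluation of the polynomial mod p
            acc = (acc * x + a) % prime
        return acc % range_size
    return h

# Example usage with an (illustrative) Mersenne prime modulus.
h = make_kwise_hash(k=4, prime=(1 << 61) - 1, range_size=1024)
print(h(12345), h(67890))
```

Here the seed is just the list of k coefficients, i.e. roughly k log(p) random bits.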

Joint work with Raghu Meka, Omer Reingold and Guy Rothblum


Cutting corners cheaply: How to remove Steiner points

by Lior Kamma (Weizmann)

The main result presented in the talk is that the Steiner Point Removal (SPR) problem can be solved with polylogarithmic distortion.
This resolves in the affirmative a question posed by Chan, Xia, Konjevod, and Richa (2006).
Specifically, for every edge-weighted graph $G = (V,E,w)$ and a designated set of terminals $T \subseteq V$,
there is a graph $G'=(T,E',w')$ on the terminals, that is (isomorphic to) a minor of $G$, and such that
the shortest-path distance between any two terminals is approximately equal in $G$ and in $G'$, within a polylogarithmic factor.
Our existence proof actually gives a randomized polynomial-time algorithm.
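
In symbols, one common way to write such a guarantee (taking $G'$ with its own shortest-path metric) is that for all terminals $t, t' \in T$,
$d_G(t,t') \le d_{G'}(t,t') \le \mathrm{polylog}(|T|) \cdot d_G(t,t')$.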

Our proof features a new variant of metric decomposition.
It is well-known that every finite metric space $(X,d)$ admits an $O(\log |X|)$-separating decomposition,
which roughly means there is a randomized partitioning of $X$,
in which there is a certain upper bound on the probability that two points lie in different clusters of the partition.
We introduce an additional requirement, which is the following tail bound:
For every path $P$ in $X$ which is not too long, the number $Z_P$ of clusters of the partition that meet the path $P$
satisfies $\Pr[Z_P > t] \le 2e^{-\Omega(t)}$ for all $t>0$.
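
For concreteness, one standard way to formalize the $O(\log |X|)$-separating property mentioned above (under the usual convention that each cluster has diameter at most $\Delta$) is to require
$\Pr[\,x \text{ and } y \text{ lie in different clusters}\,] \le O(\log |X|) \cdot \frac{d(x,y)}{\Delta}$ for all $x,y \in X$;
the new tail-bound requirement is imposed on top of this.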

Joint work with Robert Krauthgamer and Huy L. Nguyen


The Market for Keywords

by Kfir Eliaz (Professor of Economics, Tel-Aviv University)

Can a competitive market implement an ideal search engine? To address this question, we construct a two-sided market model in which consumers with limited, idiosyncratic vocabulary use keywords to search for their desired products. Firms get access to a keyword if they pay its competitive price-per-click. An underlying "broad match" function determines the probability with which a firm will enter the consumer's search pool as a function of the keyword it "buys" and the consumer's queried keyword. The main question we analyze is whether there exists a broad match function that gives rise to an efficient competitive equilibrium outcome. We provide necessary and sufficient conditions, in terms of the underlying search cost and the joint distribution over consumers' tastes and vocabulary, and characterize the keyword prices that arise in such equilibria. The Bhattacharyya coefficient, a measure of closeness of probability distributions, turns out to play a key role in the analysis.
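
For reference, the Bhattacharyya coefficient of two discrete distributions $p$ and $q$ is $\sum_i \sqrt{p_i q_i}$: it equals 1 exactly when the distributions coincide and 0 exactly when their supports are disjoint. A minimal sketch (the dictionary interface is purely illustrative):

```python
from math import sqrt

def bhattacharyya_coefficient(p, q):
    """BC(p, q) = sum_i sqrt(p_i * q_i) for discrete distributions given as
    dicts mapping outcomes to probabilities. Equals 1 iff p == q and 0 iff
    the supports are disjoint."""
    return sum(sqrt(p[i] * q.get(i, 0.0)) for i in p)

# Example: two keyword distributions that overlap on a single word.
p = {"shoes": 0.5, "sneakers": 0.5}
q = {"sneakers": 0.5, "trainers": 0.5}
print(bhattacharyya_coefficient(p, q))   # sqrt(0.5 * 0.5) = 0.5
```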


List and Unique Coding of Interactive Communication

by Klim Efremenko (University of Chicago)

In this talk we extend the notion of list decoding to the setting of interactive communication and study its limits. In particular, we show that any protocol can be encoded, with a constant rate, into a list-decodable protocol which is resilient to a noise rate of up to 1/2-\varepsilon, and that this is tight.

Using our list-decodable construction, we study a more nuanced model of noise in which the adversary can corrupt up to an \alpha fraction of Alice's communication and up to a \beta fraction of Bob's communication. We will show how to use list decoding to fully characterize the region R_U of pairs (\alpha, \beta) for which unique decoding with constant rate is possible. The region R_U turns out to be quite unusual in its shape; in particular, it is bounded by a piecewise-differentiable curve with infinitely many pieces.

Joint work with Mark Braverman.


3SUM is Subquadratic

by Seth Pettie (University of Michigan Ann Arbor)

The 3SUM problem is to decide, given a set of N real numbers, whether
any three of them sum to zero. A simple algorithm solves 3SUM in
O(N^2) time and it has long been conjectured that this is the best
possible.
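
For concreteness, one simple algorithm achieving the quadratic bound is the standard sort-and-scan with two pointers; a minimal sketch:

```python
def three_sum(nums):
    """Classical O(N^2) algorithm for 3SUM: decide whether some three of the
    input numbers sum to zero, by sorting and scanning each suffix with two
    pointers."""
    a = sorted(nums)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1              # two pointers over the suffix
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return True
            if s < 0:
                lo += 1                    # need a larger sum
            else:
                hi -= 1                    # need a smaller sum
    return False

# Example: 3 + (-5) + 2 == 0
print(three_sum([3, -5, 10, 2, 7]))        # True
```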

The significance of the 3SUM problem does not lie with its practical
applications (roughly zero) but with the long list of problems in
computational geometry that are reducible from 3SUM. Some examples
include deciding whether a point set contains 3 collinear points,
calculating the area of the union of a set of triangles, and
determining whether one convex polygon can be placed within another
convex polygon. If 3SUM requires N^2 time then all 3SUM-hard problems
require N^2 time. More recently Patrascu gave many conditional lower
bounds on triangle enumeration and dynamic data structures that rely
on the hardness of 3SUM over the integers.

In this talk I'll present a non-uniform (decision-tree) algorithm for
3SUM that performs only N^{3/2} comparisons and a uniform 3SUM
algorithm running in O(N^2/polylog(N)) time. This result refutes the
3SUM conjecture and casts serious doubt on the optimality of many
O(N^2) geometric algorithms.

This is joint work with Allan Gronlund. A manuscript is available at
arXiv:1404.0799.


From average case complexity to improper learning complexity

by Amit Daniely (HUJI)

It is presently still unknown how to show hardness of learning problems. There are huge gaps between our upper and lower bounds in the area. The main obstacle is that standard NP-reductions do not yield hardness of learning. All known lower bounds rely on (unproved) cryptographic assumptions.

We introduce a new technique to this area, using reductions from problems that are hard on average. We put forward a natural generalization of Feige's assumption about the complexity of refuting random K-SAT instances. Under this assumption we show:

1. Learning DNFs is hard.
2. Learning an intersection of a super-logarithmic number of halfspaces is hard.

In addition, the same assumption implies the hardness of virtually all learning problems that were previously shown hard under cryptographic assumptions.

The talk is based on:
More data speeds up training time in learning halfspaces over sparse vectors (with Nati Linial and Shai Shalev-Shwartz, NIPS 2013).
From average case complexity to improper learning complexity (with Nati Linial and Shai Shalev-Shwartz, STOC 2014).
Complexity theoretic limitations on learning DNF's (with Shai Shalev-Shwartz, preprint).
