Abstracts

The Furthest Hyperplane Problem (Maximal Margin Clustering and Unsupervised SVMs)

I will discuss the Furthest Hyperplane Problem (FHP), which is an unsupervised counterpart of Support Vector Machines. Given a set of n points in R^d, the objective is to produce the hyperplane (passing through the origin) which maximizes the separation margin, that is, the minimal distance between the hyperplane and any input point.

This problem has been studied mostly under the name Maximal Margin Clustering. Heuristic algorithms have been suggested, with quite encouraging experimental results. In this seminar we will go over the first (to our knowledge) provable results regarding FHP.
This includes algorithms as well as lower and upper bounds for this NP-hard problem.

First, we will see a simple randomized algorithm whose running time is n^O(1/θ^2), where θ is the optimal separation margin. We show that its exponential dependency on 1/θ^2 is tight, up to sub-polynomial factors, assuming SAT cannot be solved in sub-exponential time. Next, we will review an efficient approximation algorithm. For any α in [0,1], the algorithm produces a hyperplane whose distance from at least a 1 − 5α fraction of the points is at least α times the optimal separation margin. Finally, we show that FHP does not admit a PTAS by presenting a gap-preserving reduction from a particular version of the PCP theorem.
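
As a point of reference, here is a minimal Python sketch of the objective itself. The margin function follows the definition above (hyperplanes pass through the origin); the random-direction search is only an illustrative baseline of our own, not the n^O(1/θ^2) algorithm from the talk.

    import numpy as np

    def margin(w, X):
        # Minimal distance from any row of X to the hyperplane through
        # the origin with normal vector w.
        return np.min(np.abs(X @ w)) / np.linalg.norm(w)

    def random_search_fhp(X, trials=10000, seed=0):
        # Illustrative baseline: sample random unit normals, keep the best.
        rng = np.random.default_rng(seed)
        best_w, best_m = None, -np.inf
        for _ in range(trials):
            w = rng.standard_normal(X.shape[1])
            m = margin(w, X)
            if m > best_m:
                best_w, best_m = w / np.linalg.norm(w), m
        return best_w, best_m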


Should I beat your price? Analysis of Sequential Competing Offers

Sellers often approach customers with competing offers. We study a dynamic market environment, where unit-demand customers arrive sequentially at the market. Sellers have identical goods for sale, and simultaneously publish offers to each arriving customer. Customers then decide whether to accept any of the offers or to leave the market for good.
We characterize the equilibrium behavior in this setting, and show that the equilibrium strategies may be symmetric or asymmetric.
We use the concept of publicly correlated equilibrium and show that the sequence of prices, as well as the expected surplus for the buyers and sellers, are the same in all such equilibria.
Our main result is a sufficient condition (which relates to the convexity of the virtual valuation) for the existence of a symmetric sub-game perfect equilibrium in this market.
Finally, we show some corollaries of the equilibrium characterization, and show that some anomalies (like revenue non-monotonicity) can never occur in equilibrium, while other phenomena (like efficiency non-monotonicity and price non-monotonicity) may. We also conclude that due to competition, sellers will always be better off if some other seller sells an item to the first arriving customer; this holds even if this customer is likely to have a very high valuation.

Joint work with Asaf Kovo


Scalable Training of Mixture Models via Coresets

How can we train a statistical mixture model on a massive data set? In this paper, we show how to construct coresets for mixtures of Gaussians and natural generalizations. A coreset is a weighted subset of the data, which guarantees that models fitting the coreset will also provide a good fit for the original data set. We show that, perhaps surprisingly, Gaussian mixtures admit coresets of size independent of the size of the data set. More precisely, we prove that a weighted set of O(dk^2/epsilon^2) data points suffices for computing a (1 + epsilon)-approximation for the optimal model on the original n data points. Moreover, such coresets can be efficiently constructed in a map-reduce style computation, as well as in a streaming setting. Our results rely on a novel reduction of statistical estimation to problems in computational geometry, as well as new complexity results about mixtures of Gaussians. We empirically evaluate our algorithms on several real-world data sets, including a density estimation problem in the context of earthquake detection using accelerometers in mobile phones.
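
To make the "weighted subset" notion concrete, here is a generic importance-sampling skeleton in Python: each point is picked with probability p_i and weighted by 1/(m·p_i), so weighted sums over the sample are unbiased estimates of sums over the full data set. The sampling distribution below is a crude stand-in for the sensitivity bounds of the actual construction, which this sketch does not reproduce.

    import numpy as np

    def importance_sample_coreset(X, m, seed=0):
        # Pick point i with probability p[i]; weight it by 1/(m * p[i]) so
        # that weighted sums over the sample are unbiased for sums over X.
        # The distribution (uniform mixed with distance to the mean) only
        # stands in for the paper's sensitivity-based probabilities.
        rng = np.random.default_rng(seed)
        n = X.shape[0]
        dist = np.linalg.norm(X - X.mean(axis=0), axis=1)
        p = 0.5 / n + 0.5 * dist / (dist.sum() + 1e-12)
        p /= p.sum()
        idx = rng.choice(n, size=m, replace=True, p=p)
        return X[idx], 1.0 / (m * p[idx])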

Appeared in the main oral session of NIPS 2011.

This is joint work with Matthew Faulkner and Andreas Krause.


Efficient Adaptive Querying Strategies for Clustering and Ordering Problems

I will discuss two learning problems for which i.i.d. sampling techniques, as well as known adaptive sampling techniques developed for general "active learning" problems, are grossly suboptimal. The two problems are k-Correlation Clustering, and Minimum Feedback Arc-Set in Tournaments. In both problems the instance space is the set of pairs of elements over a ground set V of n elements. In the first, the objective is to learn a partition of V into k disjoint sets, in other words, to learn a graph containing k disjoint cliques covering V. In the second, the objective is to learn an ordering of V, in other words, a transitive tournament graph.
Our setting is agnostic in the sense that the observations are noisy, and we compete with the best fit. Our sampling strategy is iterative, producing an improved solution at each iteration. Each "current" solution defines a distribution, from which we sample a batch of (noisy) edges. The underlying empirical process allows uniform estimation of the difference between the cost of any solution and the current one to within an error of epsilon times the distance between the solutions, for any epsilon. Minimizing this empirical process defines the next solution, and allows a fast learning rate. I will describe this strategy and show some connections with the theory of matrix completion.
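
For concreteness, the "best fit" objective for the first problem can be written down directly. This hypothetical Python helper just scores a candidate k-partition against noisy pairwise observations; it says nothing about the sampling strategy itself.

    def disagreement_cost(labels, observations):
        # labels[v] in {0, ..., k-1} assigns v to a cluster; observations
        # maps a pair (u, v) to +1 ("same cluster") or -1 ("different").
        # The cost is the number of observations the partition violates.
        cost = 0
        for (u, v), sign in observations.items():
            same = labels[u] == labels[v]
            cost += int((sign == +1) != same)
        return cost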

Joint work with Esther Ezra and Ron Begleiter


Upward Max Min Fairness

The allocation of global shared resources to different users is a fundamental problem in distributed computing and networking. A well-accepted principle is that network resources belong to the community and thus should be shared in a fair way among all users. A commonly used notion of fairness is Max-Min Fairness. We consider the setting in which there are multiple possible paths to route each demand in the network (for example, a network using MPLS tunneling). In this setting, the only known way of finding a max-min fair allocation requires an iterative solution of multiple linear programs. Such an approach, although polynomial time, scales badly with the size of the network, the number of demands, and the number of paths. More importantly, a network operator has limited control over and understanding of the inner workings of the algorithm. Finally, this approach is inherently centralized and cannot be implemented via a distributed protocol. In this paper we introduce Upward Max-Min Fairness, a novel relaxation of Max-Min Fairness, and present a family of simple dynamics that converge to it. These dynamics can be implemented in a distributed manner. Moreover, we present an efficient combinatorial algorithm for finding an upward max-min fair allocation. This algorithm is a natural extension of the well-known Water Filling Algorithm to a multiple-path setting. We test the behavior of this new algorithm and show that on realistic networks upward max-min fair allocations are comparable to the max-min fair allocations both in fairness and in network utilization.
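
For intuition, here is the classic single-path Water Filling Algorithm in Python (the multi-path extension from the talk is not reproduced here): raise all unfrozen demands at a common rate, and whenever a link saturates, freeze every demand crossing it. Each demand is assumed to cross at least one link.

    def water_filling(capacities, demand_links):
        # capacities: {link: capacity}; demand_links: {demand: set of links}.
        # Returns the max-min fair rate of each demand on its single path.
        rate = {d: 0.0 for d in demand_links}
        residual = dict(capacities)
        frozen = set()
        while len(frozen) < len(rate):
            load = {l: sum(1 for d in rate
                           if d not in frozen and l in demand_links[d])
                    for l in residual}
            # Largest common increment before some link saturates.
            inc = min(residual[l] / load[l] for l in load if load[l] > 0)
            for d in rate:
                if d not in frozen:
                    rate[d] += inc
                    for l in demand_links[d]:
                        residual[l] -= inc
            saturated = {l for l in load if load[l] > 0
                         and residual[l] <= 1e-9}
            for d in rate:
                if d not in frozen and demand_links[d] & saturated:
                    frozen.add(d)
        return rate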

Joint work with Emilie Danna, Haim Kaplan, Alok Kumar, Yishay Mansour, Michal Segalov, and Danny Raz.


SINR Topology

In this talk I will present several recent papers dealing with the topology of SINR (signal-to-interference-plus-noise ratio) diagrams.
I will concentrate on the convexity of SINR diagrams, and if time permits I will elaborate on the Generalized Perron-Frobenius Theorem.

This is joint work with Chen Avin, Michael Borokhovich, Yuval Emek, Yoram Haddad, Erez Kantor, Merav Parter, David Peleg, Liam Roditty.


On multiplicative (1+epsilon)-approximation by a small sample, with some geometrical applications

Let F be a set system over an underlying finite set X. One needs to produce a small (weighted) sample of X, such that for every set S in F, the sampled weight of S differs from its actual weight by at most a multiplicative factor of 1+epsilon. We ask the following meta-question: What are the structural properties of F that govern the (minimum possible) size of such a sample?

For comparison, a similar question in the context of additive approximation has been extensively studied, and plays an important role in Learning Theory as well as in Discrete Geometry and other branches of CS. A classical theorem of Vapnik & Chervonenkis asserts the existence of an additive epsilon*|X|-approximation by a sample of size bounded solely by a function of epsilon and the VC-dimension of F. The analogue of the VC-dimension for the multiplicative approximation turns out to be the triangular rank of F, defined as the maximal length of a sequence of sets {S_1, S_2, …, S_t} in F such that no set in this sequence is contained in the union of the preceding ones. One of our main results is the construction of a sample as required, of size t^2 log(t)/poly(epsilon). On the other hand, t is a lower bound on the size of any such sample, if one considers all weightings of X.
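
The defining property of the triangular rank is easy to state in code. This small Python helper checks whether a given sequence of sets qualifies; the rank of F is then the length of the longest qualifying sequence.

    def is_triangular_sequence(sets):
        # No set may be contained in the union of the preceding ones.
        union = set()
        for s in sets:
            if set(s) <= union:
                return False
            union |= set(s)
        return True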

Time permitting, we shall present some applications to L_1 metrics and to Euclidean Volumes.

Based on a joint work with Ilan Newman.


Testing Booleanity and the Uncertainty Principle

Let f:{-1,1}^n -> R be a real function on the hypercube, given by its discrete Fourier expansion, or, equivalently, represented as a multilinear polynomial. We say that it is Boolean if its image is in {-1,1}.

We show that every function on the hypercube with a sparse Fourier expansion must either be Boolean or far from Boolean. In particular, we show that a multilinear polynomial with at most k terms must either be Boolean, or output values different than -1 or 1 for a fraction of at least 2/(k+2)^2 of its domain.

It follows that given black-box access to f, together with the guarantee that its representation as a multilinear polynomial has at most k terms, one can test Booleanity using O(k^2) queries. We show an Omega(k) lower bound on the number of queries needed for this problem.
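
The tester implied by the statement above is simple enough to sketch in Python: if f has at most k terms and is not Boolean, it misbehaves on at least a 2/(k+2)^2 fraction of the cube, so O(k^2) uniform random queries expose it with high probability. A minimal sketch of the sampling argument, with the query count chosen for a fixed error probability:

    import math
    import random

    def test_booleanity(f, n, k, error=0.01):
        # If f is not Boolean, at least a 2/(k+2)^2 fraction of inputs
        # yields a value outside {-1, 1}, so this many uniform queries
        # miss every witness with probability at most `error`.
        eps = 2.0 / (k + 2) ** 2
        queries = math.ceil(math.log(error) / math.log(1.0 - eps))
        for _ in range(queries):
            x = [random.choice((-1, 1)) for _ in range(n)]
            if f(x) not in (-1, 1):
                return False      # a certificate of non-Booleanity
        return True               # Boolean, or wrongly accepted w.p. <= error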

We also consider the problem of deciding if a function is Boolean, given its explicit representation as a k-term multilinear polynomial. For large k (i.e., exponential) we present a simple randomized O(kn sqrt(2^n))-time algorithm. For small k we show how the problem can be solved deterministically in time O(k^3 n).

Our proofs crucially use Hirschman's entropic version of Heisenberg's uncertainty principle.

Joint work with Tom Gur


An Empirical Study of the Ad Auction Game in the Trading Agent Competition

The Ad Auction game is a TAC market game in the domain of sponsored search. Agents play the role of search engine advertisers, who compete with each other on ad placement for search results.
We study the empirical behavior of trading agents participating in TAC-AA. Our main goal is to investigate the robustness of the agents to deviations from the game's specified environment. Our results indicate that most agents, especially the top-scoring ones, are surprisingly robust. In addition, using the actual game logs, we derive for each agent a strategic fingerprint and show that it almost uniquely identifies it.


Random permutations and random subgraphs

We describe how random permutations give rise to random subgraphs of undirected graphs with interesting properties. We present applications to bootstrap percolation and to proving the existence of large $k$-degenerate subgraphs in bounded-degree graphs, as well as a new framework for designing algorithms for the independent set problem.
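
As one concrete illustration of the permutation-to-subgraph idea (a sketch under our own simplifications, not necessarily the construction from the talk): order the vertices by a random permutation and let every vertex keep at most $k$ edges to earlier neighbors. The permutation order itself then witnesses that the resulting subgraph is $k$-degenerate.

    import random

    def k_degenerate_subgraph(adj, k, seed=None):
        # adj: {vertex: iterable of neighbors} of an undirected graph.
        # Every vertex keeps at most k edges to earlier neighbors, so the
        # permutation order is a degeneracy ordering with back-degree <= k.
        rng = random.Random(seed)
        order = list(adj)
        rng.shuffle(order)
        pos = {v: i for i, v in enumerate(order)}
        edges = set()
        for v in order:
            earlier = sorted((u for u in adj[v] if pos[u] < pos[v]),
                             key=pos.get)
            edges.update((u, v) for u in earlier[:k])
        return edges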

Joint work with Uri Feige


Combinatorial coloring of 3-colorable graphs

We consider the problem of coloring a 3-colorable graph in polynomial time using as few colors as possible. We present a combinatorial algorithm getting down to $\tilde O(n^{4/11})$ colors. This is the first combinatorial improvement of Blum's $\tilde O(n^{3/8})$ bound from FOCS'90. Like Blum's algorithm, our new algorithm composes nicely with recent semi-definite approaches. The current best bound is $O(n^{0.2072})$ colors by Chlamtac from FOCS'07. We now bring it down to $O(n^{0.2038})$ colors.
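
For context, the combinatorial starting point of this line of work is Wigderson's classic O(sqrt(n))-coloring of 3-colorable graphs, sketched below in Python: while some vertex has high degree, its neighborhood is bipartite (since the graph is 3-colorable) and gets two fresh colors; the low-degree remainder is colored greedily.

    import itertools
    import math
    from collections import deque

    def two_color(vertices, adj):
        # BFS 2-coloring; proper whenever the induced subgraph is
        # bipartite, as any neighborhood in a 3-colorable graph is.
        side, vs = {}, set(vertices)
        for s in vs:
            if s in side:
                continue
            side[s] = 0
            queue = deque([s])
            while queue:
                u = queue.popleft()
                for w in adj[u]:
                    if w in vs and w not in side:
                        side[w] = 1 - side[u]
                        queue.append(w)
        return side

    def wigderson_color(adj):
        # adj: {vertex: set of neighbors} of a 3-colorable graph.
        remaining = {v: set(ns) for v, ns in adj.items()}
        color, fresh = {}, 0
        threshold = math.isqrt(len(adj)) + 1
        while True:
            v = next((u for u, ns in remaining.items()
                      if len(ns) >= threshold), None)
            if v is None:
                break
            for u, s in two_color(remaining[v], remaining).items():
                color[u] = fresh + s       # two fresh colors per phase
            fresh += 2
            done = set(color)
            remaining = {u: ns - done for u, ns in remaining.items()
                         if u not in done}
        for v in remaining:                # greedy on the low-degree rest
            used = {color[u] for u in adj[v] if u in color}
            color[v] = next(c for c in itertools.count() if c not in used)
        return color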

Joint work with Ken-Ichi Kawarabayashi.


Cloud Scheduling with Setup Cost

We investigate the online scheduling of jobs such as MapReduce jobs, Monte Carlo simulations, and the generation of a search index from web documents, on cloud computing infrastructures. We consider a virtualized cloud computing setup comprising machines that host multiple identical virtual machines (VMs) under pay-as-you-go charging, where booting a VM requires a constant setup time. The cost of job computation depends on the number of VMs activated, and the VMs can be activated and shut down on demand. We propose a new bi-objective algorithm that minimizes both the maximum task delay and the total cost of the computation. We study both the clairvoyant case, where the duration of each task is known upon its arrival, and the more realistic non-clairvoyant case.
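
The tension between the two objectives can be felt already in a toy single-VM model. The following Python sketch simulates the folklore ski-rental-style rule "keep an idle VM alive for one setup time before shutting it down"; it is our own illustrative baseline, not the algorithm from the talk.

    def simulate_single_vm(jobs, setup):
        # jobs: iterable of (arrival_time, duration); FIFO on a single VM.
        # Policy: an idle VM stays on for one setup time, then shuts down,
        # so idling never costs more than a factor of two over re-booting.
        # Returns (total paid VM time, maximum job delay).
        paid, max_delay, free_at = 0.0, 0.0, None
        for t, dur in sorted(jobs):
            if free_at is None or t > free_at + setup:
                if free_at is not None:
                    paid += setup              # idled, then shut down
                paid += setup                  # boot a fresh VM
                start = t + setup
            else:
                paid += max(0.0, t - free_at)  # idle time until arrival
                start = max(t, free_at)
            paid += dur
            max_delay = max(max_delay, start + dur - t)
            free_at = start + dur
        if free_at is not None:
            paid += setup                      # trailing idle before shutdown
        return paid, max_delay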

Joint work with Yossi Azar, Nikhil Devanur and Navendu Jain.


Approximating Semidefinite Programs in Sublinear Time

Semidefinite programming is a convex optimization problem which is widely used in numerous fields of science and engineering. In combinatorial optimization and machine learning in particular, many algorithms based on solving semidefinite programs have been developed in recent years. Although there exist polynomial-time algorithms that solve general semidefinite programs accurately, and even faster algorithms that solve such programs only approximately, their running time may be prohibitive in practice when applied to very large scale problems such as those that are ubiquitous nowadays in machine learning. In this talk I will present an algorithm for approximately solving general semidefinite programs which enjoys a running time that is sublinear in the number of entries in the semidefinite instance. The algorithm also has the benefit of producing low-rank solutions, which is computationally favorable. Our algorithm is based on solving a Lagrange relaxation of the semidefinite program using the well-known Multiplicative Updates Method and applying recent algorithmic machinery from online learning and random sampling. I will also present lower bounds on the running time of any approximation algorithm for semidefinite programming, which demonstrate that our algorithm is close to optimal in certain cases.
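
To give a feel for the multiplicative-updates machinery (only the generic template, not the sublinear-time algorithm itself), here is a Matrix Multiplicative Weights sketch in Python for SDP feasibility with trace-one solutions: repeatedly play the Gibbs density matrix of the accumulated losses and penalize the most violated constraint. The symmetric constraint matrices As and bounds bs are assumed given explicitly.

    import numpy as np
    from scipy.linalg import expm

    def matrix_mw(As, bs, T=200, eta=0.1):
        # Feasibility sketch: find X >= 0 with trace(X) = 1 and
        # <A_i, X> <= b_i for all i, via Matrix Multiplicative Weights.
        n = As[0].shape[0]
        loss_sum = np.zeros((n, n))
        avg = np.zeros((n, n))
        for _ in range(T):
            W = expm(-eta * loss_sum)
            X = W / np.trace(W)            # Gibbs density matrix
            viol = [float(np.tensordot(A, X)) - b for A, b in zip(As, bs)]
            i = int(np.argmax(viol))
            if viol[i] <= 0:
                return X                   # every constraint is satisfied
            loss_sum += As[i]              # penalize the violated direction
            avg += X
        return avg / T                     # averaged iterate as fallback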


Generalized Reordering Buffer Management

An instance of the generalized reordering buffer management problem consists of a service station that has $k$ servers, each configured with a color, and a reordering buffer of size $b$. The station needs to serve an online stream of colored items. Whenever an item arrives, it is stored in the buffer. At any point in time, a currently pending item can be served by moving a server to its color. The objective is to serve all items in a way that minimizes the number of color switches of the servers. This problem generalizes two well-studied online problems: the paging problem, which is the special case when $b = 1$, and the reordering buffer problem, which is the special case when $k = 1$.
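
To make the model concrete, here is a small Python simulator with a naive greedy policy (a baseline of our own for illustration; the competitive algorithms below are different): serve for free whenever a pending item matches a server's color, and otherwise recolor the least-recently-used server to the most common pending color.

    from collections import Counter, deque

    def greedy_switches(stream, k, b):
        # servers: deque of current server colors, least recently used first.
        servers = deque([None] * k)
        buffer, switches = [], 0

        def serve(c):
            nonlocal switches
            if c in servers:
                servers.remove(c)          # refresh recency, no switch
            else:
                servers.popleft()          # evict the LRU color
                switches += 1
            servers.append(c)
            while c in buffer:
                buffer.remove(c)           # serve all pending items of color c

        for item in stream:
            buffer.append(item)
            if len(buffer) < b:
                continue                   # buffer not yet full
            match = next((c for c in buffer if c in servers), None)
            serve(match if match is not None else
                  Counter(buffer).most_common(1)[0][0])
        while buffer:                      # drain what is left at the end
            serve(Counter(buffer).most_common(1)[0][0])
        return switches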

In this paper, we develop a randomized algorithm that attains a competitive ratio of $O(\sqrt{b}\ln k)$. We complement our randomized approach by presenting a deterministic algorithm that attains a competitive ratio of $O(\min\{k^{2}\ln b, kb\})$. We further demonstrate that if our deterministic algorithm can employ $k/(1-\delta)$ servers, where $\delta \in (0, 1)$, then it achieves a competitive ratio of $O(\min\{\ln b/\delta^{2}, b/\delta\})$ against an optimal offline adversary that employs $k$ servers.

Joint work with Yossi Azar, Iftah Gamzu and Matthias Englert.
