Abstracts 2015

Committed Online Preemptive Scheduling

by Inna Kalp

We study the fundamental problem of online preemptive scheduling of jobs, each with a size, a value, and a deadline. The goal is to maximize the total value of jobs completed by their deadlines. The problem is known to admit an arbitrarily high lower bound on the competitive ratio of any online algorithm; however, the lower bound only applies when deadlines are tight. The lower bound can be circumvented by assuming deadline slackness, i.e., a guaranteed lower bound s>1 on the ratio between a job's time-window length (the time between its arrival and its deadline) and its size.

In this paper, we consider a *committed* scheduling model, in which admitted (accepted) jobs must be fully completed before their deadline. In practice, the use of non-committed algorithms may be unacceptable to users with strict deadlines.
We show an interesting phase transition for the single server case at slackness s=4. For any s>4, we develop a committed algorithm which obtains a constant competitive ratio; the algorithm is later extended to handle multiple servers. Furthermore, we prove an unbounded lower bound on the competitive ratio of any single server algorithm for s<4; in general, our unbounded lower bound applies when s<4/C for any number of servers C<4. Finally, we present a committed scheduler for unit value jobs, which obtains an improved constant competitive ratio.
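
As a rough illustration of what commitment means operationally (a sketch only, not the algorithm from the paper), the following Python fragment admits a job at arrival only if it can be completed by its deadline together with all previously admitted, unfinished jobs; the EDF feasibility test and the greedy admission rule are assumptions made for the example, and such a greedy rule by itself need not be competitive.

    # Toy committed admission control on a single server (illustration only;
    # NOT the algorithm from the paper). A job is admitted only if EDF can
    # still complete every admitted job by its deadline.

    def edf_feasible(jobs, now):
        """jobs: list of (size, deadline); True if EDF meets all deadlines starting at 'now'."""
        t = now
        for size, deadline in sorted(jobs, key=lambda j: j[1]):  # earliest deadline first
            t += size
            if t > deadline:
                return False
        return True

    class CommittedScheduler:
        def __init__(self):
            self.admitted = []  # (size, deadline) of admitted, not yet completed jobs

        def on_arrival(self, now, size, deadline):
            # Commit to the job only if all existing commitments remain feasible.
            if edf_feasible(self.admitted + [(size, deadline)], now):
                self.admitted.append((size, deadline))
                return True   # committed: the job will be fully completed by its deadline
            return False      # rejected up front; no admitted job is ever abandoned

    # With slackness s >= 4, a job of size 1 arriving at time 0 has deadline >= 4.
    sched = CommittedScheduler()
    print(sched.on_arrival(now=0, size=1, deadline=4))   # True
    print(sched.on_arrival(now=0, size=10, deadline=5))  # False: cannot be committed to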

Joint work with Yossi Azar, Ishai Menache, Seffi Naor and Jonathan Yaniv

Date: November 3


Zero-One Laws for Sliding Windows and Universal Sketches

by Alan Roytman

As the amount of data being generated continues to grow at a staggering rate, streaming algorithms are increasingly becoming more important as a practical tool to analyze and make sense of all the information. In practice, such applications generate vast amounts of data in a very short period of time, and hence it is infeasible to store everything. This presents a pressing question: when is it possible to summarize data while still providing approximate solutions with good theoretical guarantees?

In this talk, we strive to answer this question for frequency-based, monotonically increasing functions in the sliding window model. We make progress on two significant, open problems outlined by Nelson and by Sohler. Specifically, we formalize the notion of universality for streaming over sliding windows, and our main result is the construction of a universal algorithm that uses polylogarithmic space in the timestamp-based sliding window model for a broad class of monotonically increasing functions. That is, we define a class of functions and design a single streaming algorithm that produces a data structure with the following guarantee. When querying the data structure with a function G taken from the class, our algorithm approximates \sum_{i=1}^n G(m_i) without knowing G in advance (here, m_i represents the frequency that element i from a universe of size n appears in the window).

Along the way to achieving our main result, we design a zero-one law for a broader class of monotonically increasing functions G that specifies when the summation \sum_{i=1}^n G(m_i) can be approximated with high probability in one pass, using polylogarithmic memory. If G satisfies the conditions specified by the test, then given the function G we construct an explicit, general algorithm that is able to approximate the summation using polylogarithmic memory. If the function G does not pass the test, then we provide a lower bound which proves it is impossible to do so using polylogarithmic memory.
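
As a point of reference (not part of the streaming algorithm itself), the quantity being approximated can be computed exactly by brute force when the entire window is stored; the window length W and the example function G(m)=m^2 below are illustrative assumptions, and the whole point of the result above is to approximate this sum in polylogarithmic space without storing the window.

    # Brute-force computation of sum_i G(m_i) over a sliding window (illustration of
    # the target quantity only; the streaming algorithm approximates this WITHOUT
    # storing the window).

    from collections import Counter, deque

    def windowed_G_sum(stream, W, G):
        """Yield sum_i G(m_i) after each arrival, where m_i counts element i in the last W items."""
        window, counts = deque(), Counter()
        for x in stream:
            window.append(x)
            counts[x] += 1
            if len(window) > W:
                old = window.popleft()
                counts[old] -= 1
                if counts[old] == 0:
                    del counts[old]
            yield sum(G(m) for m in counts.values())

    stream = [1, 2, 1, 3, 1, 2, 2, 3]
    # G(m) = m^2 gives the second frequency moment of the current window.
    print(list(windowed_G_sum(stream, W=4, G=lambda m: m * m)))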

Date: November 10


From Boolean to Quantitative Methods in Formal Verification

by Tom Henzinger

Formal verification aims to improve the quality of hardware and software by detecting errors before they do harm. At the basis of formal verification lies the logical notion of correctness, which purports to capture whether or not a circuit or a program behaves as desired. We suggest that the boolean partition into correct and incorrect systems falls short of the practical need to assess the behavior of hardware and software in a more nuanced fashion against multiple criteria. We propose quantitative preference metrics for reactive systems, which can be used to measure the degree of desirability of a system with respect to primary attributes such as function and performance, but also with regard to secondary attributes such as robustness and resource consumption. The theory supports quantitative generalizations of the paradigms that have become success stories in boolean formal methods, such as temporal logic, property-preserving abstraction, model checking, and reactive synthesis.
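
As a toy contrast between the boolean and the quantitative view (an illustrative assumption, not a construction from the talk), one can replace a yes/no verdict about a run of a reactive system by a simple score such as the fraction of steps on which a requirement holds:

    # Toy contrast between boolean and quantitative verification (illustration only).
    # trace: a finite run of a reactive system, one boolean observation per step
    # ("did the system meet the requirement at this step?").

    def boolean_correct(trace):
        """Boolean view: the run is correct iff every step satisfies the requirement."""
        return all(trace)

    def average_score(trace):
        """A simple quantitative view: the fraction of steps that satisfy the requirement."""
        return sum(trace) / len(trace)

    good   = [True] * 100
    flawed = [True] * 99 + [False]

    print(boolean_correct(good), boolean_correct(flawed))   # True False
    print(average_score(good), average_score(flawed))       # 1.0  0.99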

Date: November 17


Polynomial Bounds for the Grid-Minor Theorem

by Julia Chuzhoy

One of the key results in Robertson and Seymour's seminal work on graph minors is the Grid-Minor Theorem (also called the Excluded Grid Theorem). The theorem states that for every fixed-size grid H, every graph whose treewidth is large enough contains H as a minor. This theorem has found many applications in graph theory and algorithms.

Let f(k) denote the largest value such that every graph of treewidth k contains a grid minor of size f(k). The best current quantitative bound, due to recent work of Kawarabayashi and Kobayashi, and Leaf and Seymour, shows that:

f(k) = Omega(sqrt(log k / log log k)).

In contrast, the best known upper bound implies that:

f(k) = O(sqrt(k / log k)).

In this talk we present the first polynomial relationship between treewidth and grid-minor size, by showing that:

f(k) = Omega(k^d) for some fixed constant d > 0.
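
To get a feel for how far apart these bounds are, one can compare their growth numerically; the constant factors and the placeholder exponent d=0.05 below are arbitrary assumptions, since only the asymptotic shapes matter here.

    # Numeric comparison of the bound shapes (constants chosen arbitrarily for illustration).
    import math

    def prev_lower(k):          # previous lower bound: Omega(sqrt(log k / log log k))
        return math.sqrt(math.log(k) / math.log(math.log(k)))

    def upper(k):               # upper bound: O(sqrt(k / log k))
        return math.sqrt(k / math.log(k))

    def poly_lower(k, d=0.05):  # new polynomial lower bound Omega(k^d); d is a placeholder
        return k ** d

    for k in (10**3, 10**6, 10**9):
        print(k, round(prev_lower(k), 2), round(poly_lower(k), 2), round(upper(k), 2))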

Based on joint work with Chandra Chekuri.

Date: November 24


The amortized cost of finding the minimum

by Or Zamir

We obtain an essentially optimal tradeoff between the amortized cost of the three basic priority queue operations insert, delete and findmin in the comparison model.
More specifically, we show that
Amort(findmin) = Omega( n / (2+epsilon)^(Amort(insert)+Amort(delete)) )
Amort(findmin) = O( n / 2^(Amort(insert)+Amort(delete)) + log n )

for any fixed epsilon>0, where n is the number of items in the priority queue and Amort(insert), Amort(delete) and Amort(findmin) are the amortized costs of the insert, delete and findmin operations, respectively.
In particular, if Amort(insert)+Amort(delete)=O(1), then Amort(findmin)=Omega(n); and Amort(findmin)=O(n^alpha) for some alpha<1 is possible only if
Amort(insert)+Amort(delete) = Omega(log n). (We can, of course, have Amort(insert)=O(1), Amort(delete)=O(log n), or vice versa, and Amort(findmin)=O(1).)

Our lower bound holds even if randomization is allowed.
Surprisingly, such fundamental bounds on the amortized cost of the operations were not known before.
Brodal, Chaudhuri and Radhakrishnan obtained similar bounds for the worst-case complexity of findmin.
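
One extreme point of the tradeoff is easy to see directly: with a plain unsorted buffer, insert and delete cost O(1) while findmin degrades to Theta(n) comparisons, matching the shape of the bounds above. The sketch below is a toy illustration of that extreme, not a data structure from the paper.

    # Toy priority queue illustrating one extreme of the tradeoff (illustration only):
    # O(1) insert/delete and Theta(n) findmin, versus a binary heap's
    # O(log n) insert/delete and O(1) findmin.

    class LazyPQ:
        """Unsorted buffer: cheap updates, expensive findmin."""
        def __init__(self):
            self.items = {}
            self.next_id = 0

        def insert(self, key):               # O(1)
            self.items[self.next_id] = key
            self.next_id += 1
            return self.next_id - 1

        def delete(self, item_id):           # O(1)
            self.items.pop(item_id, None)

        def findmin(self):                   # Theta(n) comparisons
            return min(self.items.values())

    pq = LazyPQ()
    ids = [pq.insert(k) for k in (5, 3, 8, 1)]
    pq.delete(ids[3])
    print(pq.findmin())  # 3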

Joint work with Haim Kaplan and Uri Zwick.

Date: December 1


Machine Learning Algorithms and Robustness

by Mariano Schain

My thesis is composed of three parts.

The first part, Autonomous Bidding Agents, is mainly concerned with the strategies of trading agents implementing algorithms for optimizing execution of marketing campaigns. Specifically, a model-free approach to the design and implementation of a top-performing trading agent in a Trading Agent Competition game (TAC-AA) is presented, followed by a study of the robustness of some of the most successful TAC-AA agents (aimed at assessing the applicability of technology transfer to the more demanding and uncontrolled real-world settings). This part concludes with the introduction and implementation of a new TAC game, Ad Exchange (TAC-AdX), designed to capture some of the key trade-offs faced by real-world advertisers. [Joint works with Tomer Greenwald, Shai Hertz, and Yishay Mansour].

In the second part, Robust Domain Adaptation, a Robust Optimization approach is used to address the problem of Domain Adaptation. Generalization bounds are derived, followed by related efficient learning algorithms that are Algorithmically Robust by design. [Joint work with Yishay Mansour].

In the third part (the focus of this talk), Multi Agent Learning is considered, where a set of self-interested agents collaborate by performing some computation that depends on the aggregate of the information privately held by each. We study a setting where history-independent agents (that is, agents having no access to past actions or their own arrival order) sequentially update the outstanding value of a computation regarding the probability of an external event. The resulting computation in this setting may be interpreted as an estimator for the occurrence probability of the event, aggregating the private signals of the agents. Modelling a simple strategy space for the agents, we investigate socially optimal and equilibrium strategies. We compare the performance of the respective estimators to that of the optimal (Bayesian) estimator having full access to all the agents’ private signals. [Joint work with Amos Fiat and Yishay Mansour].
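
A toy simulation of the third setting may help fix ideas; the fixed-step update rule below is an assumed example and not the strategy space analyzed in the work, but it is history-independent in the required sense: each agent sees only the outstanding estimate and its own private signal.

    # Toy simulation of history-independent sequential updating (illustration only;
    # the fixed-step rule below is an assumed example, not the strategy space
    # studied in the work).
    import random

    random.seed(0)
    TRUE_P = 0.3                      # unknown probability of the external event
    signals = [1 if random.random() < TRUE_P else 0 for _ in range(2000)]  # private signals

    # Each arriving agent sees only the outstanding estimate (no history, no arrival
    # order) and its own private signal, and nudges the estimate toward its signal.
    estimate, eta = 0.5, 0.02
    for s in signals:
        estimate += eta * (s - estimate)

    full_info = sum(signals) / len(signals)  # estimator with access to all private signals
    print(round(estimate, 3), round(full_info, 3))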

Date: December 8


Advances in Traffic Engineering and Discrepancy

by Roy Schwartz

I will discuss two recent works:
1. Long-running transfers in wide area networks are usually time critical, as delays might impact service quality, affect customer revenue, and increase costs incurred by wasted resources.
Current traffic engineering systems fall short, as they do not provide pre-facto guarantees on such long-running transfers.
I will present an online traffic engineering system that provides pre-facto guarantees while maximizing both fairness and network utility.
The system is based on theoretical algorithmic techniques for solving packing and covering linear programs, and can quickly handle an evolving linear program containing up to millions of variables and constraints.
2. Combinatorial discrepancy is a basic problem in computer science and combinatorics, with applications in diverse areas such as computational geometry, complexity theory, Monte-Carlo simulation, and number theory.
Given a family of subsets S_1,...,S_n of a universe N of size n, the goal is to color the elements of N by one of two colors {-1,+1} such that each set is colored as evenly as possible (a brute-force illustration of this objective appears after this list).
Spencer's well-known theorem asserts that this can be done while keeping the absolute value of the sum of the colors of every set at O(\sqrt{n}).
Known proofs, algorithmic and existential, recursively construct "partial colorings", which assign colors only to half the universe N.
Unfortunately, this approach fails to provide tight guarantees for other important discrepancy problems, e.g., the Beck-Fiala and Komlós conjectures.
Therefore, it has been an open question to find new techniques that avoid partial colorings.
In this work I will present the first algorithm that directly computes a full coloring.
The algorithm is based on a geometric perspective, and at its core is the analysis of ellipsoids contained in polytopes.
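
The discrepancy objective itself is easy to state in code: for a tiny instance one can find the best +1/-1 coloring by exhaustive search. This brute force is exponential and serves only to illustrate the objective; Spencer's bound and the algorithm presented in the talk concern what can be achieved efficiently and at scale.

    # Brute-force combinatorial discrepancy on a tiny instance (illustration of the
    # objective only; this exhaustive search is exponential and not the algorithm
    # discussed in the talk).
    from itertools import product

    def discrepancy(sets, coloring):
        """Max over sets of |sum of colors of its elements|."""
        return max(abs(sum(coloring[e] for e in S)) for S in sets)

    def min_discrepancy(n, sets):
        return min(discrepancy(sets, dict(zip(range(n), colors)))
                   for colors in product((-1, +1), repeat=n))

    n = 6
    sets = [{0, 1, 2}, {2, 3, 4}, {1, 3, 5}, {0, 4, 5}]
    print(min_discrepancy(n, sets))  # best achievable maximum imbalance, here 1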

Date: December 15


Pricing Online Decisions: Beyond Auctions

by Alon Eden

We consider dynamic pricing schemes in online settings where selfish agents generate online events. Previous work on online mechanisms has dealt almost entirely with the goal of maximizing social welfare or revenue in auction settings. This paper deals with quite general settings and with minimizing social cost. We show that appropriately computed posted prices allow one to achieve essentially the same performance as the best online algorithm. This holds in a wide variety of settings. Unlike online algorithms, which learn about the event and then make enforceable decisions, prices are posted without knowing the future events or even the current event, and are thus inherently dominant-strategy incentive compatible. In particular, we show that one can give efficient posted-price mechanisms for metrical task systems, some instances of the k-server problem, and metrical matching problems. We give both deterministic and randomized algorithms. Such posted-price mechanisms decrease the social cost dramatically compared to selfish behavior where no decision incurs a charge. One alluring application of this is reducing the social cost of free parking exponentially.
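
A toy version of the posted-price idea (a simplified illustration under assumed parameters, not one of the mechanisms constructed in the paper): prices are fixed before agents show up, and each selfish agent simply minimizes its own cost, distance plus posted price, so prices steer behavior without the mechanism ever inspecting the current or future events.

    # Toy posted-price assignment (illustration of the idea only, not the paper's mechanisms).
    # Servers have fixed locations; a price is posted on each server in advance.
    # Each selfish agent picks the server minimizing distance + posted price.

    def assign(agents, servers, prices):
        """Greedy selfish choices under posted prices; returns list of (agent, chosen server index)."""
        chosen = []
        for a in agents:
            best = min(range(len(servers)), key=lambda j: abs(a - servers[j]) + prices[j])
            chosen.append((a, best))
        return chosen

    servers = [0.0, 10.0]
    agents  = [4.0, 6.0, 1.0]

    # With no prices, choices are by distance alone; posting a price on server 0
    # redirects the agents for which the alternative is only slightly farther.
    print(assign(agents, servers, prices=[0.0, 0.0]))  # [(4.0, 0), (6.0, 1), (1.0, 0)]
    print(assign(agents, servers, prices=[3.0, 0.0]))  # [(4.0, 1), (6.0, 1), (1.0, 0)]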

Joint work with Amos Fiat, Lukasz Jez, Ilan Cohen.

Date: December 22


The Topology of Wireless Communication

by Merav Parter

We study the topological properties of wireless communication maps and their usability in algorithmic design.
We consider the SINR model, which compares the received power of a signal at a receiver against the sum of the strengths of other interfering signals plus background noise. To model the reception regions, we use the convenient representation of an SINR diagram, introduced by Avin et al., which partitions the plane into n reception zones, one per station, and a complementary region where no station can be heard. The topology and geometry of SINR diagrams were studied by Avin et al. in the relatively simple setting of uniform power, where all stations transmit with the same power level. It was shown therein that uniform SINR diagrams assume a particularly convenient form: the reception zone of each station is convex, fat, and strictly contained inside the corresponding Voronoi cell.
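
As a reminder of the reception rule underlying these diagrams, the sketch below tests whether a receiver hears station i under the standard SINR condition; the path-loss exponent alpha, the reception threshold beta, and the noise level are assumed example parameters.

    # SINR reception test (standard model; the specific parameter values are assumptions).
    import math

    def sinr(stations, powers, receiver, i, alpha=2.0, noise=1e-3):
        """SINR of station i at the receiver: received signal over (interference + noise)."""
        def recv_power(j):
            d = math.dist(stations[j], receiver)
            return powers[j] / (d ** alpha)
        interference = sum(recv_power(j) for j in range(len(stations)) if j != i)
        return recv_power(i) / (interference + noise)

    def hears(stations, powers, receiver, i, beta=1.0, **kw):
        """Receiver lies in station i's reception zone iff SINR_i >= beta."""
        return sinr(stations, powers, receiver, i, **kw) >= beta

    stations = [(0.0, 0.0), (10.0, 0.0)]
    powers   = [1.0, 4.0]          # non-uniform transmission energies
    print(hears(stations, powers, (2.0, 0.0), 0))  # station 0 at distance 2 vs interferer at 8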

Here we consider the more general (and common) case where transmission energies are arbitrary (or non-uniform). Under that setting, the reception zones are not necessarily convex or even connected. This poses the algorithmic challenge of designing efficient algorithmic techniques for the non-uniform setting, as well as the theoretical challenge of understanding the geometry of SINR diagrams (e.g., the maximal number of connected components they might have). We achieve several results in both directions. One of our key results concerns the behavior of a (d+1)-dimensional map, i.e., a map in one dimension higher than the dimension in which the stations are embedded. Specifically, although the d-dimensional map might be highly fractured, drawing the map in one dimension higher "heals" the zones, which become connected (in fact, hyperbolically connected). In addition, we establish the minimum principle for the SINR function, and utilize it as a discretization technique for solving two-dimensional problems in the SINR model. This approach is shown to be useful for handling optimization problems over two dimensions (e.g., power control, energy minimization); for providing tight bounds on the number of null cells in the reception map; and for approximating geometrical and topological properties of the wireless reception map (e.g., the maximum inscribed sphere). Essentially, the minimum principle allows us to reduce the dimension of the optimization domain without losing anything in the accuracy or quality of the solution.

Joint work with Erez Kantor, Zvi Lotker and David Peleg.

Date: December 29


The Secretary Returns

by Shai Vardi

In the online random-arrival model, an algorithm receives a sequence of n requests that arrive in a random order. The algorithm is expected to make an irrevocable decision with regard to each request based only on the observed history. We consider the following natural extension of this model: each request arrives k times, and the arrival order is a random permutation of the kn arrivals; the algorithm is expected to make a decision regarding each request only upon its last arrival. We focus primarily on the case k=2, which can also be interpreted as each request arriving at, and departing from, the system at a random time.

We examine the secretary problem: the problem of selecting the best secretary when the secretaries are presented online according to a random permutation. We show that when each secretary arrives twice, we can achieve a competitive ratio of ~0.768 (compared to 1/e in the classical secretary problem), and that this ratio is optimal. We also show that without any knowledge about the number of secretaries or their arrival times, we can still hire the best secretary with probability at least 2/3, in contrast to the impossibility of achieving a constant success probability in the classical setting.
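
To make the 1/e benchmark concrete, the following Monte-Carlo simulation estimates the success probability of the classical observe-then-commit rule in the single-arrival setting; it is only the baseline for comparison, not the returning-secretary algorithm from the talk.

    # Monte-Carlo estimate of the classical secretary 1/e rule (baseline only;
    # not the k=2 returning-secretary algorithm from the talk).
    import math, random

    def classical_secretary_success(n, trials=20000, rng=random.Random(1)):
        cutoff, wins = int(n / math.e), 0
        for _ in range(trials):
            ranks = list(range(n))          # rank 0 is the best secretary
            rng.shuffle(ranks)
            best_seen = min(ranks[:cutoff], default=n)  # observe, but never hire, the prefix
            # hire the first later candidate beating the prefix; else take the last one
            chosen = next((r for r in ranks[cutoff:] if r < best_seen), ranks[-1])
            wins += (chosen == 0)
        return wins / trials

    print(classical_secretary_success(100))  # close to 1/e ~ 0.368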

We extend our results to the matroid secretary problem, introduced by Babaioff et al. (2007), and show a simple algorithm that achieves a 2-approximation to the maximum-weight basis in the new model (for k=2). We show that this approximation factor can be improved in special cases of the matroid secretary problem; in particular, we give a 16/9-competitive algorithm for the returning edge-weighted bipartite matching problem.

Date: January 19


Good Margins Make Good Neighbors

by Aryeh Kontorovich

Although well-known by practitioners to be an effective classification tool, nearest-neighbor methods have been somewhat neglected by learning theory of late. The goal of this talk is to revive interest in this time-tested technique by recasting it in a modern perspective. We will present a paradigm of margin-regularized 1-nearest-neighbor classification which: (i) is Bayes-consistent; (ii) yields simple, usable finite-sample error bounds; (iii) provides for very efficient algorithms with a principled speed-accuracy tradeoff; and (iv) allows for near-optimal sample compression. Further extensions include multiclass, regression, and metric dimensionality reduction. I will argue that the regularized 1-nearest neighbor is superior to k-nearest neighbors in several crucial statistical and computational aspects.
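
For contrast, here is the plain (unregularized) 1-nearest-neighbor baseline; the Euclidean metric and the toy data are assumptions for illustration, and the margin-regularized method discussed in the talk is what adds the consistency and compression guarantees.

    # Plain 1-nearest-neighbor classification (the unregularized baseline, for contrast
    # with the margin-regularized variant discussed in the talk).
    import math

    def nn1_predict(train, query):
        """train: list of (point, label); returns the label of the closest training point."""
        point, label = min(train, key=lambda pl: math.dist(pl[0], query))
        return label

    train = [((0.0, 0.0), "A"), ((0.2, 0.1), "A"), ((1.0, 1.0), "B"), ((0.9, 1.2), "B")]
    print(nn1_predict(train, (0.1, 0.2)))  # "A"
    print(nn1_predict(train, (0.8, 0.9)))  # "B"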

Based on a series of works with Lee-Ad Gottlieb, Robert Krauthgamer, and Roi Weiss.

Date: March 16
