## Social Efficiency in Cake Cutting

### by Yair Dombb (BIU)

The problem of sharing a resource among multiple players has been modeled by the famous "cake-cutting problem", which has enjoyed much interest in the mathematical community, and more recently in computer science as well. Most of the classic work on cake-cutting is concerned with the fairness of such allocations. In this talk, we consider the social welfare associated with cake cutting. We present some of our recent results concerning the relations between fairness and social welfare, and the problem of computing socially-efficient divisions.
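To make the social-welfare notion concrete, here is a minimal sketch (my own toy model, not from the talk) assuming the cake is discretized into segments and valuations are additive: under these assumptions the utilitarian-optimal division simply gives each segment to a player who values it most.

```python
# Toy model: the cake is discretized into unit segments, and each player's
# valuation is additive over segments (values normalized to sum to 1).
# Under additive valuations, the utilitarian-optimal division gives each
# segment to a player who values it most.

def utilitarian_welfare(valuations):
    """valuations[i][j] = value player i assigns to segment j."""
    return sum(max(v[j] for v in valuations) for j in range(len(valuations[0])))

# Two players with opposed tastes: the optimum separates the cake cleanly,
# and each player receives total value 0.8, for welfare 1.6.
v1 = [0.4, 0.4, 0.1, 0.1]
v2 = [0.1, 0.1, 0.4, 0.4]
print(utilitarian_welfare([v1, v2]))  # 1.6
```

The tension the talk studies arises because such a welfare-maximizing division need not be fair: a player whose tastes are dominated everywhere may receive nothing.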

Based on joint work with Orit Arzi, Yonatan Aumann, and Avinatan Hassidim.

## Nash equilibria using polynomial algebra

### by Ratnik Gandhi (TAU)

In this talk I will present a characterization of Nash equilibria as solutions to a system of polynomial equations. Subsequently I will present (a) a method for computing all equilibria of a subclass of finite normal form games that uses automorphic forms of fields and their extensions, and (b) an algorithm for deciding membership of an input game in this subclass of games.
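As a small illustration of the polynomial-equations viewpoint (a standard textbook example, not the talk's method): for a fully mixed equilibrium of a 2x2 game, each player's mixing probability must make the opponent indifferent between her two actions, and these indifference conditions are polynomial (here, linear) equations that can be solved exactly.

```python
from fractions import Fraction

def fully_mixed_2x2(A, B):
    """Solve the indifference (polynomial) conditions of a 2x2 game for an
    interior mixed equilibrium.  A[i][j] and B[i][j] are the row and column
    players' payoffs.  Returns (p, q) = (P(row plays action 0),
    P(column plays action 0)); assumes the denominators are nonzero."""
    # Column mixes q so the row player is indifferent between her two rows:
    q = Fraction(A[1][1] - A[0][1], A[0][0] - A[0][1] - A[1][0] + A[1][1])
    # Row mixes p so the column player is indifferent between his two columns:
    p = Fraction(B[1][1] - B[1][0], B[0][0] - B[1][0] - B[0][1] + B[1][1])
    return p, q

# Matching Pennies: the unique equilibrium is (1/2, 1/2).
A = [[1, -1], [-1, 1]]
B = [[-1, 1], [1, -1]]
print(fully_mixed_2x2(A, B))  # (Fraction(1, 2), Fraction(1, 2))
```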

## Resource Placement and Assignment in Distributed Network Topologies

### by Yuval Rochman (TAU)

We consider the problem of how to place and efficiently utilize resources in network environments. The setting consists of a regionally organized system which must satisfy regionally varying demands for various resources. The operator aims at placing resources in the regions so as to minimize the cost of providing the demands. Examples of systems falling under this paradigm are 1) a peer-supported Video on Demand service, where the problem is how to place the various video movies, and 2) a cloud-based system consisting of regional server farms, where the problem is where to place various contents or end-user services. The main challenge posed by this paradigm is the need to deal with an arbitrary multi-dimensional (high-dimensionality) stochastic demand. We show that, despite this complexity, one can optimize the system operation while accounting for the full demand distribution. We provide algorithms for conducting this optimization and show that their complexity is quite low, implying they can handle very large systems. The algorithms can be used for 1) exact system optimization, 2) deriving lower bounds for heuristic-based analysis, and 3) sensitivity analysis. The importance of the model is demonstrated by showing that an alternative analysis based on the demand means only may, in certain cases, achieve performance that is drastically worse than the optimal one.

## Submodular Secretary Problems

### by Moran Feldman (Technion and Microsoft)

In the Classical Secretary problem a set of secretaries arrive in a random order. When a secretary arrives, she must be either hired or dismissed; either way, the decision is irrevocable. The objective is to hire the best secretary (and no other). Since the problem's introduction in the 1960s, many variants of it have been proposed and researched. Many of these variants have the same arrival process as the original problem, i.e., a set of secretaries arrives in a random order.

In this work we apply a simple observation to the secretary problem: the random order of the input can be generated by independently choosing a random continuous arrival time for each secretary.

Surprisingly, this simple observation enables us to improve the competitive ratio of several known and studied variants of the secretary problem. In addition, the proofs we provide assuming random arrival times are shorter and simpler in comparison to existing proofs. All the variants considered in this work have the following structure. The algorithm is allowed to hire any subset of secretaries as long as it agrees with some given constraint. The objective is to hire a set of secretaries maximizing the value of a given monotone submodular function $f$. We consider three constraint types: uniform matroid, partition matroid and knapsack constraints.
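The arrival-time view can be illustrated on the classical problem itself (a sketch of the textbook threshold algorithm restated with arrival times; the threshold value and simulation are my own, not from the talk):

```python
import math
import random

def secretary_arrival_times(values, threshold=math.exp(-1)):
    """Classical secretary algorithm restated with random arrival times:
    each candidate gets an independent uniform arrival time in [0,1], which
    induces a uniformly random order.  We observe everyone arriving before
    the time threshold, then hire the first later arrival beating all
    earlier ones (returning None if nobody is hired)."""
    arrivals = sorted((random.random(), v) for v in values)
    best_seen = float('-inf')
    for t, v in arrivals:
        if t < threshold:
            best_seen = max(best_seen, v)      # observation phase
        elif v > best_seen:
            return v                           # hire and stop
    return None

# Over many trials the best of n candidates is hired roughly 1/e of the time.
values = list(range(100))
wins = sum(secretary_arrival_times(values) == max(values) for _ in range(2000))
print(wins / 2000)  # close to 0.37
```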

Joint work with Seffi Naor and Roy Schwartz.

## Colored Packets with Deadlines and Metric Space Transition Cost

### by Adi Vardi (TAU)

We consider scheduling of colored packets with transition costs which form a general metric space. We design a $1 - O(\sqrt{MST(G) / L})$ competitive algorithm. Our main result is a hardness result of $1 - \Omega(\sqrt{MST(G) / L})$, which matches the competitive ratio of the algorithm for each metric space separately. In particular, we improve the hardness result of Azar et al. (2009) for uniform metric spaces.

We also extend our result to weighted directed graphs which obey the triangle inequality, and show a $1 - O(\sqrt{TSP(G) / L})$ competitive algorithm and a nearly-matching hardness result. In proving our hardness results we use an interesting non-standard embedding.

Joint work with Yossi Azar.

## Efficient Parking Allocation as Online Bipartite Matching with Posted Prices

### by Reshef Meir (HUJI and Microsoft)

We study online bipartite matching settings inspired by parking allocation problems, where rational agents arrive sequentially and select their most preferred parking slot. In contrast to the standard online matching setting, where the edges incident to each arriving vertex are revealed upon its arrival, agents in our setting have private preferences over the available slots. Our focus is on natural and simple pricing mechanisms, in the form of posted prices.

On the one hand, the restriction to posted prices imposes new challenges relative to standard online matching. On the other hand, we employ specific structures on agents' preferences that are natural in many scenarios, including parking. We construct optimal and approximate pricing mechanisms under various informational and structural assumptions, and provide approximation upper bounds under the same assumptions.

In particular, one of our mechanisms guarantees a better approximation bound than the classical result of Karp, Vazirani and Vazirani ['90] for unweighted online matching, under a natural structural restriction.
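The basic interaction in such a setting can be sketched as follows (an illustrative toy mechanism with fixed prices, not any of the paper's mechanisms): each arriving agent, knowing only the posted prices, buys the available slot maximizing her value minus price, provided that utility is nonnegative.

```python
def posted_price_matching(prices, agents):
    """Minimal sketch of online matching with posted prices (illustrative
    only).  prices[s] is the posted price of slot s; each arriving agent
    has private values over slots and takes the available slot maximizing
    value minus price, buying only if that utility is nonnegative."""
    available = set(prices)
    matching, revenue = {}, 0
    for name, values in agents:
        candidates = [(values.get(s, 0) - prices[s], s) for s in available]
        if not candidates:
            break
        utility, slot = max(candidates)
        if utility >= 0:                 # a rational agent buys
            matching[name] = slot
            revenue += prices[slot]
            available.remove(slot)
    return matching, revenue

prices = {'near': 3, 'far': 1}
agents = [('a', {'near': 5, 'far': 2}), ('b', {'near': 4, 'far': 2})]
print(posted_price_matching(prices, agents))  # ({'a': 'near', 'b': 'far'}, 4)
```

Note that the mechanism never learns the agents' values directly; it only observes which slots get taken, which is exactly the informational constraint posted prices impose.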

Joint work with Yiling Chen and Michal Feldman from Harvard's CRCS.

## Tight Bounds for Online Vector Bin Packing

### by Ilan Cohen (TAU)

In the $d$-dimensional vector bin packing problem (VBP), one is given vectors $x_1,x_2, \ldots ,x_n \in {\bf R}^d$ and the goal is to find a partition into a minimum number of feasible sets: $\{1,2, \ldots ,n\} = \cup_{i=1}^{s} B_i$.

A set $B_i$ is feasible if $\sum_{j \in B_i} x_j \leq {\bf 1}$, where ${\bf 1}$ denotes the all $1$'s vector. For online VBP, the gap between the best lower bound of $\Omega(1)$ on the competitive ratio and the best upper bound of $O(d)$ has been open for almost 20 years. We settle this by describing an $\Omega(d^{1-\epsilon})$ lower bound.

We also give strong lower bounds (of $\Omega(d^{\frac{1}{B}-\epsilon})$) if the bin size $B \in {\bf Z}_+$ is allowed to grow. Finally, we discuss almost-matching upper bound results for general values of $B$; our upper bound is "shifted by 1" from the lower bound.
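For context, the long-standing $O(d)$ upper bound is achieved by the classical First Fit heuristic, sketched below (a standard baseline, not an algorithm from the talk):

```python
def first_fit_vbp(vectors):
    """Online First Fit for vector bin packing: place each arriving vector
    into the first open bin where every coordinate still fits (load plus
    item at most 1 in all d dimensions), opening a new bin otherwise.
    First Fit is the classical O(d)-competitive baseline."""
    bins = []  # each bin is represented by its current load vector
    for x in vectors:
        for load in bins:
            if all(l + xi <= 1 for l, xi in zip(load, x)):
                for i, xi in enumerate(x):
                    load[i] += xi
                break
        else:
            bins.append(list(x))
    return len(bins)

# Items that conflict in different coordinates force a second bin.
print(first_fit_vbp([(0.6, 0.1), (0.1, 0.6), (0.6, 0.1)]))  # 2
```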

Joint work with Yossi Azar, Bruce Shepherd and Seny Kamara.

## Two-Sided Error Proximity Oblivious Testing

### by Igor Shinkar (Weizmann Institute)

Loosely speaking, a proximity-oblivious (property) tester is a randomized algorithm that makes a constant number of queries to a tested object and distinguishes objects that have a predetermined property from those that lack it. Specifically, for some threshold probability $c \in (0,1]$, objects having the property are accepted with probability at least $c$, whereas objects that are $\epsilon$-far from having the property are accepted with probability at most $c - \delta(\epsilon)$ for some $\delta = \delta(\epsilon)$. (We stress that, in contrast to standard testers, a proximity-oblivious tester is not given the proximity parameter $\epsilon$.)

The foregoing notion, introduced by Goldreich and Ron (STOC 2009), was originally defined with respect to $c=1$, which corresponds to one-sided error (proximity-oblivious) testing. Here we study the two-sided error version of proximity-oblivious testers; that is, the (general) case of arbitrary $c \in (0,1]$. We show that in many natural cases two-sided error proximity-oblivious testers are more powerful than one-sided error proximity-oblivious testers; that is, many natural properties that have no one-sided error proximity-oblivious tester do have a two-sided error proximity-oblivious tester.

Joint work with Oded Goldreich.

## An Optimal Randomized Online Algorithm for Reordering Buffer Management

### by Noa Avigdor-Elgrabli (Technion)

In the reordering buffer management problem an input sequence of colored items arrives online, and has to be rescheduled in a permuted output sequence of the same items, with the help of a buffer that can hold $k$ items. The items enter the buffer in their order of arrival. When the buffer is full, one item must be removed and scheduled in the output sequence, making room for a new input item to enter the buffer. The objective is to minimize the total number of color changes between consecutive items in the output schedule.
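The buffer mechanics and cost model above can be illustrated with a tiny simulation, using a simple greedy eviction rule (my own illustrative heuristic, not the talk's algorithm):

```python
from collections import Counter

def reorder_cost(sequence, k):
    """Simulate the reordering buffer with a simple greedy heuristic: when
    the size-k buffer is full, keep emitting the color just written if it
    is still present, and otherwise switch to the color currently most
    frequent in the buffer.  Returns the number of color changes in the
    output sequence."""
    buffer, output = [], []
    for item in sequence:
        buffer.append(item)
        if len(buffer) == k:                   # buffer full: must evict one item
            if output and output[-1] in buffer:
                color = output[-1]             # continuing the current color is free
            else:
                color = Counter(buffer).most_common(1)[0][0]
            buffer.remove(color)
            output.append(color)
    output.extend(buffer)                      # flush the remaining items
    return sum(a != b for a, b in zip(output, output[1:]))

# Without reordering, "ababab" costs 5 color changes; a buffer of size 3
# lets the greedy heuristic batch the colors into "aaabbb", costing 1.
print(reorder_cost(list("ababab"), 3))  # 1
```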

We give an $O(\log\log k)$-competitive randomized online algorithm for reordering buffer management, where $k$ is the buffer size. Our bound matches the lower bound of Adamaszek et al. (STOC 2011). Our algorithm has two stages which are executed online in parallel. The first stage deterministically computes a feasible fractional solution to an LP relaxation of reordering buffer management. The second stage rounds the fractional solution using randomness. The first stage is based on the online primal-dual schema, combined with a dual fitting charging scheme. As primal-dual steps and dual fitting steps are interleaved and in some sense conflicting, combining them is challenging. We also note that we apply the primal-dual schema to a relaxation with mixed packing and covering constraints. The first stage produces a fractional LP solution with cost within a factor of $O(\log\log k)$ of the optimal LP cost. The second stage is an online algorithm that converts any LP solution to an integral solution, while increasing the cost by a constant factor. This stage generalizes recent results that gave a similar approximation guarantee using an offline rounding algorithm.

This is joint work with Yuval Rabani.

## A Nonmonotone Analysis with the Primal-Dual Approach: online routing of virtual circuits with unknown durations

### by Moti Medina (TAU)

We address the question of whether the primal-dual approach for the design and analysis of online algorithms can be applied to nonmonotone problems. We provide a positive answer by presenting a primal-dual analysis of the online algorithm of Awerbuch et al. [AAPW01] for routing virtual circuits with unknown durations.

Joint work with Guy Even.

## On the Locality of Some NP-Complete Problems

### by Leonid Barenboim (Ben Gurion University)

We consider the distributed message-passing LOCAL model. In this model a communication network is represented by a graph where vertices host processors, and communication is performed over the edges. Computation proceeds in synchronous rounds. The running time of an algorithm is the number of rounds from the beginning until all vertices terminate; local computation is free. An algorithm is called local if it terminates within a constant number of rounds. The question of which problems can be computed locally was raised by Naor and Stockmeyer in their seminal paper in STOC '93. Since then the quest for problems with local algorithms, and for problems that cannot be computed locally, has become a central research direction in the field of distributed algorithms. However, prior to our work, local algorithms were known only for some problems that also admit simple sequential algorithms. On the other hand, it is known that many problems with simple sequential algorithms cannot be solved locally.
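The key feature of the LOCAL model is that after $r$ rounds a vertex can know exactly its radius-$r$ neighborhood and nothing more. A toy simulator (my own illustration of the model, unrelated to the talk's algorithms) makes this explicit:

```python
def local_simulate(adj, rounds):
    """Toy simulator of the synchronous LOCAL model: in each round every
    vertex sends its current knowledge to all neighbors, so after r rounds
    a vertex knows exactly its radius-r neighborhood.  Local computation
    and message size are unbounded (and free) in this model."""
    knowledge = {v: {v} for v in adj}          # round 0: each vertex knows itself
    for _ in range(rounds):
        messages = {v: set.union(*(knowledge[u] for u in adj[v])) if adj[v] else set()
                    for v in adj}
        for v in adj:
            knowledge[v] |= messages[v]
    return knowledge

# On the path 0-1-2-3, after one round vertex 0 knows only {0, 1}.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(local_simulate(path, 1)[0])  # {0, 1}
```

A local (constant-round) algorithm must therefore base every vertex's output on a constant-radius neighborhood, which is what makes local algorithms for global problems like coloring so surprising.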

We present the first local algorithm for an NP-complete problem. Specifically, we show that $O(n^{1/2 + \epsilon} \cdot \chi)$-coloring can be computed within $O(1)$ rounds, where $\epsilon > 0$ is an arbitrarily small constant and $\chi$ is the chromatic number of the input graph. (This problem was shown to be NP-complete by Zuckerman in ToC '07.) On the way to this result we devise a constant-time algorithm for computing $(O(1), O(n^{1/2 + \epsilon}))$-network-decompositions. Network-decompositions were introduced by Awerbuch, Goldberg, Luby, and Plotkin, and are very useful for computing various distributed problems. The best previously-known algorithm for network-decomposition has a polylogarithmic running time (but is applicable for a wider range of parameters). We also devise a $\Delta^{1 + \epsilon}$-coloring algorithm for graphs with sufficiently large maximum degree $\Delta$ that runs within $O(1)$ rounds, improving the best previously-known result for this family of graphs, which is $O(\log^* n)$.

## Matching - A New Proof for an Ancient Algorithm

### by Vijay V. Vazirani (Georgia Institute of Technology)

For all practical purposes, the Micali-Vazirani algorithm, discovered in 1980, is still the most efficient known maximum matching algorithm (for very dense graphs, slight asymptotic improvement can be obtained using fast matrix multiplication). However, it has remained a "black box" result for the last 32 years. We hope to change this with the help of a recent paper giving a simpler proof and exposition of the algorithm: http://arxiv.org/abs/1210.4594

In the interest of covering all the ideas, we will assume that the audience is familiar with basic notions such as augmenting paths and the bipartite matching algorithm.
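For reference, the assumed background (the textbook augmenting-path algorithm for bipartite matching, not the Micali-Vazirani algorithm itself, which handles general graphs and is far more involved) can be sketched as:

```python
def max_bipartite_matching(adj, n_right):
    """Textbook augmenting-path algorithm for maximum bipartite matching.
    adj[u] lists the right-vertices adjacent to left-vertex u; returns the
    size of a maximum matching.  Each call to augment() searches for an
    augmenting path from one free left vertex via DFS."""
    match = [-1] * n_right  # match[v] = left vertex matched to right vertex v

    def augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                # v is free, or its current partner can be rematched elsewhere
                if match[v] == -1 or augment(match[v], seen):
                    match[v] = u
                    return True
        return False

    return sum(augment(u, set()) for u in range(len(adj)))

# 3 left vertices, 3 right vertices; a perfect matching exists.
print(max_bipartite_matching([[0, 1], [0], [1, 2]], 3))  # 3
```

This runs in $O(VE)$ time; the Micali-Vazirani algorithm achieves $O(E\sqrt{V})$ on general graphs by finding many shortest augmenting paths per phase.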