Abstracts

Multi-Hop Routing and Scheduling in Wireless Networks in the SINR model

We present an algorithm for multi-hop routing and scheduling of requests in wireless networks in the \sinr\ model. The goal of our algorithm is to maximize the throughput or to maximize the minimum ratio between the flow and the demand. Our algorithm partitions the links into buckets, where every bucket consists of a set of links with nearly equivalent reception powers. We denote the number of nonempty buckets by $\sigdiv$. Our algorithm obtains an approximation ratio of $O(\sigdiv \cdot \log n)$, where $n$ denotes the number of nodes. For the case of linear powers, $\sigdiv = 1$, and hence the approximation ratio of the algorithm is $O(\log n)$. This is the first practical approximation algorithm for linear powers whose approximation ratio depends only on $n$ (and not on the max-to-min distance ratio). If the transmission power of each link is part of the input (and arbitrary), then $\sigdiv = O(\log\Gamma + \log \Delta)$, where $\Gamma$ denotes the max-to-min power ratio and $\Delta$ denotes the max-to-min distance ratio. Hence, the approximation ratio is $O(\log n \cdot (\log\Gamma + \log \Delta))$. Finally, we consider the case in which the algorithm needs to assign a power to each link in a range $[\pmin,\pmax]$. An extension of the algorithm to this case achieves an approximation ratio of $O((\log n + \log \log \Gamma) \cdot (\log\Gamma + \log \Delta))$.
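The bucketing step can be made concrete. Below is a minimal sketch; the abstract does not specify the bucket granularity, so we assume "nearly equivalent" means "within a constant factor", here a factor of 2:

    import math

    def partition_into_buckets(reception_powers, base=2.0):
        # Link i goes to bucket floor(log_base(P_i / P_min)), so links in
        # the same bucket have reception powers within a factor of `base`
        # of each other; the number of nonempty buckets plays the role of
        # $\sigdiv$.  The granularity `base` is our assumption.
        p_min = min(reception_powers)
        buckets = {}
        for i, p in enumerate(reception_powers):
            b = int(math.log(p / p_min, base))
            buckets.setdefault(b, []).append(i)
        return buckets

    # With linear powers all reception powers are equal, so there is a
    # single nonempty bucket, matching $\sigdiv = 1$.
    print(partition_into_buckets([1.0, 1.0, 1.0]))        # {0: [0, 1, 2]}
    print(partition_into_buckets([1.0, 3.0, 9.0, 80.0]))  # 4 nonempty buckets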

Joint work with Guy Even and Moti Medina


Keyword Optimization in Search-Based Advertising Markets

Search-based advertising is a multi-billion-dollar industry and part of the growing electronic commerce market. In this work, we study the search-based advertising market from the advertiser's point of view. There are three natural participants in search-based advertising: the advertisers, who promote their products to consumers based on their search queries; the users, who search for content; and the search providers, who match advertisers and users by placing ads in search result pages. It is customary today for advertisers to pay only for ad clicks, which guides both the advertisers' and the search providers' strategies. The advertisers' strategy, which is the focus of this work, requires them to select keywords, bids, and a budget constraint imposed on their advertising spending. We abstract an optimization problem in which the advertisers set a daily budget and select a set of keywords on which they bid. We assume that the cost and benefit of each keyword are fixed and known, sidestepping this important strategic issue and focusing on the keyword optimization problem. The advertisers' goal is to maximize their utility subject to the budget constraint. Clearly, the advertisers would like to buy the most profitable keywords, subject to the budget constraint. The problem is that there is uncertainty regarding the type and number of queries in a day, and the advertisers have to fix a single policy for the entire day. If too few keywords are selected, the advertisers are left with unused budget. If too many keywords are selected, then by the time queries associated with 'good' keywords arrive, the daily budget may have already been exhausted.
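To illustrate the underlying budget tension in its simplest, offline form (this greedy heuristic is our own toy sketch, not one of the paper's algorithms), assume each keyword has a known expected daily cost and profit:

    def select_keywords(keywords, budget):
        # Greedily add keywords in decreasing profit-to-cost order until
        # the daily budget is exhausted.  `keywords` maps each keyword to
        # a (cost, profit) pair, both assumed fixed and known as in the
        # offline setting; the exact selection problem is knapsack-like.
        ranked = sorted(keywords.items(),
                        key=lambda kv: kv[1][1] / kv[1][0], reverse=True)
        chosen, spent = [], 0.0
        for kw, (cost, profit) in ranked:
            if spent + cost <= budget:
                chosen.append(kw)
                spent += cost
        return chosen

    print(select_keywords({'shoes': (4, 10), 'boots': (3, 9),
                           'laces': (5, 6)}, budget=8))  # ['boots', 'shoes']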

We study the advertisers' keyword optimization problem in three different settings: an offline setting in which all problem parameters are known beforehand, a stochastic model in which the advertiser knows only some of the model's parameters, and an adversarial model that makes no statistical assumptions about the generation of the query sequences. For each of these models we provide lower and upper bounds on the performance of learning algorithms, using the notion of regret minimization.

We support our theoretical results with extensive simulations.


Hitting Sets Online

We consider the hitting set problem in an online setting, in which a hypergraph over a ground set X is not known in advance. A sequence of hyperedges S_i (i.e., each S_i is a subset of X) arrives one by one in an online fashion. Upon arrival of a hyperedge S_i, the algorithm must select an element in S_i. The goal is to select as few elements as possible. We define a new hypergraph property, called the hitting set competitive ratio of a hypergraph H, which equals the smallest competitive ratio of an online algorithm for hitting sets with respect to H.

We consider hypergraphs in which the union of every two intersecting hyperedges is also a hyperedge. We prove that the hitting set competitive ratio of such a hypergraph H equals the minimum number of colors needed to color the hypergraph in a unique-max coloring. In such a coloring, the maximum color in each hyperedge appears exactly once.
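To make the equivalence concrete, here is a minimal sketch of the online algorithm it suggests, assuming a unique-max coloring is given in advance:

    def online_hitting_set(hyperedges, color):
        # Upon arrival of a hyperedge S that is not yet hit, select its
        # maximum-color element; in a unique-max coloring this element is
        # well defined.  By the theorem above, for hypergraphs closed
        # under union of intersecting hyperedges, the number of selected
        # elements is competitive up to the number of colors used.
        hit = set()
        for S in hyperedges:
            if not hit & S:
                hit.add(max(S, key=lambda x: color[x]))
        return hit

    # Toy example: the path a-b-c, hyperedges = connected subpaths.
    # The coloring below is unique-max: every subpath containing b has
    # maximum color 2 exactly once, and {a}, {c} have maximum color 1.
    color = {'a': 1, 'b': 2, 'c': 1}
    print(online_hitting_set([{'a'}, {'a', 'b'}, {'b', 'c'}], color))
    # -> {'a', 'b'}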

We apply this result as follows:

* Given a graph G, consider the hypergraph whose hyperedges are the vertex subsets of G that induce connected subgraphs. For this hypergraph, we obtain improved competitive ratios compared to [Alon, Awerbuch, Azar, Buchbinder, Naor; 2003] when G is a planar graph or has bounded tree-width.

* We consider hypergraphs realized by points in the plane together with half-planes or unit disks. For these cases we obtain a logarithmic competitive ratio.

Joint work with Shakhar Smorodinsky (presented in ESA-2011)


Recommender Systems With Non-Binary Grades

We consider the interactive model of recommender systems, in which users are asked about just a few of their preferences, and in return the system outputs an approximation of all their preferences. The measure of performance is the \emph{probe complexity} of the algorithm, defined to be the maximal number of answers any user should provide (probe complexity typically depends inversely on the number of users with similar preferences and on the quality of the desired approximation). Previous interactive recommendation algorithms assume that user preferences are binary, meaning that each object is either “liked” or “disliked” by each user. In this paper we consider the general case in which users may have a more refined scale of preference, namely more than two possible grades. We show how to reduce the non-binary case to the binary one, proving the following results. For discrete grades with $s$ possible values, we give a simple deterministic reduction that preserves the approximation properties of the binary algorithm at the cost of increasing the probe complexity by a factor of $s$. Our main result is for the general case, where we assume that user grades are arbitrary real numbers. For this case we present an algorithm that preserves the approximation properties of the binary algorithm while incurring only polylogarithmic overhead.
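For the discrete case, one natural threshold reduction with these parameters is sketched below (we do not claim this is the paper's exact construction): run the binary algorithm once per grade level, and recover a grade as the number of thresholds it clears.

    def to_binary_instances(grades, s):
        # One binary instance per threshold t = 1, ..., s-1: user u
        # 'likes' object o in instance t iff grades[u][o] >= t.  Running
        # the binary algorithm on every instance multiplies the probe
        # complexity by (roughly) the factor s of the abstract.
        return [{u: {o: int(g >= t) for o, g in objs.items()}
                 for u, objs in grades.items()}
                for t in range(1, s)]

    def recover_grade(binary_instances, user, obj):
        # A grade in {0, ..., s-1} equals the number of cleared thresholds.
        return sum(inst[user][obj] for inst in binary_instances)

    grades = {'alice': {'movie1': 2, 'movie2': 0}}
    instances = to_binary_instances(grades, s=3)
    print(recover_grade(instances, 'alice', 'movie1'))  # 2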

Appeared in SPAA'11.


Real Time Scheduling Using Sub-Linear Ternary CAMs

A practical and scalable TCAM (Ternary Content Addressable Memory)-based priority queue implementation, called TPQ, is presented.

With current technology, the scheme can easily be used in today's high-end routers, supporting a $100$ Gbps line rate with millions of concurrent connections or more. Moreover, TPQ can scale with the technology to support future line rate demands.

TPQ takes at most $3$ sequential TCAM accesses per insert or remove operation, plus a small constant number of main memory accesses, independent of $n$, the number of keys, and of $w$, the key size in bits. Most importantly, it uses only $6w \sqrt{n}$ TCAM entries of $w$ bits each, divided equally among six TCAMs.

This is in contrast to all previous implementations, which either take $\log n$ memory accesses per operation or build on expensive, non-scalable hardware: either special ASIC designs or large TCAMs.

As a second contribution, we present an $O(n)$ sorting algorithm for a system equipped with a TCAM of $O(w\sqrt{n})$ entries, where $n$ is the number of items, improving on a previous result that used a TCAM of size $O(n)$.

Our work opens new applications for TCAMs, which until today have been used in routers mainly for IP lookup and packet classification.


Optimal Replication in Multi-Regional Peer-to-peer Systems

In many communication systems, data transportation follows the well-known client-server model, in which multiple clients download content from a single server. When the number of clients communicating with the same server increases, the server's upload capacity reaches saturation and the server becomes a bottleneck. To overcome this problem, Peer-to-peer (P2P) systems were suggested, in which the terminal users (called peers) download content from other terminal users instead of from a central server. This article deals with the fundamental question of how to replicate content across the peers in an optimal and efficient manner. We study this question in the context of a large-scale network in which the peers are grouped into different geographical regions, so that transporting large-scale data (for instance, a video file) across regions is slower than within a region. Our main result is that optimal replication in single-region and multi-region environments follows the same rule, which we call Max Percentile. We derive optimal algorithms for handling such environments and show that our algorithms have low complexity and thus are very practical. We use numerical analysis and simulation to evaluate the system's performance and study its behavior. We believe that our results can provide valuable insights into the design of communication systems.


Challenges in Multi-Agent Systems: Bitcoin, Social Networks, P2P Communities, and Network Protocols

The age of the internet and the pervasiveness of networked computing have enabled the creation of large computational systems in which multiple autonomous entities interact. The designers of such systems face difficult challenges: they must bring about the desired behavior of the system as a whole while accounting for the independent behavior and incentives of individual agents. I will present a few examples of research (with various collaborators) that deals with these challenges in the context of different systems.

I will discuss recent work on incentives for information dissemination in the Bitcoin protocol (a distributed electronic currency that has received much attention recently) and in social networks, as well as work on the interactions within closed P2P communities, and on core network protocols such as TCP and BGP.


Multi-scale approximation and extension of functions with applications in data analysis

We will introduce a “learning” multi-scale iterative process for data analysis. This process approximates a task-related function defined on a given data set by using the geometric structures of the data at different scales. The constructed multi-scale representation can be easily extended to new data points. We will provide a number of examples, including classification and regression, extension of non-linear embedding coordinates, and forecasting of time series.
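To convey the flavor of such a process, here is a minimal sketch under assumptions of our own (one-dimensional data, Gaussian kernels, Nadaraya-Watson smoothing; the actual construction is certainly more refined): each scale approximates the residual left by the coarser scales, and evaluating the same kernel smoothers at a new point extends the approximation beyond the data set.

    import numpy as np

    def multiscale_fit(X, y, scales):
        # At each scale (coarse to fine), approximate the current
        # residual by kernel smoothing over the data set, store the
        # level, and pass the new residual to the next scale.
        residual, levels = y.astype(float), []
        for eps in scales:
            K = np.exp(-((X[:, None] - X[None, :]) ** 2) / eps ** 2)
            approx = (K @ residual) / K.sum(axis=1)
            levels.append((eps, residual.copy()))
            residual = residual - approx
        return levels

    def multiscale_extend(levels, X, x_new):
        # Extend to a new point by evaluating each level's smoother there.
        value = 0.0
        for eps, residual in levels:
            k = np.exp(-((x_new - X) ** 2) / eps ** 2)
            value += (k @ residual) / k.sum()
        return value

    X = np.linspace(0, 1, 50)
    levels = multiscale_fit(X, np.sin(6 * X), scales=[0.5, 0.25, 0.1, 0.05])
    print(multiscale_extend(levels, X, 0.33), np.sin(6 * 0.33))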


Treewidth reduction theorem and algorithmic problems on graphs

Given a graph $G$ and two specified vertices $s$ and $t$, the problem of computing a smallest $s-t$ (vertex) separator can be solved in polynomial time by network flow techniques. However, when we impose additional constraints on this separator (e.g., requiring it to be an independent set), the problem becomes NP-hard. We consider the following generic problem: given a hereditary and decidable class $Z$ and a parameter $k$, find an $s-t$ separator $S$ of size at most $k$ such that $G[S]$ belongs to $Z$. We show the fixed-parameter tractability of this problem, thus immediately solving a number of seemingly unrelated open questions scattered in the literature.
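For concreteness, the following brute-force sketch (our own exponential-time illustration, not the FPT algorithm of the talk) states the generic problem in code, with independence as the example constraint; the treewidth reduction theorem below is what makes an FPT algorithm possible.

    from itertools import combinations

    def constrained_separator(adj, s, t, k, in_class):
        # Try all vertex subsets of size <= k; return the first one that
        # separates s from t and satisfies the class constraint.
        candidates = [v for v in adj if v not in (s, t)]

        def separates(S):
            seen, stack = {s} | S, [s]     # DFS from s avoiding S
            while stack:
                for u in adj[stack.pop()]:
                    if u == t:
                        return False
                    if u not in seen:
                        seen.add(u)
                        stack.append(u)
            return True

        for size in range(k + 1):
            for S in map(set, combinations(candidates, size)):
                if separates(S) and in_class(S, adj):
                    return S
        return None

    def independent_set(S, adj):           # the example class $Z$
        return all(v not in adj[u] for u in S for v in S)

    # 4-cycle s-a-t-b-s: the only s-t separator of size <= 2 is {a, b},
    # which happens to be independent.
    adj = {'s': {'a', 'b'}, 'a': {'s', 't'},
           'b': {'s', 't'}, 't': {'a', 'b'}}
    print(constrained_separator(adj, 's', 't', 2, independent_set))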
The cornerstone of this work (which probably has potential beyond the achieved algorithmic results) is the so-called treewidth reduction theorem. In particular, let $C$ be the union of all minimal $s-t$ separators of size at most $k$, and let $G^*$ be the graph obtained from $G$ by contracting every connected component of $G \setminus C$ into a single vertex. The theorem states that the treewidth of $G^*$ is bounded by a function of $k$, and that $G^*$ can be computed in FPT time when parameterized by $k$. The theorem allows us to reduce the above problem to the same problem on an instance of bounded treewidth, which can then be solved in FPT time by a standard dynamic-programming technique or by invoking, say, Courcelle's theorem.
The purpose of this talk is to convey the main technical ideas of this work at an intuitive level. The talk is self-contained; in particular, no prior knowledge of parameterized complexity, treewidth, or Courcelle's theorem is needed. Everything will be intuitively defined in the first 10-15 minutes of the talk.
Joint work with D. Marx and B. O'Sullivan. Available at [http://arxiv.org/abs/1110.4765]


(In)Compressibility of NP-hard problems

A compression algorithm for a computational problem is a polynomial-time algorithm that compresses instances of the given problem into equivalent instances. The performance of the compression is naturally measured with respect to its worst-case output size. While NP-hard problems cannot have compression algorithms with non-trivial performance guarantees in terms of the original input size (assuming NP is not contained in P), some NP-hard problems have surprisingly good compressions when performance is measured in terms of the input's solution size. In this talk we discuss some recently developed lower bounds on the compressibility of NP-hard problems when the latter measure is used.
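A standard example of such a compression (a textbook fact, not taken from the talk) is the Buss kernel for Vertex Cover: in polynomial time, any instance is compressed to an equivalent one with at most $k^2$ edges, where $k$ is the solution size.

    def vc_kernelize(edges, k):
        # Buss rule: a vertex of degree > k must belong to every vertex
        # cover of size <= k, so take it and decrease the budget.
        while True:
            deg = {}
            for u, v in edges:
                deg[u] = deg.get(u, 0) + 1
                deg[v] = deg.get(v, 0) + 1
            high = {v for v, d in deg.items() if d > k}
            if not high:
                break
            edges = [e for e in edges if high.isdisjoint(e)]
            k -= len(high)
            if k < 0:
                return None                 # no cover of the required size
        if len(edges) > k * k:              # max degree <= k, so a size-k
            return None                     # cover covers <= k^2 edges
        return edges, k

    # Star with 5 leaves plus one extra edge, k = 2: the center (degree
    # 5 > 2) enters the cover, leaving one edge and budget k = 1.
    edges = [('c', f'x{i}') for i in range(5)] + [('x0', 'x1')]
    print(vc_kernelize(edges, 2))           # ([('x0', 'x1')], 1)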


The traveling salesman problem: Low-dimensionality implies a polynomial time approximation scheme

The Traveling Salesman Problem (TSP) is among the most famous NP-hard optimization problems. We design for this problem a randomized polynomial-time algorithm that computes a $(1+\epsilon)$-approximation to the optimal tour, for any fixed $\epsilon > 0$, in TSP instances that form an arbitrary metric space with bounded intrinsic dimension.

The celebrated results of Arora (A-98) and Mitchell (M-99) prove that the above result holds in the special case of TSP in a fixed-dimensional Euclidean space. Thus, our algorithm demonstrates that the algorithmic tractability of metric TSP depends on the dimensionality of the space and not on its specific geometry. This result resolves a problem that has been open since the quasi-polynomial time algorithm of Talwar (T-04).
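For reference, the standard notion behind "bounded intrinsic dimension" here is presumably the doubling dimension, the notion used by Talwar's algorithm: a metric space $(X,d)$ has doubling dimension at most $D$ if every ball of radius $2r$ can be covered by at most $2^D$ balls of radius $r$. Bounded intrinsic dimension then means $D = O(1)$; a fixed-dimensional Euclidean space $\mathbb{R}^d$ has doubling dimension $\Theta(d)$.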


Improved Collaborative Filtering

We consider the interactive model of collaborative filtering, where each member of a given set of users has a grade for each object in a given set of objects. The users do not know the grades at the start, but a user can probe any object, thereby learning her grade for that object directly. We describe reconstruction algorithms that generate good estimates of all user grades (“preference vectors”) using only a few probes. To this end, the outcomes of probes are posted on a public “billboard”, allowing users to adopt the results of probes executed by others.
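The following toy simulation (our own illustration of the model, not one of the paper's algorithms; the agreement rule is a deliberate simplification) shows how the billboard lets a user recover grades she never probed herself:

    import random

    def billboard_reconstruct(true_grades, probes_per_user):
        # Each user probes a few random objects and posts the outcomes;
        # she then adopts the posts of every user who agrees with her on
        # all commonly probed objects (overlaps only -- a simplification).
        users = list(true_grades)
        objects = list(true_grades[users[0]])
        billboard = {u: {o: true_grades[u][o]
                         for o in random.sample(objects, probes_per_user)}
                     for u in users}
        estimates = {}
        for u in users:
            mine, pooled = billboard[u], dict(billboard[u])
            for v in users:
                if all(billboard[v].get(o, g) == g for o, g in mine.items()):
                    pooled.update(billboard[v])
            estimates[u] = pooled
        return estimates

    random.seed(0)
    # Five users sharing identical preferences over ten objects: each
    # probes only 3, yet recovers most grades from others' posts.
    grades = {f'u{i}': {f'o{j}': j % 2 for j in range(10)} for i in range(5)}
    print(len(billboard_reconstruct(grades, 3)['u0']), 'of 10 grades known to u0')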

We give two new algorithms for this task under very general assumptions on user preferences: both improve the best known query complexity for reconstruction, and one also improves resilience in the presence of many users with esoteric taste.


Beyond Myopic Best Response (in Cournot Competition)

A Nash Equilibrium is a joint strategy profile in which each agent myopically plays a best response to the other agents' strategies, ignoring the possibility that deviating from the equilibrium could lead to an avalanche of successive changes by other agents. However, such changes could potentially be beneficial to the agent, creating an incentive to act non-myopically, so as to take advantage of others' responses.

To study this phenomenon, we consider a non-myopic Cournot competition, where each firm selects whether to maximize profit (as in the classical Cournot competition) or to maximize revenue (by masquerading as a firm with zero production costs). The key observation is that profit may actually be higher when acting to maximize revenue: (1) doing so depresses market prices, (2) which reduces the production of the other firms, (3) which gains market share for the revenue-maximizing firm, and (4) which, overall, increases its profit. Implicit in this line of thought is that a firm might take other firms' responses into account when choosing its market strategy. The Nash Equilibria of the non-myopic Cournot competition capture this action/response issue appropriately, and this work is a step towards understanding the impact of such strategic, manipulative play in markets.
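This observation can be verified directly for linear demand. In the worked example below (our own illustration with arbitrary parameters), inverse demand is $P = a - (q_1 + q_2)$ and both firms have unit cost $c$; when firm 1 best responds as a revenue maximizer (i.e., as a zero-cost firm), the price drops, total production rises, and firm 1's true profit increases, exactly as in steps (1)-(4).

    def best_response_dynamics(a, c1, c2, revenue_max_1, iters=200):
        # Iterate myopic best responses.  If revenue_max_1 is set, firm 1
        # masquerades as a zero-cost firm, maximizing revenue P*q1 rather
        # than its true profit (P - c1)*q1.
        q1 = q2 = 0.0
        for _ in range(iters):
            q1 = max(0.0, (a - q2 - (0.0 if revenue_max_1 else c1)) / 2)
            q2 = max(0.0, (a - q1 - c2) / 2)
        price = a - q1 - q2
        return q1, q2, price, (price - c1) * q1   # firm 1's true profit

    a, c = 10.0, 1.0
    for flag in (False, True):
        q1, q2, price, profit1 = best_response_dynamics(a, c, c, flag)
        print(f'revenue-max={flag}: total output={q1 + q2:.2f}, '
              f'price={price:.2f}, firm-1 profit={profit1:.2f}')
    # revenue-max=False: total output=6.00, price=4.00, firm-1 profit=9.00
    # revenue-max=True:  total output=6.33, price=3.67, firm-1 profit=9.78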

We study the properties of Nash Equilibria of the non-myopic Cournot competition with linear demand functions. We show the existence of pure Nash Equilibria, that simple best-response dynamics produce such an equilibrium, and that for some natural dynamics this convergence occurs in linear time. This is in contrast to the well-known fact that best-response dynamics need not converge in the standard myopic Cournot competition. Furthermore, we compare the outcome of the non-myopic Cournot competition with that of the standard myopic Cournot competition. Not surprisingly, perhaps, prices in the non-myopic game are lower, and the firms, in total, produce more and have a lower aggregate utility.

SODA, 2012
