Ariel Procaccia on economics and computation

Ariel Procaccia is an assistant professor in the Computer Science Department at Carnegie Mellon University. He received his Ph.D. in computer science from the Hebrew University of Jerusalem. He is a recipient of the NSF CAREER Award (2014), the (inaugural) Yahoo! Academic Career Enhancement Award (2011), the Victor Lesser Distinguished Dissertation Award (2009), and the Rothschild postdoctoral fellowship (2009). Procaccia was named in 2013 by IEEE Intelligent Systems to their biennial list of AI’s 10 to Watch. He is currently the editor of ACM SIGecom Exchanges, an associate editor of the Journal of AI Research (JAIR) and Autonomous Agents and Multi-Agent Systems (JAAMAS), and an editor of the upcoming Handbook of Computational Social Choice.

Luke Muehlhauser: Much of your work concerns mechanism design, which Leonid Hurwicz and Stanley Reiter once described succinctly:

In a [mechanism] design problem, the goal function is the main “given,” while the mechanism is the unknown. Therefore, the design problem is the “inverse” of traditional economic theory, which is typically devoted to the analysis of the performance of a given mechanism.

In Brânzei & Procaccia (2014), you and your co-author go on to explain that:

Arguably, the most sought-after property [for a mechanism] is truthfulness, more formally known as incentive compatibility or strategyproofness: an agent must not be able to benefit from dishonestly revealing its private information.

Moreover, it would be nice if agents could verify the truthfulness of a mechanism for themselves in a computationally efficient way. How would you describe the current state of progress on the problem of (efficiently) verifiably truthful mechanisms?


Ariel Procaccia: The work on verifiably truthful mechanisms has been rather sporadic. Over the last decade, several groups of researchers have leveraged model checking techniques to address this problem. The main shortcoming of the model checking approach is that it is computationally intensive, and, in particular, does not lead to provable computational efficiency. Kang and Parkes (2006) studied “passive” verification algorithms that do not directly analyze the mechanism itself, but observe its sequence of inputs and outputs.

In the work with Simina Brânzei that you mentioned, we take a novel, three-step approach: (i) we provide a formalism that allows us to specify the structure of mechanisms, (ii) we construct a truthfulness verification algorithm that receives as input mechanisms specified using the formalism of step (i), and (iii) we analytically measure the quality of mechanisms whose truthfulness can be efficiently verified using the algorithm of step (ii). As a proof of concept, we applied our approach to the design of verifiably truthful mechanisms in a specific domain (facility location on the real line). While I think our approach is quite exciting, it is too early to say whether it can be extended to richer domains.
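
To make the verification problem concrete, here is a minimal sketch (my illustration, not the paper’s formalism) of the naive baseline: brute-force checking strategyproofness of a one-dimensional facility location mechanism over a finite grid of reports, where each agent’s cost is her distance to the facility. Its running time is exponential in the number of agents, which is exactly the kind of cost a provably efficient verifier must avoid.

```python
import itertools

def is_truthful(mechanism, grid, n_agents):
    """Naive truthfulness check: enumerate every profile of reported
    peaks on a finite grid and every possible misreport by each agent;
    an agent's cost is her distance to the chosen facility location."""
    for profile in itertools.product(grid, repeat=n_agents):
        for i, peak in enumerate(profile):
            honest_cost = abs(peak - mechanism(profile))
            for lie in grid:
                misreport = profile[:i] + (lie,) + profile[i + 1:]
                if abs(peak - mechanism(misreport)) < honest_cost:
                    return False  # agent i gains by reporting `lie`
    return True

median = lambda peaks: sorted(peaks)[len(peaks) // 2]
mean = lambda peaks: sum(peaks) / len(peaks)

grid = [i / 4 for i in range(5)]     # reports in {0, 0.25, 0.5, 0.75, 1}
print(is_truthful(median, grid, 3))  # True: the median is strategyproof
print(is_truthful(mean, grid, 3))    # False: the mean can be manipulated
```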

To summarize, the study of verifiably truthful mechanisms is still in its infancy; there is still a lot of work to be done!


Luke: Some of your recent work examines “cake cutting” (fair division of a divisible good). According to your survey article for CACM, modern computational work on cake cutting grew out of earlier work on fairness in recreational mathematics and microeconomics. To your knowledge, has there been much interaction between philosophical work on fairness (e.g. Rawls) and the fairness research in mathematics, economics, and computer science? (Like Scott Aaronson, I’m curious about the dynamic in which philosophical problems are studied outside philosophy. I often feel that progress is faster when a problem is studied outside philosophy journals.)


Ariel: Work on fair division is inspired by philosophy, and, I believe, raises new philosophical questions. For example, you mentioned John Rawls, the famous philosopher who advocated (in his book A Theory of Justice) a maximin principle of fairness: society should maximize the prospects of the least fortunate members. This principle was adopted by the systems community. Indeed, Demers et al. (1989) write: “The maxmin fairness criterion … states that an allocation is fair if (1) no user receives more than its request, (2) no other allocation scheme satisfying condition 1 has a higher minimum allocation, and (3) condition 2 remains recursively true as we remove the minimal user and reduce the total resource accordingly”. The CS theory community has also devoted quite a bit of attention to a related algorithmic problem – allocating indivisible goods to maximize the minimum value of any player. In fact, some theorists believe that Santa Claus uses this notion of fairness to allocate toys!
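
The recursive criterion in that quote is computed by the classic “progressive filling” procedure. Here is a minimal sketch (my own illustration for a single divisible resource, not code from Demers et al.): repeatedly split the remaining capacity equally among the unsatisfied users, capping each user at her request.

```python
def maxmin_fair(requests, capacity):
    """Progressive filling: in rounds, give each unsatisfied user an
    equal share of what remains, never exceeding her request; stop
    when everyone is satisfied or the resource is exhausted."""
    allocation = {user: 0.0 for user in requests}
    unsatisfied = set(requests)
    remaining = capacity
    while unsatisfied and remaining > 1e-12:
        share = remaining / len(unsatisfied)
        for user in list(unsatisfied):
            grant = min(share, requests[user] - allocation[user])
            allocation[user] += grant
            remaining -= grant
            if allocation[user] >= requests[user]:
                unsatisfied.remove(user)
    return allocation

print(maxmin_fair({"a": 2, "b": 8, "c": 10}, capacity=12))
# {'a': 2.0, 'b': 5.0, 'c': 5.0}: a's request is fully met, and the
# surplus is split equally between the two larger users.
```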

In contrast, the cake cutting literature more commonly studies “Boolean” fairness criteria. Perhaps the most compelling and intuitive axiom is envy-freeness: each player should value his own allocation at least as much as he values any other player’s allocation. But Aumann and Dombb (2010) show that the two approaches are at odds: they construct a cake cutting instance in which constraining the allocation to be envy-free significantly reduces the happiness of the least happy player, relative to an optimal unconstrained allocation. I think it is fascinating that this mathematical result informs fundamental philosophical questions about fairness, e.g., which is preferable: a richer society where some individuals envy others, or a poorer society where individuals are content with their own shares? While fair division is taught in philosophy courses, to the best of my knowledge the potential contribution of recent mathematical results to theories of justice and fairness has not been explored.

And, a bit more whimsically, see my blog post for an alternative connection between fair division and philosophy.


Luke: You’ve also worked on the fair division of indivisible goods. In Procaccia & Wang (2014), you and your co-author write:

typical real-world situations where fairness is a chief concern, notably divorce settlements and the division of an estate between heirs, involve indivisible goods (e.g., houses, cars, and works of art) — which in general preclude envy-free, or even proportional, allocations. As a simple example, if there are several players and only one indivisible item to be allocated, the allocation cannot possibly be proportional or envy free.

How would you summarize the current state of solutions to the problem of fair division of indivisible goods?


Ariel: As you noted, dividing indivisible goods is tricky, because classic fairness notions like envy-freeness (which we discussed earlier) cannot always be achieved. We do know that if players’ values for the goods are drawn at random, then under mild assumptions an envy-free allocation exists with high probability (see Dickerson et al., 2014). Alternatively, for the case of two players, Brams et al. (2014) propose an interesting protocol that guarantees envy-freeness (and other properties), but may not allocate all of the goods.
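
A small brute-force experiment (my illustration, assuming additive valuations) shows both halves of this picture: with one good and two players no envy-free allocation exists, while with enough goods and i.i.d. random values one almost always does, in the spirit of the Dickerson et al. result.

```python
import itertools, random

def envy_free_exists(valuations):
    """Brute-force test: does some allocation give every player at
    least as much value for her own bundle as for any other player's?
    valuations[p][g] is player p's (additive) value for good g."""
    n, m = len(valuations), len(valuations[0])
    for assign in itertools.product(range(n), repeat=m):
        def val(p, q):  # p's value for the bundle assigned to q
            return sum(valuations[p][g] for g in range(m) if assign[g] == q)
        if all(val(p, p) >= val(p, q) for p in range(n) for q in range(n)):
            return True
    return False

# One good, two players: the empty-handed player always envies.
print(envy_free_exists([[1.0], [1.0]]))  # False

# Eight goods, three players, i.i.d. uniform values: almost always envy-free.
random.seed(0)
hits = sum(envy_free_exists([[random.random() for _ in range(8)]
                             for _ in range(3)]) for _ in range(10))
print(hits, "of 10 random instances admit an envy-free allocation")
```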

In my view, we need a notion of fairness that can always be achieved, for any number of players. In the paper you mentioned (with Junxing Wang), we study the maximin share (MMS) guarantee, a notion proposed by Eric Budish. In a nutshell, the MMS guarantee of a given player, assuming there are n players, is the value the player could guarantee by dividing the goods into n bundles, letting the other players choose bundles (in some order), and receiving the remaining bundle. Ideally, it would always be possible to divide the goods so that every player receives his full MMS guarantee, but we show that this is not the case, even when valuations are additive (that is, my value for a bundle is the sum of my values for the individual goods in it). However, we also show that it is always possible to divide the goods so that each player receives at least two thirds of his MMS guarantee.
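
For additive valuations the definition translates directly into code. The following brute-force sketch (my illustration; computing the MMS exactly is NP-hard in general, so this is only for tiny instances) tries every way of splitting the goods into n bundles and keeps the partition whose worst bundle is best.

```python
from itertools import product

def maximin_share(values, n):
    """A player's maximin share: the best, over all partitions of the
    goods into n bundles, of the value of the worst bundle (additive
    valuations; exponential in the number of goods)."""
    best = 0
    for assignment in product(range(n), repeat=len(values)):
        bundle_totals = [0] * n
        for good, bundle in enumerate(assignment):
            bundle_totals[bundle] += values[good]
        best = max(best, min(bundle_totals))
    return best

# One player's values for five goods, divided among n = 3 players:
print(maximin_share([7, 5, 4, 3, 1], n=3))  # 6, e.g. {7}, {5, 1}, {4, 3}
```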

We use this theoretical result to design what is, in my highly biased opinion, arguably the most practical method for dividing indivisible goods among any number of players. First, we let the players assign points to goods. Second, in order to produce an allocation, we consider three levels of fairness: envy-freeness, proportionality (each of the n players gets at least 1/n of his value for the entire set of goods), and the MMS guarantee; each level of fairness is stronger than the next. We return the allocation that maximizes social welfare, that is, the sum (over players) of the points players assign to their allocated bundles, subject to the strongest feasible level of fairness. If the strongest feasible level of fairness is the MMS guarantee, we maximize the fraction c such that every player can be given a c-fraction of his MMS guarantee. Our theoretical result implies a lower bound on the guaranteed level of fairness — c is at least 2/3 — but in practice (based on extensive simulations by several groups) it seems that c = 1 is always feasible on realistic instances.
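
Here is a toy, brute-force rendition of the method just described (a sketch of the idea under additive point valuations; it is not Spliddit’s actual code, which relies on optimized solvers): determine the strongest feasible fairness level, then return the welfare-maximizing allocation within that level.

```python
from itertools import product

def fairest_allocation(valuations):
    """valuations[p][g] = points player p assigns to good g (additive).
    Returns the bundles maximizing welfare subject to the strongest
    feasible fairness level: envy-freeness > proportionality > max-c MMS."""
    n, m = len(valuations), len(valuations[0])
    value = lambda p, bundle: sum(valuations[p][g] for g in bundle)

    def mms(p):  # brute-force maximin share of player p
        return max(min(value(p, [g for g in range(m) if a[g] == b])
                       for b in range(n))
                   for a in product(range(n), repeat=m))

    mms_of = [mms(p) for p in range(n)]
    candidates = []
    for a in product(range(n), repeat=m):
        bundles = [[g for g in range(m) if a[g] == p] for p in range(n)]
        own = [value(p, bundles[p]) for p in range(n)]
        ef = all(own[p] >= value(p, bundles[q])
                 for p in range(n) for q in range(n))
        prop = all(n * own[p] >= sum(valuations[p]) for p in range(n))
        c = min((own[p] / mms_of[p] for p in range(n) if mms_of[p] > 0),
                default=float("inf"))
        candidates.append((ef, prop, c, sum(own), bundles))

    if any(ef for ef, *_ in candidates):
        pool = [x for x in candidates if x[0]]       # envy-free
    elif any(prop for _, prop, *_ in candidates):
        pool = [x for x in candidates if x[1]]       # proportional
    else:
        best_c = max(x[2] for x in candidates)       # best MMS fraction
        pool = [x for x in candidates if x[2] >= best_c]
    return max(pool, key=lambda x: x[3])[4]          # welfare-maximal

print(fairest_allocation([[6, 1, 3], [4, 4, 2]]))  # [[0], [1, 2]]
```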

The method I just described is one of several practical fair division methods implemented in Spliddit, a not-for-profit fair division website that I’ve been working on (with Jonathan Goldman) for the past year; it is set to launch in the coming months.


Luke: From your perspective, how does computational fair division theory interact with computational voting theory? If we can’t get a certain kind of satisfactory result via fair division theory, can we sometimes achieve “fairly good” results from a non-manipulable voting scheme, or vice versa?


Ariel: Social choice and fair division share a common goal: aggregating individuals’ possibly conflicting preferences in a way that achieves a desirable outcome. However, the two settings are quite different: fair division is typically concerned with the properties of each individual’s allocation, and the fairness axioms we discussed earlier hold for each individual separately; in the classic social choice setting, by contrast, individuals specify their preferences over a typically unstructured space of societal outcomes. Methods developed in one of the two fields therefore cannot usually be applied directly to the other. That said, some extensions — such as voting over combinatorial domains, and fair division with externalities — bring the two fields closer together, and their relation is explored in detail by Fleurbaey and Maniquet (2011).

As an aside (since you asked specifically about non-manipulable rules), non-manipulable fair division schemes are quite common (see, e.g., Chen et al., 2013), but we do not know how to achieve non-manipulability (a.k.a. strategyproofness) in computational voting theory. The Gibbard-Satterthwaite Theorem rules out the existence of “reasonable” voting rules that cannot be manipulated. A point that came up in your interview with Toby Walsh is that some voting rules are worst-case hard to manipulate (with respect to some formulations of the manipulation problem). However, I am more pessimistic than Toby about the degree to which such results actually provide a barrier against manipulation, and, indeed, almost a decade of work on designing voting rules that are computationally hard to manipulate on average has so far produced only negative results (see, e.g., Mossel and Racz, 2012).


Luke: What kinds of progress in fair division theory or voting theory do you expect, or not expect, in the next 15 years? For example, you seem pessimistic about finding non-manipulable “reasonable” voting rules. What else are you pessimistic or optimistic about?


Ariel: The short answer (to the first part of the question) is “applications”.

In (computational) fair division, I feel that we already have an excellent understanding of how to solve real-world — even day-to-day — fair division problems (such as rent division), via clever methods developed over decades. But very few of these methods have ever been implemented or used in practice. The website Spliddit, which I mentioned earlier, aims to take a first step towards giving people access to some of the most practical methods (including ones developed in my group). I believe it will also give us a clearer understanding of which problems people want to solve, and which solutions are viewed as satisfactory (sometimes all the mathematical guarantees in the world are not enough to make people happy!). It’s also worth mentioning that the Wharton Business School at Penn has recently adopted a beautiful fair division method (which they call Course Match) for the allocation of MBA courses to students. In the next 15 years, I expect to see a proliferation of usable fair division systems.

In (computational) voting theory, I am optimistic about potential applications to crowdsourcing and human computation systems, for two reasons. First, in contrast to political elections, the designer of a crowdsourcing system can easily evaluate and choose any voting rule. Second, a classical, well-studied model of voting (which dates back to the marquis de Condorcet) views voters as noisy estimators of an underlying true ranking of the alternatives by quality; and while this model is questionable in the context of political elections (where opinions are subjective), it is a perfect fit with certain crowdsourcing settings. In fact, existing systems like EteRNA already use voting in order to aggregate information; the challenge is to understand which voting rules should be used. Work done in my group (e.g., Caragiannis et al., 2013), and by others, provides some answers to this question. In the next 15 years, I believe we will see crowdsourcing systems whose design is directly informed by computational voting theory.
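
For intuition, in Condorcet’s model each voter ranks every pair of alternatives correctly with probability p > 1/2, independently; under this model the maximum likelihood estimate of the true ranking is the Kemeny ranking, which minimizes total pairwise disagreement with the votes. Here is a brute-force sketch (my illustration; the Caragiannis et al. work asks which rules remain accurate under much broader noise models):

```python
import itertools

def kemeny(votes, alternatives):
    """Return the ranking minimizing the total number of pairwise
    disagreements (Kendall tau distance) with the votes; under
    Condorcet's noise model this is the MLE of the true ranking.
    Brute force over all |A|! candidate rankings."""
    def disagreements(candidate):
        position = {a: i for i, a in enumerate(candidate)}
        return sum(1 for vote in votes
                   for a, b in itertools.combinations(vote, 2)
                   if position[a] > position[b])  # vote ranks a above b
    return min(itertools.permutations(alternatives), key=disagreements)

votes = [("x", "y", "z"), ("x", "z", "y"), ("y", "x", "z")]
print(kemeny(votes, ["x", "y", "z"]))  # ('x', 'y', 'z')
```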


Luke: Thanks, Ariel!