# Microstructure and macrocoordination, revisited

## Are network externalities really externalities? Do they have anything to do with networks? Do markets with positive network externalities only have monopoly outcomes? How do teenagers agree on which movie to watch? These and other questions answered, some 15-ish years ago.

There’s been a smattering of renewed interest in my dissertation theory paper recently, which got me to have another look at it, eleven years after I submitted it and roughly twenty after I started thinking about the topic of economic interaction in networks. I don’t have a copy of the initial version, which I used to apply to Ph.D. programs, but I am sure reading it again would make me cringe. Thankfully, I’m not cringing while reading the submitted version, which is why I feel bold enough to put it out here.

## That thing about not-quite-network not-quite-externalities

What was the impetus for me to write this paper in the first place? Simple: as an industrial engineer I had a background in graph theory and an interest in technology adoption processes. When I started reading the key economics literature of the day, I quickly realized that what was then called “network externalities” had nothing to do with networks at all. In particular, Michael Katz and Carl Shapiro, the inventors of the term, stated expressly that the benefit for each adopter stems from *the number of other users who are in the same “network” as is he or she*. In other words, the “network” they described was simply a metaphorical one, and their “network effect” is really just a group-size effect: everything else being equal, you will jump on the bigger bandwagon.

Others had written about similar demand-side economies of scale before Katz & Shapiro. In particular, Thomas Schelling introduced “critical mass” in his 1978 book *Micromotives and Macrobehavior*, a concept still used to model adoption processes. In the same book Schelling adapted the model to spatial relationships or “neighborhood effects”. Schelling wanted to show how even weak preferences can create segregated neighborhoods through a self-reinforcing decision cascade. To model his “neighborhood”, Schelling used a chessboard grid. My debt to his work should be obvious from the title of the paper.

## From street grids to the lakeside: neighborhoods and networks

Grids, lines and circles (the bucolic “homes situated around a lake”) were and still are the typical networks used to model neighborhoods. I wanted to take this a bit further and allow any kind of network relationship within the population, modeled as a matrix of weights (or strength of influence) between any two participants. If that weight is positive, the neighbors prefer to pick the same option. If it’s negative, they prefer to avoid each other. If it’s zero, they don’t care. Add to this a preference between the two options I gave them to choose from (called “+” and “–”), and the simple preference model is ready. Add to your own preference the weights of those who agree with you and deduct the weights of those who disagree, and you have the utilities for both your options.
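The verbal recipe above can be written down in a few lines. Here is a minimal sketch, in my own notation rather than the paper’s: `W[i, j]` is the weight of participant `j`’s influence on participant `i`, `theta[i]` is `i`’s private preference for “+” over “–”, and choices are encoded as +1 and –1. All names and numbers are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def utility(i, s, W, theta, choices):
    """Utility participant i derives from picking option s (+1 or -1):
    own preference plus the weights of agreeing neighbors, minus the
    weights of disagreeing ones."""
    social = sum(W[i, j] * choices[j] for j in range(len(choices)) if j != i)
    return s * (theta[i] + social)

# Three participants: 0 and 1 want to coordinate (positive weight),
# 0 and 2 want to avoid each other (negative weight).
W = np.array([[0.0, 1.0, -0.5],
              [1.0, 0.0, 0.0],
              [-0.5, 0.0, 0.0]])
theta = np.array([0.2, -1.0, 0.3])    # private preferences for "+"
choices = np.array([+1, +1, -1])      # everyone's current choice

utility(0, +1, W, theta, choices)     # ≈ 1.7: agrees with 1, avoids 2
utility(0, -1, W, theta, choices)     # ≈ -1.7
```

Note the symmetry: with only two options, the utility of “–” is just the negative of the utility of “+”, so each participant’s decision hinges on the sign of `theta[i] + social`.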

This model captures a number of well-known coordination and anti-coordination games like “matching pennies” or “battle of the sexes” (I intentionally left out “pesky little brother”), but also more subtle situations like the “tragedy of the commons”.

## Economics = mathematically modeling the quandaries of humanity

The stochastic equilibrium process, modeled as a potential game based on a type of neural network called the Boltzmann machine, offered some novelties in itself, such as the game graph and the “1/2” coefficient to distinguish private from social returns. But the persistent question was always whether there was any additional insight to be gained from adding this modeling complexity above the “not-quite network” network effects model.
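One way to picture the Boltzmann-machine-style revision process is logit (Boltzmann) choice: a revising participant picks “+” with probability proportional to the exponentiated utility, with an inverse-temperature parameter controlling how close the choice is to a best response. This is my own simplified sketch, not the paper’s exact process (the game graph and the “1/2” coefficient are not reproduced here), and all names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def revise(i, W, theta, choices, beta):
    """One stochastic revision step for participant i: pick +1 with
    logit probability exp(beta*u+) / (exp(beta*u+) + exp(beta*u-))."""
    social = W[i] @ choices - W[i, i] * choices[i]  # weighted neighbor choices
    # u(+1) - u(-1) = 2 * (theta[i] + social), hence the logit probability:
    p_plus = 1.0 / (1.0 + np.exp(-2.0 * beta * (theta[i] + social)))
    return +1 if rng.random() < p_plus else -1

# Two participants who mildly prefer "+" and weakly want to coordinate.
W = np.array([[0.0, 0.5], [0.5, 0.0]])
theta = np.array([1.0, 1.0])
choices = np.array([-1, -1])

# At high beta the process behaves like asynchronous best response:
for _ in range(10):
    for i in range(2):
        choices[i] = revise(i, W, theta, choices, beta=50.0)
# choices is now almost surely [+1, +1]
```

At low `beta` the process keeps exploring; as `beta` grows, it concentrates on the configurations that maximize the underlying potential, which is the usual route to equilibrium selection in this class of models.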

The answer came via a paper by George Akerlof, who had posed a question in the realm of social interaction: How is it possible that humans end up in the “wrong circles”? This “adoption against type” was not possible in the standard network effects model. (This is not exact, but too tricky to explain here.) You either got a *pooling equilibrium*, in which everyone adopts the same technology, or a *separating equilibrium*, in which adopters split by type. My focus was on the conditions under which an *overlapping equilibrium* becomes possible: in equilibrium, adopter groups overlap in the sense that a participant gets stuck in the group opposing their private preference, held there by (positive) peer pressure.

## Network effects and neighborhood effects, ten-plus years after

Initially an afterthought, one result is in my opinion still relevant in the current discussion about technology adoption processes. In a situation with positive network effects, do we have to expect the eventual dominance of a single technology? Under the conditions in my model, the answer is a clear “No”. Humans almost never care for the choices of the whole network, so they don’t just join the bigger group. They care for and pay attention to the choices of those they interact with. And as long as preferences and influences align and are separable (the paper gives a technical expression for this), the adoption process can easily split into quasi-separate networks.
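To make the “quasi-separate networks” point concrete, here is a toy check of my own construction (not an example from the paper): two tightly knit communities with a single weak bridge and opposed private preferences. Every influence weight is positive, yet the split configuration is stable — nobody wants to deviate.

```python
import numpy as np

# Toy check: all influence weights are positive, yet adoption
# splits along community lines and the split is stable.
n = 6
W = np.zeros((n, n))
for group in [(0, 1, 2), (3, 4, 5)]:       # two tightly knit communities
    for i in group:
        for j in group:
            if i != j:
                W[i, j] = 1.0              # strong in-group influence
W[2, 3] = W[3, 2] = 0.2                    # a single weak bridge

theta = np.array([0.5, 0.5, 0.5, -0.5, -0.5, -0.5])  # opposed preferences
choices = np.array([+1, +1, +1, -1, -1, -1])          # candidate split

def best_response(i):
    social = W[i] @ choices                # diagonal of W is zero
    return +1 if theta[i] + social > 0 else -1

# Nobody deviates: the split into two "quasi-separate" networks is an
# equilibrium despite uniformly positive network effects everywhere.
print(all(best_response(i) == choices[i] for i in range(n)))   # True
```

The bridge agents (2 and 3) feel friction from their cross-community neighbor, but their in-group ties and private preferences dominate — which is exactly the alignment-and-separability condition the paper formalizes.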

The “quasi” is important here because splits ex post don’t have to result from disconnected networks: network formation is endogenous and, under certain conditions, predictable. There will always be friction when neighbors adopt different, incompatible technologies — and in this case, “technologies” can mean anything from languages to belief systems.

Economics (sometimes described as “finding separating hyperplanes in n-dimensional vector spaces”) has this tendency to exogenize things that are inherently endogenous, for sheer simplicity of exposition. The same thing happens with markets: it is much simpler to work from the idea that product markets are exogenously defined and separated. Beans aren’t lentils, but all beans are beans, and all lentils are lentils. On closer inspection, this assumption quickly falls apart, and in a world of increasing product differentiation we have to fall back on empirical means to draw the boundaries between markets. This paper does the same for markets as choice networks. And observing the progress in the field, I don’t think this point has caught on yet.

## Economic consensus in distributed networks

Consensus in distributed systems is a much-researched topic in computer science, where massively parallel computing can lead to asynchronicity and disagreements on the state of the system (usually modeled as a state machine or automaton). The macrocoordination model I introduced in that paper is an abstraction of such a consensus convergence process, expressed as a game played by boundedly rational agents. The “boundary” on the rationality assumption is called strategy revision: agents pick a preferred strategy based on their information about the current state of the system, with the understanding that they can costlessly revise it if new information emerges. (I also show that fully rational backwards induction doesn’t lead to superior outcomes.)

To provide a Byzantine generals-type scenario, I offered the following interaction in the introduction to my dissertation:

> To put the contribution in context, it might be useful to offer a simple fictitious example. Consider a group of teenagers who meet up to decide which of two movies to see. Each teen has some advance knowledge about the possible content and a preference between the two, so absent any influence the group would most likely split into two subgroups and each teenager would watch the movie he or she prefers. But groups of adolescents tend to have complex social relationships, and each group member has preferences over which other member(s) to spend the evening with. At the meet-up the teens express their own preferences, solicit those of others, and re-evaluate their own until a decision is reached and the group either picks one movie as a whole or splits into two subgroups, which might or might not be the same as the original, preference-based subgroups.
>
> This ubiquitous, reciprocal, heterogeneous network of preferences over others’ actions and the resulting deliberation process is what I call influence in this dissertation.

There is considerable overlap between this model and the consensus models in distributed systems research, and of course there are noteworthy differences. The obvious overlap is that my model is expressed as a Markov random field, or a stochastic automaton with signaling across a network (the “game graph” in the paper is nothing but a renamed stochastic automaton, partly because it’s a generalization of a game tree, but also because most economists don’t know what an automaton is). The most notable difference is that I am concerned with preferences rather than attempts to come to a consensus on a “true state”. So in my model, non-consensus outcomes are possible and, under certain circumstances, even inevitable or socially preferable. Another key difference is that I expressly rule out strategic signaling, which is something distributed systems designers must never ignore. (My goal was to show how a group intent on coordinating might nevertheless fail to do so.)

But despite these discrepancies, I believe there is a major take-away for experts in distributed systems. The paper is the result of roughly four years of adapting a computational algorithm to economic theory. And while I don’t think it ever came close to representing “mainstream economics”, I believe in the end I managed to make myself understood to an economic audience — mostly. The hard part of translating an idea from one discipline to another (and now, back to the original discipline) is not to adapt the model, but to understand the interests and mental models of the intended audience. This is what the bulk of four years’ worth of labor went into.

With this, here is the original paper, and here is the full dissertation (with introduction and citations).