The new science of artificial societies suggests that real ones are both more predictable and more surprising than we thought. Growing long-vanished civilizations and modern-day genocides on computers will probably never enable us to foresee the future in detail—but we might learn to anticipate the kinds of events that lie ahead, and where to look for interventions that might work
In about A.D. 1300 the Anasazi people abandoned Long House Valley. To this day the valley, though beautiful in its way, seems touched by desolation. It runs eight miles more or less north to south, on the Navajo reservation in northern Arizona, just west of the broad Black Mesa and half an hour’s drive south of Monument Valley. To the west Long House Valley is bounded by gently sloping domes of pink sandstone; to the east are low cliffs of yellow-white sedimentary rock crowned with a mist of windblown juniper. The valley floor is riverless and almost perfectly flat, a sea of blue-gray sagebrush and greasewood in sandy reddish soil carried in by wind and water. Today the valley is home to a modest Navajo farm, a few head of cattle, several electrical transmission towers, and not much else.
Yet it is not hard to imagine the vibrant farming district that this once was. The Anasazi used to cultivate the valley floor and build their settlements on low hills around the valley’s perimeter. Remains of their settlements are easy to see, even today. Because the soil is sandy and the wind blows hard, not much stays buried, so if you leave the highway and walk along the edge of the valley (which, by the way, you can’t do without a Navajo permit), you frequently happen upon shards of Anasazi pottery, which was eggshell-perfect and luminously painted. On the site of the valley’s eponymous Long House—the largest of the ancient settlements—several ancient stone walls remain standing.
Last year I visited the valley with two University of Arizona archaeologists, George Gumerman and Jeffrey Dean, who between them have studied the area for fifty or more years. Every time I picked up a pottery shard, they dated it at a glance. By now they and other archaeologists know a great deal about the Anasazi of Long House Valley: approximately how many lived here, where their dwellings were, how much water was available to them for farming, and even (though here more guesswork is involved) approximately how much corn each acre of farmland produced. They have built up a whole prehistoric account of the people and their land. But they still do not know what everyone would most like to know, which is what happened to the Anasazi around A.D. 1300.
“Really, we’ve been sort of spinning our wheels in the last eight to ten years,” Gumerman told me during the drive up to the valley. “Even though we were getting more data, we haven’t been able to answer that question.” Recently, however, they tried something new. Unable to interrogate or observe the real Long House Valley Anasazi, they set about growing artificial ones.
Mr. Schelling’s Neighborhood
Growing artificial societies on computers—in silico, so to speak—requires quite a lot of computing power and, still more important, some sophisticated modern programming languages, so the ability to do it is of recent vintage. Moreover, artificial societies do not belong to any one academic discipline, and their roots are, accordingly, difficult to trace. Clearly, however, one pioneer is Thomas C. Schelling, an economist who created a simple artificial neighborhood a generation ago.
Today Schelling is eighty years old. He looks younger than his age and is still active as an academic economist, currently at the University of Maryland. He and his wife, Alice, live in a light-filled house in Bethesda, Maryland, where I went to see him one day not long ago. Schelling is of medium height and slender, with a full head of iron-gray hair, big clear-framed eyeglasses, and a mild, soft-spoken manner. Unlike most other economists I’ve dealt with, Schelling customarily thinks about everyday questions of collective organization and disorganization, such as lunchroom seating and traffic jams. He tends to notice the ways in which complicated social patterns can emerge even when individual people are following very simple rules, and how those patterns can suddenly shift or even reverse as though of their own accord. Years ago, when he taught in a second-floor classroom at Harvard, he noticed that both of the building’s two narrow stairwells—one at the front of the building, the other at the rear—were jammed during breaks with students laboriously jostling past one another in both directions. As an experiment, one day he asked his 10:00 A.M. class to begin taking the front stairway up and the back one down. “It took about three days,” Schelling told me, “before the nine o’clock class learned you should always come up the front stairs and the eleven o’clock class always came down the back stairs”—without, so far as Schelling knew, any explicit instruction from the ten o’clock class. “I think they just forced the accommodation by changing the traffic pattern,” Schelling said.
In the 1960s he grew interested in segregated neighborhoods. It was easy in America, he noticed, to find neighborhoods that were mostly or entirely black or white, and correspondingly difficult to find neighborhoods where neither race made up more than, say, three fourths of the total. “The distribution,” he wrote in 1971, “is so U-shaped that it is virtually a choice of two extremes.” That might, of course, have been a result of widespread racism, but Schelling suspected otherwise. “I had an intuition,” he told me, “that you could get a lot more segregation than would be expected if you put people together and just let them interact.”
One day in the late 1960s, on a flight from Chicago to Boston, he found himself with nothing to read and began doodling with pencil and paper. He drew a straight line and then “populated” it with Xs and Os. Then he decreed that each X and O wanted at least two of its six nearest neighbors to be of its own kind, and he began moving them around in ways that would make more of them content with their neighborhood. “It was slow going,” he told me, “but by the time I got off the plane in Boston, I knew the results were interesting.” When he got home, he and his eldest son, a coin collector, set out copper and zinc pennies (the latter were wartime relics) on a grid that resembled a checkerboard. “We’d look around and find a penny that wanted to move and figure out where it wanted to move to,” he said. “I kept getting results that I found quite striking.”
To see what happens in this sort of artificial neighborhood, look at Figure 1, which contains a series of stills captured from a Schelling-style computer simulation created for the purposes of this article. (All the illustrations in the article are taken from animated artificial-society simulations that you can view online, at www.theatlantic.com/rauch.) You are looking down on an artificial neighborhood containing two kinds of people, blue and red, with—for simplicity’s sake—no blank spaces (that is, every “house” is occupied). The board wraps around, so if a dot exits to the right, it reappears on the left, and if it exits at the top, it re-enters at the bottom.
In the first frame blues and reds are randomly distributed. But they do not stay that way for long, because each agent, each simulated person, is ethnocentric. That is, the agent is happy only if its four nearest neighbors (one at each point of the compass) include at least a certain number of agents of its own color. In the random distribution, of course, many agents are unhappy; and in each of many iterations—in which a computer essentially does what Schelling and his son did as they moved coins around their grid—unhappy agents are allowed to switch places. Very quickly (Frame 2) the reds gravitate to their own neighborhood, and a few seconds later the segregation is complete: reds and blues live in two distinct districts (Frame 3). After that the border between the districts simply shifts a little as reds and blues jockey to move away from the boundary (Frame 4).
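For readers who want to tinker, here is a minimal sketch of the kind of simulation just described. The article gives no implementation details, so the grid size, the number of iterations, and the rule that two unhappy agents of different colors simply trade houses are assumptions of the sketch; only the wraparound board, the fully occupied grid, and the happiness rule (at least a certain number of same-color neighbors among the four compass neighbors) come from the text.

```python
# A minimal sketch of a Schelling-style neighborhood, under the assumptions
# noted above (grid size, step count, and swap rule are illustrative choices).
import random

SIZE = 40          # the neighborhood is a SIZE x SIZE grid, every house occupied
THRESHOLD = 2      # an agent is happy with >= THRESHOLD same-color compass neighbors
STEPS = 100_000    # number of attempted swaps

# 0 = blue, 1 = red; a random initial distribution, as in Frame 1
grid = [[random.randint(0, 1) for _ in range(SIZE)] for _ in range(SIZE)]

def happy(r, c):
    """True if the agent at (r, c) has enough same-color neighbors.
    The board wraps around (a torus), as described in the article."""
    me = grid[r][c]
    neighbors = [
        grid[(r - 1) % SIZE][c],  # north
        grid[(r + 1) % SIZE][c],  # south
        grid[r][(c - 1) % SIZE],  # west
        grid[r][(c + 1) % SIZE],  # east
    ]
    return sum(1 for n in neighbors if n == me) >= THRESHOLD

for _ in range(STEPS):
    # Pick two houses at random; if both occupants are unhappy and of different
    # colors, let them switch places (there are no vacant houses to move into).
    # Since each unhappy agent is surrounded mostly by the other color, the
    # swap generally leaves both better off.
    r1, c1 = random.randrange(SIZE), random.randrange(SIZE)
    r2, c2 = random.randrange(SIZE), random.randrange(SIZE)
    if grid[r1][c1] != grid[r2][c2] and not happy(r1, c1) and not happy(r2, c2):
        grid[r1][c1], grid[r2][c2] = grid[r2][c2], grid[r1][c1]

# Crude text rendering of the final neighborhood: R and B districts emerge.
for row in grid:
    print("".join("R" if cell else "B" for cell in row))
```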
Because no two runs begin from the same random starting point, and because each agent’s moves affect every subsequent move, no two runs are alike; but this one is typical. When I first looked at it, I thought I must be seeing a model of a community full of racists. I assumed, that is, that each agent wanted to live only among neighbors of its own color. I was wrong. In the simulation I’ve just described, each agent seeks only two neighbors of its own color. That is, these “people” would all be perfectly happy in an integrated neighborhood, half red, half blue. If they were real, they might well swear that they valued diversity. The realization that their individual preferences lead to a collective outcome indistinguishable from thoroughgoing racism might surprise them no less than it surprised me and, many years ago, Thomas Schelling.
In the same connection, look at Figure 2. This time the agents seek only one neighbor of their own color. Again the simulation begins with a random distribution (Frame 1). This time sorting proceeds more slowly and less starkly. But it does proceed. About a third of the way through the simulation, discernible ethnic clusters have emerged (Frame 2). As time goes on, the boundaries tend to harden (Frames 3 and 4). Most agents live in areas that are identifiably blue or red. Yet these “people” would be perfectly happy to be in the minority; they want only to avoid being completely alone. Each would no doubt regard itself as a model of tolerance and, noticing the formation of color clusters, might conclude that a lot of other agents must be racists.
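Under the same assumptions as the sketch above, this second experiment amounts to changing one number:

```python
THRESHOLD = 1  # each agent now wants just one same-color neighbor, as in Figure 2
```

Re-running the simulation with that single change should reproduce the slower, looser clustering described here, though the exact pattern differs from run to run because the starting distribution is random.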
Schelling’s model implied that even the simplest of societies could produce outcomes that were simultaneously orderly and unintended: outcomes that were in no sense accidental, but also in no sense deliberate. “The interplay of individual choices, where unorganized segregation is concerned, is a complex system with collective results that bear no close relation to the individual intent,” he wrote in 1969. In other words, even in this extremely crude little world, knowing individuals’ intent does not allow you to foresee the social outcome, and knowing the social outcome does not give you an accurate picture of individuals’ intent. Furthermore, the godlike outside observer—Schelling, or me, or you—is no more able to foresee what will happen than are the agents themselves. The only way to discover what pattern, if any, will emerge from a given set of rules and a particular starting point is to move the pennies around and watch the results.
Schelling moved on to other subjects in the 1970s. A few years later a political scientist named Robert Axelrod (now at the University of Michigan) used a computer simulation to show that cooperation could emerge spontaneously in a world of self-interested actors. His work and Schelling’s work and other dribs and drabs of research hinting at simulated societies were, however, isolated threads; and for the next decade or more the threads remained ungathered.
Sugarscape and Beyond