Why Emergent Phenomena Are Important
To appreciate the full power of agent-based modeling, you first need to understand the concept of “emergent phenomena,” and the best way to do that is by thinking of a traffic jam. Although they are everyday occurrences, traffic jams are actually very complicated and mysterious. On an individual level, each driver is trying to get somewhere and is following (or breaking) certain rules, some legal (the speed limit) and others societal or personal (slow down to let another driver change into your lane). But a traffic jam is a separate and distinct entity that emerges from those individual behaviors. Gridlock on a highway, for example, can travel backward for no apparent reason, even as the cars are moving forward.
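The backward-traveling jam described above can be reproduced with a toy bottom-up simulation. The sketch below is a minimal Nagel-Schreckenberg-style traffic model (the road length, speeds, and braking probability are illustrative assumptions, not figures from this article): each driver follows only local rules, accelerating, keeping a safe gap, and occasionally hesitating, yet at sufficient density the cars spontaneously bunch into jams.

```python
import random

ROAD_LEN = 100   # cells on a circular one-lane road (illustrative)
V_MAX = 5        # top speed, in cells per tick
P_BRAKE = 0.3    # chance a driver slows down for no particular reason

def step(cars, rng):
    """One tick of a Nagel-Schreckenberg-style model.
    cars: list of (position, speed) pairs sorted by position."""
    positions = [p for p, _ in cars]
    updated = []
    for i, (pos, v) in enumerate(cars):
        ahead = positions[(i + 1) % len(cars)]
        gap = (ahead - pos - 1) % ROAD_LEN   # empty cells to the next car
        v = min(v + 1, V_MAX)                # rule 1: accelerate
        v = min(v, gap)                      # rule 2: never hit the car ahead
        if v > 0 and rng.random() < P_BRAKE:
            v -= 1                           # rule 3: random hesitation
        updated.append(((pos + v) % ROAD_LEN, v))
    return sorted(updated)

rng = random.Random(42)
cars = sorted((3 * i, 0) for i in range(30))  # 30 cars, evenly spaced
for _ in range(100):
    cars = step(cars, rng)
# At this density the 30 cars share only 70 empty cells, so the average
# speed is forced well below V_MAX: a jam has emerged from purely local rules.
```

Plotting each car's position over time would show dense clusters drifting backward along the ring even though every individual car only ever moves forward, which is exactly the gridlock-traveling-backward behavior described above.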
Emergent phenomena are not just academic curiosities; they lie beneath the surface of many mysteries in the business world. How prices are set in a free market is but one illustration. Why, for example, do employee bonuses and other incentives sometimes lead to reduced productivity? Why do some products—like collapsible scooters—generate tremendous buzz, seemingly out of nowhere, while others languish, despite their multimillion-dollar marketing campaigns? How could a simple clerical error snowball into a catastrophic loss that bankrupts a financial institution?
For many businesses—and for society in general—emergent phenomena have become more prevalent in recent years. One reason is that urban areas have become more crowded. In addition, people are now interconnected to a greater degree (thanks, in part, to the Internet and other communications technologies). As population densities and the number of interactions among people increase, so does the probability of emergent phenomena. Furthermore, some businesses are becoming much more interconnected and complicated. The complexity of the stock market, for instance, has surged with a greater number of participants, including casual investors, and with the creation of sophisticated financial instruments like derivatives.
Because of their very nature, emergent phenomena have been devilishly difficult to analyze, let alone predict. Traditional approaches like spreadsheet and regression analyses, and even system dynamics (a popular business-modeling technique that relies on sets of differential equations), are ill suited to analyzing and predicting them. Such approaches work from the top down, taking global equations and frameworks and applying them to a situation, whereas emergent phenomena form from the bottom up, starting with the local interactions of independent agents. Those individuals (such as the drivers in a traffic jam) alter their actions in response to what others are doing, and together the myriad interactions result in a group behavior (the traffic jam) that can easily elude any top-down analysis.
Emergent phenomena often defy intuition as well. A smaller tick size on Nasdaq, for example, could lead to a larger bid-ask spread. Adding new lanes to a highway often makes rush-hour traffic jams far worse—a result known as Braess’s paradox after the German operations research engineer who discovered it in 1968. People usually have no shortage of explanations for such surprising behavior (“Of course adding a lane will increase traffic jams because drivers will change lanes more often, slowing down other drivers”). Notwithstanding such convenient post-rationalizations, the crucial point here is that each emergent phenomenon is a unique entity that can be counterintuitive and, hence, difficult to predict.
In my experience studying a variety of emergent phenomena, I have found that the only way to analyze and even begin to predict them is to model them from the bottom up. In such a simulation, each individual participant, such as an investor buying stocks or a person driving on the highway, is a virtual person who makes decisions based on what the others are doing. Such modeling can accurately capture reality by making each participant a distinct individual. An experienced pension fund manager, after all, does not buy and sell stocks in the same way a young day trader does. In other words, modeling the agents as individuals helps capture the heterogeneity of the real world. And obtaining that kind of accuracy has recently become much more economical, thanks to cheaper computers and better modeling techniques.
Indeed, cost-effective computing power has enabled companies to investigate what-if scenarios in silico that would be prohibitively expensive and risky to explore in the real world. Consider how traffic patterns—the way people move through stores and malls—can have a direct impact on business. In retail environments, what layout maximizes not only customers’ satisfaction but also their spending?
To answer that, researchers have taken advantage of a wealth of existing information, including the copious bar-coded data collected at cash registers (what customers bought and the time they bought it) as well as the knowledge of experts like Paco Underhill, a naturalist of shopping behavior and the author of Why We Buy. Underhill knows, for example, the exact percentage of shoppers who turn right after entering a supermarket and the likelihood that someone will make a U-turn in the middle of a crowded aisle. Using this information, researchers can build agent-based models of, for instance, a supermarket with virtual shoppers. These simulations have found that changes in store layout have the potential to increase customer spending by up to 20%.
Sainsbury’s, the British supermarket chain, has developed such a computer model of its supermarket at South Ruislip in West London. With the help of Ugur Bilge of SimWorld, a London-based consultancy, and John Casti of the University of Vienna, Sainsbury’s was able to incorporate sophisticated details into the model, such as how long each shopper spends in different departments. Camera studies have found, for example, that the average time a customer spends on buying milk is five seconds, versus 90 seconds for selecting a bottle of wine.
In the agent-based model, each shopper has a different list of items (based on real data collected from the bar code readers at the cash registers in the Sainsbury’s stores). As the virtual people make their way through the aisles and choose their goods, the software tracks the customer densities throughout the store as well as the wait times at the checkout counters. Different layouts (such as relocating the frozen foods department) can be tested easily to judge their impact on store congestion.
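The mechanics of such a model can be sketched in miniature. In the toy simulation below, each virtual shopper works through a list of departments; the milk and wine dwell times are the camera-study figures cited above, while the department names, the other dwell times, and the shopping lists are invented for illustration. The software's job, as in the Sainsbury's model, is simply to advance every shopper and tally where people are at each moment.

```python
from collections import Counter, deque

# Seconds spent choosing items in each department. The milk (5s) and
# wine (90s) figures come from the camera studies cited above; the
# remaining departments and times are assumed for illustration.
DWELL = {"milk": 5, "wine": 90, "produce": 40, "frozen": 30, "checkout": 60}

def simulate(shopping_lists):
    """Advance every virtual shopper through their list one second at a
    time, counting shopper-seconds of occupancy in each department."""
    shoppers = [deque((dept, DWELL[dept]) for dept in lst + ["checkout"])
                for lst in shopping_lists]
    occupancy = Counter()
    elapsed = 0
    while shoppers:
        for s in shoppers:
            dept, remaining = s[0]
            occupancy[dept] += 1            # one shopper-second in dept
            if remaining == 1:
                s.popleft()                 # done here, move to next stop
            else:
                s[0] = (dept, remaining - 1)
        shoppers = [s for s in shoppers if s]  # drop finished shoppers
        elapsed += 1
    return occupancy, elapsed

# Three hypothetical shoppers with bar-code-style item lists.
lists = [["milk", "wine"], ["milk", "produce"], ["frozen"]]
occupancy, elapsed = simulate(lists)
```

A layout change such as relocating frozen foods would be tested in the real model by rerouting shoppers' paths and rerunning; in this stripped-down sketch the analogue is editing the lists and dwell times and comparing the resulting occupancy counts, which are the raw material for spotting congestion.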
Of course, enhancing the efficiency of shopping is not the only criterion. Store managers often want to separate high-traffic areas (the meat and baked goods sections, for example) to encourage impulse buying as shoppers travel between them. Sometimes “hot spots” (areas of congestion) are desirable locations for sale items or free samples. Furthermore, responding to customer psychology is important. A supermarket might want to place its produce section near the entrance, for instance, to impress customers with the freshness of its vegetables and fruits.
The agent-based model enabled Sainsbury’s to balance those different factors in order to determine the best store layout. Although the project requires further refinement (the simulation does not take into account that younger customers tend to shop faster than older ones, for example), some of the preliminary results have given Sainsbury’s insights into its business. In particular, the model exposed some surprising behavior. An increase in the number of customers in the store, for instance, can lead to a drop in wine sales. The reason is that, as the supermarket becomes crowded, the number of hot spots increases, which discourages customers from making their way to the wine section, located at a far corner of the store.
Other retailers have also used agent-based simulation to investigate better layouts for their stores. Specifically, Macy’s was interested in such issues as where best to locate its cash registers and service desks—decisions that had been based on aesthetics and past practices. Working with PricewaterhouseCoopers (then Coopers & Lybrand), Macy’s developed a virtual store that could be modified not only in terms of physical layout but also with respect to staffing—the number of sales associates in the different departments. A huge benefit of the agent-based model was that it enabled Macy’s to experiment with different layouts and options in cyberspace without risking its reputation in the real world.
Consumer goods manufacturers are interested in agent-based modeling for a different reason. Companies like Procter & Gamble and Unilever would like to determine the optimal shelf placement for their products to make the most sales. Agent-based modeling can also be used to design better stadiums, shopping malls, and amusement parks.
In an example of the latter, Rob Axtell and Josh Epstein of the Brookings Institution have developed an agent-based model of a theme park that takes advantage of the facility’s copious data from people counters, queue timers, customer surveys, and other sources. That information helped Axtell and Epstein build a detailed model of a heterogeneous population that had different desires and expectations for a day at the park. For instance, a family of four will have very different needs (six rides, four hot dogs, two cotton candies, three trips to the restroom) than a teenage couple on a date. The agent-based model considered that information to balance customer satisfaction with the theme park’s goal of increasing business. The model was able to explore complex questions that were beyond the reach of traditional mathematical techniques and a pure statistical analysis of the data. (For instance, what’s a better solution, extending the park’s hours by 30 minutes or shortening each ride by 8.5 seconds?) Furthermore, the research sparked new ideas for further investigation. What would happen, for example, if every customer were given a small handheld computing device that displayed up-to-date information on line lengths at every ride and attraction?
Companies have been using agent-based technology to model the actions not only of their customers but also of their employees. A consumer goods corporation has used the technology to design a better incentive structure for its country managers in Europe. The company had been rewarding them based on their proportion of “shorts” (when a product sells out)—the lower the better. But that encouraged managers to order more than they needed—a particularly costly practice when the products were perishable. To prevent spoilage, the company often had to quickly relocate huge quantities of stock from, say, Denmark to Italy if the Danish country manager had overestimated his requirements. Thus a new incentive system was needed, one that would motivate the country managers to act in the best interest of the overall company.
The problem is trickier than it might first appear. Obviously, the current system encouraged hoarding, but incentives tied to just the company’s overall performance were not viable because people dislike having their bonuses linked to factors over which they have little influence. So what local behavior should the company reward, and how should it ensure that the new system would not ultimately lead to any counterproductive actions, like hoarding? Agent-based modeling helped uncover the answer: Tie the country managers’ bonuses directly to their storage costs in addition to their shorts. This change alone could reduce supply-chain costs by several percent, resulting in annual savings of millions of dollars. In essence, agent-based modeling helped connect the local behavior of country managers to the global performance of the organization.
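The hoarding dynamic and its fix can be illustrated with a stylized model. In the sketch below, every number (demand distribution, penalty weights, the range of order quantities) is a hypothetical stand-in, not a figure from the engagement described above. A simulated country manager picks the order quantity that minimizes his own expected penalty: when only shorts are penalized, his best response is to over-order, while adding a storage-cost term pulls the order back toward actual demand.

```python
import random

rng = random.Random(1)
# Hypothetical weekly demand for one country: mean 100 units, sd 20.
demand = [rng.gauss(100, 20) for _ in range(5000)]

def expected_penalty(q, scheme):
    """Average penalty the manager expects if he orders q units."""
    total = 0.0
    for d in demand:
        short = max(d - q, 0)    # unmet demand: penalized in both schemes
        surplus = max(q - d, 0)  # leftover stock (spoilage risk)
        total += short if scheme == "shorts_only" else short + 0.5 * surplus
    return total / len(demand)

def best_order(scheme):
    """The order quantity a self-interested manager would choose."""
    return min(range(50, 251), key=lambda q: expected_penalty(q, scheme))

q_old = best_order("shorts_only")         # rewards only avoiding shorts
q_new = best_order("shorts_and_storage")  # also charges for storage
```

Under the old scheme the manager orders far above mean demand, enough to cover even the largest demand he can imagine, which is exactly the hoarding the company observed; under the new scheme the order settles near the newsvendor optimum just above 100 units, aligning his local incentive with the company's global cost.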
Other companies have used agent-based modeling to investigate radically new ways of doing business. In the pharmaceutical industry, the cost of developing new drugs has surged, forcing many companies to rethink their R&D operations. Part of the problem is the so-called selfish-team syndrome, in which a group that is developing a particular drug makes biased decisions—for example, trying to save the project when it should be killed—because the team’s reputation is tied to the drug’s success or because the team members have become emotionally attached to the project. Such counterproductive behavior can slow drug development and increase its cost. Concerned by such issues, a major pharmaceutical company thought of a possible solution—creating a marketplace to subcontract some of the drug development in the early phases of human clinical trials.
To explore that and other alternatives, my colleagues at Icosystem and I developed an agent-based model of the various players, both the company’s employees as well as potential contractors, including contract research organizations (companies that specialize in managing clinical trials), academics who do consulting work, and even experts at competing firms. We found that because of the diversity of the players (their different motivations, aversions to risk, cost structures, and so on), our pharmaceutical client could not possibly coordinate all of that activity profitably in an open marketplace.
Our client then suggested creating a network of participants, both internal and external, using incentives that encouraged better decision making (such as bonuses tied to the success of the entire portfolio of drug molecules). Through further modeling, we found that this solution could help our client more than double the risk-adjusted value of its portfolio of recently discovered molecules. Based on these results, the company has decided to test this new way of organizing early clinical development in the real world.
Agent-based modeling can also help predict how changes in an organization’s recruiting strategy might ultimately affect its corporate culture. For instance, in an experimental project, Cap Gemini Ernst & Young’s Center for Business Innovation developed an agent-based model of Hewlett-Packard’s employees. For decades, HP had a strong tradition of hiring people for their loyalty and not necessarily for their experience. The company focused its efforts on finding people, often recent college graduates, who would fit into its culture, and many employees spent their entire careers at HP. But as the labor market began moving toward free agency, HP became concerned about how that change would affect the company. In addition, as the company shifted its focus toward services, it became increasingly interested in hiring high-powered, experienced consultants, who were typically much less loyal than HP engineers.
The simulation results confirmed some of HP’s suspicions. Hiring free agents, for example, would eventually result in higher turnover costs, as employees (even those who were originally loyal) would begin to leave at a higher rate. A more surprising finding was that hiring experienced but less loyal people would eventually lead to an overall decrease in HP’s total level of knowledge. That result would be particularly pronounced if the hiring strategy was changed abruptly. A better alternative was a gradual transition over the course of a year or two. Furthermore, the agent-based model suggested that HP could greatly mitigate the negative effects of such a change by simultaneously investing heavily in knowledge capture, such as a repository and IT systems that could retain some of the expertise of employees before they left. To do so would be a marked shift from HP’s traditional focus on the development of each individual employee (for instance, by encouraging job rotation across businesses and functions)—a strategy that makes less sense when turnover is high.
An exciting new area of agent-based research is in the field of operational risk, which is a growing concern at many financial institutions because of the huge losses suffered over the years by Daiwa, Sumitomo, Barings, Kidder Peabody, and others. Although banks have developed efficient and sophisticated ways of assessing their market and credit risks, they are still in the early stages of figuring out just how to measure and monitor their operational risk. The task is extremely difficult because organizations do not have a clear understanding of exactly how an error (or act of fraud) can cascade through a system, causing a catastrophic loss, like a tree that falls on a power line and disrupts the electrical power grid of several states.
My colleagues at Bios and Cap Gemini Ernst & Young and I have applied agent-based modeling to analyze and quantify the operational risk of the asset management business of Société Générale in France. In the simulation, we modeled the company’s employees as virtual agents who continually interacted with one another as they performed their tasks. From past data, we knew that bank employees commonly make certain types of errors, such as writing down the wrong number of zeroes ($10,000 instead of $1,000) or confusing a local currency with the euro. But we found that such errors would almost never lead to catastrophic losses unless they occurred in certain types of situations—for example, when the financial markets are volatile in August. Detailed results from the agent-based model helped explain why.
Fluctuations in the market lead to an increase in the volume of transactions, which then results in a much higher number of errors because people are rushed and have little time to double-check their work. In France, the problem is exacerbated in August because that’s when many employees—generally the more experienced ones who have earned seniority—take extended vacations. In one scenario, an inexperienced and overworked trader makes a mistake: Instead of selling a stock, he buys it, and nobody in his department, including his busy supervisor, spots the error. The paperwork for the order makes its way to the back office, where a summer intern also fails to detect the mistake and processes the order. By the time the gaffe is uncovered several days later, the stock has plummeted in value, resulting in a multimillion-dollar loss.
Not only did we uncover such potential vulnerabilities, we could also estimate the likelihood that they would occur in the real world, using historical data from the capital markets. Although catastrophic losses were extremely unlikely in the model, by running thousands of simulations we were able to generate the rare events that triggered such disasters, and those results helped provide statistics about the bank’s true operational risk. From that information, Société Générale could test procedures for minimizing that risk (such as changes to its vacation policy) as well as calculate how much capital it should set aside to cover certain potential losses. Currently, financial institutions do not have an accurate way of determining their operational risk, so regulatory agencies force them to overestimate the amount of rainy-day cash they need to have in reserve. In the asset-management business, a financial institution that could determine its operational risk accurately could easily save millions of dollars each year, not only by freeing up some rainy-day capital (which can then be invested) but also by lowering the organization’s insurance premiums.
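The mechanics of surfacing rare catastrophes by running thousands of simulated years can be sketched in a few lines. Every parameter below is a hypothetical placeholder (the error rates, the frequency of volatile days, the loss-severity distribution), not a figure from the Société Générale study; the point is only how a tail percentile emerges from many runs of a model in which volatile days produce more, and larger, errors.

```python
import random

def annual_loss(rng, n_days=250):
    """One simulated year of operational losses. On volatile days the
    rushed, short-staffed desk makes more errors with fatter tails."""
    total = 0.0
    for _ in range(n_days):
        volatile = rng.random() < 0.08           # turbulent-market days
        n_errors = 6 if volatile else 2          # errors per day (assumed)
        sigma = 2.0 if volatile else 1.0         # severity-tail width
        for _ in range(n_errors):
            total += rng.lognormvariate(7.0, sigma)  # one error's cost
    return total

rng = random.Random(7)
years = sorted(annual_loss(rng) for _ in range(2000))
median = years[len(years) // 2]
var_999 = years[int(0.999 * len(years))]  # 99.9th-percentile annual loss
```

The gap between `var_999` and `median` is what a simple average of past losses misses: most simulated years are unremarkable, but the sorted tail reveals the rare cascades, and a percentile like `var_999` is the kind of number a bank would weigh against its rainy-day capital.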
The research into organizational behavior at HP, our pharmaceutical client, and Société Générale holds a larger lesson. A common criticism of agent-based modeling is that the technology often requires an understanding of the complex psychology of human behavior, and errors in quantifying such “soft factors” can lead to results that are grossly inaccurate. As the saying goes, “garbage in, garbage out.” Of course, an agent-based model will only be as accurate as the assumptions and data that went into it, but even approximate simulations can be very valuable. HP, for example, used its model to gain a better qualitative understanding of how certain factors (the company’s hiring strategy, employee turnover, total level of knowledge, and so forth) were related. By contrast, the simulation for our pharmaceutical client was much more detailed and complete, enabling the company not only to understand its business better but also to predict, shape, optimize, and control it. In other words, how a company uses an agent-based model should be directly related to the work and data that went into building it, and vice versa.