The Economics of Artificial Intelligence

Rotman School of Management professor Ajay Agrawal explains how AI changes the cost of prediction and what this means for business.

With so many perspectives on the impact of artificial intelligence (AI) flooding the business press, it’s becoming increasingly rare to find one that’s truly original. So when strategy professor Ajay Agrawal shared his brilliantly simple view on AI, we sat up and took notice. Agrawal, who teaches at the University of Toronto’s Rotman School of Management and works with AI start-ups at the Creative Destruction Lab (which he founded), posits that AI serves a single, but potentially transformative, economic purpose: it significantly lowers the cost of prediction.

In his new book, Prediction Machines: The Simple Economics of Artificial Intelligence, coauthored with professors Joshua Gans and Avi Goldfarb, Agrawal explains how business leaders can use this premise to figure out the most valuable ways to apply AI in their organization. The commentary here, which is adapted from a recent interview with McKinsey’s Rik Kirkland, summarizes Agrawal’s thesis. Consider it a CEO guide to parsing and prioritizing AI opportunities.

The ripple effects of falling costs
When looking at artificial intelligence from the perspective of economics, we ask the same single question we ask of any technology: What does it reduce the cost of? Economists are good at taking the fun and wizardry out of technology and leaving us with this dry but illuminating question. The answer reveals why AI is so important relative to many other exciting technologies. AI can be recast as causing a drop in the cost of a first-order input into many activities in business and our lives—prediction.

We can look at the example of another technology, semiconductors, to understand the profound changes that occur when a technology drops the cost of a useful input. Semiconductors reduced the cost of arithmetic, and as they did, three things happened.

First, we started using more arithmetic for applications that already leveraged arithmetic as an input. In the ’60s, these were largely government and military applications. Later, we started doing more calculations for functions such as demand forecasting because these calculations were now easier and cheaper.

Second, we started using this cheaper arithmetic to solve problems that hadn’t traditionally been framed as arithmetic problems. For example, we used to solve for the creation of photographic images by employing chemistry (film-based photography). Then, as arithmetic became cheaper, we began using arithmetic-based solutions in the design of cameras and image reproduction (digital cameras).

The third thing that happened as the cost of arithmetic fell was that it changed the value of other things—the value of arithmetic’s complements went up and the value of its substitutes went down. So, in the case of photography, the complements were the software and hardware used in digital cameras. The value of these increased because we used more of them, while the value of substitutes, the components of film-based cameras, fell because we used fewer and fewer of them.

Expanding our powers of prediction
As the cost of prediction continues to drop, we’ll use more of it for traditional prediction problems such as inventory management because we can predict faster, cheaper, and better. At the same time, we’ll start using prediction to solve problems that we haven’t historically thought of as prediction problems.
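
To make the inventory example concrete, here is a minimal sketch of demand forecasting as a prediction problem: fitting a simple trend-plus-seasonality model to past weekly sales and predicting next week’s demand. The data, model form, and numbers here are invented for illustration; they are not from the book or the interview.

```python
import numpy as np

rng = np.random.default_rng(1)
weeks = np.arange(104)  # two years of hypothetical weekly sales history
# Invented sales series: upward trend plus yearly seasonality plus noise.
sales = (500 + 2.0 * weeks
         + 80 * np.sin(2 * np.pi * weeks / 52)
         + rng.normal(0, 20, weeks.size))

# Least-squares fit of demand ~ intercept + trend + yearly seasonality.
X = np.column_stack([
    np.ones_like(weeks, dtype=float),
    weeks.astype(float),
    np.sin(2 * np.pi * weeks / 52),
    np.cos(2 * np.pi * weeks / 52),
])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)

# Predict demand for the next (105th) week.
next_week = 104
x_new = np.array([1.0, next_week,
                  np.sin(2 * np.pi * next_week / 52),
                  np.cos(2 * np.pi * next_week / 52)])
print(f"predicted demand for week {next_week}: {x_new @ coef:.0f} units")
```

Cheaper, faster, more accurate versions of exactly this kind of forecast are what a falling cost of prediction buys.
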
For example, we never thought of autonomous driving as a prediction problem. Traditionally, engineers programmed an autonomous vehicle to move around in a controlled environment, such as a factory or warehouse, by telling it what to do in certain situations—if a human walks in front of the vehicle (then stop) or if a shelf is empty (then move to the next shelf). But we could never put those vehicles on a city street because there are too many ifs—if it’s dark, if it’s rainy, if a child runs into the street, if an oncoming vehicle has its blinker on. No matter how many lines of code we wrote, we couldn’t cover all the potential ifs.

Today we can reframe autonomous driving as a prediction problem. An AI then simply needs to predict the answer to one question: What would a good human driver do? There is a limited set of actions we can take when driving (“thens”). We can turn right or left, brake or accelerate—that’s it. So, to teach an AI to drive, we put a human in a vehicle and tell the human to drive while the AI figuratively sits beside the human, watching. Since the AI doesn’t have eyes and ears like we do, we give it cameras, radar, and light detection and ranging (LIDAR). The AI takes in the data as it comes through its “eyes,” looks over at the human, and tries to predict, “What will the human do next?”

The AI makes a lot of mistakes at first. But it learns from its mistakes and updates its model every time it incorrectly predicts an action the human will take. Its predictions start getting better and better until it becomes so good at predicting what a human would do that we don’t need the human to do it anymore. The AI can perform the action itself.
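
The loop just described, watch the human, predict the next action, and update the model on every miss, is online supervised learning. Below is a minimal sketch of that idea; the sensor features, the rule-based “human driver,” and the choice of an off-the-shelf online classifier are all illustrative assumptions, not the actual systems behind autonomous vehicles.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
ACTIONS = ["left", "right", "brake", "accelerate"]  # the limited set of "thens"

def human_driver(sensors):
    """Toy stand-in for a good human driver: a fixed rule over sensor input."""
    obstacle, curve_left, curve_right = sensors
    if obstacle > 0.7:
        return "brake"
    if curve_left > curve_right + 0.2:
        return "left"
    if curve_right > curve_left + 0.2:
        return "right"
    return "accelerate"

ai = SGDClassifier(random_state=0)  # the "AI" riding alongside the human
correct = seen = 0
for step in range(5000):
    sensors = rng.random(3)         # camera/radar/LIDAR stand-ins
    action = human_driver(sensors)  # what the human actually does
    if step > 0:
        # Before seeing the answer, the AI predicts the human's next action.
        guess = ai.predict(sensors.reshape(1, -1))[0]
        correct += int(guess == action)
        seen += 1
    # Update the model on every observed (sensors -> action) pair,
    # i.e., learn from each prediction mistake.
    ai.partial_fit(sensors.reshape(1, -1), [action], classes=ACTIONS)
    if step in (500, 4999):
        print(f"step {step}: running accuracy = {correct / seen:.2f}")
```

Even this toy predictor climbs well above the 25 percent accuracy of random guessing after a few thousand observed decisions, which is the sense in which, once prediction gets good enough, the human’s demonstrations are no longer needed.
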
