RUSH JOB

The economic case for slowing down AI

Economists built a mathematical model for avoiding a machine learning disaster

OpenAI CEO Sam Altman is worried about his software destroying the world.
Photo: Issei Kato (Reuters)

The proliferation of machine learning models dubbed “artificial intelligence” has prompted debate over whether and how to regulate their use, perhaps most prominently among executives at the companies making these products.

“We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” OpenAI co-founder and CEO Sam Altman told US lawmakers in May. A 2022 survey of “AI experts” produced a median estimate of a 10% chance of existential disaster. And there are plenty of examples of non-existential risks that are still problems, from lawyers citing fake cases to media outlets publishing inaccurate stories.


Now, MIT economics professor Daron Acemoglu and grad student Todd Lensman have produced what they call the first economic model (pdf) of how to regulate transformative technologies. Their tentative conclusion is that slower deployment is likely better, and that a machine learning tax combined with sector-specific restrictions on the use of the technology could provide the best possible outcomes.

A framework for regulating transformative technology

The authors sketch out a few assumptions about what they call transformative technology: It can increase productivity in any sector in which it is used, but it can also be misused (intentionally or otherwise) to create a disaster. Think of the internal combustion engine or the transistor: each enabled people to do many more useful things, but each was just as productive for doing bad things.


As economists do, the authors turn these ideas into a mathematical model of how businesses behave under those assumptions. They find that, in theory, the best way to deploy a new transformative technology is slowly, since gradual adoption allows greater learning about its potential benefits and risks. As those risks become better understood, the technology can be rolled out across the economy. And if it turns out to be riskier than imagined, switching paths is far easier than it would be once many industries depend on the new tools.
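To make that intuition concrete, here is a toy Monte Carlo sketch in Python. It is not the authors' model: every number in it (sector count, disaster probabilities, costs, the point at which the true risk becomes known) is an invented assumption. It compares a "fast" policy that deploys the technology everywhere at once with a "slow" one that starts in a single sector and only expands, or walks away, once the risk is understood.

```python
import random

rng = random.Random(0)

# Illustrative toy, not the paper's model. A planner does not know whether a new
# technology's per-sector, per-period misuse probability is LOW or HIGH (even
# prior odds). Adopting it in more sectors raises output but also the chance that
# some adopter triggers a one-off disaster. The true risk is revealed after a few
# periods of experience. All of the numbers below are made-up assumptions.
RISK_LOW, RISK_HIGH = 0.002, 0.2
SECTORS, PERIODS, LEARN_AT = 10, 20, 5
GAIN_PER_SECTOR, DISASTER_COST = 1.0, 200.0

def run(policy: str) -> float:
    risk = rng.choice([RISK_LOW, RISK_HIGH])      # unknown to the planner at t=0
    adopted = SECTORS if policy == "fast" else 1  # fast: everywhere at once
    welfare = 0.0
    for t in range(PERIODS):
        welfare += GAIN_PER_SECTOR * adopted
        if rng.random() < 1 - (1 - risk) ** adopted:  # some adopter causes a disaster
            return welfare - DISASTER_COST
        if t + 1 == LEARN_AT:                     # risk becomes known after experience
            if risk == RISK_HIGH:
                # The slow path can still walk away; the fast path is assumed to be
                # locked in to half the sectors because industries now depend on it.
                adopted = SECTORS // 2 if policy == "fast" else 0
            else:
                adopted = SECTORS                 # safe enough: roll out everywhere
    return welfare

for policy in ("fast", "slow"):
    avg = sum(run(policy) for _ in range(50_000)) / 50_000
    print(f"{policy}: average welfare {avg:+.1f}")
```

Under these made-up numbers, the slow path loses far less when the risk turns out to be high and gives up relatively little output when it turns out to be low, which is the paper's basic intuition.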

Put reductively: if it turns out we can't control AI, we want to learn that before AI is being used in every sector of the economy. And regulation of some kind is necessary because private firms bear only some of the costs of AI misuse, and thus have an incentive to adopt it more quickly than is socially optimal.
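The externality logic fits in a back-of-the-envelope calculation. The numbers below are invented for illustration, not taken from the paper: if a firm bears only a small slice of the harm a disaster would cause, adoption can look clearly profitable to the firm while being a net loss for society.

```python
# Hypothetical numbers to illustrate the externality, not figures from the paper.
gain = 10.0            # assumed productivity gain to the adopting firm
p_disaster = 0.05      # assumed chance that adoption triggers a disaster
total_harm = 300.0     # assumed harm to society if the disaster occurs
private_share = 0.1    # assumed fraction of that harm the firm itself bears

private_value = gain - p_disaster * total_harm * private_share   # +8.5 -> adopt now
social_value = gain - p_disaster * total_harm                     # -5.0 -> wait

print(f"firm's view:    {private_value:+.1f}")
print(f"society's view: {social_value:+.1f}")
```

That gap between the private and the social calculation is what regulation is meant to close.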

To steer adoption toward the right speed, the MIT economists consider tax schemes, but find that taxes alone are not enough (at least in theory). They suggest that one smart way forward would combine some kind of tax on transformative technologies with rules limiting their use to sectors where the risk of adoption is low. This “regulatory sandbox” approach, already common for new technologies, could delay the adoption of machine learning in high-risk sectors until we understand it better.
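In code terms, that combined policy might look something like the sketch below: a use tax that trims the private return, plus a sandbox rule that keeps the technology out of high-exposure sectors until it is better understood. The sector names, exposure numbers, and tax rate are all hypothetical; this illustrates the structure of the idea, not the paper's mechanism.

```python
# Sketch of the combined policy idea under assumed numbers.
SECTORS = {            # sector name -> assumed misuse-exposure multiplier
    "logistics": 0.2, "marketing": 0.3, "finance": 1.0,
    "medicine": 1.5, "weapons": 3.0,
}
USE_TAX = 0.3          # assumed tax per unit of gross gain from the technology
EXPOSURE_CAP = 0.5     # sandbox: only sectors below this may adopt pre-learning
GROSS_GAIN = 1.0       # assumed gross productivity gain per adopting sector

def may_adopt(sector: str, risk_understood: bool) -> bool:
    """Sandbox rule: high-exposure sectors must wait until the risk is understood."""
    return risk_understood or SECTORS[sector] < EXPOSURE_CAP

def net_gain_to_firm() -> float:
    """The tax trims the private return, slowing voluntary adoption."""
    return GROSS_GAIN * (1 - USE_TAX)

for sector in SECTORS:
    print(f"{sector:10s} pre-learning adoption allowed: {may_adopt(sector, False)}")
print(f"net private gain per sector after tax: {net_gain_to_firm():.2f}")
```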

The case for faster adoption of transformative technology

The authors carefully detail the argument for slowing the rollout of a powerful technology we don't fully understand, but they recognize that their assumptions might be wrong. For example, they note that faster adoption of a transformative technology may increase knowledge about it, actually reducing risks. Future research, they suggest, might examine how experimentation in certain sectors could be done without increasing overall risk.


Tyler Cowen, a George Mason University economist, has suggested other ideas that might trip up the authors' conclusions. One is the possibility of rival nations (read: China) developing AI that is less safe or more threatening, an argument often cited by proponents of accelerating machine learning adoption. The implication is that we need to use AI in risky ways, in weapons systems, for example, or else Beijing will do so first, even though the empirical evidence suggests that Chinese machine learning may not be close to such activity.

Still, even this argument requires regulation with bright lines that clearly distinguish the US from rival nations. Advocates of AI safety argue that first principles still matter: If AI will be misused for mass surveillance, the US should make laws to prevent that, not adopt dystopian tech in the name of being first.