A few years after he invented the laser, Theodore H. Maiman was still searching for practical applications in medicine and other fields. He had not yet found them.
“A laser,” Maiman told the New York Times in 1964, “is a solution seeking a problem.”
Lasers, of course, are used today in everything from telecommunications equipment and bar code scanners to surgical tools and precision manufacturing.
Predicting the economic impact of new technologies has always been notoriously difficult, even for the people with the most information about them, including their own inventors.
Beyond evolving in unpredictable ways, some technologies also seem to come out of nowhere. It is amazing how quickly new technologies can move from science fiction to something we all take for granted.
History is filled with examples of technologies that emerged suddenly and surprisingly, as well as technologies that evolved in unexpected ways:
Percy Spencer, a self-taught engineer, noticed that his candy bar melted when he was working next to a magnetron — which set him on the path to inventing the microwave oven.
CRISPR, a technology now transforming biomedical research, began with a curiosity about repeated DNA sequences in bacteria.
Industry leaders in the 1970s struggled to believe that people would want personal computers in their homes.
W. Brian Arthur, the technology scholar and a pioneer of complexity economics, describes innovation as a complex, combinatorial, and emergent process. Technology evolves, interacts with market forces, and builds upon itself in ways that simply cannot be known in advance.
This knowledge problem is the first, but hardly the last, risk policymakers face when they hope to steer the path of a particular technology. In their attempts to regulate away AI’s potential harms, policymakers might instead lock us out of its most beneficial uses for the economy and society.
“So-so” or So What?
The academics and commentators calling for policymakers to steer AI innovation sometimes point to the dangers of “so-so” AI technologies.
Typically, when a technology displaces workers, the firm becomes more productive and saves money on the workers it no longer needs to employ. The firm can use that money to invest and expand its operations, to pay its remaining workers higher salaries, or to lower the price of the product it sells. In each case, the additional spending power results in more demand somewhere else in the economy, often creating new and different kinds of jobs.
So-so technologies are different. They displace workers, but they don’t boost productivity enough to generate the compensating increases in demand for workers. One often-cited example is self-checkout kiosks at the grocery store.1 Customers do the work that the cashiers once did, but kiosks don’t save the grocery store enough money to expand and hire additional workers to stock shelves or run the customer service desk, or to pay its remaining workers more money.
On net, these technologies just leave the economy with fewer jobs.
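To see the distinction in back-of-the-envelope terms (the numbers here are invented purely for illustration), suppose a store replaces ten cashiers earning $30,000 each with self-checkout kiosks that cost $280,000 a year to lease and maintain:

$$10 \times \$30{,}000 - \$280{,}000 = \$20{,}000$$

Ten jobs disappear, but only $20,000 in savings is left to cut prices, expand, or raise wages, which is far too little to generate compensating demand for workers elsewhere. A genuinely transformative technology that did the same work for, say, $100,000 a year would free up $200,000, ten times the resources with which to expand output and hiring.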
For technologies like these, the knowledge problems become more concrete. If policymakers can identify a particular kind of AI technology as a so-so technology, then there might be a justification for turning the innovation wheel in the other direction. But can they?
Doing so would require predicting how a given AI technology will affect the demand for labor and the productivity of businesses, raising a few thorny questions. For example:
1) Will the AI technology be a complement or a substitute for labor?
A new AI technology can either help a worker complete a task or do the task all on its own. A worker might use an AI tool to refine the firm’s marketing materials by drawing logos and catching typos. Alternatively, the worker might have the AI tool generate the marketing materials entirely from scratch — in which case the worker is freed up to perform different or entirely new tasks.
Policymakers would have to understand not only how workers will use the technology, but also how that interaction will change the set of tasks that workers are doing.
2) Will the firm be able to produce goods or services more efficiently because of this AI technology?
We economists sometimes refer to productivity as a measure of our ignorance, as it represents what is left over after accounting for the effects of labor (workers) and capital (machinery, tools, infrastructure) on the economy’s total production of goods and services. Measuring productivity thus requires an accurate accounting of both inputs and outputs, which it turns out is very difficult.
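To make the residual nature of productivity concrete, here is the textbook growth-accounting sketch. Assume, purely for illustration, a Cobb-Douglas production function in which output $Y$ is produced from capital $K$ and labor $L$ with a capital share $\alpha$; total factor productivity $A$ is then whatever is left over once the measured inputs are accounted for:

$$Y = A\,K^{\alpha}L^{1-\alpha} \quad\Longrightarrow\quad A = \frac{Y}{K^{\alpha}L^{1-\alpha}}$$

Any mismeasurement of output, capital, or labor lands directly in $A$, which is why productivity estimates are so fragile.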
And if measuring productivity is difficult, anticipating the response of productivity to a new technology is even harder.
Knowing if a technology will make a firm more productive isn’t enough. What we really care about, for the purposes of identifying the so-so technologies, is how those productivity effects translate into jobs. After adopting automation technologies, firms may actually hire more workers because they can make and sell more goods and services. Even if the firm doesn’t end up hiring more workers, other firms might. One industry’s output is another industry’s input. The extra jobs may show up in industries upstream or downstream in the firm’s supply chain.
3) What future technologies might we lose if we regulate a “so-so” AI technology?
Back to the knowledge problem. Anticipating the productivity and labor effects of a new technology is hard enough. But policymakers must also consider the dynamic effects on the innovation ecosystem itself — and specifically how a new AI technology might feed into other, still unknown technologies, and how those subsequent technologies would in turn affect labor demand and productivity. It is productivity and labor effects all the way down.
Policymakers have no way of knowing whether today’s “so-so” AI technology will unlock tomorrow’s labor-demand-boosting breakthrough.
Let’s Not Be Naive
It is also true that the federal government played a pivotal role in the development of many technologies, including the early internet, GPS, and mRNA vaccines. The government shapes innovation by choosing which grants to fund and which research investments to subsidize.2
But there are crucial differences between allocating research funds and trying to micromanage the direction of AI innovation. Grant funding and R&D subsidies are not predicated on how technologies interact with market forces. No program officer reviewing grant applications is asking about the impact of a new technology on labor demand.
Policy Approaches, Right and Wrong
I hope that by now I have established that steering the course of AI is both incredibly difficult and risky. But it is nonetheless true that the diffusion of AI, however much it benefits the economy, will also raise important challenges for policymakers.
Once again we can look to history for useful lessons.
The mechanical telephone switch, which AT&T rolled out across its network in the 1920s and 1930s, was one of the largest “so-so” automation shocks in history. At the time, telephone operator was one of the most common jobs for American women. The economists James Feigenbaum and Daniel P. Gross describe how the new mechanical switches displaced incumbent operators: these women often lost their jobs and were less likely to be working years later, and those who did find new jobs tended to find them in lower-paying occupations.
Subsequent cohorts of young women, however, were not permanently scarred by the automation shock. Local labor markets were able to adjust, with firms in other sectors finding ways to employ new workers without significantly depressing wages.
We can be glad that the government didn’t intervene to stop automated telephone switching. Automated telephone switches paved the way for the digital switches that undergird modern global telecommunications infrastructure.
But the shock to the incumbent workers was significant. The lesson to take from this episode is that policy should focus on softening the blow to incumbent workers and easing their transition between sectors of the economy. Importantly, these policies can be designed to be useful whether or not AI dramatically disrupts labor markets. Given the significant uncertainty about AI’s effect on the economy, policymakers should prioritize policies that would be a good idea either way.
Regulatory Capture
Let’s assume that policymakers actually manage to overcome the knowledge problems. They are able to differentiate between so-so automation technologies and those that have enormous benefits. They’re ready to regulate!
But even in this scenario, could they do so responsibly?
Policymakers are already concerned about the consequences of market concentration among AI labs. Building and training frontier generative AI models involves significant fixed costs in compute, data, and energy. Getting the right people to run an AI lab is also expensive: competition for top AI talent means that AI researchers can command hefty salaries. All of these costs make it difficult for smaller, younger firms to enter the market. And even if DeepSeek’s penny-pinching training methods cast doubt on ever-increasing returns to scale, the rush to build datacenters continues.
Concentration among AI labs raises the threat of regulatory capture, particularly when policymakers’ goals are difficult to define. An asymmetry of technical expertise puts regulators on the back foot, making it more likely we end up with regulations that insulate incumbents at the expense of industry dynamism. Ironically, this may cement the market concentration policymakers find so concerning.
If the largest firms get to write the rules, those rules will end up making it harder for upstarts to enter and disrupt the market.
Not Losing the Forest for the Trees
Given the lightning pace of progress and the cacophony of predictions and proposals about AI, it is easy to lose sight of the benefits that AI technologies already provide.
One of the easiest places to see these benefits is healthcare. AI is helping to speed up drug and device discovery, personalize cancer treatment, improve cancer screening, and even reduce neonatal deaths in developing countries.
These developments should serve as a reminder that even though AI has the potential to dramatically transform labor markets, it is already saving and improving lives today. If we grip the wheel too hard and try to steer AI innovation, we may forgo technologies that could dramatically improve our lives. Not only would we miss out on a better future, we would risk dragging the economy back into a bleaker past.
1. E.g., Daron Acemoglu, “Remaking the Post-COVID World,” Spring 2021.
2. The National Science Foundation, one of the primary funders of basic science, sets strategic priorities with its 10 Big Ideas program, for instance.