5 Quick Questions for … MIT Research Scientist Tamay Besiroglu on the Huge Economic Potential of AI

By James Pethokoukis

Are you bullish about the next 25 years of the American economy? I mean, really bullish. Look, I’m not talking about slow-but-steady growth that modestly outperforms, say, the median Federal Reserve forecast of 1.8 percent, inflation adjusted.

Consider, instead, an economy that grows 50 percent faster. Or how about twice as fast? That may sound crazy, but it would only mean the economy was growing as fast as it did on average over the second half of the 20th century. Let me put it another way: I would love to see a productivity-led boom of such strength that economists and technologists would start to wonder if the exponential-growth Singularity was nigh. This is the sort of thing that happened during the 1990s.
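
To put rough numbers on those scenarios, here is a quick back-of-the-envelope sketch. The 1.8 percent figure is the Fed baseline mentioned above; the faster rates are simply 1.5x and 2x that baseline, chosen for illustration.

```python
# Back-of-the-envelope: cumulative growth over 25 years at a constant annual rate.
# The 1.8% baseline is from the text; the faster rates are illustrative multiples of it.

def growth_multiple(annual_rate: float, years: int = 25) -> float:
    """Cumulative growth factor from compounding a constant annual rate."""
    return (1 + annual_rate) ** years

for label, rate in [("Baseline (1.8%)", 0.018),
                    ("50% faster (2.7%)", 0.027),
                    ("Twice as fast (3.6%)", 0.036)]:
    print(f"{label}: the economy is {growth_multiple(rate):.2f}x larger after 25 years")

# Output: roughly 1.56x, 1.95x, and 2.42x, respectively.
```

Compounding is the point: over a quarter century, doubling the growth rate leaves the economy roughly 55 percent larger than it would otherwise be.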

For any scenario even remotely like those to happen, we’re going to need much faster productivity growth. Remember, those past years of fast growth were helped along by robust labor force growth. But thanks to Baby Boomer retirements and a lower fertility rate, the labor force is now more of a dampening factor on the economy’s potential growth rate. As such, productivity will need to do the heavy lifting.

[Chart: San Francisco Fed]

And to be bullish on productivity growth is to be bullish on AI. Here’s what that means: AI would need to boost worker productivity across a broad swath of business sectors. It would need to be a “general-purpose technology,” or GPT, much like factory electrification in the 1920s. Now it’s not hard to imagine a wide variety of sectors affected by AI, everything from retail product recommendations to customer service chatbots to business analytics for better decision making.

But what really gets me excited is that AI isn’t just potentially an immensely powerful GPT but also an IMI, an invention of a method of invention. “IMIs raise productivity in the production of ideas, while GPTs raise productivity in the production of goods and services,” writes University of Warwick economist Nicholas Crafts in the 2021 paper “Artificial intelligence as a general-purpose technology: an historical perspective.” AI could be an “antidote,” as Crafts puts it, to the finding that big ideas are becoming harder to find.

For more on this subject, I emailed some relevant questions to Tamay Besiroglu, a visiting research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory, where his work focuses on the economics of computing and big-picture trends in machine learning. He also recently did this, which is what led me to him:

I recently organized a contest for @Metaculus on investigations into predictions of the future of AI. This resulted in two-dozen insightful analyses by forecasters into the prospects of transformatively advanced AI systems. Here are my short summaries of some that stood out:

— Tamay Besiroglu (@tamaybes) June 20, 2022

1/ How optimistic are you that AI will deliver significant productivity gains in the 2020s?

I think that there is only a modest chance—say, around 25 percent—that by the end of this decade, AI will significantly boost aggregate US productivity growth (by “significantly,” I have in mind something like reverting to the 2 percent productivity growth rate that we observed before the productivity slowdown that occurred in the early 2000s).

I’m not more optimistic because boosting aggregate productivity is a tall order. In the past, few technologies—even powerful, general-purpose, and widely adopted ones like the computer—have had much of an effect. Deep learning has been applied with some success to a few problems faced by large tech companies (such as facial recognition, image detection, recommendations, and translation, among others). However, this has benefited only a small sliver of the overall economy (IT produces around 3 percent of US GDP). It also does not seem likely that AI has enhanced the productivity of technology companies by a large enough margin to produce economy-level productivity effects.

Over longer timescales—say, 15 or 30 years—I think there are good reasons to expect that conservative extensions of current deep learning techniques will be generally useful and reliable enough to automate a range of tasks in sectors beyond IT; notably in manufacturing, energy, and science and engineering. Concretely, I think it is more likely than not that over such a time frame AI productivity effects will dominate the productivity effects that computers had in the late 20th century.

Given the importance of technological progress for driving economic growth among frontier economies, I pay particular attention to the use of AI tools for automating key tasks in science and engineering, such as drug discovery, software engineering, the design of chips, and so on. The widespread augmentation of R&D with AI could enable us to improve the productivity of scientists and engineers. Automating relevant tasks will also enable us to scale up aggregate R&D efforts (as computer hardware and software for AI are much easier to scale up than the number of human scientists and engineers). I think it’s possible that by the middle of this century, the widespread augmentation of R&D with AI could increase productivity growth rates by 5-fold or more.

More of this, please

2/ How important is the continuation of Moore’s Law to further deep learning progress? Does this worry you?

The improvements in hardware price-performance have historically been crucial for progress in AI. My recent work on the topic has shown that, largely due to Moore’s law, the amount of compute used to train AI models has grown by roughly 20 orders of magnitude since the early AI systems in the 1950s (that’s an increase of 100 quintillion-fold!). This growth has been crucial both for enabling researchers to train larger and more capable models and for unlocking highly compute-intensive paradigms of AI, such as deep learning. It seems likely that, at least in some important domains, recent progress in deep learning has mostly been the result of growth in computing power, rather than improvements in the underlying stack of machine learning algorithms and architectures.
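
For a rough sense of what 20 orders of magnitude means in doublings, here is a quick sanity check (a back-of-the-envelope calculation; the 1952–2022 window is an assumption, not a figure from the interview):

```python
import math

# Sanity check: 20 orders of magnitude expressed as doublings.
# The 1952-2022 window is an illustrative assumption, not a figure from the interview.
growth_factor = 10 ** 20
years = 2022 - 1952

doublings = math.log2(growth_factor)          # about 66.4 doublings
months_per_doubling = years * 12 / doublings  # about 12.6 months

print(f"10^20 is about {doublings:.1f} doublings")
print(f"Over {years} years, that is one doubling every ~{months_per_doubling:.1f} months")
```

Under that assumed window, the growth works out to roughly one doubling per year, on average, over seven decades.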

However, since around 2010, the importance of Moore’s law for advances in the frontier of AI has been rather modest. This is because the main driver of the growth in compute used for machine learning has been the growth in the money spent by large tech companies, rather than improvements in hardware price-performance. I predict that this won’t last for long, though; Moore’s law will once again become a key driver of AI progress as budgets for training runs reach the multibillion-dollar range. At that point, labs will likely no longer be able to rely on growth in their funds earmarked for compute to train ever-larger models.

3/ What do overly optimistic and overly pessimistic AI experts get wrong?

One common impression I get from those overly optimistic about AI is that they think much less is required than I do for a technology to have important, large-scale effects on the world. They seem to be much more inclined than I think is justified to expect that a small effort, a single AI system, or a small set of innovations will drastically increase the rate of scientific progress and economic growth.

This view underestimates the scale and sophistication of the civilization-scale efforts required to produce current rates of progress, and it seems inconsistent with what we know about the usual distribution of productivity and effectiveness across groups (firms, research labs, universities, governments, countries, and so on). Many hundreds of thousands of well-organized groups are working quite directly on sustaining current rates of progress. For a single effort or AI system to have important effects on aggregate rates of progress, it would need to be more effective at pushing these things forward than all other groups combined. While we cannot be entirely confident this can’t happen, it seems far more likely for AI to impact the world more diffusely and gradually: spread out over time, enabled by many contributing innovations, and with the involvement of numerous organizations.

One thing I think that some of those who are pessimistic about AI get wrong is that they underestimate what a simple learning algorithm combined with large amounts of compute can do. I suspect pessimists fail to appropriately internalize quite how much follows from this idea and from the recognition that the amount of compute per dollar available to us doubles every two to three years. Many problems that pessimists think require feats of intelligence too challenging for machines—such as abstract reasoning or producing novel scientific insights—seem to me likely to be at least partially soluble with conservative extensions of current deep learning techniques combined with 2040s or even 2030s hardware.
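
To put rough numbers on that compounding (a sketch with an assumed 2022 baseline and illustrative target years, not figures from the interview):

```python
# Rough implication of "compute per dollar doubles every two to three years."
# The 2022 baseline and the target years are illustrative assumptions.
base_year = 2022

for target_year in (2035, 2045):
    years_out = target_year - base_year
    for doubling_time in (2, 3):
        factor = 2 ** (years_out / doubling_time)
        print(f"By {target_year}, with a {doubling_time}-year doubling time: "
              f"~{factor:,.0f}x more compute per dollar")
```

Under those assumptions, mid-2030s hardware offers on the order of tens of times more compute per dollar than today’s, and mid-2040s hardware hundreds to thousands of times more.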

4/ How much should concerns about “unaligned” AI affect research considerations? Is this something we need to think more deeply about? Is it an overblown worry?

The problem of training machine learning models that act appropriately and robustly in accordance with human values seems crucial for enabling us to reap AI’s potential while minimizing its risks. Alignment problems commonly come up in the lab and in real-world deployments of AI systems. As AI systems become more powerful and are deployed in more important contexts, such issues will likely become seriously pressing. There are also good arguments that, if such problems are not appropriately addressed, advanced AI systems could entirely and permanently disempower humans.

A key question to think about is: To what extent will alignment problems be addressed during the usual efforts by AI companies to build economically useful systems? Labs will have a strong incentive to ensure that their models behave safely in “day-to-day” situations, and will therefore work to prevent prosaic failure modes. However, we might expect that labs will be less vigilant about tail risks, such as global catastrophic risks involving human disempowerment. We might therefore want more research dedicated to problems that we suspect will not be solved on the default path of AI development, perhaps because these have little overlap with more prosaic failure modes, because the interval between when these problems become salient and when they pose substantial risks leaves too little time to solve them, or because we expect these problems to be neglected for other reasons.

5/ If you could impress one or two things about AI on policymakers, what would those be and why?

With some exceptions—such as restricting the proliferation of lethal autonomous weapons—it is relatively unclear which policies would help or hinder progress on some of AI’s key challenges (compared with, say, climate change or pandemic risk). Hence, we might want to be more careful than usual with bold policy actions that lock in how, by whom, and for what purposes AI is developed and deployed, as such choices might later prove unhelpful as we manage the transition to a world with advanced AI.

Micro Reads

▶ Democrats’ side deal with Manchin would speed up projects, West Virginia gas – Jeff Stein and Tony Romm, WaPo | The side deal would set new two-year limits, or maximum timelines, for environmental reviews for “major” projects, the summary says. It would also aim to streamline the government processes for deciding approvals for energy projects by centralizing decision-making with one lead agency, the summary adds. The bill would also attempt to clear the way for the approval of the Mountain Valley Pipeline, which would transport Appalachian shale gas about 300 miles from West Virginia to Virginia. This pipeline is a key priority of Manchin’s. Other provisions would limit legal challenges to energy projects and give the Energy Department more authority to approve electric transmission lines that are deemed to be “in the national interest,” according to the document. One provision in the agreement could make it harder for government agencies to deny new approvals based on certain environmental impacts that are not directly caused by the project itself, said Sean Marotta, a partner at the Hogan Lovells law firm who represents pipeline companies.

▶ US regulators will certify first small nuclear reactor design – John Timmer, Ars Technica | On Friday, the Nuclear Regulatory Commission (NRC) announced that it would be issuing a certification to a new nuclear reactor design, making it just the seventh that has been approved for use in the US. But in some ways, it’s a first: The design, from a company called NuScale, is a small modular reactor that can be constructed at a central facility and then moved to the site where it will be operated. The move was expected after the design received an OK during its final safety evaluation in 2020. … Once complete, the certification is published in the Federal Register, allowing the design to be used in the US. Friday’s announcement says that the NRC is all set to take the publication step. The NRC will still have to weigh in on the sites where any of these reactors are deployed. Currently, one such site is in the works: a project called the Carbon Free Power Project, which will be situated at Idaho National Lab. That’s expected to be operational in 2030 but has been facing some financial uncertainty. Utilities that might use the power produced there have grown hesitant to commit money to the project.

▶ Supply-Side Economics Isn’t Just for Republicans Anymore – Peter Coy, NYT | Supply-side economics has long been the province of Republicans, who have asserted that the only way government can help increase the supply of goods and services is to get out of the way — mainly by cutting taxes so companies will have a stronger profit motive to increase production. But Democrats have begun shaping their own version of supply-side economics, which gives the government a more active role. My colleague Ezra Klein identified this approach last year as “supply-side progressivism.” In January, Treasury Secretary Janet Yellen described the Biden administration’s Build Back Better agenda as “modern supply-side economics.”

▶ A New Private Moon Race Kicks Off Soon – Rebecca Boyle, SciAm | Sometime in the next four or five months, the first American moon missions in half a century will make a return to Earth’s satellite. The arrivals won’t be human—at least not yet—and they won’t even be government-built. The coming lunar fleet will consist of private spacecraft carrying science experiments and other cargo for paying customers, including NASA. Astrobotic’s Peregrine lander is due to ride on United Launch Alliance’s new Vulcan Centaur rocket, scheduled to make its inaugural voyage before the end of 2022. Competing lunar start-up Intuitive Machines is set to launch its lunar lander, Nova-C, on a SpaceX Falcon 9 rocket, also by the end of this year. A dozen more firms are expected to follow in the next six years, carrying cargo that ranges from a magnetometer and supplies for a future lunar base camp to small amounts of cremated human remains.
