Growth when Innovation is Combinatorial

Matt Clancy
Published in The Startup · Jan 16, 2020


Note: I’ve started a weekly newsletter on recent research on the economics of innovation (you can sign up here). Normally I create a twitter thread about my posts, but this week the content seemed uniquely bad for twitter, so I’m posting here instead and linking via twitter.

To an important degree, “innovation” is a process of combining pre-existing ideas and technologies in novel ways. Twenty years ago, the late Martin Weitzman spelled out what this model of innovation means for long-run economic growth (here and here for summaries). But a number of recent papers have deepened these ideas.

When to make a module?

The fundamental assumption in Weitzman (1998) is that new ideas are made by combining two previously existing ideas and putting in a bit of research effort. So, given n ideas, the number of possible innovations (i.e., the number of unique pairs) is n(n-1)/2. Only some of these will turn out to be useful.

What drives growth in the long run is that, sometimes, one of these ideas is so good that it becomes a new self-contained idea. When this happens, it raises the number of ideas you can build with to n + 1 and the number of possible innovations by n (since you can pair the new idea with all n of the old ones).
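To make that arithmetic concrete, here is a minimal Python sketch (my own illustration, not code from Weitzman’s paper):

```python
# Weitzman-style combinatorial arithmetic: with n self-contained ideas,
# the number of candidate innovations is the number of unique pairs.

def candidate_pairs(n: int) -> int:
    """Unique two-idea combinations available with n ideas."""
    return n * (n - 1) // 2

print(candidate_pairs(10))   # 45
print(candidate_pairs(11))   # 55

# Adding one new self-contained idea (n -> n + 1) adds exactly n new pairs:
assert candidate_pairs(11) - candidate_pairs(10) == 10
```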

A recent working paper by Fink and Teimouri (FT) takes a closer look at this process of combining idea-components into self-contained ideas. When does it make sense to permanently fuse several components together into a brand new component, and when is it better to simply leave the components separated?

I think the intuition is clearest if you think in terms of computer code. Suppose I am working in a language where I can add up numbers N1 and N2 with the command SUM(N1,N2) and divide N1 by N2 with the command DIV(N1,N2). I can find the average of these two numbers with the code DIV(SUM(N1,N2),2). If I need to find the average frequently, I can define this as a new command AVE(N1,N2) = DIV(SUM(N1,N2),2).

The key idea is that if I’m teaching a new person this language, I can tell them how to take the average of two numbers by teaching them two things — the commands for SUM and DIV — or by teaching them one thing — the command AVE. If they rarely need to add or divide except to find averages, I can more efficiently give them the tools to start coding by simply teaching them AVE. In fact, if they are really time-constrained and only have time to learn three commands, I can potentially expand the set of things they can do by having them learn AVE and two other commands, instead of SUM, DIV, and one other command. The trade-off is that if they face a problem where they need to add or divide but not take the average, they’ll be out of luck.

Now suppose I need to compute the reciprocal of the average, which can be computed as DIV(2,SUM(N1,N2)). I could define a new command R_AVE(N1,N2) = DIV(2,SUM(N1,N2)), but since it’s not very common to require the reciprocal of the average, it’s probably not a very useful command to define. If a student only has capacity to learn three commands, I would almost certainly be better off teaching them DIV, SUM, and a third command, instead of R_AVE and two commands.
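Here is a small Python sketch of this toy language, with the commands from the post written as composed functions (my own illustration, not anything from FT’s paper):

```python
# The basic components of the toy language.
def SUM(n1, n2):
    return n1 + n2

def DIV(n1, n2):
    return n1 / n2

# Fusing SUM and DIV into a single module: learning AVE counts as learning
# "one thing", even though it is built from two underlying components.
def AVE(n1, n2):
    return DIV(SUM(n1, n2), 2)

# A rarely needed fusion, probably not worth one of a learner's limited slots.
def R_AVE(n1, n2):
    return DIV(2, SUM(n1, n2))

print(AVE(3, 5))    # 4.0
print(R_AVE(3, 5))  # 0.25
```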

More generally, FT assume innovators acquire n components and can create anything built from those components. Only some such combinations are useful though. Importantly, when components are fused together, the resulting component only counts as one of the innovator’s n components (as in the above example, where learning the AVE or R_AVE command is just as easy as learning SUM or DIV). In this framework, it makes sense to fuse components when doing so expands the set of useful innovations you can make when restricted to n components.

It ends up being best to fuse together components that are frequently used together (e.g., creating AVE from SUM and DIV). Fusing together components that are rarely used together (e.g., R_AVE) is inefficient, since it wastes one of the innovator’s n component slots.

What’s cool about this paper is that they actually investigate this with data. They look at the number of recipes that can be made out of 381 ingredients and the number of combination therapies that can be made by combining 901 different drugs. They then ask, for example, if you had a set of n ingredients, how many actually existing recipes could you make? How does this change if a random set of k ingredients is combined into one ingredient (e.g., in a pre-mixed spice pack, or pre-made dough, etc.), freeing up space for k - 1 other ingredients?
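A toy version of that counting exercise, in Python (a sketch of my own with made-up recipes, not FT’s code or data):

```python
from itertools import combinations

# Recipes are sets of ingredients; a pantry can "make" any recipe it fully contains.
recipes = [
    {"flour", "water", "yeast"},
    {"flour", "water", "yeast", "salt"},
    {"flour", "egg", "milk"},
    {"rice", "egg", "soy"},
]
ingredients = sorted(set().union(*recipes))

def reachable(supplied):
    """Count recipes whose ingredients are all in the supplied set."""
    return sum(1 for r in recipes if r <= supplied)

def best_pantry(n):
    """Best achievable recipe count with any pantry of n separate ingredients."""
    return max(reachable(set(c)) for c in combinations(ingredients, n))

# Fuse flour + water + yeast into one pre-made component ("dough"): it occupies
# a single pantry slot but supplies all three underlying ingredients.
fused = {"flour", "water", "yeast"}
others = [i for i in ingredients if i not in fused]

def best_pantry_with_fused(n):
    return max(reachable(fused | set(c)) for c in combinations(others, n - 1))

print(best_pantry(3))             # 1: three separate ingredients reach one recipe
print(best_pantry_with_fused(3))  # 2: fusing frees up slots and expands the set
```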

Two results are interesting. First, most of the time, combining multiple components into one does not expand the set of possible innovations. For example, there are over 65 million different ways to combine 2–5 ingredients. Of those, if you are restricted to a set of n items in your kitchen, giving one of these slots to a combined item only expands the set of possible recipes for 9,839 of those combinations. That is, for recipes, only 0.02% of possible combinations of ingredients were more useful together than decomposed into their constituents. Similar results follow for drug combinations (though less extreme).

Second, restricting attention to these useful combinations, the vast majority are themselves viable recipes or drug combinations! Of the 9,839 combinations of ingredients that expand the set of recipes a kitchen can cook, 96.88% were combinations that themselves complete a recipe. Again, similarly for drug combinations.

What’s the take-away? One interpretation is that innovators are relatively short-sighted. Successful combinations of components are originally built to be stand-alone technologies. Only later do people realize this technology can itself be usefully combined with other technological components. It is not typically the case that innovators foresee that a set of components, useless on their own, will enable many other technologies if combined. (At least, that is what these small datasets suggest.)

This is in the spirit of Weitzman’s model, where innovators combine ideas to invent technologies that are useful in the production of economic goods, not to enable follow-on innovation that builds on their idea. It fits well with a view of technology where we stumble onto unexpected new possibilities after creating technologies that were not originally intended to open up new domains.

The Industrial Revolution

One of the implications of Weitzman (1998) is that the growth of ideas is initially constrained by the number of possible ideas. During this period, every possible combination is investigated and innovation is very slow. However, once enough components are added to the stock of ideas, the number of possibilities grows explosively. In other words, innovation has a long period of near-stagnation, followed by faster-than-exponential growth.

Weitzman alludes in passing to the fact that this seems to fit the history of innovation quite well. A working paper by Koppl, Devereaux, Herriot, and Kauffman expands on this notion as a potential explanation for the industrial revolution. Both KDHK and Weitzman adopt the model that combinations of pre-existing technologies sometimes yield new technologies. For Weitzman, this is a purposeful pairing of two components. For KDHK, this is modeled as a random evolutionary process, where there is some probability any pair of components results in a new component, a lower probability that triple-combinations result in a new component, a still lower probability that quadruple-combinations result in a new component, and so on. KDHK show this simple process generates the same slow-then-fast growth of technology.
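A stylized simulation of this kind of dynamic (my own sketch, not KDHK’s model code; the probabilities and starting count are assumptions chosen just to show the shape of the curve):

```python
from math import comb

# Each period, every pair of existing components has a small chance of yielding
# a new component, and every triple a smaller chance. Track the expected count.
p = {2: 1e-3, 3: 1e-6}   # assumed success probabilities by combination size
M = 20.0                 # assumed initial number of components

for t in range(1, 201):
    M += sum(prob * comb(int(M), k) for k, prob in p.items())
    if t % 20 == 0:
        print(t, round(M))
    if M > 1e6:          # stop once the combinatorial explosion takes over
        print(t, "explosion")
        break

# The count crawls upward for many periods (near-stagnation), then the pair and
# triple terms dominate and it races off: slow-then-fast growth, as in the post.
```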

Now, their point is not so much that the industrial revolution is a solved question. They merely show that a process in which people occasionally combine random sets of the created artifacts around them, and notice when the result is a useful invention, inevitably transitions from slow to very rapid growth. It’s not that this explains why the industrial revolution happened in Great Britain in the 1700–1800s; instead, they argue an industrial revolution was inevitable somewhere, at some time, given these dynamics. Also interesting to me is the idea that this might have happened regardless of the institutional environment innovators were working in. If random tinkering is allowed to happen, with or without a profit motive, then you can get a phase-change in the technological trajectory of a society as the set of combinatorial possibilities grows.

Long-run Growth and AI

A final point of Weitzman (1998) is that, eventually, the space of possible ideas expands far more rapidly than the economy’s GDP. From this point on, the growth of technology is constrained not by the set of possible ideas, but by the growth of R&D resources. What ultimately happens to the rate of economic growth depends on two factors: the cost of R&D and the extent to which knowledge can be substituted for physical capital. In a world where the cost of R&D falls asymptotically to zero, growth goes to infinity if there is sufficient ability to replace the function of machinery and other capital with better ideas about producing goods.

Why would we assume the cost of R&D would ever fall to zero? Well, one possibility is that technological progress yields technologies that increase the productivity of R&D. One such technology is artificial intelligence.

The impact of AI on the rate of growth is what motivates Agrawal, McHale, and Oettl (2018). They build a new model of economic growth by combining a combinatorial model of innovation (inspired by Weitzman) with a now-standard model of endogenous growth by Charles Jones. Their contribution is to give the innovation side of the economy more rigorous foundations, in a way that lets them push and prod different parameters to think through the impact on economic growth.

What happens, for example, if researchers can access a greater share of human knowledge via improved search and the internet? In their model, growth is enhanced in both the short and the long run.

A more interesting example: what if AI allows researchers to more fully explore the space of possible ideas? Consider the cooking ingredients example from FT above. If there were only 10 possible kitchen ingredients, a dedicated cook could work their way through all 1,024 possible combinations. However, with 381 possible kitchen ingredients, there are over 65 million different ways to combine up to 5 of them, and by FT’s criteria only 0.02% of these were “useful.” No human cook could ever try all these combinations out, but maybe an AI (with a model of human taste) could.

As you would expect, an AI that expands your ability to sort through the space of combinatorial possibilities increases the rate of growth in the short-run. But perhaps surprisingly, in the long run it does not.

This result comes from the idea that combinatorial spaces quickly grow to a size that is inconceivably big, even for a super-powerful AI. If you consider all possible combinations of 381 ingredients — not just those with 5 or fewer components — there are more possibilities than there are atoms in the universe. That means, in the long run, artificial intelligence is no better than human intelligence, since both explore a negligible share of the space of possibilities. (That doesn’t mean AI doesn’t improve the long-run level of GDP though, since it does raise the growth rate on the way to the long run.)
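A quick back-of-the-envelope check of that claim (assuming the common estimate of roughly 10^80 atoms in the observable universe):

```python
# All possible subsets of 381 ingredients versus atoms in the observable universe.
subsets_of_381_ingredients = 2 ** 381
atoms_in_universe = 10 ** 80          # assumed rough estimate

print(len(str(subsets_of_381_ingredients)))             # 115 digits, i.e. ~10^114
print(subsets_of_381_ingredients > atoms_in_universe)   # True
```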

What if an AI could actually keep pace with the combinatorial explosion? In that case, you get a singularity — growth increases without bound.

Combining Insights

To sum up, Weitzman was among the first to write up a model of the economy with innovation-as-combination at its heart. In my experience though, it was the kind of paper that was cited a lot for its conceptual contribution, but rarely did people build explicitly on its model of the innovation process. In fact, since it did not become a workhorse model in economics, the paper is probably not as well known by people working in the area as it could be. (I know I encountered it after I had already been working on combinatorial innovation, and at least one of these papers does not cite it either.) A pleasant surprise, then, to encounter three separate papers in the last few years that deepen and extend these ideas.

Originally published at https://mattsclancy.substack.com.
