What is the world’s current distribution of computation, and what will it be in the future?
This question is relevant to several issues in AGI safety strategy. To name just three examples:
- If a large government or corporation wanted to quickly and massively upgrade its computing capacity so as to make a push for AGI or WBE, how quickly could it do so?
- If a government thought that AGI or WBE posed a national security threat or global risk, how much computation could it restrict, how quickly?
- How much extra computing is “immediately” available to a successful botnet or government, simply by running existing computers near 100% capacity rather than at their current utilization levels?
To investigate these questions, MIRI recently contracted Vipul Naik to gather data on the world’s current distribution of computation, including current trends. This blog post summarizes our initial findings by briefly responding to a few questions. Naik’s complete research notes are available here (22 pages). This work is meant to provide a “quick and dirty” launchpad for future, more thorough research into the topic.
Q: How much of the world’s computation is in high-performance computing clusters vs. normal clusters vs. desktop computers vs. other sources?
A: Computation is split between application-specific integrated circuits (ASICs) and general-purpose computing. According to Hilbert & Lopez (2011a, 2012a, 2012b), the fraction of computation done by general-purpose computing declined from 40% in 1986 to 3% in 2007, and the trend line suggests further decline.
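As a rough consistency check (a sketch, using the growth rates reported later in this post: 58% per annum for general-purpose computing and 83% per annum for application-specific computing), the 21-year gap in growth rates approximately reproduces the observed share decline:

```python
# Rough consistency check: does a 58%/yr vs. 83%/yr growth gap over
# 1986-2007 reproduce the drop in general-purpose computing's share
# from 40% to 3%? (Growth rates from Hilbert & Lopez's data.)
years = 2007 - 1986  # 21 years

gp = 0.40 * 1.58 ** years    # general-purpose capacity (share units)
asic = 0.60 * 1.83 ** years  # application-specific capacity

share_2007 = gp / (gp + asic)
print(f"implied 2007 general-purpose share: {share_2007:.1%}")
# → implied 2007 general-purpose share: 3.0%
```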
Within general-purpose computing, the split for the year 2007 is as follows:
- For installed capacity: 66% PCs (incl. laptops), 25% videogame consoles, 6% mobile phones/PDAs, 3% servers and mainframes, 0.03% supercomputers, 0.3% pocket calculators.
- For effective gross capacity: 52% PCs, 20% videogame consoles, 13% mobile phones/PDAs, 11% servers and mainframes, 4% supercomputers, 0% pocket calculators.
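The gap between the two breakdowns gives a crude sense of how heavily each category is used relative to what is installed. A sketch, using the 2007 figures above (the effective-to-installed share ratio is only a relative utilization indicator, not an absolute utilization figure):

```python
# Effective-share / installed-share ratios from the 2007 figures above.
# A ratio well above 1 means the category runs much closer to its
# installed capacity than the average device does.
installed = {"PCs": 66, "consoles": 25, "phones/PDAs": 6,
             "servers/mainframes": 3, "supercomputers": 0.03}
effective = {"PCs": 52, "consoles": 20, "phones/PDAs": 13,
             "servers/mainframes": 11, "supercomputers": 4}

for category in installed:
    ratio = effective[category] / installed[category]
    print(f"{category}: {ratio:.1f}x")
# PCs and consoles come out below 1x, supercomputers far above 1x.
```

The sub-1 ratios for PCs and consoles hint at the idle capacity raised in the botnet question above, while supercomputers run much closer to full capacity.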
For more detailed data, see Section 2.2 of Naik’s research notes and Section E of Hilbert & Lopez (2011b).
Q: What is the world’s computation being used for, by whom, and where?
A: See the answer above, plus Section 3 of Naik’s research notes.
Q: How much capacity is added per year?
A: Growth rates and doubling periods, using data from 1986-2007, are as follows:
- General-purpose computing capacity: growth rate 58% per annum, doubling period 18 months (see Section 2.2 of Naik’s research notes).
- Communication: growth rate 28% per annum, doubling period 34 months (see Section 2.3 of Naik’s research notes).
- Storage: growth rate 23% per annum, doubling period 40 months (see Section 2.4 of Naik’s research notes).
- Application-specific computing: growth rate 83% per annum, doubling period 14 months (see Section 2.2 of Naik’s research notes).
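The doubling periods above follow directly from the annual growth rates; a minimal sketch of the conversion (doubling period in months = 12 · ln 2 / ln(1 + r)):

```python
import math

# Annual growth rates from the 1986-2007 data above
growth_rates = {
    "general-purpose computing": 0.58,
    "communication": 0.28,
    "storage": 0.23,
    "application-specific computing": 0.83,
}

def doubling_period_months(annual_growth_rate):
    """Months for capacity to double at a constant annual growth rate."""
    return 12 * math.log(2) / math.log(1 + annual_growth_rate)

for name, rate in growth_rates.items():
    print(f"{name}: {doubling_period_months(rate):.0f} months")
# → general-purpose computing: 18 months
# → communication: 34 months
# → storage: 40 months
# → application-specific computing: 14 months
```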
A breakdown of the data by time period is available in Hilbert & Lopez (2011a), and the most important quotes are included in the relevant sections of Naik’s research notes.
Q: How quickly could capacity be scaled up (in the short or medium term) if demand for computing increased?
A: The semiconductor industry is quite responsive to changes in demand, and catches up with book-to-bill ratios as large as 1.4 within 6 months (see Section 3.2 of Naik’s research notes). In addition, Litecoin, an allegedly ASIC-resistant alternative to Bitcoin, already has ASICs about to ship within two years of its launch, which suggests relatively rapid turnaround given large enough economic incentives. Likewise, in high-frequency trading (HFT), huge investments in nanosecond-scale computing and in shaving milliseconds off the Chicago-New York and New York-London cables suggest quick responsiveness to large incentives.
Q: How much computation would we expect to be available from custom hardware (FPGAs/ASICs and their future analogs)?
A: An increasing fraction (note the decline in general-purpose computing’s share from 40% in 1986 to 3% in 2007). However, ASICs cannot be repurposed for other tasks, and FPGAs only to a limited extent, so existing application-specific hardware is of little help with new tasks.
Q: What is the state of standard conventions for reporting data on such trends?
A: The work of Hilbert, Lopez, and others may eventually lead to uniform conventions for reporting and communicating such data, which would allow for a more informed discussion of these trends. However, Martin Hilbert in particular is skeptical of standardization in the near future, although he believes it is possible in principle; see Hilbert (2012) for more. On the other hand, Dienes (2012) argues for standardization.
Did you like this post? You may enjoy our other Analysis posts, including: