[Shulman][20:30]
I’ll interject some points re the earlier discussion about how animal data relates to the ‘AI scaling to AGI’ thesis.
1. In humans it’s claimed the IQ-job success correlation varies by job. For a scientist or doctor it might be 0.6+, for a low-complexity job more like 0.4, and more like 0.2 for simple repetitive manual labor. That correlation presumably goes down a lot for animals with less in the way of hands, or ones focused on low-density foods like baleen whales or grazers. If it’s 0.1 for animals like orcas or elephants, or 0.05, then there’s 4-10x less fitness return to smarts.
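A quick sketch of that arithmetic, treating fitness return as proportional to the correlation (itself a simplifying assumption; the correlation values are the illustrative ones above):

```python
# Back-of-envelope version of the "4-10x less fitness return" claim.
typical_human_r = 0.4            # low-complexity human job, from the text
animal_r_guesses = [0.1, 0.05]   # guesses above for orcas/elephants
for r in animal_r_guesses:
    print(f"animal r={r}: {typical_human_r / r:.0f}x less return than r={typical_human_r}")
# -> 4x and 8x; using the wider human range (0.2-0.6) gives roughly the
# 4-10x spread cited above.
```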
2. But they outmass humans by more than 4-10x: elephants 40x, orcas 60x+. Metabolically (20 watts divided by the BMR of the animal) the gap is somewhat smaller, though, because of metabolic scaling laws (energy use scales with the 3/4, or maybe 2/3, power of body mass, so a 40x mass gap is only a ~16x gap in energy budget at the 3/4 power).
https://en.wikipedia.org/wiki/Kleiber%27s_law
If dinosaurs were poikilotherms, that’s a 10x difference in energy budget vs a mammal of the same size, although there is debate about their metabolism.
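A quick sketch of the Kleiber adjustment, using the mass ratios above:

```python
# Metabolic comparison from point 2, assuming Kleiber's law
# (BMR ∝ mass^(3/4), with 2/3 as the alternative exponent mentioned above).
for name, mass_ratio in [("elephant", 40), ("orca", 60)]:
    for k in (3 / 4, 2 / 3):
        print(f"{name}: {mass_ratio}x mass -> "
              f"{mass_ratio ** k:.0f}x energy budget (exponent {k:.2f})")
# elephant: ~16x (3/4 power) or ~12x (2/3); orca: ~22x or ~15x.
# So the metabolic gap is considerably smaller than the mass gap.
```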
3. If we’re looking for an innovation in birds and primates, there’s some evidence of ‘hardware’ innovation rather than ‘software.’ Herculano-Houzel reports in The Human Advantage (summarizing much prior work neuron counting) different observational scaling laws for neuron number with brain mass for different animal lineages.
We were particularly interested in cellular scaling differences that might have arisen in primates. If the same rules relating numbers of neurons to brain size in rodents (6) [...]
The brain of the capuchin monkey, for instance, weighing 52 g, contains >3× more neurons in the cerebral cortex and ≈2× more neurons in the cerebellum than the larger brain of the capybara, weighing 76 g.
[Editor’s Note: Quote source is “Cellular scaling rules for primate brains.”]
In rodents brain mass increases with neuron count as ~n^1.6, whereas it’s close to linear (~n^1.1) in primates. For cortical neurons vs. cortical mass, the exponents are ~1.7 (rodents) and ~1.0 (primates). In general, birds and primates are outliers in how neuron count scales with brain mass.
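To make the contrast concrete (a sketch; only relative growth is compared, since the normalization constants aren’t given here):

```python
# Brain mass grows roughly as (neuron count)^k,
# with k ≈ 1.6 for rodents vs ≈ 1.1 for primates (Herculano-Houzel's fits).
for lineage, k in [("rodent", 1.6), ("primate", 1.1)]:
    print(f"{lineage}: 10x neurons -> ~{10 ** k:.0f}x brain mass")
# rodent: ~40x mass for 10x neurons; primate: ~13x. That's why a primate
# brain packs many more neurons into a given mass than a rodent brain.
```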
Note also that bigger brains with lower neuron density have longer communication times from one side of the brain to the other. So primates and birds can have faster clock speeds for integrated thought than a large elephant or whale with similar neuron count.
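As an illustrative back-of-envelope for that, with assumed round numbers for brain diameter and conduction velocity (my additions, not figures from the discussion):

```python
# One-way cross-brain signal delay ≈ diameter / axon conduction velocity.
velocity_m_s = 10.0  # myelinated axons span roughly 1-100 m/s; 10 is a round pick
for brain, diameter_cm in [("small primate/bird", 2), ("human", 14), ("elephant", 20)]:
    delay_ms = diameter_cm / 100 / velocity_m_s * 1000
    print(f"{brain}: ~{delay_ms:.0f} ms cross-brain delay")
# Smaller, denser brains allow more serial integration steps per second.
```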
4. Elephants have brain mass ~2.5x human, and 3x the neurons, but 98% of those are in the cerebellum (vs. 80% or less in most animals; cerebellar neurons are generally the tiniest and seem to do a lot of fine motor control). The human cerebral cortex has 3x the neurons of the elephant cortex (which has twice the mass). The giant cerebellum seems to be about controlling the very complex trunk.
https://nautil.us/issue/35/boundaries/the-paradox-of-the-elephant-brain
Blue whales get close to human neuron counts with much larger brains.
https://en.wikipedia.org/wiki/List_of_animals_by_number_of_neurons
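As a consistency check on the ratios in point 4 (the absolute neuron counts here are approximate published figures from Herculano-Houzel and colleagues, my addition rather than numbers from this discussion):

```python
# Counts in billions of neurons.
elephant = {"total": 257, "cerebellum": 251, "cortex": 5.6}
human = {"total": 86, "cerebellum": 69, "cortex": 16}
print(f"elephant/human total neurons: {elephant['total'] / human['total']:.1f}x")
print(f"elephant cerebellum share: {elephant['cerebellum'] / elephant['total']:.0%}")
print(f"human cerebellum share: {human['cerebellum'] / human['total']:.0%}")
print(f"human/elephant cortex neurons: {human['cortex'] / elephant['cortex']:.1f}x")
# -> ~3.0x total, ~98% vs ~80% cerebellum, ~2.9x cortex advantage for humans,
# matching the ratios in point 4.
```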
5. As Paul mentioned, the correlation of human brain volume with measures of cognitive function, after correcting for measurement error on the cognitive side, is in the vicinity of 0.3-0.4 (it might go a bit higher after controlling for non-functional brain volume variation, and lower after removing confounds). The genetic correlation with cognitive function in this study is 0.24:
https://www.nature.com/articles/s41467-020-19378-5
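A one-line sketch of what that 0.24 implies, under the simple squared-correlation reading (an assumption that ignores mediation details):

```python
r_g = 0.24
print(f"genetic variance in cognition shared with brain volume: ~{r_g ** 2:.1%}")
# -> ~5.8%
```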
So it accounts for a minority of genetic influences on cognitive ability. We’d also expect a bunch of genetic variance that’s basically disruptive mutations in mutation-selection balance (e.g. schizophrenia seems to be a result of that: schizophrenia alleles are under negative selection but present a big mutational target, with the standing burden set by the level of fitness penalty; in niches with less return to cognition, the mutational surface will be cleaned up less frequently and carry more standing junk).
Other sources of genetic variance might include allocation of attention/learning (curiosity and thinking about abstractions vs immediate sensory processing/alertness), length of childhood/learning phase, motivation to engage in chains of thought, etc.
Overall I think there’s some question about how to account for the full genetic variance, but mapping it onto the ML experience (with model size, experience, and reward functions being the key ingredients) looks compatible with the biological evidence. I lean towards it, although it’s not cleanly and conclusively shown.
Regarding economic impact of AGI, I do not buy the ‘regulation strangles all big GDP boosts’ story.
The BEA breaks down US GDP by industry here (page 11):
https://www.bea.gov/sites/default/files/2021-06/gdp1q21_3rd_1.pdf
As I work through sectors and the rollout of past automation, I see opportunities for large-scale rollout that is not heavily blocked by regulation. Manufacturing is still trillions of dollars, and robotic factories are permitted and produced under current law, with the limits being more about which tasks the robots can do at low enough cost (e.g. this is what stopped Tesla’s plans for more fully robotic factories). Also worth noting that manufacturing is mobile and new factories are sited in friendly jurisdictions.
Software to control agricultural machinery and food processing is also permitted.
Warehouses are also low-regulation environments, with logistics worth hundreds of billions of dollars. See Amazon’s robot-heavy warehouses, limited mainly by robotics software.
Driving is hundreds of billions of dollars, Tesla has been permitted to use Autopilot, and there has been a lot of regulator enthusiasm for permitting self-driving cars with humanlike accident rates. Waymo still hasn’t reached that point, it seems, and is working on lowering costs.
Restaurants/grocery stores/hotels are around a trillion dollars. Replacing humans in vision/voice tasks to take orders, track inventory (Amazon Go style), etc is worth hundreds of billions there and mostly permitted. Robotics cheap enough to replace low-wage labor there would also be valuable (although a lower priority than high-wage work if compute and development costs are similar).
Software is close to a half trillion dollars and the internals of software development are almost wholly unregulated.
Finance is over a trillion dollars, with room for AI in sales and management.
Sales and marketing are big and fairly unregulated.
In highly regulated and licensed professions like healthcare and legal services, you can still see a licensee mechanically administer the advice of the machine, amplifying their reach and productivity.
Even in housing/construction there are still great profits to be made by improving the efficiency of whatever construction is allowed (a sector worth hundreds of billions).
If you’re talking about legions of super-charismatic AI chatbots, they could be doing sales, coaching human manual laborers to effectively upskill them, and performing the variety of activities discussed above. That’s enough to more than double GDP, even with strong Baumol effects/cost disease, I’d say.
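As a rough tally of the sector figures quoted above (order-of-magnitude only; see the BEA link for the real breakdown):

```python
# USD trillions per year, using the approximate figures from this discussion.
sectors = {
    "manufacturing": 2.0,               # "still trillions of dollars"
    "warehousing/logistics": 0.3,       # "hundreds of billions"
    "driving": 0.3,                     # "hundreds of billions"
    "restaurants/grocery/hotels": 1.0,  # "around a trillion"
    "software": 0.5,                    # "close to a half trillion"
    "finance": 1.0,                     # "over a trillion"
}
print(f"~${sum(sectors.values()):.1f}T/yr before counting sales/marketing, "
      "construction, or licensed professions")
# Against roughly $22T of US GDP in 2021, that's already a large share.
```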
Although of course, if you have AIs that can do so much, the wages of AI and hardware researchers will be super high, so a lot of that capacity will go into the intelligence explosion, while before that point the various weaknesses that prevent full automation of AI research will also hamper activity in these other sectors to varying degrees.
Re discontinuity and progress curves, I think Paul is right. AI Impacts went to a lot of effort assembling datasets looking for big jumps on progress plots, and indeed nukes are an extremely high percentile for discontinuity. And they were developed by the biggest-spending power (yes, other powers could have bet more on nukes, but didn’t, and that was related to the US having more to spend and putting more into many bets), with the big gains in military power per $ coming with the hydrogen bomb and over the next decade.
https://aiimpacts.org/category/takeoff-speed/continuity-of-progress/discontinuous-progress-investigation/
For measurable hardware and software progress (Elo in games, loss on defined benchmarks), you have quite continuous hardware progress, and software progress that is in the same ballpark and not drastically jumpy (like 10 years of gains arriving in 1), more so as you get to metrics used by bigger markets/industries.
I also agree with Paul’s description of the prior Go trend, and how DeepMind increased $ spent on Go software enormously. That analysis was a big part of why I bet on AlphaGo winning against Lee Sedol at the time (the rest being extrapolation from the Fan Hui version and models of DeepMind’s process for deciding when to try a match).