11 Comments

Even though I started a Substack explicitly focused on how reckless we're being about AI, I don't believe rogue superintelligence is a guarantee; it's just plausible enough that it's hard to kick back and relax about it, akin to how playing Russian roulette would be a white-knuckle experience. Just because some experts think something is hard or impossible does not make it so.

Scott Alexander likes to mention the example of Rutherford declaring the idea of energetic nuclear chain reactions to be "moonshine", and Leo Szilard reading about that, taking it as a challenge, and then figuring out how to do it in a matter of hours.

Though of course, that's not quite right: experimentation was still necessary to get from Szilard's insight to working nuclear bombs and reactors. That's probably why I'm not worried about a super-fast takeoff (basically the same point you made in this article), but AI systems will still likely keep getting more useful and more widespread, increasing the probability that rogue superintelligence comes about at some point.

Bottom line: I trust the safety-minded experts (https://aidid.substack.com/p/what-is-the-problem), those who think AI poses major risks, more than the ones who don't, not because I apply the precautionary principle in every situation, but because it does seem to fit this one.

And like skybrian said, armchair arguments that certain scientific breakthroughs are impossible are suspect. As I once read a nihilist say (https://rsbakker.wordpress.com/essay-archive/outing-the-it-that-thinks-the-collapse-of-an-intellectual-ecosystem/), using philosophy to countermand science is "like using Ted Bundy’s testimony to convict Mother Theresa".


While it’s true that many forms of learning will require real experiments and so can’t be infinitely fast, this only goes so far in reducing my concerns. Air travel is an accelerant for the spread of viruses, as social media is for the spread of memes. Accelerants are concerning because they reduce our ability to cope, and that remains true even if there are limits on how much they can speed things up.


Quick disagreements.

First of all, OpenAI Codex and GPT’s capabilities feel fairly ... general. It’s hard to name a capability over the input text, below some complexity threshold, that they don’t have. And much of this comes from extremely simple techniques, albeit with a lot of refinement, tweaking, and added complexity; they remain conceptually simple relative to the sheer number of floating-point operations involved, being applied at supercomputer scale.

The supposed “slippery slope fallacy” of assuming that computing power, having increased, will keep increasing does not seem like a fallacy: Moore’s law continues to push, and FLOP/s continue to multiply. GPT is already a good conversationalist relative to some people, and Codex can code better than 80% of people.

You speak of “redefining the things computers do as intelligence”, but historically the opposite happened. As the power of computers expanded, things previously seen as typifying and proving intelligence (energy, motion, vitality, calculation, speech, representation, control, chess, then Go, then language, then programming, then image synthesis) were redefined as falling outside the distinctively human sphere.
