Discussion about this post

Carlos

Even though I started a substack explicitly focused on how reckless we're being about AI, it's not that I believe rogue superintelligence is a guarantee, just that it's plausible enough that it's hard to kick back and relax about it, akin to how playing Russian Roulette would be a white-knuckle experience. Just because some experts think something is hard or impossible does not make it so.

Scott Alexander likes to mention the example of Rutherford declaring the idea of energetic nuclear chain reactions to be "moonshine", Leo Szilard reading about that, taking it as a challenge, and then proceeding to figure out how to do it in a matter of hours.

Though of course, that's not quite right: experimentation was still necessary to get from Szilard's insight to nuclear bombs and reactors. That's probably why I'm not worried about a super-fast takeoff (basically the same point you make in this article), but AI systems will still likely keep getting more useful and widespread, increasing the probability that rogue superintelligence comes about at some point.

Bottom line, I trust the safety-minded experts (https://aidid.substack.com/p/what-is-the-problem), those who think AI poses major risks, more than the ones who don't. Not because I apply the precautionary principle in every situation, but because it does seem to fit in this one.

And like skybrian said, armchair arguments that certain scientific breakthroughs are impossible are suspect. As I once read a nihilist say (https://rsbakker.wordpress.com/essay-archive/outing-the-it-that-thinks-the-collapse-of-an-intellectual-ecosystem/), using philosophy to countermand science is "like using Ted Bundy’s testimony to convict Mother Theresa".

skybrian

While it’s true that many forms of learning will require real experiments and so can’t be infinitely fast, this only goes so far in reducing my concerns. Air travel is an accelerant for the spread of viruses and social media for the spread of memes. Accelerants are concerning because they reduce our ability to cope, and this is still true even if there are limits on how much they can speed things up.

