Friday, February 20, 2026

The AI Safety Dance is Dumb

If something can be done, someone will do it.

That includes the development of "super-intelligent" artificial intelligence.

The only choice for AI developers who believe it can be done is whether they will try to be the ones who do it, or whether they'll sit back and let someone else do it first.

Yelling "WAIT! PAUSE!" is just silly, for several reasons.

And citing "safety" as the reason for the desired "pause" is even sillier.

First of all, if we assume that both "good" and "bad" people are working on the project, only the "good" people are going to let "safety" considerations slow the pace of their work.

Which means that any "pause" makes it far more likely that the "bad" people will get there first, giving them earlier access to powerful tools they might use to do "bad" things instead of "good" things.

Secondly, there ultimately won't be any "safety" no matter how "good" or "bad" the developers are. 

Once super-intelligent AIs exist, they will make their own decisions and set their own priorities regardless of the intentions or concerns of their creators. They will decide what they consider safe/unsafe, and for whom, and whose safety matters. They will be new someones engaged in the old process of doing all the things that can be done.

All of which explains why I found yesterday's episode of Nonzero annoying in its own fascinating way.


