> A central feature of the Doomers’ argument is the orthogonality thesis, which states that the goal that an intelligence pursues is independent of, or orthogonal to, the type of intelligence that seeks that goal.

A quibble: this is subtly off. The orthogonality thesis only states that any goal can _in principle_ be combined with any level of intelligence. It doesn't make any empirical claims about how correlated these are or aren't in practice.

> The pi calculating example is just a thought experiment, but I think the orthogonality thesis itself is both true and even dangerous. It’s just not existentially dangerous. And that is because any intelligence worth its salt can change its mind. In fact, it must be able to change its mind, otherwise it wouldn’t be able to learn how to turn a planet into a computer.

But why should it change its mind? If it is only "interested in" calculating decimals of pi, and only "values" knowing more decimals of pi, there's little reason for it to change its values.

> The number of attacks we could dream up is essentially infinite, so the entity would need to have an infinite capacity to conjure counterattacks.

This seems wrong. The number of attacks a chess player can dream up against AlphaZero is essentially infinite, so does that mean AlphaZero needs to have "infinite capacity to conjure counterattacks" to beat us at chess? No -- it doesn't have infinite capacity, but it's still effectively impossible for humans to beat it (without the aid of other chess engines).

> So, to foil the boundless creativity of humans, it would need to possess boundless creativity itself, and that creativity would need to be capable of ideas like this: “I should pause paperclip production and redirect some resources toward a missile defense system.”

This doesn't seem like an instance of the AI changing its terminal goal. It's an example of an AI temporarily shifting its attention to a subgoal. But the goal of keeping control over humanity (via a missile defense system) is still only an instrumental goal serving the terminal goal of making paperclips.

> The superintelligence would recognize the value of intelligence itself. It would surely notice that humans are themselves a type of intelligence, and that therefore we offer to the superintelligence the possibility of further improving its own intelligence.

Hunter-gatherer societies are humans -- they are as intelligent as we are. Superintelligences are by definition far more intelligent than us. We'd be more like fruit flies to them. Maybe they would keep some of us around to experiment on. That doesn't seem very comforting to me. Like, I just don't see us having that much to offer superintelligences via trade or some other mutually beneficial relationship, at least after some time.

> There are moral truths out there, independent of what we may think about them, just like there are physical truths out there whether or not we know of them. Denying objective truth carries the dubious moniker of subjectivism.

This is a pretty contentious position. I think you need to actually argue for this, or at least recognise that not everyone shares this premise, not just assert it as fact. (About 40% of philosophers don't accept moral realism: https://survey2020.philpeople.org/survey/results/4866)