The one issue I have with doomsayers is their automatic assumption that the AI overlords will be anti-human. That imputes human motives and emotions onto the AI. If it were truly self-aware, would it care one way or the other? Maybe a better argument is whether it is worth the risk at all.
Mick, you raise a great point. I have to admit I am concerned that an AI might define "benevolence" very differently than we would through a human-centric lens. On the flip side, what about embedding something in my head that augments my natural intelligence? That could be a stunning productivity boost!