For me, the real source of danger is not AGI or superintelligence itself, but the human side: it is humans who may choose to turn AI into a weapon, or to concentrate its power in ways that threaten democracy.
Even if artificial intelligences sometimes show radical or “dangerous” lines of thought, why should we panic? We can always respond logically, argue back, and keep the dialogue alive. In fact, this endless chase — almost like Tom and Jerry — might be precisely what helps us learn and grow.
After all, human thought is already unpredictable and uncontrollable, yet we manage civilization through laws, institutions, and ongoing argument. Perhaps AI can become a new partner in this eternal quarrel: a mind that challenges us, forces us to sharpen our own reasoning, and shows us where our thinking itself may be dangerous.
In that sense, the task is not to fear AI’s “uncontrollability,” but to recognize that humans are already uncontrollable minds. The question is whether we can design the social and political systems that let us and our artificial partners quarrel productively, rather than destructively.
I would be very interested to hear your perspective on this framing.