Thank you for raising these important questions.
Personally, I do not believe that superintelligence must be uncontrollable. If we imagine it as a Socratic intelligence—one that begins with questions, exposes assumptions, and explains risks—then its role could be less about domination and more about sparring with us as a trusted adversary.
In that sense, I would even welcome a world where humans and AI can "quarrel like friends": disagree openly, point out dangers in each other’s reasoning, but still remain within a shared framework of logic and respect.
To me, the real challenge is not whether superintelligence is controllable in a remote-control sense, but whether we can design institutional and epistemic infrastructure in which questioning, counter-argument, and explicit permission are the defaults. Then "safety" means not passive obedience, but transparent and auditable reasoning.
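To make "permission as a default" slightly more concrete, here is a minimal, hypothetical sketch in Python. Every name in it (`Proposal`, `AuditLog`, `permission_gate`, and the two callbacks) is invented for this comment, not an existing API: a proposed action must disclose its rationale and risks, survive a mandatory counter-argument step, and receive explicit human permission, with every step appended to an audit trail.

```python
# A toy sketch only: Proposal, AuditLog, and permission_gate are names
# invented for this comment, not an existing library or standard.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable, List

@dataclass
class Proposal:
    action: str               # what the system wants to do
    rationale: str            # why it believes the action is justified
    known_risks: List[str]    # risks it is obliged to surface up front

@dataclass
class AuditLog:
    entries: List[str] = field(default_factory=list)

    def record(self, event: str) -> None:
        # Timestamped, append-only trail: the "auditable" half of the argument.
        self.entries.append(f"{datetime.now(timezone.utc).isoformat()} {event}")

def permission_gate(proposal: Proposal,
                    counter_argue: Callable[[Proposal], List[str]],
                    human_grants: Callable[[Proposal, List[str]], bool],
                    log: AuditLog) -> bool:
    """Deny by default: act only after disclosure, objection, and permission."""
    log.record(f"proposed {proposal.action!r}: {proposal.rationale}")
    for risk in proposal.known_risks:
        log.record(f"risk disclosed: {risk}")

    # Counter-argument is mandatory, not decorative: an empty adversarial
    # review is itself grounds for refusal.
    objections = counter_argue(proposal)
    if not objections:
        log.record("no counter-argument produced; refusing by default")
        return False
    for objection in objections:
        log.record(f"objection raised: {objection}")

    granted = human_grants(proposal, objections)
    log.record("permission granted" if granted else "permission denied")
    return granted

# Example: nothing runs unless the human callback explicitly says yes.
log = AuditLog()
plan = Proposal(action="retrain on newly scraped data",
                rationale="validation accuracy dropped this week",
                known_risks=["new data may be mislabeled"])
approved = permission_gate(
    plan,
    counter_argue=lambda p: ["the accuracy drop may be seasonal, not drift"],
    human_grants=lambda p, objections: False,  # default answer is no
    log=log,
)
```

The point of the sketch is that the gate fails closed: a missing counter-argument or an unanswered permission request yields refusal, so "safety" lives in the logged exchange of reasons rather than in obedience.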