Do we really want to leave our technological futures in the hands of the major players in AI research – Google, Facebook, and the US Defense Department?
I argue that our political system is designed precisely not to deal with the questions raised by the transhumanist movement, and that without a major overhaul of political liberalism, technological progress will escape democratic oversight.
For the first time in history we have the ability to choose what it means to be human, and yet our liberal pluralist societies preclude substantive debate about our collective future. Modern liberal states are based upon the assumption that there is no single best way to live, and that for the state to endorse a substantive vision of the good life is to open the door to totalitarianism. On matters of personal conviction – human nature, our place in the cosmos, and our ultimate goals – liberal states want us to agree to disagree.
However, we cannot simply agree to disagree about transhumanism, because our individual choices will affect the entire species. If you decide to upload your brain onto a computer and abandon your biological body, you are choosing what is essential to humanity: you are defining human nature. If, on the other hand, the government bans technological enhancement, it too is imposing a vision of humanity. Thus, only once liberalism abandons the pretense of neutrality can we start imagining alternative technological futures and debating the underlying vision of the good life that will orient our choices.
I’m a political theory researcher at Sciences Po, and this talk draws on modern political theories of liberalism, the latest transhumanist literature, and ancient Greek theories of the good life.