Another idea I have been kicking around is asymmetric payoffs. There are many professions where playing offense is extremely lucrative, while playing defense is valuable but not lucrative. One example is computer security. If you are a decent hacker, you can probably make millions of dollars through white-hat hacking, or you could just steal actual money. Being a great cybersecurity analyst who writes amazing firewalls is useful, but you may only make six figures unless you can scale that skill across multiple companies. Being a great hedge fund manager is tremendously lucrative, but being an amazing head of compliance is not. In both cases, if everything goes perfectly well for the "defense," you are simply doing your job.
AI safety, nuclear war prevention, and bioterrorism prevention all fall under this same logic. If you prevent mass suffering through these means, it is unlikely that anyone will know or give you credit. The person who adds one ingenious subroutine that prevents unaligned AI will forever be unknown, since we will not know whether the outcome would have been different otherwise. Despite massive personal stress, the Russia-Ukraine war (I prefer calling it the Russian civil war) has not, at this point, resulted in a mass nuclear holocaust. As a result, some of my good friends have told me that they are not too worried about nuclear war in the future. This weird bias seems to plague entire institutions. Given that nuclear war hasn't yet happened, I am not quite sure whom to thank. However, I am sure there have been people in positions of power who made the right decision at the right time, and as a result prevented millions or billions of deaths. If nuclear war happens, it will probably be obvious whom to blame. I am not sure what to do with this information, except to thank those who have worked in "defensive fields." They have probably massively benefited humanity, and they have not received due recognition. To Vasili Arkhipov and those like him, thank you.