Monday, June 19, 2023

Suffering Risk

    As a longtermist, I care a lot about humanity's potential. I want things to go very well for us: human expansion across the galaxy and universe, free societies full of peace and love, advancements in technology that reduce suffering and death, and so on. I do not want things to go very wrong: humanity dying out due to AI, nuclear war, or engineered pandemics; authoritarian governments using technology to enslave or repress large swaths of humanity; AI simulating billions of digital minds and putting them in a virtual hellscape. Most people probably agree with this sentiment; they just don't think much about it, and even those who agree don't live their lives any differently.

    Many longtermists focus on existential risk, called X-risk for short: the risk that humanity dies out entirely (nukes, pandemics, AI). A different risk is suffering risk, called S-risk: the risk that humanity gets stuck in place (an authoritarian government takes over and stops progress), or that humans are tortured indefinitely (an AI simulates digital minds and torments them relentlessly, or enslaves humanity and tortures us in "real life"). 80,000 Hours estimates that there are fewer than fifty people in the world thinking through how to reduce S-risk. It seems pretty weird that in a world of eight billion people, fewer than fifty are seriously concerned about the very real possibility of worldwide enslavement. For the most part, I don't see any path toward S-risk that isn't agent-driven: only through massive advancements in technology would this sort of grand-scale suffering be possible. We generally think of technology as good, but as Tim Urban writes: “technology is a multiplier of both good and bad. More technology means better good times, but it also means badder bad times.”

    Only through advanced technology do I see such potential for mass suffering. Sure, maybe aliens descend from the sky and torture our species for millennia, but, contrary to what Independence Day would lead you to believe, we probably can't do anything about that. We can, however, massively influence the types of technology that get developed, and we can put extremely forward-looking safeguards in place. S-risk is really the main reason I stick so closely to the topic of artificial intelligence. Nukes and pandemics are bad (and I'm sure some authoritarian government could use them to blackmail its citizens), but all you have to do to see S-risk in action is watch The Matrix once. Obviously, I doubt robots will start using humans for batteries (have they heard of nuclear power?). But in many ways, the paperclip maximizer is the least scary robot: it merely kills everyone, rather than keeping us around to suffer.
