Tuesday, June 20, 2023

The Mediocrity Principle

     Humans often believe that they are at the center of the universe. Disagreeing with this, scientifically speaking, has landed quite a few scientists in trouble. Nevertheless, essentially all modern astronomers and cosmologists agree with the Copernican Principle, which states that the Earth does not occupy a special place in the universe. We are not the center of the universe, and not even the center of our galaxy (gasp!). An extension of this principle is the Mediocrity Principle: the idea that there is nothing special at all about the Earth or humans, and that we are probably just par for the course in the universe. We can assume that what is happening here is happening elsewhere (probably even life, and even intelligent life). This is just an assumption, but a powerful one. Science has trended in this direction, not just in cosmology but also in biology (evolution says we are basically just advanced monkeys).

    There is a big problem with this principle: it is quite depressing. We want to think that we are special. We want to strive for important causes and have an impact. We do not want to be forgotten. When we look up at the night sky, it fills us with existential dread to realize that there are more stars in the universe than there are grains of sand on Earth. And a single beach has an incredible amount of sand. The next time you are on a beach, run your fingers through the sand. Imagine the worlds that could exist out there if there are really that many stars. Then, wonder if you are really the most important species, the "chosen" race. Probably not. Seems a bit ridiculous. But, maybe? We haven't seen any evidence of extraterrestrial life, and the Fermi Paradox is a genuine puzzle (why don't we see any signs of aliens, given that we probably should?). Maybe sentient life is extremely rare, or maybe we are the only conscious beings in the universe. That would matter, because without us the universe might just be lifeless balls of gas and rock. We might just be mediocre, or we might be the only thing that matters.

    The Mediocrity Principle has rung true up to this point in history. We don't seem particularly special. And if the principle holds, whatever we do with nukes, or AI, or pandemics doesn't matter much. We could all die out in a flash, and intelligent life elsewhere would probably live on. Perhaps they are better than us, more empathetic and moral. Maybe we would just get in their way. Whatever life force there is in the universe, it doesn't end with us. But what if it does? Until there is evidence to the contrary, we have an enormous responsibility. A burden, perhaps. To survive and thrive, to spread among the stars and act as observers to a beautiful universe. Beautiful because of our eyes.

Monday, June 19, 2023

Suffering Risk

    As a longtermist, I care a lot about humanity's potential. I want things to go very well for us: human expansion across the galaxy and universe, free societies full of peace and love, advancements in technology that reduce suffering and death, and so on. I do not want things to go very wrong: humanity dying out due to AI, nuclear war, or engineered pandemics; authoritarian governments using technology to enslave or repress large swaths of humanity; AI simulating billions of digital minds and putting them in a virtual hellscape. Most people probably agree with this sentiment; they just don't think much about it, and they don't live their lives differently even if they agree.

    Many longtermists care a lot about existential risk, called X-risk for short. Existential risk is the risk that humanity dies out (nukes, pandemics, AI). A different risk is suffering risk, called S-risk. Suffering risk is the risk that humans get stuck in place (an authoritarian government takes over and stops progress), or that humans are tortured forever (an AI simulates digital minds and tortures them relentlessly, or enslaves humanity and tortures us in "real life"). 80,000 Hours estimates that there are fewer than fifty people in the world thinking through how to reduce S-risk. Again, it seems pretty weird that we live in a world of eight billion people where fewer than fifty are seriously concerned about the very real possibility of worldwide enslavement. For the most part, I don't see any path to S-risk that isn't agent-driven: only through massive advancements in technology would this sort of grand-scale suffering be possible. We generally think of technology as good, but as Tim Urban writes: "technology is a multiplier of both good and bad. More technology means better good times, but it also means badder bad times."

    Only through advanced technology do I see such potential for mass suffering. Sure, maybe aliens descend from the sky and torture our species for millennia, but, contrary to what Independence Day would lead you to believe, we probably can't do anything about that. We can, however, massively influence the types of technology that are developed. We can put in place extremely forward-looking safeguards. S-risk is really the main reason I stick so closely to the topic of artificial intelligence. Nukes and pandemics are bad (and I'm sure some authoritarian government could use these to blackmail their citizens), but all you have to do to see S-risk in action is watch The Matrix once. Obviously, I doubt robots will start using humans for batteries (have they heard of nuclear power?). But in many ways, the paperclip maximizer is the least scary robot: it would merely kill us as a side effect. An agent that wants to keep us alive and suffering would be far worse.

Tuesday, June 13, 2023

Lie to Me

     In a perfect world, lawyers do not defend the guilty. We know who the guilty are, and they are adequately punished. Such a world may not be impossible for much longer. The more data we collect about an individual, the more we know about them. If you had a camera trained on O. J. Simpson for his entire life, you would know that he was a murderer. If you were a superintelligent AI system, you would likely not have to try very hard to become the world's greatest detective (sorry, Batman!) and convict O. J. of murder. Maybe there are actual lie detection techniques that certain AI systems will be good at, but even just by combing through massive amounts of data and using simple inference, I am sure that the policing systems of the future will be extremely powerful. Powerful enough to trust, and powerful enough to do away with the current "jury of your peers" legal system. Now, there is a trade-off here, the same trade-off we always face: safety vs. human rights.

    Authoritarian regimes focus on safety. Not the safety of their citizens, but the safety of their regime. They would want to know when a citizen was lying: "no, I wasn't at the protest last night." They would not want their citizens to have access to this technology: "hey, did you see that Robot 3000 proved that Xi Jinping was lying last night about the Uyghurs?" Use of advanced lie detection in the legal system will certainly change human interaction. Fooling modern-day lie detector machines is a bit of an ironic worry, because polygraphs have been shown not to work in any reliable fashion. But what about in the future? If your dog knocks over a vase and then tries to fool you into thinking it fell on its own, would you believe it? We can see right through animals' attempts to deflect blame or straight-up lie. They simply do not have the mental capacity to string together a convincing argument. That may be us in the future, trying to convince our technocratic overlords of our innocence. It does matter if we turn a 10% wrongful conviction rate into 0%, and a 90% rightful conviction rate into 100%. What matters much more is what the sentencing requirements are for breaking the law. What matters most is who is writing the laws.

Doing Good, or Not Doing Bad?

      Effective Altruism, as a philosophy, is very simple. Basically, the argument is that if you shouldn't do bad in the world, that me...