Friday, December 8, 2023

Doing Good, or Not Doing Bad?

     Effective Altruism, as a philosophy, is very simple. Basically, the argument is that if you shouldn't do bad in the world, that means you should do good. If it is morally wrong, objectively, to kick a baby or not save a drowning child, then it is morally right to treat others with kindness and spend some of your time and energy helping others. If it is true, morally, that you shouldn't cheat or steal, it is true that you should give and sacrifice.

    This is a very controversial take. I understand it particularly well, in my opinion, because I grew up Catholic. Catholics, in my estimation, spend a lot of time avoiding the negative. Whipping themselves into a frenzy over impure thoughts, past mistakes, and current temptations. As a Catholic teenager, I was constantly guilt-ridden. I was very concerned with what was going on in my own head, trying so hard to avoid slipping up or thinking the wrong thing. Policing my own brain rigorously, stressing about intrusive thoughts to an almost psychotic point. Little did I know, no one cared about what was going on inside my head. Not God, not others, not anyone.

    If I had spent half of that time focused on doing good, I wonder where I would be. Sure, I spent a lot of time volunteering and being nice to people, but I now wonder if I did that because I felt compelled to, or if I did it in order to "avoid" being a bad person. I have a theory that the way religions have traditionally been practiced runs counter to this Effective Altruism idea of "doing good," focusing almost exclusively on "not doing bad." Doing good for others, in most religions, is placed lower in the hierarchy than worship and avoiding sin. The ideal Christian, or Muslim, or Buddhist, is one without temptations, who has control over his thoughts and actions, and could sit in deep prayer for hours, talking directly to God. Sure, there are some rare examples that differ from this, as the Mother Teresas of the world have shown. These people, in my estimation, are the true heroes. Sure, you can live your life as another Desert Father who sits in a room and meditates all day. Sure, you can be totally without temptation, without impure thoughts, and never lie, cheat, or steal. But if you don't do anything for other people, if you don't contribute positively to the world, if all you do is sit in a room full of silence and purity, what was the point of having you here?

Wednesday, September 20, 2023

Too Many Things We Want

    I visited Vietnam recently. A beautiful country, filled with phenomenal food, wonderful people, and a much different political system. When Americans think of the word "Communism," they think of the totalitarian regimes of the Soviet Union and China, and all of the evils perpetrated by such regimes. Visiting Vietnam was no different than visiting any other country, as the human experience anywhere in the world is broadly similar. Some people live in cities, some people live in the country, and most people are friendly and good. The economic and political system that someone lives under may mean nothing for their day-to-day lives, and in the vast majority of the world this appears anecdotally true. Still, these are important decisions. The amount of government interference, on the scale from laissez-faire capitalism to full-blown authoritarianism, does actually matter. In most historical cases, communist societies fall further towards the latter over time: power consolidates, and decentralization becomes harder and harder (without an uprising).

    I've written extensively in my book blog about the political and economic books I read during my trip to and from Southeast Asia. Between Marx, Lenin, Hayek, and Mises, I think I got a pretty good handle on which types of economic systems work well and which types of political systems lead to the repression of human freedom. This post, however, is going to be about something a little different.

    Something that I've noticed in my adult life is that the capitalistic system, and the technological progress it brings, is really, really good at giving people what they want. The problem is, it may be too good. It is so good at giving us what we want that it borders on exploitation. Alcohol and drugs are essentially a brain hack. The heroin addict actually does want heroin; the problem is that it is not a detached, rational want. It is not a long-term want, it is a short-term want. Social media is somewhat similar. I would rather not watch three hours of YouTube a day, but clicking certain links that a really optimized algorithm has crafted for me is nearly impossible to resist. People love TikTok because it uses a data-driven approach to keep them engaged and "happy." Sure, some people like myself have completely cut out social media, but it is clearly something that we "want." Materialism, an endless array of products, faster cars, better phones, near-instant packages: all things that we actually want. The point of regulation, as seen with drugs, is to step in when things we want (like heroin) are bad for society as a whole. The freedom of pursuing something (individuality) is outweighed by the greater good of society (collectivism). This is a very, very important lens for thinking through Effective Altruism.

    Most EA members have a collectivist lens, as pursuing what we want individually (research fame, money, power) can have a horrific effect on society (existential risk, etc.). Sure, publishing research on vaccine-resistant smallpox may bring you money and fame. However, not only should you not do so, but you probably shouldn't be allowed to do so, for the "greater good." This is where the state steps in, and where things get dicey. What about the things that the state wants? How do we have a check on that? What if the state is the one building the super virus? These are the types of trade-offs that we need to think through, and why I think it is so important to take a step back sometimes from the "collectivist" lens. Sure, Vietnam was amazing, and my visit to China ten years ago was incredible. But people are scared in both places, terrified of the Chinese Communist Party. Maybe we do need to appeal to authority to curb the destruction that can be caused by individual wants, but we need to be just as careful to retain the right to curb the wants of a collective authority. Forgetting this would be a costly mistake.

Thursday, September 7, 2023

Are Political Donations Worthless?

     If you were going to try to optimize your donations in a bang-for-buck fashion in order to have a positive impact on the world, how much would you donate to politicians? From an Effective Altruist point of view, political donations are likely worthless. The amount of money in American politics is staggering, and the number of voices shouting over each other online and in person is just as overwhelming. Anyone who has argued about politics in an online forum can attest to the difficulty of changing another's mind on any issue. This comes down more to ideology than anything. Also, voting in the presidential election is generally worthless, due to the electoral college but also due to the fact that there are hundreds of millions of voters. You should still do it, civic duty and all, but we all know the odds. Even if you contribute the average American salary, or a hundred times that, to most political campaigns, you are not going to move the needle. In a solid blue or a solid red state, this is even more true. Additionally, the system is winner-take-all. If you donate to cancer research, maybe you have an impact. If you donate an additional thousand dollars to a candidate, and they lose, where is the impact? Given all these considerations, should we give up on politics? What if you love a certain politician or hate another? What if you are pretty certain that a certain presidential candidate would contribute extremely negatively to the nation or the future of the human race?

    It is a well-known fact that local politics play a much larger role in American life than national politics. Sure, we love to argue about national issues, but the local stuff is what affects your day-to-day. How are the roads? How is the crime? How well run is the school district your kids go to? These local races have much less money involved, and a single vote counts for far more than in a national election, so getting involved (or donating) at the local level could have a larger impact on your life. But is it in any way comparable to funding de-worming medication or malaria nets? No, probably not. Still, everyone has to have their own "asset allocation" when it comes to donations, and if some slice (let's say 20%) has to go to politicians you like so that you feel good and keep donating to effective causes, all the better. Personally, I would never give a cent to a political candidate. I am pretty politically passionate, but I simply believe there are better uses for my money. However, I do believe that advocacy is severely underrated. Calling your congressmen, writing your local representative, starting petitions, etc., are all massively more impactful than voting in any election. This is backed somewhat by intuition but also by real-world anecdotes. I've found my willingness to aggressively send emails and make phone calls to be pretty politically persuasive, especially at the local level. Making your voice heard through your vote isn't easy, so you might as well shout.

Saturday, July 22, 2023

Asymmetric Evil

     One of the constant themes in my Losing Money Effectively writing has been the idea of asymmetry. The asymmetric payoff from playing defense is a common one: if you prevent a nuclear war, probably no one knows and you don't get billions of dollars, but if you develop AGI or some other groundbreaking (and dangerous) technology, you become one of the richest and most powerful people in history. In a similar vein, I was recently thinking about the potential that people have to cause great societal harm. If you live your life to the fullest, you may be able to provide enough utility to others to equate to a dozen lives. Maybe you have children, treat them really well, are nice to your coworkers, are a great husband/wife, and donate quite a bit of money to charity. Barring some exceptional luck, you probably won't be a billionaire or famous, and thus your sphere of influence is likely to remain small. If you aren't born into a wealthy family, even with a perfect work ethic you are unlikely to reach a high enough level of status to cause large levels of change.

    Unfortunately, being a good person doesn't change society, except at the margins. Being a bad person, in contrast, can have a really, really negative impact. If you engineer a pandemic, or shoot up a public area, or assassinate the right person, you can cause quite a bit of harm over a large sphere of influence. Maybe you shoot JFK, but if you want to cause real long-term human suffering for thousands or even millions, shoot Lincoln or Archduke Franz Ferdinand. A motivated terrorist can kill quite a few people, and a small group proved in 2001 that with simple planning you can kill thousands and spark a war that kills hundreds of thousands. Nineteen terrorists, a hundred thousand deaths. There aren't many nineteen-person nonprofits that save hundreds of thousands of lives on their own.

    This is, of course, a massive problem. In a lot of ways, human society is built on trust. Trust that the overwhelming majority (99.999999% of people) are not evil, or at least not both smart and evil. The data seems to back this up for the most part, as I don't live in constant fear whenever I go to a concert or a public park. Sure, the fear may be there, but for the most part it is irrational. Still, I think this concept of asymmetric evil is a very understaffed problem. To prevent mass shootings, there are advocates for gun control (which I support), but increased anti-terrorism efforts often come with a decrease in human freedom. It's hard to be a serial killer in an authoritarian regime that tracks your every move, but that does not mean I'd trade aggregate human freedom for a world with a handful fewer serial killers. Also, we saw with the Patriot Act that a lot of the time these "safety" measures actually do more harm than good.

    This is an important concern of mine, and I do think we could do a few things better. First, we should in no way, shape, or form entrust sensitive technological information that could lead to mass deaths to the public domain. If the government finds out how to create a super virus, it should not open source that information. This seems obvious, but for some reason (looking at you, Australian mousepox researchers), it has to be said. Next, we shouldn't trust any individual or small group of individuals with massive amounts of power. Any weapons of mass destruction plans should have the required redundancy attached, lest we find ourselves in the world of Dr. Strangelove. Third, we should be very cognizant of how fragile society is. There are probably reasonably easy ways to trigger a societal collapse (a financial meltdown, killing the right world leader and blaming a different world power), so we should be extremely diligent when building institutions and planning for the worst-case scenario. In the meantime, we should acknowledge that our "good person" impact will likely be small and continue to stay the course anyway.

Friday, July 21, 2023

Sacrifice vs. Uncertainty

     For some reason, in my fiction writing at least, I can't stop writing about the idea of sacrifice. Maybe it is my Irish Catholic upbringing, or maybe it is the fact that I've read a lot of Cioran, but regardless, the idea intrigues me endlessly. Here is my question:

    Would you take a bullet for democracy in the United States? You get shot in the chest, you may or may not live.

    If you don't take the bullet, the US becomes a dictatorship of North Korean proportions. If you jump in front of the bullet, nothing changes in the US. There are plenty of interesting variations on this question. Maybe you throw in that democracy falls in 100 years, so you and your family will be fine. Or you change the prompt to reflect some other cause. Well, many American soldiers have lost their lives fighting for certain American ideals, democracy being the foremost. Probably the coolest part of the US is our history of standing up against tyranny, and taking a look at the world stage shows us as mostly alone. It's pretty crazy to me that the American experiment actually worked, and I don't see any obvious civilizational collapse on the horizon despite what the media says. I'm taking the bullet. Now, what I find actually interesting about this question is the introduction of uncertainty. If you set the odds on either side at 95% or 5%, you can probably get some wildly inconsistent and interesting answers.

    Would you take a 5% chance of dying to prevent a 95% chance of democracy collapsing? Would you take a 95% chance of dying to prevent a 5% chance of democracy collapsing? What about 50% and 50%?

    Now, shift those numbers on each side until you get something noteworthy. Personally, I think introducing odds into anything causes humans to lock up and focus on self-preservation. I doubt many people would take their own life to prevent a 40% chance of a family member dying, even if we traditionally valued that family member's life at multiples of our own. This is one of the problems with altruism, and one of the problems with effective donations. Basically, the problem is we don't really know what is going to happen. Even if we are pretty sure our donation money will go to cure someone of a preventable disease, maybe that leads to knock-on effects (they grow up and become bad, overpopulation, the money is stolen and used to buy weapons). Even if the odds of such bad outcomes are extremely low, we become extremely averse to donating. Maybe I want to donate to AI alignment research, but there is some low probability I make the problem worse. In fact, I really have no idea what will make the problem worse and what will make the problem better. Even if the odds of the money being useful are 80%, that 20% scares me to an irrational level.
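
    To make the lock-up concrete, here is a minimal expected-value sketch. Every number in it is an assumption lifted from the thought experiment above (including the "multiples of our own" valuation), not a real estimate:

```python
# Expected-loss comparison for the sacrifice question (illustrative numbers only).
# Assumption: a family member's life is valued at 3x your own, per the
# "multiples of our own" framing above.

value_of_own_life = 1.0
value_of_family_member = 3.0   # assumed multiple of your own life's value

p_you_die = 1.0                # taking your own life: certain death
p_family_member_dies = 0.40    # the 40% chance from the thought experiment

expected_loss_if_you_act = p_you_die * value_of_own_life                   # 1.0
expected_loss_if_you_wait = p_family_member_dies * value_of_family_member  # 1.2

# The expected-value ledger says to act (1.0 < 1.2), yet almost no one would.
print(expected_loss_if_you_act < expected_loss_if_you_wait)  # True
```

    On these assumed numbers the math favors the sacrifice, which is exactly the point: the uncertainty, not the arithmetic, is what makes us lock up.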

    What does this mean? I think it means that research into how effective certain causes are is actually extremely useful. Removing the uncertainty regarding charitable causes might actually be the most impactful contribution of EA, because by doing so we can finally convince people to make a small sacrifice.

Love and Intelligence

    We assume that animals don't really love each other. They partner up out of instinct, and their sexual activities are driven by instinct. Partnership and even family is an animalistic urge, put widely on display across the animal kingdom. Why does a mother lion protect her cubs? Does she actually love them, or is that just an instinct drilled in by millions of years of evolution? These are the thoughts that keep me up at night. Just kidding. But I do wonder if there's actually some sort of spectrum here. If I see a beetle mother taking care of her young, I assume it's all instinct. If I see a female ape taking care of her young, I see more than a glimmer of affection. Maybe this is all anthropomorphizing, but if it is even just a little bit more than that, it is possible that our capacity to love scales directly with either intelligence or sentience.

    Sure, skeptics would just say that love isn't even a real "thing" among humans; it is just another animalistic instinct driven in by evolution, no different than in the beetle or fruit fly. Maybe at a certain intelligence level you realize this, and are no longer able to love. A horseshoe theory, where most humans are simply in the sweet spot. Once you pass a certain intelligence threshold or become sufficiently self-aware, you realize that everything is meaningless and predetermined and love is impossible. Maybe. Maybe love requires a willful suspension of disbelief, but maybe we need to do a better job of separating out lust and sex from this discussion. Love could be an intellectual feat, rather than a physical or spiritual one. Maybe the best marriages and the perfect partnerships could become deeper and more beautiful if each party understood the other on a more fundamental level. Perhaps it is the absence of this understanding that gets in the way of the empathy and compassion really needed for a deeper level of appreciation and respect. I imagine that superintelligent AIs, of the movie Her variety, will be able to form some incredibly deep connections. This is based solely on intuition, but it is quite a pleasant thing to believe.

Saturday, July 8, 2023

Animal Rights

    Martin McDonagh's film "The Banshees of Inisherin" was my favorite film in 2022. I re-watched it recently, and I was struck by a certain piece of dialogue between one of the main characters, Colm, and a priest:

     Priest: "Do you think God gives a damn about miniature donkeys, Colm?"

    Colm: "I fear he doesn't. And I fear that's where it's all gone wrong."

    Animal rights are a very complex issue, one that I've avoided writing about because I'm not sure exactly where I stand. In Animal Liberation, Peter Singer made a pretty compelling case for vegetarianism. Before we get into what is moral to eat, we first have to solve a hard problem: what makes humans special? In my opinion, there is a flavor of sentience/consciousness that humans have that is extremely valuable. We are by far the most intelligent species, with a level of self-awareness and decision-making ability far beyond that of other animals. We are the only animal that can contemplate philosophy or wade in existential angst. Our capacity for language and problem solving has allowed us to traverse oceans and explore the stars. Other living creatures on Earth, for the most part, are really dumb. If you spend ten minutes alone with a chicken, you will realize that there is quite simply not a lot going on upstairs. Yeah, we probably shouldn't torture the thing, but if I had to kill a thousand chickens or a random guy, I'd feel pretty confident in my decision.

    Also, the life of an animal in the wild is not ideal. Evolution is a mostly random process full of cruelty and waste, and most animals die horrifying deaths. Prey are hunted constantly by predators, and predators are constantly at risk of starvation. Starvation, disease, and getting eaten are more than commonplace in the wild, whereas on farms most animals only have to worry about the last one. Well, maybe some have to worry about living out a miserable life in what is essentially a cramped prison cell where they are kept until slaughter, and that is actually a good point. Here is my personal dilemma: I am confident that I can eat an oyster, and I am confident that we shouldn't eat chimpanzees. Distinct animal species basically either have rights or they don't, and it's weird that society is so logically inconsistent about this (if you eat a dog you are horrible, but go on and kill billions of pigs and that's fine). There is a large spectrum of intelligence from oyster to chimp, and the effective altruists probably draw the line for what we can slaughter too far down. But 97% of humanity draws the line too high, and it's probably better to be safe, especially given how little it costs us. But I find it hard to criticize too harshly whenever I actually see a chicken.

Wednesday, July 5, 2023

Altruism for the Rich

    Arthur Schopenhauer says that there are two options in life: pain or boredom. The poor deal with pain, the rich deal with boredom. It has been argued that effective altruism is religion for the rich. A way for rich people to feel good about their lives. A way to avoid boredom. A way to donate a small chunk of one's relative wealth and then brag at a dinner party that you "have saved hundreds of lives," all the while treating your employees horribly and scamming customers. This cynical take is fairly common, as many see service work as inherently selfish. Unfortunately, we often miss the obvious. We miss the fact that most people are good, and helping others is not a zero-sum game. If helping others makes you feel good, and thus you help others, you are adding value to the world. I don't really care if Bill Gates donates billions of dollars to charity because he truly cares about the world, or if he is donating in order to build a legacy and help his friends. Fewer children are dying either way. Sure, I hope that everyone eventually migrates to genuinely altruistic intentions, free of any second-order selfish reasons. But honestly, that is too much to ask in the beginning.

    Social impact is a marathon, not a sprint. If we get bogged down in attacking anyone who tries to do good for "selfish motivations," if we hold everyone to a standard of perfection, we'll lose out on a lot of potential helpers. We miss the forest for the trees, and ignore the fact that the overwhelming majority of people donate essentially nothing. Let's not demand piety, let's just keep each other honest and try to change the world over time. Let's bring people in, not shut people out by gatekeeping. Let's learn from the mistakes of millions of other social movements, and embrace a culture of inclusivity.

Automation Knocks

     A few months ago, I got a letter in the mail. It was a marketing letter, but handwritten on a folded-up piece of notebook paper. I thought to myself: wow, I am so much more incentivized to read a marketing letter written with real ink than one of those obviously fake letters printed in a "handwritten" font. Then I realized that we are in the era of generative AI. AI that can not only come up with customized marketing text for each individual, but that can replicate human handwriting given loads of training data. So, if you connected a robot arm to a pen, you could have the arm write very convincing letters in actual ink. These letters would be indistinguishable from human writing, and a collection of robot arms running the same software could write thousands of "handwritten" letters a day. Well, that is quite the business idea. I am sure that millions will be made by the first company to implement such systems, as every marketing guru knows the historic power of the handwritten note. On cue, another layer of human personality will die.

    One of my coworkers asked me for a business idea that he could use for an MBA class project. I looked through my list of notes, and pitched him on this generative AI letter business. It seems to have taken off, with his MBA class now being supremely interested in the idea. The class will be working through the business plan and potentially pitching the idea further up the chain. I might not create the eventual monster, but maybe I pushed it a year or two ahead of schedule.

    Let's take this idea to fruition. In ten years, campaign volunteers are obsolete. Who needs dozens of volunteers to write campaign letters when you can pay a company extremely small sums of money for thousands of equivalent letters? When generative AI starts picking up the phone, grassroots campaigns are essentially dead. The world moves further towards consolidation, with the owners of capital making use of scalable systems to win votes and increase their power. When automation knocks, perhaps we shouldn't answer. Maybe it is the case that when I have a business idea, I should keep it to myself.

Tuesday, June 20, 2023

The Mediocrity Principle

     Humans often believe that they are at the center of the universe. Disagreeing with this, scientifically speaking, has landed quite a few scientists in trouble. However, essentially all modern astronomers and cosmologists wind up agreeing with the Copernican Principle, which states that the Earth does not occupy a special place in the universe. We are not the center of the universe, and not even the center of our galaxy (gasp!). An extension of this principle is the Mediocrity Principle, the idea that there is nothing special at all about the Earth or humans. In fact, we are probably just par for the course in the universe. We can assume that what is happening here is happening elsewhere (probably even life and intelligent life). This is just an assumption, but a pretty powerful one. It seems science has trended in this direction, not just in cosmology but also in biology (evolution says we are basically just advanced monkeys).

    There is a big problem with this principle: it is quite depressing. We want to think that we are special. We want to strive for important causes and have an impact. We do not want to be forgotten. When we look up at the night sky, it fills us with existential dread to realize that there are more stars in the universe than there are grains of sand on Earth. And a single beach has an incredible amount of sand. The next time you are on a beach, run your fingers through the sand. Imagine the worlds that could exist out there if there are really that many stars. Then, wonder if you are really the most important species, the "chosen" race. Probably not. Seems a bit ridiculous. But, maybe? We haven't seen any evidence of extraterrestrial life, and the Fermi Paradox is quite complex (why don't we see any evidence of aliens, given that we probably should?). Maybe sentient life is extremely rare, or maybe we are the only conscious beings in the universe. This could be important, as without us the universe might just be lifeless balls of gas and rock. We might just be mediocre, or we might be the only thing that matters.

    The Mediocrity Principle has rung true up until this point in history. We don't seem particularly special. Whatever we do with nukes, or AI, or pandemics, it doesn't matter much. We could all die out in a flash, and intelligent life elsewhere will probably live on. Perhaps they are better than us, more empathetic and moral. Maybe we would just get in their way. Whatever life force there is in the universe, it doesn't end with us. But, what if it does? Maybe it does? Until there is evidence to the contrary, we have an enormous responsibility. A burden, perhaps. To survive and thrive, to spread among the stars and act as observers to a beautiful universe. Beautiful because of our eyes.

Monday, June 19, 2023

Suffering Risk

    As a longtermist, I care a lot about humanity's potential. I want things to go very well for us: human expansion across the galaxy/universe, free societies full of peace and love, advancements in technology that reduce suffering and death, etc. I do not want things to go very wrong: humanity dies out due to AI/nuclear war/genetically engineered pandemics, authoritarian governments use technology to enslave or repress large swaths of humanity, AI simulates billions of digital minds and puts them in a virtual hellscape, etc. Most people probably agree with this sentiment; they just don't think much about it, and they don't live their lives differently even if they agree.

    A lot of longtermists care a great deal about existential risk, called X-risk for short. Existential risk is the risk that humanity dies out (nukes, pandemics, AI). A different risk is suffering risk, called S-risk. Suffering risk is the risk that humans are stuck in place (an authoritarian government takes over and stops progress), or that humans are tortured forever (AI simulates digital minds and tortures them relentlessly, or enslaves humanity and tortures us in "real life"). 80,000 Hours estimates that there are fewer than fifty people in the world who are thinking through how to reduce S-risk. Again, it seems pretty weird that we live in a world of eight billion people where fewer than fifty are seriously concerned about the very real possibility of worldwide enslavement. For the most part, I don't see any way towards S-risk that isn't agent-driven. What I mean is that only through massive advancements in technology would this sort of grand-scale suffering be possible. We generally think of technology as good, but as Tim Urban writes: “technology is a multiplier of both good and bad. More technology means better good times, but it also means badder bad times.”

    Only through advanced technology do I see such potential for mass suffering. Sure, maybe aliens descend from the sky and torture the species for millennia, but, contrary to what Independence Day would lead you to believe, we probably can't do anything about that. We can, however, massively influence the types of technology that are developed. We can put in place extremely forward-looking safeguards. S-risk is really the main reason I stick so closely to the topic of artificial intelligence. Nukes and pandemics are bad (and I'm sure some authoritarian government could use these to blackmail their citizens), but all you have to do to see S-risk in action is to watch The Matrix one time. Obviously, I doubt robots start using humans for batteries (have they heard about nuclear power?). But in many ways, the paperclip maximizer is the least scary robot.

Tuesday, June 13, 2023

Lie to Me

     In a perfect world, lawyers do not defend the guilty. We know who the guilty are, and they are adequately punished. That world may not be impossible for much longer. The more data we collect about an individual, the more we know about them. If you had a camera trained on O. J. Simpson for his entire life, you would know that he was a murderer. If you were a superintelligent AI system, you would likely not have to try that hard to become the world's greatest detective (sorry, Batman!) and convict O. J. of murder. Maybe there are actual lie detection techniques that certain AI systems will be good at, but even just by combing through massive amounts of data and using simple inference, I am sure that the policing systems available in the future will be extremely powerful. Powerful enough to trust, and powerful enough to do away with the current "jury of your peers" legal system. Now, there is a trade-off to this, the same trade-off we always face: safety vs. human rights.

    Authoritarian regimes focus on safety. Not the safety of their citizens, but the safety of their regime. They would want to know when a citizen was lying: "no, I wasn't at the protest last night." They would not want their citizens to have use of this technology: "hey, did you see that Robot 3000 proved that Xi Jinping was lying last night about the Uyghurs?" Use of advanced lie detection in the legal system will certainly change human interaction. Fooling modern-day lie detector machines is a bit of an ironic art, because polygraphs have been shown to not work in any sort of reliable fashion. What about in the future? If your dog knocks over a vase, and then tries to fool you into thinking it happened on its own, would you believe it? We can see right through the attempts of animals to deflect blame or straight up lie. They simply do not have the required mental capacity to string together a convincing argument. That may be us in the future, trying to convince our technocratic overlords of our innocence. It does matter if we turn a 10% wrongful conviction rate into 0%, and a 90% rightful conviction rate into 100%. What matters much more is what the sentencing requirements are for breaking the law. What matters most is who is writing the laws.

Friday, May 26, 2023

Batman

    From a utilitarian point of view, Batman is the dumbest superhero. After the Joker kills dozens or hundreds of people, Batman tracks him down, beats him up, and sticks him in prison. A prison system that doesn't work, where the Joker always escapes. Then, the Joker kills dozens, hundreds, or billions (in some comics) more. Repeat ad infinitum. This goes for not just the Joker, but for every criminal in Gotham. While this makes for a good storyline (where you don't have to introduce a new bad guy every comic), it is a bit ridiculous. Good thing Batman didn't kill that grunt for stealing a purse, he only pushed him off a building and paralyzed him for life. Now, comic book stans will disagree, with some derivative of "wow, it is so cool how morally complex Bruce is, he can't kill because he knows that if he does he won't be able to stop." Other people like The Punisher. Common movie trope: Evil Guy orchestrates a horrific attack on innocent people that kills hundreds. Good Guy fights his way through dozens of low-level bad guys (often brutally killing them) in order to get to the Evil Guy. Good Guy beats Evil Guy and has a chance to kill him. In an act of heroism, Good Guy refuses. Then either the Evil Guy dies anyway or he is locked away (where he often later escapes and kills more people). That is Hollywood, folks.

    I don't want to live in a world where vigilante justice reigns. In fact, I didn't make this post to lament about superheroes. I wanted to prompt a thought experiment: if you made it your life's mission to decrease the amount of gang violence in the city of Chicago, how much could you accomplish? You could spend every minute calling and emailing politicians, or you could try to run for office. You could work very hard in a high-paying profession and donate 50% of your salary to organizations that you believe would decrease crime. What would you do? Think about it for a minute. Whatever it is, I would guess that you would be unlikely to make a big impact. Ken Griffin, billionaire hedge fund founder, couldn't make a dent. How much of an impact could a university make? The University of Chicago, one of the most prestigious universities in the world, can't even protect its students. What could you, personally, hope to accomplish? Without billionaire status or near-superhuman training, you couldn't even be an effective vigilante (Batman or Punisher). Plus, I doubt even an effective vigilante could make a dent in a city of almost three million people. Would a real-world Batman fix Chicago? What if he killed people? In both cases, this is a pretty clear "no." Even if Ken Griffin put on the suit and started blasting, I'm pretty sure things would not get any better. Crime is complex. Selling drugs is lucrative. Guns are widely available. Many gang-related shootings are a form of vigilante justice, payback for a previous killing. There are over 100,000 gang members in Chicago. You are just one person. This isn't meant to be a depressing question. Just because your impact might be small doesn't mean that it is worthless. We should still strive for a safer world, regardless of the difficulty. There are no recurring villains in the real world. No easy answers (hey, Batman should just kill people). There are more effective ways to have an impact, but that impact is destined to be small.

Tuesday, May 23, 2023

The Trade You Make

     There are plenty of existential risks. AI, nuclear war, and genetically engineered pandemics are the most likely, and all are pretty horrifying. I was on a run today, and I wondered exactly why I have been stressing so much about these risks. Spending years reading and writing about these topics has made me a more anxious person, hesitant to bring children into a world of potential suffering on a mass scale. Even if there is only a 10% chance humanity gets wiped out this century, the year of that extinction could clearly be a horrible time to endure. I would die, my family would all die, my kids would all die, my wife would die, all of my friends would die, and everyone I've ever met would die. This is very clearly the most depressing outcome imaginable. However, the more I thought about it, the more I realized: this is the trade you make. This is what it means to be human. This is how every human has lived throughout history: in a period of extreme violence and uncertainty.

    If you lived in Germany many hundreds of years ago, you were probably pretty worried that a group of barbarians would raid your cabin, steal your wife, and kill you and your children. If you lived in Egypt thousands of years ago, you were probably a slave, and your family could have been killed on a whim by a despot. In fact, in the majority of societies, the majority of people were placed completely at the mercy of authoritarian rulers or warring overlords. The life of a father or mother caught in a current African civil war is par for the course; it is modern life in the West that is the exception. Or is it? Just because we ignore the prospect of nuclear war doesn't mean we don't constantly sit at the brink. Just because we don't read up on engineered pandemics doesn't mean a modified strain of smallpox won't run through New York City and kill 95% of the population. Nothing has changed. Humans have always faced individual and group extinction; we are just moving the boundaries a little more.

    Looking at ex-risks, the only truly scary one from an existential standpoint is AI. Not because it will kill us all (bioweapons and nukes can do that too), but because it could make us live near-forever in a pit of tremendous suffering. The good news is, this outcome is not likely, and the future of AI is basically entirely uncertain. It is also the only one of the three main ex-risks whose development could lead to something resembling a utopia. Regardless, looking too far towards the long-term future of humanity can distract from the present. The stresses of current life, and the existential struggle we personally have with accepting death, are really nothing new. There were plenty of brave men and women who chose to end their lives out of circumstance or out of fear of some circumstance (they heard the barbarians charging over the hill). We should not discount these decisions. But we should also realize that plenty of people stuck it out. Maybe for some pre-programmed survival instinct, or maybe for some other reason. Not only are things pretty good in the world right now on an absolute basis (very, very few people live every day in perpetual bouts of fear and suffering), but things on a relative basis are getting better every year. The long-term trajectory of the human race is positive, perhaps in an exponential fashion. Perhaps not. Personally, I plan to stick around long enough to find out.

Monday, May 22, 2023

Political Donations

    Effective Altruism as a community has more or less agreed to stay out of politics. Most of the community is probably very liberal, but there is still some large variance within it. I am still unconvinced that voting is important. William MacAskill argues that on an expected-return basis, voting is very important. He argues that even if there is only a 0.00000001% chance that your vote swings the presidential election, the value of a good outcome could be so great (worth $1,000,000,000,000 or something, due to averting ex-risk or allocating foreign aid or some other thing) that it is worth it to vote. In reality, your vote has essentially a 0% chance of mattering on a national scale (regional elections are probably pretty important and impactful), and the difference between politicians is probably overstated. There is a lot of inertia, bureaucracy, infrastructure, and legal precedent in place that ties the hands of even radical politicians here in the U.S. In foreign countries, elections are either similarly low-impact or completely rigged. However, with political donations you may be able to actually swing votes.
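
    Spelled out, MacAskill's argument is one line of expected-value arithmetic. Here is a minimal sketch using the made-up numbers from the paragraph above (the $20 opportunity cost of voting is my own added assumption):

```python
# Expected value of a single vote, per the MacAskill-style argument above.
p_swing = 1e-10           # the 0.00000001% chance your vote decides the election
value_if_decisive = 1e12  # the ~$1,000,000,000,000 of value from the better outcome
cost_of_voting = 20       # assumed opportunity cost of an hour at the polls, in dollars

expected_value = p_swing * value_if_decisive  # 1e-10 * 1e12 = $100
print(f"Expected value of voting: ${expected_value:,.0f}")  # $100
print(f"Worth it? {expected_value > cost_of_voting}")       # True
```

    On these inputs the arithmetic says to vote; the disagreement in this post is over the inputs themselves, since if the true swing probability is effectively zero, so is the product.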

    If you are a billionaire such as Michael Bloomberg, you could bankroll candidates or pay for millions of targeted political advertisements. Through the power of absurd wealth, an individual can actually make an impact. Whether this is actually an effective use of money (I would guess the significant majority of political donations are entirely wasted and ineffective), I do not have the data to say. I would guess malaria nets are much cheaper and more impactful than spending a million dollars on a one-minute political ad for Biden that is unlikely to sway aggregate voters a single basis point in the polls. One of the problems is that there is already a massive amount of money in politics, so the marginal value of another million dollars is likely to be zero. For this reason, I would argue against political donations of all kinds. If one candidate has pledged to start an unjust nuclear war, assassination or voter suppression would be much more effective ways to spend your time. Given that this is rarely the case (and also, I think democracy generally works well; I doubt the American public would knowingly elect such a candidate), we should stick to effective charities. To me, donating to political parties is likely one of the most useless uses of your funds, slightly more useless than donating to universities and slightly more useful than setting all of your cash ablaze.

Friday, May 19, 2023

Nihilism

     Utilitarianism is for nihilists who are too cowardly to admit they are nihilists. At least, most of the time. The idea that nothing matters is scary. It follows that everything is permissible, and that there is no objective right and wrong. Living your life according to this is basically impossible (or psychopathic), and only ignorance will avoid a life full of existential dread. So, people turn to some other sort of moral system, despite the lack of any objective backing. Semantic tricks and empty ideas such as "the meaning of life is to give life meaning" and "life has no meaning, but your life is meaningful because of you" are philosophically useless. Either things matter or they don't; there is no in-between. Either suffering is a bad thing (or good), or it is neutral. These things by definition have to be binary. If we are animals made of meat and bones and there is no god, do we have any moral obligations? Is killing an innocent child for no reason other than sheer amusement really, actually, truly wrong? Is it objectively morally wrong? Or is it only "wrong" because of our current culture and upbringing? Wrong only to the level where, if they were releasing and hunting innocent children for fun in the Middle East, we would say "well, that's just their culture, we can't say for certain if what they are doing is bad." These questions matter. "Well, even if everything is meaningless, we don't want to talk about it much because then people might do bad things" is a circular argument. What do you mean by "bad," and wouldn't it be useful to know where our moral obligations lie (or if there are any)?

    I have mentioned my version of Pascal's wager before. I call it Pascal's modified wager. If there is a chance that nothing matters and a chance that something matters, you should probably live your life believing that something matters. If you are wrong, it doesn't matter anyway. But if things actually matter and you ignore them, you could make some pretty sizeable moral mistakes. Proof of this wager is basically impossible, as it is prone to counterarguments similar to those against the religious version (well, how do you decide what matters? Says who? What if hunting young children is morally good? Prove it!). This is why it's called a "wager" and not a proof. It makes sense to me that suffering is bad. I am going to start there. Hume's is-ought problem makes it impossible for me to thoroughly prove that suffering is bad, even though some have tried ("The Moral Landscape" by Sam Harris, for example). But I am fine with that, and of all the things that could be evil in the world (saving children from burning buildings, being nice to people, refusing to partake in genocide), I am going to assume that causing immense suffering is bad. Sue me. With that as a starting point, things can get tricky. Cue the thousands of utilitarian dilemmas. But hey, at least we have a starting point. If you have that, you should have the intelligence to navigate 90% of basic moral decisions (should I push the person in front of me onto the train tracks, even though no one will ever know, or should I not?).

    I love reading about pessimism and nihilism. Cioran and Ligotti are masterful writers who bestowed upon me years' worth of existential trauma. But through them I found Becker, and then Singer. And I decided to make a simple choice: to believe that suffering is bad. From that choice a million interesting and fulfilling opportunities arise. The motivation to help others and the desire to have an impact have made my life significantly better. I am still worried that the emperor has no clothes and that I should spend the next forty years at the cabaret. If on my deathbed I learn that this was the case, I doubt I will look back with regret.

Hot Take: Pandemics are Bad

    Ebola is terrifying. Any virus with a 90% kill rate that causes you to bleed out of your eyeballs and reduces your body to a heap of bloody mush is something worth freaking out about. As time has gone on, the world has become slightly less scared. Ebola never translated to a worldwide pandemic; it was simply too deadly. It killed people at a very high rate, meaning it didn't have enough time to spread to other humans before killing its host. I am currently reading "The Hot Zone," which successfully replaced my fear of dying a slow radiation-induced death with that of a pandemic that makes my skin fall off and causes me to cry tears of blood. One cool thing about being a longtermist is that you live a life in constant fear of horrifying deaths. Viruses are not sentient. They are not really alive or dead; they are more similar to machines that manage to reproduce. Even if they were alive and sentient, I am convinced that we should destroy them all with prejudice. Life forms that require the death of others to live on (viruses, parasites) may not have any other choice, but I do think there is some sort of libertarian non-aggression principle at work here.
    
    The Covid-19 pandemic was a wake-up call. The world witnessed firsthand how weak our current institutions are, and it was clear that a virus as transmissible as Covid would end up infecting a large portion of the human population no matter what protections a society put in place (even China's). If Covid had killed 10% of the people it infected instead of less than 1%, it would have been a big problem.

    A quick defense of the current human population. I hear people say things like "we need a new plague," "there are too many people, it's unsustainable," "even if a virus killed 90% of people we would bounce back, it's just a natural culling of the herd." Not only is this factually wrong (I used to believe overpopulation was a problem; it's actually underpopulation that is hurting most societies at the moment), but it fails to think through the after-effects. Killing 90% of the world may be better than killing 50% (since the remaining 10% will have many more per-capita food sources and more infrastructure, and society will probably collapse either way), but killing any significant number of the worldwide population would be horrible for humanity's prospects. It is likely the world would be fragmented into vastly decentralized groups of tribes or feudal lords, a world with thermonuclear warheads ripe for the taking.

    Pandemics are bad. Humanity has experienced some really nasty ones, and there are plenty that give me nightmares. Now realize that all of those were naturally occurring viruses, created randomly by a largely stupid process called evolution. Now realize it is becoming easier to modify these viruses to be exponentially more deadly and pervasive. Then think about mass shootings and suicide bombings, and wonder what the world would be like if those individuals had access to these viruses. Now try to sleep at night.

Thursday, May 18, 2023

Is Having Children Moral?

     My favorite philosopher of all time is Emil Cioran. Cioran was a Romanian philosopher who was a staunch pessimist, nihilist, and anti-natalist (meaning he was against procreation). It was through reading "The Trouble With Being Born" that I first seriously considered the idea that having kids could be morally wrong. This idea is hardly ever taken seriously. Personally, I have never been convinced that it is moral to have children. Not that it necessarily isn't moral, but rather I really don't know one way or the other. I've scoffed at others who claimed that they refused to have children because of climate change or some other half-baked ex-risk, but I have actually considered the pain and suffering that would be inflicted by bringing children into a world on the brink of nuclear war. Being born only to suffer a painful, radioactive death seems not very ideal. Would it be stupid to try for children during the Cuban missile crisis? There is probably some level at which I agree with anti-natalism. If you live in abject poverty where every week a group of soldiers stops by your house and beats you senseless for two hours, your kids are probably not going to have a good life. In that case, it is probably immoral to have children. Now, this is a much rarer case than you would think. It's easy to view abortion as permissible simply because "well, foster care is pretty hard and kids that are born into poverty probably turn to a life of violence." Not only is this far-fetched, but if it were the case, we should assume having children in poverty at all should be disallowed (if their lives are really so bad that it is better to never have lived). I seriously doubt it. The morality of abortion is solely dependent on when a clump of cells becomes equivalent to a human life (or at which point some level of moral significance is bestowed that outweighs other factors). This is a philosophical question with no easy answer.

    I view life as good and death as bad, and existence as better than non-existence. A lot of people say things like "death is what makes life worth living" or "death is an important part of life," but remember that humans are very good at rationalizing their situation. We don't need to do this. We should admit that all of us would choose to live longer, potentially indefinitely, until we decide on our own terms not to exist. We do not choose to be born. This makes having kids a moral dilemma, because they have no say over the matter. Although I love pessimistic philosophy, I live my life as an optimist. Human life at the current moment in history is very positive. Having children is totally reasonable, and if you raise them to be happy caretakers of others you will have done an immense service. I am extremely grateful that I was created, and most people I know are as well. If this changes due to some horrifying global developments, maybe we reconsider.

Friday, May 12, 2023

Wealth Inequality

    While I am generally in favor of free market capitalism, I do not agree with the assessment that we need to "lift everyone out of poverty and not worry about those at the top." This is a technocratic take that I have heard Silicon Valley gurus proclaim (Sam Altman included), and I think it is fundamentally misguided. Yes, capitalism is responsible for lifting billions of people out of poverty over the past few centuries, and it is clearly a great system for spurring technological growth and ensuring constant economic progress. Yes, bringing the world out of poverty should be our number one goal. If we are rapidly curing diseases, avoiding war, and broadly alleviating poverty, who cares about income inequality? If there is a large income divide between the rich and the poor, but the "poor" have amazing, fulfilling lives, then we probably shouldn't care much. Fairness should never be our guiding principle. If everyone was capped at earning $10,000 a year, the world would not be a better place. Dispersion in individual wealth is natural. People can take different sorts of jobs, work harder or not at all, and pass wealth to their children. People that take risks should be compensated. People that innovate should be compensated (in order to incentivize others). Now, before I seem too Ayn Randian, here comes the counterargument.

    Money = power. In a system where the top .01% controls a massive portion of the wealth, that small group of people has an outsized impact on others. Voting is driven in large part by campaign donations, and oftentimes the richest people have zero altruistic tendencies. If you own the company that owns the AGI, yeah, maybe you cure a lot of diseases and everyone has a universal basic income, but now you are the supreme ruler and the most powerful person on the planet. A quick point on financial returns. When you act ethically, you are sacrificing financial return. This is why "ESG" marketers are so focused on trying to prove that constraining your opportunity set to have an "impact" doesn't sacrifice financial returns. Everyone knows that this is false (spoiler: the sales departments are lying to you). Now, donations. Donating money decreases wealth, something a lot of rich people are allergic to. To make an impact, you have to lose money. This truth is unfortunate, but it will bring an immense amount of clarity to your life.

    The question is, how do we make people with money and power do good? How do we get them to lose money in effective ways (through greater taxes, greater donation incentives, greater guidance)? This is really the fundamental problem with capitalism. It is really good at building people's wealth, but it does nothing for guiding them towards the best ways to lose it.

Friday, April 21, 2023

We Should Ignore The Environment

     Yes, climate change is a problem, and humans are making it worse. But it is not going to kill us all, and it is not even going to kill most of us. Most countries are pushing for greater environmental regulations, and pretty much everyone is now aware of the issues at hand. The solutions are harder to discern, but there are now entire industries devoted to building a more sustainable future. Climate change is a problem, but there are simply more pressing existential risks at hand (AI, nuclear war, genetically engineered pandemics). Using investing terminology, I would say that climate change is overvalued and these others are massively undervalued. If you are passionate about the environment and plan to live your life campaigning and donating for that cause, that is perfectly acceptable. Positive impacts are positive impacts, and you deserve to follow your passion. However, those of us with less intense interest should focus our time and effort elsewhere.

    Still, from an effective altruist standpoint, how should we handle environmental issues? In my opinion, every environmental problem is simply an energy problem. Water quality and availability is an energy problem (via reverse osmosis, you can use energy to convert bad water into good water). Carbon in the atmosphere is an energy problem (the type of energy you use puts it in, and you can use energy to take it out). The world getting warmer or colder is an energy problem (massive amounts of energy can be used to heat and cool things). Thankfully, there is a carbon-neutral way to generate massive amounts of energy with minimal waste: nuclear energy. Anyone who claims to care about sustainable energy yet campaigns against nuclear is a legitimate fraud. The existence of so many irrational and regressive people astounds me, and because of my sheer rage at such irrationality I find it hard to even discuss the issue. For context, I have a shirt that says "Go Green, Go Nuclear" and am looking forward to the day when nuclear fusion catapults us forward into a post-scarcity society.

    Despite my provocative title, we should still care broadly about the environment. We should clean up trash, reduce carbon emissions, and advocate against companies that pollute. Most importantly, we should strongly campaign for nuclear energy and push forward with nuclear fusion research. To me, this is by far the most undervalued aspect of climate change prevention, whereas every other aspect is overvalued. We should do all of these things because we should care about the future of the planet and the future of the human race. Still, we should focus the bulk of our time on more important things.

Challenge: Think About Nuclear War Every Minute

    In 2022, I spent a significant portion of my time worrying about nuclear war. The Russian invasion of Ukraine was scary for everybody, but I estimated the likelihood of nukes flying at 10% throughout the crisis. It is really, really hard to live a normal life if you think there is a 10% chance that everyone you know dies that year. Was I wrong in my estimate? JFK said that he thought the probability of nuclear war during the Cuban missile crisis was between 33% and 50%. We now know that during that crisis it very nearly happened; thanks to Vasili Arkhipov, we scraped by. In 2022, the public simply couldn't have known about the close calls and near accidents behind the scenes. We simply had to read the rhetoric of Vladimir Putin and hope that the actions of the USA and NATO didn't back him into too deep of a corner. I think 10% was, in retrospect, a very fair estimate. Thankfully, that number has dropped quite a bit recently.

    In addition to being substantially freaked out all year, I also read a bunch of books about nuclear history and nuclear war. It is insane that we live our lives in spite of the terrifying fact that at every moment we are a button click away from near annihilation. The chapter about Hiroshima in "The Making of the Atomic Bomb" by Richard Rhodes is seriously the most horrifying thing I have ever read. Then I read about global nuclear policy and how terribly accident-prone and manipulation-prone our nuclear systems are, and I realized we are still balancing on the edge of a knife. There's not really a solution to this problem. Obviously, we should try to minimize the risk of nuclear accidents and try to disarm the worldwide "Doomsday machine," but what really keeps things in balance is mutually assured destruction. It sucks that decision theory combined with world-ending weaponry is the main cause of the current near-peace between global superpowers. Without these weapons, I doubt we would have made it this long without a far-reaching global conflict. We'll have to see how the world progresses with new technological weapons (such as AGI) that are winner-takes-all and not beholden to mutually assured destruction.

    I have been thinking a lot less about nuclear war this year, as that fear has been replaced by AI. However, I don't think we should take our eyes completely off the ball. Nuclear risk could very well be an ex-risk, and I think most people discount the probability due to survivorship bias. No self-interested nation will ever give up all its nuclear weapons, and true world peace is not around the corner. Unfortunately, we will have this risk hanging over our heads for the duration of our lives. This sucks, so we should probably do our best, with lobbying and donations, to make it marginally less likely that we and our families die horrific deaths.

Sunday, April 16, 2023

Allocating Donations

     We've established in "The Investment Mechanics of Giving" that treating donations as investments is very useful. When you invest, you expect to receive either dividends (or some other cash flow) or capital appreciation (being able to sell your investment for a higher price than you bought it for). With donations, the money is no longer yours, but the impact you have is similar to that of a dividend (there are positive effects over time, perhaps forever). We also touched on the idea that you should probably have a diversified donation portfolio, with a balance between less "risky" bets (such as global poverty/health, where you can be pretty sure you are doing good) and "riskier" bets (such as ex-risk, where you will likely have no impact but there is a small chance of massive impact). Since the money is no longer yours, one benefit of a donation portfolio is that it does not have to be rebalanced. The weighting you assign between the different causes only matters when you send the money to the charities. This allocation between different causes is a deeply personal decision, but there could be useful guidelines.

    I plan to have a series about each of the major Effective Altruism causes, and I will give my own opinions about how to structure your donation portfolio. Finance in the age of machines is tricky business: if you really believe AGI is near, you should probably structure both your investments and your donations very differently. If you are very concerned about ex-risk and want to donate, your question is probably as follows:

              “We’re probably all going to die. So where do I invest? Where should I donate? Can I have an impact?”

    I would urge caution with this line of thinking. "Doomsayers" have existed throughout history and they have all been wrong. Yes, we are probably at the highest level of existential risk in human history (aside from humanity's early days, when there were only a few thousand of us), but the timing of the end of the world is entirely unpredictable. Since we don't know how the world will end, it's hard to determine exactly when and where to donate and invest. So, still save for retirement, and you should probably still donate to non-existential-risk charities. I would use Toby Ord's "The Precipice" as a pretty good guide to existential risk probabilities. He estimates a 16% chance of existential catastrophe within the next 100 years, so you should probably live your life assuming that in over 80% of scenarios you make it to retirement. He estimates a 10% chance that AI kills us all and a 0.1% chance that climate change does, so we should probably focus more on AI. If you disagree with his numbers, that should inform your asset (donation) allocation. If you think the AI risk is actually 50%, it should probably make up the bulk of your donations. Obviously, feasibility and capacity constraints matter. I wonder if there would be demand for model portfolios, or example donation portfolios that you could model your own donations after (here is the percentage Toby Ord donates to global health, nuclear risk, etc.). Unfortunately, it is probably the case that not enough people donate for this to really be of use.
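    To make the allocation idea concrete, here is a minimal sketch, in Python, of what a risk-weighted donation allocation might look like. This is my own illustration, not a tool from "The Precipice": the AI and climate numbers are Ord's estimates as cited above, while the other two lines are placeholders you should replace with your own guesses.

```python
# A minimal sketch: turning subjective existential-risk estimates into
# donation weights within the ex-risk bucket. The AI and climate figures
# are Toby Ord's estimates cited above; the nuclear and pandemic figures
# are placeholders, not Ord's numbers.

risk_estimates = {
    "ai_alignment": 0.10,          # Ord's estimate (10%)
    "climate_change": 0.001,       # Ord's estimate (0.1%)
    "nuclear_war": 0.01,           # placeholder: your own guess
    "engineered_pandemics": 0.03,  # placeholder: your own guess
}

total = sum(risk_estimates.values())
weights = {cause: p / total for cause, p in risk_estimates.items()}

for cause, weight in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{cause:>22}: {weight:6.1%} of the ex-risk bucket")
```

    If you think AI risk is actually 50%, changing one number reweights everything else automatically, which is the whole point of treating this like asset allocation.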

Saturday, April 15, 2023

How Much Do You Owe the World?

     How do you determine how much to donate? This is a very complicated question. You should save for retirement, provide for your family, and save money for emergencies. Also, you should absolutely spend money on yourself and your happiness. Not just for some overly robotic reason such as "make sure you spend money on yourself so that you stay happy and continue to donate," but also because it is legitimately your right to spend money on yourself. It is not evil for you to dine at a nice restaurant, or spend a hundred dollars on roulette in Vegas, or buy a boat. Yes, you could probably instead donate that money to an effective charity. Yes, you could probably save someone's life with the money you spent on alcohol this year. But we need to be reasonable. You didn't cause the world's problems; you didn't create malaria, and you didn't invent death. Also, it's extremely hard to make a positive impact with donations, and there is absolutely no certainty that anything you do contributes positively to the world over the long term. Maybe the child you cure of malaria grows up to be a warlord and unleashes a pathogen on a rival community that kills thousands. Maybe the alignment research you contribute to leads to a faster timeline to ASI, when without this quickened pace humanity would have solved alignment. We need to understand how limited our information is, and we have to be comfortable making decisions under extreme uncertainty.

    While it is very unlikely that the drowning child you save grows up to be a serial killer, it is still possible. So, should you let the child drown? That would be ridiculous. It's very, very easy to live a selfish life. Maybe all the excuses you are making for walking past the drowning child (we don't really know what will happen, it's just natural selection preventing overpopulation, someone else will probably save him) are just that, excuses. I don't think you owe strangers your entire life. Read the book Strangers Drowning for a first-hand look at how miserable living your life entirely for others could make you. Still, we obviously owe something. I think that 10% of your income is a pretty good benchmark. We also want to make sure we are living enviable lives, and it's hard to convince others to follow you if you are living in poverty because of your donations. Look at the life of Jesus, and then look at the lives of typical Christian Americans. "It is easier for a camel to go through the eye of a needle than for a rich man to enter the kingdom of God." Has there ever been a more thoroughly ignored statement?

    We should be practical when designing our giving pledges. We want to leave room for humanity and avoid cold, hard calculation. Not only is a life of "give everything, even the shirt off your back, to the poor" unlikely to persuade others, it is also probably not morally required. You don't owe everything to the world, but you probably owe more than you are currently contributing. If you live your life believing this, you will probably wind up doing a lot of good.

The Investment Mechanics of Giving

     The investment mechanics of effective giving are complicated. How much to give depends mostly on your personal capacity constraints. For example, if you are saving up for medical school and are about to take out $300,000 in student loans, you should probably not donate anytime soon, and instead wait until you have paid back your loans. If you are planning to start a company that will have a big positive impact on the world, you should probably save up to ensure your company has adequate runway. Planning for children is especially hard, and given the cost of raising a child to 18 and funding their college education, it makes sense that so many people fail to donate. Better people than me have established why giving is so obvious and important (The Life You Can Save, Doing Good Better), but I don't necessarily think that 10% of your income is realistic in all circumstances. Sometimes it will probably be better and more effective to not donate for three years and then donate 20% of your income for five years. Timing matters, and there are no hard-and-fast rules when it comes to what to do with your money.

    We've established that you should donate, so now we should discuss what to donate to. There are two ways to structure your donation portfolio:

    1. Pick and choose a variety of causes that you would like to support

    2. Look at the broad level of donations across the board, and donate to the most underfunded cause

    Let's discuss the first option. Let's say that you care about nuclear risk, AI alignment, and global poverty. You don't really care that much about climate change, as a lot of institutions and governments seem aware of the issue and are working to combat it. Let's assume that you make a decent salary and donate a lot of it, and over the next 40 years you manage to donate $1 million in total. Nuclear risk and AI alignment are long shots, and we can think of them as risky investments. If you fund an AI alignment company, it is unlikely that you will have a direct impact. Given the complexity of the problem and all the unknowns associated with it (will AGI happen in our lifetime, will it rapidly progress to ASI, will additional funding make any sort of difference), your additional $1 million is unlikely to move the needle much. However, if your contribution happens to lead to some form of research that prevents unaligned AI or makes the ASI treat humanity better over the long-term future, you might have a massive impact. Nuclear war and other existential risks (ex: engineered pandemics) are probably also long shots, with a small likelihood of impact but a massive impact if they "hit." These are a bit like buying a lottery ticket, except the actual odds are completely unknowable.

    Global poverty, on the other hand, is a bit like buying a government bond. GiveWell, my favorite charity in the world, does a tremendous job of finding the most effective charities in the global poverty space. GiveWell measures impact rigorously, and you can be pretty sure that your donations are saving lives. There are probably some slightly "riskier" global poverty initiatives that aren't sanctioned by GiveWell because their impact is less easily measured; those I would consider similar to a slightly riskier bond. Why all the investment terminology? I want you to start thinking of your donations as an investment portfolio. All of the same considerations that you think about when planning your own investments also apply here. Instead of a 60% stock and 40% bond portfolio, you should probably be in a 60% global poverty and 40% existential risk portfolio, or something similar. This way, the 60% ensures that your "investments" are doing good, which will help you continue to donate and feel good about yourself. The 40% keeps you intellectually engaged in ex-risk, and I would guess learning and thinking about these sorts of topics is far more fun and stimulating.
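    For concreteness, here is a minimal sketch of what that 60/40 donation portfolio could look like in Python. GiveWell is named above; the ex-risk recipients are hypothetical placeholders standing in for whichever organizations you vet yourself, and the dollar figure is arbitrary.

```python
# A minimal sketch of a 60/40 donation portfolio. GiveWell appears in
# the post; the ex-risk names are hypothetical placeholders.

annual_donation = 5_000.00  # dollars per year; pick your own number

portfolio = {
    # sleeve: (sleeve weight, {recipient: weight within sleeve})
    "global_poverty": (0.60, {"GiveWell Top Charities": 1.00}),
    "ex_risk": (0.40, {"some_ai_safety_org": 0.70,
                       "some_nuclear_risk_org": 0.30}),
}

for sleeve, (sleeve_weight, recipients) in portfolio.items():
    sleeve_dollars = annual_donation * sleeve_weight
    print(f"{sleeve}: ${sleeve_dollars:,.2f}")
    for name, weight in recipients.items():
        print(f"  -> {name}: ${sleeve_dollars * weight:,.2f}")
```

    Unlike a stock/bond portfolio, this never needs rebalancing; the weights only matter on the day the money goes out the door.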

    The counterpoint to this type of investing is option 2 a few paragraphs above, which effectively states that you should only donate to one cause. Maybe it is the case that one of the ex-risks facing humanity is extremely underfunded. Maybe the "bang for buck" of nuclear risk charities is way higher than other areas, and even a small donation could tip the scales greatly towards averting a nuclear war. Unlike traditional investing, we are all in this together. The aggregate, global portfolio is really what matters and what determines the allocations to specific causes. In this case, you should focus all of your donations on this one "undervalued" investment. The problem is, I find it hard to believe anyone can adequately forecast how important each of these issues is relative to the others. From my conversations with alignment research companies, it seems that they are actually overfunded, and they have quite a bit of cash just sitting there doing nothing. In that case, donating to them contributes effectively nothing, whereas you could donate to global poverty and have an impact. Still, in my opinion AI alignment is by far the most pressing issue facing humanity, and it will have a direct impact on all other ex-risks and even long-term global poverty. However, I'm not quite sure where to donate, and I want to be sure I'm not just providing funding for someone to quit their job, learn about AI for three months, and then start an unsuccessful AI alignment blog. Also, I want to be careful I'm not actually contributing to capabilities research and making the problem worse. So, what should we do?

    I pretty much stick by some blended portfolio of causes, such as a 60/40 portfolio of global poverty and ex-risk. Diversification is very important, and it is easy to have horrendous results if you put all your eggs in one basket (imagine finding out that the single charity you have been donating 10% of your income to for the last 20 years is stealing money or spending it very ineffectively). I think there should be some short-term, easy "wins" such as GiveWell in any portfolio, so that you can be sure you are making a positive impact. Then, with some leftover cash, you can have some fun and swing for the fences, and maybe contribute very positively to humanity's long-term future. Happy investing!

Friday, April 14, 2023

Free Will

    The less free will we have, the more we should be sympathetic towards other people's situations. If free will doesn't exist, we should look at a criminal with pity. They didn't choose this life, they didn't choose their parents. They didn't choose their upbringing, or their bad influences, or their faulty brain chemistry. Since we don't have free will, we shouldn't really judge. However, if we are judging others heavily even though free will doesn't exist, we shouldn't be too hard on ourselves. If we look at that criminal and say out loud "god, what scum," we shouldn't judge ourselves too much, even if that reaction isn't logical. We can't help that we judge them, given our upbringing, genetics, and neurochemistry. See the unfolding paradox here? 

    The less free will we have, the more understanding we owe other people, but also the less understanding we owe other people. If someone else is a bigot, then it is not really their fault, but that means it's not really my fault either for being a bigot. If we can't hold others accountable, we shouldn't hold ourselves accountable either. This is why I'm not really a fan of using free will as an argument for anything. It strays too close to the nihilistic "everything is permissible" boundary to be useful. Maybe everything is malignantly useless, but Pascal's Modified Wager is useful here. If there is a 50% chance that nothing matters and a 50% chance that something matters, you should probably live your life believing that something matters. If you are wrong, it doesn't matter anyway. Maybe free will doesn't exist, but then it doesn't really matter what you do or think, because you can't change your mind or affect the events set in motion by the big bang. So you might as well believe that it does. Maybe it doesn't work the way tradition says it does, and we probably should be empathetic towards people's backgrounds and situations. But the stakes are high, and it is easy to create very bad incentives by leaning too far to either side.
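    Here is a minimal sketch of the wager's logic in Python, my own illustration with arbitrary payoff units. The point is dominance: one way of living is at least as good in every state of the world.

```python
# A minimal sketch of Pascal's Modified Wager: under 50/50 uncertainty
# about whether anything matters, compare the two ways to live.
# Payoff units are arbitrary; if nothing matters, every payoff is zero.

p_matters = 0.5

payoffs = {
    ("live_as_if_it_matters", "matters"): 1.0,
    ("live_as_if_it_matters", "nothing_matters"): 0.0,
    ("live_as_if_nothing_matters", "matters"): -1.0,
    ("live_as_if_nothing_matters", "nothing_matters"): 0.0,
}

for action in ("live_as_if_it_matters", "live_as_if_nothing_matters"):
    ev = (p_matters * payoffs[(action, "matters")]
          + (1 - p_matters) * payoffs[(action, "nothing_matters")])
    print(f"{action}: expected value {ev:+.2f}")

# "Live as if it matters" weakly dominates: it wins when you're right
# and costs nothing when you're wrong, for any p_matters > 0.
```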

Friday, March 24, 2023

Taxes, and then Death

    Let's talk about taxes for a second. A lot of people don't donate because the government forcefully takes some of their money each year in taxes. These taxes go on to fund things like the military and the healthcare system, and a very small amount goes to foreign aid. Taxes are not a substitute for donating. In theory, the majority of taxes can benefit you as an individual. Taxes are a way to avoid the tragedy of the commons. You pay taxes to fund the military, so the military protects you from invaders. This works for the court system, police department, and fire department as well. Taxes also act as mandatory insurance. Even if you are a millionaire who could afford your own healthcare, if all of your money gets stolen, you will have a safety net. If you live under an authoritarian regime that does nothing but hurt people with your money, figuring out how to safely pay less in taxes would be a good thing. If 100% of your taxes went to the most effective ways to positively benefit humanity, paying more than your required share in taxes would be a good thing. Ultimately, no large country operates near either of these extremes.

    I would argue that the majority of the taxes you pay do nothing to positively affect humanity. Getting the government to do better things with the money it takes is good, but given the massive problems with bureaucracy and the lack of good incentives and controls, I would shy far, far away from claiming that paying more in taxes is an effective substitute for effective donations. That is not to say that countries wouldn't benefit from a better tax structure, but rather that the current tax structures do not allow your marginal tax dollar to have a positive benefit on society.

    Now let's talk about the second certainty in life: death. End-of-life care is extremely expensive. One study in the Journal of General Internal Medicine in 2020 found that the average cost for the last month of life in the US was around $18,500. That's quite a bit of money, more than the typical human earned in all of 2020. Personally, if I hit an age where I am extremely ill and content with the life I've lived, I may choose to hit the "little red button." Not to say that this should be required, but it is something we should all think more about.

Losing Time Effectively

     Time is money. In this regard, losing money effectively is no different from losing time effectively. I chose to name this blog "Losing Money Effectively" because I think seeing donations as an optimal way to lose money is a funny perspective. Also, I have a finance background and eventually hope to make some contributions to the wider world of effective altruism through this lens.

    The only way to get money is to spend time acquiring it, valuable time that could be spent doing something else. In this sense, donating money could mean something substantial for your time. If you are a billionaire this is not the case, but if you have rigid retirement goals and are not a billionaire, a life of donating 10% of your income per year means something. Maybe you retire later. Or maybe you spend more time on tasks that you would otherwise have paid for (doing housework instead of hiring a maid, washing your own car instead of paying for a car wash, providing child care yourself instead of hiring help for the thousands of hours it takes to raise children). The less money you have, the more time additional money would buy you. So, why is this important?

    Making a lot of money and donating it, also called earning to give, basically assumes that donating money can be much more valuable than donating time. My point above is that these are not really that distinct, as you are donating time either way. Depending on your situation, you may just be donating your time in a much more effective way. To be clear, I am not sold on utilitarianism. If you are totally sold on the EA philosophy, you could argue that retirement is immoral. If the cost of saving a life is $4,500, then by working another year at a job that pays $45,000, you could save ten lives. Sounds like a potential moral obligation to me. I think utilitarianism is a good lens through which to view the world, especially service work, rather than a hard-and-fast religious dogma. Instead of working at a soup kitchen in a first-world country making $35,000 a year, maybe taking an $85,000 job as a law clerk and donating $40,000 could be a good trade. You don't get the same pat on the back, but utilitarianism tells you that you shouldn't care about that. This way of thinking feels robotic, but it shouldn't.
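    The arithmetic behind that trade is simple enough to write down. Here is a minimal sketch using the post's own round numbers; the $4,500 cost-per-life figure is the one quoted above, not a current GiveWell estimate.

```python
# A minimal sketch of the earning-to-give arithmetic, using the round
# numbers from the paragraph above.

COST_PER_LIFE = 4_500  # dollars per life saved, as quoted above

def lives_saved(donation: float) -> float:
    """Lives saved per the post's round cost-per-life figure."""
    return donation / COST_PER_LIFE

# The law-clerk trade: $85,000 salary, donating $40,000 per year.
print(f"Law clerk donating $40k/yr: {lives_saved(40_000):.1f} lives/year")

# The retirement example: one extra working year at $45,000, donated.
print(f"One extra year at $45k: {lives_saved(45_000):.1f} lives")
```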

    If you are like me, you have already begun to suspect that making money and doing good might be two distinct steps. Maybe you should do each separately, simply due to the difficulty of getting an optimal outcome by pursuing both at the same time. I will keep searching for a career that threads this needle, high pay and high impact, but maybe it is not the worst thing ever if that job does not exist?

My Problem with Utilitarianism

     You are in a room with Jeff Bezos, the previous CEO of Amazon. He has sold all of his Amazon stock and is now sitting on 100 billion dollars in cash. He is thinking of burning all of it (assume the supply of money in the US stays the same regardless). He says to you, "If you sleep with me, I will donate 100 billion dollars to the most impactful charities in the world. Otherwise, the money burns." Are you morally obligated to sleep with Jeff Bezos? I really, really hope not. But taking some utilitarian arguments to the extreme, it seems that through your inaction you may be "causing" an extreme amount of suffering. Perhaps we cure malaria five years earlier than we would otherwise; perhaps world hunger is pushed off for a decade. Thousands of people, perhaps millions, could live fulfilling lives full of health and wellness as a result of this donation. Are you making a morally terrible decision by not taking Jeff up on his offer? Are you in some sense responsible for the suffering that you could have avoided?

    To me, this is the most convincing argument against utilitarianism and against donating as a whole. If you start thinking of counterarguments that ignore my main point, you are missing my main point. Swap Bezos for a robot that wants you to cut off your arm in exchange for the 100 billion, or swap this entire scenario for the big red button I mentioned in a previous post. It may be the case that personal dignity and even human rights are worthless, except insofar as they contribute to the collective well-being of humanity as a whole (over the long term). Maybe human rights are only worth something because if we treated them as worthless, even greater harm and suffering would persist over time. In a one-off occasion such as the above, maybe there is no room for human dignity. But wow, is it hard to believe that.

    One aspect of effective altruism that many people don't like relates heavily to the Bezos example above. The meat of the question is this: "Should you bow to power?" If becoming subservient to your overlords is required in order to do good in the world, count me out. Why would I ever want to be part of a system that values my personal worth and dignity so little? Am I somehow responsible for the suffering of a world that I had no hand in creating? It is easy to look at these arguments and say, "Wow, you are right. Guess I'll go back to being selfish and never helping anyone ever." That is also missing the point. I think the water is really murky. Thousands of people have debated derivatives of the trolley problem, and I know I am not adding much flavor here. Basically, all I am asking is whether you have to throw yourself in front of the trolley to save a million, lest you be an immoral person. Maybe it's just the millions of years of survival instinct programmed into me, but I can't see how this could be the case.


Thursday, March 23, 2023

Would You Press the Button?

     There is a big red button in front of you. If you press this button, cancer will be cured. Wars stop, and so does world hunger. Humans may still have some problems, but suffering for billions of people ceases. Plus, another cause specific to your preferences is solved. Maybe factory farming ends, or climate change, or racism. Take your pick. So, this big red button is in front of you, and if you press it, a lot of really, really good things happen. Things that you could by no means accomplish in your lifetime, things so good that your decision to press the button will be the single most important and beneficial decision to humanity so far in human history. Don't overthink this and make excuses as to why maybe this wouldn't be a good thing (ex: well, if cancer wasn't a thing, there would be overpopulation, etc.). No. I am telling you that it will be really, really good for the rest of the world if you press the button. However, there is a catch. If you press the button, you die. Instantly. Now, here comes the question.

Would you press the button?

    Contemplate this for a minute. This is an interesting question. It is a personal question, one that reflects how much you truly value your life. However, I think there is a more important question.

If you don't press the button, are you a horrible person? 

    Through your inaction, your refusal to make a sacrifice, billions will suffer around the world. You would only enjoy maybe another seventy or eighty years on the planet, yet you would choose those years over the happiness and preferences of the entire world. Should we condemn you for being so selfish? Should we shun you? Should doing such a selfish thing be considered worse than any crime you have heard of?

    If I were given a choice between my life and that of a family member, the decision is simple. If it meant one of my family members would live on, I would end my life without hesitation. Given the choice between my life and that of five strangers, I would pick my life. This is logically consistent with how I live, given all of the things that I fail to do for other people. Given the choice between my life and the lives of a thousand strangers, how should I choose? These questions matter. If utilitarianism is really a good moral guide, inaction is morally bad. Refusing to press the button is an objectively bad decision, and we should all be shamed for making it. It has been true for all of human history that martyrdom is noble. It is an act of heroism that should be celebrated. Giving up your life so that others may live is extremely honorable. But maybe, just maybe, it is actually required.

Monday, March 20, 2023

The Savior Complex

    When I was growing up, I was convinced that I would be important. If you had asked me at age fifteen, "Are you going to change the world?" I would have leaned towards "yes" as an answer. It is easy to have delusions of grandeur when you are successful early in life. I was athletic, intelligent, and saw nothing but success in both areas for the first twenty years of my life. I thought that I had higher moral character than most of those around me. I would have denied it at the time, but I'm sure a large part of me wanted to be wealthy and powerful. Once I became an actual adult, my perspective changed. I saw, for the first time, how difficult achieving lasting impact was. Every amazing biography I read was of someone already forgotten, or soon to be. I would credit books and life experience equally in changing my perspective, and I began to realize that maybe it was okay to not be massively successful. Maybe distancing yourself from the rat race was a good thing; maybe making a stable income, raising a good family, and accomplishing reasonable goals was good enough. Maybe you help others along the way, sometimes volunteering your time and money, but mostly your positive impact comes from character. You treat those around you well. You are a great husband, and a great father. You are nice to the convenience store workers. You smile. And then, at some point, you die. That seemed like a good enough life. One where you don't obsess over worldly possessions, money, or power. Maybe not the most impressive life, but a very dignified one.

    Or maybe not. Maybe it is actually immoral to live your life that way. Maybe seeking wealth and power is not only good, it is the only thing you should do. Maybe rising through the ranks and moving mountains is the point of life, and every materialistic urge you have can actually further your moral progress. What sort of ridiculous philosophy would encourage this? Simple: Effective Altruism. In a world where your moral significance is measured by the positive impact you have on others, this argument is persuasive. Raising a good family and being nice to a few people is good, but making one hundred million dollars and donating it all to fighting malaria is orders of magnitude better. Those in positions of wealth and power are in a much better position to do good, so seeking to be in that position is reasonable. It may be hard to disentangle which part of your drive to the top is selfish and which part is altruistic, but the end goal is the same. There are two steps: achieve wealth and status, and then use this wealth and status to make a positive impact. The first step may be reasonable, or it may be of the devil. Since becoming an effective altruist, I have struggled to wrap my brain around the first step. I know it won't make me happy, but how much does my happiness matter? Generally, seeking money so strongly would be seen as selfish, but through this lens my desire to not seek money can be seen as selfish. My desire to not work myself to death, to enjoy my life, to have fun and relax, can be seen as doing nothing in the face of massive suffering. Silence is violence, in this sense. Longtermism takes this a step further, pushing the dreaded main character syndrome even further down the line.

    If humanity dies out, that is a massive moral loss. Trillions of potential future lives vanish, and we have no way of knowing if sentient life will reemerge in the universe. Maybe it doesn't, and everything becomes truly meaningless. What sort of person would I be if I ignored this? Maybe spending my life attempting to make even a small impact on ex-risk can tip the scales. Should I even start a family? Or should I devote the next two decades to working 100 hours a week to ensure that when AGI comes it is friendly? The more this line of thinking continues, the more I become humanity's savior. Humility vanishes, and it seems clear that I know better than everyone. People are living their lives in ignorance, while I, the smartest individual in the room, know that nuclear war, bioterrorism, and AI all threaten their lives. I am working tirelessly to prevent this, and yet I don't receive any thanks. I am a true martyr, sacrificing in obscurity. This savior complex seems impossible to avoid. My guess is that many EAs have this, especially those dealing in longtermist issues (MIRI comes to mind). Since I identify with the cause, I am not convinced this savior complex is irrational. However, it is clearly sad and unrealistic. The weight of the world simply cannot fall on the shoulders of a single individual. The world's current problems and future problems do not depend on me, and it is likely that nothing I do will change anything. It cannot possibly be true that I am humanity's savior. I am not the main character. Only through recognizing this can character really begin to build. Only through taking a look at the night sky can we begin to recognize our place in the universe. Hubris used to be the enemy, and it still is. Materialism used to be the enemy, and it still is. Maybe focusing on altruistic impact makes the picture less clear, but maybe it doesn't. Maybe staying humble and shedding self-importance is the only path forward. A path towards an impactful life full of dignity and happiness.

Sunday, March 5, 2023

Asymmetric Payoffs for Ex-Risk

    Another idea I have been kicking around is the idea of asymmetric payoffs. There are many professions where playing offense is extremely lucrative, and playing defense is valuable but not lucrative. One example is computer security. If you are a decent hacker, you can probably make millions of dollars through white-hat hacking, or you could just steal actual money. Being a great cybersecurity analyst who writes amazing firewalls is useful, but you may only make six figures unless you can scale this skill to multiple companies. Being a great hedge fund manager is tremendously lucrative, but being an amazing head of compliance is not. In both cases, if everything goes perfectly well for the "defense," you are simply doing your job.

    AI safety, nuclear prevention, and bioterrorism prevention all fall under this same logic. If you prevent mass suffering through these means, it is unlikely that anyone will know or give you credit. The person who adds one ingenious subroutine that prevents unaligned AI will forever be unknown, since we will not know whether the outcome would have been different otherwise. Despite causing me massive personal stress, the Russia-Ukraine war (I prefer calling it the Russian civil war) has not, at this point, resulted in a mass nuclear holocaust. As a result, some of my good friends have told me that they are not too worried about nuclear war in the future. This weird bias seems to plague entire institutions. Given that nuclear war hasn't yet happened, I am not quite sure whom to thank. However, I am sure there have been people in positions of power who made the right decision at the right time, and as a result prevented millions or billions of deaths. If nuclear war happens, it will probably be obvious whom to blame. I am not sure what to do with this information, except to thank those who have worked in "defensive fields." They have probably massively benefitted humanity, and they have not received due recognition. To Vasili Arkhipov and those like him, thank you.

Doomsday Arbitrage

    In the face of nuclear war, should you buy stocks? My argument is that you absolutely should, because of a concept I am calling Doomsday Arbitrage. I am officially coining this term, and I will explain below.

    Let's say that Russia announces that it is going to send a nuclear missile hurtling towards the United States in two hours. The market is open, and the S&P drops ten percent instantly. If you put all of your money into the stock market and this ends up not happening, you will likely make roughly ten percent when the market recovers. If the nukes actually start flying, then money is the least of your problems. USD will probably become worthless in a post-nuclear-holocaust society, and regardless, it will be locked up in some TD Ameritrade account. The servers linking this money to you may be destroyed, the TD Ameritrade buildings could be blown up, and/or every employee at TD Ameritrade might be dead. The global banking system has likely collapsed, and you are now in the midst of a post-apocalyptic society that will probably run on the barter system. You'll probably soon be the victim of some form of incredible violence. So, you should buy stocks.

    Basically, I am saying that selling puts on the S&P 500 struck at $100 is free money (assuming you hedge volatility). If the S&P 500 ever goes below $100, you simply have bigger problems. Money is probably worthless, and you are in mortal danger. This line of thinking becomes more applicable when you apply it to a near-term catastrophe in your personal life. I have always been of the opinion that, given current bankruptcy laws, we should be way more risk-seeking. If you can flip a coin between winning one million dollars and losing one million dollars, flip the coin. Worst case, you file for bankruptcy and start back at $0. The limited downside of financial crimes and the high upside of name recognition mean that every hedge fund trader should probably insider trade, or at least take wild speculative bets. My guess is that they probably do.
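    Here is a minimal sketch of the payoff logic, with made-up numbers. The trick is that the catastrophe branch shouldn't enter your expected value as a mere -100%, because in that branch your brokerage balance is meaningless anyway.

```python
# A minimal sketch of the Doomsday Arbitrage logic. Probabilities and
# returns are illustrative, not forecasts.

p_nukes_fly = 0.10       # subjective probability the threat is real
rebound_return = 0.10    # the ~10% recovery if the threat passes

# Naive EV: treat the doomsday branch as a total (-100%) loss.
naive_ev = (1 - p_nukes_fly) * rebound_return + p_nukes_fly * (-1.0)

# Conditional EV: evaluate only the worlds where money still matters,
# since in the other branch every portfolio is equally worthless.
conditional_ev = rebound_return

print(f"EV counting doomsday as a -100% loss: {naive_ev:+.1%}")
print(f"EV across worlds where money matters: {conditional_ev:+.1%}")
```

    The naive calculation says the trade is slightly negative; conditioning on survival flips it to a clean ten percent.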

    I do not think that the concept of Doomsday Arbitrage is going to make you money. However, it could be a VERY useful idea for risk management. Is your financial advisor having money problems? Fire him. Is your fund manager near bankruptcy? Fire him. Given the asymmetric payoffs for many financial professionals, you as an investor need to be very sure that you are including personal "doomsdays" in your analysis. Profiting from this sort of highly skewed risk tolerance is difficult unless you are in a position near ruin. Let's say you are under the poverty line and are given the chance to flip a coin. If the coin lands on heads, you get one million dollars. If the coin lands on tails, you lose ten million dollars. This is a one-time bet. If there are bankruptcy laws, flip that coin without hesitation. If you're going to be held to that debt for the rest of your life, whether you flip depends on how bad your life currently is. This should seem obvious, but I don't think people take full advantage of the situation. They also confuse high potential payoffs with high expected return (ex: the lottery). You can translate this line of thinking outside of the financial sector, into everyday life. Given that the world is extremely uncertain, life is fleeting and fragile, and a variety of actual doomsday scenarios may be waiting around the corner, we should all be taking much greater risks.
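    The coin flip above is easy to check. A minimal sketch, again with the paragraph's numbers; the bankruptcy floor is modeled crudely as wealth truncated at zero.

```python
# A minimal sketch of the bankruptcy-floor coin flip: +$1M on heads,
# -$10M on tails, with bankruptcy crudely modeled as a floor at $0.

def flip_ev(net_worth: float, bankruptcy: bool) -> float:
    heads = net_worth + 1_000_000
    tails = net_worth - 10_000_000
    if bankruptcy:
        tails = max(tails, 0.0)  # you can't go below zero; you just reset
    return 0.5 * heads + 0.5 * tails

for worth in (0, 50_000, 1_000_000):
    print(f"net worth ${worth:>9,}: "
          f"EV with bankruptcy ${flip_ev(worth, True):>12,.0f}, "
          f"without ${flip_ev(worth, False):>13,.0f}")

# Under this crude model the flip is positive-EV whenever your net worth
# is below $1M (you risk at most what you have); without the floor, it
# is ruinous at every wealth level shown.
```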
