Saturday, July 22, 2023

Asymmetric Evil

     One of the constant themes in my Losing Money Effectively writing has been the idea of asymmetry. The asymmetric payoff from playing defense is a common one: if you prevent a nuclear war, probably no one ever knows and you don't get billions of dollars, but if you develop AGI or some other groundbreaking (and dangerous) technology, you become one of the richest and most powerful people in history. In a similar vein, I was recently thinking about the potential that people have to cause great societal harm. If you live your life to the fullest, you may be able to provide enough utility to others to equate to a dozen lives. Maybe you have children and treat them really well, are nice to your coworkers, are a great husband or wife, and donate quite a bit of money to charity. Barring some exceptional luck, you probably won't be a billionaire or famous, and thus your sphere of influence is likely to remain small. If you aren't born into a wealthy family, even with a perfect work ethic you are unlikely to reach a high enough level of status to effect large-scale change.

    Unfortunately, being a good person doesn't change society, except at the margins. Being a bad person, in contrast, can have a really, really negative impact. If you engineer a pandemic, or shoot up a public area, or assassinate the right person, you can cause quite a bit of harm over a large sphere of influence. Maybe you shoot JFK, but if you want to cause real long-term suffering for thousands or even millions, shoot Lincoln or Archduke Franz Ferdinand. A motivated terrorist can kill quite a few people, and a small group proved in 2001 that with simple planning you can kill thousands and spark a war that kills hundreds of thousands. Nineteen terrorists, hundreds of thousands of deaths. There aren't many nineteen-person nonprofits that save hundreds of thousands of lives on their own.

    This is, of course, a massive problem. In a lot of ways, human society is built on trust. Trust that the overwhelming majority of people (99.999999% of them) are not evil, or at least not both smart and evil. The data seems to back this up for the most part, as I don't live in constant fear whenever I go to a concert or a public park. Sure, the fear may be there, but for the most part it is irrational. Still, I think this concept of asymmetric evil is a badly neglected problem. To prevent mass shootings, there are advocates for gun control (which I support), but increased anti-terrorism efforts often come at the cost of human freedom. It's hard to be a serial killer in an authoritarian regime that tracks your every move, but that does not mean I'd trade aggregate human freedom for a world with a handful fewer serial killers. Also, we saw with the Patriot Act that a lot of the time these "safety" measures actually do more harm than good.

    This is an important concern of mine, and I do think we could do a few things better. First, we should in no way, shape, or form release sensitive technological information that could lead to mass deaths into the public domain. If the government finds out how to create a super virus, it should not open source that information. This seems obvious, but for some reason (looking at you, Australian mousepox researchers), it has to be said. Next, we shouldn't trust any individual or small group of individuals with massive amounts of power. Any weapons-of-mass-destruction plans should have the required redundancy attached, lest we find ourselves in the world of Dr. Strangelove. Third, we should be very cognizant of how fragile society is. There are probably reasonably easy ways to trigger a societal collapse (a financial meltdown, killing the right world leader and blaming a rival power), so we should be extremely diligent when building institutions and planning for the worst-case scenario. In the meantime, we should acknowledge that our "good person" impact will likely be small and stay the course anyway.

Friday, July 21, 2023

Sacrifice vs. Uncertainty

     For some reason, in my fiction writing at least, I can't stop writing about the idea of sacrifice. Maybe it is my Irish Catholic upbringing, or maybe it is the fact that I've read a lot of Cioran, but regardless, the idea intrigues me endlessly. Here is my question:

    Would you take a bullet for democracy in the United States? You get shot in the chest, you may or may not live.

    If you don't take the bullet, the US becomes a dictatorship of North Korean proportions. If you jump in front of the bullet, nothing changes in the US. There are plenty of interesting variations on this question. Maybe you throw in that democracy only falls in 100 years, so you and your family will be fine. Or you change the prompt to reflect some other cause. Many American soldiers have lost their lives fighting for certain American ideals, democracy being the foremost. Probably the coolest part of the US is our history of standing up against tyranny, and a look at the world stage shows us mostly alone in that. It's pretty crazy to me that the American experiment actually worked, and I don't see any obvious civilizational collapse on the horizon despite what the media says. I'm taking the bullet. Now, what I find actually interesting about this question is the introduction of uncertainty. If you set the odds on either side at 95%, or at 5%, you can probably get some wildly inconsistent and interesting answers.

    Would you take a 5% chance of dying to prevent a 95% chance of democracy collapsing? Would you take a 95% chance of dying to prevent a 5% chance of democracy collapsing? What about 50% and 50%?

    Now, shift those numbers on each side until you get something noteworthy. Personally, I think introducing odds into anything causes humans to lock up and focus on self-preservation. I doubt many people would take their own life to prevent a 40% chance of a family member dying, even if we traditionally value that family member's life at multiples of our own. This is one of the problems with altruism, and one of the problems with effective donations. Basically, the problem is we don't really know what is going to happen. Even if we are pretty sure our donation money will go to cure someone of a preventable disease, maybe that leads to knock-on effects (they grow up and do harm, overpopulation, the money is stolen and used to buy weapons). Even if the odds of such bad outcomes are extremely low, we become extremely averse to donating. Maybe I want to donate to AI alignment research, but there is some low probability I make the problem worse. In fact, I really have no idea what will make the problem worse and what will make the problem better. Even if the odds of the money being useful are 80%, that 20% scares me to an irrational level.
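
    To make that lock-up concrete, here is a minimal expected-value sketch in Python. The utility numbers are invented purely for illustration, not a claim about what anyone should value. Under naive expected-utility reasoning, you sacrifice whenever the expected cost of dying is smaller than the expected loss you would prevent:

def should_sacrifice(p_death, p_collapse,
                     value_of_own_life=1.0,
                     value_of_democracy=100.0):
    # Naive expected-utility rule: sacrifice when the expected cost
    # of dying is smaller than the expected loss you would prevent.
    # (The utility values here are made up for illustration.)
    return p_death * value_of_own_life < p_collapse * value_of_democracy

# The three scenarios from the question above:
for p_death, p_collapse in [(0.05, 0.95), (0.95, 0.05), (0.50, 0.50)]:
    print(f"die {p_death:.0%} / collapse {p_collapse:.0%} -> "
          f"sacrifice: {should_sacrifice(p_death, p_collapse)}")

    With democracy valued at even a hundred times one life, the rule says sacrifice in all three cases, including the 95%-chance-of-death scenario. The gap between that output and how most of us would actually answer is exactly the self-preservation lock-up described above.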

    What does this mean? I think it means that research into how effective certain causes are is actually extremely useful. Removing the uncertainty regarding charitable causes might actually be the most impactful contribution of EA, because by doing so we can finally convince people to make a small sacrifice.

Love and Intelligence

    We assume that animals don't really love each other. They partner up out of instinct, and their mating is driven by the same. Partnership and even family are animalistic urges, on wide display across the animal kingdom. Why does a mother lion protect her cubs? Does she actually love them, or is that just an instinct drilled in by millions of years of evolution? These are the thoughts that keep me up at night. Just kidding. But I do wonder if there's actually some sort of spectrum here. If I see a beetle mother taking care of her young, I assume it's all instinct. If I see a female ape taking care of her young, I see more than a glimmer of affection. Maybe this is all anthropomorphizing, but if it is even just a little bit more than that, it is possible that our capacity to love scales directly with either intelligence or sentience.

    Sure, skeptics would just say that love isn't even a real "thing" among humans; it is just another animalistic instinct drilled in by evolution, no different than in the beetle or the fruit fly. Maybe at a certain intelligence level you realize this, and are no longer able to love. A horseshoe theory, where most humans are simply in the sweet spot. Once you pass a certain intelligence threshold or reach a certain level of self-awareness, you realize that everything is meaningless and predetermined and love is impossible. Maybe. Maybe love requires a willful suspension of disbelief, but maybe we also need to do a better job of separating lust and sex from this discussion. Love could be an intellectual feat, rather than a physical or spiritual one. Maybe the best marriages and partnerships could become deeper and more beautiful if each party understood the other on a more fundamental level. Perhaps it is the absence of this understanding that gets in the way of the empathy and compassion needed for a deeper level of appreciation and respect. I imagine that superintelligent AIs, of the movie Her variety, will be able to form some crazy deep connections. This is based solely on intuition, but it is quite a pleasant thought to entertain.

Saturday, July 8, 2023

Animal Rights

    Martin McDonagh's film "The Banshees of Inisherin" was my favorite film of 2022. I re-watched it recently, and I was struck by a certain piece of dialogue between one of the main characters, Colm, and a priest:

     Priest: "Do you think God gives a damn about miniature donkeys, Colm?"

    Colm: "I fear he doesn't. And I fear that's where it's all gone wrong."

    Animal rights is a very complex issue, one that I've avoided writing about because I'm not sure exactly where I stand. In Animal Liberation, Peter Singer made a pretty compelling case for vegetarianism. Before we get into what is moral to eat, we first have to solve a hard problem: what makes humans special? In my opinion, there is a flavor of sentience/consciousness that humans have that is extremely valuable. We are by far the most intelligent species, with a level of self-awareness and decision-making ability far beyond that of other animals. We are the only animal that can contemplate philosophy or wade in existential angst. Our capacity for language and problem solving has allowed us to traverse oceans and explore the stars. Other living creatures on Earth, for the most part, are really dumb. If you spend ten minutes alone with a chicken, you will realize that there is quite simply not a lot going on upstairs. Yeah, we probably shouldn't torture the thing, but if I had to choose between killing a thousand chickens or a random guy, I'd feel pretty confident in my decision.

    Also, the life of an animal in the wild is not ideal. Evolution is a mostly random process full of cruelty and waste, and most animals die horrifying deaths. Prey are hunted constantly by predators, and predators are constantly at risk of starvation. Starvation, disease, and getting eaten are more than commonplace in the wild, whereas on farms most animals only have to worry about the last one. Well, maybe some also have to worry about living out a miserable life in what is essentially a cramped prison cell where they are kept until slaughter, and that is actually a good point. Here is my personal dilemma: I am confident that I can eat an oyster, and I am confident that we shouldn't eat chimpanzees. Distinct animal species basically either have rights or they don't, and it's weird that society is so logically inconsistent about this (if you eat a dog you are horrible, but go on and kill billions of pigs and that's fine). There is a large spectrum of intelligence from oyster to chimp, and the effective altruists probably draw the line for what we can slaughter too far down. But 97% of humanity draws the line too high, and it's probably better to be safe, especially given how little it costs us. But I find it hard to criticize too harshly whenever I actually see a chicken.

Wednesday, July 5, 2023

Altruism for the Rich

    Arthur Schopenhauer says that there are two options in life: pain or boredom. The poor deal with pain, the rich deal with boredom. It has been argued that effective altruism is religion for the rich. A way for rich people to feel good about their lives. A way to avoid boredom. A way to donate a small chunk of one's relative wealth and then brag at a dinner party that you "have saved hundreds of lives," all the while treating your employees horribly and scamming customers. This cynical take is fairly common, as many see service work as inherently selfish. Unfortunately, we often miss the obvious. We miss the fact that most people are good, and that helping others is not a zero-sum game. If helping others makes you feel good, and thus you help others, you are adding value to the world. I don't really care if Bill Gates donates billions of dollars to charity because he truly cares about the world, or if he is donating in order to build a legacy and help his friends. Fewer children are dying either way. Sure, I hope that everyone eventually migrates to truly altruistic intentions, free of any second-order selfish reasons. But honestly, that is too much to ask in the beginning.

    Social impact is a marathon, not a sprint. If we get bogged down in attacking anyone who tries to do good for "selfish motivations," if we hold everyone to a standard of perfection, we'll lose out on a lot of potential helpers. We miss the forest for the trees, and ignore the fact that the overwhelming majority of people donate essentially nothing. Let's not demand piety, let's just keep each other honest and try to change the world over time. Let's bring people in, not shut people out by gatekeeping. Let's learn from the mistakes of millions of other social movements, and embrace a culture of inclusivity.

Automation Knocks

     A few months ago, I got a letter in the mail. It was a marketing letter, but handwritten on a folded-up piece of notebook paper. I thought to myself, wow, I am so much more incentivized to read a marketing letter written with real ink than one of those obviously fake letters printed in a "handwritten" font. Then I realized that we are in the era of generative AI: AI that can not only come up with customized marketing text for each individual, but can also replicate human handwriting given loads of training data. So, if you connected a robot arm to a pen, you could have the arm write very convincing letters in actual ink. These letters would be indistinguishable from human writing, and a collection of robot arms running the same software could write thousands of "handwritten" letters a day. Well, that is quite the business idea. I am sure that millions will be made by the first company to implement such a system, as every marketing guru knows the historic power of the handwritten note. On cue, another layer of human personality will die.

    One of my coworkers asked me for a business idea that he could use for an MBA class project. I looked through my list of notes and pitched him on this generative AI letter business. It seems to have taken off, with his MBA class now supremely interested in the idea. The class will be working through the business plan and potentially pitching the idea further up the chain. I might not create the eventual monster, but maybe I pushed it a year or two ahead of schedule.

    Let's take this idea to fruition. In ten years, campaign volunteers are obsolete. Who needs dozens of volunteers to write campaign letters when you can pay a company an extremely small sum of money for thousands of equivalent letters? When generative AI starts picking up the phone, grassroots campaigns are essentially dead. The world moves further toward consolidation, with the owners of capital making use of scalable systems to win votes and increase their power. When automation knocks, perhaps we shouldn't answer. Maybe it is the case that when I have a business idea, I should keep it to myself.

Doing Good, or Not Doing Bad?

      Effective Altruism, as a philosophy, is very simple. Basically, the argument is that if you shouldn't do bad in the world, that me...