Losing Money Effectively

AI Safety Kills EA
If you take AI safety seriously and have short timelines, there's not really a reason to do anything else in the EA space. Sure, we need some people for diversification purposes (what if we're wrong?), and a lot of these groups are complementary and can take in a broader array of people and perspectives, but they are somewhat useless. Saving the shrimp may matter only insofar as that mission ends up in the ASI's training data, a footnote on a list of priorities and capabilities far beyond our understanding or control. This conclusion is uncomfortable and distressing, as it means that as time goes on, the circle of impactful individuals will grow smaller and smaller. The sphere of influence over ASI development will only shrink until the end, and eventually it will become nonexistent. Unless we figure out a way to implement broad democratic access, there will be no more EAs, as there will be no "effective" way to do anything that doesn't involve controlling the machine in the first place. There may be altruists, but if the ASI isn't built as one, there may not be many for long.
Sunday, March 8, 2026
AI Positivity
Things may get very positive, until they get very scary. One of the issues with planning for the future is that rapid AI progress may lead to scientific discovery across a range of areas. Technology often leads to better outcomes for people, in terms of both health and entertainment, so the upcoming technological advancement could greatly extend lifespans, reduce suffering, and create amazing content. Perhaps there is societal disruption and mass unemployment, but we could also rapidly respond to those issues as we blow through eras of technological progress. We may believe that we are on a crazy upward trajectory to the stars. And we may well be, until we aren't. Until power concentration, or AI takeover, or some form of immense tragedy born of recursive self-improvement puts an end to our happiness or our species. Until the train falls off the tracks, off its previous upward slope to heaven. We should be prepared for this, and be willing to pull the brakes even if everything is looking rosy on the way up. Unfortunately, I do not think we will collectively have the wisdom to do so.
Sunday, January 18, 2026
The Precipice is Distant
Toby Ord's book, The Precipice, is one of the best books I've read. In it, Toby argues that we are at a particularly important time in human history, where a consolidation of x-risks may result in us blowing everything up (nuclear war) or permanently locking in values via superintelligent AI systems. The decisions made over the next hundred years may be extremely important. It is often suggested in the EA community that there may be a period of "long reflection" after this initial period, where if we don't destroy ourselves or lock in bad values, we could chill for a bit and strategically decide what the best moves are (taking hundreds of years to deliberate before our next actions).
However, the progress of technology may make this claim entirely irrelevant. Perhaps in 200 years our problems are still those of civilizational importance, but that civilization is about to colonize the galaxy and then the universe. Determining whether China and the US split the universe, or what specific space governance system should be implemented, may be drastically more important than the decisions we could make today. The discovery of novel physics in five hundred years, something crazy like the ability to create false vacuums or access different dimensions (or multiverses), could make those times the "precipice" of human history, where individual actions hold extraordinary weight. We are certainly, in my view, at the most important time period in human history. But aside from the consolidation of power possible through ASI, I have no reason to believe this trend will not simply continue upward.
Tuesday, September 30, 2025
Random EA Reflections
Utopia:
If there's only a five percent chance we go extinct, that means there's a 95% chance that human experience lives on. Should we not spend more time ensuring that time is spent well? Should we not dedicate more resources to ensuring we get to post-instrumental utopia?
The future:
Let's say someone else is playing a game. The outcome of the game is that there is a 10% chance your parents die, and a 90% chance your parents get to utopia. If you killed the person playing this game, would you be wrong?
Nightmares:
If you are an EA, you believe that conscious suffering matters. In-the-moment suffering matters, meaning that if you are tortured and your brain is wiped after and you have no memory of the event, that is still bad. If you are a shrimp and you suffocate once brought on land, that is bad (possibly). Well, what about nightmares? There are some nightmares I've had that I certainly remember, and I'm fairly positive that, in the moment, I am facing actual psychic distress. Should a new cause area be to limit the number of nightmares people have, or their intensity?
Breakups:
Breakups are some of the most negatively impactful events for most people. I'd much rather break a bone than get a divorce, and it's not even close. Pain in the moment is hard to compare to the toll of broken human relationships. It seems that in a country where most middle-class families can put food on the table, but almost half of marriages dissolve in divorce, we might be missing some low-hanging fruit.
Magnitude:
EAs aren't usually directionally wrong about things. Sure, they mess up the magnitude. But the direction is usually correct. Animal welfare is a good example of this.
The Repotato Conclusion:
Are plants morally valuable? Is a potato? How many potatoes equals one human life?
Life:
It is very hard to live life outside the Overton Window. It's easy to claim to be an independent free thinker who stands up for their ideas. But when actually faced with public mockery and shame, one realizes how hard life can become.
Digital:
Consciousness also falls victim to the anthropic principle. This may be the only sort of universe where consciousness can exist. This may mean things like digital consciousness are more likely.
AI:
We basically want the future ASIs to think humans are utility monsters. That is the control problem.
Simulation:
If you take simulation theory seriously, you think that we are probably digital minds. In which case, you should probably care a lot about how digital minds are treated.
Sunday, July 20, 2025
Moral Non-realism
There is something particularly disturbing about moral non-realists who believe we should phase out life itself. This is the position of many anti-natalists adjacent to the EA community, who often focus on suffering-focused ethics. I've never been convinced by this "moral non-realism" stuff in general; it just seems like nihilism with extra steps. This idea of preference satisfaction ("I am a utilitarian, and we should do good, but by good I just mean my idea of good and my preferences") is frankly pretty stupid. If you believe that morality is not objective, you are a nihilist. Or a cultural relativist. Or whatever else you want to call yourself, but it basically excludes you from arguing for moral actions. Sure, there are arguments regarding how to act under uncertainty (many of which I have made), but to argue that we should pave over the rainforests requires stronger claims. Arguing that we should prefer a world without any sort of life (because suffering is so bad), especially when you are actually a nihilist, is a particular kind of derangement. And it is obvious that the majority of the world would consider taking actual actions toward this goal evil. To walk this road anyway is to claim that your subjective beliefs (which you believe are subjective) should override the beliefs of others (which you know they believe to be objective). I am not sure what the right word for this is, but it sure sounds sickening.
Friday, July 18, 2025
"Literally Everyone"
Saturday, July 12, 2025
Negative Utilitarianism
Friday, December 8, 2023
Doing Good, or Not Doing Bad?
Effective Altruism, as a philosophy, is very simple. Basically, the argument is that if you shouldn't do bad in the world, that means you should do good. If it is morally wrong, objectively, to kick a baby or not save a drowning child, then it is morally right to treat others with kindness and spend some of your time and energy helping others. If it is true, morally, that you shouldn't cheat or steal, it is true that you should give and sacrifice.
This is a very controversial take. I understand it particularly well, in my opinion, because I grew up Catholic. Catholics, in my estimate, spend a lot of time avoiding the negative: whipping themselves into a frenzy over impure thoughts, past mistakes, and current temptations. As a Catholic teenager, I was constantly guilt-ridden. I was very concerned with what was going on in my own head, trying so hard to avoid slipping up or thinking the wrong thing, policing my own brain rigorously, stressing about intrusive thoughts to an almost psychotic point. Little did I know, no one cared about what was going on inside my head. Not God, not others, not anyone.
If I had spent half of that time focused on doing good, I wonder where I would be. Sure, I spent a lot of time volunteering and being nice to people, but I now wonder if I did that because I felt compelled to, or in order to "avoid" being a bad person. I have a theory that the way religions have traditionally been practiced runs counter to this Effective Altruism idea of "doing good," focusing instead almost exclusively on "not doing bad." Doing good for others, in most religions, is placed lower in the hierarchy than worship and avoiding sin. The ideal Christian, or Muslim, or Buddhist is one without temptations, who has control over his thoughts and actions, and could sit in deep prayer for hours, talking directly to God. Sure, there are some rare examples that differ from this, as the Mother Teresas of the world have shown. These people, in my estimate, are the true heroes. Sure, you can live your life as another Desert Father who sits in a room and meditates all day. Sure, you can be totally without temptation, without impure thoughts, and never lie, cheat, or steal. But if you don't do anything for other people, if you don't contribute positively to the world, if all you do is sit in a room full of silence and purity, what was the point of having you here?
Wednesday, September 20, 2023
Too Many Things We Want
Thursday, September 7, 2023
Are Political Donations Worthless?
If you were going to try to optimize your donations in a bang-for-buck fashion in order to have a positive impact on the world, how much would you donate to politicians? From an Effective Altruist point of view, political donations are likely worthless. The amount of money in American politics is staggering, as is the number of voices online and in person shouting over each other. Anyone who has argued about politics in an online forum can attest to the difficulty of changing another's mind on any issue. This comes down more to ideology than anything else. Also, voting in the presidential election is generally worthless, due to the electoral college but also to the fact that there are hundreds of millions of voters. You should still do it, civic duty and all, but we all know the odds. Even if you contribute the average American salary, or a hundred times that, to most political campaigns, you are not going to move the needle. In a solid blue or a solid red state, this is even more true. Additionally, the system is winner-take-all. If you donate to cancer research, maybe you have an impact. If you donate an additional thousand dollars to a candidate and they lose, where is the impact? Given all these considerations, should we give up on politics? What if you love a certain politician or hate another? What if you are pretty certain that a certain presidential candidate would contribute extremely negatively to the nation or the future of the human race?
It is a well-known fact that local politics play a much larger role in American life than national politics. Sure, we love to argue about national issues, but the local stuff is what affects your day to day. How are the roads? How is the crime? How well run is the school district your kids go to? These local races have much less money involved, and a single vote counts for far more than in the national election, so getting involved at the local level (or donating) could have a larger impact on your life. But is it in any way comparable to funding de-worming medication or malaria nets? No, probably not. Still, everyone has to have their own "asset allocation" when it comes to donations, and if some slice (let's say 20%) has to go to politicians you like to make you feel good and keep you donating to effective causes, all the better. Personally, I would never give a cent to a political candidate. I am pretty politically passionate, but I simply believe there are better uses for my money. However, I do believe that advocacy is severely underrated. Calling your congressmen, writing your local representative, starting petitions, etc., are all massively more impactful than voting in any election. This is backed somewhat by intuition but also by real-world anecdotes. I've found my ability to aggressively send emails and call phone numbers to be pretty politically persuasive, especially at the local level. Making your voice heard through your vote alone isn't easy, so you might as well shout.
Saturday, July 22, 2023
Asymmetric Evil
One of the constant themes in my Losing Money Effectively writing has been the idea of asymmetry. The asymmetric payoff from playing defense is a common one: if you prevent a nuclear war, probably no one knows and you don't get billions of dollars, but if you develop AGI or some other groundbreaking (and dangerous) technology, you become one of the richest and most powerful people in history. In a similar vein, I was recently thinking about the potential people have to cause great societal harm. If you live your life to the fullest, you may be able to provide enough utility to others to equal a dozen lives. Maybe you have children, treat them really well, are nice to your coworkers, are a great husband or wife, and donate quite a bit of money to charity. Barring some exceptional luck, you probably won't be a billionaire or famous, and thus your sphere of influence is likely to remain small. If you aren't born into a wealthy family, even with a perfect work ethic you are unlikely to reach a high enough level of status to cause large-scale change.
Unfortunately, being a good person doesn't change society, except at the margins. Being a bad person, in contrast, can have a really, really negative impact. If you engineer a pandemic, or shoot up a public area, or assassinate the right person, you can cause quite a bit of harm over a large sphere of influence. Maybe you shoot JFK, but if you want to cause real long-term human suffering for thousands or even millions, shoot Lincoln or Archduke Franz Ferdinand. A motivated terrorist can kill quite a few people, and a small group proved in 2001 that with simple planning you can kill thousands of people and spark a war that kills hundreds of thousands. Nineteen terrorists, hundreds of thousands of deaths. There aren't many nineteen-person nonprofits that save hundreds of thousands of lives on their own.
This is, of course, a massive problem. In a lot of ways, human society is built on trust. Trust that the overwhelming majority of people (99.999999%) are not evil, or at least not both smart and evil. The data seems to back this up for the most part, as I don't live in constant fear whenever I go to a concert or a public park. Sure, the fear may be there, but for the most part it is irrational. Still, I think this concept of asymmetric evil is a very neglected problem. To prevent mass shootings there are advocates for gun control (which I support), but increased anti-terrorism efforts often come with a decrease in human freedom. It's hard to be a serial killer in an authoritarian regime that tracks your every move, but that does not mean I'd trade aggregate human freedom for a world with a handful fewer serial killers. Also, we saw with the Patriot Act that a lot of these "safety" measures actually do more harm than good.
This is an important concern of mine, and I do think we could do a few things better. First, we should in no way, shape, or form release sensitive technological information that could lead to mass deaths into the public domain. If the government finds out how to create a super virus, it should not open-source that information. This seems obvious, but for some reason (looking at you, Australian mousepox researchers), it has to be said. Next, we shouldn't trust any individual or small group of individuals with massive amounts of power. Any weapons-of-mass-destruction plans should have the required redundancy attached, lest we find ourselves in the world of Dr. Strangelove. Third, we should be very cognizant of how fragile society is. There are probably reasonably easy ways to trigger a societal collapse (a financial meltdown, killing the right world leader and blaming a different world power), so we should be extremely diligent when building institutions and planning for the worst-case scenario. In the meantime, we should acknowledge that our "good person" impact will likely be small and stay the course anyway.