Losing Money Effectively

On Writing

Writing clarifies focus and ideas, and is the single best driver of intelligent decision making I've seen. Most of the now-influential AI and EA people, for example, from Sam to Dario to Paul to Carl to Brian to others, had blogs where they just dumped their thoughts (albeit in a better laid out format than this very informal blog). I think I should do this way more, and the year-long hiatus I took was probably a pretty dumb move.
Sunday, March 15, 2026
Saving Money Isn't Very Useful
Small dollar values don't matter if you're gunning for civilizational change. If you are making a substantial income or doing very productive work, saving money on groceries or nights out doesn't matter much. In an AI-based future where a lot of long-term value will be decided soon, it is likely not worth saving a little extra money to donate to causes that will only be relevant for the next few years (until they're surpassed by the importance of ASI). The headache and lifestyle hit of buying the less expensive canned corn at the grocery store just don't really matter. The big decisions do.
I don't think EAs should be forced to live like hermits (or Jesus), stuck in the most frugal lifestyle possible, believing that if they don't do this they are immoral (which, according to standard utilitarianism, they basically are). I think we should just care about the big-ticket items and try to move the biggest levers possible to change the world. Even if this means eating meat or spending money on fancy dinners that could have gone to the less fortunate, I think the most important thing anyone will do with their life is gun for literal transformational change.
Sunday, March 8, 2026
AI Safety Kills EA
If you take AI safety seriously and have short timelines, there's not really a reason to do anything else in the EA space. Sure, we need some people for diversification purposes (what if we're wrong), and a lot of these groups are complementary and can take in a broader array of people and perspectives, but they are somewhat useless. Saving the shrimp may matter only insofar as this mission ends up in the ASI's training data, a footnote on a list of priorities and capabilities far beyond our understanding or control. This conclusion is uncomfortable and distressing, as it means that as time goes on the circle of impactful individuals will become smaller and smaller. The sphere of influence over ASI development will only shrink until the end, and eventually it will become nonexistent. Unless we figure out a way to implement broad democratic access, there will be no more EAs, as there will be no "effective" way to do anything that doesn't involve controlling the machine in the first place. There may be altruists, but if the ASI isn't built as one, there may not be many for long.
AI Positivity
Things may get very positive, until they get very scary. One of the issues with planning for the future is that rapid AI progress may lead to scientific discovery across a range of areas. Technology often leads to better outcomes for people, in terms of both health and entertainment, so the upcoming technological advancement could greatly extend lifespans, reduce suffering, and create amazing content. Perhaps there is societal disruption and mass unemployment, but we could also rapidly respond to those issues as we blow through eras of technological progress. We may believe that we are on a crazy upward trajectory to the stars. And we may well be, until we aren't. Until power concentration, or AI takeover, or some form of immense tragedy that comes from recursive self-improvement, puts an end to our happiness or our species. Until the train falls off the tracks, off its previous upward slope to heaven. We should be prepared for this, and be willing to pull the brakes even if everything is looking rosy on our way up. Unfortunately, I do not think we will collectively have the wisdom to do this.
Sunday, January 18, 2026
The Precipice is Distant
Toby Ord's book, The Precipice, is one of the best books I've read. In it, Toby argues that we are at a particularly important time in human history, where there is a consolidation of x-risks that may result in us blowing everything up (nuclear war) or permanently locking in values due to superintelligent AI systems. The decisions made over the next hundred years may be extremely important. A related idea in the EA community is that there may be a period of "long reflection" after this initial period, where, if we don't destroy ourselves or lock in bad values, we could chill for a bit and then strategically decide what the best moves are (taking hundreds of years to deliberate before our next actions).
However, the progress of technology may make this claim entirely irrelevant. Perhaps in 200 years our problems are still those of civilizational importance, but that civilization is about to colonize the galaxy and then the universe. Determining whether China and the US split the universe, or what specific space governance system should be implemented, may be drastically more important than the decisions we could make today. The discovery of novel physics in five hundred years, something crazy like the ability to create false vacuums or access different dimensions (or multiverses), could make those times the "precipice" of human history, where individual actions hold extraordinary weight. We are certainly, in my view, at the most important time period in human history so far. But aside from the consolidation of power possible through ASI, I have no reason to believe this trend will not simply continue upward.
Tuesday, September 30, 2025
Random EA Reflections
Utopia:
If there's only a five percent chance we go extinct, that means there's a 95% chance that human experience lives on. Should we not spend more time ensuring that time is spent well? Should we not dedicate more resources to ensuring we get to post-instrumental utopia?
The future:
Let's say someone else is playing a game. The outcome of the game is that there is a 10% chance your parents die, and a 90% chance your parents get to utopia. If you killed the person playing this game, would you be wrong?
Nightmares:
If you are an EA, you believe that conscious suffering matters, including in-the-moment suffering: if you are tortured and your brain is wiped afterward so you have no memory of the event, that is still bad. If you are a shrimp and you suffocate once brought onto land, that is bad (possibly). Well, what about nightmares? There are some nightmares I've had that I certainly remember, and I'm fairly positive that in the moment I am facing actual psychic distress. Should a new cause area be to limit the number of nightmares people have, or their intensity?
Breakups:
Breakups are some of the most negatively impactful events in most people's lives. I'd much rather break a bone than get a divorce, and it's not even close. Pain in the moment is hard to compare to the toll of broken human relationships. In a country where most middle-class families can put food on the table, but almost half of marriages dissolve in divorce, we might be missing some low-hanging fruit.
Magnitude:
EAs aren't usually directionally wrong about things. Sure, they mess up the magnitude. But the direction is usually correct. Animal welfare is a good example of this.
The Repotato Conclusion:
Are plants morally valuable? Is a potato? How many potatoes equals one human life?
Life:
It is very hard to live life outside the Overton Window. It's easy to claim to be an independent free thinker who stands up for their ideas. But when actually faced with public mockery and shame, one realizes how hard life can become.
Digital:
Consciousness is also subject to the anthropic principle: this may be the only sort of universe where consciousness can exist. If so, things like digital consciousness may be more likely.
AI:
We basically want the future ASIs to think humans are utility monsters. That is the control problem.
Simulation:
If you take simulation theory seriously, you think that we are probably digital minds. In which case, you should probably care a lot about how digital minds are treated.
Sunday, July 20, 2025
Moral Non-realism
There is something particularly disturbing about moral non-realists who believe we should phase out life itself. This is the position of many anti-natalists adjacent to the EA community, who often focus on suffering-focused ethics. I've never been convinced by this "moral non-realism" stuff in general; it just seems like nihilism with extra steps. This idea of preference satisfaction ("I am a utilitarian, and we should do good, but by good I just mean my idea of good and my preferences") is frankly pretty stupid. If you believe that morality is not objective, you are a nihilist. Or a cultural relativist. Or whatever else you want to call yourself, but it basically excludes you from arguing for moral actions. Sure, there are arguments regarding how to act under uncertainty (many of which I have made), but to argue that we should pave over the rainforests requires stronger claims. Arguing that we should prefer a world without any sort of life (because suffering is so bad), especially when you are actually a nihilist, is a particular kind of derangement. And it is obvious that the majority of the world would consider taking actual actions toward this goal evil. To walk this road anyway is to claim that your subjective beliefs (which you believe are subjective) should override the beliefs of others (which you know they believe to be objective). I am not sure what the right word for this is, but it sure sounds sickening.
Friday, July 18, 2025
"Literally Everyone"
Saturday, July 12, 2025
Negative Utilitarianism
Friday, December 8, 2023
Doing Good, or Not Doing Bad?
Effective Altruism, as a philosophy, is very simple. Basically, the argument is that if you shouldn't do bad in the world, that means you should do good. If it is morally wrong, objectively, to kick a baby or not save a drowning child, then it is morally right to treat others with kindness and spend some of your time and energy helping others. If it is true, morally, that you shouldn't cheat or steal, it is true that you should give and sacrifice.
This is a very controversial take. I understand it particularly well, in my opinion, because I grew up Catholic. Catholics, in my estimation, spend a lot of time avoiding the negative, whipping themselves into a frenzy over impure thoughts, past mistakes, and current temptations. As a Catholic teenager, I was constantly guilt-ridden. I was very concerned with what was going on in my own head, trying so hard to avoid slipping up or thinking the wrong thing, policing my own brain rigorously, stressing about intrusive thoughts to an almost psychotic point. Little did I know, no one cared about what was going on inside my head. Not God, not others, not anyone.
If I had spent half of that time focused on doing good, I wonder where I would be. Sure, I spent a lot of time volunteering and being nice to people, but I now wonder whether I did that because I felt compelled to, or in order to "avoid" being a bad person. I have a theory that the way religions have traditionally been practiced runs counter to this Effective Altruism idea of "doing good," focusing almost exclusively on "not doing bad." Doing good for others, in most religions, is placed lower in the hierarchy than worship and avoiding sin. The ideal Christian, or Muslim, or Buddhist, is one without temptations, who has control over his thoughts and actions, and could sit in deep prayer for hours, talking directly to God. Sure, there are some rare examples that differ from this, as the Mother Teresas of the world have shown. These people, in my estimation, are the true heroes. Sure, you can live your life as another Desert Father who sits in a room and meditates all day. Sure, you can be totally without temptation, without impure thoughts, and never lie, cheat, or steal. But if you don't do anything for other people, if you don't contribute positively to the world, if all you do is sit in a room full of silence and purity, what was the point of having you here?
Wednesday, September 20, 2023
Too Many Things We Want