Saturday, July 12, 2025

Negative Utilitarianism

    Quick housekeeping: so much of my productive time over the past year and a half went into my book (Mind Crime) that I have done essentially no blogging since. This will change! A lot of the book's material came from stream-of-consciousness blogging on this site (and my book- and AI-focused blog), which I should certainly pick back up now. Also, I have felt a lot of pressure regarding AI timelines to figure out the best balance of the explore/exploit path. I'm now trying to do 50/50: 50% of my time reading a linear algebra book, reading AI research, exploring novel mind stuff, etc., and 50% of my time thinking through actual strategy, writing, producing in some capacity, or setting up meetings with people.

    Now, back to my point. During my book's revision process earlier this year, someone mentioned that my book (Mind Crime) was quite "suffering focused." This is true, although at the time I was not quite sure how much I had been influenced by "suffering-focused ethics," or what exactly that means. Essentially, the people who drive the s-risk discussion are, basically, negative utilitarians and naturalists, a fact that I was previously unaware of. To them, the likes of Brian Tomasik for example, the concern is not just creating a universe where suffering > well-being (meaning it would potentially be better off if nothing existed), or a universe of maximal torment (hell), but a universe where suffering spreads across the cosmos at all. That, in a sense, means it would probably be better for life not to continue at all. They also frame their arguments to be practical, which means not stating exactly what they believe, because it's better (utilitarian reasoning) to boil the frog slowly. Also, the conclusions sound so absurd and unpopular, but the ideas may be correct, so there is use in not stating the full "logical conclusion" principles up front.

    First off, I hated Better to Never Have Been as a book, as I found it very sloppy, unprofessional, and unconvincing. I am very open to nihilism as a theory, and to anti-natalism as a "malignantly useless" sort of lens, though I am much less sympathetic to anti-natalism from a utilitarian perspective. Let's talk about Brian Tomasik, who is one of the most important contributors to EA philosophy (earning to give, s-risk, etc.) and cofounder of the Center on Long-Term Risk. I've read a lot of his work recently; here is a likely strawman version of his beliefs: being an animal is bad. Animals suffer a lot and nature is a horrorshow (true). It would be better if animals and bugs did not exist, because of how much they suffer (unknown). Environmentalism is thus bad: a world made of concrete is better than a world of rainforests because of all the suffering that happens in the wild. Also, eating beef is potentially better than being vegan, because if there were no beef farms that land would probably be forest, where more suffering would happen. Space exploration would be bad because terraforming could lead to more insects spreading and suffering on other planets (an s-risk).

    Basically, I was under the impression that s-risk was about suffering so great it would be worse than life existing at all. To Brian, we are already in this scenario. No optimized ASI-driven torture is needed; it's already the case that we should wish to push the button that switches this universe to one with only unconscious rocks. As I saw on Reddit, "that's the problem if you get really into utilitarian harm reduction. The best way to eliminate all suffering is to eliminate all life." Brian is very worried about zooplankton, so he feels guilty about washing his hands, and he tries really hard not to harm or squish any bugs around the house. He states that "unbearable torture cannot be 'morally offset' by also bringing enough pleasure into the world." He basically leans into the "rebugnant conclusion," worries about RL algorithms (like those perhaps running simple NPCs in video games), and cares a lot about bugs. One thing I love about reading Brian's work is that he simply follows everything to its logical conclusion. If suffering is really bad, so bad that it's something like 10x worse than well-being, and there is some suffering so bad that not even infinite well-being can make up for it, this is where we end up. If you believe this theory, it seems far more important to wipe out as many animal habitats as possible than to avoid eating a relatively small number of animals by being a vegan.

    I might start a company that offers Brian-offsets, where I offer negative utilitarians offsets that commit to paving over one rainforest in exchange for cash. You can do bad things as a profit-seeking company, but as long as you are destroying the environment (or paying me to do so), you're actually contributing to the world. Again, I do not think the logic here is bad, or that Brian is unhinged. I think that negative utilitarianism is just probably wrong, and thus it's not crazy that these takes seem so weird.

Friday, December 8, 2023

Doing Good, or Not Doing Bad?

     Effective Altruism, as a philosophy, is very simple. Basically, the argument is that if you shouldn't do bad in the world, that means you should do good. If it is morally wrong, objectively, to kick a baby or not save a drowning child, then it is morally right to treat others with kindness and spend some of your time and energy helping others. If it is true, morally, that you shouldn't cheat or steal, it is true that you should give and sacrifice.

    This is a very controversial take. I understand it particularly well, in my opinion, because I grew up Catholic. Catholics, in my estimate, spend a lot of time avoiding the negative, whipping themselves into a frenzy over impure thoughts, past mistakes, and current temptations. As a Catholic teenager, I was constantly guilt-ridden. I was very concerned with what was going on in my own head, trying so hard to avoid slipping up or thinking the wrong thing. Policing my own brain rigorously, stressing about intrusive thoughts to an almost psychotic point. Little did I know, no one cared about what was going on inside my head. Not God, not others, not anyone.

    If I had spent half of that time focused on doing good, I wonder where I would be? Sure, I spent a lot of time volunteering and being nice to people, but I now wonder if I did that because I felt compelled to, or if I did it in order to "avoid" being a bad person. I have a theory that the way religions have traditionally been practiced runs counter to this Effective Altruism idea of "doing good," and instead focuses almost exclusively on "not doing bad." Doing good for others, in most religions, is placed lower in the hierarchy than worship and avoiding sin. The ideal Christian, or Muslim, or Buddhist, is one without temptations, who has control over his thoughts and actions, and could sit in deep prayer for hours, talking directly to God. Sure, there are some rare examples that differ from this, as the Mother Teresas of the world have shown. These people, in my estimate, are the true heroes. Sure, you can live your life as another Desert Father who sits in a room and meditates all day. Sure, you can be totally without temptation, without impure thoughts, and never lie, cheat, or steal. But if you don't do anything for other people, if you don't contribute positively to the world, if all you do is sit in a room full of silence and purity, what was the point of having you here?

Wednesday, September 20, 2023

Too Many Things We Want

    I visited Vietnam recently. A beautiful country, filled with phenomenal food, wonderful people, and a much different political system. When Americans think of the word "Communism," they think of the totalitarian regimes of the Soviet Union and China, and all of the evils perpetrated by such regimes. Visiting Vietnam was no different than visiting any other country, as the human experience anywhere in the world is broadly similar. Some people live in cities, some people live in the country, and most people are friendly and good. The economic and political system that someone lives under may mean nothing for their day-to-day lives, and in the vast majority of the world this appears anecdotally true. Still, these are important decisions. The amount of government interference, on the scale from laissez-faire capitalism to full-blown authoritarianism, does actually matter. In most historical cases, communist societies tend to fall further towards the latter over time. Such systems trend towards a consolidation of power, and decentralization becomes harder and harder over time (absent an uprising).

    I've written extensively in my book blog about the political and economic books I read during my trip to and from southeast Asia. Full of Marx, Lenin, Hayek, and Mises, I think I got a pretty good handle on which types of economic systems work well and which types of political systems lead to the repression of human freedom. This post, however, is going to be about something a little different.

    Something that I've noticed in my adult life is that the capitalistic system and the technological progress it brings is really, really good at giving people what they want. The problem is, it may be too good. It is so good at giving us what we want that it borders on exploitation. Alcohol and drugs are essentially a brain hack. The heroin addict actually does want heroin; the problem is it is not a detached, rational want. It is not a long-term want, it is a short-term want. Social media is somewhat similar. I would rather not watch three hours of YouTube a day, but certain links that a really optimized algorithm has crafted for me are nearly impossible to resist clicking. People love TikTok, because it is using a data-driven approach to keep them engaged and "happy." Sure, some people like myself have completely cut social media, but it is clearly something that we "want." Materialism, an endless array of products, faster cars, better phones, near-instant packages: all things that we actually want. The point of regulation, as seen with drugs, is to step in when things we want (like heroin) are bad for society as a whole. The freedom of pursuing something (individuality) is outweighed by the greater good of society (collectivism). This is a very, very important lens for thinking through Effective Altruism.

    Most EA members have a collectivist lens, as there are cases where pursuing what we want individually (research fame, money, power) can have a horrific effect on society (existential risk, etc.). Sure, publishing vaccine-resistant smallpox may bring you money and fame. However, not only should you not do so, but you probably shouldn't be allowed to do so, for the "greater good." This is where the state steps in, and where things get dicey. What about the things that the state wants? How do we have a check on that? What if the state is the one building the super virus? These are the types of trade-offs that we need to think through, and why I think it is so important to take a step back sometimes from the "collectivist" lens. Sure, Vietnam was amazing, and my visit to China ten years ago was incredible. But people are scared in both places, terrified of the Chinese Communist Party. Maybe we do need to appeal to authority to curb the destruction that can be caused by individual wants, but we need to be just as careful to retain the right to curb the wants of a collective authority. Forgetting this would be a costly mistake.

Thursday, September 7, 2023

Are Political Donations Worthless?

     If you were going to try to optimize your donations in a bang-for-buck fashion in order to have a positive impact on the world, how much would you donate to politicians? From an Effective Altruist point of view, political donations are likely worthless. The amount of money in American politics is staggering, and the number of voices online and in person shouting over each other is overwhelming. Anyone who has argued about politics in an online forum can attest to the difficulty of changing another's mind on any issue. This comes down more to ideology than anything. Also, voting in the presidential election is generally worthless, due to the electoral college but also due to the fact that there are hundreds of millions of people. You should still do it, civic duty and all, but we all know the odds. Even if you contribute the average American salary, or a hundred times that, to most political campaigns, you are not going to move the needle. In a solid blue or a solid red state, this is even more true. Additionally, the system is winner-takes-all. If you donate to cancer research, maybe you have an impact. If you donate an additional thousand dollars to a candidate and they lose, where is the impact? Given all these considerations, should we give up on politics? What if you love a certain politician or hate another? What if you are pretty certain that a certain presidential candidate would contribute extremely negatively to the nation or the future of the human race?

    It is a well-known fact that local politics plays a much larger role in American life than national politics. Sure, we love to argue about national issues, but the local stuff is what affects your day to day. How are the roads? How is the crime? How well run is the school district your kids go to? These local races have much less money involved, and a single vote counts for far more than in a national election, so getting involved at the local level (or donating) could have a larger impact on your life. But is it in any way comparable to funding de-worming medication or malaria nets? No, probably not. Still, everyone has to have their own "asset allocation" when it comes to donations, and if some slice (let's say 20%) has to go to politicians you like to make you feel good and keep donating to effective causes, all the better. Personally, I would never give a cent to a political candidate. I am pretty politically passionate, but I simply believe there are better uses for my money. However, I do believe that advocacy is severely underrated. Calling your congressman, writing your local representative, starting petitions, etc., are all massively more impactful than voting in any election. This is somewhat backed by intuition but also real-world anecdotes. I've found my ability to aggressively send emails and call phone numbers to be pretty politically persuasive, especially at the local level. Making your voice heard through your vote isn't easy, so you might as well shout.

Saturday, July 22, 2023

Asymmetric Evil

     One of the constant themes in my Losing Money Effectively writing has been the idea of asymmetry. The asymmetric payoff from playing defense is a common one: if you prevent a nuclear war, probably no one knows and you don't get billions of dollars, but if you develop AGI or some other groundbreaking (and dangerous) technology, you become one of the richest and most powerful people in history. In a similar vein, I was recently thinking about the potential that people have to cause great societal harm. If you live your life to the fullest, you may be able to provide enough utility to others to equate to a dozen lives. Maybe you have children, treat them really well, are nice to your coworkers, are a great husband/wife, and donate quite a bit of money to charity. Barring some exceptional luck, you probably won't be a billionaire or famous, and thus your sphere of influence is likely to remain small. If you aren't born into a wealthy family, even with a perfect work ethic you are unlikely to reach a high enough level of status to cause large-scale change.

    Unfortunately, being a good person doesn't change society, except at the margins. Being a bad person, in contrast, can have a really, really negative impact. If you engineer a pandemic, or shoot up a public area, or assassinate the right person, you can cause quite a bit of harm over a large sphere of influence. Maybe you shoot JFK, but if you want to cause real long-term human suffering for thousands or even millions, shoot Lincoln or Archduke Franz Ferdinand. A motivated terrorist can kill a great many people, and a small group proved in 2001 that with simple planning you can kill thousands of people and spark a war that kills hundreds of thousands more. Nineteen terrorists, a hundred thousand deaths. There aren't many nineteen-person nonprofits that save hundreds of thousands of lives on their own.

    This is, of course, a massive problem. In a lot of ways, human society is built on trust. Trust that the overwhelming majority (99.999999% of people) are not evil, or at least not both smart and evil. The data seems to back this up for the most part, as I don't live in constant fear whenever I go to a concert or a public park. Sure, the fear may be there, but for the most part it is irrational. Still, I think this concept of asymmetric evil is a very neglected problem. To prevent mass shootings there are advocates for gun control (which I support), but increased anti-terrorism efforts often come at the cost of human freedom. It's hard to be a serial killer in an authoritarian regime that tracks your every move, but that does not mean I'd trade aggregate human freedom for a world with a handful fewer serial killers. Also, we saw with the Patriot Act that a lot of these "safety" measures actually do more harm than good.

    This is an important concern of mine, and I do think we could do a few things better. First, we should in no way, shape, or form entrust sensitive technological information that could lead to mass deaths to the public domain. If the government finds out how to create a super virus, it should not open-source that information. This seems obvious, but for some reason (looking at you, Australian smallpox researchers), it has to be said. Next, we shouldn't trust any individual or small group of individuals with massive amounts of power. Any weapons-of-mass-destruction plans should have the required redundancy attached, lest we find ourselves in the world of Dr. Strangelove. Third, we should be very cognizant of how fragile society is. There are probably reasonably easy ways to trigger a societal collapse (financial meltdown, kill the right world leader and blame a different world power), so we should be extremely diligent when building institutions and planning for the worst-case scenario. In the meantime, we should acknowledge that our "good person" impact will likely be small and continue to stay the course anyway.

Friday, July 21, 2023

Sacrifice vs. Uncertainty

     For some reason, in my fiction writing at least, I can't stop writing about the idea of sacrifice. Maybe it is my Irish Catholic upbringing, or maybe it was the fact that I've read a lot of Cioran, but regardless the idea intrigues me endlessly. Here is my question:

    Would you take a bullet for democracy in the United States? You get shot in the chest, you may or may not live.

    If you don't take the bullet, the US becomes a dictatorship of North Korean proportions. If you jump in front of the bullet, nothing changes in the US. There are plenty of interesting variations on this question. Maybe you throw in that democracy falls in 100 years, so you and your family will be fine. Or you change the prompt to reflect some other cause. Well, many American soldiers have lost their lives fighting for certain American ideals, democracy being the foremost. Probably the coolest part of the US is our history of standing up against tyranny, and a look at the world stage shows us mostly alone in that. It's pretty crazy to me that the American experiment actually worked, and I don't see any obvious civilizational collapse on the horizon despite what the media says. I'm taking the bullet. Now, what I find actually interesting about this question is the introduction of uncertainty. If you set the odds on either side at 95% or 5%, you can probably get some wildly inconsistent and interesting answers.

    Would you take a 5% chance of dying to prevent a 95% chance of democracy collapsing? Would you take a 95% chance of dying to prevent a 5% chance of democracy collapsing? What about 50% and 50%?

    Now, shift those numbers on each side until you get something noteworthy. Personally, I think introducing odds into anything causes humans to lock up and focus on self-preservation. I doubt many people would take their own life to prevent a 40% chance of a family member dying, even if we traditionally valued that family member's life at multiples of our own. This is one of the problems with altruism, and one of the problems with effective donations. Basically, the problem is we don't really know what is going to happen. Even if we are pretty sure our donation money will go to cure someone of a preventable disease, maybe that leads to knock-on effects (they grow up and become bad, overpopulation, the money is stolen and used to buy weapons). Even if the odds of such bad outcomes are extremely low, we become extremely averse to donating. Maybe I want to donate to AI alignment research, but there is some low probability I make the problem worse. In fact, I really have no idea what will make the problem worse and what will make the problem better. Even if the odds of the money being useful are 80%, that 20% scares me to an irrational level.
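Toy numbers make the lock-up easier to see. Here is a minimal expected-value sketch; the utility figures (one unit for your own life, a hundred for democracy surviving) are my own illustrative assumptions, not anything the thought experiment specifies:

```python
# Toy expected-value model of the "take the bullet" question.
# The utility numbers below are illustrative assumptions only.

def expected_value_of_sacrifice(p_death, p_collapse,
                                u_own_life=1.0, u_democracy=100.0):
    """Net EV of jumping in front of the bullet versus standing by.

    p_death:    chance the bullet kills you if you take it
    p_collapse: chance democracy collapses if you don't
    Utilities are in arbitrary "lives-worth" units (assumed).
    """
    ev_take = -p_death * p_death * 0 - p_death * u_own_life  # you may die; democracy is saved
    ev_take = -p_death * u_own_life       # simplify: only your life is at stake
    ev_pass = -p_collapse * u_democracy   # you live; democracy may fall
    return ev_take - ev_pass              # positive => taking the bullet wins on paper

# 5% chance of dying vs. 95% chance of collapse: clearly take it.
print(round(expected_value_of_sacrifice(0.05, 0.95), 2))  # 94.95
# 95% chance of dying vs. 5% chance of collapse: still positive
# under these utilities, which is exactly the point -- intuition
# rebels long before the arithmetic does.
print(round(expected_value_of_sacrifice(0.95, 0.05), 2))  # 4.05
```

Under almost any utility assignment that values democracy at many lives, the math keeps saying "jump" well past the point where a real person would.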

    What does this mean? I think it means that research into how effective certain causes are is actually extremely useful. Removing the uncertainty regarding charitable causes might actually be the most impactful contribution of EA, because by doing so we can finally convince people to make a small sacrifice.

Love and Intelligence

    We assume that animals don't really love each other. They partner up out of instinct, and their sexual activities are driven by instinct. Partnership and even family is an animalistic urge, put widely on display across the animal kingdom. Why does a mother lion protect her cubs? Does she actually love them, or is that just an instinct drilled in by millions of years of evolution? These are the thoughts that keep me up at night. Just kidding. But I do wonder if there's actually some sort of spectrum here. If I see a beetle mother taking care of her young, I assume it's all instinct. If I see a female ape taking care of her young, I see more than a glimmer of affection. Maybe this is all personification, but if it is even just a little bit more than that, it is possible that our capacity to love scales directly with either intelligence or sentience.

    Sure, skeptics would just say that love isn't even a real "thing" among humans; it is just another animalistic instinct driven in by evolution, no different than in the beetle or fruit fly. Maybe at a certain intelligence level you realize this, and are no longer able to love. A horseshoe theory, where most humans are simply in the sweet spot. Once you pass a certain intelligence threshold or reach a certain level of self-awareness, you realize that everything is meaningless and predetermined and love is impossible. Maybe. Maybe love requires a willful suspension of disbelief, but maybe we need to do a better job of separating lust and sex from this discussion. Love could be an intellectual feat, rather than a physical or spiritual one. Maybe the best marriages and the perfect partnerships could become deeper and more beautiful if each party understood the other on a more fundamental level. Perhaps it is the absence of this understanding that gets in the way of the empathy and compassion really needed for a deeper level of appreciation and respect. I imagine that superintelligent AIs, of the movie Her variety, will be able to form some crazy deep connections. This is based solely on intuition, but it is quite a pleasant thought to believe.

Saturday, July 8, 2023

Animal Rights

    Martin McDonagh's film "The Banshees of Inisherin" was my favorite film in 2022. I re-watched it recently, and I was struck by a certain piece of dialogue between one of the main characters, Colm, and a priest:

     Priest: "Do you think God gives a damn about miniature donkeys, Colm?"

    Colm: "I fear he doesn't. And I fear that's where it's all gone wrong."

    Animal rights are a very complex issue, one that I've avoided writing about because I'm not sure exactly where I stand. In Animal Liberation, Peter Singer made a pretty compelling case for vegetarianism. Before we get into what is moral to eat, we first have to solve a hard problem: what makes humans special? In my opinion, there is a flavor of sentience/consciousness that humans have that is extremely valuable. We are by far the most intelligent species, with a level of self-awareness and decision-making ability far beyond that of other animals. We are the only animal that can contemplate philosophy or wade in existential angst. Our capacity for language and problem solving has allowed us to traverse oceans and explore the stars. Other living creatures on Earth, for the most part, are really dumb. If you spend ten minutes alone with a chicken, you will realize that there is quite simply not a lot going on upstairs. Yeah, we probably shouldn't torture the thing, but if I had to choose between killing a thousand chickens or a random guy, I'd feel pretty confident in my decision.

    Also, the life of an animal in the wild is not ideal. Evolution is a mostly random process full of cruelty and waste, and most animals die horrifying deaths. Prey are hunted constantly by predators, and predators are constantly at risk of starvation. Starvation, disease, and getting eaten are more than commonplace in the wild, whereas on farms most animals only have to worry about the last one. Well, maybe some have to worry about living out a miserable life in what is essentially a cramped prison cell where they are kept until slaughter, and that is actually a good point. Here is my personal dilemma: I am confident that I can eat an oyster, and I am confident that we shouldn't eat chimpanzees. Distinct animal species basically either have rights or they don't, and it's weird that society is so logically inconsistent about this (if you eat a dog you are horrible, but go on and kill billions of pigs and that's fine). There is a large spectrum of intelligence from oyster to chimp, and the effective altruists probably draw the line for what we can slaughter too far down. But 97% of humanity draws the line too high, and it's probably better to be safe, especially given how little it costs us. But I find it hard to criticize too harshly whenever I actually see a chicken.

Wednesday, July 5, 2023

Altruism for the Rich

    Arthur Schopenhauer says that there are two options in life: pain or boredom. The poor deal with pain, the rich deal with boredom. It has been argued that effective altruism is religion for the rich. A way for rich people to feel good about their lives. A way to avoid boredom. A way to donate a small chunk of one's relative wealth and then brag at a dinner party that you "have saved hundreds of lives," all the while treating your employees horribly and scamming customers. This cynical take is fairly common, as many see service work as inherently selfish. Unfortunately, we often miss the obvious. We miss the fact that most people are good, and helping others is not a zero-sum game. If helping others makes you feel good, and thus you help others, you are adding value to the world. I don't really care if Bill Gates donates billions of dollars to charity because he truly cares about the world, or if he is donating in order to build a legacy and help his friends. Fewer children are dying either way. Sure, I hope that everyone eventually migrates to actually altruistic intentions, free of any second-order selfish reasons. But honestly, that is too much to ask in the beginning.

    Social impact is a marathon, not a sprint. If we get bogged down in attacking anyone who tries to do good for "selfish motivations," if we hold everyone to a standard of perfection, we'll lose out on a lot of potential helpers. We miss the forest for the trees, and ignore the fact that the overwhelming majority of people donate essentially nothing. Let's not demand piety, let's just keep each other honest and try to change the world over time. Let's bring people in, not shut people out by gatekeeping. Let's learn from the mistakes of millions of other social movements, and embrace a culture of inclusivity.

Automation Knocks

     A few months ago, I got a letter in the mail. It was a marketing letter, but handwritten on a folded-up piece of notebook paper. I thought to myself, wow, I am so much more inclined to read a marketing letter written with real ink than one of those obviously fake letters printed in a "handwritten" font. Then I realized that we are in the period of generative AI. AI that can not only come up with customized marketing text for each individual, but also replicate human handwriting via loads of training data. So, if you connected a robot arm to a pen, you could have the arm write very convincing letters in actual ink. These letters would be indistinguishable from human writing, and a collection of robot arms running the same software could write thousands of "handwritten" letters a day. Well, that is quite the business idea. I am sure that millions will be made by the first company to implement such systems, as every marketing guru knows the historic power of the handwritten note. On cue, another layer of human personality will die.

    One of my coworkers asked me for a business idea that he could use for an MBA class project. I looked through my list of notes, and pitched him on this generative AI letter business. It seems to have taken off, with his MBA class now being supremely interested in the idea. The class will be working through the business plan and potentially pitching the idea further up the chain. I might not create the eventual monster, but maybe I pushed it a year or two ahead of schedule.

    Let's take this idea to fruition. In ten years, campaign volunteers are obsolete. Who needs dozens of volunteers to write campaign letters when you can pay a company extremely small sums of money for thousands of equivalent letters? When generative AI starts picking up the phone, grassroots campaigns are essentially dead. The world moves further towards consolidation, with the owners of capital making use of scalable systems to win votes and increase their power. When automation knocks, perhaps we shouldn't answer. Maybe it is the case that when I have a business idea, I should keep it to myself.

Tuesday, June 20, 2023

The Mediocrity Principle

     Humans often believe that they are at the center of the universe. Disagreeing with this, scientifically speaking, has landed quite a few scientists in trouble. However, essentially all modern astronomers and cosmologists agree with the Copernican Principle, which states that the Earth does not occupy a special place in the universe. We are not the center of the universe, and not even the center of our galaxy (gasp!). An extension of this principle is the Mediocrity Principle, the idea that there is nothing special at all about the Earth or humans. In fact, we are probably just par for the course in the universe. We can assume that what is happening here is happening elsewhere (probably even life and intelligent life). This is just an assumption, but a pretty powerful one. Science has trended in this direction, not just in cosmology but also in biology (evolution says we are basically just advanced monkeys).

    There is a big problem with this principle: it is quite depressing. We want to think that we are special. We want to strive for important causes and have an impact. We do not want to be forgotten. When we look up at the night sky, it fills us with existential dread to realize that there are more stars in the universe than there are grains of sand on Earth. And a single beach has an incredible amount of sand. The next time you are on a beach, run your fingers through the sand. Imagine the worlds that could exist out there if there are really that many stars. Then, wonder if you are really the most important species, the "chosen" race. Probably not. Seems a bit ridiculous. But, maybe? We haven't seen any evidence of extraterrestrial life, and the Fermi Paradox is quite complex (why don't we see any evidence for aliens, given that we probably should?). Maybe sentient life is extremely rare, or maybe we are the only conscious beings in the universe. This could be important, as without us the universe might just be lifeless balls of gas and rock. We might be mediocre, or we might be the only thing that matters.
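The stars-versus-sand claim holds up to a rough sanity check. The figures below are order-of-magnitude estimates commonly cited for the observable universe and Earth's beaches, assumptions good to a factor of ten at best:

```python
# Back-of-the-envelope check of "more stars than grains of sand".
# All figures are rough published estimates (assumptions), not precise counts.

stars_per_galaxy = 1e11   # ~100 billion stars in a typical galaxy
galaxies = 2e12           # ~2 trillion galaxies in the observable universe
stars = stars_per_galaxy * galaxies

sand_grains = 7.5e18      # common estimate for all of Earth's beach sand

print(f"stars: ~{stars:.0e}, sand grains: ~{sand_grains:.0e}")
print(f"stars per grain of sand: ~{stars / sand_grains:,.0f}")
```

Even if every estimate is off by a couple orders of magnitude, the stars win comfortably: on these numbers there are tens of thousands of stars for every grain of sand.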

    The Mediocrity Principle has rung true up until this point in history. We don't seem particularly special. Whatever we do with nukes, or AI, or pandemics, it doesn't matter much. We could all die out in a flash, and intelligent life elsewhere will probably live on. Perhaps they are better than us, more empathetic and moral. Maybe we would just get in their way. Whatever life force there is in the universe, it doesn't end with us. But, what if it does? Maybe it does? Until there is evidence to the contrary, we have an enormous responsibility. A burden, perhaps. To survive and thrive, to spread among the stars and act as observers to a beautiful universe. Beautiful because of our eyes.
