
Tuesday, April 11, 2023

In which I abandon utilitarianism

Okay, to be fair, I've never really described myself as a utilitarian. I find utilitarianism to be a handy rule of thumb when considering policy issues or charities to donate to. The things that prevent the most suffering, or make the most people happy, are the best ones. These are two different things, but in everyday ethics it doesn't even matter which you use. "Saves the most lives" is another good one. It's just a way to come up with a unit you can measure interventions by, to pick the most effective one.

But I've always known it gets a little wonky when you try to use it for everything. For instance, should you have another child? Well, having one will cause suffering, so by negative utilitarianism you shouldn't. But it will also cause happiness, so by positive utilitarianism, you should. And if you try to expand it universally, it comes to either "everyone should have children, provided each child is likely to be even slightly happier than miserable," or else "no one should have children, ever." Neither one of those seems like a very good answer to me. (I think you should have a child if you want one, personally. Whichever the "right" option is, there are plenty of people choosing each, and you won't personally save or damn the world with your choice.)

But I hadn't thought about it much in a while, until recently, when I came upon two things that made me think utilitarianism is much worse than I realized. Sure, it might be handy when used casually by people who also have other ways of doing ethics and will quickly abandon ideas that lack common sense. But some people want an ethical system to replace common sense, and when used that way, utilitarianism becomes as rapidly disastrous as most theoretical systems.

The first thing I ran into was the concept of longtermism. It's an offshoot of effective altruism (the people who say we should give to the charities that do the most good--which seems unarguably a good idea). Longtermists point out that there may very well be far more people in the future than there are alive today. And if we want to do the most good, shouldn't we worry more about future generations?

So far, not too bad. I mean, I certainly don't want to set humanity up for a future where we either destroy ourselves or have miserable lives. That's why I care about the environment--well, that and my fondness for going outside.

But, as I read further, things started to get weird. For instance, there was the idea that, given that a billion times more humans might live in the future than do in the present, harm to those future humans is a billion times more important than the lives of people who, you know . . . actually exist. Then they dream up a possible extinction scenario. Does it have a 0.0001% chance of happening? Well, then preventing it takes priority over saving the life of any individual alive today. Because after all, a tiny risk to a very large number of people is like murdering a moderately large number of people, right?

My first objection to this is the math. All of these problems should be multiplied by the certainty that your efforts will help at all. The farther out in the future we look, the less we know, and the more vanishingly small the odds become that anything we do will make the slightest difference. Next, we could refute it by noting that a person alive today might have a billion descendants, so saving his life is saving a billion potential people--so really, it's more efficient to save lives now than to plan to save future lives. Third, we could point out that future humans, since they don't exist, have no real rights. If they did, we'd have to live in moral terror, knowing that every butterfly we diverted from its course might wipe out one billion potential people and create a different billion. That's . . . that's not how ethics works.
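To make the arithmetic concrete, here's a minimal sketch in Python of both calculations--the longtermist's expected-value move and my discounting objection. Every number in it is hypothetical, invented purely to show the shape of the reasoning, not to estimate anything real.

```python
# Illustrative only: all numbers below are made up to show the shape
# of the argument, not to estimate anything real.

p_catastrophe = 1e-6       # the imagined "0.0001%" extinction risk
future_people = 1e18       # a speculative count of potential future humans

# The longtermist move: tiny probability x huge population = enormous stakes.
expected_lives_at_stake = p_catastrophe * future_people
print(f"Expected future lives at stake: {expected_lives_at_stake:.0e}")  # 1e+12

# My objection: also discount by the vanishing chance that anything we do
# today actually changes an outcome that far in the future.
p_effort_matters = 1e-12
discounted = expected_lives_at_stake * p_effort_matters
print(f"Discounted by tractability: {discounted:.0e}")  # 1e+00, about one life
```

On numbers like these, the grand rescue of the far future is worth about one present-day life--and the numbers themselves are pure guesswork, which is rather the point.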

But that is just introductory stuff. Once you get into their actual dream of the future, it gets weirder. (By "they" I mean a few cranks I read on the internet. It doesn't really matter who; the stuff just got me thinking. I'm sure there are better longtermists.)

These people think the ultimate future of mankind is to somehow upload ourselves into computers. In this scenario, we can simply program ourselves to feel infinite bliss. We can be thousands of times happier than any person alive today. So in that way, our descendants matter much more than anyone can possibly matter today, because we can only be mildly happy at best (everyday life being what it is) whereas they can be perfectly happy, potentially forever.

In service of this end, they think it's justified to ignore the problems of today, focus on creating AI, plan to colonize other planets, and so on. If whole continents of people are lost to war or climate change, that's small potatoes. So long as any humans survive (preferably the most privileged ones, as these are the most likely to invent the tech we need to reach this future), it's all good.

I stopped at this point. I didn't need to read any further or do any more math. I simply thought: these future digital people are nothing to me. They are not human in any sense I care about. I feel no connection to them. And I don't see why I should want them to exist.

Utilitarianism could lead people to think that endless techno-bliss is worth fighting for, just because someone, somewhen, could be happy. But my common sense says no. I do not want endless techno-bliss. I want my species to survive, sure, but that's not my species anymore. I want to prevent actual suffering by actual people alive today. I care about a homeless guy getting to come inside when it's cold, and about medical care in another country so that a mom doesn't have to mourn her child. These things are real. The rest is . . . simply not.

So that was the first thing. I realized that utilitarianism can take you to some weird places, and maybe, instead of working out better math (which I think you can do; it's a completely valid project), I can simply drop the whole idea.

The second thing was a pair of books by Hank Green, An Absolutely Remarkable Thing and A Beautifully Foolish Endeavor. Both are fantastic, especially the second one, and I won't spoil them because I think everybody should read them.

But they brought up the question, from the perspective of an extraterrestrial, of what the best future for humanity is. From that perspective, humanity is not a collection of individuals; it's a beautiful system of interconnection, one with a destiny to be and do great things.

But from another perspective, it's too dangerous to let humanity try to achieve any of that. So doesn't it make more sense to plug us into a digital matrix and dream away the rest of our future? We won't be unhappy. We'll be pacified. We won't be able to harm other societies. So what if we don't achieve anything further, don't do or dream or make contact with the rest of the galaxy?

The negative outcome in those books is the dream of the longtermists. Plugged in and happy.

It made me see why I reacted with such strong disgust to that idea in the first place. Infinite happiness isn't the way humanity has ever lived, or how it was meant to live. We always have a bit of dissatisfaction--and that can be a good thing! It drives us to do more. You don't get the Star Trek future of exploration and achievement by being contented.

I tried, when I was very depressed some years ago, to get into Buddhism or Stoicism. They promised to hold the secret to happiness. That secret is to stop desiring. If you're sad, it's because you wanted something and didn't receive it. If only you wanted nothing, you would be happy.

Then as now, I recoiled. I want to be happy because I've earned it somehow. I don't want a happiness that is simply an absence of desire. I can even be happy while desiring more.

Happiness can be kind of a treadmill--psychologists call it the hedonic treadmill. You wanted a big house; you thought if you had a big house you'd be happy. Now that you have it, you're no longer conscious of it. If you lost the house, you'd be sad. But you aren't happy now--now you want a house by the beach.

I've noticed this in my career as an author. I wanted to get good at writing, and I did, but I wasn't happy. I wanted a book deal. So I got a book deal, and was I contented? Heck no! Now I want to sell a zillion copies and get good reviews. I see other authors who have achieved all that, and are they contented? Nope! They're afraid their next book won't do so well, they want to be a bestseller, they want a Hugo Award.

But . . . to me, this isn't misery. This isn't an outcome to be feared. This is the nature of being human. Sure, we could all do a little better at appreciating what we have. And there are goals we strive for that aren't really worth the effort--stuff like being popular or rich, which are both black holes for effort that never pay back as much as you hoped.

But wanting? Striving? Having dreams? I want to live like that. When I play a video game (Stardew Valley, naturally), I enjoy the part where I'm building up the farm. When it's all perfect and I've achieved everything, I'm bored.

A good life, to me, includes both happiness and striving. Smelling the roses and digging a fresh garden bed. I want to always have dreams. When I think of a positive future of humanity, I don't imagine people plugged into a vast network where we can't feel the slightest discomfort. I imagine a future where we've learned how to achieve the different parts of a good life in a sustainable way. Communities the right size to really get to know your friends. Art that we can all participate in without needing to make a living off it. Homes that rest easy on the earth, without extracting resources that aren't replaced.

Looking at my life, I feel I maybe haven't been striving enough. I've been trying to protect my mental health and my energy by taking it easy on myself. I've been reading a lot, playing some video games, taking walks. And that's fine; I don't feel guilty, since I get my work done. But it hasn't done as much good for my mental health as I hoped. Considering all I've said here, I think maybe I should be striving a little more. Every time I'm drafting a novel, I feel much busier, but I'm also happier. I like working on a difficult problem and making progress. The same goes for crafting--I can sew or weave happily the whole day long without feeling restless the way I do when I'm just scrolling the internet.

I've wandered far from the topic of ethics. I don't have an answer for what to do instead of utilitarianism. I'd hesitate to replace it with "the good life," because everyone has a different idea of what that is. Instead I'd just say: when your ethical system leads you to ignore good sense, you need a new one.

But I think that considering only happiness and suffering is just not enough. What kind of happiness? What kind of suffering? I'm happy to work to reduce suffering such as poverty, sickness, and death. But I'm not going to try to change who I am as a person so that I don't suffer. Instead, I'm going to try to pursue goals that are meaningful to me, and see how that goes.
