Lately I've been reading up a lot on morality and ethics. I'm less concerned with what is right and wrong -- in most cases I'm not in any real doubt about that -- than with how I know. In moments of doubt, what is your rule of thumb -- what rules must all your decisions follow?
Empathy

The most obvious and natural standard is empathy. You instinctively care what happens to others, so you avoid harming others and try to help them when you can.
However, empathy is not an entirely trustworthy moral standard. First off, not everyone has equal amounts of it, and some rare people don't appear to have any. And second, humans can be pretty irrational in what we care and don't care about. Snow White's huntsman was too compassionate to kill her outright, so he left her in the woods to starve to death. More painful for Snow White (if the huntsman's expectations had proven right), but less agonizing for the huntsman, because he didn't have to be a part of it. Humans are also kind of terrible about giving empathy to people they don't feel close to. And when we're upset or tired, we're less willing to empathize even with our own friends.
Conscience

Conscience is a blend of innate feelings and training. You feel guilty when you do a bad thing, either because of empathy or because you know you've done something you were taught was wrong. It's a great guide, but it can be wrong if you were taught the wrong things. Take Huck Finn, who felt horribly guilty for helping Jim escape from slavery, because he had been taught it was stealing. But his empathy won out in that case.
The Catholic Church is entirely right when it says that you should always follow your conscience, but you should also form your conscience. You should train yourself in the habits that you know are moral. But this assumes you have some further source of morality to check your conscience against.
Reason

Should you use reason to assess your moral standards? Absolutely! Feelings can be unreliable. However, reason alone isn't ideal for making decisions, because it can't tell you what you want. Hume pointed this out when he said that reason alone can't take us from is to ought. I think that gap is easily bridged, though, if you assume that what you want is the good of everyone, not just yourself. And that seems to be pretty universal as a standard for morality. If we all make decisions only in our immediate self-interest, human society as a whole will collapse and everyone will suffer. But if each of us tempers our self-interest with some basic moral rules, all of us will profit. It's really not a big leap to say we ought to do this.
Divine command ethics
This is simple: do what God says, don't do what God forbids. That raises some questions, though, like "how do we know what God says?" A pretty serious problem, when you have people blowing up buildings because they thought God wanted them to. Even if you limit your reasoning to Christianity, there's no end of debates about whether God is okay with birth control, homosexuality, slavery, and so forth. Either it wasn't mentioned in the Bible, or it was but what the Bible says goes against people's consciences.
Personally, I can't see how a Christian who believes in divine command ethics could oppose slavery. It's pretty clear what God says about it. Your conscience might be opposed, but if your conscience opposes God, you should form your conscience better, shouldn't you?
There's nothing sadder than hearing a group of religious people say that they would love to do a kind, loving action, but sadly God will not let them do it. They're between a rock and a hard place. And in general my sympathies are with those who, like Huck Finn, are willing to say, "I feel so strongly X is right that I'm willing to go to hell for it." After all, if God really were evil -- if the universe were ruled by Cthulhu -- the right thing would be to oppose him, even if he had the power to punish you eternally for it. Of course, if God is good, he knows best what is good -- but one's knowledge of religious truths is always a little uncertain, requiring a leap of faith, and it seems wise to consider the possibility that you might be wrong about what God wants. To do that, you have to have some other source of ethics to cross-check your religion against, whether it's your conscience or a rational rule.
The Catholic Church doesn't purely teach divine command ethics. While it does say you should follow divine commands, it also says that divine and natural law do not contradict. So it's perfectly comfortable devising new moral teachings about things not mentioned in scripture, based on natural law or the common good. Still, it expects its conclusions to be taken on faith, and if your conscience disagrees, you should form it until it does.
The categorical imperative

I love Kant. I have to laugh, because in college I didn't think he had come up with anything original. Then again, I had a very boring professor. Kant came up with the categorical imperative -- "Act only according to that maxim whereby you can, at the same time, will that it should become a universal law." At first that sounded like just another formulation of the Golden Rule -- do as you'd be done by. And I suppose it is. But the thing about it is that it makes rational sense -- if you have a rule for your own actions, you should expect that others will pick up on that rule and apply it to you. Don't lie, because when people catch on, they'll do it too. Do you really want your actions to be a general rule? Because odds are, they will become so.
The veil of ignorance

John Rawls said the perfect society would be built by people who didn't know what role they would have in it. Of course that's not actually possible. But it's a useful thought experiment to ask myself -- if I didn't know which person in this conflict was me, what would I do? When I argue with my husband, for instance, I imagine what I would think of some other couple having the same fight. Would I agree that the wife was in the right? Or would it be pretty clear, once you took my own self-interest out of the question, that the wife needs to apologize? I'm looking for an answer that would be equally good for either of us.
When I was growing up, we had a similar rule for sharing: one divides, one chooses. My brother would cut the apple or candy bar or whatever in half, and I would pick whichever half I wanted. As a result, he always cut it as exactly as he could, because he knew if one half was obviously bigger, he wasn't going to get that one.
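The incentive behind "one divides, one chooses" can be sketched in a few lines of code. This is just an illustration of my own, not anything from the rule itself: it assumes a 100-unit treat and a chooser who always grabs the bigger piece, and shows that the divider's best strategy is an even cut.

```python
# Sketch of the "one divides, one chooses" incentive. Assumptions:
# a 100-unit treat, and a chooser who always takes the larger piece.

def divider_share(split):
    """What the divider is left with: the chooser takes the larger piece."""
    pieces = (split, 100 - split)
    return min(pieces)

# Try every possible cut and see which one leaves the divider the most.
best_split = max(range(101), key=divider_share)
print(best_split)  # prints 50: an even cut maximizes what the divider keeps
```

Any lopsided cut only hurts the divider, which is exactly why my brother cut as evenly as he could.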
Consequentialism

Everyone uses this one, if only to defend what they've already decided. It's taken as proof of a good moral law that it has good results on a societal scale. It's particularly helpful when voting: you might support, say, single-payer healthcare because you expect it to have good results, but if it turns out to have bad results, that's evidence it wasn't the best moral choice. The more you know about a choice's potential effects, the better moral judgment you'll make.
This has two flaws: first, it doesn't tell you what kind of consequences to want, and second, you don't actually know the future. But it's certainly an idea to include in your decision-making.
Utilitarianism

This more completely spells out the sort of consequences you want: the greatest happiness for the most people. The thing is that moral choices can't really be mathematically summed up like that. Which counts for more, cancer for one person or indigestion for forty million people? Is it okay to hurt some people to give enjoyment to other people?
This one gives me a bad vibe. It seems it could so easily be used to justify atrocities. But on the other hand, in general it is useful to ask how much suffering your actions prevent, or how much joy they are likely to bring. For instance, if you have $10 you want to give to charity, you might want to find out which charity is likely to help the most people. Still, I wouldn't use utilitarianism all by itself to make decisions.
Virtue ethics

This is a great antidote to utilitarianism. It recognizes that you are a human being and you don't make moral decisions in a vacuum. You are not going to execute the criminally insane one fine morning and go home to be a loving parent that evening. You are constantly forming habits which will help you to act in all kinds of situations, big and small. You won't always have a chance to deliberate, or you won't have all the information, or you will be in fear for your life, and you will fall back on your habits. Do you habitually respect human life, or not? Do you make a habit of thinking of your neighbor's needs as equal in importance to your own? If you do, very likely you'll make the right choice in a pinch.
I love virtue ethics because it shows that even small moral choices matter. A lot of the other theories rely a lot on thought experiments ("a building is on fire, do you save a tank of 10,000 embryos or a toddler?") but don't apply much in your day-to-day life.
Rights

Rights language is deontological, that is, it relies on hard-and-fast rules rather than flexible judgments. I like that. There is no real difference between saying "thou shalt not kill" and "everyone has the inalienable right to life," but in some cases rights language is handier. You can justify self-defense (by saying that a person who attacks another's life loses their own right to life) and you can weigh rights against other rights.
The downside is that it doesn't actually tell you what rights are more important. You have to decide that separately. Ideally a society will choose which rights to honor, and since everyone has the same ones, it sets up clear boundaries between things people owe you and things they don't.
The non-aggression principle

This states that no one may initiate force against another's person or property. It's a favorite of libertarians because it forbids aggression while allowing for self-defense. But it seems deeply lacking to me -- more the beginning of morality than the end of it. Sure, if everyone followed it, there would be no war. But there would be people shooting trespassers and petty thieves. And since it's purely negative, it doesn't require positive morality such as care for children or charity for the poor. I also dislike the way it treats property as equivalent to life. I myself don't think the right to property is absolute, unless it's the property someone needs to survive. If the government wants to tax your second beach house, I really don't see how that's theft.
Rule utilitarianism

So far as I understand it, this is something of a blend between utilitarianism and the categorical imperative. You choose a set of rules that, if followed, are likely to result in the greatest happiness/least suffering for the greatest number. And then you follow those rules all the time, rather than making a new assessment of possible happiness and suffering caused by each decision. That's because a decision made for utilitarian reasons still sets a precedent for others' actions. If you wipe out all the Jews because you think it will cause a better future, you've created a world in which wiping out minorities is now a thing. So even if you could alleviate some suffering by such an action (which, for the record, you couldn't) it would still be a bad action because it would be according to a rule which you wouldn't like others to use. It also blends well with virtue ethics because you'd make a habit of following those rules.
So which ethical theory is mine? Oh, all of them, I suppose. That is, I use different ones in different cases. For day-to-day life, I tend to stick with virtue ethics, asking myself, "Will this action help me be a better person?" Developing self-discipline and compassion is just a good thing to do, even if the specific action doesn't have any other noticeable effects. When I'm picking policy decisions to support, I ask, "What consequences is this policy likely to have?" When deciding whether to follow the speed limit, I think, "I want others to follow the speed limit, and if I speed, I'm encouraging others to do the same." When I want to make a donation, I think, "Which charity will help the greatest number of people?" When asked about the bombing of Hiroshima, I might say, "The innocent victims had an inviolable right to life."
There are others I don't use. I no longer ask "What would Jesus do?" because I've found it's too easy to assume that Jesus would do what you would do. And I don't ask, as I did for years, "What would be the most unselfish thing to do?" because I've learned the consequences of that are resentment and personal suffering. And because I think that I have rights too -- I am no more important than anybody else, but I am also no less important.
And no theory I've yet examined -- religions included -- seems to know exactly how differing values should be balanced. None of them can tell me whether it is more important to feed my own children or starving orphans abroad; whether it is more important to save a life or to conceive a new life; whether a heart transplant should go to the medical missionary or the mother of ten. Some of them might try -- utilitarianism could attempt the job -- but it all depends on unknown factors or unweighable values. I think it's okay for the people close to me to be more important to me than those far away; I can't explain why, but it seems right to me.
Another question that isn't easily answered is "who is my neighbor?" What is the group of people who have rights, whose happiness we are concerned with? In ancient times, most people assumed it was their own tribe alone, which is why Jesus was so revolutionary in saying it could be a foreigner. But expanding the in-group benefits everyone: just as it helps individuals to be able to cooperate on a firm moral footing with others, it helps groups if they can cooperate with other groups. The more connected the world is, the more vital it is to treat opposing groups morally. In today's world, a failure to treat other nations fairly could result in a nuclear winter.
But not everyone is going to cooperate with you, either now or at any time in the future. How exactly does one demarcate which beings are morally significant and which are not? Some people think the rights of animals are as important as those of humans; I strongly disagree, not least because it's impractical. If you want to defend the lion's right to life and liberty, you can't do the same for the gazelle. "Any being I can empathize with" seems a common demarcation, but it relies entirely on emotion, which is shaky ground. On the other hand, respecting what you feel empathy for seems a virtuous habit in general. "All human individuals" is a good group, except that if we ever meet sentient aliens, it seems silly to think they don't have moral weight. But if instead you said "anyone with a human level of intelligence or higher," you'd cease to value the severely mentally disabled, which is unacceptable to me. Perhaps you should respect all these groups -- if a being is intelligent OR human OR adorable, you shouldn't kill it. Still, you're going to have to choose which definition has more moral significance -- do you save an alien, or a human infant? Would you kill a majestic lion if it was mauling a disabled person? (I would.)
So, there you have it: plenty of moral codes to guide your actions, in case you've been living the unexamined life up to now. The nice thing is that most of these would come up with the same answers to all of your common moral problems. The basic rule of life, "do unto others as you would have them do to you," has been independently invented more than once, and it works pretty well. Do it because you recognize in them the same feelings that mean so much to you. Do it because you will benefit from living in a society that has strong moral standards. Do it because you love them. Do you really need a more complicated reason than that?
(The above post owes a lot to this one: The Ineffable Carrot and the Infinite Stick.)