- If you could harvest organs from a single man to save the lives of two others, killing the donor in the process, would you do it?
- If you could save 10?
- What if the man had no family, meaning nobody would grieve for him?
- What if you could kill him without him ever having a chance to realize it? One moment he is alive, the next he is dead, and 10 more people live because of him. Would you do it?
- What if you could harvest only one kidney from him, without killing him and without endangering him at all? You could still save lives at minimal to zero cost to him. Would you force him to give up his kidney?
- What about forcing him to give blood twice a year?
Most people would answer “no” to all of these questions. If for some reason you answered yes to one of them, consider the case where the state makes this a law and turns it into a permanent system. Take the least invasive, least violent of these cases: obligatory blood donations. Imagine a government that fines or jails anyone who refuses to give blood. Of course, if you try to resist arrest the government won’t hesitate to kill you, especially if you become violent while defending yourself from the needle, the cop, or the fine collector. We would have a state that forcibly violates any notion of self-ownership. This is a system where the body is not owned exclusively by its person, where others have a say in what’s the most efficient use of your own kidneys, just as the Nazis decided what was the most efficient use for the Jews, or a slave owner for his slave.
That is my main concern whenever anyone brings up utilitarian arguments about a legal or political system. The end result, the utility, of a system should be evaluated only after you have applied some sort of moral code. If you do it the other way around then, even with the sanest definitions of utility, you will end up with some sort of fascism where no right is absolute and what it means to be human is a constant haggle between you and society, or those in power.
Here is a system with great results for society: build a fair lottery that picks random citizens, kills them, harvests all their healthy organs, and saves lives that would otherwise be lost. If each donor saves 5 lives, then this system looks great if you only consider the results. Your chances of survival are actually higher in such a society, since the chance of being saved by the system is 5 times bigger than the chance of being killed by it. Why don’t we have such a system in place? Because most people already put their morality before utilitarian evaluations of any system. And they should always do that, although they seem to remember it only when presented with extreme violations of their human rights. There is no golden ratio of utilitarianism to moral basis when judging systems. Even if you could save 5000 lives by murdering 1 person, that system would still be just as immoral. You are still treating an actual living person as non-human by denying him his very own body. A person dying from an illness because nobody can (and/or wants to) help him still dies as a human being; nobody is violating any of his rights. The donor doesn’t die as a human being: you make him something less than that right before you murder him.
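The arithmetic behind that survival-odds claim can be sketched with made-up numbers. The population size and donor count below are assumptions chosen only to make the ratio visible; the only figure taken from the text is 5 lives saved per donor.

```python
# Toy illustration (hypothetical numbers): in a lottery that kills random
# donors, each saving 5 patients, an individual's chance of being saved
# by the system is 5 times their chance of being killed by it.

population = 1_000_000
donors_per_year = 100              # assumed: randomly picked and killed
lives_saved_per_donor = 5          # the figure used in the text

p_killed = donors_per_year / population
p_saved = donors_per_year * lives_saved_per_donor / population

print(f"chance of being killed: {p_killed:.4%}")
print(f"chance of being saved:  {p_saved:.4%}")
print(f"saved-to-killed ratio:  {p_saved / p_killed:.1f}")
```

Whatever population and donor count you plug in, the ratio stays at 5, which is exactly why the system looks good on paper while remaining monstrous.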
Another problem with utilitarian arguments is that any definition of utility is arbitrary. What’s utility? The number of living persons? Of course not. Not every life is equal: one kid is obviously more important than a 90-year-old man. Quality of life should be taken into account too: it’s better to have 10% mortality per year with everybody happy and free than 1% mortality with everybody a mistreated slave to some master.
In my opinion the only measure of utility that could make some sense is happiness. Maybe in a few more years we’ll be able to measure the levels of the “happy” hormones in someone’s brain. Maybe we could even create a linear scale for it, where a score of 10 means twice as happy as a 5, as most people would perceive it. Then, perhaps, trying to maximize the mean or the sum of this well-defined happiness would actually make sense.
Would it really make sense though? Again, I don’t think so. First of all, this would still end up in fascist systems: killing people (or just zeroing their happiness) to raise other people’s happiness. Secondly, if you define utility this way, drugs would possibly offer the best solution. Just inject the damn hormones into everyone and you have a perfect society by that utilitarian standard! So even that standard will not make sense.
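The way the drug scenario “cheats” a summed-happiness utility can be made concrete with a toy maximizer. The policies and happiness scores below are entirely invented for illustration; nothing here comes from real data.

```python
# Toy sketch: if utility is just the sum of measured happiness scores,
# a naive maximizer picks the degenerate "drug everyone" policy.
# All policies and scores are hypothetical.

policies = {
    "free society":          [7, 6, 8, 5],       # varied, authentic happiness
    "mild welfare state":    [6, 6, 6, 6],
    "inject happy hormones": [10, 10, 10, 10],   # chemically pinned at maximum
}

best = max(policies, key=lambda name: sum(policies[name]))
print(best)  # the hormone policy wins on raw summed utility
```

No matter how the other scores are tuned, any policy that pins every individual at the scale’s maximum wins by construction, which is the absurdity the paragraph above describes.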
Utilitarian arguments always end up in absurdities when taken as the primary criterion for comparing political (or other) systems. There is always a way to “cheat” them into producing some insanely fascist system that maximizes whatever measure of utility you are using. And you always end up placing some moral code before your utilitarian arguments to escape this.
That’s my problem with any kind of state in the modern world. When you question its moral basis, people always bring up utilitarian arguments: “Yes, the state is immoral in some cases, but the moral alternative you propose would not work.” By using that kind of argument you imply that you put utilitarianism before your moral code. If you do that, then you might as well accept that killing people to harvest their organs is immoral but has immense utility, so we should do it. The alternative, as you would say, which is not killing people to harvest organs, costs human lives, many more than the one life of the forced donor. So who cares about morality, right? For some reason everyone cares about morality in such morbid cases, but when the moral violations are smaller, utilitarianism takes the lead.
You might be tempted to say that it’s not black and white, that there is middle ground, that using utilitarianism along with a moral code is a viable solution. But that doesn’t make sense. If your moral code is not absolute, if it allows you to skip parts of it whenever doing so raises some other utility, then you are actually 100% utilitarian, since your moral code is just another utility measure. You still judge systems by their utility, but you include some “moral” rules that, when broken, make you “sad” or “angry” and therefore impose a penalty on your measured utility for the system.
Moral rules are by definition absolute. If you say hurting an innocent is bad, that’s an absolute moral rule. If you allow hurting an innocent in certain cases, then that’s not a moral rule; you are just turning <number of unhurt innocents> into another factor in your measure of utility.