Will the A.I. be Vegan?

Alan Keegan
Mar 24, 2021

Let’s hope so.

To frame this conversation, I am not a vegan. Like most people, I just eat meat because I like it.

But veganism is fascinating. It falls into a category of social movements where one side of the debate is made up of morally motivated, passionate warriors for a cause attempting to change society, and the other side just doesn’t have an interest in thinking about the topic.

Usually this happens when one side of the “debate” doesn’t have any acute pain associated with the issue. This is why these types of movements, almost necessarily, have to use tactics that are…annoying.

The same was true of the women’s suffrage movement, and of the teetotalers fighting for Prohibition. Both movements had to convince the majority to change their minds on an issue that the majority didn’t even view as an issue. They made it an issue by creating disturbances at public events or by ransacking drinking establishments.

These two examples also demonstrate the nearly binary outcome of “successful” movements of this type.

Sometimes, like with women’s suffrage, it turns out that the movement was accurately pointing out an unexamined belief or practice that doesn’t actually square with society’s morality. In such cases, at the beginning, only a few find the argument convincing or interesting, and the majority find it to be a silly conversation. But, ultimately, the majority ends up agreeing with the previously “radical” stance, with only a few holdouts who are now the minority.

In the case of the teetotalers, their movement succeeded. It had its day in the sun and even got Prohibition written into the law of the land. But, even with such victories, it turned out that our cultural values don’t jibe with prohibition (drinking is fun, and we also value individual freedoms) and, in hindsight, the whole idea seems like a strange phase we went through.

So, which category does veganism fall into? Will we look back and think it’s crazy that we ever ate animals, or will we look back and laugh at the extremeness of the vegan movement?

Figuring out which one veganism is requires looking past the folks who are more attached to veganism as a personal identity and dealing with the moral questions at its core. That exercise, it turns out, might also help you decide whether you’re down with humanity being enslaved and/or genocided and/or modified by the AI.

The Moral Weight of Animal Suffering

Does the suffering of animals have any moral significance at all? If I burn my cat to death for no reason, is that bad?*

The intuitive answer is yes. If someone in my neighborhood kept buying kittens and burning them to death, he would appear evil.

But it’s hard to move forward from this intuitive idea unless we put together a sound logic for why animal suffering matters and how much it matters.

And the most compelling case I’ve seen on that question comes from Peter Singer, a kindly old Australian man whom you might recognize as the guy with a habit of articulating moral arguments so clearly that they force you to acknowledge the moral incoherency of your whole goddamn life.

But before becoming the kindly old man who helps articulate the effective altruism movement, he was just a devastatingly handsome, hungry young Australian professor who, in 1975, wrote the book “Animal Liberation,” which uses the Principle of Equal Consideration of Interests to make the case that if humans have rights, animals have rights too, and those rights include not being subjected to the mind-blowing, torturous hellscape of modern industrial farming.

I’ll put a bit more description of his argument right here in an aggressive footnote** if you want to pause and read it (I actually recommend getting the book). But you don’t have to take Peter Singer’s argument “whole hog” (lol) for this consideration to start getting your attention.

For the purposes of this conversation let’s reduce it to this:

It definitely seems like animals are capable of suffering. If they are capable of experiencing suffering, inflicting suffering on them has some moral weight.

Now, if you have done any exploration at all of the conditions in most of the meat and dairy industry, this is enough to make the question seem worth engaging with. The levels of suffering are astounding and the scale is enormous (tens of billions of land animals are raised for food every year), so even at a very discounted relative value of animal vs. human suffering, there is a conversation to be had here.

So, now, we need to ask: is the moral weight of animal suffering the same as the moral weight of human suffering?

Our intuitive answer is usually no.

If a kid, even my least favorite kid in the neighborhood, is in a fight to the death with a possum, even a cute baby possum, and I have a gun — I’m shooting that possum.

But the intuitive answer isn’t necessarily the correct answer. We need to figure out why we think human suffering has more moral weight, and maybe even by how much.

Would you rather kill a dog than a human?

Would you rather torture a dog than slap a human?

Would you rather burn 20 dogs alive than put salt in a human’s cereal?

We have to focus on the basis of this preference for human wellbeing if we want to know how to translate that difference into a tractable system of morally weighing options when deciding things like whether our society ought to eat meat.

If the only basis we have left is the a priori assertion that humans are the only type of animal with moral significance, simply, you know, because, then we’re making an arbitrary distinction between the moral value of two beings based on a characteristic that has no justification for being morally relevant. That is pretty logically similar to sexism (delineating moral value by the arbitrary distinction of sex) or racism (delineating moral value by the arbitrary distinction of race). It’s just that in this case we are arbitrarily delineating moral value based on the distinction of species.

So, really, for this to be a justified distinction, there would have to be an intellectually honest reason to think that the barrier of “species,” unlike race or sex, is actually morally relevant.

This conversation almost always turns, now, to intelligence.

Intelligence and Moral Weight

Smashing an ant with a hammer might seem morally preferable to smashing a cow with a hammer. Which, in turn, seems morally preferable to smashing a human with a hammer.

Perhaps if we put animals on a spectrum of intelligence, we can get a sense of the relative importance of their suffering. Ants are less intelligent, and therefore less morally valuable, than cows; cows are less intelligent, and therefore less morally valuable, than humans.

There are two major problems with this intuitive way of justifying our moral weighing of human suffering over that of animals. The first comes when you apply the same spectrum of moral weight among humans themselves.

If intelligence (or, say, capacity for rational thought) is our basis for valuing relative suffering, then within the group of humans on our spectrum we are obligated to separate humans of differing levels of intelligence. Consistent logic would demand that we sort our consideration of human needs by who is stupid and who is morally valuable. It removes our justification for ascribing the same human moral value to the severely brain-damaged, or to the differently abled. Many of them, by this framework, would be no more morally valuable than some animals.

And, given how we treat animals, that’s a bit morally icky.

It’s around this point that you may begin wondering,

“Man, if I’m working this hard to find a way to make intelligence work as a basis for relative moral weight, maybe a part of me really did just inherit the belief that humanity has a special divine spark, being made in God’s image unlike any other animal, and given righteous dominion over other beings of the earth. Or at least I act like I believe that.”

But the even greater problem comes when we extend the spectrum of intelligence (and corresponding moral weight) beyond human levels of intelligence.

Artificial Intelligence and Relative Moral Weight

A general AI would certainly, by almost any measure of observable intelligence, be more intelligent than a human. If moral weight scales with intelligence in the way our treatment of animals would imply, a general AI’s simple pleasures might outweigh human suffering by as much as the human pleasure of cheap bacon outweighs a pig’s suffering.

If you’re trying to make an “observable intelligence” justification (rather than a “God says we are special” argument) for why animal suffering is less significant than human suffering, then you are also morally justifying an AI treating us just as cruelly for just as marginal a benefit to itself.

Keeping humans as batteries in pods, plugged into The Matrix, is actually significantly more morally sound than factory farming by this logic. Relatively speaking, the humans in The Matrix have pretty good lives, and with no solar power available, the machines’ need for us as a power source seems greater than current humans’ need for meat products.

A more equivalent treatment would be the AI deciding that it’s morally acceptable to keep all humans in cages just larger than our bodies, slowly descending into insanity over the course of our lives while being fed paste and antibiotics, the latter needed to keep us from contracting deadly infections from the excrement (our own and our crowded-in neighbors’) that covers our claustrophobic prison until we are ready for harvest.

At this point, you may be constructing a moral argument for why this is not a morally acceptable behavior from an AI. But, in the words of Joey from Friends, what you think of this is a “Moo point.” It’s like a cow’s opinion. It just doesn’t matter.

We do not concern ourselves with whether cows think that their situation is morally acceptable — their inferior capacity for moral reasoning means it is our decision to make. The AI could, quite fairly, think exactly the same way about humans.

To bring the argument up a level here, the basis of this entire debate already has a built-in assumption: that we are better equipped than cows to make the moral decision in the first place. We are having this discussion on the basis that this is our decision to make.

To prepare ourselves to deal with the presence of a super intelligent AI, the question we need to ask may not be “Whose suffering matters more?”, but, “Whose opinion about ‘whose suffering matters more’ matters more?”

We haven’t even considered the possibility that we shouldn’t have the moral authority to decide whether our own treatment of animals is moral — we’re forming a human opinion about it, obviously presupposing that our opinion matters whereas the cow’s opinion does not: Why? Because it is less intelligent?

To the degree we put aside the cow’s obvious preference not to be in terrible conditions, the AI is justified in ignoring whatever our preference is. According to our own morality, it has no moral obligation to even care what our arguments are. Our moral opinions, by the same logic, are almost completely devoid of value next to the moral ponderings of a significantly more intelligent AI. It would know the better answer to the moral question. Even if you constructed an argument for the AI treating us well that was somehow not hypocritical given our treatment of animals, your expectation that the AI would lend any weight to your opinion is hypocritical in itself.

And I sure hope the AI thinks like a vegan.

Which is to say, I sure hope that the vegan answer to this moral question is the better answer.

*Incidentally, Aquinas’ answer was that this is only bad because it could give you a worse disposition toward humans; so long as it didn’t, and you kept to yourself, it wouldn’t be evil at all. This article is longer than it ought to be anyway, so that lil’ somethin’ is down here in a footnote.

**Peter Singer basically argues this case based on the Principle of Equal Consideration of Interests: that similar interests should be weighed equally regardless of their origin. This same framework was a key argument against slavery back in the day: it was obvious that all people had an interest in freedom (and a desire to be free), regardless of their origin or ethnicity, and that very interest implied some right to self-direction (or at least some moral question about denying it). Incidentally, to go back to another example, it was also instrumental in some arguments for women’s suffrage: women are as much affected by the outcome of elections as men, and so have an equal interest in them, and therefore an equal moral claim to vote in them. The principle also does a pretty good job of dealing with babies, which tend to be a really sticky issue in a lot of political theory. A baby has an interest in being fed and cared for, but doesn’t care about the election (and doesn’t need a vote). This is key because it means the principle does not ascribe pre-defined rights universally to all beings, but rather ascribes them to different beings based on their interests. A being’s rights are tied to what it is capable of caring about. Obviously babies are beings with different interests than adult humans, but that doesn’t mean we can ignore that their interests have moral weight. Animals are also beings with different interests than adult humans. Using this principle, Singer argues that given animals obviously have some interests that are totally comparable to humans’ (like not spending their entire lives tortured in a dystopian hellscape), they probably ought to have some rights (we should make cruelty to them illegal). We don’t need to allow them to run for office or anything like that. But if we acknowledge that they have an interest in not feeling pain, yet ignore the moral weight of that interest based on distinctions delineated by morally arbitrary characteristics, like what species a being is, we are making a moral argument analogous to sexism or racism, which make arbitrary moral distinctions between like interests delineated by sex or race (without providing a basis for the moral relevance of that delineation). Industrial farming, where animals are raised in captivity for food, goes against their interests and is therefore morally wrong. It is morally wrong to support morally wrong practices, so purchasing its products is morally wrong. Go vegan. This is an extremely brief summary of the argument; I recommend reading the book to give it a fair shake.
