In Defense of Moral Codes

Alan Keegan
7 min read · Apr 7, 2021

Heuristics are good for decision-making, because we are bad at making decisions.

In fact, heuristics are better for decision-making than we think they are, because we are biased to think that we are better at decision-making than we actually are.

This, ultimately, is the value of moral codes. Having a systematic set of pre-defined rules and priorities into which you can input your decisions often has a better outcome than trying to do everything case by case.

A moral compass, if you will.

In life, the assertions above are hard to measure. What is “good” decision-making in life? Is it happiness? Is it health? How do we test this?

The closest approximation we can find for people who follow moral codes may be those who are actively religious. And it does seem that (once you control for age, gender, education, marital status, and income) people who self-identify as “actively religious” are significantly more likely to identify as very happy, to not smoke, to drink infrequently, to describe themselves as “in very good health,” to vote in elections, and to belong to various community organizations, among other things.

For a more granular look at this study, see https://www.pewforum.org/2019/01/31/religions-relationship-to-happiness-civic-engagement-and-health-around-the-world/

But even then, this is a questionable proxy for moral codes. Many people function by moral codes that are not based in religion. Many who may identify as “actively religious,” and have a regular practice, don’t consciously abide by a moral code.

So, perhaps it is easier for us to look at examples where the goals of an action are simple and quantifiable, and case-by-case versus heuristic thinking is more obvious.

Day Trading and Driving

“Time in the market beats timing the market” is a famous heuristic in investment. Most of the folks I know with little knowledge of financial instruments still know it’s good to have exposure to assets, and that, over time, they should expect simply holding risky assets to grow their wealth.

The average historical return of the S&P 500 over the past 90 years is about 9.8% per annum. But anyone looking at any price chart can tell that if they had sold and bought at the right times, they would have made much better returns. Or, maybe, that if they had held the right stocks they would have better returns. It’s easy to see that, in a universal and literal sense, timing the market actually can beat time in the market. The heuristic that “time in the market beats timing the market,” strictly speaking, is wrong.
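To make the compounding arithmetic behind that heuristic concrete, here is a quick back-of-the-envelope sketch in Python (the flat 9.8% rate and the $10,000 starting amount are illustrative assumptions, not figures from any particular study):

```python
# Back-of-the-envelope compounding at the S&P 500's ~9.8% historical
# average annual return. The flat rate and the starting amount are
# illustrative assumptions; real yearly returns vary widely.

ANNUAL_RETURN = 0.098
YEARS = 30
START = 10_000  # hypothetical starting investment, in dollars

balance = START
for _ in range(YEARS):
    balance *= 1 + ANNUAL_RETURN

print(f"${START:,} held for {YEARS} years at 9.8%/yr grows to ${balance:,.0f}")
# Prints roughly $165,000: simply holding compounds dramatically.
```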

So, it’s understandable that many people choose to pick stocks and day trade rather than just holding the index. This is an example of seeing a heuristic that is easily disproved with a counterexample, and deciding instead to make case by case decisions. The decision seems pretty logically defensible. And, in this case, unlike squishy ideas about moral codes and a good life, we have a very clear way of measuring “good outcomes” in trading: positive returns.

Whether the case by case or heuristic approach is actually better depends on how good that person is at decision-making. Timing or choosing stocks well will make more money; timing or choosing badly will make less. And whether the people who choose the case by case (day trading) approach tend to do better or worse is a clear indication of how good people are at assessing their own ability to do that decision-making. If people were, in general, accurate in their assessment of whether they’d do better day trading, the people who ended up day trading would be the people for whom it is profitable.

As it happens, the aggregate day trading action in the world is highly unprofitable. The vast majority of day traders (on an individual basis) are unprofitable. And, those who are unprofitable are only slightly more likely to quit.

This is one of many examples where a good heuristic generally beats an individual’s case by case assessment, but people tend to think they are part of the minority that can have a positive expected value from making the case by case assessment (I originally had an entire section here on heuristics in Magic: The Gathering as well, due to its incredible decision-making complexity, but I thought it might not be relatable). This is an example of the same “illusory superiority” that leads 93% of U.S. drivers to think they are in the top 50% of driver skill.

So what does this have to do with moral codes?

Systematic Moral Decision-Making

The trader in the case above can easily argue against someone encouraging them to just hold assets over the long term by pointing at a chart and saying, “But if I had sold here and bought here, even just one time, I’d have better returns.”

The issue is a conflation of the accurate observation that exceptions exist with the implicit assumption that you can accurately identify future exceptions. That implicit assumption is necessary in order to make the case that the imperfection of the heuristic leads to the conclusion that you would do better deciding case by case. Logically isomorphic arguments exist in discussions of morality.

The most prominent of these is the “murderer at the door” problem, generally used to dismiss Kant. Kant was a staunch believer in moral imperatives, that is, moral rules that apply in all cases.

One of them is “It is wrong to lie.” The obvious counterexample: a murderer knocks on your door and asks whether you are hiding their intended victim in your home, and the rule obligates you to say “yes,” which directly enables the murder and causes the death of an innocent. The same argument is often made with Nazis in place of the murderer and Jewish refugees in place of the victim (think of the opening scene of Inglourious Basterds). This obviously makes the moral rule “it is wrong to lie” sound a bit silly when applied universally.

With that said, it does seem like folks making case by case decisions on whether or not to lie are, on net, likely getting rather poor “moral returns” on that approach versus just not lying (a lot like our day traders). Many people can relate to the experience of telling a “white lie” and, on reflection, wondering if they were making excuses to themselves for not sharing a hard truth. Isn’t this quite a quandary to sort through in creating a practical approach to moral decision-making in the real world?

No, not really.

Moral codes rarely exist with only one rule, and the practical applicability of any moral code with multiple rules for decision-making requires a ranking of those rules. To be able to put your situation into a system of moral rules that leads to a course of action, those rules must say which ones supersede which.

The “murderer at the door” example is trivially easy to solve with a moral rule of “not taking actions that have a high probability of leading to the immediate death of innocents” ranked somewhere higher than “do not lie.” And I imagine (or, I mean, hope) that anyone reading this article, if they were to undertake even a five-minute exercise of writing down a moral code of ranked rules, would write one that included a rule like “don’t take actions that lead to innocent deaths” somewhere right near the top.
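For the programmatically inclined, here is a toy sketch of what a ranked rule system looks like; the rule names and the situation fields are invented for illustration, not a formalization of any actual moral code:

```python
# Toy sketch of a ranked moral code: rules are checked in priority
# order, and the first rule that applies decides the action.
# The rules and situation fields below are purely illustrative.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    applies: Callable[[dict], bool]  # does this rule bear on the situation?
    action: str                      # what it tells you to do

# Ranked highest-priority first.
MORAL_CODE = [
    Rule("protect innocents",
         lambda s: s.get("truth_would_endanger_innocent", False),
         "do not reveal the truth"),
    Rule("do not lie",
         lambda s: s.get("asked_a_question", False),
         "answer truthfully"),
]

def decide(situation: dict) -> str:
    for rule in MORAL_CODE:
        if rule.applies(situation):
            return f"{rule.action} (per '{rule.name}')"
    return "no rule applies; think harder"

# Murderer at the door: the higher-ranked rule wins.
print(decide({"asked_a_question": True,
              "truth_would_endanger_innocent": True}))
# -> do not reveal the truth (per 'protect innocents')
```

The structure is the whole point: rules are consulted in priority order and the first one that applies decides, so a glaring exception to a lower-ranked rule is absorbed by a higher-ranked rule rather than by abandoning the code.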

The very fact that this example is so widely agreed to be problematic implies that it would violate something most people would include as a higher moral rule. This means that putting a moderate effort into making a set of moral rules would quickly eliminate all obviously problematic exceptions to each heuristic rule. It is an argument against having a one-rule moral code, but it is not an argument against having a moral code.

And that is precisely what I would encourage you to do. Write a set of moral rules for yourself, and do your best to follow them. Spend less bandwidth every day on what to do in particular cases, and consult what your rules say you should do. If this leads to actions that really don’t sit well with you, spend more time thinking on your rules. Make an attempt to create, and through time improve and develop, a code of morality. It will probably lead to better moral decision-making, and it will decrease the energy you sink into moral quandaries as they arise on a case by case basis.

Systematic decision-making generally beats case by case decision-making, especially insofar as you reallocate some of the energy you save (by reducing case by case thinking) into improving your decision-making system (moral code). As an old boss at Bridgewater once told me, “You can wake up every day and figure out whether to buy or sell gold, but you’ll do much better waking up every day and figuring out how to improve the system for deciding whether to buy or sell gold”.
