Anthony Zhou


On Logical Consistency

Exploring the implications of logical consistency for moral and practical goals.

Let’s say you believe that the best way to write a computer program quickly is to avoid early optimization. One day, you notice yourself tinkering with a for loop for hours to optimize its runtime before you have even finished the method you’re writing. Fortunately, you’re a rational person: you recognize that this behavior is inconsistent with your belief about avoiding early optimization, so you finish writing the method before optimizing the for loop further.
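
As a minimal sketch of what the consistent behavior might look like (the scenario above doesn’t name any particular code, so the function, data, and profiling step below are hypothetical), you would finish a plain, correct version of the method first, and only then measure whether the loop is actually worth optimizing:

```python
# Hypothetical illustration: finish a plain, correct version of the method first.
import cProfile


def summarize_scores(scores):
    """Return the (mean, max) of a non-empty list of numeric scores."""
    total = 0.0
    highest = float("-inf")
    for s in scores:  # straightforward loop, no micro-optimization yet
        total += s
        if s > highest:
            highest = s
    return total / len(scores), highest


if __name__ == "__main__":
    data = [0.3, 0.9, 0.5, 0.7] * 100_000
    # Only after the method works do we measure whether the loop is a bottleneck.
    cProfile.run("summarize_scores(data)")
```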

Now suppose you believe that animal suffering is important. On one hand, you support laws forbidding animal cruelty. On the other hand, you have not acted to support laws against factory farming, which arguably subjects animals to even worse conditions than the cruelty those laws already forbid. Clearly, this behavior is inconsistent with your belief that animal suffering is important.

At first glance, these two scenarios look similar. Given that you start with a belief, the rational thing to do is to act consistently with that belief. In practice, however, these cases are quite different. The programming scenario deals with a question of reality: on average, do you achieve a high-quality program more quickly when you optimize early or when you don’t? The animal suffering scenario deals with a question of morality: is it right to treat animal suffering as morally important?

Questions of reality and morality are actually very different — claims about reality are falsifiable, whereas claims about morality are taken on faith. Demanding full consistency with a moral belief is equivalent to putting full confidence in that belief.

Within moral philosophy, we’re often faced with conclusions that go against what we intuitively think is right. Consider, for example, the repugnant conclusion, or the supposed utopia in which we all plug into pleasure machines. You might think that accepting these scenarios is necessary for the sake of moral consistency. In fact, though, once we realize that moral consistency presupposes full moral confidence, it becomes less clear that a rational person should support these scenarios.

This raises the question: is moral consistency rational? Surprisingly, I think the answer is no, because it presumes a seemingly irrational degree of confidence in moral beliefs. Given the lack of objective evidence for moral beliefs, the perspective most consistent with reality is to never place complete faith in any one moral belief. Thus, in the sense of being consistent with reality, moral consistency may in fact be irrational.

Maybe this all sounds reasonable in theory. But what should we do in practice, if we place little to no confidence in each moral belief? Do we not have to act as if we hold certain morals to be true, even if at an intellectual level we don’t believe them?

Not exactly. In a world where we are uncertain about most morals, we can at least optimize for things that are instrumentally valuable in almost all moral frameworks. For example, most people believe it is good to survive. Money, physical fitness, and healthy relationships are all important for this goal. So even someone who is uncertain about their morals should accept wealth, fitness, and relationships as instrumental objectives.

But a life lived solely at the corporeal level might seem empty. Is there some way we could make this amoral life more fulfilling? Maybe we should add “life satisfaction” as another basic objective.

To improve on this metric, we can start by doing things that are roughly good. People generally report feeling more satisfied after doing good things, whether giving gifts or serving food at a soup kitchen. Note, however, that maximally good acts do not increase life satisfaction more than roughly good acts do. Beyond mere diminishing marginal returns, the effect may actually go in the opposite direction (see Barry Schwartz’s research on satisficers and maximizers).

Switching to this frame of mind, doing things that are roughly good, should not drastically change the behavior of most altruists. For example, you might still think that it seems roughly good to stop factory farming. But you would no longer do so for the sake of moral consistency, applying a moral law generalized from your attitude towards people who abuse their dogs. Instead, it would suffice to say that ending factory farming is roughly good. You might even argue further that it is typically (but not always) good to reduce the suffering of sentient beings.

Rejecting moral consistency is useful in two ways: first, it gives us reason to set aside beliefs that are internally consistent but intuitively repugnant. Second, it frees us to consider the full realm of moral possibility.

Alternative maximizing frameworks

Beyond simply maximizing human welfare, we can consider other metrics to optimize for, adopting these frames of mind when the context demands it.

One compelling strategy is to maximize knowledge — often called “seeking the truth.” Within this framework, scientific progress is clearly valuable. In addition, technology entrepreneurship can be seen as an extension of experimental science, because technology involves changing our reality and learning from the results.

Roles that are only instrumentally valuable in this framework include most legal, financial, and service-sector jobs: they help preserve the stable society in which knowledge can grow. Education is also instrumentally valuable, though more directly, because it enables more people to reach the edge of our collective knowledge and push its bounds.

Another strategy is to maximize accordance with nature. In this framework, we should allow ecological-looking systems — like markets — to decide what’s right. Thus there are two kinds of moral actions: either you can work to free the markets from regulation and bad actors — as the libertarians often do — or you can work to maximize your personal wealth, because wealth is a sign that you have delivered value to the market. (See Principles by Ray Dalio for further explanation). If you also subscribe to cultural evolution, then you should believe that the cultures that currently exist are in fact natural and therefore good.

Yet another strategy is to maximize people’s ability to uniquely appreciate the universe. According to this perspective, we have a moral imperative to try things that other people haven’t tried before. This framework looks like a close relative of the knowledge-maximizing strategy, one that gives more credence to art, music, and other experiences that we don’t typically consider “knowledge.”

One final strategy is dataism, which seeks to maximize our collective capacity to process information. This approach differs slightly from the knowledge-seeking mindset because information processing does not imply an increase in knowledge. An information-processing agent could just as well use information purely for action, rather than accumulating knowledge. (See Homo Deus by Yuval Noah Harari for further explanation).

It is easy to see how particular professions adopt particular frameworks, allowing their members to see day-to-day work as serving a broader moral purpose. For example, scientific researchers are more likely than the average person to adopt a knowledge-maximizing perspective. Each of these moral frameworks has its benefits and drawbacks, so they are best used in moderation.

Footnote: Definitions of Rationality

We often draw a distinction between epistemic and instrumental rationality. Epistemic rationality is defined as updating beliefs in accordance with reality, whereas instrumental rationality is defined as acting in accordance with our existing goals.

My proposal here is to split instrumental rationality into two types, depending on the type of goal you are pursuing. I argue that it is rational to act in accordance with measurable goals (e.g., complete a computer program by the end of the day, or start a billion-dollar company), whereas it is not necessarily rational or irrational to act in accordance with moral goals (e.g., maximize human welfare). This conclusion seems counterintuitive, and that is precisely why pseudo-rational justifications for moral beliefs sound so appealing (often of the form “it is obvious that…” or “any rational person would agree that…”).
