There’s an arc of development which characterizes the life of many a scientifically-minded person, myself included. As a child, such a one tends toward reductionism: to understand the simple, unified laws which “evaluate forward” to give rise to the higher-order dynamics of the world feels like understanding the world itself. But as the child’s world gives way to the more complex world of the adult–which swirls with the complexity of these higher-order dynamics–her reductionism begins to feel insufficient. To have knowledge of the world, the adult feels she must understand not only the generators of its dynamics but also the dynamics themselves and the patterns they contain. Moreover, she must be able to infer, optimize, and plan relative to these dynamics (“evaluate the world in reverse”, so to speak).
So the physicist becomes a computer scientist.
This post is about tracing out this developmental arc within the ethical sphere. Where the “physicist” (or the child with no real ethical problems) might look at various frameworks such as utilitarianism, and conclude that ethics has been solved therein, the “computer scientist” (or the adult who must make real, moral decisions) needs to actually have a practical algorithm for ethical decision making.
Despite some of the abstract or abstruse language, this post is meant to be practical, at least for me. It is practical because perhaps the worst thing that a person can do in the course of becoming an effective moral agent is to follow in the steps of a physicist for the first half of the arc in order to arrive at an abstract theory of ethics, without then continuing on (in the steps of the computer scientist) to find a practical way of living according to the physicist’s abstract principles. Yet this feels like exactly where I’ve been for some time, and so this post is meant to serve as a few conscious steps along the second half of that arc.
The computational lens on ethics
If pressed far enough from an epistemic, reductionist standpoint, I am a consequentialist when it comes to ethics. As a foundation for ethics, this seems almost tautological to me. If there were an ethical system which I knew at the end of the day would make me less happy or satisfied than some alternative, I would choose to abide by the alternative. This is my “physicist’s” view of ethics, full stop.
I tend to believe that other ethical systems such as utilitarianism can be derived as approximations to consequentialism when enough human psychology and second-order thinking is sprinkled in; for instance, if I were the sort of person who did not love others or care about them at all, I would end up as a very miserable person. More generally, if I make a habit of favoring myself over others, that will do something to my being that will make it less beautiful or joyful to experience.
I think this view accommodates many edge cases. Even forfeiting one’s life might be more tolerable than living with oneself knowing one could have saved a child or a loved one.
I’ve found my own personal brand of utilitarianism: If I knew that I would have to live the lives of those who are affected by my decisions, which decision would I make right now? I think this takes the core elements of the abstract formulation and makes them relatable and graspable.
But utilitarianism, even in my personalized framing, is still like consequentialism in that it presents ethics as an optimization problem coupled with the mandate “go and do the optimal thing.”
The difference is that utilitarianism takes the “my own personal happiness” objective of consequentialism and replaces it with “the good of all mankind” or “the good of all sentient beings”—perhaps, as I’ve argued, under the justification that actually these optimization problems are equivalent and yield the same solution, but that the latter is easier to solve because it discards large swaths of the solution space which can be shown to be suboptimal due to fairly well-characterized higher order effects.
Thus, starting from consequentialism, utilitarianism might represent one step in the direction of a computationally tractable “computer scientist’s” ethics. Notwithstanding this simplification, the optimization problem implicit in utilitarianism is still hard, in ways that I have come to personally appreciate more over time (in ways that only a physicist could ignore or discount!).
In essence, utilitarianism commits to something like “optimal future control.” There are two highly coupled things that make this project difficult:
- “Optimally” - Plans extend into the future as branching trees of decisions whose count grows exponentially fast (see the sketch after this list). I can only give my earnest attention to a select few of these plans; I’ll necessarily rely on fallible, unconscious heuristics and patterns to help guide my attention.
- “Controlling” - For any given plan, how well can we predict its impact on the course of the future? In some cases, the impact of an action on the thing that we care about controlling is actually knowable with reasonable confidence. In other cases, chaos dominates.
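To make the first difficulty concrete, here’s a minimal sketch (my own illustration; the branching factor of 3 is an arbitrary assumption) of how quickly the space of plans outruns anyone’s attention:

```python
# Toy illustration of the "optimally" problem: with b alternatives per
# decision and a planning horizon d decisions deep, the number of
# distinct complete plans is b**d.

def num_plans(branching: int, depth: int) -> int:
    """Count the complete plans in a decision tree."""
    return branching ** depth

for depth in (5, 10, 20, 30):
    print(f"depth {depth:>2}: {num_plans(3, depth):,} plans")

# depth  5: 243 plans
# depth 10: 59,049 plans
# depth 20: 3,486,784,401 plans
# depth 30: 205,891,132,094,649 plans
```

Even three options per step puts billions of plans in play at a horizon of twenty decisions; earnest attention cannot scale to that, so heuristics are not optional.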
Because of the difficulty of optimal future control, being a disciple of utilitarianism can lead to a few common failure modes:
- Stuckness - Given a set of alternative choices, I can’t figure out which one leads to the best future world. I have too much uncertainty, and I’m very aware of it. I waffle endlessly. I’m stuck.
- Bad gambles - There’s a choice which looks like it will lead to an amazing outcome for everyone, with some apparently small probability of failure. I take it, but the bad world is what happens. I didn’t take the uncertainties seriously enough. SBF, ’nuff said.
Being stuck feels familiar to me. In conversations with my therapist, a common observation has been my apparent difficulty in dealing with uncertainty. For a long time, I’ve been unsure what to do with this observation; naturally I struggle in dealing with uncertainty–uncertainty makes decisions hard! But over time, I’ve started to wonder if this might be a signal that I’m operating in a cognitive frame in which uncertainty is especially problematic.
Let’s take a moment to recap: Many of us might identify with consequentialist ethics at an epistemic level, but manage some kind of transformation to a version of utilitarianism as a fairly lossless simplification of the problem. But then we find that utilitarianism is still too complex a framework to bring to many practical life situations. We might wonder–can we push this process of simplification further? What other forms of ethics are consequentialist / utilitarian at their core, but more amenable to real-life situations?
In my own thinking, I have often contrasted the theories of consequentialism and utilitarianism with the “virtue ethics” of the Greeks. My simplistic take has been that virtue ethics constitutes a cataloguing of the kinds of habits which lead to a happy life. That is, virtue ethics is a sort of empirical, learning-based approach to solving the consequentialist’s optimization problem.
There are at least two reasons why one might take this sort of empirical approach to ethics:
- The deeper optimization-based framing is inaccessible. The learning has been too shallow, and has failed to divine the unifying principle at the core. One has learned a few force laws, but has failed to find the “unified string theory”, so to speak (Physicist’s view).
- The optimization problem is too hard. Learning is actually being used as a strategy to handle a problem that would be essentially intractable but for some statistical regularities, which allow for a reasonably good solution (Computer scientist’s view).
I think that I’ve generally assumed the first reason to be the actual one (physicist, duh). Virtue ethics seemed like something overly coarse that should be replaced with the finer lens of one of the more universal ethical frameworks when working through the complex, gritty ethical situations one might encounter outside of Plato’s virtue handbook. Now I’m less certain: virtue ethics may well contain valuable principles which are key to making the intractable problem of ethical decision-making more manageable.
The solution heuristic of integrity
An idea which has become very relevant to me recently in this regard is that of integrity. I don’t remember what Plato has to say about integrity, but I’m thankful to Malcolm Ocean for posing a very evocative conception (paraphrased): “Integrity is about doing what feels right regardless of the outcomes/consequences.” Upon hearing this phrasing, I immediately realized that I had beheld the seed of an ethical framing that could help with the problems of paralysis-by-uncertainty that had been occurring in my life.
So far, we have a few solution modes for the optimization-framing of ethics:
- Solve with the personal utility objective
- Solve with the proxy “universal utility” objective
- Rely on learning-based heuristics (e.g. virtue ethics)
How does the concept of integrity play into this picture?
One possibility: Integrity is the solution strategy that arises from upgrading our optimization problem to one of “robust optimization.” That is, we add some kind of term like “variance” to our objective in order to capture our distaste for uncertainty.
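As a sketch (this is the standard mean-variance form borrowed from portfolio theory, not anything specific to ethics), if $U(a)$ denotes the utility that follows from action $a$, the plain objective $\max_a \mathbb{E}[U(a)]$ becomes

$$\max_a \;\; \mathbb{E}[U(a)] - \lambda \, \mathrm{Var}[U(a)], \qquad \lambda \ge 0,$$

where $\lambda$ encodes our distaste for uncertainty; setting $\lambda = 0$ recovers the original objective.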
One of the most natural justifications for doing something like this is that it takes into account the ongoing (repeated) nature of the games in which we must make decisions. A striking observation: if we just look at expected payout (ignoring variance), the optimal solution to a single gambling-type game is not the same as the optimal solution to the repeated game. In a single iteration of the game, you can just maximize your expected payout, but in the repeated game you have to consider the fact that going all-in may eliminate you from further rounds, thus limiting your total expected payout over time. Thus, optimizing for repeated games in expectation starts to look like robust optimization (penalizing variance) in a single iteration, especially when there’s a dynamic in which the negative side of some risk can take us out of the game.
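A toy simulation (my own; the 60/40 coin and the 20% stake are illustrative assumptions) makes the repeated-game effect vivid:

```python
import random

def simulate(fraction, rounds=100, trials=10_000, p_win=0.6):
    """Repeatedly bet `fraction` of the bankroll on a favorable coin:
    a win doubles the stake, a loss forfeits it. Returns the median
    final wealth and the share of trials that effectively went bust."""
    finals = []
    for _ in range(trials):
        wealth = 1.0
        for _ in range(rounds):
            stake = wealth * fraction
            wealth += stake if random.random() < p_win else -stake
            if wealth <= 1e-12:  # an all-in loss takes us out of the game
                break
        finals.append(wealth)
    finals.sort()
    bust_rate = sum(w <= 1e-12 for w in finals) / trials
    return finals[trials // 2], bust_rate

for f in (1.0, 0.2):  # all-in vs. a cautious 20% stake
    median, busts = simulate(f)
    print(f"fraction={f:.1f}  median wealth={median:.3f}  bust rate={busts:.0%}")
```

The all-in strategy maximizes expected payout in every single round, yet over a hundred rounds it almost surely ends at zero; the cautious stake, suboptimal in any one round, compounds steadily.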
This is all a bit of a hand-wavy line of reasoning, but it strikes me that many situations in life do indeed have this dynamic. Wins and losses can affect reputation, networks of relationships, health, mental habits—all things that can make it easier or harder to win or lose in the future.
So let’s suppose we’re convinced that in the episodic decisions of our lives, we should be practicing robust optimization. What does this look like?
Obviously, wild, SBF-style gambits are out, so the second failure mode (bad gambles) is covered. But uncertainty is latent in everything that we do, especially in the big ventures we might undertake. We don’t want to disqualify ourselves from such ventures. So what can we do?
Maybe the answer is that, instead of focusing our efforts on characterizing the uncertainty in any given choice, we focus on building structures in our lives which are robust to the latent uncertainties therein—on building integrity.
Let’s look at some concrete parables.
…
Interpersonal
Lying. It’s easy to construct scenarios where it seems like the concealment or misrepresentation of certain information will be to everyone’s benefit. Even if we allow that this is indeed the case, it is usually only the case so long as the secret is properly kept or the lie protected from exposure.
Thus, whatever “good world” we might create via lying or secret-keeping is a brittle world. As much as it may be high in expectation, it is also high-variance. In many worlds, this good world explodes violently.
Coercion. Coercion might enable me to achieve some kind of desirable dynamic, but this dynamic can unravel when exogenous factors shift the balance of power or other incentives, causing this world, too, to collapse violently.
If we look at the general landscape of our relationships, we might tend to see one of two possible worlds:
- There is a lack of alignment, concordance, peace, etc. Things feel zero-sum. And in order to satisfy our needs or achieve whatever goal we have with respect to the social context, we may feel the draw of resorting to tactics such as lying and coercion, or other modalities which in turn will further contribute to the instability of the social landscape.
- There is alignment, concordance, cooperation, peace. Things feel positive-sum. People can be up-front with their feelings, intentions, wants, desires, needs, etc., and feel empowered to negotiate to have these things met.
When faced with the first of these landscapes, there are different ways we might respond. One option might be to optimize within the landscape: attempt to control outcomes, grapple with the uncertainties. This, of course, is susceptible to our familiar failure modes (exploding or stalling).
Another option is to try to push toward the second landscape. To consciously work toward alignment, concordance, cooperation, and general stability or integrity. (Obviously, this is the solution strategy favored by robust optimization.)
(Practically speaking, what often keeps us from attaining integrity in social settings is that some kind of discomfort–some exercise of courage, vulnerability, or self-development–stands between us and an integral state. If we’re focused on immediate gratification, we will shy away from this kind of “hard work” and look for opportunities to find near-term satisfaction which may jeopardize our long-term prospects.)
Intrapersonal
We can extend this type of thinking to our own psyche.
I’m composed of many parts. I might find that my parts are wildly out of alignment. Different parts have different wants, which feel irreconcilable. I’m in turmoil and it feels zero-sum.
If I view this internal landscape as immutable, I might respond by listening to the most vocal of my parts, or to the parts whose demands feel like something I can satisfy given the broader constraints of my life, and try to placate those parts. As for the other parts, it may be expedient to coerce them into quiescence, or to gaslight or lie to them. While this might lead to some respite or satisfaction over some timeframe, it’s very non-robust. In this state of internal turmoil, I’ll be prone to akrasia, indecision, depression, etc. My motivation might fail, my interest in life diminish, my world collapse. There is an instability intrinsic to the state of being at odds with oneself. To being not at peace.
To robustly optimize here is to seek personal integrity among my parts. To do the inner work of unbinding the relationships of coercion and gaslighting, to find authenticity and inner integration. This can and will percolate up into the interpersonal sphere. It will take courage and vulnerability to be the authentic and integrated self with those who have only experienced the sharded, suppressed one.
…
In all of these contexts, robust optimization tends to look a lot like worrying a bit less about predicting the impact that my actions will have on the world, and a little bit more about building systems with integrity (personally, interpersonally, in institutions, and so on).
At the personal level, this leads to another phrasing of integrity from Malcolm: “Finding the (best) smallest possible move that your being has to do here.” How can I move toward a me that is more whole, integral, equanimous, at peace?
In this sense, integrity seems helpful not only for avoiding the second failure mode I presented (bad gambles), but also for the first: stuckness. Optimal future control is an impossible problem, one that will cause us to stall out. Building integrity is merely hard. It’s a long journey, but one of small steps.