There are facts that, through no fault of our own, we cannot help but be ignorant of. For example, one cannot be expected to know where the next plane crash will take place, yet it remains more rational to board an airline with a good safety record. Let us call this lack of knowledge justified ignorance. Actions affected by such ignorance are risks performed under justified uncertainty. Philosophers will quibble over whether we can ever know anything for certain, but we can all agree that many actions performed by mere mortals involve such risks.
The precise degree of risk and justification depends upon the availability of relevant information, and so will vary widely from one case to another. This need not concern us much here, for the rule we shall propose is intended to hold across all cases. Indeed, we maintain that the true measure of risk should not be calculated in terms of the pure probability of outcomes but in terms of that probability weighted by the significance of the outcomes in question. Risking the loss of one dollar against the ridiculously low chance of winning the lottery is far more prudent than taking a nuclear gamble that has a 99% likelihood of not ending the world. This is particularly true of sequences of risky decisions, where a small probability of extinction, taken repeatedly, raises the odds of ruin to close to certainty.
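Both points can be made precise with a little arithmetic. The sketch below is a minimal illustration in Python, with invented numbers throughout: it weighs probability by the magnitude of the outcome, and shows how a small per-decision chance of ruin compounds towards certainty under repetition.

```python
# Invented numbers throughout: expected harm weighs probability by the
# magnitude of the outcome, and a small per-decision chance of ruin
# compounds towards certainty when the decision is repeated.

def expected_harm(p_bad: float, magnitude: float) -> float:
    """Probability of the bad outcome times how bad that outcome is."""
    return p_bad * magnitude

def prob_ruin_repeated(p_ruin: float, n_rounds: int) -> float:
    """Chance of at least one ruinous outcome in n independent rounds."""
    return 1 - (1 - p_ruin) ** n_rounds

# The lottery ticket: near-certain loss, but the magnitude is one dollar.
print(expected_harm(p_bad=0.9999999, magnitude=1.0))   # ~1 dollar

# A 1% chance of catastrophe looks tolerable taken once ...
print(prob_ruin_repeated(p_ruin=0.01, n_rounds=1))     # 0.01
# ... but taken 300 times it approaches certainty.
print(prob_ruin_repeated(p_ruin=0.01, n_rounds=300))   # ~0.95
```

The second function makes the compounding explicit: the probability of surviving n rounds is (1 − p)^n, which decays exponentially no matter how small p is.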
In moral philosophy there is a famous debate about the relation of duty to ignorance. Some argue that our obligations are tied to how things actually are (or will be), others to how we happen to think they are, and others still to how we can rationally expect them to be, given the information at hand. These views are all united by the thought that there is one right answer to this question of duty. An alternative school of thought maintains that there are several different obligations: the objective “ought”, the subjective “ought”, the “ought” of rational expectation, and so on. We shall not concern ourselves with these questions in this essay, important though they may be. Instead, we shall introduce a normative constraint which cuts across them in the sense that it holds true no matter which of the above views is the correct one to take.
We propose that actions performed under justified uncertainty should be subject to the Silver Rule (SR):
Do not expose others to a harm the near equivalent of which you are not exposing yourself to.
In earlier work we characterised this norm in terms of having skin in the game. The metaphor was intended to capture the thought that people should by and large not be allowed to make decisions that do not affect them. A captain should not be permitted to put his crew at risk of a harm the near equivalent of which he himself is not exposed to. The same holds true for the head of state, banker, doctor, military strategist, and middle-manager. Though apt, the skin in the game metaphor is too crude and vague to serve as a norm that can be implemented. SR is intended to clarify certain misconceptions and ambiguities.
The rule is too weak for cases of either unjustified ignorance or discarded knowledge. In such situations the risk-taker remains liable for harms to others even when these are minimal compared to those he undertakes himself. SR, by contrast, requires only a symmetrical relation between the two harms in question.
SR implies that – all else being equal – the potential harm should be the very same harm for both the risk-taker and anybody else who is affected. The clearest case of this is that of equal financial harm. In practice, though, the harm in question can sometimes only be a near equivalent rather than an identical harm. There is a Talmudic discussion showing that “an eye for an eye” cannot possibly be literal, otherwise a blind person could blind others with impunity. Consider the case of medical surgery: the surgeon cannot be expected to put his own organs at risk when deciding upon an operating procedure for another. But it is important that he risk his licence and reputation, otherwise there would be nothing preventing him from using patients as guinea pigs to further his research career. Of course some goods are incommensurable, and it is impossible to straightforwardly equate the health of one person with the career of another. This is why we prefer to talk in terms of near equivalents. The important point is that the potential harm to the risk-taker rises in proportion to the possible harm the risk at hand may inflict on others. It is in this sense that SR is a symmetrical rule, akin to “an eye for an eye” or “do unto others…”.
What sort of rule is the silver rule and why should we abide by it? The first thing to say is that it is a heuristic maxim or rule of thumb. By this we mean that we cannot exclude special circumstances (be they end-of-the-world scenarios or everyday actions based on attitudes that lie beyond our control, such as matters of the heart) in which one is permitted – perhaps even obligated – to flout the rule. But the burden of proof is always on the rule-breaker to provide convincing reasons for why his action(s) should be exempt from it.
The next thing to note is that the rule places a normative constraint on acting under justified uncertainty. A person who fails to abide by it without good reason is not as he ought to be. What kind of an “ought” is this? We believe it is both rational and moral. It is rational insofar as it is prudent for society at large to abide by it: societies which do not implement it are headed for disaster. But the rule is also a moral one, because those who ignore it for individual gain are egoistic parasites, akin to non-compliers in prisoner’s dilemma-type situations. Not only does such a person benefit by harming other individuals unfairly, he also puts society at large at risk. One need only contemplate environmental risks to see that this may include his own future self, or at least that of his children. In this context at least, the moral and the rational are two sides of the same coin.
The above also serves as a preliminary answer to the question of why we should follow the silver rule, viz. for reasons that are both moral and prudential. We shall argue for this stance by appeal to concrete examples, from middle-management to the pharmaceutical industry.
In trying to illustrate the silver rule, we have already given the example of the surgeon. The simple thought there was that it would be a bad idea to provide surgeons with incentives to take risks with their patients’ lives for the purpose of furthering their own careers. This would be the case if surgeons were always rewarded for their successes but never penalised for any failure. But given the long history of the profession, and the repeat or serial nature of the actions involved (as opposed to one-off actions whose riskiness a small sample does not reveal), some kind of equilibrium prevails between harm inflicted and harm averted.
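The incentive asymmetry can be put in rough numbers. The following sketch is a stylised illustration with made-up figures, not a model of any actual medical payoff: it shows why a practitioner who is rewarded for success but never penalised for failure privately profits from even the most reckless procedure, and how a near-equivalent stake reverses this.

```python
# A stylised illustration (all figures invented): the risk-taker's
# private expected payoff when failure carries no personal penalty,
# versus when his licence and reputation are on the line.

def private_payoff(p_success: float, reward: float, penalty: float) -> float:
    """Expected payoff to the risk-taker, ignoring harm to the patient."""
    return p_success * reward - (1 - p_success) * penalty

# Rewarded for success, never penalised for failure: even a procedure
# that fails 90% of the time is privately worth attempting.
print(private_payoff(p_success=0.1, reward=10, penalty=0))   # +1.0

# With a near-equivalent stake (licence, reputation), the same
# reckless procedure becomes privately ruinous too.
print(private_payoff(p_success=0.1, reward=10, penalty=10))  # -8.0
```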
Something like the above is true of companies deemed “too big to fail”. When their policies are shrewd or lucky, they get richer. But when they prove less fortunate, the taxpayer is called in to bail the company out, in the name of saving the jobs of its employees. The same holds true of banks and other financial organisations. This would be acceptable were any profits the company subsequently made returned to the taxpayer, ideally with interest. But when has this ever been known to happen? This is the problem of asymmetry, and its socio-economic consequences are disastrous.
The asymmetry problem may be found in every walk of life. Consider the line-manager in charge of workload plans that do not affect him, the legislator who creates laws affecting ethnic or gender groups that he does not belong to, the cosmetics company that tests on animals, the drug pusher who stays clean, the war commander who secures a cushy office position for his own son, or the oil tycoon who risks harming the environment. What these all have in common is a strategy which guarantees profits in the case of success, while in the case of failure deflecting the greater portion of the induced harm onto others. The silver rule is designed to keep such cases to a minimum, for the greater benefit of all.
How should SR be implemented? We believe that this can and ought to be done at both informal and formal levels of contract and regulation. On the informal side, it is important that flouting the rule be taboo. Those who break the silver rule ought to be social outcasts, not celebrated on the cover of Fortune magazine. People should be educated from the most basic level to realise that exposing others to harms that one is not exposing oneself to is neither brave nor clever. More formally, institutions and nation-states should have laws and regulations designed to penalise those who break the silver rule. Not only is this not currently the case but, as we have already seen, the status quo frequently offers incentives for those who keep their own necks safely off the line to carry on risking other people’s lives and assets.
Let us now consider a concrete example of a proposal for economic reform. In his recent book Capital in the Twenty-First Century, Thomas Piketty proposes higher taxes on the rich as a solution to inequality. But what matters most is equality in the regulation of risk-taking itself, not merely in its results. It is of paramount importance that we control risk itself and not just tax its outcomes. This is not an argument against high taxation per se. But the benefits or perils of taxation cannot be separated from the use to which the taxpayers’ money is systematically put. What good is increasing taxes if the tax money is allocated to bailing out those who took bad risks for private gain, thereby giving them an incentive to keep on doing so? The combination of high taxes and “too big to fail” bailouts will only serve to widen the gap between the 1% and the other 99%.
There is an additional problem with Piketty and the Pikettistas: their solution can increase the role of bureaucrats – who patently are not harmed by their mistakes – at the expense of those who are. The idea of the silver rule leads to a better definition of equality: one in which no person has a permanent spot at the top, and all share an equal probability of losing top-dog status. A bureaucrat would be as likely as a baker to join the ranks of the unemployed.
Note that the economic literature focuses on incentives as encouragement or deterrent, helping with situations of informational asymmetry, but it does not look at disincentives as potent filters that remove incompetent and nefarious risk-takers from the system. Consider that the symmetry of risks incurred on the road causes the bad driver eventually to exit the system and stop killing others. An unskilled forecaster with skin in the game would likewise eventually go bankrupt or out of business. Shielded from potentially (financially) harmful exposure, by contrast, he will continue contributing to the build-up of risks in the system.
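The filtering point lends itself to a toy simulation. The sketch below (all parameters invented) compares a population of forecasters who absorb their own losses with one shielded from them: in the first, the unskilled go bust and exit the system; in the second, everyone persists and the risk-taking never stops.

```python
# A toy simulation (all parameters invented) of disincentives as filters:
# forecasters bet each round; with skin in the game, losses come out of
# their own capital and the unskilled eventually go bust and exit.
import random

def survivors(skin_in_the_game: bool, n_agents: int = 1000,
              n_rounds: int = 200, seed: int = 0) -> int:
    rng = random.Random(seed)
    # Each agent has a fixed skill: the probability of a correct forecast.
    agents = [{"skill": rng.uniform(0.3, 0.7), "capital": 10.0}
              for _ in range(n_agents)]
    for _ in range(n_rounds):
        for a in agents:
            if a["capital"] <= 0:
                continue  # bust agents have exited the system
            win = rng.random() < a["skill"]
            if skin_in_the_game:
                a["capital"] += 1.0 if win else -1.0
            else:
                # Shielded: gains are kept, losses fall on others.
                a["capital"] += 1.0 if win else 0.0
    return sum(a["capital"] > 0 for a in agents)

print(survivors(skin_in_the_game=True))   # unskilled agents filtered out
print(survivors(skin_in_the_game=False))  # all survive; risk keeps building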
We began this essay with some paradigmatic philosophical concerns about uncertainty in relation to ethics. Our discussion swiftly moved away from these specific worries to a more general one concerning what we owe to each other. This is not to question the importance of the initial problem but only to demonstrate that whatever the correct solution to it may be, it will not yet address a crucial ethical problem concerning acts performed in justified ignorance. Conversely, this problem can be dealt with independently of the traditional one. Needless to say, no moral epistemology will be complete until we have a unified account of both issues. There is work to be done yet.
Constantine Sandis is Professor of Philosophy at Oxford Brookes University and author of The Things We Do and Why We Do Them.
Nassim N Taleb is Professor of Risk at New York University School of Engineering. He is the author of Incerto (Antifragile, The Black Swan, Fooled by Randomness, and The Bed of Procrustes).