How to End Our Love Affair with Evidence

A central aspect of my philosophical work these days is this: to warn against over-estimating how much one can learn, for example, from past financial crises when thinking about future ones – how much, to put it in more general and philosophical terms, one can learn inductively. There is plenty one can learn; but there is also a severe limit on what one can learn. There is a limit, in other words, on the value of evidence.

The danger of not remaining continually aware of this point is that one may come to think, at least unconsciously, that there are specific lessons to learn and that, once one has learnt them, one's job is done: that one has genuinely ensured, as best one can, that there will be no further such crises in the future.

This would be a hubristic stance. Hubris, in the long run, inevitably leads to nemesis.

For we are always going to be living in a social world that defies full comprehension and control: a world that we do not and never will fully understand, as my colleague Nassim N. Taleb puts it in his 2012 book Antifragile.

The real challenge, the deep thing that one has to learn, is how best to seek safety for the economy and for citizens in such a world: a world that one accepts one cannot predict or control.

That we live in such a world is revealed by financial crises. In fact, that might itself be justly said to be the deepest thing one can learn from them. 

This is the challenge we face: to learn to live more safely in a world that we are never going to be able to understand or control or even ‘manage’. This entails a ‘letting-go’.

But the alternative is worse: that, by seeking to manage, to master, our world, we give ourselves a false assurance that all is going to be well, and make it more likely that we will ‘blow up’.

Now: it is of course an excellent thing to seek to learn from history. From the history, for example, of past financial crises. Hyman Minsky and John Maynard Keynes are among the maestros of having done so.

But there is danger in such learning, too. One such danger is what Taleb calls 'the narrative fallacy': falling into the trap of seeing in history an anticipation of all future possibilities, rather than seeing in it (as one ought to) only a tiny sub-section of what could have happened, let alone of what might happen in the future.

When seeking to minimise the chances of financial crises in the future, one ought to focus most of one's attention on what can be done to build down the risk of 'black swans' (Taleb, 2007): rare, devastating, inherently unpredictable events. By definition, such events are vanishingly rare in the historical record, and those that do appear there are only a very poor sample of the possible such events that there could be.
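To make this sampling point vivid, here is a minimal simulation sketch. It is my own illustration, not Taleb's: the Pareto distribution, tail index and time horizons below are purely illustrative assumptions, not estimates of any real market. What it dramatises is that a typical fifty-year 'historical record' drawn from a heavy-tailed process usually contains no event as extreme as the true one-in-five-hundred-year loss, so a model fitted only to the record will tend to miss the tail entirely.

```python
# A toy illustration (assumed parameters throughout): how often does a
# 50-year 'historical record' of a heavy-tailed process contain any event
# as extreme as the true 1-in-500-year loss?
import random

random.seed(42)

TAIL_ALPHA = 1.5      # assumed Pareto tail index (heavier tail = wilder extremes)
YEARS_OBSERVED = 50   # length of the observed 'historical record'
TRIALS = 10_000       # number of simulated alternative histories

# For Pareto(alpha), P(X > x) = x**(-alpha); so the loss level exceeded
# with probability 1/500 per year is x = (1/500)**(-1/alpha).
true_extreme = (1 / 500) ** (-1 / TAIL_ALPHA)

histories_with_extreme = 0
for _ in range(TRIALS):
    # Worst annual loss seen in one simulated 50-year record.
    worst = max(random.paretovariate(TAIL_ALPHA) for _ in range(YEARS_OBSERVED))
    if worst >= true_extreme:
        histories_with_extreme += 1

share = histories_with_extreme / TRIALS
print(f"True 1-in-500-year loss threshold: {true_extreme:.1f}")
print(f"Share of 50-year records containing any such loss: {share:.1%}")
# Roughly 90% of the simulated records contain no such event at all:
# the historical sample systematically under-represents the tail.
```

The numbers matter less than the shape of the result: on these assumptions, about nine in ten 'histories' give no warning whatsoever of the very events that most need guarding against.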

The philosophical work that I am undertaking at present jointly with Taleb is devoted to exploring these and related thoughts in relation to financial crises, and to other such black swans (e.g. in the environmental sphere: compare Mark Carney's recent remarks about the need to leave most fossil fuels in the ground in order to build down the risk of runaway climate change). In particular, Taleb and I are formulating a version of the Precautionary Principle that is not vulnerable to the kinds of objections standardly made against it (by Cass Sunstein, co-author of Nudge, among others).

The Precautionary Principle (henceforth ‘PP’) states, basically, that, where the stakes are high, a lack of full knowledge or of reliable models – a lack of certainty – should not be a barrier to legitimate precautionary action. We shouldn’t, in other words, need certainty, in order to justify protective action.

Invoking precaution is thus an alternative, or a complement, to invoking evidence. Our contemporary politics, economics, risk-management, medicine and science are fixated on evidence and on being 'evidence-based'. My argument is that this is dangerous. One can't have 'evidence' of things that haven't happened yet; nor, to any meaningful degree, of things that are very rare or of things dependent upon human decision.

Why can’t you know the social world and manage it?

There are two main reasons:

(1) The social world is made up of – constituted by – understandings (Winch, 1958/1990; Read, 2008); of interpretations. It defies scientisation. It is an illusion to think that it can be known as the physical world can be known, 'from the outside'. It can only be truly known 'participatorily'.

(2) Even if, per impossibile, the social/economic world could be known scientifically, it still could not be controlled or managed, because it is a moving target (Read, 2012): human beings respond to attempts to know them, by seeking to make the resulting predictions come true, or by seeking to make them false, or in other ways. There are many examples of this. A famous and salient one is 'Goodhart's Law' (https://en.wikipedia.org/wiki/Goodhart%27s_law): roughly, once a measure becomes a target, it ceases to be a good measure. A simpler version of the point is encapsulated in the marvellous remark on the future of jazz attributed to Louis Armstrong (or sometimes to Humphrey Lyttelton): "If I knew where jazz was going, I'd be there already…"
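Goodhart's Law lends itself to a toy illustration. The little model below is my own hypothetical construction, not Goodhart's or anyone else's: agents put effort either into genuine 'safety' (the thing we actually care about) or into gaming an observable 'score' (the proxy a regulator watches). Once the score becomes the target, proxy and reality come apart.

```python
# A hypothetical toy model of Goodhart's Law (purely illustrative).
# 'safety' is the real quantity of interest; 'score' is an observable proxy.
import random

random.seed(7)

def correlation(xs, ys) -> float:
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def simulate(agents: int, score_is_target: bool) -> float:
    """Correlation between real safety and the observed score."""
    safety, scores = [], []
    for _ in range(agents):
        effort = random.uniform(0.0, 1.0)
        if score_is_target:
            # Effort is diverted into gaming the metric: the score inflates
            # while real safety floats free of effort altogether.
            s = random.uniform(0.0, 0.2)
            score = s + 2.0 * effort
        else:
            # Effort goes into genuine safety; the score passively,
            # noisily, tracks it.
            s = effort
            score = s + random.gauss(0.0, 0.1)
        safety.append(s)
        scores.append(score)
    return correlation(safety, scores)

print(f"Score merely observed: r(safety, score) = {simulate(5000, False):.2f}")
print(f"Score made the target: r(safety, score) = {simulate(5000, True):.2f}")
# The first correlation is near 1; the second collapses towards 0.
# The measure, once targeted, no longer measures what it was meant to.
```

This is, of course, a caricature; but it captures formally what regulators keep painfully rediscovering: the social world answers back.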

There are limits, unsurpassable limits, on the knowability of our future.

We might think that, as we come to know more, these limits will recede. But this is not true. The PP is increasingly relevant, owing to man-made dependencies that propagate the impacts of policies across the globe: this applies strongly to 'globalised' economic and financial systems and to 'globalised' ecosystems (e.g. the climate system). By contrast, absent humanity, the biosphere engages in natural experiments: random variations whose impacts are only local.

Now, the PP is essential for a limited set of contexts, and can be used to justify only a limited set of actions. GMOs are one good example (see my recent 'evidence' to Parliament on this): they represent a public risk of global harm, and the PP should be used to prescribe severe limits on them. Likewise, the PP should be used to proscribe various forms of financial behaviour that have the potential to unleash black swans.

In conclusion:

i) The social world is necessarily partly opaque to social/‘scientific’ knowledge, precisely because it is constituted by human beings, who are intrinsically understanders, intrinsically responsive to efforts to know them, etc.

ii) We need to be less fixated on evidence, where the human world is concerned, and more determined to take up a precautionary stance. The stakes are high; it would be wrong to gamble in such a situation. And being 'evidence-based', I have shown, is, ironically, just such a foolish and unethical gamble…

Bibliography

Read, Rupert, et al., 2008: There Is No Such Thing as a Social Science
Read, Rupert, 2012: Wittgenstein Among the Sciences
Read, Rupert; Taleb, Nassim; Douady, Raphael; Norman, Joseph; Bar-Yam, Yaneer, 2014: 'The Precautionary Principle', http://www.fooledbyrandomness.com/pp2.pdf
Taleb, Nassim, 2007: The Black Swan
Taleb, Nassim, 2012: Antifragile: How to Live in a World We Don't Understand
Winch, Peter, 1990 (1958): The Idea of a Social Science

Rupert Read is chair of the Green House think tank, and a reader in philosophy at the University of East Anglia.
