Science and society deal with complexity – but, judging from the current state of our social and natural world, they don’t seem to be very good at it. Why? And how can we change their behaviour in ways that keep the worst – collapse, catastrophe, and extinction – from happening?
The Double Challenge of Complexity
The social and ecological systems that surround us are complex. Beyond the common-sense understanding of the term, this means they have the following properties:
- They consist of large networks of individual components.
- These components interact without central control, but following comparatively simple rules.
- From these interactions, complex collective behaviour emerges that can change non-linearly through reinforcing feedback loops.
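These three properties can be made concrete with a toy simulation – a hypothetical sketch in Python, not drawn from the essay or any of the works it cites: a ring of agents, each following one simple local rule with no central control, from which a stable collective pattern emerges.

```python
# A toy complex system: 16 agents on a ring, no central control.
# Rule: each agent adopts the majority state of itself and its two neighbours.
# All names and numbers here are illustrative choices.

def step(states):
    """One synchronous update of all agents under the local majority rule."""
    n = len(states)
    return [
        1 if states[(i - 1) % n] + states[i] + states[(i + 1) % n] >= 2 else 0
        for i in range(n)
    ]

def run(states, max_steps=100):
    """Iterate until the collective pattern stops changing (a fixed point)."""
    for _ in range(max_steps):
        new = step(states)
        if new == states:
            break
        states = new
    return states

noisy = [1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0]
print(run(noisy))  # isolated states get absorbed into contiguous blocks
```

No agent “knows” anything about the global pattern, yet contiguous blocks of agreement emerge and stabilise – a very small instance of collective behaviour arising from local interactions.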
Such behaviour is hard to explain and often impossible to predict. Nonetheless, dealing with complex systems has always been a vital task for humans and other animals trying to survive in their social and ecological environment. Before the arrival of language, natural selection and the biological adaptation it enabled took care of this challenge: It created what might be called implicit, evolutionary knowledge1 about coping with complexity, embodied at species level – adapted automatic behaviour, inherited instincts, and innate mechanisms for individual learning.
Language2 changed all of that. It allowed intersubjectivity and the cultural – as opposed to biological – transmission of information, enabling collective learning and the accumulation of explicit knowledge. This led to an explosion of social complexity and to a constant acceleration of cultural, technological, and economic development. As an effect, biological, social, and technological adaptation have become decoupled, because they operate on different timescales – technology changes faster than social norms, and both change faster than biological design.3 Thus humans created an environment to which they are in many ways biologically and socially maladaptive.
This situation poses a double challenge: On the one hand, our capacity to understand the complexity we created is severely limited – we cannot fully and explicitly describe, explain and predict the behaviour of complex systems, and our instincts and intuitions as well as our norms and values produce inappropriate and counterproductive reactions to the systems we have created or changed. On the other hand, the rapidly accumulating negative effects of these changes and our reactions, from social fragmentation and political polarisation to runaway climate change and ecological breakdown, require a more radical change in behaviour to avoid societal collapse and civilisational catastrophe than the slow and evolutionary change of social norms and biological design is able to produce.
Paradoxically, our best shot at dealing with this challenge lies in recognising exactly how our capacity to understand complex systems is limited – and in leveraging that fact to facilitate radical change.
Models and Metaphors
We navigate our environment with the help of models – interpretive system descriptions we use for exploration, explanation, and prediction. If successful, a model represents4 parts of its target system, often in an idealised or simplified way, but not all of its features – that would make it an identical (and unwieldy) copy.
Maps are models in this sense, as are physical models, e.g. architectural and engineering ones. But there are others: Internal models in the minds of organisms are called mental models;5 models guiding perception and action, predictive models; models representing the causal structure of a target system, causal models; models that are part and product of scientific research, scientific models.
We understand complex systems only through models. But our personal mental (predictive, causal) models are far from sufficient to deal with their level of complexity – we need to share our models and collaborate on new ones. This is where language comes in, and it is how we accumulate explicit knowledge.
When developing models, we routinely use analogies: We describe and interpret a system using knowledge about another system, comparing the two, noting similarities and dissimilarities, deriving explanations and predictions, and opening up a space for discovery. We use, e.g., the billiard ball analogy to describe molecules in a gas and explain its temperature, knowing that certain attributes of the ball (mass, momentum) can be projected onto molecules, others (colour, number) cannot, and still others might be helpful guides for further research.
Mathematical models and formal approaches like causal and systems modelling transcend analogy and strive for higher levels of abstraction. But even there, metaphors shape our perception and understanding: When analysed as cognitive tools, mathematical representations can be shown to be based on metaphor.6 By describing light as a particle or a wave, e.g., we structure the conceptual space in which we model, explain, and predict its behaviour in ways that mirror our everyday experience with particles and waves. The same goes for describing organisations as machines, organisms, or cultures; describing the mind as a computer, brain, or regulating device; describing money as commodity or claim; or describing morality as strength or empathy.
Such conceptual metaphors – “unconscious, automatic mechanism[s] for using inference patterns and language from a source domain … to think and talk about another domain”7 – are “constitutive of the theories they express, rather than merely exegetical”8. Beyond explicit theory, metaphors shape every area of discourse by structuring our conceptual spaces: From thinking about emotions in terms of temperature to talking about arguments in terms of war, “the way we think, what we experience, and what we do every day is very much a matter of metaphor.”9 Metaphors make new domains of perception and understanding accessible by leveraging existing concepts, interweaving experience and abstraction in a way that makes them almost inseparable.
This role of metaphors creates a fundamental tension:
On the one hand, metaphor can be seen as “a form of creative expression” and as “a means of liberating the imagination”10. On this view, just as no model of a complex system is complete, “no one metaphor can capture the total nature”11 of such a system – every metaphor highlights certain aspects of it and hides others, from specific features to overall structure and boundaries. This calls for a “conscious and wide-ranging theoretical pluralism rather than an attempt to forge a synthesis upon narrow grounds”12.
On the other hand, metaphors potentially import and impose restricting or misleading conceptual structures. If they are part of a dominant paradigm, metaphors can colonise formerly independent subjects of research, promoting absolutist interpretations that preclude creativity and a humble approach to complexity. The result is epistemic imperialism, made invisible by its use of familiar and thus seemingly innocuous concepts.
Paradigms and Power
We adopt or keep models of our environment if they are useful. A model can be useful in widely varying respects, e.g. for biological survival, scientific explanation, social standing, economic success, or psychological well-being. This multitude of criteria creates an intricate incentive system in which the criteria determining the selection of a model can become decoupled from its domain.
That happens when scientific paradigms form and stabilise.13 Members of a scientific community initially adopt a new scientific model if it is more useful for explanation and prediction than its competitors. Once the community has formed a consensus around the new model, social and economic incentives for further adoption emerge: Research using the model is published and funded more often than research not using it, researchers adopting it are employed and promoted more easily than researchers not adopting it. The model and its accompanying theories, exemplary solutions, and tools evolve into a general consensus on how “science is done”14 in this domain, aligning individual behaviour. The paradigm can thus be seen as an attractor of the complex system that is the scientific community: a set of states towards which the system evolves and stabilises.
In the process, social and economic factors gradually replace explanatory and predictive power as the dominant selection criteria. This can lead to situations where a scientific community clings to a model with stagnating or declining explanatory and predictive success because it is socially and economically useful for its members to do so15 – the system stays on its current attractor, regardless of whether it still fulfils its original purpose.
Such paradigms and the models and metaphors they are based on can shape whole disciplines and even societies. In philosophy, e.g., from Plato to Locke the pervasive “idea of ‘foundations of knowledge’ [was] a product of the choice of perceptual metaphors”16, while conceptual advances from Kant onwards were “contained within the framework of causal metaphors”17. In mainstream economics, a ubiquitous paradigm is based on the conceptual metaphor of growth: Although “99.9% of human history has been no-growth history”18, economic growth as measured by increase in GDP is now “the key political priority in all the advanced Western nations”19. The metaphor establishes a positive frame – “thriving economy, more spending power, richer and fuller lives, increased family security, greater choice, and more public spending”20 – that creates buy-in and eclipses the catastrophic long-term effects of growth-focused policies on nature. Since the ecosystems containing the economy are not represented in the paradigm’s models, these effects are only seen as “[c]ollateral damage to ‘the environment’” that is “a mere ‘negative externality’ that can be corrected by appropriate pricing”21, not as a reason to reject the paradigm.
As the last example suggests, in the wider society such paradigms can act as ideologies. The socio-economic context in which the scientific community is embedded shapes its incentive system in a way that makes alignment with power structures the dominant selection criterion for scientific models, leading to models that justify and strengthen these power structures. The familiar metaphors underpinning the models hide not only driving socio-economic and power interests, but also the very fact that alternative models could be constructed – and when colonising other areas of research, make the ideology even more encompassing and thus invisible.22
Taken together, the model-shaping character of metaphors, the power of paradigms, and their function as ideologies frequently favour conceptual foundations for our understanding of complex systems that serve social and economic purposes alien to the task of understanding – and often detrimental to the realisation of values like justice and sustainability. In many cases, this stymies creativity and diminishes real scientific progress. In others, it leads to science in the service of oppression23 and to a disconnect from thoroughly validated evolutionary knowledge in the interest of short-sighted optimisation. The most fateful case of the latter is the demise of age-old practices of circularity and embeddedness in nature in favour of extraction and exploitation, starting with the switch from foraging to agriculture and leading up to the growth paradigm, consumer capitalism, and now impending collapse.
Making Shift Happen
Once we have arrived at this point, we see not only how and to what end our understanding of complex systems is limited: We make sense of them using metaphors, whose adoption and spread is driven and shaped by socio-economic factors. We also begin to see a way out of this predicament: We realise that our understanding is “imprisoned by its metaphors” – and thus “stimulate an awareness through which it can begin to set itself free”.24
This “radical humanist critique”25 of our understanding is based on a view of language as the product of complex cognitive processes, not a transparent tool amenable to formal analysis. This latter “commonsense, folk-theoretic picture of speech, thought, and communication … constitute[s] a misleading context for scientific communication”26. Instead, the cognitive view of language emphasises that “elaborate constructions must occur that draw on conceptual capacities, highly structured background and contextual knowledge, schema-induction, and mapping capabilities”27 to make sense of anything. These resources are not only based on biological foundations, but also shaped by cultural evolution and their socio-economic context.
In this view of language and thought, approaches to complex systems that are traditionally thought to constitute diametrically opposed paradigms28 are in fact complementary, representing, to use a metaphor, two sides of the same coin: A positivistic account of our cognitive capacities leads to an interpretive critique of their application; an interpretive critique of how paradigms shape our explanations leads to a positivistic account of how this process unfolds cognitively and socially. In Hegelian terminology, the cognitive view of language and the complexity view of social systems represent the sublation of traditional positivistic and interpretive paradigms – they are both preserved and changed through their interplay with each other.29
In practical terms, this means the radical change in collective behaviour we need to meet the challenges of our time can most promisingly be provoked by a two-pronged approach:
Science and those applying it have to be critical of their metaphors – this is the interpretive aspect. The critique consists in making metaphors visible as such in the first place, tracing their socio-economic and historical roots, and supplying alternative metaphorical interpretations of the complex systems we are trying to understand. This can lead to radically different models and paradigms of e.g. the economy, money, and our relationship to nature, enabling behaviours and pathways to change that were hidden or thought to be unviable before.30
Shifts towards paradigms based on these new metaphors have to be understood as non-linear changes in system behaviour – this is the positivistic aspect. Non-linear change is, intentionally or unintentionally, triggered when reinforcing feedback loops between components of a system alter its behaviour dramatically and irrevocably, moving it beyond so-called tipping points. After tipping, the states the system evolves and stabilises towards are different than before – it has found a new attractor. Examples of such changes are political revolutions, market crashes, and ecosystem collapses.
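The mechanics of tipping between attractors can be sketched with a toy bistable system – a deliberately minimal illustration, not a model of any real social or ecological process; all parameter values are invented for the sketch:

```python
# A toy bistable system: reinforcing feedback pushes the state x away from 0
# towards one of two attractors, at x = -1 and x = +1. The tipping point is x = 0.
# The dynamics and parameters are illustrative choices.

def evolve(x, steps=200, r=0.1):
    """Iterate the dynamics x -> x + r * (x - x**3) and return the settled state."""
    for _ in range(steps):
        x = x + r * (x - x**3)  # feedback amplifies any deviation from 0
    return round(x, 6)

print(evolve(-0.05))        # settles on the attractor at -1
print(evolve(-0.05 + 0.2))  # a small push past the tipping point: settles at +1
```

Once the system has tipped, the change is self-sustaining: the same reinforcing feedback that held it near the old attractor now stabilises it around the new one.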
To induce them deliberately, the target system’s critical components and feedback loops between them have to be identified and specifically targeted with effective action. Empirical research into civil resistance, for example, has shown that its chances for success are highest when creating positive feedback loops between activating ever larger parts of the population – e.g. through festivals, protests, and direct actions – and eroding government power and resources – e.g. through dilemma situations, attrition, and loyalty shifts. Using such an approach, tipping points can be reached surprisingly quickly and with relatively small numbers of actively engaged people.31
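The logic of such mobilisation cascades can be sketched with a Granovetter-style threshold model – a simplified illustration, not the empirical model used in the research cited above; the thresholds are invented numbers:

```python
# A threshold cascade: agent i joins once at least thresholds[i] others
# are already active. All threshold values are illustrative.

def cascade(thresholds):
    """Iterate to a fixed point; return the final number of active agents."""
    active = 0
    while True:
        new = sum(1 for t in thresholds if t <= active)
        if new == active:
            return active
        active = new

lone_activist = [0] + [2] * 99      # one committed agent; everyone else needs two
committed_core = [0, 0] + [2] * 98  # two committed agents among 100

print(cascade(lone_activist))   # the cascade stalls at 1
print(cascade(committed_core))  # reinforcing feedback activates all 100
```

Whether the system tips depends less on the movement’s initial size than on the feedback structure: here, adding a single committed agent moves the outcome from one active person to the whole population.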
Since these mechanisms are non-linear, though, their effects cannot be predicted with any certainty – using them will require an iterative approach focused on using and experimenting with multiple perspectives on the system in question, creating “probes to make … potential patterns … visible before we take any action. We can then sense those patterns and respond by stabilizing those patterns that we find desirable, by destabilizing those we do not want, and by seeding the space so that patterns we want are more likely to emerge.”32 From science to society, creating change that way is a process of learning.
Sense-making and Survival
We have created a world that will not sustain us if we don’t change radically. But the ideas and images we changed the world with keep us from changing ourselves: They frame our sense-making and render its errors invisible, making us feel invincible right to our downfall. This, then, will be our key to survival: Understanding the limits of our understanding – and leveraging them for radical change with the mindset of a learner.
- Bailer-Jones, D. M. (2009), Scientific Models in Philosophy of Science
- Bendell, J. (2018), “Deep Adaptation: A Map for Navigating Climate Tragedy”, Institute of Leadership and Sustainability Occasional Paper 2
- Boyd, R. (1993), “Metaphor and theory change: What is ‘metaphor’ a metaphor for?”, in: Ortony, A. (ed.), Metaphor and Thought, 2nd edition: 481–532
- Chenoweth, E., and Belgioioso, M. (2019), “The physics of dissent and the effects of movement momentum”, Nature Human Behaviour (July 2019)
- Chenoweth, E., and Stephan, M. J. (2011), Why Civil Resistance Works: The Strategic Logic of Nonviolent Conflict
- Clark, A. (2013), “Whatever next? Predictive brains, situated agents, and the future of cognitive science”, Behavioral and Brain Sciences 36 (3): 181–204
- Craik, K. (1943), The Nature of Explanation
- Dennett, D. C. (2017), From Bacteria to Bach and Back: The Evolution of Minds
- Denzau, A. T., and North, D. C. (1994), “Shared Mental Models: Ideologies and Institutions”, Kyklos 47 (1): 3–31
- Dupré, J. (1994), “Against Scientific Imperialism”, Philosophy of Science Association Proceedings 2: 374–381
- Eliasmith, C. (2003), “Moving beyond Metaphors: Understanding the Mind for What It Is”, The Journal of Philosophy 100 (10): 493–520
- Eoyang, G. H. (2011), “Complexity and the Dynamics of Organizational Change”, in: Allen, P., Maguire, S., and McKelvey, B., The Sage Handbook of Complexity and Management, 319–334
- Fauconnier, G. (1994), Mental Spaces: Aspects of Meaning Construction in Natural Language, 2nd edition
- Frigg, R., and Hartmann, S. (2018), “Models in Science”, in: Zalta, E. (ed.), The Stanford Encyclopedia of Philosophy (Summer 2018 Edition)
- Gärdenfors, P. (1996), “Mental representation, conceptual spaces and metaphors”, Synthese 106: 21–47
- Hawkins, S., et al. (2018), Hidden Tribes: A Study of America’s Polarized Landscape
- Heylighen, F. (1998), “Attractors”, in: Heylighen, F., Joslyn, C., and Turchin, V. (eds.): Principia Cybernetica Web
- Hesse, M. (1966), Models and Analogies in Science
- Hitchcock, C. (2019), “Causal Models”, in: Zalta, E. (ed.), The Stanford Encyclopedia of Philosophy (Summer 2019 Edition)
- Ingham, G. (2004), “The Nature of Money”, Economic Sociology: European Electronic Newsletter 5 (2): 18–28
- Jackson, T. (2016), “Beyond Consumer Capitalism”, Centre for the Understanding of Sustainable Prosperity Working Paper 2
- Johnson-Laird, P. N. (2010), “Mental models and human reasoning”, PNAS 107 (43): 18243–18250
- Kuhn, T. S. (1970), The Structure of Scientific Revolutions, 2nd edition
- Kurtz, C. F., and Snowden, D. J. (2003), “The new dynamics of strategy: Sense-making in a complex and complicated world”, IBM Systems Journal 42 (3), 462–483
- Lakoff, G., and Johnson, M. (1980), Metaphors We Live By
- Lakoff, G. (1995), “Metaphor, Morality, and Politics, Or, Why Conservatives Have Left Liberals in the Dust”, Social Research, 62 (2): 177–213
- Mitchell, M. (2009), Complexity: A Guided Tour
- Morgan, G. (1980), “Paradigms, Metaphors, and Puzzle Solving in Organization Theory”, Administrative Science Quarterly 25 (4): 605–622
- Morgan, G. (1986), Images of Organization
- Rees, W. (2010), “What’s blocking sustainability? Human nature, cognition, and denial”, Sustainability: Science, Practice, & Policy 6 (2): 13–25
- Rorty, R. (1979), Philosophy and the Mirror of Nature
- Steffen, W., et al. (2018), “Trajectories of the Earth System in the Anthropocene”, PNAS 115 (33): 8252–8259
- Tweney, R. D. (2017), “Metaphor and Model-Based Reasoning in Mathematical Physics”, in: Magnani, L., and Bertolotti, T. (eds.), Springer Handbook of Model-Based Science: 341–354
- Wray, L. R. (2012), “A Meme for Money”, Levy Economics Institute Working Paper 736
I am grateful to Gregor Groß and Phil Harvey for comments on earlier drafts of this essay.
In the context of this essay, by language I mean human language. I am aware this is a problematic restriction, but a discussion that does the issue justice is beyond the scope of this article and therefore left for another time. ↩
This is meant in the broad sense of “tracking features of the target system”, which avoids any representationalist assumptions or commitments. ↩
Stereotypes are mental models evolved to navigate complex situations in a very short time. Suppose we drive a car: Instead of calculating what moving leaves mean about wind velocity, risk of a tree falling over etc., we use a stereotype from past experience: that leaves moving like that pose no danger to us and our car. (Hat tip to Gregor Groß for this example.) ↩
Tweney (2017) does this for the mathematical formulation of Maxwell’s theory of electromagnetism. ↩
Lakoff (1995), 182 ↩
Boyd (1993), 486, author’s emphasis ↩
Lakoff and Johnson (1980), 3 ↩
Morgan (1980), 612 (my emphasis) ↩
ibid., 612 ↩
ibid., 612, my emphasis; cf. also Eoyang (2011), 323 ↩
The following sketch is of course idealised; in reality, what is described here as a linear sequence of action will more likely resemble a jumble of criss-crossing trajectories. ↩
This understanding of paradigms corresponds to Thomas Kuhn’s wide use of the term as denoting “the entire constellation of beliefs, values, techniques, and so on shared by the members of a given community” (Kuhn 1970, 175), which encompasses exemplary puzzle solutions (paradigms in the narrow sense). ↩
A current example of such a situation is the dominance of Superstring or M Theory in theoretical physics despite its obvious stagnation in explanatory and predictive productivity. ↩
Rorty (1979), 159 ↩
ibid., 161 ↩
Rees (2010), 17 ↩
Jackson (2016), 6 ↩
ibid., 7 ↩
Rees (2010), 17 ↩
A current example of such an expansion of ideology is the habit of turning every social context into a market – “children, risky sex, marriage partners, etc.” (Dupré 1994, 9). ↩
Examples range from “Scientific Management” through scientific racism and eugenics to facial recognition for tracking people. ↩
Morgan (1980), 605 ↩
ibid., 605 ↩
Fauconnier (1994), xvii ↩
ibid., xviii ↩
The classification of paradigms used here goes back to earlier work by Morgan, upon which his (1980) and (1986) are built. ↩
See Jackson (2016) for an example of an alternative economic paradigm, Wray (2012) for a “new meme for money”, and Lakoff (1995) for progressive alternatives to conservative frames for a range of political issues. ↩
Chenoweth and Stephan (2011) and the subsequent work summarised in Chenoweth and Belgioioso (2019) are pertinent here. ↩
Kurtz and Snowden (2003), 469 ↩