We make sense of the world in order to act in it. (This is lifted straight from Snowden et al. 2021.)
Making sense means establishing coherence: When we make sense of a situation, we realise how its parts are connected, and how it fits into the wider context.
In other words, we form a Model of the situation.
Sensemaking doesn’t have to be a conscious act. Most of the time, we find our way around our Environment without actively thinking about it.
In such a case, we use an Implicit Model of our environment.
An implicit model is a non-conceptual, embodied representation of our environment – we navigate it safely because our body, our senses, our social norms, our tools and artefacts guide us through it.
Conscious sensemaking comes into play when our implicit models break down.
Thus we can define sensemaking as
the deliberate effort to understand events. It is typically triggered by unexpected changes or other surprises that make us doubt our prior understanding. (Klein et al. 2007, 114)
When that happens, we experience Decoherence –
things suddenly don’t make sense anymore. (This experience is particularly widespread in reaction to the current global Polycrisis.)
We wonder: “What’s going on here?”
To answer this question, we build and test Explicit Models.
An explicit model is a purposeful description of our environment, often in the form of a story about causes and effects (a Causal Model), sometimes expressed in mathematical terms or as a simulation.
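To make the idea concrete, here is a minimal sketch of what an explicit model can look like when a causal story is expressed as a simulation. Everything in it – the variables, the causal links, the coefficients – is an illustrative assumption, not something taken from the text above.

```python
# A toy explicit model: the causal story "more incidents -> more firefighting ->
# less improvement work -> more incidents", run forward week by week.
# All variables and parameter values are invented for illustration only.

def simulate(weeks: int = 12, incidents: float = 5.0):
    """Return the weekly trajectory predicted by this toy causal model."""
    history = []
    for week in range(weeks):
        firefighting = 0.5 * incidents                              # incidents consume attention
        improvement = max(0.0, 2.0 - firefighting)                  # attention left for improvement work
        incidents = max(0.0, incidents + 1.0 - 0.8 * improvement)   # improvement work reduces new incidents
        history.append((week, round(incidents, 2), round(improvement, 2)))
    return history

for week, incident_count, improvement_effort in simulate():
    print(f"week {week:2d}: incidents={incident_count:5.2f}, improvement effort={improvement_effort:4.2f}")
```

Even a sketch this small makes the causal assumptions explicit and testable, and also shows how brittle such models are: change one coefficient and the story it tells changes with it.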
Conscious sensemaking is a paradoxical task:
An implicit model breaking down is the exception, not the rule. Most of our implicit models, which have evolved biologically and culturally and which we often share with others, are quite resilient and deal well with unexpected situations.
Explicit models, on the other hand, are often brittle and badly prepared for surprise, especially when we’ve built them alone. In addition, the linear causal stories we tend to tell are ill-equipped to capture the world’s interconnectedness and complexity. And most importantly, explicit models can be distorted by Ideology.
As a result, our explicit models fail more often than the implicit ones they are meant to replace. They frequently don’t create coherence or, worse, only give an appearance of coherence where there really is none, turning into Conspiracy Narratives and delusions.
And yet, we need models to act. We need to understand what’s going on before we can choose a reasonable response. And we need this understanding in time to not miss the window for our response.
This, then, is the challenge for sensemaking under uncertainty and change, when our evolved implicit models break down: To build adequate, resilient, and surprise-ready models that are accessible and timely so we can act on them.
There are a number of strategies to deal with this challenge:
- Prioritise abstraction over metaphor
- Move up and down the ladder of abstraction
- Launch safe-to-fail probes and adapt models continuously
- Switch from individual model building to collective pattern recognition
- Support the evolution of new implicit models
- Replace inadequate shorthand abstractions
- Expand the space of thinkable thoughts
Sensemaking Frameworks also aim to help with the challenge, but there are good reasons to Replace Sensemaking Frameworks with Scale-free Abstractions.
References
- Klein et al. (2007): “A Data-Frame Theory of Sensemaking”
- Snowden et al. (2021): Cynefin: Weaving Sense-Making into the Fabric of Our World