#concept (12 mentions)

We make sense of the world in order to act in it.

Making sense means establishing coherence: When we make sense of a situation, we realise how its parts are connected, and how it fits into the wider context.

In other words, we form a Model of the situation.

Sensemaking doesn’t have to be a conscious act. Most of the time, we find our way around our Environment without actively thinking about it.

In such a case, we use an Implicit Model of our environment.

An implicit model is a non-conceptual, embodied representation of our environment – we navigate it safely because our body, our senses, our social norms, our tools and artefacts guide us through it.

Conscious sensemaking comes into play when our implicit models break down.

Thus we can define sensemaking as

the deliberate effort to understand events. It is typically triggered by unexpected changes or other surprises that make us doubt our prior understanding. (Klein et al. 2007, 114)

When that happens, we experience Decoherence – things suddenly don’t make sense anymore. This experience is particularly widespread in reaction to the current global Polycrisis.

We wonder: “What’s going on here?”

To answer this question, we build and test Explicit Models.

An explicit model is a purposeful description of our environment, often in the form of a story about causes and effects (a Causal Model), sometimes expressed in mathematical terms or as a simulation.

Conscious sensemaking is a paradoxical task:

An implicit model breaking down is the exception, not the rule. Most of our implicit models, which have evolved biologically and culturally and which we often share with others, are quite resilient and deal well with unexpected situations.

Explicit models, on the other hand, are often brittle and badly prepared for surprise, especially when we’ve built them alone. In addition, the linear causal stories we tend to tell are ill-equipped to capture the world’s interconnectedness and complexity. And most importantly, explicit models can be distorted by Ideology.

As a result, our explicit models fail more often than the implicit ones they are meant to replace. They frequently don’t create coherence or, worse, only give an appearance of coherence where there really is none, turning into Conspiracy Narratives and delusions.

And yet, we need models to act. We need to understand what’s going on before we can choose a reasonable response. And we need this understanding in time to not miss the window for our response.

This, then, is the challenge for sensemaking under uncertainty and change, when our evolved implicit models break down: To build adequate, resilient, and surprise-ready models that are accessible and timely so we can act on them.

There are a number of strategies to deal with this challenge:

Sensemaking Frameworks also aim to help with the challenge, but there are good reasons to Replace Sensemaking Frameworks with Scale-free Abstractions.


Concept Mapping

Concept Mapping is a tool for collaborative Sensemaking.

Cultural Evolution Is Multilevel Meme Variation, Selection and Replication

Cultural Evolution is the change of information capable of affecting individuals’ behavior over time.


Egregore

Originally an occult concept referring to “a non-physical entity that arises from the collective thoughts of a distinct group of people” (Wikipedia), it can also be understood as denoting a Meme, focusing on the meme’s point of view and agency in Cultural Evolution.

Heuristic Device

A heuristic device is “[a]ny procedure which involves the use of an artificial construct to assist in the exploration of …[complex] phenomena.”

Large Language Models Are a Feedback Loop for Society

Large Language Models (LLMs) will soon be a main source of content in our digital (virtual, augmented) world(s): Machine learning generated content is just the next step beyond TikTok: instead of pulling content from anywhere on the network, GPT and DALL-E and other similar models generate new content from content, at zero marginal cost.

Prioritise Abstraction Over Metaphor

Usually, the Abstractions we are using are based on Conceptual Metaphors.

Rationalism Is Not Rational

Rationalism à la Eliezer Yudkowsky has two connected fundamental flaws – an ontological and an epistemological one.

Replace Sensemaking Frameworks with Scale Free Abstractions

We want to maximise scope, detail, and cognitive efficiency of Sensemaking.

Scale Free Abstraction

Scale-free abstractions are a specific type of Shorthand Abstractions: highly general concepts taken from our best current thinking about evolution, cognition, and the world as a hierarchy of systems.

Sensemaking Framework

A Sensemaking framework is the codification of useful sensemaking practices. In this context, useful specifically means helping Build adequate, resilient, and accessible models on which we can act in a timely manner.

Separate Explanation from Emotion

The world is a hierarchy of systems.

Strategy Is a Learning Process

Strategy is de facto always an iterative learning process, even if this is often not made explicit and information gaps between iterations make it less effective and efficient.