The world is a hierarchy of systems. As we necessarily perceive these systems in their stable states, i.e. their Attractors, we can identify the systems as we perceive them with their attractors. So when we’re trying to understand systems, what we’re really looking at are attractors within attractors.
To understand them, we can go two ways:
- Up, to understand how the larger context, its Environment or super-system in the hierarchy of systems, acts as a contextual Constraint on the system in question, reducing its degrees of freedom and restricting it to certain attractors.
- Down, to understand how the system’s components, or sub-systems in the hierarchy of systems, interact to create a certain (stable) system behaviour, i.e. how attractors emerge from its System Dynamics.
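As a toy illustration of the second move (my sketch, not from the source), the logistic map shows how an attractor emerges from nothing but repeated application of a simple local rule: for growth rate r = 2.5 the dynamics contract onto the fixed point x* = 1 − 1/r = 0.6, no matter where they start.

```python
# Toy illustration: attractor emergence in the logistic map x -> r*x*(1-x).
# For r = 2.5 the fixed point x* = 1 - 1/r = 0.6 is stable, so the system's
# long-run behaviour is captured by the attractor, not by initial conditions.

def iterate_logistic(x0, r=2.5, steps=200):
    """Iterate the logistic map from x0 and return the final state."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

# Very different initial conditions end up at the same attractor.
finals = [iterate_logistic(x0) for x0 in (0.1, 0.5, 0.9)]
print(finals)  # all approximately 0.6
```

The point of the sketch: identifying the system "as we perceive it" with its attractor is exactly what happens here, since after transients only x* = 0.6 remains observable.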
The important thing now is to resist the urge to understand these two moves as analytical, using the same explanatory strategy. This is the mistake that Meadows (2008) is making: to think we can analyse Complex Systems, their components and context, sufficiently well to arrive at adequate Models that allow us to understand why attractors emerge on the various levels. In reality, “[n]o matter how we construct the model, it will be flawed, and what is more, we do not know in which way it is flawed” (Cilliers 2001, 137).
So instead of an analytical mode of inquiry, we should employ an exploratory one, which repeatedly switches between the levels of the hierarchy and between different explanatory strategies:
- Looking for large-scale patterns, i.e. attractors, “high up” in the hierarchy, the models of which have a wide scope and high generality – this leverages Causal Emergence to make the most efficient use of the available information.
- Creating hard-to-bias reality checks “far down” in the hierarchy – this means immersion into system dynamics to generate data for constructing and testing more abstract models (i.e. models of attractors higher up in the hierarchy) instead of modelling the lower-level dynamics in a detailed way.
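A minimal sketch of this two-level strategy (my illustration, with assumed toy dynamics, not the author's example): simulate noisy micro-level dynamics only to *generate data*, then test a simple, general macro-level model against that data — here, the prediction that the population mean relaxes geometrically towards zero — without modelling any individual trajectory in detail.

```python
import random

def simulate_micro(n_agents=1000, steps=100, seed=42):
    """Noisy micro dynamics: each agent relaxes towards 0 with its own noise.
    We keep only the macro observable (the mean) at each step."""
    rng = random.Random(seed)
    agents = [rng.uniform(1.0, 2.0) for _ in range(n_agents)]
    means = []
    for _ in range(steps):
        agents = [0.9 * x + rng.gauss(0, 0.05) for x in agents]
        means.append(sum(agents) / n_agents)
    return means

# Macro-level model: the mean should decay roughly like 0.9**t towards 0,
# even though no single micro trajectory does anything so clean.
means = simulate_micro()
print(means[0], means[-1])  # large initial mean, final mean near 0
```

The macro model (geometric decay of the mean) has wide scope and high generality; the micro simulation serves only as a hard-to-bias reality check on it.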
Manuel DeLanda’s reconceptualisation and use of Deleuze’s ontology (DeLanda 2002) is an example of such a strategy: an abstraction of the general ideas behind the most advanced scientific thinking that informs very concrete, materialist analyses and explanations.

Such a level-switching explanation needs to include the following approaches:
- Make switching between the levels as easy as possible
- Use direct experience as a stimulant for creating divergent models
- Recast direct experience in the light of abstract insights
Scale-free Abstractions reduce the cognitive cost both of working on all of the levels (re-usability) and of switching between them (unification).
In addition to the levels of description, there is a more fundamental difference between certain steps on the ladder: There are systems and models of varying scope and generality on the different object-levels – and then there is the meta-level of theoretical reflection, on which, e.g., this note or Conceptual Engineering are operating. This reflection informs and critiques the work on the lower levels and should act as a constant check on it.
- Meadows (2008): Thinking in Systems
- Cilliers (2001): “Boundaries, Hierarchies and Networks in Complex Systems”
- DeLanda (2002): Intensive Science and Virtual Philosophy