All political revolutionaries imagine a future constellation of their society and, if and when they succeed in disrupting the old system, use Power to implement the new one. This is bound to fail, and the consequences are consistently horrible (for exceptions see below).
Here is why:
Societies are Complex Systems, and only some of their possible states (constellations of their components) are stable (their Attractors). It is extremely unlikely that revolutionaries can a priori (without empirical experimentation) design, predict or implement a new attractor. What they do instead is develop and use Explicit Models of the system that lend plausibility to the prediction of the new attractor (otherwise the revolution wouldn’t be attractive and gain traction). But these models are very likely to be too simple and thus inadequate.
When the system is destabilised (revolution), it will not settle into the imagined attractor, but enter a Liminal Space in which it goes through and selects various (sometimes chaotic) recombinations of smaller constellations (Selection is Bayesian Search) until it reaches a new stable state. (The model guiding this search is the Implicit Model the system has of itself.)
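As an illustrative sketch (not from the source, and of course a drastic simplification of any real society): the logic of attractors and sensitivity in the liminal space can be seen even in a minimal double-well system with two stable states at x = ±1 and an unstable state at x = 0. Trajectories kicked into the region around the unstable point settle into one attractor or the other depending on tiny, practically unpredictable differences in their post-disruption state; only the aggregate statistics are knowable in advance.

```python
import random

def step(x, dt=0.01):
    # Overdamped dynamics dx/dt = -V'(x) for the double-well
    # potential V(x) = (x^2 - 1)^2 / 4, with attractors at x = +1 and x = -1
    # and an unstable equilibrium at x = 0.
    return x - dt * x * (x * x - 1)

def settle(x, n_steps=5000):
    # Iterate the dynamics until the state has converged to one well.
    for _ in range(n_steps):
        x = step(x)
    return round(x)  # -1 or +1: which attractor was reached

random.seed(0)
# "Destabilisation": the system is thrown into the liminal region
# around x = 0; the sign of a tiny random perturbation decides the outcome.
outcomes = [settle(random.gauss(0, 0.1)) for _ in range(1000)]
print(outcomes.count(1), outcomes.count(-1))
```

The point of the sketch is only that the individual outcome is decided by details far below the resolution of any explicit model, while the set of possible end states (the attractors) is a property of the system itself.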
This new attractor looks (sometimes very) different from the imagined one: You might get terreur and an Emperor when you were fighting for reason and freedom, Stalinism when you wanted Communism. Which of the many possible attractors it will be is – again – unpredictable for the lack of an adequate explicit model of the whole of society. All such new attractors have in common, though, that violence plays a central role in their emergence and stabilisation.
The only exception to this is a switch to a relatively stable “copied” attractor, e.g. after the “peaceful revolutions” in Eastern Europe, where Western capitalism quickly stabilised post-revolution societies, albeit not in the state many revolutionaries wanted to see them in.
A reasonable attempt to change society that nevertheless goes beyond gradual evolution by disrupting its state when necessary has to focus not on determining its future state (an imagined attractor), but on better equipping the system for the liminal space and the search process, so it can avoid undesired outcomes (violence, injustice, oppression, war).
The same is true for preparing for a possible (revolution-free) Collapse and its liminal space: what is needed here aren’t imagined solutions, but the competences that enable the system to find better solutions more quickly. See Fleming (2018) for an example of how such a position can be articulated.
- Fleming (2018), Lean Logic