Large Language Models Are a Feedback Loop for Society

#proposition

Large Language Models (LLMs) will soon be a main source of content in our digital (virtual, augmented) world(s):

Machine learning generated content is just the next step beyond TikTok: instead of pulling content from anywhere on the network, GPT and DALL-E and other similar models generate new content from content, at zero marginal cost. This is how the economics of the metaverse will ultimately make sense: virtual worlds need virtual content created at virtually zero cost, fully customizable to the individual. (Thompson 2022)

And ultimately make us stupid: since LLMs don’t have a Model of the world (as a whole) – a point repeatedly made by Joscha Bach – they will not point us to anything really interesting or important. The content they produce will necessarily be derivative, and when fed back into succeeding LLMs, we’ll enter a downward spiral towards boredom and stupidity.

AIs will likely in turn be trained on these AI-created texts. Since the entire thing rests not on the intelligence of these AIs, but their ability to mimic, the more they mimic themselves the worse and more derivative they will get. The increases in their apparent intelligence will mask this for a while, since they will appear ever smarter and smarter, but in the long run actual artful prose that’s not contrived and stereotypical will become an increasing rarity. Everything will be pretty good, and everything will be bland. (Hoel 2022)

As a consequence, we will become less imaginative and worse at making sense of complexity – but also, and more importantly, more beholden to Ideology in general and Consumerism in particular, and less able to do what we need to do to avoid Collapse.

In short, LLMs will be yet another Technology that acts as a feedback loop for society – they will amplify what’s already happening, and get us deeper into the shit we’re already in.

References
