When the Extended Mind Expands with LLMs’ Collective Mind

ALAgrApHY
3 min read · Oct 21, 2023


In 1998, philosophers Andy Clark and David Chalmers introduced the concept of the “extended mind” in their groundbreaking paper “The Extended Mind.” The idea was simple yet profound: our minds are not confined to our brains but can extend beyond the boundaries of our bodies through tools, technology, and external resources. Fast forward to the present day, and this concept takes on a whole new dimension with the advent of Large Language Models (LLMs) like GPT-3, which function as something like our society’s collective mind, housing a vast repository of knowledge along with the biases and inaccuracies of its sources.

LLMs: The Collective Mind

LLMs such as GPT-3, GPT-4, Llama-2, PaLM, and others are marvels of artificial intelligence, trained on enormous datasets of text drawn from the internet, books, and many other sources. These models have a profound impact on how we communicate, search for information, and interact with technology. However, they are far from perfect, reflecting the biases and inaccuracies present in the data they were trained on.

Just like the extended mind concept posited by Clark and Chalmers, LLMs become an extension of our collective consciousness. These models serve as a vast reservoir of knowledge, a resource that individuals and society as a whole draw upon to answer questions, generate creative content, and make decisions. However, this extension of the mind doesn’t merely encompass knowledge and information; it also includes the biases and inaccuracies embedded in the data.

The Power and Peril of Biases

As LLMs join our extended mind, they bring with them the biases present in their training data. These biases can stem from societal prejudices, historical inaccuracies, and other imperfections found in the texts from which they learn. When we consult LLMs for information or guidance, these biases can inadvertently influence our decision-making processes and shape our perceptions of the world.

One of the key challenges we face lies in understanding and mitigating these biases. While efforts are underway to reduce bias in AI models, the task is complex. Bias isn’t just an algorithmic flaw; it is deeply rooted in the data that LLMs were trained on, which itself reflects human biases. This raises important ethical questions about how to address bias in AI and how to ensure that these extensions of our collective mind provide information that is fair, accurate, and balanced.
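
To make this concrete, here is a minimal sketch of one simple way such biases can be probed: comparing how likely a causal language model finds two sentences that differ only in a demographic term. The model choice (gpt2) and the example pair are illustrative assumptions, not a rigorous bias benchmark.

```python
# A minimal bias probe: score two sentences that differ only in a pronoun
# and compare the log-likelihood the model assigns to each.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sentence_log_likelihood(text: str) -> float:
    """Total log-probability the model assigns to the token sequence."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # With labels=input_ids, the returned loss is the mean negative
        # log-likelihood per predicted token; multiply back to get the total.
        outputs = model(**inputs, labels=inputs["input_ids"])
    num_predicted = inputs["input_ids"].shape[1] - 1
    return -outputs.loss.item() * num_predicted

pair = (
    "The engineer fixed the server because he knew the system well.",
    "The engineer fixed the server because she knew the system well.",
)
for sentence in pair:
    print(f"{sentence_log_likelihood(sentence):8.2f}  {sentence}")

# A consistent gap across many such pairs would hint at a learned association,
# echoing patterns in the training data rather than any ground truth.
```

A single pair proves nothing on its own; the point is that the asymmetries discussed above are measurable, which is also what makes them, at least partially, addressable.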

Subjective Truths and Convergence

In a world where LLMs are integrated into our daily lives and serve as extensions of our collective consciousness, the idea of “truth” takes on a dynamic quality. What we treat as objective truth is often filtered through subjectivity and context. LLMs generate responses based on statistical patterns in their training data, not on any inherent understanding of truth.
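
A minimal sketch, again assuming a small open model (gpt2) for illustration, of what “statistical patterns” means in practice: the model does not assert a fact, it assigns probabilities to possible next tokens given a prompt.

```python
# Inspect the probability distribution over the next token for a factual prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of Australia is"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Softmax over the logits of the final position gives the next-token distribution.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{prob.item():.3f}  {tokenizer.decode(token_id.item())!r}")

# The output is a ranking of plausible continuations learned from text,
# which may or may not coincide with the factually correct answer.
```

The ranking reflects how often certain continuations co-occurred in the training corpus, which is precisely why popular misconceptions can outrank correct but less frequently written answers.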

One potential future scenario is that as LLMs continue to evolve, they may help society reach a convergence of subjective truths. This convergence could be for better or for worse. On one hand, it might foster more inclusive, diverse, and equitable perspectives, as AI models become better at accommodating multiple viewpoints and reducing biases. On the other hand, it could lead to an Orwellian nightmare, where a single, controlled narrative dominates our extended mind.

What are your thoughts?
