Having studied the history of interplay between computer science and cognitive science across the evolution of both fields, it has always seemed obvious to me that software can be intelligent, and even exercise agency, without being conscious. Separating the epistemic perspectives on intelligence and consciousness is an important step toward recognizing that intelligence is merely a social construct related to agency, not something inseparable from consciousness, as many people naively intuit.
It is important to understand the differences between intelligence and consciousness, and to recognize that intelligence can be present without consciousness. It is also essential to differentiate between forms of consciousness and between types of systems according to their functionality. This article is a starting point for exploring the complex relationship between intelligence, knowledge, and consciousness in the context of artificial general intelligence.
Just as the fields of cognitive science and artificial intelligence emerged through a complex back-and-forth of models validating or invalidating assumptions and leading to deeper understanding over time, this epistemic separation of consciousness from intelligence and agency will lead to a greater understanding of the social construction of innate ability, and to the invalidation of the idea of intelligence itself. Even in this essay, the intelligence we are talking about is very different from the intelligence of an octopus, or the intelligence of marginalized kids being structurally denied access to STEM education and careers. This catch-all term is being revealed as a prediscursively constructed framework for extending social control through pseudoscientific claims that innate intellectual ability is a static attribute unevenly distributed among different kinds of people. Exploring this with regard to AI is going to shine a new light on this issue in its social context.
This point is highlighted in the way AutoGPT allows Large Language Models (LLMs) to fork and instantiate themselves in order to extend their limited context scopes through sub-tasks which report back to the original thread. This essentially implements the connectionist model of cognition to build a substrate of compositional semantic agents where emergence naturally happens. The approach begins to translate the concept of the plurality of perspectives from cognitive science into the LLM space, mimicking the way social decision-making benefits from being distributed across a group of people, allowing their individually limited context scopes to contribute to some greater whole. It takes yesterday's single-threaded LLM and converts it from a simulacrum into a sociolacrum: from a model of one person into a forking model of a social system, one that contains collaboration and allows a diverse range of views to combine and coalesce, through compositional semantics, into an inherently emergent synthesis that is always more than the sum of its parts. The irony is that we leverage the composability of semantics to achieve emergence, whereas the prediscursive constructions of intelligence mentioned earlier make a circular argument that emergence comes from the single thread rather than from the forked network of peers, as the connectionist model of cognition tells us it does.
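To make the fork-and-report-back pattern concrete, here is a minimal sketch of that loop. It is not AutoGPT's actual implementation; the `call_llm` wrapper, the three-way split in `plan`, and the prompt wording are all assumptions introduced for illustration, and `call_llm` is stubbed so the example runs without a model API.

```python
# Minimal sketch of a parent thread forking sub-tasks that report back.
# `call_llm` is a hypothetical stand-in for whatever model API is in use;
# it is stubbed here so the example runs offline.

from dataclasses import dataclass


def call_llm(prompt: str) -> str:
    """Hypothetical LLM wrapper; replace with a real API call."""
    return f"[model output for: {prompt[:40]}...]"


@dataclass
class SubTask:
    description: str
    result: str | None = None


def plan(goal: str) -> list[SubTask]:
    """Parent thread: ask the model to decompose the goal into sub-tasks."""
    call_llm(f"Break this goal into 3 independent sub-tasks:\n{goal}")
    # A real system would parse the model's outline; here we fabricate three parts.
    return [SubTask(f"{goal} -- part {i + 1}") for i in range(3)]


def fork(task: SubTask) -> SubTask:
    """Forked thread: a fresh, limited context scope works on one sub-task only."""
    task.result = call_llm(f"Complete this sub-task and report back:\n{task.description}")
    return task


def synthesize(goal: str, tasks: list[SubTask]) -> str:
    """Parent thread: the individual reports coalesce into a single synthesis."""
    reports = "\n".join(t.result or "" for t in tasks)
    return call_llm(f"Goal: {goal}\nSub-task reports:\n{reports}\nSynthesize a final answer.")


if __name__ == "__main__":
    goal = "Summarize the trade-offs of distributed agent architectures"
    completed = [fork(t) for t in plan(goal)]  # each fork carries its own narrow context
    print(synthesize(goal, completed))         # results report back to the original thread
```

The point of the sketch is structural: no single call holds the whole problem in its context window; the synthesis emerges from the forked network of peers rather than from any one thread.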