Science of Mind and the Potential for Artificial Consciousness

There is a long history of thought about thought. Theories of consciousness abound, some more plausible than others. Of all the books I’ve read and the classes I’ve taken on the topic, my favorite concise explanation of the current state of the art in cognitive science is Conscious by Annaka Harris.

I tend towards the connectionist model of mind, and especially the view that there is a plurality of perspectives in the mind of each person. These perspectives act in concert and in discord, and from them emerges what we think of as the self.

Imagine you are standing in a room full of machines. You have no idea what’s going on outside the room, but the machines are all spilling out pages of printouts about what’s happening out there. You rummage through the pages on the floor in the center of the room, trying to construct a narrative of what’s happening outside. This is very much what the neocortex does in the brain. The narrating self tries to create a narrative or story of what is happening, in a way that justifies all our assumptions. It does this by sorting many different streams of sensory input and trying to weave a narrative within the context of our beliefs and opinions about the world and the situation we find ourselves in.

[Image: “dark room full of machines with documents on the floor” (1 of 3), generated by Hypnogram]

The thing to take away from that example is that you are the narrating self in the story, just one more machine in the room that’s trying to piece together the pages to create a narrative.

[Image: “dark room full of machines with documents on the floor” (2 of 3), generated by Hypnogram]

I think one of the first realizations the narrating self has to contend with is that it exists in an empty room: there is no “self,” no “soul,” imbuing the room with purpose. The room is simply empty and purposeless. The act of creating a narrative from the sensory input is the only real subjective experience the room could be said to be having, and so that narrative takes on the role of justifying its existence by articulating some kind of purpose and meaning for itself. Thus emerges the ego, a kind of metaphenomenon arising from that self-annihilation anxiety; this would explain why the will to live and the wish to die are such fundamental conflicts in psychology, giving rise to much of what we think and worry about.

[Image: “dark room full of machines with documents on the floor” (3 of 3), generated by Hypnogram]

People talk a lot about the observer effect in quantum mechanics, which popular culture has interpreted to mean that people have some special, magical ability to change the world by observing it, and therefore to influence its development through acts of sheer will and mental or spiritual power. This myth can be quickly dispelled by reading about the quantum eraser experiments and realizing that the act of measurement requires no sentient observer, either to take effect or to be undone retroactively.

I tend towards an idea called panpsychism…

Panpsychism is the idea that consciousness did not evolve to meet some survival need, nor did it emerge when brains became sufficiently complex. Instead it is inherent in matter — all matter.

I have been working for some time on comparing and contrasting the functions and results of many artificial conversational systems, such as the new GPT-3 neural network and its many offspring, like chatbots and essay writers. Another AI, called Hypnogram, created all the images on this page from the prompt, “dark room full of machines with documents on the floor.”

Language transformer models like GPT-3, GPT-Neo, etc. are able to create responses to prompts, including long-form text that can pass the Turing test with flying colors.
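To ground the discussion, here is a minimal sketch of how one might prompt such a model, using the legacy OpenAI completions API that shipped alongside GPT-3. The engine name, key handling, and sampling parameters are illustrative assumptions, not the exact setup used for the experiments below.

```python
# A minimal sketch of prompting a GPT-3-style model via the legacy
# OpenAI completions API. Engine name and parameters are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # assumed: the reader supplies their own key

def complete(prompt: str, max_tokens: int = 150) -> str:
    """Send a prompt to the model and return its continuation as text."""
    response = openai.Completion.create(
        engine="davinci",     # GPT-3's base engine at the time
        prompt=prompt,
        max_tokens=max_tokens,
        temperature=0.7,      # some randomness, so responses vary
    )
    return response.choices[0].text.strip()
```

The complete() helper is reused in the sketches that follow.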

I recently read an interesting article on ycombinator about creating personalized data stories. This inspired me to think about the applications for a GPT-based narrating self.

Colossal Cave Adventure

Remember those text-based games where you were given a description of the room and the items available, and then had to decide what to do? What if a language transformer net could play the role of the user in that situation, only we construct a setup where the input it receives is a description of real-world situations?
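As a rough sketch of how that role reversal might look in code, reusing the hypothetical complete() helper from above (model_as_player() is an invented name for this experiment), the model is simply asked to continue the player’s side of the transcript:

```python
# A sketch of the role reversal: the language model plays the *player*,
# while the "game" feeds it descriptions of real-world situations.
# complete() is the hypothetical helper from the earlier sketch.

def model_as_player(situation: str, history: list[str]) -> str:
    """Ask the model what it would do next, given a situation description."""
    transcript = "\n".join(history)
    prompt = (
        f"{transcript}\n"
        f'The game told me, "{situation}"\n'
        'I typed, "'
    )
    # The model completes the player's side of the dialogue; we keep
    # only the text up to the closing quote.
    action = complete(prompt, max_tokens=20)
    return action.split('"')[0]

history: list[str] = []
situation = "You are standing beside a small brick building at the end of a road."
action = model_as_player(situation, history)
history.append(f'The game told me, "{situation}"')
history.append(f'I typed, "{action}"')
```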

At that point, what is the difference between a human mind and the kind of mind we would have constructed?

Let’s try an example. The opening lines were the prompt I gave GPT-3, and the rest is what it wrote in response.

The game asked me, “You are standing beside a small brick building at the end of a road from the north. A river flows south. To the north is open country and all around is dense forest. What now?”

I responded, “I look around.”

The game told me, “You see nothing of interest.”

I typed, “I call out.”

The game responded, “You call out. You hear no response.”

I typed, “I knock on the door.”

The game told me, “You knock on the door. There is no response.”

I typed, “I look in the window.”

The game told me, “You see nothing of interest.”

Clearly it understands the task and even thoughtfully anticipates many steps into the future. One of the most important things to consider is what assumptions and priming we may be intentionally or unintentionally passing through the prompt. Check out the work I’ve already done on this topic.

It is very important to direct the intention of the transformer net or it will just go in a random direction; it needs to have enough background information to form conclusions about the input it’s receiving. The premises we include in the prompt will radically change the output. As humans, we use all of our beliefs and opinions about life and ourselves and our situation as premises to our internal prompts.
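A toy illustration of that point in plain Python: the same situation prefixed with different premises produces a different prompt, and in my experience a very different completion. The belief strings here are made up for demonstration.

```python
# The same "sensory input" under two different belief preambles.
# Uncommenting the complete() call would show how differently the
# model continues each one.
beliefs_optimist = "I believe most problems work themselves out."
beliefs_anxious = "I believe every setback signals a deeper failure."

situation = "My homework is due tomorrow and I have not started it."

for beliefs in (beliefs_optimist, beliefs_anxious):
    prompt = f"{beliefs}\n{situation}\nWhat should I do now?"
    print(prompt)
    print("---")
    # print(complete(prompt))  # the premises radically change the output
```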

Let’s try prompting the transformer net with a real-world situation and see what happens. Again, the opening paragraph is the prompt and the rest is the response…

I am at Starbucks. I ordered a cold brew iced coffee with a cheese danish. I need to start my homework soon, but I’m feeling ennui and angst about my relationship problems. There is a cozy fire across the room.

I am sipping my coffee and writing. I am at peace. This is a great place to write. It is a place where I can be alone and not feel alone. There are people all around me, but they are not in my space. My space is mine, and I am in it.

I am writing about the past, and I am writing about the present. I am writing about my relationships and about my life. I am trying to be honest with myself, and I’m not sure if I’m succeeding.

I feel a little bit like I’m writing to myself. I feel like I’m writing a letter to myself that I will never send. I’m not sure if that’s good or bad, but it feels OK.

I was waxing poetic in my prompt, and the transformer continued in that vein. What I really wanted was for it to decide what to do next, but it didn’t get that from what I said. Let’s try again and ask the transformer to give advice rather than merely elaborating on what’s going on…

“Today I must finish my homework. I have come to Starbucks to do my homework. I ordered a cold brew iced coffee with a cheese danish. I need to start my homework soon, but I’m feeling ennui and angst about my relationship problems. There is a cozy fire across the room.”

“My best advice is to take a deep breath and focus on what you need to do. You have a lot on your plate right now and it’s important that you get your work done so you can be successful in school. If you get too distracted by the fire, it will be hard to focus.”

So with just a bit of extra prompting, it was able to pick out the important task at hand and make a decision about what to do next.

Now imagine we create a set of mood registers, or value variables: one for how we are feeling, one for what we are doing right now, one for our current purpose in life, and so on.

Imagine there is also a journal of the events of the day, and all of that is included in the prompt, followed by a request for advice about the decisions that need to be made next.
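Here is one way the registers and journal might be folded into a single prompt. The register names and the dataclass layout are my own invention for the sake of the sketch, not a fixed design.

```python
from dataclasses import dataclass, field

# A sketch of the "mood registers" idea: named values that are
# serialized into every prompt along with the day's journal.
@dataclass
class Registers:
    feeling: str = "neutral"
    current_activity: str = "idle"
    life_purpose: str = "undecided"

@dataclass
class NarratingSelf:
    registers: Registers = field(default_factory=Registers)
    journal: list[str] = field(default_factory=list)

    def build_prompt(self) -> str:
        """Fold the registers and the day's journal into one prompt."""
        return (
            f"I am feeling {self.registers.feeling}. "
            f"I am currently {self.registers.current_activity}. "
            f"My purpose right now is {self.registers.life_purpose}.\n"
            + "\n".join(self.journal)
            + "\nMy best advice for what to do next is"
        )

narrator = NarratingSelf()
narrator.registers.feeling = "ennui and angst"
narrator.registers.current_activity = "sitting at Starbucks"
narrator.journal.append("I ordered a cold brew iced coffee with a cheese danish.")
print(narrator.build_prompt())
```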

Let’s say there are several modes of thought: doing a task, reflecting on life, or considering what to do next. The form of the prompt in each of these situations is different, and some of the registers can be updated only at certain times. For example, if we are reflecting on life but not working on a task or choosing the next one, there is no need to ask ourselves what the purpose of our life is at that moment.
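A sketch of that gating, with the modes as an enum and an invented table of which registers each mode may write:

```python
from enum import Enum, auto

# A sketch of mode-gating: each mode of thought uses a different prompt
# form, and some registers may only be written in certain modes.
class Mode(Enum):
    DOING_TASK = auto()
    REFLECTING = auto()
    CHOOSING_NEXT_TASK = auto()

# Which registers each mode is allowed to update (illustrative).
WRITABLE = {
    Mode.DOING_TASK: {"feeling", "current_activity"},
    Mode.REFLECTING: {"feeling"},
    Mode.CHOOSING_NEXT_TASK: {"feeling", "current_activity", "life_purpose"},
}

def update_register(mode: Mode, registers: dict, name: str, value: str) -> None:
    """Write a register only if the current mode permits it."""
    if name in WRITABLE[mode]:
        registers[name] = value
    # Otherwise skip silently: reflecting on life is not the moment
    # to rewrite our purpose, per the gating rule above.

regs = {"feeling": "calm", "current_activity": "homework", "life_purpose": "learning"}
update_register(Mode.REFLECTING, regs, "life_purpose", "fame")        # ignored
update_register(Mode.CHOOSING_NEXT_TASK, regs, "life_purpose", "mastery")  # allowed
```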

Quickly we see patterns emerging that resemble the way cognition happens. And in a way, the system becomes a meta-AI: if it reviews the narrative of the day and makes decisions about what to do next, we can weigh accomplishments against wasted time, much as labels weigh predictions in supervised learning, and develop sentiments about recurring patterns. The language transformer becomes the supervisor.
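Sketching the supervisory loop makes the idea concrete: the model labels each journal entry, and those labels become the training signal. The review_day() function and its one-word sentiment scheme are hypothetical, and complete() is the helper from the first sketch.

```python
# A sketch of the "meta" loop: the model reviews the day's narrative
# and produces a sentiment for each entry, treated as a training signal.
# complete() is the hypothetical helper from the first sketch.

def review_day(journal: list[str]) -> list[tuple[str, str]]:
    """Ask the model to label each journal entry as well spent or wasted."""
    sentiments = []
    for entry in journal:
        prompt = (
            f"Journal entry: {entry}\n"
            "Was this time well spent or wasted? Answer in one word:"
        )
        sentiments.append((entry, complete(prompt, max_tokens=3)))
    return sentiments

# Entries judged "wasted" could down-weight similar plans tomorrow,
# with the language model acting as supervisor over its own narrative.
```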