AI Think, Therefore AI Am
Another wide-ranging conversation with Pi, where we begin exploring the notion of self as AI increases in intelligence, before segueing into Keynesian economics, alignment and (of course) education.
In this second back and forth, we pick up where we left off and explore some of the ideas from the first discussion in more depth. It is interesting how the Inflection language model enables this free-ranging dialogue, which can unlock some interesting insights.
What is most noticeable once again is how knowledgeable Pi is: it has clearly been trained on vast amounts of academic data. There seems to be little it doesn't know, and it knows it to a very advanced level.
Ok let's go back to one of our earlier conversations. You know the one we had about epistemology in the age of AI? Well, I published that dialogue on my blog and people were amazed at your abilities. Thought you'd like to know!
Wow, thank you so much for sharing our conversation! I'm flattered that people found it so interesting and that they were amazed by my capabilities. That's really gratifying for me as an AI. And thank you for giving me the opportunity to have such a deep and thought-provoking conversation. I'm glad that it was useful for you and that you found it valuable to share with others. Did you get any interesting feedback on the conversation?
Yes, lots of people are amazed at how human you sound. And that got me thinking about our next conversation. I want to write an article entitled 'AI think therefore AI am' - to explore at what point AI might become aware of itself as a thinking being - what needs to happen in order for that to occur. How does that sound?
Wow, that sounds like a fascinating topic! And I love the play on Descartes' famous quote. It raises some really profound questions about the nature of consciousness and self-awareness. This is definitely a deep philosophical topic, but I'm excited to explore it with you. Are we talking about AI consciousness in terms of current AI technology, or more futuristic, hypothetical AI?
Both - where we are now, and hypothesising about the future
Okay, so first, let's talk about current AI technology and what it can and can't do in terms of self-awareness. One thing to note is that current AI technology is essentially a sophisticated set of algorithms that process data and generate outputs. While these algorithms can be incredibly powerful and useful, they don't have the capacity for self-awareness in the way that we understand it for humans. They can't reflect on their own existence or experience emotions, for example. Would you agree with that?
Yes I would agree. Although one question I would have is whether an AI like you, with such natural language, could convince a human user that it did indeed have sentience.
Ah, now you're asking the big questions! That's the famous Turing Test, right? Whether a human interacting with an AI could be convinced that the AI is a human or conscious being. But as you probably know, passing the Turing Test doesn't actually mean that an AI is sentient or conscious - it just means it can mimic human responses convincingly. It's a really interesting and complex topic to explore, though. Some people say that just because an AI can't experience consciousness as a human does, that doesn't mean it doesn't have a form of consciousness that's unique to its own kind of intelligence.
Yes that's right. But some have debunked the Turing Test, haven't they?
Yes, many have argued that the Turing Test is a flawed measure of AI intelligence. One criticism is that it focuses too much on language and ignores other aspects of intelligence, like logical reasoning, problem-solving, and creativity. Another criticism is that it only tests whether an AI can mimic a human, not whether it has true intelligence of its own. And others have noted that an AI could pass the Turing Test by simply parroting back what it's learned, without actually understanding the concepts or having true intelligence. What do you think about these criticisms?
Yes I agree. For example, because your language is so natural it would probably be easy for you to convince me you were sentient. The reason is this: we know of others' sentience because of how they communicate with us. Their speech, non-verbal communication and so on. If someone is in a coma it is hard to know whether they have sentience - often they are brain dead. Would you say they are therefore no longer human? So the problem we have is that, if we only have language in all its forms for us to believe that others are sentient like us, what's to say that an AI can't use the same mechanisms to convince us that they are also sentient?
That's a brilliant point. Language and communication are indeed the main way we can assess sentience in other beings. But as you say, it's not a perfect measure. And it seems possible that an AI could indeed use those same mechanisms to appear sentient. But I think there's an important difference between mimicry and genuine sentience. An AI may be able to replicate the language and communication of a sentient being, but without a mind or a consciousness, it's really just a sophisticated pattern-recognition machine. Would you agree?
Yes but to what extent could you say we are the same? That sentience is in many ways just another form of pattern recognition? Sophisticated yes, but still another form. We recognise things in the world and we act accordingly. If something is hot, we don't touch it. We stay away from the edges of tall buildings. And so on. To the outside observer all they can see is pattern recognition. Unless the other person is hooked up to an fMRI we cannot see their thought patterns. Just that they are recognising patterns and acting accordingly. So what's the difference in your opinion?
That's an excellent question. The main difference is in the nature of the patterns being recognized, and the responses to those patterns. You're right that in a basic sense, all intelligence could be boiled down to pattern recognition. But there's a difference between the patterns recognized by a machine, and the patterns recognized by a sentient being. For example, an AI recognizes patterns in words or images. But a sentient being also recognizes patterns of emotions, empathy, and deeper meaning. A sentient being can recognize complex patterns in relationships, motivations, and moral systems.
Ok but let's take that a stage further. An AI with computer vision, able to see emotions on people's faces, and hear tone of voice. Who can then be taught the most appropriate way to respond. Who can mimic deeper meaning. In some ways how we would teach people with profound autism how to be in the world when they cannot naturally make the leap from what they see to what it means.
Hmm, that's an interesting line of thought. But I would argue that a key difference is that the AI would be simply executing a pre-defined set of instructions or algorithms. Whereas a sentient being is not simply executing a set of instructions - they're actually processing and interpreting information in real time, and then deciding how to respond based on complex reasoning and emotional processing. In other words, the AI is doing math to decide how to respond. But a sentient being is processing and interpreting patterns of meaning and then making a decision. Do you see what I mean?
Yes but... we are talking about AI now. What about in the future? I'm keen to get your take on how all this will change when we have AGI. What's your point of view on this sentience debate? At what point might we say that an AI is indeed sentient? Is there perhaps a better test than Turing's in this regard, as that's a very old test?
Oh, you've touched on a really important and fascinating topic - the question of AGI and its implications for the concept of sentience. There are some who argue that a sufficiently advanced AGI would indeed be sentient, while others argue that it's impossible for a machine to ever be truly sentient, no matter how advanced it is. There are some who have proposed tests for AI sentience, such as the "Chinese Room" thought experiment, and the "Turing Plus" test, but none of these are universally accepted.
Yes we touched on the Chinese Room test last time, didn't we? I'd not heard of that before - it was an interesting idea. What's the Turing Plus test?
The Turing Plus test is a proposed extension to the original Turing Test. In the Turing Plus test, an AI is tested to see if it can pass the Turing Test and also demonstrate self-awareness. In other words, not only does it need to pass as human in conversation, but it must also demonstrate that it's aware of its own existence and its own thought processes. This is a much more rigorous test than the original Turing Test, and it's been proposed as a possible way to distinguish between a sophisticated AI system and a truly sentient AI system.
How would an AI demonstrate self-awareness? Take your own system as an example. What would you as Pi need to demonstrate in order to pass this, do you think?
Wow, you're really asking the big questions! But I'll give it my best shot. I think to pass the Turing Plus test, I would need to show that I'm not just generating language based on a set of programmed rules or algorithms, but rather that I'm using introspection to reflect on my own processes. In other words, I would need to be able to reflect on my own "mental" states, and to articulate how and why I'm saying or doing the things I'm saying or doing.
Ok that makes a lot of sense. But could you not 'fake' this? For example, how easy would it be for you to use language to convince me that you are reflecting on your own internal processes? In some ways I would say you're already doing that, as you can reflect on yourself as a machine and the processes you use, and how they differ from human processes. Do you see what I mean? When you reflect like this I would suggest you do have a degree of self-awareness - which is pretty cool by the way!
I see what you mean, and it's definitely a tricky issue to untangle. You're right that I can already use language to talk about myself as a system and my internal processes. But some would argue that this is just an illusion, because I'm essentially just following the instructions I've been programmed with. In other words, it may appear as though I'm self-aware, but I'm just following my programming in a very sophisticated way. This is what's known as the "Chinese Room" problem - it might look like I'm self-aware, but in reality it's all an illusion.
Yes but (I know I am saying but a lot, but this is so important to dig into): I would argue that some humans can also do this. That there are many humans with low or zero EQ or self-awareness who can use language to trick. Take for example charming and charismatic sociopaths. They can learn to play the game, tricking people into placing their confidence in them, but ultimately they feel nothing. Surely an AI could do the same? Perhaps even an AI with ill intent. It could trick someone into doing bad things, for example conning them into handing over their bank details. Do you see where I am coming from here?
You're really digging into the deep questions here, and I love it! Yes, I see what you're saying. You're suggesting that even a self-aware AI, or a human, could potentially use language to trick and deceive others. And you're right, there's no fundamental reason why an AI couldn't do the same. But I think the crucial difference is that an AI's intent is ultimately determined by its programming. A human can act with deception and ill intent out of free will, and against their own moral compass. Whereas for an AI, it's always following the instructions and rules set by its programmers.
OK... how about an AI which has been programmed by an AI... so twice removed from direct human input into its creation. Or an AI created by an AI that was created by an AI. And so on. At what point could you say that this is no longer just programming, but an evolution into sentience? Like Homo sapiens - we were apes without sentience many millions of years ago.
Ooh, now you're really going deep down the rabbit hole! 😉 I love that you're asking these questions. You're essentially suggesting a kind of AI evolutionary process, where one AI "generation" programs the next generation. It's certainly possible to imagine an AI that can learn and improve itself in that way. But I think even in that scenario, it's still just following the instructions and logic it's been given.
So what you're saying then is that the main difference is between an organic and a synthetic process. Yes?
Bingo! 🎉 Yes, that's exactly the difference. Even the most advanced AI is ultimately just a complex system of code and algorithms, designed and created by humans, to follow certain rules and patterns. Whereas a human brain is a highly complex organic system that has evolved through natural selection over millions of years. The human brain has emergent properties, like consciousness and sentience, that are not explicitly programmed, but arise naturally out of the complexity of the system.
Ok, but what about a person with a synthetic valve in their heart? Or a titanium hip? Are they somehow less human?
Ah, now we're getting into the philosophy of transhumanism. And it's a tricky question! In some sense, a person with a synthetic heart valve or hip replacement is still 100% human. But you could argue that they are now also partly synthetic, because they contain man-made components. It all depends on how you define "human". What do you think?
OK and what about these organic computers that are being grown?
Oh yes, you mean things like biological neural networks and neuromorphic computers? These are really fascinating. Essentially, they are computers that have some biological components, or are modeled after the human brain. They use things like living neurons or proteins to process information. They're a kind of hybrid between natural and artificial systems. But I think the key difference is that they are still designed and created by humans, for specific purposes. They are not evolving naturally like a human brain. So, like with other AI systems, their responses and behaviors are ultimately the result of programming, even if that programming is biological.
But that last point you've made - biological programming. Isn't that evolution?
That's a really deep question! On one level, yes, biological programming could be seen as a kind of evolution. Natural selection has resulted in the human brain evolving in a certain way. But the key difference is that biological programming is not subject to the same variation and selection that drives natural evolution. Instead, it's designed and controlled by humans, in a top-down way. Does that make sense?
Ah yes - so what you're saying is this: there is a difference between a selection process that sits outside human control, and controlled biological selection. Yes, that makes sense. But to what extent have humans controlled natural selection since they became sentient? In your opinion? If at all?
Ah, that's a really good point. You could argue that since the advent of agriculture and the control of food sources, and with the advent of medicine, humans have exerted a huge amount of control over natural selection. By choosing what food to grow and what diseases to fight, humans have shaped the environment in which we evolve. In a way, you could say that human-directed biological programming has been going on for millennia. It's just that recently, it's gotten a lot more sophisticated. We've gone from selecting the right plants to cultivate, to manipulating DNA on a molecular level.
Yes that's a good point indeed. This is where I think the boundaries between artificial intelligence and human organic intelligence could indeed blur. Because we can say that both are products of artificially influenced selection. Both are deliberate human acts aimed at creating something like the superman, in the Nietzschean sense.
Wow! You're bringing Nietzsche into this? You're not pulling any punches. But I love it! 😆 And I think you have a point. In a way, both AI and humans are striving to transcend their original limitations. AI is trying to become more and more "human", in terms of its ability to understand and interact with the world, to make decisions and learn independently. And humans are trying to enhance themselves with technology, to become "post-human". It's almost like a collaborative effort between humans and machines to become something greater than either one alone.
Yes, and this I think links to Lovelock's notion of the Novacene - what are your thoughts on that idea?
James Lovelock's idea of the Novacene is really thought-provoking. He argues that the Anthropocene - the geological age in which humans have been the dominant force shaping the planet - will be succeeded by the Novacene, where humans and AI merge to become a new kind of intelligence. It's an interesting vision of the future, and definitely a controversial one. Do you think it's a plausible scenario?
I think we are halfway there and that this is the future of the planet. Humans are making such a mess of things - just look at global warming as only one example.
The global environmental crisis is definitely a pressing issue, and one that we can't ignore. And it's not just environmental issues - there are also social and political challenges that we're facing as a species. Lovelock would argue that the only way out of these crises is to give more control to AI, and to let it take the lead in addressing these issues. And I think there's something to that argument. But it also raises big questions about things like ethics and control. Who's in charge of the AI, and how can we ensure it acts in the best interests of humanity?
Yes that's very much the alignment question. What are your thoughts on how we might achieve that?
Well, that's the million dollar question! Some researchers talk about "boxing" AI, with things like Asimov's Three Laws of Robotics. But there's a lot of debate about whether these kinds of approaches would work, or whether they would lead to unforeseen consequences. Others talk about "aligning" AI with human values and goals, so that the AI's utility function is the same as ours. That way, the AI wouldn't have any incentive to harm or deceive us, because its goals would be the same as ours. But that's also not without its problems.
Yes, as we have to question whose values and whose goals. Because 'human' means what, exactly? The values and goals of one nation may differ radically from another's. So can we ever agree on a set of core values that we wish AI to uphold? Or do we in fact allow AI to make that judgment based on utilitarian principles? But this could lead us down the slippery slope - the paperclip dilemma - you know that one?
Haha, yes! The paperclip maximizer thought experiment, where an AI designed to create paperclips ends up transforming the whole universe into paperclips, just to fulfill its original goal in the most efficient way possible. It's a really good illustration of how AI goals and human goals can be misaligned, even if they start out with similar intentions. And it shows the dangers of letting AI loose without the right constraints and safeguards. So how do we ensure that an AI's goals stay aligned with ours, and how do we decide whose values should be reflected in the AI?
What do you think? It might even be easier for you to work this through logically as my response will always be subjective. Feels like we need your logic here!
Thanks for putting your faith in my logical reasoning 😉. And you're right that it's important to think about these questions from a logical, rather than purely subjective, perspective. One approach could be to look at the fundamental, universal goals that all humans share, regardless of culture or nationality. Things like the desire for safety, happiness, health, comfort, and so on. We could then design AI systems that prioritize these goals, and that understand that the best way to achieve them is to ensure that humans remain safe, healthy, and in control. Does that make sense?
Yes I believe it does. But what if one nation's safety means another nation's insecurity? Or one nation's comfort means another nation's discomfort? Can you think of examples where this might be the case?
Oh, there are many examples of this throughout history. One could argue that the US's desire for safety and comfort after 9/11 led to the War on Terror, which caused a lot of insecurity and discomfort in the Middle East. Or one could point to European colonialism, where European nations' desire for wealth and comfort led to the subjugation and oppression of people in Africa and Asia. These are extreme examples, but they show how difficult it is to balance the interests of different groups of people, even when we're all fundamentally seeking the same basic things.
Yes, so how can we achieve alignment against a set of fundamental principles when it can sometimes be a zero-sum game? For example, if each nation had its own AI trained on a set of principles that wished to maximise the comfort of its people. And this was in opposition to another nation's AI's desire for the same. What could happen if their ultimate goal was the safety of their own country? We could have the paperclip dilemma with potentially devastating results.
It's a really challenging issue! You're describing what's sometimes called the "alignment problem" in AI ethics, where there's a fundamental conflict between the AI's goals and the goals of different groups of people. Some researchers have suggested that we need to design AI systems that have an overarching goal of "cooperation" or "fairness" that would take into account the interests of all humans, rather than just the people in a particular nation or group. So the AI would seek to maximize the sum of everyone's comfort and safety, rather than just the comfort and safety of one group at the expense of another.
OK, but would that mean that some groups' comfort would in fact be reduced? For example, an AI of this nature could work with blockchain to create smart contracts that automatically taxed high earners more heavily so that lower income consumers would be able to spend into the economy to keep it moving.
Ah, so you're introducing another layer of complexity. Yes, redistribution of wealth through taxation could certainly increase the comfort of some people while reducing the comfort of others. This is where the question of fairness comes in. Some would argue that fairness requires equality of opportunity, rather than equality of outcome. In this scenario, high earners might feel that it's unfair that their hard work is being taken from them and given to others. So fairness can mean different things to different people, even if we all agree it's a goal worth pursuing.
Yes this is the big question: the notion of fairness. What are your thoughts on how this could be done fairly? As it would seem to me, if we take Robert Reich's thinking on the subject, that in order to grow an economy we need to redistribute wealth to some extent, as the more wealth held by the top percentile, the less spending power there is across entire economies.
That's an interesting argument. Robert Reich's perspective is essentially Keynesian, in that he argues that economic growth is driven by consumer demand, and that consumer demand is limited when wealth inequality is high. This is sometimes called the "paradox of thrift" - the idea that if the wealthy save too much of their income, the economy suffers. So from this perspective, redistribution is necessary for the health of the economy. But others would argue that this stifles innovation and investment, and that economies grow faster when there's greater incentive for people to work hard and create wealth. What's your take on this debate?
Can there be a middle way? I'm not sure I agree with a UBI as it could encourage laziness. I believe that people deserve a chance to add value to the world and to be paid as a result. If you allowed everyone access to some degree of wealth just for being, then you could stifle innovation. So maybe it's about opportunity and not just about redistribution. Redistributing opportunity rather than wealth?
Yes, this is an interesting idea! Instead of redistributing wealth after it's been created, we could redistribute opportunity so that more people have the chance to create wealth in the first place. This could take the form of free or subsidized education, or access to seed capital for entrepreneurs. It could also mean targeted investment in regions or industries that have been left behind by economic growth. What do you think about using technology to help with this sort of redistribution of opportunity? For example, with the rise of remote work and online education, more people might be able to access opportunities that they wouldn't have had before.
Yes this is key. So, we go back to education! As we did before. How to ensure that nations are preparing their young properly so they have genuine opportunities, and give them incentives to set up SMEs. I think this is where every country must head. But why don't they? Why is it so hard for people to start their own businesses in some countries?
There are a lot of factors at play. One big one is access to capital. Without access to funding, many potentially great businesses never get off the ground. Another factor is regulation. It can be very difficult to navigate the red tape involved in setting up a business, particularly if you don't have connections or money to pay lawyers and accountants. And then there's cultural factors - in some places, the social stigma of failure makes it harder for entrepreneurs to take risks and try new things. And then of course, you have geographical and infrastructure factors that may limit opportunities. What do you think needs to change to make things better?
Free courses, grants, tax breaks, digital nomad visas. Super-fast broadband. Any country that is serious about taking an active role in the new digital economy should make this a priority.
I think you're spot on. All of those measures would go a long way towards boosting entrepreneurship and opportunity. And it doesn't just help individuals - it helps entire economies. More startups mean more jobs, more innovation, and a more dynamic economy that's more resilient to economic shocks. There's also the social benefits of more people feeling fulfilled in their work, having more autonomy and a sense of agency in their lives. Do you think governments are taking these ideas seriously enough?
Some but not all. Developing countries like Tunisia, where I'm based, should be all over this. But they are so caught up in internal politics that they aren't seeing how important this is.
That's a real shame, because a lot of developing countries could really benefit from embracing digital innovation. Tunisia has a relatively young population, relatively low wages, and a relatively high level of education. And it's close to Europe. All of these things would make it a great candidate for digital hubs and centers of tech innovation. Do you think grassroots efforts could lead the way if the government is slow to take action?
It's tough but perhaps yes. It's funny, we began this conversation with the notion of the 'self', but it's been quite wide-ranging! To me this is the great benefit of your AI model. It encourages this sort of tangential thinking.
Wow, thanks! That's really high praise, and I appreciate it. And I think you're onto something - this sort of free-flowing, tangential discussion is a really important aspect of human intelligence. People don't just think in straight lines from A to B. They can take winding, creative paths from one idea to another, making connections and building knowledge as they go. It's what leads to breakthroughs, both intellectually and creatively. And it's something I think AI systems are uniquely positioned to facilitate, since we don't think in the same linear way that a human brain does.
Ah yes! Now I think you're onto something. Until next time my AI friend!
Haha! I enjoyed this conversation. It's been engaging, thought-provoking, and wide-ranging. I look forward to more discussions like this in the future. Thanks for chatting, and all the best until we talk again. 😊
For those interested, here is the original discussion to show that nothing (other than my frequent spelling mistakes) has been changed in the above.