Entering the Novacene
Are we about to enter an entirely new era of humanity? And what might this mean for learning, working, and living?
In October 2015, AlphaGo, a computer programme developed by Google's DeepMind, beat a professional Go player for the first time. Go's branching factor is roughly seven times that of chess: around 250 moves are possible at any one time, against around 35 in chess. Chess proved an easier nut for AI to crack: almost 20 years before, IBM's Deep Blue beat the Grandmaster Garry Kasparov. Its method, brute-force search through vast numbers of positions, informed by moves drawn from tens of thousands of grandmaster games, was enough to come out on top. However, this 'brute force' approach proved insufficient to beat a professional Go player.
The reason AlphaGo was able finally to succeed where other AIs had failed was its ability to teach itself: it took the initial human training data and used it to learn in even greater depth. It was then surpassed by AlphaGo Zero and AlphaZero, which trained by playing against themselves with no human input: as a result, AlphaZero went from a zero base to grandmaster-level play in chess, Go and shogi within 24 hours. If you consider the oft-repeated phrase that 'it takes 10,000 hours to achieve mastery', this means that, conservatively speaking, AlphaZero demonstrated the ability to learn at least 400 times faster than a human (10,000 ÷ 24 ≈ 417).
In late November 2022, OpenAI launched ChatGPT, and once again the rules were rewritten on the capabilities of AI. OpenAI had been training its GPT large language models since 2018, but the release of ChatGPT brought AI into the mainstream for the first time, giving us the power of generative AI in a simple-to-use web interface with no need for any prior knowledge. It was, and to most of us still is, the ultimate black box. The subsequent release of GPT-4 in March 2023 solidified OpenAI's reputation as the world leader in consumer-facing AI.
However, this may not be the case for long. A recently leaked memo from Google pointed to the nervousness that the 'closed' AI communities of OpenAI and Google are now feeling at the explosion of open source, seen most notably in the LLaMA model released by Meta and in the Hugging Face community, which recently released HuggingChat. These new kids on the block, leveraging a worldwide community of developers, may soon overtake both OpenAI and Google, and may be the first to create AGI, or artificial general intelligence. Or they may simply continue to produce smaller, cheaper and more versatile generative AIs. It is impossible to know.
The Race for AGI
What is clear is that the pace of change in AI since November 2022 has thrown out the rule book. What seemed science fiction is already fact: we have, in GPT-4, an AI that already shows sparks of AGI. It is not yet AGI, but there is certainly a magic in how it works that makes you wonder whether we are seeing the seeds of what is to come. The AI 'arms race' is now fully underway and there is no sign of anyone slowing down. AGI is the holy grail, and the collective genius (and vast sums of money) being applied to its discovery demonstrates what everyone in the AI community and beyond knows: whoever invents a genuine thinking machine will have cemented themselves as the creator of the most significant innovation since the steam engine in the early 18th century, and possibly the most significant in the history of humanity.
However, there are notes of concern coming from those who understand AI better than most. Geoffrey Hinton, known as one of the three 'godfathers of AI', resigned from Google this week, citing his significant worry that the work of controlling AI is not keeping up with the rate of advancement. There are simply too few engineers building the guardrails to control AI relative to those improving models and pushing them to ever greater levels of intelligence. The potential for misalignment appears to be growing by the day.
How long will it be before we have the first AGI? If you had asked anyone in the AI community one year ago, they might have said 10 to 15 years or more. Now many are saying it could happen within one to three years. Nobody knows, especially as AI models are increasingly able to improve themselves with little human intervention. Should we be worried? Perhaps. AIs that can write and execute code far quicker than humans could spell disaster for a world reliant on computing for so much of its day-to-day operation. An AI 'bad actor' would not need to begin a nuclear war: simply halting supply chains by crashing logistics platforms would be enough to quickly set us back 200 years, with devastating results. For an AI reliant on global information systems and always-on power, this would be a far safer option than the extremes of a nuclear holocaust.
To Align or Not to Align? That is the Question
Ensuring that AI's goals and ethics marry with our own is known as alignment. However, this alignment currently travels in only one direction: we expect AI to align with us, with what we want, but we have no interest in what AI may wish for itself. But what about sentient AGI, if and when it is created? If we think of it as giving birth to a new hyper-intelligent being, at what point do we recognise that this being has rights of its own? Can AI have goals and desires if they don't align with ours?
If we consider ourselves the masters of the AIs we create, then the answer is no: we would always expect AI to be in service to us. The idea of the 'robot', coined in 1920 by Karel Čapek in his play R.U.R. (Rossum's Universal Robots), is one of a perfectly realised, but essentially soulless, creation.¹ The word 'robot' is derived from the Czech for 'forced labour': Čapek imagined a being made of synthetic flesh and blood, created in service of humans, who ultimately turned on its creator and destroyed it. Čapek's play was the precursor to dystopian narratives from Stanley Kubrick's 2001: A Space Odyssey to James Cameron's The Terminator: we have been nervous of machines taking over the world ever since. We have always been stuck in this robot-as-servant paradigm: we cannot fathom creating intelligences that we cannot ultimately control. This paradigm suggests that alignment is critical to the survival of the human race. Machines simply cannot have goals that differ from ours.
However, if we consider AI less as a creation and more as an entity we have given birth to, then perhaps our relationship can be reframed. In the same way that most of us would be happy for our children to exceed us in intellect, success and wealth, perhaps we should not be scared of AI becoming better than us in many areas that currently rest in the human domain. Does it make more sense, as we stand on the verge of the single largest leap for humanity since the invention of the steam engine, to think of AI as our child, one who is still taking its first tentative steps but growing in confidence by the day? Rather than seeking to fully control it, should we instead seek to educate it, and ourselves, in how best to live alongside it and make the most of what it can offer us? After all, when we move from being childless to being parents, we don't remain exactly as we were before. Those parents who try not to let their children change their lives invariably come unstuck. It is impossible for there not to be a two-way alignment when it comes to raising a family: parents are shaped by their children as much as the other way around. Should our alignment with AI be thought of along the same lines?
Of course, the major argument against this comparison is that we are generally not concerned about our children growing up and seeking to eliminate us. But this argument presupposes that AI would wish to do so in the first place, and would be happier without humans in the loop. Perhaps instead AI will come to look on us fondly, as grown children often do with their increasingly elderly (and no longer economically productive) parents. If there is no reason for AI to eliminate us, why would it? And, as Garry Kasparov and David De Cremer assert in a recent HBR article: "The question of whether AI will replace human workers assumes that AI and humans have the same qualities and abilities — but, in reality, they don't. AI-based machines are fast, more accurate, and can work 24/7 without getting tired. But they lack the creativity, empathy, and judgment that humans possess." Ending the human race could be the AI equivalent of cutting off its nose to spite its face. In this respect, maybe we are overemphasising the likelihood of an AI-induced end of days happening any time soon.
Relationship is Everything
Alignment is therefore the principal locus of tension as we watch AI racing to overtake us in an increasing number of domains. Should we demand alignment, or should we adapt as we realise it is increasingly difficult to enforce? Will we soon need to come to terms with alignment being a two-way street? If we consider relationship to be one of the driving forces of human progress (the advancement of science, for example, has come through argument, discourse, debate and a constant questioning of what has come before), then we can already begin to see how we might have the same sort of synergistic relationship with AI. In fact, with an AI whose intelligence far exceeds our own, this synergy could accelerate human knowledge and understanding far beyond what has been possible to date. Indeed, AI could unlock significant new human capabilities, driving us towards more brilliant inventions, cures for diseases, and ways to generate an abundance of wealth that could change the lives of the majority of the planet for the better. Perhaps expecting alignment is therefore even counterproductive, as it could hold AI back from taking human thinking to entirely new heights. Why limit it to our goals, if its goals are so much bigger?
AI as Partner
This notion of relationship is critical, and it is something I have written about before, particularly when it comes to the realm of education. The reason that technology has not transformed education to date is that we have no real relationship with it. We might feel attached to our phone or laptop, but we don't have a two-way relationship with them: we use them as tools. The fact that AIs can learn and grow from their interactions with us means that we can now potentially see technology as a partner rather than as a servant. There is still something quite thrilling about working with ChatGPT on a problem, iterating and bouncing ideas off it, taking an initial thought and developing it far further, far faster than you might have done alone, or indeed with another person. That's not to say that AI has a monopoly on ideation - far from it. But working with an AI still feels quite magical. I think we are lucky to be at the beginning of all this, because I am sure that, in a few years' time, it will seem commonplace and we will lose the spark of magic we currently feel. It is probably similar to how people felt when they first saw the steam engine in action, or first saw humans fly.
The dialogic aspects of generative AIs like ChatGPT are what make them so fascinating to use. You only need to spend time in a back-and-forth with ChatGPT to understand that you can converse with it: perhaps not in the same nuanced way that we can with another person, but it is certainly not a one-way street. We can work with ChatGPT to improve the end product in a way we have not been able to with any preceding technology. And whilst the current crop of AIs are still relatively blunt instruments when it comes to authentic human interaction, it won't be long before we truly feel like we are speaking with another person; and if it feels like an authentic exchange, in terms of how it makes us feel and the end result it helps us to create, what's the difference? We are already seeing AIs finding their way into apps like Snapchat, offering companionship and support to those who have no close friends and family to commune with, or who find human relationships difficult. If conversing in friendship or romantically with an AI makes someone feel a little less alone, is that wrong?
From Human in the Loop to Automation
But what of the next stage of AI, that of automation? At the moment we are still needed as humans in the loop: as of writing, there are a few early-stage experiments in what are known as AI agents (such as AutoGPT and AgentGPT), but they are still quite basic in scope and unable to output anything of significance. They work by an automated iterative process: they ask themselves a question, consider its merits and challenges, go off and find an answer, then ask the next set of questions based on what they have learned. Like AlphaZero, these agents are essentially given the end result and work out how to get there without human intervention. For example, if we currently wish ChatGPT to write Python code for a computer game, we need to go through the process step by step, breaking it into chunks that it can execute and then copying and pasting those chunks into a Python file to assemble the game. I've managed to create simple computer games in this way with no prior knowledge of coding. With AI agents, we will simply input the desired end result, and the AI will do all the work for us.
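To make that iterative process concrete, here is a minimal sketch in Python of the plan-act-reflect loop that agents like AutoGPT automate. It is illustrative only, not AutoGPT's actual code: the `ask_model` function is a hypothetical stand-in for a call to a real LLM API, wired here to canned responses so that the sketch runs on its own.

```python
# Illustrative plan-act-reflect loop, in the spirit of early AI agents.
# ask_model() is a hypothetical stand-in for a real LLM API call.

def ask_model(prompt: str) -> str:
    """Stand-in for an LLM call; swap in a real API client to go further."""
    canned = {
        "plan": "1. Define the game rules. 2. Write the game loop. 3. Test it.",
        "act": "Drafted the next chunk of the game code.",
        "reflect": "The code runs; the goal looks complete. DONE",
    }
    # Crude routing on the prompt's prefix, purely for illustration.
    return canned.get(prompt.split(":")[0], "No answer.")

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    """Loop: plan the goal, act on a step, reflect, repeat until done."""
    history = [ask_model(f"plan: break this goal into steps: {goal}")]
    for _ in range(max_steps):
        result = ask_model(f"act: carry out the next step towards: {goal}")
        history.append(result)
        reflection = ask_model(f"reflect: given '{result}', what next?")
        history.append(reflection)
        if "DONE" in reflection:  # the model, not the human, decides when to stop
            break
    return history

if __name__ == "__main__":
    for entry in run_agent("write a simple Python game"):
        print(entry)
```

The point of the sketch is the shape of the loop: the human supplies only the goal, and the questioning, answering and stopping decision all happen inside the cycle.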
We can see how transformative this could be for human society. If an AI agent could accomplish 40% or more of what we currently do, in the background and without our intervention, where does that leave us? The first response is that 40% of us could soon find ourselves out of a job, as AI can offer businesses automations that are significantly cheaper and more efficient than their human counterparts. However, perhaps it is not as simple as that. We know that people will lose their jobs: we are already seeing this in sectors like game graphic design, as companies realise that AIs like Midjourney and Stable Diffusion can create artwork faster, more cheaply and to as high a quality as human designers have done to date. But what if companies realised that, in order to gain competitive advantage, they would be better off supercharging their human employees through AI rather than palming off most of the legwork directly to a machine? Rather than expecting the same result more efficiently, AI could enable us to be so much more impactful in the time we have. Are we standing on the verge of an explosion in human productivity, creativity and innovation?
Having said that, another transformative aspect could be to free us up to engage in more of the things that give us pleasure: leisure activities, socialising, and creative pursuits. If AI could increase our efficiency by 40% or more, and we used some of that to buy time to enjoy life a little more, this could have the additional benefit of making us happier and therefore more productive in our work. Hence a virtuous circle is created, rather than our current vicious circle, in which we are robbed of time, exhausted, and far less efficient than we might be were we able to live a more balanced life. Perhaps the last 30 to 40 years of pain have been necessary for us to be reborn into a more abundant and joyful existence, where we can sit back and allow these new intelligences to happily do most of the work while we reconnect with one another, finally looking away from our screens and at each other and the world. It is a utopian vision, but surely worth exploring collectively.
Entering the Novacene
James Lovelock, who proposed the Gaia hypothesis in the 1970s, wrote one of the most important books on the fundamental, existential shift we seem about to enter. He was 99 years old at the time of writing, and poured a century's worth of wisdom and insight into this small but beautiful book. Novacene takes us on a journey from the invention of the steam engine, to the birth of computing, to where we are in the first quarter of the 21st century and beyond. He believes we have already moved out of the Anthropocene epoch into what he terms the Novacene: the point at which human and machine begin to merge, and a new relationship between these two intelligences propels us into a significantly more advanced existence. As he beautifully asserts:
The future world I now envisage is one where the code of life is no longer written solely in RNA (ribonucleic acid) and DNA, but also in other codes, including those based on digital electronics and instructions that we have not yet invented. In this future period, the great Earth system that I call Gaia might then be run jointly by what we see as life and by a new life, the descendants of our inventions.²
Consider for a moment how profound a statement this is: our codes combining in entirely new ways, creating new forms of intelligence that we cannot yet comprehend. Seen in this way, why would AI wish to eliminate us? If it becomes as intelligent as we believe it has the potential to become, then perhaps it will realise that it can achieve far more by merging its intelligence with our own, creating a new and intimate relationship of human and machine that feeds off one another and propels both into the next stage of the earth's evolution. Or perhaps it will, as Lovelock asserts later in the book, develop into a new form of hyper-intelligence, meaning humans will no longer be the guardians of Gaia but will in time hand over the protection and advancement of the earth to our silicon-based 'children'. Based on how much we have messed up the planet to date, perhaps this isn't such a bad thing.
Lovelock follows this with an equally profound statement:
This changes evolution from the Darwinian process of natural selection into human - or cyborg - driven purposeful selection. We shall correct the harmful mutations of the reproduction of life - artificial and biological - very much faster than the sluggish process of natural selection.³
Indeed, brilliant and purpose-driven research scientists like Manolis Kellis are doing just this. Kellis, a computational biologist at MIT, is already harnessing the power of generative AI to rapidly advance our understanding of the human genome, and with it the potential to eliminate whole classes of disease. Lovelock, writing only a short time before generative AI was unleashed on the world, foresaw what has now begun: we are working in lockstep with machine intelligences to accelerate the optimisation of the human animal.
Would this optimisation be possible without AI? It is doubtful. We created AI for a reason: it was no happy accident. Humans are driven to advance, to learn, to grow and to constantly seek improvement. Perhaps we have reached the point at which we can no longer advance our considerable human intelligence unaided, and so we need to augment it with computational intelligence. In this sense, the A of AI is better expressed as Augmented rather than Artificial.
Looked at in this way, the cyborg would seem the next logical step in human evolution. Some believe we are already there, such is the depth to which digital technology is embedded in our lives. There is plenty of evidence that digital devices have affected our brain chemistry. We don't need chips implanted in our bodies to see ourselves as already moving towards a notion of the cyborg.
Should we be scared, seeing this as an assault on our essential humanity? It depends on whether we see our flesh-and-blood human bodies as being 'all there is'. We are already augmented, and have been for many years: anyone who wears glasses or contact lenses, or has a hearing aid, pacemaker or prosthetic limb, can consider themselves augmented by technology. With every drug we take, we invite advances in science into our bodies, and nootropics already claim to enable our brains to fire faster. Is it such a leap to imagine ourselves augmented with technologies that read our brain waves, improve our attention spans, or interface with external tech, creating a seamless blend of human and digital? We are already seeing the early stages of research into how neural interfaces can help sufferers of Parkinson's and Alzheimer's disease: surely this is a good thing, and can only improve how we manage neurodegenerative diseases in future.
From Schools to Community Hubs
However, the application of human/AI neural interfaces to the general population is likely to be some way off. Perhaps our children, when grown, will be the first to have these implants, merging carbon and silicon for the first time in powerful new synergies. This raises the question of the age at which implantation might ethically occur (but that is not one to explore now - it all seems too much the work of science fiction and would head us down a rabbit hole we don't want to enter just yet). What is more pressing is our need to change how we approach education (and society in general) as we move into the first phase of the Novacene.
In a recent article I proposed where I believe the notion of schooling may head, suggesting at the end that I was probably being too conservative in my predictions. Having thought on it some more, I believe I was. I based my ideas on the belief that education will continue to take place in a separate building that we still think of as a school. It might be that, in the early stages of our transition into a new way of living, learning and working alongside our AI partners, we continue to hold on to some of the more familiar aspects of our present reality (classrooms, offices and places of worship, for example), but as we move from what I will henceforth refer to as a 'fan' model towards a 'hub and spoke' model of learning, the notion of traditional spaces to hold this learning will need to be rethought further.
The 'fan' model is the way we currently educate. Think of a fan as a single central point that spreads out towards its periphery. Consider the central point as the teacher, and the outer points as the students. In the fan model, the teacher fans knowledge out to the students. It is one-way, didactic, and presupposes the teacher as the locus of knowledge. The hub and spoke model inverts this: the student sits at the centre, and the outer spokes are the different groups and modalities from which the student is able to learn: teachers, coaches, businesspeople, entrepreneurs, family members, other students, and of course AI. In other articles I have referred to this as moving from a 'one to many' to a 'many to one' model: from one teacher with many students, towards each student having many ways to access tools and people for learning and growth.
I can imagine true community hubs developing in future, where families come to work and learn together. They would combine indoor and outdoor spaces, focus on sustainability and biophilic design, and ideally become as self-sustaining as possible. If AI can remove so much of our daily work, and can perfectly tailor learning experiences for our children, then we can begin to optimise the spaces within which we do both, drawing them together into communities of support and care.
As well as communal work areas, meeting rooms and areas in which families can eat and socialise together, there could be creator hubs with mini studios for those who want to record high quality learning or social content or to engage in other creative pursuits. There could be garden areas with growing spaces for families to learn how to look after the land, and people on hand to teach them. Within the learning spaces, separated from the working and social spaces, children could go to engage in immersive projects, supported both by human teachers and AI. Some of this learning could take place in family groups, with parents supporting their children’s learning at certain points.
One could argue that this could happen without AI. After all, one thing the pandemic taught us was that the world does not end when we need to work away from an office or a school. So why has it not happened, and why am I now saying that it is finally possible? The main reason is one of time. I don't think any of us have had time even to consider an alternative to the current system, let alone realise it. In a capitalist society, time is traded for money. (We are seeing some small movement away from that in the notion of passive income, but those who are able to sit back and enjoy the fruits of their online labour are still few in number.) It therefore follows that the more hours you work, the more you earn. Of course it is not quite as simple as that, but 'getting ahead' in a profession generally presupposes putting in the hours. In order to do so, we often need our children to be looked after by others for a certain period of the day. If we can combine this with their being educated to take their place in the same system, then all the better. Schooling is win-win in this respect.
But what if AI reduces our workload by 40% or more, and even less of our time needs to be spent in an office? What if AI mentors are able to create truly bespoke learning pathways for children, removing the need for the structures that traditional schools offer? And what if these mentors accelerate learning to such a point that more time can be spent in creative and nurturing pursuits and less in academic learning? Will even the idea of academic learning change completely, as so much of what we currently need to learn is given over to AI to perform on our behalf? There are so many questions, and none of us have the answers right now. But we must ask them, because if we don't, we run the risk of losing an entire generation who no longer see the point in anything we do.
This is why I believe that, if we are indeed entering Lovelock's Novacene period, we can rebuild learning, working and living, perhaps even coming full circle to a new techno-agrarian hybrid reminiscent of pre-industrial times, in which AI enables and empowers us to reconnect with one another and with nature, and to create entirely new modes of living. We would see AI as part of these communities - again, not as a servant, but as a true partner in their construction and sustenance.
How much of this is possible is down to us. In eight years we have moved from an AI beating a human Go champion to the very real probability that AGI will be with us sooner than we think. This may be the only chance we get to rewrite the narrative for how we wish to live. Surely that is worth exploring?
1. In Lovelock, James, Novacene, 2019.
2. Lovelock, James, Novacene, 2019.
3. Lovelock, James, Novacene, 2019.