When I was around nine, my father introduced me to rugby. I joined a mini rugby team, played a handful of games, and hated it: the cold, wet pitches, the mud and dog crap, the communal changing. I knew he was disappointed as rugby was a big part of his life, but I just couldn’t do it. To be honest, sport in general only made sense a few years later when I took up golf, because when it rained I could put up an umbrella.
Looking back, I can see that, while I detested standing in a freezing field wearing shorts and a thin shirt, having team-mates has its benefits. Relying on others, and others relying on you, can bring out the best in us. If you care about them, and the team in general, you’re more likely to raise your game, and less likely to make excuses. Team sports can teach children a lot, and I probably missed out having never experienced it much myself.
I wanted to start with this reflection because of Ethan Mollick’s recent post on ‘The Cybernetic Teammate’. Mollick et al.’s research concludes that teams (and indeed individuals) are far more effective when working alongside AI as part of a human/AI team. It’s as if AI is able to bring out the best in the teams it works with.
This got me thinking about the education system as a whole, and its role in shaping the grown-ups of the future. What I have concluded, as a corollary of Mollick’s findings and my own developing ideas, is that this demands nothing less than the most profound rethinking of formal education since its inception.
You’re on your own
Before we move on, let’s take a quick look at how our current system deals with the team effort. As it happens, not too well. Within the industrial model of education, we privilege solo performance over collaboration. Children may learn together, eat together, and play together, but when it comes to exams, they’re on their own. From a standardised perspective this makes sense: it’s challenging to rank individuals against a set of standards if output is the result of collaboration.
But as we know, this is not how the world works. We rarely work in isolation, so we can rarely be judged in isolation from those we work with. That’s why I’m wary of performance management in its most traditional sense: I just don’t see how you can judge individuals in the workplace without looking at the broader context. Take the class teacher downgraded for classroom management by a line manager who offers no support with more challenging students. I’ve seen it so many times.
The power of the team
There’s plenty of research to show that promoting teamwork in schools is a good thing. Collaboration can be a great classroom tool to encourage communication, problem-solving, turn-taking, and leadership, as well as the potential synergies of ‘two heads being better than one’.
Robust evidence indicates that student-student collaboration can boost academic achievement. In a meta-analysis of 28 randomized controlled trials spanning over 4,000 students across North America, Europe, and Asia, cooperative learning had a significant positive effect on achievement. On average, students who learned in small groups performed about 0.5 standard deviations better (roughly a move from the 50th to the 67th percentile) than students taught via individual or traditional methods. This moderate effect size (Hedges’ g ≈ 0.47) was consistent across subjects (math, science, etc.) and grade levels. In practical terms, group work often leads to higher test scores and deeper understanding of material, as peers can explain concepts to each other and address knowledge gaps collaboratively. For example, a controlled experiment in a high school algebra class found that students in a cooperative group setting had significantly greater gains in math achievement than those in a lecture-based class.

(Sources: academia.edu, uc.edu, colab.ws; quoted in ChatGPT Deep Research, April 2025)
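Incidentally, the percentile figure in that quote is easy to sanity-check. If we assume test scores are roughly normally distributed, converting an effect size into a percentile shift is simply a matter of evaluating the standard normal CDF at the effect size. Here is a minimal sketch in Python, using the g value from the quoted meta-analysis:

```python
# Convert a standardised effect size (Hedges' g) into the percentile
# at which the average small-group learner would sit relative to the
# control group, assuming normally distributed scores.
from statistics import NormalDist

g = 0.47  # effect size from the quoted meta-analysis
percentile = NormalDist().cdf(g) * 100
print(f"Average small-group learner sits at roughly the {percentile:.0f}th percentile")
# Prints ~68th percentile, broadly consistent with the quoted 50th -> 67th shift.
```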
As the saying goes, ‘a problem shared is a problem halved’. Tackling challenges together can lead to better performance and create a stronger and more cohesive classroom with trust at the core.
Enter AI
So what does this have to do with AI? Actually, quite a lot. If we want students to truly excel, it would seem that having AI as a teammate can bring clear advantages. While teams in Mollick’s study who used AI performed only marginally better overall than individuals with AI (as shown in the first chart), when it came to producing ‘top-tier solutions’, the cybernetic teams were significantly ahead:
Teams using AI were significantly more likely to produce these top-tier solutions, suggesting that there is value in having human teams working on a problem that goes beyond the value of working with AI alone.
This leads me to the core of my argument: if we want to nurture the next generation of problem-solvers, empowering them to tackle the world’s ‘top tier’ challenges—the ones that, to date, we’ve been unable to solve—then we owe it to this generation to give them the most advanced tools as soon as possible.
What does this mean in practice? It means bringing AI into the centre of our education system: enabling the conscious, deliberate teaming-up of human and AI, and building new curricula, assessment, spaces, and human support around this unit, not bolting AI onto our current creaking structure. In other words, a child’s education would become an education alongside AI.
An education with both human and cybernetic classmates, if you will.
This may seem extreme. It may seem like I am suggesting we allow the entire education system to be cognitively offloaded to machines. That, by allowing AI inside the tent, we are somehow relegating our young to be nothing more than systems operators, PAs to our machine overlords.
This could not be further from the truth. In fact, I’d say that, unless we make this radical shift, operators is exactly what our children will become. We are already seeing it: students offloading the ‘stuff to be done’ to ChatGPT—essay-writing being a case in point. The longer we persist with an education system that can already be handed off to AI almost wholesale, the more likely our children are to see schooling as something they can push to AI while they focus on what matters to them. And if AI is doing all the writing, who is doing all the learning?
If Mollick is right (and I have no reason to believe otherwise), then unless we do bring this rapidly developing new tool inside our education system, and soon, we are putting our young at a serious disadvantage.
So how might this look?
If we begin with the premise that education becomes a process of gradually deepening connection with AI, we can immediately see how many of the organisational structures in our current school system cease to be relevant. In his book How to Think About AI, Richard Susskind elegantly demarcates the different ways we can understand AI. We can see it as automating what we can do ourselves (such as lesson planning and assessment), innovating the delivery of our current education system (by handing off personalised learning delivery to AI tutors), or eliminating many of the structures that hold the system together.
Most of us are stuck on number one, and some are playing with (or kicking against) number two, but I’m more interested in number three: what we can eliminate as a result of a human/AI collaborative model of education.
Susskind uses the horse manure crisis at the end of the 19th century as an excellent example of elimination. Manure (and horse carcasses) piled up at the sides of roads, so much so that policymakers were at a loss as to how to manage the problem. Then, within a few years, cars powered by the internal combustion engine replaced horses. The problem was eliminated.
If we apply this to the current education system, and see the human/AI team as the equivalent to the arrival of the car, what horses might we eliminate? I think there are several.
1. Curriculum
First and foremost, the siloed, subject-based curriculum. AI has brought into sharp relief how increasingly irrelevant this now is. This is not about removing the learning of science, or maths, or history, but about how they can be grounded, interwoven, and contextualised through AI-supported learning experiences. I’ve used the Google Maps analogy on several occasions for good reason: unlike the paper map, which shows one clear route to a destination, Google Maps allows for detours while still getting you where you need to go. If there are traffic problems ahead, Maps will reroute you. Yes, it can sometimes make mistakes, but these are rare. There are always exceptions, which detractors of AI will gleefully point to. By and large, AI-driven mapping systems are significantly more reliable than human map-reading. Anyone who has witnessed parental fallout from poor in-car navigation knows exactly what I mean.
Crafting meaningful learning pathways through an artful combination of subject disciplines is achievable today. AI is already good enough; it just needs to be properly channelled. We need to evolve quickly beyond the unimaginative ‘AI lesson planner’ paradigm and understand what the technology is capable of. This does mean a radical shift in the role of the educator (something I return to below). But it must happen. We cannot hold on much longer to the rags of a profession we continue to dress ourselves in. Our students can see right through them.
2. Timetabling
As soon as we remove the siloed curriculum, we remove the need for timetabling. A schedule places students and a teacher in a room at a certain time of day, enabling the efficient distribution of human and physical resources. It is seldom optimised for learning: indeed, this is an impossibility, as some children learn better earlier or later in the day depending on age, background, genes and so on. If we move towards AI working with individuals and teams of students, we create opportunities for these cybernetic groupings to move in unexpected and unique ways, freed from the confines of the one-hour lesson. Put simply, if AI can organise us, we no longer need our time broken into neat little boxes.
3. Assessment
If we see an education as one lived alongside AI, then it makes little sense to sit students on their own in a sports hall in May, June or November, in order for them to scratch hurried responses on pieces of paper. If the outcome we need is young people entering the world with the skills and resilience to be courageous, think outside the box, and make a difference, then terminal exams should no longer be part of the process. Brian Roemmele has suggested that AI is better defined as Augmented (rather than Artificial) Intelligence, and I’d be inclined to agree: as AI increasingly augments us, it seems archaic to persist with an assessment system that predates any digital technology, let alone AI. Terminal assessments are about as Artificial as it gets.
4. The role of the educator
Finally, where does the educator fit into a world without siloed lessons, restrictive timetables, and solo assessments? The role needs some redefinition, but two things stand against any notion of teachers being replaced wholesale by AI. First and foremost, under-18s need to be looked after by an adult—and if it isn’t a parent, it needs to be someone qualified and safe to look after children. This won’t change any time soon, so it feels pointless arguing otherwise. Secondly, children need mentors: older people they look up to, adults who inspire them and make them want to achieve. So much of our motivation comes from wishing to make the people we respect proud of us. This doesn’t change as we mature: we are all just children who got older, after all.
People need people, and young people even more so. But what I believe we will see is a change to how these roles and relationships are articulated. I believe (and have been saying for some time) that adults will become one node in a support network surrounding students, and that teams of AI agents, peers, experts and other digital tools will become other nodes.
We can probably even abandon the term ‘teacher’, and focus instead on different support roles within a school. Some will be coaches, some learning designers, and others subject matter experts (who may be industry experts rather than school employees). We might even see a movement back to the pre-industrial model, where older students take on some tutoring and support responsibilities for younger children. Students could choose how they have their learning delivered: some might prefer an online, self-paced module, whilst others may need the one-to-one attention of a learning coach or student mentor. Each day would be spent working on challenges related to a longer-term project.
The challenges ahead
There are many obstacles in the way of such a radical shift, but they’re not insurmountable. For schools in the European Union, the high-risk nature of AI that can potentially impact a student’s life chances needs to be carefully factored into the above. At first glance, it would seem that the EU AI Act will make the cybernetic teammate impossible to implement, such is the potential for these teammates to affect a student’s development. However, I would argue the opposite: if we are intentional in our use of AI, with us managing the team rather than AI managing us, it potentially poses less risk than an algorithm over which we have little agency. Put it this way: would you rather we trained students to harness the power of their AI team, aware of its limitations and risks, or ignored AI and allowed it to carry on without them (which is exactly what it will do, whether we philosophically agree with it or not)? I know which I’d choose.
We must go into this with our eyes open, aware of the risks of cognitive offloading, perpetuating bias, and the mistakes AI can make. We talk a lot about the need for critical thinking in the AI age, but rarely articulate what this means. I think it means exactly what I’m proposing here. By bringing AI into the centre of our education system, we open our students’ eyes to its massive potential and obvious limitations—limitations that are decreasing all the time.
One thing is for sure: we have to make a choice. Ignoring AI is no longer an option. With every day, Kurzweil’s law of accelerating returns becomes more apparent. AI is, quite simply, getting better and better. That won’t stop, at least not any time soon. The AI we have today is better than the AI we had yesterday, but worse than the AI we will have tomorrow—and many of us were saying this long before November 2022. We have a moral obligation to make this fundamental shift: to bring AI into the centre of our lives, to work alongside it, learn from it, and ensure it learns from us. The consequences of not doing so could be dire: we could relegate the young to being nothing more than servants of AI, hopelessly ill-equipped to manage it in any meaningful sense.
Looking back, I realise I missed out the first time I was presented with the chance to be part of a team—part of something larger than myself. I don’t want that for my children, which is why my nine-year-old son already firmly sees AI as a design and coding teammate, and is currently building a website filled with retro arcade games. This needs to be the rule, rather than the exception.