There is no doubt whatsoever that AI is set to dominate our world for the foreseeable future. Its capacity for disruption is tsunami-like in magnitude, and if we choose to ignore it, to bury our heads in the sand and pretend that we can continue on as is, we are set for a rude awakening. Like it or not, AI is here, but that doesn’t mean we should jump headlong into full adoption without being mindful of the risks.
Let’s look at six arguments against AI and what we can do to counter them. My focus as always is on education as this is my specialist area, but many of these arguments span most sectors.
1. AI will make people lazy
A major concern that many teachers have impressed on me during training sessions I have run is that ‘these tools will just make kids lazy and they’ll no longer think for themselves’. The problem with that argument is that students have been copy-pasting from the internet for years and many teachers haven’t noticed.
I believe that the opposite will happen as we introduce generative AI: much of the time, students struggle to get their thinking into readable essays, reports and presentations because they don’t know where to start. Their teacher gives them the information (in the form of notes made in class, worksheets and so on) and the student has to take all that information and turn it into a format that the teacher can check for understanding. Many teachers take the time to show students how to structure their essays, but many don’t (or do it badly because they were never taught themselves).
ChatGPT can take the pain away by doing the initial essay planning. If a student types in ‘Please give me an essay paragraph plan to answer this question for my GCSE English: <Insert question>’, ChatGPT will do just that, giving them a clear plan they can work from. Once they have written the essay, the student can copy-paste paragraphs into ChatGPT and ask for suggestions on how to improve, based on the essay plan previously given. That’s not being lazy: that’s using a tool to support better organisation and deeper thinking.
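For anyone who would rather script this plan-then-improve workflow than use the chat interface, here is a minimal sketch using OpenAI’s official Python package. The question, draft paragraph and model name are illustrative placeholders, not recommendations; in the classroom, students would simply type the same prompts into ChatGPT itself.

```python
# A minimal sketch of the plan-then-improve workflow described above,
# using OpenAI's official Python package. All names and prompts here are
# illustrative placeholders, not a recommended setup.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

question = "How does Priestley present responsibility in An Inspector Calls?"

def ask(prompt: str) -> str:
    """Send a single prompt to a chat model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model will do
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: ask for a paragraph-by-paragraph essay plan.
plan = ask(
    "Please give me an essay paragraph plan to answer this question "
    f"for my GCSE English: {question}"
)

# Step 2: paste a draft paragraph back in and ask for improvements against the plan.
draft = "Priestley uses the Inspector to voice his own views on social responsibility..."
feedback = ask(
    f"Here is my essay plan:\n{plan}\n\nHere is one of my paragraphs:\n{draft}\n\n"
    "Please suggest how I could improve this paragraph so it follows the plan."
)

print(plan)
print(feedback)
```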
2. AI will steal our data
There is a further concern over data ownership and data protection. Do we really want to be inputting masses of personal data into machines when we have no idea where this data sits? This is why I would strongly suggest not introducing any ‘layer 2’ application into your school. There are so many AI writing and planning helpers out there at the moment, and the majority of them are built on top of ChatGPT. They might create a more structured user interface (UI) so that they seem to give you more than ChatGPT, but in truth they don’t. What they can do, however, is harvest vast amounts of data, and if the company that created the application then ceases operating (which will happen to many of them), they still hold your and your students’ data. Better to stick to ChatGPT and turn off the Chat History & Training function (you can find it in settings), which will stop OpenAI using your data to improve the model. I personally keep mine on, as I like to think of my inputs helping the AI to get better, but for students it might be better to switch it off.
But also, how much personal data do we genuinely input into these AIs in the form of prompts? I cannot think of a single piece of information I have inputted that would be in the least bit compromising, or indeed interesting to anyone but me. Every single thing I have written into ChatGPT I’d happily share with anyone interested. Does it really matter whether students input their essays and ask how to improve them? Is that in any way, shape or form sensitive data? Schools need to be pragmatic here: it makes more sense to train staff and students not to input anything personal. The best way to think of it is this: if you’re happy sharing your prompts with your teacher and the class, then it’s fine to input them. If you wouldn’t tell a soul what you’re prompting, then you’re probably using the tool in the wrong way. Besides, ChatGPT will simply respond ‘as an AI language model, I am unable to answer such a personal question’, or words to that effect. This to me is a storm in a teacup, and it doesn’t address the main question: students will use AI increasingly whether we like it or not, so how can we help them use it to their advantage and in ways that keep them safe?
3. AI will replace human teachers
I think we know the answer to this one. AI will not replace you unless you allow it to. What do I mean by that? Well, if you continue to teach in the same ‘facing forward, eyes on me’ style for every moment of the day while the rest of your school are engaging students in real-world, immersive, inquiry-led experiences augmented by AI, VR and AR, then your students will soon begin to explain to you (largely through their behaviour) that you’re not heading in the right direction. Don’t get me wrong: there will still be a need for some direct instruction from time to time, but the bulk of our day should move towards guiding students through real-world, immersive, AI-supported projects.
The challenge then is to work out how best to offer this support. I think it will largely consist of the following:
being the overall architect of learning pathways, co-creating immersive experiences with other teachers and with students themselves;
setting clear learning goals and ensuring they align with curriculum standards and exam requirements;
giving students the initial training on the best way to work with AI. Students need to understand the notion of how to prime the AI with initial context so it understands how to ‘think’, how to generate initial ideas, how to develop these ideas, and how then to take the output from the AI and use it to develop their own thinking;
ensuring students have the right resources before they start, whether these be physical or online, so that they don’t hit blocks when they are engaging in project work;
working alongside students as they find their way through projects: not to correct or even to steer outcomes, but rather to offer the right questions at the right time, modelling effective thought processes to demonstrate to students the most effective approach to any one problem;
encouraging creativity and innovation: creating an atmosphere of experimentation and risk-taking, and not privileging one ‘right’ answer over another (whilst ensuring students don’t go so far off track that learning outcomes don’t give them what they need for success later);
fostering collaboration and teamwork through ensuring students feel comfortable and supported working together and have opportunities to provide peer support and feedback;
supporting self-directed learning using AI tools by modelling good practice, initially scaffolding this approach, then allowing students the time and freedom to run with their interests and develop new ways of thinking about the areas they’re exploring.
Along with this will come the broader professional aspects of the teacher’s role, such as developing thought leadership in their specialist areas and sharing ideas and innovations with like-minded educators across the world in order to further develop their own practice. We need to see teaching as no longer bound inside the four walls of the classroom: the best teachers will be constantly looking to expand their network and sphere of influence, offering their support and expertise in a global context. In that way everyone improves, and ultimately students continually benefit.
4. AI will perpetuate and amplify biases and inequalities
There have been many stories of AI’s output being biased, particularly with regard to race and gender. Because these models are trained on vast amounts of the internet, and then tuned with human input, the challenge is always to ensure impartiality (the internet is not the most impartial of places, and humans by their nature have biases, computer scientists being no different from the rest of us). However, as the models are further trained through constant human feedback, they become less biased. OpenAI continues to work in this area, creating tools to detect bias in datasets and then taking steps to reduce it. My experience to date is that GPT-4 certainly seems less biased than GPT-3.5, and Midjourney version 5 seems less prone than version 4 to outputting images biased towards gender and race when given generic prompts. They’re still not perfect, but perhaps we don’t want or need them to be, as neither are we.
We need always to be teaching students to approach any source with a critical eye, and the same holds true for AI output as it does for an article on the internet, a YouTube video or a book. Just because ChatGPT sounds convincing doesn’t mean we should take its output as gospel. We need to ensure that schools produce clear frameworks and standards within which to work, covering acceptable use, academic honesty, and the balanced appraisal of AI-generated content. We should also ensure that we are teaching students to see AI as an augmentation of their thinking and decision-making processes, not as a replacement. We should look to AI as an assistant: one that we can leverage to make ourselves more productive, but also one that we need to keep an eye on, lest it wander off down some narrow-minded rabbit hole and give us perspectives that are not conducive to the advancement of tolerance and the acceptance of difference.
5. AI will erode student privacy and security
There is a danger that too much unfettered use of AI will result in data getting into the wrong hands, with undesirable results. Cyber terrorism is on the rise, and sectors like education and healthcare seem particularly vulnerable, largely because they often have antiquated systems and staff who are poorly trained in recognising the signs of a cyber attack and in how best to protect against one.
As we have already seen, if for the most part we look at AI as supporting our planning and student learning, then little particularly sensitive information will be inputted into these AI models. A student’s academic profile does not constitute particularly sensitive data: it is difficult to see how a cyber terrorist could hold a school hostage by demanding payment to release data that consists only of student progress. However, if our information management systems become linked to AI, organising data in simple-to-use ways, then there is a heightened risk.
Systems need to be built with this in mind, ensuring that data is kept separate from the AI that organises it. An example would be using a large language model to support output from a database, such as timetable management. Keeping the data itself on a school server, and not in the cloud, is one way of maintaining some data security. Another is ensuring the human users of these systems are well trained in how to keep data safe and what to look out for in terms of cyber threats: making sure no one, for example, downloads an app from the App Store that seems to be ChatGPT but is actually a Trojan horse that accesses data held on the device. At the British International School of Tunis we have run staff training sessions to remind staff not to download such apps (one member of staff at a recent INSET session sheepishly told the group that he’d downloaded one and had suddenly found it had accessed his Tinder profile: so be warned).
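To make the ‘keep the data separate from the AI’ idea concrete, here is a minimal sketch, assuming the school keeps its timetable in a local SQLite database on its own server and only sends one student’s non-identifying lesson rows to the model. The database file, table, student code and model name are all hypothetical placeholders rather than a description of any real system.

```python
# A minimal sketch (not a production system) of the pattern described above:
# the data stays in a local database on the school server, and only the
# minimum non-identifying rows needed for one query are sent to the model.
import sqlite3
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def timetable_summary(student_code: str) -> str:
    """Fetch one student's lessons from the local database and ask the model
    to turn them into a friendly weekly summary. The student_code is an
    internal code, not a name, so nothing personally identifying leaves
    the school server."""
    conn = sqlite3.connect("school_timetable.db")  # hypothetical local file
    rows = conn.execute(
        "SELECT day, period, subject, room FROM lessons WHERE student_code = ?",
        (student_code,),
    ).fetchall()
    conn.close()

    lessons = "\n".join(
        f"{day}, period {period}: {subject} in room {room}"
        for day, period, subject, room in rows
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model will do
        messages=[
            {"role": "system", "content": "You summarise school timetables clearly."},
            {"role": "user", "content": f"Summarise this weekly timetable:\n{lessons}"},
        ],
    )
    return response.choices[0].message.content
```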
If we turn it on its head, we can actually start using AI to protect us. Systems like Darktrace, CylancePROTECT and IBM Watson for Cybersecurity use machine learning algorithms to identify and respond to cyber attacks in real time. They can detect and block both known and unknown threats, analysing large amounts of security data and providing recommendations and guidance to security teams.1 We need to understand that cyber terrorists are using increasingly sophisticated methods to attack our systems, often using AI itself. We therefore need to fight fire with fire. I cannot stress enough how important it is that we take this seriously and do not delay in ensuring we have fully offline backups of our data (for example, an onsite backup drive that is physically unplugged from the internet every evening). Check out the work of IT Governance, who offer excellent online courses as well as cyber security guidance on their highly informative website.
6. AI will exacerbate existing inequalities in education
Introducing AI into schools could deepen inequalities that already exist, including unequal access to technology, AI algorithmic bias (as we have already explored above) and limited data availability, where schools in some parts of the world lack the data needed to develop AI systems.
Governments the world over need to prioritise investment in the systems and user hardware that will ensure their countries are not left behind. We have already seen how AI can increase productivity significantly: a survey by Deloitte found that companies using AI reported a 37% increase in productivity, and Accenture estimates that AI could increase labour efficiency by up to 40% by 2025.2 More recently, a study from the National Bureau of Economic Research found that generative AI platforms boost worker productivity by 14%: this is the first empirical evidence on the effects of generative AI in a real-world workplace.3 The net effect of this is massive: Goldman Sachs estimates that generative AI could raise global GDP by 7%, a $7 trillion increase.4 If we get this right, AI could usher in a new era of global abundance, particularly if it manages to solve some of the biggest problems in the world, such as curing all disease (see the incredible work of Manolis Kellis on this)5 and enabling the creation of new, cheap forms of renewable energy to power the massive (and energy-hungry) supercomputers needed to host these AI models.
In areas of the world such as Sub-Saharan Africa, where there is limited access to computers, efforts need to be made to equip schools with enough devices to ensure access. A study by the World Economic Forum showed that “89% of learners across Sub-Saharan Africa still do not have access to household computers and 82% lack internet access”6: this will become a huge problem as AI becomes increasingly and unnoticeably integrated into our daily lives. Some countries, such as Rwanda, are driving major improvements in digital access, for example through Irembo, a government-to-citizen e-services portal, and expanded 4G coverage across the country. However, Rwanda is in the minority, and has been able to push ahead largely because it has had continuity of leadership under Paul Kagame, who has been in power since the 1994 genocide. Many countries in Sub-Saharan Africa have not had the same fortune, and as a result lag far behind in getting their people ready for the AI tsunami that is about to hit. It is a worry for sure, but there is hope that AI will enable solutions to this problem to be found.
One solution that could become a reality in the next few years is access to a simple-to-use AI ‘mentor’ through a cheap smartphone. An always-ready, expert AI guide that could curate learning pathways and resources for learners in impoverished parts of the world, no matter where they are, would be an instant leveller. The Mastercard Foundation should turn their attention to this, as their work since 2006 in this area shows them to be a world leader in investment in education to reduce inequality.7 The dearth of teachers in this part of the world means that AI tutors could go some way towards giving access to quality education to children who would ordinarily have had none. Whilst less desirable than having a human in the room, it is better than having no access at all. AI could prove a real game-changer if investment is made into creating these AI guidance models and ensuring access to them is simple and cheap.
There are a number of concerns about AI, and many of them are valid. However, as we have seen, there are ways to counter these concerns, and whilst we must be mindful of them, we should not allow them to stop us from embracing AI. The alternative is that we fall far behind the schools, and indeed entire countries, that use it to supercharge creativity and solve increasingly tough problems, accelerating growth and leaving those that ban it (such as Italy, most recently) struggling to keep up.
1 ChatGPT, April 2023
2 ChatGPT, April 2023
3 Bing Chat, April 2023, referencing ‘Study: Generative AI Platforms Boost Worker Productivity by 14%’ (tech.co), https://tech.co/news/study-ai-improves-worker-productivity
4 https://www.goldmansachs.com/insights/pages/generative-ai-could-raise-global-gdp-by-7-percent.html
5 https://news.mit.edu/2012/profile-kellis-0417
6 https://www.weforum.org/agenda/2021/03/digital-inclusion-is-key-to-improving-education-in-africa/
7 https://www.ifc.org/wps/wcm/connect/region__ext_content/ifc_external_corporate_site/sub-saharan+africa/priorities/financial+inclusion/za_ifc_partnership_financial_inclusion