Issue 132 - 2024 Autumn term
Rob Robson explores the ethics and practical implications that school, college and trust leaders should consider so that artificial intelligence (AI) can be used safely and well in their institutions.

Leading AI in education

Rob Robson
ASCL Trust Leadership Consultant

Leaders are increasingly keen to discuss artificial intelligence (AI) in schools, trusts and colleges, a conversation largely sparked by the curiosity surrounding ChatGPT, a widely adopted generative AI system built on a large language model. While AI has garnered interest across UK education, many leaders acknowledge that time constraints have prevented them from thoroughly exploring its ethical and practical implications.

Generative AI (GAI), such as Google Bard and ChatGPT, has been under development for years, with some companies diligently testing their products before launching them. However, others have hastily introduced untested AI products into the market, using unwitting users as testers. This practice raises numerous concerns, particularly when users are unaware of the origin of the generated information. 

The AI debate 

Technology elicits a spectrum of reactions from school leaders. At one end, early adopters eagerly embrace new technology, looking to improve staff and student experiences. Yet however enthusiastically it is adopted, poorly implemented technology can add to workload and stress. At the other end, some leaders approach technology with suspicion, preferring first to assess its broader societal impact; they may opt for technology-free classrooms, seeing devices as a distraction from learning. Both ends of the spectrum are understandable, but AI, particularly GAI, is not something that can either be treated with the utmost suspicion or embraced without reservation. 

Sitting back and not engaging is not an option; students and school staff are already using this technology, whether at school or at home. There are, though, several areas in which we need to be active if we are to get our relationship with AI right for ourselves as leaders, for our staff and, of course, for our children and young people. 

The AI debate often centres on English mathematician and computer scientist Alan Turing's imitation game, better known as the Turing test. In this thought experiment from his 1950 paper Computing Machinery and Intelligence, a human judge converses via text with an unseen entity, which could be a human or a machine. The judge's aim is to determine which it is solely from the text responses. If the machine could consistently mimic human responses to the point of indistinguishability, Turing proposed, it should be considered to possess human-like intelligence. 

Brilliant as it is, the problem with the imitation game is that it has pushed our thinking about artificial intelligence in the wrong direction. Framing AI as something comparable to a human has sensationalised the whole area, and we have started to over-focus on a future in which computers might become sentient: whether they will have feelings and, in the tabloid headlines, whether they will take over the world and destroy humankind. Of course, we should take that prospect seriously, and we need our governments to engage with this huge debate, but it is important to remember that the current capabilities of artificial intelligence are nowhere near it. 

Vital ethical discussions 

While we debate these futuristic hypotheticals, we are missing the immediate and, in my opinion, vital ethical discussions around the current use of artificial intelligence, which remains broadly unregulated; for now, the political will needed to regulate it so that it can be used safely and well in schools, colleges and trusts is lacking. 

As education leaders, we need to engage with the following areas:

Leaders must make informed decisions about the role of generative AI in education

While it's crucial to engage with AI and ensure students use it ethically, it's equally important to recognise where AI doesn't belong. AI won't replace teachers, because great teaching relies on relationships, not just knowledge delivery. Areas that depend on emotions and relationships, such as mental health support, may not be suitable for AI, although it is a growing area of development. 

Understanding AI's biases is essential

AI systems are trained on data sets that can contain biases, often historical or cultural in origin. For instance, ChatGPT, trained on medical knowledge, has saved lives, but biases exist in some medical data sets. To address this, schools could introduce staff training on bias in AI algorithms, alongside lessons that explore real-world examples of bias.

Transparent use declarations and policies in educational institutions are vital

Schools, colleges and trusts need policies requiring transparency in AI use. It should be clear who owns AI systems and how to contact the owner if issues arise.

Knowing the AI system version is crucial

Users should differentiate between free and paid versions and understand the functionality each offers. If an AI system is a ‘Beta’ product (a new product still being tested), that should be made obvious and its limitations acknowledged.

AI training methods matter

Users should understand whether a system relies on static training data or is continuously updated. Monitoring that data for accuracy and biases is essential.

Source bibliography disclosure is important

When an AI system uses sources to generate a response, it should automatically reveal a bibliography of those sources, addressing questions of plagiarism and flagging concerns about accuracy.

A hard prohibition list helps define AI's boundaries

AI systems should maintain lists of areas they will not access for information, especially in ethical contexts.

Reporting known issues and providing live reports is necessary to prevent problems

The Post Office scandal, in which over 700 branch managers were given criminal convictions after faulty accounting software made it look as though money was missing from their branches, is one example of how seriously things can go wrong when known faults go unreported. Transparency in addressing AI system issues is essential.

Understanding AI's decision-making rationale is vital

While some aspects may be considered commercially sensitive, open-source AI platforms enable transparency, allowing users to refine tools, suggest improvements and correct unforeseen outcomes. The success of this approach can be seen in the open-source operating system Linux, which users and engineers constantly refine. 


AI in action
For interest, this article was originally over 2,000 words. GAI was used to reduce the article before (human) editing.
