
Ofsted to review use of AI in schools and colleges

The government has asked Ofsted to carry out an independent review into the use of artificial intelligence (AI) in schools and colleges.

The review will look at how education institutions are already using AI and the potential uses the technology could have for the sector.

Ministers have already expressed hope that AI could help “transform” teachers’ workload.

Outlining the terms of reference for the review on Tuesday, Ofsted said it will “study how schools and further education (FE) colleges are using AI to support teaching and learning and to manage administrative systems and processes.

“We will examine the role leaders play in integrating AI and managing the risks associated with its use.

“We will collect data from schools and FE colleges as well as academic literature and expert interviews. This will allow us to see how AI is already being used and help us think about its potential uses and benefits.”

“Intentional and unintended impacts”

It will also examine how schools and colleges monitor the “intentional and unintended impacts” of AI, govern its use, and “manage risks associated with the use of AI.”

An Ofcom survey last year found that Snapchat’s chatbot, My AI, was used by 72% of 13 to 17-year-olds.

The increase in young people using AI has raised concerns about students using it to cheat on homework or coursework.

The Ofsted study aims to raise awareness among policymakers and education providers of the benefits and challenges of AI in education, and to identify training that Ofsted inspectors may need to help them better understand AI and how it is used.

The report will bring together evidence from around 20 schools and colleges, considered “early adopters” of AI. Ofsted will interview leaders responsible for rolling out the use of AI in these schools and colleges.

The report will also review existing research and consult international inspectorates and academics knowledgeable about the use of AI in education.

The evidence will be collected in the spring and Ofsted says it hopes to publish its findings next summer.

“Imperative” that exams are marked by humans

It comes as Sir Ian Bauckham, the government’s choice for chief regulator of Ofqual, warned MPs that while there were potential “interesting uses” of AI to generate exam questions, it was “imperative that a human supervise the grading of student work.”

“AI still makes mistakes. It hallucinates,” said Bauckham, who has served as acting chief regulator since January.


“Decisions made by AI evaluating work that a student has produced for a high-stakes assessment are less transparent and therefore less subject to challenge than they might be if marked by a human.”

Ofqual has “carefully sampled public trust and attitudes in this space and… the overwhelming majority of the public want a human being to oversee the marking of student work”.

But AI can be used for other purposes – for example “for quality assurance of the review process”.

“It can sample, it can verify… There are a lot of helpful, useful, quality-enhancing things that AI can do, but the marking work itself needs to be supervised by a human being.”

But “interesting uses” in question generation

Generating exam papers of roughly the same level of difficulty each year is “labour-intensive” and “difficult,” Bauckham said.

“It may well be that AI could contribute to this, and my view would be that there is less risk to public trust, provided a human is in the loop for final approval, than in the actual grading of student work.”

The former headteacher, who appeared before the education committee for his pre-appointment hearing, said the “vast majority” of GCSE and A-level assessments involved “some degree of in-depth writing,” which should be graded by humans.

However, he acknowledged “that there may be very simple, selected-response items, such as multiple-choice questions, which can be marked safely by a machine, but we would still expect a human in the loop, verifying that it happens, sampling quality and so on.”

He added that it was “very difficult to challenge the decision of a machine.”