Irish Advisory Council: AI in Education

Since the arrival of ChatGPT in November 2022 it has been obvious that Irish education needs a framework for dealing with AI. However, despite the Department of Education formally including AI in Research Study Report frameworks in subjects like History and Economics, teachers in both post-primary and primary schools have received no advice. The Additional Assessment Components in the revised Leaving Certificate have made the need for robust guidance urgent; such guidance was often promised by the former Minister of Education in the previous government.

Just two weeks ago, as Emma O’Kelly of RTÉ reported, at a conference for school leaders on Senior Cycle, the new Minister for Education referred to this:

Addressing worries that teachers have about the potential impact of generative AI, about how to ensure academic integrity, and the need for guidelines and training for teachers, Minister McEntee said, "there is still time".

She was referring to the time between now and September, when several subjects start reformed Leaving Certificate courses. Bear in mind that most secondary schools close at the end of May.

And at that conference:

Officials from the Department and the State Examinations Commission attempted to address the threat that AI poses. Guidelines are being drawn up and are coming, they said.

Emma O’Kelly added:

Promising to work with teachers and ensure that training was "adequate" [the Minister] said: "It is about equipping ourselves into the future. We have to be able to adapt and respond."

So the guidelines are coming, even if the Department itself has not responded adaptively since November 2022, and there is apparently still time to train ‘adequately’ those teachers whose courses start in September.

Meanwhile, one body at least has attempted to give some guidance for education. Last week the new Irish Advisory Council for AI published Advice Papers, one of which is on Education. 

The Advisory Council has been gathered under the umbrella of the Department of Enterprise, Trade and Employment, and so much of the focus is on business. As you would expect, no primary or post-primary teachers are on the Council; to be fair, that input is what you would expect from the Department of Education instead, so I'm sure when their guidelines are finally published there will have been extensive teacher input. This February 2025 Advice Paper comes from a body with six third-level people, the rest from a tech/business background. Still, at least there's something. And it is a comfort to see a figure like Dr Abeba Birhane of TCD's Artificial Intelligence Accountability Lab: she has a properly sceptical and sharp eye for nonsense and sloppy thinking in this area.

A couple of her comments from Bluesky recently:

the most frustrating thing about the current infiltration of genAI into everything is the general attitude that has normalised “it works/is not harmful until I see clear evidence otherwise”. It's exhausting.

and

those that evangelize genAI as a "powerful tool" for their little tasks without consideration of its disastrous impact on the information/knowledge ecology are shortsighted, to say the least.

The following wise words are from her piece on the recent AI summit in Paris, ‘Bending the Arc of AI towards the Public Interest’:

One of the most common framings–and the biggest sins in this space–is the false dichotomy that presents AI only in terms of “opportunities and risks”. This doctrine sees the world as consisting only of those building AI and thus “unlocking opportunities” versus those uncovering and mitigating risks as “slowing progress”. While “innovation”, “advancement” and “economic” gains fit squarely in the former category, anything that calls for responsibility, accountability, and critical thinking tends to be sidelined as outside the purview of “opportunities”, “innovation”, and “advancement”. This hinges on the deeply mistaken assumption that AI technologies are inherently good and that mass adoption inevitably leads to public benefits. In reality, none of this is true. There is nothing that makes AI systems inherently good. Without intentional rectification and proper guardrails, AI often leads to surveillance, manipulation, inequity and erosion of fundamental rights and human agency while concentrating power, wealth and influence in the hands of AI developers and vendors.

I’m going straight to the Advice Paper on AI and Education, leaving aside the other papers. It’s only five pages long, and is ‘principle-based’, with ‘pragmatic implementation details’ understandably not addressed, and presumably left - in our case - to the Department of Education. Extracts in italics, my comments following.

The area of AI, especially generative AI, is a fast-moving technical area with new developments almost weekly and guidelines can quickly become undermined or obsolete. / So will the educational authorities be up to that pace? And will they sustain their guidance?

Gen AI has not been developed with younger generations in mind and so needs specific focus. We see this area as having the biggest impact on the educational system right now and there is an urgency to respond appropriately. / I agree. That is better than ‘there is still time’.

Generative AI has enormous potential to enhance education. / Possibly, but not certainly. It’s extremely new, and technology does not automatically enhance learning. It also has the potential to damage education, or parts of it. It might well degrade teaching and learning in my own area as a second-level English teacher (here are thinkers and writers I am reading at the moment). As Neil Postman wrote in 1992 in his prescient and enormously important book Technopoly, which I examined on Substack earlier this month, technology is always ‘both a burden and a blessing’.

Postman wrote:

Stated in the most dramatic terms, the accusation can be made that the uncontrolled growth of technology destroys the vital sources of our humanity. It creates a culture without a moral foundation. It undermines certain mental processes and social relations that make human life worth living. Technology, in sum, is both friend and enemy.

In this paper there is reference to ‘personalised learning for students’, a phrase and idea with a consistently dismal history: as Eamon Costello of DCU put it on Bluesky:

"AI will personalise learning" is a meaningless phrase. There is no research evidence base to support it. Indeed it may depersonalise education. The quest for personalised learning has a long quixotic history.

He referenced the work of the great Audrey Watters, who is these days turning her magnificently scathing perspective on AI at Second Breakfast.

Gen AI especially where it intersects with education is developing at an unprecedented rate that none of us can easily be comfortable with, and some are more uncomfortable than others. This includes both educators and students and should prompt the teaching professions at all levels to do some difficult but important reassessment of their roles to appropriately leverage these new technologies. / Yes to the discomfort of the speed. This makes it all the more surprising that the education authorities here, notoriously risk-averse and slow to move, are embracing the technology with apparently little scepticism, which should be their default position. Read Marc Watkins on The Costs of AI in Education in which he states that:

What’s really going on with campus-wide AI adoption is a mix of virtue signaling and panic purchasing.

Though he refers to US universities, the word ‘panic’ seems just right here too, and an assumption unexamined by too many commentators is that AI must be brought into our educational systems. Two of the lamest arguments for ‘integrating’ AI into teaching and learning are: 1) in the real world they’ll have to use it (my rant on the phrase ‘in the real world’), and 2) they’re already using it anyway (often they’re not in actual classes).

Watkins again:

We currently don’t have the resources to establish a curriculum about applied AI, nor do we have a consensus about how to teach generative AI skills ethically in ways that preserve and enhance our existing skills instead of threatening to atrophy those skills. It will take years of trial and error to integrate AI effectively in our disciplines. That’s assuming the technology will pause for a time. It won’t. Which leaves us in a constant state of trying to adapt. So, why are we investing millions in greater access to tools no one has the bandwidth or resources to learn or integrate?

The pace of development is outside our control but the ability to manage [developments] appropriately in our educational institutions is within our control and there is no guarantee that technologies like generative AI will sustain and remain with us over the long term. / A welcome statement. Schools in particular can indeed manage and control this technology if they are allowed to (we’re doing that right now in our Transition Year in English); universities will find it much harder.

There is also an acknowledgement of the differences between different education sectors. It is absurd to put primary schools and universities in the same sentence. These are just utterly different categories.

In areas of study where factuality and accuracy are critical (say History), the use of generative AI would require a great level of caution and output verification compared to areas of study where factuality matters less (for example, creative writing). / Well, there’s a serious misfire. Creative writing (as rumoured for the English AACs, where AI use should definitely be excluded, but won’t be) is going to be one of the hardest areas to monitor.

Training and Literacy on the Use of AI: We urgently need to develop and implement training programs in AI literacy that will equip our educators with fundamental familiarity with AI, and to prepare those who will train others. / Worthy. But there is no chance of this happening. Read my piece on professional development in Ireland. There is no area for schools in which development and training are properly up to speed. Moreover, in schools neither management nor teachers have any more capacity. They are deluged with the day-to-day work, dealing with the physical fabric, new initiatives, snowballing policy developments, child protection demands, curricular reform, teacher shortages and still more. It’s a form of luxury belief to think that AI training should supersede any of these.

Given that generative AI is a relatively new technology, its potential downstream impact on the teaching-learning process is necessarily not yet fully understood. / Ah yes, well said. Above all, beyond the evident problem with allowing AI into terminal assessments, this is my greatest concern. To change the watery metaphor, I am deeply concerned about the backwash from such assessments in all years of secondary school, and how AI may poison the core of our subject in regular classes, and in our core purpose of helping children read, write and think better.

The conclusion and Actions on page 4 include sensible and worthy aspirations, including the insistence on the use of AI that is ‘safe, responsible and ethical’, though it might be argued that AI can never be ethical, given its dubious origins, including copyright theft and environmental damage. Dr Birhane referred to more of these, above.

The final recommendation is that:

Government should facilitate a national conversation between teachers and their unions/representative organisations, parents/guardians and their representative organisations, policymakers, technology companies, students and their representative organisations, and educational technology innovators.

Unfortunately, such conversations never happen in Irish education, and won’t on this topic either. Education policy and practice are delivered on a top-down model, with lip-service to the opinions and expertise of educators. We sit and wait.