Tech Threatens Leaving Cert Integrity

In late July 2024, data analysts at OpenAI in San Francisco observed a 90% jump in usage of ChatGPT in the Philippines, coinciding with the start of the school year there. The observation supports a point made by Sarah Friar, OpenAI's chief financial officer, at a recent education seminar: the majority of ChatGPT users worldwide are students. Professor Marc Watkins of the Mississippi AI Institute asked, rhetorically, what other application peaks for eight or nine months of the year and dips during holidays and the summer break.

It is in this context that "additional assessment components" (AACs) are being introduced to the Leaving Certificate: seven subjects take them on from September 2025 and a further seven in 2026, including English, the subject I teach, before they extend to all subjects.

At least 40% of the final grade in each subject will come from an assessment completed outside the examination hall and marked by the State Examinations Commission. Critics argue that, far from easing pressure on students, the change could heighten stress, since Leaving Certificate students will have to juggle additional high-stakes components alongside their final written examinations.

There is also concern that assessments in some core subjects may be pushed back into fifth year, threatening parts of school life such as sport and, with them, student wellbeing.

Just as these new components are being built into the Leaving Certificate, an explosive technology that threatens their validity is evolving at speed. An AAC might look like an attractive addition in a low-stakes setting, but every stage of assessment in the Leaving Certificate carries high stakes.

There is little prospect of a reliable technical system for detecting the use of artificial intelligence (AI) in students' coursework. At last month's researchED Belfast conference, Bradley Busch of Inner Drive cited a recent German study which found that only 38% of teachers could reliably tell whether a piece of student work was authentic or machine-generated.

Worse, the study found that teachers are overconfident in their ability to do so. Even the best AI-detection tools are accurate only about 67% of the time, meaning roughly one-third of students are either falsely accused of cheating or get away with it undetected. The researchers concluded that these detection tools are neither trustworthy nor reliable. Prof Ethan Mollick of the Wharton School at the University of Pennsylvania, a leading authority on AI in education, has described AI detectors as error-prone and warned against using them on individuals.

Prof Áine Hyland raised these concerns in a recent issue of Leader, the magazine of the National Association of Principals and Deputy Principals. She called for the 40% AACs to be paused while the international experience of using GenAI in assessment is monitored. Given how much rides on the Leaving Cert, and the reality that some parents will go to any length to secure a high-points university place for their child, it is unreasonable and inequitable to put the onus on teachers to confirm that submitted work is solely the student's own.

Another weak point is self-declaration by students, the approach proposed for the existing coursework in history, geography and religious education, which counts for 20% of the final Leaving Certificate grade. It is unrealistic to expect students to admit to using tools such as ChatGPT for a research project when doing so could significantly affect their grade. Teachers fear that such self-reporting will feature in the promised "comprehensive guidelines" on AI use, undermining the credibility of the work assessed.

In my own subject, which every post-primary student in the country takes, students could in theory sit an English oral examination under controlled conditions, as happens routinely in Gaeilge, worth around 40% of the marks. In reality, such an arrangement is not feasible: the resources, time and organisation required would be enormous. Without rigorous supervision and restricted access to the internet, 40% of our course becomes vulnerable to untraceable interference. An AAC in English would be a welcome addition in a low-stakes setting, but the Leaving Certificate is a high-stakes environment.

For all its well-documented flaws, the Leaving Certificate is trusted internationally as a credible form of assessment, and that trust is now at risk. ChatGPT has advanced remarkably in the two years since its launch; we have no idea what GenAI will be capable of in the further two years before the first AACs are completed.

In the budget, €9 million was committed to mobile-phone pouches to protect students from the harmful effects of technology. It would be a grim irony to let another technology destroy the integrity of the Leaving Certificate.

Julian Girdham is an English teacher who writes about literature, teaching and education at www.juliangirdham.com.