By Johan Steyn, 8 February 2023
Regular compliance training is a part of most professional jobs these days, particularly if you work for a large audit or consulting firm. In my experience, these courses are usually very boring. I have often had a good chuckle when colleagues tell me how they are “winging” the assessments.
Usually one has to watch the complete video or interactive media presentation of a module before one can get to the test. So you start the video, note how long it will run, and go make a coffee or clean the kitchen. Every few minutes you check in quickly to see if the presentation is near completion.
The test is often a multiple-choice assessment. In my view, if you know the subject matter well, the test is more difficult, as some of the options are designed to trick you. But if you do not know the content, it becomes a simple exercise in probability: each question has, say, four options, so you have a 25% chance of guessing the correct one.
If you memorise, or better still screenshot, the answers to the questions you got wrong, your chances of passing the next round are much higher. Repeat the process a few times and, voila, you will pass the module.
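To see how lopsided the odds are, here is a small illustrative sketch. The 10-question test and 80% pass mark are my own assumed figures, not from any particular course; the point is simply that pure guessing almost never passes, while one round of screenshotted corrections makes the next attempt near-certain.

```python
from math import ceil, comb

def pass_probability(n_questions=10, p_correct=0.25, pass_mark=0.8):
    """Chance of reaching the pass mark on a multiple-choice test by
    pure guessing: the tail of a binomial distribution with success
    probability p_correct per question (1 in 4 options = 0.25)."""
    need = ceil(pass_mark * n_questions)  # questions required to pass
    return sum(
        comb(n_questions, k) * p_correct**k * (1 - p_correct)**(n_questions - k)
        for k in range(need, n_questions + 1)
    )

# Pure guessing on a 10-question test with an 80% pass mark:
print(f"{pass_probability():.5f}")  # well under 1%

# After one failed attempt with the corrections screenshotted, every
# answer is known, so the second attempt is effectively guaranteed.
```

The contrast is the whole story: the first-attempt odds are vanishingly small, which is exactly why the screenshot-and-retry loop the colleagues describe works so reliably.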
Corporate course designers aim to keep the training simple and focused, with an outcome of understanding that can be applied in practice.
Recently on a podcast with Patrick Bet-David, the renowned astrophysicist Neil deGrasse Tyson made an interesting statement: “People who cheat on exams do so because the system values your grade more than the student values learning.”
His comments were in the context of a currently much-debated topic, namely how a generative artificial intelligence (AI) platform like OpenAI’s ChatGPT can help students plagiarise their submissions or cheat in exams. This explosive and vastly improving technology is a major headache for educational institutions.
In a recent paper, “ChatGPT Goes to Law School,” Jonathan Choi and others from the University of Minnesota Law School wrote that “ChatGPT performed at the level of a C+ student on average over 95 multiple choice questions and 12 essay questions, receiving a low but passing mark in all four courses.”
Even though it is uncertain how prevalent the application is among students, or how detrimental it could be to learning, some educators are moving with remarkable speed to redesign their assignments in response to ChatGPT.
In academic education, the rapid advance of generative AI presents major hazards, but it also offers significant upsides.
One of the most significant benefits of employing AI in these environments is the ability to tailor each student’s educational experience. Through natural language processing and machine learning algorithms, AI can determine a student’s strengths, limitations and preferred learning style, paving the way for a more individualised curriculum suited to the student’s specific needs.
This could encourage greater student participation, which in turn may lead to higher academic achievement. In addition, AI’s capability to automate processes such as grading and feedback could free up time for educators to focus on individualised instruction and mentoring.
Educators, students and universities must collaborate to define standards and regulations for the appropriate use of artificial intelligence in education to combat the challenges posed by this technology. They should also enthusiastically investigate its many benefits.
• Steyn is on the faculty at Woxsen University, a research fellow at Stellenbosch University and founder of AIforBusiness.net