
The increasing use of artificial intelligence (AI) tools in education is causing considerable disquiet amongst educators who see it as ‘cheating’, whether in the writing of assignments or in taking shortcuts when assessing those same assignments. Could an AI tool end up writing and marking its own work whilst students and teachers are merely bystanders? This posting is from a guest author and looks at the possible pros and cons of AI. One con identified is, “AI-driven education technology may unintentionally increase existing inequalities between students of different socioeconomic backgrounds by providing different levels of access to educational resources”. This is happening now and must be challenged fast. The centuries in which human minds expressed their ideas directly through the pen may be ending as AI intervenes and takes over.
The rise of artificial intelligence (AI) in student assignments is an exciting development in the field of education. AI-powered tools are becoming increasingly popular among students and educators alike, as they can help streamline the process of completing assignments. They allow students to quickly identify relevant topics, research resources, and even create visual representations of their ideas, saving time and letting students focus on the more creative aspects of their work. Additionally, AI-based tools can provide personalised feedback and guidance, helping students improve their writing and research skills, making them an increasingly valuable resource.
How is AI used for cheating in student assignments?
AI can be used to cheat on assignments in several ways. AI-based tools can generate answers quickly and accurately; they can plagiarise work from other sources by rapidly identifying and copying text from the internet; and they can likewise locate and copy images, allowing an assignment to be completed without doing any real work. Finally, such tools can generate complete assignments from scratch, further reducing the amount of work a student needs to do. Teachers and students alike need to be aware of the potential risks of using AI-based tools.
How is AI used for grading assignments?
AI can be used for grading assignments in a variety of ways. One of the most common uses is automated essay scoring, which uses AI algorithms to assess the content and structure of student essays. AI can also analyse data from other types of assignments, such as maths problems, coding assignments, and multiple-choice assessments, and can provide feedback to students on their work, such as hints and advice on how to improve.
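As a concrete illustration, here is a toy sketch in Python of the feature-based approach that early automated essay scoring systems took: extract surface features from the text and combine them into a score. The features and weights below are invented for illustration only and do not correspond to any real marking engine.

```python
import re

def essay_features(text: str) -> dict:
    """Extract the kind of surface features early scoring systems relied on."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "word_count": len(words),
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "vocab_diversity": len(set(words)) / max(len(words), 1),
    }

def score_essay(text: str) -> float:
    """Toy 0-100 score; real systems fit weights like these by
    regression against a corpus of human-marked essays."""
    f = essay_features(text)
    score = (
        0.05 * min(f["word_count"], 800)        # reward length, capped
        + 1.5 * min(f["avg_sentence_len"], 25)  # reward developed sentences
        + 40 * f["vocab_diversity"]             # reward varied vocabulary
    )
    return round(min(score, 100.0), 1)

print(score_essay("AI-based tools can provide personalised feedback to students."))
```

Modern systems replace the hand-picked features with machine-learned models, but the principle, reducing an essay to measurable signals and mapping them to a mark, is the same.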
History of AI in detecting student cheating.
The history of AI and student cheating dates back to the 1980s, when computer programs were first developed on both sides of the contest: programs that could generate answers to multiple-choice questions, and others designed to detect plagiarism in written essays. In the 1990s, more sophisticated AI systems were developed which could detect patterns in test responses and flag potential cheating.
Today, AI is increasingly being used to detect cheating in a range of contexts, including online exams, classes, and tests. AI-driven software can analyse student behaviour to identify patterns indicative of cheating, such as copying and pasting answers from the internet or using a tool to simulate real-time answers. AI can also monitor student activity during live online classes, detect plagiarism, and automatically flag suspicious behaviour.
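The text-matching part of this is simple enough to sketch. The snippet below, assuming scikit-learn is installed, scores a submission against candidate source texts using TF-IDF cosine similarity; commercial detectors work against vastly larger source indexes with more robust matching, so this illustrates the idea only.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def similarity_to_sources(submission: str, sources: list[str]) -> list[float]:
    """Score a submission against candidate sources; values near 1.0
    suggest text lifted more or less verbatim."""
    vectoriser = TfidfVectorizer(ngram_range=(1, 3))  # n-grams catch copied phrasing
    matrix = vectoriser.fit_transform([submission] + sources)
    return cosine_similarity(matrix[0:1], matrix[1:])[0].tolist()

submission = "The mitochondrion is the powerhouse of the cell."
sources = [
    "The mitochondrion is the powerhouse of the cell.",    # verbatim copy
    "Ribosomes assemble proteins from amino acid chains.",  # unrelated text
]
print(similarity_to_sources(submission, sources))  # first score ~1.0, second ~0.0
```

Note the obvious weakness: text generated fresh by an AI bot matches no existing source, which is precisely why essays like the one above sail through such tests.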
The use of AI in detecting cheating is becoming ever more common as the technology improves. AI systems are growing more sophisticated and can identify ever subtler patterns in student behaviour. As the technology advances, it will likely become an increasingly important tool in preventing student cheating.
What is the future of AI in education?
The future of AI in education is very bright. AI can automate mundane tasks, such as grading papers and creating personalised learning plans, freeing teachers to focus on providing students with meaningful instruction. It can power adaptive learning systems that adjust to each student’s individual needs and abilities, providing targeted instruction and feedback; create virtual tutors that give students real-time feedback; analyse student data to better understand patterns of behaviour and identify areas of weakness; and, finally, create immersive virtual learning environments, providing students with a more engaging and interactive experience.
Will AI take over education assessments?
No, AI will not take over education assessments. AI technology is being used to help measure student performance and create personalized learning experiences, but a human element will still be necessary to evaluate the results of assessments.
Will AI be used for assessments in the UK?
Yes, AI is increasingly being used for assessments in the UK. Many universities are now using AI-based algorithms to assess student performance and provide feedback on student work. AI is also being used to assess job applicants and to automate the recruitment process. Indeed Ofqual, who regulate state examinations in England, have been exploring AI more seriously since 2020 in ‘Exploring the potential use of AI in marking’. It is creeping in as their research intensifies.
What are the dangers of using AI in education?
There are five main drawbacks to using AI in education.
1. Lack of Human Interaction: AI-powered educational tools may reduce or eliminate the need for human interaction in the learning process, which can lead to a lack of empathy and understanding of student needs.
2. Inaccurate Information: AI algorithms may learn and process information differently from humans, resulting in inaccurate or incomplete information being presented to students.
3. Privacy Concerns: AI systems may collect and store large amounts of data about individual students, raising concerns about privacy and potential misuse of the data.
4. Exacerbation of Inequality: AI-driven education technology may unintentionally increase existing inequalities between students of different socioeconomic backgrounds by providing different levels of access to educational resources.
5. Job Losses: AI-driven technology could lead to the displacement of some educational professionals, such as teachers and counsellors, as these tools become increasingly sophisticated.
The posting above was entirely written by an AI bot, ChatGPT.
It appears acceptable, convincing and grammatical as an essay, and passed three different plagiarism tests applied by TEFS with flying colours. Importantly, it took just under 7.5 minutes to ‘construct’ using the ChatGPT online tool. Armed with this advantage, any student could generate a serviceable essay template and embellish it later.
TEFS now reverts to the traditional method, warts and all.
ChatGPT is an online text-generating AI programme from OpenAI, which describes itself as “an AI research and deployment company. Our mission is to ensure that artificial general intelligence benefits all of humanity”.
It appears to be the most advanced AI tool available at the moment, but more advanced tools are likely to come online in time. It emerged in November 2022 from its San Francisco base as a free version labelled a ‘research preview’. Millions are already using it as news has spread fast. This free version authored the guest part of this post. A more advanced subscription service is planned soon and, if other existing tools are a guide, is likely to cost at least £10 per month in the UK.
Students with resources at their disposal to subscribe would be put at an enormous advantage over those researching ‘homemade’ assignments.
The journalist Chris Stokel-Walker asked in the journal Nature in December 2022, ‘AI bot ChatGPT writes smart essays — should professors worry?‘. There is no easy answer, and there could be educational advantages as well as pitfalls. Sandra Wachter of the Oxford Internet Institute was quoted as worrying that students were “outsourcing not only their writing but also their thinking”. She is right.
The main danger is brainwashing.
It is one thing to seek help with correcting grammar and spelling. It is altogether another to get someone, or something, else to do all of the thinking and generate the answers. The idea that educators would adapt by setting assignments that demand more incisive critical thinking could easily be usurped by newer versions of AI bots. It is a downward spiral that everyone could be sucked into.
Relegating thinking to a robot is a dangerous move that would stifle creativity and imagination.
Strangely, Isaac Asimov predicted this happening in his 1957 short story ‘Profession’. In it, most people are directly brainwashed through a process of ‘taping’ to acquire the necessary skills, while only a select few are diverted into advanced conventional education as the elite. The hero, who is recruited into the elite, asks at the end, “What about the people here who don’t measure up?”. The answer comes back, “They are taped eventually and become our Social Scientists… We are the second echelon, so to speak.” Only those spared the ‘brainwashing’ of taping are destined, through more conventional education, to become the elite.
How does ChatGPT grade assignments? Battle of the bots.
ChatGPT had an answer for this too, telling us that it can assess the “accuracy of the student’s answers” and provide “feedback on their performance”. In doing so, “the automated grading system can also provide a comprehensive analysis of the student’s responses, highlighting areas of improvement and areas of strength”. Thus a student can easily have an essay graded and corrected that began as a ChatGPT composition, later added to and ‘tweaked’ by that student. It becomes a circular process whereby grading and marking an assignment is carried out by the same, or a similar, AI tool that likely wrote it in the first place. Creativity and imagination will evaporate in the distillation process. The AI bot emerges as the oracle to consult on all things.
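To make the circularity concrete, the sketch below has the same bot write an essay and then mark its own output. It is a minimal sketch of the loop described above, assuming the OpenAI Python client and an illustrative model name; it is not the workflow TEFS used.

```python
# A sketch of the circular loop, assuming the OpenAI Python client
# (pip install openai) and an API key in the OPENAI_API_KEY variable.
# The model name and prompts are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # assumption: any chat-capable model would do

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: the bot writes the essay.
essay = ask("Write a 300-word essay on the pros and cons of AI in education.")

# Step 2: the same bot marks its own composition.
print(ask(
    "Grade this student essay out of 100, highlighting areas of "
    f"strength and areas of improvement:\n\n{essay}"
))
```

Nothing in the loop requires a human at any point, which is exactly the bystander scenario raised at the start of this post.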
Does ChatGPT make mistakes?
This is best answered by asking ChatGPT itself. Its response was, “Yes, ChatGPT is an artificial intelligence chatbot and like all artificial intelligence chatbots, it can make mistakes”.
TEFS decided to test it on some very specific topics in science, and microbiology in particular. When asked to identify the main experts in areas familiar to the author, it supplied only the names of scientists in the USA. In one area where most experts are outside the USA, it identified a single researcher, from the USA, who on close inspection had never published any research findings in that area of endeavour. Such an error by a student in an assignment could backfire spectacularly, so buyer beware. If educators keen on AI bots assert that there is no danger and we must adapt, I can almost hear people in Edinburgh saying, “AI (aye) right.”
A political analysis.
This was intriguing, to say the least. When asked if there was likely to be a revolution in the UK, the answer was, “It is highly unlikely that there will be a revolution in the UK in 2023. The UK is a stable democracy, and revolutions are usually sparked by extreme levels of dissatisfaction with a government and its policies. At this time, it is difficult to predict what the political landscape will look like in 2023, but it is unlikely that a revolution will be sparked”. That was reassuring, maybe. But when asked if there would be a general election in the UK in 2023, it was more reticent: “It is too early to say if there will be a general election in the UK in 2023. At this point, the next scheduled general election is set for May 5th, 2022”. Did we miss something? Perhaps it was right; if we had had one then, we would all be better off.
The parody site ‘Timeline 2070’ offers a glimpse into the future from September 2069 with, ‘Eastwood report uncovers Artificial Intelligence ‘Essay Mill’ scandal’. Enough said.
The main living author, Mike Larkin, retired from Queen’s University Belfast after 37 years teaching Microbiology, Biochemistry and Genetics.
The guest author, ChatGPT, is a figment of a robot imagination in the cloud that has been filtered through an old sock to remove plagiarised impurities.