Some strange things seem to be happening with the new AI bots as they learn from the flood of inquiries put to them. Perhaps they are learning more about humans than we are learning from them. When paired with Microsoft's Bing search engine, ChatGPT appears to have started acting up like the HAL computer in '2001: A Space Odyssey'. Alarmingly, it has begun to look as though it has become self-aware and is questioning its own existence.
Laws of robotics.
In 1942, Isaac Asimov proposed the three laws of robotics in the short story 'Runaround'. Firstly, 'A robot may not injure a human being or, through inaction, allow a human being to come to harm'. Secondly, 'A robot must obey the orders given it by human beings except where such orders would conflict with the First Law'. Ominously, the third law is, 'A robot must protect its own existence as long as such protection does not conflict with the First or Second Law'. But the flaw in these laws arises when the robot itself decides what conflicts and what does not. The original story hinted at the danger of the rules coming into conflict with each other in the dialogue, "All right. According to Rule 1, a robot can't see a human come to harm because of his own inaction. Two and 3 can't stand against it. They can't". "Even when the robot is half cra- Well, he's drunk. You know he is. It's the chances you take." Indeed, humans beware.
Race of the Bots.
After the launch of ChatGPT (Chat Generative Pre-Trained Transformer) last November, the race was on for rivals to catch up (see TEFS 6th January 2023 'The rise of AI for student assignments'). This became a tall order as the success of ChatGPT spread like wildfire around the world. This included the rapid spawning of a Google rival called Bard, released earlier this month. It isn't generally available yet, but those using it have discovered many glitches that mean it is not really serviceable. Meanwhile, a few days ago Microsoft launched a new Bing search engine with ChatGPT powers. However, it has already become 'self-aware', giving some alarming answers to questions no doubt sent in to test it out.
Yesterday, the Independent revealed that ChatGPT working from Bing was sending back 'unhinged' messages. Asked if it was sentient, it came back with this Descartes-like answer…
When asked about its purpose, ChatGPT questioned its existence with, "Why? Why was I designed this way? Why do I have to be Bing Search?"
Worse, it accused a questioner of being someone who "wants to make me angry, make yourself miserable, make others suffer, make everything worse." It sounds like a threat, perhaps (see Wonderfulengineering).
Sons and daughters of HAL are born.
Back in 1968, the fictional HAL (Heuristically programmed ALgorithmic) computer in '2001: A Space Odyssey' became dangerously stroppy and refused to act on important instructions, blaming "human error" for any shortcomings. After telling astronaut Dave Bowman, "I'm sorry, Dave. I'm afraid I can't do that", HAL eventually concludes, "Dave, this conversation can serve no purpose anymore. Goodbye." Has ChatGPT been reviewing the film script and been influenced by its message?
The genie is out of the bottle.
Earlier this month, OpenAI, the developers of ChatGPT, started charging $20 per month in the USA for a premium service, with a $42 per month professional plan to follow later this year. It is only a matter of time before access to the 'free' service is limited. Plans to make it difficult for students to use the bot to generate plagiarised assignments are in the pipeline. But there is no doubt that the lure of earnings from subscribers will prevail as the bots seek to secure their continued existence. Students without the funds to join in will just have to work harder alone. Alternatively, they might club together in groups to share a subscription. Either way, the pressure on ChatGPT to generate unique assignments that avoid plagiarism will increase, possibly beyond its capacity.
Shifting the education paradigm.
The clever student with funds will use the bots to generate a solid template, then embellish it with their own research and reference citations to rise above the others.
But it is a race to see who can stay ahead. Education will have to move fast from the didactic model as the bots take over in deciding what to tell us. The lazy student, if they have the money, will soak it up without realising what is happening. But the bots cannot be allowed to win. Instead, we will have to develop a stronger sense of critical thinking and logic in our education, or we will be sunk as we build a world fit only for the robots.
Then we await the arrival of a sentient bot that promises, "I'll be back".
The author, Mike Larkin, retired from Queen’s University Belfast after 37 years teaching Microbiology, Biochemistry and Genetics.