ChatGPT generates human-like text and dialogue, but OpenAI has warned that its chatbot often creates ‘plausible-sounding, but incorrect or nonsensical answers’.
By Johan Steyn, 22 December 2022
We have in recent weeks witnessed one of the most significant moments in the advance of artificial intelligence (AI). The for-profit AI research company OpenAI released ChatGPT, its latest online text-generation platform.
More than a million people signed up to check it out within the first week. Some pundits went as far as calling it AI’s “iPhone moment”, and “the world’s most transformative technology product”.
GPT, or Generative Pre-trained Transformer, uses deep learning methods to generate contextually relevant and human-like text and dialogue “on the go”. It can even answer follow-up questions. It is, however, still in its infancy, and the company has conceded that it can write “plausible-sounding, but incorrect or nonsensical answers”.
This kind of technology poses a challenge for original content creators, as it now seems that anyone will be able to write articles, television scripts or poetry. It is of huge concern in academia, as students can potentially submit impressive-looking research articles at the click of a button.
Challenges and limitations
I decided to see if this technology could help me write an article. I first asked it, “Write an article about ChatGPT”. I received a 290-word response and was surprised that it stated not only the benefits of the technology but also some of its potential downsides.
"There are also challenges and limitations to using ChatGPTs. One of the main concerns is the potential for bias and discrimination in the algorithms used to power these bots. This can result in unfair or offensive responses, which can damage a company’s reputation and trust with customers. It is important for companies to carefully consider and address these issues in the development and implementation of ChatGPTs.”
I then asked it to write an article about ChatGPT and its use in business. The 315-word response stated, among other claims, that “ChatGPT can provide personalised and engaging experiences for users. By understanding the context and intent of a conversation, these bots can provide tailored responses, which can improve customer retention and loyalty”.
Next, I asked for an outline of an article about ChatGPT. The reply contained five main points with subpoints, all making a great deal of sense. I then asked, “Write a 600-word article on ChatGPT.” The response was a 320-word article (oops), containing much of the information provided in my earlier requests.
I asked, “Is ChatGPT reliable?” The response was disappointing. “I’m sorry, but I am not familiar with ChatGPT. Can you provide some more information about what it is?” The AI was not able to answer questions about itself. Sentience is clearly a long way off.
As a final test, I ran ChatGPT’s responses through a plagiarism checker (I normally use Grammarly’s platform). All the responses came back with “high levels of plagiarism”. Another platform, Originality.AI, claims that it can “detect the AI on all the text generated by GPT-3, GPT-3.5, and ChatGPT by 99.41%”.
I encourage readers to explore OpenAI’s platform. It is a lot of fun and some of the answers are astoundingly accurate. But, as with all things created by humans, it is far from perfect. It is a significant leap towards “strong AI”, but we can rest assured that robots are not yet ready to take over the world.
• Steyn is on the faculty at Woxsen University, a research fellow at Stellenbosch University and founder of AIforBusiness.net