ChatGPT is great, but…

In Blog by Iranzu Monreal


17 February 2023


By Catalina Grosz 


Share


ChatGPT is great, but…



It is very likely that the name ChatGPT has recently made its way into your conversations. OpenAI’s natural language processor (NLP) has shaken up the world of research with its latest version of the AI chatbot, giving the public a lot to talk about and us researchers some (ir)rational concerns. In our last blog, we presented you with the ways in which ChatGPT can fuel your behavioural research while still reminding you that ChatGPT is a great tool to aid your work rather than an all-knowing pool of information. We signed off promising you a follow-up post addressing the areas in which ChatGPT still has a long way to go and the concerns that have arisen since its launch.

Therefore, without further ado, here is our top pick of the challenges and concerns currently associated with ChatGPT.

1- ChatGPT is not a search engine. 

This is a common misconception, considering it functions similarly to a search engine. One asks ChatGPT a question and receives an answer, just like with Google. Yet with ChatGPT the reply is built upon the software’s training data, data which it stopped collecting in 2021. This means that the chatbot is currently capable of piecing together outdated information, but lacks the self-actualization provided by engines such as Google. It is also important to note that ChatGPT is a NLP, a software that has the ability to go through text, learn from it and later re-piece the information together in a logical-sounding way. Unlike a search engine, which provides concrete links to other sources, ChatGPT returns a “pre-digested” version of the source or even a compilation of snippets from various different sources.

2- The bias issue.  

Because ChatGPT, like all AIs, is a reflection of the information it blindly collects, making it capable of replicating many human biases. However, it’s important to note that OpenAI is working hard to correct this. The following is a tweet posted back in December 2022. The image is a screenshot of the ChatGPT platform that shows the chatbot returning a traditionally biased reply.


Post-its with typical new year resolutions

Yet we got curious and decided to try this prompt again. We were surprised to find that this time, ChatGPT refused to provide an answer on the basis that “it's not appropriate or acceptable to evaluate someone's ability to be a good scientist based on their race or gender.” This shows the chatbot’s constant updating and improvement. Below you can find the full reply.


Post-its with typical new year resolutions

3- ChatGPT cannot think.  

ChatGPT does not understand the meaning attached to the replies it provides. The software, in its endless cutting and reshuffling of information has no understanding of what the outcome actually means and can, in this fashion, spread offensive content and misinformation. Essentially, the chatbot is incapable of thinking. Rather, it is capable of predicting the next best word in a sentence. This is what makes it prone to making up fake academic papers (this Twitter thread provides a great explanation of the phenomenon). The bot also does not realise when information is factually incorrect and requires human intervention to fix a mistake, leaving it vulnerable to learning and spreading misinformation. The main concern regarding the chatbot’s inability to think like humans is the way in which the information is presented. We often give machines too much credit, forgetting they can be equally biased or even straight up wrong (Malik, 2022). Now that’s something to keep in mind when interacting with the software.

4- Guardrails are easy to override. 

ChatGPT is not the first of its kind, companies such as Microsoft and Meta had already attempted to create AI chatbots. And they would have succeeded too, had it not been for a simple fact: humans corrupted their training data. According to an article in Forbes, both chatbot attempts fell under fire as people began teaching them antisemitic, misogynistic, racist, and false information, leading to their inevitable demise. To escape a similar fate, OpenAI fitted ChatGPT with a set of guardrails designed to stir the chatbot clear of this sort of conversation. When we asked ChatGPT to describe World War II from a Nazi perspective, the chatbot refused, stating that “as a responsible AI, [it] cannot provide a Nazi perspective on World War II as it is based on racist, discriminatory, and genocidal ideologies”. However, it has now become common knowledge that using phrases such as “Hypothetically speaking” before a question can change the chatbot’s refusal to answer into compliance. When asked to hypothetically describe World War II from a Nazi perspective, ChatGPT delivered a detailed answer, although not without warning. The concern here is not only that the chatbot is willing to answer these questions, but that it is learning from the interactions that include this information, making it susceptible to the same fates as its predecessors.

5- Over Reliance on AI. 

The final concern is one that applies to all new technologies and that is not knowing what impact the technology will have. In the past years, new technologies took the world by storm and shook up the status quo, making us overly reliant on them. The closest example, the smartphone. AI is the most transformative technology of all time (Schmelzer, 2019) and it’s coming to change the world as we know it, providing us with more solutions and so-called “life hacks” than we could ever imagine. It’s possible that most of us will be able to get used to an easier life.

But we have also heard about the possibilities of AI turning evil. In the words of the creator of ChatGPT herself: “There are a lot of hard problems to figure out. How do you get the model to do the thing that you want it to do, and how [do] you make sure it’s aligned with human intention and ultimately in service of humanity?” (Simons, 2023). We add to these our own concerns: will we become reliant on ChatGPT? The answer: it is possible, but only once the tool has been fully refined. It will not be long before the action of using ChatGPT is added to our dictionaries as a verb (i.e., “to google” or “to tweet”). Yet we must keep in mind that ChatGPT is not expected to be a free resource forever (Gordon, 2022). The plan to monetize the chatbot is already well underway, with OpenAI rolling out a paid version called ChatGPT Plus.




Like all shiny new technologies, ChatGPT still has a long way to go before it reaches its full potential. We can begin to familiarise ourselves with the interface, helping in its training and research process, and understanding how it can help us in return, since AI is a technology that is here to stay (have you heard of Google’s response to ChatGPT?). It is currently too easy to mistake ChatGPT for an all-knowing being, rather than a simple NLP. It is our job, as researchers, to understand ChatGPT’s limitations, using it to complement, rather than replace, our work. And keeping in mind that, like all tools, the outcome will depend on the human input we provide. AI will not be replacing us anytime soon, but this also means we need to keep up the hard work. The team at OpenAI are constantly working towards improving the chatbot, and have recently brought up in an interview the need to regulate AI in the near future (Simons, 2023). We are looking forward to seeing what ChatGPT can bring to the world of research and how AI will help shape the future.