Melvin Boysen

ChatGPT

ChatGPT is an Artificial Intelligence (AI) chatbot created by the research firm OpenAI, whose founders included Elon Musk. It is trained on public data from the internet, including websites, books, tweets and other sources. The system aims to simulate natural conversation and lets users generate essays and code or engage in everyday conversation.


In 2019, Microsoft invested US$1 billion in the firm. Following ChatGPT's widespread success, it pledged several billion dollars more in a bid to take the partnership to the next phase and integrate OpenAI's tools into Bing and its Azure cloud services. In response, Google announced it was launching a similar chatbot called "Bard" to rival ChatGPT.


ChatGPT's abilities have often been abused: it has been used to generate malicious code, write essays or produce harmful content. Although safeguards have been put in place, users continue to find new ways to evade them. This has fuelled growing concern among schools about AI-generated plagiarism. Because of how ChatGPT is trained, it can be very difficult for humans to tell AI-generated content apart from well-written essays, and conventional plagiarism detectors cannot reliably flag essays written by AI. Three essays I sent to the school for verification went 100% undetected by the school's plagiarism detection program, "Viper". This is because ChatGPT draws information from multiple websites, makes sense of it and writes its own version. There are, however, tools designed specifically to detect AI-written text, and these flagged all three essays.


Workers in many industries fear that their jobs could one day be replaced by ChatGPT. Reports suggest it would be able to pass the US Medical Licensing Exam, which is notoriously difficult. This is less alarming than it sounds, however, because the AI is trained on data from the internet, meaning that anyone with access to it and sufficient time to search through the information could uncover the same answers. Everything ChatGPT generates is based on that specific dataset, so it cannot break new ground and is limited to the information it was provided with.


ChatGPT also has a tendency to invent facts and figures that are completely fictitious. This makes its output unreliable, unusable for research and, ultimately, a breach of users' trust.


When asked for statistics about employment in the United States in 2023, the chatbot gives us information that a quick Google search reveals to be completely wrong. This makes sense: the data the bot was trained on only goes up to 2021, so it is impossible for ChatGPT to know anything about current statistics. Yet it confidently gives an incorrect answer.


Much of the information may also be unusable because the chatbot was trained on vast amounts of data from the internet, which is freely accessible to everyone. Some people online provide inaccurate information or display extremist behaviour or racial bias, which the AI can pick up and reinforce. This also raises a series of ethical issues, some of which OpenAI has already addressed by implementing safeguards and launching a tool to detect AI-generated text.


One has to keep in mind that language models such as ChatGPT are still in their early stages, and we should see improvements very soon. It is a remarkable achievement by OpenAI, with the potential to change the use of machine learning across industries and applications. It also marks the beginning of a global race to develop such models, with companies like Google and the Chinese firm Baidu launching their own chatbots to compete with ChatGPT.


