An analysis of millions of English-language tweets discussing ChatGPT in the first three months after its launch revealed that while the general public expressed excitement over this powerful new tool, there was also concern about its potential for misuse. Negative opinions questioned its credibility, pointed to possible biases and ethical issues, and raised concerns about the employment rights of data annotators and programmers. Positive views, on the other hand, emphasized its potential applications across a variety of fields. The paper was published in PLOS ONE.
ChatGPT is an advanced AI language model developed by OpenAI, designed to understand and generate human-like text in response to user input. It was introduced to the public in November 2022, based on the GPT-3.5 architecture, and later enhanced with versions such as GPT-4. ChatGPT can perform various tasks, such as answering questions, providing explanations, generating text, offering advice, and assisting with problem-solving. It uses deep learning techniques to predict the most relevant responses, enabling it to engage in interactive conversations on a wide range of topics. The model was trained on large datasets, including books, articles, and online content, allowing it to generate coherent and contextually appropriate responses.
While useful in many areas, ChatGPT has limitations, such as occasionally providing inaccurate or biased information or generating completely fabricated responses (known as AI hallucinations). It has been applied in diverse fields, such as education, customer service, and content creation.
When ChatGPT was first introduced, its popularity soared, with its user base reaching an estimated 100 million individuals within two months of launch. Since then, many new AI language models have been developed by various companies. However, it could be argued that ChatGPT sparked the AI revolution in the workplace, generating widespread discussions about AI and prompting people to form varied opinions about its impact.
Researchers Reuben Ng and Ting Yu Joanne Chow aimed to analyze the enthusiasm and emotions surrounding the initial public perceptions of ChatGPT. They examined a dataset containing 4.2 million tweets that mentioned ChatGPT as a keyword, published between December 1, 2022, and March 1, 2023—essentially, the first three months after ChatGPT’s launch. The researchers sought to identify the issues and themes most frequently discussed and the most commonly used keywords and sentiments in tweets about ChatGPT.
The study analyzed the dataset in two ways. First, the researchers focused on identifying significant spikes in Twitter activity, or periods when the number of tweets, replies, and retweets about ChatGPT was notably high, and they analyzed what users were saying during those times. They collected and analyzed the top 100 most-engaged tweets from these periods. Second, they identified the top keywords each week that expressed positive, neutral, or negative sentiments about ChatGPT.
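To illustrate this two-pronged approach in concrete terms (the paper itself does not publish code, and the column names and thresholds below are hypothetical), a rough sketch of activity-peak detection and weekly keyword counting might look like this in Python:

```python
# Illustrative sketch only -- not the authors' pipeline. Assumes a DataFrame of
# tweets with hypothetical columns "date" (datetime), "text", and "engagement".
import pandas as pd
from collections import Counter

def find_activity_peaks(tweets, z_threshold=2.0):
    """Flag days whose tweet volume is unusually high, using a simple z-score rule."""
    daily_counts = tweets.groupby(tweets["date"].dt.date).size()
    z_scores = (daily_counts - daily_counts.mean()) / daily_counts.std()
    return daily_counts[z_scores > z_threshold]

def weekly_top_keywords(tweets, top_n=10,
                        stopwords=frozenset({"the", "a", "to", "and", "of", "is", "it"})):
    """Count the most frequent words in each calendar week (very rough tokenization)."""
    top_by_week = {}
    for week, group in tweets.groupby(tweets["date"].dt.to_period("W")):
        counts = Counter(
            word for text in group["text"]
            for word in text.lower().split()
            if word.isalpha() and word not in stopwords
        )
        top_by_week[str(week)] = counts.most_common(top_n)
    return top_by_week

# Toy usage with a few made-up rows; a real analysis would load millions of tweets.
tweets = pd.DataFrame({
    "date": pd.to_datetime(["2022-12-05", "2022-12-05", "2022-12-12"]),
    "text": ["ChatGPT is a huge breakthrough",
             "ChatGPT hallucinated again",
             "good debugging companion"],
    "engagement": [120, 45, 300],
})
print(find_activity_peaks(tweets, z_threshold=0.5))
print(weekly_top_keywords(tweets, top_n=3))
```

Selecting the 100 most-engaged tweets within a flagged period would then amount to sorting that subset by the engagement column, while the sentiment labeling reported in the study would require a proper sentiment model rather than the raw word counts shown here.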
The results showed 23 peaks in Twitter activity during the study period. The first peak occurred when ChatGPT surpassed 1 million users just five days after its launch, reflecting both the initial buzz and the hesitancy surrounding the new tool. The second peak centered on discussions about ChatGPT’s potential uses. Subsequent peaks explored its utility in academic settings, detection of bias, philosophical thought experiments, debates over its moral permissibility, and its role as a mirror to humanity.
The analysis of keywords revealed that the most frequent negative terms expressed concerns about ChatGPT’s credibility (e.g., hallucinated, crazy loop, cognitive dissonance, limited knowledge, simple mistakes, overconfidence, misleading), implicit bias in generated responses (e.g., bias, misleading, political bias, wing bias, religious bias), environmental ethics (e.g., fossil fuels), the employment rights of data annotators (e.g., outsourced workers, investigation), and adjacent debates about whether using a neural network trained on existing human works is ethical (e.g., stolen artwork, minimal effort).
Positive and neutral keywords expressed excitement about the general possibilities (e.g., huge breakthrough, biggest tech innovation), particularly in coding (e.g., good debugging companion, insanely useful, code), as a creative tool (e.g., content creation superpower, copywriters), in education (e.g., lesson plans, essays, undergraduate paper, academic purposes, grammar checker), and for personal use (e.g., workout plan, meal plan, calorie targets, personalized meeting templates).
“Overall, sentiments and themes were double-edged, expressing excitement over this powerful new tool and wariness toward its potential for misuse,” the study authors concluded.
The study provides an interesting historical analysis of public discourse about ChatGPT. However, it is worth noting that the study focused solely on English-language tweets, while much of the broader discussion occurred outside of Twitter and in non-English languages.
The paper, “Powerful tool or too powerful? Early public discourse about ChatGPT across 4 million tweets,” was authored by Reuben Ng and Ting Yu Joanne Chow.