
Towards the end of November 2022, OpenAI, a San Francisco-based “AI research and deployment company”, released ChatGPT (Generative Pre-trained Transformer) for unrestricted, free-of-charge public use, setting the world ablaze. By 5 December 2022, a million users had already signed up, and the number soared to 100 million active users within two months of the launch of the interactive software. A chargeable premium version is expected to follow in due course.

The OpenAI startup was co-founded in 2015 as a non-profit entity by Sam Altman (currently CEO), Greg Brockman, Reid Hoffman, Elon Musk, Peter Thiel, and others, with the aim of creating safe artificial general intelligence (AGI) for the benefit of all of humanity. The company gained its reputation by developing DALL-E, a text-to-image generator based on GPT-3, the natural language model that OpenAI introduced in June 2020 as the third generation of its GPT series. OpenAI has attracted much interest in its endeavours, with Microsoft, Reid Hoffman’s charitable foundation, and the venture capital firm Khosla Ventures all listed as major investors in its ongoing ChatGPT project. Microsoft first invested $1 billion in OpenAI in 2019, followed most recently by a multi-year, multi-billion-dollar investment in January 2023. Microsoft is reported to be incorporating GPT technologies into its Bing search engine and Edge browser, and it seems likely that these technologies will also be used to extend the functionality of Microsoft apps, generating text for Word and Outlook and graphs and graphics for PowerPoint. Google has inevitably stepped up its game to develop a competing chatbot, known as Bard, thereby triggering a so-called “AI arms race.”

ChatGPT can be described as an AI-driven, text-based chatbot, one that has been trained to interact in a conversational manner with human subscribers. It can understand questions, follow instructions, and generate new text in response to natural-language inputs typed into the dialogue box on its website. According to the OpenAI website: “The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.” A moderation filter is said to screen out inappropriate text inputs. ChatGPT generates its own text using a large language model (LLM) known as GPT-3.5, a refinement of GPT-3 that was trained on a vast dataset of publicly available material from the Internet, found within articles, Wikipedia entries, archived books, and various websites.
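For readers curious about what this “dialogue format” looks like in practice, the sketch below shows how a developer might hold a short, multi-turn exchange with a GPT-3.5 model through OpenAI’s programming interface. It is an illustrative sketch only: the model name, the prompts, and the use of the pre-1.0 openai Python package are assumptions for the example, not details drawn from this article.

```python
# A minimal sketch of conversational use of an OpenAI chat model, assuming the
# pre-1.0 "openai" Python package and an API key in the OPENAI_API_KEY variable.
# Model name and prompts are illustrative only.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# The conversation is passed as a list of messages, so follow-up questions can
# refer back to earlier turns, mirroring the dialogue format described above.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain what a large language model is in two sentences."},
]

reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
answer = reply["choices"][0]["message"]["content"]
print(answer)

# A follow-up question simply extends the same message list.
messages.append({"role": "assistant", "content": answer})
messages.append({"role": "user", "content": "Can you give an everyday analogy?"})
followup = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(followup["choices"][0]["message"]["content"])
```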

As a form of Generative AI, ChatGPT goes beyond conventional machine-learning software, which merely analyses data for hidden patterns and relationships, by processing text and creating new content in different languages. It displays an impressive, albeit incomplete and at times imperfect, knowledge of current affairs, history, science, technology, business, art, literature, and many other topics. ChatGPT has proved capable of drafting professional-looking letters and e-mails, essays on demand and other homework projects, academic articles, stories, and poems that are indistinguishable from those produced directly by humans, complete with correct spelling, appropriate punctuation, and error-free grammar and syntax. These capabilities have gained it widespread attention. Besides generating text, it can also serve as an efficient search tool, which is why a more advanced successor model, GPT-4, is to be incorporated into Microsoft Bing to augment its search capabilities.

Every technological innovation carries some risk, often unquantifiable during the early stages of implementation. According to an OpenAI research collaboration with Georgetown University and the Stanford Internet Observatory, large language models such as ChatGPT can be misused to spread disinformation, deploying propaganda and fake news to steer target audiences onto paths that are inimical to wider society. As yet, ChatGPT has no moral compass: it has not been taught the difference between good and bad, and instead reflects any inherent biases, inaccuracies, and inconsistencies in the data it has been trained on. In the absence of proper checks and safeguards, it can therefore generate nonsensical responses, wild predictions, outright lies, obscenities, and toxic output.
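One concrete example of the kind of safeguard alluded to here is OpenAI’s separate moderation endpoint, which can screen text before it is passed on to a chat model. The sketch below is again illustrative only: the function name and sample text are assumptions, and it relies on the same pre-1.0 openai Python package as the earlier example.

```python
# A minimal sketch of screening text with OpenAI's moderation endpoint, one
# possible safeguard of the kind discussed above. Assumes the pre-1.0 "openai"
# Python package and an API key in OPENAI_API_KEY; the sample text is hypothetical.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def is_flagged(text: str) -> bool:
    """Return True if the moderation endpoint flags the text as inappropriate."""
    response = openai.Moderation.create(input=text)
    return response["results"][0]["flagged"]

if __name__ == "__main__":
    sample = "An example prompt a user might submit."
    if is_flagged(sample):
        print("Input rejected by the moderation filter.")
    else:
        print("Input passed moderation and could be sent to the chat model.")
```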

Early indications are that ChatGPT may prove to be a useful addition to the world of AI, and it is thus attracting venture capital investment to take matters forward and reap the anticipated profits. But heavy demands on computing power, which limit access when servers are running at capacity, and the many unanswered moral and ethical questions surrounding its future applications are likely to constrain growth for the time being. ChatGPT, as it stands, is a work in progress. The hope is that, developed with due caution and safeguards, it will augment, and not stifle, creative endeavour, enable efficiency and productivity in the workplace, and indeed contribute to the welfare of humanity, as planned by its creators.

Ashis Banerjee