Not All Rainbows and Sunshine: The Darker Side of ChatGPT

28 January 2023

If you haven’t heard about ChatGPT, you must be hiding under a very large rock. The viral chatbot, used for natural language processing tasks like text generation, is hitting the news everywhere. OpenAI, the company behind it, was recently in talks that would value it at $29 billion¹, and Microsoft may soon invest another $10 billion².

ChatGPT is an autoregressive language model that uses deep learning to produce text. It has amazed users with its detailed answers across a variety of domains. Its answers are so convincing that it can be difficult to tell whether or not they were written by a human. Built on OpenAI’s GPT-3 family of large language models (LLMs), ChatGPT was launched on November 30, 2022. It is one of the largest LLMs and can write eloquent essays and poems, produce usable code, and generate charts and websites from text descriptions, all with limited to no supervision. ChatGPT’s answers are so good that it is shaping up to be a potential rival to the ubiquitous Google search engine³.
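To make “autoregressive” a little more concrete, the short sketch below generates text one token at a time with GPT-2, a small, openly available relative of the GPT-3 family, via the Hugging Face transformers library. ChatGPT itself is only reachable through OpenAI’s hosted service, so the model, prompt, and decoding settings here are illustrative assumptions rather than a description of ChatGPT’s internals.

```python
# Minimal sketch of autoregressive text generation with GPT-2 (an openly
# available, much smaller relative of the GPT-3 family). Requires the
# `transformers` and `torch` packages; the model choice is illustrative only.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Write a short poem about explainable AI:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Autoregressive decoding: the model predicts one token at a time,
# conditioning each new token on everything generated so far.
output_ids = model.generate(
    input_ids,
    max_new_tokens=60,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```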

Large language models are … well… large. They are trained on enormous amounts of text data, which can be on the order of petabytes, and they have billions of parameters. The resulting multi-layer neural networks are often several terabytes in size. The hype and media attention surrounding ChatGPT and other LLMs are understandable — they are indeed remarkable developments of human ingenuity, sometimes surprising the developers of these models with emergent behaviors. For example, GPT-3’s answers are improved by adding certain ‘magic’ phrases like “Let’s think step by step” to a prompt⁴. These emergent behaviors, which point to the models’ incredible complexity combined with a current lack of explainability, have even led developers to ponder whether the models are sentient⁵.
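As a hedged illustration of that “magic phrase” effect, the sketch below sends the same question to a GPT-3 family model twice, with and without the step-by-step trigger, using the openai Python package’s 0.x Completion API. The model name (text-davinci-003), the API key placeholder, and the placement of the phrase (appended to the start of the answer, as in the original zero-shot chain-of-thought setup) are assumptions for illustration, and outputs will vary.

```python
# Hedged sketch of the "Let's think step by step" effect on a GPT-3 model.
# Assumes the openai Python package (0.x Completion API) and a valid API key;
# model name and prompt layout are illustrative, not prescriptive.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

question = (
    "Q: A juggler has 16 balls. Half are golf balls, and half of the "
    "golf balls are blue. How many blue golf balls are there?\nA:"
)

for trigger in ["", " Let's think step by step."]:
    response = openai.Completion.create(
        model="text-davinci-003",   # a GPT-3 family model
        prompt=question + trigger,
        max_tokens=128,
        temperature=0,
    )
    label = "with trigger phrase" if trigger else "plain prompt"
    print(f"--- {label} ---")
    print(response.choices[0].text.strip(), "\n")
```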

The Spectre of Large Language Models

Amid all the positive buzz and hype, there has been a smaller but forceful chorus of warnings from those within the Responsible AI community. Notably, Timnit Gebru, a prominent researcher working on responsible AI, co-authored a paper⁶ published in 2021 that warned of the many ethical issues related to LLMs, and the dispute over it led to her being fired from Google. These warnings span a wide range of issues⁷: lack of interpretability, plagiarism, privacy, bias, model robustness, and environmental impact. Let’s dive a little into each of these topics.

Trust and Lack of Interpretability:

Deep learning models, and LLMs in particular, have become so large and opaque that even the model developers are often unable to understand why their models are making certain predictions. This lack of interpretability is a significant concern, especially in settings where users would like to know why and how a model generated a particular output.

In a lighter vein, our CEO, Krishna Gade, used ChatGPT to create a poem⁸ on explainable AI in the style of John Keats, and, frankly, I think it turned out pretty well.
