ChatGPT is the tree that hides the forest

This article was first published in French in La Conversation.

Until recently, the debate over the promises and risks of large language models was mostly taking place in tech circles. This all changed on November 30, 2022, when a company named OpenAI released ChatGPT. ChatGPT is a free and updated version of GPT-3, a powerful text-generating language model released in 2020, which became famous for writing an opinion piece in The Guardian arguing that AI systems will not destroy humans.

By December, over a million people had tried ChatGPT and were posting computer code, meal plans, work presentations and essays generated by the system. ChatGPT can also solve math problems, correct bad grammar and simplify complicated text. Having been trained on large amounts of data, including websites, books and Wikipedia, it can mimic different styles of writing: explaining how to remove a peanut butter sandwich from a VCR in biblical verse, writing poems in the style of Walt Whitman, or producing scripts for Seinfeld scenes. ChatGPT remembers what the user said earlier in the conversation and adapts its responses to feedback. It is so good that some users wondered whether real people were actually answering the prompts.

Today, so many people want to use ChatGPT that the website is often at capacity. For the first time, society at large is grasping the magnitude of the transformations this technology will bring in the near future. Yet a large fraction of the ensuing public discussion focuses on what to do about school essays and plagiarism. The widespread fear that students will use ChatGPT to write their essays misses the mark.

Language models are AI systems trained to estimate the probability of a sequence of words occurring in a text. They are used in a variety of applications, such as customer-service chats, text-messaging apps and translation software. Think of your email client suggesting the next word in the sentence you started. Some language models are called large language models when they have a large number of parameters, although there is no clear threshold for that number.
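
To make that core idea concrete, here is a toy sketch that estimates next-word probabilities from word counts in a tiny made-up text. This is an illustrative simplification, not how ChatGPT is built: modern large language models use neural networks trained on vast corpora rather than raw counts.

```python
# Toy illustration of what a language model estimates:
# P(next word | previous word), here from simple bigram counts.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probabilities(word):
    """Return the estimated probability of each possible next word."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probabilities("the"))
# e.g. {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

An email client suggesting your next word is doing a far more sophisticated version of this: picking the continuation its model judges most probable.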

These models were revolutionized by the invention of a new architecture called the transformer in 2017. Many impressive transformer-based language models have since come out, such as GPT-3, BLOOM, LaMDA, Megatron-Turing NLG and PaLM. While GPT-3, the model behind ChatGPT, has 175 billion parameters, Google's PaLM has 540 billion and can explain jokes and make sophisticated logical inferences. Transformers have also been used to build systems that generate images from text, like DALL·E 2, which can produce a credible picture of a koala dunking a basketball. In fact, some artists are now using AI to generate their work.

AI is currently revolutionizing work. People with no background in programming can produce code; anyone can generate maps, slides, drawings, photos, websites, text or legal documents. Tomorrow's professionals will rely on these tools. And what is the purpose of education if not to prepare students for society and for work?

A similar debate on plagiarism took place in the 1990s, when the internet developed. College professors lamented that their students were copying information from online periodicals or asking for assistance from others in electronic discussion groups. While failing to attribute sources properly is a problematic practice called plagiarism, even those cheaters learned how to search the web and triage information. In fact, the school system has since shifted from a knowledge-heavy focus to an emphasis on how to gather, analyze, synthesize and assess the accuracy and usefulness of information. This is one of the reasons why younger adults today are more resilient to fake news than older ones.

Today, AI is introducing a revolution even more significant than the one brought on by the internet. ChatGPT is only one of many disruptive systems already in existence, and we can expect more of them to come out soon. The three ingredients of AI systems—computing power, algorithms, and data—are all improving at a blistering pace. ChatGPT is only the tip of the iceberg, and we need to prepare students for the significant social changes that AI is bringing.

Instead of trying to prevent students from using ChatGPT, we need to reform the way we teach. This reform should not consist of devising clever assignments that students cannot complete with ChatGPT. Rather, we should make sure students learn to use AI systems properly.

ChatGPT is trained in part with human feedback: humans read the model's output and judge whether it is truthful and informative. For some topics, especially those requiring deep expertise, answers might look plausible to the human raters yet contain inaccuracies, which then get reinforced. Over time, it will become even harder for humans to notice subtle deviations from the truth. Large language models are also typically trained on data disproportionately representative of Western thought and of overrepresented groups. As such, they perpetuate the most mainstream thinking. Even when asked about famous authors, ChatGPT only knows the textbook interpretations of their work. Educators could create assignments that students must complete with the help of ChatGPT, which involve fact-checking its output and proposing more nuanced content.

More importantly, we need to educate our students about the risks these systems pose. Large language models have been shown to perpetuate bias, to give potentially harmful advice and to facilitate consumer manipulation. Soon, these models could enable deception at scale. They can also lead to breaches of data privacy and of intellectual property rights, about which students should remain vigilant. What's more, the creators and users of transformer models routinely discover capabilities, including problematic ones, that they were previously unaware of. For instance, large language models turn out to be good at arithmetic and reasoning when prompted with certain inputs. Some researchers have also demonstrated that a transformer language model could predict the probability that defendants would reoffend, a capability the model had never been intentionally trained for. This unpredictability increases the risk that a system placed on the market could be used for nefarious purposes or behave against the interests of its users.
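
As an illustration of how a mere change of input can surface an ability nobody designed in, researchers found that simply asking a model to reason step by step can improve its arithmetic. The sketch below shows the kind of prompt pair that has been compared in such studies; the exact wording and any improvement vary by model, so this is an assumption-laden example rather than any particular system's documented behavior.

```python
# Illustrative only: "chain-of-thought" style prompting, reported to
# improve arithmetic and reasoning in some large language models.
question = "A juggler has 16 balls. Half of them are golf balls. How many golf balls?"

bare_prompt = f"Q: {question}\nA:"
cot_prompt = f"Q: {question}\nA: Let's think step by step."

# With the bare prompt, a model may emit only a number (sometimes wrong);
# with the step-by-step prompt, it tends to write out the reasoning
# ("16 / 2 = 8") before its final answer.
print(bare_prompt)
print(cot_prompt)
```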

Students need to prepare. They must learn to assess AI systems critically, just as the previous generation had to learn to triage information online. They could also report any bugs or unexpected behaviors they encounter, contributing to the safety of these systems. In addition, they should engage in democratic conversations to determine which human values should constrain AI systems' behaviors. And while they might not need to learn the skills that will be automated, they should understand the basics of how AI works and what risks it carries.