In recent months, large language models (LLMs) have captured the zeitgeist, making headline after headline in the news. These sophisticated models, such as OpenAI's GPT-4 and Meta's LLaMA, have seized the imagination of researchers, developers, and the general public alike.
However, as with any transformative technology, LLMs have experienced their fair share of hype, along with the fluctuation of expectations and fears that follows. At the end of 2022, Gartner released its Hype Cycle report for Artificial Intelligence, with Generative AI sitting at the Peak of Inflated Expectations.
With the explosion of new developments after GPT-4 was announced, where do we sit now on the LLM hype curve after less than a year?
What really are LLMs?
Before we continue talking about the hype curve, let’s break down what LLMs are. These models are a subset of generative AI optimized to generate text: specifically, to predict the next word in a sentence given a prompt and relevant context. They are trained on extremely large datasets, use billions of parameters, and are fine-tuned by humans (or by other LLMs). Examples include BERT, GPT, and T5.
At the end of the day, they are text calculators that know how to create text that humans can understand when given a prompt.
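To make the "predict the next word" idea concrete, here is a minimal toy sketch in Python. Real LLMs use transformer networks with billions of parameters trained on massive corpora; this hypothetical example (the `train_bigrams` and `predict_next` helpers are invented for illustration) only shows the core mechanic: given the words so far, output the word most likely to come next based on training data.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count which word follows which in the training text."""
    counts = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts: dict, prompt: str) -> str:
    """Return the word most often observed after the prompt's last word."""
    last = prompt.lower().split()[-1]
    if last not in counts:
        return "<unknown>"
    return counts[last].most_common(1)[0][0]

# Toy training data: "cat" follows "the" more often than "mat" does.
model = train_bigrams("the cat sat on the mat and the cat ran")
print(predict_next(model, "the"))  # prints "cat"
```

An actual LLM does the same kind of next-token scoring, but over a vocabulary of tens of thousands of tokens and with context windows far longer than a single word.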
The Hype Curve: From Excitement to Realism
The hype curve is a common pattern observed when a new technology emerges. Initially, there is tremendous excitement and anticipation, often driven by lofty promises and visionary predictions.
In the case of large language models, their ability to generate coherent and contextually relevant text fueled the initial hype. Media outlets have reported on the astonishing capabilities of these models, capturing the imaginations of countless individuals across various industries. This attention has also stirred considerable controversy, with existential fears rooted in misunderstanding of the tool.
The Peak of Inflated Expectations
As large language models gained attention, the expectations surrounding their capabilities soared to unprecedented heights. People envisioned a future where AI-generated content would revolutionize industries such as journalism, customer service, content creation, and even personal assistance. However, during this peak phase, it is important to remember that these models are still far from perfect and have their limitations.
The Trough of Disillusionment
Following the peak of inflated expectations, the realities of large language models become more apparent, leading to a period of disillusionment. While these models can generate impressive text, they may also produce inaccuracies, biases, or nonsensical output. Moreover, the ethical concerns surrounding AI and the potential misuse of such technology are amplified during this phase.
As a result, some enthusiasm wanes, and public sentiment can become more skeptical and fearful.
I would say we are here at the moment and have accelerated through the Peak of Inflated Expectations!
While many individuals and companies are finally leveraging this technology to produce enormous value, they remain a minority, and many people are still stuck in the Trough of Disillusionment.
The Slope of Enlightenment
As the initial hype subsides, a more realistic understanding of large language models emerges. Researchers and developers actively work to address the limitations and challenges associated with these models. Improvements are made in areas like fine-tuning techniques, data quality, and mitigating biases.
The focus shifts from exaggerated expectations to refining the technology for practical applications. During this slope of enlightenment, the true potential and value of large language models begin to crystallize.
LLMs cannot solve all of your problems, but they can get pretty close. Following the Pareto principle, these tools can help you create 80% of the value in only 20% of the time, depending on the use case. These models unlock creativity between people and machines in a way we have never seen before, allowing for a quicker ideation process and removing many impediments to problem solving.
The Plateau of Productivity
Eventually, large language models will find their stable ground and start delivering meaningful contributions across multiple industries. With refined deployment strategies, improved understanding of their strengths and limitations, and ethical considerations in place, these models become valuable tools.
They aid in tasks such as content creation, language translation, and chatbots, and even help researchers in scientific discovery. The plateau of productivity signifies the mature stage of large language models, where they seamlessly integrate into our lives and work as supportive tools. When this will happen for everyone remains to be seen, but it may be sooner than we think!
Large language models have undoubtedly caused a significant stir within the field of AI. The hype curve surrounding these models is a natural progression that occurs with any transformative technology. While the initial inflated expectations may have led to some disillusionment, it is essential to acknowledge the remarkable potential these models possess.
As the technology continues to mature, addressing challenges and refining their applications, large language models are poised to become valuable assets that augment human creativity and problem-solving.
By understanding and managing the hype curve, we can navigate the path towards leveraging these powerful tools responsibly and harnessing their true potential for the betterment of society.