In January 2023, just two months after its launch, the ChatGPT website was already receiving an average of 13 million unique visitors per day, with daily traffic growing by 3.4%. By April, the site was welcoming one billion monthly visits, including nearly 100 million active users. Why do generative AI and Large Language Models (LLMs) generate such excitement? What impact will they have on businesses? Is generative AI merely an evolution, or a genuine revolution? Let’s take a closer look.
To better understand how generative AI will transform business practices, it is essential to grasp what it really is. The term “generative AI” suggests that it is about creating content. This is only partly true, since its functioning goes both ways: LLMs are not only capable of generating content but also of “digesting” it. This second aspect is crucial, as it sheds light on how they work and the risks they may pose for companies.
It is also important to distinguish between different types of generative AI. The best known, thanks to ChatGPT, is text-based generative AI, which produces and analyzes text. Following the same principle, other generative AIs work in the visual, musical, or audio domains. In this article, we will focus on the most mature of these technologies—text-based generative AI, or LLMs—whose impact on the business world is both significant and impressive.
Generative AI: evolution or revolution?
Contrary to what many might think, generative AI is not all that recent. It began to emerge with the rise of deep neural networks. As early as 2014, Generative Adversarial Networks (GANs) were already capable of producing images that looked like real photographs. Generative AI is therefore the direct evolution of decades of research in artificial intelligence. As Yann LeCun, Meta’s Chief AI Scientist, reminds us: “ChatGPT and other large language models did not appear out of nowhere. They are the result of decades of contributions from many people.”
The real trigger, however, came when OpenAI made ChatGPT available to the general public in November 2022. Why? Because it raised awareness of the potential impact of such a tool.
From a simple technological innovation…
From a technological standpoint, LLMs are more of an evolution than a revolution, though a remarkable one, because they combine up to four learning modes within a single AI:
- Unsupervised mode (embedding): assigns each word a vector (a series of numbers) that best defines it within a mathematical space, known as a vector space.
- Self-supervised mode: predicts the completion of a sentence fragment with what is statistically most likely.
- Supervised mode (equivalent to OpenAI’s InstructGPT layer): teaches the model how to answer questions.
- Reinforcement mode: ranks responses by order of preference and selects the most appropriate one, with human feedback playing a central role.
An additional safety layer is also integrated to filter out prohibited questions and answers.
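The first of these modes, embedding, can be illustrated with a toy sketch: words are mapped to vectors, and similar words end up close together in the vector space. The three-dimensional vectors below are invented for the example; real LLMs learn embeddings with hundreds or thousands of dimensions.

```python
import math

# Toy embeddings: each word is assigned a vector in a shared vector space.
# These values are invented for illustration only.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# "king" and "queen" point in nearly the same direction,
# while "apple" sits in a different region of the space.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # close to 1
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # much lower
```

This geometric closeness is what lets the later training stages treat meaning statistically: words that appear in similar contexts receive similar vectors.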
Two other technical advances also played a crucial role. The first is the use of transformer technology, which, ironically, was invented in 2017 by… Google. In its paper “Attention Is All You Need,” Google Research described how to build a transformer to train deep learning models using self-supervised learning.
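The core operation of that paper, scaled dot-product attention, can be sketched in a few lines of plain Python. The query, key, and value vectors below are invented toy data; the point is only to show the mechanism, not a production implementation.

```python
import math

def softmax(xs):
    """Turn raw scores into positive weights that sum to 1."""
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: each output is a weighted average of
    the value vectors, weighted by how well the query matches each key."""
    d_k = len(keys[0])
    outputs = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in keys]
        weights = softmax(scores)
        outputs.append([sum(w * v[j] for w, v in zip(weights, values))
                        for j in range(len(values[0]))])
    return outputs

# A query aligned with the first key mostly "attends" to the first value.
out = attention(queries=[[10.0, 0.0]],
                keys=[[10.0, 0.0], [0.0, 10.0]],
                values=[[1.0, 0.0], [0.0, 1.0]])
```

In a real transformer this operation runs over thousands of tokens at once and across many parallel "heads", which is precisely what makes large-scale GPU infrastructure indispensable.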
The second advance is closely tied to the first: the ability to run transformers on massive computing infrastructures with enormous training datasets. At the heart of these infrastructures are Nvidia’s H100 GPUs (Graphics Processing Units), which provide the computing power required to make such large-scale training possible.
… to a revolution in usage
While the main innovations behind generative AI lie in the combination of four learning modes and transformers, the real revolution is in its use cases. LLMs mark the beginning of a new era in artificial intelligence and human–machine interaction, one that could be as transformative as the internet itself. From now on, anyone can interact with a computer using natural language, and the machine will be able to interpret and execute the user’s request.
Generative AI can create new data from a training corpus, a prompt, and a context. In text, for example, GPT-4 can provide advice, draft plans, write articles and product sheets, and even generate code.
Unmatched computing power
If generative AI is not, strictly speaking, a technical breakthrough, why all the excitement? Because its unparalleled language processing power opens infinite possibilities for users—and, by extension, for companies. Generative AI pushes the boundaries of what machines can learn and create: new product designs, personalized marketing content, sales pitches enriched with vast amounts of information, support for medical diagnosis, and even forensic analysis.
Nvidia, the engine behind generative AI
Generative AI would not have reached this level of performance without Nvidia—and more specifically, its H100 (and earlier A100) GPUs, which make it possible to train transformers, a class of deep neural network. The performance of AI depends on a precise balance between the volume of training data and the complexity of the model. By delivering unprecedented vector computing power, Nvidia has enabled the rise of LLMs trained on massive datasets.
How generative AI will change the game
It is difficult to predict the full scope of new professional use cases enabled or enhanced by generative AI, as the field of possibilities is so vast. What is clear, however, is that conversational AI must remain a tool. A machine has no intention or genuine understanding, and, for example, cannot truly distinguish between what is ethical and what is not.
It is also important to recognize that even the best LLMs, in general use, can have error rates exceeding 30%. This rate can be significantly reduced by narrowing the training scope or refining the usage context. For this reason, artificial intelligence does not replace humans; it takes over certain tasks and helps deliver efficiency gains.
That said, there are at least two domains where generative AI is likely to be highly disruptive. First, marketing, by enabling personalization at scale. Second, search engines, as generative AI is able to provide precise, documented answers, pushing players like Google to rethink their business models.
The impact of generative AI on business
Companies adopting generative AI stand to gain higher productivity and lower costs, two key factors of competitiveness. It also gives them the ability to generate new content quickly in response to competitive pressures. One thing is certain: generative AI will transform the way we create and consume content, making it a strategic differentiator for businesses.
At the same time, generative AI raises a number of questions and challenges. For example, what about copyright and intellectual property rights over the works used in training? In many cases, it remains unclear exactly which data LLMs were trained on, creating ethical issues for the companies that rely on them.
The limits of generative AI development
Some limitations are already emerging, starting with a methodological constraint. Models like GPT-4 require a very large share of the content available on the internet for training. Increasing their power and complexity would demand access to even more content (as explained by Vapnik’s law), which restricts their development potential. In addition, many valuable resources for training are not published on the web and therefore remain out of reach for generative AI. As a result, general-purpose LLMs alone cannot provide exhaustive and relevant answers in most specialized fields. Over the long term, this will require tailoring the training of certain LLMs on domain-specific corpora and accelerating the digitization of data.
Another challenge lies in keeping content current. For instance, ChatGPT currently does not include data beyond September 2021. While plugins can fetch content from the web, the retrieved data is integrated either into the prompt or the context—but not into the training corpus itself. This restricts the model’s ability to analyze the most recent information. Updating training datasets is therefore a real limitation. To this, we must add the risks of bias and fake news within training data, which can reduce the reliability of outputs.
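The way such a plugin injects fetched data can be sketched as follows. The helper name and the passages are hypothetical; the key point is that the retrieved text lands in the prompt for a single request, not in the training corpus.

```python
def build_prompt(question, retrieved_passages):
    """Sketch of plugin-style retrieval: fetched passages are pasted into
    the prompt (the model's context for this one request), rather than
    being added to the model's training data."""
    context = "\n\n".join(
        f"[Source {i}] {passage}"
        for i, passage in enumerate(retrieved_passages, start=1)
    )
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

# Hypothetical passages fetched from the web at request time.
prompt = build_prompt(
    "What did the company announce this quarter?",
    ["Press release: the company announced a new product line this quarter."],
)
```

Because the model only sees this text for the duration of the request, the next conversation starts again from the frozen training cutoff, which is exactly the update limitation described above.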
As with any technology, responsible and ethical use of generative AI is essential to maximize its benefits while minimizing risks. This is why companies should favor secure environments such as Azure OpenAI, designed for professionals, rather than public-facing products like ChatGPT.