RAG has become a key concept for anyone looking to create real value with generative AI based on their organisation’s data. Let’s take a closer look at what it means and explore the best enterprise approaches to make a real impact, interview-style 😉

What does RAG mean, and when is it used?
Mick Levy: RAG stands for “retrieval augmented generation,” which in French translates to “génération augmentée par extraction de données.” It is the most pragmatic way to optimise the output of generative AI by grounding it in data specific to the organisation.
Generative AIs, especially large language models (LLMs), are designed to provide a plausible answer to any question, across all kinds of subjects, but not necessarily a correct one. These plausible but incorrect answers are what we call hallucinations, and they are currently the number one obstacle to the widespread adoption of AI in organisations.

When using generative AI in a public institution or a company, the model needs to be able to answer questions reliably and in ways that are specifically relevant to that organisation. To make this possible, these very general models must be narrowed down and aligned with the organisation’s needs and internal data. That is exactly what RAG allows us to do.
It works by connecting one or more reference documents to the language model and instructing it to focus its answers primarily on that knowledge base. In doing so, the organisation’s own data carries more weight than the general knowledge the model was originally trained on.
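To make this mechanism concrete, here is a minimal sketch of that flow in Python, assuming a toy keyword-overlap retriever and a placeholder `call_llm` function standing in for whatever model the organisation uses; none of these names refer to a specific product.

```python
# Minimal RAG sketch: a toy keyword-overlap retriever plus a grounded prompt.
# `call_llm` is a placeholder for whichever language model the organisation uses.

def score(question: str, document: str) -> float:
    """Crude relevance score: share of question words found in the document."""
    q_words = set(question.lower().split())
    d_words = set(document.lower().split())
    return len(q_words & d_words) / max(len(q_words), 1)

def retrieve(question: str, knowledge_base: list[str], top_k: int = 3) -> list[str]:
    """Return the top_k most relevant documents for the question."""
    ranked = sorted(knowledge_base, key=lambda doc: score(question, doc), reverse=True)
    return ranked[:top_k]

def build_prompt(question: str, passages: list[str]) -> str:
    """Ask the model to answer primarily from the retrieved passages."""
    context = "\n\n".join(f"- {p}" for p in passages)
    return (
        "Answer the question using primarily the context below. "
        "If the context does not contain the answer, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

def call_llm(prompt: str) -> str:
    """Placeholder for the actual model call (cloud API or on-premises model)."""
    raise NotImplementedError

if __name__ == "__main__":
    kb = [
        "Refunds are processed within 14 days of receiving the returned item.",
        "Support tickets are answered within one business day.",
    ]
    question = "How long does a refund take?"
    prompt = build_prompt(question, retrieve(question, kb))
    print(prompt)  # replace with print(call_llm(prompt)) once wired to a model
```

In production this toy retriever is typically replaced by semantic search over embeddings stored in a vector database, but the grounding mechanism stays the same: the organisation's documents are injected into the prompt so they outweigh the model's general training.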
Why is RAG the most effective approach to improve the results delivered by generative AI?
Mick Levy: RAG is not the only method for tailoring the output of large language models. In fact, there are four main approaches.
- First, prompt engineering, which involves refining how we communicate with the model and providing examples of the expected answers so it can draw inspiration from them. This is often referred to as one-shot or few-shot prompting (a minimal sketch follows after this list).
- Second, the RAG approach, where the model is given documents and asked to base its responses primarily on the information contained in those documents.
- Third, fine-tuning, which goes deeper into the model’s architecture and involves adjusting the weighting of different parameters that influence how it works.
- Finally, building your own generative AI model, which is an option reserved for a very small number of players worldwide and remains out of reach and unnecessary for most organisations.
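As announced above, here is a minimal sketch of the first approach, few-shot prompting, under the same assumptions as the earlier RAG sketch: the worked examples and the prompt format are illustrative only, and no model-specific API is implied.

```python
# Few-shot prompting sketch: steer the model with examples of the expected answers.
# No organisational documents are retrieved; the guidance lives entirely in the prompt.

EXAMPLES = [
    ("Summarise: 'The meeting is moved to Tuesday at 10am.'",
     "Meeting rescheduled to Tuesday, 10am."),
    ("Summarise: 'Invoice 4512 was paid on 3 March.'",
     "Invoice 4512 paid on 3 March."),
]

def build_few_shot_prompt(task: str) -> str:
    """Prepend worked examples so the model imitates their style and format."""
    shots = "\n\n".join(f"Input: {q}\nOutput: {a}" for q, a in EXAMPLES)
    return f"{shots}\n\nInput: {task}\nOutput:"

if __name__ == "__main__":
    print(build_few_shot_prompt("Summarise: 'The audit report is due on Friday.'"))
```

RAG, by contrast, injects retrieved documents into the prompt rather than hand-written examples, as in the sketch shown earlier.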
For companies looking to get the most out of their data assets and create specific value, RAG is clearly the best choice. It is the most effective, the fastest and the most reliable approach; in other words, it is the one that delivers the highest return on investment.
Can you give some examples of how RAG is used in practice?
Mick Levy: There are countless examples. You simply need to look at any activity that relies on knowledge bases or document repositories. Take customer support, for instance, where agents need quick and easy access to information from the knowledge base and from previously resolved support tickets.
In marketing, RAG can assist with writing new content by drawing on all existing materials. In product teams, it can support the creation of product sheets based on those already written. The use cases are endless.
Why is RAG so important for standing out in your field?
Mick Levy: This is the key question to ask from the start: why do I want to optimise the model I'm using, for what purpose, and to achieve what outcome? Generative AI, when enhanced with RAG, is a powerful tool for creating value, but only if you have clearly identified the use cases and the reasons behind choosing them.
To gain a competitive edge through generative AI, you need to focus on use cases that are specific to your organisation. That means considering how your company operates, its processes, its needs, its customers and its purpose. This can only be achieved by leveraging the organisation's own data, and RAG is the first step to making that possible.
What are the main challenges to overcome for an effective RAG approach?
Mick Levy: The first challenge involves IT architecture. It is essential to ensure proper communication between the language model and the organisation’s document repositories. There are existing solutions for this, whether hosted in the cloud or on-premises.
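By way of illustration only, the sketch below names the pieces such an architecture typically has to wire together: the document repositories to index, a vector store, and a model endpoint, each of which may sit in the cloud or on-premises. Every name and URL in it is a hypothetical placeholder.

```python
# Hypothetical description of a RAG architecture: which document repositories feed
# the index, where the vector store lives, and which model endpoint answers queries.
from dataclasses import dataclass, field

@dataclass
class DocumentSource:
    name: str            # e.g. an intranet space or a file share (placeholder)
    location: str        # URL or path; placeholder values only
    formats: list[str] = field(default_factory=lambda: ["pdf", "docx", "html"])

@dataclass
class RagDeployment:
    sources: list[DocumentSource]
    vector_store: str    # "cloud" or "on-premises"
    model_endpoint: str  # where the LLM is served

deployment = RagDeployment(
    sources=[
        DocumentSource("support-knowledge-base", "https://intranet.example/kb"),
        DocumentSource("resolved-tickets", "/mnt/shares/tickets"),
    ],
    vector_store="on-premises",
    model_endpoint="https://llm.example/api",
)

if __name__ == "__main__":
    for source in deployment.sources:
        print(f"Index {source.name} from {source.location} ({', '.join(source.formats)})")
```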
While the technological challenge is far from insurmountable, the real issue lies in the data itself: the documents used by the RAG system. These need to be well managed and of high quality. That means the information must be reliable, verified, relevant, up to date and, as much as possible, non-contradictory.
This is a major challenge that calls for dedicated governance of unstructured data. The difficulty comes from the fact that for decades, organisations have mostly focused on structured data, mainly numbers arranged in rows and columns within enterprise databases.
Despite this experience, many quality issues still affect structured data today. And with large language models, unstructured data must also be taken into account. According to an IDC study, unstructured data makes up 90 percent of all available data. The challenge is therefore enormous. But so is the potential for value creation, and RAG is the first step toward unlocking it.
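To give a hedged idea of what that governance can look like at ingestion time, the sketch below filters out empty, stale and duplicate documents before they reach the RAG index; the freshness threshold and the document fields are assumptions made for the example.

```python
# Simple ingestion quality gate: keep only documents that are non-empty, recent
# enough, and not exact duplicates, before they are indexed for RAG.
from datetime import date, timedelta

MAX_AGE = timedelta(days=365 * 2)  # assumed freshness threshold: two years

def quality_gate(documents: list[dict], today: date) -> list[dict]:
    """Each document is a dict with 'text' and 'last_updated' (a date)."""
    seen_texts = set()
    kept = []
    for doc in documents:
        text = doc["text"].strip()
        if not text:
            continue                      # drop empty documents
        if today - doc["last_updated"] > MAX_AGE:
            continue                      # drop stale documents
        if text in seen_texts:
            continue                      # drop exact duplicates
        seen_texts.add(text)
        kept.append(doc)
    return kept

if __name__ == "__main__":
    docs = [
        {"text": "Refund policy: 14 days.", "last_updated": date(2024, 5, 1)},
        {"text": "Refund policy: 14 days.", "last_updated": date(2024, 1, 1)},  # duplicate
        {"text": "Old pricing grid.", "last_updated": date(2015, 3, 1)},        # stale
    ]
    print(len(quality_gate(docs, today=date(2025, 1, 1))))  # -> 1
```

Even a gate this simple keeps the model from grounding its answers in outdated or contradictory documents, which is exactly the kind of quality issue described above.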