Shadow AI, sovereignty, trust: addressing the real challenges of AI

22 October 2025 - Updated on 10 March 2026
Orange Business

The rapid rise of artificial intelligence (AI) in the professional world is raising critical questions, particularly around cybersecurity and trust. At InCyber 2025, a conference session explored the risks associated with Shadow AI, their impact on organizations, and the solutions that can be implemented to address these challenges.

Conference: Shadow AI, sovereignty and trust

What is Shadow AI?

Shadow AI refers to the unauthorized use of artificial intelligence tools by employees in a professional context. Unlike company-approved solutions, these tools can compromise data security and confidentiality.

Shadow AI represents a significant risk for organizations, on an even greater scale than Shadow IT. In France, surveys indicate that between 18% and 50% of employees are using Shadow AI tools in the workplace (Source: Ifop, Data Publica, 2024).

Some surveys suggest that as many as 60% of knowledge workers use unregulated AI tools, a major risk for organizations.

Risks associated with Shadow AI

1. Data loss

Employees engaging in Shadow AI practices, for example by using tools such as ChatGPT, Midjourney, or Gemini, expose organizations to significant data loss risks. Consumer-grade solutions do not guarantee data confidentiality or strict security standards.

Prompts, conversations, and documents shared with these tools may be reused to further train large-scale AI models operating globally, potentially leading to data leakage: the same information may later surface in responses to other users of models trained on it.

2. Unintentional disclosure of information to competitors

Another consequence of using unauthorized AI tools is the inadvertent sharing of valuable data. By entering information into unapproved AI solutions, employees may unintentionally provide sensitive insights that could benefit competing organizations, ultimately weakening their company’s market position.

Today, Shadow AI arguably represents one of the greatest cybersecurity threats associated with the use of artificial intelligence.

Cybersecurity and AI

Artificial intelligence can also be leveraged by attackers to design more sophisticated attack strategies. As AI becomes increasingly integrated into business processes, the resulting expansion of the attack surface requires heightened vigilance.

For example, when AI solutions are developed internally or new projects are launched, integration points with major commercial LLMs can expand the attack surface. At the same time, AI is increasingly being exploited by threat actors to conduct attacks that are more sophisticated, large-scale, and harder to detect.

AI sovereignty

Most of the large language models (LLMs), as well as the image and video models currently in use or emerging on the market, are developed in the United States or China. This reliance on a limited number of AI models raises important sovereignty concerns. Organizations must ensure that the AI solutions they use comply with security and data confidentiality requirements.

Beyond issues of sovereignty and technological resilience, cultural context must also be taken into account. AI models trained primarily on English-language data, particularly those developed in the United States, inherently reflect an Anglo-centric and more specifically American cultural perspective. This influence can clearly be observed in the way certain questions are interpreted and answered by these models.

Artificial intelligence is gradually expanding the scope of cybersecurity and, more broadly, the question of trust.

Trustworthy AI and trust in AI

The concept of trustworthy AI has multiple dimensions. Here is an overview of some of the most significant ones.

The environmental impact of AI

Digital technologies contribute significantly to CO₂ emissions, accounting for approximately 4% to 4.5% of total emissions in France. AI, in particular, is driving an increase in greenhouse gas emissions, with companies such as Google and Microsoft reporting rises of around 30% to 50% between 2019 and 2023.

Building trustworthy AI requires opting for more sustainable AI solutions in order to reduce this environmental impact. For each project or use case, it is therefore essential to consider how to design more frugal AI systems with a lower environmental footprint.

The rollout of the AI Act

The AI Act, the European regulation on artificial intelligence, is the world’s first comprehensive AI regulation. Having entered into force in August 2024, it is now gradually being implemented and will introduce new constraints aimed at maintaining trust across the different areas outlined above.

The regulation establishes a new framework for the development and deployment of AI within organizations. Certain uses will be fully prohibited in Europe, such as social scoring, while others will be strictly regulated, particularly those classified as high-risk use cases.

Transparency, bias and hallucinations

Another key challenge lies in trust in AI. For example, how much trust can be placed in AI-driven automated decisions that are not explainable and offer no transparency regarding the data used to train the model? Trust in AI fundamentally depends on the transparency of decision-making processes.

Organizations must be able to explain how decisions are made by AI systems in order to maintain user trust.

AI models can also be biased, leading to unfair decisions and undermining trust in organizations. Highly biased AI systems may reproduce existing inequalities or even amplify societal disparities.

Finally, hallucinations, plausible-sounding but incorrect AI-generated outputs ranging from minor inaccuracies to serious errors, can further compromise the reliability of results. This raises critical questions about the level of trust that can be placed in such systems.

Combating Shadow AI: what solutions are available?

1. Establish clear rules and raise employee awareness

Organizations often tend to reduce AI-related topics to questions of individual productivity. While this dimension is important, security considerations must remain a primary focus.

Companies should therefore establish clear policies governing the use of AI tools. This may include employee charters to be signed, as well as awareness and training sessions highlighting the risks associated with Shadow AI and the collective responsibility to protect corporate data. This is particularly critical for personal data subject to GDPR, as well as information covered by industrial secrecy.
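As an illustration of what such a policy can look like in practice, here is a minimal Python sketch of a redaction filter that could sit between an employee and any external AI tool, masking personal data before a prompt leaves the company. The patterns and placeholder tags are hypothetical assumptions for the example; a real deployment would rely on a dedicated DLP or PII-detection service rather than hand-written regular expressions.

```python
import re

# Naive, illustrative patterns only. A production deployment would use a
# dedicated PII-detection or DLP service; note that names, for instance,
# require NER-based detection and are not caught by these regexes.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"(?:\+33[ .]?|0)[1-9](?:[ .-]?\d{2}){4}\b"),  # French numbers
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}(?: ?[A-Z0-9]{4}){3,7}\b"),
}

def redact(prompt: str) -> str:
    """Replace detected personal data with placeholder tags before the
    prompt is sent to any external AI tool."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    text = "Contact Jean Dupont at jean.dupont@example.com or +33 6 12 34 56 78."
    print(redact(text))
    # Contact Jean Dupont at [EMAIL REDACTED] or [PHONE REDACTED].
```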

2. Use secure solutions

It is essential to empower employees by providing secure AI tools that comply with corporate standards and policies.

Failing to do so means losing twice. First, organizations miss out on the opportunities AI offers to improve employee productivity. Second, they risk alienating the new generation entering the workforce, which is already accustomed to using AI tools and would struggle to understand why such technologies are not available in a professional environment. At one time, it would have been unthinkable not to allow the use of Excel at work. AI is likely to follow a similar trajectory.

Secure AI platforms enable organizations to leverage AI models within a controlled and compliant framework, thereby significantly reducing the risks associated with Shadow AI.
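To make this pattern concrete, the sketch below shows how employee requests might be routed through a company-approved gateway that can authenticate users, log usage, and enforce an allow-list of models. The gateway URL, model names, and response format are hypothetical assumptions for illustration, not a description of any specific product.

```python
import requests  # assumed available in the environment

# Hypothetical internal gateway and allow-list; illustrative values only.
GATEWAY_URL = "https://ai-gateway.internal.example.com/v1/chat"
APPROVED_MODELS = {"internal-llm-fr", "internal-llm-en"}

def ask_ai(model: str, prompt: str, user_token: str) -> str:
    """Send a prompt through the company-approved gateway, which can
    authenticate the user, log the request, and enforce data policies."""
    if model not in APPROVED_MODELS:
        raise ValueError(f"Model '{model}' is not on the approved list")
    response = requests.post(
        GATEWAY_URL,
        json={"model": model, "prompt": prompt},
        headers={"Authorization": f"Bearer {user_token}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["answer"]
```

The key design choice is that employees never call consumer AI tools directly: every request passes through an endpoint the organization controls, where policy can be applied centrally.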

3. Deploy AI at scale

Deploying AI also provides an opportunity to revisit existing processes, make them more efficient, or in some cases fundamentally disrupt them. AI can also be used to create new products or services, embedded within other offerings, physical devices, web applications, or mobile applications. This represents a second phase of AI adoption, following the initial step of empowering individual employees to use AI tools.

For this type of deployment, key considerations revolve around the selection and qualification of use cases. Each project must take into account six levels of risk, or points of attention (a minimal qualification sketch follows the list):

  • Cybersecurity
  • Sovereignty and cultural considerations
  • Transparency and explainability
  • Hallucinations and bias
  • Environmental impact
  • Regulatory compliance
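As a purely illustrative aid for this qualification step, the Python sketch below scores a use case on the six dimensions above. The scales, thresholds, and example scores are assumptions made for the example, not an established methodology.

```python
from dataclasses import dataclass, fields

@dataclass
class UseCaseRisk:
    """Scores from 1 (low risk) to 5 (high risk) for each of the six
    dimensions listed above; thresholds below are illustrative."""
    cybersecurity: int
    sovereignty_culture: int
    transparency_explainability: int
    hallucinations_bias: int
    environmental_impact: int
    regulatory_compliance: int

    def qualify(self) -> str:
        scores = [getattr(self, f.name) for f in fields(self)]
        if max(scores) >= 4:
            return "high risk: requires formal review before any deployment"
        if sum(scores) >= 15:
            return "medium risk: mitigation plan needed"
        return "low risk: standard controls apply"

# Example: a customer-facing chatbot handling personal data
chatbot = UseCaseRisk(3, 2, 3, 4, 2, 4)
print(chatbot.qualify())  # high risk: requires formal review before any deployment
```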

It is worth recalling that 80% of AI projects of this nature fail to scale. One of the main reasons is the reluctance of some organizations to transform their processes. Building a proof of concept alone is not sufficient to scale AI. Achieving this requires a willingness to transform the organization, rethink processes, and adapt ways of working across the impacted teams and departments.

The introduction of an AI-powered feature must also be accepted by the employees who will be using it. They need to understand its purpose and value in order to integrate it properly, just like any other product functionality. Only then can organizations reduce the proportion of AI projects that fail to scale.

This highlights the dual challenge of successfully deploying artificial intelligence within organizations. On one hand, companies must work on empowering employees while simultaneously combating Shadow AI and raising awareness of associated risks. On the other hand, they must identify use cases that deliver value at scale, while carefully assessing the level of risk associated with each of them.

Training employees on the limitations and risks of AI tools is therefore essential. This includes raising awareness of bias, hallucinations, and best practices for protecting corporate data.

Today, Shadow AI represents a major challenge for organizations of all sizes. However, with the right strategies in place, the associated risks can be significantly reduced. More importantly, by investing in secure solutions and raising employee awareness, organizations can harness the benefits of AI while protecting their data and reputation. Trust and sovereignty must remain at the core of any AI strategy to ensure a secure and sustainable digital future.

