2024 Radar of AI Security Solutions
Published October 9, 2024
- Cybersecurity
A Booming Market
Since the first publication of the AI Security Solutions Radar, it has become clear that the market for securing Artificial Intelligence is booming. Industries are showing increasing interest in AI security solutions, confirming that a genuinely competitive market exists. This trend is supported by the growing number of players and solutions available to companies. This article therefore continues our market analysis. Since the first version, published in March 2024 and presented at the International Cybersecurity Forum:
- +40% identified players in the new version of the Radar
- +60% suppliers interviewed since March 2024
- 88 solution vendors on our Radar
List of startups identified in 2024 by category:
- Anti-deepfake:
- Sensity, Daon, Sentinel, DuckDuck Goose, Uncov, Buster AI, Reality Defender, Integrity, Trustiens
- AI data protection and confidentiality:
- DynamoFL, Lasso, Harmonic, Nightfall AI, Antimatter, Safebranly.AI, Tortanix, Skyflow, HephAI, Knostic, Credal, Securit, Sarus, Mithril Security
- Detection and response for Machine Learning algorithms:
- Mindgard, Hiddenlayer
- Ethics, explainability and accuracy of processing:
- Arize, Fiddler, Saidot.AI, Vidence, Ruera, Giskard, Humanloop, Arthur, Helicone
- Secure chatbot and LLM filtering:
- RebuffAI, Lakera, Usage Panda, Prompt, Insightcircle, CalypsoAI, Aimon
- Secure collaboration in Machine Learning:
- Aleir, Skyld, Rhino Health, Cosmian, Zama, BeekeeperAI, Opaque Systems
- Assessment of model robustness and vulnerabilities:
- Azure/Counterfit, Robust Intelligence, DeepKeep, Adversa, Numalis, Encord, Protect AI, Trojai, Huntr, Prism Eval, Citadel AI
- AI risk management:
- Flower, Lumenova, Safer AI, Integrate.ai, Monitaur, Trustible, AIShield, FedML, Devron, Holistic AI, Nudge, Calvinrisk, Paloalto, Konfer
- AI regulatory compliance:
- Fairly, Suzan AI, Enzai, Quantpi, Preamble, Credo ai, Naaia
- Synthetic data / Anonymization:
- Hazy, Gretel, Nijta, Private AI, Mostly.AI, Tonic, TripleBlind, Octopize
The observed trends are confirmed. Solutions that meet a priority security need and require only moderate research investment continue to grow in number, offering accessible, quick-to-implement tools. At the same time, categories of solutions built on advanced, complex technologies remain more stable, with their leaders confirming their status.
Strong and substantial movement
We observe significant movement in three main segments of the market: data protection, synthetic data generation for model training, and AI system risk management tools.
Each of these categories has specific dynamics that justify further analysis.
AI risk management is a vibrant market category with a significant number of identified players. AI governance is therefore becoming a central issue, ensuring the transparency and accountability of deployed systems.
Since the entry into force of AI regulations, which aim to govern the development, deployment and use of AI systems, several AI risk management tools have evolved. These solutions now enable companies to comply with regulations such as the EU AI Act while providing clear visibility into their AI systems.
We have therefore decided to refine our Radar (see radar image above) with a targeted analysis of regulatory compliance solutions. These vendors offer platforms that centralize AI systems and their risks while automating regulatory monitoring.
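To make this concrete, here is a minimal sketch of what such a centralized inventory might look like. The `AISystem` record, the simplified risk tiers, and the reporting helper are hypothetical simplifications of ours, loosely inspired by the EU AI Act's risk-based approach, not any vendor's actual data model.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    """Simplified risk tiers, loosely inspired by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


@dataclass
class AISystem:
    """One entry in a hypothetical centralized AI system inventory."""
    name: str
    owner: str
    purpose: str
    risk_tier: RiskTier
    open_findings: list[str] = field(default_factory=list)


class AIInventory:
    """Toy registry that centralizes AI systems and their risks."""

    def __init__(self) -> None:
        self._systems: list[AISystem] = []

    def register(self, system: AISystem) -> None:
        self._systems.append(system)

    def compliance_report(self) -> dict[str, list[str]]:
        """Group system names by risk tier for a quick overview."""
        report: dict[str, list[str]] = {}
        for s in self._systems:
            report.setdefault(s.risk_tier.value, []).append(s.name)
        return report


inventory = AIInventory()
inventory.register(AISystem("resume-screener", "HR", "candidate ranking",
                            RiskTier.HIGH, ["no bias audit yet"]))
inventory.register(AISystem("support-chatbot", "IT", "customer FAQ",
                            RiskTier.LIMITED))
print(inventory.compliance_report())
# {'high': ['resume-screener'], 'limited': ['support-chatbot']}
```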
However, the proliferation of risk management solutions may be excessive given that companies often prefer, at least initially, to manage these risks internally. Indeed, the trend is to adapt traditional governance methodologies to AI systems rather than adopt specialized tools. It is therefore essential to question whether this approach matches real market needs.
Do companies have the skills and resources necessary to manage AI-related risks effectively? If not, they will need to entrust this management to specialized experts. The question therefore remains: will AI risk management solutions succeed in demonstrating their added value?
Data protection is a major issue in the field of AI security. Indeed, the spread of generative AI solutions within companies creates potentially uncontrolled exposure of sensitive data. It is essential to develop robust strategies to protect this data.
Vendors of this type of solution aim, through different mechanisms, to guarantee the confidentiality and integrity of data entering or leaving AI models, whether during development or use.
Among the vendors we have identified, several offer a browser extension that monitors web traffic, acting like a proxy, to prevent data leaks when using LLMs. Others focus on protection against Retrieval-Augmented Generation (RAG) poisoning, or on securing the training data itself: attackers can inject malicious examples into the RAG retrieval database and thereby corrupt the results.
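As an illustration of the first mechanism, here is a minimal sketch of the kind of outbound filtering such a proxy or extension might perform before a prompt leaves the company network. The regex patterns and the `scrub_prompt` helper are illustrative assumptions of ours, not any vendor's implementation.

```python
import re

# Hypothetical patterns a data-leak filter might check before a prompt
# leaves the company network for an external LLM.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}(?: ?[A-Z0-9]{4}){3,7}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{16,}\b"),
}


def scrub_prompt(prompt: str) -> tuple[str, list[str]]:
    """Redact sensitive substrings and report which rules fired."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt, findings


clean, alerts = scrub_prompt(
    "Summarize the contract for alice@example.com, key sk-AbCdEf1234567890XY"
)
print(alerts)  # ['email', 'api_key']
print(clean)
```

A real filter would sit in the browser extension or network proxy and either block the request or redact it transparently before forwarding it to the LLM provider.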
Our Radar (see radar image above) features many players working on data protection, and their number continues to grow.
Companies must now not only take advantage of these tools but also find the experts capable of implementing them effectively to avoid overloading security teams.
Synthetic data has significant but underexploited potential. The evolution of content generation technologies highlights a new priority: the confidentiality of training data.
Synthetic data refers to artificially generated data that imitates the properties of real data. It is often used to increase the size of a real dataset or to protect individuals' privacy through anonymization techniques, particularly in areas where confidentiality is paramount. Following our targeted analysis of regulatory compliance solutions, we decided to highlight synthetic or anonymized data generation solutions in order to obtain a more precise picture of the market.
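As a toy illustration of the idea, the sketch below fits simple per-column statistics on a stand-in "real" dataset and samples synthetic rows from them. Production generators rely on far richer models (GANs, copulas, differential privacy); every name and number here is an assumption of ours.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a sensitive "real" dataset: two numeric columns
# (say, age and monthly income for 1,000 individuals).
real = np.column_stack([
    rng.normal(40, 12, size=1_000),       # age
    rng.lognormal(8, 0.4, size=1_000),    # income
])


def synthesize(data: np.ndarray, n_rows: int) -> np.ndarray:
    """Sample synthetic rows from a Gaussian fitted to the real data.

    The multivariate normal preserves the columns' means and
    correlations, but no synthetic row belongs to a real person.
    """
    mean = data.mean(axis=0)
    cov = np.cov(data, rowvar=False)
    return rng.multivariate_normal(mean, cov, size=n_rows)


synthetic = synthesize(real, n_rows=500)
print("real means:     ", real.mean(axis=0).round(1))
print("synthetic means:", synthetic.mean(axis=0).round(1))
```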
The use of synthetic data (see radar image above) is becoming a preferred way to protect sensitive information while still allowing close-to-real, and therefore effective, training of AI models. This approach makes it possible to reconcile performance with respect for privacy. Today, we see several companies taking an interest, but the situation could change as awareness of these tools grows and conclusive use cases are demonstrated.
Ongoing and future cyber-trends
The field of Artificial Intelligence is marked by continuous technological advances with:
- Emerging techniques, such as deepfake generation
- Increasingly sophisticated attacks, such as poisoning attacks.
We are witnessing an explosion in the creation of AI-generated content. From text to images to video, content generation capabilities are multiplying. Deepfakes pose a particular challenge since they are often indistinguishable from reality, which underlines the need to develop effective detection technologies to prevent abuse. Developing these tools is essential to ensure the reliability of information and the safety of multimedia content. Detection tools can take several approaches, such as using Machine Learning models to spot inconsistencies or checking for invisible digital watermarks in the content. It is also possible to analyze metadata and pixels for signs of manipulation.
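Here is a minimal sketch of one classic pixel- and metadata-analysis heuristic, Error Level Analysis (ELA): regions edited after JPEG compression tend to recompress differently from the rest of the image. The file name, quality setting, and scoring are toy assumptions; this is not a production deepfake detector.

```python
import io

from PIL import Image, ImageChops
from PIL.ExifTags import TAGS


def error_level_analysis(path: str, quality: int = 90) -> int:
    """Recompress an image and measure how much each region changes.

    Authentic JPEGs tend to recompress fairly uniformly; spliced or
    AI-edited regions often stand out with higher error levels.
    Returns the maximum per-channel difference as a rough signal.
    """
    original = Image.open(path).convert("RGB")

    # Re-save at a known JPEG quality, then reload from memory.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)

    # Pixel-wise absolute difference between original and re-saved copy.
    diff = ImageChops.difference(original, resaved)
    return max(hi for _, hi in diff.getextrema())


def inspect_metadata(path: str) -> dict:
    """Surface EXIF fields; missing or inconsistent metadata is a clue."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag, tag): value for tag, value in exif.items()}


# "suspect.jpg" is a placeholder for the image under examination.
score = error_level_analysis("suspect.jpg")
print(f"max error level: {score} (uniform, low values are less suspicious)")
print(inspect_metadata("suspect.jpg"))
```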
Our Radar (see radar image above) identifies a growing number of solutions to combat malicious deepfakes.
The emergence of a new type of advanced attack represents a growing threat to AI systems. Some attacks, for example, aim to extract model weights and then exploit them to deactivate learned security mechanisms, compromising the robustness of AI systems. These attacks can be compared to the lobotomization of neural networks. It is therefore imperative to develop strategies that strengthen models' resistance to the most sophisticated attacks and guarantee their robustness against increasingly complex threats.
Several solutions in our Radar (see radar image above) assess the robustness of models to some extent. Others can also apply the formal mathematical methods proposed by the ISO 24029 series of standards.
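For a flavor of what an empirical robustness assessment can look like (as distinct from the formal methods of ISO 24029), here is a minimal sketch of the classic Fast Gradient Sign Method (FGSM) applied to a toy linear classifier. The model, data, and epsilon are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classifier: logistic regression with fixed random weights.
w = rng.normal(size=10)
b = 0.1


def predict_proba(x: np.ndarray) -> float:
    """Probability of class 1 under the toy model."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))


def fgsm_attack(x: np.ndarray, y: int, epsilon: float) -> np.ndarray:
    """Perturb x by epsilon along the sign of the loss gradient.

    For logistic loss, the gradient w.r.t. the input is (p - y) * w,
    so no automatic differentiation is needed for this toy model.
    """
    grad_x = (predict_proba(x) - y) * w
    return x + epsilon * np.sign(grad_x)


# Robustness check: how often does a small perturbation flip the label?
trials, flips = 1_000, 0
for _ in range(trials):
    x = rng.normal(size=10)
    y = int(predict_proba(x) > 0.5)            # model's own clean label
    x_adv = fgsm_attack(x, y, epsilon=0.3)
    flips += int((predict_proba(x_adv) > 0.5) != y)

print(f"labels flipped by epsilon=0.3 FGSM: {flips}/{trials}")
```

A high flip rate at small epsilon suggests a fragile decision boundary; robustness assessment tools run far more sophisticated versions of this probing at scale.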
However, faced with this growing threat, driven by increasingly complex attacks, are these solutions sufficient to guarantee model robustness?
A maturing market
The AI security market is maturing. Established players continue to strengthen their positions, while new entrants bring fresh perspectives for innovation. Major IT and cybersecurity players are starting to enter the scene, often with more substantial development capabilities. This dynamic could lead to a consolidation of the market, in which these global firms move closer to startups in order to integrate their technologies and expertise into their own AI security offerings.
The future of the AI security market is promising, with the continued growth of startups and the arrival of major players. The next version of our Radar may reveal even more trends and opportunities, opening new horizons for the market for securing Artificial Intelligence models.