The Global State Of Responsible AI In Enterprise
Intro
Today, we explore a new global survey by the Responsible AI Summit that reveals what's really happening behind the scenes of enterprise AI adoption. We delve into the world of AI and GenAI, exploring how their immense potential has prompted organizations to adopt these technologies at a rapid pace. However, this rush often comes at a cost, as many companies fail to adequately address the associated risks.
The allure of AI and GenAI’s potential has led companies and other organizations to dive headfirst into adoption without mitigating risks adequately. This has often resulted in misalignments between technology integration and broader business goals, causing poor return on investment and low stakeholder buy-in. It has also caused well-publicized lapses such as discriminatory lending and leaks of sensitive corporate and consumer data.
According to a recent global survey conducted by Accenture, while organizations continue to focus on integrating AI into their processes, 56% of Fortune 500 companies still cite the technology as a risk factor in their annual reports, up from just 9% a year ago. Among those companies, as many as 74% had to temporarily pause at least one AI or GenAI project over the last year. There are typically four types of AI risks:
Misuse: the exponential growth in AI and GenAI capabilities has resulted in their unethical or illegal exploitation to cause harm in the form of scams, misinformation, and disinformation. For example, the ease of creating deepfakes has enabled social engineering, automated disinformation attacks, scams, financial fraud, identity theft, and the manipulation of election results.
Misapply: hallucinations or inaccurate outputs, which appear superficially convincing, are one of the biggest issues plaguing GenAI, due to the technology inherently prioritizing plausibility over accuracy. For example, in June 2023, ChatGPT wrongly accused Mark Walters, a radio host in Atlanta, Georgia, of defrauding and embezzling funds from a non-profit organization, leading to a lawsuit against OpenAI.
Misrepresent: this includes situations where GenAI output created by a third party is purposefully used and disseminated, despite questions about credibility or authenticity. A good example is the Tesla Cybertruck crash that was widely misrepresented on Reddit in March 2023, before it was confirmed to be a deepfake.
Misadventure: in these situations, users consume and share misinformation/disinformation they believe to be true. In March 2019, the CEO of a UK energy company was duped out of US$243,000 when attackers used AI to clone the voice of the chief executive of the firm's parent company and impersonate him over a phone call.
The number of AI-related incidents, from algorithmic failures to data breaches, has gone up annually over the last few years. This includes a 32% increase in the last two years and a twentyfold increase since 2013. Moreover, 91% of organizations surveyed in Accenture’s study expect a further increase in incidents over the next three years. Also, 45% believe there is a 25% probability of a major AI incident occurring in the next 12 months, which could result in a 30% erosion in total enterprise value.
The advent of GenAI has raised the stakes for RAI simply due to its greater sophistication compared to traditional AI. After all, AI models have evolved from just a few parameters with classical ML to tens of thousands with deep learning, and now to millions, billions, and at times trillions with the LLMs that are the foundation of GenAI. Even though this creates various opportunities for organizations, it also increases the chances of errors and misinformation, which could result in the alienation of customers, regulatory infractions, loss of revenue, and, eventually, damage to the brand's reputation.
For AI and GenAI to be integrated across industries at scale, companies must implement the principles of RAI across the full application life cycle by governing their data, protecting company intellectual property (IP), preserving user privacy, and complying with laws and regulations. One way of doing it is by automating and scaling parts of AI governance, security, and risk management programs to detect and monitor configured guardrails and controls more efficiently. Another way is to adopt a risk-tiered approach that applies different monitoring standards to AI systems based on risk and impact on customers, partners, and employees.
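As a concrete illustration of the risk-tiered approach described above, here is a minimal Python sketch; the tier names, system attributes, and thresholds are hypothetical, chosen for illustration rather than drawn from the survey or any standard:

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    customer_facing: bool      # interacts directly with customers
    handles_pii: bool          # processes personal data
    automated_decisions: bool  # acts without a human in the loop

def risk_tier(system: AISystem) -> str:
    """Map a system's impact attributes to a monitoring tier (illustrative)."""
    score = sum([system.customer_facing, system.handles_pii, system.automated_decisions])
    if score >= 2:
        return "high"    # e.g. continuous monitoring, human review, audit logging
    if score == 1:
        return "medium"  # e.g. scheduled evaluations and drift checks
    return "low"         # e.g. baseline logging only

# Hypothetical systems used to exercise the policy.
chatbot = AISystem("support-chatbot", customer_facing=True, handles_pii=True, automated_decisions=False)
tagger = AISystem("internal-doc-tagger", customer_facing=False, handles_pii=False, automated_decisions=False)
```

In practice the attribute list would be much richer (regulatory scope, model autonomy, data sensitivity), but the core idea is the same: monitoring intensity follows impact, not technology.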
The global RAI market is projected to grow significantly in the coming years. With an estimated value of US$1.1 billion in 2025, it is projected to grow at a robust compound annual growth rate (CAGR) of 40.3% to reach approximately US$6.2 billion by 2030. North America (comprising the U.S. and Canada) is positioned as the largest market for RAI, with expected revenues of approximately US$486.8 million in 2025, accounting for 42.5% of the global market. By 2030, the region’s RAI market is forecast to surge to around US$2.4 billion, driven by strong governmental initiatives, corporate investments in AI ethics, and a well-established technology infrastructure. Meanwhile, the Asia Pacific (APAC) region is anticipated to witness the fastest growth during this period, with a remarkable CAGR of 45.3% from 2025 to 2030. This rapid expansion is fueled by the increasing adoption of AI technologies across industries, the rise of AI-focused policies in countries like China, India, and Japan, and a growing emphasis on aligning AI innovation with ethical and regulatory frameworks. Organizations in North America and Asia are the leaders in terms of responsible AI adoption, followed by those in Europe and Latin America.
According to the Artificial Intelligence Incident Database (AIID) that tracks instances of ethical misuse of AI, the number of AI incidents continues to climb annually. In 2023, 123 incidents were reported, a 32.3% increase from 2022.
AI governance – three lines of defense
Organizations are increasingly adopting three lines of defense to build a robust AI governance framework and effectively counter various AI risks.
1st line: also called responsible AI by design, this stage involves building and integrating various safety protocols including technical controls and other mitigations at the product or service design stage. These controls could range from reviewing the AI model code, developing automated model evaluations, documenting key information about the system, or creating input and output filters on top of AI models. It could also include technical guardrails integrated into systems, or firewalls to stop the leak of sensitive information. These measures give organizations the best chance to mitigate potential AI governance issues before the models leave the lab environment. The first line of defense is usually enough for most organizations, especially those operating in a simpler regulatory environment. However, for highly regulated organizations and those having compliance obligations to various stakeholders, this line of defense may not be sufficient to mitigate responsible AI challenges.
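As a toy illustration of the output filters mentioned above, the sketch below redacts sensitive patterns from model output before it reaches the user; the patterns and labels are illustrative assumptions, not taken from any vendor's actual guardrail product:

```python
import re

# Illustrative sensitive-data patterns; a real guardrail would use far more
# robust detection (named-entity recognition, context-aware classifiers).
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
}

def filter_output(text: str) -> str:
    """Redact sensitive patterns from model output before it reaches the user."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

safe = filter_output("Contact alice@example.com, SSN 123-45-6789.")
```

An input filter works symmetrically, screening prompts before they reach the model; together they form the technical guardrails this first line of defense relies on.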
2nd line: in this stage, a core oversight team, which could be a dedicated AI governance team, an AI ethics committee, or a center of excellence, forms the first point of escalation for any AI-related issue. This team typically consists of DevOps and machine learning (ML) engineers with deep AI expertise, who are responsible for ensuring that AI use across the organization conforms to internal and external standards. Specifically, their tasks include conducting risk assessments across various disciplines, assessing the technology’s impact on external stakeholders, and managing key regulatory compliance requirements such as post-market monitoring activities. What makes this line of defense so important is that the data encountered in real use cases will very often be quite different from that used to train the models in the lab.
3rd line: in this stage, a dedicated internal audit team, an external audit agency, or even a government regulatory body (in extreme situations) monitors and audits high-risk use cases. This is done to get a comprehensive understanding of how the AI models are performing in terms of bias, drift, alerts, model-change audit logs, custom metrics, and explainability. The team regularly shares findings with the development team and may provide additional guidance for risk mitigation. It is usually the business unit (BU) or line-of-business (LOB) leader who has overall responsibility for this line of defense, including the behavior and performance of the AI/ML program.
Market landscape
In 2020, the Berkman Klein Center for Internet & Society at Harvard University published a diverse study involving governments, intergovernmental organizations, companies, professional associations, and advocacy groups. An analysis of this study’s dataset identified eight key themes related to developing and deploying AI responsibly and ethically: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and promotion of human values.
Fairness and non-discrimination: AI systems must be designed and implemented to treat all individuals and groups equally, and not create bias based on race, gender, or socioeconomic status. For example, there have been instances of facial recognition algorithms recognizing a white person more easily than a black person due to the type of data used in training. This can negatively affect people from minority groups, hindering equal opportunity and perpetuating oppression. This theme was present in 100% of the cases studied.
Privacy: As authorities increase their enforcement, rulemaking, and legislation to navigate the complex AI landscape, organizations must ensure adequate adherence to privacy needs while deploying AI systems. This includes limitations on data collection, data quality, purpose specification, use limitation, accountability, and individual participation. This theme was present in 97% of the cases studied.
Accountability: AI systems should be developed and deployed such that responsibility for bad outcomes can be assigned to liable parties. This theme was present in 97% of the cases studied.
Transparency and explainability: Information on where, when, and how the AI systems are being used should be provided, along with allowance for oversight. This theme was present in 94% of the cases studied.
Safety and security: Deployed AI systems should be safe, function as intended, and be resistant to compromise by unauthorized parties. This theme was present in 81% of the cases studied.
Professional responsibility: People developing and deploying AI systems must act with integrity and professionalism. They must consult appropriate stakeholders and consider the long-term effects of the technology before deployment. This theme was present in 78% of the cases studied.
Promotion of human values: AI systems should be developed and deployed keeping humanity’s long-term well-being in mind. This theme was present in 69% of the cases studied.
Human control of technology: This suggests that despite the ubiquity of AI and GenAI systems across industries and organizations, key decisions must remain subject to human review. This theme was present in 69% of the cases studied.
In 2024, a team of researchers at Stanford University in collaboration with Accenture, ran a global survey on respondents from over 1,000 organizations to assess the global state of responsible AI. They found privacy and data governance risks (e.g., the use of data without the owner’s consent or data leaks), to be the leading AI concerns across the globe. Notably, they observed that these concerns were higher in Asia and Europe compared to North America. Fairness risks were selected by 20% of North American respondents, significantly less than respondents in Asia (31%) and Europe (34%).
Global AI regulations
Over the past few years, AI has evolved from being primarily a research or experimental endeavor to becoming a transformative technology with extensive real-world applications. This rapid development has brought issues like model governance, safety, and risk to the forefront, transitioning from niche academic discussions to key priorities in the field. In response, governments and industries globally have introduced regulatory proposals. While the current regulatory environment remains in its early stages—diverse and evolving— the overall trajectory is increasingly evident.
Use of AI to counter the risk of AI
The recent explosion in AI and GenAI is expected to transform almost every aspect of human life, with the extent of impact hard to predict. According to Tristan Harris and Aza Raskin, co-founders of the Center for Humane Technology, AI breakthroughs are gradually beginning to read like science fiction.
For example, according to a March 2023 article published in Scientific American, an AI system connected to functional magnetic resonance imaging (fMRI) technology can now reconstruct what an individual’s brain is thinking and represent it accurately as an image. Another AI model, trained only in the English language, began understanding and answering questions in Persian. Further, ChatGPT trained itself in research chemistry even though that wasn’t part of its targeted training data. This explosive transformation brings with it a high risk of AI-generated fraud, cybersecurity risks, misinformation campaigns, hallucinations, and deepfakes.
A recent study by PwC and the Bank of England found that, despite all the potential for misuse, AI is more effective at fraud detection than manual controls. AI-powered fraud detection solutions usually achieve higher detection accuracy due to their ability to see the larger picture rather than individual transactions. AI can also help reduce false positives by drawing contextual insights from large datasets. Lastly, AI and accelerated computing offer better scalability, capable of handling massive data networks to detect fraud in real time. Below are the key areas where AI is being used to counter AI-generated risk.
Anomaly detection: ML algorithms such as clustering and classification techniques can be deployed to detect unusual patterns or behaviors in datasets. A good example is autoencoders, a type of artificial neural network (ANN), which can decipher the usual/normal patterns in a dataset, and then detect deviations from these patterns to potentially discover AI-generated fraud. Real-time anomaly detection systems and event-driven architectures can perform a similar function in real time, enabling organizations to respond rapidly to emerging threats.
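Since a linear autoencoder with a single hidden unit learns the same subspace as the top principal component, the reconstruction-error idea can be sketched in closed form with NumPy; the data, threshold, and probe point below are synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 'normal' transactions: two strongly correlated features.
normal = rng.normal(0, 1, size=(500, 2))
normal[:, 1] = 0.9 * normal[:, 0] + 0.1 * normal[:, 1]

# A linear autoencoder with one hidden unit learns the top principal
# component; reconstruction error then flags points that break the pattern.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
pc = vt[0]  # top principal direction

def reconstruction_error(x):
    centered = x - mean
    recon = np.outer(centered @ pc, pc)  # encode to 1-D, decode back
    return np.linalg.norm(centered - recon, axis=1)

# Illustrative threshold: 99th percentile of error on the 'normal' data.
threshold = np.quantile(reconstruction_error(normal), 0.99)

# A probe that violates the learned correlation is flagged as anomalous.
probe = np.array([[2.0, -2.0]])
is_anomaly = reconstruction_error(probe)[0] > threshold
```

A nonlinear autoencoder generalizes this beyond straight-line correlations, but the detection logic is identical: learn what normal looks like, then alarm on whatever reconstructs poorly.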
Over the last few years, AI anomaly detection has been widely applied in areas such as cybersecurity (detecting intrusions and threats), fraud detection (identifying fraudulent transactions), healthcare (monitoring patient data for anomalies), industrial systems (predicting equipment failures), and predictive maintenance (anticipating maintenance needs).
Natural Language Processing (NLP): AI and ML are a significant part not only of how data is processed but also of how it is protected. NLP techniques such as stylometric analysis and sentiment analysis are often used to analyze non-numerical, text-based communications for signs of AI-generated content, such as specific linguistic patterns or inconsistencies that may indicate fraudulent intent. Specifically, stylometric analysis helps detect anomalies in writing styles, while sentiment analysis uncovers inconsistencies in the emotions expressed within the text. Companies are increasingly using LLM-powered NLP for rule setting, and to make NLP models better at understanding fraud signals.
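The stylometric side can be sketched by extracting a few classic style features (average word length, sentence length, vocabulary diversity); in practice these would feed a trained classifier rather than be read off raw, and the feature set here is a simplified illustration:

```python
import re
from collections import Counter

def stylometric_features(text: str) -> dict:
    """Extract simple stylometric signals; judging a text as AI-generated
    would require a classifier trained on labeled examples, not raw values."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    counts = Counter(words)
    return {
        "avg_word_len": sum(map(len, words)) / max(len(words), 1),
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "type_token_ratio": len(counts) / max(len(words), 1),  # vocabulary diversity
    }

feats = stylometric_features("The quick brown fox. The quick brown fox jumps!")
```

Anomalies in these signals relative to a sender's historical writing are what stylometric fraud checks look for; a sudden shift in sentence rhythm or vocabulary diversity is one cue that a different author, human or machine, produced the text.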
Image and video analysis: AI-powered image and video analysis tools can identify GenAI-generated deepfakes and other tampered multimedia content, by comparing them to credible sources. Additionally, digital forensics are used to analyze image and video artifacts, while deep learning models detect manipulated content based on inconsistencies in lighting, facial expressions, or other visual cues.
One way to spot a deepfake is to identify its source. Known as source analysis, this process involves using deepfake detection algorithms that analyze file metadata thoroughly and rapidly to ensure a video is completely unaltered and authentic. Another technique is to pinpoint its background, a task that has been made more difficult by AI tools capable of altering backgrounds. AI-powered deepfake detectors can identify altered backgrounds by carrying out granular checks at multiple points to identify changes missed by the human eye.
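One such granular check, temporal consistency of inter-frame differences, can be sketched on a synthetic clip with NumPy; real detectors combine many signals (metadata, lighting, facial landmarks), and the "tampering" here is simulated:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 'clip': 30 frames of 16x16 grayscale drifting smoothly over time.
frames = np.cumsum(rng.normal(0, 0.01, size=(30, 16, 16)), axis=0)

# Simulate tampering: an abruptly altered background patch in frames 12-14.
frames[12:15, 4:12, 4:12] += 1.0

# Granular check: mean absolute inter-frame difference. A spliced segment
# produces spikes at its boundaries, where content changes abruptly.
diffs = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))
flagged = np.where(diffs > 5 * np.median(diffs))[0]  # transitions into/out of the splice
```

The flagged indices mark the transitions into and out of the altered segment, exactly the kind of boundary a human eye skims past but a pixel-level check catches.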
Phishing detection: ML-powered models are being used to detect phishing attempts, such as malicious URLs or suspicious email patterns, even when they are AI-generated. Feature extraction techniques, such as text and URL analysis, can be combined with supervised learning algorithms, such as support vector machines or neural networks, to classify emails or web pages as phishing or legitimate. Moreover, these models can be regularly updated with new data to adapt to the evolving tactics of cybercriminals employing AI-generated phishing content.
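A toy version of the URL feature-extraction step can be sketched with the Python standard library; the features are common heuristics, and the hand-set scoring below stands in for the trained classifier (e.g. an SVM) that the text describes:

```python
import re
from urllib.parse import urlparse

def url_features(url: str) -> dict:
    """Extract common heuristic features from a URL (illustrative set)."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    return {
        "length": len(url),
        "has_at": "@" in url,  # '@' is often used to disguise the real host
        "ip_host": bool(re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host)),
        "num_subdomains": max(host.count(".") - 1, 0),
        "https": parsed.scheme == "https",
    }

def phishing_score(url: str) -> int:
    """Hand-weighted score standing in for a trained classifier's output."""
    f = url_features(url)
    return (
        (f["length"] > 75)
        + 2 * f["has_at"]
        + 3 * f["ip_host"]
        + (f["num_subdomains"] > 2)
        + (not f["https"])
    )

suspicious = phishing_score("http://192.168.4.21/secure-login@verify/account?id=9918273645")
benign = phishing_score("https://example.com/docs")
```

In a real pipeline the same feature vectors would be fed to a supervised model and retrained as attacker tactics evolve, which is what lets the system keep pace with AI-generated phishing content.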
Generative Adversarial Networks (GANs): adversarial AI models called GANs simulate the ‘attack’ experience on other AI models used in fraud detection, to identify weaknesses and increase their robustness against genuine AI-generated fraud. These adversarial models can generate data that mirrors actual fraudulent behavior which consequently enhances and optimizes fraud detection algorithms, and facilitates the detection of even the most subtle deviations from legitimate patterns. This process also enables fraud detection systems to identify and defend themselves against a wider range of threats, including the ones created by GenAI.
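Training a full GAN is beyond a short sketch, but the harden-by-attack idea can be illustrated with a simpler adversarial loop (an FGSM-style perturbation, not a GAN): craft fraud samples that fool a toy detector, then retrain with them correctly labeled. All data and parameters below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, epochs=2000, lr=0.5):
    """Toy logistic-regression fraud detector trained by gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        g = sigmoid(X @ w + b) - y
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

# Synthetic transactions: legitimate (label 0) vs fraudulent (label 1).
X = np.vstack([rng.normal(-1, 0.5, (200, 2)), rng.normal(1, 0.5, (200, 2))])
y = np.concatenate([np.zeros(200), np.ones(200)])
w, b = train(X, y)

# 'Attack': nudge fraud samples against the detector's gradient (FGSM-style)
# so the original model misreads many of them as legitimate.
fraud = X[y == 1]
adv = fraud - 1.2 * np.sign(w)

def detection_rate(w, b, samples):
    return float((sigmoid(samples @ w + b) > 0.5).mean())

before = detection_rate(w, b, adv)

# Harden: add the adversarial samples, correctly labeled, and retrain.
w2, b2 = train(np.vstack([X, adv]), np.concatenate([y, np.ones(len(adv))]))
after = detection_rate(w2, b2, adv)
```

A GAN automates and strengthens this loop by learning a generator that produces ever-harder fraud samples, but the underlying principle is the same: expose the detector to synthetic attacks before real attackers do.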
Final thoughts
As we reflect on the journey of AI and GenAI adoption, it becomes evident that this path is filled with both exciting possibilities and real challenges. The insights from the Responsible AI Summit survey remind us that rushing into technology without a thoughtful approach can lead to significant pitfalls. Organizations must prioritize responsible AI practices to not only minimize risks but also ensure that their innovations truly serve their business goals and the communities they impact.
The rapid growth of the responsible AI market is a hopeful sign that more companies are recognizing the importance of ethical and accountable technology use. It’s not just about cutting-edge tools anymore; it’s about doing the right thing for everyone involved. By embracing these principles, businesses can build trust with their stakeholders and create solutions that benefit society as a whole.
The road ahead may be complex, but with a commitment to responsible practices, we can turn the promise of AI into a powerful force for good. Together, let’s navigate this exciting landscape, ensuring that technology enhances our lives while respecting our values and needs. Stay tuned for part 2!