
With 95% of enterprises facing AI-related incidents, Infosys research reveals a wide gap between AI adoption and responsible AI readiness, exposing most enterprises to reputational risk and financial loss.
To gauge the state of responsible AI (RAI) implementation across enterprises, particularly with the advent of agentic AI, Infosys surveyed over 1,500 business executives and interviewed 40 senior decision-makers across Australia, France, Germany, the UK, the US, and New Zealand. The findings show that while 78% of companies see RAI as a business growth driver, only 2% have adequate RAI controls in place to safeguard against reputational risk and financial loss.
The report analysed the effects of risks from poorly implemented AI, including privacy breaches, ethical violations, bias or discrimination, regulatory non-compliance, and inaccurate or harmful predictions. It found that 77% of organisations reported financial loss, and 53% suffered reputational damage, from such AI-related incidents.
Key findings include:
AI risks are widespread and can be severe
- 95% of C-suite and director-level executives report AI-related incidents in the past two years.
- 39% characterise the damage experienced from such AI issues as “severe” or “extremely severe”.
- 86% of executives aware of agentic AI believe it will introduce new risks and compliance issues.
Responsible AI (RAI) capability is patchy and insufficient for most enterprises
- Only 2% of companies (termed “RAI leaders”) met the full standards of the Infosys RAI capability benchmark, termed “RAISE BAR”; 15% (“RAI followers”) met three-quarters of the standards.
- The “RAI leader” cohort experienced 39% lower financial losses and 18% lower severity from AI incidents.
- Leaders do several things better to achieve these results, including developing improved AI explainability, proactively evaluating and mitigating bias, rigorously testing and validating AI initiatives, and maintaining a clear incident response plan.
Executives view RAI as a growth driver
- 78% of senior leaders see RAI as aiding their revenue growth, and 83% say that future AI regulations would boost, rather than inhibit, the number of future AI initiatives.
- However, companies believe they are underinvesting in RAI by 30% on average.
With the scale of enterprise AI adoption far outpacing readiness, companies must urgently shift from treating RAI as a reactive compliance obligation to embracing it proactively as a strategic advantage. To help organisations build scalable, trusted AI systems that fuel growth while mitigating risk, Infosys recommends the following actions:
- Learn from the leaders: Study the practices of high-maturity RAI organisations that have already faced diverse incident types and developed robust governance.
- Blend product agility with platform governance: Combine decentralised product innovation with centralised RAI guardrails and oversight.
- Embed RAI guardrails into secure AI platforms: Use platform-based environments that enable AI agents to operate within preapproved data and systems.
- Establish a proactive RAI office: Create a centralised function to monitor risk, set policy, and scale governance with tools like Infosys’ AI3S (Scan, Shield, Steer).