Responsible AI has become a strategic business priority for Indian enterprises, evolving from an ethics concern into a driver of brand trust and long-term stakeholder value. Reflecting this shift, Nasscom released its State of Responsible AI in India 2025 report at the Responsible Intelligence Confluence in New Delhi.
Based on a survey of 574 senior leaders from enterprises, SMEs, and startups conducted between October and November 2025, the report highlights strong progress in Responsible AI (RAI) adoption: around 30% of organisations have mature RAI practices, while a further 45% are actively implementing formal frameworks—a clear improvement over 2023 levels.
The findings reveal a strong link between AI maturity and responsible adoption. Nearly 60% of companies confident in scaling AI responsibly already have mature RAI frameworks in place. Large enterprises lead with 46% maturity, while SMEs and startups are steadily advancing at 20% and 16%, respectively. Among sectors, BFSI tops the list with 35% maturity, followed by TMT at 31% and healthcare at 18%, with many organisations actively strengthening their RAI frameworks.
Sangeeta Gupta, Senior VP & Chief Strategy Officer, Nasscom, said, "As AI becomes deeply embedded in critical decisions across finance, healthcare, and public services, responsible AI is no longer optional; it is foundational to building trust, ensuring accountability, and sustaining innovation. The real measure of India's AI leadership will not just be in the scale of adoption, but in how responsibly and inclusively these systems are designed and deployed. For businesses, this means moving beyond compliance checkboxes to embedding responsible practices across the entire AI lifecycle. With the right investments in governance, talent, and transparent frameworks, India has the opportunity to set global benchmarks for trustworthy AI that serves society at large."
Workforce enablement is a major priority, with nearly 90% of organisations investing in AI sensitisation and training. Businesses show the highest confidence in meeting data protection obligations, reflecting stronger privacy frameworks, while monitoring-related compliance continues to be a key area for improvement. Accountability for Responsible AI remains largely top-down, with 48% placing ownership at the C-suite or board level, though 26% now assign it to departmental heads. AI ethics boards are also gaining momentum, with 65% of mature organisations having formal committees in place.
Despite this progress, challenges persist. Hallucinations (56%), privacy violations (36%), lack of explainability (35%), and bias (29%) are the most common risks. Key barriers include poor data quality (43%), regulatory uncertainty (20%), and talent shortages (15%), with SMEs particularly constrained by high implementation costs.
As AI systems become more autonomous, Responsible AI is emerging as a critical enabler of scalable and trusted adoption. Mature organisations report greater readiness for emerging technologies such as agentic AI, though many will need to upgrade their frameworks. Continued investment in skills, governance, data quality, and monitoring will be vital for building trustworthy, human-centric AI at scale.