Nearly 60% of Indian Firms Confident in Scaling AI Have Mature Responsible AI Frameworks Despite Ongoing Challenges, Nasscom Report Reveals

In a significant development for the Indian business landscape, a recent report by Nasscom reveals that nearly 60% of Indian firms that are confident in scaling artificial intelligence (AI) responsibly have established mature Responsible AI (RAI) frameworks. This finding underscores a growing recognition among businesses of the importance of ethical and accountable AI practices, even as challenges persist on the path to comprehensive adoption.

The report, titled “State of Responsible AI in India 2025,” was unveiled during Nasscom’s Responsible Intelligence Confluence held in New Delhi. It is based on a survey conducted between October and November 2025, which gathered insights from 574 senior executives across large enterprises, small and medium enterprises (SMEs), and startups involved in the commercial development and use of AI in India. The findings paint a picture of a rapidly evolving landscape where organizations are increasingly aware of the need for responsible AI practices, yet face significant hurdles that could impede their progress.

At the heart of the report is the assertion that RAI frameworks are essential for guiding the ethical, safe, and accountable design, development, and deployment of AI systems. These frameworks not only help mitigate risks associated with AI but also foster trust among stakeholders, including customers, employees, and regulators. However, the survey highlights several persistent gaps that threaten to slow the safe adoption of AI technologies.

One of the most pressing issues identified in the report is the lack of high-quality data, cited by 43% of respondents as the primary barrier to effective implementation of RAI frameworks. High-quality data is crucial for training AI models that are accurate, reliable, and free from bias. Without it, organizations risk deploying AI systems that may produce misleading or harmful outcomes. This challenge is particularly acute in sectors where data availability is limited or where data privacy concerns complicate access.

Regulatory uncertainty also looms large, with 20% of respondents indicating that unclear regulations hinder their ability to implement responsible AI practices. As AI technologies evolve rapidly, regulatory frameworks often lag behind, creating an environment of ambiguity that can stifle innovation. Large enterprises and startups alike express concern over the lack of clear guidelines, which can lead to hesitancy in adopting AI solutions or investing in necessary infrastructure.

Moreover, the shortage of skilled personnel remains a significant hurdle, with 15% of respondents highlighting this issue. The demand for professionals who possess both technical expertise in AI and a deep understanding of ethical considerations is growing. Organizations are increasingly recognizing that building a workforce capable of navigating the complexities of AI requires substantial investment in training and development.

Despite these challenges, the survey indicates a positive trend in the maturity of RAI practices among Indian businesses. Approximately 30% of firms reported having fully mature RAI practices, while 45% are actively implementing formal frameworks. This shift from basic awareness to structured strategies and policies reflects a growing commitment to responsible AI among Indian organizations.

Interestingly, the report reveals a correlation between AI maturity and the robustness of RAI frameworks. Firms with stronger AI capabilities tend to have more developed RAI practices, suggesting that as organizations enhance their technological prowess, they also become more attuned to the ethical implications of their AI deployments. This relationship highlights the importance of integrating responsible practices into the core of AI strategy rather than treating them as an afterthought.

When examining the maturity of RAI practices across different sectors, the report identifies the banking, financial services, and insurance (BFSI) sector as the most advanced, with 35% of firms reporting mature RAI frameworks. This is followed by the technology, media, and telecom (TMT) sector at 31%, and healthcare at 18%. The BFSI sector’s leadership in RAI maturity can be attributed to its long-standing focus on compliance and risk management, which has naturally extended to the realm of AI.

As organizations strive to enhance their RAI frameworks, workforce readiness is emerging as a critical priority. The survey reveals that nearly nine in ten organizations are investing in sensitization and training initiatives aimed at fostering a culture of responsible AI. This proactive approach is essential for ensuring that employees at all levels understand the ethical implications of AI technologies and are equipped to make informed decisions regarding their use.

Accountability for RAI practices remains largely top-down, with 48% of organizations placing responsibility for RAI initiatives with the C-suite or board of directors. This centralized approach can facilitate swift decision-making and ensure that ethical considerations are integrated into strategic planning. However, the report also notes that 26% of organizations now assign RAI responsibility to departmental heads, reflecting a growing recognition of the need for accountability at multiple levels within the organization.

The establishment of AI ethics boards is gaining traction, particularly among mature organizations. The report indicates that 65% of firms with mature RAI practices have constituted AI ethics boards or committees. These boards serve as a mechanism for overseeing AI initiatives, ensuring that ethical considerations are prioritized throughout the development and deployment process. However, some companies remain cautious about the effectiveness of these boards, emphasizing the need for clear mandates and active engagement from leadership.

Sangeeta Gupta, Senior Vice President and Chief Strategy Officer at Nasscom, emphasized the foundational role of responsible AI as AI becomes increasingly embedded in critical decision-making processes. She stated, “The real measure of India’s AI leadership will not just be in the scale of adoption, but in how responsibly and inclusively these systems are designed and deployed.” Gupta’s remarks underscore the importance of viewing responsible AI as a strategic imperative rather than merely a compliance requirement.

As Indian businesses navigate the complexities of AI adoption, the report advocates for a shift away from compliance-led approaches. Gupta encourages organizations to invest in governance, talent, and transparent frameworks that prioritize ethical considerations. By doing so, India has the opportunity to set global benchmarks for trustworthy AI that serves society at large.

In conclusion, the Nasscom report paints a nuanced picture of the state of Responsible AI in India. While significant progress has been made, particularly in the establishment of mature RAI frameworks among a substantial portion of businesses, challenges related to data quality, regulatory clarity, and workforce readiness persist. As organizations continue to advance their AI capabilities, the integration of responsible practices will be crucial in ensuring that AI technologies are developed and deployed in a manner that is ethical, inclusive, and beneficial to society. The path forward will require collaboration among businesses, regulators, and other stakeholders to create an environment that fosters innovation while safeguarding the public interest.