Businesses are rapidly integrating AI into their operations, customer service, sales, marketing, and predictive analytics. From healthcare and education to logistics, finance, banking, and agriculture, AI is becoming an integral part of business. It also brings parallel challenges in ethics, privacy, bias, and governance. To balance innovation with oversight, the Ministry of Electronics & Information Technology (MeitY) launched the AI Governance Guidelines on 5 November 2025, signalling a shift from rigid regulation to a calibrated, ecosystem-friendly framework.
Shri S. Krishnan, Secretary, MeitY, said, “Our focus remains on using existing legislation wherever possible. At the heart of it all is human centricity, ensuring AI serves humanity and benefits people’s lives while addressing potential harms.”
The guidelines have two main goals. First, to expand the use of AI in key areas such as agriculture, health, education, and finance. Second, to ensure AI systems are transparent, fair, accountable, and suited to Indian society. Instead of creating a brand-new AI law, India has chosen to build these guidelines on existing legislation, such as the Digital Personal Data Protection Act, 2023.
The AI governance guidelines have four main parts:
1. Seven guiding principles (Sutras) for using AI in an ethical and responsible way.
2. Key recommendations across six pillars of AI governance.
3. An action plan mapped to short, medium, and long-term timelines.
4. Practical guidance for organisations, developers, and government to make sure AI is used in a clear and responsible way.
India’s plan is practical and combines technology with law. It is built on Digital Public Infrastructure (DPI) and uses both voluntary actions and rules.
Key Principles (Seven Sutras) from the India AI Governance Guidelines (2025)
The seven guiding principles proposed for AI governance in India:
- Trust is the Foundation: AI systems should be designed to earn people’s trust. They must be safe, work as expected, and be accountable.
- People First: AI should help people do more, not take away human decision-making. In important situations, people should make the final decision.
- Innovation over Restraint: India’s framework avoids over-regulation that stifles creativity. The state’s role is to enable responsible innovation, not to control it prematurely.
- Fairness & Equity: AI must treat everyone equally, regardless of gender, caste, language, or region. AI systems should be audited for bias and should reflect the diversity of the Indian population.
- Accountability: Everyone who works with AI, such as developers, users, or companies, must take responsibility for what they do.
- Understandable by Design: AI should not be a “black box.” People and government should be able to understand how AI works and why it makes certain decisions.
- Safety, Resilience & Sustainability: AI systems must be robust, secure, and environmentally responsible.
Globally, governments are grappling with how much to regulate AI and which applications to ban. India’s plan is different because it does not start with strict rules. Instead, it helps new ideas grow while managing risks.
Ajay Kumar Sood, Principal Scientific Advisor to the Government of India, said, “The guiding principle that defines the spirit of the framework is simple, ‘Do No Harm’. We focus on creating sandboxes for innovation and on ensuring risk mitigation within a flexible, adaptive system. The IndiaAI Mission will enable this ecosystem and inspire many nations, especially across the Global South.”