
AI Regulation in India: How It Affects Foreign Tech Companies

India's approach to AI regulation is evolving rapidly — from the November 2025 AI Governance Guidelines to the February 2026 IT Rules amendment targeting deepfakes and synthetic content. Here is what foreign tech companies operating in or selling to India need to know.

By Manu Rao · March 20, 2026 · 10 min read · Last updated March 20, 2026

India's AI Regulatory Landscape: Not One Law, but a Layered Framework

If you are a foreign tech company building or deploying AI products in India through a foreign direct investment structure, the first thing to understand is that India does not have a standalone AI law. Unlike the EU's AI Act, which creates a single comprehensive framework, India has chosen a layered, multi-regulation approach that distributes AI governance across existing laws, sector-specific regulators, and new guidelines.

This is both good news and a complication. Good news because there is no single massive compliance framework to implement overnight. A complication because the obligations are scattered — and a foreign tech company needs to track multiple regulatory streams simultaneously.

As of March 2026, the key regulatory layers affecting AI in India are:

  1. India AI Governance Guidelines (November 2025) — voluntary, principle-based
  2. IT (Intermediary Guidelines) Amendment Rules, 2026 (February 2026) — mandatory for intermediaries
  3. Digital Personal Data Protection Act, 2023 (DPDPA) — mandatory, phased implementation through May 2027
  4. Information Technology Act, 2000 — existing law, broadly applicable
  5. Sector-specific regulations — RBI for fintech, SEBI for capital markets, TRAI for telecom, IRDAI for insurance
  6. Proposed Digital India Act (DIA) — not yet enacted, but expected to introduce risk-based classifications

The India AI Governance Guidelines (November 2025)

What They Are

On November 5, 2025, the Ministry of Electronics and Information Technology (MeitY) released the India AI Governance Guidelines under the IndiaAI Mission. These guidelines articulate India's pro-innovation, techno-legal approach — explicitly avoiding the EU's heavy-handed prescriptive model.

The guidelines are built on seven guiding principles (called "sutras"): Trust, People First, Innovation over Restraint, Fairness and Equity, Accountability, Understandable by Design, and Safety, Resilience and Sustainability.

Are They Mandatory?

Not yet. The guidelines are currently voluntary and principle-based. However, they signal the direction of future regulation and establish institutional structures that will likely gain enforcement teeth. MeitY Secretary S. Krishnan has ruled out a standalone AI law, indicating that India plans to regulate AI through existing frameworks — particularly the DPDPA, IT Act, consumer protection law, and sector-specific regulations.

What Foreign Tech Companies Should Do

  • Adopt the guidelines voluntarily. Companies that align early will face fewer disruptions when mandatory requirements arrive. The guidelines recommend publishing transparency reports, implementing grievance redressal mechanisms, and conducting bias audits — all actions that build credibility with Indian regulators and enterprise customers.
  • Engage with the AI Safety Institute (AISI). The guidelines propose an AISI as a technical body for research, testing, and standards. Foreign companies that participate in AISI consultations will have input into the standards that eventually apply to their products.
  • Monitor the AI Governance Group (AIGG). This proposed inter-agency policy body will coordinate AI policy across ministries. Its decisions will determine which existing laws apply to specific AI use cases.

IT Rules 2026 Amendment: The Deepfake and Synthetic Content Crackdown

What Changed

On February 10, 2026, MeitY notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, effective from February 20, 2026. This is the most significant regulatory intervention in content governance since the original 2021 rules.

The amendment explicitly incorporates Synthetically Generated Information (SGI) — including AI-generated text, images, audio, and video — into the due diligence obligations of intermediaries.

Key Obligations for Foreign Platforms

| Obligation | Requirement | Timeline |
| --- | --- | --- |
| Accelerated takedowns | Remove flagged SGI content within 3 hours of a court/government order; 2 hours for non-consensual deepfake nudity | Immediate (effective Feb 20, 2026) |
| Content labelling | All non-prohibited SGI must be clearly and prominently labelled; visual SGI needs visual labels, audio SGI needs audio disclosures | Immediate |
| Metadata embedding | Embed permanent metadata or unique identifiers to trace the source of AI-generated content where feasible | Immediate |
| Automated filtering | Deploy AI filters to block uploads of child sexual abuse material (CSAM) and AI-generated non-consensual intimate images | Immediate |
| Quarterly user notification | Inform users every 3 months (previously annually) about platform rules, consequences of non-compliance, and reporting obligations | Immediate |
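The metadata-embedding obligation does not prescribe a format or schema. Below is a minimal illustrative sketch of one possible approach, assuming a SHA-256-based content identifier and a JSON manifest; the field names and scheme are our own, not statutory or standardised terms:

```python
import hashlib
import json
from datetime import datetime, timezone

def build_sgi_manifest(content: bytes, model_name: str, platform: str) -> dict:
    """Attach a traceable identifier to synthetically generated content.

    The IT Rules 2026 require "permanent metadata or unique identifiers"
    where feasible but leave the format open; this manifest is one
    illustrative scheme, not a prescribed standard.
    """
    return {
        "sgi": True,  # flags the content as synthetically generated
        "content_id": hashlib.sha256(content).hexdigest(),  # unique identifier
        "generated_by": model_name,  # generating model (assumed field)
        "platform": platform,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }

manifest = build_sgi_manifest(
    b"example AI-generated image bytes",
    model_name="example-model-v1",
    platform="example-platform",
)
print(json.dumps(manifest, indent=2))
```

In practice the manifest (or its identifier) would be embedded in the asset itself, for example in image metadata, so that provenance survives re-uploads where the format allows it.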

Who Is Affected?

The rules apply to significant social media intermediaries (SSMIs) — platforms with 50 lakh (5 million) or more registered users in India — and community-facing online gaming intermediaries. If your platform meets this threshold, you must comply regardless of where your company is incorporated.

Smaller intermediaries and pure SaaS/B2B platforms are not directly subject to these enhanced obligations, but the trend line is clear: AI content governance requirements will expand.

Practical Compliance Challenges

The 3-hour takedown requirement is operationally demanding. Maintaining 24/7 legal and content moderation teams capable of acting within 180 minutes requires either India-based operations or follow-the-sun coverage across time zones. For many foreign companies with small India teams, this means either building capacity or partnering with Indian compliance service providers.
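The two statutory windows translate directly into an internal SLA calculation. A minimal sketch, assuming two internal content categories (the category names are ours, not terms from the rules):

```python
from datetime import datetime, timedelta, timezone

# Takedown windows under the IT Rules 2026 amendment (illustrative mapping;
# the category keys are internal labels, not statutory terms).
TAKEDOWN_WINDOWS = {
    "ncii_deepfake": timedelta(hours=2),  # non-consensual deepfake nudity
    "flagged_sgi": timedelta(hours=3),    # other flagged synthetic content
}

def takedown_deadline(received_at: datetime, category: str) -> datetime:
    """Return the latest time by which flagged content must be removed."""
    return received_at + TAKEDOWN_WINDOWS[category]

order_received = datetime(2026, 2, 21, 9, 0, tzinfo=timezone.utc)
deadline = takedown_deadline(order_received, "flagged_sgi")
print(deadline.isoformat())  # 2026-02-21T12:00:00+00:00
```

A real moderation queue would page on-call staff well before the deadline and record the removal timestamp for audit purposes.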


DPDPA: Data Protection That Applies to AI Training and Deployment

How DPDPA Intersects with AI

The Digital Personal Data Protection Act, 2023 is not an AI law, but its requirements directly affect AI companies in several ways:

  • Consent for data processing: In almost all cases, a Data Fiduciary must obtain explicit consent before processing personal data. This affects AI training pipelines that use Indian user data — you cannot simply scrape or collect data without clear, informed consent.
  • Purpose limitation: Data collected for one purpose cannot be reused for a different purpose without fresh consent. AI companies that train models on user interaction data must ensure their consent notices cover model training as a specified purpose.
  • Data localisation (potential): While the DPDPA does not mandate blanket data localisation, it empowers the government to restrict transfers to specific countries. AI companies processing Indian personal data in overseas data centres should monitor this closely.
  • Right to erasure: Data principals can request deletion of their data. AI companies must design systems to honour deletion requests — a non-trivial technical challenge for models trained on deleted data.
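The consent and purpose-limitation duties above map naturally onto a purpose-scoped consent record. A minimal illustrative sketch (not a DPDPA-prescribed schema; field and method names are our own):

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    """Purpose-specific consent record for one data principal (illustrative)."""
    data_principal_id: str
    purposes: set = field(default_factory=set)
    withdrawn: bool = False

    def grant(self, purpose: str) -> None:
        self.purposes.add(purpose)

    def withdraw(self) -> None:
        self.withdrawn = True

    def permits(self, purpose: str) -> bool:
        # Purpose limitation: data may be processed only for purposes the
        # principal explicitly consented to, and not after withdrawal.
        return not self.withdrawn and purpose in self.purposes

record = ConsentRecord("user-001")
record.grant("service_delivery")
print(record.permits("model_training"))  # False: model training needs fresh consent
```

The point of the sketch is the gating check: a training pipeline that reuses interaction data would call something like `permits("model_training")` per record, and consent granted only for service delivery would not pass.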

Implementation Timeline

The DPDPA Rules 2025, notified on November 13, 2025, follow a three-stage implementation:

| Stage | Date | What Activates |
| --- | --- | --- |
| Stage 1 | November 13, 2025 | Data Protection Board of India established |
| Stage 2 | November 13, 2026 | Consent manager registration requirements |
| Stage 3 | May 13, 2027 | Full compliance duties: notice requirements, security protocols, breach notifications, data principal rights |

The 18-month window to May 2027 seems generous, but companies doing business in India should not wait. The compliance requirements differ substantially from GDPR, and retrofitting data processing systems takes time.


Sector-Specific AI Regulation: What Your Industry's Regulator Is Doing

Financial Services (RBI and SEBI)

The Reserve Bank of India has been particularly active on AI governance in financial services. RBI guidelines require banks and NBFCs to:

  • Conduct bias audits on AI-driven lending and credit scoring models
  • Maintain explainability for AI-based customer-facing decisions
  • Report AI-related incidents through existing supervisory reporting channels
  • Ensure human oversight for high-risk automated decisions (loan rejections, fraud flags)

SEBI has similarly issued guidance on algorithmic trading, requiring audit trails and kill switches for AI-driven trading systems. Foreign fintech companies deploying AI in India's capital markets must comply with SEBI's circular on algorithmic trading and co-location facilities.

Healthcare (CDSCO and ICMR)

AI medical devices face regulatory oversight from the Central Drugs Standard Control Organisation (CDSCO). AI-based diagnostic tools are classified as medical devices and require regulatory approval before commercialisation. The Indian Council of Medical Research (ICMR) has published ethical guidelines for AI in biomedical research.

Telecom (TRAI)

The Telecom Regulatory Authority of India is examining AI's role in network management, customer service automation, and spam/fraud detection. Foreign telecom equipment providers with AI components should monitor TRAI consultations.

For companies navigating these sector-specific requirements, our FEMA and RBI compliance service covers regulatory liaison and reporting obligations.


The Proposed Digital India Act: What Is Coming Next

The Digital India Act (DIA), intended to replace the 22-year-old IT Act, 2000, has been in development since 2023. While it has not been enacted yet — MeitY has indicated the timeline remains uncertain — the published drafts and consultations signal several AI-relevant provisions:

  • Risk-based classification of digital platforms and services (echoing the EU's tiered approach)
  • Enhanced intermediary obligations scaled to platform size and risk
  • Content accountability frameworks for AI-generated content
  • Algorithmic transparency requirements for platforms using AI to curate, recommend, or moderate content
  • Specific provisions for emerging technologies including AI, blockchain, and IoT

Additionally, the Artificial Intelligence (Ethics and Accountability) Bill, 2025 — a Private Member's Bill introduced in the Lok Sabha in December 2025 — proposes a statutory Ethics Committee for AI, mandatory ethical reviews for surveillance and high-risk systems, bias audits, and penalties up to INR 5 crore for non-compliance. While Private Member's Bills rarely pass as introduced, they often influence government legislation.

Compliance Roadmap for Foreign Tech Companies

Here is a practical, prioritised action plan based on the current regulatory landscape:

Immediate (Q1-Q2 2026)

  1. Audit your intermediary status. If your platform has 5 million+ registered users in India, implement the IT Rules 2026 amendments immediately — 3-hour takedown capacity, SGI labelling, metadata embedding, and automated filtering.
  2. Review data processing activities. Map all personal data of Indian users that flows through your AI systems. Identify gaps against DPDPA requirements even though full compliance is not due until May 2027.
  3. Establish an India compliance point of contact. Whether this is a local legal team, a compliance officer, or an external firm like Beacon Filing, you need someone tracking regulatory developments in real time.

Near-Term (Q3-Q4 2026)

  1. Implement consent management. Build or procure consent management infrastructure that meets DPDPA requirements — purpose-specific consent, withdrawal mechanisms, and record-keeping.
  2. Prepare for consent manager registration (Stage 2, due November 13, 2026) if your platform acts as a consent aggregator.
  3. Conduct a bias audit on any AI models used for decisions affecting Indian users — hiring, lending, insurance underwriting, content moderation, or service eligibility.
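Neither the RBI guidance nor the AI Governance Guidelines mandate a specific fairness metric, but the demographic parity gap — the spread in approval rates across groups — is a common starting point for a bias audit. A minimal sketch:

```python
def selection_rates(decisions):
    """Approval rate per group from (group, approved) pairs."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions) -> float:
    """Largest difference in approval rates across groups.

    A gap near 0 means similar outcomes across groups; large gaps
    flag the model for closer review (they do not prove bias alone).
    """
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Toy example: group A approved 2 of 3 times, group B 1 of 3.
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(round(demographic_parity_gap(sample), 3))  # 0.333
```

A production audit would look at several metrics (equalised odds, calibration) over real decision logs, but this captures the basic computation regulators expect firms to be able to produce.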

Medium-Term (2027)

  1. Achieve full DPDPA compliance by May 13, 2027 — notice requirements, security safeguards, breach notification protocols, and data principal rights mechanisms.
  2. Monitor the Digital India Act for introduction in Parliament. Budget for a 12-18 month implementation window once enacted.
  3. Engage with sector regulators (RBI, SEBI, TRAI, IRDAI) on AI-specific guidance relevant to your industry vertical.

If your company is considering setting up an India entity to manage compliance locally, review our guide on choosing between a subsidiary, branch office, and liaison office. For companies already incorporated, our annual compliance services include regulatory monitoring and filing support.

India vs. Global AI Regulation: How the Approaches Differ

Understanding India's regulatory philosophy requires comparing it with other major jurisdictions where your company may already operate:

| Dimension | EU (AI Act) | USA | India |
| --- | --- | --- | --- |
| Approach | Risk-based, prescriptive legislation | Sectoral, agency-led guidance | Layered, existing-law-first with voluntary guidelines |
| Standalone AI law | Yes (effective August 2025) | No (executive orders + agency rules) | No (proposed DIA may include AI provisions) |
| High-risk AI classification | Defined in Annexes (biometric, employment, credit) | Agency-specific (FDA for health, FTC for consumer) | Sector regulators decide (RBI, SEBI, CDSCO) |
| Penalties | Up to 7% of global turnover | Agency-specific, often FTC consent orders | IT Act fines; DPDPA up to INR 250 crore; proposed INR 5 crore under AI Ethics Bill |
| Data requirements | GDPR applies to training data | State-level laws (CCPA, CPRA) | DPDPA applies, phased to May 2027 |
| Extraterritorial reach | Yes (any AI system affecting EU residents) | Limited (primarily domestic enforcement) | Yes (DPDPA and IT Rules apply to foreign entities serving Indian users) |

The key insight: if you are already GDPR-compliant, you have a strong foundation for India's DPDPA requirements — but the frameworks are not identical. India's consent requirements, data localisation possibilities, and breach notification timelines differ from GDPR. Companies operating Indian subsidiaries should not assume that existing EU compliance programmes will automatically satisfy Indian regulators.

For companies evaluating whether to establish a legal presence in India, the regulatory environment increasingly favours having a local entity. A wholly owned subsidiary provides the clearest compliance structure, as it can appoint local compliance officers, maintain records in India, and respond to regulatory inquiries within the required timelines. Compare this with operating through a branch office structure, which may create complications around data residency and regulatory representation.

The resident director requirement for Indian companies also becomes strategically important for tech firms — appointing someone with technology and compliance expertise (rather than just a nominee) can significantly improve your regulatory posture. This person becomes your frontline interface with Indian regulators and enforcement agencies.

Companies that are registered in countries with active Double Taxation Avoidance Agreements with India — including the US, UK, Singapore, Japan, and Germany — can also structure their India operations to optimise the tax treatment of technology licensing fees, royalties, and service charges, all of which are common in AI company structures. Our tax advisory team can help structure these cross-border flows efficiently.

Key Takeaways

  • India does not have a standalone AI law. AI is regulated through a layered framework — the IT Act, DPDPA, sector-specific regulations, and voluntary governance guidelines.
  • The IT Rules 2026 amendment (effective February 20, 2026) imposes mandatory obligations on platforms with 5 million+ Indian users: 3-hour takedowns, SGI labelling, metadata embedding, and automated CSAM/NCII filtering.
  • The DPDPA affects AI companies that process Indian personal data — including for model training. Full compliance is due by May 13, 2027, but the requirements differ significantly from GDPR.
  • Sector-specific regulators (RBI, SEBI, CDSCO, TRAI) are applying existing frameworks to AI within their domains. Fintech and healthtech companies face the most immediate obligations.
  • The regulatory direction is clear: India is moving toward more prescriptive AI governance. Companies that comply early will face fewer disruptions and build trust with regulators and enterprise customers.

Frequently Asked Questions

Does India have an AI law like the EU AI Act?

No. As of March 2026, India does not have a standalone AI law. Instead, AI is governed through a layered framework including the IT Act 2000, DPDPA 2023, voluntary AI Governance Guidelines (November 2025), the IT Rules 2026 amendment, and sector-specific regulations from bodies like RBI, SEBI, and TRAI.

What are the IT Rules 2026 amendment requirements for AI content?

The February 2026 amendment requires significant social media intermediaries (5 million+ users in India) to label AI-generated content, remove flagged synthetic content within 3 hours, embed metadata for traceability, deploy automated filters for CSAM and non-consensual deepfakes, and notify users quarterly about platform rules.

Does the DPDPA apply to AI companies outside India?

Yes. The DPDPA applies to any entity processing personal data of individuals in India, regardless of where the company is incorporated. If your AI product collects, processes, or trains on data from Indian users, you must comply with DPDPA requirements including consent, purpose limitation, and data principal rights.

When is DPDPA full compliance required?

Full DPDPA compliance is required by May 13, 2027 (Stage 3). However, the Data Protection Board was established in November 2025 (Stage 1), and consent manager registration is due by November 2026 (Stage 2). Companies should start implementation now as requirements differ substantially from GDPR.

Do foreign AI companies need an India entity to comply?

Not necessarily, but having a local entity simplifies compliance. The IT Rules require a grievance officer and compliance officer based in India for significant intermediaries. Many foreign tech companies establish a subsidiary or branch office to manage regulatory obligations, especially if they have substantial Indian user bases.

What penalties do foreign tech companies face for AI regulatory non-compliance in India?

Penalties vary by regulation. Under the IT Act, intermediaries can lose safe harbour protection, exposing them to direct liability. DPDPA violations can attract penalties up to INR 250 crore (USD 30 million). The proposed AI Ethics Bill suggests penalties up to INR 5 crore. Loss of intermediary status is often the most severe practical consequence.

How should foreign AI companies prepare for India's Digital India Act?

While the Digital India Act has not been enacted, companies should monitor MeitY consultations, implement the voluntary AI Governance Guidelines as a baseline, ensure compliance with existing laws (IT Act, DPDPA, Consumer Protection Act), and budget for a 12-18 month implementation window once the DIA is passed.

Topics
AI regulation India · foreign tech companies India · DPDPA compliance · IT Rules 2026 · India AI governance · deepfake regulation India

Need Help With Your India Strategy?

Talk to us. No commitment, no generic sales pitch. We will walk you through the structure, timeline, and costs specific to your situation.