As AI finds increasing use in capital markets, SEBI must navigate sometimes conflicting mandates.
It is inevitable that artificial intelligence (AI) will soon become a critical element of most industries. India is preparing for the AI era with technologies and innovations on the one hand and consequent legislative and policy reforms on the other. While India currently has no law specifically governing AI, the Information Technology Act, 2000 (IT Act) remains the principal legislation governing electronic transactions and digital governance.
The government has actively issued a series of advisories concerning artificial intelligence. While these advisories fall short of constituting a comprehensive legal framework, AI governance and regulation remain firmly on the policy agenda. This article examines SEBI’s evolving engagement with AI-related developments.
The increased use of AI and machine learning in investor-facing tools (used to disseminate trading strategies and advice, meet compliance requirements, etc.), and the potential issues arising from such use, has prompted SEBI to frame regulations to protect investors' and stakeholders' data. Accordingly, in addition to several circulars on cybersecurity for market intermediaries, SEBI released a consultation paper on 'Proposed amendments with respect to assigning responsibility for the use of artificial intelligence tools by Market Infrastructure Institutions, Registered Intermediaries and other persons regulated by SEBI'.
SEBI subsequently notified (i) the SEBI (Intermediaries) (Amendment) Regulations, 2025; (ii) the Securities Contracts (Regulation) (Stock Exchanges and Clearing Corporations) (Amendment) Regulations, 2025; and (iii) the SEBI (Depositories and Participants) (Amendment) Regulations, 2025, introducing a regulation mandating that any person regulated by SEBI who uses AI or other such technologies is responsible for: (i) the privacy, security and integrity of investors' and stakeholders' data, including data maintained in a fiduciary capacity; (ii) the output arising from the use of such tools and techniques it relies upon or deals with; and (iii) compliance with applicable laws in force.
Intermediaries may deploy AI across various stages of their operational lifecycle—from client-facing applications such as chatbots, to backend functions involving data analysis, decision-making, and, in some cases, execution of transactions. SEBI has imposed compliance obligations on intermediaries regardless of the nature, scale, or context in which such tools are employed. However, this uniform regulatory approach risks equating low-risk use cases (like chatbots) with high-impact applications (such as algorithmic investment execution), potentially stifling innovation and impeding broader AI adoption in the securities market.
While the advent of AI necessitates regulation, SEBI's broad-based approach may need to be revisited to enable innovation and compliance calibrated to the scale and context of adoption. Accordingly, SEBI may consider the following:
Use of AI by SEBI
In a press conference, SEBI's former Chairperson, Ms. Madhabi Puri Buch, indicated that SEBI would use AI to review IPO documents faster and more efficiently. This shows SEBI's willingness not only to adapt to technology but also to adopt it to increase efficiency.
SEBI could use this opportunity to lead by example and adopt high standards of ethics, transparency and accountability while implementing AI in its own systems. This would inspire confidence amongst market intermediaries and ensure faster adoption of AI. Hiring experts, conducting knowledge-sharing sessions and training employees in all aspects of AI would help SEBI officials better understand the requirements.
Risk-Based Approach
The European Union's AI Act (EU AI Act) adopts a proportionate, risk-based approach to AI regulation, wherein compliance obligations vary with the level of risk posed to health, safety and fundamental rights. The EU AI Act classifies risks into four distinct categories: unacceptable risk, high risk, limited risk and minimal risk. SEBI could consider adopting a similar risk-based classification framework, treating standard chatbot applications as low-risk tools and AI-driven execution of investment decisions as high-risk activities, thereby enabling differentiated compliance requirements tailored to the risk profile of each use case.
Client Decisions
The amendments provide that intermediaries cannot disclaim responsibility merely because a client independently chooses to act without relying on the outputs or recommendations generated by the intermediary's AI tools. The Asia Securities Industry & Financial Markets Association (ASIFMA), representing foreign portfolio investors, raised a concern with this approach, stating that it would be unreasonable to impose liability on intermediaries when the AI tools provide accurate information and clients make an independent decision. ASIFMA also advocates a shared-responsibility framework in which liability is distributed across the different stages of the AI value chain.
Principles for Protection
While SEBI requires compliance with the Government of India's guidelines on AI regulation, it may also consider prescribing checks and balances that each intermediary must put in place at the time of adopting AI.
IOSCO Report
In March 2025, the International Organization of Securities Commissions (IOSCO) released a report on 'AI in Capital Markets: Use Cases, Risks and Challenges'. The report analyses the implementation of AI and the processes adopted by regulators in different jurisdictions to protect investors and regulated entities. IOSCO's survey revealed a divergence in regulatory approaches: some jurisdictions adopt a 'technology-neutral' stance, regulating activities irrespective of the underlying technology, while others are moving towards bespoke regulatory frameworks tailored specifically to the use of AI in the financial sector.
Key practices adopted globally include: (i) issuing specific guidance on AI in securities markets, covering governance, risk management, algorithmic bias, transparency, ethics, etc.; (ii) training market participants in (A) identifying and resolving AI-related issues, (B) ensuring the confidentiality of personal data and preventing breaches of third-party intellectual property rights, and (C) protecting investors from results and/or advice arising from incorrect data sets; (iii) coordinating with other government departments for information sharing and up-skilling, thereby saving costs; and (iv) collaborating with international agencies, primarily through international fora, workshops and working groups, to ensure knowledge sharing.
SEBI may also consider developing an 'AI sandbox' framework, under which intermediaries are granted facilities and flexibilities to experiment with AI across service models and products. This would give intermediaries scope for innovation and real-time experimentation on the one hand, and allow SEBI to identify risks and frame policies to safeguard investor interests on the other.
