Significant breakthroughs in the efficient use of machine-learning algorithms and artificial intelligence (AI) tools have driven radical paradigm shifts across sectors. Private equity and venture capital firms are not immune to AI’s impact and are embracing AI tools for data analysis to aid decision-making. During the past five years, a considerable number of alternative investment funds (AIFs) registered with the Securities and Exchange Board of India (SEBI) have invested heavily in such tools, both developing them in-house and licensing them from external providers. These tools have consistently optimised fund performance by streamlining deal sourcing and screening, due diligence processes, portfolio data management and market analysis.
Under the existing regulatory framework, fund managers are responsible for taking decisions on behalf of AIFs. Their key responsibilities are to ensure compliance with codes of conduct, account for all fund decisions (supplemented with justifications), maintain systematic records of all investment information, disclose all material information regarding fund investments to investors and the SEBI, preserve investee and potential investee confidentiality, resolve investor complaints and prevent conflicts of interest. However, with AI tools becoming increasingly embedded in fund operations, the lines of managerial accountability are blurring, raising urgent questions about the scope, limits and enforceability of such duties.
In response to the revolutionary rise of AI in the private investment sector, the SEBI floated a consultation paper on 13 November 2024, proposing to place all responsibility on fund managers for the use of AI. This includes sole accountability for the privacy, security and integrity of investor and stakeholder data, the validity of investment decisions relying on the output from such tools and the legal compliance of techniques employed.
Notably, although the consultation paper refers to “neural networks” in its overview of AI, the proposed regulatory amendments do not mention them. For now, this omission is of little practical concern since a majority of the AI tools used by AIFs rely on machine learning models designed to process structured data to optimise operational efficiency and data analysis.
However, AIFs are already dipping their toes into “deep learning”, leveraging neural networks to provide turnkey support for investment decisions and the creation of investment portfolios. This marks a significant departure from current practice, in which machine learning tools still require human intervention and predefined protocols. As the integration of deep learning tools gains momentum, the expectation that fund managers remain fully accountable for every investment decision will quickly become untenable.
Although deep learning neural network models can account for the vast datasets they use to generate investment decisions, they cannot break down the rationale, weighting or sequencing behind their decision-making pathways. The resulting opacity, often referred to as the “black box” problem, raises serious concerns about the SEBI’s proposed regulatory framework, which still places full responsibility on fund managers. If left unresolved, this may become a sector-wide governance and compliance risk.
To meaningfully regulate AI in the private investment space, it is essential to anticipate and address several critical challenges that may soon manifest. First, fund managers may no longer be able to explain or disclose the basis of investment decisions to investors, since such AI systems do not isolate or attribute specific inputs in a human-readable form. Second, they will be unable to justify investment decisions because the weighting a deep learning model assigns to each factor is neither transparent nor retrievable. Third, confidentiality across investee companies and investment portfolios could prove difficult to enforce when AI systems aggregate data in ways that may inadvertently create crossover exposure. Most importantly, although fund managers will remain legally accountable, their decision-making role will be greatly diminished. This raises urgent questions about the evolution of regulatory responsibility when machines make the calls.
As deep learning AI tools become standard practice in AIF operations, India’s regulatory architecture needs to keep pace with them. Although the SEBI’s consultation paper is undoubtedly a step in the right direction, India’s central regulatory authority must adopt a proactive, principles-based approach that safeguards stakeholders without stifling innovation.
Bharucha & Partners – Swathi Girimaji and Sachit Ram