
Technology

Sebi Establishes AI Task Force to Combat Rising Cybersecurity Risks in Financial Markets


India's market regulator, Sebi, has formed a new AI task force, cyber-suraksha.ai, to address escalating cybersecurity threats posed by advanced artificial intelligence models. The initiative comes amid global concerns over AI tools' ability to identify and exploit system vulnerabilities at speed and scale.

NEW DELHI – The Securities and Exchange Board of India (Sebi) has announced the creation of a specialized task force, named cyber-suraksha.ai, dedicated to mitigating the growing cybersecurity risks introduced by artificial intelligence (AI) in the financial sector. The move, effective May 6, 2026, underscores a proactive approach by the market regulator to safeguard the interconnected securities ecosystem.

The establishment of this task force follows widespread discussions among governments and regulators globally regarding the potential dangers of sophisticated AI models. Notably, the unreleased AI model Claude Mythos by Anthropic has raised alarms due to its claimed capabilities in analyzing and exploiting software vulnerabilities and previously unknown security flaws at an unprecedented scale.

Why Sebi Formed the Task Force

Sebi highlighted that emerging technologies, particularly AI-driven vulnerability identification tools like Claude Mythos, introduce new dimensions of risk for regulated entities: such tools can identify and exploit existing vulnerabilities with remarkable speed and scale, significantly heightening risk exposure.

Beyond vulnerability exploitation, the regulator also expressed concerns regarding data confidentiality, application integrity, and the reliability of outputs generated by AI systems. Given the interconnected nature of the securities market, Sebi emphasized the necessity of a coordinated strategy for vulnerability management, information sharing, and continuous monitoring to prevent cascading impacts across the entire system.

Composition and Mandate

The cyber-suraksha.ai task force comprises representatives from key market infrastructure institutions (MIIs), qualified registrars to an issue and share transfer agents (QRTAs), qualified reporting entities (QREs), and other relevant stakeholders. A preliminary meeting has already been convened to review the risks posed by new AI models and discuss necessary countermeasures.

The task force's primary mandate is to examine the cybersecurity risks associated with AI-based models and to formulate a uniform mitigation strategy. It will also facilitate the sharing of threat intelligence and vulnerability-management best practices, and develop playbooks for responding to AI-driven threats. Stakeholders are required to report cyber incidents, malicious activities, significant attack vectors, and vulnerability information on a priority basis to strengthen the overall cybersecurity posture of the securities markets.

Immediate Advisories and Safeguards

Following initial discussions, Sebi has issued advisories for regulated entities:

  • All operating systems and applications must be updated immediately with the latest security patches.
  • Where patches are unavailable, virtual patching should be considered as a temporary protective measure.
  • Regular security audits, incorporating both conventional and AI-powered vulnerability assessments, are now mandatory under Sebi’s cyber resilience framework.
  • Exchanges and depositories must direct their empaneled application vendors to assess AI-led vulnerability detection risks and implement appropriate safeguards.
  • Any system changes, no matter how minor, require full documentation, thorough impact analysis, structured review, and rigorous testing before deployment to ensure operational resilience.
  • Eligible regulated entities not yet onboarded to a Market Security Operations Centre (M-SOC) must expedite the process, given the heightened risks from AI-driven attacks.

Sebi's cybersecurity and resilience framework now mandates that the capabilities of AI models be explicitly considered in periodic risk assessments for regulated entities and their third-party service providers. System hardening techniques are also to be implemented to reduce IT infrastructure vulnerabilities.

Broader Regulatory Landscape

The Reserve Bank of India (RBI) has also been actively engaged in discussions with government officials, banks, and international regulators concerning the risks presented by advanced AI. RBI Deputy Governor Swaminathan J. recently characterized powerful technologies like AI as a “double-edged instrument.” He cautioned that adopting AI without adequate safeguards could amplify existing weaknesses and create entirely new forms of harm, stressing the need for a balanced approach that avoids both technological hype and defensive retreat.
