Remi Ladigbolu

Elon Musk has announced that X will conduct rigorous security testing on X Chat, the platform’s private messaging and artificial intelligence interface powered by Grok, before releasing its full source code to the public.

The move comes as regulators across Europe, the United States and emerging markets increase scrutiny of the platform’s safety and governance practices.

In a post on X on Monday evening, Musk said: “In the next few months, we will be doing rigorous security tests of 𝕏 Chat and will open source all the code.”

The announcement signals a shift towards radical transparency at a moment when social media companies face mounting pressure to prove how their systems protect users, handle data and prevent abuse.

X Chat integrates conversational AI directly into user communications, allowing interactions with Grok alongside private messaging.

By committing to open-sourcing the code after internal audits, X is inviting independent examination of its security architecture, data flows and safeguards, an approach more commonly associated with open-source software communities than global social platforms.

The pledge comes against a backdrop of intensifying regulatory action, particularly in Europe, where the European Commission has opened formal proceedings examining whether X and its AI systems comply with obligations under the EU’s Digital Services Act (DSA).

The legislation requires large platforms to assess and mitigate systemic risks, including those linked to data security, harmful content and algorithmic behaviour.

Penalties for non-compliance can include fines of up to six percent of global annual turnover.

European authorities have also widened their scrutiny of X’s operations more broadly, including investigations into the behaviour of AI-generated content and platform safety practices.

These actions underscore the extent to which regulators are now examining not just what platforms publish, but how their underlying systems function.

In the United Kingdom, regulators including Ofcom and the Information Commissioner’s Office are monitoring the impact of AI-driven features on online safety and data protection, reinforcing the growing expectation that platforms demonstrate technical as well as policy-level compliance.

In the United States, where regulation of social media and AI remains fragmented, Musk’s announcement is likely to feed into long-running debates over transparency, encryption and surveillance.

Open-sourcing X Chat’s code could give lawmakers, academics and civil society groups rare insight into the inner workings of a major platform’s messaging and AI tools, even in the absence of a comprehensive federal framework.

Pressure is also building in emerging markets, where governments have demanded greater accountability from AI-driven platforms.

In India, officials have publicly criticised X’s responses to concerns about harmful outputs generated by Grok, calling for clearer safeguards and more detailed explanations of how risks are being addressed.

Similar concerns have been raised in other jurisdictions where trust in digital platforms is shaped by fears of political interference and weak enforcement.

From a cybersecurity and online safety perspective, the planned open-source release is consequential. Independent audits by researchers and ethical hackers could help identify vulnerabilities, privacy risks or unsafe design choices.

Advocates argue that such scrutiny strengthens systems through collective oversight, though its success depends on how quickly and transparently companies respond to identified flaws.

The announcement also touches on persistent user concerns about surveillance and hidden access mechanisms.

While there is no evidence that X’s systems contain undisclosed government backdoors, open-source access would allow third parties to verify claims about encryption and data handling, a point of particular importance to journalists, activists and organisations operating in sensitive environments.

Since Musk acquired the platform formerly known as Twitter, X has undergone rapid transformation, including workforce reductions, product changes and deeper integration of artificial intelligence through xAI.

Grok has drawn attention for its real-time capabilities, alongside criticism over moderation and safeguards.

Musk did not provide a precise timeline for when the security testing will conclude or when the code will be released, beyond saying the process will unfold over the coming months.


