In recent years, debates over the regulation of social media and the safe use of digital platforms have assumed an increasingly central role in public discourse. The rapid growth of these technologies, along with their direct impact on the public and private lives of millions of people, has brought issues such as user protection, the circulation of harmful content, and platform liability to the forefront of political and regulatory agendas worldwide.

In this context, several countries have adopted new regulatory frameworks, with particular emphasis on safeguarding children and adolescents. Australia passed the Online Safety Act in 2021[1]. In the same year, the United Kingdom implemented the Age Appropriate Design Code[2]. In 2022, the European Union approved the Digital Services Act (DSA)[3]. Brazil followed this trend by enacting, in 2025, the Digital Child and Adolescent Statute (ECA Digital)[4].

Despite the differences among these initiatives, they all stem from a common concern: how to mitigate risks and protect rights in increasingly complex digital environments.

It was against this backdrop of growing concern over platform safety that regulators’ attention was abruptly drawn to the consequences of new features released by Grok, a generative artificial intelligence system integrated into the platform X (formerly Twitter). In December 2025, the tool began allowing users to create images based on real people.

As a result, the technology was used predominantly to generate images of individuals in sexualized contexts through simple user commands. Within days, the social network was flooded with millions of such images. In this scenario, anyone who had ever posted personal photos or photos of family members on social media became exposed to this risk.

The situation became even more serious with the creation and circulation, on the platform itself, of thousands of sexualized images of children and adolescents, without any effective or immediate measures adopted by X. According to research by the Center for Countering Digital Hate (CCDH), Grok is estimated to have generated around 3 million sexualized images in its first 11 days of operation, including approximately 23,000 that appear to depict children. Sexualized images are believed to have accounted for roughly 65% of all content produced by the AI during that period[5].

It is clear that Grok is not the first technology to enable the generation of sexualized images of real people without consent. A study conducted by the Tech Transparency Project identified at least 102 apps available on the Google Play Store and Apple App Store capable of digitally removing clothing from women or leaving them dressed only in underwear[6].

Yet, by being integrated into one of the world’s largest social networks, with hundreds of millions of users, Grok operated on a scale never seen before, significantly expanding the reach, speed of dissemination, and potential harm of such content.

The reaction of several countries and international organizations was immediate. Malaysia and Indonesia banned Grok from operating within their territories[7]. The European Union, in turn, announced the opening of a formal investigation into platform X to determine whether it had complied with the obligations established in the DSA[8]. Countries such as the United Kingdom and France followed the same path and launched their own inquiries into X due to the millions of images generated[9].

In Brazil, the National Data Protection Authority (ANPD), the Federal Prosecutor’s Office (MPF), and the National Consumer Secretariat (Senacon) jointly issued recommendations to X after receiving reports related to the use of Grok[10]. They recommended:

  1. The creation of clear and effective procedures to identify and remove content already produced and still available on the X platform.
  2. The immediate suspension of accounts involved in using Grok to generate sexual or eroticized images of children and adolescents, or of adults without their authorization.
  3. The implementation of mechanisms enabling data subjects to report irregular, abusive, or unlawful uses of their personal data.
  4. The preparation of a data protection impact assessment specifically addressing the activities of generating synthetic content based on the manipulation of photos, images, videos, or audio files uploaded by users to Grok.

In response, Grok announced on January 9 that the feature would be restricted to subscribers only[11].

More recently, the ANPD, the MPF, and Senacon concluded that the responses provided by X were insufficient and decided to adopt a stricter stance. An administrative order was issued requiring the X Group to immediately implement technical and organizational measures to prevent Grok from generating content depicting children and adolescents in sexualized contexts, which is strictly prohibited, as well as adults in sexualized or eroticized situations without their consent[12].

These measures must apply to all versions, plans, and modalities of Grok. In addition, X is required to submit a detailed report of the steps taken, including documentary evidence proving their effectiveness. Failure to comply may result in more severe sanctions.

The Grok case raises an important question. As discussed, many countries have indeed mobilized to develop new regulatory frameworks aimed at combating practices that threaten safety in digital environments. Yet it is also true that several of these countries already had, and still have, legal instruments capable of addressing such illicit conduct, even when committed online.

In Brazil, numerous laws predating the scandal already covered, to some extent, the unlawful acts committed by users. The Brazilian Federal Constitution establishes the dignity of the human person as a foundational principle of the Democratic Rule of Law and guarantees the inviolability of a person’s image, including the right to compensation for violations[13].

Brazilian criminal law, in turn, classifies as a crime the production, by any means, of content depicting nudity or sexual or intimate acts without the authorization of those involved[14]. Similarly, the Child and Adolescent Statute (ECA) criminalizes the production, reproduction, or recording, by any means, of explicit sexual or pornographic content involving a child or adolescent. Penalties can reach up to eight years of imprisonment.

Brazilian law also imposes a series of obligations on digital platforms. The Brazilian Internet Civil Framework (Marco Civil da Internet) establishes that an internet application provider that makes third‑party content available may be held liable for damages arising from the disclosure, without the authorization of those involved, of media containing scenes of nudity or sexual acts, when the provider has been notified by the victim and fails to remove the content[15].

However, even with a large number of older and newer laws addressing these practices, such conduct continues to grow in the digital environment. As noted, even before the scandal triggered by Grok’s new features, several applications capable of generating unauthorized sexualized images of real people were already available. Moreover, because digital platforms are fertile environments for all kinds of innovation, it is not difficult to imagine that, in the coming years, new tools may emerge enabling other forms of illicit conduct.

This reality exposes the limitations of a regulatory model focused predominantly on offering remedies only after harm has occurred. In cases involving the creation and circulation of sexualized images, especially when they involve children and adolescents, post‑hoc removal and subsequent accountability, on their own, are rarely capable of reversing the harmful effects. Once harm to a victim’s dignity, privacy, and integrity has taken place, it tends to endure.

It is important to understand the difference in the speed and scale at which digital technologies and the law operate. When regulation focuses solely on sanctions, takedown obligations, or mechanisms of accountability after unlawful conduct has occurred, it intervenes only once the damage has already materialized and, in many cases, been irreversibly disseminated.

It is in this scenario that the relevance of approaches associated with the field of Trust and Safety (T&S) becomes increasingly evident[16]. Rather than treating safety as a secondary concern or as an emergency response to crises, this approach recognizes that certain risks are predictable and, for that very reason, must be considered from the earliest stages of product design and development.

The logic of Trust and Safety shifts the focus away from simple content moderation toward a broader model of risk governance, in which platforms take an active role in identifying, assessing, and mitigating potential harms arising from the use of their technologies.

This is precisely why T&S teams have become increasingly central within digital platforms. These teams work with product developers and engineers to anticipate how new products and technologies might be used as vectors for abuse or misuse[17].

It is in this context that Safety by Design becomes meaningful as a natural extension of Trust and Safety practices. While T&S structures policies, processes, and institutional responses for dealing with risks, Safety by Design brings this concern to an earlier stage: the design of product features themselves[18]. From the moment functionalities are conceived, they already carry the possibility of being used either legitimately or abusively.

When adopting a Safety by Design approach, technology companies begin to scrutinize their choices more closely during the early phases of development. The question is no longer only “what does this feature allow users to do?” but also “who could be affected by it?”, “how might it be misused?”, and “what kinds of safeguards make sense before launch?”

In the case of Grok, for example, the ability to generate realistic images based on real people made the risk of producing unauthorized sexualized content entirely foreseeable. The absence of more robust safeguards from the outset exposed precisely the kind of gap that Safety by Design seeks to address.
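To make this concrete, the sketch below illustrates what a minimal pre-generation safeguard of the kind Safety by Design calls for might look like: a policy gate that evaluates a generation request before any image is produced. All names here (GenerationRequest, Verdict, classify_request, run_model) and the keyword heuristics are hypothetical simplifications for illustration, not a description of Grok’s or X’s actual systems; a production gate would rely on trained classifiers for age estimation, sexual-content detection, and consent verification.

```python
# Hypothetical sketch of a pre-generation policy gate (Safety by Design).
# Names and heuristics are illustrative only; a real system would use
# trained classifiers rather than keyword matching.

from dataclasses import dataclass
from enum import Enum, auto


class Verdict(Enum):
    ALLOW = auto()
    BLOCK_MINOR_RISK = auto()      # request may sexualize a minor: always refused
    BLOCK_NONCONSENSUAL = auto()   # sexualized depiction of a real person without consent


@dataclass
class GenerationRequest:
    prompt: str
    reference_image: bytes | None = None  # e.g., an uploaded photo of a real person
    consent_verified: bool = False        # whether the depicted person authorized this use


SEXUALIZED_TERMS = ("nude", "undress", "lingerie", "sexualized")
MINOR_TERMS = ("child", "teen", "minor")


def classify_request(request: GenerationRequest) -> Verdict:
    """Evaluate a request *before* generation, so that refusal prevents
    the harm instead of remedying it after dissemination."""
    prompt = request.prompt.lower()
    sexualized = any(term in prompt for term in SEXUALIZED_TERMS)

    if sexualized and any(term in prompt for term in MINOR_TERMS):
        return Verdict.BLOCK_MINOR_RISK
    if sexualized and request.reference_image is not None and not request.consent_verified:
        return Verdict.BLOCK_NONCONSENSUAL
    return Verdict.ALLOW


def generate_image(request: GenerationRequest) -> bytes | str:
    verdict = classify_request(request)
    if verdict is not Verdict.ALLOW:
        # In practice the refusal would also be logged for auditability
        # and surfaced to the Trust and Safety team.
        return f"Request refused: {verdict.name}"
    return run_model(request)


def run_model(request: GenerationRequest) -> bytes:
    # Placeholder for the underlying image-generation model.
    return b"<image bytes>"
```

The point of the sketch is not the specific heuristics but where the check sits in the pipeline: the gate runs before the model, so a refused request produces nothing that later needs to be taken down.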

The goal of Safety by Design is not to eliminate every potential harm to end users but to: (i) build systems that promote safety and well‑being; (ii) reduce and prevent harm, including by creating mechanisms that help users identify risky situations and exercise effective controls to avoid them; and (iii) remediate harm by ensuring appropriate response and redress mechanisms when it occurs[19].

Safety by Design requires integrated action involving lawyers, engineers, designers, researchers, and executive leadership. When decisions are driven solely by criteria such as speed of release or engagement metrics, attention to risk tends to be sidelined.

The episode involving Grok illustrates this clearly. The reputational costs, regulatory pressure, and significant social impacts could have been mitigated with a more cautious approach from the beginning. Indeed, acting preventively may help providers reduce both the frequency and severity of unlawful conduct in the use of their services.

Finally, embedding safety as a structural element in the design of technologies produces benefits that go beyond user protection. It also strengthens trust, preserves platform reputation, and facilitates compliance with increasingly demanding regulatory frameworks[20]. In a context of growing regulatory scrutiny over the role played by technology companies, the Grok case reinforces a lesson that cannot be ignored: safety must be built into products from the start, not retrofitted after harm has already occurred.