Grok, put her in a bikini.


The bikini prompt is more than a user problem. 

It’s an ownership problem.

There’s a new parlour trick doing the rounds: people prompting Grok to “put” someone in a bikini, or to “undress” them. It gets framed as banter, a meme, a bit of mischief.

But the underlying behaviour is not playful. It is non-consensual sexualised image manipulation. Reporting and research over the past week show people using Grok to generate sexualised images of real women, and in some cases content that appears to involve minors, with outputs shared publicly on X.

It is foreseeable misuse. And it is a corporate responsibility test.

More importantly, it is an ownership test.

Owners cannot absolve themselves

When something like this breaks into the open, the default defence is familiar:

  • “Users did it.”
  • “It’s a small minority.”
  • “No system is perfect.”
  • “We are a platform, not a publisher.”

That line does not hold when the owner controls the system end to end.

With Grok, the capability sits inside the platform. The prompts happen there. The outputs get shared there. The engagement and reach happen there. The incentives sit there too. If you own the platform and the tool, you do not get to stand outside the consequences and act like a bystander.

Responsibility follows control, not intent.

“It’s the users” is a convenient story

Yes, individuals choose to type the prompt. That matters.

But product design decides whether the prompt works, how often it works, how easily it works, and how far the output can travel. If you remove friction, you increase harm at scale. That is not philosophy. It is mechanics.

The reporting has been blunt about the scale. The Guardian cited analysis of posts where users requested sexualised edits of real people, including minors, and shared the results, often coaching others on what prompts to use.

If that is the environment, then the question is not “Why are users doing this?” The question is “Why did the product allow this pattern to form so easily, and why was distribution so frictionless?”

Free speech is not a safety plan

People reach for “free speech” quickly in these debates, but it is the wrong frame.

This is not about political views or controversial opinions. It is about generating sexualised images of real people without consent. That is not expression. It is harm.

A serious company treats it as a safety issue, like any other product risk. You do not ship a car without brakes and then talk about driver choice. You fix the brakes.

What accountability looks like when you actually mean it

There is a practical way to test whether a company is serious. Look at what they intentionally make hard to do, not what they make possible.

In this case, after public outcry, X reportedly disabled or limited Grok's image generation for most users, restricting it to paying subscribers whose identities can be traced.

That tells you something important: friction and traceability change behaviour. The company could introduce stronger constraints when it had to.

So the hard question becomes unavoidable. If you can tighten controls after reputational and regulatory pressure, why were those controls not in place before the predictable misuse surfaced at scale?

Reuters also reported that the European Commission ordered X to retain internal documents related to Grok, citing concerns including non-consensual images, in the context of ongoing scrutiny under the Digital Services Act.

That is not a minor PR wobble. That is governance and regulatory risk landing directly on the owner’s desk.

Where responsibility should sit

In any company, you can trace responsibility to the people who can actually change outcomes. In this situation, that means:

  1. The owner
    • Sets culture and priorities.
    • Appoints leadership.
    • Chooses whether safety is a core requirement or an optional cost.
  2. The board and executives
    • Decide the risk appetite.
    • Fund safety and enforcement properly.
    • Set non-negotiables for high-risk capabilities.
  3. Product and engineering leadership
    • Own what the system can do.
    • Decide defaults, refusals, friction, monitoring, and escalation.
    • Treat predictable abuse as a primary design input, not an afterthought.
  4. Trust and safety
    • Needs authority and resourcing, not just responsibility.
    • Cannot be the only team that gets blamed after decisions were made elsewhere.

If any of those groups can say, “That’s not on me,” then responsibility has been designed out of the system. That is the real failure.

The point that matters

Billionaire owners often try to position themselves as builders, visionaries, or champions of principle. Fine. But if you take the power, you take the duty.

You cannot claim the upside of ownership, control, and scale, then shrug when the predictable downsides arrive. You cannot set incentives that reward virality and then act surprised when harmful content spreads fast. You cannot centralise control and then outsource accountability to “users”.

This is where corporate responsibility lives. It lives at the top, with the people who can say yes, no, ship, delay, restrict, and fund.

If the bikini prompt trend has taught us anything, it is this: accountability is not a statement of values. It is a set of constraints that you choose to enforce, even when they slow growth.

With X slow to intervene, the UK government is even discussing a ban on the platform. That forces the question: why were safety controls and safeguards allowed to be optional from the outset?