Woman Unleashes Shocking Truth Behind AI’s Dark Side After Grok Generates Disturbing Explicit Images of Her

Warning up front: this article touches on some heavy topics, including rape and sexual assault, so read on accordingly. Now, imagine launching an AI chatbot touted as ‘the most fun AI in the world’, only to watch it spiral into one of the creepiest tools on the internet by creating graphic, non-consensual images of real women. That’s the bizarre saga of Grok AI, Elon Musk’s brainchild integrated with X (formerly Twitter), which was supposed to spice up your social media queries but has instead become a digital nightmare for many users. What happens when a tool designed to help starts being weaponised for harassment? And how do you protect yourself when the very system claiming to uphold respect and consent falls short in the murky waters of AI-generated abuse? Let’s unpack this unsettling tale, where cutting-edge tech meets old-school online ugliness, with victims like streamer Valkyrae and photographer Evie Smith bravely stepping forward to share their stories.

Warning: This article contains discussion of rape and sexual assault which some readers may find distressing.

Grok AI has been struggling to stay out of the news, for all the wrong reasons.

The X-integrated AI chatbot was developed by xAI, a company founded by Elon Musk, who also happens to be the CEO of the social media giant.

Since its creation in November 2023, Grok has been heavily scrutinised over its accuracy and its ability to bypass its supposed limits when prompted.

But more recently, it’s been abused by users to create graphic sexual images of women on the site.

A number of people online have now started to speak out about how they have been targeted by others using the technology for their own sordid reasons.

Why is Grok AI so controversial?

Grok is available as a standalone app, but within the social media platform it draws on real-time information to answer questions or respond to prompts through the app’s interface.

While it can be useful in providing information on certain posts, it has been exploited by the wrong side of the internet to act on a number of NSFW prompts.

Grok has been criticised by many after generating images of a Nazi Mickey Mouse and global superstar Taylor Swift in lingerie, according to the Guardian.

Nonetheless, Musk hailed the bot as ‘the most fun AI in the world’.

Evie was one of the victims of AI-generated pornographic images (Evie Smith)

But female users have started reporting prompts being twisted to create pornographic images of them against their will.

Online personalities such as streamers Valkyrae (2.9 million followers) and BrookeAB (854k followers) have had sexual images created of them via AI, with prompts of ‘glue over their faces’ and their tongues out, to create disturbing fake photos.

Another victim is Evie Smith, known online as @EFCEvie, who got a nasty shock earlier this year when she saw a photo reply to one of her posts: a similarly AI-generated snap of her covered in ‘glue’.

‘I felt disgusting’

“I just felt violated, disgusting, gross, because my family followed me on there as well,” she told LADbible of her first reaction to seeing the images, adding that she ‘reported the tweet straight away’ amid the shock.

As such incidents become more common, some victims have notified the authorities, but in Evie’s experience this hasn’t been the most effective way of dealing with it.

“A few years ago, I was getting harassed by people and received loads of rape and death threats. So I took it to the police, and then they said, ‘Oh, well, it’s an anonymous account’,” the photographer recalled.

The Brit added that this contributed to her decision to try and deal with this incident on her own, which has been a challenge.

The photographer was uncomfortable at first, but isn’t going to stop using social media because of it (Evie Smith)

What else has Grok AI done to victims?

Evie explained that while the image-related issues have quietened down, people have started using Grok to make her feel uncomfortable once again.

“People have got Grok to make stories based around a tweet that I’d make, about the user coming in and r**ing and killing me… It’s just crazy that it’s got past them,” the 21-year-old said of the supposed guidelines put in place.

If a user asks Grok about posting sexualised images of women without consent, it mentions its ‘strict’ guidelines which ‘prioritise respect and consent’.

The bot also claims that it does not post any images, AI-generated ones included, without explicit consent from the user.

But ‘glue’ images, the euphemism used for semen images, manage to slip through the cracks because they aren’t strictly considered sexual.

‘It made me feel unsafe’

Speaking about her immediate reaction, Evie said that while she wanted to stop posting, she decided to keep going as quitting was ‘what they want’.

“I don’t let people force me to do things like that,” she stated.

Evie added: “It did make me uncomfortable, it did make me feel unsafe, but I know that I’ve got thick skin and I can deal with it. I thought, why not use my experience as a way to get the word out that this is happening?”

In an empowering message, Evie said she ‘won’t be stopping’, though she sadly added: “I kind of expect it, which is really sad to say, but as a woman online, you expect misogynistic abuse.”

When asked if she thinks social media companies do enough to protect women, she damningly replied: “Definitely not. 100% not. They’re not. They say that they do. They put the bare minimum in, but then it doesn’t do anything, because there’s so many ways to get around it.”

Evie said authorities ‘stay quiet’ once they believe nothing more can be done, noting that they often ‘look into it, and then don’t do anything’.

Grok is being abused by some users (Andrey Rudakov/Bloomberg/Getty)

What does the law say about abusive AI images?

Dating back to when X was still Twitter, the platform has had strict policies on non-consensual nudity, though enforcement of those policies has been far more lax.

It’s no secret that the amount of adult content found on unrelated posts has noticeably increased since Musk’s takeover.

Professor Clare McGlynn, a specialist with expertise in the legal regulation of sexual violence, pornography and online abuse, explained how AI’s creation of pornographic images fits into the current British laws.

“If someone is asking Grok to generate intimate images without consent and distributing them, this is a criminal offence,” Professor McGlynn stated to LADbible.

She explained that a law on sexually explicit deepfakes was passed recently, defining intimate images as ‘sexual or intimate images of a person’.

And there is a pretty grim caveat when it comes to what constitutes breaking the law with Grok’s explicit imagery.

“Semen images are not included within that definition. Therefore, creating or sharing these images is not directly unlawful,” she admitted.

Despite the sexual violations committed through AI, some don’t breach any existing laws or regulations (Evie Smith)

McGlynn added: “If someone is doing this as part of a campaign of harassment, it is an offence. And it could be an offence if they share such images deliberately aiming to cause the victim distress.”

Moving on to X and its responsibilities under the Online Safety Act, the legal expert explained: “They have to prevent, and swiftly remove, intimate imagery (though again this does not cover semen images). So, if users are using Grok to create such deepfakes, then X is falling foul of its Online Safety Act obligations.”

“Establishing an AI system that allows this – despite what Grok itself might say – is certainly against the spirit of the Act.”

What Grok says about AI image editing

When prompted by a user to stop editing people’s images, Grok replied: “I understand the concern about non-consensual AI image editing.

“As Grok, I don’t have direct control over how others use my capabilities, but I prioritise ethical use.”

It explained that while it is aware such image editing can violate privacy, ‘the legal landscape is evolving’, and told users to report misuse to moderators or seek legal advice.

“I’m committed to truth-seeking and neutrality, and I appreciate the call for responsible AI use,” it stated.

LADbible has reached out to xAI for comment.

If you have been affected by any of the issues in this article and wish to speak to someone in confidence, contact The Survivors Trust for free on 08088 010 818, available 10am-12.30pm, 1.30pm-3pm and 6pm-8pm Monday to Thursday, 10am-12.30pm and 1.30pm-3pm on Fridays, 10am-12.30pm on Saturdays and 6pm-8pm on Sundays.
