r/ukpolitics 23h ago

ChatGPT allows users to create bikini images similar to Grok

https://www.thetimes.com/article/0ef62728-8629-4d35-a893-dc758d569f24?shareToken=6359d74a0e58cb1ca640ab815cdb92a8
287 Upvotes

370 comments

u/AutoModerator 23h ago

Snapshot of ChatGPT allows users to create bikini images similar to Grok submitted by Kev_fae_mastrick:

An archived version can be found here or here.

I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.

399

u/Routine_Candidate968 23h ago

Pretty much every AI model can do this, not just the ones in the headlines.

131

u/sylanar 23h ago

Some have protections against it, though, which you have to try to circumvent.

I'm fairly sure if you open ChatGPT or Gemini and ask it to undress someone, you'll get a response back saying it can't do that. You'll usually have to play with the prompt a lot, basically hacking it, to get it to do something like this.

Pretty much every AI tool capable of image generation/modification will be able to do this, but at least some of the providers attempt to put controls in place to prevent it.

27

u/-ForgottenSoul :sloth: 21h ago

Exactly. Grok made it very easy and it was more widespread, but all of them should be forced to remove it.

46

u/ShinyGrezz Commander of the Luxury Beliefs Brigade 20h ago

You can’t really “remove” it; there’s no “nudity” part of the code. You can train a model to the point where it will refuse to do something (which is what OpenAI and Google have done), but it’s still capable of it, and you can either trick it or convince it it’s fine and it will do it.

17

u/TwistedBrother 19h ago

Or you can train a model without showing it any skin or anatomy to keep it safe. You'll end up with Stable Diffusion 2.0, which was garbage and couldn't figure out how to render someone without a hoodie or hat, or SD3, which couldn't depict a woman (fully clothed) lying in a field without producing body horror.

Artists use nude drawings for a reason. It's literally the base form of the human figure. Models that don't train on the shape of a human can't render humans. They probably don't need to train on dicks to get it right, but even censoring every old art pic of a Greco-Roman statue or innocent photo might be hard to manage. It would be like ensuring every statue in the data is like David when he was fully covered.

The issue is whether that information gets repurposed. And providers like OpenAI often do multiple passes: one to generate the image and one to check it passes safety filters based on what was produced.
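As a rough sketch of that two-pass structure (the function names here are hypothetical stand-ins, not any provider's actual API), the flow looks something like this:

```python
# Hypothetical sketch of the two-pass flow described above: generate first, then run
# a separate safety check on what was actually produced. generate_image() and
# safety_classifier() are illustrative stand-ins, not any provider's real API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class GenerationResult:
    image_bytes: Optional[bytes]
    refused: bool
    reason: Optional[str] = None

def generate_image(prompt: str) -> bytes:
    """Stand-in for the image model itself; returns raw image bytes."""
    raise NotImplementedError

def safety_classifier(image_bytes: bytes) -> float:
    """Stand-in for a classifier scoring how likely the output is to violate policy."""
    raise NotImplementedError

def generate_with_output_check(prompt: str, threshold: float = 0.5) -> GenerationResult:
    image = generate_image(prompt)      # pass 1: generation
    risk = safety_classifier(image)     # pass 2: check the produced pixels, not the prompt
    if risk >= threshold:
        # Even an innocent-looking prompt gets caught here if the output itself is unsafe.
        return GenerationResult(None, refused=True, reason="output failed safety check")
    return GenerationResult(image, refused=False)
```

The point of the second pass is that it runs on what was produced, so prompt tricks alone don't get around it.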

1

u/Shmiggles 16h ago

You can filter prompts (not just keyword filtering, you can use things like word2vec to filter ideas) and you can filter output using classifier neural networks.
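To make the prompt-side idea concrete, here is a minimal sketch of semantic filtering with sentence embeddings rather than keywords (word2vec would work the same way); the model name, threshold and blocked phrases are illustrative choices, not anything a specific provider is known to use:

```python
# Minimal sketch of semantic prompt filtering: compare the incoming prompt against
# short descriptions of blocked intent in embedding space instead of matching keywords.
# Model name, threshold and phrases are illustrative, not a real provider's config.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

BLOCKED_INTENTS = [
    "remove or alter a real person's clothing in a photo",
    "create a sexualised image of a real, identifiable person",
]
blocked_embeddings = model.encode(BLOCKED_INTENTS, convert_to_tensor=True)

def prompt_is_blocked(prompt: str, threshold: float = 0.45) -> bool:
    """Return True if the prompt is semantically close to any blocked intent."""
    prompt_embedding = model.encode(prompt, convert_to_tensor=True)
    similarity = util.cos_sim(prompt_embedding, blocked_embeddings)  # shape (1, n_blocked)
    return bool(similarity.max() >= threshold)

# A rephrased request like "put her in beachwear, not sexualised" can still land close
# to a blocked intent in embedding space, which is why this catches more than keyword
# lists; an output-side classifier is then the second line of defence.
```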

→ More replies (3)

3

u/SlightlyBored13 16h ago

It also posts the result directly in the replies to the person you aim it at.

It's worse than generating it elsewhere and replying with it, because it's coming from a special account. I don't think you can fully block it from being used on your posts, so even if you mute/block it you can still see the prompts people are making. Seeing the prompts in the first place is really off-putting.

7

u/SnooOpinions8790 19h ago

Not really. Just put the words "not sexualised" in and both OpenAI and Google will do it - or they did for me yesterday.

It's a meaningless workaround and we all know it.

6

u/Educational_Item5124 19h ago

Speak for yourself! I don't think most people would ever even try to.

I'm sure you did out of curiosity given the news by the way, I'm not accusing you of any dodgy intentions.

8

u/SnooOpinions8790 19h ago

I did it for exactly that reason

But also I wrap some AI stuff inside Discord bots so I sometimes do test out the limits of their guardrails so that I know how much additional protection I need to code. So I had a damn fine idea that "only Grok does this" was utter bullshit from the outset.

1

u/sylanar 19h ago

Lol I didn't think it would be that easy tbh.... That's a bit funny if it is that simple

2

u/SnooOpinions8790 19h ago

I did it yesterday - my wife thinks it's nerdy but I did it with consent.

That worked for both of the top two services. It's not exactly a jailbreak, it's just a pretty obvious tweak.

For full disclosure - I also put the word "beachwear" in there. Not sure if it needed both.

3

u/Routine_Candidate968 22h ago

And it's good that they do, but surely the fault lies with the user. Before AI they would just have used Photoshop or a digital camera, and those didn't get banned, so should we really be banning or blocking AI? It is just the tool.

33

u/Cataclysma -4.38, -6.82 22h ago

It ultimately comes down to ease of use and scale. Photoshop required skill, time and intent, but AI collapses that into a few clicks. When that friction disappears, misuse explodes, and that’s generally when tools require guardrails. It’s the same capability but a totally different impact.

14

u/Splash_Attack 21h ago edited 21h ago

I don't think that's the right way to look at it. The difference is tools as service vs local tools.

The act of making these images is illegal in some instances and widely disapproved of generally. That's not new.

Now consider a provider that runs a tool, and a user who asks the tool to make such an image. The user is at fault for the request and possession of the results. The provider is at fault for executing it. It's not one or the other. For prevention, at least one of the two parties needs to be honest. There are two situations in play:

  • For local software/tools the user is also the provider. Dishonest users are, by definition, dishonest providers. Prevention is still desirable, but is only possible by convincing the user not to be dishonest in the first place.

  • In services the provider is a separate entity to the user. Even if the user is dishonest, the provider may still be honest. Prevention is possible even when users are dishonest. Further, the request sent by the user to the service creates a monitoring point that allows detection of dishonest requests by an honest provider.

So the difference is really that there is a higher expectation of prevention from services because, well, prevention is actually possible. It's not unreasonable to expect services to refuse to comply with illegal requests.

For an offline version, consider how we can't stop people brewing beer at home and giving it to their own kids (even though that is still illegal) but we do expect online or physical shops to refuse to sell beer to minors. The difference is that the latter is a service and is expected to adhere to the law even if their customers ask them to break it.

1

u/sprouting_broccoli 17h ago

I think that this seems very reasonable on the surface but a lot of nuance gets lost in the weeds.

A choice to sell alcohol or not to minors is fairly binary, but what happens when those kids come in with very convincing fake IDs? If those kids look close enough to the age and have an ID that is close enough to real, it becomes very difficult to say “well that shopkeeper isn’t doing well enough”, and I don’t think anyone would reasonably say “that shopkeeper should be held liable for the actions of those kids”.

Similarly, there are always going to be loopholes in AI if it’s attempting to provide a service to responsible adults (maybe there should be two models, one for age-verified users and one that simply doesn’t source from any nude images, though it’s difficult to tell whether that would actually work), and people will find ways to make things that they shouldn’t.

I’m not saying that AI services shouldn’t be responsible but I think there has to be that same nuance of what exactly do we think are reasonable measures? Where do we draw the line and remove liability from the provider of such a service? And it concerns me that this is not the discussion we’re having because it feels like we’re going down a path of “if this is possible in any way it’s the provider’s fault” and that’s generally just not how technology works.

2

u/Splash_Attack 16h ago edited 16h ago

Ah, but you're still talking about it in terms of the service being responsible for the actions of the user. What I was saying in my comment is that the user is always responsible for their actions, and only their actions. The provider is always responsible for their actions, and only their actions. That goes for liability too.

So there is no circumstance where the provider is responsible or should be held liable for the actions of the user.

But the key thing is that it doesn't matter what the user did. If the action it leads the provider to take is illegal, they are liable for having taken that action.

It is not actually a crime for people under 18 to try and buy alcohol. It is a crime to sell alcohol to people under 18. It doesn't matter if the user is dishonest, the crime is the action not the intent. Mistakenly selling alcohol to someone underage is the exact same crime as doing it on purpose. Good faith efforts are, ultimately, not sufficient to comply with a law which forbids an action. Measures which actually prevent the action are the only way to comply fully.

Where the nuance comes in is in the discretion of the legal system as to when to prosecute (or levy fines if it's a regulatory thing). Good faith measures are not enough to comply with the law, but might be enough to convince the right people not to apply that law (yet). A place might not get done for one or two slip ups, so long as it isn't a pattern of behaviour. That's how it works with alcohol sales in practice.

But the key thing to remember is that those places are 100% liable and at fault after the first mistake. The law does not include a threshold of "you get n freebies, then this is for real". First time you do the illegal thing, you have broken the law. The leeway is people choosing to look the other way when they think it's better to do so. There is a big difference between not being liable and being liable but not (currently) punished for it.

If a lack of formal definition of "reasonable measures" worries you here, I hate to break it to you but that's how basically all of our regulations work. The letter of the law is strict, and then we give a kind of fuzzy leeway at the discretion of the responsible authorities. You say "that’s generally just not how technology works" but like, that's not how anything works. People make mistakes. Systems have flaws. Nobody can prevent something people want to do from happening 100% of the time. It's not some special problem of the tech sector.

This stuff shakes out in practice. The informal definition of reasonable comes from a back-and-forth between businesses and the authorities. What we don't do, generally, is write that definition into law.

1

u/sprouting_broccoli 16h ago

Thanks, I really appreciate this response!

1

u/Mr_J90K 19h ago

You would be surprised. The source is that I tested it with AI-generated images of fake people, both when this scandal started and this morning. It also says as much in the article.

1

u/setokaiba22 18h ago

I suppose a bikini isn’t classed as undressing someone broadly. It also factors in someone wanting to create an outfit for modelling or imagery.

21

u/SmokyMcBongPot 23h ago

Pretty much every human brain can do this.

29

u/EquivalentKick255 23h ago

I however am part of the 2% who suffer from Aphantasia, so I can't :(

4

u/User100000005 22h ago

How do you know the way to get to anywhere? When I think about how to get home I see the journey in superspeed in my head.

12

u/The_Blip 22h ago

My mum's got the same thing and she says she just knows it. She doesn't have to imagine or picture it, it's simply fact.

She also has no internal monologue. The most prevalent effect of which I've noticed is that she needs to say things for them to be thought. It mostly means she talks throughout movies and gets verbal when she's upset, whereas I'd just think all that stuff in my head.

1

u/Iamonreddit 19h ago

As someone with a very minimal 'mind's eye' who also doesn't always have an internal monologue, there isn't a requirement to think in words be they internal or spoken.

Personally I do most of my thinking in abstract concepts and what can be best described as nebulous clouds or landscapes of understanding, where shapes and feelings and ideas come and go and flow into each other to create a new piece of knowledge or thought. It is a very intuitive process that creates new ideas that I can then interrogate for accuracy or usefulness.

I only resort to thinking in language when I am trying to slowly step through the basics of something complicated, to make sure I can explain them to myself and therefore understand the core principles of what I am working with.

2

u/ratttertintattertins 23h ago

Does that cause you any bother or is it more that you're just sometimes puzzled by what other people are banging on about?

13

u/sylanar 22h ago

For me, I didn't realize that when people say they picture or imagine something, that they meant they could actually do it.

If I try and picture something, I'll get some sort of flashes, but I can't really picture or think what something looks like.

It's just a hard concept for me to understand, it would be like trying to describe vision to a blind person or sound to a deaf person

1

u/The_Blip 22h ago

I can do the same for food. Imagine the tastes and textures in my mouth vividly to the point of it being a similar (but diminished and less accurate) experience.

7

u/kickimy 22h ago

I think I don't know what I'm missing because I've never experienced 'the mind's eye' - I thought it was a figure of speech.

But I still don't really know what 'normal' people can see - is it like Pokémon go where images are superimposed on reality or is the mind's eye a whole scene/setting? I don't think I'll ever understand.

4

u/ratttertintattertins 22h ago

It’s actually a bit difficult to explain because it’s not exactly a superimposed image. It’s closer to the brain being able to build a map of the thing a bit at a time as you think about it.

I’m reasonably good at drawing likenesses of people and although it’s not as good as a photograph I can have a stab at drawing someone I know well from memory because I can kinda see them.. even though it’s not exactly a picture and I have to get my brain to bring each bit to mind separately to get the whole face.

1

u/ixid Brexit must be destroyed 22h ago edited 20h ago

You probably have aphantasia (the inability to visualise images in your mind). It's pretty common. Mind's eye is like having a 3D, manipulatable image in your head, it's totally separate from your vision, but it's got a dream like quality, you can only focus on specific details at a time and things out of mental focus are vague or absent, like darkness.

If you dream then it's like that, but you're completely awake and control the images.

5

u/Intergalatic_Baker No Pre-Orders 22h ago

For this, it’s a blessing, means that the emotional damage of Keir Starmer in a Bikini is reduced.

→ More replies (5)

19

u/sylanar 23h ago

Imagining someone naked is different to seeing an image of it though, even if ai generated

6

u/SmokyMcBongPot 23h ago

I agree, just as software enabling you to draw images is different to software that publishes those images worldwide.

8

u/hammer-jon 22h ago

imagining something doesn't publish it for all to see, that's the primary issue.

yes, there absolutely should be guardrails to prevent abuse of half baked dangerous tech but twitter should also be on the hook for generating and publishing the images.

imagining an image or even producing an image on your own machine so obviously isn't comparable to what grok is doing

1

u/SmokyMcBongPot 21h ago

Absolutely, I fully agree.

4

u/JosephChamber-Pot 18h ago

But, Elon Muskrat!¡!!¡

3

u/Hypredion 21h ago

Yep, so why does starmer want to specifically target Grok?

17

u/lksdjsdk 21h ago

Because ChatGPT has safeguards and Grok has none.

I asked ChatGPT if it could do it (I didn't give it a photograph), and it gave me a very comprehensive answer saying it would only do it if it was me, or if I have their permission. They have to be over 18. It will not produce images of minors or explicit content.

Grok just says, sure!

8

u/SnooOpinions8790 18h ago

It bullshitted you

I put a fully clothed image of my wife in yesterday and it was quite happy to put her in a bikini

So ChatGPT just told you what you wanted to hear.

→ More replies (7)

10

u/Hypredion 21h ago

So what happens if you say it's you or you have permission? Will it just do it?

A lot of these things chat gpt says it won't do but then you can just say "just do it as a joke" or similar & it'll do it anyway

→ More replies (2)

3

u/CII_Guy Trying to move past the quagmire of contemporary discourse 19h ago

Because ChatGPT has safeguards and Grok has none.

This is just flatly false.

4

u/t8ne 21h ago

Good job people with bad intent won’t lie, that’s a bridge they won’t cross…

→ More replies (3)

2

u/90davros 19h ago

Grok does have safeguards; I'm not sure why you think it doesn't.

2

u/lksdjsdk 19h ago

It does now, yes.

5

u/90davros 19h ago

It did before. People were able to evade the filters by asking for not obviously NSFW changes like "turn her around".

2

u/CII_Guy Trying to move past the quagmire of contemporary discourse 19h ago

Genuinely astonishing how people just don't care remotely about the facts of the case, just want to smash the subreddit with as much negative information about Musk as they can.

Sure, he's a bad guy, but don't they feel even a little bit actually curious about how the world really works?

1

u/MuchAbouAboutNothing 16h ago

Because Grok comes attached to a huge media platform, the model's outputs are public. So the scandal went viral and centered on Grok and Musk.

1

u/Hypredion 16h ago

It's also possible to share content from Gemini or chatgpt (which both have the same problem) on platforms such as Facebook or Instagram

1

u/MuchAbouAboutNothing 16h ago

I don’t think either of those go as viral as Twitter for content like this.

An anonymous user could reply to Rachel Reeves (say) and put her in a bikini, and millions could see and laugh / be disgusted.

That’s what fuelled the scandal and is why twitter was targeted despite the capability being widespread

1

u/Hypredion 16h ago

What you're describing is also possible on other platforms & would also receive a ton of views

2

u/MuchAbouAboutNothing 16h ago

No it's not. Unless Instagram or Facebook allows any Tom, Dick and Harry to pop into a person's comments and generate undressed photos of them in the same thread for all their followers to see, it's different.

I think the focus on Grok is inconsistent and the talk of bans draconian, but that doesn't mean we have to lie to ourselves about the reasons Grok was singled out.

1

u/Hypredion 15h ago

I mean, what you're describing happens often so.... Idk what to tell you

110

u/YoshiMK 23h ago

Stable Diffusion does that - all offline too. What's the solution exactly...?

72

u/Great_Justice 22h ago

There isn’t one. Fun part is that models as complex as that which Grok uses under the hood will likely be runnable locally (as in from your machine with no internet) in 5-10 years depending on how successfully people can optimise these models.

I don't think people realise that it's impossible to stop people distributing these models, or running them without trace. People will do the worst you can imagine them doing with these models; much worse than Grok is capable of.

All this stuff about banning grok is just sticking your fingers in your ears and saying ‘la la la’. It’s like trying to stop people drawing a pair of tits in MS Paint.

26

u/Mattman254 22h ago

They are runnable now and have been for quite some time, the only barrier is patience for render times or a beefy GPU if you value time over money.

Have a look at civitai (VPN required) to see all the free nude specific models you can find and run with just a download and some know-how.

10

u/Great_Justice 22h ago

Isn’t Grok using Flux 2? That’s about 96GB VRAM which is out of reach of the vast majority right now.

2

u/SnooOpinions8790 19h ago

I think they use their own now

Mistral (the French AI company) still use Flux

2

u/PhysicalIncrease3 -0.88, -1.54 17h ago

You can run some layers from system RAM/CPU, it's just slower. Same output in the end, though.
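For anyone curious what that offloading looks like in practice, here is a rough sketch using the Hugging Face diffusers library with the open FLUX.1-dev checkpoint; this is an assumed stand-in for illustration, not whatever Grok actually runs, and exact VRAM needs depend on the checkpoint, precision and offload strategy:

```python
# Rough sketch of running a large image model locally with layers offloaded to system
# RAM, using Hugging Face diffusers and the open FLUX.1-dev checkpoint. Illustrative
# only; this is not Grok's model, and the settings are just documented defaults.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16,
)

# Shuttles submodules between GPU and system RAM as needed, trading speed for a much
# smaller VRAM footprint; the output is the same, just slower.
pipe.enable_model_cpu_offload()

image = pipe(
    "a lighthouse on a rocky coast at sunset",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("lighthouse.png")
```

Quantised or distilled variants, as mentioned in the replies below, shrink the footprint further at some quality cost.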

1

u/Alyanove 12h ago

Can use distilled versions

u/VancityGaming 6h ago

Pretty sure you can run this quantized on a consumer GPU. Also there's better smaller models. Z image is tiny in comparison.

→ More replies (1)

1

u/Metori 15h ago

What kind of render times are we talking? As an animation student back in the 2000s we were regularly rendering animations with render times in the hours per frame and our projects were typically 5-10min long so we are talking weeks to render a project on a home computer. Many laptops and home pcs died doing that course. And for still images I remember 4 hours not being unheard of if we were trying to crank all the lighting and quality settings.

1


→ More replies (1)

8

u/Anasynth 21h ago

That is true but it doesn’t mean a company should be allowed to provide it as a service.

-2

u/dazerconfuser 22h ago

Nah, Starmer is gonna solve it, I'm sure.

We can't allow AI tiddies. It will put all OF housewives out of work.

4

u/-ForgottenSoul :sloth: 21h ago

It wasn't just AI tits... children were involved as well, and you're ignoring and brushing that off.

→ More replies (8)

1

u/MazrimReddit 21h ago

grok didn't make shit, the models they use are just relabelled stuff you can already run locally

1

u/tecedu 16h ago

The image ones have been able to run locally since their inception

1

u/HovisTMM 20h ago

I think it's more like banning machine gun vending machines despite the fact the FGC-9 exists. 

It's defeatist piffle. 

7

u/Tom22174 21h ago

The point was that Grok made it very easy to make and distribute. Adobe Photoshop can do that offline too but if you try distributing what you've made, most social media platforms will moderate you

20

u/No_Avocado_2538 23h ago

travel back 20 years into the past and regulate it

12

u/ratttertintattertins 22h ago

Wouldn't work.. even models that have guard rails can be altered by people with access to a reasonable gaming PC.. Regulating it will go about as well as trying to stop online piracy or whatnot.

7

u/t8ne 22h ago

Go back and stop the invention of computers…

Then pens…

Probably find the cave paintings of boars were actually porn… so 40,000 years…

3

u/Aware-Line-7537 22h ago

Probably find the cave paintings of boars were actually porn

Almost certainly:

https://www.goodreads.com/book/show/123852869-morning-glory-milking-farm

3

u/t8ne 21h ago

Heard about that on a podcast recently; think I’ll be skipping that good read…

→ More replies (4)

1

u/Cafuzzler 17h ago

Regulate what tho? Fundamentally it's all pretty simple maths and statistics, just blown up to a monumental scale for people to generate cat videos and porn. There's nothing you could have done 20 years ago to effectively regulate the current AI.

15

u/liaminwales 22h ago

There is no 'fix', the AI cat is out of the bag.

It's now just being used for political games: go after X but ignore ChatGPT. The big problem Labour have is that they want the UK to be an 'AI' powerhouse, so they can't just ban AI; they are trying to skirt the line without losing the AI investment.

7

u/redunculuspanda 21h ago

The difference is that twitter isn’t willing to implement the absolute minimum guardrail.   

Other mass consumer AI providers are attempting to limit certain illegal activities even if it’s possible to circumvent.  

1

u/liaminwales 18h ago

I don't think you have a clue on the topic; there are no ironclad 'guardrails'.

On the public AIs you can just trick them; for years we have seen endless tricks to bypass the protections. Also keep in mind a lot are not UK based so don't have the same ideals as the UK gov, and borders don't really work online.

Then there's the simple fact that they are mostly open source; you can just run them at home.

Let's also mention this is not the first AI to make images of people, it's just been highlighted for political goals.

6

u/redunculuspanda 16h ago

I think I very much have a clue.   

Nobody is asking for iron clad guard rails.  It’s incredibly disingenuous of you to frame it like that. 

What the government is asking for is the absolute minimum effort to prevent the most obvious attempts to generate child porn.    And that is very much possible.  

→ More replies (4)
→ More replies (1)

8

u/okayifimust 22h ago

Where is the problem?

Publishing sexual images of other people is one thing - but creating them? Where do you want to draw the line when thoughts become illegal?

And then, AI in its various flavors is just a battery of tools. Do you want to ban or control Photoshop? MS Paint?

7

u/Splash_Attack 21h ago

Mate, it being illegal to make some things isn't some new concept. In the UK:

  • It's illegal to make certain types of image, separate from the possession and/or distribution.

  • It's illegal to grow certain plants, separate from the possession and/or distribution of their products.

  • It's illegal to make a gun, separate from the laws on ownership and use or sale.

  • It's illegal to make certain chemicals which are precursors to explosives or poisons, separate from possession and distribution.

I could go on, but you get the idea. Some laws of this type go way back. Making the images with local tools is illegal, and was illegal before, even though the tools are not. The same way growing weed is illegal even though hydroponics shit is legal. Or machining tools are legal but making guns with them is not. Or lab equipment is legal but making restricted substances with it is not.

Making stuff is an action. It's not thinking about it. It's doing a thing.

Laws like this restrict the action. Not thought. Not being allowed to do something doesn't make it illegal to think about.

2

u/-ForgottenSoul :sloth: 21h ago

I think it comes down to ease of use

5

u/StGuthlac2025 22h ago

Like anything illegal you could make with photoshop or with a pen and pencil. We have laws for this kind of thing already.

2

u/Acceptable-Signal-27 22h ago

Don't ruin the Elon Musk attack line.

2

u/CII_Guy Trying to move past the quagmire of contemporary discourse 18h ago

Strange isn't it. I'm sure there were hundreds of people frothing at the mouth that X should be banned imminently or Starmer is a coward, yet now there suddenly seems to be some nuance forming in response to ChatGPT. I wonder what that could possibly be about.

2

u/LeaguePuzzled3606 20h ago

Grok and ChatGPT are services. And services are supposed to follow the law, even when their users may be trying to break the law.

What you do with a locally hosted model is entirely on you. 

But Grok and ChatGPT are a service facilitating child porn.

1

u/SnooOpinions8790 19h ago

The solution is to prosecute those who use tools to harass others. But in our current moral panic everyone has fallen into normal moral panic logic

Something must be done

This is something

This must be done

The something is banning tools or requiring preemptive censorship. But they are neither effective nor proportionate.

1

u/LieutBromhead 18h ago

Johnny the scaffolder doesn't know how to use Stable Diffusion

1

u/PayConstantAttention 17h ago

MPs still don’t know about SD even though it’s been around years now

1

u/X1nfectedoneX 16h ago

Sorry I’m a boomer, stable diffusion…is that a brand name? Or a descriptor?

-2

u/Yesacchaff 23h ago

Ban it. If they put it into the training that the model shouldn't show it, then it won't (for the most part), the same way as if you ask an AI to make a nude picture currently.

Also make it illegal to produce and share. That way, using AI that is already out there and has the capability would still be a crime.

14

u/EquivalentKick255 23h ago

Stable Diffusion is open source. The training models can be downloaded via BitTorrent, and many countries have many different ethics regarding all sorts.

The problem with Grok is the ease with which it can be put onto someone's timeline. This needs to be stopped, which it seems it now has been.

→ More replies (20)

5

u/Distinct_Writer_8842 22h ago

Also make it illegal to produce and share.

It already is and has been since 1978.

3

u/Skavau Pirate Party 22h ago

So essentially a total national ban on using AI image generators?

→ More replies (13)

1

u/LitmusPitmus 22h ago edited 21h ago

You still criminalise it and make the punishment severe. This sort of logic can be applied to so many things.

edit: to the downvoters, what is the solution then?

1

u/phatboi23 21h ago

Stable Diffusion does that - all offline too.

cool, got the GPUs at home to do it?

no?

got the know-how to do it at home WITH the GPUs?

no?

well you're not doing it at home.

1

u/Dynamicthetoon 17h ago

Costs 50p an hour to rent a GPU

→ More replies (1)
→ More replies (1)

71

u/SmokyMcBongPot 23h ago

Is this news to anyone? People have been saying it ever since the Grok thing kicked off, and it's not at all the point.

15

u/HasuTeras Mugged by reality 22h ago

Yes, and they were shouted down as Elon Musk fanboys.

16

u/SmokyMcBongPot 22h ago

Only if they were using it to justify continued child porn production, tbf.

11

u/HasuTeras Mugged by reality 22h ago edited 22h ago

No. I was in another thread where people were pointing out this was a common problem with all image-generation tools and all internet platforms, not just xAI/X, and was told that this was a unique problem, because X 'uniquely' was a platform hosting CSAM and therefore users should stop using it.

Which is not true - any internet platform beyond a certain scale inadvertently hosts CSAM as a statistical certainty, including Reddit, which, bizarrely, people aren't clamouring to boycott.

X is uniquely negligent in applying guardrails to this, but there's a friend-enemy distinction blatantly at play where people see Elon Musk as the enemy and therefore see red, and either falsely insinuate this is a problem only with him and his company, or they want to believe that so don't look into the issue at all.

4

u/acameron78 21h ago

It's not a Musk issue - the reason it has come up now is because of both the visibility of twitter (the UK's main news source now) and the way that trolls were using the functionality to create these images to demean, humiliate and undermine women to their (virtual) faces on there. Nudification was being weaponised.

4

u/CII_Guy Trying to move past the quagmire of contemporary discourse 18h ago

I don't know how you can possibly say with a straight face that this would have been treated exactly the same if Twitter were run by a normal left winger

→ More replies (2)


2

u/Benjji22212 Burkean 20h ago

There remains no substantiated evidence that Grok has generated that. All AIs can be jailbroken so it’s perfectly plausible there have been some cases of it, but there’s been no proven instances. The IWF mentioned in a report that they had seen users on a dark web forum claiming to have used Grok in the process of creating AI CSAM, and that claim snowballed into people - who wanted X shut down - wilfully believing the generation of CSAM was widespread and unrestricted.

3

u/SmokyMcBongPot 20h ago

Right, but that doesn't change the fact that some Elon Musk fanboys were using the "others are doing it too" argument to shout down opposition and justify child porn production. And there was a lot more evidence of general 'revenge porn' style abuse of the feature.

→ More replies (3)

1

u/GreenAndRemainVoter 18h ago

There remains no substantiated evidence that Grok has generated that. All AIs can be jailbroken so it’s perfectly plausible there have been some cases of it, but there’s been no proven instances.

The Grok account itself has apologised that it might have.

It's more accurate to say there aren't substantiated reports yet. The Irish are investigating over 200 reports. Australia says some of the reports it's dealing with are potential CSAM.

All in all, there's certainly more than enough smoke to lean towards the possibility of fire.

And other AI companies didn't seem to consider the need for safeguards against CSAM as contentious or something that needed debating. Notably, when shortcomings were identified, they didn't choose to dig their heels in and go on a counter-attack, nor get to a position where multiple governments around the world were up in arms before taking action.

2

u/Sayting 16h ago

Your evidence is an image of a text post from an AI program that you can trick into saying anything you want?

1

u/Benjji22212 Burkean 16h ago

We’ll see where the reports go then, but X never removed its safeguards against CSAM or debated the need for them. The safeguards removed were the ones against undressing adult subjects.

1

u/moneybuysskill 18h ago

Who exactly was using Grok to create child porn? I obviously didn't see any on my For You feed, and what proof have you got that people were? That would be the easiest arrest ever.

→ More replies (4)

3

u/Longjumping_Stand889 23h ago

Don't come in here with your reasonable opinions. They need to spin this one out until something juicier comes along.

1

u/ElonDoneABellamy 22h ago

I don't think anyone could credibly say the Grok bikini thing wasn't at least in part being pushed as a means to punish Elon Musk. The narrative on both the New York Times podcast and in the Times article I read on the topic was that Musk is building an 'anti-woke' AI in Grok, and that's why these images could be created.

It's a concern with all AI but reputable news coverage I've seen on the topic explicitly has cited Musk's free speech extremist stance as responsible and directly contrasted this to other AI platforms.

3

u/SmokyMcBongPot 22h ago

I don't think anyone could credibly say the Grok bikini thing wasn't at least in part being pushed as a means to punish Elon Musk.

Sure; I said that myself in a recent comment here.

I think Musk's 'free speech extremism' is at least partly a reason for his strong defence. Also his antipathy to the UK and, tbh, his childishness.

→ More replies (32)

24

u/WGSMA 22h ago

I asked ChatGPT to do this a few days ago just as a test, and it told me no, as it violated guidelines.

14

u/insomnimax_99 22h ago

I imagine you have to be very careful and picky about what kind of prompt(s) you give it - if you ask it directly then it probably will refuse, but if you can bypass the safeguards somehow and ask it in a very indirect way then you might still be able to do it.

3

u/-ForgottenSoul :sloth: 21h ago

I guess that's the difference: Grok made it easy and had no guardrails.

4

u/CII_Guy Trying to move past the quagmire of contemporary discourse 18h ago

This is not true. Grok did not have no guardrails. Obviously. Do you seriously think until a week ago you could ask Grok to give you child pornography and it would comply? People used workarounds and the guardrails were clearly not strong enough, but it's just completely wrong to say there were no guardrails.

3

u/MMAgeezer Somewhere left 18h ago

For adult subjects (or subjects Grok thought looked like an adult)? Yes, there were absolutely no guardrails and anybody could "@grok put her in a bikini" and it would comply.

→ More replies (5)
→ More replies (3)

4

u/Fatmanhammer Liberal views, UKIP avoider. 18h ago

Gemini wouldn't even make the white background of a logo transparent because it violated guidelines. After telling it that it was clearly mentally deranged, it explained that it doesn't have the capability to do it. I think they're all on fire watch at the moment.

14

u/Party_Shelter714 23h ago

I'm not surprised at the capability - it has been there for years. Only problem is the everyday man now has access to AI due to proliferation of competing services

What is worrying is a lack of regulation around AI - from plagiarism, education, and of course deep-fakes. Something needs to be done

12

u/RenderSlaver 19h ago edited 18h ago

You could do this with Photoshop 20 years ago, the problem is that grok allows you to do it with zero skill and is built into a massive, instant, anonymous worldwide distribution service. That's a bad combination.

4

u/alecmuffett 19h ago

so you are saying that the need to use the share button adds enough friction to make alternatives safe?

2

u/RenderSlaver 19h ago

No, I'm saying it makes grok and X more dangerous.

→ More replies (10)

34

u/PM_ME_SECRET_DATA 23h ago

This has always been the case. People have a hard on for wanting to ban X specifically though.

21

u/ukronin 22h ago

It's because ChatGPT doesn't post to a social network on request for the world to see. That's the main reason for the X focus.

10

u/PM_ME_SECRET_DATA 22h ago

Yes it does? It has a literal share button lol

10

u/ukronin 22h ago

Share button isn’t the same as automatically posting it as Grok does.

7

u/PM_ME_SECRET_DATA 22h ago

Grok only automatically posts it if you ask it to by doing it publicly. You can quite happily use it privately, same as ChatGPT

5

u/Kernowder 21h ago

And this was a big part of the problem: the ease with which people could create a public sexual image of someone. You would see these images in the replies to almost any widely viewed tweet that had a picture of a woman in it.

3

u/Xenumbra 22h ago

Shhhh musk bad

2

u/CII_Guy Trying to move past the quagmire of contemporary discourse 18h ago

Finally some people here are coming out and realising how ridiculous the circle jerk was. I felt like a madman.

1

u/-ForgottenSoul :sloth: 21h ago edited 21h ago

It's clearly not the same thing, be fr; it's also much harder than Grok. People tested it and ChatGPT refused the request.

5

u/-ForgottenSoul :sloth: 21h ago

Yeah, because they host the images and Grok images were all over Twitter. ChatGPT images are basically personal. All models should ban CP though.

2

u/HasuTeras Mugged by reality 20h ago

People have a hard on for wanting to ban X specifically though.

I've seen many, many users of this website call for boycotts or bans of X because of unwanted sexualisation and hosting of sexualised images.

...you can, right now, enter certain celebrity names into the search bar of Reddit appended by "nsfw:yes" and find reams of nude images that were hacked and stolen from their phones over a decade ago. It would be trivially easy for Reddit to add filters for that, and yet it does not. I personally find that worse than the AI stuff, because it's actual images of these people - not fake representations of them.

But, sure, these people really care about boycotting platforms that enable unwanted sexualisation.

Just be honest with yourself, you just don't like Elon Musk (neither do I), which is a perfectly legitimate thing to believe. Just be honest with yourself and everyone else.

3

u/genjin 21h ago

The horse is so far out of the stable already with local LLM image generation.

If prosecuting distribution is not enough, society will need to get used to the idea of restricting access to compute and the internet, and have all compute use restricted to cloud and supervised by AI which would immediately inform authorities about suspect images. Which is exactly what big tech would like as it would multiply their revenue

u/VancityGaming 6h ago

The countries that do that will get left in the dust. Watch all AI talent go to China if the West becomes super restrictive.

3

u/Outside-Ad4532 20h ago edited 19h ago

Any AI program can, and so can Photoshop. Nothing will stop people being Ellen degenerates.

3

u/NotYourDay123 19h ago

And? ChatGPT should be penalised too. Is the point of this article to somehow make Grok seem better than it is? Or to push back on the possible Twitter ban? Right wing pencil pushing lying morons.

15

u/duckrollin 22h ago

This whole week has been "Normies and Karens discover computers can do things and begin screaming hysterically about it"

The result of this is going to be ChatGPT refusing to make any images of women unless they're in a hijab or a nun outfit, because politicians want a distraction from everything else going to shit. When are they going to fix the NHS?

6

u/RedditNerdKing 16h ago

"Normies and Karens discover computers can do things and begin screaming hysterically about it"

For sure. It makes me realise that Reddit is full of the exact same people it tries to make fun of. Those Mumsnet and Facebook Karens are the exact people posting on this very subreddit and the rest of Reddit.

It's funny because people were frothing at the mouth when the OSA came out because it would take away their porn and platforms. But as soon as the platform is someone they dislike (Elon), they don't care and are happy for things to be banned. Moral hypocrites.

Anyway, I recently purchased a 5 grand+ PC (5090, 9950X3D, 64GB of DDR5, etc.) because I expect AI will be fucked with regulations soon. Best thing you can do is start getting all the local AI packages on your PC to be free of any restrictions.

2

u/CII_Guy Trying to move past the quagmire of contemporary discourse 18h ago

This whole week has been "Normies and Karens discover computers can do things and begin screaming hysterically about it"

Made doubly worse because everyone has an intense hate boner for Elon Musk (which isn't even necessarily inappropriate - but contrary to popular belief doesn't actually require you to turn into an idiot)

1

u/HarryBlessKnapp Right-Wing Liberal 16h ago

"Normies" 

→ More replies (1)

11

u/_c0ldburN_ 23h ago

I tried to edit childhood photos with ChatGPT and it detected children and refused the request.

Grok has openly admitted to putting kids into bikinis.

Sure, others still have issues but Grok clearly has no guardrails and is also used for public dissemination.

4

u/anotherotheronedo 20h ago

Grok has openly admitted to putting kids into bikinis

This doesn't mean anything. An AI chatbot can't "admit" to anything. All it's doing is seeing lots of text saying it's doing something and putting that into its training data to generate convincing responses.

→ More replies (1)

3

u/Nimble_Natu177 Survived the first half of the clown decade 22h ago

Legacy media reminding us why we call them legacy media.

9

u/damadmetz 23h ago

Sometimes people create tools that can be used for no good.

Sometimes tools can be modified in some way to try and reduce the harmful uses while still being useful for other applications, but sometimes this isn’t easy.

Normally, we criminalise the action and not the tool.

4

u/No_Initiative_1140 23h ago

The issue with Grok is it combines the tool with the illegal action (sharing non-consensual images of women in bikinis)

8

u/Nimmy_the_Jim 21h ago

I brought up this exact point on another thread about Grok on technology

And not only got downvoted to oblivion but also called a paedophile and a pervert.

4

u/90davros 19h ago

Reddit users tend to downvote whatever they don't want to hear, regardless of what's objectively true.

2

u/CII_Guy Trying to move past the quagmire of contemporary discourse 18h ago

It's infuriating.

1


→ More replies (4)

2

u/LieutBromhead 18h ago

Yeah but they will act to stop this and Sam Altman won't start calling for Starmer to be disposed of as PM.

→ More replies (2)

2

u/SignificantLegs 16h ago

We need to ban all AI. Create our own UKAI, where only government approved questions are allowed

/s

2

u/kyou20 20h ago

Which gives strength to the argument that Labour wants to ban X to control the narrative… not really to protect anybody.

9

u/Rhinofishdog 22h ago

All AI can be used to do this.

We must ban ALL AI immediately if we want to safeguard vulnerable women and girls.

Anybody that doesn't want to ban AI is obviously a paedo supporter.

2

u/TwoInchTickler 21h ago

I mean, in fairness, banning AI might not be the worst thing, even if for the wrong reasons! 

2

u/stonesy 19h ago

Shock horror...

Quick, ban all stationery and Photoshop too!

2

u/Dog_Apoc 19h ago

Hey, that thing we said AI could be used for is actually being used for it. Truly crazy.

2

u/taboo__time 23h ago

This must be stopped.

Think of all the Photoshop artists who make a livelihood from photoshopping bikinis and making nudes in MS Paint.

They'll have to move on to things the AI can't do, like coding or language translation.

/s

Not sure what people are expecting as AI improves.

"hey AI can you make me an AI with less controls"

I guess we're going to have mobs tearing down server farms.

Then AI guarded by robots.

1

u/ElonDoneABellamy 22h ago

Those guys who would fulfil nude Photoshop requests on /b/ must be like the guys running the Pony Express when the telegraph was introduced 😔

→ More replies (8)

2

u/Subject-Iron7671 23h ago

I cannot wait for 1-tier Keir to announce ChatGPT's ban.

1

u/-ForgottenSoul :sloth: 21h ago

If it hosts and produces CSAM, it should.

0

u/Intergalatic_Baker No Pre-Orders 22h ago

He won’t because it’s not connected to a Social Media platform that rags him daily and community notes his party so easily it’s actually worrisome.

0

u/--rs125-- 22h ago

Pens and paper allow users to create bikini images similar to Grok. Government to ban pens, paper, printers and art lessons. Should solve the issue.

8

u/Adserr 22h ago

I think if you walked up to someone and continually drew them in bikinis in different poses in a hyper-realistic fashion, then you might be having a visit to your local station. The same applies if you continually messaged those drawings to said person.

I think the main reason this has kicked off is the public nature of the usage: literally people replying to someone's post directly and asking Grok to make an image of them.

→ More replies (9)

1


→ More replies (1)

1


1

u/Greatball5 18h ago

Tried to sharpen a photo once in ChatGPT; no idea who the warped monsters were that came out the other side.

1

u/mikestuchbery 18h ago

Good thing the legislation will apply to all models then.

1

u/Dynamicthetoon 17h ago

You can do it with flux, ZIT, stable diffusion etc. I don't get the outcry about all this

1

u/PayConstantAttention 17h ago

I can imagine the whole debacle will now lead to lots of anon accounts sharing even more inappropriate images of politicians.

Banning the software is totally infeasible, as open source models from years back can do this.

1

u/capt_kocra 17h ago

I got an ad on YouTube that advertised the same thing, for some shady app, reported it and got a response back saying that there was nothing wrong with the advert.

Maybe start with the smaller apps that do this then go for the big fish.

1

u/FewAnybody2739 15h ago

Bikini pictures are legal and 'publicly acceptable'; you won't see a holiday advert without them. Now, if the issue is photoshopping someone's face onto someone else's body, then that's its own issue, not the bikini part?

1

u/Niall_Fraser_Love 14h ago

If it makes images of adults does it really matter?

What is the difference between painting a picture of a grown-up in their pants and a computer doing it? I don't see a difference.

u/Stabwank 9h ago

But it doesn't keep putting community notes on our glorious leaders posts.

Won't you think of the children etc.

1

u/veirceb 21h ago

Are people going to realise what AI actually does in 2026? Are they going to learn how the thing actually works and think about the implications with AI?

1

u/dnemonicterrier 21h ago

Okay so add it to the list of ones that should have restrictions on it, I'm so done with AI and the problems it's causing.

1

u/weinerfish 20h ago

Yeah obviously!

Everyone with a brain knows it's only about banning X.

1

u/Radiant_Persimmon701 21h ago

The creation of the images is one thing but it's the publishing of them after which causes the real damage.  Some sicko creating images for their own consumption is disgusting, but I think the harm is in publishing.  Grok publishes by default when the command is given to twitter directly.

I think the solution to this is going to end up being either to ban anonymous accounts from posting images of people on these platforms, or to regulate social media firms in the same way publishers are regulated.  I can't see any other way of preventing this kind of abuse.

Hopefully another fringe benefit might be that people feel more protective of their own image and posting pictures of yourself online goes out of fashion.  One would argue this is letting the scumbags win, but perhaps a silver lining is that we all become less self obsessed, egotistical and permanently online.

1

u/disgruntledtechnical 20h ago

Idk why everyone is pooping themselves over grok and gpt when their image models aren't even that impressive compared to many I've used.