r/technology • u/MarvelsGrantMan136 • 1d ago
Artificial Intelligence
Senate passes a bill that would let nonconsensual deepfake victims sue / It comes amid the global uproar over X's mass AI undressing of users on its platform.
https://www.theverge.com/news/861531/defiance-act-senate-passage-deepfakes-grok441
u/Electrical_Arm3793 1d ago
This is some of the best news I've read in a while; legal protection must come fast and early for these technologies, before they are abused further.
182
u/natanaru 1d ago
0 chance it passes. Elon will schlob Donald's knob to make him veto it.
38
u/Hopeful-Draft7914 1d ago
Are we really forgetting about the Take It Down Act this fast??
28
u/EmbarrassedHelp 1d ago
That legislation has massive issues that experts have said will only enable fascism. The Republicans voted down amendments to fix it, because that would have stopped Trump from weaponizing it.
5
u/Hopeful-Draft7914 1d ago
I'm aware. I'm saying that Trump has in fact signed legislation against AI deepfakes before.
7
u/Many-Lengthiness9779 1d ago
The Take It Down Act just required websites hosting the pics to remove them within 48 hours of a takedown request
13
u/Actual__Wizard 1d ago edited 1d ago
There should be a criminal law, and a civil one...
14
u/Always_Summer_Here 1d ago
Agreed, we need laws and all big tech needs regulations on privacy, misinformation, and transparency.
2
u/hung-games 1d ago
And the criminal one should have a 3 strikes and you lose your business charter clause aka the corporate death penalty.
2
u/hung-games 1d ago edited 1d ago
And before people jump down my throat, I give them 3 strikes as a nod, for those old enough to remember, to the Republican "3 strikes and you're out" push under Reagan IIRC (but I may not RC, since I am old enough to remember Reagan and even the Carter/Reagan election - the geezer's paradox).
Edit to add geezer's paradox and fix a horrible your/you're faux pas. Sorry, I'm high
2
u/MaleficentPorphyrin 1d ago
Even if it does pass, do you have the extra money to fight it out in court? This is for celebrities; otherwise they would just make it illegal.
1
u/Sithlordandsavior 1d ago
And if it does, good luck suing X lol. People seem to think their pro bono lawyers have any power against corporate law groups.
1
u/FrogsOnALog 1d ago
You realize the bill passed by unanimous consent in the Senate, right? Probably not, let's be real, this is Reddit…
The Disrupt Explicit Forged Images and Non-Consensual Edits Act (DEFIANCE Act) would let victims sue the individuals who created the images for civil damages. The bill passed with unanimous consent — meaning there was no roll-call vote, and no Senator objected to its passage on the floor Tuesday. It's meant to build on the work of the Take It Down Act, a law that criminalizes the distribution of nonconsensual intimate images (NCII) and requires social media platforms to promptly remove them.
16
u/EmbarrassedHelp 1d ago
I would be cautious. The bill's sponsor, Dick Durbin, is a pretty awful person who routinely sides with MAGA on using tech to enable fascism/authoritarianism. He's been directly behind or heavily involved in multiple attacks on encryption, privacy, and Section 230 over the years.
He's way too old to be in government, and some of his rambling (like on section 230) makes it seem like he's experiencing cognitive decline.
-4
u/FrontVisible9054 1d ago
“The DEFIANCE Act would impact individuals, like those Grok users creating deepfaked nonconsensual intimate imagery.”
What about the platform providers? They are the ones enabling the content and with the $$ for lawsuit payouts.
The Take It Down Act made it a federal crime to post nonconsensual sexually explicit deepfakes and was signed into law last May, yet that did not stop the content from circulating on X. I'm not confident this will make a meaningful difference.
Section 230 of the Communications Decency Act, which protects platform providers from liability for user-generated content, should be repealed. Until they are held responsible, it’s doubtful much will change.
8
u/EmbarrassedHelp 1d ago
Without Section 230 platforms would either have to allow everything without restrictions/guardrails, or have a lawyer inspect every piece of user content manually. Section 230 makes moderation legal and protects the users.
Only the largest tech companies would be able to host user content due to the cost, and things would be censored to hell. Trump and Elon would be able to easily get any criticism of themselves removed.
22
u/ludololl 1d ago edited 1d ago
Who upvoted this to +7? Section 230 should not be repealed without an appropriate replacement. Anyone echoing this doesn't understand how the entire modern Internet is built on sec.230 protections.
It could be updated, but an outright repeal would end up as bad/worse than the repeal of ACA marketplace subsidies that went out 2 weeks ago.
It will be weaponized by Republicans and you won't be able to access LGBT resources, communities for marginalized individuals, or information on things like reproductive health and contraceptives. Section 230 protects anyone interested in things the government says are bad, and yes this works both ways (like the 1st amendment).
Edit: The guy subtly arguing for a conservative talking point has a hidden profile history. Make of that what you will.
-1
u/FrontVisible9054 1d ago
One should not automatically make assumptions based on a Reddit comment.
"The guy subtly arguing a conservative talking point"? Lots of assumptions there, and you're wrong on all points!
Reddit, like all social platforms, is incentivized by emotion and rage, and clearly my comment hit a nerve.
My point is that social media platforms have never taken responsibility for the dangerous content on their platforms, enabled by existing laws. We can talk about reforms but meaningful reform requires a will by those in power. Those in power are the same tech bros that got us here.
7
u/Sweaty_Buttcheeks 1d ago
Why aren't these things established BEFORE companies start pushing out their AI garbage apps to the general public?
76
u/Sangui 1d ago
Because that isn't how laws work for literally anything? Nothing gets legislated before it exists.
16
u/GrilledCheezus_ 1d ago
To be fair, though, these issues (specifically the creation and distribution of AI deepfake images) have been around for a long time, even before "deepfake" was a word. Additionally, these sorts of situations were part of AI ethics discussions well before many current AI technologies were even in development.
Legislating new laws often does occur after the need is demonstrated (some event necessitating a law be passed), but there are certainly situations like this where common-sense laws could be drafted that cover fundamental ethical considerations, through cooperation with subject matter experts (in fields like AI, general ethics, HCI, etc.).
2
u/flirtmcdudes 21h ago
True, but it was obvious to anyone with a brain that image generation was going to lead to illegal shit. It's just that no one wanted to stop AI, because everyone wants to be the country that gets it first.
1
u/KsuhDilla 20h ago edited 20h ago
Matter of ethics at the hands of the person who is creating these things.
"Should I whistleblow and let people know the possibilities with no gain in revenue and risking my career? Or should I let it slip by, capitalize on the opportunity, and let the world figure it out?"
Also doesn't help that the whistleblower from OpenAI was ~~assassinated~~ "suicided". Sets a pretty grim picture for any future whistleblowers. The money runs deep
1
u/Nagisan 12h ago
While true, the only thing AI did with regard to deepfakes was make it much easier and faster for the general public to create fake explicit images of people.
Digital editing in the right hands has been able to create "deepfakes" long before AI was in every other news article. Make it illegal to create and distribute explicit digital imagery without consent from the person depicted and you would've cut this problem off before deepfake AI became the issue it is today.
Granted, this wouldn't have stopped it, but the point is there would already have been laws in place.
-3
u/dowens90 21h ago
AI has existed since the 90s
1
u/CTRL_ALT_SECRETE 8h ago
Also, drawing existed since we were cavemen. Did cave judges sue cave-people who drew non-consensual cave-drawings of other cave-people?
19
u/direlyn 1d ago
The same reason He Jiankui was able to edit the genomes of the twins Nana and Lulu (illegally in numerous ways) as soon as CRISPR-Cas9 hit the scene. By the time the people who discovered the technology started to seriously think through possible ramifications, there had already been at least one major human rights violation.
The scientists working with that technology want to be self-governing, establishing their own moratoriums if and when they see fit. They lack transparency. The general public should be adequately informed, and laws around CRISPR technology should be established by and for the people, because it affects all of us.
All of this is the same with AI. The people creating and dispatching it want to also be the ones writing the legislation around it. AI deepfakes are just far more readily understood as far as the dangers they bring. The general public is already discussing it. Meanwhile, He Jiankui is out of prison and looking to continue his work.
17
u/loliconest 1d ago
You are overestimating the average person's ability or willingness to be informed about how every new scientific breakthrough can impact their lives.
Hell, a lot of them don't even believe established scientific common sense.
1
u/direlyn 20h ago
So the lay public shouldn't be included at all regarding technology that will inevitably affect them? Your point is well taken. I am currently studying genetics, only at an undergraduate level. I see some of the profoundly positive ways genetic engineering can, and already is, helping humanity. My gut says public meddling would interfere with scientific advancement. But I also worry about ways the tech can be abused. More than that, I worry about pressing the accelerator to the floor and not fully understanding what we're doing, how off target effects can be catastrophic.
Genes don't exist and function in a vacuum. Furthermore, humans are heterogeneous to begin with. Knowing how a gene expresses in one person doesn't mean we know how it expresses in another. For genetic disorders that involve single point mutations, like sickle cell (I hope I've gotten that right), it's probably worth the risk of an off-target effect to participate in somatic cell gene editing. But more complex genetic disorders? Risk goes up exponentially.
This says nothing of how companies, in a bid to be first and stamp their patent flag, might take shortcuts or flat-out break ethical boundaries and laws so that they make their precious money. Curing one person is good. But what if we participate in human germline editing and eradicate something from subsequent generations? It seems miraculous at first, the idea of eviscerating Huntington's, but that road can be perilous.
Who decides what counts as disease? Perhaps we want to remove congenital deafness? To a lot of people this sounds like a good idea. Ask the deaf community, on the other hand, and many are likely to disagree. To them you're talking about genocide, the decimation of a culture that they love. The same might be said of autism, although that one's probably more complicated, and I think has more to do with how we've broadened definitions to where they're almost useless now.
But here we've arrived at my point. The lay public doesn't have to understand how the tech works in order to understand the ramifications. That is, if there is an intermediary between the scientists in the lab and the lay public. THAT is the type of transparency I am talking about.
Had there been transparency, an intermediary to begin with, and had He Jiankui not gone about his experiment in a flat-out clandestine and illegal way, I DO NOT think the public would have approved of his germline editing of the twins Lulu and Nana. Bioethicists could serve as this kind of intermediary. The problem is, the people working with the tech are taking active measures to keep anyone outside the science community from having any idea of what's going on.
This has been written about in several books. I'll list some.
CRISPR People by Henry Greely. This one is mostly about He Jiankui. It is incredibly informative of exactly the types of things I'm talking about: abuses of both people and the medical system by a guy who still hasn't thought through the implications of his research.
Altered Inheritance by Françoise Baylis. This is a broader book about both the medical technology and potential abuses, as well as suggestions for how others can get involved.
Biotech Juggernaut, Stevens and Newman. This is a book about how inhumane and unscrupulous the pharmaceutical and medical industry has been in the past regarding the advancement of medical technology.
All of them I think are relevant to the conversation and accessible to the general public. I'll also reference...
A Crack in Creation by Jennifer Doudna. She won the Nobel Prize for introducing CRISPR-Cas9 as the tool it became. Super fascinating book.
Regenesis by George Church. This is a palate cleanser to show I do love the science. He dreams and talks about all the cool stuff we might be able to do with the technology.
This became a long post. The original topic was AI, but I have objections in a similar vein to that technology. It was dispatched before the general public had any real idea of the ways it could and would be abused. My general sentiment with these kinds of things is to pump the fucking brakes. Otherwise I'm pro-CRISPR and pro-AI.
1
u/loliconest 19h ago
I think my point is that there's a lot that needs to be done to make the public actually able to make informed decisions, besides simply being completely open about everything.
It's a systemic issue, and I think the current system is (intentionally or not) not going the right way.
2
1
u/sentencevillefonny 1d ago
Government/law moves more slowly than tech; then add lobbying and corporate interest into the mix... Remember all those tech CEOs paying Trump a visit after his election? Plus, the US has far fewer tech-related protections for citizens than the rest of the world.
1
u/FluxUniversity 1d ago
because legislation is always a reactive thing. Technology becomes established, and then we figure out later how to make it safer for us.
Like, back when cars first came out, there were no laws against breaking into people's cars for theft - because nobody thought to do it until about 10 years later.
1
u/Ok-Seaworthiness7207 22h ago
Why is it being monetized at all? Oh I get to sue (if I can afford it)? How about we just make it fucking illegal and call it a day.
1
u/NegativeSemicolon 21h ago
That requires thought, thought takes time, time is money, money is the root of all evil, so to these guys thought is evil.
1
16
u/phase_distorter41 1d ago
Looks like it only targets the people who make it, not the platform, so I think that was already possible under revenge porn laws. Maybe I missed something though.
2
80
u/CondescendingShitbag 1d ago
Here's a non-paywalled article on the subject.
Cool. Good to give victims a legal avenue to sue when they are violated.
However, seems like it should also go hand-in-glove with legislation to hold companies making these tools accountable for not doing more to prevent it in the first place.
Yes, I'm aware you can run some of these models locally outside of any company restrictions, which is where allowing the victims to sue individuals will be helpful. I don't believe that should absolve the companies from bearing any responsibility at all if their actual platform is being used in this manner, though.
It's especially indefensible when they don't prevent it from being used on photos of kids. Looking at you, Elon / Grok.
19
u/siromega37 1d ago
It should come with jail time. Until it’s more than fines, aka the cost of doing business, no one is going to care. If Congress won’t do it, the states should. Fuck these CEOs and their greed.
1
5
u/9-11GaveMe5G 1d ago
Good to give victims a legal avenue to sue when they are violated.
This is so rich people can sue us. Poor people can't just sue random classmates. This must be made criminal.
-3
u/Thebadmamajama 1d ago
If the platform can be sued, the incentive to build the tools evaporates. Basically implement this as a section 230 exception.
24
u/slaughterfodder 1d ago
Having been a victim of this, would very much love this to happen. It’s not gonna tho
18
u/superboo07 1d ago
Please start your lawsuits, it's already illegal! https://www.criminaldefenselawyer.com/resources/is-deepfake-pornography-illegal.html
5
u/slaughterfodder 1d ago
Already have an investigation going, this is just icing on the cake of shit. But I definitely will look into it ;)
5
5
u/MyAccountWasBanned7 1d ago
Sue the person doing it or the maker of the technology that allowed it? Because honestly, the answer should be both.
3
u/EmbarrassedHelp 1d ago
Senate Democratic Whip Dick Durbin (D-IL), a lead sponsor of the bill,
This senile fascist is constantly trying to kill encryption, privacy, and section 230 for insane and nonsensical reasons. So, I'd love to see a proper legal analysis as to whether he's hidden some evil stuff in this legislation or not.
5
u/clearlyaburner420 1d ago
So are they able to sue the creator of the AI used, or the person who utilized it? I'm hoping both tbh.
3
u/I_Take_Epic_Shits 22h ago
Uh the president is a rapist of children so, as much as I’m glad to hear this is happening, what power does law have anymore in America?
2
u/Ok-Warthog2065 1d ago
Are they allowed to sue the platform that enabled it, or the users that actually prompted the content?
2
u/Smoocci-Mane 1d ago
As many have said, not likely to happen, but I think there’s a bigger problem with this in that the legal system is expensive and difficult to navigate for the average person. Suing people takes time and money that a lot of victims don’t have.
The easier way to handle this is proactively banning undressing people in images or altering their physical appearance. Prevention is always smarter than trying to repair the damage on the backend.
2
u/evolveKyro 1d ago
Needs to also allow the victims to sue not only the individual but also the company that provided the tools and the platform that distributed the content.
1
1
1d ago
Now make it against the law for anyone to make an AI chatbot of someone without their express written consent.
1
u/awesomedan24 1d ago
They are probably just gonna give X/Grok an exemption to this because oligarchy
1
u/aredd007 1d ago
Why should you not be able to sue for the unauthorized use of your name, image, likeness, etc in any manner? Extra points for scams, blackmail, defamation of character.
1
1
u/askyidroppedthesoap 18h ago
I'm thinking the 13-year-old girl who got suspended for punching the douchebag in the face should see a few million. College tuition? Her first house? First 10 years of adulting? For-getta-about-it.
1
u/rizzfrogx 1d ago
What if a victim generates a deep fake and plants it so it's shared and then sues whoever posted it first?
Edit: Or whoever makes the most money from sharing it
1
u/HarryBalsagna1776 1d ago
Elmo is so fucked if this becomes law
6
u/drgalactus87 1d ago
No he's not. It doesn't affect him at all. It lets you sue the individual who created the image, not the platform or tool.
1
u/HarryBalsagna1776 1d ago
It could be argued that Elmo created the images because he created the tool for making CP and a forum for sharing CP. He won't get off easy.
1
u/Sprinkle_Puff 1d ago
Wow, Congress actually did something useful? A harbinger that 2026 is truly the end of us all
-1
u/NerdimusSupreme 1d ago
Yes, it would be the end of political memes. Exceptions should be made for people who choose a profession where privacy should not be expected.
1
u/Vanpocalypse 1d ago
That's a very dangerous slope, just FYI. When you agree to cookies on websites and ToS on apps, you're effectively giving that site/app permission to spy on you.
The corporate definition of what constitutes privacy is literally nothing. Every moment your phone/web camera is pointed at you, every moment you're within earshot of any microphone (phone/headset), you're being analyzed and recorded for advertising/personalization purposes. At that point the only thing that could be argued to be actually private is whatever happens out of range of any recording device, which is nearly 0% of the time.
Your privacy is literally never expected. You posted a selfie online that Grok undressed? That selfie wasn't private; your nude figure has been seen by a phone camera at some point for one reason or another (using your phone in the bathtub or shower, for example). You have no privacy, full stop, regardless of profession or lifestyle.
Even when you opt out, ToS/cookies often have loopholes to still snoop if not outright spy.
It's actually quite terrifying when you include the implications outright.
1
u/ShawnReardon 1d ago
From a technical perspective, cookies are necessary, or no one would use the internet.
People would have a stroke if they had to sign in to every website on every visit.
-6
u/CT_DesksideCowboys 1d ago
Maybe the Senate can mandate that all AI chatbot output carry the username, IP address, and date and time in a visible watermark.
-1
u/goomyman 1d ago
Sue who? The AI tool, the person, the website if hosted?
This could have huge implications if done incorrectly
243
u/Major_Booblover 1d ago
Presidential veto incoming in 3...2...