Images of child sexual abuse produced by artificial intelligence are spreading. Law enforcement races to stop them

WASHINGTON (AP) — A child psychiatrist who altered a first-day-of-school photo he saw on Facebook to make a group of girls appear naked. A U.S. Army soldier accused of creating images depicting children he knew being sexually abused. A software engineer charged with generating hyper-realistic, sexually explicit images of children.

Law enforcement agencies across the U.S. are cracking down on a troubling spread of child sexual abuse imagery created through artificial intelligence technology, from manipulated photos of real children to graphic depictions of computer-generated children. Justice Department officials say they are aggressively pursuing offenders who exploit AI tools, while states race to ensure that people who produce “deepfakes” and other harmful imagery of children can be prosecuted under their own laws.

“We need to signal early and often that this is a crime, that it will be investigated and prosecuted when the evidence supports it,” Steven Grocki, who leads the Justice Department’s Child Exploitation and Obscenity Section, said in an interview with The Associated Press. “And if you sit there and think otherwise, you are fundamentally wrong. And it’s only a matter of time before someone holds you accountable.”

The Justice Department says existing federal laws clearly apply to such content and recently brought what is believed to be the first federal case involving images generated entirely by AI, meaning the children depicted are not real but virtual. In another case, federal authorities in August arrested a U.S. soldier stationed in Alaska, accusing him of running innocent photos of real children he knew through an AI chatbot to render the images sexually explicit.

Trying to catch up with the technology

The prosecutions come as child advocates work urgently to curb misuse of the technology, hoping to head off a flood of disturbing images that officials fear could make it harder to rescue real victims. Law enforcement officials worry that investigators will waste time and resources trying to identify and track down exploited children who do not actually exist.

Meanwhile, lawmakers are introducing a series of bills that would allow local prosecutors to file criminal charges under state law for AI-generated “deepfakes” and other sexually explicit images of children. Governors in more than a dozen states have signed laws this year cracking down on digitally created or altered child sexual abuse images, according to a review by the National Center for Missing and Exploited Children.

“Frankly, we are trying to catch up as law enforcement with a technology that is advancing much faster than we are,” said Ventura County, California, District Attorney Erik Nasarenko.

Nasarenko pushed through legislation, signed last month by Gov. Gavin Newsom, that makes clear AI-generated child sexual abuse material is illegal under California law. Nasarenko said his office could not prosecute eight cases involving AI-generated content between last December and mid-September because California law required prosecutors to prove the images depicted a real child.

Law enforcement officials say AI-generated images of child sexual abuse could be used to groom children. Even if they are not physically abused, children can be deeply affected by having their image altered to appear sexually suggestive.

“It felt like a part of me had been taken away, even though I was not physically violated,” said Kaylin Hayman, a 17-year-old who starred on the Disney Channel show “Just Roll with It” and helped push the California bill after she fell victim to “deepfake” images.

Kaylin Hayman, 17, poses in front of Ventura City Hall in Ventura, California, on October 17, 2024. (AP Photo/Eugene Garcia)

Hayman testified last year at the federal trial of the man who digitally superimposed her face and those of other child actors onto bodies performing sex acts. He was sentenced in May to more than 14 years in prison.

Open-source AI models that users can download to their computers are known to be favored by offenders, who can further train or modify the tools to churn out explicit depictions of children, experts say. Authorities say abusers trade tips in dark web communities about how to manipulate AI tools to create such content.

A report last year by the Stanford Internet Observatory found that a research dataset used as a source by leading AI image generators such as Stable Diffusion contained links to sexually explicit images of children, contributing to the ease with which some tools have produced harmful imagery. The dataset was taken down, and researchers later said they deleted more than 2,000 web links to suspected child sexual abuse images.

Leading tech companies, including Google, OpenAI and Stability AI, have agreed to work with the anti-child-sexual-abuse organization Thorn to combat the spread of child sexual abuse images.

But experts say more should have been done at the outset to prevent misuse before the technology became widely available. The steps companies are taking now to make it harder to abuse future versions of AI tools “do little to prevent” offenders from running older versions of the models on their computers “undetected,” a Justice Department prosecutor said in recent court documents.

“Time wasn’t spent making the products safe, as opposed to efficient, and it’s very hard to undo that after the fact, as we’ve seen,” said David Thiel, chief technologist at the Stanford Internet Observatory.

Artificial intelligence images become more realistic

Last year, the National Center for Missing and Exploited Children’s CyberTipline received nearly 4,700 reports of content involving artificial intelligence technology, a small fraction of the more than 36 million total reports of suspected child sexual abuse. By October of this year, the group was fielding about 450 reports a month of content involving AI, said Yiota Souras, the group’s chief legal officer.

But experts say that because the images are so realistic, it is often difficult to tell whether they are produced by artificial intelligence.

“Investigators spend hours trying to determine whether an image actually depicts a minor or whether it was created by artificial intelligence,” said Rikole Kelly, a Ventura County deputy district attorney who helped write the California bill. “There used to be some very clear indicators… With advances in AI technology, that is no longer the case.”

Justice Department officials say they already have the tools under federal law to go after criminals who use such images.

The U.S. Supreme Court in 2002 struck down a federal ban on virtual child sexual abuse material. But a federal law signed the following year bans the production of visual depictions, including drawings, of children engaged in sexually explicit conduct that are deemed “obscene.” That law, which the Justice Department says has been used in the past to charge cartoon imagery of child sexual abuse, specifically states there is no requirement that “the minor depicted actually exists.”


The Justice Department brought that charge in May against a Wisconsin software engineer accused of using the AI tool Stable Diffusion to create photorealistic images of children engaged in sexually explicit conduct; he was caught after he sent some of them to a 15-year-old boy through a direct message on Instagram, authorities say. The man’s attorney, who is pushing to have the charges dismissed on First Amendment grounds, declined to comment further on the allegations in an email to the AP.

A spokesperson for Stability AI said the man is accused of using an earlier version of the tool that was released by another company, Runway ML. Stability AI says that since taking over development of the models, it has “invested in proactive features to prevent the misuse of AI for the production of harmful content.” A spokesperson for Runway ML did not immediately respond to a request for comment from the AP.

In “deepfake” cases, in which a photo of a real child is digitally altered to make it sexually explicit, the Justice Department files charges under the federal “child pornography” law. In one case, a North Carolina child psychiatrist who used an artificial intelligence application to digitally “undress” girls posing on their first day of school in a decades-old photo shared on Facebook was convicted on federal charges last year.

“These laws exist. They will be used. We have this will. We have the resources,” Grocki said. “This is not going to be a low priority that we ignore because there is no actual child involved.”

__

The Associated Press receives financial assistance from the Omidyar Network to support coverage of artificial intelligence and its impact on society. AP is solely responsible for all content. You can find AP’s standards for working with philanthropies, a list of supporters and funded coverage at: AP.org