AI-created child sex abuse imagery seized in NZ

Chief customs officer at the Child Exploitation Operations team, Simon Peterson (right) pictured with Customs operations manager Stephen Waugh. Photo: David White/Stuff.

Artificial intelligence-created child abuse imagery has been seized in New Zealand, including a game depicting child sexual abuse.

Stuff contacted New Zealand Customs, police and the Department of Internal Affairs to ascertain whether any AI-created objectionable imagery had been discovered in New Zealand, following reports of instances overseas. All three agencies confirmed they were aware of such material.

"Customs has seen an increase in digitally-generated child sexual exploitation material," says Simon Peterson, chief customs officer at the Child Exploitation Operations team.

"Recently, Customs seized a game that was created depicting child sexual abuse. The concept is not new, but the power of AI is unfortunately making these images appear more realistic."

Detective Inspector Stuart Mills, manager of police intercept/technology operations, says police are "aware artificial intelligence is being misused to create images depicting the sexual abuse of children".

"This is an issue confronting law enforcement internationally."

A DIA spokesperson says it has seized "significant quantities" of child sexual exploitation material, "including computer generated imagery".

"DIA is aware of online forums dedicated to the discussion of computer-generated child exploitation material, including AI content."

All three agencies, and a number of AI experts Stuff spoke to, are clear that despite being purely AI-generated, the abuse material is covered by existing laws and is illegal.

"Any publication that promotes or supports the exploitation of children for sexual purposes, whether digitally generated or not, is deemed an objectionable publication," says Peterson.

He says Customs has seized digitally-created abuse material since at least the early 2000s, typically made using technology like Photoshop, but is now seizing material that is "purely AI".

He says around a quarter of the material now seized by the three agencies is digitally created.

"Some people's collections will be mainly digital stuff.

"The risk with the AI platform is any idiot can use it. I'd like to say we're pretty good at picking the fakes but AI can be pretty realistic.

"Someone can make something with AI and we couldn't tell the difference."

He says AI is simply the latest tech space being exploited by paedophiles, following on from the internet, then social media.

"It's made child abuse more available. A scary prospect."

Mills, from police, is clear the AI material creates real-world harm too.

"Outside of these images being shared, sold or traded, AI-generated imagery is likely being used by offenders to manipulate and coerce young victims online in instances of offending like sextortion."

According to Associate Professor Colin Gavaghan, University of Otago chair in emerging technologies, the use of AI for these purposes is not "remotely surprising".

"Worries about 'deep fakes' have been around for years now," he says.

"Digitally-rendered images depicting real people in sexual situations have frequently been used in attempts to humiliate politicians and public figures, usually women.

"Pseudo images of child abuse are nothing new either – there have been convictions for those before now. The only thing that's changing is that they're getting more realistic and harder to distinguish from real images."

Peterson says that when it comes to the tech companies themselves, some flag abuse material to the authorities, but "some are better than others".

Remarkably, according to University of Waikato Artificial Intelligence Institute director Albert Bifet, the tech companies may be unable to detect whether their technology is being abused to create this material.

Asked if they could detect the creation of abuse material on their platforms, he says "unfortunately this cannot be done currently".

"However, the EU and UK are considering a requirement for labelling pictures and videos generated by AI. Additionally, they may request that AI models disclose the data used in their creation."

-Benn Bathgate/Stuff.
