Published Date : 7/10/2025
As AI tools churn out striking art, the question remains: when the spark comes from prompts and past data, is it creation – or just clever theft?
When University of Auckland computer scientist Professor Michael Witbrock’s dad took him to see 2001: A Space Odyssey as a kid, the robot HAL did not scare him off. It hooked him. Following a grim 1970s summer selling boyswear in a Dunedin department store (“Don’t ever do that: People do not leave things on the shelves well arranged!”), young Witbrock had earned enough to buy an early computer, an Ohio Scientific Superboard 2, and start tinkering. Then, while doing a PhD at Carnegie Mellon University in the United States, he and a bunch of friends started playing around with something they called ‘genetic art’.
“An image is just a bunch of pixels, so if you work out, given the x and y coordinates, what colour and intensity each pixel should be, you can produce an image. Some of the pictures were quite striking; swooping pastel forms with crystalline structures in them. In the range of things you might think were artistic.”
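The idea Witbrock describes – an image as a pure function of pixel coordinates – can be sketched in a few lines. The formula below is illustrative only (not one of the actual 'genetic art' formulae), chosen simply to produce the kind of swooping interference pattern he mentions.

```python
import math

def pixel_intensity(x, y, width=64, height=64):
    """Intensity of one pixel as a pure function of its (x, y) coordinates."""
    # Normalise coordinates to the range [-1, 1].
    nx = 2 * x / (width - 1) - 1
    ny = 2 * y / (height - 1) - 1
    # An arbitrary illustrative formula: sines of radial distance and angle
    # give swooping, vaguely crystalline bands.
    r = math.hypot(nx, ny)
    a = math.atan2(ny, nx)
    value = math.sin(8 * r + 3 * a) * math.cos(5 * nx * ny)
    # Map value from [-1, 1] to an 8-bit greyscale level [0, 255].
    return int(255 * (value + 1) / 2)

# Evaluate the formula at every coordinate to build a 64x64 greyscale image.
image = [[pixel_intensity(x, y) for x in range(64)] for y in range(64)]
```

Swapping in a different formula yields a different picture – which is exactly the space Witbrock and his friends were exploring by hand.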
Fast forward three decades and what Witbrock and his friends were doing with AI art in the 1980s isn’t so very different from what happens with present-day artificial intelligence image production systems, he says. Where those early students used tidy mathematical formulae, AI now uses huge neural networks trained on vast quantities of data.
But is it art? Three decades on, the core questions have barely shifted: Is the output art? Can it be original? And who owns it? Alex Sims, a professor in the University’s Commercial Law department specialising in intellectual property, has little time for ‘AI slop’, the term for low-quality AI-generated images. But her research highlights a real tension – not all AI-generated work is slop.
Sims points to a graphic novel called Zarya of the Dawn, the brainchild of New York artist Kristina Kashtanova, who used image generation tool MidJourney to generate the pictures. “They spent literally days on each image. Initially they said ‘Create me whatever’, and then used a series of prompts to finally get to something creative.”
The US Copyright Office initially granted protection for the graphic novel when it didn't know the images were AI-generated. When it found out, it reversed the decision; US law states images have to have a human author to be protected. That’s not the same in New Zealand, where computer-generated work can be protected. Sims says deciding where the line is between ‘creative’ and ‘non-creative’ isn’t black and white. Photographs weren’t protected by copyright law for decades, she says – people argued photographers hadn’t created anything, they were just recording nature. And creative people have always – knowingly or unknowingly – drawn on influences from the past.
There are a number of big legal cases about companies training their AI on original creative work wending their way slowly through the court system – Disney and Universal versus MidJourney, Getty Images versus Stability, and more. And they may eventually provide more certainty.
In the meantime, Sims’ view is there should be some protection for AI-generated work, “but the protection should be a lot lower. So they are not protected for as long, and you don’t get what we call ‘moral rights’ – the right to object to use of your work in certain situations.” She also says artists should be forced to be up-front about their work being AI-generated, including stating that in the metadata for the image.
Here’s another conundrum: you might think generative AI makes humans redundant, but the truth is the robots can’t do it alone. It’s sometimes called the paradox of AI creativity, or the paradox of digital decay, and it’s something University of Auckland senior law lecturer Dr Joshua Yuvaraj examines in his research. “The way AI works is you put in a prompt – ‘Give me a picture of a black cat’ – and the AI has been trained on millions, billions, trillions of data points, images, words, to statistically predict what it thinks you mean when you say ‘black cat’. And it’ll come up with a picture of a black cat.” The paradox is AI needs authentic original human works to be trained on. If it is trained on AI-generated works, eventually it leads to what computer scientists call ‘model collapse’.
“Imagine the AI-generated black cat. If I re-feed that image into the AI and train it on that, and we keep doing that, eventually the image of a black cat that’s produced is so distorted it looks nothing like a black cat. But the AI thinks that’s what a black cat is.”
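The feedback loop Yuvaraj describes can be shown with a toy numerical sketch. This is nothing like how real image models are trained – here the “model” is just a Gaussian fitted to its training data – but it shows the same effect: when each generation trains only on the previous generation’s output, the variety in the original data steadily drains away.

```python
import random
import statistics

def train_and_sample(data, n_samples=50, rng=random):
    """'Train' a model (fit a Gaussian) on data, then sample new data from it."""
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)
    return [rng.gauss(mu, sigma) for _ in range(n_samples)]

rng = random.Random(0)
# Generation zero: authentic "human" data, spread around zero.
data = [rng.gauss(0.0, 1.0) for _ in range(50)]
spreads = [statistics.pstdev(data)]

# Each generation is trained only on the previous generation's output.
for generation in range(200):
    data = train_and_sample(data, rng=rng)
    spreads.append(statistics.pstdev(data))

# The spread shrinks toward zero: later generations have forgotten the
# variety of the original data - the numerical analogue of the distorted cat.
```

Each fitting step loses a little information, and with no fresh human data coming in, those losses compound until the model's output bears little resemblance to what it started from.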
“If we don’t have human output to train AI, then AI will become redundant,” Yuvaraj says. “So if you look on Seek and other job sites, you have jobs being advertised for people who create stuff to train AI. It’s actually becoming an industry.” Here’s another irony: it takes an astonishing amount of human hard work to get artificial intelligence to create an original piece of music. At least, if the experience of Dr David Chisholm is anything to go by.
Q: What is AI-generated art?
A: AI-generated art is created using artificial intelligence algorithms that can produce images, music, and other forms of art based on input data and prompts.
Q: Can AI-generated art be considered original?
A: The question of whether AI-generated art can be considered original is still debated. While AI uses existing data to create new works, the extent of creativity and originality can vary.
Q: Who owns the rights to AI-generated art?
A: Ownership of AI-generated art is a complex legal issue. In some countries, like the US, AI-generated art must have a human author to be protected by copyright, while in others, like New Zealand, computer-generated work can be protected.
Q: What is the paradox of AI creativity?
A: The paradox of AI creativity refers to the fact that AI needs authentic human-created works to be trained on, but if it is trained only on AI-generated works, it can lead to a loss of quality and originality, known as 'model collapse'.
Q: How does AI art impact the job market?
A: AI art has created new job opportunities, such as roles for people who create content to train AI systems. However, it also raises concerns about the potential displacement of traditional artists and creators.