Since the earliest cave paintings, visual artists have used tools and materials to express themselves. This constant evolution includes the use of paint, brushes, cameras, printers, computers and, more recently, tablets and mobile devices. Throughout, the artist has remained at the centre of the work and its creativity. But as with all other areas of technology, the evolution of art is now encompassing artificial intelligence ('AI').
In October 2018, a portrait produced using AI was sold at a Christie's auction in New York for over $430k (smashing the initial estimate of $7k-$10k).
From a legal perspective, copyright and moral rights are the main rights which enable artists to be rewarded for their work, incentivise them and protect their intellectual creations from being exploited without consent. We explore in this series how these rights apply in the context of increasingly powerful and prevalent artificial intelligence.
Machine learning and neural networks
Traditional software operates by reference to a pre-determined set of rules which produce a predictable outcome for a given input. The programmer writes the code (ie the rules) which dictates what the machine does. In machine learning, by contrast, the machine's algorithm is trained on examples. These systems may be 'neural networks': layered webs of interconnected nodes, loosely inspired by the structure of the human brain.
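The contrast can be sketched in a few lines of Python. This is a deliberately toy illustration, not real AI code: a traditional program fixes its rule in advance, while a 'trained' program infers the rule from labelled examples (here, a single threshold placed between the example values).

```python
# Toy illustration: hand-coded rule vs a parameter learned from examples.
# Both classify a number as "large" or "small"; only the origin of the
# threshold differs.

def rule_based(x):
    # Traditional software: the programmer fixes the rule in advance.
    return "large" if x > 10 else "small"

def train_threshold(examples):
    # Machine learning (very crudely): infer the rule from labelled
    # examples by placing the threshold midway between the largest
    # "small" value and the smallest "large" value.
    smalls = [x for x, label in examples if label == "small"]
    larges = [x for x, label in examples if label == "large"]
    return (max(smalls) + min(larges)) / 2

examples = [(2, "small"), (4, "small"), (15, "large"), (20, "large")]
threshold = train_threshold(examples)  # 9.5, derived from the data

def learned(x):
    return "large" if x > threshold else "small"
```

The point of the sketch is that nobody typed `9.5` into the second program; the rule emerged from the examples it was given.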
Two main types of machine learning are 'supervised learning' and 'unsupervised learning'. In supervised learning, the machine is given a set of inputs (examples) and the trainer gives feedback on which outputs are desirable or undesirable. The machine learns as that feedback drives adjustments to the parameters of the algorithm, until it becomes skilled at producing correct outputs with a sufficiently high probability.
For example, by feeding numerous images of cats into the system, the machine learns to recognise cats and non-cats with a high probability of success, something that would be far harder to achieve by writing explicit rules describing what every cat looks like from every angle. In the art world, a machine could be trained to recognise whether or not a painting is a Picasso, for example.
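The training loop described above can be sketched with a minimal perceptron, the simplest kind of neural network. This is a toy sketch under invented assumptions: the 'cat' data is reduced to two made-up numeric features rather than real images, and the feedback is simply the difference between the desired label and the machine's guess.

```python
# Minimal sketch of supervised learning: a perceptron whose parameters
# (weights and bias) are nudged whenever it misclassifies a labelled
# training example. Hypothetical 2-feature data, not a real image pipeline.

def train_perceptron(data, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # weights: the adjustable parameters
    b = 0.0         # bias
    for _ in range(epochs):
        for features, label in data:  # label: 1 = cat, 0 = not cat
            pred = 1 if w[0] * features[0] + w[1] * features[1] + b > 0 else 0
            error = label - pred  # the supervisor's feedback
            # Adjust the parameters in the direction that reduces the error.
            w[0] += lr * error * features[0]
            w[1] += lr * error * features[1]
            b += lr * error
    return w, b

# Toy "images" reduced to two invented features (say, whisker-ness and
# ear-pointiness), each labelled by a human trainer.
data = [((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.1, 0.2), 0), ((0.2, 0.1), 0)]
w, b = train_perceptron(data)

def classify(features):
    return "cat" if w[0] * features[0] + w[1] * features[1] + b > 0 else "not cat"
```

Real systems use vastly larger networks and datasets, but the principle is the same: repeated feedback on labelled examples gradually shapes the parameters.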
In unsupervised learning, the machine detects patterns by itself within large data sets, without the programmer necessarily knowing what those patterns are or dictating what outputs are desirable. For example, by feeding data from multiple artworks into a machine, the machine may learn how to produce a style or a new artwork based on the data.
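Unsupervised pattern-finding can be illustrated with k-means clustering, a standard algorithm that groups data points with no labels and no human feedback. In this hypothetical sketch, the one-dimensional values stand in for some measured feature of artworks; the machine discovers on its own that they fall into two groups.

```python
# Minimal sketch of unsupervised learning: 1-D k-means clustering.
# No labels, no trainer feedback; the algorithm finds the groups itself.

def kmeans_1d(points, k=2, iterations=10):
    centroids = points[:k]  # naive initialisation from the first k points
    clusters = [[] for _ in range(k)]
    for _ in range(iterations):
        # Assign each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Move each centroid to the mean of its assigned points.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Invented feature values for six artworks; two natural groups emerge.
points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]
centroids, clusters = kmeans_1d(points)
```

Note that nothing in the code says what the two groups 'mean'; the structure is discovered from the data alone, which is exactly the sense in which the programmer need not know in advance what patterns exist.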
With AI-generated art, the artist often cannot predict with certainty what the output work of art will look like. For example, artist Pierre Huyghe's 'UUmwelt' exhibition at the Serpentine Gallery in London – 3 October 2018 to 10 February 2019 – is partly based on data from fMRI scans of the brain activity of individuals given certain descriptions.
The data was fed into a deep neural network which attempts to visually represent it on screens. However, the actual output on the screens is also dictated by the conditions within the gallery, including light, temperature, humidity, the presence of a community of live flies and the visitors themselves. As Huyghe explains "…what is made is not necessarily due to the artist as the only operator…".