Study finds that common perceptions of AI are mostly confused and incorrect, adding to the social complexities around the technology
Artificial Intelligence (AI) is still a grey area, as much for the industry as for society in general. While issues of privacy and ethics have been in the news, other subtler yet impactful areas have often been overlooked. Now a new study, led by a PhD scholar at the Massachusetts Institute of Technology Media Lab, reveals that not only does the public have little idea of the roles and responsibilities involved in an AI project, but there are also big gaps in attributing proper credit for AI-based inventions. The study was based on a 2018 project that created an AI-generated painting, but the basic concern applies to the AI industry as a whole. As AI is an all-pervasive emerging field, it is necessary that AI experts and prospective data scientists become aware of what people think of the work they do, especially because this may soon involve legal complications around intellectual property rights.
Edmond de Belamy – the AI-generated painting
In October 2018, the world-renowned Christie’s auction house sold a painting titled “Edmond de Belamy”, announced to have been created by an AI algorithm. The technology used generative adversarial network (GAN) algorithms developed over years by several data scientists. AI scientist Ian Goodfellow invented the field of generative adversarial networks; engineers Alec Radford, Luke Metz, and Soumith Chintala created the particular GAN involved in this work, called “DCGAN”; and programmer Robbie Barrat fine-tuned DCGAN to produce the artwork. The algorithm was trained on a database of thousands of artworks produced by artists through the ages, from which it derived the particular style of the portrait.
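To make the adversarial idea behind GANs concrete, here is a minimal toy sketch in plain NumPy. It is an illustration of the technique only, not DCGAN or the Belamy pipeline: the “artworks” are samples from a one-dimensional Gaussian, the generator is an affine map of random noise, and the discriminator is a logistic regression; all numbers and names are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def real_batch(n):
    # Stand-in for the training database of artworks: a fixed "style"
    # distribution the generator must learn to imitate.
    return rng.normal(loc=4.0, scale=0.5, size=n)

g_w, g_b = 1.0, 0.0   # generator parameters: x_fake = g_w * z + g_b
d_w, d_b = 0.0, 0.0   # discriminator parameters: p = sigmoid(d_w * x + d_b)
lr = 0.05

def discriminate(x):
    return 1.0 / (1.0 + np.exp(-(d_w * x + d_b)))

for step in range(3000):
    # Discriminator step: push p(real) toward 1 and p(fake) toward 0
    # (one gradient-descent step on binary cross-entropy).
    x_real = real_batch(32)
    x_fake = g_w * rng.normal(size=32) + g_b
    p_real, p_fake = discriminate(x_real), discriminate(x_fake)
    d_w -= lr * (np.mean((p_real - 1) * x_real) + np.mean(p_fake * x_fake))
    d_b -= lr * (np.mean(p_real - 1) + np.mean(p_fake))

    # Generator step: adjust (g_w, g_b) so the fakes fool the
    # discriminator, i.e. push p(fake) toward 1.
    z = rng.normal(size=32)
    p_fake = discriminate(g_w * z + g_b)
    err = (p_fake - 1) * d_w          # chain rule through the sigmoid input
    g_w -= lr * np.mean(err * z)
    g_b -= lr * np.mean(err)

# The generator's samples should have drifted toward the real "style"
# (mean near 4.0), even though it never saw the real data directly.
fake_mean = float((g_w * rng.normal(size=1000) + g_b).mean())
print(f"generated sample mean: {fake_mean:.2f}")
```

The two networks are trained against each other: the discriminator learns to separate real from generated samples, while the generator learns to produce samples the discriminator cannot distinguish — the same dynamic, scaled up to deep convolutional networks and image data, that produced the portrait.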
The painting sold for US$432,000.
So who got the money? The entire amount went to Christie’s and the Paris-based collective Obvious, which produced the final physical painting on canvas from the design plotted by the algorithm.
This complete lack of recognition for the parties who had contributed to the technology behind the work created some noise in knowledgeable circles, but not as much as it should have. So when Ziv Epstein of the MIT Media Lab decided to research the intersection of AI and art, he took up this episode for a study of popular beliefs about AI. He was supported in the experiment by Sydney Levine and David Rand of the Department of Brain and Cognitive Sciences at Vassar (also involved with Harvard’s Department of Psychology and MIT’s Sloan School of Management, respectively), and by Iyad Rahwan of the Center for Humans & Machines at the Max Planck Institute for Human Development in Berlin.
The study involved formulating a fictionalised narrative similar to the Belamy sale and collecting the views of several hundred individuals regarding the incident. The narrative was devised in two parts, simulating a positive and a negative scenario: in the first, the painting brings in huge profits; in the second, it attracts a legal battle over IP issues and results in huge fines. Participants were asked to: (1) rate on a scale of one to seven how much credit they thought each party in the scenario should be given; (2) split the amounts in both imagined scenarios among the involved parties; and (3) rate how much they agreed with various statements about responsibilities regarding the algorithm.
The paper, recently published in iScience, revealed that people hold a host of false beliefs about the technology itself, as well as about the roles involved in any innovation. As Epstein put it in a media interview: “You ask people what they think about the AI, some of them treated it very agent-like, like this intelligent creator of the artwork, and other people saw it as more of a tool, like Adobe Photoshop…” He thinks these findings have profound implications for how much society should know about and discuss technology, especially all-pervasive emerging technologies like AI.
Broadly, the findings can be summed up as follows:
- People mostly gave credit to the algorithm itself for the final product.
- They also gave credit to the real-world art collective Obvious, for creating the final work.
- They gave added credit to the technologists who created the algorithm, and also to the “crowd” whose artwork was used to train the computer.
- However, people did not give any credit to the person who trained the final algorithm (programmer Robbie Barrat in the real case).
- Overall, the study shows that people buy into notions of machine agency, essentially anthropomorphizing the algorithm.
- Most participants did not understand that such outputs are the result of prolonged, continuous, and collaborative research involving many people. This finding has long-term implications for the technology-society equation.
Epstein’s own words sum up the concerns most aptly: “…a lot of people don’t understand what these technologies are because they haven’t been trained in them. That’s where media literacy and technology literacy play an incredibly powerful role in educating the public about exactly what these technologies are and how they work.”