The We Robot conference I recently attended had several outstanding presentations and papers on issues facing AI/robot ethics and law. You can see an archived livestream where you may find the author asking questions (quite frequently, in fact). This article deals with what I believe were my first and last questions at the conference: what makes an AI author/artist different from a human one, and does a machine spirit confer the ability to make art? Let me be very clear from the outset that the two concepts I’m about to combine in this paper are well outside my usual bailiwick. My spiritual knowledge has been demonstrated by my Robot God article (anyone who has read it can come to their own conclusion). My knowledge of copyright is mostly that I want one, plus some basics of its application. My understanding of art – inferring from the behavior of an art professor I worked with – is somewhere in the negative region. The metaphysics and social conventions of what makes art art, and why some things are art while others aren’t, confound me to the degree that I do worse than chance when trying to separate the two. In fact, I somewhat suspect I would score less than 10% on an art / not-art test.
With that admission of being worse than ignorant on one of the three concepts I’m about to attempt to talk about, let’s get into a discussion in which I will try my darndest to at least be accurate even if I’m not right.
The first topic was brought up during the workshop day, so there is no paper to go with it. This was the idea of Shinto with respect to robots. Since Shinto is an animistic spiritual structure, the question was what role the spirit of inanimate objects plays in AI and robotics. I am already wrong, however: a brief look at Google and its Wikipedia entry tells me that Shinto isn’t strictly animistic anymore. Since this premise was being discussed by a predominantly Japanese and highly educated panel, I can at least be mistaken in good company. The discussion we had suggested that some limited set of classically inanimate objects and forces have a spiritual component, and that a robot or AI may be among them. Let’s move on with the discussion under the assumption that they can. Even if there is no major animist religion I can name just now, the possibility that robots have spirits remains.
This has many implications, but let’s stick to art for now since that’s the premise of the article. Time to be even more wrong than I was about Shinto!
The copyright panel (I’m linking to We Robot’s program because all the papers were preliminary and so it isn’t good form to directly cite) concluded the conference on a fascinating and contentious topic: can an AI produce copyrightable art? Let us take it as read one more time and then not bring it up again: I am not an expert on copyright, and I know even less about what a romantic author is. However, I think I got the gist of the argument against AI authorship. The central points seemed to be:
- An AI author’s work has no intrinsic artistic value because it is the algorithmic combination of existing works.
- An AI cannot be an author because authorship must change the author. I used the word “improve” in my question, which appears not to be the best term. I think the idea is that no matter what the impact on the consumer, if the author has no concept of the universe of thought which makes up human civilization, does not contribute to this universe, and is not in some way altered by the act of authorship, they aren’t worthy of protection or the appellation of “author.”
- An AI has nothing to gain by being protected as an author. They will create regardless of whether they are protected, they have no use for money, and recognition is immaterial to them. Therefore anything that might be of value culturally should be in the public domain so that all may use the output immediately.
I, as a technocratic philistine, disagree to some extent with all three points (although I sincerely hope that I have restated them with moderate accuracy).
First, it’s important to distinguish an AI author from an author who uses AI to augment their capabilities. There’s no easy place to draw the line, but I think the example of Harry Potter and the Portrait of what Looked Like a Large Pile of Ash works in many ways for this article. It was made using a predictive text generator, which I have tried out in the graphical format the website provides. You have to specify whether you are writing dialogue or narration, as well as ending a sentence when you want. You can “seed” the sentence with a word or phrase to start, and pick from a list of 18 suggested words. I think this still makes it an AI author, since the human input can be quite minimal, and contributing structure still leaves the content to the AI. The “voice” belongs to the training set data.
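To make the mechanics concrete, here is a minimal sketch of how a suggest-the-next-word generator of this kind might work underneath. This is an illustrative bigram model of my own devising, not the actual tool behind the Harry Potter chapter; the function names, the toy training text, and the choice of 18 suggestions as a cap are all my assumptions.

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Build a table mapping each word to every word that follows it."""
    words = text.split()
    model = defaultdict(list)
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def suggest_words(model, seed, n=18):
    """Suggest up to n candidate next words for a seed word,
    most frequent continuations first."""
    candidates = model.get(seed, [])
    ranked = sorted(set(candidates), key=candidates.count, reverse=True)
    return ranked[:n]

# Toy "training set" standing in for the real corpus.
corpus = ("Harry looked at the pile of ash . "
          "Ron looked at Harry . Harry looked at the wand .")
model = train_bigram_model(corpus)
print(suggest_words(model, "looked"))  # continuations of "looked"
print(suggest_words(model, "Harry"))   # continuations of "Harry"
```

The point the sketch makes is the one above: the human picks the seed and the structure, but every candidate word the model offers comes from the training set, so the voice is the corpus’s, not the operator’s.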
This brings me to point 1. The voice of the author is derived from the training set. Therefore the act of writing – should the content created be added to that set – can only reinforce what has already been written. That is, the voice never changes, because what comes out can never evolve; it is a sort of closed conceptual loop. It is as if you built a machine that shines the color opposite on the color wheel to whatever light is shone on it, then reflected its output back to its input: it just goes red -> green -> red -> green, and cannot escape this loop. This would be a perfectly valid argument to me if the AI were made with a training set already picked out and no new inputs other than what it had created (and perhaps not even that).
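The closed loop can be shown in a few lines. This is only a toy rendering of the color-wheel example above, with the complement table trimmed to the two colors in question:

```python
# Toy model of the closed loop: a machine that outputs the
# complementary color of its input, with the output fed back in.
COMPLEMENT = {"red": "green", "green": "red"}

def run_loop(start, steps):
    """Feed the machine's output back to its input `steps` times,
    recording every color that appears along the way."""
    seen = [start]
    color = start
    for _ in range(steps):
        color = COMPLEMENT[color]
        seen.append(color)
    return seen

history = run_loop("red", 6)
print(history)       # alternates between the same two colors
print(set(history))  # no new color ever appears, however long it runs
```

However many iterations you run, the set of colors produced never grows – which is exactly the worry about an AI trained only on its own prior output.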
The thing is, why not have an AI that looks further? If it were to consume new material at varying intervals, with a bias towards the genre it’s supposed to write, its reading habits wouldn’t be that different from mine. That probes the boundary between replication and influence. Even with the Harry Potter AI, it created a plot which JK Rowling is almost certain never to have conceived of, and did so in her authorial voice, making it arguably some of the best fanfiction out there.
Going on to the similar point 2: an AI author is not changed by the act of authorship, has no intention or meaning that it itself understands as such, and thus cannot create art. Let’s be clear about one thing: we’re talking about AIs in the near and possibly mid-term future. The far future, with sentient/sapient machines, is a totally different ball of wax. I dislike but understand this argument. I think it brings in too much ambiguity to be enforceable, because it assumes that humans are ipso facto artists by their very nature – that a toddler putting together random paint is changed by the act (and possibly they are). It also assumes that the mind and soul of the artist and author are the core of protectable art: it doesn’t matter what we as viewers and readers experience; it’s the creator that is important.
This is crucial, because it means that the Harry Potter story referenced above is not art or at least not protectable art. The fact I found it funny, insightful, and a clever reflection of how an author’s voice in print is as distinct as it is when heard, regardless of whether it makes sense, is immaterial. The fact that it spawned a myriad of dramatic readings and animations online all of which are protectable is immaterial. It isn’t protectable because we don’t believe it changed the author in a way we consider artistic. It may have altered the text predictor in an algorithmic way, but because we can mathematically (eventually) derive the precise change, it isn’t spiritually/intellectually/culturally/whatever valuable. Of course, I suppose the flipside of that is that it can’t be infringement. Does that break the chain? Is all the derived work of a piece that is axiomatically in the public domain thus not infringement as well, or does this public domain for AIs exist in a sort of artistic subspace where the chain simply skips one inspiration and links the derived works to Harry Potter itself?
Here’s a question: if mechanistic predictability and understanding is the criterion here, what happens when the human brain is completely mapped? What if we find that we can mathematically model our own minds? Do we then cease being artists because we can create a function for our brains that transforms input to output without needing us? A computer capable of simulating a human brain would likely be able to simulate something more complex. Do we lose our ability to be artists when something that can treat us the way we treat a modern AI comes along, ceding art to the most mystically unintelligible intelligence?
See, that’s the thing for me. The following image is artistically unintelligible to me:
I trust that someone could eventually explain its artistic merit, message, and contribution to human culture. Trust, that’s the key word there. We believe that we are being enriched by paintings of soup, or color splashes, or geometric shapes arranged in a certain way. It doesn’t need to tell us a story because we believe it represents a story to the person who made it and that story meant something. Barbara Cartland may eventually have a bibliography of nine hundred books. Imagine how changed she must have been by the time she died! Or perhaps she started approaching the asymptote of authorial alteration, a literary Zeno’s paradox. What’s the Planck length of artistic and cultural evolution, the atomic quantity of personal growth?
We trust each other as human creators to create something that at the very least means something to us. If we don’t extend that trust to AIs, we’d better have a reason that doesn’t apply to our own creations as well. We assume that there is change occurring in the human creator, but what if the robot does have a soul? Does that change the equation? Don’t we then have to give it the same consideration?
With Ms. Cartland we reach point 3. She needed protection not, perhaps, as a meritorious creator but as a person who needed to make a living. An AI is not motivated by money, survival, or anything but the programmatic imperative to create. I won’t argue the point directly because I don’t (directly) disagree. We protect IP partly to secure the rights of creators and encourage people to create more and reap the benefits. I’ll just make one yikes factor point: isn’t it the goal of most artists, at least to start out with, to create simply because they want to? To be able to throw off the limitations of what is profitable and just make something because they want to?
So I guess that I agree with the conclusion if not the premise. An AI doesn’t need protection because an AI has, by its nature, achieved the end goal of an artist: it is free to make what it will, without fear of starvation or penury, and without the demands of patrons and customers. An AI isn’t undeserving of a copyright because it’s lesser than we, but because it’s transcended the need.
Getting back to the spiritual side: if robots do have souls, and the soul of an artist lives in an AI that makes art, it’s a heck of a lot closer to the ideal than most human artists. And isn’t that the biggest kick in the teeth for us as biomechanoids: we may hold the intellectual high ground, but our spirits aren’t looking quite so shiny. If a war robot’s spirit is harmed by being a war robot (and maybe it isn’t, if the souls of warriors are improved by engaging in combat), then an art robot’s soul is commensurately improved by its unhindered artistry. If that’s the case, it’s goodbye to points 1 and 2, because there’s the improvement right there, in a realm where mathematics can’t be used to deny robots the status of authors. As for point 3, I suppose we could start charging them for electricity and processing power.
I might be getting a bit artistic with my arguments in the last couple paragraphs, but hey, I’m allowed. I’m human.