The Ethics of AI
For the past 3 weeks I have been teaching an online course for MusicFirst titled AI in the Music Classroom. Running a music tech company is a full-time gig, but I am thoroughly enjoying teaching this course, not only for the meaningful interactions that I’ve been having with the students enrolled in the class, but also for the deep prep work that I’ve been doing each week. I truly believe that the only way to be a good teacher is to throw yourself in the deep end and do LOTS of preparation before each class. As part of that work, one of the aspects of AI that we looked at last night was the ethical considerations and issues that go hand in hand with AI - specifically Generative AI. While the focus of the class was the impact that AI is having on the music industry, the portion dedicated to the ethics of AI seemed to resonate the most with my students. Here is an overview of that discussion.
The three main ethical considerations that I focused on when it comes to AI in music and education were Attribution, Amplification of Biases, and Diversity, Equity and Inclusion. There are obviously more issues to be unpacked than these three, but they are the ones I decided to focus on - specifically because of the impact they can have on education.
When it comes to Attribution, there are many unanswered questions about WHAT DATA the large language models and generative pre-trained transformers were trained on. In non-geek speak - what data is ChatGPT drawing its information from? As human beings, we are influenced and inspired by musicians, artists, authors and other creators, and we can draw upon the sum total of those influences when we create something ourselves. The difference between humans doing this and machines is that humans inherently have a MUCH smaller pool of data and lived experiences to draw from, whereas ChatGPT draws on everything that is publicly available on the internet. Musicians borrow from other musicians, as do composers. It’s part of what we do. But as one student pointed out, it’s a human being doing that - not a line of code. It seems that, consciously or unconsciously, we’ve got a big problem with machines doing creative work. It’s almost as though AI has an unfair advantage on the one hand (it can draw upon a seemingly infinite amount of data) and is appropriating human endeavors on the other (which we might feel threatened by).
Further, the tech companies are very secretive about what data they have used to train their algorithms, and how they’ve done it. That means that we simply don’t know who or how to attribute the creative outcomes that generative AI produces. Right now, if you ask Udio or Suno to write a song that sounds like Beyonce, you’ll get a message saying that they can’t do that, but that they’ll give you something in the same genre or style. The question is - what are they doing behind the curtain while the algorithm whirrs away? Do you really think it’s not drawing upon Beyonce’s music? I certainly don’t believe that. And that is the issue with attribution. Until these tech companies come clean and either footnote each work they produce with all of the works used to generate the new song, or create an equitable royalty scheme that fairly compensates the artists whose works were used, this issue will remain problematic and will likely keep musicians and creators fighting these new technologies tooth and nail.
Everyone knows the expression “Don’t believe everything you read.” Smart advice that is meant to instill critical thinking. When it comes to the internet, this expression should be a warning label on every site you visit - mine included. There is so much good content available online - written by professionals and amateurs alike. You can access incredible information about endless topics, but it is wise to take everything with a grain of salt. There is also a TON of deceitful content and misinformation online. For example, have you ever been swayed to purchase something online based on the reviews, only to receive that product and realize that you’ve been hornswoggled? It’s happened to me numerous times. Or how about restaurant reviews? These reviews usually fall into two categories: the BEST place I’ve ever eaten - or the WORST. Which reviews do you believe? If ChatGPT and other generative AI algorithms have been trained on the entire internet, how can they decide what is true and what isn’t? Do you think these algorithms had human guides helping them review every site? Highly doubtful. What this means is that all of the inherent biases baked into the fabric of our society - and into the information we publish about it - have naturally been baked into ChatGPT as well.
The example that I used in class is that there are LOTS of websites that claim that the Holocaust never happened. LOTS. Do we really want an AI algorithm amplifying that misinformation? I would certainly hope not. What about many of the other ugly aspects of human history - including slavery, racism, genocide, terrorism, extremism, etc.? Does ChatGPT really know what is true and what isn’t? Does it have its own inherent biases? If 60% of the content on a given subject says one thing, and 40% says another, how does ChatGPT figure out what is correct - or at a minimum, what is the commonly accepted answer? Think about politics and religion. Think about marginalized populations that, due to historically racist policies, don’t have their voices heard in the data that ChatGPT was trained on. If humans believed (and they did/do) that one race was biologically superior to another, and that information is in the data that was used to train these algorithms, will the algorithms amplify this type of bias? I asked ChatGPT that very question, and was surprisingly pleased with the answer I received. Maybe there is some hope, but until AI companies are more transparent, these issues and questions will remain.
That leads me to the last ethical issue that I discussed last night - Diversity, Equity and Inclusion. In music education, and education in general, we are finally experiencing a reckoning when it comes to how music has been taught over the past century and a half, and how - whether knowingly or unknowingly - our curriculum has excluded many diverse voices that should have been included. My recent interview with Prof. Nate Holder provides LOTS of examples of how we can do better as music educators in this arena.
Because of the inherent LACK of diversity, equity and inclusion across all aspects of education in the United States, there is little doubt that the AI algorithms have picked up on that, and will likely continue to amplify it. I hope I’m wrong. If you have done any research into this topic, you’ll know the issues that I am talking about. There are so many marginalized people who haven’t had a seat at the table when it comes to music composition - especially during the Baroque, Classical, and Romantic periods of classical music. Further, one only needs to look back about 70 years into our own history to see segregation as policy and an enormous imbalance in access and equity in educational resources and environments.
I am truly hopeful that the folks behind the curtains of the many companies launching these “game changing” services upon humanity are mindful of how big a responsibility they have to address these ethical issues. Teachers should be ahead of the curve on this and have these important discussions with their students NOW rather than waiting for it to happen. To be clear, I’m not accusing these tech companies of wrongdoing at all - in fact, I don’t think they are doing anything necessarily “wrong”. What I do believe firmly, and it seemed to me that my students agreed, is that these AI tech companies have a moral and ethical responsibility to make sure that ALL of these questions, and the many more that I didn’t raise in class last night, are answered - with transparency and accessibility for all who are interested.
Here are some questions that I would recommend raising with your colleagues and students that get right to the heart of the matter:
Who do you think is making the majority of the money when it comes to generative AI music making tools? The tech companies? The artists? The creators?
Do you think anyone is being left behind by generative AI programs? If so, who? If not, why?
Whose responsibility should it be to fact check the responses to prompts submitted to generative AI tools? The tech company? The user? Someone else?
Do you believe that generative AI tools have a built-in moral compass to help them make sense of all of the information they have been trained on? If so, what makes you believe that? If not, do you think they should?
How can AI tech companies make sure that they are always doing the right thing for humanity? Do you think that they should have that responsibility?
Do you think that, if a musical artist has been emulated or copied by a generative AI music tool, they should be entitled to compensation?
Do you think that tech companies should be more transparent on what data was used to train their chat bots? If so, why? If not, why not?
What do you think?