Think Out Loud

Portland band YACHT experiments with artificial intelligence and music

By Sage Van Wing (OPB)
Oct. 20, 2022 8:09 p.m. Updated: Oct. 28, 2022 10:13 p.m.

Broadcast: Friday, Oct. 21

In 2019, the band YACHT released an album called “Chain Tripping.” Every piece of the album was created by artificial intelligence: the music, the lyrics, the album art and the title. The band members, who come from Portland and Astoria, also made a documentary about the process of creating the album. It’s called The Computer Accent, and it’s playing this week at PAM CUT. Lead singer Claire Evans joins us to explain what AI can do for music, and what it can’t.

Note: This transcript was computer generated and edited by a volunteer.

Dave Miller: This is Think Out Loud on OPB. I’m Dave Miller. In 2019, the band YACHT released an album called Chain Tripping. Every piece of the album (the music, the lyrics, the art, the videos, the title) was created using artificial intelligence. The band members, who come from Portland and Astoria, also produced a documentary about the process of making the album. It’s called The Computer Accent, and it’s playing this coming Tuesday at PAM CUT, formerly known as the Northwest Film Center at the Portland Art Museum. Claire Evans is one third of the band YACHT. She joins us now to talk about the promise and also the limits of using AI to make music. Welcome to the show.

Claire Evans: Thanks for having me.

Miller: Thanks for joining us. I thought we could start with one of the songs from the album, and we can talk about the making of it. This is “Death.”

[Song “Death” plays, then fades . . . ]

Why did you and the band decide to use AI to create this album?

Evans: Good question. Well, we’ve been a band for 20 years, and in that time there have been a number of really significant, transformative, technology-driven changes in music, from social media to digital recording to streaming. We’re still here, and I think that comes from a willingness to try new things. And AI is one of those things. It promises to be sort of the final boss of industry-transforming technologies. So we wanted to get a jump on it, basically. We wanted to understand it.

We wanted to find a way to make it our own, so that it could never really displace us, and we tend to learn through making things. So the prospect of making an album immediately seemed like a neat, really self-contained way to approach understanding something new. Here was a way to explore what AI could bring, not only to music composition and writing lyrics and recording, but also to the artwork and music videos and typography and photography. You name it. We’ve always felt like an album is kind of a frame that brings all different kinds of creative work together. So it was a really interesting opportunity to kind of assess the state of the art with this tech and play around with it and see what we could make.

Miller: What were your fears before you started working on the album, or as you were starting work on the album?

Evans: Oh, we didn’t really understand what we were getting into, to be honest. I think, like a lot of people, our understanding of AI was heavily influenced by somewhat reductive mainstream media representations of what it is, and decades and decades of science fiction representations of malevolent, anthropomorphized, sentient machine demons. So we didn’t really know what we were getting into. We had a suspicion that it would take a lot of work, and we felt very daunted by the fact that we didn’t know how to code, and we didn’t understand the math that was under the hood. So we were worried that we would just be too dumb to do it, frankly. We were also concerned that it would be too easy, actually. It was sort of like either it was gonna be too difficult or it was gonna be too easy. We were afraid that if it was as simple as pushing a button and training an algorithm to generate a song, then what would be interesting about that? What would the challenge actually be? Initially, we thought the challenge would just be finding a way to kind of make these robot-produced songs our own and perform them in some way that made them feel real. It wasn’t really until we started working that we realized how far behind the state of the art actually was. I mean, you can’t press a button and make a pop song. You couldn’t when we started working on this album in 2017, and you can’t do that now. It’s a good thing, frankly, but it ended up taking just a lot more work, a lot more effort and a lot of human-in-the-loop engagement, which ultimately made the whole project a lot more rewarding. And I think we learned a lot from it.

Miller: Can you describe the set of rules that you all came up with governing how you would make the album, and what you would do with the output from computers?

Evans: We thought it was really important to think of the output not as the end of the process, but as the beginning of a new process. We sort of jokingly referred to the entire recording experience as a scotch-taped-together recording process, because, like I said, there were no AI tools available that would allow us to produce a full song with verse, chorus, bridge and lyrics. So we had to find different tools for each aspect of the construction of a song: one for creating lyrics, one for creating melodies, different tools for generating sounds. And we needed to take a unique approach to each of those tools, and understand each of those tools. But it was really important for us that whatever tool we were working with, it had to be trained on us. That is to say, it had to kind of draw from our own back catalog and our own influences. So we spent a lot of time making ourselves legible to the machine, like building databases of all our favorite music, so we could teach the lyrics-writing algorithm the kinds of lyrics we like.

Miller: And you had to actually feed it thousands of songs, but you don’t have thousands of songs in your own catalog, so they also came from your favorite artists. So what was that piece of the algorithm? What would it do with the lyrics that you loved or had written in the past?

Evans: Well, we used what’s called a character recurrent neural network. I’m not gonna get too deep into the technical weeds, but basically it’s a system that generates language one letter at a time, based on what it has learned, from training on a massive database of text, about which letter is most likely to come next. So you give it an A, and it will automatically fill in an N and a D, because “and” is a word. And you can tweak the parameters. But basically we taught this character-generating model language using only song lyrics. So the only context it had for what a word is, or how frequently certain words appear in language, was from this massive database of song lyrics that we put together: I think two million words of songs from all of our favorite artists, from all the music we grew up listening to, to the music our parents liked. Everything that we thought might be kind of kicking around in our own heads, we tried to put into this system so it could generate lyrics that felt like us. And it was really interesting. I could talk about this for hours, but the lyrics-generating model was able to generate surprising things; it would tap into these really universal, almost lizard-brain aspects of songwriting. Sometimes it would get stuck in these loops and just repeat things over and over again, but about the kinds of things that songs are always about, you know, love and desire and rage. Like, maybe 50 pages of just “I want you, I want you, I want you, I want your brain.” You just played a short clip of “Death.” We drew some lyrics for that song from a passage that just said “stab” over and over again: so stab a crash, stab a car, stab your hair, stab a factory. I mean, it’s so much fun to play with these models, because they generate this deeply surreal material. On its own, it doesn’t hold up, because it doesn’t have any structure; it doesn’t have any meaning. But our job as artists is to kind of take that material and give it meaning, either by putting it together in some way, like a puzzle, or by actually embodying and performing it and projecting all this stuff that we bring to it as performers and as artists. It’s a really fun role to try to fill and step into.
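
To make “one letter at a time” concrete, here is a minimal sketch of that kind of next-character generator. It substitutes simple n-gram counts for the recurrent neural network YACHT actually used, and the lyrics file is a hypothetical placeholder, but the sampling loop has the same shape: predict a likely next letter given what came before, append it, repeat.

```python
import random
from collections import Counter, defaultdict

ORDER = 4  # characters of context; a real char-RNN learns far longer-range patterns

def train(text, order=ORDER):
    """Count which character tends to follow each `order`-character context."""
    model = defaultdict(Counter)
    for i in range(len(text) - order):
        context, nxt = text[i:i + order], text[i + order]
        model[context][nxt] += 1
    return model

def generate(model, seed, length=200, temperature=1.0):
    """Emit text one character at a time, as Evans describes."""
    out = list(seed)
    for _ in range(length):
        context = "".join(out[-ORDER:])
        counts = model.get(context)
        if not counts:
            break  # context never seen in training; stop generating
        chars = list(counts)
        # "You can tweak the parameters": temperature sharpens or flattens
        # the next-letter distribution, trading predictability for weirdness.
        weights = [c ** (1.0 / temperature) for c in counts.values()]
        out.append(random.choices(chars, weights=weights)[0])
    return "".join(out)

if __name__ == "__main__":
    # "lyrics.txt" is a hypothetical stand-in for the band's two-million-word corpus.
    corpus = open("lyrics.txt", encoding="utf-8").read().lower()
    model = train(corpus)
    print(generate(model, seed="i want you", temperature=0.8))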

Miller: Do you have the same emotional connection to the music, and especially to the lyrics, for these songs that you do to songs for which YOU were the human who wrote the lyrics?

Evans: Yes and no. I think meaning is very fluid, and there are songs that I wrote, pen and paper, the old-style way, a long time ago, that I’ve been performing for decades, and the meaning of those songs has changed for me over the years, because life happens and you have a different perspective on things. The way I feel about the meaning of those songs is different from what our fans might feel or what other members of the band might feel. So everyone’s relationship to meaning in the context of songwriting involves a great deal of projection and personal experience. I think with these machine-generated lyrics, it’s a little bit more effort to project meaning onto them. But it’s also much more expansive and open-ended. And I really love the idea of performing songs that I am still figuring out the meaning of, and that I can share that experience with the audience, and we can talk about it and we can each share our own interpretations, sort of like a lyrical Rorschach test that we’re collaboratively viewing. I think that’s really beautiful.

Miller: It’s funny, because on some level, the process of feeding this algorithm thousands of songs, millions of words, feels really foreign or alien. It seems at first blush like a different way of creating something than the way we think of an artist. But on another level, you as a songwriter, or somebody who writes short stories or novels, you’re people who have taken in all kinds of other works over the years. They’re all somewhere in your head, and I wonder if on some level the act of creation actually feels similar for a human, that you’re doing a version of what the computer is doing: manipulating all these things that are in your head, and then coming up with something that can’t help but be related to them somehow?

Evans: Absolutely, I think it is. There are differences, of course. These systems are capable of producing material at a scale and volume that is genuinely terrifying. Like, we press the button and print hundreds of pages of lyrics. An amount of lyrics that I could never write in a lifetime, I could create with one of these systems in a matter of minutes. Of course, there’s all this time that goes into training and building it to begin with. But it is daunting to come up against that level of sheer generative potential.

Miller: It’s millions of monkeys at typewriters. But the monkeys are smart, or sort of.

Evans: The monkeys are not smart. The monkeys are really, really good at really, really specific things. They have no general intelligence.

Miller: They’re trained, then, without being smart.

Evans: Yes, we are the smart ones. Everything that these systems produce, everything that’s beautiful or interesting that these models can create, is because of the artists and musicians who make up the data sets that they were trained on and built with. Everything AI knows about being creative, it learned from humans. Without us, it’s nothing. I think it’s really important to remember that, because we very easily get overwhelmed thinking, ‘My God, that’s so powerful.’ But it’s really just this sort of hallucinatory distillation of everything people have made before. And that is a much more exciting prospect, I think: the idea of collaborating with hundreds of other artists throughout time rather than just one machine.

Miller: I want to play a scene from this new documentary in which you’re talking with your bandmate, Jona, who has been working on playing, as an actual drummer, some of the loops or licks that were given to you all by the algorithm. Let’s have a listen:

[Excerpt plays from documentary film]

“So this today started with recording these drum loops that were spit out from Magenta, and they were so difficult to play. And it made me realize that the way I play drums is all just piecing together different clips that I’ve learned. There are certain things that you do on the drums. Like, if you’re gonna hit a crash, you usually hit the kick too; it sounds silly to hit a crash in the middle of a pattern without a kick. And this thing is just creating loops that are very weird. It was pretty hard. It was like having to learn drums again. And even though it was supposed to be interpolating between two of our previous patterns, in between there’s some weird stuff that no one has ever done. When you’re playing things that are so weird, this is the first time this drum pattern will ever be on an album in the history of time.” “Yeah, I think that’s safe to say. That’s cool.”

I love that, because it shows us what happens when somebody who hasn’t played an instrument before, and maybe isn’t beholden to the known limits of instruments, just says, well, why don’t you do it this way? It reminded me of another scene in the movie, when Rob and Jona were trying to play the music spit out by a computer. And it was hard because they thought that it was too straight-ahead. They wanted to sort of loosen it rhythmically. And then the question was, would that be cheating, going against the rules that you all had set for yourselves? How would you figure out those kinds of internal debates?

Evans: Yeah, we debated a lot. You can see in the film, there’s a lot of argument about what qualifies as cheating and what doesn’t. I think for us, we really wanted to set these very explicit rules at the beginning of the process: that we only wanted to interpolate from, or draw from, our own back catalog and our influences, and that once we were working with the generated material, we didn’t want to add anything. We wanted to only work with what we had. We could subtract and transpose things and rearrange things, but we couldn’t jam. We couldn’t come up with a harmony. We couldn’t improvise. We couldn’t do any of the things that felt natural or normal or good in the moment. Ultimately we held to that rule, and we also couldn’t work with anything that the models couldn’t create. So, there was no machine learning model that could generate chords at the time that we wrote this album, so there are no chords on the album. Things like that, where it’s probably overly fussy and rigid. But I think for the purposes of really taking this project seriously, understanding this technology, and really seeing what we could do within those constraints, it was important to us.

And I think also, and I speak for myself, but I think it’s fairly true for a lot of artists: having constraints is a really important part of beginning a major project; otherwise you can get so easily overwhelmed. But it’s true that within those constraints, we struggled a lot to try to figure out how we could make this sound good, or how we could make this feel good, because there’s so much of this material, and these models, these tools, they don’t have a body. They have no understanding of what feels good to play. They’re not coming at composition from habit or embodied experience. They’re just generating notes in sequences based on mathematical probability. So when we sit down to play those notes or sing those melodies, they might not be very complex on paper, but they’re just weird. They’re just to the left of what something would normally be, and they don’t feel good. So it made us realize how much we were bound to our own sort of bodily experience and history playing music.
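
Jona’s line in the film about “interpolating between two of our previous patterns” names a concrete technique: a model such as a variational autoencoder (Magenta’s MusicVAE, for example) compresses each drum pattern into a latent vector, and new in-between patterns come from blending two of those vectors and decoding the results. Here is a minimal sketch of the blending step; the encode and decode calls appear only as hypothetical comments, since the band’s exact tooling isn’t detailed in the conversation.

```python
# Sketch of latent-space interpolation: blend two latent vectors to get
# the "in between" patterns Jona describes. NumPy only; the encode/decode
# steps below are hypothetical stand-ins for a trained model.
import numpy as np

def interpolate(z_start: np.ndarray, z_end: np.ndarray, steps: int) -> np.ndarray:
    """Return `steps` vectors blending linearly from z_start to z_end."""
    alphas = np.linspace(0.0, 1.0, steps)
    return np.stack([(1 - a) * z_start + a * z_end for a in alphas])

# Hypothetical usage with some trained model's encode/decode:
#   z_a = model.encode(drum_pattern_a)  # a familiar pattern -> latent vector
#   z_b = model.encode(drum_pattern_b)  # another familiar pattern
#   for z in interpolate(z_a, z_b, steps=8):
#       play(model.decode(z))  # the in-between patterns are the weird ones
```

The endpoints decode back to patterns the band already knew; it is the interior points, coherent to the model but foreign to any drummer’s habits, that produce the “weird stuff that no one has ever done.”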

Miller: What has all this taught you about the best, most useful, and least destructive use for these tools when it comes to making music?

Evans: Yeah. I think we would never make an album in this way again, but we also never make an album the same way twice anyway. So, on to the next, always. There are aspects of this experience that I will take with me for the rest of my life. For one, I think it broke us out of a lot of habits, like I was saying, physical habits, but also our own patterns of thinking. We knew what a YACHT song should sound like; once we were playing with these tools, we were open to a lot more subtle melodies, a lot weirder things. I think it gave us an ear for the weird that we will take with us. I write songs differently now. My relationship to language is different, because I’m not thinking about meaning quite the same way. I’m actually thinking about words as sounds first, rather than as things with meaning, and then the meaning kind of comes afterwards. There are a lot of sort of technical things like that, where it just slightly broke us and put us back together in a new way. But even now, there are still tools that are of interest to me. I think when we get stuck sometimes, it’s really fun to play with a language-generating AI model and see what it suggests. Sometimes our reaction is to do the opposite of what it suggests, but sometimes you just need something to bounce off of. So I think there are lots of really interesting uses for these tools. And of course the tools are evolving so quickly. I mean, the things that we were using in 2017 were cutting edge; we were working with research scientists, really getting the fresh-off-the-presses mathematical models. Those things are now old hat. They’re archaic. They’re built into a lot of recording software under the hood; they’re frictionless, totally invisible to most people. And a lot of these tools are much better at the kinds of things that we wanted them to do back then, in a way that makes them more interesting.

Miller: More interesting and more powerful, and more to talk about in the future. But Claire Evans, thanks so much for joining us.

Evans: Oh, yes! Thank you so much for having me.

Miller: Claire Evans is a member of the band YACHT. We’ll go out with “Sticking To The Station” from their 2019 album, Chain Tripping. You can hear more about the album and the documentary, The Computer Accent.

Contact “Think Out Loud®”

If you’d like to comment on any of the topics in this show or suggest a topic of your own, please get in touch with us on Facebook or Twitter, send an email to thinkoutloud@opb.org, or you can leave a voicemail for us at 503-293-1983. The call-in phone number during the noon hour is 888-665-5865.
