It may sound counter-intuitive, but Mouse on Mars co-founder Jan St. Werner thinks artificial intelligence can help us expand what it means to be a human being.
If the idea of increased enmeshment with technology strikes you as invasive and deeply unsettling on a gut level, you're certainly not alone. Even the band's latest collaborator, science fiction scholar Louis Chude-Sokei, freely admits that he can't forecast how AI will develop. On AAI, the new Mouse on Mars album, the veteran electronic outfit (which also includes fellow co-founder Andi Toma and longtime drummer/percussionist Dodo NKishi) embraces that uncertainty with an inquisitive, even playful approach to the concept of artificial intelligence.
Working alongside Chude-Sokei and a group of software programmers, the band captured a machine-learning process whereby an AI system becomes increasingly aware and, in a sense, sentient as the album progresses. A far cry from the bleak, dystopian tone of popular works like The Matrix, The Terminator, Blade Runner, or, say, William Gibson's Mona Lisa Overdrive, AAI doesn't play off of widespread fears and assumptions about technology as a menacing force. The music conveys less of a "dark" vibe than a reflective one. The album's point of view doesn't exactly align with techno-utopianism either. St. Werner actually harbors many concerns about how technology can exacerbate inequity. The solution, he proposes, is for us to get more engaged with machines.
At one point during the track “Speech and Ambulation”, we hear Chude-Sokei’s voice explaining that “We had very little understanding of how knowledge and consciousness actually worked. We assumed that machines were merely the perfection of logic. We did not imagine they were capable of desire. What we still don’t know is what machines want.”
It’s easy to imagine Chude-Sokei — director of the African American studies program at Boston University, editor of the esteemed academic journal The Black Scholar, and author of several books that address the confluence of music, culture, and technology — delivering this passage in a lecture hall. But there’s a catch: at many points on the album, what we’re listening to isn’t Chude-Sokei but a simulation of him.
Once the listener realizes that machines generated certain spoken parts, AAI — which stands for "anarchic artificial intelligence" — takes on a whole new, rather chilling context, leaving the listener haunted by a host of existential questions that the band never deigns to answer. At times, the speech patterns degrade into semi-intelligible utterances, such as the repeated — and very inhuman-sounding — refrain of "walk on the wild side" on "Walking and Talking", which brings to mind images of an android replica of Chude-Sokei sputtering its way to a functional level of speech capability. Here, the band's abstract approach to electronic music production suits the subject matter perfectly.
Over the last three decades, Mouse on Mars have carved a zig-zagging path between experimental and groove-based styles. At points, the plasticity of the sounds on AAI — and NKishi's parts in particular — gives the music a holographic quality, as if highly complex, intersecting panels of 3D imagery were being projected all around the listener. For all its conceptual weight and technical precision, though, AAI is also too rhythmic to engage only the mind without including the body. The highly danceable "Artificial Authentic", for example, consists mainly of a chorus-like hook. Taken as a whole, AAI can't be understood either as sound design for an art installation or as sweaty dance club fodder.
This is not to say that those two poles were ever mutually exclusive — or a fair representation of all the things electronic music could embody — but that Mouse on Mars continue to defy convention without necessarily thumbing their noses at it.
St. Werner, who has also lectured at MIT and is currently a professor of dynamic acoustic research at the Academy of Fine Arts in Nuremberg, spoke at length with PopMatters about AAI, what we can learn from AI, and how even he doesn't fully understand what the band accomplished with this new record.
* * *
The new album, AAI, contains several monologues, but there are also long instrumental sections. How do you think the larger points you’re making about technology come across in the parts that don’t have words?
Well, the record has several narratives. One is that it's a story, but it's also kind of like a science fiction experience [on a purely musical/sonic level]. The way we used the instruments and processed the sounds reflects this weirdly hyper-technological [approach]. At the same time, these very organic ideas about technology are part of our everyday lives as an extended version of ourselves. Technology has a casualness to it. It's not like it was a hundred years ago, when advances were always a surprise and always a source of fascination, as if the technology were separate from us.
We’re in such a deep, casual entanglement with machines that even the way we use them as instruments carries a lot of the themes we talk about on the record. For example, a synthesizer like the Nonlinear C15 has a very distinct feedback system of synthesis. It looks like a futuristic Rhodes — the processing inside is very advanced, but it has no MIDI, so you can’t sync it to anything. They’re about to change that, but with the version we have, you have to play it by hand. So it’s like this whole extended cybernetic experience of feedback and being inside the machine, but also of you as a user or host or sparring partner — you’re engaged in that feedback too. And you learn from the machine; it’s not just the machine fulfilling your needs.
You also change through that interaction. And that’s what the idea of this record is. So you couldn’t talk all the way through. You would have to have these parts where it just proves itself casually as music or as an abstract narrative or as an experience in sound. But the way it reflects the conceptual idea is there in every detail.
As the record goes on, we hear the automated process you used becoming more “self-aware”. You’re presenting that aspect as a metaphor for emergent sentience.
Yeah. We tried to make it a narrative that takes you from A to B. We like that a record is kind of a journey. That's something we've tried with every record. As much as I do like vinyl, I really like CDs because they take you from beginning to end. At the same time, with this one, you can jump around. Maybe it's akin to some of the French post-structuralist stuff like Deleuze and Guattari that people were referring to a lot in the '90s, with discussions about non-linear readings and alternate ideas of how you engage with a text or how you'd engage with the internet through hyperlinks, scrolling back and forth and having para-narratives. I think this record has that.
We found [that subtext], and for us, it came through experimenting and experience, but we were also deliberately making sure that there were weird little cross-references in certain sounds and words. So the more often you hear the record, the more it lures you in, and the less obliged you are to understand it from beginning to end. You can juxtapose parts and find your own routes. Like the house in House of Leaves, which has a shape from the outside but gets bigger and bigger when you look at it from the inside. We love that. That's a big part of what we love about science fiction: it's not a fantasy about a future to come; it's a different reading of the present because the future isn't something that suddenly kicks in.
Speaking of the non-linear aspect, there are sections where the spoken parts have more of a cut-up structure. But the first time we hear Louis Chude-Sokei’s voice, the train of thought is very coherent as if you’d just inserted one of his monologues. But that’s not what those parts are. It’s shocking to think that a non-human process could come up with something that very much resembles human thought.
We were shocked ourselves at how well the speech synthesis works. Our goal is to have performable speech synthesis. We succeeded, to a large extent, with this record, but we really want to pursue it and push it further. It's something that doesn't really exist currently. There isn't a tool that works like a synthesizer where it's actually speech, especially at the level of an existing human being donating their vocal identity to such a synthesizer. But it's quite taboo to do that. We can tweak and bend a synthesizer so that it actually sounds like a real human, but if it turns out to be a machine sounding like a human, we don't want that. And the question is, "Why don't we want that?"
What makes us think that this is the frontier [at the edge of what] makes us human, where a machine is no longer just a machine? That's one of the urgent questions on the record. Because, of course, we in the band are 100% pro-humanist, and we think that we as a society should be much more humanistic. This society is very anti-humanist, as we see with border politics, social division, the privilege of money versus talent versus being human, access to resources, access to knowledge, access to education, access to basic elements like water and a place to live — all these things. We accept these separations in society while at the same time we cling to something ridiculous, like the idea that a body belongs to something specific. A woman can't make decisions about her own body; humans in general can't design their own bodies. People are very welcome to have extensions and prostheses when those help them function because they've lost a part of the body, but they're not supposed to go beyond that frontier.
A voice has to be a real voice because it has to be identifiable. But I want to be able to change my voice in certain contexts. Like, if I know I’m in a household where people are using Alexa, I’d love to have a different voice.
There’s a recent interview with Chude-Sokei, where he talks at length about agency: if we feel like we’re the ones guiding the process, then we’re okay with it, but as we start to become aware that there’s this whole mechanism acting on its own, that’s threatening to us on a gut level. The way corporations function, though, it’s as if they’re living things that have grown beyond our ability to harness them. So, in a sense, we already had an AI-like mechanism as far back as the 1800s.
Exactly. Take the decentralized power aspect: you can't even hit the core of the system. I think it's time to hit back from an artistic position. It sounds a bit naive, but we have to be more cryptic, and we also have to become more complex to maintain our identity. We really have to make an effort to become more complex beings so that we're less easily read and controlled. And by "control," I mean controlling our desires and needs as humans.
So, if we want to challenge this, we really have to work on what we define as human. From an artistic point of view, I think there's headspace. There's a lot we can do. The identification of a human with their own voice, for example, is enormous. The topic of the voice, which our record is only scratching at, is a whole discussion. We're just poking a little hole in there by saying, "Look, that's a voice, obviously, but it's not a human speaking." So is it Louis who's speaking, or is it the machine speaking as if it was Louis, or is it just a machine that happens to sound like Louis?
He was so gracious. He helped that machine to [develop], and he didn’t feel like anything human was taken away from him now that there’s a machine that sounds like him. Because Louis knows that the way he sounds to himself can’t be replaced by any machine. So, there’s this question of “Where is the position and location of a voice? [What constitutes a voice?] How do we define a voice?” Like right now, Zoom is recording this conversation. I mean, we wanted to be recorded, but it’s on some server. It’s in some buffer being analyzed right now.