Computer Life JANUARY 1998 - by Sean Kelly


The godfather of electronic music describes how computer-driven audio can have a life of its own.

Brian Eno has always been ahead of his time. Over the past several decades, the forty-nine-year-old musician/artist from England has consistently opened doors to new kinds of artistic expression through sound.

Eno can be counted among a generation of widely diverse musicians whom we will forever remember for exploring the medium and defining their own categories - David Bowie, Laurie Anderson, and Peter Gabriel, to name a few. Eno has released more than thirty albums and CDs since his debut in '72 (not including albums he's produced for others, such as U2). On many of those - Discreet Music, Music For Airports, Thursday Afternoon, and Neroli among them - you can witness his experiments with audio and see how he refined musical creation into a systems science, incorporating computers as so central a driving force that the computers themselves become the music. For Eno (a member of the futurological think tank Global Business Network), computers, musicians, and even the production studio form an organic mathematics. And the music that results from their combination is as alive and organic as the musicians themselves. It's not music you dance to - it's music you think to.

Still using technology as a major part of the equation, Eno continues to refine his ideas about "generative music," a process he focused on in a discussion at San Francisco's Imagination Conference in mid-'96. Loosely defined, generative music is audio organized such that, once an artist sets variables that define the first few seconds of a computer-driven piece (instrument types, tone, attack, release, and so on) and technology sustains the interaction of those variables, the audio continually generates itself, growing and changing into new forms over time.
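The process described above - fix a handful of starting variables, then let a rule keep the piece evolving on its own - can be illustrated with a minimal sketch. All names here (the scale, the mutation rule, the parameters) are hypothetical stand-ins, not a description of any actual generative-music software:

```python
import random

# A sketch of the generative idea: the composer fixes the starting
# variables (here just a seed, a scale, and bar lengths), and a simple
# rule then evolves the piece indefinitely without further input.

SCALE = [0, 2, 4, 5, 7, 9, 11]  # a major scale, as semitone offsets

def generate(seed, bars=4, notes_per_bar=4):
    """Yield bars of notes; each bar mutates slightly from the last."""
    rng = random.Random(seed)  # the composer's fixed starting point
    bar = [rng.choice(SCALE) for _ in range(notes_per_bar)]
    for _ in range(bars):
        yield list(bar)
        i = rng.randrange(notes_per_bar)  # mutate one note per bar,
        bar[i] = rng.choice(SCALE)        # so the piece drifts over time

piece = list(generate(seed=42, bars=8))
print(piece[0], "->", piece[-1])
```

Because the seed is fixed, the same "first few seconds" always unfolds into the same drift - the composer specifies the universe, and the system does the rest.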

Executive Editor Sean Kelly recently spoke with Eno about generative music and, more broadly, about how computers may add complexity to our experience of sound. Following is an account of their generative conversation.

SEAN KELLY: I remember hearing you say a year and a half ago at the Imagination Conference that you felt computers were cold and hard - less organic than a lot of things you were doing at the time.

BRIAN ENO: I think what I was objecting to was the assumption throughout the computer industry that suddenly, with any new technology, you can do absolutely anything. You know: Only your imagination is the limit. But, in fact, that's not true. Computers are a medium like any other medium, and all media have their own tendencies and inclinations and restrictions. So I felt it was completely obvious in music that computers made certain types of music and didn't make certain others.

I'm used to the idea that if you draw in charcoal, you don't make oil paintings, and if you work with violins, you don't write piano concertos. But the hubris of the computer industry, that a computer is this completely transparent medium that can become anything you want it to, simply isn't true. So a lot of my objections about technology have to do with trying to redress what I felt was yet more utopian hype about a new medium.

Of course, back then I fastened on CD-ROMs as a particular example of a real misunderstanding - a failure, actually. A failure that is still a failure, in fact, except for certain local uses, like a few games and so on. I think the failure of CD-ROMs came about because people wouldn't clearly differentiate between what the medium could and could not do.

Today, my feelings have to some extent focused themselves a little bit more. I have much stronger feelings about the use of computers in music - more articulate ones now that I have greater experience with them. And I think what's happened is that computers have created several new kinds of music. You know - techno, hip-hop, lots of music that involves sampling, looping, complex editing, and so on. I am very grateful about that. But at the same time, computers have virtually killed certain other kinds of music.

For example?

Well, you know what happens so often now in recording studios? Well, I should give a little context to this. I was probably one of the first people who said, "Music should be born in and of the studio." So the studio is a medium in and of itself.

It is the music?

Yes, it is the major tool of our musical era. So I always pushed people toward coming into the studio with almost a blank sheet of paper and then trying to make the music in terms of the studio rather than using the studio just as kind of a transmitter of stuff that already existed but you wanted to get on tape.

That was always my inclination, and now I've seen that grow disastrously wrong since computers have become part of the studio. That formula simply doesn't work anymore, and the reason for this is, first of all, as soon as you have a computer in the studio, you can very quickly, within minutes, make something that sounds quite good.

You can strap together a few nice-sounding samples, and they're all loops, so the thing will run forever, and in half an hour or so you have something that really sounds like music. And that's very dangerous. The thing has a sheen - it sounds professional; it sounds authentic - but actually you haven't really got anything yet.

Then, of course, the poor bastard who still has to write the song is sitting in the back of the studio, scratching away for the next four months, while the rest of the band, who are now so bored with the whole process, keep throwing on more overdubs - you know, more loops, more this and more that - and the space left for what is supposed to be the heart of the song becomes more and more restricted.

So artists are relying too heavily on the technology and not enough on the soul of the process?

I'm not saying, "Don't use computers," but I'm saying, "Understand that it's a medium that will make you tend to do certain things and not others." Computers have made making music extremely easy. They haven't made making songs any easier at all.

Making songs - that's to say infusing lyrics and melodies with personalities - is as difficult as it was in the fourteenth century. There's been no technological advance there. So putting the grooves together has become so easy that it has raced way ahead in terms of the amount of attention it gets over the songwriting side.

Now, that's fine if you are making music that's based around that whole cluster of things that computers do well. But, if you're trying to write songs and create music that rely on personality and soul and articulation and lyrics and blah, blah, blah, then you might be well advised to not bring the computer in too early in the game.

The ideas you're talking about with regard to music and personality suggest the latter comes strictly from the person rather than the machine. But your ideas about generative music seem to provide a first step toward applying personality to technology. And, conversely, toward applying intelligence to audio.

What generative music seems to be spawning here is a whole new way of looking at organized sound - as perhaps "artificially intelligent audio" [AIA]. I wonder if you have any thoughts on how that might proceed?

First of all, I think that's exactly how I would define it: artificially intelligent audio. That's a very good definition, and I shall quote it. For me, what it does is use something that computers do extremely well but humans don't - computers keep track of a whole number of variables at once and compute the interaction of them. That's something that really requires a different kind of brain from what we have, and so for me, generative music was really something born of the thing that computers can do.

I feel that with generative music, some people say, "Oh, well, the music isn't made by a human, so it's passionless," but actually it is made by a human. A garden isn't made by a human, but it's planted by one, you know? I feel that generative music means composing at that sort of meta-level of stepping back further and saying, "I'll specify the universe within which this music occurs and the sort of rules with which it will unfold, and then I'll be just like any other spectator and watch it happen."

You gave your talk on generative music well over a year ago. Has your interest in it changed at all?

I still find the idea thrilling, and I'm working with it more. I think the future is not that suddenly we'll abandon all other ways of making music. I think the future will have some sort of new mixed medium where, you know, I can imagine you buy a CD and you stick it into your combined computer hi-fi system, which is not very far off anyway, and some of the things on the CD are reproductions - that's to say, they're like things on current CDs - the singer's voice, this particular guitar performance, dah, dah, dah. So things might be fixed, but perhaps the context and order and dynamics within which they're replayed can all be generated differently each time you hear the CD. Or not.

You can choose a constrained performance, or you could say, "OK, now generate," and the CD will start to make lots of other mixes that you can also more or less control. You can prohibit certain mixes, you can encourage certain others... You know what I mean?

Well, you're suggesting that the music we buy will come with an unprecedented level of interactivity.

Yeah, it seems to me that this is really a new age for music. We've had this strange little blip of a hundred years, where people listen to exactly the same thing again and again. I think that will look strange in the future. It's really not that different from music boxes.

A music box is just a little machine that does the same thing over and over. You could think of hi-fis and the whole record business today as systems of producing very complex music boxes. But this generative thing sort of breaks that bubble that we've all grown up in and says, "OK, the future is some kind of mixture of variables and fixed ingredients, and composers will choose the things they want to plant in the garden."

Computers' natural generative capabilities seem like they could make the way those ingredients mix together far more complex. You've experienced that with SSEYO's Koan program [software that infinitely generates evolving music according to about a hundred and fifty variables set by its user].

Moving into the artificial intelligence arena, a generative audio piece could even receive input that determines its growth, from itself. So it could, for example, learn from itself and regenerate. Do you see any appeal to that?

Yes, I do. In fact, it's what I've been working on recently. First of all, one of the consequences of having pieces that are self-generating is that, of course, they are continuously variable. You as a listener might want to constrain some of those mutations. You might one day hear a mutation of the audio and say, "Boy, that is really lovely; that's the best I've ever heard it."

Then you might want to say, "OK, store that little set of values," whatever they are - you don't have to know what they are as long as the computer does. Store that set of values, and in the future use that as the seed from which we further mutate the piece. So we've moved a step away from the original. Now, I can see this is a very interesting kind of future where people start to evolve the music away into a place that they want it to be.
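Eno's "store that set of values" idea - save a favorite mutation and let future variations descend from it rather than from the original - can be sketched as a toy parameter-evolution loop. The parameter names and the mutation rule below are hypothetical illustrations, not part of any system he describes:

```python
import random

# A piece is treated as just a parameter set. The listener saves a
# favorite mutation, and later mutations descend from that saved seed
# instead of from the original.

def mutate(params, rng):
    """Return a child parameter set, nudging one value at random."""
    child = dict(params)
    key = rng.choice(list(child))
    child[key] = round(child[key] + rng.uniform(-0.1, 0.1), 3)
    return child

rng = random.Random(7)
original = {"tempo": 0.5, "density": 0.3, "brightness": 0.8}

# The listener auditions a mutation and decides to keep it...
favourite = mutate(original, rng)

# ...and the favourite, not the original, seeds the next generation.
next_generation = [mutate(favourite, rng) for _ in range(3)]
print(favourite, next_generation[0])
```

The listener never needs to know what the stored values mean, as Eno says - only the computer does; the saved dictionary is simply replayed as the starting point for further drift.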

In that case, it would be very difficult to know who's authored a piece. So this piece starts to evolve a bit like folk music. You know: You'd sing a song; somebody else would kind of remember it, try to reproduce it, and do it in their way.

And then pass it on to another village.

Yes, along and along and along. So there are actually two kinds of evolution that can go on in this artificially intelligent audio, as you call it. One is evolution within the piece itself; the other is evolution outside the piece, as the music is passed around and varied.

And it sounds like there are two kinds of generators: your computer and your computer [points toward brain].

Yes, there are two different types of generative processes, actually. One is the one that you can do in the computer, which is what we've largely been talking about, but the other is the one that you are always doing in your mind.

Your mind is always being presented with similar inputs and rearranging them, interpreting them differently. We're using words all the time, for instance, that flex their meaning depending on the rest of the sentence and depending on the context of the whole conversation. We are used to the idea that things have flexible meanings, and I like to co-opt the tendency of the brain to generate meaning and to vary meaning.

Editor's Note: In the spirit of generative technologies, we conclude this conversation here but give the remainder of the interview new life online, where you can follow it much further.