
Max Mathews Full Interview | NAMM.org

The transcription was produced with Notta; proper nouns such as personal and institution names were corrected by hand afterwards, but several errors may still remain.

AI Summary

This transcript presents an extensive interview with a pioneer of computer music and digital sound synthesis. Mathews discusses his years at Bell Laboratories and later at Stanford University, describing the path from early music programming to the development of influential music software. He touches on his early musical experiences, including playing the violin and discovering the emotional power of music during his Navy service. He also details the evolution of his music programs from Music 1 to Music 5, highlighting key innovations such as the block diagram compiler and the wavetable oscillator, and explains the significance of his work on speech coding and digital sound processing at Bell Labs. He further covers the development of real-time performance instruments such as the Radio Baton, the impact of FM synthesis, and the evolution of computer music technology from large mainframes to modern laptops. He also touches on the history and research culture of Bell Labs and on key figures such as John Chowning and Pierre Boulez. Throughout the interview, the importance of understanding human perception in the development of music technology is emphasized.

Chapters

00:00:11 Childhood musical background and education

The speaker talks about his early musical experiences: learning the violin in high school and continuing to play in orchestras and chamber groups. He also describes his Navy service in Seattle, where he discovered the emotional power of music in a listening room stocked with shellac and vinyl records.

00:01:24 Education and early career

The speaker details his educational journey from Nebraska through the Navy to Caltech (the California Institute of Technology) for electrical engineering, and finally to MIT, where he encountered computers and analog computing systems.

00:10:48 Development of the music programs

The speaker explains the evolution of his music programs, starting with the limitations of Music 1, then Music 2's four-voice system and wavetable oscillator, Music 3's block diagram compiler, and finally the FORTRAN implementation of Music 5.

00:26:10 Bell Labs and acoustics research

The speaker gives an overview of his work at Bell Labs, focusing on speech coding research, digital tape technology, and the development of systems for compressing speech and music transmission.

00:51:21 FM synthesis and Stanford University

The speaker discusses the importance of FM synthesis, John Chowning's contributions, the development of the Samson Box at Stanford, and the evolution of music technology from large mainframes to modern laptops.

00:57:47 Instruments and performance technology

The speaker explains how the Radio Baton and the Conductor program developed as live performance instruments, detailing their evolution from mechanical to radio technology.

00:59:19 He mentions that he continues to add features to the Radio Baton system.

00:59:05 The speaker mentions his ongoing collaboration with Tom Oberheim on the Radio Baton project.

01:01:11 The speaker notes that, since retiring, he remains involved at Stanford two days a week.

00:17:50 The speaker emphasizes the importance of musicians understanding the physical parameters of waveforms in order to create digital sound.


Transcript

Interviewer 00:06

Thank you for having a few minutes for me, I do appreciate it. 

Max V. Mathews 00:09

Okay. 

Interviewer 00:11

I think it’s a good place to start, if you don’t mind. It’s just a little bit of background on yourself. And tell me the role of music in your life when you were growing up. 

Max V. Mathews 00:23

Two things. I learned to play the violin, not well, when I was in high school, and I still don’t play it well. And I continued to play the violin with school orchestras and chamber groups, and still do that. It’s a great joy in my life.

Then at the end of the Second World War, I was in the Navy in Seattle, and the good people of Seattle had set up a listening room where you could go and listen mostly to shellac 78 RPM records, but a few vinyl 78 RPM records. And so I realized at that time that music had an emotional and beautiful and pleasurable content, and that also has been a great factor in my life. So those were the two places where I got into music. 

Interviewer 01:22

Now where did you grow up? 

Max V. Mathews 01:24

I grew up in Nebraska, and when I was 17, I guess, I enlisted in the Navy as a radio technician trainee. Now we were called radio technicians, but we were really trained to repair radars, but the word radar was secret at that time. And so I finished school there and then went to Seattle and helped commission a destroyer, and then we shot the guns and shook the boat down and went back to Seattle, and then I was mustered out because the war had ended and VJ Day was over. I met Marjorie in San Francisco at the radar training school on Treasure Island, and we hit it off immediately. So I stayed on the West Coast, went to school at Caltech, and studied electrical engineering there because I was in love with radar circuits. I wish I had studied physics there, but nevertheless it’s a wonderful school.

And then I went on to MIT and got introduced to computers. In those days analog computers were the useful computers, digital computers were still being developed, and I sort of loved these big complicated systems, and so we solved the kinds of problems that analog computers could solve, and that was my schooling. 

Interviewer 03:03

Very interesting. Can you give me a little background on your family? Did your parents also grow up in Nebraska? 

Max V. Mathews 03:11

Yes, my parents were both born there and grew up there. They were both teachers. My father was the principal of the teachers’ training high school in Peru. There was a little teachers’ college there. But what he really enjoyed was teaching the sciences. So he taught physics and biology and chemistry. And he let me play in his laboratory as well as in his workshops. And that was another thing that set the course of my life. I still enjoy working in a workshop and I still enjoy the sciences very much. 

Interviewer 04:00

Very interesting. Well, what were the computers like when you first started getting interested in that? 

Max V. Mathews 04:10

Well, the one computer that we used most was for developing counter missiles, to protect mostly against air attacks at that time. And this was a combination electromechanical and electronic system. So the integrator on the computer was a mechanical integrator, but the other components, the adders and simpler operations, were done electronically. Then operational amplifiers were designed and came along at that time. And so then most of the simple integrations were taken over by the operational amplifier feedback circuit that still does that job. And only complex integrations of fairly nonlinear processes had to be done with the mechanical components.

So the computer itself filled a large room full of relay racks that held both the analog components and the mechanical components. Now, there was a lot of interconnecting that had to be done at a patch field. The question would be, had you done it correctly, would it give the right solution to the original problem? And so we needed check solutions, and you could integrate the solution on a Marchant mechanical multiplying calculator machine. If you had a group of five or ten people, I think in those days it was entirely women, they worked for about a month to calculate one solution, whereas the analog computer, of course, would turn out a solution in a few seconds. So we would get these digital integrations and compare them with the analog result, and then figure out what mistakes we’d made and correct them, and then go ahead and study a lot of different conditions.

When I came to Bell Labs in 1955, I started working and always worked in acoustic research there, and our main job was developing new telephone, well, new speech coders that really would compress the amount of channel that was needed to transmit the speech over expensive things like the transatlantic cable. And in the beginning, people had a number of ideas on how the encoding might work. Pitch period repeating was one of them. Channel vocoder processing was another of them. Formant vocoders were yet a third, and in order to try these things, one had to build a sample model of them, and this was very complicated. The vacuum tubes were the things that we had to design and work with in those days. The transistor had not yet become practical. So it might take several years to design trial equipment, and usually it didn’t work. So then you would go back and do it again. And I thought that, well, I should say that this was just the time that computers were becoming powerful enough to do a digital simulation of many things. And in the case of speech, the essential thing was a way of getting speech into the computer and then getting it back out after you had processed it to see what it sounded like. And the key element that made that possible was not the digital computer itself. You could run the computer for a few days to make a few minutes of speech. But the crucial thing was the digital tape recorder, which could take the output of an analog-to-digital converter at speech rates. 

Max V. Mathews 09:00

In those days, it was 10,000 samples per second. Today it’s 44,000 samples a second for CD music and more for other things. Anyhow, you take this rapid flow of samples coming out and record them on a digital tape that then could be taken to the computer to be the input, a slow input. And the computer would write a digital tape and you could take this back and play it back again at the 10,000 samples per second so you could hear the thing at speech frequencies. And this digital tape-based A-to-D computer input and output was the equipment that we built at Bell Labs that made this possible, and it was a completely successful device for speech research. And most of the modern coders came from this. And now, of course, as you know, digitized speech is not only used for research, it’s the way that almost all information is transmitted. The reason being that digital transmissions are very rugged, and a number is a number, and you can hand it on from one medium to another and from one company to another. And as long as you use the proper error-correcting codes, why, if it goes to Mars and back you’ll still get the correct numbers. So that’s how the world works today.

Interviewer 10:38

Very interesting. Max, when did it first come into your mind that computers and music could be put together? 

Max V. Mathews 10:48

I’ve forgotten the exact date, but it was in 1957, and my boss, or really my boss’s boss, John Pierce, the famous engineer who invented satellite communication, and I were going to a concert. We both liked music as an art. And the concert was by a local pianist who played some compositions by Schnabel and by Schoenberg. And at the intermission, we thought about these, and we thought that Schoenberg was very nice and that Schnabel was very bad, and John said to me, “Max, I bet the computer could do better than this”, and “why don’t you either take a little time off from writing programs for speech compression, or maybe burn the midnight oil, and make a music program”. And as I said at the beginning, I love to play the violin, but I’m just not very good at it, and so I was delighted at the prospect of making an instrument that would be easier to play, at least in a mechanical sense, and I thought the computer would be that. So I went off and wrote my Music 1 program, which actually made sound, but horrible sound, so that you couldn’t really claim it was music. But that led to Music 2 and eventually Music 5, which did make good music. And gradually, I’m not a musician, well, in any sense. I consider myself a creator and an inventor of new musical instruments, computer-based instruments. But my ideas did make an impact on musicians and composers, and I think started, or was one of the startings of, the field of computer music. 

Interviewer 13:05

Absolutely. Tell me about Music 2. I’m sort of curious about that. 

Max V. Mathews 13:10

Well, Music 1 had only one voice and only one wave shape, a triangular wave, an equal slope up and equal slope down. And the reason was that the fastest computer at the time, the IBM 704, was still very slow. And the only thing it could do at all fast was addition. And if you think about it, each sample could be computed from the last sample by simply adding a number to it. So the time was one addition per sample. Well, the only thing the composer had at his disposal was the steepness of the slope, how big the number was. So that would determine how loud the waveform was, and the pitch that you were going to make, and the duration of the note. And so that wasn’t very much, and you didn’t have any polyphony there.

So they asked for a program that could have more voices. And I made one with four voices. And I made one where you could have a controlled wave shape so that you could get different timbres, as much as the wave shape contributes to the timbre. Now, in a computer, calculating a sine wave, or a damped sine wave, or a complicated wave is pretty slow, especially in those days. So I invented the wavetable oscillator, where you would calculate one pitch period of the wave and store it in the computer memory, and then read this out at various pitches, so that this then could be done basically by looking up one location in the computer memory, which is fast. And I also put an amplitude control on the thing by multiplying the wave shape by a number. So this cost a multiplication and a couple of additions. So it was more expensive. By that time, computers had gotten maybe 10 or 100 times as fast as the first computer. So it really was practical. So that was Music 2. And something that most listeners would call music came out of that. And some professional composers used it. But they always wanted more. In particular, they didn’t have any things like a controlled attack and decay, or vibrato, or filtering, or noise, for that matter. So it was a perfectly reasonable request.

But I was unwilling to contemplate even adding these kinds of code, one device at a time, to my music program. So what I consider my really important contribution, that still is important, came in Music 3. And this was what I call a block diagram compiler. And so I would make a block, which was this waveform oscillator. And it would have two inputs. One was the amplitude of the output. And the other was the frequency of the output. And it would have one output. And I would make a mixer block, which could add two things together and mix them. And I made a multiplier block in case you wanted to do simple ring modulation. And I made a noise generator. And essentially, I made a toolkit of these blocks that I gave to the musician, the composer. And he could interconnect them in any way he wanted to make as complex a sound as he wanted. And this was also a note-based system so that you would tell the computer to play a note. 
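
A minimal Python sketch of the wavetable-oscillator idea described above: one period of the wave is precomputed and stored, then read out at different rates to get different pitches, and scaled by an amplitude. The table size, sample rate, and note parameters are illustrative, not taken from Music 2 itself.

```python
import math

TABLE_SIZE = 512
SAMPLE_RATE = 10000  # samples per second, as in the early Bell Labs work

# Precompute one period of the wave (a sine here; Music 2 allowed arbitrary shapes).
wavetable = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def oscillator(frequency_hz, amplitude, duration_s):
    """Read the stored period at a rate proportional to the desired pitch."""
    samples = []
    phase = 0.0
    increment = frequency_hz * TABLE_SIZE / SAMPLE_RATE  # table positions per sample
    for _ in range(int(duration_s * SAMPLE_RATE)):
        samples.append(amplitude * wavetable[int(phase) % TABLE_SIZE])
        phase += increment
    return samples

# One "note": 440 Hz, half amplitude, a quarter of a second.
note = oscillator(440.0, 0.5, 0.25)
```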

Max V. Mathews 17:50

And you would give the parameters that you wanted the computer to read for that note. You almost always specified the pitch and the loudness of the note. But you could have an attack and decay block generator included in this, and you could say how fast you wanted the attack and how long you wanted the decay to last, or you could even make an arbitrary wave shape for the envelope of the sound. And so this really was an enormous hit, and it put the creativity, then, not only for composing the notes, the melodies, or the harmonies that you wanted played, on the musician, on the composer, but it gave him an additional task of creating the timbres that he wanted.

And that was a mixed blessing. He didn’t have the timbres of the violin and the orchestral instruments to call upon that he understood. He had to learn how timbre was related to the physical parameters of the waveform. And that turned out to be an interesting challenge for musicians that some people learn to do beautifully and others will never learn.

The man who really got this started at the beginning was Jean-Claude Risset, a French composer and physicist who came to Bell Labs and worked with me. It was one of my great good luck and pleasures that he was around. And so he made a sound catalog that showed how you could create sounds of various instruments and sounds that were interesting but were definitely not traditional instruments. And that work still goes on. Risset is coming here to give some lectures at Stanford on April 3rd. He’ll be here for the entire spring quarter. 
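
A rough Python sketch of the block-diagram idea: each block is a small generator with inputs and an output, and the composer patches blocks together, for example an attack/decay envelope feeding the amplitude input of an oscillator, mixed with a noise generator. The block names and parameters are illustrative; this is not Music 3’s actual notation.

```python
import math
import random

SAMPLE_RATE = 10000

def envelope(attack_s, decay_s, n_samples):
    """Attack/decay block: ramp up, then ramp back down to zero."""
    a = int(attack_s * SAMPLE_RATE)
    d = max(int(decay_s * SAMPLE_RATE), 1)
    out = []
    for i in range(n_samples):
        if i < a:
            out.append(i / max(a, 1))
        else:
            out.append(max(0.0, 1.0 - (i - a) / d))
    return out

def oscillator(freq_hz, amp_signal):
    """Oscillator block: fixed frequency input, amplitude input driven by a signal."""
    return [amp * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
            for i, amp in enumerate(amp_signal)]

def noise(amp, n_samples):
    """Noise generator block."""
    return [amp * random.uniform(-1, 1) for _ in range(n_samples)]

def mixer(a, b):
    """Mixer block: add two signals sample by sample."""
    return [x + y for x, y in zip(a, b)]

# "Patch": envelope -> oscillator amplitude, then mix in a little noise.
n = SAMPLE_RATE // 2                      # a half-second note
env = envelope(0.05, 0.45, n)
note = mixer(oscillator(440.0, env), noise(0.05, n))
```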

Interviewer 20:03

Hmm, very interesting. 

Max V. Mathews 20:06

But to finish up this series, that got me to Music 3. Along came the best computer that IBM ever produced, the IBM 704, the 7094, excuse me. It was a transistorized computer, it was much faster, and it had quite a long life. They finally stopped supporting it in the mid-1960s, I guess.

I had to write Music 4, simply reprogramming all the stuff I had done for the previous computer, for this new computer, which was a big and not very interesting job. So, when the 7094 was retired, and I had to consider another computer, I wrote Music 5, which is essentially just a rewrite of Music 3 or Music 4, but in a compiler language. FORTRAN was the compiler that was powerful and existed in those days. And so that when the next generation beyond the Music 5 computers came along, the PDP-10 was a good example of a computer that ran well with music, I didn’t have to rewrite anything. I could simply recompile the FORTRAN program, and that’s true today. Now the sort of most direct descendant of Music 5 is a program written by Barry Vercoe, who’s at the Media Lab at MIT, and it’s called Csound, and the C in Csound stands for the C compiler. Now you’re asking about Bell Labs, and many wonderful things came out of Bell Labs, including Unix, and of course Linux, and now the OS X operating system for Macintosh all started at Bell Labs. And the most powerful compiler, and I think the most widely used compiler, was also created at Bell Labs. It was called the C compiler, A and B were its predecessors, and C was so good that people stopped there, and now that’s it for the world. Every computer has to have a C compiler now, whether it’s a big computer or a little tiny DSP chip. So that’s where that came from. 

Interviewer 23:03

Very interesting. You had mentioned, um, the envelope before, and I just wonder, were there other applications for that before the music programs?

Max V. Mathews 23:18

Other applications for what? 

Interviewer 23:22

Well, for the process of, like, the use of envelope and pitch changes and… 

Max V. Mathews 23:29

Ah, well, most of that is specific to music. Now, there are plenty of speech compression programs, and there are also music compression programs. And they make use of many ways of compressing sound. But I think the most interesting and most important today is compression of speech and music that is based on a property of the human ear. And this is called masking. And if you have a loud sound and a very soft sound, the loud sound will make it completely impossible to hear the soft sound. You won’t hear it at all. And in fact, if you have a component in a sound, let’s say a frequency band, which is loud, and the adjacent frequency band is very soft, why, you can’t hear the soft frequency band. So that means, as far as speech coding goes, that you only have to send information to encode the loud things. And you do not have to send any, or very little, information to encode the soft things that are occurring while the loud things are happening. And this is one of the important factors in MP3 and in the speech coders that enable us to send and record and play back good music and good speech with very little bandwidth. It’s how we can send speech over Skype and other devices that send it over the Internet entirely digitally and without an enormous bandwidth. So I’ve forgotten the question that I was answering there, but anyway, this is one of the useful directions that has come out of the acoustic research in the last decades. 
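
A toy Python illustration of the masking principle just described, not the actual MP3 psychoacoustic model: a band whose level falls far enough below a loud neighboring band is treated as inaudible and gets no bits. The band levels and the 30 dB figure are made-up numbers.

```python
# Hypothetical per-band loudness values, in dB.
band_levels_db = [62, 60, 28, 55, 18, 40, 12, 10]

MASKING_DROP_DB = 30  # assume a band >30 dB below a loud neighbor is masked

def audible_bands(levels_db):
    """Return indices of the bands that would actually be encoded."""
    keep = []
    for i, level in enumerate(levels_db):
        neighbors = levels_db[max(0, i - 1):i + 2]     # the band and its immediate neighbors
        if level >= max(neighbors) - MASKING_DROP_DB:  # not buried under a loud neighbor
            keep.append(i)
    return keep

# Only the audible bands would be quantized and transmitted.
print(audible_bands(band_levels_db))
```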

Interviewer 26:01

That’s very interesting. Could you give us a little information, the background on Bell Labs and some of the key players? 

Max V. Mathews 26:10

I can give you information about the most important players there, which were the members of the research department at Bell Labs. AT&T got started, based on a patent of Alexander Graham Bell, as a telephone network, and there was a lot of technology needed to implement Bell’s patent, and so AT&T set up a scientific and technical group, in New York City originally, to do this, and that became a separate sub-company owned by AT&T called Bell Telephone Laboratories. It grew to have a number of different parts, one of which was research, and that was a fairly small part of the company. The major people were in the development areas that took the research ideas and then converted them into products that were then supplied to the telephone companies. Originally, and almost to the end, the research department consisted entirely of PhDs, usually in the fields of physics and mathematics, then gradually some chemical departments were added to this, but a very select group. At that time, the telephone system was a regulated monopoly, so that there was only one telephone company in almost the entire country. That made sense because there was no real reason for having two networks of wires connecting the houses together, and that was a very expensive part of the system. This then became a great source of income, and a very small portion of this income financed the research department. The research department didn’t directly try to do things that would make profits; rather, it tried to do things that were useful in the world of communication. They had a great deal of freedom in deciding what they thought would be useful.

The sort of golden age of research at Bell Labs, at least in my horizon, started with the invention of the transistor to replace vacuum tubes for amplifying signals. This was done by what we call solid-state physicists, physicists who understand how crystal materials interact with electrons, and how you can make amplifiers and get controlled voltages out of these. Then acoustic research was set up to apply the technology and to understand how people, how their ear works, what they need to understand speech, what they need to like speech, and what’s dangerous about sounds, if they’re too loud. The threshold of hearing and basic things about human hearing were part of that group. Now, the golden age of research at Bell Labs was really, well, it started out with the idea that Bell and his associates had that one should support a research group with an adequate amount of money, but it continued with one man, William O. Baker, who was the Vice President of Research. He both maintained the very selective standards of the people in the group, and he guarded the freedom of choice of how they would use the money, what they would do research on, very, very zealously, so that he insisted that AT&T provide him with the money to run the research department without strings attached, and his associates would decide how they would spend this money.

Finally, he kept the size of the research group very limited. When I went to Bell Labs in 1955, there were about 1,000 people in the research department, and Bell Labs was about 10,000. 

Max V. Mathews 32:10

When I left in 1987, there were still about 1,000 people in the research department. The rest of Bell Labs had about 30,000 people, so he insisted that everyone use their resources wisely and not try to grow. This lasted until the Consent Decree in about 1980, which broke up the Bell System into seven operating areas, separate companies, and a company called AT&T, which would contain the Bell Labs, the research part, and also the Western Electric, which was the manufacturing arm that would provide telephone equipment to the operating companies, as it always had. But it opened the whole thing to competition, and also by that time digital transmission was coming in. This is in contrast to analog transmission of sound, which is very fragile, and if you want to send a conversation from San Francisco to New York or to Paris by analog, that means you really have to send it over carefully controlled analog equipment, which really means all the equipment needs to be run by one company. But when digital things came along, then you could pass the digits on between many, many companies in many, many ways. So essentially, the Telephone Research Lab no longer had the support that it did with this controlled monopoly, and so it was no longer possible really to support this group. It’s expensive even to run a thousand people. The budget was something like $200 million a year. So that’s my view of research in that part of Bell Labs. It was a wonderful time. There was, of course, in the Second World War and afterwards, a strong military research and development group at Bell Labs, and things like the Nike anti-aircraft missile were developed there and many other things. Underwater sound was also another branch of the military research. I think the military research actually still goes on. Bell Labs eventually split up and became Lucent, which is the name you probably know it by. And now it’s amalgamated with the French company Alcatel, so it’s Alcatel-Lucent. And it’s no longer limited to working in the field of communications as the original AT&T was. As a monopoly, it could not work in just any field. It was allowed to work in the movie field, though, and developed sound techniques for movie film in the 1920s. 

Interviewer 36:26

Was it still in New York when you joined them? 

Max V. Mathews 36:29

No, it had moved, well, they still had the West Street Laboratories in New York, although they subsequently closed them maybe in 1960. But its central office was in New Jersey, Murray Hill, New Jersey, about 30 miles west of New York City, which could communicate with New York City easily on the train.

And AT&T’s headquarters at that time was still in New York City. And then it had other facilities in New Jersey, primarily at Holmdel, which was about 50 miles south of Murray Hill, and Whippany, which was about 10 miles north. But it had other laboratories connected more with products near Chicago and Indiana and became more diversified, which was a problem. 

Interviewer 37:35

How so? 

Max V. Mathews 37:36

Oh, just the fact that it’s a lot easier to think of something new by going to lunch with your friends and talking with them than it is to call them up over telephone in Chicago from Murray Hill. 

Interviewer 37:59

Do you think, based on what you were doing and what others were doing at Bell Labs, that it is correct to say that what Bob Moog and Don Buchla were doing were the first in their fields for synthesized music? 

Max V. Mathews 38:22

Well, saying what’s first is always problematic, and I don’t much try to speculate there. The thing that was interesting was that Moog and Buchla and myself, all three of us, developed what I called a block diagram compiler. A compiler is not the right word. In the case of Buchla and Moog, they were modular synthesizers, so you could have a bunch of modules and plug them together with patch cords, so that a musician, the user, could plug them together in any way he wanted. They were analog modules, and I made the digital equivalent of most of those, or they made the analog equivalent of mine, the oscillator, of course, and the attack and decay generators and the filters and the mixers and things like that. The computer had at least the initial advantage that the computer memory could also contain the score of the music, and in the early Moog things it was harder to put the score into an analog device. They did gradually introduce what they called sequencers, which is a form of score, but it never became as general as what you could do with a digital computer, and it never became as general as what you can do with MIDI files.

And do you know what the difference is between a MIDI file and MIDI commands? Well, a MIDI command has no execution time attached to it per se. It’s just a command that lets you turn on a note in some synthesizer from some other keyboard that sends a standard command, the MIDI command, to the synthesizer.

And this was an enormous advance for analog equipment, or combination digital-analog equipment, because the MIDI file itself is digital. But it was an enormous communication standard, very reluctantly entered into by the big companies. Yamaha, I don’t think, was at the beginning of this. It was Dave Smith that, I’ve forgotten the name of his company. 

Interviewer 41:14

Sequential Circuits? 

Max V. Mathews 41:14

Sequential Circuits, and Roland, and one other company that were the initiators of the MIDI commands. Then people figured out that if you put a sequence of these commands into a computer that would play them one after the other, and if you put a time code in that said when to play them, or really the delta time, how long it is between playing one command and playing the next command, then you could encode a complete piece of music as a MIDI file, and so this was another really great breakthrough that Smith and Roland and this other company did. 
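
A minimal Python sketch of the MIDI-file idea described above: each command is stored with a delta time, the number of ticks since the previous event, so a list of commands plus delta times encodes a whole piece. This writes raw track events only, not a complete Standard MIDI File with header and track chunks.

```python
def encode_delta(ticks):
    """Variable-length quantity used by MIDI files: 7 bits per byte,
    high bit set on every byte except the last."""
    out = [ticks & 0x7F]
    ticks >>= 7
    while ticks:
        out.append((ticks & 0x7F) | 0x80)
        ticks >>= 7
    return bytes(reversed(out))

def note_on(channel, key, velocity):
    return bytes([0x90 | channel, key, velocity])

def note_off(channel, key, velocity=0):
    return bytes([0x80 | channel, key, velocity])

# A two-note "score": (delta ticks, event). 480 ticks here stands for a quarter note.
events = [
    (0,   note_on(0, 60, 100)),   # middle C down, immediately
    (480, note_off(0, 60)),       # released one quarter note later
    (0,   note_on(0, 64, 100)),   # E down at the same instant
    (480, note_off(0, 64)),
]

track_bytes = b"".join(encode_delta(dt) + ev for dt, ev in events)
```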

Interviewer 42:06

Yeah, absolutely. What role, if any, did musique concrète play in the evolution of all of this? 

Max V. Mathews 42:16

Um… Oh, musique concrète started before all this came along, and the technology used was the tape recorder technology, and changing the speed of tapes and making tape loops, which play something repetitiously, and being able to splice snippets of tape with various sounds on them, so you could make a composition, for example, by splicing the tapes of various pitches, and that was a very successful and a very tedious operation, and one of the things that I tried to do was to make the computer do the tedious part of it, which it does very well, and make the composer think more about the expressive part. Now people argue a lot about musique concrète, and what was Stockhausen’s alternate thing where he generated all sounds, not by recording real sources, but by using oscillators, I think. I’ve forgotten the name for that, but anyway, that now, I think, is an absolutely meaningless argument, because digitized sound is so universal that the sources of the sound can either come from nature, from recordings of instruments, sampled things, or they can be synthesized, and you can use FM techniques, or additive synthesis, or a myriad of other ways of making your sound. So I don’t really think it’s worth hashing over this very old conflict, and I guess Pierre Schaeffer died a number of years ago. 

Interviewer 44:50

Yeah. 

Max V. Mathews 44:51

Stockhausen is still around. Chowning’s FM synthesis really started out as a purely synthesized sound with no recording of natural sounds being involved.But now most synthesizers use samples. They process these samples in ways, including FM ways, to get the timbre that the person wants. 

Interviewer 45:21

And did you know John before… 

Max V. Mathews 45:26

John was studying as a grad student at Stanford, and he, and Risset too, read a paper I wrote in Science magazine about the Music 3 program, and he came back to Bell Labs and spent a day with me, and he was very bright, and he understood what I was doing instantly, and he went back to Stanford and wrote his own music program, and then he tied up with the artificial intelligence laboratory that John McCarthy had set up at Stanford, and they had a very good computer, a DEC PDP-10, which in my mind was by far the best computer that existed in those days. So John could, at night when the AI people were home sleeping, he could use the computer for making music with these programs, and so he made wonderful music, and he, well, one of the things that Risset found was that in order to be interesting, the spectrum of a sound has to change over the duration of a note, and if the spectrum is constant over the note, why, your ear very rapidly gets tired of the sound and doesn’t think it’s beautiful or charming, and so Risset used additive synthesis with a lot of oscillators, changing their amplitudes, their outputs, to make a changeable spectrum, and he could make very good instrumental sounds and other sounds this way, but it was very expensive, and John found that by using frequency modulation in a way that it had never been used for communication purposes, he could also make the spectrum change over notes and do similar things to what Risset did with additive synthesis, and this was much more efficient.

It took less computer power to do that, and he also, John was a very good salesman. He persuaded the Yamaha company to design a chip to do FM synthesis, and this was the Yamaha DX7, and sort of overnight that brought down the price of an entry-level system that could make interesting music from a PDP-11 computer costing about $2,000, and of course that increased the number of people who were using this from, I don’t know, maybe a ratio of a thousand to one increase from the decrease in the cost. So anyway, as I say, John visited me in the early 60s, and then he went back and did his thing at Stanford, and Risset spent several years at Bell Labs in the 60s, and then he went back to France, and gradually got a digital system going there, and persuaded Pierre Boulez that, or maybe Boulez persuaded himself, that there should be a computer part of the IRCAM laboratory that Boulez had talked Pompidou into supporting in France, and Risset was put in charge of that laboratory. Risset persuaded me and Boulez that I should spend some time there. I continued to work at Bell Labs, helping set up IRCAM. 
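
A bare-bones Python sketch of two-operator FM in the spirit of the technique described above: a modulating oscillator varies the phase of a carrier, and sweeping the modulation index over the note makes the spectrum change in time. The frequency ratio, peak index, and envelope are illustrative choices, not Chowning’s or the DX7’s actual settings.

```python
import math

SAMPLE_RATE = 44100

def fm_note(carrier_hz=440.0, ratio=2.0, peak_index=5.0, duration_s=1.0):
    """Simple FM voice: carrier phase-modulated by one modulator."""
    n = int(duration_s * SAMPLE_RATE)
    mod_hz = carrier_hz * ratio
    out = []
    for i in range(n):
        t = i / SAMPLE_RATE
        index = peak_index * (1.0 - i / n)                      # index decays over the note,
        modulator = index * math.sin(2 * math.pi * mod_hz * t)  # so the spectrum evolves
        out.append(math.sin(2 * math.pi * carrier_hz * t + modulator))
    return out

note = fm_note()
```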

Interviewer 49:41

Hm. 

Max V. Mathews 49:41

I was the first scientific director there. It was a very interesting job. 

Interviewer 49:52

What sort of things made it so interesting for you there? 

Max V. Mathews 49:56

Oh, the excitement of working in Paris, trying to learn how to speak a little French. Getting a system going with a PDP-10 computer, which the French had enough money to buy, and getting the analog-to-digital and digital-to-analog parts on it. Using them, they had some very good studio rooms so that you could do good psychoacoustic research. You need a nice quiet room to listen to things in, and IRCAM had that.

The rooms were connected to the computer so you could make good test sounds to evaluate. Working with Risset and Gerald Bennett, who I still work with very much. David Wessel, of course, came over there. It’s about a decade or two. Working with the musicians there and the technical people. It was an exciting time in my life. 

Interviewer 51:09

Going back to John for just a second. From your perspective, what was the importance of FM synthesis? 

Max V. Mathews 51:21

Well, the importance was that you could make good music with it. That also led to the Samson Box, which could do real-time FM synthesis, as could the DX7, but more powerful synthesis. And so the Samson Box was designed and built here, I guess, in the Bay Area by Peter Samson. And for about a decade, it had a monopoly on the rapid and efficient synthesis of really powerful music, a monopoly at John’s CCRMA Laboratory. And so just an enormous string of very excellent music came out of that, and good musicians from all over were attracted to CCRMA because of that machine. Now, you could make this same music, but much more slowly, on a PDP-10 by itself, but the Samson Box made a second of music in a second of time. That was real time. It was intended to be used for live performance of computer music. That was the intention, and it could have done that, but it really was never capitalized on because, A, you had to have a PDP-10 to drive the Samson Box, and B, you had to have the Samson Box, which was about the size of a big refrigerator. And so it really wasn’t practical to take this on the stage where you have to do a performance. And so it produced essentially tape music, but rich tape music. The lifetime of the Samson Box was really ended by the advent of laptop computers, and the laptop computers getting so powerful that they now can do what the Samson Box could do, ten times faster than the Samson Box. Either the Macintosh or the PC that I have can do that. They, of course, surpassed the PDP-10, so the power of computers that you can carry around in your briefcase is greater than musicians know how to utilize. The musical world is no longer limited by the technology and what it can do. Instead, it’s very much limited by our understanding of the human ear and the human brain and what people want to hear as music, what excites them, what makes them think it’s beautiful. And that’s the continuing forefront of research and future development for music entirely. 

Interviewer 55:00

What exactly is an oscillator, and were the oscillators that Theremin used the same oscillators that were used in the early days of Bell Labs? Can you talk a little bit about that? 

Max V. Mathews 55:16

Yeah, they were the same oscillators. They were based on the vacuum tube, the triode that De Forest, and maybe others, invented. And that made it possible to make radios and do things.

And Theremin’s work came along very shortly after the vacuum tube came along, and long-distance telephony essentially had to use vacuum tubes. 

Interviewer 55:48

What made Theremin’s use of the oscillator so unique, do you think? 

Max V. Mathews 55:55

Oh, he found that if you had a somewhat unstable oscillator, you could influence the pitch of the oscillator by moving your hand in the electric field produced by the oscillator and an antenna attached to the oscillator. And so this was a way of controlling the pitch. And he also used the same technique for controlling the loudness of the sound. So that was his real contribution. 

Interviewer 56:34

Did you ever have a chance to meet him? 

Max V. Mathews 56:35

Oh yeah, he came over with one of his daughters, I think, to Stanford and gave a lecture and a concert. I played with the daughter.

She played the theremin and Rachmaninoff’s Vocalise, and she did the Vocalise part, which the theremin is good for. I did the orchestral accompaniment on one of my instruments, the radio baton. 

Interviewer 57:13

Very interesting. What sort of guy did you find him to be? 

Max V. Mathews 57:18

Oh, he, at the age of 90, could out-drink and out-stay me in the evening, and I stayed around until midnight, and then I went home and collapsed. Yeah, I think he was a universal man, a citizen of the world. 

Interviewer 57:37

You mentioned a music baton, which is something I wanted to just briefly talk about. You had several instruments that you really helped design. Was that the first? 

Max V. Mathews 57:47

Well, Music 1 was the first, and then I got interested in real-time performance. The radio baton and the conductor program were intended as a live performance instrument.

The conductor program supplied the performer with a virtual orchestra, and the performer was a conductor, not an instrument player, or at least a simulated thing. So he would beat time using one baton in one hand, as a conductor does, and the conductor program would follow his beat. He could speed up or slow down. Then he would use the other hand to provide expression to the music, the loudness or the timbre, and both of these batons could be moved in three-dimensional space and could send XYZ information to the computer. That’s where the radio part came in, to track the batons. 

Interviewer 58:55

Interesting. How many of those were made? 

Max V. Mathews 58:59

Oh, about, they’re still being made, about 50 of them. 

Interviewer 59:05

Is there any part of that that you wish you could have added a feature or something to that didn’t get worked in right away? 

Max V. Mathews 59:19

I’m still adding features to them. Originally it was a mechanical drum that you had to actually hit for it to sense anything, but it would sense where you hit it. Then it became a radio device. The radio technology was designed by a friend from Bell Labs named Bob Boie. He’s retired and lives in Vermont now. Anyway, this meant you didn’t have to touch anything. You could wave these things in three-dimensional space, and that was nice, a great freedom. Originally, you had to have wires attached to the batons to power the little transmitters that were in the ends of the batons. The latest model is wireless, and Tom Oberheim helped me design and build the batons. We still work together, and I went to breakfast with him before I came here. He and I together made the radio baton version of it, the cordless radio baton. So that is my main live performance instrument, and I love live performance. I think performing music and playing with other people is one of the real joys of life. Chamber music is wonderful. 

Interviewer 01:00:56

Well said. And just because I don’t want to insult the enormous contribution you did at Stanford, I just wanted to acknowledge that and ask you, was that a good run for you? 

Max V. Mathews 01:01:11

I still go down there a couple of days a week, even though I officially retired last September. But yes, I’ve enjoyed working with John, for example, and Bill Schottstaedt, and many of the other people at CCRMA. It’s a great group. A very, again, a very free group where people aren’t told what to do. They have to figure out what they want to do.