Apollo Guidance Computer Project - MIT Conference Series, Part 2


[MUSIC PLAYING]

GEROVITCH: So I think everybody is here, so I thought maybe we could start. My name is Slava Gerovitch. And my email is on the board, which conveniently has my first name on it.

Professor David [? Mandel, ?] who is the principal investigator on this project, is speaking today on National Public Radio's Science Friday program. So I'm not sure if he'll be here. [? Sandy ?] [? Brown, ?] who is a research assistant, and who has been in contact with all of you, is currently in New Zealand. And he couldn't make it back because of the recent tragic events. So I think these events kind of make us think that maybe we should appreciate every day of our lives that we have, and try to do as much work as we can today.

I will say a few words about the overall project for the sake of Jim Miller and [? Eric ?] [? Taylor, ?] who were not here at the first meeting. The project is called History of Recent Science and Technology on the Web, and its goal is to develop some tools that would enable veteran scientists and engineers to participate in writing the history of their particular disciplines or projects. So our role as historians here is just to be your helpers, to offer you some tools and to listen to what you would like to have in terms of support and logistics that would allow you to contribute to writing history.

There are different formats for using the web in this way. Some people prefer using web forums or email lists, things like that. David [? Mandel ?] has suggested that we would have these live meetings, where people would exchange their memories and discuss some of the historical issues. Then we would have these meetings transcribed and put transcripts on the web, where everybody could read them and comment on them.

We also are planning to put-- or actually are already doing it-- to put some archival documents up on the web, where you could read them and, again, write comments on particular documents, or on parts of them, which will also be posted on the web. So people after that could come and see your comments and the original document, and maybe write some other comments.

So these are the kind of things that we're thinking about. If you would think that some other forms, like a discussion list, would be helpful, please let us know, and we'll try that. I don't know if any of you took a look at the website, which is hrst.mit.edu. That stands for history of recent science and technology. It doesn't have any www.

So this is the overall project, which includes five subprojects, one of which is the Apollo guidance computer; one is molecular evolution theory, one is bioinformatics, and there are a couple of others. In different fields, people try different approaches. We'll see what works in our case.

So far, we've had one meeting last month. And we already have a transcript of it. And as far as I know, it's been circulated somewhat, and people wrote some corrections. And we are planning to put it up on the web shortly, within the next couple of weeks, where you will all be able to see it and comment on it.

I should mention that this website is a kind of working tool. So it has two layers on it. One is public, which everybody can see, and one is a working area that only members of our group can see. So when you put your comment on the web, only members of our group will be able to see it at the moment, because we would like to have some kind of place where we would be able to discuss our views freely and have some debate. And then we would decide to what extent to open it up to the public.

So at the first meeting-- I'm just saying this for the sake of Jim Miller and [? Eric ?] [? Taylor, ?] who were not there-- we generally had an introduction, where people talked a little bit about their role in the development of software for the Apollo guidance computer. And some of the general topics came up. What I would like generally to do is to focus on a few important issues in this project that really made it so salient and significant, issues that, first of all, allowed that enormous enterprise to be accomplished successfully.

So as Fred Martin, I think, put it last time: if we had to do this program today, in the manner in which programs are done today, I don't think the program could ever have been done. With the amount of paper generated, the meetings, the reviews, the number of people involved, I don't think the program could ever have gotten off the ground. So that's a very interesting comment, which suggests that the way the project was done back in the '60s was kind of unique. And we've been trying to recapture that uniqueness. And for that reason, your own reminiscences as participants are invaluable. That's something that documents cannot tell us.

So I would ask you to bring up, as much as you can, these elements of uniqueness-- whatever distinguished that particular project, and your work on it, from other things. If you could compare it with things you encountered at other times and in other places and organizations, and draw explicit comparisons, that would also help us determine which distinguishing features were particularly important.

Another thing that we would like to do is to avoid writing a kind of linear history, where basically all you can see is just a line of decisions: first there is this, then that, then this, then that. [? And then it's ?] [? success. ?] There were always points where different decisions could have been made. There were debates; there were, you know, some roads not taken. At some places, they had to go back and take the other road, and then go back again. So we'd like to recreate that rather complex road map instead of just a sequence of solutions that eventually found their way into the final product.

So we'd like to have that sense of debate, of controversy. What exactly were the points of contention at different stages of the project? Which design decisions or organizational decisions were particularly important and debated at the time? That way the history would be alive, rather than a sequence of facts.

A few topics came up in our discussion last time. One was the issue of the degree to which automatic and manual systems must be combined in the guidance of a spacecraft. And as far as I can see in that discussion, mostly the issue of trust in the electronic equipment came up. And the astronauts didn't really trust it.

So the automatic side had to be justified somehow. While in the Soviet program, curiously-- and I should mention that I am actually working on the Russian side of this story-- in the Soviet program, the issue was to what extent we can trust the humans, the opposite issue. So somehow, the Soviets had a greater belief in automatic systems, in their reliability, and they thought that the humans were the weakest link.

So I'd like to kind of frame that issue a little bit around this question. Maybe these issues came up in your project, as well, but you simply didn't mention it. So that would be interesting. Another thing is man-machine interface, to what extent the astronauts had an input into the way they wanted to interact with the computer, to what extent computer designers had their say.

The role of warning systems: to what extent was the computer allowed to let the astronaut know that something the astronaut was doing was not really right? The issue of memory constraints and speed constraints that really put a limit on those kinds of backup and warning systems. Also, the series of implementation meetings was mentioned, and the issue of disciplining the software effort, introducing all sorts of measures that streamlined the process and made it manageable from the NASA point of view. And I would like-- maybe you could say a few words about the way that was done, how it was accepted, what consequences it had, in a little bit more detail.

And lastly, the question of openness of the whole project. Again, there is a startling contrast with the Soviet program, which, as you know, was all classified and run under the military umbrella, while the Apollo project was open. And some of the participants in the last meeting stressed that it had a very important role. And I really want to push you a little bit on that, and see whether this openness actually maybe posed some problems for you, in terms of interacting with military people. Maybe military people didn't want to tell you something because they knew that your project was open, or-- I'm just speculating about that, maybe it wasn't a problem at all. So maybe you could say a few words about that.

So today's meeting, I would suggest to start by asking Jim Miller and [? Eric ?] [? Taylor, ?] who were not here the first time--

PRESENTER 7: [? I ?] [? was ?] [? here. ?] [INAUDIBLE] Alex [INAUDIBLE].

GEROVITCH: Oh, Alex [INAUDIBLE], sorry. To introduce themselves and to talk a little bit about their role in the Apollo guidance computer story. And then we would go around the table and have people talk about the issues. Start with Alex?

PRESENTER: OK. I think I started the-- I joined the lab in '63.

PRESENTER: For the second.

PRESENTER: Hm?

PRESENTER: For the second time.

PRESENTER 5: For the second time. Dan was one who winkled me out of England to come over at the start of the Apollo program. I came as a physics graduate. I knew nothing about computers, analog or digital. I knew nothing about programming.

PRESENTER: I knew that.

[LAUGHTER]

PRESENTER 5: And one of my observations actually has to do with that, because one of the successes here-- I mean, I know when Margaret joined, she didn't know anything about inertial navigation systems-- I hope I'm not putting words in your mouth-- and yet, we grew this knowledge to the extent that we needed to understand it, to write the programs that controlled it. To me, that was always amazing: that so few people, relatively inexperienced in the subject we were dealing with-- they were just smart people-- managed to acquire enough information from the vast array of resources around us in the lab to not be blocked by a wall of ignorance.

"I can't move because I don't know"? You went and found out. You talked to people. And there weren't very many people that you needed to do that with. To me, one of the strengths of that program was the fewness of the people working on it. And we always came to the conclusion-- Dan and I talked about this in the early days-- that the optimum size of a software group producing complicated software like this was about five or six, because during some of the stressful periods when we were really churning stuff out, there were little knots of five or six people that really put the foundations to this.

One of my roles, when I began to feel my way around, that I can remember-- and my memory is really starting to come apart at the edges-- was to integrate some of these areas. There was a launch area, there was a reentry area [? on the dam, ?] there was a powered flight area, each investigating the technical problems in its domain. I had to stick these together in order to achieve our first guided flight-- which I still have pictures and [? stuff of, ?] and which we knew as Apollo [? 202-- ?] to manufacture the glue, and just make stuff up, that integrated the astronauts' actions and the powered flight equations, and tracked the launch, which we didn't control, so that by the time we took over, we knew where we were and how fast we were going [INAUDIBLE] when we took over. Then the midcourse stuff, which was a whole other ball game, which I knew absolutely nothing of. And even at the end of the program, I really didn't understand it very well.

So my role was to knit some of these pieces together. And my memory is dim, but it was a heck of a lot of hard work. Long hours. Stuff didn't work. Stuff we made up as we went along didn't work. But in the end, it all did work. If I remember rightly, that first flight landed remarkably close to its objective in the Pacific.

PRESENTER: Being late for lunch, because I wouldn't let you go when you were dying of hunger. You remember a few of those, right?

PRESENTER 5: I can ramble on, but I think to me the amazing thing about this, looking back, was the quality of the people, the [INAUDIBLE] of the people, the immense support that you got from all the resources that were arrayed around you. The openness, I guess, is what you were referring to before. We ourselves, at least that's what I felt, were the only blocks, because of ignorance. But we overcame that as we went along, and we made it work.

I'll just shut up at that point. I'd like to hear what some of the other people [INAUDIBLE].

PRESENTER: You'll get another chance.

GEROVITCH: All right. [? Jim? ?]

PRESENTER 1: Not sure exactly how much of what to cover, but let me cover some things briefly about how I got there and what I did. I evaded the draft during the Korean War by going off to graduate school and staying in college. Got a master's degree in electrical engineering. And that was as long as I could postpone being in the service.

And I entered the service in 1956, the Air Force, and was assigned to a place called Cape "Cana-vare-al," or something I had never heard of. John [? Daley, ?] who read the news on TV, hadn't heard of it either, and that's how he pronounced it. But within a week or two of being there, I was out at the Cape, and watched the Redstone launched at night. And I was completely hooked into the idea of rocketry and guiding the things.

And I stayed in the service a couple of years, and came up to MIT to get a doctorate in a program-- Doc Draper's-- called instrumentation, which was a combination of academic disciplines, multi-departmental. I started writing code for computers in '57 or something. We had a digital computer down there.

And eventually, I got into programming it for whatever reason. And writing my doctoral thesis, I got into both celestial mechanics-- because I wrote on guidance of a one-pound thrust rocket vehicle from Earth orbit to lunar orbit. 1,000-pound vehicle with a one-pound engine, if I recall. It takes a long time to build up the energy you need to get up to the moon. But in the process, I learned a lot about celestial mechanics and trajectories, and about writing code for computers.

And I really have to say that the computer part of it appealed to me more than any of the other parts, although I liked celestial mechanics. But when I finished my work, Doc Draper had, through his incredible influence, landed a sole-source contract at the laboratory-- instrumentation lab, where I was based-- to do the Apollo guidance system in late '61.

And with Bob [? Stern, ?] Dick [? Batten, ?] and Hal Laning, we were a group of what I think was called Advanced Guidance or something to work the Apollo problem. There were other people in the [? Polaris ?] side that were busy, as well. But we started without really having any contact with those folks.

I worked on trajectories and trying to figure out abort trajectories and so forth. But pretty soon-- maybe that was later, but Hal Laning had written an executive and an interpreter for a computer, which was being proposed to be the onboard computer for Apollo, which was a computer that had originally been intended to fly to Mars and take one photograph, and come back and drop it somewhere where it could be retrieved. It never happened, but that computer, which was a 15-bit computer, of all things, with three bits for op code and 12 bits for address, was very nicely designed. It had eight op codes and 4K of memory, of which I think 500 words were erasable. The rest was fixed memory. These folks know more about that than I do.
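As an aside, the word layout described here-- a 15-bit word split into a 3-bit op code and a 12-bit address-- can be sketched in a few lines. This is a modern illustration only, not actual AGC code, and the function names are mine:

```python
# Illustrative sketch of a 15-bit instruction word: 3 bits of op code,
# 12 bits of address (hypothetical helper names, not historical ones).
OP_BITS = 3
ADDR_BITS = 12

def pack(op, addr):
    """Pack a 3-bit op code and a 12-bit address into one 15-bit word."""
    assert 0 <= op < (1 << OP_BITS)
    assert 0 <= addr < (1 << ADDR_BITS)
    return (op << ADDR_BITS) | addr

def unpack(word):
    """Split a 15-bit word back into (op, addr)."""
    return word >> ADDR_BITS, word & ((1 << ADDR_BITS) - 1)

word = pack(0b101, 0o4321)          # op code 5, octal address 4321
assert unpack(word) == (0b101, 0o4321)
# A 12-bit address reaches exactly 4K words -- hence the 4K memory.
assert (1 << ADDR_BITS) == 4096
```

Three bits give the eight op codes mentioned above, and twelve bits of address reach exactly the 4K of memory.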

But Hal had written this executive that dispatched tasks on a priority basis, and an interpreter, which was something I'd never even thought of or heard of. Hal was probably not the first to do that, but a real pioneer in a lot of ways. But he had written its code, and I started reading it and trying to make it better, which was difficult.
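A priority-dispatching executive of the kind described here can be sketched in miniature. This is only a modern illustration of the idea-- it reflects nothing of Hal Laning's actual design, and all names are hypothetical:

```python
# Minimal sketch of a priority-dispatched executive: jobs are queued with a
# priority, and the highest-priority pending job always runs next.
import heapq

class Executive:
    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker so equal priorities run in FIFO order

    def schedule(self, priority, job):
        # heapq is a min-heap, so negate: larger number = more urgent.
        heapq.heappush(self._queue, (-priority, self._seq, job))
        self._seq += 1

    def run(self):
        results = []
        while self._queue:
            _, _, job = heapq.heappop(self._queue)
            results.append(job())
        return results

ex = Executive()
ex.schedule(10, lambda: "servicing telemetry")
ex.schedule(30, lambda: "running guidance equations")
ex.schedule(20, lambda: "updating display")
assert ex.run() == ["running guidance equations",
                    "updating display",
                    "servicing telemetry"]
```

The point of such a dispatcher is that urgent work preempts the queue order in which jobs arrived, which is what makes it useful under real-time constraints.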

But it occurred to me that one of the things that we were going to need is something that simulated that computer. Because we didn't have a computer to run, and if we had, it wouldn't have been designed for debugging and so forth. So I got, eventually, into the game of writing a computer simulator for the Apollo-- for what we called then the Mars computer. And that developed, over time-- and I sort of skip around a lot-- but over time, into a full simulation of the spacecraft, [? the ?] guidance computer, all of the hardware that interacted directly with the computer, or that controlled the spacecraft, and even with the crew, including where the fixed stars were, and all the sextant operations and so forth. And ended up in the library developing a star catalog, which as far as I know, was the one that went to the moon.

Well, I don't really know. I lost track of it. But they don't change very fast, so it probably stayed the same. And the group developed into, I don't know, half a dozen people or so on different aspects of writing the simulation. And we were racing to get it ready in time for Alex's Tool [? Two ?] software. And I say Alex's because he was in charge of it, and bore the brunt of all the stress. Other folks just took life easy, and--

PRESENTER 6: Like Margaret.

PRESENTER 1: Margaret and Dan, and [INAUDIBLE]. But trying to keep up with their needs to do the simulation. We had, at that time, moved from 650 powerhouse [? of the ?] instrumentation lab up to a Honeywell machine, which was designed to do insurance programming, I think, and had lots of tape drives. And fun machine, but wasn't quite fast enough, so we eventually got a faster one, and then we moved onto IBM machines.

PRESENTER 5: Is that the one with [INAUDIBLE] on it?

[INAUDIBLE]

PRESENTER 1: I remember one day, we had two [? 370 ?] 75s, just about the world's fastest commercially-available machines, other than maybe some other-- [? I'd seen ?] more [? crazy ?] machines. And we had fixed it up so each one had two big 2314 arrays. The 2314 was a big disk drive, which had about the footprint of this table. It had eight multi-platter disks in it. Nine, actually, because one would fail or something-- I guess it was for loading up another one while the others were in use.

And as I recall, we had two of those connected to each machine. And because of cable restrictions and distance of the machines, we had a fifth one that was halfway between the two machines, with a cable running from each, so we had a shared disk. And I remember standing there thinking, we have a gigabyte of storage on this machine-- well, almost-- between these machines. And I thought that was-- I mean, it was, at the time, absolutely mind-boggling. Not nearly as mind-boggling as what's happened since then.

[LAUGHTER]

PRESENTER: You should point out, a megabyte on the 75 cost about $1 million. [INAUDIBLE]

PRESENTER 1: Well, the machine that I ordered-- the 75 that I initially ordered-- had half a megabyte of memory. And little did I know-- and it had four 2311s, because the 2314s weren't available. I've forgotten how small the 2311s were.

PRESENTER 5: I [? think ?] the 28 megabytes.

PRESENTER 1: They were small.

PRESENTER 5: Yeah.

PRESENTER 1: And the operating system used up, I think, all of the space, and all of the memory, leaving no room for anything else. But memory was so expensive. We just-- well, we had to buy-- we went up to a whole megabyte. I remember, living in the Netherlands, I bought a megabyte of memory for the AT that I had, and rode home from the electronics store on my bicycle with a megabyte of memory in my pocket. And I thought, gee, things have really progressed. Now the 60-gigabyte disk that I have on my machine at home fits in my shirt pocket. I just can't believe what's happened.

And it is very difficult for me, having even been there-- and for people who weren't there, I think, almost impossible-- to imagine the difficulties faced by people dealing with a machine with a 15-bit word and lots of restrictions upon expanding it. Even though the technology was available, the reliability requirements prevented much happening that would take advantage of what could have been done. Trying to squeeze enough software to fly to the moon and back into what became, I think, 36 K-words of fixed memory and 1K of erasable memory.

PRESENTER 6: 2K.

PRESENTER 1: 2K, yeah, and 1K accessible. Right. And squeezing that out of 3-bit op code and 12-bit address took some funny shenanigans, which caused a lot of debugging activity. But I don't think anybody can imagine the kinds of problems that were faced trying to squeeze a huge amount of software into that tiny machine, which really was small.

After getting the simulation group going-- and there are lots of war stories about that, probably of little interest to most and known by the folks here-- I foolishly accepted the responsibility of becoming the leader of the mission program for the first lunar module. This was after the Apollo fire, and long hiatus of flights during which time everybody moved to the block two stuff.

And I remember, for a long, long time, it seemed that Alex was sweating and working on the [? 202 ?] software. The 204 software never flew. It was going to fly on that [INAUDIBLE] on which the three astronauts died on the [? pad. ?] But I went to Alex, and I don't know if Alex remembers this, but I remember it. I've told a lot of people.

I said, Alex, I don't know anything about the mission software. I've been doing the simulation and trajectories, and I know all that stuff. Cool. But I don't know anything about the mission program. What do I do? What did you do?

He says, well, anytime somebody comes to you and asks you what they should do, make something up. And I said, you're kidding. He said, absolutely not. He said, if they come to ask you, they don't know, and they know better than anybody else. So if they don't know and you don't know, you got to make something up.

Well, he was exactly right. It worked. You just had to not get stuck on something. You just had to take a path. And not all of the things that either of us made up-- certainly in my case-- worked. But if they didn't work, you found out about it. And then somebody would come back and say, this didn't work. What should I do? And you'd make up something else. And that usually worked, because then you knew. It was seat of the pants a lot of the way. And that's probably a good place for me to shut up for now.

[INAUDIBLE]

PRESENTER: [INAUDIBLE] about 9:15.

[INAUDIBLE]

PRESENTER 2: Well, I got involved in the Apollo project by way of a backdoor. I was working for Raytheon, and attached to the instrumentation lab on the Polaris project as a resident. And shortly after that petered out a little bit, I became reattached as a resident on the Apollo program in the computer group.

And when I-- just putting things in context-- I was all of three years out of school. So not what you'd call tremendously experienced in computer design. But I got into the computer group, and was doing work with Ramon and Dr. Hopkins and Dave [? Shanksy ?] to go ahead with the Mars computer.

And then, shortly after arriving, the concept of the integrated circuit arose. I remember a meeting in Eldon Hall's office, where we all gathered around and tried to determine which direction the group ought to go as far as implementing it: either with the Mars circuit technology, the core-transistor logic, or the integrated circuit approach, and all of the issues that that was going to raise. And the decision was made to go ahead with the integrated circuit approach. Of course, none of us knew what an integrated circuit was at that time.

But I ended up with the responsibility of re-implementing the Mars design in the integrated circuit world, and carried that through. It's a very interesting computer design. Many levels of interrupt. I remember tech words like PINC, MINC, SHINC, and SHANC--

[LAUGHTER]

--for, essentially, instantaneous reaction to requests from resolvers and synchros, and communication links, and things like that. And a multi-structured level of interrupts and software levels of hierarchy. I also remember a fateful day in 1963, sitting in the Apollo computer lab, which was a big room, and then a little room behind it.

I was in the little room behind it with the first brass board Apollo computer prototype. I was debugging it when the word came that Kennedy was shot. That was kind of a real seminal moment. I think everybody became particularly re-energized in making this project work. And that was my role.

I did a little bit of programming, when the design was finished, to some orbital calculations. But my role was primarily in the design of the hardware.

PRESENTER 3: I'd like to add something to what Herb did, which is a quote which I think applies to the business of whether you could do this project today. At the time the Apollo computer first became available, there was an internal need to have something that could show that it worked. Not that it did the job, but that this computer was [? alive. ?] And Herb programmed a little game, which would be completely primitive by today's standards. But it was a little version of "Pong," in which the ball bounced all over the place.

And it was after doing that, and then all the work that came after, that I heard Herb say: you can do so much with 4K and so little with 40. You know? You can do so much with five people and so little with 400 people all trying to do the same job, which I think is part of the thing.

And in terms of why the project could be done, I think the genius was in the partitioning of this project, which [? was ?] based on several things. One is having a clear goal. The other is having an institution that was already talented. Without that, you're not going to get anywhere.

But then there's the fact that the job was broken up so that people could go work on their own piece. They didn't have to constantly cross-check everything at every second-- there was plenty of that to be done. But I think that that's probably one of the [? signature ?] characteristics of that project. I've been in many projects since then-- and everybody here has, too-- and I think that those projects that failed almost always had enormous bureaucracies of cross-checking; the partitioning was wrong.

GEROVITCH: Do you think the right partitioning was achieved by intuition, or did that idea come from somewhere?

PRESENTER 3: Well, generally, somebody's-- there's got to be a combination of political and technical architecting done. I don't know if it's any one person. But I think the lab had-- first of all, the lab was divided by stovepipes. So there was the gyro group, the [? Herschel ?] platform group. Then there were the single commissionings, and then the optics group. And then the computer group, which was kind of a foreign object in the lab, as I recall, for many years.

But so that partitioning by discipline already existed. But even within that, I think once it was clear that there was going to be a midcourse guidance, and a booster guidance, and a reentry, and there was going to be an abort-- all these things were partitioned. [? It's ?] probably Davy [? Hoag ?] who was the person who just thought that this was the way the world should be. So I think that it requires an [? architecting ?] visionary on that.

PRESENTER 1: A space flight consists of a lot of periods of thrust and activity, followed by long periods of just tracking where you are flying to. And so some of the partitioning just came from that separation by periods of inactive flight.

GEROVITCH: Sure, sure.

PRESENTER 4: You want us to-- are you through over there? Should we all start?

PRESENTER: Sure. Now it's all yours.

PRESENTER 4: Well, I have, I think, two things. One, on the overall speed: I think you've got to see that it was a totally different era. It followed not that long after World War II, and it still had that spirit of: whatever you've got to do, you've got to do it quick and get it done, because there's a [? per ?] pressure. And the aerospace industry, for instance, started with nothing in '41. And by '45, they had built 300,000 airplanes-- bombers and transports. So it was an industry that really tooled up and moved fast.

And after Sputnik, we were in another mode of moving that way. Some of us had worked with [? Davy ?] [? Hoag ?] on Polaris-- we worked on the guidance and so forth. But those guys-- Admiral Raborn and the special projects office-- they started, said go, and in 18 months they had a submarine built and ready to take a missile. I mean, imagine sitting down now and telling somebody, go, and building a submarine and having it out there in 18 months. You couldn't do the paperwork in 18 months.

But those guys really moved. I never quite understood-- well, they took a submarine that existed, and split it in two, and redid-- it took all sorts of tricks to do it. But those of us who worked on that also saw that, whatever you did, it was going to move very, very rapidly. And we had [? Davy ?] [? Hoag ?] to partition-- to lead us, who was really a genius, as far as I'm concerned. A lot of us think he's the mentor for engineering, showed us what engineering really was.

Then we moved to Apollo, which [? Davy ?] said-- last time-- was awarded August 8, 1961. It was in Washington when they awarded it, and some of us were there then, so we were there from the very start. But it started, as Jim said. Then they started to put together teams. There was a little work before that, but not much. And it built up in late '61.

But you've got to see-- there are really two distinct phases, with different objectives and organization. First is the engineering phase, which lasted, well, two to four or five years, depending on what part. And that's when you had powered flight, rendezvous-- I was in reentry-- and something else. The LEM later, but I don't know that it started that way.

And you guys were doing the simulator. Who else? There was about five or six groups. But it was definitely-- engineering organized different parts of it. And Dick [? Batten ?] was the guy that ran all of it. But we were all involved--

PRESENTER 5: Well, he didn't really run it.

PRESENTER 4: He didn't really run it.

[INTERPOSING VOICES]

PRESENTER 4: He presided.

PRESENTER 5: And he was real hands-off.

PRESENTER 4: Real hands-off.

PRESENTER 5: Just a guru--

PRESENTER 4: Right.

PRESENTER 5: --in his corner office.

PRESENTER 4: Very smart in his corner, and smoked his pipe and nodded.

PRESENTER 5: Everybody thought he was just writing a book, but by golly, he knew exactly what was going on. And he didn't interfere. And to me, that is one of the secrets of success. Again, as I said before, it was the fewness of the people--

PRESENTER 4: Right.

PRESENTER 5: --that allowed this massive project to be shoehorned into this teeny box, and work.

PRESENTER 4: Each of those engineering groups was quite small, as [INAUDIBLE] said. We had three or four in our entry, and each of these others didn't have a lot. And you had a free hand. NASA was very busy with a lot of other-- they had a lot of other things cooking. So we had a lot of ability. We went to these panel meetings every month or two. In my case, it was Aaron [? Cohen. ?] And a lot of other NASA that you-- you informed them what you were doing, and waited for critical comments. But you weren't ridden.

PRESENTER 5: Yeah. I had a single point of contact with NASA. Tom Gibson, remember him?

PRESENTER 4: Right.

PRESENTER 5: The cigar-chomping guy?

[INTERPOSING VOICES]

PRESENTER 5: He oversaw our work. But he didn't interfere.

PRESENTER 4: Right.

PRESENTER 5: We told him. There were not hordes of bureaucrats who caused you to fill in forms and make present [INAUDIBLE] briefings and that kind of crap. We didn't have that at all. You kept Tom happy, he kept them off your back. And you went on with the engineering.

PRESENTER 4: But then we awoke, several years later, realizing we had a big programming problem. So we switched from the engineering-- develop equations, make simulations, study, develop guidance equations. We shifted to a different mode. We had a program, [? this ?] AGC. And now, we needed a new organization, different people leading it. The organization changed entirely.

And now, it was more mission-oriented, where you integrated all the needs into-- like, he started the 202 one, and different people took over other ones, and integrated the efforts of these different groups, and really had to program it. Because in most cases, the engineers did a so-so programming job, and it had to be redone and put into a form that would [INAUDIBLE] consistent for a particular flight. And you found stupid errors from engineers here and there, like somebody, for pi, using 22 over 7.

[LAUGHTER]

We kept getting errors, and couldn't figure out. And then this guy-- I mean, he was from the old school.

[LAUGHTER]

[? Good ?] enough for--

PRESENTER 5: Good enough for a [INAUDIBLE]. Exactly.

PRESENTER 4: And there were subtle errors that even got to the flight. You think Earth rate is a well-known constant, hasn't changed very much. Earth rate is Earth rate. But for us novices, although we knew more at the time, the earth rotates once in 24 hours. And that's what the guy used on his calculation. But that isn't right. The Earth does not rotate once in 24 hours. 24 hours is when it gets back to the sun again. But if you're doing the Earth's rotation rate, it actually rotates in about 23 hours and 56 minutes, as I recall. And if you use the difference, your longitude is going to be way off, which it was. And I can't remember what early flight there was.
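[The two numerical slips just described can be checked in a few lines. This is an editorial sketch in Python; the constants are standard published values, not anything taken from the actual AGC code.]

```python
import math

# pi as 22/7: off by about 4 parts in 10,000.
pi_error = abs(22 / 7 - math.pi) / math.pi   # ~4.0e-4 relative error

# Earth's rotation: the sidereal day (~23 h 56 m 4 s) vs. the 24-hour
# solar day. Using the solar day underestimates the spin rate.
SIDEREAL_DAY = 86164.1   # seconds
SOLAR_DAY = 86400.0      # seconds

omega_true = 2 * math.pi / SIDEREAL_DAY    # rad/s
omega_wrong = 2 * math.pi / SOLAR_DAY      # rad/s

# Longitude error accumulated over one day of flight with the wrong rate:
drift_deg_per_day = math.degrees((omega_true - omega_wrong) * SOLAR_DAY)
# roughly one degree of longitude per day, which compounds badly
# over a multi-day mission
```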

So things did get through. But the programming effort was directed differently and organized differently. And these guys really-- Margaret, Alex, Fred all really took over and, in a very good, delete all of those. Then we had a break into LEM versus command module. LEM engineering took a lot longer, because [? Grumman ?] was longer, they didn't know-- some of us thought we were done, and they were just almost getting started, Allen [? Klumpf. ?] And then the auto pilots came along, which required some more engineering. Anyway, I've--

PRESENTER 1: For a long time, there was a myth that was absolutely nothing more than a myth, that each mission program was a slight enhancement of the one that went before. So that all you had to do for the next one was the enhancement. The myth part was that you had to start all over again because everything was different. Somehow the hardware was different or the mission was different. There were very few things that--

PRESENTER 4: I think I'm the only one that we flew exactly the same thing twice.

[LAUGHTER]

I don't think there's any other. Maybe [INAUDIBLE]. 501 and 502 were identical because they were unmanned. There was nothing. And I think it was absolutely the same thing. But I don't think we did any others that way.

PRESENTER 1: But the trouble is a lot of us were fooled into believing the myth somehow, even after we knew it really wasn't working that way. And thought, with these small groups of people, we could handle it. I remember these talks in which we said, we got to do this with either about five people or maybe six. Or we got to do it with 150. The number we've got, which is about 30 or 40, is absolutely going to kill us if we keep on. We have to do either one way or the other. I think, happily, we went to the 150, although it wasn't fun at the time.

PRESENTER 4: No, it was very difficult.

PRESENTER 1: A whole new culture.

PRESENTER 4: Especially when you had what's-his-name from IBM's-- Brooks [? Law, ?] that adding more people to a job that's in trouble makes it later and things like that.

PRESENTER 1: We couldn't figure out why anybody else needed to know what was in the guidance computer besides us. What does NASA need to know for? Why do we have to publish all these documents? But we, of course, did have to publish all that stuff.

Speaking of the point that Fred apparently raised in the prior meeting, and that Dan touched on, I remember-- I mentioned Hal Laning, who had written the executive for the Mars computer, and who stayed involved for a while, but decided he didn't like the big money and the big pressure. Got off.

But one day, there was a meeting sort of late in the Apollo project, in which-- it was a two-day meeting at the laboratory of about 30 people to decide whether the executive program for the shuttle computer should be [? interrupt-driven ?] or schedule-driven. And I'm not sure it came to a conclusion after 60 person days.

But Hal said to me, these people have just spent more time-- more man days-- in that meeting than it took me to conceive of and write and debug the executive for the Apollo computer. I mean, things just had changed by orders of magnitude, and it was clear to me it wasn't going to be fun anymore to work on that sort of stuff, so I didn't.

PRESENTER 6: I'm very familiar with this particular issue. And I'll get into it a little bit later. It is the tail, almost, of a long story that I would categorize as almost a religious war between the synchronous executive folks and the [? interrupt ?] priority-driven folks. And it mushroomed in the shuttle program, and it had its roots in Apollo. And perhaps, I'll get into that a little bit later.

PRESENTER 1: You asked about trust in automatic systems. I remember-- just a short anecdote. We were visited at the laboratory one day-- and I don't remember when, but fairly early on-- by John Glenn and [? Alan ?] Shepard, and I think there was a third astronaut. I don't remember who. And it was at the time when reporters were still chasing around after the seven astronauts, and it was a big thing when the astronauts were coming.

And the laboratory was showing those folks what we were doing. And one of the things that, at the time, we kind of had the arrogance to believe we should do was to make sure that nothing could be done by mistake by the crew that would endanger them. So there were a lot of tests that said if the delta v is in the wrong direction, or this or that-- I can't remember what all the conditions were. There were lots of conditions. Then they couldn't execute the proceed order to go ahead because something was-- you know, the software thought something was wrong.

And [? Shepard, ?] with his marvelous way about words, I remember two things he said there. But this one, he said, you guys think you know more about what is going to face us during flight than we will know during the flight. He says, you're wrong. He said, take all of those inhibitions out. He said, if we want to kill ourselves, let us. It may involve saving ourselves.

Anyway, of course he was right, and I think everybody sort of woke up to that. It made the software a lot easier, too, [? because we didn't ?] have to worry about all those [? checks. ?]

[LAUGHTER]

But he was adamant about that.

PRESENTER 4: They did let him start the pre-launch program on the way--

PRESENTER 1: That's right.

[LAUGHTER]

Well, they weren't capable of perfection. They were pretty well-trained, and they certainly were motivated.

PRESENTER 4: They were very well-trained, I will say that. But they also got very tired.

PRESENTER 1: Yeah.

PRESENTER 4: I had the reentry, which is at the end of five days out there. And they swore, through all the early years, that they were going to fly it manually. And as far as I know, none of them ever touched the manual stick. I mean, they were so beat. And then, all of a sudden, you hit seven G's. I'm not sure they could even see what was going on by then. Because I mean, they didn't get any sleep out there for all that time, really, did they? I'm surprised they were as-- doing as well as they did after all that time.

PRESENTER 7: Yeah. I think [? our ?] we had a different opinion of this subject matter than the astronauts did, or at least the software group did, at the time that this took place, the pre-launch. And we talked about that the last time, when he selected pre-launch during flight. So I think there's sort of an in-between. You don't want to overdo it, but you also want to make the crucial areas, where you can't have a catastrophe.

PRESENTER 1: Shepard, in that meeting, was responsible, I think, for the demise of another thought that we had had, which was that the computer, not being perfectly reliable, it was going to be possible to do in-flight repairs. And so there were a lot of thoughts about diagnosing what had gone wrong and changing modules and so forth. I don't know how far along that got, but [? Shepard's ?] comment to that was, yeah, and we should all train to be brain surgeons so we can operate on each other. And I think that was about the end of the in-flight repair.

PRESENTER 6: I think there were two areas here that people are talking about that somewhat different. There was the small group, seat-of-the-pants effort that was unfettered and unhindered, for the most part, by NASA, and devoid, pretty much, of real bureaucracy. And I think that, as Dan pointed out, you had-- a lot of the fundamental engineering work was done during that period. And it was very fortunate that we were able to do it unfettered. Because I think that a lot of the fundamental issues on how to get to the moon, let's say, were solved in this period of isolation-- almost isolation [? and ?] small groups.

And these various flights that have been talked about, I think the term-- when Alex was talking, the term system mother came into my head.

PRESENTER 1: [? Rope ?] mother.

PRESENTER 7: [? Rope ?] mother.

PRESENTER 6: Rope mother.

PRESENTER 1: Yes. [INAUDIBLE]

PRESENTER 6: And this was a person who was-- the responsibility was focused upon this person for a particular flight, even though it was a relatively small group. But I think this era was-- I would say this is the pre-built [INAUDIBLE] era. And I think that, somewhere along the line, someone did some, let's say, back-of-the-envelope calculations about how many lines of code we have to do, how many person months-- how many lines of code per person month and things like that. And came up with a number of people that were just completely different than what the lab had in doing this work.

So instead of having 30 people, we needed 200 people or something like that. And not only that, but we needed all this documentation, and we needed all these review meetings, and we needed all NASA supervision, and we needed a lot of stuff.

And at that point, there was a LEM that was being split off, and we sort of became quote, unquote "professionalized," in the sense that I remember one day-- and I can remember the day very clearly. There was an evening meeting about 7:30 in the evening, with Ralph Reagan running the meeting. And the lab has been under a lot of pressure in previous weeks with respect to NASA coming down and questioning their organization, and questioning their documentation, and questioning their testing and questioning everything.

And I was told, that evening, that we're going to have a different organization and we're going to have something called Project Managers. And we didn't have any project managers. We didn't have that title, anyway, even though you might think of a [? rope ?] mother being a project manager.

And that's when I was appointed the project manager of CSM software. And another individual, George Cherry, was appointed the project manager of LEM software. And these groups now were fairly large, and NASA wanted to see formal schedules, and time to complete this module, and time to complete that module, and reporting on a monthly-- I can't even remember the schedule, what it was. But things started to build up in reviews and in paperwork, and in monitoring and so on. And I would say that that became a different era of doing work at the lab, and much less fun for a lot of people.

GEROVITCH: Excuse me. Can you put a date on that switch?

PRESENTER 4: It was after Gemini that NASA descended on us, but I don't know quite when the date was.

PRESENTER 6: Yeah. I would say-- I'm guessing, and I'm doing it on my own time schedule. I would say it's either-- it's either in the fall of [? 1960-- ?] I think it's the fall of 1966. I don't think it's into '67, But I--

PRESENTER 4: By '67, you were [? in plates. ?]

PRESENTER 6: I think it was the fall of 1966.

GEROVITCH: And what do you think actually prompted the change in NASA's attitude? Was it the concern of one particular [? faser? ?]

PRESENTER 6: Well, I think that's a very good question, and I have an opinion, but it's just a completely personal view. The people at MIT were very smart, very talented. We certainly produced a lot. But we were also, I think, sort of arrogant from the outside. And I think that that arrogance started to irritate people who were tired of hearing that MIT knew best, and knew better. And you don't even know what you want. We know what you want, and so on.

And I think that that was part of why there was an effort to almost sort of rein in MIT, or to get them on this program, not their own program. And then that NASA is the boss, not MIT. And I think there was a little bit of that. Plus, there may have been some perfectly honest professionals at NASA who had had experience with large programs, and just felt that this was not going to get there in the manner in which it had been going so far.

GEROVITCH: So you don't recall any particular conflict that may have prompted NASA--

PRESENTER 4: Well, those operations guys that you see down in all those rooms in Houston were doing first Mercury and Gemini, and zillions of [? them-- ?] flight controllers. And all of a sudden, they said, now what are we going to do for Apollo? And they had nothing in the way of documents, procedures. All of it was really scanty.

PRESENTER 6: [INAUDIBLE] what they created, though, [INAUDIBLE] as well. I can remember this. They had a-- NASA, at Houston, on this seventh floor in Building 1, I think. They had this beautiful conference room, really beautiful conference room. And that's where they held a lot of their big conferences. And they built a wall on this conference room of sliding translucent boards that were like this blackboard-- whiteboard. And they were white, and they were-- they went deep, like six deep, so that you could push one and there was one behind it, and push another one. And you had a big, long-- it was about the size of this wall, and you could move things from one side to the other and so on.

And on this wall, they created a schedule. Today, we call it a Gantt chart. And they had somebody who was taking care of this. Some contractor who would come up every day with his blue tape or red tape or yellow tape, and mark where the schedules were, and all these things, and what was going on. And they could roll this board back from that board and so on. And this was being kept, eventually, daily by this contractor. And it was magnificent to see it. That's how much they got into the schedules and sort of the project management of the program.

PRESENTER 4: Yeah, and MIT was woefully unprepared for all the tons of stuff and support and so forth. So they had to reorganize all together. And it would have been-- I don't know how it would have worked. There were a few people, like Fred, who really did a great job of dealing with them, and trying to shape the rest of us up, put on some reasonable form for it. Plus, there was Bill Tindall, that you have Tindall grams that we dealt with, who was up here a lot trying to work the problem, as they would have said in those days. And say what you may, he did a great job in trying to match the--

GEROVITCH: So were you seriously adopting these techniques in your own group, or were you just kind of shielding that group from NASA, allowing old practices in the groups to go on?

PRESENTER 6: There was a certain amount of shielding, but I would have to say, personally, I was trying, as a project manager, to adopt project management methods. And I was trying to keep track of schedules. And I was trying to encourage people to document things, and put documentation in listings and so on and so on.

So in that sense, I was almost one of the enemy, if you want to put it that way. Because there are-- there were a lot of people, and I'm sure, looking back at it, people would still have those feelings, that this was just-- all this nonsense was just getting in the way of getting this job done.

Well, NASA just didn't view it that way. That was part of the job. You had to work to schedules, you had to work to-- you had to have reasons why you couldn't meet a schedule. You couldn't just say, well, I promised this in two weeks and I couldn't get it, you know?

PRESENTER 7: But I think one thing that helped, it was a matrix management setup.

PRESENTER 6: It definitely was.

PRESENTER 7: And so Fred and George, at that time, were project managers, but then there were the functional managers. So while they were worried about the schedules of each mission, from the functional people worried about, hey, is this stuff working-- the stuff we're supposed to be delivering? The actual-- maybe the man-machine interface part. This was a whole separate division. So the functional people and the project managers would-- I thought they worked together very well.

PRESENTER 6: Well, there was a certain tension.

PRESENTER 7: [? Didn't ?] [? we? ?]

PRESENTER 6: There's always tension in a matrix organization.

PRESENTER 7: Right?

PRESENTER 4: But Fred did a great job.

PRESENTER 7: Yeah.

PRESENTER 4: Because he came to us and wasn't as high-handed as some people at NASA, and pounding on the table-- give me this or else.

PRESENTER 7: No, Fred was like a project psychiatrist.

PRESENTER 6: Right. You had to be a psychiatrist--

PRESENTER 4: He was--

PRESENTER 7: He made everybody work together.

PRESENTER 5: Fred was a psychiatrist drawn from our ranks, so he understood. I always thought of you, Fred, as Mr. Powered Flight. That probably was a fairly short period when you--

PRESENTER 1: It was a short period.

PRESENTER 7: Yeah.

PRESENTER 5: But that's what you were. And you were one of the guys that I needed.

PRESENTER 1: Was [? Ed ?] in powered [? flight? ?] Who else was in [? NASA? ?]

PRESENTER 4: I just remembered Fred.

PRESENTER 5: Actually--

PRESENTER 6: You might say that, from the background [? the ?] people, it's interesting. In those days, we, of course, had physicists on the job, and mathematicians, and engineering graduates and students. We probably had a literature major here and there, too, who jumped in there.

But there was no computer science discipline, as such. There was nobody there who was a computer scientist, per se.

PRESENTER 7: Did they even teach computer science?

PRESENTER 6: And I don't think they even taught it.

PRESENTER 7: It was not a course.

PRESENTER 6: It was not a course.

PRESENTER: Right, right.

PRESENTER 6: But I often felt that part of our success at that time was due to the engineering and physical science backgrounds of a lot of those people. Because maybe they weren't the most elegant programmers, OK? But they understood what a specific impulse of an engine was, or what the physics of what you were trying to do. And you weren't taking somebody who was a-- sort of from the inside of a computer, and then trying to-- who knew that, and then trying to get them to understand the problem that you were trying to solve.

And I'm not saying they couldn't do that, but I'm just saying that, at that time, I thought that one of the reasons that we so rapidly were able to do this-- I think there was a great understanding of what the problem was, even if we had to do some fast learning. But people had the basic tools. I think that was important.

PRESENTER 5: Yeah, I remember-- [? I ?] illustrated before, I remember when you joined, Margaret. And I didn't want to tell this against you, but one of the first things I did in interfacing with you was explain inertial navigation system to you. And you were a math major, if I remember rightly. And you hadn't worked-- you understood, but you hadn't worked with coordinate systems, and I spent a couple of hours, a couple of days, going through what the basis of the guidance system was and how it worked and blah-blah-blah. I never talked to her again because you understood, and you became an inertial-- at least somebody who knew what inertial navigation system-- how you describe them and how you control them.

PRESENTER 6: I want to get back something that Jim said, and what he did in that group. There was another fellow who, perhaps, should be here, as well, who also contributed to that group, [? that ?] is Bill [? Wittennall. ?] We needed those simulations so much because it was the only thing that grounded us to what the truth was. I mean, you could never tell what you were doing was going to work until you banged it against a simulator.

PRESENTER 7: Unless the simulation was wrong.

PRESENTER 6: Unless the simulation was wrong, and I didn't-- I suppose that happened. I don't remember too many--

PRESENTER 4: But that's why it was so important.

PRESENTER 6: I don't remember too many times, and I was really impressed when somebody had five degrees of freedom in the fuel [? sloshing ?] module of the vehicle. And they had all the stars in the right places, and all the-- everything else that would happen. And it was really-- it became our truth. And I was thinking about this last night before I came here, also. This was, basically, a fun project, although there were plenty of difficult times in the project when you're in the middle of it.

But I was-- we did a lot of testing in this project, a lot of testing with these simulators. And I think that if all we ever did was fly the nominal mission with nothing wrong, it would've been pretty boring after a while, if you had this just one set of parameters, and you always fly this one set of parameters.

And I think that part of the fun that we had was thinking up and creating these crazy off-nominal cases that probably could never, never happen. And then see-- and then bang that against the software that we designed, and design certain margins into the software that would make sure that if you-- which way, or what way the [? gimbal ?] would go that you'd be able to get out of it. And other things that made such variety in the testing. That for one nominal case, we ran 100 off-nominal cases.

PRESENTER 7: Also, an advantage, from a computer science or software engineering point of view, of having different missions is that we learned all about the advantages of things that people don't think too much about today, like reuse and how that could cut down on testing. We learned all the things about what it means to manage these different missions, what's reused between the LEM and the command module, between this mission and the one that's going next. How do you evolve? How do you do it in a reliable way? In other words, it was a great place to learn about how to do things the right way. So that was--

PRESENTER 6: The whole thing had to be structured, too. I mean, all that software had to be-- we structured it into programs with numbers. We'd call them things. We had names for them. The astronauts eventually picked up some of those same names. They would get into P40. I mean, they didn't know what P40 was. These things were-- not that those names were so great, but the fact of the matter is that that software all had to be structured and organized. And there had to be a nomenclature on it, and there had to be interfaces built.

PRESENTER 7: It took on a life of their own.

PRESENTER 6: It took on a life of its own.

PRESENTER 4: --banks manually here, there.

PRESENTER 6: Right.

PRESENTER 3: And it had to be ready about-- what was it? 8, 10 weeks before flight time. You couldn't change it after that.

PRESENTER: At least.

PRESENTER 7: And also, we learned about how important it was to prioritize as to what was most important. Because there was so little room to put the programs into that you would learn to delete what was less important. But prioritizing became something, and backup systems also became something that you lived your life later by doing things like prioritizing, and doing things like having backup systems. [? And ?] in your daily life-- and even going to the grocery store later on, it was such an influence.

But we had a Black Friday-- I can't remember what Black Friday was. I just remember we were deleting all kinds of things because something more important was coming in. What was it?

PRESENTER 4: [INAUDIBLE] sessions where they cut--

PRESENTER 7: Yeah.

PRESENTER 6: [INAUDIBLE] I think they called it Black Friday.

PRESENTER 7: Yeah. But we deleted an awful lot because something really important had to be done. Maybe it was backups or-- I don't remember now. I wrote about it, but I can't remember what it-- I'll have to go back and find that out.

PRESENTER 3: I'm sorry to have to leave, but thank you very much.

PRESENTER 4: Ray, nice to see you again.

PRESENTER: Good to see you, Ray.

PRESENTER 6: Ray, were you the guy who--

[INTERPOSING VOICES]

PRESENTER 6: Ray, were you the person who inflicted banks on us so we had to worry about--

PRESENTER 3: That was a solution to a very bad problem.

PRESENTER 6: It was a very bad solution to a very bad problem.

[LAUGHTER]

PRESENTER 3: Well, that solution exists today in Windows, right? Or in the 8086 architecture. Because you still have to do banking in order to get the extended address fields and things like that.

PRESENTER: Right.

PRESENTER 2: It's just invisible.

[INTERPOSING VOICES]

PRESENTER 6: It drove us bananas, I'll tell you that. It was almost like area codes, I would say, in the telephone system. I mean, you had to--

PRESENTER 7: Oh, yeah.

PRESENTER 6: --you had to be able to address [? this-- ?] with the [? addressing-- ?] how many bits you had for addressing. And in order to get unique addressing, you had to mark it with another set of field-- another field that would let you address uniquely. And so you had to manage that other field, and that got very confusing in the software. That was a difficult issue.
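[The bank-register scheme described here can be modeled in miniature. This is an illustrative editorial sketch only: the bank size, class, and method names below are invented, and the real AGC's fixed and erasable banking differed in detail.]

```python
BANK_SIZE = 1024  # words per bank (hypothetical figure)

class BankedMemory:
    """Addresses carry only a short offset; a separate bank register
    supplies the extra field needed to make the address unique."""

    def __init__(self, n_banks):
        self.banks = [[0] * BANK_SIZE for _ in range(n_banks)]
        self.bank_reg = 0  # must be managed explicitly by the programmer

    def select_bank(self, b):
        self.bank_reg = b

    def read(self, offset):
        # Only 'offset' appears in the instruction word; the bank
        # register disambiguates which physical block it refers to.
        return self.banks[self.bank_reg][offset]

    def write(self, offset, value):
        self.banks[self.bank_reg][offset] = value

mem = BankedMemory(n_banks=4)
mem.select_bank(2)
mem.write(100, 7)
mem.select_bank(0)
# Same offset, different bank: forgetting to restore the bank register
# reads the wrong word, which is exactly the confusion described above.
wrong = mem.read(100)
mem.select_bank(2)
right = mem.read(100)
```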

PRESENTER 4: And the erasable memory was so small. What was it? 2K?

PRESENTER: It was about 2K.

PRESENTER 4: A certain amount of that was permanent, but a lot of it was used during burn. And when you were done with burns, that same area was used for something in a later phase. And so these phases were sequential. And all the previous information was gone, and not used anymore. So you didn't have unique--

PRESENTER 7: You could see there were a lot of things to do with interfacing correctly. And therefore, we had the opportunity to make a lot of interface errors because we had to squeeze everything into such a small space. But also, it's amazing what one could do in so little space. And how little one can do with so much space today.

[LAUGHTER]

PRESENTER 5: One of the things that I have remarked on to myself, over 30 years since then, is at the time, we brought software into a machine that, by today's standards, would be a very advanced architecture. I mean, we had-- you heard it all the time-- preemptive, event-driven executives. I mean, I've worked with people who say, what? You programmed into one of those? A real-time control computer with event-driven asynchronous? We don't have that today.

PRESENTER 4: Yeah, with auto pilots and [? jets ?] sending signals.

PRESENTER 5: Multiple levels of interrupts and control.

PRESENTER 7: Man-machine interface, all of it.

PRESENTER 5: We invented it for this job, and it hasn't reemerged as a standard of computer design today. Surely, there are interrupts and stuff like-- but they were not used in this coordinated fashion that we had then. Something else that we invented-- and it's saved those guys [INAUDIBLE] on the moon-- was the software restart, which allowed reaction to any kind of failure for the computer to retrack back to its last known, properly coordinated state, and to [? re-move ?] forward.
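[A toy model of the preemptive, priority-driven executive described here. It is purely illustrative: the job names and priorities are invented, and "preemption" is reduced to always dispatching the most urgent queued job, whereas the real executive could interrupt a running job.]

```python
import heapq

class Executive:
    def __init__(self):
        self._queue = []   # (priority, seq, name, fn); lower number = more urgent
        self._seq = 0
        self.log = []

    def schedule(self, priority, name, fn):
        heapq.heappush(self._queue, (priority, self._seq, name, fn))
        self._seq += 1

    def run(self):
        # Always dispatch the highest-priority ready job next.
        while self._queue:
            _, _, name, fn = heapq.heappop(self._queue)
            self.log.append(name)
            fn(self)   # a job may schedule further jobs (an "event")

exec_ = Executive()
exec_.schedule(5, "telemetry", lambda ex: None)
exec_.schedule(1, "autopilot",
               lambda ex: ex.schedule(2, "dap-followup", lambda e: None))
exec_.schedule(9, "housekeeping", lambda ex: None)
exec_.run()
# The urgent autopilot job runs first, then the follow-up it raised,
# then the lower-priority telemetry and housekeeping work.
```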

PRESENTER 4: You don't have to do Control-Alt-Delete.

PRESENTER 5: No. This thing sets-- what do we call those things?

PRESENTER 4: Restart points.

PRESENTER 5: Restart points as it went along.

PRESENTER 7: And Dan was the father of that.

PRESENTER 5: I was going to say, this guy and Woody [? Vandiver-- ?] I was-- Dan's the father. Woody was the guy that put together-- and I advised that Woody should be. [INAUDIBLE] But not only did the software restart concept, I think, rescue the LEM from failure just as the [INAUDIBLE] [? touched ?] down, they were also a terrific help in debugging this complex piece of software. We used them. Do you remember that?

PRESENTER 7: Yeah.

PRESENTER 5: We triggered restarts, because if this thing was able to go over the same piece, we could change things and rerun chunks using the characteristics of the software itself. But I think Margaret probably remembers more of that than I do. But I remember, if we did not have that, testing of this real-time stuff would have been a hell of a lot more difficult.
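[The restart-point idea the speakers describe (snapshot a consistent state at known points, and roll back to it on any detected failure) can be sketched roughly as follows. All names are illustrative, not the AGC's.]

```python
import copy

class RestartableTask:
    def __init__(self, state):
        self.state = state
        self._checkpoint = copy.deepcopy(state)

    def restart_point(self):
        # Save a snapshot at a known, properly coordinated state.
        self._checkpoint = copy.deepcopy(self.state)

    def restart(self):
        # On a detected failure, fall back to the last good snapshot
        # and resume from there, rather than starting over.
        self.state = copy.deepcopy(self._checkpoint)

task = RestartableTask({"phase": "coast", "steps_done": 0})
for step in range(5):
    task.state["steps_done"] += 1
    task.restart_point()

# Simulate a transient fault that corrupts state mid-update...
task.state["steps_done"] = -999
task.restart()   # roll back to the last restart point
```

[As the speakers note, the same mechanism doubles as a test tool: triggering a restart deliberately replays a chunk of the program from a known state.]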

PRESENTER 7: Well, you brought up something that was really interesting. Because people today, when they use computers, they try something, it doesn't work, they try something again, it doesn't work, and they just have the computer at their disposal. We could only put in runs overnight. So you'd have to think very carefully. Because if it didn't work, then you'd have to wait till the next day.

PRESENTER 5: You wasted a day.

PRESENTER 7: A whole day. So then what we learned to do was to find a way to run 100 runs at once, thinking about it before the fact, and the restarts remind me of that. So you'd try all these different things all at once. And in a way, you got more done because you had to think it out so carefully. You got done maybe more in that 24-hour period than if you were sitting here, ad [? hocly, ?] just trying something. And so you're basically doing things more in parallel, I think, in your thinking process and your design of the test. So that was a totally different--

PRESENTER 5: Yeah. I don't know what lessons can be learned from that, but we did this with cards offline.

PRESENTER 7: Yeah, cards.

PRESENTER: Punch cards.

PRESENTER 5: There were no such things as terminals or interaction. As Margaret said, you got your stack of 2,000 cards. You gave it to the guys in the computer room, who by then, had cordoned themselves off. You couldn't-- we used to go in there and run the runs ourselves, but that-- at some stage, when Jim's double Honeywell came in, that was the end of that.

PRESENTER 7: And the thing is, if you dropped the cards, you had a real problem.

[LAUGHTER]

But it made you realize-- I mean, that was part of the reason we worked on doing something where, if you drop the instruction sets, it wouldn't matter because it was smart enough to get it back together again. That was later. But we used to take pieces like cards and say, here, put this in your deck, and that would be the reusable. You remember that?

PRESENTER 5: Yeah.

PRESENTER 4: We had to learn to use 519's.

PRESENTER 5: These are like horror stories, or war stories. There's probably something to be learned from this experience that the few, under arduous circumstances, were able to turn out something as complex as this piece of software was, and have it work-- and prove that it worked. That seems to be missing today.

I don't know how to put that together, but the fewer people, and the more crude the development environment was, the better seemed to have been the progress and the end product. There's a conundrum there. I'm not sure what it is.

PRESENTER 6: We did something that I know Margaret was intimately involved with. And that is that-- and I think it's written up in Microsoft's method of doing their software today. But we had a nightly configuration control group that looked at all changes that were created by anybody during the day, and had to pass this little group of people, which might have been four people. And we had the--

PRESENTER 7: The [? rope ?] mothers.

PRESENTER 6: We had this, quote, "advantage" of doing an assembly every night so that these changes were brought in to one listing, you might say. And then, by the next day, you'd have an update of that program. And so you had a very-- an extremely tight configuration control system.

Today, everybody is programming on monitors, and they're all distributed and so on. This is a much more difficult environment than ours, where we were in this one building, everybody producing changes and cards, and write-ups of what their changes were, all being funneled to one small configuration control group, who would then pass on everything that went into that system.

And that was done continuously. I don't know if it started right at day one, but it was done on the flights that I remember. And it was a very effective way of keeping errors out of the program.

PRESENTER 7: But there was more to add to that, in that every single day, the person who reviewed the changes would put out a memo that would go to everyone that was involved in that mission, so that they were aware of what the changes were. So then, if they had problems, they'd know that a change had been made in that area. So it was that communication-- just hard-copy memos that went out alongside each daily revision that I think helped.

PRESENTER 1: It shouldn't be forgotten that this was an assembly language program.

PRESENTER 6: Yes.

PRESENTER 7: Yeah.

PRESENTER 1: And if somebody had set up a simulation with what we call special requests, addressed to certain locations, and somebody changed the underlying stuff, the locations could move in that simulation run, or just be incorrectly formulated, as it would turn out. But that would take a day. And so you had to be careful not to kill somebody by shooting down something somebody else was planning to do with a fixed configuration.

PRESENTER 7: Exactly.

PRESENTER 1: Which was never fixed because somebody had to change something every day. Towards the end of the development of a single mission program-- this is a fascinating thing. It's been mentioned that NASA let us do just about anything we wanted to in the code. But we did have some help from a famous guy [? at ?] [? TRW. ?] But basically, we were completely given free rein until we delivered the software to Raytheon to build the core [? rope. ?]

And then we would go into a series of tests with absolutely fixed configuration, the purpose being to find any more bugs that hadn't been found. And sometimes, when you found-- sometimes you would find bugs that just you felt had to be fixed. And usually, they could be. But once the program was delivered, this group of people, who were the only people in the world who NASA thought could write the software, suddenly were so stupid that no change they proposed could be plausible at all.

And so anything you suggested, the answer was no. And you had to beg and so forth, for the interest of the project, to be allowed to remove this desperately bad thing. And I remember one night, in the LEM 1 software game, when one of these showed up. Probably the digital autopilot, which was new.

And it was late at night. It must have been 9:00 or 10:00 at night. And I called the guy, and I'm not going to mention any names for reasons that you'll see. I called the guy who was monitoring us at NASA at home because I had to get his permission to put this change in, and to let him know it was coming. And his wife answered the phone and said he was at work.

So I called him at his office. Nobody there. Well, I knew that this guy wasn't always telling his wife what he was up to, to say the least. And here I was with this problem. It was 10 o'clock at night. We wanted to put this change in. It was, in my mind-- whatever it was-- really important to put this change in.

I knew the guy wasn't where his wife thought he was, and I didn't know what to do. I couldn't leave a message at work because they didn't have voicemail then. And I didn't want to call his wife back and say he wasn't at work, for fear of what would happen to him when he got home. And what do you do in a case like that? We were absolutely at the mercy of getting approval on those things. I don't remember what happened.

PRESENTER 7: He was in the bathroom.

PRESENTER 1: I doubt it. But there were some funny situations like that. Many times, the software was changed after the rope had actually been built, and they would sometimes have already potted it, and they'd have to de-pot it, and change the wires and so forth. It was an incredibly slow and arduous process to go through.

One of the things that I think deserves mention to illustrate the frugality that went into the design and the software was that the guys that invented these interpreters-- and we had a second generation of them later than the one that Hal Laning had developed-- realized that single precision was too small to do Earth-to-moon navigation, but double precision was adequate.

And so the interpreter would do everything in double precision, or lots of things in double precision. It was a ones' complement machine, which is funny enough by itself. So that there were 32,767 states, not 32,768 states. And unfortunately, there were some optical encoders on rotating gadgets that needed 2 to the 15th exactly, and not 2 to the 15th minus 1. And so it also was capable of doing [? two's ?] [? complement ?] arithmetic, which caused some other problems.

But here we had a ones' complement machine with two forms of 0. And double precision, in which the interpreter was perfectly happy to let the two parts of the [? word ?] have different signs. So you could have a plus 1 over here and a minus 1/2 over here, and the result would be whatever it is. Plus 1/2. And that was clever because the software to do sign correction all the time took time, took space, and was eliminated.

Unfortunately, however, there were a lot of cases where that just turned out to be a real pain in the backside. And there were lots and lots of little bugs that showed up when there was sign disagreement in there, in a way that somehow got through all the checks. And so there were things that we did to ourselves sometimes, in the interest of frugality, or [? needed ?] to. It was a different day. It was a different day.
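The sign-disagreement convention described above can be sketched in modern terms. This is illustrative Python, not AGC interpreter code; the 2^14 scaling between the two halves, and all the names, are assumptions for the example:

```python
# Sketch of a double-precision convention in which the two halves
# of a value may disagree in sign, as the AGC interpreter allowed.
# Illustrative only; the 2**14 scaling is an assumption.

SCALE = 2 ** 14  # assumed weight of the upper word relative to the lower

def dp_value(upper, lower, scale=SCALE):
    """Combine two signed single-precision parts into one value.
    The parts need not agree in sign."""
    return upper * scale + lower

def normalize(upper, lower, scale=SCALE):
    """Sign-correct the pair so both halves agree in sign -- the
    step the interpreter skipped to save time and space."""
    total = upper * scale + lower
    u, l = divmod(abs(total), scale)
    return (-u, -l) if total < 0 else (u, l)

# The example from the discussion: a "plus" upper half combined
# with a "minus" lower half still denotes the correct value.
mixed = dp_value(1, -(SCALE // 2))  # a plus-1 word and a minus-half word
```

Skipping `normalize` after every operation saved time and space, at the cost of the sign-disagreement bugs described above.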

Dan, of course, solved the assembly language problem. He solved lots of problems. Wrote an assembler. We had an assembler on the Honeywell machine, which was a batch assembly. You would take all of the assembly programs that you were going to update, and you would assemble this huge deck of cards that would affect all of them. And one of the characteristics of this assembler was it was designed for the tape machine, so it had tapes up and down, and so forth. If your cards were in the wrong place, or had the wrong number on them, they wouldn't go where you put them in, and they wouldn't necessarily even go into the program that you had them aimed at.

And so you had to meticulously check your assembly listing in the morning when you got up to see if all your changes went in there. And if they weren't where you expected them, you had to go try to figure out, where did they go? And it was an arduous process, and a careless person could wreak havoc on the world in those assemblies. And they would run for a long time.

The saving grace of the Apollo computer was that it was slow. And why I say that was that the simulations that we did, and the assemblies we had to do, were very long. And if the machine had been 10 times as fast, it would have taken 10 times longer on the best hardware that we could buy to simulate this stuff. And we would never have been able to run these simulations. They would have just taken more time than we could've possibly had. So sometimes, the limited resources are a blessing in disguise.

PRESENTER 4: That's why those emulators people have on today's computers have so much trouble. They're trying to emulate something that's so fast.

PRESENTER 1: Exactly. We had things in the simulation called clock advance, which you'd tell a simulator you were willing-- when nothing was happening-- to just jump the clock up to the point where something happened. And the simulator would figure out when that was, and just take that leap up there.

PRESENTER 7: That's great. I really liked that.

PRESENTER 1: And it really advanced the speed of the simulation to the point where it was nearly real time in a lot of cases, dependent on how busy the computer was. But there were lots of periods where it was just idling, and maybe a few instructions, or maybe a few seconds. But it would just leap the clock forward. And that, as far as I know, never caused any difficulties. Because we didn't pass anything where, if an interrupt happened, that's where we'd go or something else. There [? were ?] a lot of tricky stuff in there.

PRESENTER 7: I keep telling people what that simulator used to do, and they can't believe it. [? It's ?] many things it did that people don't do today that they should do.
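The clock-advance idea can be sketched as a small event-driven loop. This is an illustrative Python toy, not the actual simulator; the class and event names are assumptions:

```python
import heapq

# Toy event-driven simulator illustrating "clock advance": when
# nothing is scheduled before the next event, leap the simulated
# clock straight to it instead of stepping through idle cycles.

class Simulator:
    def __init__(self):
        self.clock = 0
        self.events = []  # min-heap of (time, description)

    def schedule(self, at, what):
        heapq.heappush(self.events, (at, what))

    def run(self):
        log = []
        while self.events:
            at, what = heapq.heappop(self.events)
            # the clock advance: one leap over the whole idle gap
            self.clock = max(self.clock, at)
            log.append((self.clock, what))
        return log
```

A gap of a million idle cycles between two events costs the same as a gap of five, which is why the technique brought long simulations close to real time.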

PRESENTER 1: One of the fascinating problems was the reuse of erasable memory. And like every memory problem, a piece of software that would use somebody else's memory would likely get away with it because if it started writing into that area, and then completed what it would do, it would go away. And then the person who had left something in there for its use when it came back would hit something that somebody else had written in there, and do it completely wrong. And then, by then, the real violator was long gone and you had to figure out who did it.

PRESENTER 4: Who done it?

PRESENTER 1: And there were also problems of using erasable that hadn't been initialized the way you thought it had. And we came up with a technique to put a kind of random number into all of erasable when we started.

PRESENTER 7: In the background.

PRESENTER 1: And if that was ever hit, you could see it, and we had a way to put that same random number back in so you could do all kinds of diagnosis to find it. There were just things that nobody does anymore.

PRESENTER 4: You know, they still haven't solved that. You don't know how many students of mine now, their programs only work when they're 0 in the location when they start. That's the worst thing, I've decided, that you can put in [? unused-- ?]

PRESENTER 1: Yeah, that's right.
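The fill-pattern technique can be sketched in a few lines. This is an illustrative Python model; the fill value and all names are assumptions, not the actual pattern the lab used:

```python
# Sketch of pre-filling erasable memory with a recognizable pattern
# so that a read of a never-written cell can be caught and diagnosed.

POISON = 0o45454  # an arbitrary, recognizable fill word (assumed)

class Erasable:
    def __init__(self, size):
        # every cell starts out holding the fill pattern
        self.cells = [POISON] * size

    def write(self, addr, value):
        self.cells[addr] = value

    def read(self, addr):
        value = self.cells[addr]
        if value == POISON:
            # uninitialized read: surface it rather than silently
            # returning whatever happened to be in the cell
            raise RuntimeError(f"uninitialized read at {addr:o}")
        return value
```

As the discussion notes, filling with zero is the worst choice, because programs that accidentally depend on uninitialized memory then appear to work.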

PRESENTER 7: But why aren't people doing these things? There are so many things that would really help debug systems today that never were brought forward. Many things are, but that was one that was, I think, more advanced than most debugging tools we have. Why do you think that is?

GEROVITCH: Was there any effort to disseminate, in a systematic way, the kind of knowledge and programming that you gained within that project to flesh out the most important principles that were worked out in the work of this project, and published?

PRESENTER 6: There have been a few summaries of the program. People have written volumes I think that you probably have gotten your hands on. I know there was a late volume written by that woman, [? Madeline ?] Sullivan? And sometime, she wrote a multi-volume history of Apollo. I think it was one of the lab documents. But I don't know that anybody has tried to record the essence of, let's say, programming breakthroughs, or programming inventions, or something of that nature. Or the salient discoveries, if you will, in the Apollo. I haven't seen anything like that.

PRESENTER 4: No, I don't think so. Some parts of the engineering have been covered with some good papers-- well, [? Batten's ?] got a book, but that's more theoretical. But haven't there been some papers on LEM's autopilot lunar landing?

PRESENTER 6: Perhaps. I haven't--

PRESENTER 1: Bill wrote-- Bill [? Wittennall ?] wrote a book on autopilot.

PRESENTER 4: Right.

PRESENTER 1: I think it was the CSM autopilot.

PRESENTER 4: Right.

PRESENTER 1: Published by MIT Press.

PRESENTER 4: That's vaguely coming back. Maybe that's--

PRESENTER 1: But not too many people were thinking of publishing. They were trying to just get home. I mean, I don't know how many people were working six or seven-day weeks, 80-hour weeks, and doing everything they possibly could in their power to get these guys to the moon and back.

PRESENTER 5: A number of marriages sank during that period, including mine.

PRESENTER 6: I think there was an element in this program that is missing in a lot of current programs. And the 60, 70-hour week that we just spoke about, I'm sure that people do that today, too. But there was an-- I don't know how to say this-- an immediacy, or a familiarity with the astronauts. I'm talking about people at MIT, but it's probably true in other parts of the Apollo program. That you felt a certain personal responsibility to-- either you made promises, or you had conversations with these people, or you were doing it because you wanted to make sure that there were absolutely zero errors in this program. Because you knew that their lives were at stake. There was some sort of connection to the astronauts that we at-- I felt, and I think other people at MIT felt-- perhaps because they came there for training. Or I know there were occasions we went to dinner with Doc Draper at [? Lockovers ?] with some of the astronauts after a flight.

And I think that affected some-- or kept morale up, and kept people's attention, and feeling of dedication to the job because of that sort of connection. You weren't working for some big-- necessarily some big bureaucracy. It was this-- in my case, it was sort of a feeling that-- some sort of connection with Chris [? Craft, ?] that you were satisfying his leadership, you might say. Plus this feeling-- this sort of feeling of the astronauts, I would say, that certainly affected me in the job.

PRESENTER 4: Absolutely. They were here. I mean, which Apollo flight had Jim [? McDevit, ?] Dave Scott, and Rusty [? Schweiker? ?] That was Apollo 7, 8? Whatever. But they seemed to be around frequently before that, and you just seemed to see them doing this stuff, and get to know what they were doing all the time. It felt very close.

In fact, we used to laugh at them picking on Dave [? Scott. ?] He wasn't quite as fast picking up the stuff when you showed them that. But they were there frequently. But even before that, I remember the first time we went to Houston and gave a presentation to a lot of them, that was earlier in the-- still in the engineering mode. I mean, these guys came out and-- [GASPS]. They didn't just sit there and nod off. They were excited, and asked-- I remember Neil Armstrong overwhelmed me with the questions he asked, and how well-prepared, and how thoughtful. And I said, jeez, these guys, there's no fooling them. They know what they're looking for, and we better be really careful of what's going on here, and make sure we're prepared. When they ask questions like that, you can't wing it.

GEROVITCH: Were they actually interested in software, as such?

PRESENTER 6: Some were, actually.

PRESENTER 5: Not per se.

PRESENTER 4: But they wanted to know how this worked, and what was going to happen, and what it meant to them, and how it--

PRESENTER 5: They worked through the controls, but a lot of the controls were driven by the software. So what they said or didn't say had an immediate impact on us, who were making those things work in the front, through the software itself. I remember Steve [INAUDIBLE] was the man-- I've forgotten what the hell that thing was.

PRESENTER 4: Oh, you should mention--

PRESENTER 5: The astronaut document that gave the whole sequences of all of [INAUDIBLE].

PRESENTER 4: Right. The word for John [INAUDIBLE]. What's the name of it?

PRESENTER 7: The [? GSOP. ?]

PRESENTER 5: The guidance systems operations plan.

PRESENTER 4: Steve and Malcolm and Jack Schilling all cranked out all that stuff that went on and on and on.

PRESENTER 5: This was the first time I'd ever seen a specification for anything. And today, you design the spec first, and the software falls out the bottom. But those GSOPs, I remember John [? Dolan ?] was the guy-- remember him?

PRESENTER 4: Yeah, yeah.

PRESENTER 5: The man with the eight kids, I think he had.

PRESENTER 4: That's right.

PRESENTER 5: Who lived in the South Shore somewhere, and came into work every day.

PRESENTER 4: That was a thankless task, churning out all that stuff.

PRESENTER 5: Yep.

PRESENTER 4: And you had to get it all--

PRESENTER 5: Because he was-- the astronaut--

PRESENTER 4: They had to run down and catch us to get--

PRESENTER 5: Yeah.

PRESENTER 4: Every time they had a question, they'd start-- can I talk to you a minute?

PRESENTER 5: Yeah.

PRESENTER 6: I still have something like 280,000 Eastern Airlines miles that are now part of the Continental system that never expire. And--

PRESENTER 1: Look out.

PRESENTER 6: These were on these multiple--

PRESENTER 1: Midway has already thrown in the towel. Who's next?

PRESENTER 6: Multiple trips to Houston. But one of them, I want to just bring up in the conversation here. I don't know how many trips we all made to Houston. I can remember-- I can just see myself having this heated discussion for an hour and a half between Boston and Atlanta with Margaret on some programming issue, or something that we were doing. But we attended innumerable meetings down at NASA, and we had a very live wire at NASA who we went to see all the time. And that was this guy Jack [? Garmin, ?] who really was-- he really-- if there was anybody at NASA who sort of understood what we were trying to do, it was this guy, Jack [? Garmin. ?] And--

PRESENTER 4: I think-- I always felt like Dr. Frankenstein, that I'd created a monster.

PRESENTER 6: Yeah.

PRESENTER 4: He got assigned to me, and for six months, an hour a day, I'd keep--

PRESENTER 6: He was--

PRESENTER 4: --telling him the stuff, and he was persistent. Phone calls, phone calls, phone--

PRESENTER 6: But I remember this day, this particular day that [? I had ?] finished a meeting at NASA, and I was coming home. And we were standing-- I guess walking between the parking lot and the terminal. It was like a driveway that headed up toward the terminal in Houston. I can't remember whether this was Hobby or the International. I can't really remember.

But anyway, everybody said, well, wait a minute. There's the President. And so we watched, and this open car came from the terminal-- passed us. And there weren't very many people standing around. Maybe one deep along the road that might have been 1/2 a mile long. And he rode by in an open car with his wife at maybe-- I thought it was around 15 miles an hour, but nothing faster than that. Maybe 20.

And I remember standing there. I don't know who was with me. And I said something like, you know, if anybody wanted to kill this guy, this would be pretty easy. Because he's just rolling by in an open car and so on. Well that was November 21, 1963. And I think it was a Thursday, I think, and I was coming home. And I was in my office the next day. And he was no longer in Houston. He had gone that night, and then he went to Dallas the next day.

PRESENTER 4: Yeah, I went there that same Thursday. I think it was a different meeting. And he went right down-- we were-- they hadn't built that MSC yet. We were in Houston. It was an office park, whatever it was called. And he went right down the main highway there, although we didn't go out and see him. But I stayed over till Friday, and then Friday, I was heading to go home. And then, at the airport, we got all this news. And--

PRESENTER 5: But do you think it was a Friday?

PRESENTER 4: Yeah.

PRESENTER 5: Yeah. That's my recollection.

PRESENTER 4: Yeah. But Thursday, he was in Houston, and went up and down whatever the name of that [? Gulf ?] freeway, or whatever it is. There was a parade or something. And then Friday-- I stayed over Thursday night, some more meetings Friday morning. Was heading home Friday afternoon, and then, in the Hobby Airport, we heard the first news that all of this had happened.

PRESENTER 6: I wanted to touch upon this religious war I mentioned before, and get into it just for a moment. The issue really came to a head in the shuttle program, where there were forces that believed in asynchronous, priority-driven, interrupt-driven executives. And other forces that believed in absolutely synchronous executives, where you planned out all the software, and you executed the software by tables. And you could tell what part of the software was operating at every instant of time because you had planned it out so that it all operated very rigidly.

And the people who really used those kinds of executives were mostly people in the aircraft industry. And they had come from various airplanes, where they had these computers and they had these synchronous executives. And it was very important for them to know exactly what was happening at every instant of time, either from a testing standpoint or whatever. That was their mindset.

And the people that had worked on Apollo, for the most part, believed in this asynchronous executive, where you have priorities and you didn't have everything structured completely, but you allowed higher priority jobs to interrupt lower priority jobs, and so on. And there were these incredible discussions in the early shuttle time, in deciding what kind of an executive should be used in the-- operating system should be used in the shuttle.

And the shuttle had its own hardware issues, with multi-reliability strings and IO processor, and other things that had to be synced. And so it had a lot of reasons why this was a big issue. But the roots of the Apollo viewpoint really stemmed back to the alarms that occurred on the lunar landing. And as you recall, in that flight when that LEM was coming down, and all of a sudden, he had these alarms, and the astronaut kept getting these alarms. He said that he had alarms and, finally, there was a decision in Houston to just push on, and that he should land. And whoever made that decision-- Steve [? Bailes ?] or whoever it was-- either he understood what the problem was, or he felt that he--

PRESENTER 4: Well, Jack [? Garmin ?] was screaming-- I heard it-- go, go, go, go. And I don't think he had any evidence other than he thought we had restarted it perfectly.

PRESENTER 6: Anyway, he landed safely. And following that landing, there was just a frantic session at the MIT lab to find out what in the world was causing these alarms. And it took us the better part of 24 hours, I think, as I recall. And going back to the simulators, we simulated this. We tried everything to make this happen, and we could not make this happen. It was as if the AGC was operating at about 25% slower than it usually operates. And so we tried lots of things, and we couldn't make it happen.

Eventually, at the suggestion of somebody who worked at the Cape a lot, we tracked down the fact that they had a switch in the wrong position, and it was stealing cycles. And there was a rendezvous radar switch, and that made the AGC run slower. And because it was running slower, it was dropping low-priority jobs, and not getting to these low-priority jobs. And only doing the highest-priority jobs. And so the job queue filled up, and when the job queue filled up, it caused this alarm to go off and so on.

At any rate, MIT got a lot of criticism for software error causing this to happen. And so now you had sort of a dichotomy, where people believed that what had happened was actually an error, and other people believed, no, the software actually saved the program. Because in the face of this mistake in the switch, the software, which was written as a priority executive, was able to go on with the highest-priority jobs, and not tank the mission because it didn't have to do this boxcar-structured, synchronous system, where it would give time to everything, whether it was important or not.
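The queue-overflow behavior described here can be sketched as a toy priority executive. This is illustrative Python drawn loosely from the discussion; the slot count, alarm label, and all names are assumptions:

```python
import heapq

# Toy priority-driven executive: higher-priority jobs run first,
# and when the fixed set of job slots overflows, an alarm is
# raised and the least urgent job is shed so the important work
# continues instead of the mission being tanked.

CORE_SETS = 7  # assumed number of job slots

class Executive:
    def __init__(self):
        self.jobs = []    # min-heap of (negated priority, name)
        self.alarms = []

    def schedule(self, priority, name):
        if len(self.jobs) >= CORE_SETS:
            self.alarms.append("overflow")    # raise the alarm
            self.jobs.remove(max(self.jobs))  # shed least urgent job
            heapq.heapify(self.jobs)
        heapq.heappush(self.jobs, (-priority, name))

    def run_next(self):
        # always pick the highest-priority pending job
        _, name = heapq.heappop(self.jobs)
        return name
```

A boxcar-structured synchronous executive, by contrast, would have kept giving its scheduled time slice to every task, important or not, even while cycles were being stolen.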

PRESENTER 7: Well, you know, it was even more complicated than that because the error was in the actual documentation.

PRESENTER 6: Yes, I was going to mention that in a moment. So this feeling went forward, including the NASA folks-- people like Jack Garmin, who absolutely felt that it was important to have such a system in shuttle-- versus the airplane folks, who felt that they could never be able to test the software afterward if they couldn't have an exact match, and know exactly what happened at any time. And the asynchronous executive was almost not repeatable, you might say.

So that caused a great deal of difficulty. What Margaret is pointing out is that, when we finally got, and found-- I remember the incident that we ran upstairs--

PRESENTER 4: [? Ross ?] Larson.

PRESENTER 6: --to look at the telemetry to see where the bits were set. And sure enough, the [? radars ?] bit-- the switch-- the bit, which was picked up by telemetry and downloaded, that was in, let's say, 15-bit [? or ?] how many [? bits-- ?] [? words ?] there were in the telemetry, and bit [? 9, ?] or whatever it was, showed that the switch was on.

PRESENTER 4: Rendezvous radar.

PRESENTER 6: Right. Which meant that that's what it was. And eventually, they told-- the ground told the astronaut, when he was about to take off from the moon, to put the switch in the right position, he said, in a very low-key fashion. But when that was run down to the end, it was found in the crew procedures to put the switch in this position.

So the next question was, well, how come, if you put the switch in this position, wasn't this picked up in the simulators at Grumman, which these guys had been training on for, I don't know, a couple of years. And in that crew simulator, they had always done exactly what was in the document, put the switch in that position.

However, that switch wasn't connected to anything. In other words, that switch was not connected, in the simulator, to the AGC, which would slow them down in that simulator.

PRESENTER 4: You mean at Grumman?

PRESENTER 6: At Grumman. So they did-- they trained exactly to what those crew procedures said each time, and they did it on the landing, too.

PRESENTER 7: Well, it kind of gives you an understanding more of how important everything is with respect to the entire system of software, [? peopleware, ?] you know, that--

PRESENTER 4: And I put it the other way. Why didn't our simulations follow the crew procedures?

PRESENTER 6: I don't know that the crew did that in our simulations. I don't know. It probably didn't-- I don't think it was in our digital simulator.

[INTERPOSING VOICES]

PRESENTER 4: --if I turned on the rendezvous radar?

PRESENTER 6: I don't know.

PRESENTER 7: But when you're testing-- back to the system thing-- what would make the software do something like that? In other words, you begin to think in terms of modeling and designing a problem. And where do you stop? Where's the outside of the outside? But it's modeling all these things as one system of which part is realized as software, part as [? peopleware, ?] part as hardware. But then, you begin to model it as an entire enterprise, and then simulate it as an enterprise.

But it makes you realize that when people blame something as being a software error or a hardware error, you're not necessarily sure what is it, and what does it mean to be a software error or a hardware error? And this brings up all these questions when you have events like this take place. And we've still not solved it today, I mean, in the industry.

GEROVITCH: So did they stick to that asynchronous design in the shuttle program?

PRESENTER 6: Did they stick to the-- excuse me?

GEROVITCH: Asynchronous executive--

PRESENTER 6: I think that they had a mixed system.

PRESENTER: It was a hybrid. Yeah.

PRESENTER 6: It was some sort of a mixed system. They had some hardware constraints on their I/O processors, and the synchronization of I/O with the computer that made it impossible to have, I think, a purely asynchronous executive. So they had some real hardware issues involved in it.

And we, and the company that we formed after the-- we created a programming system for shuttle, in which we built into the programming language the concept of asynchronicity. That is, that you could have multiple jobs and priorities and interrupts and things like that. And the shuttle people, for the most part, didn't want to use those features that we had put in the language. We were very influenced in the language by the experience in Apollo, and very influenced by the executive that Hal Laning had designed, and the manner in which he had designed that whole thing. We were very influenced, in the language design, to take advantage of those things, of which they were, I would say, used to a minor extent in the manner in which the shuttle was put together.

GEROVITCH: Maybe we'd make a little break for five minutes while they're changing the tape now.

PRESENTER 6: OK.

GEROVITCH: And then we'll continue for about one hour. Because we started at 10:00.

PRESENTER 6: That's fine.

PRESENTER: That's fine.

PRESENTER 4: All the time on our Apollo, we had just about no contacts with the military. The military has its own-- were worrying about military satellites and so on. But our paths just didn't cross. I don't know if shuttle is where they finally came together, but not on Apollo. There was a big difference between the people we dealt with in Houston and most of NASA and Langley and the old people, versus Huntsville, Wernher Von Braun and the German gangs down there. They did things a little bit differently. And so there was a lot of difference of opinion. I mean, they built things in a certain way, which was very, very solid, [? methodical. ?]

PRESENTER 6: With all due respect to Jim's time in the service, I think there was an actual disdain for the military in this program. I think that you were afraid of the military getting some sort of influence on the program, in some fashion. And you were very happy that the program was completely open, and it was open to the public, and it was-- you could talk about it, and--

PRESENTER 5: Well, there were certain aspects that still had the stamp of the military on them. Some of those inertial instruments and stuff were still classified documents, if I remember right, at the beginning.

PRESENTER 6: Is that right?

PRESENTER 5: Trying to get some data. Because they sprang from Polaris and things.

PRESENTER 7: Were the computer instructions classified?

PRESENTER 5: Not that I recall, no.

PRESENTER 2: The only thing I remember being classified about the computer was the oscillator frequency.

PRESENTER 7: Because I wasn't able to get a hold of the [? Block ?] 1 instructions because of some reason. I thought it had to do with being classified, but-- when I first came to learn.

PRESENTER 1: I think ossified is--

PRESENTER 7: Was it ossified?

PRESENTER 1: Ossified.

[LAUGHTER]

PRESENTER 7: Do you remember that, Dan? You weren't able to give me-- or Al-- that you couldn't give me a document because it was classified.

PRESENTER 5: I don't remember that exactly.

[INTERPOSING VOICES]

PRESENTER 5: I know there were source documents that were classified.

PRESENTER 7: So you had to write down-- I can remember you had to write down a few instructions to say here they are.

PRESENTER 6: But didn't we have safes in that building? I remember-- I can picture the safe.

PRESENTER 5: Yeah.

PRESENTER 6: I can't remember what I kept in it.

PRESENTER 4: What was in there? It was definitely a classified area. Yeah.

PRESENTER 7: Yeah.

PRESENTER 5: Having to do with the inertial--

PRESENTER 6: I thought it had to do with the rocket.

PRESENTER 4: I don't know.

PRESENTER 7: You can see how much we were influenced by--

GEROVITCH: Did you have to obtain any security clearance or something?

PRESENTER 6: Yes. We all had security clearances.

PRESENTER 7: Yeah.

PRESENTER 6: But I don't know whether the security clearance--

PRESENTER 4: Nothing we ever wrote was classified, though.

PRESENTER 5: It was [? unclassified. ?]

PRESENTER 6: My security clearance was initiated in the Polaris--

PRESENTER 2: That's where it came from. Yeah.

PRESENTER 6: I think it was kept going from that [INAUDIBLE].

PRESENTER 7: But then I had to get one.

PRESENTER 6: Oh, you had to get one.

PRESENTER 7: Yeah.

PRESENTER 5: Interesting. I had one.

PRESENTER 7: Although, actually, I had one in the Sage system, so maybe mine was left over. I don't know.

GEROVITCH: And so you didn't have any sense that the military were interested at all in innovations that went on in your project?

PRESENTER 6: No.

PRESENTER 7: We never thought in terms of--

PRESENTER 1: The only military person I can associate with Apollo is General Phillips, and he didn't exactly come around looking over our shoulder.

PRESENTER 4: No.

PRESENTER 6: I don't know. I think that's a little too strong. I don't know that we knew that they weren't interested. They just were out of sight. They just weren't--

PRESENTER 7: They just weren't part of--

PRESENTER: I think they just thought we were going to totally flop, so why did they want to talk to us?

[LAUGHTER]

PRESENTER 5: Yeah. Going to the moon was a political thing, not a military thing. So--

PRESENTER 7: Yeah.

PRESENTER 5: There were no objectives of the military involved in it. They weren't going to the moon.

PRESENTER 6: Well, there may have been, but they didn't [INAUDIBLE].

PRESENTER 7: It was an [? ego ?] thing.

PRESENTER 5: Yeah.

PRESENTER 7: I mean--

PRESENTER 6: It might have been an edict. It might have been an edict from the President, you know? That this is an open program, that this is a big political show for the world.

PRESENTER 7: Right.

PRESENTER 6: We're going to have everything [? open, ?] you know?

PRESENTER 7: Because Sputnik happened, so we had to go and--

PRESENTER 6: Contrast that with the Russians-- they were in the middle of Siberia, and nobody knew what was going on. And here, this is completely open.

GEROVITCH: In Russia, there was always a kind of rule that if a significant invention was made that might potentially have military significance, they would classify it, even if it was done within an initially open program. But here there doesn't seem to have been that kind of oversight, in the sense of the military searching for things going on that might potentially be important.

PRESENTER 6: Well, there's a big danger in that, because the military, it seemed to me, and the people who were in charge of security, had an ever-expanding horizon. I mean, you'd have everything being classified because you couldn't tell whether this little gizmo might have a military significance. So after a while, you'd wind up with almost everything classified.

GEROVITCH: That's what the Russians [? wanted. ?]

PRESENTER 6: Yeah.

[LAUGHTER]

GEROVITCH: So I'm getting a sense that there was a lot of interaction [? in ?] small groups while working on those programs. Or did you have mostly one program, one programmer? Or several programmers working on one program? How was knowledge about what--

PRESENTER 7: Maybe four or five on each functional area.

PRESENTER 6: Well, you had-- yeah. You had different areas. I mean, if somebody was working on a navigation module, they probably did not have too much contact with people who were working on powered flight. They were worried about star sightings and covariance matrices, and things having to do with navigation computations. And somebody else was worried about autopilots and how they were going to fire engines and so on. And they were aware of each other, but there may not have been very strong interfaces between them. And these two groups-- I would say you might have a powered flight group of maybe six people, and, you know, a navigation group of some other number like that. And you had people who were testing, and you had people who were writing specs. So I'd say you had a lot of programming groups. And when you add it up, you get to the numbers we're talking about. But each group was a somewhat contained group.

PRESENTER 1: There was a parallel group of analysis guys that were writing back programs all the time, and testing algorithms and so forth. They weren't really writing software for the [INAUDIBLE] computer.

PRESENTER 5: Yeah, I was going to make that comment, because programmer was not a job description. Some people programmed and did analysis, and some people only did analysis. Some did all their work in [? MAC, ?] this analytical programming language we had.

I got the brunt of that, because I remember, in trying to put this first flight program together-- of which I was the inaugural rope mother-- I had to interface with these groups and make their stuff meet the stuff of the next group. I remember knitting the launch people's stuff, which was just plain analyzing dynamics and finding out where we were, to the powered flight people's, who steered the thing and took over the mission at that point.

Then it went into this midcourse thing, where some of these star-sighting people, who were wacko analysts-- I mean, I didn't understand what-- I still, to this day, don't really understand some of that. That was a whole different regime, because there, 10 minutes was a short time to these guys, whereas one second was a lot to the powered flight people.

And then I survived going through that midcourse stuff, got all their stuff straight, and was able then to power up this guy, who [? was ?] the entry. He'd been working [INAUDIBLE]. And I remember the first day in simulation, we knitted these two phases together, and we got a coordinated-- so these guys' programs-- Dan and Ray's program, and Fred and company, and some of the others-- and all my glue that I had hung this thing together with actually worked.

But there were a lot of different programming styles and conventions that you had to straighten out, you know? These guys expected the delta-V's in some-- we were giving it in the wrong-- you know, this kind of thing. So putting these cells together was an integration job. We didn't integrate at the beginning. It was not a programming exercise for the whole mission. There were these analytical groups, some who did programming, some with different styles. There were no standards. All that received a lot more discipline and method later.

PRESENTER 6: [INAUDIBLE] [? marvel ?] that you had-- that we didn't have more interface-type errors, because we had no standards. We had no programming standards. Each group, or each little entity, would have a style. I think, when we got this project management, I did try, to some extent, to get some standardization, but it was hard. I think people used different expressions for constants in their programs.

PRESENTER 1: But don't forget Don [? Boler ?] and the stuff he did with ICDs-- interface control documents. If we hadn't had a function like that, with all of its specs from all of [? those ?] other vendors, as well as our own, we would have really been up a creek. And that was absolutely--

PRESENTER 5: Who was that?

PRESENTER 1: Don [? Boler. ?]

PRESENTER 6: Don Boler.

PRESENTER 5: Don Boler. Big Don. Another guy I remember, even bigger, Peter [? Peck-- ?]

PRESENTER 1: Yeah, he was a big guy.

PRESENTER 5: --who instituted control over revisions and stuff. I don't know if that group that you said that took over-- wasn't that Peter's group?

PRESENTER 4: Well, he started it earlier, [? when ?] he did it all along.

PRESENTER 5: But this came out of nowhere. He thought this needed to be done, and he started doing it. I mean, there was no "you are the configuration manager."

PRESENTER 7: Oh, Peter [? Peck. ?]

PRESENTER 5: Peter started that kind of stuff. He'd come and argue with you. You couldn't put changes in that, [INAUDIBLE].

[INTERPOSING VOICES]

[LAUGHTER]

PRESENTER 5: Yes. He was a very large person.

[LAUGHTER]

PRESENTER 7: You don't want him to sit on you.

PRESENTER: I know what this is.

PRESENTER 5: But we invented that in the course of the work.

PRESENTER 1: There certainly were integration problems outside our building. And I think that our experiences within the building-- and we were all within a building which wasn't that big-- were relatively well-controlled. But some of the interface problems were experienced between our stuff and other people's stuff, and between various other people's stuff. I remember, for example-- was it 501, Dan, or 502?-- where there was an engine failure, and the software-- it had nothing to do with us-- shut down another engine, thinking it was shutting down the one that had failed. And so it was down two engines out of five.

PRESENTER 4: Right. We ended up in orbit, and they were supposed to do a translunar injection. But because of the failure, they didn't have a [? S-IVB ?] left, or whatever it was, and instead of on a trajectory headed towards translunar, we ended up either in orbit or an elliptical thing. So instead of doing a burn to knock us out of a translunar trajectory, we burned first to put us in, then we turned around and knocked us out.

PRESENTER 1: That was one example of one of these integration problems that woke up a lot of people. I think the accident on the pad was probably the single biggest disaster that woke people up and stopped the project for, what, a year and a half or something. Investigations were done, and Block II was invented. It was hustled into place.

The mission that I was involved with was subsequently called Apollo 5, which we called Apollo 206-- or didn't call Apollo 206, it was just 206. First flight of the LEM, which was unmanned and--

PRESENTER 6: This is the flight I think I mentioned the last time. Is this the one that only-- it was at the Cape, and the engine shut off in two seconds or four seconds?

PRESENTER 1: Well, I'll mention a little bit about what happened, because I think it was revealing. We'd never flown a Grumman spacecraft before, and I'd never guided one. We'd never flown a Block II system before. We'd never flown a digital autopilot before.

So there was a lot of new stuff, and its rope mother was new. And we had a lot of veterans and a lot of other people that were new. And our relations with Grumman were different than the relations with North American.

PRESENTER 4: Were they? [? Hoag ?] [? talked ?] last time about it.

PRESENTER 1: It was just a whole different ballgame. And I, for one, had never had any contact with the mission control people, and didn't have much as it was. But I went down there to one flight procedures meeting. It was run by the guy who was going to be the flight director. And let me not mention his name.

But it was a terrible meeting, I thought. The guy was so arrogant and obnoxious that people just stayed quiet who probably should have asked questions or spoken up. But in the event, in the flight, we had the first burn scheduled. And although we'd simulated it lots of times, when the command went to the engine-- unknown to me, as the mission program guy and as the simulation guy-- there was a lot of pressurization of tanks and stuff that took place between the fire signal and when the engine actually lit up.

And I had been told by the guy who wrote the descent burn software that it had to have a very narrow window on the start-up, because of something or other-- I don't remember what. But we had a fairly narrow window in there. And with the timing of the interrupt and so forth, it just missed reaching the delta-V threshold by milliseconds, if I remember.

And so we shut it down. The software shut it down. And utter chaos took place in the mission control center. Everybody was climbing all over everybody to find out what happened, totally preventing anybody from finding out what happened, because managers were insisting on knowing stuff that you didn't know yet. It was bad procedure, and I don't know if that continued on other flights, but it certainly screwed up this one.
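[The start-up check described here can be sketched roughly as follows. This is an editorial illustration only, not flight code: every name, threshold, and timing value is a made-up assumption; the point is just the logic of a thrust-build-up window that slow tank pressurization can overrun.]

```python
# Hypothetical sketch of the start-up thrust check described above: if the
# measured delta-V has not crossed a threshold within a narrow window after
# the fire signal, the software concludes the engine failed to light and
# shuts it down. All names and numbers are illustrative assumptions.

def startup_thrust_check(delta_v_samples, threshold, window_ms, sample_period_ms):
    """Return 'burn' if delta-V crosses threshold inside the window, else 'shutdown'."""
    elapsed = 0
    for dv in delta_v_samples:
        if elapsed > window_ms:
            break                 # window expired before thrust built up
        if dv >= threshold:
            return "burn"         # engine lit in time
        elapsed += sample_period_ms
    return "shutdown"

# Slow pressurization delays thrust build-up past the window:
slow_buildup = [0.0, 0.1, 0.3, 0.6, 1.0, 1.5]   # delta-V samples, crossing late
print(startup_thrust_check(slow_buildup, threshold=1.4, window_ms=80, sample_period_ms=20))
# prints "shutdown" -- the threshold is crossed, but milliseconds too late
```

As the account suggests, the check itself worked as specified; the mismatch was between the window the software was told to enforce and the engine's actual start-up behavior.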

PRESENTER 6: Were you in Houston then?

PRESENTER 1: Yeah.

PRESENTER 6: All right. This is the--

PRESENTER 1: But let me go on.

PRESENTER 6: Go ahead. I'm sorry.

PRESENTER 1: We had put in there, as part of the mission software, a very clever and fairly extensive ability to reprogram all of these burns. And when we-- through the hierarchy of Jack Garman, up to the flight director-- told them that we could easily reprogram this thing and try it again, the answer was, it's no longer within telemetry coverage.

Somehow, they had designed this mission, unbeknownst to us, in a way that there was no possible way to use this reprogramming stuff if anything went wrong. Of course, that was the only purpose for which this reprogramming software was in the flight computer. But there was no telemetry coverage if the mission didn't go exactly perfectly. And of course, there was no need to reprogram it if that's what happened.

So here we were, out of telemetry coverage. And then they decided to use the LEM abort guidance system-- [? AGS-- ?] to do that burn, and drop the thing without any knowledge of the guidance software. It took control away from the guidance software, gave it to the onboard LEM abort guidance system, which fired the descent stage, separated, and did an ascent stage burn. And then they handed control back to the guidance system.

Well, nobody had ever planned or thought that that would happen. And our guidance system on the LEM knew that it was in the descent configuration. By then, the descent stage had been thrown off. And so the digital autopilot had this little sporty vehicle when it thought it was controlling this massive thing, and so it was doing horrible limit cycles and burning fuel like nobody ever saw.

And they said, what's going on? Well, I knew right away what was going on. Nobody, of course, had asked MIT anything when all this was happening. They just knew better and took over, of course. I thought they didn't know better.

But I, again, sent the information up that if they ran this heat soak thing that we had put in there, the first thing it did was to check agreement between the bits that indicated what stages were present and the configuration the guidance system thought it had. And if it found a mismatch, it would correct the situation for the guidance system, and then pop out of the program.

So it was a way to get the thing back in sync with one uplink command. When this was made known to the flight director, he said, we can't do that program. We've never run it in our practice simulations here at mission control.

[LAUGHTER]

And here was this thing that we'd spent a lot of time putting in-- it was the second thing in the flight software that couldn't, or wouldn't, be used by the mission control people. Well, the result was not good, and we took a lot of flak for that-- a lot of it, I think, undeserved. But I think it really shook up the mission control people, who realized that their ability to handle things was much lower than they thought. And their ability to understand and deal with the possibilities of what could go wrong in the flight computer, and in the systems within it, was completely undeveloped.

And I think they went through a lot of learning, and made a lot of changes that led to the success in things like the lunar landing, where something really unexpected came up. And there were people there who were confident enough, or foolish enough--

[LAUGHTER]

--to say, go, whatever it was. But it worked. But without some of these blunders that happened, I think there would have been a much worse outcome, or could've.
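[The staging-mismatch recovery described above reduces to a simple reconciliation check. The sketch below is an editorial reconstruction, not actual AGC code: the bit layout, names, and values are all hypothetical; it only illustrates the idea of comparing stage-present bits against the guidance software's believed configuration and correcting on a mismatch.]

```python
# Hypothetical sketch of the configuration check described above: compare
# the bits reporting which stages are physically present against the
# configuration the guidance software believes, correct the guidance side
# on a mismatch, then exit. Bit values are illustrative assumptions.

DESCENT_STAGE_PRESENT = 0b01
ASCENT_STAGE_PRESENT = 0b10

def reconcile_configuration(stage_bits, guidance_config):
    """Return the guidance configuration corrected to match the actual stage bits."""
    if stage_bits != guidance_config:
        # Mismatch: e.g. the autopilot is tuned for a heavier vehicle than it has.
        guidance_config = stage_bits
    return guidance_config

# Descent stage already jettisoned, but guidance still thinks it's attached:
actual = ASCENT_STAGE_PRESENT
believed = DESCENT_STAGE_PRESENT | ASCENT_STAGE_PRESENT
print(reconcile_configuration(actual, believed) == actual)  # prints True
```

One uplink command invoking such a routine would, as described, bring the autopilot's vehicle model back in sync with the actual stage configuration.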

PRESENTER 6: Let me ask you-- did you-- the clever things that you had put in, were they known to the mission controllers?

PRESENTER 1: They were [? in the ?] software. They were in the software. They had tested all its-- no, our contacts knew that they were in there. It wasn't our job, and we didn't have time to do anything that wasn't our job. We didn't have time to do things that were our job. I was working 80 hours a week during that time.

PRESENTER 6: Sure.

PRESENTER 1: Aaron Cohen kept saying, go down to Grumman and talk to them. When? Who's going to do the configuration control? I would stay there till 8 or 9 o'clock every night, looking at the changes to the listing to make sure that the assembly was successful. Usually, it was. I didn't have time to go to Grumman. I would've stopped-- how many people were working on putting restarts in? But we ran out--

PRESENTER 6: But when you were down there-- that was one of the only launches I went to, and I was at the Cape at that time. And when that happened, as I was telling the group here last time, there was a lot of finger-pointing that occurred on this narrow window you were talking about, as to whether this was a programming error or whether-- we were programming that to the characteristics of the engine that were given to us by Grumman. And there was a lot of finger-pointing as to whose fault it was.

And of course, along came Doc Draper, who was down at the Cape. He wanted to know what was going on. So there was this picture of this madhouse room with a listing this big, opened up to page 179, or whatever it was, and somebody sticking Doc Draper's nose into these various assembly language things, trying to talk about how this engine was cut off.

PRESENTER 1: Well, that points out something else that I didn't think of, but I think is worth saying. I don't know if 206 was the first mission of this sort, but it may have been a first, where Houston had told us that they, not we, were going to be responsible for what we called the erasable load, which was the stuff that was preloaded into the erasable memory before launch.

So the responsibility for the erasable load was ostensibly with the guys down in Houston. Well, that was fine. But for the simulator, we had to have a lot of stuff preloaded into erasable memory. And there was this piece of code called Mr. Clean, which would go clean out stuff and preload all of these constants that we thought would be fine for the erasable to start with. And people could override those and so forth. But that's what that thing did, if I remember. The name came from a popular commercial on the radio at the time that happened to fit in eight letters, and so that's what it got.

[LAUGHTER]

But it turned out, Houston never touched the erasable, at least for 206. And so the stuff that we had in there for the simulator always worked in the simulator. And Grumman ran the simulations, and we never heard-- I never heard anything from Grumman that said something's funny there.

But there was another one of these mismatches, where we never saw what Houston might have done to the erasable load. It turns out they didn't do anything. But there were these interface gaps-- not local ones, but between geographic locations-- that went wrong, and I think caused people to shore up the defenses a little bit on missions that came later.

But without some of those mishaps, I don't know where we would've come out. That was a tremendous disappointment to all of us who busted our tails on the 206. And it was never possible to point a finger. I could never point a finger at anybody. I remember exactly where I was standing when the person who was responsible for the [? DPS ?] burn-- I'm not going to mention his name, but everybody here, I think, knows who I mean-- told me there were restrictions on that thing. And so the number came directly from him. I didn't make it up.

And so it wasn't something that we did just crazy. But mishaps helped us a lot. The accident, of course, took three lives. It was way beyond our purview, but it had the biggest effect. Because I think, without that accident, we never would have made the schedule, and maybe never would've made the landing.

PRESENTER 5: Something else I remember about it-- 204 was to [? have ?] [? flown ?] the flight program?

PRESENTER 1: Yeah.

PRESENTER 5: That program, which we were forced to release, was just so lousy, so chock-full of bugs. It would've killed them anyway. That's my feeling. It was so--

PRESENTER: Woof.

PRESENTER 5: It was sort of a blessing, if you can call it that, that that was ditched and never used, and we went into Block II. But--

PRESENTER 1: It's an example of how-- as I mentioned before, each program was supposed to be an evolutionary step from the one before.

PRESENTER 5: Yeah.

PRESENTER 1: This one had a crew on it. The one before it had no crew on it. If you don't think that was utterly different, it was utterly different.

PRESENTER 5: And I remember, we had six weeks to turn it out.

PRESENTER 1: Yeah, very short schedule. And of course, nobody wanted to be the one that caused the schedule to slip.

PRESENTER 5: Yep.

PRESENTER 1: We were confident Grumman would make the schedule slip. We just knew they'd never make it. Of course, they did, which caught a lot of us by surprise.

A lot of good things happened, too. I remember, when we ordered the first 360, Lyndon Johnson had just ensured that IBM got a sole-source contract to supply four [? 360/75s ?] to the Manned Spacecraft Center. And all the other vendors were just completely up in arms.

And IBM, which was supposedly the only bidder that could do this, had to supply a fifth computer because the four computers didn't meet the specs and so forth. It was a horrible situation. And I went to the guy down there that I had to go to, to say we needed a faster machine and we wanted to put out a request for proposal, and got approval to do that. We got bids from at least three vendors.

And I called him up and I said, well, we've done our proposal evaluation. We're ready to proceed. Any problem with that? And he said, no, not any at all. He said, who is the winner of your proposal? I said, IBM. He said, oh. He said, we're going to need a lot of justification for that.

So we spent about a month writing down what we thought they wanted to hear. Cliff [? Hyde ?] was the guy who [? wrote up ?] why IBM was the chosen vendor. So there was a certain amount of politics in there that didn't have anything to do with us.

But other funny things happened. I don't know if this is useful or not. Houston wanted to have an independent computer simulation from the one we had, as a cross-check. And so they assigned somebody to write a simulator-- and they wrote it in Fortran, I think-- of not the computer, but some other stuff.

And I had personally modified our [? MAC ?] executive, so that some things that we needed to do could be done by runtime software in the [? MAC ?] executive, by stealing some functions that I knew-- or thought-- nobody would ever use. We had a hyperbolic cosecant function in the language that I didn't think anybody had ever used.

And so if you put hyperbolic cosecant of minus 1 in there, or something, it would type on the console typewriter or something. And I got a look at the listing of their independent simulator written in Fortran. And right there, [INAUDIBLE] hyperbolic cosecant of minus 1, to write on the console of a machine that wasn't even the one they used. Certainly not what you'd call an independent simulation.

[LAUGHTER]

[INAUDIBLE] this code directly into Fortran without the slightest idea of what it was doing. But here we are, with a real cross-check. We did get there. I was going to say-- I lost the thread. Senior moment. It happens almost continually with me now.
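[The trick described above-- repurposing a builtin nobody was expected to call as a hidden side channel-- can be sketched as below. This is an editorial illustration, not the MAC executive: the sentinel argument, the message, and the function wrapper are all hypothetical assumptions; only the idea of hiding a console-output hook behind a sentinel value of an obscure function comes from the account.]

```python
import math

# Hypothetical sketch of the runtime hook described above: an obscure
# builtin (hyperbolic cosecant) is wrapped so that a sentinel argument
# triggers a console-output side effect, while normal use is unchanged.
# Sentinel value and message are illustrative assumptions.

def csch(x):
    """Hyperbolic cosecant, with a hidden hook on the sentinel argument -1."""
    if x == -1:
        print("console hook triggered")   # stands in for typing on the console
    return 1.0 / math.sinh(x)

# Normal use is unaffected:
print(abs(csch(2.0) - 1.0 / math.sinh(2.0)) < 1e-12)  # prints True
```

Which is why the sentinel call showing up verbatim in the "independent" Fortran simulator gave the copying away: the hook had no meaning outside the original executive.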

After having gotten this first IBM machine against all odds, and getting it installed-- we were stealing it from somebody that we were doing business with. And George [? Mueller ?] called them up and told them they were going to have to lose their machine. And it got very ugly, because this company got really angry with us for identifying them as somebody who might give up a machine and so forth. It took us quite a long time to get it in and installed.

And we were running lots of simulations on there, and a lot of other lab stuff as well, when we realized we just needed more simulation time than there was on one machine in a week. So I went to Chris Kraft and said, we've got to have a second machine. It was absolutely astonishing. Within six weeks, that machine was running in our place. Before, it had taken probably 10 months. He just was able to make things happen for us, and for everybody else in the project, that really gave it what it needed to succeed.

And I think without influences like Chris Kraft-- both inspirationally, because he really was a guy we all respected and liked, and through his clout-- the project would not have made it. There were just lots of things that happened to come together.

PRESENTER 6: By the way, he's written a recent book. Has anybody read that?

PRESENTER 4: I've seen him on TV, and heard him on the radio talking about it, but I haven't had a chance to look at the book.

PRESENTER 6: It's supposed to be pretty good.

PRESENTER 7: But on Apollo?

PRESENTER 6: Yeah, and the space program and issues, and--

PRESENTER 4: Everything.

PRESENTER 6: Very frank.

PRESENTER 1: I also read, in Gene [? Kranz's ?] book, that the volume of Tindallgrams was actually published and circulated. I've never seen that, either.

PRESENTER 4: They have some here because Malcolm sent his packet to you guys, right? I think he sent it--

GEROVITCH: To Sandy.

PRESENTER 4: Sandy. To Sandy.

PRESENTER 1: Way afterwards-- it must have been '91 or so-- I saw Bill Tindall at Intermetrics and urged him to publish his Tindallgrams. And maybe he already had, but he didn't say so. He said, why would anybody want to read that stuff? And I tried to sell him on the idea, but he didn't pursue it, as far as I know. But I did read that, from the [? press, ?] that they were circulated around. [INAUDIBLE]. Another guy without whom things would have gone far differently, and worse.

PRESENTER 4: Right. Bill Tindall helped solve so many-- especially at Houston down there. He'd drag them all together and pound enough heads together to get agreement. It was amazing how he did it without any-- just by dint of his personality. I'm not sure he really was the boss of any of those people. But when he said it, it happened.

PRESENTER 5: Well, I remember, before Bill, the ill-fated Dr. Joe [? Shea. ?] Remember that--

PRESENTER 4: Poor Joe.

PRESENTER 5: [? FACI ?] or whatever we had there.

PRESENTER 4: Joe got--

PRESENTER 5: He got the axe.

PRESENTER 4: Blamed for a lot of things that weren't his fault, I thought.

PRESENTER 5: But he had a style about him in conducting those. You remember those [? FACIs-- ?] first article configuration inspections? It was Joe [? Shea ?] sitting like this, [INAUDIBLE] red [? socks ?] on his boots. He was a rather unassuming guy. I was kind of fond of him, and looked at him from afar. Then he got-- he was the scapegoat for all--

PRESENTER 6: You know, we've talked a lot about the personnel and the people at MIT, and the good people that we got, and so on, and the closeness. In the later stages of the program, there was a determination that we didn't have enough personnel, and that we needed to run this more like an aerospace program, you might say.

And so there was actually an effort to bring in contractor personnel. And I don't know how many were brought in, but quite a few-- a lot. And by and large, these people were very good people, and they were integrated into our efforts. But it was just another example of how something that began relatively small and close started, toward the end, to look like a lot of other programs, where you brought in people under different badges and different management, and so on. And then you also tried to parcel or apportion them in intelligent ways-- either testing, or trying to create tasks that they could do that were self-contained. But in many cases, they were integrated into groups with [INAUDIBLE] MIT personnel as well. I think that, by and large, we did that fairly successfully.

PRESENTER 7: And in the flight software group, I think they were-- you couldn't even remember that they were subcontractors.

PRESENTER 6: Right.

PRESENTER 7: [? You're ?] told to [? sign ?] them.

PRESENTER 4: Yeah, Joe [? Sapinaro, ?] who ran that large team, cracked the whip on all of--

PRESENTER 7: Yeah.

PRESENTER 1: There was a bunch of guys from CUC.

PRESENTER 4: Yeah.

PRESENTER 6: Yeah.

PRESENTER 4: Phyllis.

PRESENTER 1: And I don't remember how many from AC, but if Jay Sampson hadn't been in that group, we would've languished.

PRESENTER 4: Right. We did the 501 with Jay and the gang. Jay did most of it.

PRESENTER 1: Who?

PRESENTER 5: [? Ronnie ?] Gilbert.

PRESENTER 1: Oh, yeah.

GEROVITCH: Was there any interaction between the instrumentation lab and the rest of MIT as a teaching institution? Did undergrads or grad students [INAUDIBLE]?

PRESENTER 7: We hired a lot of people from MIT.

PRESENTER 4: Well, there was close interaction around the time Apollo started, because it wasn't so big, and it was closely tied with the aero department. And the instrumentation lab was part of MIT then. You remember the faculty [? club ?] [? athletics, ?] and so on. Like Lincoln Lab-- I don't know, maybe Lincoln Lab still is.

But then, at some point, due to Vietnam protests, the lab split off. I don't remember-- early '70s or so--

PRESENTER 5: '69, I think.

PRESENTER 4: --when that happened. And became less connected with--

PRESENTER 6: Well, there were connections. There were students-- you always had research assistants that were around. People like me, who did that twice: I went from employee to student and got a master's, and then became a student again and got a PhD, and so on. But there were other students-- there were always students around. Some of them turned into astronauts, like Buzz Aldrin and, I think, Ed Mitchell, and some others that were in the aero department at one time.

PRESENTER 1: Dave Scott was another one.

PRESENTER 6: Who?

PRESENTER 1: Dave Scott.

PRESENTER 6: I think so.

PRESENTER 1: Yeah.

PRESENTER 6: These turned out to be students of Dick [? Battin, ?] and they all wanted to know something about guidance and control. And probably students of [? Wally Vander Velde ?] as well. So there was a connection into it, although I don't think that the people-- any of the people other than these students that I mentioned-- thought of themselves, in any way, as academics of any kind. They were working engineers in a laboratory associated with MIT. And there was some vague, let's say, organizational connection up above, so that certain MIT policies would flow down, whether it was retirement benefits or health benefits, and other things of that nature-- library cards and--

PRESENTER 7: Going to the faculty club.

PRESENTER 6: Going to the faculty club, things of that nature. One big difference in the laboratory, as opposed to some other laboratories-- I'm not sure about the radiation lab, but I know with respect to Lincoln Lab. Lincoln, at that time, was a line-item laboratory in the Air Force. The Air Force had a budget, and they said, here's the budget for Lincoln Laboratory, and they did all their work under this budget item. The instrumentation laboratory didn't have any budget angel, in that sense, and they had to scrounge around and do their own marketing, and find work and so on. And of course, Draper got this huge sole-source contract to do Apollo, which was magnificent.

But basically, the lab was not a kept lab. They had to do-- they had to get out there in the world and find that business. And that was a big distinction between this laboratory and some of the other MIT labs.

I think the thing that Dan is referring to is that, during the Vietnam War, there were tremendous student gatherings on campus here, and all kinds of protests. I think I had just left the lab, and I can't remember the essence of what they were protesting at the lab. I think it was ballistic missile--

PRESENTER 1: Defense contract.

PRESENTER 5: Yeah. Just military and defense contract. It had no place in academia.

PRESENTER 1: No.

PRESENTER 6: So you had--

PRESENTER 2: [INAUDIBLE]

PRESENTER 6: You had the instrumentation laboratories. This whole section was hollow, you might say. But there were other sections of the lab that were doing inertial guidance work for ballistic missiles and so on. And we heard stories of them having barriers across the doors, and bolts, and this and that. And pressure to divest the laboratory from the school. And eventually, some accommodation was made with respect to the organizational structure.

PRESENTER 4: All of us were in one building right around the corner here, on Cambridge Parkway. We weren't over at main MIT. We were down towards Lotus. Lotus wasn't there then. A bunch of warehouses.

PRESENTER 2: The building's gone now, right?

PRESENTER 4: Yeah.

PRESENTER 1: The building's gone.

PRESENTER 7: Right next to the hotel, what's that hotel?

PRESENTER 4: Sonesta.

PRESENTER 7: Sonesta.

PRESENTER 6: Well, that was a couple of buildings further down.

[INTERPOSING VOICES]

PRESENTER 7: Was that what it was called?

PRESENTER 1: Before it was the Sonesta.

PRESENTER 6: Of course, Apollo didn't start in there. We sort of gathered into that building. We started in other places. The lab had-- I don't know. That was W6, right? No, that was 7. W6 was way down Vassar Street. And then, there was 1, 2, 3, 4, and 5. And you had a garage, and you had an old shoe polish building, and so on. And Apollo began in these other buildings, with various groups of a few people here and a few people there, and so on. And I can't remember the exact date when everybody was gathered.

PRESENTER 4: I don't know. We had the big computers down there in the first floor, and Dick [? Batten ?] in the corner. And all of us--

PRESENTER 6: But I know Dick-- when I went to work for Dick, he was in W1, near that elevator shaft that was in W1. And so--

PRESENTER 1: Yeah. On the fourth floor.

PRESENTER 6: Yeah. So it must have been--

PRESENTER 5: Where the 650 was?

PRESENTER 1: About '62, we had--

PRESENTER 6: Yeah, it must've been 1962 or so, that everybody was gathered and brought into this one building. And they had things on the roof, too. Didn't you have a sextant and something on the roof of this building that did star sightings?

PRESENTER 4: When we all moved to the second floor, when you had the project organization and all of that, we took over all that second floor.

PRESENTER 6: But I thought that either [? Nevins, ?] or John [? Dolan, ?] or somebody had optical equipment.

PRESENTER 1: It seemed to me that was down in [? 6. ?] 224 [INAUDIBLE] Street.

PRESENTER 4: A lot of it was, but they had the thing on the roof that you can look at the-- they had that thing you sat up there, Doc Draper's toy.

PRESENTER 6: Yeah. Yeah.

PRESENTER 4: Anyway, it was quite a time in there starting in the early '60s, and it escalated as you got closer and closer to each flight. And I mean, Alex did the first one, 201. I don't remember the sequences after. The first few were all unmanned, and I don't even remember--

PRESENTER 5: Yeah, 201 was unguided. I think 202 was the first [? HEC-guided ?] [INAUDIBLE].

PRESENTER 4: 202, I mean. Right. 201 was just open-ended.

GEROVITCH: Just let me ask you the last question, probably. But did you have any international contacts? Did you go abroad to Europe, whatever? Did you have any contacts with foreign [? venues? ?] Was anybody interested from abroad in what you were doing?

PRESENTER 6: I'm sure there was great interest, but I can't recall-- it wasn't like space lab. I did some work on space lab, where you had a real international group that you were--

PRESENTER 4: Nobody in the world was doing much yet, and so it was all pretty early. There were individuals interested, but there wasn't any institutional interest.

PRESENTER 5: There was huge popular interest.

PRESENTER 4: Right.

PRESENTER 7: I presented a paper in Paris at a conference on a theory that had come out of the Apollo project. It was part of the-- part of our organization sent us there. So in that way-- but it wasn't really what we did in Apollo. It was what came as a result of it.

GEROVITCH: Were they interested?

PRESENTER 7: Very interested. I just can't-- my mind can't remember--

PRESENTER 5: That was about the mid-'70s, though, right?

PRESENTER 7: You know, early '70s.

PRESENTER 5: Early '70s. But it was still part of the Apollo group. The Colloquium Program [INAUDIBLE] colloquium or something, in Paris. Mostly academics. But yeah, there was a lot of interest. But it was mostly academics, not application types.

PRESENTER 4: No, during Apollo, most of our trips were to Houston, or for those of us that dealt with Rockwell, out to Rockwell. It was all the business meetings.

PRESENTER 6: But some of us were very shocked when we-- I was shocked when I first made a trip to North America. And we had-- I don't know-- you have the Cambridge scene here, you know? Everybody has his own office, and everything's sort of quiet. And you look out the window. You might see the river--

PRESENTER 7: And watch the planes take off.

PRESENTER 6: Yeah, watch the planes take off.

PRESENTER 7: From my office, that was my favorite thing.

PRESENTER 6: And then you made these business trips out to Rockwell, to North America.

PRESENTER 4: North American--

PRESENTER 6: What you were faced with was this three-block-long, open hangar with God knows how many people in that hangar. And telephones ringing and noise--

PRESENTER 5: The [INAUDIBLE] edge of the building was lost in the smoke--

PRESENTER 6: Right.

PRESENTER 5: [INAUDIBLE]

PRESENTER 6: The contrast-- you wondered how anybody got any work done. I mean, the contrast between what we used to think of as the aerospace industry, and the norm of how people worked in this aerospace industry versus the norm of what we were doing at MIT was night and day. It was just a different world.

PRESENTER: Right.

PRESENTER 4: They were metal benders, and they wanted to build 10,000 more airplanes if we told them how.

PRESENTER 7: Our offices looked like graduate students' offices at MIT. You walked in there-- I came from the meteorology department, and I felt like it was part of the same environment. It was not like a big corporation outside of MIT.

PRESENTER 1: It was interesting, in the early years, to me, to find that MIT people-- we were never instrumentation lab people. We were always MIT people, for whatever reason. Although we had this local environment of our own, when we went out to North America, we found that we were the communication means between parts of North America that we interfaced with that didn't talk to each other. And often, we were the ones that told them things that their own management, perhaps, should have told them. But they were really glad to know. And we served a very useful, if inadvertent, function that way.

PRESENTER 7: That's interesting that you say that because the glue that Alex was talking about, when we put together the different powered flight and navigation and everything, we found that we were a communication vehicle for the entire mission software for the different groups that were being interfaced, as well. And that's how we learned about-- we didn't really know what was inside the-- or at least, I was less familiar than you-- inside the black boxes. But we really knew how they all tied together.

PRESENTER 4: One more thing you probably should say about this. Working at the lab, which was MIT, we were in a very privileged position. We got to see and talk to not only all the astronauts but everybody at NASA, all over the place. I mean, Wernher von Braun would come and visit. And I remember giving him lectures on what we were doing, and so forth. All those people, we got to talk to as a small, elite organization.

And when the flights were going off-- the Apollo flights-- we were tied in very closely on those headsets and boxes. I can remember one flight-- I don't remember what and who I was talking with in Houston. Might have been Jack Garman. I said-- it was a problem we had, and I said, I think we need an erasable dump. Will you get one? Within 30 seconds on the main [? cap ?] commander--

PRESENTER 5: Capcom.

PRESENTER 4: Capcom-- whatever-- to the astronauts. MIT wants a dump. Will you do it? Something or other. And it was always, MIT wants this and that. And so I felt anything we did was going to get there and back very quickly. That might have been the time we were worried about erasable being messed up.

PRESENTER 6: I also thought some people put us on a pedestal that maybe we didn't really deserve. But--

PRESENTER 1: Oh, yeah.

PRESENTER 6: There was one thing that happened, though, that I still vividly recall. It was near the end of the-- there was a landing on the moon, and it was successful, and they were-- the program-- it either was-- yeah, it was just after that landing, or it could have been just before.

But we used to go to these dinners at Locke-Ober's. And I don't know how many I went to, maybe four or something. And Draper was very expansive about this, and he'd want to get all these people there, and he'd always have some astronauts in there because that was kind of fun.

And so I remember this dinner, and I was sitting next to Frank Borman, and we were eating dinner. And Frank Borman-- somewhere in the conversation, he felt that the people at MIT-- we were all geniuses, almost like everybody was a guru.

Then he said to me something like, you guys at MIT, you ought to get out of this program now. He said, this program is nothing but-- it's going to be just grinding it out. He said, there's not going to be anything more of interest. There's nothing more that you guys can do. You ought to go off and do something bigger and more interesting than Apollo. And I thought that was interesting.

[LAUGHTER]

PRESENTER 4: So we followed Frank [? Borman ?] into Eastern Airlines.

[LAUGHTER]

PRESENTER 6: That's exactly right.

GEROVITCH: All right.

PRESENTER 4: Yeah, we did have a good position. But also, there were so many of the astronauts that had previous connections at MIT that we weren't poked fun at too much. We had some inside tracks.

PRESENTER 6: I might tell you, just as a closing comment for me, that Dick [? Batten ?] asked me, last year, to be a guest lecturer at one of his sessions. He runs a freshman seminar on Apollo. And he has these kids, and kids come in. And last year, he had Harrison Schmitt. Is that his name? The astronaut. And he had a number of other people. And I was there. He's trying to get Don Eyles to come in.

And it's really remarkable. I only did it once. And so what you do is you're talking to these kids, who are about 18 years old, and they really know almost nothing-- zero. I mean, whatever they know, they know from television or cartoons, or maybe they read something or other. They know nothing.

But it is interesting, the level of interest that they have in the program. And usually, a certain percentage of them, like four out of eight, or five out of 10, all have-- they all want to be astronauts. That's one of the things that drives them. Because you wonder why this person, as a freshman, takes this freshman seminar with Dick [? Batten ?] on Apollo, from 30 years ago.

But it is interesting. Of course, Dick loves it. And I did bring in some artifacts. This is a program listing. This is a this, this is a that. And most of it is just getting up to a blackboard, and drawing a few stick figures, and telling them a little bit about how we got to the moon, and something about the computer. And each person has something else to say. Of course, I'm sure that Harrison Schmitt gave them the whole astronaut viewpoint.

It really is remarkable for these kids to have this connection, through Dick [? Batten, ?] to this world. Because it's something that very few people-- very few high school students, or very few freshmen in college-- could have an opening to. And to have these individuals appear in front of them. And Dick just loves it.

PRESENTER 5: This is something I felt coming from Europe. I've got friends and relatives there. Incredible intensity, that interest in Apollo, what happened. I often feel like, since I was in it, I haven't done my bit in satisfying some of the yearning for information, and contact with this glamorous world. And that's just sort of a popular reaction. It's nothing to do with military, academia, or business or anything. The man in the street views the American Apollo program as just an incredible achievement.

PRESENTER 6: If it ever really happened.

PRESENTER 5: Indeed.

[LAUGHTER]

Is this true? Yeah, people who weren't even alive then still--

GEROVITCH: Yeah, we hope that the website that we're working on would-- one of its functions would be to provide a window into the history of that program for students and a general audience who might be interested in it. [INAUDIBLE] both the basis in documents, but also a human subject perspective on this event. I wish to thank you all very much. It's been very, very--

PRESENTER 1: Your job is going to be a severe one, in terms of getting young people now interested in things that, by comparison with today's technology, are so ancient and so primitive. Why would anybody want to look back at stuff from a machine that had a 3-bit op code and a 12-bit address? I just bought Microsoft Flight Simulator 2000, and it says, you need to have 635 megabytes of disk space available.

[LAUGHTER]

Good heavens.

PRESENTER 4: Wait until you see the operating system that comes out this fall.

PRESENTER 1: If this is what the kids are dealing with, why would they care about a 36K word machine? And the problems that went with it, that are just so completely removed from today's problems.
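[The machine being described here, the Apollo Guidance Computer, packed each basic instruction into a single word: a 3-bit opcode field and a 12-bit address field. As a rough illustration of how small that encoding is, the decoder below splits such a word apart. The mnemonics follow the Block I AGC's basic instruction set, but the decode logic is a simplification for illustration, not an emulator.]

```python
# Illustrative sketch of the instruction-word layout mentioned above:
# a word with a 3-bit opcode field and a 12-bit address field.
# The mnemonic table follows the Block I AGC basic instructions;
# the decode itself is a simplification, not a faithful emulator.

OPCODES = {0: "TC", 1: "CCS", 2: "INDEX", 3: "XCH",
           4: "CS", 5: "TS", 6: "AD", 7: "MASK"}

def decode(word: int) -> tuple:
    """Split an AGC-style word into (opcode mnemonic, 12-bit address)."""
    opcode = (word >> 12) & 0o7      # top 3 bits
    address = word & 0o7777          # low 12 bits
    return OPCODES[opcode], address

# Example: opcode 6 (AD, "add"), octal address 1234
word = (6 << 12) | 0o1234
print(decode(word))                  # → ('AD', 668)
```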

PRESENTER 5: But I think this could be a lesson--

PRESENTER 1: It can be knitted into today, but you have-- that's a challenge for--

PRESENTER 7: I would like to make a comment about your comm-- along with your comment. I think that, in looking around at software problems today, they still have the same problems that they had yesterday. And I think many of the things that we learned, and that evolved-- lessons learned could still be learned today. The hardware is different, but I think software, in fact, is worse in many ways now. It's done in a more primitive way now than it was back then.

PRESENTER 5: Because the resources are limitless, people just don't try--

PRESENTER 7: So I think we could go back and learn a lot from what has evolved from that. So if we can't, then some of us are in trouble.

GEROVITCH: Right, right. So look at history not just as a progression of machines getting bigger and better, but also as the evolution of human skill.

PRESENTER 7: Of how you think.

GEROVITCH: Right, right.

PRESENTER 7: And model and design. Exactly.

GEROVITCH: And some of those skills from the '60s might have been very valuable but are not really in use now.

PRESENTER 7: Sometimes, we go back to the very foundations, the very core, and learn from what we might have thrown away.

PRESENTER 6: The other thing is that people are-- even though they can hold a handheld GPS, they're still interested in how people did things with a sextant and a watch, or not even a watch.

PRESENTER 7: Or a DSKY.

PRESENTER 6: Or a DSKY.

PRESENTER 7: Which I sometimes call it by mistake.

[LAUGHTER]

GEROVITCH: Maybe we should have a simulator for the DSKY on the website where they can actually press--

PRESENTER 7: Yeah.

[INTERPOSING VOICES]

PRESENTER 7: Oh, definitely.

PRESENTER 6: That'd be great.

GEROVITCH: You think so?

PRESENTER 7: Yes. That would be great.

GEROVITCH: Then you can call up all the programs and get [? Jim's ?] simulator going again.

PRESENTER 7: [INAUDIBLE]

GEROVITCH: Right.

[INTERPOSING VOICES]

PRESENTER 7: Yeah. Priority alarms.

PRESENTER 6: You can have your verbs and nouns.

PRESENTER 7: Yeah.
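[A web DSKY simulator like the one proposed here could start from a small verb/noun dispatch table. In this sketch the codes follow real DSKY conventions as far as we know them (Verb 16 monitored a noun's value; Noun 36 was the mission clock; "OPR ERR" was the operator-error indication), but the handler name, the display string format, and the stub clock value are invented for illustration.]

```python
# Minimal sketch of a verb/noun dispatcher for a hypothetical DSKY
# simulator. Verb 16 ("monitor decimal") and Noun 36 ("mission clock")
# are drawn from the real DSKY conventions; everything else -- the
# handler, display format, and stub clock value -- is illustrative.

def mission_clock() -> str:
    # Stub: a real simulator would return the running mission time.
    return "00:04:32"

NOUNS = {36: ("MISSION CLOCK", mission_clock)}

def execute(verb: int, noun: int) -> str:
    """Dispatch a VERB/NOUN key sequence to a display string."""
    if verb == 16 and noun in NOUNS:          # V16: monitor a noun
        label, read = NOUNS[noun]
        return f"V16 N{noun:02d}  {label}: {read()}"
    return "OPR ERR"                          # unknown combination

print(execute(16, 36))   # → V16 N36  MISSION CLOCK: 00:04:32
print(execute(99, 1))    # → OPR ERR
```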

GEROVITCH: All right. Thank you very much.

PRESENTER 7: Thanks.

GEROVITCH: I will be in touch. As soon as we have a transcript, I will send it to all of you for corrections and details and all of that.

PRESENTER 1: I'd appreciate a copy of the transcript from the prior meeting, so I can know--

GEROVITCH: All right. Sure. Sure.

PRESENTER 6: That hasn't been distributed yet, has it?

PRESENTER 7: None of us have it yet.

PRESENTER 1: No, but-- but you know--

[INTERPOSING VOICES]