Henry W. Kendall, “Finding Quarks With Electron Beams” - LNS46 Symposium: On the Matter of Particles

[MUSIC PLAYING]

KENDALL: Thank you very much, Francis. Yesterday, Francis took me aside and said, in very earnest terms, I very much want you to give a good talk, Henry.

[LAUGHTER]

I said, well, I will do the best I can. I can't promise anything. I think it's an unrealistic expectation. People have unrealistic expectations from time to time in our society, and that reminded me, actually, of a bumper sticker that I saw here some time back. It said, in clear block letters, US out of North America. Without arguing the merits of that case or the lack of merit, it did seem, like Francis's suggestion, to be somewhat unrealistic.

The subject of the talk today-- the subject, certainly, is well known, I'm sure, to most of you in the audience. It is a series of experiments that were carried out at Stanford by an MIT/SLAC collaboration that started in the early 1960s. It started to take data in the deep inelastic reaction in 1968, and wound down in the early 70s rather slowly. It was a very interesting series of experiments for the people in it. We enjoyed it enormously at the time. We were aware of how much fun it was to do that.

Part of this has to do with what Martin and other people-- Martin Deutsch in his earlier talk, and others-- referred to as the golden age. And if you remember the rather poignant financial diagram that Bill Wallenmeyer showed at lunch the other day, you remember that the funds, in what I referred to as the golden age, climbed up in the period up to about 1970, and then coasted down again, and leveled out for some years. So the golden age is associated, at least in some part, with another translation of the word gold, which is "gelt," which was the funds that kept it all going. And it really was, in many ways, an era which had, for a period, apparently unlimited possibilities.

The MIT/SLAC collaboration-- part of this was initiated by Jerry and myself in the early 1960s. We had both worked at Stanford and been on the Stanford faculty, working in Robert Hofstadter's group on electron scattering, and when the large machine-- the two-mile accelerator-- was being contemplated and, in fact, construction plans were being laid, we wrote Pief Panofsky a letter and said we would like to join in that group, almost all of whom were our old friends.

Well, it was not just the two of us that were involved. This is a list of the collaborators in that, and I've taken the liberty-- because this is an LNS camp gathering-- to underline in red those people who either were at MIT at the time of the collaboration or had been educated at MIT, and you could see there were enough of us so the SLAC people could not reliably get away with anything.

[LAUGHTER]

We-- Jerry and I, particularly-- were enormously well, bountifully supported, not just financially-- although that seemed to be the tenor of the time-- but also, in some sociological sense, by the physics department under then-chairman Bill Buckner, who many of us remember. He basically allowed Jerry and me to become a single hyphenated professor in the department, and we were able to split the travel to SLAC between us in any way we chose. I, being single at the time, over not quite 10 years, spent nearly a third of my time out there. It was one of the first big, heavy-duty collaborations that people in the physics department had engaged in, and it set the tone for a lot of the subsequent good collaborations that have occurred between LNS people and work at distant accelerators.

Now, to begin, to lead into the subject of electron scattering, both of us, Jerry and I, as I said, had been with Hofstadter's group. Hofstadter's principal and quite great contribution was to study the elastic scattering of high energy electrons from the proton and from the neutron, and in summary, the sort of thing he found was that as you increase the momentum transfer, the observed scattering cross-sections coasted down much faster with momentum transfer than one would expect from the scattering of a point charge. And this is due directly to the fact that, as you increase the momentum transfer, the contributions to the outgoing wave from components distributed over space interfere more and more and give you this dramatic decrease.

And this is interpreted, and was interpreted at the time, as the result of the finite distribution and finite structure of the proton. And later experiments have carried that curve out further. And unlike the nuclear atom, there was no hint in this extended data of any hard single center in the proton. And there were physicists who concluded publicly that therefore, the proton had no hard constituents and no core in it. At least, certainly not a single core.

Now, the reaction which we studied was the inelastic scattering of electrons. This is the Feynman diagram for it. The electron travels along here in vacuum. There is a single interaction. Here comes the proton. There is some momentum transfer, which can, in the laboratory, be adjusted, so that one knows what it is. There is an energy transfer-- just the difference between the incoming and outgoing electron energies. And there is some amount of energy communicated to the proton-- or neutron, as the case may be-- in simply blowing it up. And that becomes, in the inelastic case, a variable, which is subject to adjustment. And you study the cross-sections over a range of inelasticities.

This is a set of all-purpose equations that govern the process, because the elastic scattering is unique in that there is no energy communicated to the struck system, except recoil energy. The inelasticity here has a unique value, which is for that reaction alone. Momentum transfer is given here. I don't need to go over these equations in detail, except to note one or two features that I'll come to later and use in some of the later slides.
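
In conventional notation (a reconstruction of the standard relations, not the slide itself), the kinematic quantities referred to here are
\[
Q^2 = 4 E E' \sin^2(\theta/2), \qquad \nu = E - E', \qquad W^2 = M^2 + 2M\nu - Q^2,
\]
where E and E' are the incident and scattered electron energies, theta the scattering angle, M the proton mass, and W the invariant mass of the recoiling hadronic system. Elastic scattering is the special case W = M, that is, 2M nu = Q squared.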

Much of the development of the central scattering equation depends on known quantum electrodynamics and it is not a subject of investigation. But the unknown structure of the target system is buried in these two functions-- W2 and W1, each of them, in principle, a function of two variables, the inelasticity nu and the momentum transfer.
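
The "central scattering equation" in question is, in its standard form (again a reconstruction in conventional notation),
\[
\frac{d^2\sigma}{d\Omega\, dE'} = \frac{\alpha^2}{4E^2\sin^4(\theta/2)}
\left[\, W_2(\nu, Q^2)\,\cos^2(\theta/2) + 2\,W_1(\nu, Q^2)\,\sin^2(\theta/2) \,\right],
\]
with the QED part fixed and all of the unknown target structure carried by W1 and W2.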

Now, without going into the details of it, if you look at the scattering in terms of virtual transverse and longitudinal photons, there is a quantity of interest, which was measurable in some of the later stages of our experiments, which is conventionally called R, the ratio of the longitudinal to the transverse yields. And it turned out, as we will see in a minute, to be useful to determine that because of the role it played in the validity of some of the models.
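
In the usual conventions, and assuming the standard definitions of the virtual-photon cross-sections, R is
\[
R(\nu, Q^2) = \frac{\sigma_L}{\sigma_T} = \left(1 + \frac{\nu^2}{Q^2}\right)\frac{W_2(\nu, Q^2)}{W_1(\nu, Q^2)} - 1,
\]
so a measurement of R fixes the relative size of W1 and W2 at a given nu and Q squared.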

Now, it's one thing to write down a Feynman diagram-- that is simply paperwork-- but to get at the heart of these measurements, you need equipment. And this is not Martin Deutsch's tabletop equipment anymore. Here is the two-mile linear accelerator at Stanford. This is Route 285, which is indeed out of the radiation field. It is also not true that the San Andreas fault cuts across the accelerator line. It is, in fact, a quarter of a mile from the gun.

This machine, the actual accelerator itself, is roughly 40 feet underground. It was at the time-- and may still be-- the single largest single-cantilevered structure that the human race has constructed. And at the time the device was built, it was certainly the most precise machine of any sort that had been built. The beams, after the acceleration was completed-- which would be below that point roughly on the diagram-- were sent into a beam switchyard and could be energy analyzed and deflected into one of several laboratories. This early picture does not show the colliding beam facilities that were installed later.

Our experiments were done with a deflection toward the north into this rather large 250-foot shielded building. These beams are biologically very potent, and the whole building was heavily shielded. One could not go in it during machine operation. The emergent beam came out through a pipe here, and then was buried in the hill. This hill never got the quixotic names that the earlier hill had in the High Energy Physics Laboratory, where it was called Mount Panofsky. And rumors were that giant mutant frogs would lope around in the evening, having been affected by the radiation.

You will instantly recognize this diagram as an experimentalist's Feynman diagram. And like the earlier Feynman diagram, the initial beam comes down here in vacuum. In our case, in an aluminum pipe. I'm tempted to refer to it as the false vacuum, but it really isn't. It's an imperfect vacuum.

Electrons are scattered in a hydrogen target there into one or another of the large magnetic spectrometers, which were built for the purpose. The detector arrays enable one to determine the precise scattering angle. Actually, a number of scattering angles were accepted by the equipment and determined by the detectors. The recoil system in our experiments was let go by itself. We made no attempt to do coincidences or to study it at all.

This is just a diagram of one of the two large detectors that we used. They were fairly substantial devices. The primary beams came down here and were measured, brought to foci, their intensities determined, positions monitored, and so forth, with liquid hydrogen targets. They were typically 9 inches or a foot in diameter or so, some smaller, some larger, and, upward, a double magnet bend into a substantial shielded detector housing.

These devices are not large now, by the conventional detector sizes used at the collider, but at the time, they were very large instruments, weighing of the order, with the shielding, of several thousand tons. These things opened up like clam shells, giving us access to the elements. MIT did much of the detector design and construction, as well as the electronic systems in the counting house.

Now, let us leave the experimenters to their switches and fuses for a minute, and turn to a theoretical interlude, which I'll call the 1968 hadron, which is the scene, so to speak, at the time our experiment started. As far as the experimental situation was concerned in the late 1950s and 1960s, there had been a remarkable series of discoveries that involved identifying dozens of hadrons-- hadronic resonances-- at the strong interaction machines.

There had been quite a body of work done studying pion and proton scattering from nucleon targets, and, the energies not being as high as those now available, it turned out that those studies showed what you would call soft constituents and soft interactions. Namely, cross-sections which decreased exponentially with increasing perpendicular momentum. And coupled with the Hofstadter results, the general picture of the hadron was a rather soft device, which underwent what you would call soft interactions.

With respect to classification, Murray Gell-Mann and others had helped elaborate the so-called eightfold way, which had classified these numerous new discoveries into identifiable and sorted arrays. And in 1964, Gell-Mann, and independently, George Zweig proposed that the basis for these classification schemes could be constructed with a mathematical entity which Murray called quark.

There's been some discussion about the origin of the name, and from its inventor, I will tell you that he invented the name first, out of blue sky, as a word. Just a sound, which he would have spelled Q-U-O-R-K. Later, he found the word used in-- I think it was, what? Finnegans Wake. And he adopted that spelling, but retained the old pronunciation. So that's where it came from.

They had the unusual property of requiring fractional charges, and it was quite a successful scheme, theoretically. The proposal of quarks stimulated a number of quark searches, which went on for a number of years. And quarks were searched for in cosmic rays without success. They were searched for by hopefully being produced in accelerators and identified there without success. And they were searched for in an array of experiments reminiscent of the Millikan oil drop experiment, searching for fractional charges without success.

Separately, there was a body of theory-- the S-matrix theory-- with a number of elaborations, which dealt quite well with the then-known scattering cross-sections of various kinds. Among these, and I don't want to go into them in great detail, was vector meson dominance. Because of the failure to identify isolated quarks in the various classes of experiments in which they had been searched for, the proposition that nucleons were, in fact, constructed of quarks had a rather rough time.

There was a very small industry that looked at the proposition and tried to calculate the nucleon properties and resonance properties, based on what was called-- is called-- the constituent quark model. And the constituent quark model was, to some extent, different from the quarks that were required as the basis of the classification schemes. And this constituent quark model attracted very, very few adherents and proponents. A few people persisted-- Dalitz, Morpurgo, and others-- primarily dealing with low-energy reactions of one sort or another and hadron characteristics, but not attempting to push these to high-energy, high-momentum transfer reactions.

The reason that most of the theoretical community-- including, incidentally, some of its very distinguished members-- found that the whole proposition of constituent quarks was unattractive was based, as I mentioned, partly on the idea, partly on the experimental evidence that nobody had produced them or seen them. Also, on the theoretical requirement that in order to be satisfactory in the models, they had to employ very, very strong binding, which was upsetting.

And the problem is that while these things were expected to be spin one-half, they had symmetric statistics, which was troublesome. And of course, the fractional charge violated every piece of experimental evidence that had, before that time, been accumulating. Now, some of these constraints were removed in some theories, but nevertheless, the aggregate of them made a constituent quark picture quite unattractive. And it was generally not believed.

One other thread was the set of theories called current algebra. It was initiated by Murray Gell-Mann, and was a quite abstruse theory. The experimenters in our group-- I think we generally did not understand it really very well. And its predictions did not play any real part in the planning of our experiment. But it did have one current, so to speak, that affected us later. Based on the current algebra ideas, there was started, first by Adler, and then Gross, and Llewellyn Smith and others, a sum rule industry. And Jim Bjorken, who was an old friend, became involved in that, and used those techniques to predict something which I will come back to later, called scaling.

So to summarize the 1968 hadron, there was, at best, a rather cloudy idea of the dynamics. There was a quark model whose successes were in the mathematical domain, but whose practical application as constituents was simply not really believed. There were a number of candidates with theories which, in principle, would apply to the sorts of reactions we were about to study. One of them, as I mentioned, was vector meson dominance-- Sam has talked about that-- a heavy photon, if you like.

The reason it was a live candidate was because it had been quite successful in processes that involve real photons, and as I've noted on this view graph here, many people believed that it would succeed in dealing with the virtual photons, which we could adjust in our experiments to be quite far off the mass shell. Nevertheless, the expectation was that this process would happily let us understand the deep inelastic cross-section. And one of the consequences of adopting this view was that it was not expected that one would see anything more than a continuously fuzzy proton and neutron.

A couple of quotations on the possibility of real quarks being constituents of nucleons and other strongly interacting particles-- this is a remark of Murray Gell-Mann's, or two remarks, taken from a Physics Letters paper of 1964, in which it is quite clear from what he said that while he doesn't utterly reject the idea that there might be constituents in there, he certainly doesn't think that's the first thing. He's certainly not saying, well, I think they're there. Go look for them, folks. Quite the opposite. And an even stronger statement in Kokkede's book on the quark model, in which he almost refers to the idea in scathing terms, referring to it as a tentative and simplistic expression, and ill-founded.

Well, so I think one could feel that we were not too badly off-base designing equipment to study the rather low cross-sections that were expected on the basis of the current thinking. When we got to make the first inelastic measurements, we found something quite surprising. This is one of the first surprises. And this is the earliest data in which we observed it-- as a function of momentum transfer horizontally, two different cuts at 2 GeV and 3. You see the data moving across here with hardly any momentum transfer dependence at all, compared with this, just arbitrarily normalized: the consequences of elastic scattering, with their enormous decrease in yield-- that is a consequence of the finite structure of the aggregate proton when it is undisturbed in the elastic reaction.

So this was certainly a discovery. And when the dust had settled and we had a chance to look around later and compare what our measurements were with what our earlier expectations had been, we found this. Measurements that were enormous compared with the predictions on which the experimental equipment had been based. Lo, there's a factor of over 40 here. This was taken at a primary energy of 16 GeV-- primary electrons at a scattering angle of 6 degrees. Such deviations become even more pronounced at larger momentum transfer.

The second discovery, in a sense, was directly suggested to us. I happened to be doing the data analysis at the time, so I was by chance the fellow that first saw this. What happened was we-- and I-- had been looking at the inelastic spectra. And here are six of them at different-- this is now plotted in a different way than the earlier graph. Each of these spectra is at a constant momentum transfer, and it's plotted as a function of energy loss.

These are the two independent variables that you have, in principle, available. And you can see on this plot that the data scatters around. And you can take my word that if one expands the range of momentum transfer, the rest of this region in here eventually gets filled with data, and there's no obvious pattern in that.

Jim Bjorken came to see me one day, and in his rather gentle, hesitant way, he had something very powerful to say. He said, well, why don't you look at the data that you have accumulated as a function of a single variable, a combination of nu and Q squared, and see whether this doesn't explain, in a consolidated way, what you are measuring. And I did that.

There was one parameter in this that we did not, at that stage, know, not having any measurements of it-- that was that quantity R that I mentioned earlier, the ratio of longitudinal to transverse. I took that data and plotted it for the two limiting cases, R equal infinity and R equals zero. And what happened was a spectacle. I mean, essentially under my eyes, this data consolidated into a compact universal curve for either of the two limiting cases.

And I remember at the time the little tingle I had looking at this, and I recall thinking how Bohr must have-- not Bohr, but Balmer-- must have felt when, with his empirical expression, whose origins he had no idea of theoretically, he had found this beautiful understanding of the series in the hydrogen atomic spectrum. So it was clear that B.J. had really gotten his hands on something here. And that was scaling. And this first plot that you see here was the discovery of scaling in deep inelastic electron scattering.

As time went on and as the program advanced, we got much better data. We were able to unscramble and separate the contributions from the two inelastic form factors that I mentioned. Each of them turned out indeed to be a function of a single variable. I'll come to its specifications in a minute. But you see them here, consolidating a wide range of nu and Q squared into two relatively universal curves.

Now, as the experiments continued and became more precise and we got more data, it was found that there were minor variations-- that this was not an absolutely strict rule. There was some scale breaking, but it was of a very small character compared to the gross scaling that was observed. So this has ultimately become, effectively, almost a law of nature.

Now, to be specific about what B.J. had done, as I mentioned, based on a rather arcane application of current algebra, he had predicted what now bears his name-- Bjorken scaling and the Bjorken limit. To be specific, in the limit in which the energy loss goes to infinity and the momentum transfer likewise goes to infinity, with the ratio of M nu to Q squared held fixed, the quantity omega becomes the variable on which these structure functions depend.
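
In the now-standard notation (conventions for the scaling variable vary; this is the form generally quoted), the prediction was
\[
\nu\, W_2(\nu, Q^2) \;\to\; F_2(\omega), \qquad M\, W_1(\nu, Q^2) \;\to\; F_1(\omega), \qquad \omega = \frac{2M\nu}{Q^2},
\]
in the limit in which nu and Q squared both go to infinity with omega held fixed.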

I'll pass over the lower part, which is elaborated in this next slide, because it was not very long after those first measurements, when people were scratching their heads over their meaning, that Feynman came up to visit us in the summer of 1968 and heard about these measurements. And he had been attempting to understand hadron-hadron interactions on the basis of a constituent model-- not quarks, just constituent parts of a proton, which he gave the obvious name partons to. And Feynman went down to his motel for the night after talking to Jerry and others, and the next day, he came back full of interest and excitement, and with the following picture: that the electron's virtual photon was interacting with a parton, nature unknown.

And one explained the large cross-sections by the fact that these partons were seen to be, essentially, point particles. That took care of that part of the paired discoveries. And second, the scaling variable came directly out of this picture, because we were scattering elastically from one or another of the particles in there-- point-charge scattering-- the difference between scattering off of one of those and off the proton being that, A, they each carried some fraction of the proton's mass, and they were in motion. And the scaling variable is, in fact, just that relation which I showed you earlier for elastic scattering from the proton, with the single proviso that you have a quantity in there, x. It represents that fraction of the proton's momentum that the parton happened to be carrying when it was struck, and it is struck as, essentially, a free particle.
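
A sketch of that kinematic argument, in conventional notation: if the struck parton behaves as a free particle carrying a fraction x of the proton's momentum, the elastic condition quoted earlier applies with M replaced by xM,
\[
2\,(xM)\,\nu = Q^2 \quad\Longrightarrow\quad x = \frac{Q^2}{2M\nu} = \frac{1}{\omega},
\]
which is why the inelastic cross-sections depend on the single variable x (equivalently omega) rather than on nu and Q squared separately.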

This is a picture that I happened to snap of Feynman when he came back late that fall. This is a muddy Xerox of it, but there is a color picture over in the Johnson Center, where you can see it in better detail, with his enormous vitality and interest. And in that October, he gave his first public talk on the parton theory, the parton not yet being identified with quarks.

It was a joy to see him operate. Here is a little paragraph from the book he wrote where he put these things together a few years later. Those of you who knew him can kind of visualize the enthusiasm he brought to it. I mean, look at the wording of this-- "an exciting adventure to try the idea" that these things are simply quarks, and to meet and surmount the then-still outstanding obstacles to that interpretation.

Well, the experiments did not stop there. It was a program that went on for a number of years. And the stimulation these results gave to the theoretical community initiated a very interesting, almost fascinating interplay of theory and experiment, which went on for a number of years, eventually drawing in other laboratories with confirming measurements. The experimental results were themselves never challenged, and were confirmed as other laboratories looked at similar, analogous reactions.

But the interpretations took a long while to develop. It was like watching the grass grow, in total contrast to Richter's discovery of the psi particle, the J/psi, where at 4:00 in the morning, very little was happening, except the humdrum, incremental studying of counting rates, and by 8 o'clock, they were breaking open champagne bottles and putting vodka in the orange juice. This is quite different, in our case.

But within a few years-- no, very quickly, almost the same year-- Callan and Gross showed that, in the framework of this Feynman parton model, you could begin to get a handle on the spins of the constituents: a particular combination of these form factors was one if the spin was a half and zero if it was zero. And remember, the mathematical quarks were to be spin one-half, and very quickly, we had an experimental result. You see the red line is the prediction for spin a half, the green line is spin zero, and you have no problem deciding on that one.
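
The Callan-Gross combination referred to here is, in the scaling functions,
\[
\frac{2x\,F_1(x)}{F_2(x)} = 1 \quad \text{for spin-}\tfrac{1}{2}\ \text{constituents}, \qquad
F_1(x) = 0 \ \ (\text{so the ratio is zero}) \quad \text{for spin-}0,
\]
which is why the measured combination discriminated so cleanly between the two cases.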

The vector meson dominance proposal didn't last very long, either. They, too, ran afoul of some measurements. Very soon, we were able to make determinations of R not only for the proton, but for the deuteron, and by subtraction, for the neutron itself. The vector meson dominance predictions went up in here. There was some uncertainty of the angle, but strikingly different expected behavior than the data showed. And there went the vector dominance models.

Well, as I mentioned, this interplay of theory and experiment continued on with more measurements involving the nucleon now-- that is, our deuteron and neutron measurements. And let me just wind this down by going on a brief historical tour of what happened during the decade of the 1970s after the original discovery.

I've told you about the early theoretical circumstances. SLAC itself had been proposed in the period when many of the resonances were being discovered for the first time. The classification schemes came in the early 1960s. Getting SLAC approved by the Congress was a fairly arduous problem. There was considerable opposition to it-- to some extent, similar to what has happened with the superconducting supercollider-- but it was indeed approved in 1961. The joint collaboration between us and them started the following year.

The shovels were out in 1963. The machine went on the air in 1967, on time and within budget. And they essentially turned the switch, and it came right up to beam. We just got used to that. What else? That does not always happen in later times.

In 1968, as I mentioned, the first deep inelastic proton scattering. Very quickly, Feynman came in on that with the results I've discussed. Those results were presented in Vienna that September. I've shown you the slide of Feynman actually giving his first talk at SLAC. Callan-Gross and the potential determination of the spin came in November. Bjorken and Paschos were doing sum rule evaluations as part of B.J.'s work on current algebra. And in 1969, our first experimental results on R, and the first papers were published that year.

Then other laboratories came in on the basis of the general interest and excitement that had been generated. One of the predictions from the parton theories was that the neutrino scattering data would show cross-sections which increased linearly with the primary energy. And CERN went back and looked at earlier data they had taken on that reaction, on neutrino cross-sections, and discovered indeed that it increased linearly. So we were beginning to get outside corroboration of the results.
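
Schematically, and up to the standard numerical factors, point-like constituents give a charged-current neutrino cross-section
\[
\sigma^{\nu N}(E_\nu) \;\propto\; G_F^2\, M\, E_\nu \int_0^1 dx\; x\left[\, q(x) + \tfrac{1}{3}\,\bar q(x) \,\right],
\]
that is, a total cross-section rising linearly with the neutrino energy, which is the behavior the CERN data showed.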

The following year, our electron-deuteron scattering program started, and we very quickly discovered that the neutron/proton ratio was not one-- as a number of the earlier soft proton models suggested it might be-- but dropped dramatically. And by the latter part of that year, it was already clear that the parton model was beginning to be accepted, that the evidence was beginning to accumulate.

By the following year, the neutron/proton ratio was found to decrease dramatically, down almost to a quarter. And the theoretical community simultaneously was beginning to focus more and more on the nature of the constituents, on the nature of the partons. There was a theory that Drell looked at, assuming they were nucleons, if I remember. But Julius Kuti and Viki were beginning to look at the first of the quark models.
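
In the quark-parton picture that was emerging, this ratio is bounded by the squared quark charges,
\[
\frac{1}{4} \;\le\; \frac{F_2^{\,n}(x)}{F_2^{\,p}(x)} \;\le\; 4,
\]
with the lower limit corresponding to the u quark (charge +2/3) dominating the proton at large x, so a drop toward one quarter was itself suggestive of fractionally charged constituents.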

The dedicated neutrino experiments were beginning to show results by late 1971, and the whole body of diffractive models was beginning to be weeded out and discarded. And later, some of the more abstruse predictions of what then became called the quark-parton model began to get some experimental verification. The anti-neutrino/neutrino cross-sections turned out to favor spin one-half, as the earlier SLAC results had done. Because the neutrinos have no charge, the fractional-charge peculiarities which were inherent in the electron scattering cross-section did not affect the neutrino cross-section, so it was possible to make a cross-determination to see whether the fractional electric charges were, in fact, what one was seeing in the SLAC yields. And lo and behold, the fractional charges turned out to give predictions in agreement with the measurements.
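
The comparison can be sketched with the standard quark-parton formulas (a reconstruction, not the analysis of the time): the electron structure function weights each quark flavor by its squared charge, while the neutrino one does not, so for an approximately isoscalar target built of u and d quarks
\[
F_2^{\,eN}(x) = x \sum_i e_i^2 \left[ q_i(x) + \bar q_i(x) \right], \qquad
\frac{F_2^{\,eN}(x)}{F_2^{\,\nu N}(x)} \;\simeq\; \frac{e_u^2 + e_d^2}{2} = \frac{5}{18},
\]
and it was agreement of this kind that tied the electron and neutrino yields to the fractional charges +2/3 and -1/3.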

There was a confirming measurement of the number of valence quarks. We had found from some early evaluations that we could not account for all of the momentum carried by the proton by looking at sums over the cross-sections, and it was concluded that at least some fraction-- which looked like about half-- of the proton's momentum was carried by entities with which the electrons did not interact. And these are now known as the gluons. So this was a quite indirect determination, but that's where the gluons first started to show their heads through the shrubbery.
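
The bookkeeping being described is, schematically, the momentum sum
\[
\int_0^1 dx\; x \sum_i \left[ q_i(x) + \bar q_i(x) \right] \;\approx\; 0.5,
\]
that is, the charged constituents seen by the electrons account for only about half of the proton's momentum, the remainder being attributed to neutral constituents-- the gluons.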

By 1973, a theoretical development by Politzer, Gross, and Wilczek began to tell us how it was that we could conduct scattering from what appeared to be a free quark. And yet, these things remained bound-- were not visible as free entities. That was a very important theoretical development. And then on the heels of that came the formulation of quantum chromodynamics, and the beginnings of the understanding of the small, but quite clearly observed, scale breaking that we were seeing.

Well, you can look at the rest of these results. What went on through the subsequent years has been the subject of a good deal of what Sam has been talking about, and of earlier talks, and I think I don't have to go through that. But what happened over the rest of this decade was a slowly consolidating feeling that indeed, the partons were quarks.

Part of this was the discovery of the J/psi and other things, which you see. And by the time the decade had wound down, the 1968 hadron was gone. It was gone forever. And it was replaced by a nearly complete theory-- quantum chromodynamics-- which had, by then, been linked with the weak interaction theory to form a theory which, in principle, one could calculate with, and which entirely replaced what had been in place before our experiments had started. Thank you very much.

[APPLAUSE]