MIT MechE Symposium: Mechanical Engineering and the Information Age - Douglas Hart, C. Forbes Dewey Jr., Peter T.C. So and Sanjay Sarma


[MUSIC PLAYING]

HART: Anyway, [INAUDIBLE], who had some very exciting things to talk about from then rising star Ron Adrian, who was at the University of Illinois. And that's how I-- it was that talk. And in the back of the room, I heard-- Hans Liepmann, who happened to be sitting in the back of the talk with me, commented about "picture physics." And I later found out that it was a term coined by von Karman, who detested any kind of imaging and viewed it as pseudoscience.

And so now I display it quite proudly. With the invention of CCD cameras and computers, we've gone from picture physics as a term of disdain to one of-- well, a badge of honor. The very heart of all machine vision applications, as I'll talk to you, is founded in correlation technology. And correlation technology, that is-- it's the purest form of image processing to some extent, because it's based entirely on the statistics of the intensity profile.

And so everything from video compression, where one is trying to detect the motion of an object in a plane of view, to pattern recognition, where one is trying to measure the sizes of objects in production and so forth, is based on correlation. It's not used very often, though. And the reason it's not used is that it's very computationally costly. And it's slow. And it has some other drawbacks. And because of that, people abandon it and try to search for other ways to accomplish the same process.

And one of the things we discovered-- or I should say, discovered early on-- was that if one is trying to do local correlation for whatever reason, pattern recognition, sizing, there's a lot of redundant information. And one need not resort to the standard technique, FFT spectral correlation, to do this. FFT spectral correlation was always believed to be, by far, the most computationally efficient method of doing this.

What I stumbled into was the idea that, well, there's a lot of redundant information. And the reason you use that redundant information is to develop a very high statistical probability of finding the correct solution in a local correlation. And if one goes back into information theory and says, well gee, the information content of a single pixel has to do with the probability that that pixel, or that intensity, exists within some region.

And so pixels with low probability have very high information content. And pixels with high probability have low information content. For instance, if you had a black image with a few white specks, the white specks contain all the information, and the black background has very little information. So with this idea, I said, well, why not compress the image and correlate only on the high-information component of the image? And one can, in fact, do that. And the trick is, how do you do that? And I won't go into the details here.
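
To make the idea concrete, here is a minimal sketch of information-weighted compression along these lines; the selection rule and the keep_fraction parameter are illustrative assumptions, not Hart's actual (undisclosed) algorithm:

```python
import numpy as np

def high_information_mask(img, keep_fraction=0.01):
    """Keep only the rarest (highest-information) pixels of an 8-bit image."""
    hist = np.bincount(img.ravel(), minlength=256)
    p = hist / img.size                  # probability of each intensity value
    info = np.full(256, np.inf)          # information = -log2(p), in bits
    nonzero = p > 0
    info[nonzero] = -np.log2(p[nonzero])
    per_pixel = info[img]                # information content of every pixel
    cutoff = np.quantile(per_pixel, 1.0 - keep_fraction)
    return per_pixel >= cutoff           # sparse mask; correlate only here
```

A correlation restricted to the surviving ~1% of pixels touches far fewer terms than a full FFT correlation, which is the source of the speedups quoted next.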

But typical images can be compressed tremendously in this way. 100 to 1 is not uncommon. You don't recognize what the image looks like anymore, but the image can correlate quite accurately. And what you find out is that you get about 1,000 times faster than FFT processing if you do that.

It's even better than that. And the reason it's better than that is because it involves integer mathematics that are very, very well-tailored to hardware. Right now, on a standard PC, we can get about 10,000 vectors a second over a 32-by-32 block. And we have a hardware device that we're actually putting together right now which we're expecting to do a million vectors a second. So this is extremely fast by today's standards of being able to process.

So what can you do with it? Well, we didn't stop there. And the reason we didn't stop there was we said, well, there's got-- one of the problems with correlation is it's a statistical process, and therefore there's a certain probability that you'll find the wrong answer. And that became quite awkward for us. So we developed yet another technology.

And that is, if you take two adjacent regions, and you correlate those regions, and then you multiply the correlation tables element by element-- you're taking a zeroth-order correlation of a correlation; if you will, it's a second-order correlation-- what you find is that all the noise is eliminated. If this is a correlation table and this is the correlation peak, perhaps, then anything that doesn't correlate within both areas gets eliminated. And you end up with a very, very clean signal, a very high signal-to-noise ratio.
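
Since the talk doesn't give an implementation, here is a sketch of that element-by-element product under common conventions (FFT-based circular correlation, 32-by-32 windows; names, normalization, and sign conventions are simplifications):

```python
import numpy as np

def cross_corr(a, b):
    """Circular cross-correlation of two equal-sized windows via FFT."""
    A = np.fft.fft2(a - a.mean())
    B = np.fft.fft2(b - b.mean())
    return np.fft.fftshift(np.real(np.fft.ifft2(A * np.conj(B))))

def second_order_displacement(frame1, frame2, y, x, n=32):
    """Multiply correlation tables from two adjacent n-by-n regions;
    only a peak present in both survives the product."""
    win = lambda f, yy, xx: f[yy:yy + n, xx:xx + n]
    c1 = cross_corr(win(frame1, y, x),     win(frame2, y, x))
    c2 = cross_corr(win(frame1, y, x + n), win(frame2, y, x + n))
    product = c1 * c2                       # element-by-element product
    dy, dx = np.unravel_index(np.argmax(product), product.shape)
    return dy - n // 2, dx - n // 2         # displacement from window center
```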

One of the very interesting things, it turns out, is that not only does it give you a very high signal-to-noise ratio, but it has a very wonderful property. If these are two regions I'm going to correlate and I have some sort of gradient in displacement occurring between the regions, then the correlation table yields a peak that is smeared crosswise to the gradient of the displacement. Then if you multiply the correlation tables together, what happens is this part doesn't match with this part. And this part doesn't match with this part. And it forces a beautiful Gaussian-- that's not Gaussian, but I can't draw-- Gaussian profile that occurs at exactly the interface.

So not only do you have this advantage where you've boosted your signal to noise ratio-- it actually goes as an exponential function-- but you've also managed to narrow the exact location in which that correlation occurs. So you know precisely in the image where that is. So now you have a method in which you can have-- you have a very, very high probability of being able to locally correlate. And you have a very fast algorithm.

What can you do with all this? Well, I said I was a fluid dynamicist, and that's how I got involved in this. And indeed, what you're looking at, if it were in a little bit better focus perhaps, is an image taken from a standard off-the-shelf CCD camera-- nothing fancy. And these are 60,000 independent velocity vectors of a swirling flow undergoing a sudden expansion.

This is resolved down to the limits of what we can image. And I think if you look at just one little, tiny section, you can see just how fine the resolution is. We can pick up individual vortices and so forth. And we can do this-- you know, this was 6 seconds of processing to do. Our new system, we're hoping to do this in real time-- video rate being real time.

So the technology is quite exciting. With this kind of accuracy, this kind of resolution, one is able to resolve flow structures, in this case, to amazing levels. We're not stuck in 2D. We can do it in 3D.

There's a holographic cube. It was done by Joe Katz. It was processed by us. There's over 10 million vectors in this cube, just to give you an example of the type of stuff we're doing.

So that was-- I'll get back to this technology in a second. The other thing we became interested in was a technology of laser-induced fluorescence, which, if you look at the information content within an image, laser-induced fluorescence is very hard to beat. It has-- holography and laser-induced fluorescence, or LIF, have the highest information content one can achieve with a camera. But the problem you have when you try to implement this in real life is that it requires some sort of light source. And that light source always becomes distorted. And so it's very hard to use as a quantifiable measure other than to track some velocity, or displacement, or such.

Well, the biologists loaned us an interesting technology for measuring calcium. And we adopted this. What we did was we went in and we put two different dyes into the oil. And the dyes have this interesting characteristic. If you look at-- and here we were using it on oil, so that's why it says "film thickness," although it could just as well be intensity of the laser, if you will. And here's the emission. And the dye emission goes up. And then it saturates at some point, either with the intensity or, in this case, film thickness.

It turns out that one can use the ratio between these two dyes to eliminate the non-uniformity effects of the imaging properties. And in this case, we're using the nonlinear part of the two. And if one takes a ratio between the two, this ratio becomes a very strong function of film thickness.
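
Here is a minimal sketch of that ratiometric step, assuming a monotonic calibration curve has been measured beforehand (the function and variable names are illustrative); the same ratio trick carries over to the temperature measurement described later:

```python
import numpy as np

def film_thickness_from_ratio(img_dye1, img_dye2, calib_ratio, calib_thickness):
    """Per-pixel emission ratio -> thickness via a calibration table.
    Dividing the two images cancels illumination non-uniformity, since
    both dyes see the same excitation light. calib_ratio must be an
    increasing sequence paired with calib_thickness."""
    ratio = img_dye1 / np.maximum(img_dye2, 1e-6)  # avoid divide-by-zero
    return np.interp(ratio, calib_ratio, calib_thickness)
```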

And we built a camera system. This is a dual camera system. It's two thermally-cooled 12-bit CCDs. They're coupled through what's called a multi-image module, which is made primarily for microscopes. We added a lens on the outside of this. And we have a dichroic mirror on it. And we can separate the two colors between the two cameras.

And when you do this-- just to give you an example, here's a case where we have some varying film thickness. And you can see the rings caused by the optics of our laser system. And when you take the ratio of this, it completely flattens out the profile. And one is able to measure the relative film thickness simply from this ratio very accurately.

One can also-- well, let me-- before I get on, let me show you another-- an application of this, one of the things we did just for fun. But it actually turned out quite interesting. We put a window in the side of a diesel engine. We fired a laser into the diesel engine as it was running. We imaged it with our camera system. And we were interested in the oil transport through the ring pack as the engine ran.

And we were able to measure oil film thicknesses accurately down to the half-micron level as the engine's running. This is actually a movie. I don't have it with me. But there's a very interesting phenomenon. It became a best seller in Detroit, I heard.

It's so accurate, in fact, that you can measure the hook on the ring, which is-- it's about, oh, maybe 200 microns or so wide here. And you can detect the oil film thickness in the half-micron range within that ring. We weren't stuck just measuring oil film thickness. We can do the same thing, and we can measure temperature.

Here we have characteristics of two dyes. We have one that's thermally stable with temperature and one whose fluorescence drops with temperature. You can mix these and take a ratio in much the same way we did oil film thickness. And if one does this-- here we have oil film thickness. And we have a fixed temperature. And you can see the gradient due to the film thickness change. But you can also see the laser rings. And when you take the ratio, you get a beautiful, flat profile that shows constant temperature.

And in fact, we calibrated this. There are a number of problems in doing this technology for temperature, not the least of which is nobody has any information on temperature effects on dyes. So it requires-- but we've been able to calibrate this relatively accurately. You see this is from 0 to 200 microns, oil film thickness. We're using this in seals as our primary application, looking at seals and bearings.

So once we had this technology, we went back to the correlation technology and said, gee, why are we stuck doing it this way? Why not take and manufacture small spherical particles that have two dyes in them? And then presumably, we'd be able to measure the temperature of a fluid at the same time we can measure the velocity of the fluid.

And so these are made by pearl polymerization with two dyes, pyrromethene and rhodamine. And these are about-- they range, I guess, from 3 to 20 microns in diameter. And they're made using an ultrasound process.

And lo and behold, this is a-- we imaged it with a stereo camera pair. This is a hot-- or cold fluid entering a hot tank. And we got the three-dimensional velocity profile of the jet, and we get the temperature profile as well.

Just to show you something a little more dramatic, there is the swirling flow coming in, sudden expansion. And the hot fluid is swirling and coming up into a colder fluid. And you can see how it mixes. So we can image that.

Well, this was all well and good, but fluids was a very limited application for all this technology. So we got sidetracked. And we came up with another device. And that was, why couldn't we use this correlation technology at such high speeds and do 4D imaging, basically 3D imaging in video rates?

We came up with a unique device. We have a camera system that has a rotating aperture within that camera. And we project a speckle pattern onto an object. And if you look at what happens in this system, it's a wonderful way to image. And the reason is, if you have the aperture on one side, you get one view of the image. If you have the aperture on the other side, you get another view of the image. If you're at the focal point of the-- sorry, focal plane-- of the camera, both images from this aperture position and this aperture position end up exactly in the center, the same spot.

But if you're out of focus, they are displaced by some distance. And you can correlate these two-- the images taken from the two aperture positions and measure this distance through correlation. And one can extract the x,y,z component of the image.
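
To make the geometry concrete, here is a minimal sketch of the depth recovery, assuming a simple thin-lens model; the linearized relation and the calibration constant k are assumptions, and a real system would calibrate against the actual optics. For a lens focused at distance z_focus, with the aperture offset off-axis, a point at depth z appears displaced on the sensor by roughly d = k(1/z - 1/z_focus), so measuring d by correlating the two aperture-position images gives z:

```python
def depth_from_disparity(d, z_focus, k):
    """Invert the linearized defocus model d = k * (1/z - 1/z_focus).
    d: disparity between the two aperture-position images, measured by
       correlation (e.g. with a routine like cross_corr above);
    k: calibration constant, roughly the aperture offset times the
       lens-to-sensor distance, expressed in sensor units."""
    return 1.0 / (1.0 / z_focus + d / k)

# Example: focused at 1.0 m, a 2-pixel disparity with k = 500 maps to a
# point just in front of the focal plane.
print(depth_from_disparity(d=2.0, z_focus=1.0, k=500.0))  # ~0.996 m
```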

The beauty of this system is that, where a stereoscopic system suffers optical occlusion, in a system like this, the precision of the alignment comes entirely from the grinding of the optics, which is highly precise. We're using off-the-shelf lenses now, Nikon lenses. And we simply attach a rotating aperture on the back end of these cameras. And so it's very low-cost. I'll show you one. We have a three-dimensional camera that's under $100 that I have sitting here.

This is a fancier version, and then the one I'll show you. This is our latest lens that we're trying to get down to the diffraction limit of the optics so that we can get very, very highly accurate imaging. Our little $100 camera, which I'll show you, is capable of doing this at video rates. This is a very crude, cheap camera-- nothing fancy. This was our very first attempt ever at doing this.

And you can image a face. We actually have human faces. I didn't want to put mine up here. Actually, the funny thing is I'm doing this with a group that's in Taiwan. And they imaged my face. And the problem was, as they put it, I'm too prominent, meaning my nose was stuck out too far. And they had trouble with the correlation algorithm that we had.

But anyway, we can image this. You can rotate it in three dimensions. The applications of this range from-- what they're interested in and where the money is, I hate to say-- video games all the way up to heart surgery, putting it in endoscopes, being able to go into probes. We can do this extremely low-cost. We could put this on top of a PC. We've done this many times just as a demonstration model, as I'll show here. But there's any number of applications.

I'm interested in this technology. Because once you image in this way, things become no longer an image in a computer, but an object in a computer. So you treat the world as objects, not images. And that's a very powerful idea. Computers are wonderful things at image processing. But now you have something where everything in it is a solid model of something where it can be rotated, manipulated, transformed, so forth.

Let me-- because I'm running out of time, and I don't want to run over-- get you back on schedule. I don't know whether this is going to work or not. I have-- this was the camera that took that image. With lens, it's $100. It's a very cheap device.

And all we did was take this-- you can see it's a very small aperture too. So it's not particularly accurate. A good aperture lens-- we're now using f/1.4, 85-millimeter lenses, which have far better accuracy. So this tiny, little lens, in the middle of it, we've stuck an off-axis iris. And we've just connected it so that we can rotate it. And if I can set it down here, perhaps I can image something that one can look at and see how it displaces.

Focus in. I can just rotate it. But if I rotate the aperture, you see how the image is shifting? It shifts in a circular pattern. And you notice the background is shifting in the opposite direction. I don't know whether you can see it through my shaking. Not sure whether that was apparent.

But what's nice about this technology is it's 30 hertz. So we can image and get 3D. But if you sit there, the aperture is rotating. So it's taking different information each time it takes an image. And so as you set it there, the accuracy of your image becomes greater, and greater, and greater. So it just sort of fades in.

You get a fairly good image with just two. And then you get three, four, five, six images, and the accuracy improves with each image. And it almost goes down to whatever the distortion in your optics is and your ability to calibrate out that distortion. George will show a really fascinating device where he's eliminated a lens completely. Our objective here is to develop a robust, fast, low-cost camera for the masses. Anyway, I'll take any questions if I still have a few minutes.

MODERATOR: Thank you very much.

[APPLAUSE]

We do have time for a couple questions, so ask away.

AUDIENCE: [INAUDIBLE] for sale--

HART: No.

AUDIENCE: --in the market?

HART: --but I'll sell you one. No, the company-- or actually, the group I'm working with-- wants to sell it. Currently, it's interesting, they got cheap on us. And they only bought the patent rights to it for Taiwan. And MIT still holds the rights to the United States, and Europe, and Japan, which is interesting. Because they paid for the development of it. And all the users are outside their realm. So yeah, we're in good shape on this one.

MODERATOR: More questions?

AUDIENCE: What kind of-- in a microscope context, what is the resolution in the z-axis?

HART: The resol-- it depends greatly on your optical setup, is really what it amounts to.

AUDIENCE: [INAUDIBLE] diffraction limit.

HART: The diffraction limit is-- the z direction has the same diffraction limit as the x, y. So you're at about 10 microns. That's if you push the technology to the extreme. In a practical system like we're talking about here, with a really low-cost imaging camera-- nowhere near diffraction limited, of course-- for the size of a head, our z direction is the same as the x, y. So if you have 512 pixels across the face, you have essentially 512 pixels in depth as well, equivalently. Yeah?

AUDIENCE: Camera scans at 30 hertz?

HART: Yeah, this camera scans at 30 hertz.

AUDIENCE: How fast is the aperture moving?

HART: The aperture moves roughly 90 degrees every image. So it rotates. We don't worry about blurring, because it correlates out. It's blurred in the same direction. So we just take the blurring and live with it.

We have plans for our very high-speed application where we take a burst of data. And there we use LCD high-speed apertures. And instead of rotating, we just fix them in two spots and just go click, click, click, click, click, click. And hopefully we can get high-speed motion that way.

MODERATOR: OK, well in the interest of keeping on schedule, thank you very much. [INAUDIBLE].

[APPLAUSE]

I think you can see from this why optics is so important in mechanical engineering instrumentation and applications. For getting information out of a system, you really can't beat visible light. I mean, also, it's amazing, the kinds of applications it suggests once you get going down this road.

So our next speaker is Forbes Dewey. Forbes is professor of mechanical engineering and professor of bioengineering in the bioengineering division at MIT. And he's going to talk to us about-- [INAUDIBLE]-- IT and bioinformatics.

DEWEY: The opportunity to speak to you today was something that I really grabbed onto. Because I think it's an opportunity to really pay tribute to an investment that Nam Suh made in the department a number of years ago. It was a real risk when he went out and encouraged everybody to get involved in the information technologies, and in particular, invested in a lot of young faculty coming into the department who had information technology as one of their key tools. And in my opinion, that investment has paid off marvelously.

And I think it's hard now, looking backwards, to realize what a risk it was at the time. So this is the first opportunity I've had to really pay tribute to what I think is one of the outstanding decisions in the department over the last 20 years. To me, information technology is absolutely essential to deal with the enormous amount of data that we have streaming in from every different corner of our technological universe. And what I'd like to do today is to talk to you a little bit about bioinformatics and some of the work we've been doing to handle all of the biological and medical information.

I want to go through just a couple of quick slides to tell you a little background. This is the-- oh. This is the team-- I lost my-- this is my team of students. Ngon Dao is my lead information architect. And Pat McCormick was previously.

Pat is now out with a startup in Silicon Valley called Tellme, which will be going public in about six months or less. So you should watch for that. He was the seventh employee out there. And I have already been dunning him about contributions to the Institute when that happens. He was involved in the 50k competition at MIT.

And Ngon Dao, in fact, his team won the 50k competition at MIT this past year. And as a comment to some of the ideas that Alex d'Arbeloff was talking about at lunch, I think that that's a very innovative program where, in fact, the students, in order to compete, have to go out to the marketplace, and understand the needs of the marketplace, and formulate solutions. Because the prize is they get $50,000 from a series of judges who come out of the venture capital community.

And in fact, they have an opportunity and an encouragement from that community to go out and commercialize the ideas they come up with. This whole 50k competition is now being replicated by many, many other universities. And I think it's a marvelous incentive for students to go out and, in fact, make that connection between market need and technology.

I'll just skip a couple of slides here. The International Consortium for Medical Imaging Technology is something we started in 1990. And it has been one of the things that has sponsored a lot of the research I want to talk about today. I'd like to go through with you what the needs are in bioengineering, and in particular, in the large data sets that you get in medicine and biology; describe those information objects; talk about the technology that we can use to organize that information and query it; talk about several solutions that are possible; and then just quickly run through a series of projects that we're working on at the present time.

Now why do we need information technology? Well, first of all, the amount of data that is coming out of the medical and biological communities is just enormous. No longer do you take a whole week to do a single experiment. With a gene chip, you can do 10,000 experiments in one afternoon. So the stakes have just changed by orders of magnitude. And our ability to deal with all of that information in a rational way has not.

Second thing is that the information you need to make an intelligent decision-- for example, in trying to do a rational drug design-- requires that you pull together pieces from many different databases, many different models, different experiences. And in order to do that, you need an architecture that will support that kind of query. Finally, you can't do these systems one-off and expect them to be robust, and secure, and complete. They're going to break the minute-- in an academic setting, the minute-- the graduate student leaves. It will never get rebuilt.

And in an industrial setting, the minute that the person that built it gets pirated away, the company struggles in patches for the next five years until somebody redesigns a new system. You can't afford to do that. What you really need is an architecture that will survive.

Just to give you some examples, an MR image of 256 by 256, by 2 bytes per pixel, by 40 pictures, 40 slices is about 5 megabytes. If your medical record contains 1,000 pages, 400 words per page, and 10 bytes per word, you've got 4 megabytes. So one MR image is basically worth all the written information about you in the medical record in terms of space, in terms of the ability to query, and so forth.

To give you an example, Beth Israel hospital has saved, for the last 16 years, all of the patient encounters of all the patients that have gone through in terms of the textual information that's been involved with it. That total, which, a few years ago, looked huge, is 18 gigabytes. That's smaller than the disk drive that was just installed in my-- in the computer in my lab, actually, yesterday. So these things are changing very rapidly.

Now let me tell you why this is a multimedia problem. And here's a simple example out of Neil Pappalardo's area. This is an X-ray. This is just a schematic of it. And here are the annotations about that X-ray that were put in there by an expert. And this is all useful information. This is the X-ray. Somebody else can pick up the two.

But those two pieces of information are disconnected. And it's a very simple matter to key that textual information to the image itself. This increases the information content. It allows you to distribute and use that information in a much more open system. So this is just a very simple example. It can get much more complicated when the image analysis becomes more difficult.

So just to summarize, we've got-- in the whole bioengineering and medical community, we have clinical information and hospital records that may be important. We have engineering data on materials. We have engineering models of the cardiovascular system. We have chemical models, biochemical models of reaction rates and so forth. And we have various experimental results where we've gone out into the laboratory with gene chips and otherwise to get data.

There is also the entire biological sphere, which includes, as I mentioned, the gene chip arrays and the human genome database, which is now being completed. There's just an enormous variety of data here. And our challenge that we've been looking at is how to connect all of these different ones.

Let me give you another example, a case that we're working on locally with Brigham and Women's Hospital. This is a drug discovery and drug testing protocol where we have a hundred patients. Each patient gets measured 20 times. All of these patients have multiple sclerosis. And we're trying to see how the lesions that you can pick up with appropriate image analysis change as a function of time.

The data mounts very rapidly for this, to tens of gigabytes. We also have an equivalent amount of information after the images have been analyzed. And what we're trying to pick out from that is, in fact, whether the frequency of the lesions, and the size of the lesions, and so forth will change with the use of one drug or another, and then to correlate that with information in a genetic database.

Now I have a little movie here. And you have to watch very closely down in this lower right-hand corner. Because unfortunately, it goes very fast. I've not figured out a way to slow down my computer. Normally you like a fast computer. But in this case, it's too fast. Let's see if we can-- uhp, there we go.

What you were looking at in that was, in fact, 20 different time points for a single patient, all of the images re-registered exactly by use of image analysis techniques. And then each time point took about 10 hours on a large Sun machine to do the segmentation to get the individual lesions. Nonetheless, that's the kind of data that are going to be used to effectively do drug trials and drug discovery.

So how are we going to go about developing common component software for this environment and bringing these different information pieces together? Well, I have three things that I would propose. One is to use open software tools to the extent that you can, things that will operate on multiple platforms. The second thing is to try to get international standards. And there are some that are in existence, and more coming. And finally, to try to take the best view you can of leading technology and see where you expect the field to be in five years so that you don't design to a standard that's either going to go away or become less important.

Well, you've got a lot of choices today for tying together these information environments. And I've just given you some eye candy here to try to suggest what all of those various choices are. I'm sure you've heard of Java. I'm sure you've heard of XML, which is related to the HTML language used in all of the browser technology and so forth.

CORBA means Common Object Request Broker Architecture. DICOM is a medical imaging standard. HL7 is a medical informatics standard. TCP/IP comprises the connection protocols that have been used since the mid '80s. All of these things are available. And you have various proponents of them. And the question is, how do you architect something that uses these in an intelligent way?

Well, just briefly, there are three classes of software you have to develop. One is user access software. This is really the client part of it. You need databases for the storage systems. And finally, the third piece is the intermediate piece, which is the exchange software. And that's the one I want to emphasize here.

Here are all the clients up here. You're trying to run various models. You're trying to do analyses of data. You're querying about which patients are in trouble and so forth. Down here you have all sorts of different databases and varied information. And how do you put together the software layer that pulls all that together?

Well, first thing you want to do is you want to start with object-oriented technology. And I simply say that it used to be you could just put things in a file system, and name the file, and remember where it was. But there's too much data now. You can't do that. You can't exchange it. Ordinary relational databases can't deal with the multimedia. Object-oriented databases have a real problem, because there are no good standards for those. The object-relational is pretty good, but it's not absolutely necessary to this process.

But the one thing you do get if you use an object-oriented system is that all of this multimedia stuff is transparent. You can describe, up here, in an object-oriented program, exactly what's in the database. So the programming paradigms become very simple. If you don't use an object-relational or object-oriented system, then you get into a real problem: you have to make all sorts of compromises in the relational system in order to get this object-oriented view from the outside.

One of the things we did as an example was to take the DICOM medical imaging standard, which is an international standard, and make a completely object-relational database out of it. You can't read all of the details here. But it was incredibly successful at allowing you to go in and query the image on its characteristics in a very enlightened way such that the image itself, as well as all of the derived information, was all available within the database with a common query format.

Here's another example. We have more than just databases. We have models. Here's a cardiovascular model. Roger Kamm, in our department, has developed this.

And what we're doing now is to put this on the web so that the model itself can query a database automatically for the information it needs. An outside user on the web can come in and set these various parameters. And the model will then run and produce information about what the various pressures are, and the flows are, around the cardiovascular system. This will be very nice for teaching. But you can imagine that these kinds of models are going to get more and more important for playing what if games in a clinical environment and for drug discovery.

Now the key technologies here reside in this interface. Here's the typical client. Here's the database. A request goes to the database. Results come back. But the database, in fact, could be fairly complex.

It could be a model which goes to another database, right? So there's a cascading effect that happens over here. And the client doesn't necessarily need to know anything about that. All the client needs to know is what kind of request has to be put back and how to interpret what comes the other way.

So we've developed a number of concepts to deal with that. One of the ones that's been very successful is to use something called a class mapper to produce an object-oriented view of the database independent of whether the database is object-oriented or not. Then what we do is we wrap that in what's called an XML wrapper. We then send that through a Common Object Request Broker environment that picks up the transport protocols, exchanges the XML with the client. The client then knows how to unwrap the XML and to display the results. So in a nutshell, that's what we do.
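
As a toy sketch of that pipeline (class, table, and tag names are invented for illustration; the real system used CORBA for transport, which is elided here), the class mapper exposes relational rows as objects and the XML wrapper serializes them for the client:

```python
import sqlite3
from xml.etree.ElementTree import Element, SubElement, tostring

class PatientImage:
    """Object-oriented view of one row, produced by the class mapper."""
    def __init__(self, row):
        self.patient_id, self.modality, self.path = row

def class_map(db, patient_id):
    """Class mapper: expose relational rows as objects, hiding whether
    the underlying store is object-oriented or not."""
    cur = db.execute(
        "SELECT patient_id, modality, path FROM images WHERE patient_id = ?",
        (patient_id,))
    return [PatientImage(r) for r in cur.fetchall()]

def xml_wrap(objects):
    """XML wrapper: serialize the object view for the transport layer."""
    root = Element("results")
    for o in objects:
        item = SubElement(root, "image", patient=str(o.patient_id),
                          modality=o.modality)
        item.text = o.path
    return tostring(root)
```

The client only ever sees the XML coming back; the cascading of models and databases behind the mapper stays invisible, which is the point made above.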

This is just another way to look at it. Here's the user up here, putting input in and getting display back. The request is interpreted. You can look on this as the transport layer. Here's, now, the database. This is the class mapper here, that represents the interface to the database.

I have an example that I can show somebody later. I guess I'm out of time, right?

MODERATOR: No, you have-- you actually have four more minutes.

DEWEY: Four more minutes, all right.

MODERATOR: [INAUDIBLE] and questions.

DEWEY: OK, well, anybody that's interested in how one applies this to the new gene chips that are coming out from Affymetrix and so forth, I'd be happy to talk to you about it later. Just qualitatively, what we have here is a basic chip schema that has been published by the Genetic Analysis Technology Consortium, which is basically Affymetrix and a couple of their friends. This is very probe-centric in the sense that it just defines what's on the chip. But there's no way to connect that with something like the human genome database, for example.

And furthermore, you don't know where that chip came from, or what the conditions were under which it was taken, what kind of subject information. It's not connected, right? So the next thing you have to do is you have to add all the information about how the sample was prepared and what the sample represents. So that's part of your database as well.

Then, the next thing you have to do is connect that with the public gene databases so that you can now begin to make sense out of the entire system. And this is an ongoing project that we have right now that I think is very exciting. Because we probably-- I mean, unless we stumble, we're probably going to be the first people to actually be able to connect the new human genome databases with the Affymetrix chips in a way that you can go in and query at a very high level.

Like, give me all of the patients who have the following genetic profile, and see if you can match the following genes. Or here are six genes on these various people. I want you to take those six genes two at a time. And I want you to find the correlation between those two genes for the following people and then correlate that against the following diseases that they may have. I mean, these are the kind of high-level things you want to do in an object-oriented environment. But you can't do that unless all of the machinery works appropriately underneath it.
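
To make one of those queries concrete, here is a hedged sketch of the gene-pair correlation just described (the data layout, expression[gene][patient], is invented for illustration; a real system would pull these values through the database machinery):

```python
from itertools import combinations
import numpy as np

def pairwise_gene_correlations(expression, genes, patients):
    """For each pair drawn from the given genes, correlate their
    expression levels across the given patients."""
    results = {}
    for g1, g2 in combinations(genes, 2):
        x = np.array([expression[g1][p] for p in patients], dtype=float)
        y = np.array([expression[g2][p] for p in patients], dtype=float)
        results[(g1, g2)] = np.corrcoef(x, y)[0, 1]
    return results
```

The output would then be joined against each patient's recorded diseases-- exactly the kind of cross-database step that fails unless the machinery underneath works.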

Well, I've mentioned most of the systems that we've been working on except the integrated health care. Since 1990, we've been working on some integrated patient record systems. They've been pretty well-adopted now in the UK. And they provide the clinical physician with a very good multimedia environment.

So I'd just like to close by saying that what we're trying to do is to get better diagnostics and therapy through technology. Thank you very much.

[APPLAUSE]

MODERATOR: Peter, why don't you come and start getting set up? [INAUDIBLE] with some questions.

DEWEY: Yes

AUDIENCE: How have you dealt with the problem of not having a standard for an object-oriented database?

DEWEY: What we have done in most of our work is to use the Informix object-relational database, which is quite similar to the Db2 that IBM has. The other big one is the Informix database, which is not as developed. But we just simply use this class mapper idea to handle that.

There is also an issue about object-oriented queries, which you can solve with SQL3. But SQL3 is still in the standards development phase. And you have to make a few compromises there. But those have not been nearly as difficult as working through the whole issue that you have to force everything to [INAUDIBLE]. And then everything else is easy. Yes.

AUDIENCE: It's interesting to see this kind of work being done by somebody in mechanical [INAUDIBLE] rather than computer science. So my question is, is there some kind of advantage that somebody in mechanical engineering can have over somebody in computer science in doing this kind of problem?

DEWEY: Well, I'm not a computer scientist by training. My background, like Doug Hart's, is fluid mechanics. I got into this through the back door by doing a lot of image analysis and getting involved in using optics for lots and lots of experiments in the laboratory. Then I realized that if we're really successful at this, as Doug, and George, and lots of others are, we're just going to be generating data that are just coming out of our ears. What do you do with it?

And all of a sudden, it was very obvious to me that this was a wonderful challenge. And I knew the marketplace. As Alex said, I knew the marketplace. That was my big, unfair advantage over anybody in computer science. So then I just went out and got it.

MODERATOR: Yeah, I mean, you know images. And would you prefer to have somebody who knows images and knows the marketplace doing the development, to someone who only knows how to program computers, right? Exactly the question that Alex brought up-- do you have the person who knows the technology and the marketplace do the development, as opposed to someone who just knows how to program? Yeah, one more question.

AUDIENCE: Actually, I'd like to second or third what Forbes was saying. It's a common route that many of us take-- I guess, like Forbes was saying, when we do optics and imaging and we try to develop this technology, a very common question is, what do you do with the data? So some of us take the computer science approach. For some others, the approach is more cognitive. So we ask the question, well, how does the brain do this? So you know, then again, it doesn't matter whose department you belong to, right? As long as you're having fun with it, it's OK.

MODERATOR: OK. Let's thank Forbes again.

[APPLAUSE]

Our next speaker is Peter So. I discovered he has an even longer title than mine. I lied. He is the Esther and Harold Edgerton Career Development Assistant Professor of Mechanical Engineering.

AUDIENCE: He just became associate professor.

MODERATOR: Oh, associate professor of mechanical engineering, excuse me. So Peter will also talk to us about mechanical engineering innovation in IT bioengineering. Thank you, Peter.

SO: Thank you, Seth, for the invitation to come here. It sounds like a really exciting conference. And the previous speaker actually set up what my talk is going to be very well. When Seth first asked me to give a talk in the IT conference, I said, well, I mostly think of myself as a hardware builder. So I was saying, well, what am I doing with information technology?

Then I started going back to look at what I do in my lab and started thinking, well, actually, almost everything that I do has some information technology content in it. And some of it is actually quite exciting. So this is a little bit of the work I'm doing that I'm going to tell you about here.

As I look at it, biological systems are probably some of the most complex systems, as Forbes showed you a little while ago. Traditional biology and medicine is extremely descriptive. If you look at an X-ray, let's say a radiology slide, a very good doctor can extract very useful information, whereas a not-so-good doctor probably doesn't extract useful information. But you wouldn't know who is good and who is bad.

And today, with the information technology that we have-- and it is quickly advancing-- we are at a position where we can really start to untangle and quantify the complex matrix of information that one can get from a biological system, and particularly from biological imaging, which I will focus on. And in some ways, the quantification of a biological system-- which, in particular, includes ourselves-- is critical to controlling how we function, which, of course, relates to medical treatment as well as the possibility of developing new products.

Today my talk is going to be in three parts. I'm going to talk a little bit about what advances in information technology have allowed me to do today that I don't think I was able to do maybe five years ago, in terms of using information technology to impact some types of biological imaging applications. Then I'm going to tell you a little bit about how some types of biomedical imaging problems push the frontier of algorithm development, in terms of processing information such that you can extract useful information from really complex biological data. Finally, I'm going to go back one step and say, well, as information technology has developed, we are able to do new things.

And we are developing new algorithms. But with new instrument development, we are actually generating more and more data. And we are generating data at a probably exponential rate today, which, of course, puts further demands on the advance of information technology.

I'm going to focus on the first problem: what information technology has allowed us to do today that we could not do before. One of the central questions in cancer biology, as well as in the treatment of cancer, is how mutation occurs. What is the rate of mutation? Mutation being one of the main causes of carcinogenesis, the initiation of cancer in the body, can you quantify the rate of mutation for a normal person, as well as when a person is exposed to various toxins? So the question is, can one develop a system to look at one type of mutation that we are interested in, which is called a recombination? This is some work that we are doing in collaboration with Bevin Engelward in the bioengineering and health sciences division.

So how do you quantify mutation? As it turns out, there are wonderful things in biology: you can actually put a gene into almost any organism that you can manipulate, such as mice, to make them glow green. This is a protein called GFP. If you put the protein in, you can make a mouse go green.

But you can also-- OK, now you might ask, why do you want to make things go green? Now, of course, it makes an extremely cool picture, number one. Another reason is, if you want to measure mutation rate, you can put a defective copy of the gene into the animal. When there is a mutation, the defective copy combines with a second copy so that it is complete. When that happens, the cell goes green.

So now you have a fluorescent marker in the animal, and you can watch how many cells go green and where they go green. If they go green-- let's say there's a mutation-- does this particular clone expand? Does it become a big patch of tissue that is green? That, in some ways, is called clonal expansion, and it is very important for cancer.

So there are a couple of things that you have to do. Number one, you need a genetic engineer who can make the mouse for you. Number two, you need some technology such that you can look into tissue to count the green cells and look at the distribution of the green cells relative to the rest of the tissue in the animal. The technology that I have available in my lab is so-called two-photon microscopy. I'm not going to go into the detail here.

The only thing I'm going to say is, using red light, if you shoot the red light in the right way, you can generate fluorescence at a three-dimensionally localized spot on the order of 1 micron on each side. So now you have a teeny beacon of light that you can put into the mouse, or into a person, and map out the fluorescence. If you can do that, then you can look through the tissue of a mouse and count the number of green cells.

So this is a thought experiment. We haven't really done it yet. The idea is we would have a mouse. Hopefully not the whole mouse goes green, because the mutation rate is not that high. You can take the skin of the mouse out and lay it flat. You might have to kill it, but that's OK. You would then image individual volumes in the skin of the mouse. And in 3D, you can go into individual cell layers and look for green cells.

Now if you think about this process, it doesn't sound so difficult. But it is actually a very difficult thing to get enough data. The spontaneous mutation rate-- that's the mutation rate of one gene in any given cell-- is on the order of one part in 10 to the fifth to one part in 10 to the sixth. It doesn't happen often, and that's a good thing. Otherwise we'd be in trouble.

So if you do a quick calculation, given the image I just showed you, of how many images you need to get enough statistics to say something useful, you need about 100,000 2D images, about 2,000 3D images, which is on the order of-- whoops, sorry, that's not a T-- 10 gigabytes of data. It's not 10 terabytes of data.
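
As a quick back-of-the-envelope check of that figure (the frame size and bit depth are my assumptions; the talk gives only the image counts):

```python
n_images = 100_000               # 2D frames quoted in the talk
bytes_per_image = 256 * 256 * 1  # assumed frame size and bit depth
total = n_images * bytes_per_image
print(f"{total / 1e9:.1f} GB")   # ~6.6 GB: order 10 gigabytes, not terabytes
```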

But on the other hand, if you do 1,000 experiments, you will get to terabytes. Still, 10 gigabytes of data, doing 100 experiments on the order of a couple of months, is not entirely unreasonable. One of the things I will say is that the advance in instrumentation is very great. We expect to take all these images within a few hours, OK? So we can really churn out the data. And if there are enough specimens available, we have a huge flow of data.

And of course, what are the problems? The problem, as Forbes pointed out very well, is that if you have such a tremendous amount of data, where are you going to look? You are looking for a needle in a haystack. Even if you find the green cell, you want to be able to go back to it. You are not going to go through a file structure and find the file that has the useful information. So one of the main problems that we have today is, how do we get high-speed storage, retrieval, and analysis of this sort of information?

At the rate that these data are generated now, I think today's technology can handle it. And that's why we are embarking on this study. We are now creating the mouse and doing the 3D imaging of the mouse. When we are able to achieve that, we will have a lot of information. And we can analyze that.

At the same time, we also recognize that, today, the analytical tools for 3D imaging and 3D data analysis are very much lacking. So we are working with Forbes, of course, and also Peter Sorger, on a very parallel effort in developing a software platform, a software architecture based on CORBA, that would allow us to focus on exchanging 3D data, looking at acquisition, online and offline data analysis models, as well as storage and retrieval.

So let me move on to a second item that I think is also very important. Today, with our instrumentation and biological systems, the data are extremely complex. We want to be able to analyze them, store them, and in the end, retrieve them for analysis. But actually, if we take a look at the algorithms that we have for some of this data analysis, they are not quite advanced enough for some of what we want to do.

And one of the problems that we have, going back to the example of cancer, is pathology. The standard method of pathology is: look at the tissue; the tissue doesn't look right; take out the tissue, slice it up, send it to the microscope; the pathologist looks at it, and a good pathologist picks out the cancer. Which is all very good. But on the other hand, if one could do that non-invasively, it would be great. So one of the ideas that we have in our lab is, if we have two-photon microscopy that can look into tissue, one could, in principle, do optical biopsy.

So now-- and we can do that. This is a quick movie of 3D images of the skin of a mouse. We just-- well, actually, we killed the mouse, because it moved a little bit too much. We killed the mouse, punched a piece of tissue from the skin, put it on the microscope, and imaged it. And we can look at individual layers of the mouse skin. And then we can look at-- what can you see?

As you can see, on the surface of the skin you see individual hexagonal-shaped things. Those are dead cells on our skin that are ready to fall off. And then, because we have a three-dimensional data cube, we rotate it around for the fun of it. And then it comes back to the other side.

The colors that you see are false colors, which means that they represent intensity. A redder color means it is more intense. So you slice through it. You can see individual dead cells. And you can see living cells underneath them. And when you go a little bit deeper, you can actually see the collagen and elastin fibers that form the structural support for your skin. And when you go a little bit deeper still, you can actually see the cartilage that supports the ear.

So this is all very well. You can certainly extract structural information from a lot of biological specimens. That's quite exciting. But on the other hand, one thing that's very important is, as Doug has mentioned, fluorescence has tremendous additional dimensions. Doug mentioned that fluorescence spectroscopy allowed him to measure temperature, and to measure other properties of a particular fluid system that he has been looking at.

In my case, one of the things that's very interesting is, if you look at different colors at the same layer of the skin, you are going to see different things. Why is that? Because each individual pixel that we are looking at has many different biochemical components. And the ability to resolve the different biochemical components gives you additional information for diagnosis of cancer and other medical applications.

So we are now working in collaboration with Dr. Peter Kaplan and Dr. [INAUDIBLE] at Unilever Corporation, looking at methods of combining both image data and spectroscopy data such that you can, at each individual pixel, not only find structure but also identify the individual spectral components associated with it. In some ways, curve resolution-- the ability to resolve the individual structural and spectral components of a spectrum-- has been around for a while. But today, we finally have images that combine high-resolution structural information with high-resolution spectral information. Now we can think about combining image-based analysis methods, such as segmentation, as a constraint to resolve individual spectral components in individual pixels of our images.
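
Here is a minimal sketch of the per-pixel unmixing step, written as plain least squares (a standard form of curve resolution; the reference spectra and the crude non-negativity clamp are illustrative simplifications of what the actual analysis would do):

```python
import numpy as np

def unmix(pixel_spectra, components):
    """Least-squares weight of each component spectrum in each pixel.
    pixel_spectra: (n_pixels, n_wavelengths) measured spectra
    components:    (n_components, n_wavelengths) reference spectra"""
    weights, *_ = np.linalg.lstsq(components.T, pixel_spectra.T, rcond=None)
    return np.clip(weights.T, 0.0, None)  # crude non-negativity clamp
```

Segmentation enters as the constraint mentioned above: pixels known to lie in the same structure can be forced to share component weights, which stabilizes the fit.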

This is some of our beginning work. We have not used the image information yet. But we have been able, at different depths of the skin, to resolve seven spectral components corresponding to different wavelengths in the tissue. For example, at the middle of the tissue, we can see this particular layer. These are all the pixels with this spectrum. And they correspond to the melanin that's sitting at the middle of the tissue that you can see.

Similarly, you can go up to the tissue's surface. This spectrum has most of its components close to the surface. And you can see individual cells on the surface. The thing that's very interesting is, if you go into the collagen-elastin fiber layer deep within the dermis, you can actually, based on the spectral differences, resolve different components of the fiber, which is consistent with what people think: there are many different biochemicals, with different collagen and elastin structures, in the skin.

So new biological problems push the need for new analytical algorithms. But another thing that's very important is that bioinstrumentation is moving forward at a very great rate, acquiring data in a way that's tremendously impressive. You are generating more and more data, faster and faster. And you really need higher capacity to handle those problems.

So one of the things that Forbes mentioned a lot is gene chip technology. The ability to sample on the order of 10 to the 3 to 10 to the 4 different genes from a bunch of cells at the same time is a great advance in understanding the genetic changes in humans and other organisms. But today, in many ways, this is almost the post-genome era. We are really at a point where the genes of humans will be sequenced within a year. So we will know every single gene that is around.

The question is, is it possible now to think about mapping individual genes from individual pieces of tissue, and also mapping that in individual cells? Can we do a gene expression map at individual cells? Which is a major challenge.

A current gene chip takes about 10 to the fifth to 10 to the seventh cells for a run, which is quite a lot of cells if you are thinking about looking at carcinogenesis-type problems. So what are the major limitations that make you need so many cells? One of the limitations is really chemical kinetics. Chemical kinetics requires that you have a certain concentration of the reactants, which are the extracts of the cells, present so that they can bind to the gene chip. And of course, the total amount of material that you need to put in is proportional to the volume of the chip. And the volume of the chip is limited by technology such as fabrication, imaging, and fluid handling.

So in collaboration with Dr. Rick Yang and Dr. Ben Yan at the Whitehead Institute, we have been looking at different methods of actually handling this problem-- reducing the number of cells needed to find the expression in individual chunks of tissue of a few cells. How much time do I have?

MODERATOR: Five minutes.

SO: Five minutes, great.

MODERATOR: Four minutes.

SO: Four minutes, OK. So one of the techniques that we have developed in our lab is standing-wave total internal reflection microscopy. We are thinking about using it to do patterning. I am not going to go into the details of the technology. But it is a method that allows you to do imaging at very high resolution. Normal optical systems, like most optical lithographic methods, are limited by the resolution of light.

And the typical optical resolution limit for about 500-nanometer light is on the order of about 100 to 120 nanometers, which is the blue curve. However, if you use the new technology that we have been developing, we have shown that we can image at the red curve, which is about 80 nanometers. And we think that we can push this technology down to about 50.

Now one of the things to think about is, today, the chips have individual elements on the order of 20 to 100 microns. If we can actually push these elements down close to the resolution limit of this technology, we may be able to gain a factor of a million in terms of volume reduction. If we can do a million, we can almost do single cells. Even if we don't do a million, if we can do 10 to the 3 to 10 to the 4, we can still start thinking about doing gene expression profiling on the order of about 100 cells.
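
A rough check of that scaling argument, using the element sizes quoted in the talk (the exact gain depends on which ends of the quoted ranges you compare):

```python
element_today = 100e-6  # ~100-micron chip elements today (20-100 um quoted)
element_goal = 100e-9   # ~100 nm, near the quoted resolution limit
print(f"{(element_today / element_goal) ** 2:.0e}")  # 1e+06 reduction in area
```

Since the required sample scales with the chip volume, and the element area sets that volume for a given depth, a millionfold reduction is the right order.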

And so this is really a conceptual slide. At the beginning of the talk, I told you we now have a method to look at the change of one gene over the whole mouse. The question is, can we, in about 10 years, develop methods such that, instead of monitoring a single gene at every spot of the animal, we can actually monitor 10 to the 4 genes at every single spot of the animal? If we can do that, the information that would be generated from this sort of experiment would be tremendously vast. And how to handle that, I don't have the answer for. And I will stop here.

[APPLAUSE]

MODERATOR: Questions for Peter? No questions, OK. I have a question, actually.

SO: OK.

MODERATOR: So in trying to put the previous talk-- Forbes' talk-- together with your talk, it seems that the common feature they have is dealing with large-scale image data. Now do you actually see a possibility for putting your data in this object-oriented framework that Forbes had? So that, ideally, over Sunny's new internet, you have pictures, images of a million mice, and then you could do cross-correlations between these million mice over the internet, just, you know, put it in--

SO: Oh, I certainly see some of that happening. As I was saying, if we are looking at the whole mouse, just looking at which cells are turning green, we are imaging on the order of 20 gigabytes of data, and just finding the right frame in which a mutation occurs is a problem. And I think, basically, with the technology that Forbes is developing and that other people are looking at, like Sunny's internet technology, you have to have a storage medium that, in general, would be far away. So how to retrieve the data and all that is really-- some of Sunny's and Forbes' work is critical for this type of diagnosis and assay to work.

MODERATOR: So actually, I mean, one thing that strikes me as kind of interesting about this, while Sanjay's setting up, is that-- I mean, [INAUDIBLE], this morning, talked about how, by pushing the limits of manufacturing capability, you could make chips-- you know, the details on chips-- smaller and smaller. That, in turn, is driving this huge increase in capacity for information processing and memory. Without that, it would not be possible to deal with all the information that you guys are getting from pushing the limits of doing the imaging down to smaller and smaller scales.

It's a funny symbiosis between these: you increase your ability to handle visual information at exactly the same time as you gain the ability to store it, because you're developing the same techniques. In fact, both of these advances come from the development of the optical techniques that allow you to do sub-diffraction-limit or at-diffraction-limit imaging. The imaging allows you to create the chips. The imaging allows you to get the data. And the chips allow you to store and process the data.

SO: Right, and I think that's a very interesting point. Optical technology is really advancing in a tremendous way. I mean, optical microscopy and imaging has had a resolution on the order of about 200 or 300 nanometers for pretty much the last 300 years, and there has been very little advance in this area. People go down to shorter wavelengths, which of course allows you to go smaller, but there are limitations in terms of light sources.

But one thing that is tremendously interesting is that, right now, there are a number of groups developing some really high-resolution imaging technologies. Those technologies can be turned to fabrication. And there is the potential of doing optics on the order of the 20- to 50-nanometer scale, which I think is very exciting.

MODERATOR: All right. OK, our next speaker is Sanjay Sarma, who also has a title longer than mine, I know. He's the Cecil and Ida Green Career Development Associate Professor of Mechanical Engineering. And he will be talking to us about the Auto-ID Center. This does not have to do, so far as I know, with bioinformatics. However, Ian Hunter, who was going to talk in this time slot, had his daughter's birthday party at this time, so Sanjay kindly agreed to switch. As you'll see, though, it's closely related to the work and studies he was talking about this morning. Sanjay.

SARMA: Thanks a lot, Seth. You know, everyone calls it Sunny's internet, so he should probably run for president now. What I'm going to talk about today is the Auto-ID Center. It's a brand-new center which is now a part of the d'Arbeloff Lab. We thank Harry Asada for inviting us into the lab, and we thank Alex d'Arbeloff, if he's still here, for setting up the lab. Let me just see if-- yep.

The Auto-ID Center is, in a sense, a project that brings together the internet and the physical world. Now data has always existed on a kind of island of its own. What we're trying to do is bring the internet to the last foot. You know, we talk about the last mile problem in the internet, the last 10 miles. We're talking about the last foot problem, where, eventually, we hope that the internet can, in a sense, reach every physical object.

What I will do today is tell you about our center-- it was established on October 1, 1999-- and tell you what our vision is. This research is done in conjunction with Sunny Siu and David Brock, and I'll be representing their work as well.

If you look at the robotics revolution which never happened, in a sense, what never happened was the perception revolution. Robotics succeeded, but the perception problem has always been a bottleneck in using robots to their fullest extent. The reason is that perception is a very difficult problem. And the way we perceive in robotics is that we try to recreate the object, or recognize the object, through a very slender understanding of perception.

For example, image processing-- you take an image, you do digital image processing, and then you say, well, that looks like a glass, and maybe I'll try and pick it up now. Now the problem with that is it's unreliable, its capability is limited, and it's brittle. But the question you might ask is this: most artificial objects are made by someone, and all the information about these artificial objects is already known.

So why try and re-recognize it? Wouldn't it be wonderful if we had a sixth sense, which we could quickly refer to as identity? If we could somehow sense this identity, then all the information would be available to us. That is the crux of the problem we're seeking to attack.

I'll make this a very quick presentation, because we would like to catch up with our schedule. The bottom line is that automatic identification refers to the placement of an ID on a physical object. If a man or a woman made the object, then surely you can put an ID on it. And if you can put an ID on it, and if you can read the ID, you know everything you need to know, because a man or a woman made the object, and all the information is stored somewhere. That's the key point.

Now this ID concept is not new. In 1972, a bunch of grocers formed a committee called the Uniform Code Council, and they invented something called the barcode. And they came to MIT and said, can you look at this and tell us how long a window it will have? Will this survive? Will optical scanning be available to us? And MIT told them, 25 years-- we can assure you a 25-year window. That window ran out in June of last year, OK?

So over 25 years, barcodes have survived and have revolutionized the retail industry, OK? Today, 5 billion barcodes are scanned every day. However, despite the promise, barcodes didn't deliver on certain things that were promised at that point. The first is that barcodes require line of sight. So although we thought that barcodes were automating everything, in fact, they didn't. A barcode is printed by a company, but it's only used once in its lifetime. And that once is at the checkout, OK?

The problem is the barcode requires line of sight, so it cannot be read automatically. It's a manual process. Wouldn't it be wonderful if you could get around that? If we could, then the opportunities are really quite amazing. And that's what we are talking about at the Auto-ID Center.

Wouldn't it be wonderful if we had a non-line-of-sight ID, OK, where you could put the ID on and somehow sense it? If you could do that, all you would need to do is put the ID on the object and put all the information about the object somewhere else. And by coincidence, the internet exists today, so you can access all that information with very little overhead. You couldn't do that 25 years ago, but you can do it now, OK? That's the key.

Now others have thought about this idea, and people have been doing research on it. For example, I know that, in the robotics world, several members of the d'Arbeloff Lab have said, you know, wouldn't it be great if we had an invisible ink? And that's basically what we're trying to do.

There is a new chip that is becoming available today. The general term for it is electromagnetic identification technology. And I have here the business card of one of the vendors of this chip-- this gentleman is from Motorola. On the back of it, you will see an ID chip, OK? I'm going to pass this around, but I would very much appreciate it if you handed it back to us at the end. Thanks.

The chip consists of a piece of silicon with two antennae, OK? And it is passive. The way it works is it has a little resonator that resonates with the ambient radio field, OK? It picks up energy, charges up a capacitor, and then powers the chip, which sends a chirp back to the reader. So it's completely passive.

Today we are at 128 bits, and it's growing. 128 bits, by the way, is enough to number every atom on the surface of the Earth individually, OK? So there's no shortage of IDs we can assign.
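
That claim is easy to sanity-check in Python; the atomic spacing and the idea of counting a one-atom-thick surface layer are rough assumptions of this sketch, not figures from the talk:

    # Order-of-magnitude check: 2^128 IDs vs. atoms in a monolayer
    # covering the Earth's surface.
    earth_surface_m2 = 5.1e14          # Earth's surface area
    atom_area_m2 = (2.5e-10) ** 2      # assume ~2.5 angstrom atomic spacing

    surface_atoms = earth_surface_m2 / atom_area_m2   # ~8e33 atoms
    ids = 2 ** 128                                    # ~3.4e38 IDs

    print(f"{surface_atoms:.1e} atoms, {ids:.1e} IDs")
    print(f"IDs per atom: {ids / surface_atoms:.0f}")  # ~40,000 to spare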

The price today is about $0.20, $0.25. And remember, we haven't hit bulk yet. If you start manufacturing these in bulk, we are thinking that the price may drop to about $0.05. And one of our sponsors has informed us that, at $0.05, they'd be willing to put it on consumer items that you can buy in a grocery store, OK?

And finally, the range we get out of these is a few feet-- 1 foot, 2 feet. That particular tag we have read at 4 feet, OK? That's the basic technology.

EMID tags are manufactured by a number of companies today-- Texas Instruments, Philips, Motorola, Arizona Microchip. They're mostly used in security cards, so you can identify yourself when you enter a room, for example. But there's been some very interesting research at MIT by Neil Gershenfeld at the Media Lab and others, including our groups. And we are beginning to find that these chips can now move beyond the very niche applications of security and access to a truly ubiquitous application: the identification of physical objects.

The applications, we are beginning to realize, are truly very exciting. One of the key applications is the supply chain. You know, we talk about bits and bytes, but ultimately, human history has been shaped by the supply chain. Wars have been fought over the supply chain. America was discovered because of the quest for India, for the supply chain of spices, the Silk Route. The supply chain is, in a sense, the most important stream that runs through our lives. And barcodes-- and now radio frequency tags-- have the potential to change the way the supply chain is automated today.

We talk about the e-commerce revolution; it is nothing but a war over the supply chain, OK? I'll get a little more specific in a minute. Factory automation, robotics, smart devices, recycling-- the applications go on and on. And from a pure research point of view, the applications in robotics are very exciting to us.

The MIT Auto-ID Center's charter is quite simple. We are developing electromagnetic identification and sensor technology, and we are also developing the network side-- that is why we have the likes of Sunny in our group. And this is truly a serendipitous event, that a mechanical engineer, a computer scientist, and our third partner, who is a roboticist, happen to be in the same department. The idea kind of came together.

One of the key things is that we're building a network infrastructure. Our vision-- and it's easy for us to paint a vision; we're at a university, so I'll say it-- is to carpet factories, shops, retail, warehouses, and distribution centers with readers and put them on networks, OK, in the hope that, one day, every object will have a radio frequency tag on it. The idea then is that, if you so wanted, you could track objects anywhere. And once you can do that, we hope that any system where material is being moved can be controlled very tightly, resulting in very high-performance material movement and very good supply chain management, OK?

The other goal of our center is to actually develop and propose some of the standards. Just as the Uniform Code Council standardized the barcode, we are trying to develop and propose network standards: standards for storing the numbers on the chips, standards for the tag-to-reader air interface, standards for how to store the information in XML. Forbes spoke about XML. It's kind of a new thing for a university to really do this; we are wading in, hoping to understand it a little more. But proposing standards is one of our key goals.

And finally, to develop applications-- this is the most exciting thing. We're trying to develop applications for all of this. Some of the applications are fairly obvious. Others, hopefully, will emerge.

Our sponsors are the Uniform Code Council, Procter and Gamble, and Gillette. We have a representative here, John Sonden, from International Paper. There's Chep, which is a major logistics company-- they move pallets around the world; they're a multimillion-dollar company-- and about six or seven more companies, which are coming on board in the next two weeks. Our vision is that all physical items will, at some point, have these electromagnetic identification tags, that we will have large networks of readers and sensors, and that eventually we will be able to achieve large-scale system monitoring and control.

First step, architecture-- this we have proposed over the last four months, the life of the center so far. The architecture we have proposed is as follows. Readers are ubiquitous: wherever you need them to be, you have readers. Readers are networked. We're developing standards-- for example, new lightweight protocols for connecting these readers. TCP/IP is a very heavy protocol; there are new protocols that are lighter, so that they can work much more efficiently for readers.

So now imagine that this room is full of readers. The readers are a gateway to the internet, for example, or to some network, OK? On the internet, you have databases available to you which have all the information about the objects you're likely to see. So for example, if you go to a warehouse, all the objects have tags, and the warehouse is carpeted with readers. The readers can read the tags. And if you need information about a particular tag, you can go out on the internet and yank down information about that particular object from a global database.

And finally, all the data is available to the people doing the control of whatever system is being controlled-- the supply chain, a robot, whatever. Now I should also stress that, to us, ID is just a sixth sense. The other five senses can equally be integrated into our system; any sensor, in fact, can fit into the system. An ID is just a sixth sense of identification. But a temperature reading, a timestamp, a humidity reading are all parts of the same XML file that we can take snapshots of when required. And these XML files are available globally on the internet, barring security and privacy concerns, which I'll talk about in a second.

The services in place-- well, we are soon going to announce something we call the Electronic Product Code. You need an E in front of anything today to make it useful. It's a play on the Universal Product Code, the UPC. The Universal Product Code, by the way, contains certain information about the object. But we are pure ID, only ID. The Electronic Product Code is an ID.

The Object Naming Service-- if you're familiar with the Domain Name Service, it is similar. The Object Naming Service is something which will take the gibberish that is the identification of the object, the serial number, and convert it to the URL of a website or a database site from which you can yank down all the information you want about that particular object. And finally, there's something called the Physical Markup Language, which is a large XML file containing all the relevant information about the object-- anything from instructions for use, to code for how to run the object, down to, say, if it's a food packet, nutrition information. This is stored in a standard format in a database somewhere.

I'll tell you quickly about the three services. First, unique object identification-- just to give you an idea, there are about 20 million manufacturers today, we are guessing. And if you look at the number of bits required for things from automobiles down to grains of rice, it takes only about 54 bits to number every grain of rice produced per annum. So we have plenty of bits.
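
The rice figure checks out under rough assumptions; the annual production and per-grain mass below are my own estimates, not numbers from the talk:

    import math

    annual_rice_kg = 6e11      # assume ~600 million tonnes of rice per year
    grain_mass_kg = 25e-6      # assume ~25 mg per grain

    grains_per_year = annual_rice_kg / grain_mass_kg         # ~2.4e16 grains
    print(f"bits needed: {math.log2(grains_per_year):.1f}")  # ~54.4 bits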

And based on that, we've come up with a proposal for the next-generation Electronic Product Code, which looks something like this. We're assuming 96 bits to begin with: about 8 bits-- which is 2 hex digits-- as a header, the version number; 24 bits for the manufacturer; 24 bits for the particular product; and about 40 bits for the serial number of the particular item. And these numbers, if this proposal is accepted, will eventually be handled by an organization of the Uniform Code Council and given to manufacturers, so they can buy them, bid for them, and then print out chips.
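
As a concrete illustration of that layout, here is a minimal Python sketch of packing and unpacking a 96-bit code with the proposed fields; the field values are invented, and the encoding that was eventually standardized may well differ:

    # Proposed 96-bit EPC layout: 8-bit header, 24-bit manufacturer,
    # 24-bit product, 40-bit serial number.
    HEADER, MFR, PRODUCT, SERIAL = 8, 24, 24, 40

    def pack_epc(header, mfr, product, serial):
        epc = header
        epc = (epc << MFR) | mfr
        epc = (epc << PRODUCT) | product
        epc = (epc << SERIAL) | serial
        return epc  # a 96-bit integer

    def unpack_epc(epc):
        serial = epc & ((1 << SERIAL) - 1); epc >>= SERIAL
        product = epc & ((1 << PRODUCT) - 1); epc >>= PRODUCT
        mfr = epc & ((1 << MFR) - 1); epc >>= MFR
        return epc, mfr, product, serial

    # Hypothetical values: version 1, manufacturer 42, product 7, serial 123456.
    code = pack_epc(1, 42, 7, 123456)
    print(f"{code:024x}")   # 96 bits = 24 hex digits
    assert unpack_epc(code) == (1, 42, 7, 123456)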

The Object Naming Service is the thing which takes this gibberish, the EPC code, and converts it to a URL-- an IP address, so to say. And that's what we have here. The Object Naming Service is up and running. It's objectid.net. The www there is misleading-- it's not really a website, just a service for a particular protocol. You can go there, give it one of the EPC codes-- it's a prototype service right now-- and it returns a file which contains all the information about the particular object.
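
The resolution step is analogous to a DNS lookup. A toy sketch of the idea; the lookup table and URL are invented for illustration and are not the actual objectid.net protocol:

    # Toy Object Naming Service: map an EPC's manufacturer/product
    # prefix to the URL of the database that describes the object.
    ONS_TABLE = {
        (42, 7): "http://example-maker.com/products/7/",  # hypothetical
    }

    def resolve(mfr, product, serial):
        """Map an EPC's manufacturer/product fields to a database URL,
        the way DNS maps a hostname to an IP address."""
        base = ONS_TABLE.get((mfr, product))
        if base is None:
            raise LookupError("no naming record for this object class")
        return f"{base}{serial}"  # per-item record, if the maker keeps one

    # Fields as unpacked from a 96-bit EPC (see the sketch above).
    print(resolve(42, 7, 123456))  # -> http://example-maker.com/products/7/123456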

The Physical Markup Language contains temporal object data, static object information, and sensor telemetry. So for example, if you want to track the temperatures a certain product or object has been through, all these snapshots are stored in that file. It could be a distributed file or a local file, but all the information is available. It also holds control instructions-- for example, you might have a state machine model of the particular device being represented, so that you can simulate it and control it; a compiler, so you can compile, for example, code to a specific machine; and finally, modeling and simulation.
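
To make that concrete, here is a minimal sketch of what such an XML record might hold, built with Python's standard library; the element names are invented-- the real PML schema was still being defined at the time:

    import xml.etree.ElementTree as ET

    # Hypothetical PML-style record: static data plus telemetry snapshots.
    obj = ET.Element("object", id="01.00002A.000007.000001E240")
    static = ET.SubElement(obj, "static")
    ET.SubElement(static, "name").text = "frozen dinner"
    ET.SubElement(static, "instructions").text = "full power, 4 minutes"

    telemetry = ET.SubElement(obj, "telemetry")
    for when, temp_c in [("2000-04-01T09:00", -18.0), ("2000-04-01T15:00", -17.5)]:
        snap = ET.SubElement(telemetry, "snapshot", time=when)
        ET.SubElement(snap, "temperature", units="C").text = str(temp_c)

    print(ET.tostring(obj, encoding="unicode"))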

The challenges-- we have, obviously, a lot of challenges. This is a fairly ambitious project. At the very basic level, we're looking at the electromagnetics of the tags and the readers. We are, for example, trying to work out the physics to maximize the range at which the tags can be read. Another problem is one of anti-collision. What if multiple tags respond at the same time? They could jumble each other's signals. This is a standard problem in networking.

Sunny has come up with a very nice linear algorithm for resolving collisions among tags. The tasks involved include resolving multiple simultaneous responses, doing so in minimum time, and doing so with minimum onboard computing. Because if the anti-collision algorithm becomes complicated, the chip becomes more expensive, and that kills the whole process.
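
For flavor, here is a sketch of one textbook anti-collision scheme, binary tree walking-- not necessarily the algorithm developed here. The reader asks only tags whose IDs start with a given prefix to respond, and splits the prefix by one bit whenever two or more tags answer at once:

    def tree_walk(tags, bits):
        """Return every tag ID in `tags` (a set of ints, `bits` wide)."""
        found, stack = [], [(0, 0)]      # stack of (prefix, prefix length)
        while stack:
            prefix, length = stack.pop()
            # Tags matching the prefix respond; >1 response is a collision.
            responders = [t for t in tags if t >> (bits - length) == prefix]
            if len(responders) == 1:
                found.append(responders[0])   # isolated: tag identified
            elif len(responders) > 1:
                stack.append((prefix << 1, length + 1))
                stack.append(((prefix << 1) | 1, length + 1))
        return found

    tags = {0b1011, 0b1001, 0b0110}
    assert sorted(tree_walk(tags, 4)) == sorted(tags)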

We're caught in a chicken-and-egg problem here. If the tags are expensive, they won't be manufactured in bulk. And if they aren't manufactured in bulk, they will remain expensive. We're trying to break that loop. Eventually, we hope, if we can break that loop and these standards take hold, barcodes will be-- I wouldn't say replaced, but certainly augmented with these radio frequency tags.

Another problem is device packaging. We are now talking about placing chips on paper, OK? Now the particular tag I'm passing around, the Motorola BiStatix tag, has an advantage in that the antenna needn't be a copper wire. It could be just printed graphite, because it's an electrostatic, as opposed to an electromagnetic, tag.

So a lot of questions arise. How do you print graphite onto paper? How do you place the chip on the antenna so that you get good contact and reliable contact? And how do you do it accurately? So these are some of the problems we're dealing with and struggling with.

There are also other questions, like shielding. If there's metal nearby, the field of the tag may be interfered with. And this is another thing we need to worry about. So there's a whole plethora of problems that we're slowly wading into. But these are all solvable problems, so we're quite confident.

At the network level, there are several challenges. The system must scale. Now we talk about millions of computers today, but we're talking, in this case, of trillions-- perhaps hundreds of trillions-- of tags, OK? So this is quite a different ballgame than even the internet. And we would certainly run out of internet addresses, so there's IPv6, the new standard, which we already talked about.

The interesting thing is that if you look at the internet-- and Sunny spoke about this-- people are talking about broadband and high bandwidth, and the reason is they want to download multimedia, streaming video, over the internet. Ours is a very different traffic pattern. What happens here is you have thousands of very short messages, as opposed to single, very long messages. So the congestion issues are very different. There's a lot of research being done in Sunny's group, and we're just beginning to understand some of these issues for tags.

Of course, there's the whole problem of security and privacy. Initially, we will avoid the problem-- partly also from a cost perspective-- by deploying these only in high-value items and settings where cost and privacy aren't an issue, for example, in a factory or inside a supply chain. But eventually, when these tags get into low-value, cheap items, then we have to worry about consumer privacy and security. There are ways to deal with these problems, but again, it's a matter of planning properly.

Quality of service means making sure that, for example, when you want to read a tag and get information about it, the network is there-- if the network is down, you can't get the information, and the tag is useless, because it's just an ID. So we have to start thinking about those issues here as well. And finally, flexibility-- the internet itself is changing. The good news is that-- again, serendipitously-- a group of us with expertise in different areas could get together. So Sunny can advise us on the viability of IPv6 and fourth-generation wireless technologies, because he's an integral part of that.

System modeling, verification, and stability-- we're building a huge system. We have to be aware that things might happen that we can't predict, and it is important for us to start building models to understand these possible events. David Brock, one of our colleagues, was involved in the creation of SIMBA, the nationwide Pentagon military simulation software, where you can actually plug in different modules and simulate the behavior you get.

We haven't really started doing research on this, but we have to start thinking about it and develop the modeling and simulation tools to understand the complex dynamic behavior of these billions of interconnected objects. This is more along the lines of something the systems division here would do, but we certainly throw up a lot of problems for them.

And of course, there are also simpler things. For example, if you download instructions to a certain machine, how do you verify that these instructions can be carried out by the machine? In fact, we have a demo-- you know, these smart device demos are very popular with students. One of my students hacked a microwave, and the microwave now has a reader in it.

And he takes food packets and places them inside the microwave. The microwave reads the food packet, figures out the ID in about half a second, goes and gets all the cooking instructions, compiles them for the particular microwave, and cooks the food. The question now is, how do we make sure that the instructions he's downloaded are compatible with that microwave? For example, a malicious instruction might blow up the microwave. So how do you verify that? It comes down to what computer science calls verification: modeling the microwave as a finite state machine, taking the instructions, and verifying them. It's a form of control and reachability analysis-- and finally, looking at issues like stability.
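
A toy version of that check: model the appliance as a finite state machine and reject a downloaded program if any step has no legal transition. The states and instructions here are invented for illustration:

    # Hypothetical microwave FSM: (state, instruction) -> next state.
    TRANSITIONS = {
        ("idle", "load"): "loaded",
        ("loaded", "set_power_high"): "armed",
        ("armed", "run_4_min"): "cooking",
        ("cooking", "stop"): "idle",
    }

    def verify(program, state="idle"):
        """Walk the FSM; refuse the whole program on any illegal step."""
        for instruction in program:
            state = TRANSITIONS.get((state, instruction))
            if state is None:
                return False
        return True

    assert verify(["load", "set_power_high", "run_4_min", "stop"])
    assert not verify(["run_4_min"])   # can't cook an empty, unset oven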

Let me just cut to the chase, because I can see Seth standing there looking at me. Just to give you an idea of areas where this can be useful, this complicated picture is actually a simple simulation of a supply chain, OK? It comes from a game that was invented at MIT called the beer game, which simulates the supply chain of beer. And there's a reason for that: beer distribution has very slow information propagation.

The way information is propagated today is that you have the retailer, the wholesaler, the distributor, and the factory. When you place an order, the information goes up, week by week, and then the material comes down. So there's an eight-week delay before you actually get the product. Now, in any controlled system, in any dynamic system, delay, of course, is a bugbear.

So when you play this game, where you have students sit at a table and place orders, we show them how unpredictable the dynamics of the system can be. This was developed by the system dynamics group at MIT. We can also play the game where you simulate-- and this is done on a computer-- the rapid flow of information. Let's say that there are tags in the warehouses, and information is freely available, OK?

If you run the two simulations, it's kind of interesting to see the sort of results you get. Without information flow, you get very cyclical supply chain performance. There are stockouts. There's overstock, OK? With the information, you get fairly steady behavior. This is just a simple simulation that we were doing to understand this, but you get much more predictable behavior. The supply chain is much more constant, OK?
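
A heavily simplified, single-echelon sketch of that comparison in Python; the demand pattern, delays, and ordering rule are invented (the real beer game has four echelons), but the qualitative contrast is the point:

    def simulate(delay_weeks, see_pipeline, weeks=40, target=20):
        """Order-up-to policy with a fixed information/shipping delay."""
        stock, pipeline, history = target, [0] * delay_weeks, []
        for week in range(weeks):
            demand = 4 if week < 5 else 8        # one-time jump in demand
            stock += pipeline.pop(0)             # this week's delivery
            stock -= demand
            on_order = sum(pipeline) if see_pipeline else 0
            order = max(0, target + demand - stock - on_order)
            pipeline.append(order)
            history.append(stock)
        return history

    slow = simulate(8, see_pipeline=False)  # slow info, supply line ignored
    fast = simulate(1, see_pipeline=True)   # instant information
    print(min(slow), max(slow))  # wild swings: stockouts and overstock
    print(min(fast), max(fast))  # narrow band: steady performance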

And we heard Alex-- and [INAUDIBLE], Dr. Rosen-- talk earlier about how fast a supply chain needs to work today in order to keep up with the rapid obsolescence of products. What happens with this is that you can now start pulling your safety stocks down, bringing down the level of safety stock so that your supply chain is much more efficient, because you have instant information about when the station down the line has run out of material. There's a buzzword for it, which is pull. And it falls into that category.

But at the most general level, what we are concentrating on is standards, the infrastructure, the technology, the pure engineering of making these systems work. And someone else will hopefully come in and use these technologies to improve the performance of industry.

Conclusions-- this is really a system-level engineering solution. We have a computer scientist, a mechanical engineer, and a roboticist working on this, and other faculty members in our department who can give us expertise. For example, XML-- we have folks here who have been using XML for the last three or four years.

We have a blend of information technology and mechanical engineering system dynamics. It's really a new role for a university, because standards and infrastructure development is usually done outside of universities. But we're taking the plunge, hoping that we can make an impact. The bad news is that this can't be done by a company, because everyone's trying to own the whole pie. It's got to be done in a non-aligned location-- you know, in Geneva. And MIT, we hope, is the right place.

You can get more information at http://autoid.mit.edu. So thanks a lot.

[APPLAUSE]

MODERATOR: Questions?

AUDIENCE: A lot of products that we're using are disposable, like baby diapers. And in a couple of weeks, these things will be gone. So do you foresee some systematic way of destroying the IDs on those products?

SARMA: Yeah, there are two ways to approach the problem. The first is, yes, we have been thinking about recycling the IDs. So for example, when an item goes into the trash, for certain items, you can free that ID so it can be reused.

But the other way to look at it is that, right now, we have 128 bits. When we go to 256, we can be almost wasteful with the numbers and get away with it, you know? After 100 years, you start reusing the numbers. So the answer to your question is, yes, we've thought about it a little bit, and we're not too worried, OK? Sorry.

AUDIENCE: I had a question. You know, one interesting attribute is the location of an object, and we think about GPS systems for cars and so forth. What scale can this go down to before that becomes a problem? Like in manufacturing, something made of a thousand pieces-- can you track where all those thousand pieces are as the piece comes together? Or would that be of interest in this type of project?

SARMA: Actually, from a robotics point of view, George and I have been talking about this for some time, and Sunny and I have been talking about this. To some extent, you know, a cell phone is a localization scheme-- I know which cell you're in from your cell phone. But it's a very coarse localization scheme.

But in this case, the cells are about a foot by a foot, so you can get it to within a foot. The question is, can you get it down to within a few millimeters? And can you get orientation? There is no obvious way to do that yet. The reason is it's just a field; you can't really do any tomography and reconstruct.

But there's other things you can do. For example, you can scan space and try and pick up location. So we haven't really looked into that much. We've just talked about it. You know, eventually, I think it will be very useful.

AUDIENCE: You could make a more expensive chip that would broadcast more than just its ID.

MODERATOR: Well, actually, if you have two sensors on it-- if they're RF sensors-- then, with the two sensors, you can actually work the orientation out [INAUDIBLE] size of the signals you're getting out of it.

SARMA: Yeah, if you put in the money, and if you can triangulate, you can certainly, I suppose, try and find it. It's just that, from a cost point of view, we haven't looked at it. From a robotics point of view, that's very exciting. Yep.

AUDIENCE: I think that it's fascinating, the idea. But you know, one thing that worries me is that somebody can get close to my house and find [INAUDIBLE] things I have purchased in the supermarket [INAUDIBLE]. So what are you thinking about-- you know, you mentioned the word "security." [INAUDIBLE] society that might [INAUDIBLE].

SARMA: Yeah, I mean, obviously, we're very worried, and we're careful about that. For one thing, it will be many, many years before these things get into your home. Right now, this is for pallets in factories, OK? That's where we're going to start.

But that's a very pertinent question. And the answer is that the number, by itself, is useless, OK? It's only when you can go to the Object Naming Service and get information that it becomes useful. You can even, in fact, intentionally randomize the numbers so no one has any idea what they are. And if we control the Object Naming Service, we can, for example, lose the number the moment an object is checked out, so we'd never know what the number is again. That's one thing.

The other thing is that we're also working on ways in which you can scramble the number if you want. And if you really wanted the ID gone, you could physically destroy it. Finally, the range is very small, OK? The range is only a few feet. That doesn't make it foolproof, but it also helps. But yes, that's something we're very concerned about and something we continue to look at.

AUDIENCE: This lightweight protocol for the readers, is that a standards-based protocol that you're contemplating?

SARMA: We're contemplating a standards-based protocol, but I'll let Sunny handle that one. Sunny, you want to?

SIU: Yes, right now, [INAUDIBLE] project knowledge. And what we want to do is actually-- we have, now, a protocol which is very lightweight. And we want to achieve [INAUDIBLE] and actually license it, free, to all industries so that they will use our MIT standard.

AUDIENCE: Are you familiar with LonWorks?

SIU: No.

AUDIENCE: There's a relatively recent ANSI standard on a lightweight protocol from a company called Echelon. It's called LonWorks. You might want to look at it-- L-O-N works.

SIU: But right now, it is owned by the company?

AUDIENCE: No, it's public domain in the sense that there are two chip suppliers that provide LonWorks chips, Toshiba and Cypress. And I don't want to go too far with it, but it is something that you would want to look at.

SIU: Well, we know of consortia that have their own standards. So we want to make the protocols really very lightweight and very low-cost.

AUDIENCE: The other question I had was, the Object Naming Service, do you mean to say that each tag would go to a specific, or separate, internet address?

SARMA: It's up to the functionality that you want. I mean, if you want to take a stock-keeping unit and give it-- you know, for example, if all Coke cans made in May point to only one site, that's up to Coke. But suppose for some reason you want to track more finely-- for example, in Belgium, there was a scare with some Coke cans going bad. If you wanted to handle that, then, based on the level of information you want, you could be redirected to a particular, very tiny web database location where information particular to that instance is located. That'll evolve. But we certainly want to keep that an available option.

AUDIENCE: But we've just seen that it could address a content-addressable database a lot more efficiently.

SARMA: Exactly.

MODERATOR: One more question, and then we'll break.

AUDIENCE: Would it be possible just to adopt a UPC code? Are there enough options to just make an eUPC code? And so you could hook barcode readers to these readers and use the same code.

SARMA: That's a very good question. The UPC is only 13 digits, so it's not really enough. The UPC is really meant to identify a particular-- what they call an SKU, a Stock Keeping Unit. So for example, all Cokes would be the same. As it turns out, we can actually embed, and we do embed, the UPC code in this. So theoretically, you could do that. In fact, because we have 128 bits, I think a few fields are just UPC right now, pure UPC.

MODERATOR: OK, let's break. [INAUDIBLE].

[APPLAUSE]