Predicting The Future

By Alan C. Kay

Alan C. Kay is a fellow at Apple Computer Inc., a visionary, one of a few select scientists who have an independent charter to pursue far-out ideas. As he explains, his is a job which forbids him to grow up. The following remarks are taken from Kay's address before the 20th annual meeting of the Stanford Computer Forum.

"Xerox PARC (a computer science think tank for which Kay was a founding principal in 1970) was set up in Palo Alto to be as far away from corporate headquarters in Stamford, Connecticut. as possible and still be in the continental U.S. We used to have visits from the Xerox executives--usually in January and February--and when we could get them off the tennis courts they would come into the building at PARC. Mainly they were worried about the future, and they would badger us about what's going to happen to us. Finally, I said: 'Look, the best way to predict the future is to invent it. This is the century in which you can be proactive about the future; you don't have to be reactive. The whole idea of having scientists and technology is that those things you can envision and describe can actually be built.' It was a surprise to them and it worried them."

Another way to predict the future is to realize that it takes a very long time--about 10 to 20 years--to get a technology out of the research lab and into everyday life. It's very difficult to get brand new ideas out in less than a decade; in the case of the transistor, it took almost 25 years. No matter what you do, it may take several companies, several different groups of people, several different areas of venture capital funding and more before you get something back.

As far as predicting the future goes, that makes it really nice, because it means that a lot of the future we're going to have to contend with is sitting in someone's research lab right now. And by simply going around and looking in the right places, you can get a tremendous idea of the kinds of things that are going to happen.

Another way to predict the future is best explained by an anecdote in John Dessauer's book. Dessauer was an executive at Haloid Corporation, the tiny company in Rochester, N.Y., that eventually became Xerox. His book is called My Years with Xerox: The Billions Nobody Wanted.

The story describes how, in 1956, after some years of struggling, Dessauer was able to build the prototype of the 914 plain paper copier. Lacking the money to take the copier to market--to build factories and so forth--he decided to take it just down the road to IBM. He told IBM, "Take this, build factories, go out and sell it. I just want a small royalty." And IBM did what all companies do when they can't make up their minds: They went out and hired some consultants.

After an exhaustive study that took 18 months, the consultants came back with a very thick report which conclusively proved that there was no market for a plain paper copier. They had two chief reasons and a host of minor ones. Number one: there wasn't enough copy volume. That was a big problem. The other was that the xerography process cost more than ten times as much per copy as the AB Dick mimeograph process, which was the technology they compared it against. The consultants figured no one would spend ten times as much to copy anything. So based on their report, IBM turned down the copier offer, and that was several hundred billion dollars ago.

That's a very interesting story because IBM thought that their computer group was not in the communications business, and so did their consultants, and they missed a very important point: Human beings can't exist without communication. It's one of those basic human traits, and we're always willing to pay more for a better communications amplifier.

Many others have made this mistake. The railroads made a study after WWI which showed that, as far as they could see into the future, aircraft transportation would always be more expensive than railroad transportation. And you know, they're still right today; it's still more expensive. The problem is the railroads are almost gone, because nobody cares that air travel is more expensive; they're willing to pay it. The railroad industry missed the idea that not everything is a commodity market in which price is what matters; there are also value markets where people are willing to pay extra for extra value.

WHERE DO IDEAS COME FROM?

Of the various ways of coming up with new ideas, I think the weakest is brainstorming: taking what you've got, trying to wedge it together into something, painting it, and selling it. Of course you can get a product out of that: Take all the obnoxious things in a 12-year-old's room and glue them together and you get a boom box, which happens to be selling quite well. But most things done by brainstorming are like boom boxes.

The goal-oriented approach that the management books advocate is to find a need and fill it. We don't get many new ideas out of that, because if you ask most people what they want, they want just what they have now, 10 percent faster, 10 percent cheaper, with 10 percent more features. It's kind of a boring way to predict the future. But if we look at the big hitters of the 20th century, like the Xerox machine, like the personal computer, like the pocket calculator, all of these things did something else. They weren't combinations of existing things. They weren't finding a need and filling it. They created a need that only they could fill. Their presence on the scene caused a need to be felt, and almost paradoxically the company was there both to create the need and to fill it. That's what the Xerox machine did; nobody needed to copy until the Xerox machine came along. Nobody needed to calculate before the pocket calculator came along. When minicomputers and microcomputers came in, people said, "What do we need those things for? You can do everything now on the mainframe." And the answer was, "Of course you can do all those things on the mainframe, but it's for all the extra things you can do that you wouldn't think of doing on the mainframe."

WHY AREN'T WE BETTER DESIGNERS?

Marshall McLuhan has a line to try to explain some of this: He says, "I don't know who discovered water, but it wasn't a fish."

Part of what he meant is that if you're immersed in the context, you have an extremely difficult time being able to see what's going on. It's been remarked that the Japanese do a better job marketing to us than we do to ourselves, because they know the market through an alien culture. They actually study us in a way that we don't look at ourselves.

Another reason that we don't create very well is that we're afraid. America for the last 20 or 30 years has been going through a failure of nerve. McLuhan says, "Innovation for holders of conventional wisdom is not novelty but annihilation." That's the way our executives react all too often.

When we think about the ways that mankind has extended itself over the years, at least for the purposes of this talk, I'd like to think about two major ways: one is through the notion of amplifying tools, something which amplifies our reach into the world. Many of these tools are extensions of the body, like the microscope and telescope; some of them are rhetorical tools. I think of them as better ways to manipulate things.

The other method is by goal cloning, that is, to convince other people that they should work on our goals rather than theirs. Lewis Mumford wrote a good book about this process called The Myth of the Machine. When you want to build a pyramid, you have to have some tools, but you also have to find ways to convince 10,000 people or 100,000 people to work with you to get the thing done.

I remember in the early days of PARC--during one of the many visits by Xerox executives--when I had just come up with the idea of overlapping windows. We had implemented a test version of it, and I showed this to the executive who was there that day. I wound up the demonstration saying, "What's even better is that this idea only has a 20 percent chance of success; we're taking risks just like you asked us to." And the executive looked me right in the eye, and said, "Boy, that's great, but just make sure it works."

Far too many executives want you to be in that 20 percent, 200 percent of the time. The idea that having a 20 percent chance of success means you have to fail four out of five times is totally repugnant to them. I might ask those here in the Stanford Computer Forum how many of your companies have an award for the best failure each year. Probably none. This is a big problem, because people go where the rewards are, and the chances of getting something really nifty go down considerably.

Another problem is that we don't have a very good concept of the future itself. McLuhan's line--one of my favorites--is, "We're driving faster and faster into the future, trying to steer by using only the rear-view mirror."

Whitehead, the British philosopher, remarked that the greatest invention of the 19th century was the invention of invention itself. Not only were there 10 to 20 times more patent applications at the British government patent office, but about 80 percent of those patents were absolutely crackpot ideas. This was the century in which anybody who had an idea thought he could be an inventor and submit a patent for it, because everyone else was doing it.

McLuhan had a great line about the 20th century. He said, "The 20th century is the century in which change changed." He was referring back to Heraclitus, the Greek who said, "The only thing constant is change itself." From our standpoint it's hard to see that as a revolutionary statement, but remember that before the Greeks, a person could expect to be born into a world, live in a world, and die in a world that was no different from the world in which his parents had lived, or his parents' parents, and so forth. Things were pretty much the same for many thousands of years.

But McLuhan was saying something else, that when change changes, you can't predict the future in the same way anymore; you have some second order or third order effects. So the biggest thing we need to invent in the 1990s is the invention of the future itself. In other words, to think of the concept of future not as a thing that comes from the past--although it has come from the past in a way--but to realize that the forces that are bringing about change right now are so great that it's very difficult to sit down and make simple extrapolations.

Science fiction had some ideas about us going to the moon, partly because there were some fledgling things called rockets and someone could imagine one big enough to get us there. And science fiction could imagine robots with positronic brains, because Isaac Asimov did not have to explain how positronic brains worked. But science fiction totally missed the idea of the computer. Before the power of the transistor really became apparent, there was just no conceivable extrapolation.

In some sense our ability to open the future will depend not on how well we learn but on how well we are able to unlearn. Can you imagine a course at Stanford on unlearning? That would be revolutionary. How could we subtract the lives that we're living from our prognostications?

I think the weakest way to solve a problem is just to solve it; that's what they teach in elementary school. In some math and science courses they often teach you it's better to change the problem. I think it's much better to change the context in which the problem is being stated. Some years ago, Marvin Minsky said, "You don't understand something until you understand it more than one way." I think that what we're going to have to learn is the notion that we have to have multiple points of view.

At PARC we had a slogan: "Point of view is worth 80 IQ points." It was based on a few things from the past like how smart you had to be in Roman times to multiply two numbers together; only geniuses did it. We haven't gotten any smarter, we've just changed our representation system. We think better generally by inventing better representations; that's something that we as computer scientists recognize as one of the main things that we try to do.


From Stanford Engineering, Volume 1, Number 1, Autumn 1989, pp. 1-6