I want to examine the mainframe -- specifically, why is it still around, and what does its survival teach us?
The IBM mainframe is more than 40 years old, having been announced in April 1964. I have personally been associated with it for the better part of that time. In 1977, upon returning from a sabbatical to the Watson Research Center, where I was a research staff member in Computer Sciences, I joined a project aimed at exploring what it would take to design a mainframe 20 times faster than our then-fastest machine, which ran at all of 5 MIPS (millions of instructions per second).
Since then, many have predicted that the mainframe was dying and would soon disappear, some because they were competitors hoping to get customers to replace the mainframe with their product, others because they truly could not imagine anything this "old" still having value in an industry where new things show up at such an incredible rate.
I myself often wondered what it was about the IBM mainframe that kept it going while most other computer platforms from the '50s, '60s and '70s are long gone. These musings are no mere academic exercise -- certainly not from IBM's point of view. The more we understand what has kept mainframes going, the better prepared we will be to keep them going well into the future. And the more we understand them, the better prepared we will be to maintain the increasingly sophisticated and complex infrastructures which, like the mainframe, are not likely to be replaced anytime soon -- the Internet, World Wide Web and Grids, for example.
Let me offer some thoughts on this question, starting by looking at what we mean by "old." Old commonly means something that has been around for a long time. Some old things become rundown. This often happens to products, businesses and towns. Alas . . . it definitely happens to human beings. But other things that have been around for a long time can continue to evolve, can be brought up to date and thrive if the proper attention is paid to them, and that is very much the case with the mainframe.
Mainframes have shown the ability, given the proper R&D investments, to keep evolving. What we mean by "IBM mainframe" today is very different from what those words meant when we started the research project I mentioned above -- to see how to go from 5 MIPS to 100 MIPS. We concluded that we could achieve such large performance gains only by coupling multiple SMPs (symmetric multiprocessors) in a way that was transparent to the software and applications. These ideas, after lots of work in our labs, eventually became the basis for the Parallel Sysplex architecture, a critical element in the transition from bipolar to CMOS processors.
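The essence of that "transparent coupling" idea can be sketched in a few lines of illustrative Python. This is a toy, not IBM's actual design; the names `Node`, `SingleSystemImage`, and `run_job` are my own assumptions. The point it shows is that the application calls the same interface whether there is one machine or several, while a coupling layer decides placement behind the scenes.

```python
# Toy sketch of transparent coupling: applications call run_job() exactly
# as they would on a single machine, while the coupling layer spreads work
# across several nodes. All names here are illustrative, not IBM interfaces.

class Node:
    """One SMP in the cluster; tracks how many jobs it has absorbed."""
    def __init__(self, name):
        self.name = name
        self.jobs_run = 0

    def execute(self, job):
        self.jobs_run += 1
        return f"{job} done on {self.name}"

class SingleSystemImage:
    """Presents many nodes as one machine: the caller never names a node."""
    def __init__(self, nodes):
        self.nodes = nodes
        self._next = 0

    def run_job(self, job):
        # Simple round-robin placement; the caller's code is unchanged
        # no matter how many nodes sit behind this interface.
        node = self.nodes[self._next % len(self.nodes)]
        self._next += 1
        return node.execute(job)

cluster = SingleSystemImage([Node("cec1"), Node("cec2"), Node("cec3")])
results = [cluster.run_job(f"job{i}") for i in range(6)]
print(results[0])  # placement is invisible to the submitting application
```

Adding a fourth node to the list would change nothing in the calling code -- which is the property that let performance scale without rewriting software.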
In other words, while people in the mid '90s still bought things they called "mainframes," under the hood the machine had been totally reinvented. And this wasn't the only time that occurred. Another such major transformation was the transition over the last 10 years from a purely proprietary platform to one that embraced key open interfaces: by supporting TCP/IP and similar standards, mainframes were able to integrate seamlessly into the Internet and WWW infrastructure that was spreading like wildfire.
Most of us, frankly, were surprised at how quickly Linux was ported to mainframe virtual machines, and how well it was accepted in the marketplace. Grid protocols, Web services, and just about any other such open, multi-platform protocols, are similarly supported on mainframes with relative ease. I also find it interesting that a number of technology advances that first appeared in mainframes over thirty years ago are now more important than ever, although the reason for their importance, like the mainframe itself, has evolved.
Nowhere is this more apparent than with virtualization technologies. To paraphrase the Wikipedia definition, virtualization provides users and applications with a logical rather than physical view of data, computing power, storage capacity, and other resources. Virtualization was first invented in the 1960s to support time-sharing, a revolutionary innovation that allowed many users to feel that each had their own personal, interactive computer, though in reality they were all sharing the same, rather expensive machine. The sharing that virtualization enabled -- by then extended to transaction applications used by thousands of users -- was responsible for the huge success of the S/370 in the 1970s.
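The time-sharing idea at the heart of that story can be illustrated with a minimal round-robin sketch, again in hypothetical Python rather than anything resembling the 1960s implementations. One physical CPU is multiplexed among several users, each of whom sees what appears to be a dedicated machine; the names `VirtualMachine` and `round_robin` are assumptions for illustration.

```python
# Toy illustration of time-sharing: one physical CPU is handed out in small
# quanta so that every user's "virtual machine" makes steady progress, giving
# each user the illusion of a private, interactive computer.

from collections import deque

class VirtualMachine:
    """One user's view: a private machine with its own pending work."""
    def __init__(self, name, instructions):
        self.name = name
        self.remaining = instructions  # instructions left to execute

def round_robin(vms, quantum):
    """Multiplex a single CPU across all VMs, quantum instructions at a time."""
    timeline = []          # records which VM held the CPU in each time slice
    ready = deque(vms)
    while ready:
        vm = ready.popleft()
        vm.remaining -= min(quantum, vm.remaining)
        timeline.append(vm.name)
        if vm.remaining > 0:
            ready.append(vm)  # not finished: rejoin the back of the queue
    return timeline

vms = [VirtualMachine("alice", 5), VirtualMachine("bob", 3), VirtualMachine("carol", 4)]
print(round_robin(vms, quantum=2))
```

Because the quantum is small relative to human reaction time, each user perceives continuous service -- the same illusion of exclusive ownership that today lets thousands of virtual servers share one physical machine.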
Today virtualization is one of the "hottest" technologies in computing. While the efficiency gained from sharing expensive resources continues to be an important objective, even more important is the ability of users to share resources so they can collaborate with each other around the world, in designing cars, say, or discovering new medicines. The mainframe, because of its virtualization and sharing heritage, is emerging as a top platform for collaborative computing.
The key lesson to be drawn from mainframes is this: If what they offer has value; if their underlying architecture has the flexibility to keep evolving; and, most important, if one keeps investing in them with the latest technologies and capabilities, they can keep moving well into the future.