Shattering the Glass House

The great trauma in the history of the CIO profession - what 1789 was to the French and the dissolution of the empire to the British - came in the late 1980s, when the venerable host-terminal system started to die under the massed attack of swarms of PCs.

At the time, the mainframe was the fruit of almost 20 years of polishing and tweaking. Nothing about it looked broken, at least in the opinion of many sage IS managers. The practice of centralizing data storage (let alone applications) seemed like a particularly good idea, offering as it did a single point from which the accessibility, integrity, reliability and security of corporate data could be guaranteed.

Nonetheless, the tide of PCs was clearly irresistible. It followed, or so it seemed, that enterprise data and computing tasks were fated to be dispersed throughout a population of processors. As the cracks yawned in the walls of the ancien régime, many of us passed through the stages of loss, beginning with rage and proceeding through denial and into bargaining, often with higher levels of management. Here at CIO we skipped those early phases, which tend to be a bit dreary, and cut straight to acceptance. On Oct. 15, 1992, we declared the game over. "Once data has been disseminated to its users, there is no way to rein it back into a single central system," we said, and proceeded to lay out a vision of how IS could come to terms with the dispersed data regime inherent to the PC landscape.

The solution we favored was an architecture in which all these data stores were integrated with local procedures. These would handle relations with a central control point: compiling reports, satisfying queries, and complying with policies on indexing, security, recovery and so on. The CIO would sit at this central point, monitoring performance, defining transactions and standardizing operating processes. "Distributed technology offers a way for IS to once again exercise its managerial responsibilities for enterprise data while allowing users to retain localized operations," we said. Everybody wins.
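To make that vision a little more concrete, here is a minimal sketch in Python, with hypothetical names throughout (Policy, LocalStore, CentralControl), of what local data stores integrated with local procedures and answering to a central control point might look like. It is an illustration of the shape of the arrangement, not a reconstruction of any system of the period.

```python
# A minimal sketch of the arrangement described above, with hypothetical
# names throughout: departmental data stays in local stores with local
# procedures, while a central control point collects compliance reports
# and fans out enterprise-wide queries.

from dataclasses import dataclass, field


@dataclass
class Policy:
    """Central rules each local store is expected to comply with."""
    required_index: str = "customer_id"
    backups_required: bool = True


@dataclass
class LocalStore:
    """A departmental data store plus the local procedures around it."""
    name: str
    rows: list = field(default_factory=list)
    indexes: set = field(default_factory=set)
    has_backups: bool = False

    def query(self, predicate):
        """Satisfy a query locally; raw data never leaves the store."""
        return [row for row in self.rows if predicate(row)]

    def report(self, policy):
        """Compile a summary and a compliance check for the center."""
        return {
            "store": self.name,
            "row_count": len(self.rows),
            "indexed": policy.required_index in self.indexes,
            "backed_up": self.has_backups or not policy.backups_required,
        }


class CentralControl:
    """The central point: monitors stores, standardizes policy, runs queries."""

    def __init__(self, policy):
        self.policy = policy
        self.stores = []

    def register(self, store):
        self.stores.append(store)

    def collect_reports(self):
        """Monitor performance and policy compliance across all stores."""
        return [store.report(self.policy) for store in self.stores]

    def federated_query(self, predicate):
        """Answer an enterprise-wide question by delegating to each store."""
        results = []
        for store in self.stores:
            results.extend(store.query(predicate))
        return results
```

The point of the design is that the center never holds the raw rows: it sees only reports and the results of the queries it delegates, which is roughly what letting users retain localized operations while IS exercises its managerial responsibilities amounts to in code.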

In retrospect, we called the game far too early. If it is true that you can't fight progress, progress is often more than happy to fight itself. The rapid penetration of fast networks and the even more rapid improvements in mainframe and server performance (by some measures, more than 150-fold in the past decade) allowed centralized models to snatch the momentum of development back from the desktops. Shail Jain, CEO of Intelligent Information Systems, a Durham, N.C., vendor of software transformation services, was manager of Data General's database research and development group 10 years ago. At the time, he says, "I believed that distributed databases would take over the world." But in the following years not one of the dozens of database architecture projects on which he has worked has been distributed in any serious sense.

Today, however, dispersed databases are back in the spotlight, partly as a consequence of several shifts in the technological background. Internet applications have added value to the traditional talking points of distributed databases: higher reliability and lower latency. Even more fundamentally, declines in processing costs, which had turned into an ally of centralized systems, are no longer driving technological change. During the past few years, the cost of bandwidth has been falling along an even more dramatic curve (some call this Gilder's Law): halving every nine months. In other words, the cost advantage of distributed over centralized IT infrastructure doubles every year. Given the huge R&D investment now being made in networking, these differences seem likely to become steadily more consequential.
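As a back-of-the-envelope check (a sketch that simply takes the nine-month halving time at face value, with t measured in months), the compounding works out to:

c(t) = c_0 \cdot 2^{-t/9}, \qquad \frac{c(12)}{c_0} = 2^{-12/9} \approx 0.40

That is, bandwidth costs fall roughly two-and-a-half-fold a year on Gilder's curve, which is the compounding behind the "doubles every year" shorthand above; how much of it shows up as a widening gap depends on how quickly processing costs are falling over the same period.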

Expressions of this trend include the recent finding that traffic on large networks doubles every year (a relationship discovered by the estimable Andrew Odlyzko of AT&T Labs), the growth in the ASP and network caching sectors, and the interest in Internet computing, a catchall term covering a long list of schemes for recapturing unused resources from underused systems.

Ideas leveraging these changes usually end up requiring distributed databases at some point. Sometimes these data stores communicate with agents or middleware, but often enough they use tightly integrated data/procedure packages that come very close to the vision we laid out in our 1992 article. A good example might be the "application caching" service being offered by a Palo Alto, Calif., company called Zembu, in which applications (such as programs that generate data dynamically) are replicated and distributed to the edges of the network.

All in all, our original article still holds up, if you can manage to overlook the little detail that we published it about nine years too soon.

Power to the people (who cares if they don't know how to use it). Send Revisit suggestions to et@cio.com.
