Back to the Future

In our technology business (software), we keep seeing things come back from the past under new labels as the latest trend. It’s like my graduate school roommate (a Canadian) who refused to buy a new tie in line with the style of the day. When the “thin tie” was in, he was wearing a “wide tie”. When I asked, “How come you are wearing something out of date?”, he would reply, “I don’t change. The styles will eventually come back to me”.

It is similar in our business. When we talk of cloud computing or SaaS, someone quickly remembers the “time-sharing” days of the likes of ADP. When we rave about VMware and virtualization, I recall IBM’s VM operating system of the 1970s. Whenever we brag about caching for better performance, I recall the “prefetching” we did in DB2 during the 1980s. When the “internet kids” thought “statefulness” was a cool idea (in an otherwise stateless web), we remembered the days of transaction processing, when databases guaranteed the ACID properties and techniques like two-phase commit were invented to ensure transactional integrity. When we tout SOA and web services as the key to reuse, we remember the “subroutines” of the past. The concepts are the same; the execution might be a little different.
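For readers who came in after the transaction-processing era, the two-phase commit protocol mentioned above can be sketched in a few lines. This is a toy illustration, not any particular TP monitor or database; the `Coordinator` and `Participant` names are mine, and real implementations add durable logging and failure recovery.

```python
# Toy sketch of two-phase commit: all participants commit, or none do.
class Participant:
    def __init__(self, name):
        self.name = name
        self.staged = None      # work held pending the coordinator's decision
        self.committed = {}     # durable state

    def prepare(self, txn):
        # Phase 1: vote yes only if the work can be staged successfully.
        self.staged = dict(txn)
        return True

    def commit(self):
        # Phase 2: make the staged work permanent.
        self.committed.update(self.staged)
        self.staged = None

    def abort(self):
        # Phase 2 (failure path): discard the staged work.
        self.staged = None


class Coordinator:
    def run(self, participants, txn):
        # Phase 1: collect votes from every participant.
        votes = [p.prepare(txn) for p in participants]
        if all(votes):
            for p in participants:
                p.commit()      # unanimous yes -> commit everywhere
            return "committed"
        for p in participants:
            p.abort()           # any no vote -> abort everywhere
        return "aborted"
```

The point of the protocol is the unanimity rule in the coordinator: a single “no” vote (or a timeout, in real systems) aborts the transaction at every site, which is exactly the rigidity Brewer later questioned for the internet era.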

This is not to undermine the advances of technology, such as the Internet and the world wide web. Two things have clearly pushed the envelope ahead – processing speed and bandwidth. We have been debating SMP (symmetric multiprocessing) vs. MPP (massively parallel processing) for years. We have also debated the “scale-up” vs. the “scale-out” model, the latter used at newer sites such as Google. We still debate the merits of “shared nothing” vs. “shared disk”.

A few years ago Professor Eric Brewer of UC Berkeley advanced the idea that the old-world rigid model of two-phase commit may not be the best choice for the internet era. He proposed a new acronym, BASE (Basically Available, Soft-state, Eventually consistent). This is based on his CAP theorem, which says that of Consistency, Availability, and tolerance to network Partition (a distributed network), one can achieve at most two out of the three. If you want consistency and availability (like a financial institution), then give up distribution and keep everything centralized. If you want distributed data and availability (like Amazon’s book business), then give up consistency. Finally, if you want consistency and distributed data (like a nation-wide bank), then be ready to pay for it in availability: leave some hours at night to run those utilities that synchronize the databases.
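Amazon’s side of the trade-off can be sketched just as briefly: a toy pair of replicas that acknowledges writes immediately (staying available) and reconciles in the background (becoming consistent eventually). The `Replica` and `anti_entropy` names are illustrative, not from any real system.

```python
# Toy sketch of eventual consistency: writes are acknowledged by one
# replica and propagate asynchronously, so reads may briefly disagree.
class Replica:
    def __init__(self):
        self.data = {}

def write(replica, key, value):
    # Acknowledged immediately at one replica: the "available" choice.
    replica.data[key] = value

def anti_entropy(source, target):
    # Background reconciliation: the "eventually" in eventually consistent.
    target.data.update(source.data)

r1, r2 = Replica(), Replica()
write(r1, "title", "Back to the Future")
stale = r2.data.get("title")   # r2 has not converged yet: returns None
anti_entropy(r1, r2)
fresh = r2.data.get("title")   # after reconciliation the replicas agree
```

Contrast this with two-phase commit: here the write never waits for the other replica’s vote, so the system stays up through a partition, and the stale read in between is the price paid for that availability.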

As we march into the future, Eric’s wisdom from several years ago would be a great guideline, even for database purists with an “all or nothing” philosophy.


One response to “Back to the Future”

  1. Hi Jnan,

    Nice post. I can relate to your “back to the future” theme. In the 70s at DEC we built a PDP-11 based TP monitor, called Trax. It featured smart terminals (like a browser) and a transaction execution environment (like J2EE) that executed small routines called transaction-step-tasks (like Java beans). Twenty-plus years later, while at BEA building WebLogic Portal, we were solving many of the same problems in the context of the web.
