Real-Time Big Data, or Data in Motion

Most discussions of Big Data begin and end with Hadoop. Hadoop is, in effect, a commercialized descendant of HPC (High Performance Computing), whose underlying technologies have been around for years: clustering, parallel processing, and distributed file systems. In today’s parlance these read as Hadoop clusters on commodity hardware, the map-reduce algorithm, and HDFS, in that order. There is no doubt that Hadoop has taken off in a big way, but it does not address one big emerging area: real-time query and analysis on data that is moving all the time. Data can be put into three buckets – transactional data, analytics on data at rest, and analytics on data in flight (streaming, real-time). It is the last one we are talking about here.

It is not just about velocity, but also latency. When an event occurs, we need to act on it within seconds or minutes; we have to “react in the moment”. First, the enterprise data warehouse (EDW) needs to be loaded with real-time data, as opposed to offline batch loading: what we need is continuous loading and data ingestion. Second, we have to query and analyze this fresh data as it arrives, to support split-second decisions. EDWs were designed years ago for offline batch processing and are unsuited to this new role, so newer in-memory technologies for processing, querying, and ingesting have to be looked at. As someone said – RAM is the new disk, disk is the new tape, and tape is the new microfiche (if microfiche still exists). One TB of RAM costs around $4K today, and that price will keep falling. Since most EDWs hold under 5 TB, fitting an entire warehouse in memory is already a roughly $20K proposition in RAM alone, so enterprises should evaluate the cost side of in-memory processing.
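To make the contrast concrete, here is a minimal sketch (in Python, with invented names and a toy event schema) of the difference between batch loading and continuous ingestion: each event becomes queryable the instant it arrives, so a “split-second decision” query sees data that is seconds old rather than waiting for the next nightly load.

```python
import time
from collections import deque

class InMemoryTable:
    """Toy in-memory table illustrating continuous ingestion.

    Unlike a batch-loaded warehouse, there is no load window:
    every event is visible to queries the moment it is appended.
    """

    def __init__(self):
        self.rows = deque()  # (ingest_timestamp, event) in arrival order

    def ingest(self, event):
        # Continuous loading: one event at a time, no batch window.
        self.rows.append((time.time(), event))

    def query_recent(self, seconds):
        # Query only the freshest data, e.g. for an in-the-moment decision.
        cutoff = time.time() - seconds
        return [event for ts, event in self.rows if ts >= cutoff]

table = InMemoryTable()
table.ingest({"user": "a", "amount": 120})
table.ingest({"user": "b", "amount": 75})
fresh = table.query_recent(5)  # both events are visible immediately
```

Real systems (MemSQL, SAP HANA, and the like) add durability, SQL, and distribution on top, but the essential property is the same: no delay between ingestion and queryability.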

Data in motion includes social network data feeds, clickstreams, trading data, sensor data, and so on. Velocity is the new big thing, and actions on such data must be taken within seconds. There is economic value as well as safety value: at Citibank, for example, a 100 millisecond processing delay can cost $1 million in business. Such latency budgets also drastically shrink the analysis window for finding root causes. Scale-out solutions on commodity hardware offer a big economic advantage here. Solutions such as MemSQL, SAP HANA, Argyle Systems, Twitter Storm, and Apache Spark/Shark are bringing in-memory processing architectures to this area of data in motion.
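The core primitive these stream processors offer is windowed aggregation over recent events. The sketch below (plain Python, with an illustrative trade-price feed; not the API of Storm or Spark) keeps only the last N seconds of events and answers aggregate queries over them, which is the shape of a “react in the moment” computation.

```python
from collections import deque

class SlidingWindow:
    """Minimal sketch of sliding-window aggregation, the kind of
    operation a stream processor performs continuously: retain only
    the last `window_seconds` of events and aggregate over them.
    """

    def __init__(self, window_seconds):
        self.window = window_seconds
        self.events = deque()  # (event_timestamp, value) in time order

    def add(self, ts, value):
        self.events.append((ts, value))
        self._evict(ts)

    def _evict(self, now):
        # Drop events that have aged out; each event is evicted at most
        # once, so amortized cost per event is O(1).
        while self.events and self.events[0][0] <= now - self.window:
            self.events.popleft()

    def average(self):
        if not self.events:
            return None
        return sum(v for _, v in self.events) / len(self.events)

# Simulated trade feed: (seconds, price). With a 10-second window,
# the trade at t=0 has aged out by the time t=12 arrives.
w = SlidingWindow(window_seconds=10)
for ts, price in [(0, 100.0), (4, 102.0), (9, 104.0), (12, 110.0)]:
    w.add(ts, price)
avg = w.average()  # average of the three trades still in the window
```

A production engine distributes this state across a cluster and handles out-of-order arrival and fault tolerance, but the window-plus-aggregate pattern is the same.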
