I gave a talk titled Big Data – Trends and Challenges yesterday in San Jose. This was organized as a meet-up event by Datapipe and Compassites Software. Datapipe provides cloud infrastructure services to clients, whereas Compassites Software (where I am a board director) is a technology services firm out of Bangalore, India, focusing on areas like consumerization of IT, cloud computing, and Big Data.
At the talk yesterday, I realized how confused people seem to be about Big Data, as the term is so ill-defined. One thing is for sure: Big Data comes in one size – Big. Besides the size issue (petabytes and beyond), there is the velocity issue (Data in Motion vs. Data at Rest) and the variety issue. I mentioned that as the volume of data keeps rising, the percentage of data actually analyzed for insight keeps declining. I also noted that 80% of the data in the world is unstructured, hence new solutions are being invented, and that M2M (machine-to-machine) or sensor data keeps rising. In the volume context, I said that a single engine on a Boeing 747 spills out 10 terabytes per hour. All four engines on a Boeing 747 flying across the Atlantic produce a staggering 640 TB. There are 25,000 transatlantic flights every day, and you can do the math on how much data gets collected per day.
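For the curious, here is that math as a quick back-of-the-envelope sketch in Python, using only the figures quoted above (the flight duration is simply what those figures imply, not a separate claim):

```python
# Figures as quoted in the talk.
TB_PER_ENGINE_PER_HOUR = 10
ENGINES = 4
TB_PER_CROSSING = 640       # total quoted for one transatlantic flight
FLIGHTS_PER_DAY = 25_000

# Flight duration implied by the per-crossing total.
implied_hours = TB_PER_CROSSING / (TB_PER_ENGINE_PER_HOUR * ENGINES)

# Daily total across all transatlantic flights, in TB and exabytes.
tb_per_day = TB_PER_CROSSING * FLIGHTS_PER_DAY
eb_per_day = tb_per_day / 1_000_000

print(implied_hours, tb_per_day, eb_per_day)
```

At the quoted rates, that works out to roughly 16 million TB – on the order of 16 exabytes – of sensor data per day, which is exactly why almost none of it can be stored or analyzed in full.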
We discussed the business value of Big Data and how the typical pilot project at enterprises seems to be IT log data analysis. Other areas like fraud detection, social media, and call center feedback are candidates for Big Data applications. On the technology front, much has happened during the last 5-7 years. Most of the innovation is coming out of the new web companies – Google, Amazon, Yahoo, Facebook, and Twitter. The Hadoop platform is an offshoot of Google's early work on GFS (Google File System) and GMR (Google MapReduce). Google is moving beyond Hadoop with its recent work on Dremel, Percolator, and Pregel. Facebook is also pursuing new projects like Puma, mostly for real-time access and analysis. Twitter's Storm project is also noteworthy. Google recently launched BigQuery as a cloud service. Then there are dozens of NoSQL products such as Cassandra, Couchbase, MongoDB, and Riak.
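For readers new to the MapReduce model that underpins Hadoop, here is a toy word-count sketch in plain Python: not Hadoop's actual Java API, just an illustration of the map, shuffle, and reduce phases that the framework runs in parallel across a cluster.

```python
from collections import defaultdict

def map_phase(doc):
    # Mapper: emit a (word, 1) pair for every word in a chunk of input.
    return [(word.lower(), 1) for word in doc.split()]

def shuffle(pairs):
    # Shuffle: group values by key, as the framework does between phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reducer: sum the counts emitted for each word.
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data big insight", "data in motion"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
counts = reduce_phase(shuffle(pairs))
print(counts)
```

In a real Hadoop job the mappers and reducers run on many machines over HDFS blocks, but the programming model is exactly this: stateless map and reduce functions with a framework-managed shuffle in between, which is what makes it naturally a batch system.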
It is important to remember that the world is not being taken over by Hadoop; it is a batch system for handling very large data volumes via distributed parallel processing on commodity hardware. It does not address OLTP, which is critical for industries such as airlines and banking. Also, if your data volume is under 100 terabytes and the data is structured, then current Data Warehousing offerings via an RDBMS or appliances (e.g. Oracle Exadata, IBM Netezza) are excellent solutions. The web-centric interactive world has given rise to the need for extreme scale, and Hadoop-based solutions must learn to co-exist with the existing world. Hence Big Data integration will be a key area.
One thing is for sure: there is a lot of interest in the subject of Big Data, as clarity is the one thing lacking amidst all the marketing hype and noise.