Much discussion has been going on about the new phrase ‘Data Lake’. Gartner wrote a report on the ‘Data Lake’ fallacy, warning that a ‘data lake’ can easily turn into a ‘data swamp’. Then Andrew Oliver wrote in InfoWorld these opening words: “For $200, Gartner tells you ‘data lakes’ are bad and advises you to try real hard, plan far in advance, and get governance correct”. Wow, what an insight!
During my days at IBM and Oracle, Gartner wanted to get time on my calendar to talk about database futures. Afterwards, I realized that I had paid a significant fee to attend the Gartner conference just to hear back what I had told them. A good business: gather information, then sell it back. Without meaning any disrespect, many analysts like to make controversial statements to stay relevant. Here is such a case with Gartner.
The term ‘data lake’ was coined by James Dixon of Pentaho Corp., who described it this way: “If you think of a datamart as a store of bottled water – cleansed and packaged and structured for easy consumption – the data lake is a large body of water in a more natural state. The contents of the data lake stream in from a source to fill the lake, and various users of the lake can come to examine, dive in, or take samples.” Think of a data lake as an unstructured data warehouse, a place where you pull all of your different sources into one large “pool” of data. In contrast to a data mart, a data lake won’t “wash” the data or try to structure it or limit the use cases. Sure, you should have some use cases in mind, but the architecture of a data lake is simple: the Hadoop Distributed File System (HDFS) with lots of directories and files on it.
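To make that “directories and files” idea concrete, here is a minimal sketch of a raw-zone layout. It is only an illustration: a local temporary directory stands in for HDFS, and the source names and the `dt=` partition convention are my assumptions, not anything prescribed by Dixon; on a real cluster you would build the same tree with `hdfs dfs -mkdir -p`.

```python
from pathlib import Path
import tempfile

# Stand-in for the HDFS root of the lake (hypothetical layout).
lake = Path(tempfile.mkdtemp()) / "datalake"

# Raw zone: one directory per source, partitioned by ingest date.
# Data stays in its native format (JSON, CSV, logs, ...).
for source in ["sensors", "clickstream", "crm_export"]:
    (lake / "raw" / source / "dt=2014-07-28").mkdir(parents=True)

# Drop a sample file as-is -- no cleansing, no imposed schema.
sample = lake / "raw" / "sensors" / "dt=2014-07-28" / "readings.json"
sample.write_text('{"device": "t-101", "temp_c": 21.4}\n')

# Users "come to the lake" by listing and sampling, not via a fixed schema.
print(sorted(p.relative_to(lake).as_posix() for p in lake.rglob("*.json")))
```

The point of the layout is that structure lives in directory names (source, date) rather than in a warehouse schema, so new sources can be added without redesigning anything.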
The data lake strategy is part of a greater movement toward data liberalization. Given the exponential growth of data (especially with IoT and its myriad sensors), there is a need to store data in its native format for later analysis. Of course you can drown in a data lake! But that’s why you build safety nets: security procedures (for example, access allowed only via Knox), documentation (what goes where, in which directory, and what roles you need to find it), and governance.
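One lightweight way to combine the documentation and governance safety nets is a machine-readable manifest recording who owns each directory, what it contains, and which roles may read it. The sketch below is my own illustration, not a standard tool; the `manifest` structure and the `can_read` helper are hypothetical, and a real deployment would enforce this via Knox, Ranger, or HDFS permissions rather than application code.

```python
# Hypothetical governance manifest: lake directory -> owner, roles, description.
manifest = {
    "raw/sensors": {
        "owner": "iot-team",
        "read_roles": {"analyst", "data-engineer"},
        "description": "Unprocessed device telemetry, JSON, partitioned by dt=",
    },
    "raw/crm_export": {
        "owner": "sales-ops",
        "read_roles": {"data-engineer"},
        "description": "Nightly CRM dump, CSV",
    },
}

def can_read(role: str, path: str) -> bool:
    """Check whether a role may read a lake path, by longest matching prefix."""
    matches = [d for d in manifest if path == d or path.startswith(d + "/")]
    if not matches:
        return False  # undocumented data: deny by default
    entry = manifest[max(matches, key=len)]
    return role in entry["read_roles"]

print(can_read("analyst", "raw/sensors/dt=2014-07-28/readings.json"))  # True
print(can_read("analyst", "raw/crm_export/2014-07-28.csv"))            # False
```

Denying access to anything not in the manifest is the key design choice: it makes documenting a directory a precondition for using it, which is exactly the discipline that keeps a lake from becoming a swamp.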
Without offering any concrete alternative, Gartner seems to say that a new layer (call it a data refinery if you like) is needed to make sense of this ‘raw’ data, thus heading back to the ETL days of data warehousing. Gartner loves to scare clients about new technology (so that they seek help for a fee) and would prefer that everyone stay with the classic data warehousing business. This is out of step with the Big Data movement, which involves some risk, as always with any new technology.