
Fast Data

During the 1980s and 1990s, online transaction processing (OLTP) was critical for banks, airlines, and telcos for core business functions. It was a big step up from the batch systems of the early days. We learned the importance of sub-second response time and continuous availability, with the goal of five nines (99.999% uptime) – roughly five minutes of tolerated outage per year. During my days at IBM, we had to face the fire from a bank in Japan that had an hour-long outage, resulting in a long queue in front of the ATMs (unlike here, the Japanese stood very patiently until the system came back, after what felt like an eternity). They were using IBM’s IMS Fast Path software, and the blame was first put on that software, though the cause subsequently turned out to be a different issue.

Advance the clock to today. Everything is real-time, and one cannot talk about real-time without discussing the need for “fast data” – data that has to travel very fast to support real-time decision making. Here are some reasons for fast data:

  • These days, it is important for businesses to be able to quickly sense and respond to events that are affecting their markets, customers, employees, facilities, or internal operations. Fast data enables decision makers and administrators to monitor, track, and address events as they occur.
  • Leverage the Internet of Things – for example, an engine manufacturer will embed sensors within its products, which will then provide continuous feeds back to the manufacturer to help spot issues and better understand usage patterns.
  • An important advantage that fast data offers is enhanced operational efficiency, since events that could negatively affect processes—such as inventory shortages or production bottlenecks—can not only be detected and reported, but remedial action can be immediately prescribed or even launched. Real-time analytical data can be measured against established patterns to predict problems, and systems can respond with appropriate alerts or automated fixes.
  • Assure greater business continuity – fast data plays a role in bringing systems—and all data still in the pipeline—back up and running quickly after a catastrophic event, before the business suffers.
  • Fast data is critical for supporting Artificial Intelligence and machine learning. As a matter of fact, data is the fuel for machine learning (recommendation engines, fraud detection systems, bidding systems, automatic decision making systems, chatbots, and many more).

Now let us look at the constellation of technologies enabling fast data management and analytics. Fast data is data that moves almost instantaneously from source to processing to analysis to action, courtesy of frameworks and pipelines such as Apache Spark, Apache Storm, Apache Kafka, Apache Kudu, Apache Cassandra, and in-memory data grids. Here is a brief outline of each.

Apache Spark – an open source toolset now supported by most major database vendors. It offers streaming and SQL libraries to deliver real-time data processing. Spark Streaming processes data as it is created, enabling analysis for critical areas like real-time analytics and fraud detection. Its Structured Streaming API opens up this capability to enterprises of all sizes.
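For a flavor of what that looks like in practice, here is a minimal PySpark Structured Streaming sketch (my own illustration, not tied to any particular vendor stack; it assumes a local Spark installation and uses the built-in “rate” source to generate test rows):

```python
# Minimal Structured Streaming sketch: count events per 10-second window.
# Assumes a local Spark install; the built-in "rate" source generates test rows.
from pyspark.sql import SparkSession
from pyspark.sql.functions import window

spark = SparkSession.builder.appName("fast-data-demo").getOrCreate()

# "rate" emits (timestamp, value) rows continuously -- a stand-in for a real feed.
events = spark.readStream.format("rate").option("rowsPerSecond", 100).load()

# Aggregate events into 10-second windows as they arrive.
counts = events.groupBy(window(events.timestamp, "10 seconds")).count()

# Print running counts to the console as each micro-batch completes.
query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```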

Apache Storm is an open source distributed real-time computation system designed to enable processing of data streams.

Apache Cassandra is an open source distributed NoSQL database designed for low-latency access and multi-datacenter replication.

Apache Kafka is an open source toolset designed for real-time data streaming, employed for data pipelines and streaming apps. The Kafka Connect API helps connect it to other environments. Kafka originated at LinkedIn.
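As a rough illustration (using the kafka-python client; the broker address and the “events” topic are placeholders I made up), producing and consuming a stream looks like this:

```python
# Illustrative producer/consumer pair using the kafka-python client.
# Broker address and the "events" topic name are placeholders.
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("events", key=b"user-42", value=b'{"action": "click"}')
producer.flush()  # make sure the record actually leaves the client buffer

consumer = KafkaConsumer(
    "events",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,  # stop iterating if no new records arrive
)
for record in consumer:
    print(record.key, record.value)
```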

Apache Kudu is an open source storage engine to support real-time analytics on commodity hardware.

In addition to these powerful open source tools and frameworks, in-memory data grids provide a hardware-enabled layer for fast data, delivering the speed needed for today’s workloads such as IoT management, deployment of AI and machine learning, and responding to events in real time.

Yes, we have come a long way from those OLTP days! Fast data management and analytics is becoming a key area for businesses to survive and grow.


The New AI Economy

The convergence of technology leaps, social transformation, and genuine economic needs is catapulting AI (Artificial Intelligence) from its academic roots and decades of inertia to the forefront of business and industry. There has been growing buzz over the last couple of years about how AI and its key subsets, Machine Learning and Deep Learning, will affect all walks of life. Another phrase, “Pervasive AI”, has become part of our tech lexicon after the popularity of Amazon Echo and Google Home devices.

So what are the key factors pushing this renaissance of AI? We can quickly list them here:

  • The rise of Data Science from the basement to the boardroom. Everyone has seen the three Vs of Big Data (volume, velocity, and variety). Data goes by many names – oxygen, the new oil, the new gold, the new currency.
  • Open source software such as Hadoop sparked this revolution in analytics over large volumes of unstructured data. The shift from retrospective to more predictive and prescriptive analytics, aimed at actionable business insights, continues to grow. Real-time BI is also taking a front seat.
  • The arrival of practical frameworks for handling big data revived AI (Machine Learning and Deep Learning), which fed happily on that data.
  • Existing CPUs were not powerful enough for the fast processing needs of AI, so GPUs (Graphics Processing Units) offered faster and more capable chips. NVIDIA has been a driving force in this area; its ability to provide a full range of components (systems, servers, devices, software, and architecture) makes it an essential player in the emerging AI economy. IBM’s neuromorphic computing project has shown notable success in perception, speech, and image recognition.

Leading software vendors such as Google have numerous AI projects, ranging from speech and image recognition to language translation and various kinds of pattern matching. Facebook, Amazon, Uber, Netflix, and many others are racing to deploy AI into their products.

Paul Allen, co-founder of Microsoft, is pumping $125M into his research lab, the Allen Institute for AI. The focus is to digitize common sense. Let me quote from today’s New York Times: “Today, machines can recognize nearby objects, identify spoken words, translate one language into another and mimic other human tasks with an accuracy that was not possible just a few years ago. These talents are readily apparent in the new wave of autonomous vehicles, warehouse robotics, smartphones and digital assistants. But these machines struggle with other basic tasks. Though Amazon’s Alexa does a good job of recognizing what you say, it cannot respond to anything more than basic commands and questions. When confronted with heavy traffic or unexpected situations, driverless cars just sit there”. Paul Allen added, “To make real progress in A.I., we have to overcome the big challenges in the area of common sense”.

Welcome to the new AI economy!

Big Data & Analytics – what’s ahead?

Recently I read this statement somewhere: as we end 2017 and look ahead to 2018, the topics that are top of mind for data professionals are the growing range of data management mandates (including the EU’s new General Data Protection Regulation, aimed at personal data and privacy), the growing role of artificial intelligence (AI) and machine learning in enterprise applications, the need for better security in light of the onslaught of hacking cases, and the ability to leverage the expanding Internet of Things.

Here are the key areas as we look ahead:

  • Business owners demand outcomes – not just a data lake that stores all kinds of data in its native format, with APIs on top.
  • Data Science must produce results – play and explore is not enough; learn to ask the right questions, and deliver visualization of analytics driven from search.
  • Everyone wants real time – days and weeks are too slow; businesses need immediate actionable outcomes, with analytics and recommendations based on real-time data.
  • Everyone wants AI (artificial intelligence) – tell me what I don’t know.
  • Systems must be secure – no longer a mere platitude.
  • ML (machine learning) and IoT at massive scale – thousands of ML models, with model accuracy to maintain.
  • Blockchain – businesses need to understand its full potential, since it is not merely transformational but a foundational technology shift.

In the area of big data, a combination of new and long-established technologies is being put to work. Hadoop and Spark are expanding their roles within organizations. NoSQL and NewSQL databases bring their own unique attributes to the enterprise, while in-memory capabilities (such as Redis) are increasingly being utilized to deliver insights to decision makers faster. And through it all, tried-and-true relational databases continue to support many of the most critical enterprise data environments.
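To make the in-memory angle concrete, here is a small sketch using the redis-py client to cache a precomputed metric so dashboards can read it with sub-millisecond latency (the host, key name, and TTL are illustrative):

```python
# Cache a precomputed KPI in Redis so dashboards read it from memory.
# Host/port, key name, and TTL are illustrative.
import json
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

kpi = {"daily_active_users": 152340, "as_of": "2018-01-15"}
r.set("kpi:dau", json.dumps(kpi), ex=300)  # expire after 5 minutes

cached = r.get("kpi:dau")
if cached is not None:
    print(json.loads(cached))
```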

The cloud is becoming the de facto deployment choice for both users and developers. Serverless technology with FaaS (Function as a Service) is seeing rapid adoption among developers. According to IDC, enterprises are undergoing IT transformation as they rethink their business operations, including how they use information and what technology to deploy. In line with that transformation, nearly 80% of large organizations already have a hybrid cloud strategy in place. The modern application architecture, sometimes referred to as SMAC (social, mobile, analytics, cloud), is becoming standard everywhere.

DBaaS (database as a service) is still not as widespread as other cloud services. Microsoft is arguably making the strongest explicit claim for a converged database system with its Azure Cosmos DB as DBaaS; Cosmos DB claims to support four data models – key-value, column-family, document, and graph. Databases have been slower to migrate to the cloud than other elements of computing infrastructure, mainly for security and performance reasons, but DBaaS adoption is poised to accelerate. Some of these cloud-based DBaaS systems – Cosmos DB, Spanner from Google, and AWS DynamoDB – now offer significant advantages over their on-premises counterparts.
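As a taste of the DBaaS model, here is a minimal sketch using AWS DynamoDB through the boto3 SDK (the “users” table, its key schema, and the region are hypothetical; credentials are assumed to be configured in the environment):

```python
# Minimal DBaaS sketch using AWS DynamoDB via boto3.
# The "users" table, its key schema, and the region are hypothetical;
# AWS credentials are assumed to be configured in the environment.
import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("users")

# Write and read a single item -- no servers or storage to provision.
table.put_item(Item={"user_id": "u-123", "name": "Asha", "plan": "pro"})
resp = table.get_item(Key={"user_id": "u-123"})
print(resp.get("Item"))
```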

One thing is for sure: big data and analytics will continue to be vibrant and exciting in 2018.

Apache Drill + Arrow = Dremio

A new company called Dremio emerged from stealth mode yesterday, backed by Redpoint and Lightspeed in a $10M Series A round back in 2015. The founders came from MapR but were active in Apache projects like Drill and Arrow. The same VCs backed MapR and had the Dremio founders work out of their facilities during the stealth phase. The company now has around 50 people in its Mountain View, California office.

Apache Drill acts as a single SQL engine that can query and join data from several other systems, and it can certainly make use of an in-memory columnar data standard. While Dremio was still in stealth, though, it wasn’t obvious what the intersection of Drill and Arrow might be. Yesterday the company launched a namesake product that likewise acts as a single SQL engine querying and joining data across other systems, and it accelerates those queries using Apache Arrow. So it is a combination of Drill plus Arrow: schema-free SQL over a variety of data sources plus a columnar in-memory analytics execution engine.

Dremio believes that BI today involves too many layers. Source systems, via ETL processes, feed into data warehouses, which may then feed into OLAP cubes. BI tools themselves may add another layer, building their own in-memory models in order to accelerate query performance. Dremio thinks that’s a huge mess and disintermediates things by providing a direct bridge between BI tools and the source systems they’re querying. The BI tools connect to Dremio as if it were a primary data source and query it via SQL. Dremio then delegates the query work to the true back-end systems through push-down queries that it issues. Dremio can connect to relational databases (DB2, Oracle, SQL Server, MySQL, PostgreSQL), NoSQL stores (MongoDB, HBase, MapR-FS), Amazon Redshift, Hadoop, cloud blob stores like S3, and Elasticsearch.

Here’s how it works: all data pulled from the back-end data sources is represented in memory using Arrow. Combined with vectorized (in-CPU parallel processing) querying, this design can yield up to a 5x performance improvement over conventional systems, the company claims. But a perhaps even more important optimization is Dremio’s use of what it calls “Reflections,” which are materialized data structures that optimize Dremio’s raw and aggregation operations. Reflections are sorted, partitioned, and indexed, stored on disk as Parquet files, and handled in memory as Arrow-formatted columnar data. This sounds similar to ROLAP aggregation tables.
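To make the Arrow-in-memory, Parquet-on-disk pairing concrete, here is a small pyarrow sketch of the same storage pattern (this is my own illustration, not Dremio’s code; the table contents are made up):

```python
# Columnar data in memory (Arrow) persisted to disk as Parquet --
# the same storage pattern described above, sketched with pyarrow.
import pyarrow as pa
import pyarrow.parquet as pq

# An in-memory Arrow table: column-oriented and cheap to scan.
sales = pa.table({
    "region": ["west", "east", "west"],
    "amount": [120.0, 80.5, 310.25],
})

pq.write_table(sales, "sales_reflection.parquet")     # columnar on disk
restored = pq.read_table("sales_reflection.parquet")  # back into Arrow memory
print(restored.schema)
```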

Andrew Brust from ZDNet said, “While Dremio’s approach to this is novel, and may break a performance barrier that heretofore has not been well-addressed, the company is nonetheless entering a very crowded space. The product will need to work on a fairly plug-and-play basis and live up to its performance promises, not to mention build a real community and ecosystem. These are areas where Apache Drill has had only limited success. Dremio will have to have a bigger hammer, not just an Arrow”.

Data Unification at scale

The term Data Unification is new in the Big Data lexicon, pushed by a variety of companies such as Talend, 1010Data, and Tamr. Data unification deals with the domain known as ETL (Extraction, Transformation, Loading), which emerged during the 1990s when Data Warehousing was gaining relevance. ETL refers to the process of extracting data from inside or outside sources (multiple applications typically developed and supported by different vendors or hosted on separate hardware), transforming it to fit operational needs (based on business rules), and loading it into end target databases – more specifically, an operational data store, data mart, or data warehouse. These are read-only databases for analytics. Initially the analytics was mostly retrospective (e.g., how many shoppers between the ages of 25 and 35 bought this item between May and July?). This was like driving a car while looking at the rear-view mirror. Then forward-looking analysis (called data mining) started to appear. Now business also demands “predictive analytics” and “streaming analytics”.

During my IBM and Oracle days, ETL was in the first phase left for outside companies to address. It was unglamorous work, and the key vendors were not that interested in solving it. This gave rise to new players such as Informatica, DataStage, and Talend, and it became quite a thriving business. We also see many open-source ETL offerings today.

The ETL methodology consisted of: constructing a global schema in advance; for each local data source, writing a program to understand the source and map it to the global schema; then writing a script to transform, clean (resolving homonym and synonym issues), and dedup (get rid of duplicates) the data. Programs were set up to build the ETL pipeline. This process has matured over 20 years and is still used today for data unification problems. The term MDM (Master Data Management) refers to a master representation of all enterprise objects, to which everybody agrees to conform.
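As a toy illustration of that classic pipeline (the source file, transformation rules, and target schema are all made up), a single ETL step might look like this:

```python
# Toy ETL step: extract rows from a CSV source, transform and dedup them,
# then load into a target SQLite table. File name and schema are made up.
import csv
import sqlite3

def extract(path):
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows):
    seen, clean = set(), []
    for row in rows:
        email = row["email"].strip().lower()   # normalize (cleanup step)
        if email in seen:                       # dedup on the normalized key
            continue
        seen.add(email)
        clean.append((email, row["name"].strip().title()))
    return clean

def load(rows, db_path="warehouse.db"):
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS customers (email TEXT PRIMARY KEY, name TEXT)")
    con.executemany("INSERT OR REPLACE INTO customers VALUES (?, ?)", rows)
    con.commit()
    con.close()

load(transform(extract("customers.csv")))
```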

In the world of Big Data, this approach is very inadequate. Why?

  • Data unification at scale is a very big deal. The schema-first approach works fine for retail data (sales transactions, not many data sources), but gets extremely hard when the sources number in the hundreds or even thousands. It gets worse still when you want to unify public data from the web with enterprise data.
  • The human labor needed to map each source to a master schema becomes costly and excessive. Here machine learning is required, with domain experts brought in to augment it where needed.
  • Real-time unification and analysis of streaming data cannot be handled by these solutions.

Another solution, the “data lake”, where you store disparate data in its native format, seems to address only the “ingest” problem. It changes the order of ETL to ELT (load first, then transform). However, it does not address the scale issues. The new world needs bottom-up, schema-last data unification in real time or near real time.

The typical data unification cycle goes like this: start with a few sources, try enriching the data with, say, X, see if it works, and if it fails, loop back and try again. Use enrichment to improve the result, do as much as possible automatically using machine learning and statistics, and iterate furiously, asking domain experts for help when needed. Otherwise the current ETL or ELT approach can get very expensive.
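Here is a small sketch of what such machine-assisted matching with a human in the loop could look like (the similarity measure, thresholds, and records are purely illustrative):

```python
# Sketch of machine-assisted record matching for schema-last unification:
# score candidate pairs automatically and flag borderline ones for a
# domain expert. Thresholds and records are illustrative.
from difflib import SequenceMatcher

def similarity(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

records_a = ["International Business Machines", "Oracle Corp", "Talend SA"]
records_b = ["IBM Corporation", "Oracle Corporation", "Talend"]

for a in records_a:
    best = max(records_b, key=lambda b: similarity(a, b))
    score = similarity(a, best)
    if score >= 0.8:
        print(f"AUTO-MERGE  {a!r} <-> {best!r}  ({score:.2f})")
    elif score >= 0.5:
        print(f"ASK EXPERT  {a!r} <-> {best!r}  ({score:.2f})")
    else:
        print(f"NO MATCH    {a!r}  (best candidate scored {score:.2f})")
```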

IBM’s Software Business

IBM has come a long way from my time – 16 years spent during the 1970s, 1980s, and early 1990s. Hardware was king for most of my years there, and software was merely a means to the end of “hardware sales”. Even during the early years of the IBM PC, that mistake (of thinking it was a hardware game) helped create a new software giant called Microsoft. Hence the acronym IBM was jokingly expanded to “I Blame Microsoft”.

Advance two decades and we finally see a big shift of focus from hardware to software. IBM has sold off much of its non-mainframe hardware (x86 servers) and storage business. During the fourth quarter of 2015, IBM’s share of the server market was 14.1%, with impressive year-over-year growth of 8.9%. Contrast this with the growth rates of HPE (-2.1%), Dell (5.3%), and Lenovo (3.7%).

IBM’s software is another story. While it contributed about 28% of total 2015 revenue ($81.7B), it accounted for 60% of profit. If its software were a separate business, it would rank as the fourth largest software company, as shown below:

  1. Microsoft – $93.6B revenue, 30.1% profit
  2. Oracle – $38.2B revenue, 36.8% profit
  3. SAP – $23.2B revenue, 23.4% profit
  4. IBM – $22.9B revenue, 34.6% profit

IBM’s software business is the second most profitable after Oracle’s. The $22.9B revenue can be split into three components:

  • Middleware at $19.5B (includes everything above the operating system, like DB2, CICS, Tivoli, Bluemix, etc.),
  • Operating System at $1.8B,
  • Miscellaneous at $1.6B.

IBM does not break out its cloud software explicitly, so it is hard to compare with AWS, Azure, or GCE.

The only problem is that its software business is not growing; as a matter of fact, it showed a decline last year. Given the rise of cloud services, IBM has to step up its competitive offering in that space. It did acquire SoftLayer a couple of years back at a hefty price, but its cloud infrastructure growth does not match that of AWS (expected to hit $10B in revenue this year).

IBM is a company in transition. Resources are being shifted toward high-growth areas like cloud computing and analytics, and legacy businesses with poor growth prospects are in decline. Still, IBM remains a major force in the software market.

Big Data Predictions for 2016

As every year begins, several experts and analyst firms like to make predictions. Let us try to make some observations in an area much talked about lately – Big Data. So here goes:

  • The Big Data quandary will continue as companies try to understand its value to the business. Just dumping all kinds of data into a data lake (read: Hadoop) is not going to solve anything; there has to be clarity on what business insights are needed. Much as the Data Warehousing era brought additional tools in the ETL space, there is a need for data curation and transformation for practical use, beyond the analytics piece.
  • Demand for BI and Analytics will reach new heights. The next-generation BI and analytics platform should help business tap into the power of their data, whether in the cloud or on-premises. This ‘Networked BI’ capability creates an interwoven data fabric that delivers business-user self-service while eliminating analytical silos, resulting in faster and more trusted decision-making. Real-time or streaming analytics will become crucial, as decisions must be taken as soon as some events occur.
  • Spark will get even hotter. I described IBM’s big endorsement of Spark in a blog post last year. Spark gives us a comprehensive, unified framework for big data processing across data sets that are diverse in nature (text data, graph data, etc.) as well as in source (batch vs. real-time streaming data). Spark enables applications in Hadoop clusters to run up to 100 times faster in memory and 10 times faster even when running on disk. In addition to Map and Reduce operations, it supports SQL queries, streaming data, machine learning, and graph data processing (a small SQL sketch follows this list). This also means in-memory processing will continue to thrive.
  • Analytics & big events will drive demand exponentially. This year’s big events like the US presidential election and the Olympics in Brazil will see the harnessing of big data to provide data-driven insights like never before.
  • Protection of data itself will become paramount. It’s still too easy for hackers to circumvent perimeter defenses, steal valid user credentials, and get access to data records. In 2016, as companies protect themselves from the threat of data loss, new means of data-centric security will become mainstream to consistently control user access and credentials where it matters the most.
  • Shortage of Data Scientists will drive companies to look for Big data cloud services. To circumvent the need to hire more data scientists and Hadoop admins, organizations will rely on fully managed cloud services with built-in operational support, freeing up existing data science teams to focus their time and effort on analysis instead of wrangling complex Hadoop clusters.
  • Finally, the shift to cloud is becoming mainstream, because of the clear ROI. At least the dev-and-test shift is happening quite fast. AWS seems to dominate production configurations, even though big data as a service is still in its infancy. Microsoft Azure, IBM’s cloud service, and Oracle’s new cloud offerings will make this space quite vibrant.
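To complement the streaming example earlier, here is a minimal PySpark batch sketch showing the SQL side of that unified framework (the sample data is made up):

```python
# Minimal PySpark batch sketch: the SQL/DataFrame side of Spark's unified API.
# The sample data is made up.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-sql-demo").getOrCreate()

orders = spark.createDataFrame(
    [("2016-01-02", "books", 35.0), ("2016-01-02", "games", 20.0),
     ("2016-01-03", "books", 15.5)],
    ["order_date", "category", "amount"],
)
orders.createOrReplaceTempView("orders")

# The same engine that ran the streaming job answers ad hoc SQL in memory.
spark.sql("""
    SELECT category, SUM(amount) AS revenue
    FROM orders
    GROUP BY category
    ORDER BY revenue DESC
""").show()
```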