Translytical Database

This is a new term I learnt this week, thanks to the Forrester analyst Mike Gualtieri. Terms like Translytics or Exalytics (Oracle’s phrase) do not roll off the tongue that easily. Mike defined Translytical as a “single unified database that supports transactions and analytics in real time without sacrificing transactional integrity, performance, and scale.”

[Transactions + Analytics = Translytical]

Those of us who saw the early days of Data Warehousing deliberately separated the two worlds, so that analytics workloads would not interfere with transaction performance. Snapshots of operational data were taken into the data warehouse for offline batch analysis and reporting. Mostly that gave a retrospective view of what had already happened. In the current scheme of things, where data is arriving fast and furious from so many sources, there is a need to look at trends in real time and take action. Some insights are perishable and therefore need to be acted on immediately. All data originates fast, but analytics is usually done much later. Perishable insights can have exponentially more value than after-the-fact traditional historical analysis. Here is a classification of analytics:

- Past: Learn (Descriptive Analytics)
- Present: Infer (Predictive Analytics), Detect (Streaming Analytics)
- Future: Act (Prescriptive Analytics)

Streaming analytics (real time) requires a database that can do in-memory streaming, for near-zero latency on complex data and analytical operations. The traditional approach of moving data to the analytics has created many silos, such as the CRM stack, the BI stack, or the Mobile stack. Translytical databases are transactional as well as analytical. Point solutions like Spark data streaming, which does micro-batch processing, are not the answer. Such a unified database must do in-memory processing (use RAM for real time), be multi-modal, and support compression and tiered data as well. Customers are stitching together open source products such as Spark, Kafka, and Cassandra to achieve streaming analytics, but that becomes a non-trivial programming task.
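To make the translytical idea concrete, here is a toy sketch using Python's built-in sqlite3 with an in-memory database (the schema and data are hypothetical, purely for illustration; a real translytical engine such as VoltDB adds scale-out, durability, and streaming on top of this). The point is that the transactional write path and the analytical read path hit the same live data, with no snapshot or ETL copy into a separate warehouse:

```python
import sqlite3

# One in-memory database serves both sides of the workload.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT, amount REAL)"
)

# Transactional side: an ACID insert of new orders as they arrive.
with conn:  # commits on success, rolls back on error
    conn.executemany(
        "INSERT INTO orders (region, amount) VALUES (?, ?)",
        [("east", 120.0), ("west", 75.5), ("east", 30.0)],
    )

# Analytical side: an aggregate over the very same rows, immediately,
# with no batch load into a separate analytics silo.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('east', 150.0), ('west', 75.5)]
```

A single-node toy like this obviously sidesteps the hard parts (shared-nothing scale-out, fault tolerance, streaming ingest), which is exactly what the translytical vendors claim to solve.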

The only database claiming to be Translytical is VoltDB, with functions such as in-memory processing, scale-out with a shared-nothing architecture, ACID compliance for transactional integrity, and reliability and fault tolerance. It also has real-time analytics built in, combined with integration with the Hadoop ecosystem. Such a unified database has to prove its worth in the market.

So we have come full circle: from a single database, to separate databases to handle transactions and analytics, and now back to a single database doing both.

It makes logical sense, but let us watch and see if that works.

