API-driven Economy?

I just went to a couple of sessions at API World, going on at the San Jose Convention Center. I heard all kinds of new terms thrown around within a span of a couple of hours – the new API-driven economy, iSaaS (integration software as a service), iPaaS (integration platform as a service), APIM (API management), BaaS (backend as a service), etc. Then there was a confusing and overlapping mixture of ideas around microservices, containers, connectors, and APIs, all in the space of system integration. There were lots of young software developers at this conference, and booths from companies I had never heard of – Jitterbit (enterprise iPaaS), Back4App (backend development via Parse Server), PubNub (real-time APIs), Rigor (API monitoring and testing). I took a deep breath and thought of all the ideas of the last three decades – APIs, subroutines, reusable web services, service-oriented architecture, integration via connectors, assembly of interchangeable parts from common libraries, and so on. Welcome back to the future!

I see the urgency of this now that we have so many products and platforms in every category. A speaker from Jitterbit showed how Cisco's marketing software stack spans 39 different technologies – Salesforce, 6Sense, Eloqua, App Annie, Live Agent, etc. – performing functions like campaign management, CRM, email blasts, and mobile notifications. This is definitely not an ideal situation. Jitterbit wants to be the mediator, using APIs to consolidate all of these around activities and workflows. No wonder this Alameda-based startup is doing very well; I was not surprised to learn that Salesforce and the private equity firm KKR are investors in Jitterbit.

Gartner predicts the enterprise application integration market will reach $33.5B by 2020 (a CAGR of 7.1% from $25.5B in 2016), while integration platform as a service (iPaaS) will reach $3.6B by 2021 (a CAGR of 41.5% from $526M in 2016). The data integration market is expected to grow from $6.4B in 2017 to $12.2B in 2022 (a CAGR of 13.7%). Gartner says, "IT leaders should combine on-premise integration platform, iPaaS, iSaaS and API Management capabilities into a single, yet modular enterprise capability." Gartner calls this whole space Application Integration Platforms.

I think it's time we consolidated all these terms and brought some real clarity. The current marketing hype around an API-driven economy does not help much. What used to be a programmer's term (API – application programming interface) is now marketed as a broad notion that will somehow solve world hunger.

The goal has not changed – we want integration of heterogeneous systems (both inside and outside the enterprise) to be highly efficient, transparent, and less labor-intensive.

iPhone’s tenth anniversary – iPhone X

Yesterday (September 12, 2017), Apple celebrated the tenth anniversary of its original iPhone, launched by Steve Jobs back in 2007 at the Moscone Center in San Francisco. It was a big day: Apple opened its brand-new Steve Jobs Theater at the new Apple campus, and the show began, in front of 1,000 invitees, with a video of Steve Jobs from the first iPhone event, thus inaugurating the theater he himself had envisioned. His wife Laurene and co-founder Steve Wozniak were present. It was a big moment.

Besides introducing incremental upgrades to the Apple Watch and Apple TV (4K support), Apple introduced two versions of the iPhone 8, basically very similar to the iPhone 7. The brand-new thing was the iPhone X ("ten", not the letter X), a very different design. The screen is bigger (5.8″) and uses OLED technology for the first time; ironically, the OLED panel is made by Samsung. The iPhone X is only slightly bigger than the iPhone 7, but its screen is larger than that of the jumbo-size iPhone 7 Plus.

Here are the highlights of iPhone X:

  • A gorgeous screen and beautiful design.
  • Great cameras, wireless charging, better battery life, and water resistance.
  • No home button; the side button is multi-tasked to perform a few functions.
  • The best mobile operating system.
  • All on a device that you’ll end up using several hours a day.

Facial recognition is the most prominent new feature. Called Face ID, it will be the primary tool to unlock the nearly $1,000 iPhone X, which is scheduled to start shipping November 3. A camera system with depth sensors projects 30,000 infrared dots across a user's face, from which the phone computes a mathematical model that is stored securely on the device. Each time users hold the device up to their faces, the technology verifies the new capture against that model before unlocking the phone in an instant. Considering that iPhone users unlock their devices 80 times a day on average, the success of Face ID could make or break the device, analysts say, especially once early users get their hands on it and begin sharing their experiences publicly. This is a crucial function that must be flawless; yesterday the demo failed, and that's not very auspicious.
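
Conceptually, this kind of verification boils down to comparing a freshly captured face representation against the enrolled template and unlocking only on a close match. Here is a toy sketch in Python of that idea – purely illustrative, not Apple's actual algorithm; the 128-dimension embedding, the cosine-similarity metric, and the threshold are all assumptions for the example:

    import numpy as np

    SIMILARITY_THRESHOLD = 0.95  # illustrative value, not Apple's

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        """Cosine similarity between two embedding vectors."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def unlock(enrolled: np.ndarray, captured: np.ndarray) -> bool:
        """Unlock only if the new capture closely matches the stored template."""
        return cosine_similarity(enrolled, captured) >= SIMILARITY_THRESHOLD

    # Enroll a template, then verify a slightly noisy capture of the same face.
    rng = np.random.default_rng(0)
    template = rng.normal(size=128)                         # stands in for the stored model
    capture = template + rng.normal(scale=0.05, size=128)   # same face, slight sensor noise
    print(unlock(template, capture))                        # True for a close match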

If it catches on, the facial-scanning technology in the iPhone X could unlock other changes in how we use smartphones. As one small example, Apple is also using the system to capture facial expressions and animate images of chickens, unicorns, and other common emojis. These "animojis", as Apple calls them, can be captured and shared with friends.

iOS remains the best smartphone operating system and the iPhone’s biggest advantage over its competition. Apple’s operating system is the only smartphone platform that comes with consistent, guaranteed updates. And it’s the only one that routinely brings cutting-edge features, like augmented reality, to older phones.

Jony Ive's design elegance is clearly visible in the iPhone X, as it is in the round glass auditorium lobby of the Steve Jobs Theater.

Splice Machine – What is it?

Those of you who have never heard of Splice Machine, don't worry – you are in good company. So I decided to listen to a webinar last week whose announcement promised the following: learn about the benefits of a modern IoT application platform that can capture, process, store, analyze, and act on the large streams of data generated by IoT devices. The demonstration would include:

  • High Performance Data Ingestion
  • Analytics and Transformation on Data-In-Motion
  • Relational DBMS, Supporting Hybrid OLTP and OLAP Processing
  • In-Memory and Non-Volatile, Row-based and Columnar Storage mechanisms
  • Machine Learning to support decision making and problem resolution

That was a tall order. Gartner has a new term for it, HTAP (Hybrid Transactional and Analytical Processing), while Forrester uses "translytical" to describe a platform that can handle both OLTP and OLAP. I had written a blog post on translytical databases almost two years back. So I did attend the webinar, and it was quite impressive. The only confusing part was the liberal use of "IoT" in the marketing slogan; by that, they want to emphasize streaming data (ingest, store, manage).

On Splice Machine's website, you see four things: Hybrid RDBMS, ANSI SQL, ACID Transactions, and Real-Time Analytics. A white paper advertisement says, "Your IoT applications deserve a better data platform". Looking at the advisory board, I recognized three names – Roger Bamford, ex-Oracle and an investor; Ken Rudin, ex-Oracle; and Marie-Anne Neimat, ex-TimesTen. The company is funded by Mohr Davidow Ventures and InterWest Partners, among others.

There is a real need to bring the worlds of OLTP (transactional workloads) and analytics (OLAP workloads) together on a common platform. They have been separate for decades, and that separation is how the data warehouse, MDM, OLAP cubes, and the rest got started. The movement of data between the OLTP and OLAP worlds has been handled by ETL vendors such as Informatica. With the popularity of Hadoop, the DW/analytics world became crowded with terms like data lake, ELT (first load, then transform), data curation, and data unification. A new architecture called Lambda (not to be confused with AWS Lambda for serverless computing) claims to unify the two worlds of OLTP and real-time streaming analytics.

Into this world comes Splice Machine with its scale-out data platform. You can do standard ACID-compliant OLTP processing, ingest data via Spark Streaming and Kafka topics, process queries via ANSI SQL, and run your analytical workloads without ETL. They even claim support for procedural languages such as PL/SQL for Oracle data. With their machine learning support, they demonstrated predictive analytics. The current focus is on verticals like healthcare, telco, retail, and finance (Wells Fargo).
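
To make the ingest-then-query flow concrete, here is a minimal Python sketch under stated assumptions: it uses the kafka-python client for ingestion and a generic JDBC bridge (jaydebeapi) for SQL. The topic, table, JDBC URL, driver class, and jar path are hypothetical placeholders, not verbatim Splice Machine settings:

    import json
    import jaydebeapi                   # generic JDBC bridge for Python
    from kafka import KafkaProducer     # kafka-python client

    # 1. Ingest: push sensor readings onto a Kafka topic (hypothetical name);
    #    a Spark Streaming job on the platform would consume and land them.
    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )
    producer.send("sensor-readings", {"device_id": 42, "temp_c": 71.3})
    producer.flush()

    # 2. Query: run analytics over the same data with plain ANSI SQL via JDBC.
    #    Driver class, URL, credentials, and jar path are illustrative only.
    conn = jaydebeapi.connect(
        "com.splicemachine.db.jdbc.ClientDriver",
        "jdbc:splice://localhost:1527/splicedb",
        ["user", "password"],
        "/path/to/splice-jdbc-driver.jar",
    )
    cursor = conn.cursor()
    cursor.execute("SELECT device_id, AVG(temp_c) FROM sensor_readings GROUP BY device_id")
    print(cursor.fetchall())
    conn.close()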

In the cacophony of big data and IoT noise, it is hard to separate fact from fiction. But I do see a role for a "unified" approach like Splice Machine's. Again, the proof is always in the pudding – real-life customer deployments with performance numbers will have to validate the hypothesis and the company's claim of 10x faster speed at one-fourth the cost.

Apache Drill + Arrow = Dremio

A new company called Dremio emerged from stealth mode yesterday, backed by Redpoint and Lightspeed via a Series A funding of $10M back in 2015. The founders came from MapR but were active in Apache projects like Drill and Arrow. The same VCs backed MapR and had the Dremio founders work out of their facilities during the stealth phase. The company now has around 50 people in its Mountain View, California office.

Apache Drill acts as a single SQL engine that can query and join data from several other systems, and it can certainly make use of an in-memory columnar data standard. But while Dremio was still in stealth, it wasn't immediately obvious what Drill's intersection with Arrow might be. Yesterday the company launched a namesake product that likewise acts as a single SQL engine querying and joining data across several systems, and it accelerates those queries using Apache Arrow. So it is a combination of Drill plus Arrow: schema-free SQL over a variety of data sources, plus a columnar in-memory analytics execution engine.
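
Arrow itself is just a language-independent columnar memory format. A tiny pyarrow sketch (with made-up data) shows what "columnar in-memory" means in practice:

    import pyarrow as pa

    # Each column lives in a contiguous, typed memory buffer – the layout
    # that makes vectorized, SIMD-friendly query execution possible.
    table = pa.Table.from_pydict({
        "device_id": pa.array([1, 2, 3], type=pa.int32()),
        "temp_c": pa.array([70.1, 68.4, 71.9], type=pa.float64()),
    })
    print(table.schema)            # device_id: int32, temp_c: double
    print(table.column("temp_c"))  # one contiguous column, not row-by-row records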

Dremio believes that BI today involves too many layers. Source systems feed, via ETL processes, into data warehouses, which may then feed into OLAP cubes; BI tools themselves may add yet another layer, building their own in-memory models to accelerate query performance. Dremio thinks that's a huge mess and disintermediates things by providing a direct bridge between BI tools and the source systems they query. The BI tools connect to Dremio as if it were a primary data source and query it via SQL; Dremio then delegates the query work to the true back-end systems through push-down queries that it issues. Dremio can connect to relational databases (DB2, Oracle, SQL Server, MySQL, PostgreSQL), NoSQL stores (MongoDB, HBase, MapR-FS), Amazon Redshift, Hadoop, cloud blob stores like S3, and Elasticsearch.
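
In practice, a script or BI tool treats Dremio like any ordinary database. Here is a minimal sketch in Python, assuming Dremio's ODBC driver is installed and a DSN named "Dremio" is configured; the DSN, credentials, source names, and tables are all hypothetical:

    import pyodbc

    # Connect to Dremio as if it were a primary data source (DSN is hypothetical).
    conn = pyodbc.connect("DSN=Dremio;UID=user;PWD=password", autocommit=True)
    cursor = conn.cursor()

    # One SQL statement joins two different back-end systems; Dremio pushes the
    # work down to each source and merges the results in memory via Arrow.
    cursor.execute("""
        SELECT c.name, SUM(o.total) AS lifetime_value
        FROM postgres.public.customers AS c
        JOIN mongo.sales.orders AS o ON o.customer_id = c.id
        GROUP BY c.name
        ORDER BY lifetime_value DESC
    """)
    for row in cursor.fetchall():
        print(row)
    conn.close()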

Here's how it works: all data pulled from the back-end data sources is represented in memory using Arrow. Combined with vectorized (in-CPU parallel) query processing, this design can yield up to a 5x performance improvement over conventional systems, the company claims. A perhaps even more important optimization is Dremio's use of what it calls "Reflections" – materialized data structures that optimize Dremio's row and aggregation operations. Reflections are sorted, partitioned, and indexed, stored as Parquet files on disk, and handled in memory as Arrow-formatted columnar data. This sounds similar to ROLAP aggregation tables.

Andrew Brust from ZDNet said, “While Dremio’s approach to this is novel, and may break a performance barrier that heretofore has not been well-addressed, the company is nonetheless entering a very crowded space. The product will need to work on a fairly plug-and-play basis and live up to its performance promises, not to mention build a real community and ecosystem. These are areas where Apache Drill has had only limited success. Dremio will have to have a bigger hammer, not just an Arrow”.

Serverless, FaaS, AWS Lambda, etc.

If you are part of the cloud development community, you certainly know about "serverless computing" – almost a misnomer, because it implies there are no servers, which is untrue. The servers are simply hidden from the developers. This model eliminates operational complexity and increases developer productivity.

We came from monolithic computing to client-server, then to services, microservices, and now the serverless model. In other words, our systems have slowly "dissolved" from monoliths into individual functions. Software is developed and deployed as individual functions – each a first-class object that the cloud runs for you. These functions are triggered by events according to certain rules, are written in a fixed set of languages, and follow a fixed programming model with cloud-specific syntax and semantics. Cloud-specific services can be invoked to perform complex tasks. So for cloud-native applications, serverless offers a new option. But the key question is what you should use it for, and why.

Amazon's AWS, as usual, spearheaded this in 2014 with an engine called AWS Lambda, which supports Node, Python, C#, and Java and uses API triggers from many AWS services. IBM offers OpenWhisk as a serverless solution supporting Python, Java, Swift, Node, and Docker, with service triggers provided by IBM and third parties; the code engine is Apache OpenWhisk. Microsoft provides similar functionality with Azure Functions. Google Cloud Functions supports Node only and has many other limitations.
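
A minimal AWS Lambda function in Python looks like the sketch below. The handler signature is Lambda's standard entry point; the event shape here assumes an S3 "object created" trigger, and the processing logic is just a placeholder:

    import json
    import urllib.parse

    def lambda_handler(event, context):
        """Entry point that AWS Lambda invokes; 'event' carries the trigger payload."""
        # Assumes an S3 object-created notification as the trigger.
        record = event["Records"][0]
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # Business logic goes here; the cloud handles provisioning and scale.
        print("New object uploaded: s3://{}/{}".format(bucket, key))

        return {"statusCode": 200, "body": json.dumps({"processed": key})}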

This model of computing is also called "event-driven" or FaaS (Function as a Service). There is no need to manage the provisioning and utilization of resources, nor to worry about availability and fault tolerance; it relieves the developer (or DevOps) from managing scale and operations. Hence the key marketing slogans: event-driven, continuous scaling, and pay-per-use. This is a new form of abstraction that boils down to the function as the granular unit.

At the micro level, serverless seems pretty simple – just develop a function and deploy it to the cloud. However, there are several implications. It imposes a lot of constraints on developers and brings a load of new complexities, plus cloud lock-in: you have to pick one cloud provider and stay there, since switching is not easy. Areas to ponder include cost, complexity, testing, emergent structure, and vendor dependence.

Serverless has been getting a lot of attention over the last couple of years. We will wait and see the lessons learned as more developers deploy it in real-world web applications.

Data Sharehouse?

This is yet another new term in our lexicon. The San Mateo, California-based startup Snowflake announced this week a new offering with this name, a free add-on to the data warehouse it built for cloud computing. Companies using Snowflake's technology can now use the offering, officially called Snowflake Data Sharing, to share any part of their data warehouses with each other, subject to defined security policies and access controls.

Snowflake's data sharehouse allows companies to provide direct access to structured and unstructured data without copying it to a new location. Current approaches include file sharing, electronic data interchange (EDI), application programming interfaces, and email, but all of them have issues, ranging from weak security to cumbersome methods of granting data access to the right people. Jon Bock, Snowflake's marketing chief, compared data sharing on Snowflake versus other methods to the difference between streaming music and compact discs. "It looks [to the data recipient] just as if the data resides on their own data warehouse," he said.
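
Mechanically, a provider would set up a share with a handful of SQL statements. Here is a hedged sketch using the snowflake-connector-python library; the database, schema, table, share, and account names are all hypothetical, and the statements reflect my reading of the new feature rather than verified syntax:

    import snowflake.connector

    # Provider side: create a share and expose one table to a consumer account.
    conn = snowflake.connector.connect(
        user="provider_user", password="...", account="provider_acct"
    )
    cur = conn.cursor()
    for stmt in (
        "CREATE SHARE sales_share",
        "GRANT USAGE ON DATABASE sales_db TO SHARE sales_share",
        "GRANT USAGE ON SCHEMA sales_db.public TO SHARE sales_share",
        "GRANT SELECT ON TABLE sales_db.public.orders TO SHARE sales_share",
        "ALTER SHARE sales_share ADD ACCOUNTS = consumer_acct",
    ):
        cur.execute(stmt)
    conn.close()
    # The consumer account can now query the shared table live – no copy of
    # the data is ever made or moved.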

The catch is that every participant must be a Snowflake customer using its cloud data warehouse, so this is also a way for Snowflake to grow its market. We saw this approach in the 1990s, when exchanges for B2B data interchange were introduced by the likes of Oracle. That did not go very far: cost was a big factor, of course, but agreeing on common formats and security policies for data exchange was another issue. Snowflake claims to solve this by having one source of truth in the cloud.

Of course, companies such as manufacturers and suppliers, or advertisers and publishers, have been sharing data for a long time, but it has been cumbersome, relying on technologies like EDI (electronic data interchange, developed in the 1940s), email, file sharing, APIs, and more. That kind of sharing takes time and wasn't built for the current situation, in which businesses need live data processed in real time to keep a competitive edge.

According to Bob Muglia, Snowflake's CEO (ex-Microsoft), the data sharehouse changes the game and democratizes the possibilities, because anyone can access the service. Rather than being charged a subscription fee, users pay only according to the amount of data they process: the data sharing service is free to data providers, while data consumers pay for the compute resources they use. Not only that, data providers and consumers make their arrangements independently of Snowflake Computing, which acts purely as the infrastructure provider.

In an increasingly collaborative world, there is little doubt that sharing data easily and in real time, without sacrificing security, privacy, governance, and compliance, is of great value. Whether it will create entirely new markets remains to be seen, but actionable data-driven insights are likely to be huge differentiators in the digital economy.

It is a clever move, but time will tell if this will enable smooth data exchange or create more chaos.

Amazon+Whole Foods – How to read this?

Last Friday (June 16, 2017), Amazon announced it would acquire Whole Foods for a whopping $13.7B ($42 per share, a 27% premium to its closing price). The same day, stock prices of Walmart, Target, and Costco took a hit, while Amazon shares went up by more than 2%. So why did Amazon buy Whole Foods? Clearly, Amazon sees groceries as an important long-term driver of growth in its retail segment. What is funny is that a web pioneer with no physical retail outlets has decided to move into the brick-and-mortar model; Amazon has also opened physical bookstores in a few cities. We have come full circle.

Amazon's grocery business has so far focused on the Amazon Fresh subscription service for delivering online food orders. Amazon will eventually use the stores to promote private-label products, integrate and grow its AI-powered Echo speakers, boost Prime membership, and entice more customers into the fold. Hence this acquisition is the start of a long-term strategy. Amazon is known for its non-linear thinking: just look at how it started a brand-new business with AWS about 12 years back, which is now a $14B business with a 50%+ margin. It commands a powerful leadership position in cloud computing, and competitors like Microsoft Azure and Google's GCE are trying hard to catch up.

The interesting thing to ponder is how the top tech companies are spreading their tentacles; this was a front-page article in today's WSJ. Apple, a computer company that became a phone company, is now working on self-driving cars, TV programming, and augmented reality, and it is pushing into payments territory, challenging the banks. Google's parent Alphabet built Android, which now runs most mobile devices; it ate the maps industry, and it is working on internet-beaming balloons, energy-harvesting kites, and self-driving technologies. Facebook is creating drones, VR hardware, original TV shows, and even telepathic brain computers. Of course, Elon Musk brings his tech notions to any market he pleases – finance, autos, energy, and aerospace.

What is special about Amazon is that it is willing to work on everyday problems. According to the author of the WSJ article, this may be the smarter move in the long run: while Google and Facebook have yet to drive significant revenue outside their core businesses, Amazon has managed to create business after business that is profitable, or at least not a drag on the bottom line. The article ends with a cautionary note: "Imagine a future in which Amazon, which already employs north of 340,000 people worldwide, is America's biggest employer. Imagine we are all spending money at what's essentially the company store, and when we get home we're streaming Amazon's media…."

With a few tech giants controlling so many businesses, are we comfortable getting all our goods and services from the members of an oligopoly?