If you are part of the cloud development community, you certainly know about “serverless computing”, which is almost a misnomer: it implies there are no servers, which is untrue. Rather, the servers are hidden from the developers. This model eliminates much operational complexity and increases developer productivity.
We came from monolithic computing to client-server to services to microservices to the serverless model. In other words, our systems have slowly “dissolved” from monoliths into individual functions. Software is developed and deployed function by function: each function is a first-class object, and the cloud runs it for you. These functions are triggered by events that follow certain rules. Functions are written in a fixed set of languages, with a fixed programming model and cloud-specific syntax and semantics. Cloud-specific services can be invoked to perform complex tasks. So for cloud-native applications, serverless offers a new option. But the key question is what you should use it for, and why.
Amazon’s AWS, as usual, spearheaded this in 2014 with an engine called AWS Lambda. It supports Node, Python, C#, and Java, and uses API triggers for many AWS services. IBM offers OpenWhisk as a serverless solution that supports Python, Java, Swift, Node, and Docker; IBM and third parties provide service triggers, and the code engine is Apache OpenWhisk. Microsoft provides similar functionality with Azure Functions. Google Cloud Functions supports only Node and has many other limitations.
This model of computing is also called “event-driven” or FaaS (Function as a Service). There is no need to manage provisioning and utilization of resources, nor to worry about availability and fault-tolerance. It relieves the developer (or devops) from managing scale and operations. Therefore, the key marketing slogans are event-driven, continuous scaling, and pay by usage. This is a new form of abstraction that boils down to function as the granular unit.
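As a concrete sketch, a serverless function is often just a plain handler that the platform invokes once per event. The snippet below follows the AWS Lambda Python handler convention (`handler(event, context)`); the event payload and response shape are illustrative assumptions modeled on an API Gateway trigger, not a definitive implementation:

```python
import json

def lambda_handler(event, context):
    # 'event' carries the trigger payload (here assumed to be a simple
    # dict, as from an API Gateway request); 'context' holds runtime info.
    name = event.get("name", "world")
    # Return an HTTP-style response, the usual shape for API triggers.
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Locally, the handler can be exercised by calling it directly, e.g. `lambda_handler({"name": "dev"}, None)`; in production the platform supplies `event` and `context` and handles all scaling.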
At the micro level, serverless seems pretty simple: just develop a procedure and deploy it to the cloud. However, there are several implications. It imposes a lot of constraints on developers and brings a load of new complexities, plus cloud lock-in. You have to pick one of the cloud providers and stay there; switching is not easy. Areas to ponder include cost, complexity, testing, emergent structure, and vendor dependence.
Serverless has been getting a lot of attention in the last couple of years. We will wait and see the lessons learnt as more developers start deploying it in real-world web applications.
I happen to be in Poland and just heard of Andy Grove’s passing. If you go south from here and cross Slovakia, you will reach Hungary. That is where Andy Grove grew up, in a Jewish family. Then came the Nazi occupation in the 1940s, when Jews were being exterminated. Andy had to flee the country to Austria in very difficult circumstances. Then he got onto a boat and sailed for the unknown land of America. He reached New York with $20 in his pocket and no idea what he was going to do. He enrolled at the City College of New York and took up chemistry. He taught himself English and was soon at the top of his class. A brilliant student, he continued his journey to the West Coast, to UC Berkeley, where he got a Ph.D. in chemical engineering.
Andy started at Fairchild but left to join Bob Noyce and Gordon Moore in starting a new company called Intel. He basically built the company from scratch. From memory chips, Intel took a leap into microprocessors, considered a huge risk in those days. But Andy was at the helm as CEO of a hugely successful Intel, a company that almost defined the Silicon Valley of today; many would agree that Andy created Silicon Valley. He was Time’s Man of the Year in 1997. His straightforwardness and no-nonsense style of management are legendary. His famous book, “Only the Paranoid Survive,” made big waves when it came out: you have to run scared to win in your business. Intel defined the computing era of the 1980s and 1990s.
As a mentor, Andy taught many leaders of the valley: Larry Ellison, Steve Jobs, John Doerr, Mark Zuckerberg, and more. During my days at Oracle, I had the opportunity to listen to Andy a couple of times. He led by example and was completely full of substance. People who worked for him adored him. With all that success, he had the humility of a very egoless person. He contributed significantly to many charities through his foundation. Oh, there is so much we can learn from Andy Grove.
As we mourn his death, let us remember his great leadership qualities combined with utter humility.
Back in 2008, three young men applied to Y Combinator (YC), a school for startups, looking for help with their tiny firm, AirBed and Breakfast, a website that helped people rent out inflatable mattresses in their living rooms during conferences. YC helped them refine their idea and meet early investors. Today, Airbnb, as their firm is now known, lists whole apartments in 190 countries and is valued at $25.5 billion.
Since 2005, YC has taken on batches of promising founders, and this month it celebrated the funding of its 1,000th startup. About half have failed, but the successes are quite remarkable. Eight of these YC firms have become what Silicon Valley calls “unicorns,” valued at $1 billion or more. Examples:
- Airbnb (joined in 2009),
- Dropbox (valued at $10B, joined 2007),
- Stripe (valued at $5B, joined 2010),
- Zenefits (valued at $4.5B, joined 2013),
- Instacart (valued at $2B, joined 2012),
- Docker (valued at $1.1B, joined 2010).
Combined, the companies YC has invested in are worth around $65 billion, although YC’s share is only a small fraction, maybe $1-2B.
So what is YC’s magic? YC melds the best of an investment firm and a university, providing coaching on how to refine the product and create a viable business plan. It puts in a small investment (typically $120,000 in return for a 7% stake) and has produced a group of successful alumni. It has helped popularize the idea that startups are a viable career. Thousands of aspiring entrepreneurs apply to attend the three-month YC program, mostly for the networking potential with investors and other attendees. For its spring 2015 class, YC received more than 6,700 applications and accepted around 1.6% of them, a lower acceptance rate than Harvard University’s.
YC has a small campus in Mountain View, close to the Googleplex. At the end of the three months, entrepreneurs deliver a presentation about their business to a group of Silicon Valley’s top investors. This event is called Demo Day, where new ideas are demonstrated to investors.
There are many “accelerators” like YC around the world (they have replicated YC’s investment and training philosophy), but none has achieved YC’s brand or its record.
The Sunday New York Times published this article on IBM’s new way of thinking that is worth reading.
The article states – The company is well on its way to hiring more than 1,000 professional designers, and much of its management work force is being trained in design thinking. “I’ve never seen any company implement it on the scale of IBM,” said William Burnett, executive director of the design program at Stanford University. “To try to change a culture in a company that size is a daunting task.”
If you ask people inside IBM for a design-thinking success story, they are likely to mention Bluemix, a software tool kit for making cloud applications. In just one year, Bluemix went from an idea to a software platform that has attracted many developers, who are making apps used in industries as varied as consumer banking and wine retailing. In the past, building that kind of technology ecosystem would have taken years.
Software developers are just as important as customers to IBM, since both groups create markets. “We wanted to redefine IBM for developers,” said Damion Heredia, an IBM vice president who leads the Bluemix operation. When a free test version of Bluemix was offered in February 2014, Mr. Heredia figured it might attract 2,000 developers in the first few months. It reached that number within a week, and a commercial version was introduced that July. Today, Bluemix is signing up 10,000 new users a week.
The new mantra at IBM is speed, speed, and speed. The design-oriented approach promises to yield the agility IBM needs to reinvent itself. At 104 years old, IBM has changed course and reinvented itself many times. Now the focus seems to be cloud computing, analytics, and big data. The emphasis has shifted away from hardware and toward software and services. IBM is also succeeding in recruiting young graduates from top schools like Stanford, where these kids often think Google is an old company (and IBM a historic relic). Design leadership from people like Phil Gilbert is bringing fundamental changes in this metamorphosis.
It’s worth watching how fast IBM switches its culture!
Yesterday, Dell announced the largest technology M&A deal in history: a proposed $67B buyout of EMC and VMware (via EMC’s roughly 80% ownership of VMware). The combined company will have over $80B in revenue, employ tens of thousands of people around the world, and sell everything from PCs, servers, and storage to security and virtualization software. Not to be overlooked is the fact that Dell and EMC will be private companies, free from the scrutiny of activist investors.
Dell has to borrow a ton of money to make this deal, around $40B of debt; the annual interest payment alone will be $2.5B! The deal has three backers: Michael Dell’s own investment, Silver Lake Partners, and Singapore-based Temasek. On paper, the two companies bring complementary value: Dell sells to small and medium-sized companies, while EMC addresses larger enterprise needs. The big attraction for Dell is VMware, which revolutionized the virtualization market. Currently, VMware accounts for 25% of EMC’s revenue but 50% of its valuation.
The concern is that as more corporations adopt cloud storage and cloud computing for their IT needs, there is less reason to spend money on the costly software and hardware upgrades typically offered by established IT companies like EMC. But by consolidating, they can better compete against the lower-cost cloud service companies – AWS (Amazon Web Services), IBM, Alphabet (Google), and Microsoft Azure.
This is going to be a big gamble. The HP CEO circulated an internal memo suggesting how this will be a great opportunity for HP, as the combined company will create a lot of chaos and confusion. At the same time, being private, the new entity can execute a radical restructuring. But it will be a herculean task to make the combined company a winner in the highly competitive “IT infrastructure” market.
Last June IBM made a serious commitment to the future of Apache Spark with a series of initiatives:
- It will offer Apache Spark as a service on Bluemix. (Bluemix is an implementation of IBM’s Open Cloud Architecture based on Cloud Foundry, an open-source Platform as a Service (PaaS); it delivers enterprise-level services that can easily integrate with your cloud applications without you needing to know how to install or configure them.)
- It committed 3,500 researchers to work on Spark-related projects.
- It will donate IBM SystemML (its machine-learning language and libraries) to the Apache Spark open-source ecosystem.
The question is why this move by IBM?
First, let us look at what Apache Spark is. Developed at UC Berkeley’s AMPLab, Spark gives us a comprehensive, unified framework to manage big-data processing requirements across data sets that are diverse in nature (text data, graph data, etc.) as well as in source (batch versus real-time streaming data). Spark enables applications in Hadoop clusters to run up to 100 times faster in memory, and 10 times faster even when running on disk. In addition to Map and Reduce operations, it supports SQL queries, streaming data, machine learning, and graph data processing. Developers can use these capabilities stand-alone or combine them in a single data-pipeline use case. In other words, Spark is the next generation of Hadoop (which came with a batch pedigree and high latency).
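To make the programming model concrete, here is a plain-Python sketch (not the actual Spark API) of the map/reduce-style word-count pipeline that Spark generalizes. The `reduce_by_key` helper is a hypothetical stand-in for Spark’s `reduceByKey`; Spark would run the same shape of computation in parallel across a cluster, largely in memory:

```python
from collections import defaultdict

def reduce_by_key(func, pairs):
    # Group values by key, then fold each group with 'func'
    # (the shape of Spark's reduceByKey, executed serially here).
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    result = {}
    for key, values in groups.items():
        acc = values[0]
        for v in values[1:]:
            acc = func(acc, v)
        result[key] = acc
    return result

lines = ["spark runs in memory", "spark supports sql and streaming"]
# Map phase: emit one (word, 1) pair per word.
pairs = [(word, 1) for line in lines for word in line.split()]
# Reduce phase: sum the counts per word.
counts = reduce_by_key(lambda a, b: a + b, pairs)
print(counts["spark"])  # 2
```

The point of Spark is that this same pipeline shape, expressed against its RDD or DataFrame APIs, scales transparently from one machine to thousands.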
With other solutions for real-time analytics via in-memory processing on the market, such as RethinkDB, the ambitious Redis project, and the commercial in-memory SAP HANA, IBM needed a competitive offering. Other vendors betting on Spark range from Amazon to Zoomdata. IBM will run its own analytics software on top of Spark, including SystemML for machine learning, SPSS, and IBM Streams.
At this week’s Strata conference, several companies, including Uber, described how they have deployed Spark end-to-end for speedy real-time analytics.
This is a new term I learnt this week, thanks to the Forrester analyst Mike Gualtieri. Terms like Translytics or Exalytics (Oracle’s phrase) do not roll off the tongue that easily. Mike defined Translytical as a “single unified database that supports transactions and analytics in real time without sacrificing transactional integrity, performance, and scale.”
[Transactions + Analytics = Translytical]
Those of us who saw the early days of data warehousing deliberately separated the two worlds so that analytics workloads would not interfere with transaction performance. Snapshots of operational data were taken into a data warehouse for offline batch analysis and reporting; mostly that gave a retro-view of what had happened. In the current scheme of things, where data comes fast and furious from so many sources, there is a need to look at trends in real time and take action. Some insights are perishable and therefore need to be acted on immediately. All data originates fast, but analytics is usually done much later. Perishable insights can have exponentially more value than after-the-fact traditional historical analysis. Here is a classification of analytics:
- Past: Learn (Descriptive Analytics)
- Present: Infer (Predictive Analytics), Detect (Streaming Analytics)
- Future: Action (Prescriptive Analytics)
Streaming (real-time) analytics requires a database that can do in-memory stream processing with near-zero latency for complex data and analytical operations. The traditional approach of moving data to the analytics has created many silos, such as the CRM stack, the BI stack, or the Mobile stack. Translytical databases are transactional as well as analytical. Point solutions like Spark Streaming, which does micro-batch processing, are not the answer. Such a unified database must do in-memory processing (using RAM for real-time work), be multi-modal, and support compression and tiered data as well. Customers are stitching together open-source products such as Spark, Kafka, and Cassandra to achieve streaming analytics, but that becomes a non-trivial programming task.
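As a toy illustration of the translytical idea, the sketch below uses Python’s standard-library `sqlite3` as a stand-in single in-memory database: a transactional write is immediately visible to an analytical aggregate query, with no snapshot or ETL step in between. A real translytical system would add scale-out, fault tolerance, and streaming ingestion on top of this basic property:

```python
import sqlite3

# One in-memory database serves both workloads.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")

# Transactional writes: 'with db' wraps the inserts in a committed transaction.
with db:
    db.execute("INSERT INTO orders (amount) VALUES (?)", (120.0,))
    db.execute("INSERT INTO orders (amount) VALUES (?)", (80.0,))

# Analytical read on the same live data; no copy to a warehouse needed.
total, avg = db.execute("SELECT SUM(amount), AVG(amount) FROM orders").fetchone()
print(total, avg)  # 200.0 100.0
```

Contrast this with the warehouse pattern, where the aggregate would run hours later against a snapshot; the translytical claim is that one engine can do both at scale without the transaction side paying a performance penalty.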
The only database currently claiming to be translytical is VoltDB, with features such as in-memory processing, shared-nothing scale-out, and ACID compliance for transactional integrity, reliability, and fault tolerance. It also has real-time analytics built in, combined with integration with the Hadoop ecosystem. Such a unified database has to prove its worth in the market.
So we have come full circle – from a single database, to separate databases for transactions and analytics, and now back to a single database doing both.
It makes logical sense, but let us watch and see if that works.