Category Archives: Disruptive Technology

Vitalik Buterin & Ethereum

Many of you may not have heard of this 23-year-old Russian-Canadian, Vitalik Buterin. He is one of those geniuses who fell in love with computing and mathematics at an early age. His parents emigrated from Russia to Canada when he was a child. After attending a private high school in Toronto, he joined the University of Waterloo (my alma mater), but dropped out after receiving the $100K Thiel Fellowship to pursue his entrepreneurial work in cryptocurrency.

After failing to persuade the Bitcoin community to adopt a general scripting language, he decided to start a new platform that could handle not just cryptocurrency but any asset, via smart contracts. His seminal white paper in 2013 laid the foundation, proposing a new platform called Ethereum with a general-purpose scripting language. In early 2014, a Swiss company called Ethereum Switzerland GmbH developed the first Ethereum software project. Finally, in July-August of 2014, Ethereum launched a pre-sale of Ether tokens (its own cryptocurrency) to the public and raised about $14M. Ethereum belongs to the same family as the cryptocurrency Bitcoin, whose value has increased more than 1,000 percent in just the past year. Ethereum has its own currency, Ether, but the platform has a wider scope than just money.

You can think of my Ethereum address as having elements of a bank account, an email address and a Social Security number. For now, it exists only on my computer as an inert string of nonsense, but the second I try to perform any kind of transaction — say, contributing to a crowdfunding campaign or voting in an online referendum — that address is broadcast out to an improvised worldwide network of computers that tries to verify the transaction. The results of that verification are then broadcast to the wider network again, where more machines enter into a kind of competition to perform complex mathematical calculations, the winner of which gets to record that transaction in the single, canonical record of every transaction ever made in the history of Ethereum. Because those transactions are registered in a sequence of “blocks” of data, that record is called the blockchain. Many cryptocurrency exchanges now trade Ether alongside Bitcoin.
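To make the idea of hash-linked “blocks” concrete, here is a minimal, illustrative sketch in Python. It is not Ethereum’s actual block format or consensus protocol (there is no proof-of-work, no network, no accounts here); it only shows how each block commits to the hash of the previous one, which is what makes the record tamper-evident.

```python
# Toy hash-linked chain of transaction blocks (illustrative only; not
# Ethereum's real data structures, consensus, or mining).
import hashlib, json, time

def make_block(transactions, prev_hash):
    block = {
        "timestamp": time.time(),
        "transactions": transactions,   # e.g. [{"from": "0xabc...", "to": "0xdef...", "value": 1}]
        "prev_hash": prev_hash,         # commitment to the previous block
    }
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

genesis = make_block([], prev_hash="0" * 64)
block_1 = make_block([{"from": "0xabc...", "to": "0xdef...", "value": 1}], genesis["hash"])
# Tampering with any transaction in 'genesis' would change its hash and break the link.
```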

A New York Times article in January said, “The true believers behind blockchain platforms like Ethereum argue that a network of distributed trust is one of those advances in software architecture that will prove, in the long run, to have historic significance. That promise has helped fuel the huge jump in cryptocurrency valuations. But in a way, the Bitcoin bubble may ultimately turn out to be a distraction from the true significance of the blockchain. The real promise of these new technologies, many of their evangelists believe, lies not in displacing our currencies but in replacing much of what we now think of as the internet, while at the same time returning the online world to a more decentralized and egalitarian system. If you believe the evangelists, the blockchain is the future. But it is also a way of getting back to the internet’s roots”.

Vitalik conceived the idea of Ethereum at age 19. He is a new-age Linus Torvalds, who fathered Linux, the de facto operating system of Internet infrastructure.


IBM’s Neuromorphic Computing Project

The Neuromorphic Computing Project at IBM is a pioneer in next-generation chip technology. The project has received roughly $70 million in research funding from DARPA (under the SyNAPSE program), the US Department of Defense, the US Department of Energy, and commercial customers. This ground-breaking effort is multi-disciplinary, multi-institutional, and multi-national, with a worldwide scientific impact. The resulting architecture, technology, and ecosystem break with the prevailing von Neumann architecture and constitute a foundation for energy-efficient, scalable neuromorphic systems. The project is headed by Dr. Dharmendra Modha, IBM Fellow and chief scientist for brain-inspired computing at IBM.

So why is the von Neumann architecture inadequate for brain-inspired computing? The von Neumann model goes back to 1946 and deals with three things: the CPU, memory, and a bus that connects them. Data constantly moves back and forth between memory and the CPU over that bus. The bus becomes the bottleneck, and it also serializes computation: even to flip a single bit, you have to read that bit from memory and write it back.
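As a toy illustration of that bottleneck (my own sketch, not IBM’s), imagine a memory that counts every trip over the bus; even a one-bit change costs two transfers:

```python
# Toy model of the von Neumann bottleneck: every memory access crosses the bus.
class Memory:
    def __init__(self, size):
        self.cells = [0] * size
        self.bus_transfers = 0       # count every trip across the bus

    def read(self, addr):
        self.bus_transfers += 1
        return self.cells[addr]

    def write(self, addr, value):
        self.bus_transfers += 1
        self.cells[addr] = value

mem = Memory(1024)
word = mem.read(42)                  # fetch the word containing the bit
mem.write(42, word ^ 1)              # flip the low bit and store it back
print(mem.bus_transfers)             # 2 -- even a one-bit flip costs two bus transfers
```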

The new architecture is radically different. The IBM project takes inspiration from the structure, dynamics, and behavior of the brain to see if it can optimize the time, speed, and energy of computation. If you co-locate memory and computation and closely intertwine communication, much as the brain does, you minimize the energy spent moving bits between memory and compute. You get event-driven rather than clock-driven computation, computing only when information changes.
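A rough way to see the difference (again my own sketch, not IBM’s programming model): a clock-driven loop does work on every tick whether or not anything changed, while an event-driven version does work only when an input actually changes.

```python
# Clock-driven vs event-driven computation (conceptual sketch).
inputs = [5, 5, 5, 7, 7, 7, 7, 2, 2, 2]   # mostly unchanging sensor readings

# Clock-driven: compute on every tick, regardless of change.
clock_work = 0
for value in inputs:
    result = value * value                # some computation
    clock_work += 1

# Event-driven: compute only when the input changes.
event_work, last = 0, None
for value in inputs:
    if value != last:                     # an "event": information changed
        result = value * value
        event_work += 1
        last = value

print(clock_work, event_work)             # 10 vs 3 -- work scales with events, not ticks
```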

The von Neumann paradigm is, by definition, a sequence of instructions interspersed with occasional if-then-else statements. Compare that to a neural network, where a neuron can connect to as many as 10,000 neighbors. A neuron on TrueNorth (IBM’s new chip) can reach up to 256; the disparity exists because it is silicon rather than organic technology. Even so, that is very high fan-out, and high fan-out is difficult to implement in a sequential architecture. An AI system IBM developed last year for Lawrence Livermore National Lab had 16 TrueNorth chips tiled in a 4-by-4 array. The chips are designed to be tiled, so scalability is built in as a design principle rather than as an afterthought.

In summary, the design points of the IBM project are as follows:

  • The von Neumann architecture won’t be able to provide the massively parallel, fault-tolerant, power-efficient systems needed to embed intelligence into silicon. Instead, IBM had to rethink processor design.
  • You can’t throw the baby out with the bathwater: even if you rethink the underlying hardware design, you need to provide sufficiently abstracted software libraries to reduce the pain for software developers so that they can program your chip.
  • You can achieve power efficiency by changing the way you build software and hardware to become active only when an event occurs; rather than tying computation to a series of sequential operations, you make it into a massively parallel job that runs only when the underlying system changes.

AI is achieving notable success in areas of perception such as speech and image recognition. In reinforcement learning and deep learning, the human brain remains the primary inspiration. Hence IBM’s neuromorphic chip design becomes a significant foundational technology.

iPhone’s tenth anniversary – iPhone X

Yesterday (September 12, 2017), Apple celebrated the tenth anniversary of its original iPhone, launched by Steve Jobs back in 2007 at the Moscone Center in San Francisco. It was also the day Apple opened its brand-new Steve Jobs Theater at the new Apple campus. The show began in front of 1,000 invitees with a Steve Jobs video from the first iPhone event, thus inaugurating the theater he himself had envisioned. His wife Laurene and co-founder Steve Wozniak were present. It was a big moment.

Besides introducing incremental upgrades to the Apple Watch and Apple TV (4K support), Apple introduced two versions of the iPhone 8, both very similar to the iPhone 7. The brand-new thing was the iPhone X (pronounced “Ten”, not “X”). This was a very different design. The screen is bigger (5.8″) and uses OLED technology for the first time. Ironically, the OLED screen is made by Samsung. The iPhone X is only slightly bigger than the iPhone 7, but its screen is larger than that of the jumbo-size iPhone 7 Plus.

Here are the highlights of iPhone X:

  • A gorgeous screen and beautiful design.
  • Great cameras, wireless charging, better battery life, and water resistance.
  • No home button; the side button takes on several functions.
  • The best mobile operating system.
  • All on a device that you’ll end up using several hours a day.

Facial recognition is the most prominent new feature. Called Face ID, it will be the primary tool to unlock the nearly $1,000 iPhone X, which is scheduled to start shipping Nov. 3. A camera system with depth sensors projects 30,000 infrared dots across a user’s face, and computing systems use them to create a mathematical model that is stored securely on the phone. Each time users hold the device to their faces, the technology verifies the mathematical model before unlocking the phone in an instant. Considering iPhone users on average unlock their devices 80 times a day, the success of Face ID could make or break the device, analysts say, especially after early users get their hands on it and begin sharing their experiences publicly. This is a crucial function that must be flawless. The Face ID demo failed on stage yesterday, which is not very auspicious.
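Apple has not published how Face ID’s matching works internally, but the general pattern of “enroll a stored mathematical model, then compare each new capture against it under a threshold” can be sketched roughly as below. The vectors, similarity measure, and threshold here are entirely hypothetical stand-ins, not Apple’s algorithm.

```python
# Hypothetical sketch of template matching for face unlock (not Apple's Face ID algorithm).
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

THRESHOLD = 0.98                          # assumed acceptance threshold

def unlock(stored_model, new_capture):
    # Both arguments: numeric vectors derived from the depth map of the face.
    return cosine_similarity(stored_model, new_capture) >= THRESHOLD

enrolled = [0.12, 0.87, 0.45, 0.33]       # saved securely on the phone at enrollment
attempt  = [0.11, 0.88, 0.44, 0.35]       # derived from the new infrared dot capture
print(unlock(enrolled, attempt))          # True only if the capture matches closely enough
```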

If it catches on, the facial-scanning technology in iPhone X could unlock other changes in how we use smartphones. In one small example, Apple also is using the system to capture facial expressions and use them to animate images of chickens, unicorns and other common emojis. Those animojis, as Apple calls them, can be captured and shared with friends.

iOS remains the best smartphone operating system and the iPhone’s biggest advantage over its competition. Apple’s operating system is the only smartphone platform that comes with consistent, guaranteed updates. And it’s the only one that routinely brings cutting-edge features, like augmented reality, to older phones.

Jony Ive’s design elegance is clearly visible in the iPhone X, as well as in the round glass auditorium lobby of the Steve Jobs Theater.

Serverless, FaaS, AWS Lambda, etc.

If you are part of the cloud development community, you certainly know about “serverless computing”, which is almost a misnomer: it implies there are no servers, which is untrue. The servers are simply hidden from developers. This model eliminates operational complexity and increases developer productivity.

We came from monolithic computing to client-server, then to services, microservices, and now the serverless model. In other words, our systems have slowly “dissolved” from monoliths into function-by-function deployments. Software is developed and deployed as individual functions, each a first-class object that the cloud runs for you. These functions are triggered by events that follow certain rules. Functions are written in a fixed set of languages, with a fixed programming model and cloud-specific syntax and semantics, and they can invoke cloud-specific services to perform complex tasks. For cloud-native applications, this offers a new option. But the key question is what you should use it for and why.

Amazon’s AWS, as usual, spearheaded this in 2014 with a service called AWS Lambda. It supports Node, Python, C#, and Java, and it uses API triggers from many AWS services. IBM offers OpenWhisk as a serverless solution that supports Python, Java, Swift, Node, and Docker; IBM and third parties provide service triggers, and the code engine is Apache OpenWhisk. Microsoft provides a similar capability with Azure Functions. Google Cloud Functions supports Node only and has many other limitations.
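To make the model concrete, here is what a minimal AWS Lambda function looks like in Python. The handler signature (event, context) is Lambda’s standard convention; the S3-style event payload and the processing done here are illustrative assumptions.

```python
# Minimal AWS Lambda handler (sketch): triggered by an event, e.g. an S3 object upload.
import json

def lambda_handler(event, context):
    # 'event' carries the trigger payload; for S3 notifications it contains 'Records'.
    records = event.get("Records", [])
    keys = [r["s3"]["object"]["key"] for r in records if "s3" in r]

    # Do the per-event work here; the platform handles provisioning and scaling.
    return {
        "statusCode": 200,
        "body": json.dumps({"processed_objects": keys}),
    }
```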

This model of computing is also called “event-driven” or FaaS (Function as a Service). There is no need to manage provisioning and utilization of resources, nor to worry about availability and fault-tolerance. It relieves the developer (or DevOps team) of managing scale and operations. Therefore, the key marketing slogans are event-driven, continuous scaling, and pay-per-use. This is a new form of abstraction that boils down to the function as the granular unit.

At the micro level, serverless seems pretty simple: just develop a procedure and deploy it to the cloud. However, there are several implications. It imposes a lot of constraints on developers and brings a load of new complexities plus cloud lock-in. You have to pick one cloud provider and stay there; switching is not easy. Areas to ponder include cost, complexity, testing, emergent structure, and vendor dependence.

Serverless has been getting a lot of attention in the last couple of years. We will wait and see what lessons are learned as more developers start deploying it in real-world web applications.

Amazon+Whole Foods – How to read this?

Last Thursday (June 15, 2017), Amazon announced it would acquire Whole Foods for a whopping $13.7B ($42 per share, a 27% premium to its closing price). On Friday, the stock prices of Walmart, Target, and Costco took a hit, while Amazon shares went up by more than 2%. So why did Amazon buy Whole Foods? Clearly Amazon sees groceries as an important long-term driver of growth in its retail segment. What is funny is that a web pioneer with no physical retail outlets decided to get back to the brick-and-mortar model. Amazon has also opened physical bookstores in a few cities. We have come full circle.

Amazon’s grocery business has so far focused on the AmazonFresh subscription service to deliver online food orders. Amazon will eventually use the stores to promote private-label products, integrate and grow its AI-powered Echo speakers, boost Prime membership, and entice more customers into the fold. Hence this acquisition is the start of a long-term strategy. Amazon is known for its non-linear thinking. Just see how it started a brand-new business with AWS about 12 years ago; it is now a $14B business with a 50%+ margin. Amazon commands a powerful leadership position in cloud computing, and competitors like Microsoft Azure and Google’s GCE are trying hard to catch up.

The interesting thing to ponder is how the top tech companies are spreading their tentacles. This was a front-page article in today’s WSJ. Apple, a computer company that became a phone company, is now working on self-driving cars, TV programming, and augmented reality. It is also pushing into payments territory, challenging the banks. Google parent Alphabet built Android, which now runs most smartphones. It ate the maps industry; it’s working on internet-beaming balloons, energy-harvesting kites, and self-driving technologies. Facebook is creating drones, VR hardware, original TV shows, and even telepathic brain computers. Of course, Elon Musk brings his tech notions to any market he pleases – finance, autos, energy, and aerospace.

What is special about Amazon is that it is willing to work on everyday problems. According to the author of the WSJ article, this may be the smarter move in the long run. While Google and Facebook have yet to drive significant revenue outside their core businesses, Amazon has managed to create business after business that is profitable, or at least not a drag on the bottom line. The article ends with a cautionary note: “Imagine a future in which Amazon, which already employs north of 340,000 people worldwide, is America’s biggest employer. Imagine we are all spending money at what’s essentially the company store, and when we get home we’re streaming Amazon’s media….”

With a few tech giants controlling so many businesses, are we comfortable getting all our goods and services from the members of an oligopoly?

The end of Cloud Computing?

A provocative title for sure, when everyone thinks we have only just started the era of cloud computing. I recently listened to a talk on this topic by Peter Levine, general partner at Andreessen Horowitz, and it makes a ton of sense. The proliferation of intelligent devices and the rise of the IoT (Internet of Things) will lead us to a new world beyond what we see today in cloud computing, at least in terms of scale.

I have said many times that the onset of cloud computing was like going back to the future of centralized computing. IBM mainframes dominated the centralized computing era of the 1960s and 1970s. The introduction of the PC created the world of client-server computing (remember the Wintel duopoly?) from the 1980s until about 2000. Then the popularity of mobile devices started the cloud era around 2005, taking us back to centralized computing again: the text message I send you does not go from my device to your device directly, but first reaches a server somewhere in the cloud and only then gets to your phone. The trillions of smart devices forecast to appear as sensors in automobiles, home appliances, airplanes, drones, engines, and almost anything you can imagine (even your shoe) will drastically change the computing paradigm again. These “edge intelligent devices” cannot go back and forth to the cloud for every interaction. Rather, they will want to process data at the edge to cut down latency. This brings us to a new form of “distributed computing” – in a sense, a vastly expanded version of the PC era.

Peter emphasized that the cloud will continue to exist, but its role will change from central hub to “learning center”, where only curated, relevant data from the edge resides. The learning is then pushed back to the edge so devices get better at their jobs. The edge does three things: sense, infer, and act. The sense level handles a massive amount of data, as in a self-driving car (about 10GB per mile), making it a “data center on wheels”; the sheer volume is too much to push back to the cloud. The infer piece is machine learning and deep learning that detect patterns and improve accuracy and automation. Finally, the act phase is about taking action in real time. Once again, the cloud plays the central role as a “learning center” and as the custodian of the enterprise’s important data.
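As a very rough sketch of that sense-infer-act loop (my own toy example, not from Peter’s talk), an edge device might look like this: it reads a local sensor, runs a trivial local “model”, acts immediately on anomalies, and sends only a small curated summary upstream to the cloud.

```python
# Toy sense-infer-act loop for an edge device (illustrative; the sensor, model,
# and action are hypothetical stand-ins). Only curated events go to the cloud.
import random, statistics

def sense():
    return 60 + random.gauss(0, 5)            # stand-in for a real sensor reading

def infer(window):
    # Trivial local "model": flag an anomaly if the latest reading is far from the mean.
    return abs(window[-1] - statistics.mean(window)) > 10

def act(reading):
    print(f"local action: throttling, reading={reading:.1f}")

readings, curated_for_cloud = [], []
for _ in range(100):
    r = sense()
    readings.append(r)
    if len(readings) >= 10 and infer(readings[-10:]):
        act(r)                                 # act in real time, at the edge
        curated_for_cloud.append(r)            # only relevant data goes upstream
```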

Given the sheer volume of data created, peer-to-peer networks will be used to lessen the load on the core network and share data locally. The challenge is huge in terms of network management and security. Programming becomes more data-centric, meaning less code and more math. As the processing power of edge devices increases, their cost will come down drastically. I like his last statement that the entire world becomes the domain of IT, meaning we will have consumer-oriented applications with enterprise-scale manageability.

This is exciting and scary. But who could have imagined the internet in the 1980s or the smartphone in the 1990s, let alone self-driving cars?

IoT Analytics – A panel discussion

I was invited to participate in a panel called “IoT Analytics” last Thursday, March 23rd. It was organized for the IoT Global Council by Erick Schonfeld of Traction Technology Partner (New York). Besides me there were two other speakers: Brandon Cannaday, co-founder and chief product officer of Losant, and Patrick Stuart, head of products at SkyCatch. For those of you not familiar with the term, IoT stands for Internet of Things; there is a related term, IIoT, for the Industrial Internet of Things. IoT has been in the lexicon for the last few years, signifying an era of “pervasive computing” in which devices with an IP address can be everywhere – the fridge, the microwave, thermostats, door knobs, cars, airplanes, electric motors, all kinds of sensors – constantly sending data. The phrases “connected home” and “connected car” are an upshot of the IoT phenomenon. However, Gartner placed IoT at the peak of its “hype cycle” a couple of years back.

I emphasized the “pieces of the puzzle”, the components of IoT analytics: data ingestion at scale, handling the streaming data pipeline, data curation and unification, and storing the results in a highly scalable NoSQL data store – all steps that must happen before analytics can begin. Just dumping everything into a Hadoop data lake addresses only about 5% of the problem (data ingestion); transforming and curating the data so it makes sense is a non-trivial step. Then I spoke about analytics, which has several components: descriptive (what happened and why?), predictive (what is probably going to happen?), and prescriptive (what should I do about it?). Streaming analytics must filter, aggregate, enrich, and analyze a high throughput of data from disparate sources to identify patterns, detect urgent situations (like a temperature spike in an engine), and automate immediate action in real time.
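As an illustration of that streaming step (a sketch of my own, not a tool we discussed on the panel), here is a small filter that watches a stream of engine-temperature events and flags a spike relative to a rolling window. The event format, window size, and threshold are assumptions.

```python
# Illustrative streaming-analytics step: detect a temperature spike in real time.
from collections import deque

WINDOW_SIZE = 20          # rolling window of recent readings per engine
SPIKE_DELTA = 15.0        # degrees above the rolling mean that counts as a spike

def detect_spikes(events):
    """events: iterable of dicts like {'engine_id': 'e1', 'temp_c': 92.4}"""
    history = {}
    for e in events:
        window = history.setdefault(e["engine_id"], deque(maxlen=WINDOW_SIZE))
        if window and e["temp_c"] - (sum(window) / len(window)) > SPIKE_DELTA:
            yield {"engine_id": e["engine_id"], "alert": "temperature spike",
                   "temp_c": e["temp_c"]}     # urgent situation: trigger immediate action
        window.append(e["temp_c"])

# Usage: for alert in detect_spikes(stream_of_sensor_events): notify_maintenance(alert)
```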

Patrick of SkyCatch showed how they serve the construction industry by taking images via drones and accurately creating “earth maps” for self-driving bulldozers, thus saving human labor costs. Another example was imaging the actual progress of large construction sites and contrasting it against the plan to show offsets, thus detecting delays and enabling corrective action in time.

Brandon of Losant showed the example of a large utility company in Australia that supplies high-powered (expensive) pumps fitted with sensors. By collecting data from the sensors and monitoring it centrally, the company can identify problems and notify maintenance teams to take corrective action. Previously it had to fly people around for maintenance, so this IoT analytics solution has saved the company a lot of cost. Both Losant and SkyCatch are startups in the IoT analytics space, tackling immediate issues in real time.

It was a good panel and I learnt a lot from my co-panelists.