Category Archives: Disruptive Technology

Blockchain in Healthcare

The application of blockchain technology in the healthcare industry can bring great benefits, the most important being data accuracy and lower costs.

Just as a reminder, blockchain technology provides these key facets:

  • a low-cost, decentralized ledger for managing information, replicated at each node without any central hub,
  • simultaneous access for all parties to a single body of strongly encrypted data, making it extremely hard for hackers to reach,
  • an audit trail created each time data is changed, helping ensure the integrity and authenticity of the information (see the sketch after this list),
  • patient control: patients can see their own data and authorize other parties (doctors, hospitals, insurers) to access it.
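
To make the audit-trail point concrete, here is a minimal sketch in Python of a hash-chained ledger; the record fields are invented for illustration, and a real healthcare blockchain would add encryption, access control, and distributed consensus on top. Each entry's hash covers the previous entry's hash, so any later tampering breaks the chain:

    import hashlib
    import json

    def block_hash(record, prev_hash):
        """Hash the record together with the previous block's hash."""
        payload = json.dumps(record, sort_keys=True) + prev_hash
        return hashlib.sha256(payload.encode()).hexdigest()

    class AuditLedger:
        """Append-only chain of record updates (toy example)."""
        def __init__(self):
            self.chain = []  # list of (record, hash) pairs

        def append(self, record):
            prev = self.chain[-1][1] if self.chain else "genesis"
            self.chain.append((record, block_hash(record, prev)))

        def verify(self):
            """Recompute every hash; a tampered block breaks the chain."""
            prev = "genesis"
            for record, stored in self.chain:
                if block_hash(record, prev) != stored:
                    return False
                prev = stored
            return True

    ledger = AuditLedger()
    ledger.append({"patient": "p1", "field": "allergy", "value": "penicillin", "by": "dr_smith"})
    ledger.append({"patient": "p1", "field": "allergy", "value": "none", "by": "clinic_a"})
    print(ledger.verify())             # True
    ledger.chain[0][0]["value"] = "x"  # tamper with history...
    print(ledger.verify())             # False: the audit trail exposes it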

Many of the current problems in the healthcare industry stem from multiple sources of patient data, and hence incorrect information, which adds to cost. The various entities involved (hospital, doctor's office, insurer) each maintain their own database, so synchronization becomes a real issue and often causes errors. Blockchain is a real solution to these ills. Several applications of blockchain are already in development.

  • Change Healthcare, a Nashville-based health network, has introduced a blockchain system for processing insurance claims. While not all providers in the system are using it yet, the shared ledger of encrypted data represents a “single source of truth”: all involved parties see the same accurate information about a claim in real time, rather than sending data back and forth. This relieves patients from having to call multiple parties to verify information (a practice we are all familiar with). Each time the data is changed, a record of the change appears on the digital ledger, identifying the responsible party. Any change also requires verification by each party involved, again enforcing the record's accuracy.
  • Last April, a group of companies including Humana Inc., Multiplan Inc., Quest Diagnostics Inc., and UnitedHealth Group's Optum announced a pilot project that uses blockchain to maintain online directories of doctors and healthcare providers. Typically, doctor groups, hospitals, insurers, and diagnostic companies each maintain their own online listings of contacts, practices, and biographical details. Not only is this expensive, but they must continually check and verify the accuracy of those directories. Using blockchain could yield a substantial saving (almost 75%). The goal of the pilot is for providers to enter updates themselves into the blockchain, where all parties in the network can view them.
  • The MIT Media Lab is developing a blockchain-based system called MedRec. Patients manage their own records and give doctors and providers permission to access and update them. The success of this system, or any such system, will depend on a large number of providers and doctors opting in.

Most of these early efforts are at the “proof of concept” stage, but the potential of blockchain to help lower healthcare costs and provide timely, accurate information is very promising.


Crypto Hype vs. Blockchain

There is a lot of crypto hype these days, from cryptocurrencies like Bitcoin to fundraising efforts like ICOs (Initial Coin Offerings), which resemble IPOs. All this noise has obscured the real benefits of the underlying technology: Blockchain. The Internet brought us the “exchange of information” over the last three decades. Blockchain will give us a new era of the “exchange of value” or “exchange of assets” without an intermediary, via highly secure transactions in a peer-to-peer network. New ways of transferring real estate titles, managing cargo on shipping vessels, guaranteeing the safety of the food we eat, and many more mundane activities will be enabled by Blockchain. An article in today's WSJ by Christopher Mims covers this in more detail.

Briefly, Blockchain is essentially a secure database (or ledger) spread across multiple computers. Everybody has the same record of all transactions, so tampering with one instance of it is meaningless. “Crypto” refers to the cryptography that underlies it, which allows agents to interact securely (e.g., to transfer assets) while guaranteeing that once a transaction has been made, the Blockchain keeps an immutable record of it. This technology is well suited to transactions that require trust and a permanent record for traceability; it also requires the cooperation of many different parties. Here are some examples of actual Blockchain deployments so far (a small sketch of the replicated-ledger idea follows the list):

  • At Walmart, 1.1 million items are on a Blockchain, helping the company trace each item's journey from manufacturer to store shelf. The global shipping company Maersk is tracking shipping containers, making it faster and easier to transfer them and get them through customs. Other companies using Blockchain technology for tracking include Kroger, Nestle, Tyson Foods, and Unilever. In all these cases, IBM provides the Blockchain technology.
  • CartaSense, an Israeli company, uses a Blockchain database that lets its customers track every stage of the journey of a package, pallet, or shipping container.
  • Everledger, a company started in 2014, maintains a Blockchain-based registry of every certified diamond in the world (already 2.2 million in its registry). By recording 40 different attributes of each stone, it can trace a stone's journey from its source to the final sale to a customer.
  • Dubai has declared its goal of becoming the world's first Blockchain-powered government by 2020. It wants to streamline real estate transactions for faster, easier transfer of property titles. Other records, such as birth and death certificates, passports, and visas, can also be managed at lower cost and with better efficiency.
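
As promised above, here is a minimal Python sketch of why tampering with one instance is meaningless: every node folds its copy of the history into a single head hash, and a corrupted replica simply disagrees with the majority. (The shipment events and three-node setup are invented for illustration; real deployments use richer consensus protocols.)

    import hashlib
    import json

    def chain_head(chain):
        """Fold a transaction history into one head hash; changing any
        transaction anywhere yields a different head."""
        h = "genesis"
        for tx in chain:
            payload = json.dumps(tx, sort_keys=True) + h
            h = hashlib.sha256(payload.encode()).hexdigest()
        return h

    honest = [{"item": "pallet-7", "event": "loaded"},
              {"item": "pallet-7", "event": "cleared customs"}]
    tampered = [{"item": "pallet-7", "event": "loaded elsewhere"},
                {"item": "pallet-7", "event": "cleared customs"}]

    replicas = [honest, honest, tampered]   # three nodes, one corrupted
    heads = [chain_head(c) for c in replicas]
    consensus = max(set(heads), key=heads.count)  # majority head wins
    print([h == consensus for h in heads])  # [True, True, False]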

It is a bit early to claim that Blockchain will revolutionize every industry, including government, but it has that potential. It poses a tremendous challenge for hackers to break into, and it can affect everything from how we vote, to whom we connect with, to what we buy.

The New AI Economy

The convergence of technology leaps, social transformation, and genuine economic needs is catapulting AI (Artificial Intelligence) from its academic roots and decades of inertia to the forefront of business and industry. There has been growing buzz over the last couple of years about how AI and its key subsets, Machine Learning and Deep Learning, will affect all walks of life. Another phrase, “Pervasive AI,” has entered our tech lexicon with the popularity of the Amazon Echo and Google Home devices.

So what are the key factors pushing this renaissance of AI? We can quickly list them here:

  • The rise of Data Science from the basement to the boardroom. Everyone has seen the 3 V's of Big Data (volume, velocity, and variety). Data is called by many names: oxygen, the new oil, the new gold, the new currency.
  • Open-source software such as Hadoop sparked a revolution in analytics over huge volumes of unstructured data. The shift from retroactive to more predictive and prescriptive analytics, in pursuit of actionable business insights, keeps growing. Real-time BI is also taking a front seat.
  • The arrival of practical frameworks for handling big data revived AI (Machine Learning and Deep Learning), which fed happily on that data.
  • Existing CPUs were not powerful enough for the fast processing AI needs, so GPUs (Graphics Processing Units) offered faster, massively parallel chips. NVIDIA has been a driving force here; its ability to provide a full range of components (systems, servers, devices, software, and architecture) is making it an essential player in the emerging AI economy. IBM's neuromorphic computing project has also shown notable success in perception, speech, and image recognition. (A small sketch of the data-parallel arithmetic involved follows this list.)
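
As a small illustration of the arithmetic behind that last point (this is generic NumPy on a CPU, not NVIDIA-specific code): deep learning is dominated by dense matrix multiplies, millions of independent multiply-adds that reward parallel hardware. Even CPU vectorization shows the gap; GPUs push the same idea thousands of cores further.

    import time
    import numpy as np

    n = 150
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)

    # One element at a time, the way a naive sequential program works.
    t0 = time.time()
    slow = [[sum(a[i, k] * b[k, j] for k in range(n)) for j in range(n)]
            for i in range(n)]
    t1 = time.time()

    # The same multiply-adds issued as one data-parallel operation.
    fast = a @ b
    t2 = time.time()

    print(f"loop: {t1 - t0:.2f}s  vectorized: {t2 - t1:.5f}s")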

Leading software vendors such as Google have numerous AI projects, ranging from speech and image recognition to language translation and varieties of pattern matching. Facebook, Amazon, Uber, Netflix, and many others are racing to deploy AI in their products.

Paul Allen, co-founder of Microsoft, is pumping $125M into his research lab, the Allen Institute for AI. The focus is to digitize common sense. Let me quote from today's New York Times: “Today, machines can recognize nearby objects, identify spoken words, translate one language into another and mimic other human tasks with an accuracy that was not possible just a few years ago. These talents are readily apparent in the new wave of autonomous vehicles, warehouse robotics, smartphones and digital assistants. But these machines struggle with other basic tasks. Though Amazon's Alexa does a good job of recognizing what you say, it cannot respond to anything more than basic commands and questions. When confronted with heavy traffic or unexpected situations, driverless cars just sit there.” Paul Allen added, “To make real progress in A.I., we have to overcome the big challenges in the area of common sense.”

Welcome to the new AI economy!

Vitalik Buterin & Ethereum

Many of you may not have heard of this 23-year-old Russian-Canadian, Vitalik Buterin. He is one of those geniuses who fell in love with computing and math at an early age. His parents emigrated from Russia to Canada when he was three years old. After attending a private high school in Toronto, he enrolled at the University of Waterloo (my alma mater) but dropped out after receiving the $100K Peter Thiel Fellowship to pursue his entrepreneurial work in cryptocurrency.

After trying and failing to persuade the Bitcoin community to adopt a scripting language, he decided to start a new platform that could handle cryptocurrency plus any other asset, such as a smart contract. His seminal 2013 paper laid the foundation, and the same year he proposed building a new platform called Ethereum with a general-purpose scripting language. In early 2014, a Swiss company called Ethereum Switzerland GmbH developed the first Ethereum software project. Finally, in July and August of 2014, Ethereum launched a pre-sale of Ether tokens (its own cryptocurrency) to the public and raised $14M. Ethereum belongs to the same family as the cryptocurrency Bitcoin, whose value has increased more than 1,000 percent in just the past year. Ethereum has its own currencies, most notably Ether, but the platform has a wider scope than just money.

You can think of my Ethereum address as having elements of a bank account, an email address and a Social Security number. For now, it exists only on my computer as an inert string of nonsense, but the second I try to perform any kind of transaction — say, contributing to a crowdfunding campaign or voting in an online referendum — that address is broadcast out to an improvised worldwide network of computers that tries to verify the transaction. The results of that verification are then broadcast to the wider network again, where more machines enter into a kind of competition to perform complex mathematical calculations, the winner of which gets to record that transaction in the single, canonical record of every transaction ever made in the history of Ethereum. Because those transactions are registered in a sequence of “blocks” of data, that record is called the blockchain. Many Bitcoin exchanges use the Ethereum platform.
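
That competition “to perform complex mathematical calculations” is proof-of-work mining. Here is a minimal Python sketch of the generic hash-puzzle form (not Ethereum's actual Ethash algorithm): miners race to find a nonce whose hash clears a difficulty target, and verifying the winner costs just one hash.

    import hashlib

    def mine(block_data, difficulty=4):
        """Search for a nonce so the block's hash starts with
        `difficulty` zeros; the first miner to succeed records the block."""
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
            if digest.startswith("0" * difficulty):
                return nonce, digest
            nonce += 1

    nonce, digest = mine("send 1 ETH from 0xabc to 0xdef")
    print(nonce, digest)
    # Verification is cheap: recompute one hash and check the zeros.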

A New York Times article in January said, “The true believers behind blockchain platforms like Ethereum argue that a network of distributed trust is one of those advances in software architecture that will prove, in the long run, to have historic significance. That promise has helped fuel the huge jump in cryptocurrency valuations. But in a way, the Bitcoin bubble may ultimately turn out to be a distraction from the true significance of the blockchain. The real promise of these new technologies, many of their evangelists believe, lies not in displacing our currencies but in replacing much of what we now think of as the internet, while at the same time returning the online world to a more decentralized and egalitarian system. If you believe the evangelists, the blockchain is the future. But it is also a way of getting back to the internet’s roots”.

Vitalik wrote down the idea for Ethereum at age 19. He is a new-age Linus Torvalds, whose Linux became the de facto operating system for Internet developers.

IBM’s Neuromorphic Computing Project

The Neuromorphic Computing Project at IBM is a pioneer in next-generation chip technology. The project has received roughly $70 million in research funding from DARPA (under the SyNAPSE program), the US Department of Defense, the US Department of Energy, and commercial customers. This groundbreaking project is multi-disciplinary, multi-institutional, and multi-national, with worldwide scientific impact. The resulting architecture, technology, and ecosystem break with the prevailing von Neumann architecture and constitute a foundation for energy-efficient, scalable neuromorphic systems. The head of the project is Dr. Dharmendra Modha, IBM Fellow and chief scientist for IBM's brain-inspired computing effort.

So why is the von Neumann architecture inadequate for brain-inspired computing? The von Neumann model goes back to 1946 and deals with three things: the CPU, memory, and a bus. Data moves to and from memory over the bus, which connects memory and CPU. That bus becomes the bottleneck, and it also serializes computation: even to flip a single bit, you must read that bit from memory and write it back.

The new architecture is radically different. The IBM project takes inspiration from the structure, dynamics, and behavior of the brain to optimize the time, speed, and energy of computation. Co-locate memory and computation and closely intertwine communication, just as the brain does, and you minimize the energy spent shuttling bits between memory and processor. You get event-driven rather than clock-driven computation, computing only when information changes.
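
A toy Python sketch of that event-driven idea (a leaky integrate-and-fire neuron of my own invention, not IBM's actual TrueNorth programming model): state updates happen only when a spike event arrives, so quiet periods cost nothing.

    class EventDrivenNeuron:
        """Toy leaky integrate-and-fire neuron: updated only on input
        spike events, never on idle clock ticks."""
        def __init__(self, threshold=1.0, leak=0.9):
            self.potential = 0.0
            self.threshold = threshold
            self.leak = leak
            self.last_time = 0

        def on_spike(self, time, weight):
            # Apply leak for the elapsed interval, then integrate the event.
            self.potential *= self.leak ** (time - self.last_time)
            self.last_time = time
            self.potential += weight
            if self.potential >= self.threshold:
                self.potential = 0.0   # reset after firing
                return True            # emit an output spike
            return False

    n = EventDrivenNeuron()
    # Sparse input: three events across 90 time steps; no work in between.
    for t, w in [(3, 0.6), (4, 0.6), (90, 0.7)]:
        if n.on_spike(t, w):
            print(f"neuron fired at t={t}")   # fires at t=4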

The von Neumann paradigm is, by definition, a sequence of instructions interspersed with occasional if-then-else statements. Compare that to a neural network, where a neuron can reach out to as many as 10,000 neighbors. TrueNorth (IBM's new chip) can reach up to 256; the disparity exists because it is silicon rather than organic technology. Even so, that is a very high fan-out, and high fan-out is difficult to implement in a sequential architecture. An AI system IBM developed last year for Lawrence Livermore National Lab had 16 TrueNorth chips tiled in a 4-by-4 array; the chips are designed to be tiled, so scalability is built in as a design principle rather than an afterthought.

In summary, the design points of the IBM project are as follows:

  • The von Neumann architecture cannot provide the massively parallel, fault-tolerant, power-efficient systems needed to embed intelligence into silicon. Instead, IBM had to rethink processor design.
  • You can't throw the baby out with the bathwater: even if you rethink the underlying hardware, you need sufficiently abstracted software libraries to spare developers pain when they program your chip.
  • You can achieve power efficiency by changing the way you build software and hardware to become active only when an event occurs; rather than tying computation to a series of sequential operations, you make it into a massively parallel job that runs only when the underlying system changes.

AI is achieving notable success in perception tasks such as speech and image recognition. In reinforcement learning and deep learning, the human brain is the primary inspiration, which makes IBM's neuromorphic chip design a significant foundational technology.

iPhone’s tenth anniversary – iPhone X

Yesterday (September 12, 2017), Apple celebrated the tenth anniversary of its original iPhone, which Steve Jobs launched back in 2007 at the Moscone Center in San Francisco. It was a big day: Apple opened its brand-new Steve Jobs Theater at the new Apple campus. The show began, in front of 1,000 invitees, with a video of Steve Jobs from the first iPhone event, thus inaugurating the theater he himself had envisioned. His wife Laurene and co-founder Steve Wozniak were present. It was a big moment.

Besides introducing incremental upgrades to the Apple Watch and Apple TV (4K support), Apple introduced two versions of the iPhone 8, both very similar to the iPhone 7. The brand-new device was the iPhone X (“ten,” not “ex”). This is a very different design. The screen is bigger (5.8″) and uses OLED technology for the first time; ironically, the OLED panel is made by Samsung. The iPhone X is only slightly bigger than the iPhone 7, but its screen is larger than that of the jumbo-size iPhone 7 Plus.

Here are the highlights of iPhone X:

  • A gorgeous screen and beautiful design.
  • Great cameras, wireless charging, better battery life, and water resistance.
  • No home button; the side button takes on several functions.
  • The best mobile operating system.
  • All on a device that you’ll end up using several hours a day.

Facial recognition is the most prominent new feature. Called Face ID, it is the primary tool to unlock the nearly $1,000 iPhone X, which is scheduled to start shipping Nov. 3. A camera system with depth sensors projects 30,000 infrared dots across a user's face, which the phone uses to build a mathematical model that is stored securely on the device. Each time users hold the device to their faces, the technology verifies the mathematical model before unlocking the phone in an instant. Considering iPhone users unlock their devices 80 times a day on average, the success of Face ID could make or break the device, analysts say, especially once early users get their hands on it and begin sharing their experiences publicly. This is a crucial function that must be flawless; yesterday's demo failure was not very auspicious.
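
Apple has not published Face ID's internals, but conceptually such systems distill the depth map into a numeric embedding and unlock when a fresh capture is close enough to the enrolled one. A hedged Python sketch with made-up four-dimensional vectors (real embeddings have hundreds of dimensions and a learned similarity metric):

    import math

    def cosine_similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    # Enrolled model distilled from the infrared dot pattern (made up).
    enrolled = [0.21, 0.83, 0.47, 0.10]

    def unlock(capture, threshold=0.98):
        """Unlock only if the capture is close enough to the enrolled model."""
        return cosine_similarity(capture, enrolled) >= threshold

    print(unlock([0.22, 0.81, 0.48, 0.11]))  # same face, slight change -> True
    print(unlock([0.90, 0.10, 0.30, 0.77]))  # different face -> False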

If it catches on, the facial-scanning technology in the iPhone X could unlock other changes in how we use smartphones. In one small example, Apple is also using the system to capture facial expressions and animate images of chickens, unicorns, and other common emojis. These animojis, as Apple calls them, can be captured and shared with friends.

iOS remains the best smartphone operating system and the iPhone’s biggest advantage over its competition. Apple’s operating system is the only smartphone platform that comes with consistent, guaranteed updates. And it’s the only one that routinely brings cutting-edge features, like augmented reality, to older phones.

Jony Ive's design elegance is clearly visible in the iPhone X, as well as in the round glass auditorium lobby of the Steve Jobs Theater.

Serverless, FaaS, AWS Lambda, etc.

If you are part of the cloud development community, you certainly know about “serverless computing,” which is almost a misnomer: it implies there are no servers, which is untrue. Rather, the servers are hidden from developers. This model eliminates operational complexity and increases developer productivity.

We have moved from monolithic computing to client-server, to services, to microservices, and now to the serverless model. In other words, our systems have slowly “dissolved” from monoliths into individual functions. Software is developed and deployed function by function; each function is a first-class object that the cloud runs for you, triggered by events according to certain rules. Functions are written in a fixed set of languages, with a fixed programming model and cloud-specific syntax and semantics, and they can invoke cloud-specific services to perform complex tasks. So for cloud-native applications, serverless offers a new option. But the key question is what you should use it for, and why.

Amazon's AWS, as usual, spearheaded this in 2014 with an engine called AWS Lambda. It supports Node, Python, C#, and Java, and it uses API triggers from many AWS services (a minimal Lambda-style handler is sketched below). IBM offers OpenWhisk as a serverless solution supporting Python, Java, Swift, Node, and Docker; IBM and third parties provide service triggers, and the code engine is Apache OpenWhisk. Microsoft provides similar functionality with its Azure Functions. Google Cloud Functions supports Node only and has many other limitations.
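
For flavor, here is the promised minimal Lambda handler in Python. The `lambda_handler(event, context)` signature is Lambda's standard Python entry point; the S3 upload trigger and its contents are invented for illustration, and notice that no server management appears anywhere in the code:

    import json

    def lambda_handler(event, context):
        """Invoked by AWS Lambda once per event; here the event is an
        S3 upload notification."""
        records = event.get("Records", [])
        for record in records:
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            size = record["s3"]["object"]["size"]
            print(f"New object s3://{bucket}/{key} ({size} bytes)")
        return {"statusCode": 200,
                "body": json.dumps({"processed": len(records)})}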

This model of computing is also called “event-driven” or FaaS (Function as a Service). There is no need to manage the provisioning and utilization of resources, nor to worry about availability and fault tolerance. It relieves the developer (or DevOps team) from managing scale and operations. Hence the key marketing slogans: event-driven, continuous scaling, and pay-per-use. It is a new form of abstraction that makes the function the granular unit.

At the micro level, serverless seems pretty simple: just develop a function and deploy it to the cloud. However, there are several implications. It imposes many constraints on developers and brings a load of new complexities, plus cloud lock-in: you have to pick one cloud provider and stay there, since switching is not easy. Areas to ponder include cost, complexity, testing, emergent structure, and vendor dependence.

Serverless has been getting a lot of attention over the last couple of years. We will wait and see what lessons are learned as more developers deploy it in real-world web applications.