Artificial Intelligence, Part II

Part II of III 

Artificial Intelligence Enablers

In part I, we looked at the contribution of visual theory to the Cambrian explosion in AI. Of course, sight and natural language processing alone are not enough to explain the rapid developments of the last decade.

Rather, there seems to be a convergence of several factors. For years, people have said that Moore’s law is over. Gamers and NVIDIA might disagree. The technology that powers today’s console and computer games, relatively slow at any individual instruction but massively parallel and fast in combination, has proven invaluable in emulating the massively parallel circuits of the human brain. With that compute capacity, raw electrical and cooling power, while nowhere near the remarkably elegant light-bulb-level consumption of the human brain (roughly 20 watts), is no longer a constraining factor.

A $200 graphics card from Amazon, Newegg or Best Buy, provided the cryptocurrency miners haven’t driven it out of stock, is in many ways more powerful than a supercomputer of a generation ago. That has certainly contributed to the democratization of AI.
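
To make that concrete, here is a minimal sketch in Python, assuming TensorFlow 2.x and a consumer NVIDIA card are installed (the library version and hardware are assumptions for illustration): a single layer of a neural network is essentially one big matrix multiply, exactly the kind of work a gaming GPU splits into thousands of small operations and runs in parallel.

    import tensorflow as tf

    # List whatever GPUs TensorFlow can see -- on a gamer's machine this is
    # typically a single consumer GeForce card.
    print(tf.config.list_physical_devices('GPU'))

    batch = tf.random.normal([4096, 1024])    # 4,096 inputs, 1,024 features each
    weights = tf.random.normal([1024, 512])   # one dense layer's weights

    # TensorFlow places the multiply on the GPU automatically when one is
    # available; the identical line runs (far more slowly) on the CPU otherwise.
    activations = tf.nn.relu(tf.matmul(batch, weights))
    print(activations.shape)                  # (4096, 512)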

A world without rules

Picture of apple tree at Trinity College, Cambridge by zenm at flickr

In the late eighties, contemplating the tree at Trinity College, Cambridge that is said to be a grafted descendant of the one that caused Sir Isaac’s curious gravitational incident with the apple, I wondered whether I had waded too deep into applied mathematics.

For the next two decades, the honest answer was that I had. But my professional migration away from rules-based automation of healthcare intelligence (we had 20,000 such rules at Healthways) toward the Bayesian statistics and Markov trees that underpin today’s deep learning models made me realize how valuable those foundational mathematical concepts are today. I am now of the opinion that most schools are teaching AI all wrong by lumping it in with robotics.
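
To make the Bayes’ rule point concrete, here is a toy sketch with made-up screening numbers (the prevalence, sensitivity and specificity below are illustrative assumptions, not figures from Healthways or anywhere else): the probability that a condition is really present given a positive test.

    # Illustrative figures only -- these are assumptions chosen for the example,
    # not real clinical values.
    prevalence = 0.01     # P(condition)
    sensitivity = 0.95    # P(positive test | condition)
    specificity = 0.90    # P(negative test | no condition)

    # Bayes' rule: P(condition | positive) =
    #   P(positive | condition) * P(condition) / P(positive)
    p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    posterior = sensitivity * prevalence / p_positive

    print(f"P(condition | positive) = {posterior:.1%}")   # about 8.8%

That a 95%-sensitive test still leaves the odds of the condition at under one in ten is exactly the kind of result a pile of hand-written rules struggles with and statistics handles naturally.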

We are the data center

For many years as CIO of a publicly held company that provided health services to large health plans and employers, I spent much of my time explaining to my customer counterparts why their data should be stored in “the cloud” and not in a private data center.

I was wrong on both counts. First, it’s not their data; it’s the consumer’s (member, employee, participant, etc.). This becomes easy to see in the wake of Cambridge Analytica, Equifax and the like, but it should always have been the case.

Second, the cloud is not the best place. Each participant’s mobile devices, smartphones, Apple TVs and the like, have more compute power than most prior-generation supercomputers. They carry the same fast graphics architecture that powers gamers’ desktops and living-room consoles. And so TensorFlow, Core ML and other freely available technologies now power AI on the phone. In 2017 it was “train in the lab, deploy in the cloud”; in 2018, it’s “train in the lab/cloud, deploy on the phone”. We’ve designed our company on the belief that this will converge to train and run at the edge, wherever that might be.
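
As a rough sketch of that “train in the lab/cloud, deploy on the phone” flow, assuming TensorFlow 2.x with its bundled TensorFlow Lite converter (the tiny model and file name below are placeholders, not anything in production):

    import tensorflow as tf

    # A toy model standing in for whatever actually gets trained in the lab or cloud.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(32, activation='relu', input_shape=(10,)),
        tf.keras.layers.Dense(1, activation='sigmoid'),
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy')
    # model.fit(train_x, train_y, epochs=5)   # training happens off the phone

    # Convert to TensorFlow Lite: a compact flatbuffer the phone's runtime executes.
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    tflite_model = converter.convert()

    with open('model.tflite', 'wb') as f:     # bundled into the mobile app
        f.write(tflite_model)

On iOS, Core ML plays the analogous role: the trained model is converted once, ships inside the app, and runs entirely on the handset.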

So with enablers like sight and language processing, gaming chips, statistics 101, edge computing and a healthy dose of empathy-based design, we can finally design and implement Artificial Intelligences with broad, useful application.

In part III, we’ll explore the implications for healthcare.

Guy Barnard is CEO & Co-Founder of Synchronous Health, an artificial intelligence behavioral health solutions provider. He was previously Chief Information Officer at Healthways, a $2B population health company, and held leadership positions at the Boston Consulting Group and Accenture. He holds an MBA from MIT and an MA and BA from Cambridge University.