I heard a great podcast discussion with Andreessen Horowitz’s Vijay Pande a few weeks ago on why this time is different for wearables and it’s worth diving into this topic a bit further. The discussion centered around what’s different about wearables now vs. when wearables first launched. Vijay’s primary point was that data science, particularly machine learning, is now being applied to biometric sensor data at scale and in context.

They didn’t have time to dive deep on the podcast, so let’s unpack 4 of those points in more detail:

  1. Machine learning – There are a few elements to this point. First, machine learning techniques have advanced rapidly in the last 5 years. In addition, machine learning and AI toolsets have largely been democratized by Amazon, Google, Microsoft and many others. Teams can now access powerful tools, and the computing power necessary to use them, with relatively low cost and ease of use. Lastly, machine learning is in the early stages of being applied to large biometric data sets. More on that shortly.
  2. Biometric sensor data – The quality and accuracy of sensor technology has drastically improved in the last 5-10 years. We have moved from relatively basic accelerometer-based step counters (think the first Fitbit) to more advanced sensor modalities, including PPG and ECG sensor systems, capable of providing deeper insight into personal health and medical conditions. For example, PPG sensors, which have become the dominant sensor technology in wearables, have improved to the point where they can now detect heart rate variability, atrial fibrillation, and blood pressure, among other biometrics. We at Valencell believe PPG sensor technology has a long runway of more advanced capabilities that will come to market in the next few years.
  3. At scale – There are hundreds of millions of wearables in use today – over 200M sold just since 2018 using IDC’s numbers – 70% of which have PPG sensors according to this study. This is critical because data science and machine learning rely on large data sets with a diversity of human biometric data for their effectiveness.
  4. In context – This is also critical because it enables those large data sets to be labeled and contextualized with additional data from electronic medical records, clinical trial data, food diaries, environmental sensors and much more. This is all happening in the context of a macro-level shift (albeit a relatively slow one) from fee-for-service to value-based care, which is driving more payer interest in preventative care vs. a focus only on treatment and therapeutics.
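To make the second point above a bit more concrete: one of the simplest biometrics a PPG sensor enables is heart rate variability, often summarized with the RMSSD metric (root mean square of successive differences between inter-beat intervals). Here's a minimal sketch of that calculation; the interval values are invented for illustration, not real device data.

```python
import numpy as np

def rmssd(ibi_ms):
    """RMSSD: root mean square of successive differences between
    inter-beat intervals (in ms) -- a common time-domain HRV metric."""
    ibi = np.asarray(ibi_ms, dtype=float)
    diffs = np.diff(ibi)          # beat-to-beat changes
    return float(np.sqrt(np.mean(diffs ** 2)))

# Hypothetical inter-beat intervals (ms), as might be derived from
# peak-to-peak timing in a wrist-worn PPG signal.
ibi = [812, 798, 830, 790, 845, 805, 820]
print(round(rmssd(ibi), 1))  # → 35.8
```

In a real device, the hard part is upstream of this formula: detecting clean pulse peaks in a noisy optical signal on a moving wrist. That signal-quality problem is exactly where the improved sensor hardware and machine learning described above come in.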

It’s important to note that wearables were WAY overhyped from the beginning. Many people thought they would replace the smartphone as the next big technology platform. Wearables were never going to replace smartphones. Wearables are becoming an important element of the technology ecosystem (Apple says its wearables business is now the size of a Fortune 200 company), but that growth is being driven by health and wellness, not mobile computing.

And now we’re finally getting somewhere – wearables combined with machine learning are showing promising results. Here are just a few examples:

Everyone is looking for the next big thing – but it’s not one thing, and there’s no silver bullet. It’s the aggregation of technologies, devices, and human behaviors in a complex adaptive system that’s leading this next phase of wearables growth. And we’re in the very early stages of the impact this convergence will have on individual and public health at scale.

As a sidenote, we wrote a post on this back in 2017 and it’s interesting to see how the market has played out. Check it out here: https://valencell.com/blog/next-phase-wearables-market-growth/

Also, here’s a link to the podcast. I highly recommend listening.