I’ll admit it: I have had a serious crush on a wearable device (or two). I’ve gotten down with a Jawbone Up, fallen for a Fitbit Force, and even messed around with a Misfit (Shine). I’ve serially dated activity-tracking apps like Moves and Breeze.
But I’ve had to break up with them all.
For starters, I’ve never gotten enough back to make these relationships worthwhile. Yes, I now know that I walk more on Mondays than on Fridays, and that my sleep deteriorates over the course of the week. Big whoop. Having this information hasn’t changed my life.
I was promised a superhuman level of understanding in exchange for the rights to collect, save and analyze my personal data. I still don’t have that. I’m still the same old me.
The news that Nike is abandoning wearables should be a wake-up call. We are missing an incredible opportunity. How do we get this industry back on track?
It’s clear what the problem is. Wearables are great at surfacing a lot of data. They’re just not good at making sense of it.
What does it mean if I took 5,000 steps — but swam 20 laps in the pool? Did I hit my fitness goal or not? What does it mean that I only got three hours of deep, restorative sleep out of my eight hours of rack time last night?
We can’t be sure — at least, not without some additional knowledge. As humans, we are left to draw our own conclusions, based on what we know already. We interpret the output of wearables in the context of what happened today, how we felt, what we’ve done previously and so on.
Without context, we’re all in the same boat as the protagonist (Guy Pearce) in the opening scene of the movie “Memento.” All we know is that we’re sitting in a diner with Carrie-Anne Moss.
Unfortunately, that fact alone tells us almost nothing. In order to figure out what’s going on, we need the context of the other events in the movie to explain who she is and why she’s there.
The same thing goes for step counters, sleep trackers and all of the other wearables out there. Instead of helping us understand why we couldn’t sleep, they provide us with data that doesn’t translate into knowledge that we can use. They’re not entirely unlike those Polaroids that Guy Pearce kept trying to make sense of in “Memento.”
In order to recapture our collective imaginations, wearables need to gather enough contextual knowledge to draw meaningful conclusions themselves — and to let us know when they’ve found something good.
For example, if I knew that I slept better when the average temperature in my bedroom stayed below 65 degrees, I’d be clamoring for my Jawbone Up to talk to my Nest. Tell me that I’m more active on days when I have more meetings, and I might actually want my Tempo calendar to accept invitations to more sit-downs.
So, why aren’t we seeing everyone race to use more contextual knowledge?
First off, contextual knowledge is still astonishingly hard to acquire. Our phones could know a lot about us, including where we go, what we’re doing and what we search for. No one’s really taking advantage of all of that information — yet.
That’s not to say we’re not making progress. Saga knows where I go — without me having to check in. (Disclosure: Saga is the flagship app of my company, A.R.O.) Cover and Aviate know which apps I use.
Second, access to the best sources of contextual knowledge depends on complete and unfettered access to data. Nothing held back. Unless they can always run in the background, most apps aren’t going to know about all the little things that make up the majority of my day. Those data points make it possible to understand what I’m really up to — along with why my blood pressure is skyrocketing today.
Finally, progress is blocked by siloed data. Even though I run with a different playlist, my wearables don’t know what I’m listening to — or catch that the timing of my run means my meeting ran late and I’m getting my workout in as the sun goes down.
Others have already called for unifying platforms that collide heterogeneous data sets, eliminating sensor silos and producing non-obvious insights. And that’s the key. We’ve got to find ways to draw meaningful correlations between the different data silos we’re populating. That’s the only cure for the lackluster results we’ve gotten so far.
The next generation of wearables and sensor-data platforms — the ones that manage to be genuinely fulfilling and gratifying — will put these kinds of correlations at the center of their universe.
Nancy Harvey, formerly of WolframAlpha and now an executive in residence at the University of Chicago, is fond of saying “it is reasonable to expect.” I love that expression, too, because it forces me to deduce what’s really going to happen in a certain situation, given the facts available.
We have to build the next generation of intelligent systems in the same way. Systems have to know that it is reasonable to expect that people who drink an above-average amount of coffee will sleep a below-average number of hours per night. And that just as it’s not reasonable to expect a downpour when there’s not a cloud in the sky, it’s also not reasonable to expect me to get up early and go running if I’ve been out partying until 4 am.
Companies like PokitDok are taking some important steps in this direction right now. They’re drawing correlations between data captured from a patient’s wearable devices and the cost of their health insurance or even a health procedure. Could sharing your Fitbit data get you a lower premium or a cheaper knee replacement? Signs point to yes.
Finding meaningful correlations won’t necessarily be easy — but there will be riches for those who can find them reliably and systematically. A system that unites sensor data in one place is definitely useful. But a system that computes on that combined set and points out unexpected relationships between seemingly unrelated variables is invaluable.
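To make the idea concrete, here’s a toy sketch of the simplest version of what such a system might compute: a Pearson correlation between two siloed daily data streams. All of the numbers and data sources here are invented for illustration — a hypothetical week of coffee counts from one app and sleep hours from a sleep tracker:

```python
# Toy sketch: correlating two "siloed" daily data streams.
# The data values and their sources are hypothetical.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A hypothetical week of data from two separate trackers:
coffee_cups = [1, 2, 2, 3, 4, 4, 5]                  # diet-tracking app
sleep_hours = [8.0, 7.5, 7.5, 7.0, 6.0, 6.5, 5.5]    # sleep tracker

r = pearson(coffee_cups, sleep_hours)
if r < -0.5:
    print(f"Strong negative correlation (r = {r:.2f}): more coffee, less sleep.")
```

A real platform would have to do far more — align timestamps across devices, control for confounders, and decide which of thousands of candidate pairings are worth surfacing — but the core move is exactly this: put data from two silos on one timeline and ask what covaries.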
Andy Hickl is CEO of A.R.O., a platform that turns sensor data into contextual intelligence you can use. A.R.O.’s flagship app, Saga, records your life automatically so you can share it with the people you care about. Reach him @andyhickl.