Stephen Fluin: Welcome, everybody. Seems like lunch was pretty good, because we've got a lot of stragglers here — more every minute.
I'll get started, and as people file in, things will ramp up here. This is really going to be a presentation in three parts. First, we're going to talk a little about my journey — my experiences, where I've come from in terms of technology. Then we'll talk about discovered wearable experiences in general: how to build them, how to assess them, how to judge them on their qualities. Then I'll turn it a little bit and we'll talk about healthcare and what this could mean more globally for healthcare and for wellness for individuals.
First, a little bit about me. My name is Stephen Fluin, I’m head of strategy and innovation as Alex said. My role is really sitting at the nexus between the business side of things and the technology side of things.
Whenever you're launching a product, whenever you're building a business, there's a lot you need to know from the business end in terms of how you address a market, how you bring products to market and launch them successfully, and how you iterate on those pieces — but then a lot of those decisions also need to be informed by technology.
There's a huge number of opportunities coming with mobile and with wearables, and being able to understand both sides of that coin makes the decisions you make on both sides much more effective. That's really where I try to sit.
I have actually worked with something like 600+ companies now, advising them on their strategy and on how to turn that strategy into effective tactics.
Just a little about MentorMate, the company I work for. We work with small, medium, and Fortune 500-size companies to help them imagine, design, and deliver mobile and web applications.
We have the fortunate job of helping companies solve their problems using technology and trying to launch cool products and cool ideas.
I'm going to start a little bit with my story. Think back to your first wearable device, or even your first smartwatch. This was mine — an old Casio watch with a lot of different cool features on it. You can set reminders and notifications. You can do calculations. You can even look up time zones. A lot of capabilities here, but for some reason it never caught on. My theory is that usability ends up being a lot of that.
If you flash forward a few years, I got my first cellphone during high school — if you remember those little Audiovox phones that were about that big and fit in your pocket, much smaller than any cellphone you'd get today.
Then I had the KRZR, which is the unpopular cousin of the RAZR. Then, a little bit further forward, we look to May of 2013: I had my first wearable. This is me wearing Google Glass.
I attended the Google I/O conference in 2012, and I heard about Glass when they were skydiving into the room — which was a really exciting experience — and I said, "Wow, I have to get a pair of these and experience it and see what it's all about."
For me, day one was all about learning what it is and how to use it. Oh my God, I was spending about half the day staring up into the upper right-hand corner of my screen. Then day two was all about learning not to use it — learning to let the technology get out of the way of my everyday interactions. Then, about two months after that, I really integrated it into my workflow.
I wear Glass every day. A lot of people say, "Do you really wear that?" If you see me at a restaurant, if you see me out and about at a Target doing grocery shopping, I'm probably wearing Glass. Part of the reason I do that is I'm turning myself into a robot so that you guys don't have to.
That's Short Circuit, if anybody remembers that really antique movie.
What's happened over the years, though, is I've tried giving up Glass ever since I got it, but it doesn't work. What ends up happening is I'll have a situation or an event — like going to the grocery store — and I'll say, "Well, I don't need Glass. I'm just going to the grocery store; why would I need a piece of technology?" But then I'll be trying to figure out whether or not these plantains are ripe, and I'll need to ask a friend.
How do you ask a friend? You've got to pull your phone out, take a picture, send a message or give them a call — or I can just take a picture and send it. I'll try and do this right now.
We'll give this a try. "Okay Glass, everyone smile. Take a picture." "Okay Glass, share this with Twitter." And then, "Okay Glass, add a caption: Wearables at #MobCon." It looks like it went out. You should see that shortly.
This has happened to me a huge number of times, in situations where I didn't really think it would. I was in the garage cleaning things out a few weeks ago, and I came upon a cache of jewel cases of games that I had stored over the years. I was like, "Well, I want to throw these away, but I also want to capture the moment, the memory."
I could have very easily pulled out a cellphone, but that's a little bit more hassle; it interrupts the workflow. Instead, as I was grabbing each piece of technology, I was able to just take a picture of it, then throw it away and move on.
I was really able to capture a lot of priceless memories.
Before I move on, I want to ask a question, how many people here have tried glass on? Okay, about half of you here.
How many people attended my session last year about wearable technology? Okay, quite a few fewer people.
One of the things I want to understand is how deeply you guys want me to go into the experience of wearing a piece of technology like a smartwatch or Google Glass, because understanding that experience is inherently important to understanding how to design for it.
If we look forward now — so that was 2013. Now it's 2014, and 2015 is coming. This is the beginning of the end for Google Glass. If you look back in time, we had a long, long period — thousands of years — of regular watches. Then we had this brief, glorious period where our wrists were free.
Then 2014 comes: we've got Fitbits, we've got Android Wear, we've got the Apple Watch coming, and now the reign of smartwatches is beginning.
There are a lot of different options when it comes to wearable technology, and they're going to keep changing as we keep loading ourselves up with technology. It's always going to be important to understand: why am I doing that? What's the value I'm going to get out of it?
Here's a really interesting picture. This is after I got my Moto 360, which is one of the first round smartwatches. I got a really poorly targeted email from Motorola saying, "Hey, you should buy a Moto 360," and I got that notification on my wrist, which was a little bit awkward.
It's not just Android Wear and it's not just Google Glass. There are a lot of technologies coming out — one of the most exciting, which I'm sure many of you here have heard about, is the Apple Watch. But it's not just glasses and it's not just watches. There's a huge realm of wearables that are going to end up being part of our lives.
One of the things that I Kickstarted and haven't yet received — we'll see if they ever ship — is a headband called Melon. What it is, is actually an EEG that measures focus.
It's not intended to be worn every day, but the idea is that you wear it for a day, for example, and figure out, "Okay, when do I focus best and when do I focus worst?" For some people, maybe it's first thing in the morning. I know a lot of people are not early risers — they need coffee, they need about four to six hours to get going. If you can understand that about yourself, and if you can actually chart it in a quantitative way, you can rearrange your schedule and your lifestyle so the things that deserve attention actually get attention.
There are a lot of cool ideas around wearables that we're going to continue to see expand over time.
I want to give just a little bit of insight into some of the experiences that you have while wearing Google glass and some of the thoughts that have been put into building a piece of technology like this.
Everyone has seen Glass. This is actually a headphone earbud that you can wear with it, which makes for a very cool experience because it's very hands-free — you control everything by gestures and voice — but it's not very practical due to the technical limitations built into Glass today.
The battery lasts about 16 hours, but it completely depends on your use. If you're filming all the time — which is a privacy concern a lot of people have: "Are you filming me the whole time?" No, I'm absolutely not, because my battery would last about half an hour.
That's not really a usable piece of technology if filming is part of my job. If we look at the companies we talk to — whether it's airlines or medical device companies — when they talk about adopting a technology like Google Glass, it's not ready yet, simply because the camera isn't good enough to be doing scanning on a dark night on an airport tarmac without a light, a better camera, and a lens of some sort.
We have to wait for the technology to catch up. It's the same thing with electric cars: if every electric car today had a 600-mile range, we'd probably see adoption a lot faster.
Flashing forward a little bit into the experience: this is one of the most common things I do with Glass, which is sending messages, often while driving. It's legal in the state of Minnesota as well as in California — I've checked. I've checked with a couple of police officer friends and they said, "Yeah, you should be fine."
What's really interesting about how they built this interaction is this: I have about 2,000 contacts in my Gmail account, and they're not trying to expose those 2,000 contacts, because speech recognition is not really good enough for that. What they do is try to figure out whom I'm most commonly trying to reach — and that's really your favorite contacts, as well as anyone you've recently conversed with.
This is a really nicely laid-out screen where you're able to see and preview the list of acceptable people you can reach out to, all with a voice command. Then you're able to send a message to them, or reply to a message, in that same way.
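That contact-narrowing idea can be sketched roughly like this — a hypothetical illustration that favors starred contacts and recent conversations rather than exposing a full 2,000-name list to speech recognition. The names, fields, and scoring here are my assumptions for the sake of the example, not Google's actual implementation:

```python
# Hypothetical sketch: narrow a large contact list to a small,
# voice-friendly set by favoring starred contacts and recent chats.
from datetime import datetime, timedelta

def rank_contacts(contacts, now, limit=10):
    """contacts: dicts with 'name', 'starred', 'last_contacted'."""
    def score(c):
        days_since = (now - c["last_contacted"]).days
        # Starred contacts sort first; within each group, most recent first.
        return (0 if c["starred"] else 1, days_since)
    return [c["name"] for c in sorted(contacts, key=score)[:limit]]

now = datetime(2014, 11, 1)
contacts = [
    {"name": "Alex", "starred": False, "last_contacted": now - timedelta(days=2)},
    {"name": "Sam",  "starred": True,  "last_contacted": now - timedelta(days=30)},
    {"name": "Pat",  "starred": False, "last_contacted": now - timedelta(days=1)},
]
print(rank_contacts(contacts, now, limit=2))  # ['Sam', 'Pat']
```

The point of the design is the same one the talk makes: a short, sorted list is something a voice interface can actually disambiguate.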
Once you've passed that, you're able to send a message or take a picture. This is a really cool idea: not just words, but — because there's a camera integrated here — I can send a picture. I find myself sending pictures about as often as I send text messages, because I can send a message very easily, but a picture really tells a story in a different way.
One of the things — so it does all voice transcription. You say, "Okay Glass, send a message to blank," and then you really just talk. What has happened to me, as a user of this technology, is that I've stopped reading the little preview. As you're speaking, the words in white are the ones that have been confirmed by the system, and the words in black are the ones it's still trying to figure out. When I said "hashtag MobCon," it took like six guesses before it got it right, and I didn't have to interact with it at all.
It's just looking at what's being said online and the semantics of the words I used to get there.
This is a case where I don't really have to think about what I'm saying. But compare that with a different use case, where I'm not sending a spontaneous, on-the-spot message — maybe I'm taking a note. That's something where I actually spend a lot more time crafting it, thinking about the way I'm saying it so that the speech recognition is higher quality. Then I actually read the preview, and if necessary, I'll take the time to swipe down if there's a mistake and repeat it. That gets at one of the things I'll talk about a little later, which is the reliability of these technologies: if you're trying to build a discoverable wearable experience, it has to be reliable, because if your product mistranslates or captures information in the wrong way, it's basically dead in the water.
The first time a user experiences that, they're going to give up. This is a great example with Evernote, where they just rely on Google — and Google does a pretty good job of taking care of things.
Now I want to expand that model from Google Glass down to smartwatches, and talk a little bit about some of the experiences that have happened to me. I do want to contrast this first: this is actually Windows 95 running on an Android Wear smartwatch, which I do not recommend you do.
If you look back at how Windows 95 — and really all of the desktop operating systems, and even smartphone operating systems today — work, it's a very personal action that gets a response. I choose to install an app. I go to the website, I go to the app store, and I say, "I want that app; give it to me now."
What ends up happening is that this model does not extend well into wearable experiences. If I'm asking a user, "Hey, go download my watch app and install it" — why would a user take the time to do that? You're going to lose a huge percentage of your market share.
I want to contrast that with what a lot of different companies have done — in particular Google, which is trying to improve the marketplace. They've got a lot of great examples.
This first example is the Google Play Music application. I'm a subscriber — I got a free subscription somewhere along the way — and it gives me access to all the music; I can trigger it with my voice. What ended up happening was, I was on my phone listening to music one day via headphones, and I looked down at my watch to see the time, and there, instantly, without me ever having to choose to install an app or configure anything, I could see a control.
It told me who was playing, in case I was listening to radio, and I was able to stop the music; if I swiped, I could use a gesture to say next song, next song, next song.
It’s a magical experience the first time you see that because you didn’t choose it. Just by the nature and the virtue of having these devices, you got all of those benefits.
Here's another example. I was using my phone again to stream Netflix to my Chromecast, so I'm watching Futurama. Again, I looked down at the watch to check the time after coming back from a snack break or something, and there were the controls again — all the information I need, so that I don't have to pull things out of my pocket.
It may not seem like a big deal to pull your phone out of your pocket, but in terms of how the human brain processes information, it can only hold about seven things at once, and the 14 seconds or so it takes to pull your phone out, turn it on, unlock it — if you have a passcode or Touch ID — launch an application, and make a choice uses up two or three of those little slots and pushes other things out of your brain.
If you can achieve these kind of interactions that are unexpected but then just kind of take care of themselves, you’re once again creating a magical experience.
Here's a very similar use case. I was talking about Evernote on Glass; here's me taking the same note, or a similar note, on my watch. Again, using a hotword, saying "take a note" — and the default action here is to automatically save it. I can cancel it, I can redo it, I can undo it in case it misinterprets what I'm saying, but the default case just takes care of almost everything.
This is not me — I don't have pants this nice — but I was on a biking journey in Europe a few months ago, and just like this gentleman, I kept pulling my phone out to check my stats. We were doing this bicycle ride through the mountains, so I kept going slower and faster, and I really wanted to see how far we had left, because it was like 15 miles and it was new for me.
What ended up happening is, I got near the end of the journey and I turned my wrist over to check the time again — and there was the same information. I didn't need to be dangerously, shakily pulling out my phone and unlocking it to look at that information. I could have just been looking at my wrist.
Directions are another great discovered experience. I think Apple is going to do a fantastic job with this, but what Google has works quite well. You're able both to initiate navigation and to send all the instructions coming from your phone to your watch — which you don't think you need until you put your phone back in your pocket. Beyond that initial trigger, the wearable experience is glanceable, and it's going to be faster and a better experience.
This is one of the weirdest apps I've used. Can anyone guess what app this is? This is actually the camera app. As soon as I launch the camera app on my phone, my watch gives me a remote shutter. It has two parts. First, I press the one button — this is the entire interface; it's a button. You press it, it gives you a three-second countdown, and then your phone takes the picture.
I can set the phone up if there's a group of us, for example, and say, "Okay, take a picture." Then you get the preview of that picture on your watch.
Just think about that group-picture scenario and how powerful that could be — not having to ask someone to take your picture, and not having to guess: take the picture, go get it, check it, do it again. Again, it has almost no UX — it's just a circle — but because I'm already taking a picture, because it's relevant to the context I'm in, it makes sense.
Now I want to turn these examples around and try to help you guys achieve some of these wearable experiences.
The first thing — the one thing I want everyone to take away from this — is: don't make the user choose you. If at all possible, your application should be launching itself; it should be attached to an existing application the user may already have installed. These things do not have keyboards — unless you're Microsoft, which actually did release a keyboard for Android Wear, an interesting choice on their part — so you're never going to ask a user to log in. You should automatically be coming in the door with that context, and then extending it — doing something useful and interesting with it.
The framework I want you to think through really has four parts. Any application, any experience you're developing should be short, simple, context-sensitive, and reliable.
This is a slide that Google put together, and I made a couple of modifications to it. What it tries to do is tell the story of an interaction with the phone.
Green is when the phone is sitting in your pocket, which is most of the time. Then, as we've heard a few times throughout the conference, 150 times a day you're pulling your phone out of your pocket, launching applications, and interacting with them.
If we look at how this can change with the advent of wearable technology like glass or smart watches or headbands, there’s a very interesting piece of technology called the Motorola Hint.
How many people here have heard of Hint?
It's a little bit like Google Glass, but — do you remember Bluetooth headsets? It's exclusively a Bluetooth speaker that sits entirely inside your ear. There's a really funny episode of The Office from about four years ago where they made up this idea of a Bluetooth speaker that fits entirely in your ear, and it was just a joke to them — but now Motorola has actually gone and built it.
What it does is extend the Glass experience in a way where you don't have the heads-up display; everything you do is via voice and audio. You can tap your ear and say, "Get directions," and it will read the directions to you verbally. When we look at the glanceability created by this experience, you're going to see the same moments again, but so much shorter.
What happens is there are order-of-magnitude shifts. We used to have to go to a desktop computer; that was an order of magnitude worse than a laptop. A laptop is much faster — it boots up, it's with you wherever you go. Then we had another shift from laptops down to cellphones, and what we're going to see is the last order-of-magnitude shift, down to wearable devices. It doesn't feel like a lot, but it's a big deal in the same way that going from a laptop to a phone was.
Going from a phone to a wearable is a big deal because it really lowers the barrier to engagement. You're going to make different choices in terms of how you interact with brands and products.
There's a talk going on right now from a company called InboxDollars, as well as a lot of other companies that are trying to monetize short, brief interactions. With InboxDollars, you get an email or a survey, and by responding to that survey you're giving them information and getting paid in exchange.
Imagine if that experience could go on a watch. I might do it a hundred times in a day, rather than sitting down for five minutes and doing it 20 times. That type of shift in the way you engage with your customers can have a huge impact on the engagement and the results you're going to see from that technology.
The second concept is simple. You'll note, once again — just like that camera application — there's not a lot of UI/UX here. There's basically one button on the screen at any given time, because a user of a watch is not going to spend a lot of time poking at it. There's not going to be a stylus for a watch — at least, I hope nobody tries to release a stylus for a watch.
A combination of swipes and taps is going to be a really easy way for users to interact. When you think, "Hey, I want to give my users a menu of options," you have to think about how to convert that menu into a small, consumable series of actions — and then really sort it. Use that context, use the information you've got about the user and the application, to make it approachable.
Context-sensitive: this is an application called Coffee Time. How many people here have a Starbucks card or use the mobile app? Okay, a few people. This is an application designed for those people. It does one thing and it does it really well: it takes the barcode you have in the Starbucks app and makes it available on your watch.
By saying, "Okay Google, start Coffee Time," it will pull up my Starbucks card on my watch, and then I'm able to scan it.
You see the same thing with the Delta app, where they try to surface your boarding pass automatically — although I have had a terrible experience with Delta. I don't know if anyone else has tried using it, but the first time it ever came up for me, it had the wrong ticket. That was not a fun experience — which is actually a great segue into the last piece: your experiences have to be reliable.
If you're Delta and you take an experience out into the wild where even 20% of your users fail to use the application successfully, you're not going to win. You have to go back and rebuild those experiences before people will give you another chance.
This is actually one of the more common screens on Google Glass today. I call it the sad cloud. What it means is that there was some sort of error along the way, and Glass wasn't able to reach the cloud to process your voice or display an action.
When you move to a wearable experience, it becomes much more public than just tapping on a phone. If I'm tapping on a phone, you don't really know what app I'm running or what I'm typing.
Whereas if I'm saying, "Okay Glass, take a picture," everyone in this room is aware of what I'm doing. If I have to say that multiple times, it's shame and embarrassment. I've been walking through a Target saying, "Okay Glass, take a note: hey, I should pick up that thing next time." If that fails, I'm not saying it again, because everyone around will look at me funny, and that would not be good.
This is another example of a wearable where reliability is really important. Does anyone recognize this? One — right. This is called the Myo band. The idea is that it sits on your arm and reads the electrical signals in your arm muscles; it can actually sense when you're about to flex a muscle, because before that muscle moves, your brain sends a signal down your arm.
What's cool about that is, if you can read muscle movement at a granular enough level, you can do cool things like give someone volume control of their speakers: just by twisting my wrist, I'm turning the volume up or down, or I can skip music forward and back, advance presentations, things like that. But I've had both the alpha version of this and now the production version, and it's got about a 30% error rate. That 30% error rate is completely unacceptable.
I cannot wear this band — I haven't worn it since day one. As an application development shop, we're not going to build applications for it. The same thing really applies to the Leap Motion, if anyone has seen it. It's a little sensor that sits in front of your computer — the coolest idea ever. In the video, they're playing Angry Birds with chopsticks, which is such a magical experience connecting the real world and the digital world. But if those experiences aren't going to work, if they're going to fail, it's never going to be commercially successful.
I want to turn now away from the general case and talk more specifically about how this can operate in the healthcare world. When I talk about healthcare, I'm primarily talking about it from a patient perspective. There are huge efficiencies to be gained by physicians, clinicians, practicing nurses, and so on, but those productivity gains are a little different from the ones on the consumer side, because choosing to use this as a consumer is a social and personal choice.
Whereas if you can mandate it in a corporate, business, or hospital environment, there's a completely different realm of possibilities. A hospital might have 30 pairs of these for their surgeons, so that they can get consults and real-time information about the actions they're performing.
For example, we did a project with a company — I'll talk about this a little bit later — where critical messaging can come in. Someone requests a consult from me; I don't have to pull out my phone to reference information about that patient — I can pull it up in real time via Glass. For a physician who's trying to interact with a thousand patients a day, that's a big deal, because the more patients you can interact with, and the better the information you have, the better the outcomes are going to be.
I'm going to talk through a few examples. The first is building a reminders experience for wearables. Imagine if your watch, or your pair of glasses, told you: "Take three units of insulin." That's the "short" — tell them exactly what they need to know. Then make it simple.
There are really only two actions a user should ever take: they took it now, or they didn't. That could be as simple as a swipe. You'll see that in any sort of reminder application, or in calendar notifications, things like that — even incoming phone calls.
Make it context-sensitive: base it on sensors. There's no way to know whether I should take three units of insulin unless you have data about the patient, about their history, about the trends inside their blood.
Did we actually detect high blood sugar? Is this the correct response for the right moment for this patient?
Lastly, use those screen-based interactions and make it reliable, because this is life and death. If you tell a patient to take three units of insulin and they don't need it, it could kill them.
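The context-sensitive and reliable pieces of that reminder could be sketched roughly like this. To be clear, this is an illustrative toy — the function name, the 4-hour lockout, and the glucose numbers are my assumptions for the sketch, not medical guidance or any real product's logic:

```python
# Hypothetical sketch: only surface an insulin reminder when sensor data
# supports it for this patient's own range. Thresholds are illustrative.
def should_remind(glucose_mg_dl, patient_high, recent_dose_hours):
    # Context sensitivity: is this reading actually high for this patient?
    if glucose_mg_dl <= patient_high:
        return False
    # Reliability: don't prompt for another dose taken too recently.
    if recent_dose_hours < 4:
        return False
    return True

print(should_remind(glucose_mg_dl=210, patient_high=180, recent_dose_hours=6))  # True
print(should_remind(glucose_mg_dl=150, patient_high=180, recent_dose_hours=6))  # False
```

The structure mirrors the framework: the device stays silent unless both the context and the safety check agree, so the user only ever sees the short, two-action prompt.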
Another example is exercise. This is much more on the wellness and fitness side of things. Imagine a golf application that you're using to track your scores, and your watch is automatically tracking what you do.
Imagine it says: "Your last swing was 3,280 pounds of force." That's a very short, concrete, to-the-point interaction. Simple: tap to delete; otherwise, it just saves — no interaction.
I could play an entire round of golf without having to think about what I'm doing, but then I get data that might help me become a better golfer. Make it context-sensitive: use geolocation. I should only see this if I'm on a golf course and actually swinging. Use that detection, and then save the information.
Then, the last piece: make it reliable. Don't miss a swing — and don't record false swings when I'm just waving to friends — because if the information isn't useful, once again, I'm not going to use your application. You're going to lose a huge opportunity.
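That swing-filtering idea could be sketched roughly like this — a toy combining the two signals the talk mentions, motion and geolocation. The force threshold and the event fields are invented for the illustration, not taken from any real golf app:

```python
# Hypothetical sketch: record a golf swing only when both the motion and
# the context agree, so a wave to a friend never shows up as a swing.
def is_real_swing(peak_force_lbs, on_golf_course):
    FORCE_THRESHOLD = 500  # illustrative cutoff separating waves from swings
    return on_golf_course and peak_force_lbs >= FORCE_THRESHOLD

events = [
    {"peak_force_lbs": 3280, "on_golf_course": True},   # real swing
    {"peak_force_lbs": 120,  "on_golf_course": True},   # waving to a friend
    {"peak_force_lbs": 2900, "on_golf_course": False},  # practicing at home
]
swings = [e for e in events if is_real_swing(e["peak_force_lbs"], e["on_golf_course"])]
print(len(swings))  # 1
```

Requiring both signals is what makes the default "just save it, no interaction" behavior safe: the user never has to clean up false positives.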
The last example I referred to a little bit is this idea of clinical intervention. Imagine if you could start a code blue just by saying it. Make it simple: if I'm a physician or a nurse that's called to the code blue, show me the steps one at a time.
At Google I/O they showed a lot of examples with cookbooks and things like that, but the exact same idea applies here. Figure out what you should be showing the user on the screen, show it to them, and make it simple: they just go next, next, or dismiss the entire interaction.
Make it context-sensitive. Figure out: "What are the relevant doctors I should be communicating with? What are the relevant symptoms we're detecting? What other sensors do we have in the body? What interventions have we performed?"
One of the hardest parts of a code blue scenario is that there's supposed to be a person sitting there whose entire job is to write down everything that's done. Imagine if we didn't need that person because we could collect that data with the appropriate sensors.
Lastly, pull in the patient history. Is this the fourth time this has happened? What did we do last time? Was it successful or unsuccessful? What do we need to know to be effective in the moment? Because with every second that passes, if I have to pull out a phone or grab a tablet off the cart, I'm going to be less effective.
Then, last, make it reliable. Don't necessarily ask someone to start this — maybe automatically trigger it if sensors go into a certain range for a certain time, and then ask for permission, or say, "Hey, we noticed a code blue and triggered it — do you want to cancel that?" Make sure the technology is doing the right thing in any situation.
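That "sensors in a certain range for a certain time" trigger could be sketched roughly like this. Again, purely illustrative: the vital sign, thresholds, and sustain window are assumptions made up for the example, not clinical criteria:

```python
# Hypothetical sketch: auto-trigger a code blue only when readings stay out
# of range for a sustained window, then let staff confirm or cancel rather
# than making them start it manually. All thresholds are illustrative.
def should_trigger_code_blue(heart_rates, low=30, high=180, sustain_samples=3):
    """heart_rates: most-recent-last list of readings, one per second."""
    recent = heart_rates[-sustain_samples:]
    if len(recent) < sustain_samples:
        return False  # not enough data to be confident
    return all(hr < low or hr > high for hr in recent)

print(should_trigger_code_blue([72, 70, 25, 24, 22]))  # True: sustained low
print(should_trigger_code_blue([72, 70, 25, 24, 90]))  # False: last reading recovered
```

The sustained-window check is the reliability piece: a single glitchy reading never fires the alarm, and the confirm-or-cancel prompt covers the cases the sensors get wrong.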
I want to talk a little bit about the future of healthcare and where we’re going with this. If you add everything up, we’re going to have more sensors and more types of sensors.
If you look at a common smartphone or wearable you can get right now, they've added a barometer. Does anyone know why they put a barometer in a cellphone? What they're actually doing is measuring pressure changes to figure out altitude, which gives them a better lock on your GPS.
It's sometimes very counterintuitive how we can use these sensors, and the more sensors we have, the more effective we're going to be.
I'm going to walk through a little story here. Imagine that you care about your health, so you're exercising regularly. You're wearing your Fitbit, and maybe you're hitting your 10,000 steps every day. That can be an application. That's going to make you more effective.
Then, using that data and extending on it, you're getting reminders about your own personalized goals and motivations: "Hey! I want to be healthy for my kids." Tell a patient, or a person who cares about wellness, why they're doing something. Instead of just "you hit 10,000 steps," show them a picture of their kid.
Show them, “Hey! This is why you’re doing it and you’re being successful.” Reward those experiences. Then, imagine you have visibility into your own progress.
Now, it's not just a moment in time. I'm saying, "Oh, wow! I've made the right choices 16 days in a row and I should keep that up." Then, imagine something starts changing. Imagine your blood pressure starts going outside of your normal range, and it's your normal range because everyone's range is going to be different. We're going to have data and information on that, and then we could build an application or an interaction that tracks it and informs the user, "Hey! Something's happening here, we should do something about it."
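The "your normal range, not everyone's" idea could be sketched as a simple personal-baseline check: flag a reading that falls outside this person's own history rather than a population-wide threshold. The function name, the two-standard-deviation cutoff, and the sample numbers are all illustrative assumptions:

```python
import statistics

def personal_range_alert(history, new_reading, k=2.0):
    """Hypothetical sketch of a personalized alert: compare a new
    reading against this individual's own baseline (mean +/- k
    standard deviations of their past readings) instead of a
    one-size-fits-all threshold."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    lower, upper = mean - k * stdev, mean + k * stdev
    # True means the reading is outside this person's normal range
    return not (lower <= new_reading <= upper)
```

For someone whose systolic readings hover around 120, a 140 would be flagged; for someone who normally runs at 140, it wouldn't be, which is exactly the point about everyone's range being different.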
Imagine I see this and I say, "I should go to the doctor." I schedule an appointment, but then I'm using that same technology to share that information with the relevant physicians who are going to be taking a look at it. Then, the day comes for my appointment and my watch shows me, "Hey! I know where the appointment is. You should leave by 6pm to get there on time." That's great information. It's going to make me actually go, maybe.
Then, I sit down with my doctor and I pull up my Evernote, whether it's Glass, whether it's Wear, and pull out the information: "Hey! There are four things I've been wanting to talk to you about. I've captured these over the last six months." I know personally, if I don't have that information, I get to the doctor and I'm nervous and I forget to ask about things, and I say, "Oh, okay, well, I guess I'm fine, I'll just leave then." But if you can say, "Hey! Yeah, I had a really weird pain that came up after I went jogging," maybe you end up having a conversation about that.
Lastly, imagine if your doctor could make a better diagnosis. What ends up happening, in my perspective, and this is not a medically validated opinion, is that I think doctors are guessing about 80% of the time. You come in, you talk with them for about five minutes, you give them a really terrible description of what happened, and then they say, "Yep! This is what's happening. This is what you should do about it."
Imagine if we could have more data and more information and say, "Hey! Here's my last three years of blood pressure. Yeah, it was going crazy that one time, but it's not really a problem." They can make better choices based on that and make better decisions, and the idea is that using technology and this data, we're going to move away from that guessing universe into a universe where we're actually making good choices.
Then, finally, imagine if we could do the follow up. “Hey! We found out that my blood pressure has been going up because I haven’t been exercising enough. Maybe we tweak my goals, maybe we tweak my motivation.”
All those other pieces and steps along the way, they automatically update with that new information and the new choices that I’ve made about my own healthcare.
When I think about healthcare, there are really three pieces. First is the inputs to your body: everything from what you eat, to the medicines you take, to the air you breathe.
I, for example, have a scale called a Withings scale, and that scale measures the CO2 in my house. You can see when it goes up, and it gives you a little warning: "Hey! We are at 3,000 ppm of CO2." Anything over a thousand is associated with fogginess and mental slowness.
Second is the activities: what am I doing? This is where the fitness trackers come into play hugely, but it can also be surgical interventions.
The last piece is the processes: the internal processes that happen within our body and what's going on there. Our bodies are not perfect machines. Something is going to go wrong along the way, but the idea is that if you can combine all these things into one, via this personalized big data, then we're going to be more able to take control of them and keep them all in sync.
With that I want to thank everyone for coming. I want to turn it over a little bit to questions.
Audience: Would you expect wearables to drive innovation in implanted diagnostics?
Stephen Fluin: Sure. I have a personal rule on implanted diagnostics: I will only use the 10th version of an implantable. The first nine? No good.
I personally will wait on any sort of implantable diagnostics but I think it’s a very exciting space that we have to try because every human being is different and we have to get that information in order to make positive choices.
I find it interesting that 100 years ago, people would die of "old age," but that was just the name they gave it because they didn't know what was going on. Then we had "cancer" for 20 or 30 years, where it was just cancer because we didn't really know what was going on.
Now, we have 70 different types of cancer, and as more and more information comes out about the human body, we're getting better and better at reacting to those things, providing treatment, taking care of patients, and providing them a better quality of life, which is huge. I think at some point, there's only so much data you can gather from outside the body, and you have to turn inward.
Audience: [So the two part, the number one, it’s personalized big data. Where does that reside and absolutely I’ve … 00:36:56]
Stephen Fluin: Great question. It’s interesting because I don’t think we know where it’s going to reside right now because every single company wants to own that data.
You saw this on a much smaller scale with Fitbit and Jawbone and Microsoft, where they all have their own fitness network, and then they build one-by-one connections to all the others, because they all want to be the sole source of data but they want to get everyone else's data.
What’s ended up happening is it actually created a very democratized environment where I can connect all of these different systems and networks any way I want and then you have huge players like Apple and Google that are trying to succeed here.
The thing I’ll say about their efforts is that they have materially failed in my opinion.
Apple Health, which I'm very excited about … I think long term it has a lot of great prospects, but today it's a failure. My iPhone sits in my pocket, counting my steps and pulling in my weight data from [inaudible 00:38:06], but that's it.
I couldn't access it online; I couldn't access it on any other device. It was, for security reasons, completely locked down and completely useless to me.
Whereas if you compare that to Fitbit, I actually get dashboards, I'm seeing information, I'm able to make personal choices.
As for whether the healthcare companies will be able to build a platform, or the providers themselves are going to be able to own that data, I don't think that's going to end up happening.
I think it's going to end up being that the user has a relationship with one or many of these, and ultimately, even though their data is residing with and stored by another company, they're going to be the ones in control, because of this platform capability, because we're all rushing in there. It's a little bit of supply and demand.
We all want it, but no company has all the answers, which gives us all the power. Does that answer both parts of the question?
Audience: [inaudible 00:39:02]
Stephen Fluin: Do you have a better answer? Yeah?
Audience: [So with regards to the platform … 00:39:11]
Stephen Fluin: I think that’s already happening today.
There are insurance companies that, whether or not they make it clear what's happening, will look at your Fitbit data. They want you to share your Fitbit data with them.
I don't think they can own it, though, because they can't take care of your whole healthy lifestyle, everything from the inputs and the foods you eat to the interventions and the activities you're doing. They can't own that entire experience, and so some level of portability is going to be maintained.
Just flash forward five years and think about how Tesla collects data around what your [Cardots 00:40:17] That's much more appealing to a consumer than Progressive Snapshot.
I would much rather have the data Tesla gives me about when I drive, how I drive, and where I drive than what Progressive gives me, because Progressive offers me a $100 discount, or whatever amount it is, but Tesla gives me something actually interesting, something I can make choices based on.
Thank you all for coming so much.