
The New Physical Web

October 2014 was a big month for technology announcements. At the pace our industry is moving, every month is a big month for announcements like these. With all the innovation going on, it’s easy to miss the quieter technologies that could fundamentally change the way we interact with computing. One technology worth a second look is the Physical Web.

Started as a side project at Google, but run independently of the broader organization, the Physical Web is a set of ideas and standards for how human beings and technology should interact.

The Way It Should Be


Story 1 – Retail
You walk into a Starbucks, and on your phone’s lock screen you see a notification: “New physical devices available”. You tap the notification and get a list of the interactions this store offers:

  • Jukebox Control – Pick a song from our approved library and add it to the queue
  • Digital Order – Place your order digitally and pay for it using your Starbucks card
  • App of the Day – Download the Starbucks app or song of the day

Each of these interactions is currently available in Starbucks stores around the country (except Jukebox Control). The way you discover them is through signage, little cards next to the cash register, and word of mouth. When you discover one of these capabilities, you most often have to search for it on your phone, type in a difficult URL, or scan a QR code.

Each of these interactions creates friction for customers today. These experiences aren’t packaged and seamless, and they often involve a fair amount of work on the part of the customer: typing, signing in, downloading applications, and so on.

With the Physical Web, Starbucks would deploy one or many Bluetooth beacons that broadcast the available physical services in a standardized way that phones can recognize and interact with.

This simple ability for my phone to connect me to these services without downloading a location- or business-specific application, while still protecting my privacy and security, would create magical experiences.

Story 2 – Home Control
More and more devices are being connected to the internet every day, from toaster ovens to thermostats and everything in between. Today, for each device you add to your home, you need a custom application that speaks a custom protocol.

Tomorrow, after you plug your devices in, they may just show up on your phone automatically. Security will remain a concern: you will likely still need to type in an ID or press a button on the device. Devices would have the flexibility to be locked to only you, shared with others in your home, or even shared with visitors. The overall interaction would still be much simpler than what we have today.

How it Works


The idea behind the Physical Web is that instead of all the custom programming and protocols that have been built up around the Internet of Things, we can return to the same technology that made the web interoperable and standardized: the URL.

By using beacon technology, homes, stores, and offices can set up any number of web-based services that users already know how to interact with. This, combined with the latest in HTML5-based application development, gives us another solution to the problem of discovery and relevance.
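Under the hood, what a beacon broadcasts is just a compressed URL. As a rough sketch (assuming the scheme-prefix and expansion tables published in the later Eddystone-URL specification; the original UriBeacon format works similarly), common substrings collapse to single bytes so the URL fits inside a tiny advertising packet:

```java
import java.io.ByteArrayOutputStream;

// Sketch of URL compression in the UriBeacon / Eddystone-URL style:
// the scheme and common domain substrings collapse to single bytes so
// the URL fits inside a ~31-byte BLE advertising packet.
public class UrlBeacon {
    private static final String[] SCHEMES = {
        "http://www.", "https://www.", "http://", "https://"
    };
    private static final String[] EXPANSIONS = {
        ".com/", ".org/", ".edu/", ".net/", ".info/", ".biz/", ".gov/",
        ".com", ".org", ".edu", ".net", ".info", ".biz", ".gov"
    };

    public static byte[] encode(String url) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int scheme = -1;
        for (int i = 0; i < SCHEMES.length; i++) {
            // prefer the longest matching scheme prefix
            if (url.startsWith(SCHEMES[i])
                    && (scheme < 0 || SCHEMES[i].length() > SCHEMES[scheme].length())) {
                scheme = i;
            }
        }
        if (scheme < 0) throw new IllegalArgumentException("unsupported scheme: " + url);
        out.write(scheme);  // whole scheme becomes one byte
        String rest = url.substring(SCHEMES[scheme].length());
        int pos = 0;
        while (pos < rest.length()) {
            int code = -1;
            for (int i = 0; i < EXPANSIONS.length; i++) {
                // prefer the longest matching expansion at this position
                if (rest.startsWith(EXPANSIONS[i], pos)
                        && (code < 0 || EXPANSIONS[i].length() > EXPANSIONS[code].length())) {
                    code = i;
                }
            }
            if (code >= 0) {
                out.write(code);                 // known substring -> one byte
                pos += EXPANSIONS[code].length();
            } else {
                out.write(rest.charAt(pos++));   // plain ASCII character
            }
        }
        return out.toByteArray();
    }

    public static void main(String[] args) {
        // "https://www." -> 0x01, ".com/" -> 0x00; 16 bytes total
        byte[] frame = encode("https://www.starbucks.com/order");
        System.out.println(frame.length + " bytes");
    }
}
```

The phone reverses the table lookup to reconstruct the URL, then fetches page metadata to show the user a human-readable list of nearby services.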

If a device manufacturer such as Nest wanted to, they could still build and ship a native application. Android has built-in capabilities for native applications to interpret and act on URLs. This would allow them to offer a superior user experience for frequent use cases where discoverability isn’t a problem. The nice thing about the combination of the Physical Web and native applications would be that the Physical Web could help users download and install the right applications.

Google I/O Leak

While this has been a slow, sleepy project without a huge pace of innovation, there have been minor leaks suggesting that Google I/O this year will take advantage of the Physical Web to offer interactions throughout the conference. If this happens, it will likely happen in conjunction with standardized support for the Physical Web throughout Android, so we may see Physical Web devices and services as soon as this year.


The Importance of Fashion

Fashion and technology have long been frenemies. For decades, technologists and computer scientists have, in some ways, been shunned by society. Only in the last 10 years or so, with the rise of Apple and the consumerization of technology, have these worlds been able to intermingle. This ceasefire was great for consumers and device manufacturers until 2014, but with the explosion of wearable technologies, fashion is now an integral part of launching a technology product.

Look at this week’s announcement of the Apple Watch: Apple has effectively launched a single technology product with a number of different fashion choices. Instead of basing the price on processing power and capability, they are asking users to pay hundreds (or thousands) of dollars for different-looking metals and bands.

  • Apple Watch Sport: $349 to $399
  • Apple Watch (Standard): $549 to $599
  • Apple Watch Edition: $10,000 to $17,000

If you look at the incremental value that each of these pieces of technology provides, the $200 price difference between Sport and Standard upgrades your material from aluminum to steel. I firmly believe that if you put a bunch of people in a room without knowledge of the Apple Watch and asked them how much they would pay for their device to be steel instead of aluminum, none of them would go as high as $200.

This is a bridge no other company has been able to cross. When Lenovo or Samsung sells a device, they charge higher prices for more CPU speed or more RAM, but Apple has cracked the code of getting people to pay more for the fashion and social statement of a device. This decouples Apple’s costs from the price users pay, and will allow them to dramatically increase their profit margins.

Even if Apple doesn’t sell more than 1,000 Apple Watch Edition devices, people will be more likely to purchase the Apple Watch Standard because “it’s cheaper”, even though it’s nearly the price of a new phone.




How the App Store Changed Software Development

Long ago, in order to install apps on a computer, you had to go to a brick-and-mortar store and peruse aisles of boxed floppy disks or CD-ROMs. Consumers had to decipher an arcane list of hardware and system requirements to determine whether the application they wanted to purchase would run on the computer they had at home. Return policies were stringent and unforgiving. If you went home and tried to install an application only to find out your version of Windows was too old or you didn’t have the necessary sound card, you were out fifty bucks and two hair-pulling hours of your time.


In 2008, Apple and Google introduced specialized app stores for installing software on their smartphone operating systems. Three years later, Apple brought the same distribution model to OS X. These app stores revolutionized software distribution. Consumers are only presented with apps that will actually run on their smartphone or computer, they can read reviews before installing, and updates are installed automatically. Buying software is as easy as tapping a finger on a screen or trackpad.

The ease with which a user can install or uninstall software has changed how consumers perceive it. For software people would once pay $70 for at Best Buy, they are now only willing to spend single digits. One reviewer of a popular video game writes, “Was great but not worth more than a dollar” for a game that cost $5. People have concluded that ease of acquisition is proportional to ease of development. They assume that since the opportunity cost of a software purchase is lower, then so too is the development cost.


As apps have gotten smarter and more sophisticated, users have come to expect more, but they are less inclined to part with money up front. In order to fund development, software makers have devised new and novel ways of monetizing their work. In addition to in-app advertisements, we now have the freemium pricing strategy. Freemium is somewhat analogous to the shareware of the pre- and early-internet days. Applications are free to download, but come with a limited set of features; advanced functionality can be unlocked by paying.

Game developers restrict access to levels beyond the first few until a user unlocks them for a small fee. In free-to-play games such as Clash of Clans, players can buy in-game gems to speed development or purchase special items. Apps like Pocket and Evernote employ freemium subscriptions: users have access to all the functionality of the application, but there are limits. Pocket allows users to save and read as many items as they want for free, but for $49 a year, users get a permanent archive of the content they store, as well as the ability to tag and search their content. Evernote Premium increases the amount of storage space for notes and pictures, allows users to collaborate on notes, and enables the app to work offline.
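Mechanically, a freemium gate is usually nothing more than a capability check wrapped around a feature. A minimal sketch, with hypothetical names that don’t correspond to any real app’s API:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical freemium gate: free users hit a cap, paying users don't.
// The class name and limit are illustrative, not any real app's API.
public class ReadingList {
    private static final int FREE_LIMIT = 3;   // arbitrary demo cap
    private final boolean premium;
    private final Set<String> saved = new HashSet<>();

    public ReadingList(boolean premium) {
        this.premium = premium;
    }

    // Returns false once the free tier is exhausted; that branch is
    // where a real app would show its upgrade prompt.
    public boolean save(String url) {
        if (!premium && saved.size() >= FREE_LIMIT) {
            return false;
        }
        saved.add(url);
        return true;
    }
}
```

The design point is that the paid feature already ships inside the binary; payment only flips the flag that the gate consults.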

The app stores have also helped customers and developers communicate better about software, though in a somewhat broken way: reviews. Users employ the review system not just to convey the merits or defects of an app, but also to request features, report bugs, and ask for support. This is somewhat frustrating for developers. On one hand, they get feedback about their work quickly and from a wide range of users without having to seek it out; on the other, users will try to bargain their rating for their pet feature request. Other times users will rate an application poorly because they can’t find a feature, or because they are having an issue with some aspect of the application that isn’t a bug or a crash. To mitigate these issues, Google wisely allows application developers to respond to reviews. Not only does this contextualize a negative review so that it doesn’t unduly influence other potential users, it also permits developers to reach out directly to users who are having problems and provide them with support.

App stores have also had an effect on how software is developed and released. With the rise of agile software development methodologies and product development strategies like the minimum viable product, software makers release their applications with reduced feature sets and in unfinalized states. When Ulysses III, a text editor for OS X, was released to the OS X App Store in April of 2013, many users complained about functionality missing from the previous major version. This version was a complete rewrite of the seminal text editor, and in order to ship by their deadline the developers needed to leave out features that users had come to rely on. The makers of Ulysses III quickly iterated and enhanced the editor over the next year, bringing back old features and adding new ones. Because app stores greatly reduce the cost of distribution, developers can get away with this. Often applications will be intentionally brought to market in an unfinished state, and users will guide future development through feature requests and usage patterns.

In a way, users of software have become both beta testers and product developers. They try unproven features and provide feedback on usability or bugs. In the Android ecosystem, users also provide developers with data on phones and device configurations that the developer may not have had at her disposal when creating the application. By requesting features and making suggestions, users also guide the design of the product.

Though it is not without its problems, the rise of the app store has been a boon for both consumers and developers. This new ecosystem provides enhanced communication, faster development cycles, and stronger products.


A Quick Look at Cross Platform Philosophy

When evaluating or designing an approach for reaching users across a wide variety of platforms and devices, MentorMate applies our core mantra of ‘Business Needs First, From a User’s Perspective’. This means that the chosen solution should support the outcomes and end state desired by the business in a way that supports and engages users effectively.

Across the industry, a significant amount of literature and time is dedicated to the notion of cross-platform frameworks. These frameworks attempt to reduce development time and complexity by abstracting shared components across two or more mobile platforms, and optionally the web platform. For the purposes of this document, only iOS and Android are considered.

Cordova

Cordova is a framework that acts as a layer between HTML5-based applications and native capabilities. With Cordova, the native capabilities of mobile platforms, such as contacts integration, GPS, the filesystem, and other sensors, are exposed to the application via a series of JavaScript APIs.

Cordova applications are built by taking a client-side web (no server side code is supported directly within Cordova) application of one or more pages and enriching it with native capabilities. Cordova applications can be mixed with native functionality and screens, but this use case is less often desirable.

Pros

If the application you are developing is an offline client-side responsive web application, you have already written most of the code that will allow your “application” to function. This means that a well written web application utilizing a framework such as AngularJS can be leveraged as-is to begin offering offline mobile application experiences.

One of the best aspects of Cordova is that it allows developers experienced in the Web platform to begin building mobile experiences that take advantage of mobile paradigms and capabilities without significant retraining.

Cons

Cordova has two main downsides. First, all of your UI will be web-like. Your input fields will look like web forms rather than native input boxes, transitions and animations will render and perform like those found on a website (or worse), and standardized components such as the Android action bar or iOS tab navigation cannot be leveraged by your application. By not matching the standards of iOS and Android, your users will need a small amount of additional time to adapt to your custom experience, or may feel that the experience is less modern. This can be mitigated somewhat through creative experience design and/or the use of cross-platform design frameworks such as Material Design, but your application will still feel less like a smooth, shiny native application.

The second core downside is that Cordova applications are limited by the Cordova framework, which is always going to be several releases or features behind mainline Android and iOS. A great example: there is still no official way to build Android Wear or Apple WatchKit applications with Cordova. There are third-party libraries (including one we have written), but they don’t have the focus or attention that the native tools have.

There are other downsides with Cordova for Android, such as the widely varying versions of the WebView that ship across Android devices. In all Android versions before 4.4, the WebView was bundled with the OS and immediately out of date upon shipping, meaning new features of HTML5 are inaccessible. This can be worked around by shipping Android applications with a custom-built WebView called Crosswalk that uses the latest version of Chromium, but this introduces another dependency and more complexity.

Xamarin

Xamarin is a framework, indirectly supported by Microsoft, that allows developers to build iOS, Android, and OS X applications using C#. Applications can be designed with both a data and services layer and a user interface written entirely in C#, or developers can blend C# and native functionality.

Pros

Unlike Cordova applications, Xamarin applications are compiled rather than interpreted at runtime, which gives them better performance. Additionally, Xamarin uses reflection to expose native APIs automatically, meaning that Xamarin developers do not need to wait for the framework to update in order to gain access to new or device-specific native APIs.

Cons

Xamarin is a new platform. It does not have the extensive ecosystem of application frameworks that HTML5 enjoys. Tools like Xamarin.Forms, which allow developers to build shared UI code that renders well across platforms, are still in their infancy and can create significant barriers to building top-notch experiences. User experience elements such as gestures and animations can be difficult to achieve due to the fundamentally different ways the platforms implement them natively.

It should also be acknowledged that because Xamarin ties so closely to the platforms, many things that one might expect to be cross-platform may not be. One example of this is Wearable interfaces. If you are building wearable experiences using Xamarin, you must still build them twice to target both iOS and Android.

According to Xamarin, best-of-breed applications are able to share around 70% of their code between platforms, so expect to be working on platform-specific behaviors as well as overcoming unexpected issues with the shared code.

Finally, to get started or to continue working in Xamarin, each developer needs a Xamarin license, which serves as an additional barrier to entry for new team members.

Native Development

Native development of an application for each target platform may seem to be the most costly approach, but with fewer risks and compromises required, the cost can sometimes be less than that of an equivalent cross-platform application.

Summary

Across both of these cross-platform frameworks, there may be an overall reduction in development time, in exchange for limiting the types of interactions and experiences that can be built to a “lowest common denominator”; that said, these platforms are evolving every day. This must also be weighed against the additional developer skill sets implied and required by the use of an additional framework. For a Cordova solution to work successfully, the development team must be familiar with both Blink and WebKit capabilities and peculiarities (including their mobile components), as well as the best practices for building mobile applications. Xamarin developers need to understand the iOS and Android platforms and capabilities, and have extensive experience building C# applications.

Finally, packaging and designing an application should be done intentionally for each platform. Each store has separate submission and review processes. Screenshots and marketing text may be largely similar, but users hate seeing images for the wrong platform in either store, and mismatched assets may even risk the rejection of your application.


The Impact of App Platform Selection on Development Costs

MentorMate was recently selected for an interview with Clutch, the online authority on mobile app development companies, as part of a series of expert interviews on mobile app platform selection and the cost to develop an app.

Clutch sat down with Chief Strategy and Innovation Officer Stephen Fluin to pick his brain on topics such as how price quotes are determined, what the biggest cost drivers are, and whether iOS or Android costs more to develop:

In general, we treat them about the same. When it comes to developing for iOS versus Android, the net total, end of the day cost is going to be about the same. What we say internally is that the differences between one developer to the next are going to be greater than the differences between iOS and Android. Developers tend not to be fungible and all equivalent. Some are faster than others. A fast Android developer is going to do an Android app faster than a weak iOS developer and vice versa. A fast iOS developer is going to do an iOS app faster than a weak Android developer.

The interview also dove into different mobile app platform options, and what factors inform the platform decision:

Some of the first questions we ask when onboarding a client are, “Who are your customers?” “Who do you want your customers to be?” Then, depending on the demographic, the income bracket, the industry, and whether it’ll be used personally or professionally, we’re going to see different device needs.

For example, in the engineering industry, we see almost exclusively Android. Whereas in the medical space, we see almost exclusively iOS. We tend to see higher income individuals preferring iOS. We tend to see lower income individuals preferring Android. But, it’s also the number of people you’re trying to hit. So, if you’re trying to hit 60 percent or 70 percent market share in terms of mobile users, at that point you probably need both iOS and Android. It’s always going to come down to what the users are holding in their hands.

Read the full interview with MentorMate’s Stephen Fluin at Clutch.


Android Lollipop – A Developer’s Perspective

A Brave New World

The future of Android is here and it is bright indeed! The new Android 5.0 Lollipop is one of the most fundamental updates since the introduction of the OS, bringing a bold new visual style and a ton of performance enhancements. These changes provide new tools, but also lead to new challenges for Android developers. By mastering the components and guidelines of the new OS, developers will be able to create new exciting apps and reinvigorate old ones.

Android Runtime (ART)

One of the major changes in Android 5.0 is the new ART runtime that replaces Dalvik as the platform default. And what a change it is! The new runtime brings 64-bit CPU support, an entirely new memory allocator, ahead-of-time compilation, and improved garbage collection. Users can expect up to a 3x performance improvement in their applications, as demonstrated by the graph below from Google I/O 2014.

[Graph: ART vs. Dalvik benchmark performance, Google I/O 2014]

But how does this affect Android developers? Most apps should work just fine without any changes under ART. However, developers should be mindful that they can run into some issues if their app uses Java Native Interface (JNI) to run C/C++ code, or development tools that generate non-standard code, such as some obfuscators. Google has provided some helpful guidelines concerning these cases.

Material Design

Android 5.0 Lollipop brings one of the biggest visual redesigns the OS has seen to date. Material Design is based on what Google calls a “unifying theory of a rationalized space and a system of motion”, and its goal is to establish a consistent visual theme across multiple platforms, including mobile, desktop, and wearables.


The first step to creating a Material Design app is applying the material theme. To support older versions, a separate styles file will be required. Layouts and components in the application will have to follow specific Material Design Guidelines. These encompass the fundamentals of Material Design and both developers and designers must acquire a firm understanding of them before beginning development.

Of course the Material Design look and feel cannot be achieved without some new widgets. These include the new RecyclerView, CardView and more.

RecyclerView is a more advanced and more flexible version of one of the most used UI widgets in Android applications – the ListView. The power of the RecyclerView comes from the Layout Manager, which dictates how the items in the list will be ordered. This empowers developers to easily create imaginative and nonlinear item arrangements. Experienced Android developers know how to increase a ListView’s performance by using a pattern called ViewHolder. This pattern consists of a simple class that holds the references to the UI components for each row in the ListView. Now the new RecyclerView fully integrates this pattern and makes it mandatory.
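The caching idea behind ViewHolder can be shown without any Android classes at all. In this framework-free sketch, the types are stand-ins for the real APIs, and a counter stands in for the cost of findViewById:

```java
import java.util.HashMap;
import java.util.Map;

// Framework-free sketch of the ViewHolder pattern. On Android the
// expensive call being cached is findViewById(); here a lookup counter
// makes the saving visible.
public class ViewHolderDemo {
    static class RowView {
        final Map<String, StringBuilder> widgets = new HashMap<>();
        int lookups = 0;

        RowView() { widgets.put("title", new StringBuilder()); }

        StringBuilder findViewById(String id) {
            lookups++;                  // the work we want to do only once
            return widgets.get(id);
        }
    }

    static class ViewHolder {
        final StringBuilder title;      // resolved once, reused on every bind

        ViewHolder(RowView row) { this.title = row.findViewById("title"); }
    }

    // Binding new data touches only the cached reference, no lookup.
    static void bind(ViewHolder holder, String text) {
        holder.title.setLength(0);
        holder.title.append(text);
    }

    public static void main(String[] args) {
        RowView row = new RowView();
        ViewHolder holder = new ViewHolder(row);
        for (int i = 0; i < 100; i++) bind(holder, "item " + i);
        System.out.println("lookups: " + row.lookups);  // 1, not 100
    }
}
```

RecyclerView bakes exactly this split into its API: onCreateViewHolder runs the lookups once per recycled view, and onBindViewHolder only touches the cached references as rows scroll by.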

CardView is a UI component that shows information inside cards and has a consistent look across the platform. Developers can customize the view’s corners, elevation and more.

Another new feature in Material Design is that UI components can now cast shadows. This is achieved by specifying an elevation value for a view, which creates the visual effect of views floating in a 3D plane, while in fact they remain two-dimensional.

Animating transitions between activities is now even easier with the new Activity Transitions API. Designated shared elements between two views will automatically be animated when the user switches between activities.

Power Saving (Project Volta)

Project Volta was born from Google’s observation that roughly every second of unnecessary active time (for example, the processor waking up to do a task it could do later) results in a two-minute reduction in stand-by time. In order to remedy this, the company takes an approach it calls “lazy first”: a principle that encourages developers to schedule non-urgent tasks for the last possible moment. This can be achieved with the new JobScheduler API, which lets developers optimize battery life by defining jobs for the system to run asynchronously, at a later time, or under specified conditions, such as when the device is charging, idle, or connected to a network.
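The lazy-first principle fits in a few lines. This toy scheduler mimics the idea, not the actual JobScheduler API: it holds non-urgent work until a favorable condition, here charging, becomes true:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Toy "lazy first" scheduler: defer non-urgent jobs until the device is
// charging, then drain them. The real JobScheduler expresses the same
// constraints declaratively; this only illustrates the principle.
public class LazyScheduler {
    private final Queue<Runnable> pending = new ArrayDeque<>();
    private boolean charging = false;

    public void schedule(Runnable job) {
        if (charging) {
            job.run();          // condition already met: run immediately
        } else {
            pending.add(job);   // otherwise hold it, letting the CPU sleep
        }
    }

    public void onChargingChanged(boolean nowCharging) {
        charging = nowCharging;
        while (charging && !pending.isEmpty()) {
            pending.poll().run();  // catch up on deferred work in one batch
        }
    }
}
```

With the real API, the equivalent constraint is declared up front, e.g. building a JobInfo with JobInfo.Builder and setRequiresCharging(true), then handing it to the system’s JobScheduler service to run when conditions allow.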

New tools provide developers with improved ways of monitoring how their app utilizes the device’s battery. A simple way to obtain detailed power usage statistics since the device was last charged is to run the new dumpsys batterystats command: adb shell dumpsys batterystats --charged

Loading the dumpsys output into Google’s nifty Battery Historian tool will generate an HTML visualization of the data. The statistics include a history of battery-related events, approximate power use per UID and system component, global device statistics, and more.

New Notifications

With the new Android 5.0 come new and improved notifications. These can now appear on the lockscreen, so users can read them at a glance, without unlocking their phones. Users can manage notification priority and toggle whether they want them to appear on the lockscreen from the Settings menu. Developers now have more control over how their notifications are displayed by setting their visibility property to Private, Public or Secret.

Support Library Additions

Widespread adoption of Lollipop is still a ways off, but there are components that developers can use right now in order to bring their old apps up to speed. The new version of the Android Support Library comes with a substantial list of additions, including the new Toolbar widget, RecyclerView, CardView, and others.

Developers can start by letting their themes extend Theme.AppCompat which includes everything they need to support Material Design in previous versions. Using a base theme and overriding the final theme in the values-v21 folder is advised in order to retain full functionality for devices running Lollipop.
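As a sketch of that arrangement (resource names like AppTheme and @color/primary are placeholders), the base file declares everything AppCompat can back-port, and the values-v21 copy layers on Lollipop-only attributes:

```xml
<!-- res/values/styles.xml: base theme, applied on all API levels -->
<resources>
    <style name="AppTheme" parent="Theme.AppCompat.Light">
        <!-- AppCompat attributes (no android: prefix) back-port to older versions -->
        <item name="colorPrimary">@color/primary</item>
    </style>
</resources>
```

```xml
<!-- res/values-v21/styles.xml: overrides the base on Lollipop and newer -->
<resources>
    <style name="AppTheme" parent="Theme.AppCompat.Light">
        <item name="colorPrimary">@color/primary</item>
        <!-- framework attribute only available on API 21+ -->
        <item name="android:windowActivityTransitions">true</item>
    </style>
</resources>
```

Devices below API 21 never read the values-v21 file, so they silently keep the base theme while Lollipop devices pick up the richer one.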

The new Toolbar widget is a generalization of the Action Bar pattern that gives much more control and flexibility. It can be placed as a view in the hierarchy just like any other, making it easier to interleave with the rest of the views, become animated, or react to scroll events. The Toolbar can also be set to represent the Activity’s action bar, meaning that standard options menu actions will be displayed within it.

In Conclusion

With Android Lollipop, Google continues its efforts to innovate and grow its mobile platform on multiple fronts. To keep up, developers must change the way they think about, design, and develop their applications in order to fascinate users and engage them on multiple levels.


Writing User Stories for Mobile Apps – Adding Context

I saw an interesting presentation at MobCon 2014 from Josh Bernoff over at Forrester Research on the concept of the Mobile Moment. This is the idea that there are two additional dimensions that you need to account for in your mobile apps: time and space. That is, when are your users using the app and where are they using it?

The Mobile Moment concept struck me as quite powerful: knowing where and when users are interacting with your app can help you craft more powerful, intuitive features. Mr. Bernoff provides examples for Starbucks in which the mobile experience might change depending on whether you’re away from a Starbucks, standing outside one, standing in line, at the register, or even past the purchase. Josh is now even talking about Micro Moments, in which the user interacts with your app (or maybe only with a push notification from your app) for just a few seconds. This got me thinking about the mobile apps I’ve been working on, and it made me realize I might be leaving some important details out of my user stories.

[Comic: Dilbert on user stories]

In the agile development world, user stories are king. They describe who will be using a feature, what they will be doing, and why they are doing it. Typically, they go like this:

As a [user role], I want to [perform some function] so that [some value is realized]

And this works really well as a starting point of discussion when reviewing product backlog items. The development team knows three dimensions of the requirement:

  1. Who will be using the feature
  2. What the feature is
  3. A notional idea of why the feature is important

Knowing these dimensions of the requirement helps guide the team to make the right decisions when there is ambiguity and helps guide the discussion of the story’s acceptance criteria (the how). Other forms of requirement development (you know, those requirements that start with “The SYSTEM shall…”) can often leave these dimensions out.

So the question then is: How do we account for the two additional dimensions represented by the Mobile Moment? The tried-and-true template doesn’t have any placeholders for where or when. Just who, what, and why. The easiest way is just to modify the template, adding the where and the when:

As a [user role], [in a certain mobile moment], I want to [perform some function] so that [some value is realized]

Let’s imagine that you are working on requirements for an urban touring app, and you want your users to be able to see information about different locations on a tour. You might write a story like this:

As a tourist I want to pull up Wikipedia articles about each stop on my urban tour so that I can learn more about each location.


The product development team would discuss that story and they may end up developing a feature in the app in which the user can pull up a map of the tour, choose their current location on the tour, and then tap a button to learn more about that location. While this works, it isn’t ideal. The tourist has to go through several steps to get to the information she wants.

How could capturing the mobile moment in the story help? Let’s re-write the story to include the mobile moment:

As a tourist on an urban tour, at a stop on the tour, I want to pull up a Wikipedia article about the current stop so that I can learn more about it.

The product development team would discuss this story and decide that the mobile device can detect that the tourist is on a tour, and where on the tour she is, so they may develop a feature that launches the Wikipedia entry for the current location when the app opens. They have captured the tourist’s mobile moment, providing her exactly what she wanted, when she wanted it.

The problem with including the Mobile Moment in the user story itself is twofold. First, writing user stories that coherently capture who, what, and why is hard enough. It often takes a while to format the sentence in a way that both makes sense and captures the intent. Adding the where and when factors complicates things even more and might frustrate the product owner trying to capture the requirement in the user story format. Second, not every story has a mobile moment. In fact, some apps, such as games, will have very few Mobile Moments.

Another approach to capturing the mobile moment is to place it in the body of the user story, near where the acceptance criteria reside:

As a tourist I want to pull up Wikipedia articles about each stop on my urban tour so that I can learn more about each location.

Mobile Moment: While on an urban tour, at a tour site
Acceptance Criteria:

  1. Item 1
  2. Item 2

This method keeps the Mobile Moment in the developers’ minds as they are reviewing the story and helps drive the discussion, but doesn’t get in the way of the user story itself. It may not be as effective as including the moment in the story template, but it also doesn’t get in the way if the story doesn’t have an obvious Mobile Moment.

A third approach is to place the Mobile Moment in the acceptance criteria themselves. I don’t like this option because it intermixes the important where and when with other run-of-the-mill acceptance criteria (i.e. the how). This diminishes their importance and top-of-mind relevance.

Some apps, such as Uber and Lyft, are defined almost entirely by their mobile moments, some have few or none at all (Angry Birds?), and most fit somewhere in-between. If you don’t have a way to capture these moments in your user stories, your development team won’t know about them and you’ll end up missing out on important opportunities. Find a way that works best for you to represent the where and the when in your user stories, and watch your development team turn those mobile moments into killer features.


Achieving App Store Success – 5 Case Studies

It is pretty rare for developers to reveal how well their projects did. For the most part, the number of downloads and the quality of user reviews are all you get when researching an app. This is a shame, because so much is unknown about “making it” in the App Store. We meet with a lot of indie developers, entrepreneurs, small/medium companies, and even large corporations, and they ask us really hard questions like:

“How much is this going to cost?”
“Is this a good idea?”
“Will this be successful?”

The answer is almost always “it depends,” because every situation is unique and there are so many variables. Thankfully, a small percentage of brave developers have been kind enough to share their apps’ performance. My hope in presenting these five case studies is to reveal just how unpredictable the App Store can be.

Recently the creators of the smash hit Monument Valley released a tell-all infographic, exposing both the cost and revenue of the game. Let’s start there.

Monument Valley

Developer: ustwogames (8 core developers)
Time/Cost: 14 months, $852,000
Total Sales: $8,858,625
Release Date: April 2014
Price: $2.99

monument valley screens

Monument Valley is a critically acclaimed and well-received puzzle game. I remember anticipating the release of this game ever since I first saw its amazing art style. The game comes from ustwogames, a subdivision of ustwo, a pretty substantial international digital user interface design company. The games division had a total of three games under its belt before diving into Monument Valley. That development and marketing experience certainly helped.

monument valley revenue

Everything seemed to go as expected for them, as Monument Valley was an absolute smash with excellent music, sound, design, gameplay and length. Further solidifying its success is how difficult it is to clone or even mimic. That doesn’t mean it wasn’t a risk, however; the game cost nearly a million dollars to make. I wonder how many $1 million games are not seeing this kind of success… I am willing to bet it’s a lot.

Unread

Developer: Jared Sinclair
Time/Cost: 60-80 hours a week for 7 months.
Total Sales: $42,000
Release Date: February 2014
Price: $2.99

Unread is a premium RSS reader sold for $2.99. The app had a strong opening weekend, and it was supported by a multitude of prominent bloggers. Sales quickly started tapering off, however; even being featured in the App Store didn’t stop the decline.

unread-revenue

Despite all of these advantages, Unread still only earned $42K in sales ($21K after taxes and expenses) and is on a course that doesn’t promise much growth. I conclude from all this that anyone who wants to make a satisfying living as an independent app developer should seriously consider only building apps based on sustainable revenue models. I suspect this means consumable in-app purchases, like those in Candy Crush Saga or Clash of Clans, or recurring subscription charges, like those in WhatsApp.

I have read the reviews of the app, and they are very positive from both critics and users. Personally, I feel that a $2.99 starting price point (the app is currently free with IAP) is a bit steep for a service that is commonly free, in a saturated niche. Despite that, it is very strange that being featured on the App Store didn’t result in a huge amount of sales, a pattern that will become common as we investigate the other apps in this round-up.

Author Takeaways:

I worked on Unread seven days a week, at almost any hour of the day. I think the quality and polish of Version 1.0 is due to all that extra effort, but it was physically and emotionally taxing. It’s not a sustainable way to live, and I don’t recommend it.

Sustainable revenue must come from other sources than the original app purchase, either from consumable in-app purchases, or from recurring subscriptions.

Don’t launch your paid-up-front app at a reduced price. Demand for your app will likely never be higher again. Price it accordingly.

Overcast

Developer: Marco Arment
Time/Cost: 15 months full time
Total Sales: $234,477
Release Date: July 2014
Price: Free with $4.99 IAP
Relevant Blog: Overcast Sales Numbers

overcast_screens_play_podcast

Overcast is a free podcasting app with a $4.99 IAP to unlock all the premium features. The most notable is Smart Speed, which subtly removes pauses from podcasts in hopes of saving you listening time. User reviews confirm that Smart Speed is an excellent feature that does save valuable listening time.

Marco claims that his expenses are only about $750 a month and that the single biggest cost was spending $12,000 on the trademark for the name Overcast. He does not list any marketing expenses, which I find interesting. A late 2014 release of a podcasting app doesn’t seem like an organically buzz-worthy event…

It had a perfect launch that far exceeded my expectations — it was the best launch an indie developer could possibly hope for, with tons of great press, a mid-level App Store feature, and thousands of tweets on launch day.

Marco does have a weighty Twitter following of 78k users, and Overcast was reviewed by Macworld and MacStories and featured on Daring Fireball. It is possible all of this exposure came through Marco’s personal relationships, but its value cannot be overstated. If Marco weren’t so well-connected, he would have had to buy that exposure, and it would have cost upwards of $10,000 to reach even close to that number of users.

overcast-revenue

Author Takeaways

Overall, I’m very satisfied with Overcast’s finances so far. It’s not setting the world on fire, but it’s making good money. For most people, the App Store won’t be a lottery windfall, but making a decent living is within reach for many.

After the self-employment penalties in taxes and benefits, I’m probably coming in under what I could get at a good full-time job in the city, but I don’t have to actually work for someone else on something I don’t care about. I can work in my nice home office, drink my fussy coffee, take a nap after lunch if I want to, and be present for my family as my kid grows up. That’s my definition of success.

Trainyard

Developer: Matt Rix
Time/Cost: 7 months part-time development
Total Sales: Undisclosed, but likely over a million dollars.
Release Date: May 2010
Price: $2.99
Relevant Blog: The Story So Far

Trainyard is a puzzle game developed by Matt Rix in his spare time. The game received limited publicity despite the best efforts of its creator. Submissions to major app review sites such as TouchArcade and SlideToPlay fell on deaf ears, and the app was not featured in the App Store. Trainyard sold 2,338 copies in its first 4 months, generating $3,200 in revenue at a $1.99 price point. User reviews were very positive.

Matt increased the price of the game to $2.99 and released a free Trainyard Lite version. The free version was reviewed on a popular Italian blog, shooting both the free and paid versions to #1 in the Italian App Store. Soon after, Apple featured the paid version of Trainyard in the US App Store. Sales increased to massive numbers.

trainyard-rev-growth

After a calculated gamble to drop the price to $0.99, Trainyard overtook Angry Birds, taking the #2 spot in the US App Store for a few days, then #3 for a few more days, before dropping off the top ten.

Although Matt didn’t expose his exact sales numbers, he did publish a breakdown of the value of your App Store rank at a $0.99 price point, after Apple’s 30% royalty fee (as of 2010):

  • Rank 300 overall – $1000/day
  • Rank 25 overall – $2500/day
  • Rank 10 overall – $5000/day
  • Rank 5 overall – $15000/day
  • Rank 2 overall – $30000/day
  • Rank 1 overall – $40-50k/day
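These anchor points can be turned into a rough estimator for intermediate ranks. The sketch below is my own extrapolation, not Matt’s: it interpolates linearly on log(rank) between his published data points, and it assumes $45k/day as the midpoint of his $40-50k figure for rank 1.

```python
import math

# Matt Rix's published anchors (overall US rank -> approx. $/day, 2010,
# $0.99 price, after Apple's 30% cut). Rank 1 uses the midpoint of $40-50k.
RANK_REVENUE = [
    (1, 45000),
    (2, 30000),
    (5, 15000),
    (10, 5000),
    (25, 2500),
    (300, 1000),
]

def estimate_daily_revenue(rank: int) -> float:
    """Rough $/day estimate; interpolates on log(rank) between anchors."""
    if rank <= 1:
        return float(RANK_REVENUE[0][1])
    if rank >= 300:
        return float(RANK_REVENUE[-1][1])
    for (r1, v1), (r2, v2) in zip(RANK_REVENUE, RANK_REVENUE[1:]):
        if r1 <= rank <= r2:
            # Ranks span orders of magnitude, so interpolate on log(rank).
            t = (math.log(rank) - math.log(r1)) / (math.log(r2) - math.log(r1))
            return v1 + t * (v2 - v1)
    return float(RANK_REVENUE[-1][1])
```

By this estimate, a steady overall rank of 50 would land between the rank-25 and rank-300 anchors, at roughly $2,000/day. Treat it as a back-of-the-envelope guide only; the real rank-to-revenue curve shifts constantly.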

Flappy Bird

Developer: Dong Nguyen
Time/Cost: 2-3 days
Total Sales: Speculated $1,400,000 ($50k a day for 28 Days)
Release Date: May 2013
Price: Free
Relevant Blog: Dong Nguyen Says Flappy Bird Is Gone Forever Because It Was An Addictive Product

The phenomenon that is Flappy Bird is certainly worth talking about. Contrary to other project-of-passion apps, Flappy Bird was created in just a weekend as a test project. Even moderate success would have been a miracle for a game like this. Everything about it is unremarkable, from the graphics to the gameplay.

Flappy Bird was actually released in May and went totally unnoticed, unmarketed and unreviewed: just another random pick-up-and-play game littering the floor. I can’t find any reason why it went viral half a year later, but it certainly did.

Everyone downloaded it, reported on it, blogged about it … hated it? It was reviewed poorly by both critics and users. The Flappy Bird Metacritic page isn’t something to be proud of, with a failing critic grade of 52/100 and a miserable user score of 3.7/10.

Despite all of that, everyone needed to play it. So much so that after the developer pulled the app because “he couldn’t take it anymore,” people sold pre-loaded iPhones for… tens of thousands of dollars.


Don’t get me wrong, Flappy Bird isn’t a terrible game. It had plenty going for it:

  • Free
  • Cute retro graphics with a Mario reference
  • Gameplay takes only seconds to understand
  • Super fast play sessions
  • Fun to compete with friends
  • “Bird” in the title to leech some of that Angry Birds/Tiny Wings recognition

But none of those are unique; there are literally hundreds of games that fit those criteria. Why did this one explode? Nobody knows for sure.

What It Means

I am hesitant to draw conclusions from such a small and unfocused sample size. Each app is unique, each released at a different time, under different circumstances and to a different target audience. It is best to take each case study anecdotally and try to learn general lessons from each one.

One important takeaway however is that you probably are not going to make the next Flappy Bird. Getting an app into the app store shouldn’t be treated like a gold rush. It’s easy to look at a simplistic game that is doing well and know you can do it better. Maybe you can, but attaining viral success is nothing more than winning the internet lottery. You can’t make that your goal.


How to Finance Your Mobile App

Before You Begin

The first thing you should do is create a business plan. The process of writing your business plan will force you to answer all of the questions a potential investor will probably ask you. The thoroughness of your answers will be a big signal to your level of preparedness and competence. Here are just a few of the topics that should be covered in your business plan:

  • How will your app make money? Monthly fee, initial download price, in-app purchases, demo/unlock model, advertising?
  • What is the expected revenue?
  • What is the expected profit?
  • What is the total cost of creating the app and all related architecture?
  • What are the recurring costs?
  • What is the project timeline? When will the project be done?
  • Do you have any competition? What are your key differentiators? What are your plans to deal with them?
  • How will you promote your app?

If you need help writing a business plan, there is an extensive guide on entrepreneur.com with plenty of examples, outlines and suggestions. If a person really needs an in-depth look at your project, the business plan is the perfect solution.


For people who don’t want to invest the time to read a business plan, have a marketing site. Your website is a bit like a portfolio. It grants you the opportunity to explain your concept quickly and succinctly. In addition to presenting an engaging demonstration of your application, consider showing the qualifications of your team, statistics you have gathered about your audience or market, comparisons to the competition, or even a careers section to attract some more talent.


Bootstrapping

If you are fortunate enough to be in the position to fund your own app, do it! There is nothing better than having complete control of your project and every decision throughout the entire process. Unfortunately, most people are not prepared to comfortably sink $100,000+ into a new venture. However, you should use all of the resources you have to take your app as far as possible.

Risky? Absolutely. But, how do you expect other people to take a chance on your idea if you won’t? Develop a one-year plan and start putting away a portion of your earnings every month into your new start-up fund. Use that time to create whatever resources you will need once you launch. Of course you need to be responsible, but bank loans and credit cards are another way to help stretch your personal equity to the max.

Crowdfunding

Submitting your app to a crowdfunding website is a great first step toward getting some financial backing. Crowdfunding has many benefits. Primarily, you get to maintain full ownership of your idea, app or company. Additionally, it allows you to leverage a large social media audience or a viral message to generate funding. Small payments are welcome, which opens the door for many backers. However, without a large social media following it might be difficult to get the exposure you need to really take off.

Crowdfunding sites follow one of two models. The “All or Nothing” model grants funding to projects if and only if the pre-determined funding goal is reached or exceeded. Alternatively, the “Keep it All” model always grants the money raised at the end of the funding period regardless of how much was pledged.
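The difference between the two models comes down to a single conditional at the end of the funding period. A minimal sketch (the function and model names are my own, not any platform’s API, and platform fees are ignored):

```python
def campaign_payout(model: str, pledged: float, goal: float) -> float:
    """Funds granted to the creator when the campaign closes (fees ignored)."""
    if model == "all_or_nothing":
        # Funding is granted only if the goal is reached or exceeded.
        return pledged if pledged >= goal else 0.0
    if model == "keep_it_all":
        # The creator keeps whatever was pledged, goal or not.
        return pledged
    raise ValueError(f"unknown model: {model}")
```

Under “all or nothing,” a $9,000 raise against a $10,000 goal nets the creator nothing; under “keep it all,” the creator receives the $9,000 (minus whatever fees the platform actually charges).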

Creating a successful crowdfunding campaign is as much luck as science, especially if you are not already bringing a strong social media following to the table. Sometimes you may actually need some funding to fund your crowdfunding campaign:

It turns out that running a successful crowdfunding campaign costs money. In fact, Canary raised the seed money to ensure its Indiegogo campaign went smoothly. The funding helped bring on developers and solidify its product’s design in time for the campaign’s presentation. (Why VCs Use Crowdfunding To Make Sure Their Hardware Startups Pay Out, Fast Company)

There are many crowdfunding platforms to choose from; Wikipedia has a comprehensive list along with their focuses and funding models.

Attempting a crowdfunding campaign is essentially risk-free: either you get funded, or nothing happens and you are free to try again. Look no further than the “Coolest” campaign for an example. What was a failure in December 2013 was a $13 million success in August 2014.


Contests & Awards

Keep your ear to the ground regarding any competitions that award money for a mobile app or idea. This is certainly a right-place-right-time opportunity, but it does exist. Some contests simply award money, while others only offer exposure and recognition; both rewards are worthwhile. Here is a short list of mobile app contests I am aware of:

  • Google’s Android Developer Contest

    One of the most prestigious and lucrative awards, the Android Developer Challenge awarded approximately $2 million in 2010. Unfortunately, there is no official word on when we will have an ADC3 … if ever.

  • Apple’s Design Award

    This very prestigious award will certainly get you plenty of recognition, as well as a feature on the App Store front page.

  • Xammy Awards

    Xamarin gives awards for best consumer app, best enterprise app, best emerging device app, best game, and a developers’ choice award. The only catch is that you need to use Xamarin for development.


Ask around your local community for similar local events. Chances are there is one within driving distance. At the very least, you will meet some people who might help you get funding in another way. If you know of any other contests not listed, please let me know in the comment section.

Investor Funding


Getting funded by investors is often the first thing entrepreneurs or inventors think of (after using up their own resources). It isn’t a bad idea, but it does have some drawbacks. The biggest issue is that you often need to give up some company control in order to get the financing required. As on Shark Tank, you are actually selling a stake in your company. On the other hand, an investor’s business savvy might be helpful in making your product successful.

  • Friends and Family

    Ask all of your friends and family if they know of any investors you can meet with. It is very possible someone knows a guy who knows a guy who is interested. You might even discover that one of your friends or family members is interested themselves.

  • Attend Events

    Attend conferences and trade shows that are relevant to your app; be an exhibitor if possible. Show off your idea to everyone and seek out investors.

  • Social Media

    Use social media to send out some feelers. Search LinkedIn and Twitter for investors and make a connection.

  • Websites

    There are websites that specialize in connecting investors and inventors, such as AngelList, Flash Funders or Gust. These sites can be a great way to connect with like-minded investors, since you can see what projects they have funded previously. If you have a medical-themed idea, for example, you can scout for investors in the medical field who can best understand your concept.

Once you find an investor you still need to convince him or her to invest. I spoke with a few entrepreneurs about their experiences with the start-up scene and interacting with investors.

Tony Kramer is not only the entrepreneur behind Spark Starter (MobDemo 2014 Runner-up) and multiple other start-ups, he is also an investor. He has seen this business from both perspectives and he informed me that having a product you can show, and proof that users are engaged is by far the best indicator of a healthy start-up.

Even if you don’t have the knowledge or funding to develop your product, try to get a prototype off the ground. Even a simplistic web version of your app can start generating user interest. Investors want to see where the revenue will come from once the product goes live. They want to see the community backing the idea.

Dug Nichols from Kidizen (MobDemo 2013 Runner-up) adds that investors like to see people with skin in the game, people who are going out on a limb for their idea. They like a scrapper who has quit his job and committed all the way.

“Why should they invest money in your idea if you haven’t given it everything you have? They like knowing someone is out there fighting for their life.”

Dug also adds how helpful it can be to show a track record of success. If you don’t have any successes to show yet, Dug suggests finding someone who does. Bringing someone experienced on as a partner or consultant will give investors confidence that their money will not be mismanaged.

Persistence

Do not be discouraged if your first round of funding isn’t as successful as you hoped. There is plenty of chance involved and being at the right place at the right time has a lot to do with “making it.”

This XOXO presentation by Darius Kazemi is relevant to this topic. The presentation begins as a parody of a successful start-up story, but instead of launching a product or service, his goal was to win the lottery. It’s funny and clever, but also surprisingly poignant. He points out the similarities between buying lottery tickets and trying to make it as a tech start-up.

It might not be the most optimistic story, but it is a sobering look at just how much luck is involved. The positive takeaway is that not making it right away doesn’t mean your idea isn’t viable. Be persistent!


Discovered Wearable Experiences and the Future of Healthcare

MentorMate CSIO Stephen Fluin gave this presentation at MobCon 2014, a national mobile technology conference. In his presentation, Stephen describes his most memorable experiences with wearable devices and makes predictions about the future of this technology, specifically in the field of healthcare. We are now making this presentation available to you in its entirety.

Transcription:

Stephen Fluin: Welcome everybody. Seems like lunch was pretty good, because we got a lot of stragglers here, more every minute.

I’ll get started and as people file in, well, things will ramp up here. This is really going to be a presentation about three parts. Where first, we’re going to talk a little about my journey, my experiences, where I’ve come from in terms of technology, then, we’ll talk about just discovered wearable experiences in general. How to build them, how to assess them, how to judge them on their qualities and then I’ll turn it a little bit and we’ll talk about healthcare and what this could mean more globally for healthcare and for wellness for individuals.

First, a little bit about me. My name is Stephen Fluin, I’m head of strategy and innovation as Alex said. My role is really sitting at the nexus between the business side of things and the technology side of things.

Whenever you’re launching a product, whenever you’re building a business, there’s a lot that you need to know both from the business end terms of how do you address a market, how do you bring products and launch them successfully, how do you iterate on those pieces but then also a lot of those decisions need to be informed by technology.

There’s a huge number of opportunities that are coming with mobile and with wearable and actually being able to understand both sides of that coin makes the decisions that you’re going to make on both sides much more effective and so that’s really where I try and sit.

I have actually worked with something like 600+ companies now, trying to help them advise on their strategy and advise how to turn that strategy into effective tactics.

Just a little about MentorMate, the company I work for. We work with small, medium and Fortune 500 size companies to help them imagine, design and deliver mobile and web applications.

We have the fortunate job of helping companies solve their problems using technology and trying to launch cool products and cool ideas.

I’m going to start a little bit with my story. If you think back to your first wearable device, or even your first smart watch. This was mine. This was an old Casio watch which has a lot of different cool features on it. You can set reminders or notifications. You can do calculations. You can even look up what time zones there are. A lot of capabilities here but for some reason this never caught on. My theory is that usability ends up being a lot of that.

If you flash forward a few years, then I got my first cellphone during high school, if you remember these little Audiovox phones that were about that big and fit in your pocket, much smaller than any cellphone you’ve got today.

Then, I had the Krzr, which is the unpopular cousin of the Razr. Then, a little bit further forward, we look towards May of 2013; I had my first wearable. This is me wearing Google glass.

I participated in the Google I/O conference in 2012, and I heard about glass when they were skydiving into the room, which was a really exciting experience, and I said, “Wow! I have to get a pair of these and try it and experience it and see what it’s all about”.

For me, day 1 was all about learning: what it is, how to use it. Oh my God, I was spending about half the day staring up at the upper right-hand corner of my screen. Then day 2 was all about learning not to use it, learning to let the technology get out of the way of my everyday interactions. Then, it was about two months after that that it really integrated into my workflow.

I wear glass every day. A lot of people say, “Do you really wear that?” If you see me at a restaurant, if you see me out and about at Target doing grocery shopping, I’m probably wearing glass. Part of the reason I do that is I’m turning myself into a robot so that you guys don’t have to.

This is Short Circuit, if anybody remembers that really antique movie.

What’s happened over the years though is I’ve tried giving up glass ever since I got it for the first time but it doesn’t work because what ends up happening is, I will have a situation or event like going to the grocery store, I’ll say, “Well, I don’t need glass.” I’m just going to the grocery store, why would I need a piece of technology? But then, I’ll be trying to figure out whether or not these plantains are ripe and I’ll need to ask a friend.

How do you ask a friend? You got to pull your phone out, take a picture send a message or give him a call or I can just take a picture and send it. I’ll try and do this right now.

We’ll give this a try. Okay glass, everyone smile. Take a picture. Okay glass, share this with Twitter. And then, okay glass, add a caption: “Wearables at #MobCon”. It looks like it went out. You should see that shortly.

This has happened to me a huge number of times, in situations where I didn’t really think it would happen. I was in the garage cleaning things out a few weeks ago and I ended up coming upon a cache of jewel cases of games [00:05:31] that I had stored over the years. I was like, “Well, I want to throw these away but I also want to capture the moment, the memory”.

I could have very easily pulled out a cellphone, but that’s a little bit more hassle; it interrupts the workflow. Instead, I was able to just … as I was grabbing each piece of technology, take a picture of it, and then throw it away and move on.

I was really able to capture a lot of priceless memories.

Before I move on, I want to ask a question, how many people here have tried glass on? Okay, about half of you here.

How many people attended my session last year about wearable technology? Okay, quite a fewer number of people.

One of the things that I want to understand is how deeply you guys want me to go into the experience of wearing a piece of technology like a smart watch or Google glass, because understanding that experience is inherently important to understanding how to design for it.

If we look forward now, so that was 2013. Now, it’s 2014, 2015 is coming. This is the beginning of the end for Google glass. If you look back in time, we had a long, long period where we had regular watches, thousands of years. Then, we had this brief, glorious period where our wrists were free.

Then, 2014 is coming: we got Fitbits, we got android wear, we got the Apple watch coming, and now the reign of smart watches is beginning.

There’s a lot of different options when it comes to wearable technology that are going to continue to change and continue to load ourselves up with technology. It’s always going to be important to understand why am I doing that, what’s the value I’m going to be getting out of that.

Here’s a really interesting picture. This is after I got my Moto 360, which is one of the first round smart watches. I got a really poorly targeted email from Motorola saying, “Hey! You should buy a Moto 360.” And I got this notification on my wrist, which was a little bit awkward.

There’s not just android wear and there’s not just Google glass. There’s a lot of technologies that are coming out. One of the most exciting ones, which I am sure many of you here have heard about, is the Apple watch. But it’s not just glasses and it’s not just watches; there’s a huge realm of wearables that are going to end up being a part of our lives.

One of the things that I Kickstarted and haven’t yet received (we’ll see if they ever ship) is a headband called Melon. What it is, is actually an EEG that measures focus.

This is not intended to be worn every day, but the idea is that you wear this for a day, for example, and you figure out, “Okay, when do I focus best and when do I focus worst?”, because for some people, maybe it’s first thing in the morning. I know a lot of people are not early risers; they need coffee, they need about four to six hours to get going. If you can understand that about yourself, and if you can actually chart that in a quantitative way, you can rearrange your schedule and rearrange your lifestyle so the things that deserve attention actually get attention.

There’s a lot of cool ideas around wearables that we’re going to continue to see expand over time.

I want to give just a little bit of insight into some of the experiences that you have while wearing Google glass and some of the thoughts that have been put into building a piece of technology like this.

Everyone has seen glass. This is actually a headphone earbud that you can wear, which … it’s a very cool experience because it’s very hands-free; you listen to everything by gestures and voice. But it’s not very practical due to the technical limitations that have been built into glass today.

The battery lasts about 16 hours, but it completely depends on your use. Filming all the time is a privacy concern a lot of people have: “Are you filming me the whole time?” No! I’m absolutely not, because my battery would last about half an hour.

That’s not really a usable piece of technology if filming is part of my job. If we look at the companies that we talk to, whether it’s airlines or medical device companies, when they talk about adopting a technology like Google glass, it’s not ready yet, just because the camera is not good enough to be doing scanning on a dark night, on an airport tarmac, without a light and a better camera and a lens of some sort and it’s …

We have to wait for the technology to catch up. It’s the same thing with electric cars. If every electric car today had a 600 mile range, we’d probably see adoption a lot faster.

Flash forwarding a little bit into the experience. This is one of the most common things that I do with glass, which is sending messages, often while driving. It’s legal in the State of Minnesota as well as in California; I’ve checked. I’ve checked with a couple of police officer friends and they said, “Yeah, you should be fine.”

What's really interesting about how they built this interaction is this: I have about 2,000 contacts in my Gmail account, and they're not trying to expose all 2,000 of them, because speech recognition is not really good enough for that. What they do instead is figure out who I'm most commonly trying to reach, which is really your favorite contacts as well as anyone you've recently conversed with.

It's a really nicely laid-out screen where you're able to see and preview the list of acceptable people you can reach, all with a voice command. Then you're able to send a message to them, or reply to a message, in that same way.
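As a rough sketch of the kind of ranking being described here, favorites first and then the most recently contacted people, you could do something like the following. The data model and the ordering rule are my assumptions for illustration, not Glass's actual implementation:

```python
# Hypothetical sketch: pick a short, speech-friendly list of contacts,
# favorites first, then most recently contacted.

def shortlist_contacts(contacts, limit=6):
    """contacts: list of dicts with 'name', 'favorite' (bool),
    and 'last_contacted' (larger = more recent)."""
    ranked = sorted(
        contacts,
        # Tuples sort favorites (True) ahead of non-favorites,
        # and more recent contacts ahead of older ones.
        key=lambda c: (c["favorite"], c["last_contacted"]),
        reverse=True,
    )
    return [c["name"] for c in ranked[:limit]]

contacts = [
    {"name": "Alice", "favorite": True,  "last_contacted": 10},
    {"name": "Bob",   "favorite": False, "last_contacted": 99},
    {"name": "Carol", "favorite": True,  "last_contacted": 50},
    {"name": "Dave",  "favorite": False, "last_contacted": 5},
]
print(shortlist_contacts(contacts, limit=3))  # → ['Carol', 'Alice', 'Bob']
```

Keeping the list this short is what makes the voice interaction workable: the recognizer only has to disambiguate among a handful of likely names instead of 2,000.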

Once you've picked a person, you're able to send a message or take a picture. This is a really cool idea: not just words, but because there's a camera integrated here, I can send a picture. I find myself sending pictures about as often as I send text messages, because while sending a message is easy, a picture really tells a story in a different way.

One more thing: it does all voice transcription. You say, "Okay Glass, send a message to blank," and then you just talk. What has happened to me as a user of this technology is that I stop reading the little preview. As you're speaking, the words in white are the ones that have been confirmed by the system, and the words in black are the ones it's still trying to figure out. When I said "hashtag MobCon," it took about six guesses before it got it right, and I didn't have to interact with it at all.

It's just looking at what is being said online and the semantics of the words I used to get there.

This is a case where I don't really have to think about what I'm saying. Compare that with a different use case, where I'm not sending a spontaneous, on-the-spot message but maybe taking a note. That's something where I actually spend a lot more time crafting it, thinking about the way I'm saying it so that the speech recognition is higher quality. Then I actually read the preview, and if necessary I'll take the time to swipe down if there's a mistake and repeat it. That gets into one of the things I'll talk about a little later, which is the reliability of these technologies. If you're trying to build a discoverable wearable experience, it has to be reliable, because if your product mistranslates or captures information in the wrong way, it's basically dead in the water.

The first time a user experiences that, they're going to give up. This is a great example with Evernote, where they're just relying on Google, and Google does a pretty good job of taking care of things.

Now I want to expand that model from Google Glass down to smart watches, and talk a little bit about some of the experiences that ended up happening to me. I do want to contrast this first: this is actually Windows 95 running on an Android Wear smart watch, which I do not recommend you do.

If you look back at how Windows 95 and really all of the desktop operating systems work, and even smartphone operating systems today, it's a very personal action that gets a response. I choose to install an app. I go to the website, I go to the app store, and I say, "I want that app, give it to me now."

This model does not extend well into wearable experiences. If I'm asking a user, "Hey, go download my watch app and install it," why would they take the time to do that? You're going to lose a huge percentage of your potential users.

I want to contrast that with what a lot of different companies have done. Google in particular is trying to improve the marketplace, and they've got a lot of great examples.

This first example is the Google Play Music application. I'm a subscriber; I got a free subscription along the way, and it gives me access to all the music. I can trigger it with my voice. What ended up happening was I was on my phone listening to music one day via headphones, and I looked down at my watch to see the time, and there, instantly, without me ever having to choose to install an app or configure anything, I could see a control.

It both told me who was playing, in case I was listening to a radio station, and let me stop the music. If I swiped, I could use a gesture to say next song, next song, next song.

It’s a magical experience the first time you see that because you didn’t choose it. Just by the nature and the virtue of having these devices, you got all of those benefits.

Here's another example. I was using my phone to stream Netflix to my Chromecast, watching Futurama. Again, I looked down at the watch to check the time, coming back from a snack break or something, and there were the controls again, with all the information I needed, so I didn't have to pull anything out of my pocket.

It may not seem like a big deal to pull your phone out of your pocket, but in terms of how the human brain processes information, it can only handle about seven things at once. The 14 seconds or so it takes to pull your phone out, turn it on, unlock it if you have a passcode or Touch ID, launch an application, and then make a choice uses up two or three of those little slots and pushes other things out of your brain.

If you can achieve these kinds of interactions that are unexpected but then just take care of themselves, you're once again creating a magical experience.

Here's a very similar use case. I was talking about Evernote on Glass; here's me taking the same note, or a similar one, on my watch. Again, I use a hot word and speak a note, and the default action is to automatically save it. I can cancel it, redo it, or undo it in case it's misinterpreting what I'm saying, but the default case just takes care of almost everything.

This is not me, I don't have pants this nice, but I was on a biking journey in Europe a few months ago, and just like this gentleman, I kept pulling my phone out to check my stats. We were doing this bicycle ride through the mountains, so I kept going slower and faster, and I really wanted to see how far we had left, because it was about 15 miles and it was new for me.

What ended up happening is I got near the end of the journey, turned my wrist over to check the time again, and there was the same information. I didn't need to be dangerously, shakily pulling out my phone and unlocking it to see it. I could have just been looking at my wrist.

Directions are another great discovered experience. I think Apple is going to do a fantastic job with this, but what Google has works quite well. You're able both to initiate navigation and to have all the instructions coming from your phone sent to your watch. You don't think you need that until you put your phone back in your pocket. Again, once that initial trigger happens, the wearable experience is glanceable; it's going to be faster, and it's going to be a better experience.

This is one of the weirdest apps I've used. Can anyone guess what app this is? It's actually the camera app. As soon as I launch the camera app on my phone, my watch gives me a remote shutter. This is the entire interface: it's a button. You press it, it gives you a three second countdown, and then your phone takes your picture.

I can set my phone up, if there's a group of us for example, and say, "Okay, take a picture." What ends up happening is you get a preview of that picture on your watch.

Just think about that group picture scenario and how powerful that could be: not having to ask someone to take your picture, and not having to guess, take the picture, go get it, check it, do it again. Again, it has almost no UI, it's just a circle, but because I'm already taking a picture, because it's relevant to the context I'm in, it makes sense.

Now I want to turn from these examples and try to help you achieve some of these wearable experiences.

The first thing, the one thing I want everyone to take away from this, is: don't make the user choose you. If at all possible, your application should be launching itself, or it should be attached to an existing application the user may already have installed. These things do not have keyboards, unless you're Microsoft; they actually did release a keyboard for Android Wear, which is an interesting choice on their part. But you're never going to ask a user to log in. You should automatically be coming in the door with that context, and then taking it, extending it, and doing something useful and interesting with it.

The framework I want you to think through really has four parts. Any application, any experience you're developing should be short, simple, context sensitive, and reliable.

This is a slide that Google put together and I made a couple of modifications to. What it tries to do is tell the story of an interaction with the phone.

Green is when the phone is sitting in your pocket, which is most of the time, and then, as we've heard a few times throughout the conference, 150 times a day you're pulling your phone out of your pocket, launching applications, and interacting with them.

Now look at how this can change with the advent of wearable technology like Glass or smart watches or headbands. There's a very interesting piece of technology called the Motorola Hint.

How many people here have heard of Hint?

It's a little bit like Google Glass, but do you remember Bluetooth earpieces? It's essentially a Bluetooth speaker that sits entirely inside your ear. There's a really funny episode of The Office from about four years ago where they made up this idea of a Bluetooth speaker that fits entirely in your ear. It was just a joke to them, but now Motorola has actually gone and built it.

What it does is extend the Glass experience in a way where you don't have the heads-up display, and everything you do is via voice and audio. You can tap your ear and say, "Get directions," and it will read the directions to you verbally. When we look at the glanceability created by this experience, you're going to see the same moments again, but so much shorter.

What happens is there's an order of magnitude shift. We used to have to go to a desktop computer; that was an order of magnitude worse than going to a laptop. A laptop is much faster, it boots up, it's with you wherever you go. Then we had another shift from laptops down to cellphones, and now we're going to hit the next order of magnitude shift, down to wearable devices. It doesn't feel like a lot, but it matters in the same way that going from a laptop to a phone was a big deal.

Going from a phone to a wearable is a big deal because it really lowers the barriers to engagement. You're going to make different choices in terms of how you interact with brands and products.

There's a talk going on right now from a company called InboxDollars, and there are a lot of other companies trying to monetize short, brief interactions. With InboxDollars, you get an email or a survey, and by responding to that survey you're giving them information and getting paid in exchange.

Imagine if that experience could go on a watch. I might do it a hundred times in a day, rather than spending five minutes sitting down and doing it 20 times. That type of shift in the way you engage with your customers can have a huge impact on the engagement and the results you're going to see from using that technology.

The second concept is simple. You'll note, once again, just like that camera application, there's not a lot of UI/UX here. There's basically one button on the screen at any given time, because a user of a watch is not going to spend a lot of time poking at it. There's not going to be a stylus for a watch; at least, I hope nobody tries to release one.

A combination of swipes and taps is going to be a really easy way for users to interact. When you think, "Hey, I want to give my users a menu of options," you have to think about how to convert that menu into a small, consumable series of actions, and then really sort it. Use that context, use the information you've got about the user and the application, to make it approachable.

Context sensitive: this is an installed application called Coffee Time. How many people here have a Starbucks card or use the mobile app? Okay, a few people. This is an application designed for those people. It does one thing and it does it really well: it takes the barcode you have in the Starbucks app and makes it available on your watch.

By saying, "Okay Google, start Coffee Time," it will pull up my Starbucks card on my watch, and then I'm able to scan it.

You see the same thing with the Delta app, where they try to give you your boarding pass automatically on your watch, although I have had a terrible experience with Delta. I don't know if anyone else has tried using that, but the first time it ever came up for me, it had the wrong ticket. That was not a fun experience, which is actually a great segue into the last piece: your experiences have to be reliable.

If you're Delta and you take an experience out into the wild where even 20% of your users fail to use the application successfully, you're not going to win. You have to go back and rebuild those experiences before people will give you another chance.

This is actually one of the more common screens on Google Glass today. I call it the sad cloud. It means there was some sort of error along the way, and Glass wasn't able to reach the cloud to process your voice or display an action.

When you move to a wearable experience, it's not just tapping a phone anymore; it ends up becoming a much more public experience. If I'm tapping on a phone, you don't really know what app I'm running or what I'm typing.

Whereas if I'm saying, "Okay Glass, take a picture," everyone in this room is aware of what I'm doing. If I have to say it multiple times, it's shame and embarrassment. I've been walking through a Target saying, "Okay Glass, take a note: hey, I should pick up that thing next time." If that fails, I'm not saying it again, because everyone around will look at me funny, and that would not be good.

This is another example of a wearable where reliability is really important. Does anyone recognize this? One, right. This is called the Myo band. The idea is that it sits on your arm and reads the electrical signals in your arm muscles, and it can actually sense when you're about to flex a muscle, because before that muscle actually moves, your brain sends a signal down your arm.

What's cool about that is if you can read muscle movement at a granular enough pace, you can do cool things like give someone volume control of their speakers. By just twisting my wrist, I'm turning the volume up or down, or I can skip music forward and back, advance presentations, things like that. But I've had both the alpha version of this and now the production version, and it's got about a 30% error rate. That 30% error rate is completely unacceptable.

I cannot wear this band; I've not worn it since day one. As an application development shop, we're not going to build applications for it. The same thing applies to the Leap Motion, if anyone has seen it. It's a little sensor that sits in front of your computer, and it's the coolest idea ever. In the video they're playing Angry Birds with chopsticks, which is such a magical experience, connecting the real world and the digital world. But if those experiences aren't going to work, if they're going to fail, it's never going to be successful commercially.

Now I want to turn away from the general case and talk more specifically about how this can operate in the healthcare world. When I talk about healthcare, I'm primarily talking about it from a patient perspective. There's a huge number of efficiencies to be gained for physicians, clinicians, and practicing nurses, but those productivity gains are a little different from the consumer ones, because choosing to use this as a consumer is a social and personal choice.

Whereas if you can mandate it in a corporate, business, or hospital environment, there's a completely different realm of possibilities. A hospital might have 30 pairs of these for their surgeons, so they can get consults and real-time information about the actions they're performing.

For example, we did a project with a company where, and I'll talk about this a little bit later, critical messaging can come in. Someone requests a consult from me, and I don't have to pull out my phone to reference information about that patient; I can pull it up in real time via Glass. For a physician trying to interact with a thousand patients a day, that's a big deal, because the more patients you can interact with, and the better the information you have, the better the outcomes are going to be.

I'm going to talk through a few examples. The first is building a reminders experience for wearables. Imagine if your watch or your pair of glasses told you, "Take three units of insulin." That's "short": tell them exactly what they need to know. Next, "simple."

There are really only two actions a user should ever take: they took it now, or they didn't. That can be as simple as a swipe. You see that a lot in reminder applications, in calendar notifications, even in incoming phone calls.

Make it context sensitive; base it on sensors. There's no way to know whether I should take three units of insulin unless you have data about the patient, their history, and the trends in their blood.

Did we actually detect high blood sugar? Is this the correct response for the right moment for this patient?

Lastly, use those screen-based interactions and make it reliable, because this is life and death. If you tell a patient to take three units of insulin and they don't need it, it could kill them.
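To make that concrete, here's a deliberately over-cautious toy sketch of the decision logic behind such a reminder. Every threshold and number here is invented purely for illustration; this is not medical logic or advice:

```python
def insulin_reminder(glucose_mg_dl, patient_baseline_mg_dl, hours_since_last_dose):
    """Toy decision logic for a wearable reminder.
    Returns a short reminder string, or None if we shouldn't prompt.
    All thresholds are illustrative assumptions, not medical guidance."""
    # Reliability: refuse to prompt on implausible sensor readings.
    if not 20 <= glucose_mg_dl <= 600:
        return None
    # Reliability: never prompt if a dose was taken recently.
    if hours_since_last_dose < 3:
        return None
    # Context: compare against this patient's own baseline, not a global number.
    if glucose_mg_dl > patient_baseline_mg_dl * 1.5:
        return "Take 3 units of insulin"  # short, concrete, actionable
    return None

print(insulin_reminder(250, 110, 6))  # elevated and overdue → prompts
print(insulin_reminder(250, 110, 1))  # too soon after last dose → None
print(insulin_reminder(900, 110, 6))  # implausible reading → None
```

Notice that most of the code is about when *not* to prompt; for a life-and-death interaction, declining to act on bad data is the reliability pillar in practice.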

Another example is exercise. This is much more on the wellness and fitness side of things. Imagine a golf application that you're using to track your scores, where your watch automatically tracks what you do.

Imagine it says, "Your last swing was 3,280 pounds of force." That's a very short, concrete, to-the-point interaction. Simple: tap to delete; otherwise it just saves, no interaction.

I could play an entire round of golf without having to think about what I'm doing, and I still get that data, which might help me become a better golfer. Make it context sensitive: use the geolocation. I should only see this if I'm on a golf course and actually swinging. Use that detection, and then save the information.

Then the last piece: make it reliable. Don't miss a swing, and don't record false swings when I'm just waving to friends. If the information is not useful, once again, I'm not going to use your application, and you're going to lose a huge opportunity.
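Here's a minimal sketch of how those two checks, the geofence and the swing detection, could combine. The 6g acceleration threshold and the 1 km geofence radius are made-up numbers for illustration, and real swing detection would look at the whole motion waveform, not a single peak:

```python
import math

def record_swing(accel_g, lat, lon, course_lat, course_lon, radius_km=1.0):
    """Toy swing detector: only record when we're inside a golf-course
    geofence AND the wrist acceleration looks like a real swing."""
    # Rough equirectangular distance in km (fine at this small scale).
    dlat = math.radians(lat - course_lat)
    dlon = math.radians(lon - course_lon) * math.cos(math.radians(course_lat))
    dist_km = 6371 * math.sqrt(dlat**2 + dlon**2)
    on_course = dist_km <= radius_km
    # Reliability: a wave to a friend (roughly 1-2g) shouldn't count as a swing.
    is_swing = accel_g >= 6.0
    return on_course and is_swing

print(record_swing(9.0, 44.98, -93.27, 44.98, -93.27))  # real swing on course
print(record_swing(1.5, 44.98, -93.27, 44.98, -93.27))  # wave, not a swing
print(record_swing(9.0, 45.50, -93.27, 44.98, -93.27))  # hard motion, but off course
```

The point of the sketch is the AND: context (where you are) and sensing (what the wrist is doing) each veto the other, which is what keeps false swings out of the data.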

The last example, which I referred to a little bit earlier, is this idea of clinical intervention. Imagine if you could start a code blue just by saying so. Make it simple: if I'm a physician or a nurse called to the code blue, show me the steps one at a time.

At Google I/O they showed a lot of examples with cookbooks and things like that, but the exact same idea applies here. Figure out what you should be showing the user on the screen, then show it to them, and make it simple: they just go next, next, or dismiss the entire interaction.

Make it context sensitive. Figure out: what are the relevant doctors I should be communicating with, what are the relevant symptoms we're detecting, what other sensors do we have on the body, what interventions have we performed?

One of the hardest parts of a code blue scenario is that there's supposed to be a person sitting there whose entire job is to write down everything that's done. Imagine if we didn't need that person, because we could collect that data with the appropriate sensors.

Lastly, pull in the patient history. Is this the fourth time this has happened? What did we do last time? Was it successful or unsuccessful? What do we need to know to be effective in the moment? Every second that passes there, if I have to pull out a phone or grab a tablet off of the cart, I'm going to be less effective.

Then, last, make it reliable. Don't necessarily ask someone to start this; maybe automatically trigger it if sensors go into a certain range for a certain time, and then ask for permission, or say, "Hey, we noticed a code blue and triggered it; do you want to cancel?" Make sure the technology is doing the right thing in any situation.
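The "sensors in a certain range for a certain time" idea can be sketched like this. The heart-rate limits and the 15-second sustain window are invented for illustration; real clinical alarm logic is far more sophisticated:

```python
def should_trigger_code_blue(heart_rates, low=30, high=180, sustain_seconds=15):
    """Toy auto-trigger: fire only when one-second heart-rate samples stay
    outside the safe range for `sustain_seconds` in a row, so a single
    noisy sample can't start a false alarm. All thresholds are invented."""
    run = 0
    for hr in heart_rates:
        if hr < low or hr > high:
            run += 1
            if run >= sustain_seconds:
                return True
        else:
            run = 0  # one normal reading resets the streak
    return False

print(should_trigger_code_blue([20] * 15))  # sustained bradycardia → trigger
print(should_trigger_code_blue([20] * 5))   # brief sensor glitch → no trigger
```

Requiring a sustained run is the reliability piece: you'd rather delay an automatic trigger by a few seconds than cry wolf, especially since the design above still lets a human cancel or confirm.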

I want to talk a little bit about the future of healthcare and where we're going with this. If you add everything up, we're going to have more sensors, and more types of sensors.

If you look at a common smartphone or wearable you get right now, they've added a barometer. Does anyone know why they put a barometer in a cellphone? What they're actually doing is measuring pressure change to figure out altitude, which gives them a better lock on your GPS.

It's sometimes very counterintuitive how we can use these sensors, and the more sensors we have, the more effective we're going to be.
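The pressure-to-altitude trick uses the standard-atmosphere barometric formula, which is what pressure sensor vendors commonly quote. The constants below (44330 m scale height, 1/5.255 exponent, 101325 Pa sea-level pressure) are the usual textbook values; a real phone fuses this with GPS and a local sea-level pressure reference rather than assuming the standard value:

```python
def pressure_to_altitude_m(pressure_pa, sea_level_pa=101325.0):
    """Estimate altitude in meters from barometric pressure using the
    international standard-atmosphere formula."""
    return 44330.0 * (1.0 - (pressure_pa / sea_level_pa) ** (1.0 / 5.255))

print(round(pressure_to_altitude_m(101325.0)))  # sea level → 0
print(round(pressure_to_altitude_m(95000.0)))   # roughly 540 m up
```

A pressure change of a few hundred pascals resolves to tens of meters of altitude, which is why a barometer can separate GPS fixes that are otherwise ambiguous in the vertical dimension.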

I'm going to walk through a little story here. Imagine that you care about your health, so you're exercising regularly. You're wearing your Fitbit, and maybe you're hitting your 10,000 steps every day. That can be an application, and it's going to make you more effective.

Then, using that data and extending on it, you're getting reminders about your own personalized goals and motivations: "Hey, I want to be healthy for my kids." Tell a patient, or a person who cares about wellness, why they're doing something. Instead of just "You hit 10,000 steps," show them a picture of their kid.

Show them, "Hey, this is why you're doing it, and you're being successful." Reward those experiences. Then imagine you have visibility into your own progress.

Now it's not just a moment in time; I'm saying, "Oh wow, I've made the right choices 16 days in a row, and I should keep that up." Then imagine something starts changing. Imagine your blood pressure starts going outside of your normal range, and it's your normal range, because everyone's range is going to be different. We're going to have data and information on that, and we can build an application or an interaction that tracks it and informs the user: "Hey, something's happening here; we should do something about it."

Imagine I see this and say, "I should go to the doctor." I schedule an appointment, and then I use that same technology to share the information with the relevant physicians who are going to take a look at it. Then the day of my appointment comes, and my watch shows me, "Hey, I know where the appointment is. You should leave by 6pm to get there on time." That's great information, and it's going to make me actually go.

Then I sit down with my doctor and I pull up my Evernote, whether it's on Glass or on Wear, and say, "Hey, there are four things I've been wanting to talk to you about. I've captured these over the last six months." I know personally, if I don't have that information, I get in to the doctor and I'm nervous and I forget to ask about things, and I say, "Oh, okay, well, I guess I'm fine, I'll just leave then." But if you can say, "Hey, I had a really weird pain that came up after I went jogging," maybe you end up having a conversation about that.

Lastly, imagine if your doctor could make better diagnoses. From my perspective, and this is not a medically validated opinion, I think doctors are guessing about 80% of the time. You come in, you talk with them for about five minutes, you give them a really terrible description of what happened, and they say, "Yep, this is what's happening. This is what you should do about it."

Imagine if we could have more data, more information, and say, "Hey, here's my last three years of blood pressure. Yeah, it was going crazy that day, but it's not really a problem." They can make better choices based on that, and they can make better decisions. The idea is that using technology and this data, we're going to move away from that guessing universe into a universe where we're actually making good choices.

Then, finally, imagine if we could do the follow up. "Hey, we found out my blood pressure has been going up because I haven't been exercising enough. Maybe we tweak my goals, maybe we tweak my motivation."

All those other pieces and steps along the way automatically update with that new information and the new choices I've made about my own healthcare.

When I think about healthcare, there are really three pieces. First, the inputs to your body: everything from what you eat, to the medicines you take, to the air you breathe.

I, for example, have a Withings scale, and that scale measures the CO2 in my house. You can see when it goes up, and it gives you a little warning: "Hey, we're at 3,000 ppm of CO2." Anything over a thousand is associated with fogginess and mental slowness.

Second is the activities: what am I doing? This is where fitness trackers come into play hugely, but activities can also be surgical interventions.

The last piece is the processes: the internal processes that happen within our body. Our bodies are not perfect machines; something is going to go wrong along the way. The idea is that if you can combine all of these into one, via this personalized big data, we're going to be more able to take control of them and keep them all in sync.

With that, I want to thank everyone for coming, and turn it over to questions.

Audience: Would you expect wearables to drive innovations in implanted diagnostics?

Stephen Fluin: Sure. I have a personal rule on implanted diagnostics: I will only use the 10th version of an implantable. The first nine? No good.

I personally will wait on any sort of implantable diagnostics but I think it’s a very exciting space that we have to try because every human being is different and we have to get that information in order to make positive choices.

I find it interesting that 100 years ago, people would die of "old age," but that was just the name they gave it because they didn't know what was going on. Then we had "cancer" for 20 or 30 years, where it was just cancer, because we didn't really know what was going on.

Now we have 70 different types of cancer, and as more and more information comes out about the human body, we get better and better at reacting to those things, providing treatment, taking care of patients, and providing them a better quality of life, which is huge. I think at some point there's only so much data you can gather from outside the body, and you have to turn inward.

Audience: [So the two part, the number one, it’s personalized big data. Where does that reside and absolutely I’ve … 00:36:56]

Stephen Fluin: Great question. It's interesting, because I don't think we know where it's going to reside right now; every single company wants to own that data.

You saw this on a much smaller scale with Fitbit and Jawbone and Microsoft, where they all have their own fitness network, and then they build one-by-one connections to all the others, because each wants to be the sole source of data while still getting everyone else's data.

What’s ended up happening is it actually created a very democratized environment where I can connect all of these different systems and networks any way I want and then you have huge players like Apple and Google that are trying to succeed here.

The thing I’ll say about their efforts is that they have materially failed in my opinion.

Apple Health, which I'm very excited about and which I think long term has a lot of great prospects, is today a failure. My iPhone sits in my pocket, counting my steps and pulling in my weight data from [inaudible 00:38:06], but that's it.

I couldn't access it online, and I couldn't access it on any other device. For security reasons it was completely locked down, and completely useless to me.

Whereas if you contrast that with Fitbit, I actually get dashboards, I see information, and I'm able to make personal choices.

As for whether the healthcare companies will be able to build a platform, or the providers themselves will be able to own that data, I don't think that's going to end up happening.

I think it's going to end up being that the user has a relationship with one or many of these, and ultimately, even though their data is residing with and stored by another company, they're going to be the ones in control, because of this platform capability, because we're all rushing in there. It's a little bit of supply and demand.

We all want it, but no company has all the answers, which gives us, the users, all the power. Does that answer both parts of the question?

Audience: [inaudible 00:39:02]

Stephen Fluin: Do you have a better answer? Yeah?

Audience: [So with regards to the platform … 00:39:11]

Stephen Fluin: I think that’s already happening today.

There are insurance companies that, whether or not they make that decision public or make it clear what's happening, will look at your Fitbit data. They want you to share your Fitbit data with them.

I don't think they can own it, though, because they can't take care of your whole healthy lifestyle, everything from the inputs and the foods you eat to the interventions and the activities you're doing. They can't own that entire experience, and so some level of portability is going to be maintained.

Just fast forward five years and think about how Tesla collects data around what your car does. That's much more appealing to a consumer than Progressive Snapshot.

I would much rather have the data Tesla gives me about when I drive, how I drive, and where I drive than what Progressive gives me, because Progressive offers me 100 bucks, or whatever the amount is, but Tesla gives me something actually interesting, something I can make choices based on.

Other questions?

Thank you all so much for coming.