Approximately an hour into the 2.5-hour webcast of Google’s I/O 2014 event, David Singleton, Director of Engineering for Google’s Android project, finally informs his audience about plans for voice features for this mobile O/S. Since mobile consumers so often use their devices while in motion, would it have made more sense to address voice earlier in the presentation?
But Singleton adds a telling comment on this topic: “we’re just getting started on this.” Perhaps Android developers should closely consider the ramifications of this comment before committing resources to yet another Android app designed for a touch screen. This is not to say that all of the pretty “cards” demonstrated up to this point in the keynote presentation are not worth the effort. But for a consumer in transit, flipping through cards on a smartphone, a smart watch, or even an automobile display simply will not work. There will nevertheless be plenty of demand, driven by likely consumer need, to dive into details about a geographical location and the services nearby, and even deeper. But the real solution to this problem will be a set of highly sophisticated voice “cards,” not their older, currently available visual siblings.
What is required for Android to close this pressing gap between visual and audible features? Perhaps success will depend on a set of highly detailed voice APIs, will it not? Not only did Singleton candidly inform his audience of how much still needs to be done to put these APIs together, but one of the examples he provided of an app built by a third-party Android ISV is, perhaps, representative of how not to proceed on the project. The app he discussed is called Lyft. The voice function he presented is “our call a car intent.” But the syntax of the query, at least to this writer’s ear, “OK Google, call me a car,” is unnatural and not likely to be very useful to the average consumer. Unfortunately, Singleton provided no indication of the degree to which the logic behind this function is, or is not, fuzzy enough to provide a useful response to more naturally phrased queries. Finally, the notifications from Lyft, post query, which Singleton demonstrated, were all visual (once again, a deck of pretty cards), and not likely to be useful to mobile consumers who need to attend to where they are going rather than to a device screen.
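For readers curious about the mechanics behind a phrase like “our call a car intent”: in the Android of this era, a system voice action was dispatched as an ordinary Android intent, meaning an app declared an intent filter for a documented action string in its manifest, and the platform routed the spoken request to the matching activity. The fragment below is a minimal sketch of that wiring, assuming the action string Google documented for the “call me a car” use case; the activity name is a hypothetical placeholder, not anything from Lyft’s actual app.

```xml
<!-- AndroidManifest.xml (fragment, illustrative only).
     RideRequestActivity is a hypothetical name; the action string is the
     one Google documented for the "call me a car / call me a taxi"
     system voice action. When the user speaks the phrase, the platform
     fires an intent with this action and launches the matching activity. -->
<activity android:name=".RideRequestActivity">
    <intent-filter>
        <action android:name="com.google.android.gms.actions.RESERVE_TAXI_RESERVATION" />
        <category android:name="android.intent.category.DEFAULT" />
    </intent-filter>
</activity>
```

Note what this design implies for the argument above: the app only ever sees the intent after the platform has matched a fixed trigger phrase, so any fuzziness in handling more naturally phrased queries lives on Google’s side of the API, not the ISV’s.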
Bottom line: when/if a thoroughly detailed set of voice APIs is available for app development on the Android O/S, it is likely that the pretty cards and swipe gestures on a touch screen won’t have the same value they apparently have today. So Android ISVs might want to wait a bit before diving much deeper into the “wearables” opportunity.
© IMB Enterprises, Inc. & Ira Michael Blonder, 2014 All Rights Reserved