December 17, 2014

Google Debuts Cloud Dataflow at Google I/O 2014

At the very end of the 2.5-hour-plus webcast of the Keynote presentation from Google I/O 2014 can be found the debut of Google Cloud Dataflow, the replacement for Google MapReduce. Readers unfamiliar with MapReduce, but avidly interested in the big data enterprise computing trend, should understand that MapReduce is the programming model at the foundation of today’s Apache Hadoop project. Without MapReduce, the Apache Hadoop project would not exist. So Google MapReduce is software worth some study, as is Cloud Dataflow.
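For readers entirely new to the model, here is a minimal, single-machine sketch of the word-count computation with which MapReduce is usually introduced. It is purely illustrative and assumes nothing about Google’s implementation, which distributes the map phase (emit a count per word) and the reduce phase (sum the counts) across a cluster of servers:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Conceptual word count: the "map" step emits a count of 1 per word,
// and the "reduce" step sums the counts for each distinct word. A real
// MapReduce runtime shards both steps across many machines.
public class WordCountSketch {
    public static void main(String[] args) {
        List<String> documents = Arrays.asList(
                "the quick brown fox", "the lazy dog");

        Map<String, Integer> counts = new HashMap<>();
        for (String doc : documents) {
            for (String word : doc.split("\\s+")) {   // map: emit (word, 1)
                counts.merge(word, 1, Integer::sum);  // reduce: sum per word
            }
        }
        System.out.println(counts); // e.g. {the=2, quick=1, brown=1, ...}
    }
}
```

Cloud Dataflow generalizes this same pattern into pipelines which, per the Keynote, handle both batch and streaming data.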

But wait, there’s more. As Urs Hölzle, Senior Vice President, Technical Infrastructure, introduces Google Cloud Dataflow, his audience is also informed about Google’s role in the creation of another of today’s biggest enterprise data analytics approaches: NoSQL (“Not only SQL”). He casually informs his audience (the segue is a simple “by the way”) that Google invented NoSQL.

I hope readers will get a feel for where I’m headed with these comments about Google’s historical role in the creation of two of the very big trends in enterprise computing in late 2014. I’m perplexed as to why Google would, literally, bury this presentation at the very end of the Keynote. Why would Google prefer to cover its pioneering role in these very hot computing trends with a thick fog? Few business decision-makers, if any, are likely to pierce this veil of obscurity as they search for best-in-class methods of incorporating clusters of servers in a parallel processing role (in other words, “big data”) to better address the task of analyzing text data scraped from the web pages of corporate sites (“NoSQL”).

On the other hand, I’m also impressed by the upside Google could realize by lifting this fog. Are they likely to move in this direction? I think they are, based on some of the information they reported to the U.S. SEC in their most recent 10-Q filing, for Q3 2014. Year-over-year, the “Other Revenues” segment of Google’s revenue stream grew by roughly 50%, from $1,230 million in Q3 2013 to $1,841 million in Q3 2014. Any revenue Google realizes from Google Cloud and its related components (which, by the way, include Cloud Dataflow) is included in this “Other Revenues” segment of the report. For the nine months ending September 30, 2014, the same revenue segment increased from $3,325 million in 2013 to $4,991 million in 2014. Pretty impressive stuff, and not likely to diminish with a revamped market message powering “Google at Work”, and Amit Singh (late of Oracle) leading the effort.
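For readers checking the math, the growth rates work out as follows:

Q3, year-over-year: ($1,841M − $1,230M) ÷ $1,230M ≈ 49.7%
Nine months, year-over-year: ($4,991M − $3,325M) ÷ $3,325M ≈ 50.1%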

Ira Michael Blonder

© IMB Enterprises, Inc. & Ira Michael Blonder, 2014 All Rights Reserved

July 22, 2014

Do Cloud App Consumers Really Want Merely a Simple Online Authentication System?

Anyone following media reports on the security consciousness of consumers of online apps, or the disinterest they exhibit in developing any, will likely be familiar with what this writer considers a long-term trend: a search for “something easy” to implement, even at the expense of any real promise of security. This trend was on display at the recent Google I/O 2014 developer event. During the Android apps for smart phones segment, the audience witnessed a streamlined approach to device authentication. The presenter first noted how difficult it can be to repeatedly authenticate an Android smart phone via a PIN, and then went on to show how the process can be circumvented by a new Android feature built on what could be called “proximity-based authentication via trusted, related devices”.

The presenter demonstrated a successful attempt to authenticate his smart phone via his Bluetooth smart watch. The phone had evidently been programmed to consider the smart watch a trusted object. So, bingo, with the smart watch strapped to his wrist, the presenter quickly gained access to the smart phone without any need to comply with the “complex” PIN method.
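For readers curious about what such a check involves at the API level, below is a hypothetical Java sketch of testing for a trusted, bonded Bluetooth device. This is not the actual Android implementation, which lives in the platform and also verifies an active, encrypted connection to the device; the watch’s MAC address here is an invented placeholder, and the code assumes the BLUETOOTH permission has been granted:

```java
import android.bluetooth.BluetoothAdapter;
import android.bluetooth.BluetoothDevice;

// Hypothetical proximity check: is the user's trusted smart watch among
// the phone's bonded (paired) Bluetooth devices? The platform feature
// demonstrated at I/O also requires a live, encrypted connection, since
// mere presence in the bonded list proves nothing about proximity.
public class TrustedDeviceCheck {
    // Invented placeholder for the trusted watch's MAC address.
    private static final String TRUSTED_WATCH_ADDRESS = "AA:BB:CC:DD:EE:FF";

    public static boolean trustedWatchIsPaired() {
        BluetoothAdapter adapter = BluetoothAdapter.getDefaultAdapter();
        if (adapter == null || !adapter.isEnabled()) {
            return false; // no usable Bluetooth radio: fall back to the PIN
        }
        for (BluetoothDevice device : adapter.getBondedDevices()) {
            if (TRUSTED_WATCH_ADDRESS.equals(device.getAddress())) {
                return true; // trusted device paired: skip the PIN prompt
            }
        }
        return false;
    }
}
```

Note the security question the sketch makes obvious: everything hinges on how hard the trusted device is to impersonate.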

Anyone watching the webcast of this presentation will note the audience applause. So, it would appear, at least the app developer community favors this type of simple method of proving a user has valid access to a device.

Fast forward a month after this event to an article published in the Wall Street Journal. This one, titled “The Password Is Finally Dying. Here’s Mine”, was published on July 14, 2014 and written by Christopher Mims. Mims presents this kind of device-based authentication as something with a real promise of data security: “It might seem foolish to replace an authentication token that you keep in your head (a password) with one you keep in your pocket (like a phone) but consider: The former can be obtained by hackers, and the latter you can shut down the moment it goes missing.”

This writer has a few questions: 1) Just because an online hacker isn’t wearing my Bluetooth watch, does this mean he/she can’t spoof it? 2) What about a “brick and mortar” thief who steals my Bluetooth watch, my smart phone, and my tablet? What’s to keep him/her away from my data?

Mims goes on to refer to a user’s ability to “wipe” a device, meaning a smart phone, etc. Readers may want to maintain a skeptical attitude toward this claim as well.

Bottom line, given the pervasive insecurity of online data communications, one would hope app consumers (at least smart ones) would favor security over convenience.

Ira Michael Blonder

© IMB Enterprises, Inc. & Ira Michael Blonder, 2014 All Rights Reserved

July 9, 2014

A Report on Android for Automobiles Is Included in the Keynote Presentation for Google I/O 2014

Patrick Brady, Director of Engineering, Android, reported on Android’s activities for automobiles during the Google I/O 2014 event. Brady’s presentation introduced Android Auto, which might be more aptly titled “Android smart phone for auto”, based on the features demonstrated during this segment of the Keynote presentation.

Brady summed up the burning consumer need for a solution like Android Auto with reference to the well-publicized problem of widespread public disdain for traffic laws prohibiting the manual use of smart phones in cars: “Even though it’s unsafe and illegal, people use their phones while driving.” (quoted from Brady’s Android Auto segment of the Keynote presentation at the Google I/O 2014 event)

Google is working to solve this problem. “In January, we announced the Open Automotive Alliance to make the connected car a reality . . . and today we are happy to announce Android Auto”. Brady claims Android Auto includes a “. . . [redesigned] . . . Android platform for automotive, [which makes] it easier and safer to use the connected apps and services drivers want in the car.” (ibid)

How did Android arrive at this approach? “We looked at what people do with their phones in their cars today, and these things stood out to us: navigation, communication, music and other forms of streaming media. . . . Android Auto puts these front and center.” (ibid)

Brady then claims the entire platform can be managed via voice commands. In this writer’s opinion, this approach is the right one, assuming Google has truly implemented it. But the demonstration provided during this segment was long on showing how a driver (played by Andrew Brenner, the current product manager for Android Auto) can use a dashboard screen to manage Android Auto features, and short on demonstrating the current voice control feature set.

Brenner demonstrated how Android Auto can be used to read text messages aloud, and then to transform a spoken reply into an outgoing text message. But this feature is already available on Windows Phone 8 as well.
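For developers wondering how this integration is exposed, the sketch below uses the CarExtender notification API Google documented for Android Auto around the time of this event; the participant name, the placeholder intents, and the “voice_reply” key are illustrative assumptions, not values from the demonstration:

```java
import android.app.PendingIntent;
import android.content.Context;
import android.content.Intent;
import android.support.v4.app.NotificationCompat;
import android.support.v4.app.RemoteInput;

// Sketch: expose a text conversation to Android Auto so the car's UI can
// read messages aloud and capture a spoken reply as text.
public class AutoMessagingSketch {
    static NotificationCompat.Builder buildConversation(Context context) {
        // Placeholder intents the system fires when the driver has the
        // message read aloud, or dictates a reply.
        PendingIntent readIntent = PendingIntent.getService(context, 0,
                new Intent("com.example.ACTION_MESSAGE_READ"), 0);
        PendingIntent replyIntent = PendingIntent.getService(context, 0,
                new Intent("com.example.ACTION_MESSAGE_REPLY"), 0);

        // RemoteInput carries the driver's transcribed voice reply.
        RemoteInput voiceReply = new RemoteInput.Builder("voice_reply")
                .setLabel("Reply by voice")
                .build();

        NotificationCompat.CarExtender.UnreadConversation conversation =
                new NotificationCompat.CarExtender.UnreadConversation
                        .Builder("Jane Doe")
                        .addMessage("Are we still on for lunch?")
                        .setLatestTimestamp(System.currentTimeMillis())
                        .setReadPendingIntent(readIntent)
                        .setReplyAction(replyIntent, voiceReply)
                        .build();

        return new NotificationCompat.Builder(context)
                .setSmallIcon(android.R.drawable.ic_dialog_email)
                .setContentTitle("Jane Doe")
                .setContentText("Are we still on for lunch?")
                .extend(new NotificationCompat.CarExtender()
                        .setUnreadConversation(conversation));
    }
}
```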

If this demonstration is consistent with the rest of the Android Auto architecture, the product has been designed, almost in its entirety, to bolt Android smart phone features onto the intelligence systems used in cars. In this writer’s opinion, opting for this architecture for a platform which, in theory, could support a much richer set of functionality is not the right step forward. When the rudimentary set of voice commands available in the current version of the platform is brought into the picture, the platform looks even less promising.

Ira Michael Blonder

© IMB Enterprises, Inc. & Ira Michael Blonder, 2014 All Rights Reserved

July 8, 2014

Should Android Developers Spend a Lot of Time on Visual Features, Like Cards, While APIs for Voice Features Still Need to Be Defined?

Approximately an hour into the 2.5-hour webcast of Google’s I/O 2014 event, David Singleton, Director of Engineering for Google’s Android project, finally informs his audience about plans for voice features for this mobile O/S. Since so much human activity takes place in motion, with eyes and hands otherwise occupied, would it have made more sense to address voice earlier in the presentation?

But Singleton adds a telling comment on this topic: “we’re just getting started on this”. Perhaps Android developers should closely consider the ramifications of this comment before committing resources to yet another Android app designed for a touch screen. This is not to say all of the pretty “cards” demonstrated up to this point in the Keynote presentation are not worth the effort. But for a consumer in transit, flipping through cards on a smart phone, a smart watch, or even an automobile display just will not work out. There will nevertheless be a lot of demand, based on likely consumer need, to dive into details about a geographic location, the services nearby, and deeper still. But the real solution to the problem will be a set of highly sophisticated voice “cards”, not their older, currently available visual siblings.

What is required for Android to traverse this pressing gap between visual and audible features? Perhaps success will depend on a set of highly detailed voice APIs. Not only did Singleton candidly inform his audience how much work remains to put these APIs together; one of the examples he provided of an app built by a third-party Android ISV is, perhaps, representative of how not to proceed on the project. The app he talked about is called Lyft. The voice function he presented is “our call a car intent”. But the syntax of the query, at least to this writer’s ear, “OK Google, call me a car”, is unnatural and not likely to be very useful to the average consumer. Unfortunately, Singleton gave no indication of the degree to which the logic behind this function is fuzzy enough to provide a useful response to more naturally phrased queries. Finally, the notifications from Lyft which Singleton demonstrated post-query were all visual (once again, a deck of pretty cards), and not likely to be useful to mobile consumers who need to attend to where they are going rather than to a device screen.
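For context on what these voice hooks look like to a developer, the sketch below shows how an app might register for the system taxi-reservation voice action Google documented for Android Wear; the activity name and booking logic are hypothetical placeholders, and the manifest wiring is described in the comment:

```java
import android.app.Activity;
import android.os.Bundle;

// Sketch: handling the "OK Google, call me a car" system voice intent.
// The activity would be declared in AndroidManifest.xml with an intent
// filter for ACTION_RESERVE_TAXI plus the DEFAULT category, so the
// system can route the recognized voice command to the app.
public class CallCarActivity extends Activity {
    // Voice action Google documented for taxi reservations on Wear.
    static final String ACTION_RESERVE_TAXI =
            "com.google.android.gms.actions.RESERVE_TAXI_RESERVATION";

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        if (ACTION_RESERVE_TAXI.equals(getIntent().getAction())) {
            requestRide(); // hypothetical hand-off to the app's booking flow
        }
        finish();
    }

    private void requestRide() {
        // e.g. queue a ride request with the app's backend service
    }
}
```

Notice the fixed-phrasing problem the author raises: the app reacts only to the canned intent the system recognizes, so anything more natural than “call me a car” depends on Google’s recognizer, not on the app.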

Bottom line, when/if a thoroughly detailed set of voice APIs is available for app development for the Android O/S, it is likely the pretty cards and swipe gestures on a touch screen won’t have the same value they apparently have today. So Android ISVs might want to wait a bit before diving much deeper into the “wearables” opportunity.

Ira Michael Blonder

© IMB Enterprises, Inc. & Ira Michael Blonder, 2014 All Rights Reserved