14
Jan

IBM introduces a new mainframe computer targeted at markets better served by clusters of smaller servers

IBM debuted its z Systems line of mainframe computers in January 2015. The product line is targeted at a hot segment of enterprise computing consumers: organizations with a burning need to manage mobile devices, support users availing themselves of cloud computing offers and, above all, handle secure online data processing.

Readers can learn more about this product line on its website, IBM z Systems. A quick glance at the marketing communications content reveals some popular and thoroughly current computing themes:

  • mobile computing
  • enterprise social computing
  • real-time analytics and “in-transaction” analytics
  • secure, cloud computing

Video product presentations are available on the product website, along with a traditional datasheet in PDF format. A quick glance at the datasheet exposes a cluster approach to delivering the computing power of a typical high performance computing (HPC) system. No problem so far, but how does the introduction of a hardware computing platform with these capabilities align with IBM’s announced effort to become a major player in the public and private cloud market for IaaS, PaaS and SaaS? Does it make sense for developers building solutions for Hadoop, MapReduce, and other cluster architectures optimized for large numbers of comparatively small CPUs to focus on porting these applications over to IBM’s platform?

Notable on the datasheet is IBM’s suggestion about development platforms. The recommendation is for Java. But much of the most promising app development activity, at least as of very early 2015, is built on scripting languages, with JavaScript getting the most attention. The datasheet also makes substantial mention of traditional mainframe computing languages such as COBOL.

So one has to question just how this product line adds value to IBM’s effort to catch up with its peers in the cloud computing business. Certainly a review of the website for these new products is recommended, given the themes articulated by the press releases about the z Systems computers. The most prominent of these are all allusions to mainframe computing, which, of late, seems to be something of an anathema. Big iron is, unfortunately, no longer the recommended way for most enterprise businesses to proceed, at least not for the ones already committed to Azure, AWS and Google Compute Engine.

Ira Michael Blonder

© IMB Enterprises, Inc. & Ira Michael Blonder, 2015 All Rights Reserved

17
Dec

Google Debuts Cloud Dataflow at Google I/O 2014

The debut of Google Cloud Dataflow, the replacement for Google MapReduce, can be found at the very end of the 2.5-hour-plus webcast of the Keynote Presentation from Google I/O 2014. Readers unfamiliar with MapReduce, but avidly interested in the big data enterprise computing trend, need to understand MapReduce as the programming model at the foundation of today’s Apache Hadoop project. Without MapReduce, the Apache Hadoop project would not exist. So Google MapReduce is a software package worth some study, as is Cloud Dataflow.
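For readers who have never looked under the hood, a minimal sketch of the MapReduce programming model may help. This is not Google’s implementation, nor Hadoop’s; it is just a local Python illustration of the three phases (map, shuffle, reduce) that every MapReduce job, word counting included, passes through:

```python
from collections import defaultdict

# Map phase: emit a (word, 1) pair for every word in every document.
def map_phase(documents):
    for doc in documents:
        for word in doc.split():
            yield (word.lower(), 1)

# Shuffle phase: group values by key, as the framework does when it
# routes mapper output to reducer nodes across the cluster.
def shuffle_phase(pairs):
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

# Reduce phase: collapse each key's list of values to a final result.
def reduce_phase(grouped):
    return {word: sum(counts) for word, counts in grouped.items()}

if __name__ == "__main__":
    docs = ["big data on big clusters",
            "clusters of servers process big data"]
    print(reduce_phase(shuffle_phase(map_phase(docs))))
    # {'big': 3, 'data': 2, 'on': 1, 'clusters': 2, 'of': 1,
    #  'servers': 1, 'process': 1}
```

In a real deployment the map and reduce workers run on separate machines in a cluster; the framework handles the shuffle, fault tolerance and data placement, which is precisely what made the model attractive for commodity hardware.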

But wait, there’s more. As Urs Hölzle, Senior Vice President, Technical Infrastructure, introduces Google Cloud Dataflow, his audience is also informed about Google’s role in the creation of another of today’s biggest enterprise data analytics approaches — NoSQL (“Not only SQL”). He casually informs his audience (the segue is a simple “by the way”) that Google invented NoSQL.

I hope readers will get a feel for where I’m headed with these comments about Google’s historical role in the creation of two of the very big trends in enterprise computing in late 2014. I’m perplexed as to why Google would bury this presentation at the very end of the Keynote. Why would Google prefer to cover its pioneering role in these very hot computing trends with a thick fog? Few business decision-makers, if any, will be likely to pierce this veil of obscurity as they search for best-in-class methods of incorporating clusters of servers in a parallel processing role (in other words, “big data”) to better address the task of analyzing text data scraped from web pages for corporate sites (“NoSQL”).

On the other hand, I’m also impressed by the potential upside Google can realize by removing this fog. Are they likely to move in this direction? I think they are, based upon some of the information they reported to the U.S. SEC in their most recent 10-Q filing, for Q3 2014. Year-over-year, the “Other Revenues” segment of Google’s revenue stream grew by roughly 50%, from $1,230 million in 2013 to $1,841 million in 2014. Any and all revenue Google realizes from Google Cloud and its related components (which, by the way, include Cloud Dataflow) is included in this “Other Revenues” segment of the report. For the nine months ending September 30, 2014, the same revenue segment increased from $3,325 million in 2013 to $4,991 million in 2014. Pretty impressive stuff, and not likely to diminish with a revamped market message powering “Google at Work”, and Amit Singh (late of Oracle) at the head of the effort.
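For readers who want to check the arithmetic behind those growth claims, here is a quick sketch (the inputs are the 10-Q figures quoted above, in millions of dollars):

```python
def yoy_growth_pct(prior, current):
    """Year-over-year growth, expressed as a percentage."""
    return (current - prior) / prior * 100

# "Other Revenues" figures (in $ millions) quoted above from the 10-Q.
print(f"Q3 YoY growth: {yoy_growth_pct(1230, 1841):.1f}%")          # 49.7%
print(f"Nine-month YoY growth: {yoy_growth_pct(3325, 4991):.1f}%")  # 50.1%
```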

Ira Michael Blonder

© IMB Enterprises, Inc. & Ira Michael Blonder, 2014 All Rights Reserved

10
Dec

Use Hadoop to collect and analyze data with clusters of computer servers

Customers with large amounts of data, who are capable of supporting a distributed server architecture organized as clusters, can benefit from a decision to implement Apache Hadoop® as the solution. The key operating principle is the notion of clusters. Readers eager to learn more about this benefit may want to take a few moments to review a short animation, titled Hadoop* Server Clusters with 10 Gigabit Intel® Ethernet, which is available for public viewing on a web site published by Intel.
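To make the cluster principle concrete, here is a toy sketch; local Python processes stand in for cluster nodes, so this is an illustration only, not Hadoop itself. The pattern is the one the animation describes: split a large dataset into chunks, let each node work on its own chunk independently, then merge the partial results.

```python
from collections import Counter
from multiprocessing import Pool

def process_chunk(lines):
    # Each "node" counts words only in the chunk of data it holds locally.
    counter = Counter()
    for line in lines:
        counter.update(line.lower().split())
    return counter

if __name__ == "__main__":
    data = ["server clusters scale out",
            "clusters process data in parallel"] * 1000
    nodes = 4
    chunks = [data[i::nodes] for i in range(nodes)]  # spread lines across nodes
    with Pool(nodes) as pool:
        partial_counts = pool.map(process_chunk, chunks)
    totals = sum(partial_counts, Counter())  # merge the partial results
    print(totals.most_common(3))
```

Hadoop applies the same divide, process locally, then merge pattern, but across racks of commodity servers, with its distributed file system keeping each chunk of data close to the CPUs that process it.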

I’m not recommending the video for its presentation of Intel’s high-speed gigabit networking hardware, which takes up roughly the last minute or two of the animation. Rather, the opening section does more to show viewers how Apache Hadoop is uniquely capable of adding value to any effort to implement data management and analytics architectures on comparatively low-cost server hardware than most of the “big data” hype otherwise available online.

For readers looking for even more help drilling down to just what the value-add may amount to, should a decision be made to implement Hadoop, a quick visit to a page on the MapR© web site titled What is Apache™ Hadoop®? will likely be worth the effort. The short presentation on the page, in my opinion, provides useful information about why clusters of servers are uniquely capable of serving as the repository for an enormous number of web pages filled with information.

Certainly market consumers have opted to implement Hadoop for many purposes other than its original “reason to be” as an evolution of “a new style of data processing known as MapReduce” (which was developed by Google), as the MapR presentation points out. These implementations provide much of the support for arguments in favor of the notion of “big data”, at least the arguments short on hype and long on sensibility.

What’s missing from the MapR presentation is customer success stories and case studies. Fortunately, anyone looking for this type of descriptive content on just how real-life businesses can benefit from an implementation of Hadoop can simply visit a page of the Hortonworks web site titled They Do Hadoop and watch some of the videos.

Ira Michael Blonder

© IMB Enterprises, Inc. & Ira Michael Blonder, 2014 All Rights Reserved