IBM introduces a new mainframe computer targeted at markets arguably better served by clusters of smaller servers

IBM debuted its z Systems line of mainframe computers in January 2015. The product line is targeted at a hot segment of enterprise computing consumers: organizations with a burning need to manage mobile devices, serve users of cloud computing offers and, above all, process online data securely.

Readers can learn more about this product line on its website, IBM z Systems (http://www-03.ibm.com/systems/z/announcement.html?lnk=ushpls1). A quick glance at the marketing communications content reveals some popular and absolutely current computing themes:

  • mobile computing
  • enterprise social computing
  • real-time analytics and “in-transaction” analytics
  • secure, cloud computing

Video product presentations are available on the product website, along with a traditional datasheet in PDF format. A quick glance at the datasheet exposes a cluster approach to delivering the computing power of a typical high performance computing (HPC) system. No problem so far, but how does the introduction of a hardware computing platform with these capabilities align with IBM’s announced effort to become a major player in the public and private cloud market for IaaS, PaaS and SaaS? Does it make sense for developers building solutions for Hadoop, MapReduce, and other cluster architectures optimized for large numbers of comparatively small CPUs to focus on porting these applications over to IBM’s platform?
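The cluster architectures mentioned above are built around the map/shuffle/reduce pattern. A minimal, single-process sketch of that pattern (function names and sample documents are hypothetical, chosen only to illustrate the phases a framework like Hadoop distributes across many small CPUs):

```python
from collections import defaultdict

# Toy illustration of the MapReduce pattern; here all three phases
# run in one process, whereas Hadoop spreads them across a cluster.

def map_phase(documents):
    # Emit a (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.lower().split():
            yield (word, 1)

def shuffle_phase(pairs):
    # Group emitted pairs by key, as the framework would between nodes.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Sum the counts collected for each word.
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data on big iron", "big clusters of small servers"]
counts = reduce_phase(shuffle_phase(map_phase(docs)))
```

The pattern's appeal is that map and reduce work can be spread over many cheap machines, which is exactly the design point the question above weighs against a single large mainframe.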

Notable on the datasheet is IBM’s suggestion about development platforms. The recommendation is for Java. But much of the most promising app development activity, admittedly as of very early 2015, is built on scripting languages, with JavaScript getting the most attention. The datasheet also makes substantial mention of traditional mainframe computing languages (COBOL).

So one needs to question just how this product line adds value to IBM’s effort to catch up with its peers in the cloud computing business. Certainly a review of the website for these new products is recommended, given the themes articulated in the press releases about the z Systems computers. The most prominent of these are all allusions to mainframe computing, which, in 2015, seems to be something of an anathema. Big iron is, unfortunately, no longer the recommended way forward for most enterprise businesses, at least not for the ones already committed to Azure, AWS and Google Compute Engine.

Ira Michael Blonder

© IMB Enterprises, Inc. & Ira Michael Blonder, 2015 All Rights Reserved


Success Stories and Case Studies do serve a purpose for enterprise technology consumers

If ISVs with offerings targeted at enterprise computing markets needed any more indication of the importance of case studies and success stories, they likely got it in an article written by Elizabeth Dwoskin, published on December 16, 2014 on The Wall Street Journal’s website.

The title of Dwoskin’s article is The Joys and Hype of Software Called Hadoop (http://www.wsj.com/articles/the-joys-and-hype-of-software-called-hadoop-1418777627?mod=LS1). The reason her article should alert any ISVs still in the dark as to why they absolutely require a marketing communications effort capable of producing success stories and case studies can be found in the following quote:

  • “Yet companies that have tried to use Hadoop have met with frustration. Bank of New York Mellon used it to locate glitches in a trading system. It worked well enough on a small scale, but it slowed to a crawl when many employees tried to access it at once, and few of the company’s 13,000 information-technology workers had the expertise to troubleshoot it. David Gleason, the bank’s chief data officer at the time, said that while he was a proponent of Hadoop, ‘it wasn’t ready for prime time.’” (quoted in its entirety from Dwoskin’s article in the WSJ. I have provided a link to the entire article, above, and encourage readers to spend some time on it)

This comment from a large enterprise consumer, BNY Mellon, which can be read as less than positive, will likely do a lot to encourage peers to look a lot closer at Hadoop prior to moving forward on an implementation.

Bottom line: enterprise businesses do not like to proceed where their peers have hit obstacles like the one Gleason recounts in his comment. Peer comparisons are, arguably, a very important activity for enterprise business consumers. So ISVs working with Hadoop on big data offers, or with NoSQL databases and related analytics, need to make the effort to line up positive comments about consumer experiences with their products.

I recently wrote a set of posts to this blog on Big Data, NoSQL and JSON, and must admit to experiencing some difficulty finding the case studies and success stories I needed to gain a perspective on just how enterprise consumers have been using products presented as solutions for these computing trends. Hortonworks (http://www.hortonworks.com), on the other hand, is an exception. So I would encourage any readers after the same type of testimonial content about customer experience with products to visit Hortonworks on the web.

Ira Michael Blonder

© IMB Enterprises, Inc. & Ira Michael Blonder, 2014 All Rights Reserved


The NoSQL notion suffers from some of the same ambiguity plaguing the notion of big data

Readers interested in finding out what NoSQL is all about will benefit from simply developing some familiarity with the definition of this acronym. NoSQL stands for “not only SQL”. I found this definition to be very helpful, as it corrected my first misunderstanding about this notion. I thought NoSQL referred to a set of software tools designed to work exclusively with text-oriented document databases lacking the columnar table structure their Structured Query Language (SQL) siblings thrive upon.

But my understanding was wrong, which, unfortunately for businesses championing a NoSQL approach, may also be the case for a lot of the enterprise user segment of the market for NoSQL analytics and the tools required for their delivery. mongoDB (http://www.mongodb.com/nosql-explained) is an example of a database built along NoSQL lines.

But, as the cliche goes, the best of intentions can go astray, as is the case, in my opinion, for the mongoDB definition. The average consumer of enterprise computing solutions built to work with social media conversations culled from lots of web pages (likely a chief marketing officer for a popular consumer brand name) isn’t likely to be able to understand how “Document databases pair each key with a complex data structure known as a document. Documents can contain many different key-value pairs, or key-array pairs, or even nested documents” (quoted from the mongoDB web page presentation).
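For readers in that position, the definition mongoDB offers is easier to grasp with a concrete example. The sketch below (field names and values are hypothetical) shows the three constructs the quoted passage names, expressed as a plain Python dict, which is structurally the same as a JSON document:

```python
# One "document" for a social media mention, of the kind a marketing
# team might collect: simple key-value pairs, a key-array pair, and
# a nested document, per the mongoDB description quoted above.
campaign_mention = {
    "brand": "AcmeCola",                  # simple key-value pair
    "channels": ["twitter", "facebook"],  # key-array pair
    "author": {                           # nested document
        "handle": "@colafan",
        "followers": 1824,
    },
    "text": "Loving the new AcmeCola flavor!",
}

# Unlike a relational row, fields can vary from document to document
# and can nest freely; access simply walks the nesting.
nested_handle = campaign_mention["author"]["handle"]
```

In other words, where an RDBMS would spread this record across several joined tables, a document store keeps it as one self-describing unit.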

Further, characterizing the choices facing the enterprise consumer as either “RDBMS” or “non-RDBMS” isn’t going to be helpful if the literal definition of the NoSQL acronym is applied. As MapR© points out on its website, an optimum approach to implementing NoSQL analytics is to combine SQL and text query tools built with JSON components to digest the same data, which, admittedly, may be incorporated into a mongoDB database, but came originally from an RDBMS.
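That “same data, two tool sets” point is easiest to see in the denormalization step itself. A minimal sketch, assuming hypothetical customer and order rows as they might come out of an RDBMS, of how relational data becomes the kind of nested JSON document a store like mongoDB holds:

```python
import json

# Hypothetical rows, as fetched from two relational tables.
customers = [{"id": 1, "name": "BNY Mellon"}]
orders = [
    {"customer_id": 1, "sku": "Z13", "qty": 2},
    {"customer_id": 1, "sku": "X86", "qty": 40},
]

def to_document(customer, all_orders):
    # Denormalize: embed the customer's orders as a nested array,
    # turning joined relational rows into one JSON-style document.
    return {
        "name": customer["name"],
        "orders": [o for o in all_orders
                   if o["customer_id"] == customer["id"]],
    }

doc = to_document(customers[0], orders)
as_json = json.dumps(doc)  # ready for a document store, or a JSON query tool
```

The data itself is unchanged; SQL tools can keep querying the source tables while JSON-oriented tools digest the document form.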

What’s even more surprising about the page on the mongoDB website is the light it sheds on a programming effort by a much larger, and much more mature ISV, namely Microsoft: “Graph stores are used to store information about networks, such as social connections. Graph stores include Neo4J and HyperGraphDB”. Hmmm . . . Now “Office Graph”, the technology underpinning “Delve”, makes a lot more sense.

Ira Michael Blonder

© IMB Enterprises, Inc. & Ira Michael Blonder, 2014 All Rights Reserved


Hadoop attracts support from Microsoft and Intel

The Apache Hadoop project (http://hadoop.apache.org/#What+Is+Apache+Hadoop%3F) “develops open-source software for reliable, scalable, distributed computing” (quoted from the “What is Apache Hadoop?” section of the site). So it makes sense for Microsoft and Intel to enthusiastically support the project. Microsoft is deeply committed to its cloud IaaS effort, Azure (http://www.azure.com), and one of the prime revenue generators for Intel is its Data Center business (http://www.intel.com/content/www/us/en/search.html?keyword=data%20center). Azure and Intel’s Data Center business are both all about lots and lots of computer servers. The former consumes servers, while the latter provides the CPUs driving them.

As I wrote in the previous post to this blog, it’s likely a majority of the enterprise consumer segment of the tech reader community maintains a questionable understanding of the notion of “big data”. But, when the notion is correctly understood, it should not be a stretch for readers to understand why the Apache Hadoop project (or its OpenStack (http://www.openstack.org) competitor) is positioned at the very core of this technology trend.
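The “reliable, scalable, distributed computing” the project describes rests on one simple idea: spreading records deterministically across many nodes. A toy sketch of that placement step, assuming a hypothetical three-node cluster (the node names and hash scheme are illustrative only, not Hadoop’s actual algorithm):

```python
# Hash partitioning: the basic mechanism a distributed framework uses
# to decide which worker node holds (and processes) each record.
NODES = ["node-a", "node-b", "node-c"]

def assign_node(record_key):
    # A stable hash of the key maps it to one node; the same key
    # always lands on the same node, so lookups need no directory.
    return NODES[sum(record_key.encode()) % len(NODES)]

placement = {key: assign_node(key) for key in ["user:1", "user:2", "user:3"]}
```

Because placement is a pure function of the key, adding data just spreads load across more machines, which is why the trend favors many small servers over one big one.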

Microsoft and Intel are not the only mature ISVs looking to benefit from big data. IBM and EMC are two other champions with solutions on the market to add value for enterprises looking to implement Hadoop.

Intel ostensibly understands the ambiguity of the notion of “big data”, and the imperative of providing the enterprise business consumer with a clearer understanding of just what this buzzword is really all about. A section of the Intel website, titled Big Data, What It Is, Why You Should Care, and How Companies Gain Competitive Advantage (http://www.intel.com/content/www/us/en/big-data/big-data-101-animation.html), is an attempt to provide this information.

But Intel’s effort to educate the consumer, in my opinion, falls into the same swamp as a lot of the hype before it, and so cannot deliver on its promise. The amount of data may be growing exponentially, as the opening of the short Intel animation on the topic contends, but there are a lot of mature ISVs (Oracle, IBM, Microsoft, etc.) with relational database management systems, designed for pricey big-server hardware, which are capable of providing a columnar structure for the data.

Even when “unstructured data” is mentioned, the argument is shaky. There are solutions for enterprise consumers, like Microsoft SharePoint (specifically, the Term Store service), which are designed to provide a method of effectively pouring text data into an RDBMS, for example SQL Server (the terms are added to SQL Server and are used to tag the text strings identified in unstructured data).
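The tagging approach described above can be sketched in a few lines. This is not SharePoint’s API, just an illustration of the idea, assuming a hypothetical managed term set and a `tag_text` helper: free text is matched against the terms, and the resulting (document, term) rows are exactly the kind of structured output an RDBMS table can hold:

```python
# Hypothetical managed term set, playing the role of SharePoint's
# Term Store vocabulary.
TERM_SET = {"mainframe", "hadoop", "cloud", "analytics"}

def tag_text(doc_id, text):
    # Normalize the unstructured text, then emit one (doc_id, term)
    # row per managed term found; these rows fit a relational table.
    words = {w.strip(".,").lower() for w in text.split()}
    return [(doc_id, term) for term in sorted(TERM_SET & words)]

rows = tag_text(42, "IBM pairs mainframe hardware with cloud analytics.")
```

The point is the direction of travel: a curated vocabulary turns unstructured text into columnar rows, letting an RDBMS index and query what began as free text.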

I am not arguing for the sole use of traditional RDBMSs, with SQL tools, to manage a data universe experiencing exponential growth. Rather, I think big data proponents (and Hadoop champions) need to take a closer look at the real benefits of clustering servers, and then articulate that message for their enterprise computing audience.

Ira Michael Blonder

© IMB Enterprises, Inc. & Ira Michael Blonder, 2014 All Rights Reserved


As More Enterprise Businesses Embrace Hadoop, Intel Stands to Benefit

Of the 42 members of Hadoop’s Project Management Committee (http://hadoop.apache.org/who.html), 8 are directly affiliated with Cloudera®, and another with Intel®. Patrick Hunt, an engineer at Cloudera, appears to have played a key role in the development of a keyword search feature for Hadoop, which is not a trivial achievement for a platform designed for unstructured data. Intel has an investment in Cloudera. Therefore, Intel should benefit as more organizations choose to proceed with unstructured data, with Hadoop as their repository.
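Why keyword search over unstructured data is non-trivial becomes clearer from the core data structure it requires. A minimal sketch of an inverted index (the document texts and ids below are hypothetical; a production system must build and keep this structure consistent across a whole cluster):

```python
from collections import defaultdict

# An inverted index maps each word to the set of documents containing
# it; this is the heart of keyword search over unstructured text.
def build_index(documents):
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

docs = {1: "trading system glitches", 2: "trading at scale"}
index = build_index(docs)
hits = sorted(index["trading"])  # doc ids matching the keyword
```

Building this in one process is easy; distributing it over petabytes of unstructured data, and keeping queries fast, is the hard part of the achievement credited above.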

Some prominent online businesses, including:

  • Amazon
  • eBay
  • Facebook
  • Twitter
  • Spotify

have made major commitments to Hadoop.

Readers are encouraged to review Who uses Hadoop? (http://wiki.apache.org/hadoop/PoweredBy) to familiarize themselves with the size of an average Hadoop implementation. Of course, very large repositories of data like these require a lot of CPU resources for processing. As the leading manufacturer of server CPUs, Intel benefits from all of this need for computing power, regardless of whether an organization implementing Hadoop runs it on Apple OS X, Ubuntu, or another Linux flavor. The recommended hardware for each of these is Intel.

The tools offered by Cloudera for managing Hadoop data repositories (http://www.cloudera.com/content/cloudera/en/solutions/enterprise-solutions.html) are designed to provide enterprise businesses with familiar features and procedures. Since most of these enterprise data centers are already full of Intel hardware, Cloudera can be seen, perhaps, as another method Intel can leverage to maintain its position in these same installations.

What bearing does all of the above have on discussions about large data centers, a need for better power management, and the likelihood of hardware OEMs building solutions on the ARM architecture capturing substantial share? Given the importance of Hadoop to the leading cloud IaaS vendor, Amazon, as well as to Microsoft Azure (http://azure.microsoft.com/en-us/solutions/big-data/?WT.mc_id=azurebg_us_sem_bing_br_solutions_nontest_bigdata&WT.srch=1), it doesn’t appear likely server cores running the ARM architecture will become the standard in these environments any time soon.

Further, Intel is certainly not standing by; it is working very actively to produce more power-efficient hardware in very small form factors. One can argue Microsoft’s Surface Pro 3, powered by an Intel Core i3, i5, or even i7, is a tangible example of how much progress Intel has made toward satisfying consumer appetite for power-thrifty, extremely thin computing devices.

Ira Michael Blonder (https://plus.google.com/108970003169613491972/posts?tab=XX?rel=author)

© IMB Enterprises, Inc. & Ira Michael Blonder, 2014 All Rights Reserved