23 Feb

Apps for SharePoint 2013 carry their own set of implementation risks

Large organizations running Microsoft SharePoint on premises may be considering migrating their customization process from full trust solutions to a combination of HTML, CSS and JavaScript. Microsoft refers to this combination as the “SharePoint App Model”. A similar combination, the “Office App Model”, is also being promoted for requirements to modify the components of Microsoft’s Office suite (“Office”) to meet the unique requirements of specific organizations.

Despite what I refer to as a nearly “binary” presentation, in which the strengths of these app models (the “pluses”) are set in direct comparison with the weaknesses of their full trust solution ancestors (the “zeroes”), readers with similar interests will benefit from including a governance plan for customization along with the other migration components. Here is why:

jQuery is a popular function library for JavaScript. Because jQuery is actively supported in the user community, the library continues to evolve, and many different versions of it are in circulation. Not all versions expose the same features, so conflicts can arise when customizations built with earlier versions of the jQuery library are actively used alongside other customizations built with later versions.
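As a minimal sketch of the kind of conflict check a governance plan might mandate (the version floor “1.9.0” is a hypothetical policy value; jQuery really does expose its running version as the string `jQuery.fn.jquery`):

```javascript
// Compare a jQuery version string against a policy floor so a
// customization can refuse to run against a library copy that is
// too old. Version strings use jQuery's own "major.minor.patch" form.
function meetsVersionFloor(actual, floor) {
  var a = actual.split('.').map(Number);
  var f = floor.split('.').map(Number);
  for (var i = 0; i < 3; i++) {
    if ((a[i] || 0) > (f[i] || 0)) return true;
    if ((a[i] || 0) < (f[i] || 0)) return false;
  }
  return true; // versions are equal
}

// In a browser with jQuery loaded, a customization could guard itself:
// if (!meetsVersionFloor(jQuery.fn.jquery, '1.9.0')) { /* warn or bail out */ }
```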

The negative impact of these conflicts grows when a central IT organization steps back and opts to empower line of business (LoB) units to build their own customizations for a complex on-premises computing platform like SharePoint 2013. On the surface this approach may look like the correct one to take, especially if the stance has evolved after several years of an active BYOD policy.

Some proponents of DevOps may recommend this kind of flexible posture on the part of enterprise IT. But without central control over how jQuery libraries are implemented, the risk of a breakdown in computer processing takes on a more palpable shape. A far better policy calls for enterprise IT to arbitrate directly with LoBs on how customizations are to be managed. In fact, enterprise IT ought to publish a set of standards for how customizations are to be built with SharePoint and/or Office apps. Finally, a set of tools should be implemented (or developed, if none are available that fit the unique needs of a specific organization) capable of detecting processes running on internal on-premises computing systems, to ensure all app customizations conform with this policy.
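The detection tooling called for above could start with something as simple as scanning page markup for the jQuery files it loads. This is a hypothetical sketch; the filename pattern assumes jQuery’s conventional “jquery-x.y.z(.min).js” naming:

```javascript
// Report every distinct jQuery version referenced by <script> tags in a
// page, so a governance tool can flag pages that mix library versions.
function findJQueryVersions(html) {
  var re = /jquery-(\d+\.\d+(?:\.\d+)?)(?:\.min)?\.js/g;
  var versions = [];
  var m;
  while ((m = re.exec(html)) !== null) {
    if (versions.indexOf(m[1]) === -1) versions.push(m[1]);
  }
  return versions; // more than one entry signals a potential conflict
}
```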

Without this kind of governance plan, larger organizations will face much the same odds of a poor return on development investment from app model efforts as they would if they simply proceeded with “legacy” customization techniques.

Ira Michael Blonder

© IMB Enterprises, Inc. & Ira Michael Blonder, 2015 All Rights Reserved

14 Jan

IBM introduces a new mainframe computer targeted to some markets better served by clusters of smaller servers

IBM debuted its z Systems line of mainframe computers in January 2015. The product line targets a hot segment of enterprise computing consumers: organizations with a burning need to manage mobile devices, users availing themselves of cloud computing offers and, above all, secure online data processing.

Readers can learn more about this product line on its website, IBM z Systems. A quick glance at the marketing communications content reveals some popular and absolutely current computing themes:

  • mobile computing
  • enterprise social computing
  • real-time analytics and “in-transaction” analytics
  • secure, cloud computing

Video product presentations are available on the product website, along with a traditional datasheet in PDF format. A quick glance at the datasheet exposes a cluster approach to delivering the computing power of a typical high performance computing (HPC) system. No problem so far, but how does the introduction of a hardware platform with these capabilities align with IBM’s announced effort to become a major player in the public and private cloud market for IaaS, PaaS and SaaS? Does it make sense for developers building solutions for Hadoop, MapReduce, and other cluster architectures optimized for large numbers of comparatively smaller CPUs to focus on porting those applications over to IBM’s platform?

Notable on the datasheet is IBM’s suggestion about development platforms: the recommendation is Java. But much of the most promising app development activity, in, admittedly, very early 2015, is built on scripting languages, with JavaScript getting the most attention. The datasheet also makes substantial mention of traditional mainframe computing languages (COBOL).

So one has to question just how this product line adds value to IBM’s effort to catch up with its peers in the cloud computing business. Certainly a review of the website for these new products is recommended, given the themes articulated by the press releases about the z Systems computers. The most prominent of these are all allusions to mainframe computing, which, at present, seems to be something of an anathema. Big iron is, unfortunately, no longer the recommended way for most enterprise businesses to proceed, at least not for the ones already committed to Azure, AWS and Google Compute Engine.

Ira Michael Blonder

© IMB Enterprises, Inc. & Ira Michael Blonder, 2015 All Rights Reserved

7 Jan

Have node.js and similar web server platforms led consumers to think apps aren’t connected to the Internet?

The New York Times recently published a couple of articles by Conor Dougherty pointing to what Dougherty describes as diminished mobile app consumer access to the Internet. This shrinking online accessibility, Dougherty argues, can be attributed to the mobile apps themselves, which run fine without web browsing.

But the performance of every one of the apps mentioned in these articles, as I am sure Dougherty would agree, is entirely dependent on the Internet. The real issue for Dougherty, I would surmise, is the transparent nature of this performance, which somehow “happens” without any requirement for user intervention. Therefore, Dougherty argues, the quality of the personal computing experience for app consumers has been degraded.

Of course, none of this performance would be possible without “web server platforms” like node.js. These app components add web server functionality, so the apps can perform online without a web browser. There is not much new here, beyond the fact that node.js is an example of how to achieve this performance with JavaScript, a very popular scripting language at present.

The point I am trying to make is that developers and the ISVs supporting their efforts need to be sensitive to public perception. Dougherty is not writing in a vacuum; his articles are read because consumers are interested in the topics covered. So it is likely safe to say some segment of the consuming public feels the same way, regardless of whether that perception is actually correct.

Maintaining sensitivity to this type of public notion (perhaps “perception” is not the right word) should, in my opinion, translate into some type of communications effort (PR or MARCOM) to better inform consumers of how apps actually work. At the same time, perhaps developers and ISVs have an opportunity to bake in some of the “choice” Dougherty finds missing from the app computing experience. Dougherty longs for a “unifying link”. Would it make sense for someone to build in a method of bolting otherwise unconnected apps together, perhaps with some type of search feature? I am not saying yes or no, but perhaps someone would like to check it out.

If you find a market for these features, do let me know.

Ira Michael Blonder

© IMB Enterprises, Inc. & Ira Michael Blonder, 2015 All Rights Reserved

8 Dec

Comments on some of the ambiguity about the notion of big data

A number of tech markets, including enterprise computing, cloud, SaaS, PaaS, IaaS and IoT have demonstrated a voracious appetite for data management and analysis. Anyone following data management technology may get lost in the notion of “big data”.

I say lost, as an enormous amount of hype has been built up around the “theme” of “big data.” But a lot of long standing data management methods — relational database management systems (RDBMS) with a columnar architecture built to provide structure to data — work really well for enormous amounts of information. Readers may want to consider efforts like the Port Authority of New York and New Jersey and the toll road system it manages. How many millions of vehicle transactions occur on a monthly basis? In turn, how many billions of bits of data does the history of vehicle transactions through toll machines represent? Has this enormous amount of data proven to be unmanageable?

The answers to each of the questions just presented all support an argument for RDBMS and Structured Query Language (SQL) as a useful method of working with enormous amounts of data. These questions and answers echo across a very wide range of applications; for example, the purview of the U.S. National Weather Service, or the universe of drugs managed by the U.S. Food and Drug Administration.

So there is nothing inherently radical about the notion of “big data”, at least if the notion is correctly understood as merely the set of methods commonly in use to manage data. In fact (and this is where, in my opinion, commentator hyperbole has clouded the whole question of just what is changing, in a truly radical way, about data management methods), the notion of big data is NOT correctly understood as I’ve just presented it. The “big” piece of “big data” appears to be meant to represent a scalable data management architecture, best typified by Apache Hadoop. Anyone reading the presentation on the Hadoop web site can’t help but understand the role of clusters of servers in the Hadoop solution. Clusters of servers, in turn, provide a perfect rationale for the Apache project to provide the foundation for Hadoop.
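A toy illustration of the two phases Hadoop scales across those clusters, here run on one machine with plain JavaScript (a word count, the canonical map/reduce example):

```javascript
// Map phase: emit (word, 1) pairs; reduce phase: sum the counts per word.
// Hadoop distributes exactly these two phases across many small servers.
function mapReduceWordCount(lines) {
  var pairs = [];
  lines.forEach(function (line) {
    line.toLowerCase().split(/\W+/).forEach(function (word) {
      if (word) pairs.push([word, 1]);
    });
  });
  var counts = {};
  pairs.forEach(function (pair) {
    counts[pair[0]] = (counts[pair[0]] || 0) + pair[1];
  });
  return counts;
}
```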

Ira Michael Blonder

© IMB Enterprises, Inc. & Ira Michael Blonder, 2014 All Rights Reserved

24 Mar

Does it Make Sense for Enterprise IT to Serve Their IDE Needs from SaaS in the Cloud?

On February 28, 2014, the CloudShare Community Blog published Chris Riley’s interview with Ken Walker of IBM®. The topic of the discussion was the recent increase in enterprise business interest in serving their needs for Integrated Development Environments (IDEs) from the cloud, and the rationale behind it.

Walker voiced his opinion on the rationale as follows: “The capabilities of SaaS and PaaS platforms and the power of the underlying JavaScript runtimes in every browser are at the tipping point where there’s no point in Developers maintaining private tool chains on their own machines.” (quoted from the February 28, 2014 CloudShare interview linked above).

Add to Walker’s opinion a recent substantial change in direction for the Microsoft® development model for its Office products, and enterprise businesses will likely find a lot of reasons to seriously consider cloud SaaS offers as they grapple with just how best to provision IDEs for organization-specific requirements for custom software.

If these same enterprise IT organizations have already decided to reduce their expense for new desktop software by implementing Desktop as a Service solutions like VMware’s “Desktops in the Cloud” notion, then developers may find lots of reasons to abandon desktop computers with lots of RAM, very fast solid state drives, and numerous Virtual Machines (VMs) for thin clients persistently connected to the Cloud IDEs Walker and Riley discuss in their interview.

Anyone with an interest in following this trend will want to closely review sales reports from leading, publicly traded PC OEMs. Any substantial drop in high-end PC sales may indicate increased use of Cloud IDEs by enterprise-class businesses.

Just a note on the changes Microsoft introduced with its new 2013 development model: the development emphasis for the Office 2013 components, including SharePoint on premises and in the cloud via Office 365, is squarely on JavaScript (and the Open Source jQuery project) and HTML. I think this change is, potentially, very good news for business customers looking to bolster server defenses against online security threats.

In theory, when processes are written for browsers, the need for trusted solutions dependent on server cycles diminishes. Of course, every custom trusted solution brings its own security risks, which can then become threats to the server itself.
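As a small sketch of what browser-side customization looks like under this model: the `/_api/web/lists` endpoint path and the `odata=verbose` Accept header are SharePoint 2013’s documented REST conventions, while the site URL is a placeholder.

```javascript
// Build the request options for reading a site's lists over SharePoint's
// REST interface; the call itself would run in the browser (e.g. via
// jQuery's $.ajax), not as a trusted solution on the server.
function listsRequest(siteUrl) {
  return {
    url: siteUrl.replace(/\/$/, '') + '/_api/web/lists',
    headers: { Accept: 'application/json;odata=verbose' }
  };
}

// In the page: $.ajax(listsRequest('https://contoso.example'))
//   .done(function (data) { /* data.d.results holds the lists */ });
```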

Ira Michael Blonder

© IMB Enterprises, Inc. & Ira Michael Blonder, 2014 All Rights Reserved