2 Apr

Frequency and intensity of successful malicious exploits of online data call for a pooling of information among impacted parties

When hackers obtain otherwise legitimate credentials to online sites and the data repositories they contain, their efforts to depart with data they do not own become far more likely to succeed, perhaps unstoppable. It therefore makes sense for parties impacted by these attacks to pool their information, so a new level of defense can be promptly implemented against further successful exploits built on the same credentials.

Unfortunately, this appears to be precisely the condition in place in March 2015. On Sunday, March 29, 2015, the online edition of the Wall Street Journal ran a story written by the Associated Press titled Some British Airways Frequent-Flier Accounts Hacked (http://www.wsj.com/articles/some-british-airways-frequent-flier-accounts-hacked-1427598805?KEYWORDS=british+airways). Notable in the article is mention of what appears to be the method the hackers used to access the data: “The breach apparently was the result of a third party using information obtained elsewhere on the Internet”.

Then, through what looks like a brute force method of simply trying credential set after credential set against the access controls at the perimeter of the British Airways web site, the hackers eventually succeeded. Tellingly, the Associated Press writers note this attack is apparently the fourth such recent attempt. The other attempts compromised data owned by the “Hilton and Starwood” hotel brands and “United and American airlines”.

It is very hard to defend a data repository against requests for access based on legitimate credentials. Sure, processes can be implemented to detect brute force access methods and to deny access, even to holders of legitimate credentials, when credentials are presented within the context of a brute force attack. But what if the “automated process” mentioned by the Associated Press amounted to a substantially more sophisticated tactic than rapid, repeated completion of an online site access form? A brute force attack would be much harder to detect should it transpire over days, or even weeks.
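To make the detection problem concrete, here is a minimal sketch, in JavaScript, of a failure counter built around a deliberately long observation window, so a “low and slow” campaign still trips the threshold. The two-week window, the threshold of twenty, and the in-memory store are all illustrative assumptions, not a vetted control:

```javascript
// Count failed logins per account over a long window, so a slow attack
// spread over days or weeks is still detected.
const WINDOW_MS = 14 * 24 * 60 * 60 * 1000; // look back two weeks, not minutes
const MAX_FAILURES = 20;
const failures = new Map(); // account name -> timestamps of failed attempts

function recordFailedLogin(account) {
  const now = Date.now();
  const recent = (failures.get(account) || []).filter(t => now - t < WINDOW_MS);
  recent.push(now);
  failures.set(account, recent);
  return recent.length >= MAX_FAILURES; // true => lock the account or challenge
}
```

A real deployment would persist these counts and key them by credential and source network as well, but the principle is the same: the observation window has to outlast the attacker's patience.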

Regardless of how one argues data owners should defend themselves against these types of attacks, the substantial value of data consortiums, literally groups pooling data about attacks, as a defense method should pass muster. One can argue law enforcement agencies already provide this type of knowledge “beyond the wall” and should be able to play this role. But there is another aspect to the potential of a data consortium for online data security, an opportunity similar to the concept of Key Risk Indicators (KRIs) as applied in Operational Risk Management (ORM) solutions for global financial businesses. This application of a data consortium would not fall within the purview of a decision to look to law enforcement for “environmentally relevant” data about similar data security breaches. I have some experience with ORM solutions, including KRIs, and would be glad to speak with readers interested in hearing more about this notion. Please contact me to discuss.
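As a concrete illustration of what the pooled defense itself might look like, consider this hypothetical sketch: consortium members publish salted hashes of credential pairs observed in attacks, and each member screens incoming logins against the pooled set. Every name and detail below is invented for the example:

```javascript
const crypto = require('crypto');

// Hypothetical consortium feed: peers share hashes of credential pairs
// seen in attacks elsewhere; no member ever sees another's raw data.
const CONSORTIUM_SALT = 'shared-by-consortium-members'; // illustrative
const compromised = new Set(); // refreshed periodically from the feed

function credentialDigest(username, password) {
  return crypto.createHash('sha256')
    .update(`${CONSORTIUM_SALT}:${username}:${password}`)
    .digest('hex');
}

function isKnownCompromised(username, password) {
  // true => deny access and force a reset, even though the pair is "valid"
  return compromised.has(credentialDigest(username, password));
}
```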

Ira Michael Blonder

© IMB Enterprises, Inc. & Ira Michael Blonder, 2015 All Rights Reserved

29 Jan

Personal agents and artificial intelligence may transform enormous amounts of information into manageable resources

Peter Holley of the Washington Post captured some thoughts Bill Gates articulated during a Reddit “Ask Me Anything” session, on the topic of a Microsoft project called the “Personal Agent”. Holley’s clips appear in an article titled Bill Gates on dangers of artificial intelligence: ‘I don’t understand why some people are not concerned’ (http://www.washingtonpost.com/blogs/the-switch/wp/2015/01/28/bill-gates-on-dangers-of-artificial-intelligence-dont-understand-why-some-people-are-not-concerned/). Holley writes: “He went on to highlight a Microsoft project known as the ‘Personal Agent,’ which is being designed to help people manage their memory, attention and focus.”

But Holley doesn’t note the project has been discussed before, notably by Microsoft CEO Satya Nadella. Nadella touched on the very same theme on July 10, 2014, in his letter to Microsoft employees, titled Bold Ambition & Our Core (http://news.microsoft.com/ceo/index.html). Here is the quote from Nadella: “Computing is ubiquitous and experiences span devices and exhibit ambient intelligence. Billions of sensors, screens and devices – in conference rooms, living rooms, cities, cars, phones, PCs – are forming a vast network and streams of data that simply disappear into the background of our lives. This computing power will digitize nearly everything around us and will derive insights from all of the data being generated by interactions among people and between people and machines. We are moving from a world where computing power was scarce to a place where it now is almost limitless, and where the true scarce commodity is increasingly human attention.”

I wrote earlier in this blog on these comments of Nadella’s. My formal education includes a Master’s degree in English Literature. I spent a lot of time working on the poems of Samuel Taylor Coleridge and couldn’t help noting Coleridge’s prescience when he wrote in The Rime of the Ancient Mariner: “Water, water, every where, / Nor any drop to drink”.

I think, in 2015, a lot of us are parched mariners, dying of thirst in a world flooded with too much information to be manageable. So, where Holley reads Gates’ comments as a portrayal of Microsoft’s personal agent as a kind of personal tune-up, I read them as depicting the components of a solution packed with artificial intelligence: a solution designed to sort and prioritize information into useful, digestible chunks, which can give the user a reliable basis for beneficial activity.
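It may help to imagine, in purely speculative terms, what the smallest version of such triage could look like in code; the scoring rules and fields below are invented for the example and imply nothing about Microsoft's actual design:

```javascript
// A speculative sketch of "personal agent" triage: score incoming items
// against a user's interests and surface only a short, digestible list.
function triage(items, interests) {
  return items
    .map(item => ({
      ...item,
      score: (interests.has(item.topic) ? 2 : 0) + (item.fromKnownSender ? 1 : 0),
    }))
    .filter(item => item.score > 0)      // discard the noise entirely
    .sort((a, b) => b.score - a.score)   // most relevant first
    .slice(0, 5);                        // a manageable chunk, not the flood
}

const shortlist = triage(
  [{ topic: 'budget', fromKnownSender: true }, { topic: 'spam', fromKnownSender: false }],
  new Set(['budget'])
); // => only the budget item survives
```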

Cortana, Siri, and Google Now are three very early stage examples of this type of effort, each with very limited results so far. But perhaps these tools will become useful in time.

Ira Michael Blonder

© IMB Enterprises, Inc. & Ira Michael Blonder, 2015 All Rights Reserved

7 Jan

Have node.js and similar web server platforms led consumers to think apps aren’t connected to the Internet?

The New York Times recently published a couple of articles written by Conor Dougherty (http://topics.nytimes.com/top/reference/timestopics/people/d/conor_dougherty/index.html?action=click&contentCollection=Technology&module=Byline&region=Header&pgtype=article). These articles point to what Dougherty describes as diminished consumer access to the Internet via mobile apps. This shrinking online accessibility, Dougherty argues, can be attributed to the mobile apps themselves, which run fine without web browsing.

But the performance of every one of the apps mentioned in these articles, as I am sure Dougherty would agree, is entirely dependent on the Internet. The real issue for Dougherty, I would surmise, is the transparent nature of this connectivity, which somehow “happens” without any user intervention. Therefore, Dougherty argues, the quality of the personal computing experience for app consumers has been degraded.

Of course none of this performance would be possible without “web server platforms” like node.js (http://nodejs.org/). These app components add web server functionality, so the apps can perform online without a web browser. There is not much new about this, beyond the fact node.js is an example of how to achieve this performance with JavaScript, a very popular scripting language at present.
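For readers who want to see the idea in miniature, here is a sketch of a bare-bones node.js service of the sort an app can call directly, with no browser in the loop; the endpoint and payload are illustrative:

```javascript
// A minimal node.js HTTP service an app can consume without a browser.
const http = require('http');

http.createServer((req, res) => {
  if (req.url === '/api/status') {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ ok: true, time: Date.now() }));
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(3000); // the app issues a plain HTTP request to port 3000
```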

The point I am trying to make is that developers, and the ISVs supporting their efforts, need to be sensitive to public perception. Dougherty is not writing in a vacuum. His articles are read because consumers are interested in the topics covered. So it is likely safe to say some segment of the consuming public feels the same way, regardless of whether or not the perception is actually correct.

Maintaining sensitivity to this type of public notion (perhaps “perception” is not the right word) should, in my opinion, translate into some type of communications effort (PR or MARCOM) to better inform consumers about how apps actually work. At the same time, perhaps developers and ISVs have an opportunity to bake in some of the “choice” Dougherty finds missing from the app computing experience. Dougherty longs for a “unifying link” (http://www.nytimes.com/2015/01/06/technology/tech-companies-look-to-break-down-walls-between-apps.html?ref=technology). Would it make sense for someone to build a method of bolting otherwise unconnected apps together, perhaps with some type of search feature? I am not saying yes or no, but perhaps someone would like to check it out.
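One hedged way to picture the idea: a thin layer that queries the HTTP APIs of several otherwise unconnected apps and merges the results into one list. The endpoints below are hypothetical, and fetch() assumes a recent JavaScript runtime:

```javascript
// A sketch of a "unifying" search across otherwise unconnected apps.
async function unifiedSearch(term) {
  const q = encodeURIComponent(term);
  const endpoints = [
    `https://notes.example/api/search?q=${q}`,   // hypothetical app A
    `https://photos.example/api/search?q=${q}`,  // hypothetical app B
  ];
  const results = await Promise.all(endpoints.map(url =>
    fetch(url).then(r => (r.ok ? r.json() : [])).catch(() => [])
  ));
  return results.flat(); // one result list spanning both apps
}
```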

If you find a market for these features, do let me know.

Ira Michael Blonder

© IMB Enterprises, Inc. & Ira Michael Blonder, 2015 All Rights Reserved

8 Dec

Comments on some of the ambiguity about the notion of big data

A number of tech markets, including enterprise computing, cloud, SaaS, PaaS, IaaS, and IoT, have demonstrated a voracious appetite for data management and analysis. Anyone following data management technology may get lost in the notion of “big data”.

I say lost, as an enormous amount of hype has been built up around the “theme” of “big data.” But a lot of long standing data management methods, notably relational database management systems (RDBMS), which provide structure to data through the rows and columns of tables, work really well for ostensibly enormous amounts of information. Readers may want to consider efforts like the Port Authority of New York and New Jersey and the toll road system it manages. How many millions of vehicle transactions occur on a monthly basis? In turn, how many billions of bits of data does the history of vehicle transactions through toll machines represent? Has this enormous amount of data proven to be unmanageable?

The answers to each of the questions just presented all support an argument for RDBMS and Structured Query Language (SQL) as a useful method of working with enormous amounts of data. These questions and answers echo across a very wide range of applications; for example, the purview of the U.S. National Weather Service, or the universe of drugs managed by the U.S. Food and Drug Administration.
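For a sense of how routine this work is for an RDBMS, here is a sketch, using the node-postgres (“pg”) client, of the kind of SQL aggregation that summarizes months of toll transactions; the table and column names are hypothetical:

```javascript
// Summarize a year of toll activity with one SQL aggregation.
const { Pool } = require('pg');
const pool = new Pool({ connectionString: process.env.DATABASE_URL });

async function monthlyTollVolume() {
  const { rows } = await pool.query(`
    SELECT date_trunc('month', crossed_at) AS month,
           count(*)                        AS transactions,
           sum(toll_amount)                AS revenue
    FROM   toll_transactions
    GROUP  BY 1
    ORDER  BY 1 DESC
    LIMIT  12`);
  return rows; // twelve monthly totals distilled from millions of rows
}
```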

So there is nothing inherently radical about the notion of “big data”, at least if the notion is correctly understood as merely the set of methods commonly in use to manage data. In fact, and this is where, in my opinion, commentator hyperbole has clouded the whole question of just what is changing, in a truly radical way, about data management methods, the notion of big data is NOT correctly understood as I’ve just presented it. The “big” piece of “big data” appears to have been meant to represent a scalable data management architecture, best typified by Apache Hadoop (http://hadoop.apache.org). Anyone reading the presentation on the Hadoop web site can’t help but understand the role clusters of servers play in Hadoop as a solution. Clusters of servers, in turn, provide a perfect rationale for the Apache project to provide the foundation for Hadoop.
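To show what Hadoop actually scales across those clusters, here is a toy, single-process illustration of the map/shuffle/reduce pattern; in a real deployment each cluster node would run map() over its own shard of the data, with the shuffle grouping keys across nodes before reduce() runs:

```javascript
// A toy map/shuffle/reduce in one process, for illustration only.
function mapReduce(shards, map, reduce) {
  const groups = new Map(); // the "shuffle": key -> values from all shards
  for (const shard of shards) {
    for (const [key, value] of map(shard)) {
      if (!groups.has(key)) groups.set(key, []);
      groups.get(key).push(value);
    }
  }
  return [...groups].map(([key, values]) => [key, reduce(key, values)]);
}

// Word count, the canonical example:
const counts = mapReduce(
  ['big data is data', 'data at scale'],
  shard => shard.split(' ').map(word => [word, 1]),
  (word, ones) => ones.length
); // => [['big',1], ['data',3], ['is',1], ['at',1], ['scale',1]]
```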

Ira Michael Blonder

© IMB Enterprises, Inc. & Ira Michael Blonder, 2014 All Rights Reserved

25 Nov

Android’s penetration of enterprise computing markets is constrained by a combination of limited upgrade options and too many distributions

It’s very late in 2014, but a lot of enterprise computing consumers still depend on a central support function. An enormous volume of content has been written on the topic of the consumerization of business computing, and how the role of technology leader has changed hands from the typical enterprise IT organization to power users playing any kind of role within the organization.

But when something breaks, whether the wreckage occurs at the Line of Business (LoB) level or within enterprise IT itself, it still has to be fixed. Fixing broken iOS or Windows devices remains a preferred route. There are simply too many distributions of the Android operating system, and too much difficulty bringing the ones in use within an organization up to the same level of functionality, for Android to make sense for most of the enterprise computing world.

So, with this notion of how hardware device standards, to some extent, still operate in the world of business computing, Samsung’s recent decision to partner with BlackBerry “to Provide End-To-End Security for Android” (http://press.blackberry.com/press/2014/blackberry-and-samsung-partner-to-provide-end-to-end-security-fo.html) makes sense.

The BlackBerry-Samsung partnership should appear curious to anyone who reviewed the webcasts recorded at Google I/O 2014. After all, at that event Google announced its plan to “[integrate] part of Knox right into Android” (quoted from Samsung and Google team up to integrate KNOX into Android’s next major release (http://www.sammobile.com/2014/06/25/samsung-and-google-team-up-to-integrate-knox-into-androids-next-major-release/), written by Abhijeet M and published in June, 2014 on the SamMobile web site). So why would Samsung partner with BlackBerry on no less a mission than to provide the above-stated “end-to-end security for Android”?

A simple answer, in this writer’s opinion, would be to surmise Samsung has come to the realization enterprise IT organizations in the private and public sectors are still, for some reason, shrugging off Knox and passing on Android altogether. Bringing in BlackBerry therefore makes sense: BlackBerry has successfully convinced the U.S. Federal Government, and some of its international peers, to continue to use BlackBerry mobile computing devices as the most secure of their options. Perhaps some of this win can be attributed to the fact BlackBerry is built on proprietary IP, which, for better or worse, can be easily upgraded and is completely uniform in its presentation.

Ira Michael Blonder

© IMB Enterprises, Inc. & Ira Michael Blonder, 2014 All Rights Reserved

24 Nov

Consumers should not be expected to deliver repeat buying opportunities for mobile operating systems that are still works in process

Android may be the leader in the mobile operating system popularity contest, but it shouldn’t take rooting a tablet or smart phone to migrate from Jelly Bean to KitKat or, most recently, Lollipop. Nevertheless, the only process this writer can find to upgrade a Samsung Galaxy Note 10.1 2.1 from Android 4.1.2 (Jelly Bean) to KitKat is to root the device.

Perhaps it would help readers to better understand the point to mention why an upgrade makes sense for this Samsung device. The multi-tasking capabilities available with Jelly Bean pale in comparison with the advances Android has made and included in KitKat. Why should consumers have to pay for improvements to features already available in an older version of an operating system?

Apple, in contrast, consistently offers a free-of-charge upgrade to the latest version of its iOS operating system. Not all of these upgrades go smoothly, but, if nothing else, Apple iPhone and iPad customers are relieved of the necessity of simply “repeat buying” a tablet or a smart phone they already own for what amounts to no more than an enhancement of features already offered to them.

If one follows the reasoning here, it should be plausible to attribute some of the deceleration in consumer appetite for smart phones and tablets to a pervasive dissatisfaction with “half baked” feature sets. Samsung, to cite merely one Android OEM, has recently reported on this deceleration, and received a lot of investor punishment as a result. But as long as consumers have no option but to engage in a complex procedure to decouple a piece of hardware from its original operating system, it’s very likely consumer dissatisfaction will continue to mount and sales will continue to plummet.

This is regrettable. The multi-tasking improvement in Android KitKat is substantial and likely to be well received by even average consumers of these devices. In turn, should these upgrades be made available without additional charge to existing customers, sales should pick up. Android OEMs would realize the financial benefit, and enterprise customers, with a clear need for multi-tasking, would be more likely to purchase the hardware, rounding off the benefit for the whole Android ecosystem.

The message for early stage ISVs is to think long and hard about the upgrade path consumers will have to traverse as new features are rolled into existing, core products. In this writer’s opinion, “Ready, Fire, Aim” cannot be used as an excuse to justify raising consumers’ costs when limitations of advertised features are merely corrected.

Ira Michael Blonder

© IMB Enterprises, Inc. & Ira Michael Blonder, 2014 All Rights Reserved

15 Oct

ISVs debut cloud, SaaS solutions to satisfy consumer appetite for Analytics and Data

On Monday, October 13, 2014, Salesforce.com announced the debut of a new cloud, SaaS solution named “Wave” (https://www.salesforce.com/company/news-press/press-releases/2014/10/141013.jsp). Back on September 16, 2014, IBM announced “Watson Analytics”, once again a cloud, SaaS solution, but this time a freemium offer. So it’s safe to say analytics for the masses has become new competitive ground on which big, mature ISVs contend for more market share.

A couple of points are worth noting about the Salesforce.com press release:

  1. GE Capital is mentioned as already using Wave. Given GE’s own recent PR campaign around its own data and analytics effort, one must wonder why the business finance component of the company opted not to use the home grown solution ostensibly available to it.
  2. Informatica is mentioned as an “ecosystem” partner for Wave, and released its own press release, titled Informatica Cloud Powers Wave, the Salesforce Analytics Cloud, to Break Down Big Data Challenges and Deliver Insights (http://www.marketwatch.com/story/informatica-cloud-powers-wave-the-salesforce-analytics-cloud-to-break-down-big-data-challenges-and-deliver-insights-2014-10-13).

The Wave announcement follows, by less than a month, IBM’s announcement of a freemium offer for “Watson Analytics” and Oracle’s announcement of its “Analytics Cloud”. Both of those offers are also delivered via a cloud, SaaS model. So it’s likely safe to say enterprise technology consumers have demonstrated a significant appetite for analytics. The decision by Salesforce.com, IBM, and Oracle to all deliver their solutions via a cloud, SaaS offer speaks to the new enterprise computing topology (a heterogeneous computing environment) and the need to look to browsers as the ideal thin clients for users to work with their data online.

An ample supply of structured and unstructured data is likely motivating these enterprise tech consumers to look for methods of producing the kinds of dashboards and graphs each of these analytics offers is capable of producing. With data collection methods advancing, particularly for big data (unstructured data), this appetite doesn’t look likely to abate anytime soon.

ISVs with solutions already available, principally Microsoft with its suite of Power tools for Excel (Power BI, Power Pivot, etc.), may also be participating in this “feeding frenzy”. It will be interesting to see how each of the ISVs with offers for this market fares over the next few business quarters.

Ira Michael Blonder

© IMB Enterprises, Inc. & Ira Michael Blonder, 2014 All Rights Reserved

30 Sep

No Technology Solutions on the Near Term Horizon for a Better Defense Against Online Hacking

ISVs with popular online computing offers (notably Apple, Google, and Microsoft) have each adopted and endorsed an “App” model. This writer has a lot of conceptual familiarity with Microsoft’s version of this approach. Microsoft has positioned its Office 2013 App Model as a better approach to online security, but is it really?

For readers unfamiliar with the broad technical structure of “Apps” and how they might enhance online security for consumers, the key principle is “isolation”. In theory, “Apps” transition a lot of computer processing from servers to clients. In other words, a lot of the activity handled in the past by the server is transitioned over to the PCs, smart phones, tablets, and even game consoles consumers use to process computing tasks online. The method of processing this activity is to instruct these computing clients to act on commands written in some version of the JavaScript programming language, or in the latest version of HTML (HTML 5 at the time of this post).

In the case of the Office 2013 App Model, developers lean heavily on the jQuery (http://www.jquery.org) function library to quickly add procedures which already exist somewhere online, along with the supporting libraries required for successful execution. But this practice poses several difficulties, a couple of which directly impact online security for consumers. First, there are different versions of the jQuery function library. So, when one App is developed with one version, and another App built against a different version is added to the same computing environment (for example, Office 365), the potential for App conflict arises, which can result in degradation of service for the end consumer.
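One mitigation worth sketching is jQuery’s own noConflict mechanism, which lets each App pin the copy it was built against. The sketch assumes a page that loads App A’s jQuery 1.x first and App B’s 2.x copy second; the element IDs are illustrative:

```javascript
// jQuery.noConflict(true) hands back the most recently loaded copy and
// restores the earlier one to the global $ and jQuery names.
var jq2 = jQuery.noConflict(true); // App B pins the 2.x copy it was built on

jq2(function () {
  jq2('#app-b').text('App B rendered with jQuery ' + jq2.fn.jquery);
});

$(function () { // global $ once again refers to App A's 1.x copy
  $('#app-a').text('App A rendered with jQuery ' + $.fn.jquery);
});
```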

Second, while the App model’s reliance on a client-side method like JavaScript can be said to insulate the server, this approach inadvertently shifts the burden of security over to the client. Since there are hundreds, if not thousands, and even millions of different clients in use to interact with one server (or many servers in a load-balancing scenario, which act as one server), there is a much higher likelihood of a security breach on a client machine. Once clients are successfully compromised, they can be added to bot networks and repurposed for other types of malicious activity.

For better or worse, in late 2014 the best defense against malicious online activity remains a correct set of operational risk management processes, at least for large organizations of users.

Ira Michael Blonder

© IMB Enterprises, Inc. & Ira Michael Blonder, 2014 All Rights Reserved

11 Sep

Secure Cloud, SaaS Offers Require Closely Managed Two-Step Verification Controls

As both the number and intensity of successful attempts to subvert popular cloud, SaaS offers increase, some prominent industry experts are calling for mandatory two-step verification procedures. But, if past history provides any reliable metrics on the usefulness of these added security controls, two-step verification methods need to be tightly managed if they are to provide a useful deterrent to subversive attempts.

Just two days ago a post was published to this blog on a related topic. That post addressed the recent, highly publicized successful effort of hackers to penetrate a celebrity’s account on Apple’s iCloud storage service. It advocated a broader, perhaps mandatory, requirement for consumers of services like iCloud, OneDrive, Google Drive, etc.: any and all users of these services should be required to implement two-step identity verification methods.
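For readers who want a sense of what the second step actually computes, here is a minimal sketch of the TOTP algorithm (RFC 6238) that many authenticator apps implement; real systems exchange the shared secret as a base32 string, and the raw key below is purely illustrative:

```javascript
const crypto = require('crypto');

// Generate a time-based one-time password (RFC 6238) from a shared secret.
function totp(secretKey, step = 30, digits = 6) {
  const counter = Math.floor(Date.now() / 1000 / step); // 30-second intervals
  const msg = Buffer.alloc(8);
  msg.writeUInt32BE(Math.floor(counter / 0x100000000), 0); // high 32 bits
  msg.writeUInt32BE(counter >>> 0, 4);                     // low 32 bits
  const hmac = crypto.createHmac('sha1', secretKey).update(msg).digest();
  const offset = hmac[hmac.length - 1] & 0x0f;             // dynamic truncation
  const code = (hmac.readUInt32BE(offset) & 0x7fffffff) % 10 ** digits;
  return String(code).padStart(digits, '0');
}

// The server and the user's device compute the same 6-digit code from the
// shared secret; only a match (within clock skew) completes the login.
console.log(totp(Buffer.from('illustrative-shared-secret')));
```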

It was, therefore, encouraging to review a short video interview with Tim Bucher, a respected authority on online security topics. This interview, titled Apple iCloud options buried: Expert (http://video.cnbc.com/gallery/?video=3000308065), records opinions, expressed by Bucher, very similar to those voiced in the post to this blog.

But readers should be aware of a couple of instances, in the recent past, where two-step verification methods (including the RSA system Bucher describes in the interview) have been compromised.

Back in April, 2011, RSA’s SecurID system (http://bits.blogs.nytimes.com/2011/04/02/the-rsa-hack-how-they-did-it/?_php=true&_type=blogs&_r=0) was, unfortunately, successfully hacked. Of course RSA has long since cleaned up the errors, and, to its credit, the fact an expert of Bucher’s authority makes reference to the system as a reliable safeguard is good news.

Back in 2013, Duo Labs (https://www.duosecurity.com/blog/bypassing-googles-two-factor-authentication) identified, and subsequently publicized, potentially dangerous problems with Google’s two-factor authentication system. Once again, these problems have since been corrected.

The point of offering these examples is not to discourage readers from implementing similar trusted solutions, but rather to illustrate that any and all of these controls have vulnerabilities. Considered outside the context of a sound attempt to implement an operational risk management policy truly capable of safeguarding online interaction with a cloud, SaaS offer, no control should ever be treated as a completely infallible defense against hackers.

Readers may wonder just what constitutes “a sound attempt to implement an operational risk management policy”. Such an attempt is defined here as an effort persistently enforced over any and all daily online computing instances. Any breakdown in the persistence of these procedures can, and unfortunately often does, lead to successful subversive efforts.

Unfortunately, “dumbing down” doesn’t work when online computing is the activity at hand and the need is to safeguard confidential information.

Ira Michael Blonder

© IMB Enterprises, Inc. & Ira Michael Blonder, 2014 All Rights Reserved