17 Mar

Does IT Need Better Risk Management, or More Software to Prevent Hacks?

Josh Tyrangiel of Bloomberg Businessweek published an article this week on the successful attempt by unauthorized individuals to access customer information from the servers at Target. The title of Tyrangiel’s article, How Target Could Have Prevented Customer Data Hack, tells his story. Click the link to watch Betty Liu of Bloomberg’s “In the Loop” morning show briefly interview Mr. Tyrangiel.

As anyone who takes the time to watch this short (6:20) interview will note, Tyrangiel found several glaring lapses at Target in what can only be called operational risk management policy, lapses which not only contributed to the breach but can actually be said to have caused it. Had these policies and procedures been followed, the hack could have been contained and the whole mess that followed would have been prevented.

So, if Tyrangiel is right, the problem at Target wasn’t a lack of effective software to defend the servers from unauthorized access, but rather a set of policies that were easily ignored, or even circumvented.

According to Tyrangiel, Target had already purchased and implemented FireEye, reputed to be one of the most effective software solutions against hackers on the market today. But when an offshore team received alerts from the FireEye system, within only a day or two of the first attempts to penetrate the Target network, the alerts were ignored by their counterparts here in the US. Even worse, although the FireEye system offered a configuration option for automatic removal of malware, Target had opted to disable the feature.
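To make the policy failure concrete, here is a minimal sketch, in Python, of the kind of alert-handling logic Tyrangiel describes. The alert fields, the AUTO_REMEDIATE flag, and the escalate_to_us_team function are hypothetical illustrations, not FireEye’s actual API or configuration; the point is simply that disabling automatic removal, and ignoring escalations, are policy decisions expressed in configuration and procedure, not limitations of the software.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical alert record; field names are illustrative, not FireEye's schema.
@dataclass
class MalwareAlert:
    host: str
    malware_name: str
    severity: str
    detected_at: datetime

# Illustrative policy switch: Target reportedly disabled automatic removal.
AUTO_REMEDIATE = False  # had this been True, detected malware would be contained automatically

def handle_alert(alert: MalwareAlert) -> None:
    """Triage a monitoring alert according to (hypothetical) policy."""
    if AUTO_REMEDIATE:
        quarantine(alert.host, alert.malware_name)  # automatic containment
        return
    # With auto-remediation off, a human team must act on every escalation.
    escalate_to_us_team(alert)

def quarantine(host: str, malware_name: str) -> None:
    print(f"[ACTION] Quarantining {malware_name} on {host}")

def escalate_to_us_team(alert: MalwareAlert) -> None:
    # In the Target case, escalations like this were reportedly ignored.
    print(f"[ESCALATION] {alert.severity}: {alert.malware_name} on {alert.host} "
          f"detected at {alert.detected_at:%Y-%m-%d %H:%M}")

if __name__ == "__main__":
    handle_alert(MalwareAlert("pos-register-117", "memory-scraper", "critical",
                              datetime(2013, 11, 30, 2, 14)))
```

With the flag left off, everything depends on the team receiving the escalation actually acting on it, which is exactly the procedural failure Tyrangiel identifies.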

Target had made a personnel change at the top of its IT threat management team in early October 2013. When the first attacks occurred, on November 27, 2013, this team was still operating without an executive at the top, and proved completely ineffective in its efforts to contain the problem.

Each of the above points falls clearly within the bounds of a set of operational risk management policies and procedures for Information Technology. Tyrangiel’s story should be a loud and clear call to enterprise IT management everywhere to carefully review policies and procedures to ensure disasters like the Target breach do not occur again.

Ira Michael Blonder

© IMB Enterprises, Inc. & Ira Michael Blonder, 2014 All Rights Reserved

10 Sep

The NASDAQ Crash of August 22, 2013 Demonstrates an Absence of Oversight, Across the Board

On Sunday, August 25, 2013, the Wall Street Journal published yet another article in the ongoing frenzy of coverage of the abrupt halt, on Thursday, August 22, 2013, of all trading in NASDAQ-listed securities. The halt lasted over three hours during the trading day and disrupted an undetermined number of trades.

This article, written by Scott Patterson, Andrew Ackerman and Jenny Strasburg, and titled Nasdaq Shutdown Bares Stock Exchange Flaws, contends the crash “… exposed a weakness in the plumbing of the market that critics say reflects years of neglect by U.S. exchanges and regulators.” While we can see the writers’ collective point, we respectfully disagree with the point of emphasis. Until the public focuses on the real point of concern, which amounts to a complete lack of oversight at each important touchpoint, we see little hope of a happy ending to this type of problem.

The real problem is an operational risk management program that is either toothless or nonexistent at the NASDAQ market, at the data feed supplier, and at the U.S. SEC and the other regulatory bodies charged with safeguarding the public from damage from this type of problem. Everything else is simply fluff.

With a well-thought-out set of IT audit and risk assessment procedures in place, together with leadership empowered to get things done, this event would likely not have occurred. The critical importance of that latter point cannot be overemphasized: IT audit and risk assessment could have revealed the key points made by each of the pundits expressing an opinion on this problem, but with powerless leadership all of those efforts would have been worthless.

What we find shocking is the pervasive lack of interest in exploring how better oversight would have helped us all avoid this disaster. Everyone has a solution to the technical problem, but few pundits, apparently, have the foresight to see the oversight function as the place where remediation must actually take place if we are to steer clear of anomalies like this one in the future.

Ira Michael Blonder

© IMB Enterprises, Inc. & Ira Michael Blonder, 2013 All Rights Reserved

5 Sep

U.S. Financial Services Continue to Scramble for an Effective Operational Risk Management Methodology

Very large financial services businesses based in the United States continue to struggle as they seek to implement genuinely effective operational risk management methods to safeguard the integrity of mission-critical applications.

In 2012, J.P. Morgan disclosed a substantial trading loss attributable to the uncontrolled activity of a team of traders. On August 22, 2013, the NASDAQ stock exchange was forced to halt trading for over three hours during the trading day as the result of technical malfunctions (the specific problems had not been disclosed to the public as of the time this post was written).

Nevertheless, acknowledged experts agree the exposure created by this absence of effective controls is very large. So why is it so difficult for effective controls to be implemented?

We would not presume to offer a simple answer to this question. When we consider the frequency of these problems, along with examples of a worrisome attitude within the financial services industry, we can’t help but see the problem as something very complex. That worrisome attitude can be found in some recently published notions that treat data security, and perhaps risk management itself, as somehow less important, less mission-critical, than other, more pressing tasks required to keep operations running.

We hope our readers will agree that categorizing this industry attitude as “worrisome” is an obvious understatement. The kind of complete breakdown in the proper functioning of financial markets that unfolded during the NASDAQ shutdown of August 22, 2013 is precisely the kind of black swan event the entire financial services industry should be dedicated to avoiding.

The sunny side of this story, if there is one, is the undiminished opportunity still before ISVs who want to enter this market. The market still needs effective operational risk management solutions. ISVs with the technology required to satisfy these market needs will undoubtedly be handsomely rewarded for their efforts.

Ira Michael Blonder

© IMB Enterprises, Inc. & Ira Michael Blonder, 2013 All Rights Reserved

5 Jul

Constraining Systems Administrators with a Two-Man Rule May Not Solve Data Leak Problems

On Sunday, June 23, 2013, General Keith B. Alexander, Director of the U.S. NSA, publicly announced the implementation of a new control to manage the risk of another Edward Snowden emerging and absconding with classified information: a “two-man rule”. We found a definition of this operational risk management concept on Glenn Brunette’s Event Horizon blog on Oracle.com.

Popular technology product themes, including big data and Software as a Service (SaaS) cloud computing offerings, will lose a lot of their attractiveness for larger organizations if a reliable method can’t be found to control the risk of a new Edward Snowden compromising yet another set of operational risk management controls and getting away with a trove of classified information. So we maintain a keen interest in this story.

We don’t think the “two-man rule” will be a long-term solution to this problem, for a few reasons (a minimal sketch of the mechanism itself follows the list below):

  1. IT systems administrators have to move quickly to fix problems. Slowing them down by requiring a sign-off from another systems administrator before implementing a fix will likely lead to dissatisfied users and organization-wide impatience with risk controls.
  2. What’s to stop two systems administrators from teaming up in an effort to subvert data security?
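For readers unfamiliar with the mechanism, here is a minimal sketch, in Python, of what a two-man (dual-authorization) rule looks like in practice: a privileged action executes only when a second, distinct administrator has approved it. The function names and the in-memory approval store are hypothetical, intended only to illustrate the control General Alexander described, not any specific NSA implementation.

```python
from typing import Callable

# Minimal sketch of a two-man (dual-authorization) rule.
# Names and data structures are hypothetical illustrations.

class DualAuthorizationError(Exception):
    pass

# Pending approvals: action id -> set of administrators who have signed off.
_approvals: dict[str, set[str]] = {}

def approve(action_id: str, admin: str) -> None:
    """Record an administrator's sign-off for a pending privileged action."""
    _approvals.setdefault(action_id, set()).add(admin)

def execute_privileged_action(action_id: str, requester: str,
                              command: Callable[[], None]) -> None:
    """Run a privileged action only if someone other than the requester approved it."""
    approvers = _approvals.get(action_id, set()) - {requester}
    if not approvers:
        raise DualAuthorizationError(
            f"{action_id}: second-administrator approval required before execution")
    command()  # both sign-offs present: run the sensitive operation

# Usage: the requester alone cannot execute; a colleague must approve first.
if __name__ == "__main__":
    approve("copy-classified-archive", "admin.alice")   # requester's own sign-off
    try:
        execute_privileged_action("copy-classified-archive", "admin.alice",
                                  lambda: print("copying..."))
    except DualAuthorizationError as e:
        print("Blocked:", e)
    approve("copy-classified-archive", "admin.bob")      # second administrator
    execute_privileged_action("copy-classified-archive", "admin.alice",
                              lambda: print("copying..."))
```

Note that nothing in this sketch prevents two administrators from colluding, which is exactly the second objection above, and every legitimate action now waits on a second sign-off, which is the first.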

A better idea is to analyze the current process of granting security clearances and make it substantially more difficult to obtain top-level clearances. If these clearance procedures can be hardened, the problem will be controlled simply by denying admission to individuals capable of subverting data security measures. Why let these people into secure environments in the first place?

The “two-man rule” is the type of control implemented in response to a problem. But we need proactive controls, capable of eliminating the possibility of problems arising at all. These controls should be available, and used, within staff selection procedures for IT roles requiring security clearances.

Ira Michael Blonder

© IMB Enterprises, Inc. & Ira Michael Blonder, 2013 All Rights Reserved

27 Jun

Plan Operational Risk Management Procedures Correctly to Mitigate Risk from IT Systems

The “big data” surveillance brouhaha of June 2013 illustrates an emerging problem with operational risk management procedures, and specifically with IT audit and risk assessment. Mr. Snowden’s possession of a top secret security clearance is at the heart of the brouhaha. If Snowden had not possessed this level of security clearance, the argument goes, June would have come and gone smoothly. IT audit and risk assessment procedures were faulty, and provided the basis for this event to unfold.

We like this argument. We think it’s safe to say the security clearance review procedure failed to identify Mr. Snowden as a high-risk candidate for approval. We hope this problem hits home with enterprise decision-makers wrestling with the need to get better performance from these reviews. If the topics of inquiry, and the questions crafted to elicit useful answers from candidates, are not framed correctly, then the results of even a thorough evaluation of candidates for clearance will likely be useless.

Perhaps it would be more useful if enterprise organizations in need of IT audit and risk assessment procedures conducted an internal threat assessment before formulating those procedures. Such an assessment would identify not only the areas of dangerous exposure, but also the specific business procedures (and the policies behind them) producing them. Once the specific procedures are identified, the task of managing the exposures can be customized to fit the unique enterprise under examination. Our point is this: highly differentiated enterprise organizations cannot afford to simply implement a “standard” risk assessment process for IT systems.
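As a simplified illustration of the kind of mapping such an internal threat assessment would produce, the sketch below, in Python, traces each exposure back to the business procedure and policy producing it, and records an enterprise-specific control. The field names and the sample entries are hypothetical, invented only to show the structure of the exercise, not a standard we are endorsing.

```python
from dataclasses import dataclass

# Hypothetical structure for an internal threat assessment register:
# each exposure is traced back to the business procedure and policy producing it.
@dataclass
class Exposure:
    description: str         # the area of dangerous exposure
    business_procedure: str  # the specific procedure producing the exposure
    underlying_policy: str   # the policy behind that procedure
    mitigation: str          # the control customized to this enterprise

register = [
    Exposure(
        description="Contractors hold standing access to classified repositories",
        business_procedure="Blanket clearance granted at on-boarding",
        underlying_policy="Clearance level tied to role, not to task",
        mitigation="Re-scope clearances per project; review quarterly",
    ),
    Exposure(
        description="Monitoring alerts acted on inconsistently",
        business_procedure="Escalations routed to a team with no accountable owner",
        underlying_policy="No named executive responsible for threat response",
        mitigation="Assign an accountable executive; track alert disposition",
    ),
]

# The register then drives risk assessment procedures tailored to this enterprise.
for e in register:
    print(f"- {e.description}\n    procedure: {e.business_procedure}"
          f"\n    policy:    {e.underlying_policy}\n    control:   {e.mitigation}")
```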

Our interest in this topic results from how the surveillance controversy has been publicized. Prominent writers have called into question some of the key components of security safeguards for enterprise IT computing (within the public sector). A favorite topic has been whether Mr. Snowden, as a contractor, should have been granted the privileges he enjoyed. Shouldn’t he have been an employee? And, further, as an employee wouldn’t he have been far less likely to veer off course?

We don’t think it really matters whether he was an employee of the NSA or a contractor. The real problem is the screening method, meaning the IT Risk Assessment procedures driving the credentialing effort for top security clearances. The method needs a substantial makeover.

Ira Michael Blonder

© IMB Enterprises, Inc. & Ira Michael Blonder, 2013 All Rights Reserved