On the Need to Set Boundaries Around an Internet of Things
The March/April 2014 edition of Foreign Affairs includes an article titled “As Objects Go Online,” written by Professor Neil Gershenfeld of MIT and JP Vasseur, Cisco Fellow and Chief Architect for the Internet of Things at Cisco Systems.
This article appears to have been published to coincide with a one-day IoT Festival held on Saturday, February 22, 2014, on the MIT campus in Cambridge, Massachusetts.
While the enthusiasm of the authors is to be applauded, I think a great deal of caution should be exercised about the entire notion. The promise of increasing the scope of what I would call “rapid device-to-device data communication” (which presently depends entirely on one data communications transport, Ethernet, with a set of markup languages running at the application layer) is certainly an important objective. Should we achieve it, it will certainly expand the usefulness of devices, along with the range of what people can do with them. Nevertheless, caution is warranted.
Tellingly, it isn’t until about five paragraphs from the end of “As Objects Go Online” that the authors address the question of whether it makes sense, from the perspective of data security, to open the Smart Grid to data communications over the Internet of Things they champion. In light of the recent exposure of the Heartbleed security hole in the OpenSSL cryptographic library, in my opinion the following claim by Gershenfeld and Vasseur should be very carefully considered by anyone seriously entertaining the “open” Smart Grid notion: “The history of the Internet has shown that security through obscurity doesn’t work. Systems that have kept their inner workings a secret in the name of security have consistently proved more vulnerable than those that have allowed themselves to be examined — and challenged — by outsiders. The open protocols and programs used to protect Internet communications are the result of ongoing development and testing by a large expert community.” (quoted from Gershenfeld and Vasseur’s article as published on the Foreign Affairs web site).
In the next paragraph they present their argument about the real cause of many serious Internet and Web security problems: human error. I certainly agree with this claim. Human error, poor procedural planning, and a lack of effective risk management have played the predominant role throughout the history of successful, malicious attacks conducted over the Web. But this is by no means to excuse what can only be called the shoddy software development at the foundation of the Heartbleed problem. Procedures and controls are useless, even when correctly implemented, if the open-source software the authors laud is itself full of holes and bugs.
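To make concrete what “shoddy software development” means in the Heartbleed case: the bug came down to trusting a length field supplied by the remote peer without checking it against the size of the data actually received, so the server echoed back adjacent memory. The sketch below is a simplified illustration of that bug pattern (and its fix) in Python, not OpenSSL’s actual C code; the function names and buffer layout are invented for the example.

```python
def heartbeat_response(memory: bytes, payload_start: int,
                       claimed_len: int) -> bytes:
    # VULNERABLE pattern: trust the peer-supplied length field.
    # If claimed_len exceeds the real payload, bytes adjacent to it
    # in "memory" (keys, passwords, session data) leak to the peer.
    return memory[payload_start:payload_start + claimed_len]


def heartbeat_response_fixed(memory: bytes, payload_start: int,
                             claimed_len: int, actual_len: int) -> bytes:
    # FIX: validate the claimed length against the number of payload
    # bytes actually received before echoing anything back.
    if claimed_len > actual_len:
        raise ValueError("claimed length exceeds received payload")
    return memory[payload_start:payload_start + claimed_len]


# A 4-byte "PING" payload sits next to secret data in the same buffer,
# modeling a heap over-read. Claiming 15 bytes leaks the secret:
buffer = b"PING" + b"-----SECRET"
print(heartbeat_response(buffer, 0, 15))        # secret bytes leak
print(heartbeat_response_fixed(buffer, 0, 4, 4))  # only b"PING" returned
```

The point of the example is that no amount of procedural control downstream compensates for a missing one-line validation at the foundation.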
As I wrote recently on this blog, in my opinion we need much better methods at the transport and application layers of the data communications protocol stack to ensure, at a minimum, the suitability and security of software before we condone using it for something as mission-critical as the Smart Grid.
© IMB Enterprises, Inc. & Ira Michael Blonder, 2014 All Rights Reserved