In my discussions with cyber-security experts in business and academia, I am struck by two things most of all. First, they all agree that the cyber-risk problem is much worse than the average person understands. Second, they all think that sooner or later some major cyber attack will wreak havoc in a developed nation, with devastating consequences.
With that background in mind, I came across a piece on NetworkWorld.com, posted on May 29th, about the new standard being developed by the Open Group (a consortium of 400+ companies, governments and organizations) that presents some basic “best practices” for reducing security risks within the IT supply chain. Among the practices the standard (the Open Trusted Technology Provider Standard, or O-TTPS) suggests are the following:
- Full documentation of the engineering process, configuration and components, with tracking of any that “are proven to be targets of tainting or counterfeiting as they progress through the lifecycle.”
- Established quality testing procedures, and security update and defect management processes.
- Threat analysis and mitigation to assess potential attacks, plus vulnerability analysis, patching and remediation.
- Secure coding practices and regular training in secure engineering, plus monitoring changes to the “threat landscape.”
- Risk-based physical security procedures that are well-documented.
- Access controls established for all product-relevant intellectual property and assets, subject to audit.
- Background checks on employees and contractors “whose activities are directly related to sensitive product supply chain activities (within reason given local customs and according to local law).”
- Recommending O-TTPS to “relevant business partners.”
- Secure transmission and handling controls related to IT assets, plus physical security. Methods of verifying authenticity and integrity of products after delivery should be available.
- To keep malware out of components received from suppliers or in products delivered to customers and integrators, commercial malware detection tools need to be deployed as part of the code acceptance and development process, and before delivery.
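The practice of verifying the authenticity and integrity of products after delivery can be made concrete with a simple sketch: the supplier publishes a manifest of cryptographic hashes, and the recipient recomputes those hashes over the delivered files before accepting them. The function names and the plain-dictionary manifest format below are my own illustrative assumptions; a real deployment would typically rely on digitally signed manifests or a commercial verification tool.

```python
import hashlib
import os

def sha256_of(path):
    """Compute the SHA-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_delivery(manifest, root="."):
    """Check delivered files against a supplier-published manifest
    mapping relative path -> expected SHA-256 hex digest.
    Returns the list of paths that are missing or do not match;
    an empty list means every file checked out."""
    failures = []
    for rel_path, expected in manifest.items():
        full = os.path.join(root, rel_path)
        if not os.path.exists(full) or sha256_of(full) != expected:
            failures.append(rel_path)
    return failures
```

Any non-empty result would flag the delivery for quarantine rather than acceptance, which is the spirit of the O-TTPS practices above: tampering is detected at the hand-off between supply chain agents, not after deployment.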
The idea, as the graphic below illustrates, is to create a common set of practices across the various agents designing, creating and deploying IT gear, so that the risks of tampering, espionage or even system destruction are recognized and tackled head on.
Of course, what would probably surprise the average reader is that these practices are not already universal; one would think all of them would be in place today. Who does not “fully document” their engineering process, someone might ask? Who does not have “access controls” for sensitive information in this day and age? Well, the reality is that the answer to both those questions is the same: lots of people. It’s not a surprise to find old technical information in engineering files, or access to sensitive production facilities that does not always follow a rigorous logic or screening procedure. For organizations in this group, the new standard should be seen as a wake-up call to do at least these things, given the prevalence of cyber-risks today.
For the best organizations, of course, this standard is merely a recap of what is already in place, and they need to continue to go beyond what the O-TTPS guidelines suggest. If my contacts are right about the future, the O-TTPS standard brings an organization up to what they would call “least acceptable practice” level. To be prepared for the kind of risks the experts foresee, an organization would have to go much further. I hope that as the Open Group standard evolves (and these things always do), the group will develop a second, even more sophisticated set of guidelines that organizations can adopt.