The news this week that Katherine Archuleta resigned as head of the U.S. Office of Personnel Management came as no surprise here in Washington. The massive theft of employee data at her agency is the worst reported security breach in the history of the U.S. Government, so her assurances that she would stay on through the crisis were hardly realistic. Her resignation, however, raises the more important questions of what the real state of U.S. Government cyber security is and what, if anything, can be done to improve it.
As the NY Times notes:
The revelations have set off a crisis response by the administration to technology weaknesses that officials acknowledge plague the entire federal bureaucracy.
At the White House, Josh Earnest, Mr. Obama’s spokesman, said the administration was rushing to conduct a “rapid reassessment of the state of cybersecurity measures, and accelerate the implementation of reforms that need to be adopted.”
Those include the wider adoption of two-factor authentication, which requires anyone with the password to a system to use a second, one-time password to log in from an unrecognized computer, he said.
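The "second, one-time password" mentioned above is typically a short-lived code derived from a secret shared between the server and the user's device, most commonly via the TOTP algorithm of RFC 6238. A minimal sketch of how both sides compute the same code (function and parameter names here are illustrative):

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, for_time: int, digits: int = 8, step: int = 30) -> str:
    """Minimal TOTP (RFC 6238): HMAC-SHA1 over a 30-second time counter."""
    counter = for_time // step                      # both sides share a clock
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at time 59
print(totp(b"12345678901234567890", 59))  # prints "94287082"
```

Because the code changes every 30 seconds, a stolen password alone is not enough to log in from an unrecognized computer; the attacker would also need the device holding the shared secret.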
It’s sad to read that the next step in response to this amazing breach is to implement two-factor authentication, since that is something that (a) should already be in place and (b) is not really going to stop sophisticated hackers from stealing information in the future. After all the billions of dollars invested in cyber security solutions by private and public institutions, I think that continuing to look at cyber security as a software issue is too narrow a view. Speaking to a group of CIOs in 2014, I suggested that they develop their security strategy on the premise that all of their data can be accessed by an outside party at any time. This position mandates a security strategy that is not based solely on software, which is unfortunately not the norm in most organizations. That’s because until recently, hardware-based cyber security was more of a concept than a reality but, as a recent article in Computer magazine outlines, we are now getting closer to this new cyber security model.
In her excellent overview of the topic in the April 2015 issue of Computer magazine (“Rethinking Computers for Cyber Security”), Dr. Ruby Lee (the Forrest G. Hamrick Professor of Electrical Engineering at Princeton and former chief architect at Hewlett-Packard) notes that while many software avenues to security have been explored, the computer itself has largely been left alone:
…security, which is essential for such cyber transactions, has not been seriously considered in the basic design of computers. How can this be, given our reliance on computers for our daily activities, our commercial competitiveness, and our national security? One problem is that for decades, computers have been designed to improve performance and reduce energy consumption, cost, and size, while introducing compelling new features, but not to improve security.
Dr. Lee then lays out the case for the idea of a security architecture that combines both software and hardware. As she notes, this new model recognizes the fact that meeting the demands of security today:
…requires a combined software-hardware architecture: software to provide usability and flexibility, and hardware to enhance performance and provide mechanisms that cannot be easily bypassed. If the OS can be compromised, then a more powerful software layer below the OS needs to be trusted; if this layer can also be compromised, then the hardware is the final defense: processors are the engines that execute all software.
She goes on to describe various approaches currently under development by Princeton (and other organizations) that combine paired software/hardware technologies to significantly enhance data security. One example of these techniques is called DataSafe, and she describes it as follows:
DataSafe is an architecture designed to support self-protecting data. Protected data is encrypted, and access control and usage policies are attached to the data. The only way to transfer protected data between machines is via this data package. Once an app or its user is authorized to access the data according to the access control policy, DataSafe software translates the high-level usage policy into low-level hardware tags. These are then associated with the memory locations where the decrypted data will be placed. The tags are propagated along with the data as it is being processed. When any output is requested, the hardware checks the tags to determine whether output for the data is allowed. This step prevents information leaking out of the machine in which access has been given. When the app completes its operation on the data, any data that has been modified is re-encrypted and repackaged with its original access control and usage policies, before being written back to storage or transferred to another machine.
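The tag-propagation mechanism Dr. Lee describes can be illustrated with a toy software model (the names, the tag bit, and the checking logic below are my own illustrative inventions, not DataSafe's actual interface; in the real architecture this checking happens in hardware, below the OS):

```python
from dataclasses import dataclass

# Hypothetical tag bit: data carrying it may not leave the machine.
NO_EXPORT = 0b01

@dataclass
class TaggedWord:
    """A memory word with the policy tags attached to its location."""
    value: int
    tags: int = 0

def add(a: TaggedWord, b: TaggedWord) -> TaggedWord:
    # Tags propagate with the data: a result derived from protected
    # input carries the union of its inputs' tags.
    return TaggedWord(a.value + b.value, a.tags | b.tags)

def output(word: TaggedWord) -> int:
    # The output gate (hardware in DataSafe, simulated here): any
    # export request is checked against the word's tags.
    if word.tags & NO_EXPORT:
        raise PermissionError("policy forbids exporting this data")
    return word.value

secret = TaggedWord(42, NO_EXPORT)  # decrypted under a no-export policy
public = TaggedWord(8)              # ordinary, untagged data

derived = add(secret, public)       # value 50, still tagged NO_EXPORT
print(output(public))               # prints 8 -- untagged, allowed
try:
    output(derived)                 # blocked: the tag followed the data
except PermissionError as e:
    print("blocked:", e)
```

The key property the sketch captures is that the app computing on `derived` never has to know the data is protected; the gate at the output boundary enforces the policy regardless.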
The beauty of the DataSafe approach is that the ultimate layer of security rests neither at the app nor the OS level but in the actual hardware itself:
By interpreting any security policies regarding access control and usage and translating them into hardware tags at runtime, DataSafe software bridges the semantic gap between flexible, high-level software policies and specific hardware tags. As for lack of access to app source code, DataSafe imposes the stronger constraint that any app requesting data is not modified at all, and so is oblivious to whether that data is, or is not, protected. To deal with OS vulnerabilities, DataSafe assumes that the OS is untrusted, and its protection features do not rely on the OS [Emphasis mine].
The techniques Dr. Lee outlines promise a serious advance in cyber security, and they signal a future where data, apps (and users too) are assumed to be compromised a priori, and so encryption/decryption must happen in another context. Moreover, as Dr. Lee further notes, through the use of virtual machines an approach like DataSafe can be applied in cloud-based contexts as well. Indeed, Intel itself has been documenting its own advances in this area, which include “enclave” technologies such as SGX, which creates special areas that are, as a 2013 Intel paper noted, protected from tampering by all software outside the enclave’s “trust boundary, even when the enclave is sent to disk or unprotected memory by the OS or VMM managing the system resources.”
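The enclave idea can be sketched conceptually: data inside the trust boundary is only ever seen in the clear by the enclave itself, and whenever it is handed to untrusted memory or disk it is sealed (encrypted and integrity-protected) first. The toy model below is my own illustration of that principle, not Intel SGX's actual API; real enclaves do this with dedicated processor instructions and hardware-held keys.

```python
import hashlib
import hmac
import secrets

def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    """Derive n pseudorandom bytes from key+nonce (toy stream cipher)."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

class ToyEnclave:
    def __init__(self):
        # In real hardware this key never leaves the processor.
        self._key = secrets.token_bytes(32)

    def seal(self, plaintext: bytes) -> bytes:
        """Encrypt and authenticate data before it crosses the trust boundary."""
        nonce = secrets.token_bytes(16)
        ks = _keystream(self._key, nonce, len(plaintext))
        ct = bytes(p ^ k for p, k in zip(plaintext, ks))
        tag = hmac.new(self._key, nonce + ct, hashlib.sha256).digest()
        return nonce + tag + ct

    def unseal(self, sealed: bytes) -> bytes:
        """Verify integrity and decrypt data coming back from untrusted storage."""
        nonce, tag, ct = sealed[:16], sealed[16:48], sealed[48:]
        expect = hmac.new(self._key, nonce + ct, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expect):
            raise ValueError("sealed blob was tampered with outside the enclave")
        ks = _keystream(self._key, nonce, len(ct))
        return bytes(c ^ k for c, k in zip(ct, ks))

enclave = ToyEnclave()
blob = enclave.seal(b"personnel record")  # safe to hand to OS/VMM-managed memory
print(enclave.unseal(blob))               # only the enclave can recover it
```

The point of the sketch is the trust model: the OS and VMM only ever handle the sealed blob, so even a fully compromised OS cannot read or silently alter the enclave's data.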
It’s time that CIOs and the experts they keep in business look far beyond basic measures like two-factor authentication to a cyber security model that is independent of user-level software. Thinking beyond this model, one can envision further evolutions toward contextual security, e.g., a dedicated machine sending a specific signal whose presence enables a nearby computer to access specific data. Dr. Lee is optimistic that the approach she suggests is finally getting the needed attention:
While challenges for cybersecurity are enormous, it is promising to see growing interest among hardware vendors in building security mechanisms into chips and computers, and among researchers for creating better hardware-software security defenses. As threats proliferate and user demand for protection increases, computer designers will be called on to build security into the hardware foundations of all computing devices.
This evolution is long overdue, and the OPM disaster only reinforces the position that we must move beyond human policies and security software and rethink the security architecture of the computer itself.