Much has been made of the cloud and its importance for business, but a recent report, “Cloud Computing: Risks, Benefits, and Mission Enhancement for the Intelligence Community,” highlights its particular significance for the intelligence community. The report, published by the Intelligence and National Security Alliance (INSA) Cloud Computing Task Force, notes the importance of cloud computing and its impact on operations:
Cloud computing provides information technology (IT) capacity in elastic ways that can expand to meet user needs and shrink when demand decreases. It enables far more agility in support of operational missions and expands access to computational power while potentially reducing operations and sustainment costs. Throughout our analysis, we found that in their adoption of cloud computing, organizations had to take responsibility for new roles and functions and revise their policies and processes. Cloud computing’s primary value does not lie in being a new technology; instead, it represents a business model change whose rapid adoption is driven by the transformative nature of its integration.
Within the IC [Intelligence Community], cloud computing uniquely addresses critical defense and intelligence mission needs by locating data and applying it to the mission at hand. As a bonus, cloud computing offers DOD and IC agencies the ability to increase efficiencies and potentially realize cost savings over their lifecycles, alleviating some of the pressure of budget reductions. Still, there is a significant gap in understanding cloud computing at all levels, which could impact the success of a cloud solution deployment.
The report recognizes that the intelligence community must change its mindset to encourage information sharing, and that it must also build security into the applications themselves. Its key findings are:
1. Decision makers in the IC are appropriately focusing on the business model implications of cloud computing. Cloud computing is not just a new technology, but a significant shift in the consumption of IT resources and allocation of IT funding.
2. Within the IC, the decision to adopt a cloud model is focused on mission enablement and must be determined on a case-by-case basis. The evaluation of cost savings must bear in mind costs over the complete lifecycle, rather than a periodic budget cycle.
3. Information security can be enhanced through a cloud computing approach, but only when it is built into the model’s design. If security is not part of the design, cloud computing architectures dramatically increase risk.
4. The type of cloud deployment model adopted will be determined by the sensitivity of data hosted.
5. Those looking to migrate to the cloud must consider impacts on organizational culture.
6. Improvements to how agencies acquire services, software, and hardware are strongly desired by most personnel involved in the implementation of cloud computing, and many believe that the adoption of a cloud solution may catalyze these changes.
7. As standards for cloud computing emerge, thoughtful federal input can contribute to greater security and cost efficiencies. Any organization contemplating adopting a cloud architecture, including those within the IC, should include the ability to support multiple standards.
8. Lessons learned from the IT industry, the private sector, and academia must inform IC decision making. Sharing lessons learned is essential to reducing risk.
These points are important to companies as well as government agencies, including those in the IC, that are seeking to improve their use of information. Building privacy and security into the design of programs based upon data sensitivity is a critical issue as we try to build more effective and efficient information protections, and the Lares Institute’s work on proportionality and data sensitivity is central to solving these issues. A link to the original 2009 Privacy 3.0 article, published by Executive Director Andrew Serwin, which focuses on data sensitivity, can be found here. As noted in that article:
This Article argues that questioning the assumptions of prior privacy theory is correct and indeed necessary at this time, but the questions that must be raised should not result in further use of theories that rely upon the common law. Indeed, further reliance upon the common law, whether based upon “inviolate personality” or confidentiality, is not the correct solution and will not solve the underlying issue with common law based theories because of the inherent limitations of tort law in the privacy context. Indeed, when assessing individuals’ constitutional rights regarding the disclosure of information, many courts are starting to reject the concept of “confidentiality” and instead examine whether information is “sensitive,” rather than whether the information is “public” or “private.”
The question confronting modern-day privacy scholars is this: Can a common law based theory adequately address the shifting societal norms and rapid technological changes of today’s Web 2.0 world where legislatures and government agencies, not courts, are more proactive on privacy protections?
This Article argues that the answer is no and instead argues that the common law based prior scholarship was relevant for its day, but it cannot account for the technology and societal values of today, our statutorily-driven privacy protections, and the Federal Trade Commission (FTC) enforcement centric model, and should therefore not provide the theoretical construct for existing or future laws or court decisions. This is all the more true in light of recent FTC guidance regarding behavioral advertising, in which the FTC expressly recognized the need to balance support for innovation and consumer protection, as well as the “benefits” provided to consumers by behavioral advertising.
. . .
We find ourselves today in a situation where we have more privacy regulation than ever, yet we lack a relevant and cohesive theory of privacy. This failure leads to situations where individuals feel their privacy is not being protected, and people or entities that hold or process others’ data do not have clear guidance on proper information practices. As long as we rely on common law theories, no matter how many laws are passed, this will not change. All a common law based model will ensure is that, in many cases, these laws will impose inconsistent burdens on information that will not meet society’s need for privacy. Given the changes in society, as well as the enforcement mechanisms that exist today, particularly given the FTC’s new focus on “unfairness,” and the well-recognized need to balance regulation and innovation, a different theoretical construct must be created, one that cannot be based upon precluding information sharing via common law methods. Instead, the overarching principle of privacy today should not be the right to be let alone, but rather the principle of proportionality. This is Privacy 3.0.
This point was reinforced in a 2011 law review article by Executive Director Andrew Serwin on best practices and privacy-by-design, as well as in the recent study on sensitivity and demographics. As that article notes:
There is a path that would provide more flexibility to the FTC and more guidance to business in the Web 2.0 World. I have previously proposed Privacy 3.0, a model based upon data sensitivity that makes the safeguards required for personal information contextually connected to the sensitivity of that information using a proportional methodology. Although this may seem like a radical departure from prior FTC enforcement, if the concept is put into different terms, it is truly just a small step away from prior guidance and enforcement, but this small step provides much-needed predictability and, perhaps even more importantly, flexibility as technology changes. Stated differently, examining the sensitivity of data through the totality of the circumstances surrounding the individuals and the context of the personal information is simply determining the risk of harm that can result from the improper or unauthorized disclosure or use of the personal information. The more sensitive the data, the higher the risk of harm to consumers. This is, at a certain level, a different approach from the prior enforcement cases because although the FTC considers likelihood of harm, it typically does so only in the context of a deception case, which requires a misstatement of some kind regarding privacy. Otherwise, the level of consumer injury for unfairness goes far beyond a risk of harm, because actual harm appears to be required.
Moreover, if Privacy 3.0 were considered, it would not directly be the basis of enforcement by the FTC. Part of the rationale of using sensitivity rather than the Privacy 1.0 and 2.0 doctrines is that harm is frequently difficult to prove and therefore litigation frequently fails to address the stated concerns of individuals. Therefore, I would propose that the risk of harm analysis be used to create the Privacy 3.0 framework and the framework would be the basis of a safe harbor program administered by the FTC. The “Privacy 3.0 Safe Harbor” program would rely upon the four tiers of sensitivity, and as more fully detailed in Privacy 3.0, it would provide clear guidance regarding what information practices were permitted for each tier, including what level of consent, both implicit and explicit, would be required to process data. Companies that agreed to and implemented the data classification framework, and the resulting restrictions and permissions that would be created based upon the sensitivity of information, would not be subject to enforcement action if there was a data incident. However, companies that voluntarily chose to participate in the Privacy 3.0 Safe Harbor would be subject to enforcement if they failed to meet the requirements of the program or falsely claimed to comply when they in fact did not.
In addition, INSA’s conclusions regarding the cloud and improving the use of information also directly relate to Information Superiority and the Lares Institute’s work on these issues.
Data sensitivity is a critical component of designing protections and of gaining benefit from information, and both the public and private sectors should continue to focus on these issues as the models for information use continue to emerge.