Today is a wonderful time for anyone interested in Cloud Computing to be working with the US government. Not only does the government consider Cloud to be strategically important, it already has a track record as an early adopter of Cloud Computing on a grand scale. The government is also in the unique position of being able to drive standards for the approach, and in fact, it is even responsible for establishing the most widely adopted definition of Cloud Computing.
The federal agency that has taken this leadership position is the National Institute of Standards and Technology (NIST), an agency of the US Department of Commerce. NIST’s formal definition of Cloud Computing is already well known—“a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.” Concise as that definition is, it only marks the beginning of the work NIST is doing to formalize and standardize the full breadth of Cloud Computing approaches, both within the government and for the world at large.
I learned about the breadth of NIST’s work on Cloud last week, when I had the pleasure of attending the NIST Cloud Computing Forum & Workshop. They are leading a cross-industry effort to “provide thought leadership and guidance around the cloud computing paradigm to catalyze its use within industry and government. NIST aims to shorten the adoption cycle, which will enable near-term cost savings and increased ability to quickly create and deploy enterprise applications. NIST aims to foster Cloud Computing systems and practices that support interoperability, portability, and security requirements that are appropriate and achievable for important usage scenarios.” To this end, they have followed up their formal definition with a Cloud Computing Technology Roadmap, which consists of three volumes: requirements to further US government adoption, information for Cloud adopters, and technical considerations for government Cloud Computing deployment decisions. They have also published a Cloud Computing reference architecture and standards roadmap.
To be sure, NIST has generated a daunting quantity of information here—but ignore these documents at your peril. If you work for a US government agency, then you likely have a mandate to move toward Cloud Computing, and NIST spells out many of the details. But even if you have nothing to do with the government, it’s important to remember that NIST is also a standards body in its own right, as well as a coordinating agency for other standards bodies. No other group or agency anywhere else in the world has achieved the same leadership position with respect to today’s nascent Cloud standards efforts.
One of the main reasons NIST is able to maintain this position is that they take an inclusive approach. Want to contribute? You’re welcome to. Have an issue with something in one of the documents? Then let them know. After all, one of the reasons they’ve generated so much content is that they have so many contributors, not just from the government, but from people around the world.
Even ZapThink isn’t above joining the fray. We’ve reviewed NIST’s documents in the context of ZapThink’s eye for agile enterprise architecture, and we’ve identified a missing link. Of course, looking at something and trying to identify what’s missing is always difficult, especially when so many contributors have already pored over the material so carefully. The trick is to break out of “horseless carriage” patterns of thinking: instead of considering the Cloud to be little more than an outsourced, virtualized data center, put on your architect’s hat and consider how Cloud Computing’s unique characteristics will change how you do architecture.
NIST’s Cloud Deployment Scenarios
I found this missing link when I reviewed the Cloud deployment scenarios in the NIST Standards Roadmap document. Here is their diagram illustrating the eight generic deployment scenarios that they identified:
Generic Cloud Computing Deployment Scenarios (Source: NIST)
They sort the various deployment scenarios into three categories:
Single Cloud
- Scenario 1: Deployment on a single Cloud
- Scenario 2: Manage resources on a single Cloud
- Scenario 3: Interface enterprise systems to a single Cloud
- Scenario 4: Enterprise systems migrated or replaced on a single Cloud
Multiple Clouds (serially, one at a time)
- Scenario 5: Migration between Clouds
- Scenario 6: Interface across multiple Clouds
- Scenario 7: Work with a selected Cloud
Multiple Clouds (simultaneously, more than one at a time)
- Scenario 8: Operate across multiple Clouds
From ZapThink’s perspective, the most interesting of these are scenarios 1, 3, and 4, because they consider the relationships between enterprise systems and the Cloud. ZapThink has written about these relationships before, most recently in The Keys to Enterprise Public Cloud, but also back in mid-2010, when we discussed Cloud Architecture’s Missing Link.
The missing link we pointed out in that ZapFlash was the ability to compose Cloud-based Services with on-premise Services as part of an enterprise SOA effort. It could be argued, however, that composing Cloud-based Services falls under Scenario 3, since Services are a type of interface. But there’s more to this story—and to understand how the NIST folks missed it, it’s important to follow their line of reasoning.
NIST’s Blind Spot
NIST has divided their Cloud standards efforts into three categories: interoperability, portability, and security. Interoperability standards are the most straightforward, especially for anyone who has worked with Web Services, which of course are little more than standards-based interfaces intended to promote interoperability and loose coupling.
Portability standards are more complicated, because NIST considers both application portability and data portability. In the Cloud context, application portability centers on the ability to move virtual machine (VM) instances from one Cloud to another. Data portability, however, is more difficult, because applications process different kinds of data, and those data flow throughout an entire system. For one organization, data portability might mean moving a single database from one Cloud to another, but for a different organization, the requirement might be for the portability of an entire SaaS application, along with all of its distributed data.
NIST’s focus on interoperability and portability (and security, of course, which is an entire conversation in its own right) makes perfect sense in light of their focus on standards, since the standardization of these three capabilities will go a long way in furthering NIST’s core mission. So it’s no wonder that their three Cloud deployment scenarios that involve enterprise systems consist of deploying or migrating to a Cloud (facilitated by portability standards), or interfacing with a Cloud (facilitated by interoperability standards).
It should come as no surprise, therefore, that NIST missed another deployment scenario: building applications that leverage both on-premise and Cloud-based capabilities, where those applications rely upon more than interoperability, portability, and the ubiquitous security.
Building applications that are compositions of Cloud-based and on-premise Services is a simple example, but doesn’t go far enough, because even this scenario falls into the “horseless carriage” trap of considering the Cloud to be nothing more than a virtualized data center. Factor elasticity into the equation, however, and we must consider new approaches to architecting such applications that go beyond considerations of interoperability and portability.
Building the Cloud’s Inherent Elasticity into Hybrid Applications
More than any other characteristic, elasticity distinguishes true Clouds from simple virtualized data centers. If your app requires more resources, the Cloud will provision those resources automatically, and then release them when you’re done with them—until you need them again. Furthermore, those elastic resources may be among any of the different types of Cloud resources (networks, servers, storage, applications and Services, as per the NIST definition), or any combination thereof.
As a result, when you architect your app, you don’t know how many of each of these resources you will be using at any point in time, since the number can change with no warning. You must take this change into account when architecting your data, your middleware, your execution environments, your application logic, and your presentation tier—in other words, your entire distributed application.
Cloud providers do their best to hide the underlying complexity inherent in delivering elastic infrastructure. But when you’re building a hybrid app—that is, one that includes Cloud-based as well as on-premise capabilities—your architects must have deeper knowledge of the underlying capabilities of the Cloud environment than Cloud providers are typically comfortable revealing. In other words, even once Cloud interoperability and portability standards mature, architects will still require additional information about the underlying capabilities of their Cloud environments that such standards won’t cover.
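To make this concrete, consider a hybrid application that must call Cloud-hosted Service instances whose number changes without warning. One architectural consequence is that the application can never cache a fixed list of instances; it must re-resolve on every call. The sketch below illustrates that pattern with a hypothetical service registry (all names and endpoints are invented for illustration; a real Cloud environment would back this with the provider’s own discovery mechanism):

```python
import random


class ServiceRegistry:
    """Hypothetical registry for elastic Cloud Service instances.

    In a real deployment this would query the Cloud provider's
    discovery API, since elasticity means instances are provisioned
    and released with no warning to the application.
    """

    def __init__(self):
        self._endpoints = set()

    def register(self, endpoint):
        # Called when the Cloud provisions a new instance.
        self._endpoints.add(endpoint)

    def deregister(self, endpoint):
        # Called when the Cloud releases an instance.
        self._endpoints.discard(endpoint)

    def resolve(self):
        # Re-resolve on every call: never hold on to a fixed instance
        # list, because elasticity can change it at any moment.
        if not self._endpoints:
            raise RuntimeError("no Cloud instances currently provisioned")
        return random.choice(sorted(self._endpoints))


registry = ServiceRegistry()
registry.register("https://cloud.example/api/worker-1")
registry.register("https://cloud.example/api/worker-2")

endpoint = registry.resolve()  # whichever instances exist right now

# Elastic scale-down: the Cloud releases an instance...
registry.deregister("https://cloud.example/api/worker-2")
endpoint = registry.resolve()  # ...and the app keeps working
```

The point of the sketch is architectural, not mechanical: the same late-binding discipline applies to data, middleware, and execution environments, because any of them may be elastic resources in a true Cloud.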
The ZapThink Take
This ZapFlash may leave you wanting more—namely, how precisely do you architect with the Cloud in mind? Unfortunately, there isn’t enough room for the answer to that question in this ZapFlash, but fear not, we’ll be laying out more details in the weeks and months to come.
Can’t wait? Then come to our Licensed ZapThink Architect SOA & Cloud course in San Diego January 16-19. We’ll dive deep into architecting with the Cloud, as we enjoy warm breezes off the Pacific from our lush venue by Seaforth Marina on Quivira Basin. Or better yet, take advantage of ZapThink’s new Cloud Competency Center by dropping us a line at firstname.lastname@example.org. We’re here to help!