Companies need to know where they stand. This desire to measure progress is especially acute with regard to IT — something many business people regard as intangible, unpredictable, and unreliable. One way of reining in all this uncertainty is to apply industry-wide metrics and measures to enterprise IT projects. Using these measures, companies can ideally determine whether they are ahead of their competition and generally moving in the right direction with their IT projects. Many companies look to certain industry-wide maturity models to fill this need. However, this is exactly where most maturity models fall flat.
Too often, companies flock to maturity models, such as the well-known (and too-often mimicked) Capability Maturity Model Integration (CMMI), without adequately understanding what those models are meant to measure. Now, the point of this ZapFlash is not to explain CMMI or to equate CMMI with SOA Maturity Models… they have nothing to do with each other from either a content or a context perspective. However, most SOA Maturity Models try to mimic the CMMI in structure and format without realizing that its structure simply isn’t relevant to SOA.
Many measure an organization abstractly against arbitrary yardsticks of maturity, without comparing the organization to others of its size, geography, or industry. Does it make sense for IT organizations in the retail sector to face the same rigorous discipline of IT measurement as companies in the financial sector? Does the measure of maturity reflect the average of all the activities in the organization, the most advanced, or the least advanced? Does the maturity model measure the company organization-wide, department by department, or project by project? Without understanding what a maturity model measures, it’s hard to say whether it is any real measure of maturity at all.
More importantly, what is the maturity model meant to motivate? For many companies, achieving a certain level of maturity has primarily marketing value for end-users. Other maturity models are positioned primarily to sell vendors’ products. But scarcely any are meant to advance the state of adoption of a particular IT practice in an organization. If the most popular maturity models fall flat here, this is even more so the case with SOA Maturity Models. As we discussed in our Quantity is No Measure of Maturity and What to Look for in a SOA Maturity Model ZapFlashes, most SOA maturity models fall into one of three camps: ill-defined, abstract models of maturity based primarily on Service implementation rather than Service architecture; vendor-driven maturity models that attempt to push customers through SOA infrastructure buying decisions; and consultant-driven maturity models that attempt to push customers through architectural exercises that have not been proven to truly advance the state of SOA.
It’s becoming clear that the industry doesn’t really need a SOA maturity model. The act of doing SOA properly in itself is an act of architectural maturity that many companies are having trouble grasping. Companies are trying to understand how to best apply SOA and realize the benefits against their own stated business goals. As such, what’s not needed is an abstract, enterprise-wide, industry-wide, artificial measure of maturity that complies with CMMI’s five levels, but rather a way of measuring the state of a SOA implementation against the fundamental goal of SOA itself: agility.
Introducing the Agility Model
What ZapThink is proposing in an ideal Agility Model is to measure the agility of a particular SOA implementation against solid measures of flexibility. Rather than shooting to measure a project against an arbitrary five levels of maturity, where the assumption is that the top level is the “best”, the idea is to measure agility against the goals of the business and of that particular project.
Measuring agility on a scale of 1 to 5 (as almost all maturity models do) is a pointless exercise. Simply put, not all Service-oriented projects need the same level of agility. Some projects require deep loose coupling at perhaps all of the levels we discussed in our previous Seven Levels of Loose Coupling ZapFlash. Other projects might not need the same amount of loose coupling, since each layer of coupling adds flexibility at the potential cost of complexity and efficiency. Good architects know when to make Services and processes loosely coupled enough to meet the business requirements and meta-requirements of agility, but no more so. As such, we should consider the Agility Measure to be a spectrum of sorts, with the desired level of agility matching the business requirement.
Companies should therefore shoot for “optimal”, where anything outside of that is suboptimal. To be specific, if a business is aiming for a specific level of agility, but its projects have been implemented in a way that makes them more flexible than desired, it’s quite possible they have been over-engineered. Likewise, projects that don’t achieve the desired Agility Measure are suboptimal and under-engineered. As a rule, the more agile a system is, the more expensive it will be, so it’s important for an organization to be able to prioritize its agility requirements in order to determine which ones it wants to pay to satisfy, given budgetary and other constraints. As such, I wouldn’t recommend a corporate-wide Agility Model, as there will be varying requirements for agility at varying times and in different parts of the organization.
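To make the notion of “optimal” concrete, here is a minimal sketch of comparing a project’s measured agility against the level the business actually asked for. This is my own illustration, not a ZapThink artifact: the function name and the ordinal scoring scale are assumptions for the sake of the example.

```python
def classify_agility(desired: int, measured: int) -> str:
    """Compare measured agility against the desired level on an ordinal scale.

    The key point: exceeding the target is not "better" -- it is a gap too.
    """
    if measured > desired:
        # Flexibility the business never asked for: likely over-engineered,
        # paid for in complexity and efficiency
        return "over-engineered"
    if measured < desired:
        # Falls short of the required flexibility
        return "under-engineered"
    return "optimal"

# A project more flexible than its business requirement is still suboptimal:
print(classify_agility(desired=3, measured=5))  # over-engineered
print(classify_agility(desired=3, measured=3))  # optimal
```

The asymmetry with a 1-to-5 maturity ladder is the whole point: the “best” score is whatever the business requirement says it is, not the top of the scale.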
The key question to answer in considering the Agility Model is: how does one measure agility? One way is to determine the various degrees of freedom and variability the system in question allows. By these Agility Measures, the fewer degrees of freedom and variability, the less agile the system as a whole. Using the ideas from the Seven Levels of Loose Coupling as a starting point, we could start with a basic Agility Model as follows:
- Implementation Variability — Projects at the bottom of this agility measure are inflexible with respect to implementation changes, whereas those at the top allow changes to Service consumers and producers without either impacting the other.
- Infrastructure Variability — Projects that exhibit poor agility in this aspect of loose coupling are heavily dependent on the current infrastructure to provide all the requirements for SOA infrastructure, whereas those that show greatest agility can accommodate arbitrary changes, replacements, or additions to the infrastructure without skipping a beat.
- Contract Variability — Projects at one end of the spectrum don’t allow for flexible change to Service contracts while those at the other end are immune to such changes.
- Process Variability — SOA initiatives at the bottom of this agility measure don’t allow for dynamic and continuous process change while those at the top can handle any new process change or configuration requirement.
- Policy Variability — SOA projects that exhibit weak agility with regards to policy variability can only handle policy changes through redevelopment, redeployment, or even infrastructure change, whereas those that show greatest agility can handle any policy change or new policy requirement flexibly.
- Data Structure Variability — SOA initiatives that exhibit poor data structure variability cannot accommodate variations to the representation of data, whereas those that show greatest agility can handle such changes without having to refactor Service consumers or providers.
- Semantic Variability — Projects at the low end of the spectrum are inflexible with regards to changes to the meaning and semantics of information whereas those that show greatest flexibility can handle semantic differences between systems without requiring human involvement.
The result of the above analysis is a heat map of sorts. Each project will exhibit characteristics of agility that might be more flexible at one level and less flexible at another. The key is not to achieve the highest level of agility on every measure, but to create the agility model for the particular SOA project and compare it against project-specific or broader Agility Models that act as a baseline for all subsequent SOA projects. In this way, companies can use agility models both for project planning and for auditing and measurement. Companies can also, if they choose, compare their Agility Models with those of competing firms in the same industry and across multiple industries, but this is certainly not the goal of the Agility Model.
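As a sketch of how such a heat map might be computed, consider the following. The measure names come from the list above, but the numeric scale, the sample scores, and the `heat_map` function are invented here purely for illustration; a real engagement would derive these from the methodology, not from a ten-line script.

```python
# Desired agility per measure, set during project planning (the baseline)
baseline = {
    "implementation": 4, "infrastructure": 2, "contract": 3,
    "process": 5, "policy": 3, "data_structure": 2, "semantic": 1,
}

# Agility observed in the implemented architecture (hypothetical audit results)
measured = {
    "implementation": 4, "infrastructure": 4, "contract": 2,
    "process": 5, "policy": 1, "data_structure": 2, "semantic": 1,
}

def heat_map(baseline, measured):
    """Return the gap (measured minus desired) per measure; zero is optimal."""
    return {m: measured[m] - baseline[m] for m in baseline}

for measure, gap in heat_map(baseline, measured).items():
    status = "optimal" if gap == 0 else ("over" if gap > 0 else "under")
    print(f"{measure:15s} gap={gap:+d} ({status})")
```

Note that this hypothetical project is over-engineered on infrastructure variability and under-engineered on contract and policy variability at the same time, which is exactly why a single 1-to-5 maturity number hides more than it reveals.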
Of course, in a 1500-word ZapFlash, it’s hard to truly summarize the mechanism and methods for determining such levels of agility. ZapThink has built significant knowledge and intellectual property in the calculation and measurement of the above Agility Measures, as well as methodologies for helping companies set their Agility Model for the business on a strategic and per-project basis. If you want to successfully apply the Agility Model to your own company, you should approach it both as a project-planning tool and as an auditing and measurement tool, keeping in mind that projects might be flexible and agile at one level but not at another. The idea is not to achieve some abstract “top” measure, but rather to achieve an optimum. The secret sauce is determining your optimum and finding concrete ways to measure distributed projects in the enterprise.
The ZapThink Take
It’s important to realize that the Agility Model itself is just one part of a larger collection of activities around agility. Businesses need to represent their functional and non-functional requirements in terms of not only Service, process, and policy requirements, but also Agility Requirements. Those Agility Requirements then serve as the basis for defining an Agility Model as a to-be planning activity. Once this Agility Model Baseline has been defined for a particular project, organizations can measure subsequent architectural artifacts and activities using Agility Measures to see how they match up. The resulting heat map shows how the implemented architecture measures up against the planned agility goals.
Generic, cross-industry measures of maturity, such as CMMI and almost all SOA Maturity Models, have very little meaning or value outside of their potential marketing benefit. These generic models provide little guidance on how to manage or measure ongoing projects, nor do they handle the wide range of maturity across disparate projects in the enterprise or the necessary variability across different industry sectors and company types. Nor do these measures address the core reality that different projects should necessarily be at different so-called maturity levels because of differing business requirements.
I believe that most SOA Maturity Models are meaningless measures of SOA activities against arbitrary yardsticks of architectural capability. Instead, project-specific Agility Models, paired with Agility Plans that indicate how the enterprise as a whole will deal with projects at different levels of agility, will help guide SOA implementations as well as provide an enterprise-wide view of the various projects and how they contribute to an organization’s agility. A good Agility Model will help organizations advance their business to greater degrees of agility on a timeframe that matches their ability to invest. That matters a lot more than achieving some meaningless number on an arbitrary measure of maturity.