A SOA Maturity Model

May 15, 2010

Overview

Purpose and Scope

This describes a proposed Service Oriented Architecture (SOA) Maturity Model that can be customized for evaluating SOA maturity in your organization. The goal of the model is to provide a framework to evaluate the progress of an evolution to services orientation. Additionally, the model serves as guidance for developing an execution plan for incremental services adoption within an organization. Several existing models were consulted in the development of this model, such as the open model currently being developed by the Open Group.

What is Services Orientation

Services orientation is NOT about creating web services. In fact, services orientation does not require web services technology; however, leveraging web services technology is a tool for achieving some of the goals of services orientation.

Services orientation from a business process perspective is about defining discrete business processes/offerings and understanding how those discrete business processes can be composed into larger processes or new offerings. For example, opportunity identification is a discrete business process that is part of the:

  • network formation
  • creating new client engagement
  • identify savings opportunity

processes.

 

Identifying discrete processes allows organizations to deeply understand the processes, identify opportunities for optimization, and monitor the processes to ensure they are efficient. The more the underlying process is reused, the more valuable these activities become.

Services orientation from a technical implementation perspective is an approach to designing, implementing, and deploying solutions such that a solution can be created from components implementing discrete business and technical functions. These components, called “Services”, can be distributed across geography, across enterprises, and can be reconfigured into new processes as needed.

Characteristics of Service Oriented Solutions

  • Component Oriented – Service oriented solutions are component oriented. Component orientation means that each service performs a cohesive and logical set of activities for a single concept. For example, a payroll service might provide functions such as issue W-2s and issue checks, or a data access technical service might provide functions such as persist entity.
  • Composability – Service oriented solutions support composition into new services and solutions without impacting the existing service implementation.
  • Platform and Location Transparency – Service oriented solutions do not require that services be implemented on the same technology platform or in the same location.
  • Broad Interoperability – Services oriented solutions should support standard protocols to ensure the highest level of interoperability.
  • Self Describing – Services should be self describing. The interface to the service and its operations should be discoverable via meta-data, not out-of-band communications (e.g. a document).
  • Message Oriented – Information is transmitted to and from services as messages. The focus in the definition of the interface and messages is in “what” a service does rather than “how”. The “how” is internal to the implementation of the service.
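
The component- and message-oriented characteristics above can be sketched in code. The following is a minimal illustration (the article itself is technology-neutral; the language, service name, and message fields here are all hypothetical) of a payroll service whose interface exposes "what" it does through request/response messages while keeping the "how" internal:

```python
from dataclasses import dataclass

# Hypothetical message types for an illustrative payroll service. Consumers
# see only the message contract; the pay-calculation logic stays internal.

@dataclass(frozen=True)
class IssueCheckRequest:
    employee_id: str
    pay_period: str

@dataclass(frozen=True)
class IssueCheckResponse:
    check_number: str
    amount: float

class PayrollService:
    """A cohesive set of operations for the single concept 'payroll'."""

    def issue_check(self, request: IssueCheckRequest) -> IssueCheckResponse:
        # "How" the pay is calculated is hidden behind the message contract.
        amount = self._calculate_pay(request.employee_id, request.pay_period)
        return IssueCheckResponse(check_number="CHK-0001", amount=amount)

    def _calculate_pay(self, employee_id: str, pay_period: str) -> float:
        return 2500.00  # placeholder implementation detail

svc = PayrollService()
resp = svc.issue_check(IssueCheckRequest(employee_id="E42", pay_period="2010-05"))
print(resp.amount)  # -> 2500.0
```

Because consumers depend only on the messages, the implementation could later be re-hosted on another platform or in another location without changing them.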

Business Benefits of Services Orientation

Service orientation has numerous benefits including:

  • Delivery Time and Cost – Knowledge transfer time is reduced within projects in a matrixed staffing model as resources transition between projects. If developers are trained on enterprise-level shared services, the knowledge is transferable between projects. Further, each project does not have to go through the effort of recreating functionality that has been previously delivered. (see Evaluating Existing Assets for Reuse)
  • Business Responsiveness – Delivery of solutions through composition is faster than constructing solutions from scratch as higher levels of service maturity are achieved. This becomes especially true as tools such as Enterprise Service Buses (ESBs) are put in place that can mediate normally complex concerns such as service security and protocol variances across services.
  • Product Stability – Solutions that reuse services instantly gain the benefit of the testing and tuning that has already been performed. If the same functionality is developed over and over again, each project potentially has to go through the same learning and address the same defects.

Introduction

Model Overview

A maturity model is a benchmarking system to measure progress toward a goal based on a set of objectives. A maturity model can provide a means for developing a transformation roadmap to achieve a target state from a starting state. The model defines the change, by area of concern, required for an organization to mature.

This maturity model focuses exclusively on services. It can be used as a standalone maturity model or in conjunction with a more broadly scoped maturity model such as CMMI. This model will allow an organization to objectively assess whether it is gaining the capabilities necessary to leverage services in an increasingly sophisticated and beneficial manner. This model is a customization of multiple services maturity frameworks, such as those developed by the Open Group (OSIMM) and the services maturity model developed by Gartner. It also recognizes some of the work done in SOMA by IBM. It is organized in a matrix of seven disciplines and four maturity levels as illustrated below:

                           Stage 1   Stage 2    Stage 3        Stage 4
                           Initial   Spreading  Collaborating  Optimized
Business                   {Goals}   {Goals}    {Goals}        {Goals}
Organization & Governance  {Goals}   {Goals}    {Goals}        {Goals}
Methods                    {Goals}   {Goals}    {Goals}        {Goals}
Operations                 {Goals}   {Goals}    {Goals}        {Goals}
Architecture               {Goals}   {Goals}    {Goals}        {Goals}
Information                {Goals}   {Goals}    {Goals}        {Goals}
Infrastructure             {Goals}   {Goals}    {Goals}        {Goals}

Table 1 – Maturity Model Matrix
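
For teams that want to track an assessment programmatically, the matrix above maps naturally onto a simple data structure. The sketch below is illustrative only (the helper name and the sample goals are placeholders; populate the cells from your customized model):

```python
# The maturity matrix as discipline x stage cells, each holding
# (goal, achieved) pairs, plus a helper that lists unmet goals.

DISCIPLINES = ["Business", "Organization & Governance", "Methods",
               "Operations", "Architecture", "Information", "Infrastructure"]
STAGES = ["Initial", "Spreading", "Collaborating", "Optimized"]

# matrix[discipline][stage] -> list of (goal, achieved) pairs
matrix = {d: {s: [] for s in STAGES} for d in DISCIPLINES}

# Sample cell, taken from the Methods discipline at Stage 1.
matrix["Methods"]["Initial"] = [
    ("Standardize Development Methodology", True),
    ("Reuse Evaluation Strategy Defined", False),
]

def open_goals(discipline: str, stage: str) -> list:
    """Return the goals in a cell that have not yet been achieved."""
    return [g for g, achieved in matrix[discipline][stage] if not achieved]

print(open_goals("Methods", "Initial"))  # -> ['Reuse Evaluation Strategy Defined']
```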

The model details:

  • the benefits that can be expected from progression to higher levels of maturity
  • goals for each level

The model is guided by the principles specified in the Services Manifesto at http://www.services-manifesto.org/.

Maturity Levels

Maturity Level Concept

Maturity levels are used to categorize, into distinct stages, how wide and deep transformation has occurred within the organization. The services model is composed of four levels:

  • Initial
  • Spreading
  • Collaborating
  • Optimized

with level 1 being the least mature services adoption and level 4 being the most mature. Higher degrees of maturity are likely to lead to a higher degree of business and technical agility, but also require that the organization become proficient in an escalating set of organizational and technical capabilities. The organization must be committed to making deep and widespread changes not only in technology but also in how business processes are described, implemented, and measured. While this does seem broad-reaching, every effort should be taken to carve out an “appropriate bite” for your organization. If your organization has process improvement initiatives in place, a top-down domain modeling approach might be the best way to start, versus implementing low-level technical techniques for reuse.

This model represents a middle-out approach: first developing some technical assets for reuse for common cross-cutting concerns, and then addressing more coarse-grained scenarios such as actual processes. I am not arguing this is the only or even the best way to go for all organizations.

The table below provides a brief overview of the high level goals of each level. Subsequent sections will examine each level in detail.

Stage 1: Initial
  • Business Goal: Address Specific Pain
  • IT Goals: Project Level Guidance; Core Service Standards; Some Shared Services
  • Service Scope: Single Application/Project

Stage 2: Spreading
  • Business Goal: Process Integration
  • IT Goals: Service Design Standards; Service Reuse
  • Service Scope: Single Business Unit

Stage 3: Collaborating
  • Business Goal: Process Flexibility
  • IT Goals: Service Composability; Business Process Monitoring; Runtime Service Governance
  • Service Scope: Multiple Business Units

Stage 4: Optimized
  • Business Goal: Services for Sale
  • IT Goals: Software as a Business Service; Event-Driven Architecture
  • Service Scope: Customers and Partners

Table 2 – Maturity Level Overview

Stage 1: Initial

The Initial maturity level is the first stage of the model. Level one is focused on:

  • Achieving component orientation
  • Project level services
  • Establishing baseline enterprise-level technical standards and guidance
  • Defining an approach to services delivery

Within this level, initial R&D activities to evaluate services technologies and techniques are conducted in a laboratory environment. The selected techniques are then applied within projects to solve specific problems. Additionally, it is at this level when basic standards and organizational structures required to support services are defined.

Stage 2: Spreading

In the Spreading level, shared services are developed using the standards and processes defined in the Initial stage. This level focuses on services enablement within the boundary of a single business domain. In addition to actual service development, the infrastructure is put in place to more pervasively share services from project to project, such as service governance processes and a service registry.

Stage 3: Collaborating

The Collaborating level is focused on leveraging the services developed for individual business domains to:

  • Compose cross business domain services
  • Identify business metrics to monitor on a real-time basis (e.g. number of price comparisons per hour)

In addition, the services previously developed will be evaluated for overlap, redundancy and opportunities for consolidation.

From a services infrastructure perspective, the goal will be to implement tools that support runtime governance and management of services to improve configurability and location transparency which results in improved agility.
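
The location transparency that runtime governance tooling enables can be illustrated with a small sketch. The registry structure, function name, and URLs below are all hypothetical; real platforms (ESBs, service registries) do this with far more sophistication:

```python
# Consumers resolve a logical service name at call time instead of
# hard-coding an endpoint, so a service can be relocated or re-versioned
# by updating the registry entry rather than every consumer.

registry = {
    "PayrollService": "https://internal.example.com/payroll/v2",
}

def resolve(service_name: str) -> str:
    """Return the currently registered endpoint for a logical service name."""
    try:
        return registry[service_name]
    except KeyError:
        raise LookupError(f"No endpoint registered for {service_name}")

# Re-pointing the registry entry moves all consumers to the new location.
registry["PayrollService"] = "https://cloud.example.com/payroll/v3"
print(resolve("PayrollService"))  # -> https://cloud.example.com/payroll/v3
```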

Stage 4: Optimized

The Optimized level is the most sophisticated level of service maturity. It builds on the accomplishments of previous levels and focuses on service enablement externally to customers.

Disciplines

Disciplines represent different views or concerns of the organization.

Business

The Business discipline addresses the organization’s current business practices and policies, and how business processes are designed, structured, implemented, and executed.

Organization & Governance

The Organization & Governance discipline addresses the structure and design of the organization itself and the measures of organizational effectiveness in the context of services governance. The Organization aspect is focused on organizational structure, relationships, roles, and the empowerment necessary to adopt a service-oriented strategy. Governance is associated with formal management processes to keep IT activities, service capabilities, and services solutions aligned with organizational needs, guidelines, and standards.

Methods

The Methods discipline is focused on the methodology and processes employed by your organization to design, deliver, and manage service oriented solutions. Existing processes such as the Software Development Lifecycle (SDLC), operations management, and portfolio management will need to be updated to include services related check points.

Operations

The Operations discipline addresses the structures, processes, and assurances that must be in place for operational supportability of service oriented solutions. If a solution is not supportable at runtime, or issues cannot be quickly identified and corrected, reuse is unlikely even if the solution is functionally sound.

Architecture

The Architecture discipline is focused on the structure of the architecture which includes topology, integration techniques, enterprise architecture decisions, standards and policies, web services adoption level, experience in services implementation, services compliance criteria, and which architecture artifacts are produced.

Information

The Information discipline is focused on how information is structured, how information is modeled, the method of access to enterprise data, abstraction of the data access from the functional aspects, data characteristics, data transformation, handling of identifiers, knowledge management, business information model, and content management.

Infrastructure

The infrastructure discipline is focused on the physical hardware and tools that are in place to support service oriented solutions.

Assessing Maturity

Maturity Goals Matrix

Business

  • Stage 1 (Initial): Define business domain prioritization
  • Stage 2 (Spreading): Business Service/Function Catalog; Process and Decomposition Models; Application and Component to Business Process Matrix; Domain Level Services
  • Stage 3 (Collaborating): Composable Business Processes; Service to Business Process Matrix
  • Stage 4 (Optimized): Services for sale/use by external partners; Real-time business process metrics

Organization & Governance

  • Stage 1 (Initial): Services COE Team Created; Pilot Services Governance Plan; Developers learn component-oriented development skills
  • Stage 2 (Spreading): Implement Services Governance Plan; Capture reuse metrics at design time; Developers learn service-oriented development skills
  • Stage 3 (Collaborating): Refine SOA metrics including ROI; Runtime Service Governance Standards; Services Support Team; Capture reuse metrics at runtime
  • Stage 4 (Optimized): Integrate Services Management into Product Management; Define vendor on-boarding and certification practices

Methods

  • Stage 1 (Initial): Standardize Development Methodology; Reuse Evaluation Strategy Defined
  • Stage 2 (Spreading): Service Design Methodology
  • Stage 3 (Collaborating): (none)
  • Stage 4 (Optimized): (none)

Operations

  • Stage 1 (Initial): Binary dependency management guidelines; Source management standards
  • Stage 2 (Spreading): KPI and SLA categories and metrics defined
  • Stage 3 (Collaborating): Monitor KPIs and SLAs in real time; Service virtualization; Service configurability to support non-functional needs (e.g. translation between WS-* protocols)
  • Stage 4 (Optimized): Runtime service governance; Dynamic exception and SLA management

Architecture

  • Stage 1 (Initial): Service Development Standards and Guidelines
  • Stage 2 (Spreading): Service Design Guidelines; Service Reference Implementation; Establish architecture repository
  • Stage 3 (Collaborating): Transaction Management Standards; Best-practice guidance for event-driven SOA
  • Stage 4 (Optimized): (none)

Information

  • Stage 1 (Initial): In-scope data domains; Domain Level Canonicals
  • Stage 2 (Spreading): Enterprise Business Data Directory; Enterprise canonicals; Data as a service/data federation
  • Stage 3 (Collaborating): Industry canonicals; Analytics as a service
  • Stage 4 (Optimized): (none)

Infrastructure

  • Stage 1 (Initial): Shared Services Physical Environment (UIT, SIT, PROD)
  • Stage 2 (Spreading): Services Repository; ESB; Shared Services Physical Environment (PERF TEST)
  • Stage 3 (Collaborating): BPM/BAM Middleware; Runtime Service Governance/Management Platform (with Policy Registry); Security Federation Platform
  • Stage 4 (Optimized): Business Rules Engine; XML Firewall; Complex Event Processing Engine (CEP)

Maturity Benefits Matrix

  • Stage 1 (Initial): Reduction in project delivery effort related to technical asset reuse; Improved cross-project consistency; Increased developer knowledge of core component and service concepts
  • Stage 2 (Spreading): Increased ease of modification; Improved operational supportability; Reduction in functional defects; Project cost and effort reduction
  • Stage 3 (Collaborating): Project cost and effort reduction; Improved business responsiveness; Improved application availability; Closer IT alignment to driving business value
  • Stage 4 (Optimized): Enhanced ability to proactively optimize business processes (due to availability of real-time process metrics); New revenue opportunities; Project cost and effort reduction

Assessment Process

It is recommended that services maturity be assessed at least twice a year. For each goal, by discipline, the team should define and collect metrics and information to determine whether the goal has been attained.

Assessment Questions and Metrics

TBD

Evaluating Existing Assets for Reuse

April 15, 2010

Solution architects frequently have to analyze existing internal and external assets for reuse. While reuse can reduce delivery time, improve stability, and reduce delivery cost, these benefits should not be assumed to be automatic. Architects should take great care in determining what they are attempting to achieve when suggesting reuse of any asset (patterns, open source, internal, SAAS, or COTS). Key considerations in reuse decisions should include:

  • Where and how assets will be reused in the solution
  • The expected benefits of asset reuse, e.g. functionality, quality, schedule impact, or effort savings
  • Possible constraints on asset reuse
  • Cost and budget for the reuse of the asset

The diagram below illustrates some of the decision-making categories that should be considered when determining if an asset is suitable for reuse. The key thing to note is that it is not sufficient to simply evaluate functional requirements and determine if a component is reusable. For example, an asset developed for a batch processing application may be fundamentally unsuitable for a real-time application, since the two solutions have very different architectural characteristics even if the functional requirements are the same.

Analyzing Existing Assets

While this process does not have to be formal or result in excessive documentation, I recommend that you always take the effort to perform it and capture it somehow. In Agile organizations we’ve done this on whiteboards and taken a snapshot, so that six months later the rationale for the solution can be easily recalled or communicated to new team members who might not have been involved in the decision-making process.

Evaluating Assets for Reuse Use Case

The process below can be used as a guide for solution architects to evaluate assets for reuse. This should not be considered a prescriptive process, but rather a useful process flow that you can follow to ensure that you consistently and holistically consider what you are reusing, rather than making emotive decisions or decisions based on biased information.

1. Review needs and risks for the current solution.
2. Define where and how in the solution, based on the proposed candidate architecture, there is an opportunity for reuse.
3. Define the expected benefit of reuse (e.g. schedule, quality improvement, and/or reduced delivery effort).
4. Define testable/measurable reuse criteria.

4.1. Define functional requirement criteria for reuse. The functional criteria identify the general features that the asset provides for reuse. This list should be driven by the identified needs and risks for the solution.

4.2. Define quality/non-functional requirement criteria for reuse (e.g. defect rate, compliance to standards, availability of support resources or documentation).

4.3. Define project and strategic criteria for reuse. These criteria must capture any long- and short-term effects of reusing the asset beyond the product release (e.g. cost, effort saved, vendor/open source community stability).

4.4. Define the architecture compatibility criteria for reuse (e.g. service orientation, support for native .Net extensions).

5. Identify candidate assets for reuse. Common sources of reusable assets are:

5.1. The PMO portfolio management system. This may be an informal spreadsheet or something more formal like Clarity in your organization.

5.2. Common assets in source control or dependency management systems (e.g. Ivy).

5.3. Open source, SAAS, and commercial products.

6. Analyze candidates against the identified criteria. Leverage existing documentation and performance tests where available and considered reliable sources of information.
7. Perform an architectural spike or prototyping exercise to confirm results if time permits. This is strongly encouraged.
8. Select proposed candidates for reuse.
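
The scoring in step 6 can be sketched as code. This is a minimal illustration under assumed names (the `Criterion`/`Candidate` types and the scoring function are mine, not from the article); a criterion result of `None` marks it as not applicable, matching the "N/A" entries in the example evaluation matrix later in the article:

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    criterion_id: int
    description: str
    category: str  # e.g. "Functional", "Quality", "Strategic", "Architecture"

@dataclass
class Candidate:
    name: str
    results: dict  # criterion_id -> True / False / None (N/A)

def satisfaction_rate(candidate: Candidate, criteria: list) -> float:
    """Fraction of applicable criteria the candidate satisfies."""
    applicable = [c for c in criteria
                  if candidate.results.get(c.criterion_id) is not None]
    if not applicable:
        return 0.0
    satisfied = sum(1 for c in applicable if candidate.results[c.criterion_id])
    return satisfied / len(applicable)

criteria = [
    Criterion(1, "Fewer than 5 defects per 30-day period", "Quality"),
    Criterion(3, "Supports CRUD against a relational database", "Functional"),
    Criterion(5, "Provides REST web service API", "Architecture"),
]

data_access = Candidate("YourCompany.Common.DataAccess", {1: True, 3: True, 5: False})
print(round(satisfaction_rate(data_access, criteria), 2))  # -> 0.67
```

A raw satisfaction rate treats all criteria equally; in practice you would likely weight criteria by the benefits they support before comparing candidates.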

Categories of Evaluation Criteria

Evaluation criteria can be categorized into four main areas:

  • functional requirements
  • quality/non-functional characteristics
  • strategic/project concerns
  • domain and architecture compatibility

Functional Requirements

Functional requirements refer to identifiable functions, features, or characteristics that must be supported. These criteria can be defined based on the solution vision, user stories, use cases, or needs statements. For example:

  • Must support creation of surveys with a configurable number of survey questions

Quality/Non-Functional Characteristics

Quality or non-functional characteristics are more generalized than functional requirements. They constrain how the asset must perform or operate. There is typically some commonality to these characteristics from project to project, but the acceptable thresholds might change. For example:

  • Defect rate
  • Compliance with a legal guideline like SOX
  • Availability and completeness of documentation

Strategic/Project Concerns

These criteria address the effects of selecting the asset beyond the solution development lifecycle, once it is operating in the environment. For example:

  • Cost of ownership
  • Vendor stability
  • Availability of support resources

Domain and Architecture Compatibility

Domain compatibility refers to how well the reuse candidate and its features map into the organizational domain terminology and concepts. For example, in an ERP system there may be a concept called party that maps to one or more concepts in the business domain. This can be problematic in cases where there is a complete mismatch. Architecture compatibility refers to how well the candidate supports the software, data, and technical architecture requirements defined by the candidate architecture. For example:

Domain

  • Can the levels of the logistic hierarchy be modeled?

Architecture

  • Is a REST API supported?


Example

General

Architectural Layer Model {location}

Expected Benefits of Reuse

1. Compliance with enterprise security standards
2. Improved software quality
3. Improved supportability
4. Delivery Speed

Areas for Reuse: Resource Access Layer, Domain Layer, User Interface, User Interface Orchestration

Reuse Criteria

ID  Criteria                                                              Category              Supported Benefit
1   Fewer than 5 defects discovered per 30-day period                     Quality               2, 3
2   No more than 1 major release per quarter                              Quality
3   Supports performing all CRUD actions against a relational database    Functional            4
4   Supports creation of surveys with a configurable number of questions  Functional            4
5   Provides REST web service API                                         Domain Compatibility  3
6   Component cost less than $1000.00                                     Strategic

Component Evaluation

YourCompany.Common.DataAccess

Layers

Resource Access

Evaluation Matrix

Criteria ID  Satisfied  Comments
1            Yes
2            Yes
3            Yes        Testing for component only performed against SQL Server 2008
4            N/A
5            No         API access via native .Net. Solution is component oriented.
6            Yes

Feedback Server

Layers

UI, UI Orchestration, Domain, Resource Access

In conclusion, solution architects should always strive to develop models and frameworks for conducting their work. The technique discussed in this article is one such technique. It is not to be considered a replacement for actual experience, but it can improve the quality of your product selections.