Archive for the ‘Architecture Tools’ Category

3 Ideas to Make Architecture Executable in Visual Studio 2010

May 19, 2010

One of the main complaints people have about architects is that we set a bunch of standards and produce a bunch of paper but little else, even when they appreciate the visionary aspect we bring to the table. While concepts, standards, and white papers can provide valuable guidance, ultimately the value proposition is only as good as organizational adoption.

One of the questions I always ask myself is: how can I make this concept tangible to my audience? Further, how can I make what I'm promoting a normal part of the workflow without introducing additional ceremony?

I think Visual Studio 2010 introduces new opportunities for organizations using the Microsoft platform to make technical architecture executable for their technical teams. Here are a few ideas you might want to incorporate:

  1. Make coding standards executable: StyleCop and Code Analysis (or FxCop) are free code analyzers. StyleCop works on source code, while Code Analysis/FxCop work on compiled IL. While these are not new tools for VS2010, they were improved, and you should consider incorporating them into your build processes. Instead of just writing coding standards (which usually include simple design rules in addition to formatting guidelines), configure StyleCop and Code Analysis to evaluate the rules at build time. This can be done on developer machines, even if you haven't implemented a CI process. Modify your coding standards document to indicate which rules are executable. Custom rules can be implemented to reflect the needs of your organization. (Note: integrating the execution of StyleCop into TFS 2010 will require you to create a custom workflow activity.) Choosing this strategy instead of a document has several benefits: it allows standards to be quickly and consistently evaluated, so code reviews can focus on more valuable design-related activities; it provides metrics regarding standards compliance (maybe your standards are unrealistic and need to be re-evaluated, or developers require more education); and it keeps standards available within the normal development workflow instead of requiring developers to switch between coding and reading a document to ascertain compliance (a not very Lean practice). More information on the tools is available at:
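For example, Code Analysis can be switched on at build time with a few properties in the project file. This is a minimal sketch; the rule-set path is a hypothetical placeholder for wherever your organization keeps its shared rules:

```xml
<!-- Sketch: fragment of a .csproj file enabling Code Analysis on build.
     The rule-set path below is a hypothetical example. -->
<PropertyGroup>
  <RunCodeAnalysis>true</RunCodeAnalysis>
  <CodeAnalysisRuleSet>..\Build\CompanyRules.ruleset</CodeAnalysisRuleSet>
  <!-- Fail the build on rule violations instead of just warning -->
  <CodeAnalysisTreatWarningsAsErrors>true</CodeAnalysisTreatWarningsAsErrors>
</PropertyGroup>
```

Checking the rule set into source control alongside the code keeps the executable standards versioned with the solutions they govern.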
  2. Make high-level architectural patterns available in the IDE. Layers are probably the most commonly used architectural pattern.
    Layer Diagram

    Pretty Boxes


    There are few systems that don't refer to themselves as n-tier or MVC these days. Layering is even used to describe infrastructure in commonly used models such as the OSI model. Solutions architects typically articulate their grand visions for how their systems should be implemented using layers, and how those layers should interact, in a few ways: reference implementations, pretty boxes (my fave – you can't tell me you've never drawn a bunch of nebulous boxes on the whiteboard :)), or UML package/component diagrams. Why not use the same pretty picture from your software architecture document to actually validate the code that is being delivered? I will wait a moment for your wow :). A new option with Visual Studio 2010 is layer diagrams. Layer diagrams allow you to define the layers of your system within Visual Studio, define the relationships and constraints among layers, associate namespaces with the layers, and validate that the implemented code conforms to the layer diagram. You can easily imagine a scenario where you quickly perform an audit to determine how many new web applications developed during the quarter conform to the MVC architecture. You could also use the diagrams as part of SLA agreements with your solution providers to ensure they conform to approved enterprise architectural patterns. More information on the use and capabilities of layer diagrams is available at:
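As a sketch of how this can be enforced (treat the exact wiring as an assumption to verify in your environment), layer validation can be made part of every build via the modeling project that hosts the diagram:

```xml
<!-- Sketch: fragment of the VS2010 modeling project (.modelproj) file.
     Validates the code mapped to the layer diagram on each build. -->
<PropertyGroup>
  <ValidateArchitecture>true</ValidateArchitecture>
</PropertyGroup>
```

The same property can be passed on the command line (msbuild /p:ValidateArchitecture=true) so the audit happens continuously in a team build rather than once a quarter.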

  3. Generate code. There have long been numerous model-driven architecture tools available for .Net. Visual Studio 2010 is throwing its hat in the arena to arguably make it simpler. VS2010 allows you to create UML diagrams and apply stereotypes that represent your intended design. You can then use T4 to generate code from the models. I estimate that 40%-60% of the code could be generated this way for most common applications. (Note: true DSL support for the MS platform is a little spotty, but most organizations haven't reached a point at which true DSL use is pervasive anyway.) It should be noted that code generation as a strategy is most effective if you have well-defined standards for development, including cross-cutting concerns. For example, if my ORM standard is NHibernate, I can easily generate a domain model that leverages NHibernate for persistence. The benefits of code generation are numerous: reduced time and delivery cost, delivery consistency, increased focus on the value-add portions of the application that actually drive competitive advantage instead of expending resources on infrastructure code, increased focus on business needs and domain modeling with code as a mechanism to support them instead of the reverse, and rapid spiking/prototyping. Additional information is available regarding UML and code generation at:
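To make the idea concrete, here is a minimal T4 sketch. The entity names are hard-coded hypothetical stand-ins; in a real template they would be read from the UML model via the modeling API, and the generated classes would follow your NHibernate mapping conventions:

```
<#@ template language="C#" #>
<#@ output extension=".cs" #>
<#
    // Hypothetical stand-in for entities read from the UML model
    var entities = new[] { "Customer", "Order" };
    foreach (var entity in entities)
    {
#>
// Generated domain entity for <#= entity #>
public partial class <#= entity #>
{
    // Virtual members so NHibernate proxies can intercept them
    public virtual int Id { get; set; }
}
<#
    }
#>
```

Generating partial classes leaves room for hand-written members in a separate file, so regeneration never destroys custom code.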

I am not suggesting that these things were not achievable before using a combination of open source tools and commercial products on the MS platform. However, Visual Studio 2010 provides a fairly significant set of tools that may not be completely leveraged if the product is used for "just coding". Obviously you don't have to implement all of these ideas, but some may provide benefits in your organization.


A SOA Maturity Model

May 15, 2010


Purpose and Scope

This describes a proposed Service Oriented Architecture (SOA) Maturity Model that can be customized for evaluating SOA maturity in your organization. The goal of the model is to provide a framework to evaluate the progress of an evolution to services orientation. Additionally, the model serves as guidance for developing an execution plan for incremental services adoption within an organization. Several existing models were consulted in the development of this model, such as the open model currently being developed by the Open Group.

What is Services Orientation

Services orientation is NOT about creating web services. In fact, services orientation does not require web services technology; however, leveraging web services technology is a tool for achieving some of the goals of services orientation.

Services orientation from a business process perspective is about defining discrete business processes/offerings and understanding how those discrete business processes can be composed into larger processes or new offerings. For example, opportunity identification is a discrete business process that is part of the following processes:

  • network formation
  • creating a new client engagement
  • identifying a savings opportunity



Identifying discrete processes allows organizations to deeply understand the processes, identify opportunities for optimization, and monitor the processes to ensure they are efficient. The more the underlying process is reused, the more valuable these activities become.

Services orientation from a technical implementation perspective is an approach to designing, implementing, and deploying solutions such that a solution can be created from components implementing discrete business and technical functions. These components, called “Services”, can be distributed across geography, across enterprises, and can be reconfigured into new processes as needed.

Characteristics of Service Oriented Solutions

  • Component Oriented – Service oriented solutions are component oriented. Component orientation means that each service performs a cohesive and logical set of activities for a single concept. For example, a payroll service might provide functions such as issue W-2s and issue checks, while a data access technical service might provide functions such as persist entity.
  • Composability – Service oriented solutions support composition into new services and solutions without impacting the existing service implementation.
  • Platform and Location Transparency – Service oriented solutions do not require that services be implemented on the same technology platform or in the same location.
  • Broad Interoperability – Services oriented solutions should support standard protocols to ensure the highest level of interoperability.
  • Self Describing – Services should be self describing. The interface to the service and its operations should be discoverable via metadata, not out-of-band communications (e.g. a document).
  • Message Oriented – Information is transmitted to and from services as messages. The focus in the definition of the interface and messages is in “what” a service does rather than “how”. The “how” is internal to the implementation of the service.

Business Benefits of Services Orientation

Service orientation has numerous benefits including:

  • Delivery Time and Cost – Knowledge transfer time is reduced within projects in a matrixed staffing model as resources transition between projects. If developers are trained on enterprise-level shared services, the knowledge carries over between projects. Further, each project does not have to go through the effort of recreating functionality that has been previously delivered. (see Evaluating Existing Assets for Reuse)
  • Business Responsiveness – Delivery of solutions through composition becomes faster than constructing solutions from scratch as higher levels of service maturity are achieved. This becomes especially true as tools such as an Enterprise Service Bus (ESB) are put in place that can mediate normally complex concerns such as service security and protocol variances across services.
  • Product Stability – Solutions that reuse services instantly gain the benefit of the testing and tuning that has already been performed. If the same functionality is developed over and over again, each project potentially has to go through the same learning and address the same defects.


Model Overview

A maturity model is a benchmarking system to measure progress toward a goal based on a set of objectives. A maturity model can provide a means for developing a transformation roadmap to achieve a target state from a starting state. The model defines the change, by area of concern, required for an organization to mature.

This maturity model focuses exclusively on services. It can be used as a standalone maturity model or in conjunction with a more broadly scoped maturity model such as CMMI. This model will allow an organization to objectively assess whether it is gaining the capabilities necessary to leverage services in an increasingly more sophisticated and beneficial manner. This model is a customization of multiple services maturity frameworks, such as those developed by the Open Group (OSIMM) and the services maturity model developed by Gartner. It also recognizes some of the work done in SOMA by IBM. It is organized in a matrix of seven disciplines and four maturity levels as illustrated below:

                            Stage 1    Stage 2    Stage 3        Stage 4
                            Initial    Spreading  Collaborating  Optimized
Business                    {Goals}    {Goals}    {Goals}        {Goals}
Organization & Governance   {Goals}    {Goals}    {Goals}        {Goals}
Methods                     {Goals}    {Goals}    {Goals}        {Goals}
Operations                  {Goals}    {Goals}    {Goals}        {Goals}
Architecture                {Goals}    {Goals}    {Goals}        {Goals}
Information                 {Goals}    {Goals}    {Goals}        {Goals}
Infrastructure              {Goals}    {Goals}    {Goals}        {Goals}

Table 1 – Maturity Model Matrix

The model details:

  • the benefits that can be expected from progressing to higher levels of maturity
  • the goals for each level

The model is guided by the principles specified in the SOA Manifesto at

Maturity Levels

Maturity Level Concept

Maturity levels are used to categorize, into distinct stages, how wide and deep transformation has occurred within the organization. The services model is composed of four levels:

  • Initial
  • Spreading
  • Collaborating
  • Optimized

with level 1 being the least mature services adoption and level 4 being the most mature. Higher degrees of maturity are likely to lead to a higher degree of business and technical agility, but also require that the organization become proficient in an escalating set of organizational and technical capabilities. The organization must be committed to making deep and widespread changes not only in technology but also in how business processes are described, implemented, and measured. While this does seem broad-reaching, every effort should be taken to carve out an "appropriate bite" for your organization. If your organization has process improvement initiatives in place, a top-down domain modeling approach might be the best way to start, versus implementing low-level technical techniques for reuse.

This model represents a middle-out approach: first developing some technical assets for reuse for common cross-cutting concerns, and then addressing more coarse-grained scenarios such as actual processes. I am not arguing this is the only or even the best way to go for all organizations.

The table below provides a brief overview of the high level goals of each level. Subsequent sections will examine each level in detail.

Stage 1 – Initial
  • Business Goals: Address Specific Pain
  • IT Goals: Project Level Guidance; Core Service Standards; Some Shared Services
  • Service Scope: Single Application/Project

Stage 2 – Spreading
  • Business Goals: Process Integration
  • IT Goals: Service Design Standards; Service Reuse
  • Service Scope: Single Business Unit

Stage 3 – Collaborating
  • Business Goals: Process Flexibility
  • IT Goals: Service Composability; Business Process Monitoring; Runtime Service Governance
  • Service Scope: Multiple Business Units

Stage 4 – Optimized
  • Business Goals: Services for Sale
  • IT Goals: Software as a Business Service; Event Driven Architecture
  • Service Scope: Customers and Partners

Table 2 – Maturity Level Overview

Stage 1: Initial

The Initial maturity level is the first stage of the model. Level one is focused on:

  • Achieving component orientation
  • Project-level services
  • Establishing baseline enterprise-level technical standards and guidance
  • Defining an approach to services delivery

Within this level, initial R&D activities to evaluate services technologies and techniques are conducted in a laboratory environment. The selected techniques are then applied within projects to solve specific problems. Additionally, it is at this level that the basic standards and organizational structures required to support services are defined.

Stage 2: Spreading

In the Spreading level, shared services are developed using the standards and processes defined in the Initial stage. This level focuses on services enablement within the boundary of a single business domain. In addition to actual service development, the infrastructure is put in place to more pervasively share services from project to project, such as service governance processes and a service registry.

Stage 3: Collaborating

The Collaborating level is focused on leveraging the services developed for individual business domains to:

  • Compose cross business domain services
  • Identify business metrics to monitor on a real-time basis (e.g. number of  price comparisons per hour)

In addition, the services previously developed will be evaluated for overlap, redundancy and opportunities for consolidation.

From a services infrastructure perspective, the goal will be to implement tools that support runtime governance and management of services to improve configurability and location transparency which results in improved agility.

Stage 4: Optimized

The Optimized level is the most sophisticated level of service maturity. It builds on the accomplishments of the previous levels and focuses on service enablement externally to customers.


Disciplines

The disciplines represent different views or concerns of the organization.


Business

The Business discipline addresses the organization's current business practices and policies: how business processes are designed, structured, implemented, and executed.

Organization & Governance

The Organization & Governance discipline addresses the structure and design of the organization itself and the measures of organizational effectiveness in the context of services governance. The Organization aspect is focused on organizational structure, relationships, roles, and the empowerment necessary to adopt a service-oriented strategy. Governance is associated with formal management processes to keep IT activities, service capabilities, and services solutions aligned with organizational needs, guidelines, and standards.


Methods

The Methods discipline is focused on the methodology and processes employed by your organization to design, deliver, and manage service oriented solutions. Existing processes such as the Software Development Lifecycle (SDLC), operations management, and portfolio management will need to be updated to include services-related checkpoints.


Operations

The Operations discipline addresses the structures, processes, and assurances that must be in place for operational supportability of service oriented solutions. If a solution is not supportable at runtime, or issues cannot be quickly identified and corrected, reuse is unlikely even if the solution is functionally sound.


Architecture

The Architecture discipline is focused on the structure of the architecture, which includes topology, integration techniques, enterprise architecture decisions, standards and policies, web services adoption level, experience in services implementation, services compliance criteria, and which architecture artifacts are produced.


Information

The Information discipline is focused on how information is structured and modeled, the method of access to enterprise data, abstraction of data access from the functional aspects, data characteristics, data transformation, handling of identifiers, knowledge management, the business information model, and content management.


Infrastructure

The Infrastructure discipline is focused on the physical hardware and tools that are in place to support service oriented solutions.

Assessing Maturity

Maturity Goals Matrix

Business

  • Stage 1: Define business domain prioritization
  • Stage 2: Business Service/Function Catalog; Process and Decomposition Models; Application and Component to Business Process Matrix; Domain Level Services
  • Stage 3: Composable Business Processes; Service to Business Process Matrix
  • Stage 4: Services for sale/use by external partners; Real-time business process metrics

Organization & Governance

  • Stage 1: Services COE Team Created; Pilot Services Governance Plan; Developers learn component-oriented development skills
  • Stage 2: Implement Services Governance Plan; Capture reuse metrics at design time; Developers learn service-oriented development skills
  • Stage 3: Refine SOA metrics including ROI; Runtime Service Governance Standards; Services Support Team; Capture reuse metrics at runtime
  • Stage 4: Integrate Services Management into Product Management; Define vendor on-boarding and certification practices

Methods

  • Stage 1: Standardize Development Methodology; Reuse Evaluation Strategy Defined
  • Stage 2: Service Design Methodology; Binary dependency management guidelines; Source management standards

Operations

  • Stage 2: KPIs and SLA Categories and Metrics Defined
  • Stage 3: Monitor KPIs and SLAs in real time; Service virtualization; Service configurability to support non-functional needs (e.g. translation between WS-* protocols)
  • Stage 4: Runtime service governance; Dynamic exception and SLA management

Architecture

  • Stage 1: Service Development Standards and Guidelines
  • Stage 2: Service Design Guidelines; Service Reference Implementation; Establish architecture repository
  • Stage 3: Transaction Management Standards
  • Stage 4: Best practice guidance for event-driven SOA

Information

  • Stage 2: In-scope data domains; Domain Level Canonicals
  • Stage 3: Enterprise Business Data Directory; Enterprise canonicals; Data as a service/data federation
  • Stage 4: Industry canonicals; Analytics as a service

Infrastructure

  • Stage 1: Shared Services Physical Environment (UIT, SIT, PROD)
  • Stage 2: Services Repository; ESB; Shared Services Physical Environment (PERF TEST)
  • Stages 3-4: BPM/BAM Middleware; Runtime Service Governance/Management Platform (with Policy Registry); Security Federation Platform; Business Rules Engine; XML Firewall; Complex Event Processing Engine (CEP)

Maturity Benefits Matrix

Stage 1 – Initial
  • Reduction in project delivery effort related to technical asset reuse
  • Improved cross-project consistency
  • Increased developer knowledge of core component and service concepts

Stage 2 – Spreading
  • Increased ease of modification
  • Improved operational supportability
  • Reduction in functional defects
  • Project cost and effort reduction

Stage 3 – Collaborating
  • Project cost and effort reduction
  • Improved business responsiveness
  • Improved application availability
  • Closer IT alignment to driving business value

Stage 4 – Optimized
  • Enhanced ability to proactively optimize business processes (due to availability of real-time process metrics)
  • New revenue opportunities
  • Project cost and effort reduction

Assessment Process

It is recommended that services maturity be assessed at least on a bi-annual basis. For each goal, by discipline, the team should define and collect metrics and information to determine whether it has attained the goal.
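One simple way to roll the per-goal results up is to treat a stage as attained only when every goal at that stage and all lower stages is met. The sketch below is illustrative only; the goals and results shown are hypothetical:

```python
# Illustrative sketch: roll per-goal assessment results up to an
# overall maturity stage. Goals and results below are hypothetical.

def attained_stage(results):
    """Highest stage N such that every goal in stages 1..N is met."""
    stage = 0
    for n in sorted(results):
        if all(results[n].values()):
            stage = n
        else:
            break
    return stage

# results[stage][goal] = was the goal met this assessment period?
assessment = {
    1: {"Services COE team created": True, "Pilot governance plan": True},
    2: {"Service design standards": True, "Design-time reuse metrics": False},
    3: {"Runtime service governance": False},
}

print(attained_stage(assessment))  # stage 1: a stage-2 goal is still unmet
```

Tracking which specific goals block the next stage gives the team a concrete backlog for the following assessment period.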

Assessment Questions and Metrics


Evaluating Existing Assets for Reuse

April 15, 2010

Solution architects frequently have to analyze existing internal and external assets for reuse. While reuse can reduce delivery time, improve stability, and reduce delivery cost, these benefits should not be assumed to be automatic. Architects should take great care in determining what they are attempting to achieve when suggesting reuse of any asset (patterns, open source, internal, SaaS, or COTS). Key considerations in reuse decisions should include:

  • Where and how assets will be reused in the solution
  • The expected benefits of asset reuse, e.g. functionality, quality, schedule impact, or effort savings
  • Possible constraints on asset reuse
  • Cost and budget for the reuse of the asset

The diagram below illustrates some of the decision-making categories that should be considered when determining if an asset is suitable for reuse. The key thing to note is that it is not sufficient to simply evaluate functional requirements and determine if a component is reusable. For example, an asset developed for a batch processing application may be fundamentally unsuitable for a real-time application, since the two solutions have very different architectural characteristics even if the functional requirements are the same.

Analyzing Existing Assets

While this process does not have to be formal or result in excessive documentation, I recommend that you always take the effort to perform it and capture the results somehow. In Agile organizations we've done this on whiteboards and taken a snapshot, so that six months later the rationale for the solution can be easily recalled or communicated to new team members who might not have been involved in the decision-making process.

Evaluating Assets for Reuse Use Case

The process below can be used as a guide for solution architects to evaluate assets for reuse. It should not be considered a prescriptive process, but rather a useful process flow that you can follow to ensure you consistently and holistically consider what you are reusing, rather than making emotive decisions or decisions based on biased information.

1. Review the needs and risks for the current solution.
2. Define where and how in the solution, based on the proposed candidate architecture, there is an opportunity for reuse.
3. Define the expected benefit of reuse (e.g. schedule, quality improvement, and/or reduced delivery effort).
4. Define testable/measurable reuse criteria.

4.1. Define functional requirement criteria for reuse. The functional criteria identify the general features that the asset provides for reuse. This list should be driven by the identified needs and risks for the solution.

4.2. Define quality/non-functional requirement criteria for reuse (e.g. defect rate, compliance to standards, availability of support resources or documentation).

4.3. Define project and strategic criteria for reuse. These criteria must capture any long- and short-term effects of reusing the asset beyond the product release (e.g. cost, effort saved, vendor/open source community stability).

4.4. Define the architecture compatibility criteria for reuse (e.g. service orientation, support for native .Net extensions).

5. Identify candidate assets for reuse. Common sources of reusable assets are:

5.1. The PMO portfolio management system. This may be an informal spreadsheet or something more formal like Clarity in your organization.

5.2. Common assets in source control or dependency management systems (e.g. Ivy).

5.3. Open source, SaaS, and commercial products.

6. Analyze candidates against the identified criteria. Leverage existing documentation and performance tests where available and considered reliable sources of information.
7. Perform an architectural spike or prototyping exercise to confirm the results if time permits. This is strongly encouraged.
8. Select the proposed candidates for reuse.
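The candidate-analysis step can be kept honest with even a trivial scoring sheet. The sketch below is illustrative only; the criteria weights and per-candidate results are hypothetical, not part of the process above:

```python
# Illustrative sketch: score reuse candidates against defined criteria.
# Weights and per-candidate results below are hypothetical examples.

def score(results, criteria):
    """Sum the weights of the criteria a candidate satisfies."""
    return sum(c["weight"] for c in criteria if results.get(c["id"]))

criteria = [
    {"id": 1, "desc": "Fewer than 5 defects per 30-day period", "weight": 3},
    {"id": 2, "desc": "Supports all CRUD actions",              "weight": 2},
    {"id": 3, "desc": "Provides a REST web service API",        "weight": 1},
]

# Satisfied (True) / not satisfied (False) per criterion ID
candidates = {
    "Resource Access": {1: True, 2: True, 3: False},
    "Feedback Server": {1: True, 2: False, 3: True},
}

# Rank candidates by total satisfied weight, highest first
for name in sorted(candidates, key=lambda n: score(candidates[n], criteria), reverse=True):
    print(name, score(candidates[name], criteria))
```

Recording the scores, even informally, makes the eventual selection auditable against the criteria defined up front rather than against whoever argued loudest.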

Categories of Evaluation Criteria

Evaluation criteria can be categorized into four main areas:

  • functional requirements
  • quality/non-functional characteristics
  • strategic/project concerns
  • domain and architecture compatibility

Functional Requirements

Functional requirements refer to identifiable functions, features, or characteristics that must be supported. These criteria can be defined based on the solution vision, user stories, use cases, or needs statements. For example:

  • Must support creation of surveys with a configurable number of survey questions

Quality/Non-Functional Characteristics

Quality or non-functional characteristics are more generalized than functional requirements. They are constraints on how the asset must perform or operate. There is typically some commonality to these characteristics from project to project, but the acceptable thresholds might change. For example:

  • Defect rate
  • Compliance with a legal guideline like SOX
  • Availability and completeness of documentation

Strategic/Project Concerns

These criteria address the effects of selecting the asset beyond the solution development lifecycle, once it is operating in the environment. For example:

  • Cost of ownership
  • Vendor stability
  • Availability of support resources

Domain and Architecture Compatibility

Domain compatibility refers to how well the reuse candidate and its features map to the organizational domain terminology and concepts. For example, in an ERP system there may be a concept called party that maps to one or more concepts in the business domain. This can be problematic in cases where there is a complete mismatch. Architecture compatibility refers to how well the candidate supports the software, data, and technical architecture requirements defined by the candidate architecture. For example:

  • Can the levels of the logistics hierarchy be modeled?
  • Is a REST API supported?




Architectural Layer Model {location}

Expected Benefits of Reuse

1. Compliance with enterprise security standards
2. Improved software quality
3. Improved supportability
4. Delivery speed

Areas for Reuse: Resource Access Layer, Domain Layer, User Interface, User Interface Orchestration

Reuse Criteria

ID  Criteria                                                              Category              Supported Benefit
1   Fewer than 5 defects discovered per 30-day period                     Quality               2, 3
2   No more than 1 major release per quarter                              Quality
3   Supports performing all CRUD actions against a relational database    Functional            4
4   Supports creation of surveys with a configurable number of questions  Functional            4
5   Provides a REST web service API                                       Domain Compatibility  3
6   Component cost less than $1000.00                                     Strategic

Component Evaluation

Resource Access

Evaluation Matrix

Criteria ID  Satisfied  Comments
1            Yes
2            Yes
3            Yes        Testing for the component was only performed against SQL Server 2008
4            N/A
5            No         API access is via native .Net. Solution is component oriented.
6            Yes

Feedback Server

Areas for Reuse: UI, UI Orchestration, Domain, Resource Access

In conclusion, solution architects should always strive to develop models and frameworks for conducting their work. The technique discussed in this article is one such technique. It is not to be considered a replacement for actual experience, but it can improve the quality of your product selections.