Service Architecture and the Service Delivery Process: The Transformation Road Less Travelled

by Terri Richards
May 25, 2012


A few years ago, as Intel’s IT service transformation initiatives were really gaining momentum, the topic of integrated frameworks was getting a lot of attention in our enterprise architecture group. As an ITSM subject matter expert, I was pulled into several workgroups. One of those teams was tasked with standardizing the artifacts required for service architecture. It seemed to be very focused on existing infrastructure and not even remotely service-oriented.

I had spent years preaching to operations and engineering, but had not brought the message to my own group. Up to this point, we had focused on the service management processes themselves and their interactions with each other. It was good work, but what else could they be integrated with? Who was the consumer? I began to think about what the service delivery process might look like, and I knew that some standardization would be needed to drive consistency. Although it’s true that not every process needs to be modeled from stem to stern, I wondered to what extent they needed to be modeled at all.

This line of inquiry led us to the service delivery process template, and in particular, the service interaction model. As a reference point for those who despair at their lack of progress, it took about a year for this to evolve into something that was ready for consumption, if only by those who were of a teachable mindset. Although not everyone was a fan, it really helped people make the transition. It was like a map that gave them the “You are here!” view, which eliminated a certain amount of resistance.

Service design and the service level management process define high-level methods, best practices, and guidance for service definition. These components come with their own guidance on redefining and standardizing current services to align with business activities, as well as on appointing a governance body to approve new (future) services. Additional guidance will likely be necessary to train personnel on the process and its interactions with other enterprise processes, such as governance, incident/service request, change and release management, etc. Once your service has been defined in accordance with the framework, you should petition to have it added to the service catalog (internal or external, as appropriate).

Bear in mind, individual service owners may decide to define and document their service workflows in more detail. In this case, a template should be provided for format continuity and ease of integration with other processes and capability architectures (see activity #4 for additional information on this topic). There are six high-level activities common to any service delivery process:

1. Manage and Plan Service Products

Services may have one or many products that are utilized within the service value chain. Some service owners manage the products within their services, while others leverage engineering or capability product teams. This activity focuses on the interaction of the roles, not who will fulfill the roles. However, the roles and their respective activities, regardless of who fulfills them, must integrate seamlessly to provide the desired outcome.

Specific activities may differ from one owner to another, but at a basic level, this activity involves vendor relations and management, product roadmaps, planned releases, and so on. It requires communication and integration with the affected service owners, providers, and change/release/configuration management. Note that this activity is not meant to replace a formalized product management process, but to provide the interaction (and, to a point, the separation) with and within the service delivery process.

The inputs to this activity are service demand, the service plan, and the service owner’s service road map. The outputs are the product lifecycle triggers that could affect the service, such as the road map, cost, and release forecast/schedule.

2. Manage and Plan the Service

Every service has unique needs when it comes to managing, planning, and integrating it into the service level management process. Therefore, this activity will contain owner-identified subactivities/processes, which might include things like cost controls, road map planning, operations reviews, etc.

Inputs to this activity include the outputs of product management (if applicable). Product lifecycle events, like major upgrades, vendor changes, etc., will likely cause a change to the service plan. The service owner should work with the product owner to ensure that he/she is aware of issues, usage trends, and forecasts.

The outputs of this activity are the service plan, road map, and forecast, which augment the service agreements (SLA and OLA) used to manage the customer relationship, and which are themselves outputs of the initial service creation (design) activities.

3. Request and Provision Service

Service owners/providers should have an efficient and effective method for receiving and provisioning requests for service. If possible, a sustainable model would put this activity at the service desk level, so that the requests can be tracked and executed at a lower cost and, ideally, in an automated fashion.

When the service is designed (or redesigned), service owners should be able to articulate their requests and provide documentation for provisioning procedures. These procedures can be the basis for workflow decomposition (illustrated above). If the service is not available, the requester can log his/her capability request. That information can then be routed to, tracked, and trended by the service owner or customer relationship manager, as appropriate.

4. Deliver the Service

The purpose of this activity is to describe a day in the life of the service. It is an essential step toward understanding and documenting the roles that are part of your service delivery process. It is typically best to stick with high-level role categories: analyst, engineer, site representative, service owner, service provider, vendor, etc.

The question of how deeply or broadly you should model a service process depends entirely on the service it describes. A consistent approach is to focus on the service’s high-level objective, then deconstruct the activities that enable each objective and illustrate their integration with other core processes, tools, and service providers. For example, if the objective of the service is 98 percent availability of a hosted web service, where availability is defined as access to specific data and the ability to download it, what are the technical and manual sequences of activities required to make that data available? Which providers perform those activities?

Another model that is very useful here is the service interaction model, which can illuminate gaps in process, technology, and support structure. This view is useful to all audiences, whether they are focused on processes, data, or technology. It is also very handy in pilot situations, as it provides a high-level, single view of the service structure. Often, a team review of this model can identify where “deep dives” are needed to improve or modify processes and/or technology. Additionally, this overhead view of the service interaction can highlight single points of failure and capture the potential incident scenarios. This could, in turn, trigger a number of related decisions, including the pursuit of automatic remediation or a change in footprint to provide redundancy, among other things. Finally, this view is often a reference for modeling the service footprint in the CMDB.

A quick word on roles: Tradition dictates that the people who fulfill these roles should work for the same organization. In the service organization, that may not be true…and that’s okay. Operational level agreements and integrated processes accommodate external roles and are the glue that holds the service value chain together.

5. Monitor and Report

The service level management process requires service owners to continually report and review service performance and costs. To do so, service owners review reports of business service performance and look to their providers to report on individual performance and cost data. Likewise, service providers review reports on operational services and may also have subproviders in their service value chains who are doing the same. It is a bottom-up approach that produces cost and performance transparency for solutions and services. Embedded in that process are the critical products (hardware and software) and their road maps, as well as support strategies and lifecycle management. Remember, policies govern how often a formal review takes place, but it’s up to the service/solution owners to make this activity part of everyday operations.

For this activity, the inputs are the relevant performance and cost data that are reviewed and reported to the upstream service/solution owners. The output is the knowledge to answer this question: “Is a change to the plan or service improvement indicated?”

The service owner should work closely with manageability and automation specialists to develop the proper monitors for the service value chain. For example, establish criteria (e.g., green/yellow/red) to identify products that are healthy or unhealthy. Additionally, collect data that helps you determine what constitutes an incident, both from the infrastructure and user perspectives (where appropriate). Not every incident will be a service incident. Use the data you collect to create a taxonomy upon which you can base your ticketing and/or integrated menu views for various tool sets in the environment. To do this, it is critical that you understand what data is required for the service desk (i.e., level 1 support), as well as what data from the service desk feeds other processes, like change, release, problem management, etc.
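To make the green/yellow/red idea concrete, here is a minimal sketch; the thresholds, field names, and function names are my own illustrative assumptions, not a prescribed standard:

```python
# Hypothetical health classification for a monitored product.
# The availability and error-rate thresholds are illustrative only.

def classify_health(availability_pct, error_rate_pct):
    """Map raw monitor data to a green/yellow/red health state."""
    if availability_pct >= 99.0 and error_rate_pct < 1.0:
        return "green"
    if availability_pct >= 98.0:
        return "yellow"   # degraded: watch closely, but not yet an incident
    return "red"          # candidate for an incident record

def is_service_incident(health, users_affected):
    """Not every infrastructure incident is a service incident:
    only raise one when consumers are actually impacted."""
    return health == "red" and users_affected > 0
```

A taxonomy like this, agreed between the service owner and the service desk, is what lets level 1 support and the change/release/problem processes consume the same monitoring data consistently.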

External reporting is usually quite simple. For example, if the objective for a certain service is 98 percent availability, the report should reflect actual availability, nothing more. Internal metrics are more comprehensive (e.g., server uptime, application/product unavailability, capacity trending, etc.). While these are not made available to the user population, they are critical data for IT, helping the organization understand the cost of service provisioning and resources. The service owner should look for:

  • Performance data: Did we deliver what we promised?
  • Cost-effectiveness analysis: Is the run rate as expected? Are incidents/problems trending up? Are we (via providers) being forced to release more than we planned in order to fix bugs/issues?
  • The road map and forecast for new/increased business and product/provider lifecycle triggers on the horizon: Do we need to increase/decrease capacity? Is the business/technology changing?
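The external reporting rule above (report actual availability, nothing more) can be sketched as follows; the 98 percent objective and the record fields are illustrative assumptions:

```python
# Illustrative split between an external SLA report and internal metrics.
# The objective value and field names are assumptions for this sketch.

SLA_OBJECTIVE_PCT = 98.0

def availability_pct(minutes_in_period, minutes_unavailable):
    """Actual availability over a reporting period."""
    return 100.0 * (minutes_in_period - minutes_unavailable) / minutes_in_period

def external_report(minutes_in_period, minutes_unavailable):
    """The customer-facing report: actual availability, nothing more."""
    actual = availability_pct(minutes_in_period, minutes_unavailable)
    return {"availability_pct": round(actual, 2),
            "objective_met": actual >= SLA_OBJECTIVE_PCT}

# A 30-day month: 43,200 minutes, 500 of them unavailable.
print(external_report(43_200, 500))  # → {'availability_pct': 98.84, 'objective_met': True}
```

Internal views would add the richer inputs listed above (uptime, cost run rate, incident trends), but those stay inside IT.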

Service improvement brings to mind a project plan with a thousand lines, but this is not necessarily the case. Often the change or correction may simply be to update an agreement (SLA/OLA/UC) or execute a very low-impact maintenance action. On the other hand, if the issues warrant root-cause analysis, the problem management process should be initiated. That doesn’t mean the players will necessarily change (though they could), but the process will. You need not, however, reinvent problem management within your service delivery process; rather, integrate or “dip” into the problem management process, reusing those best practices, tools, and methods that consistently perform well.

6. Terminate the Service

There are usually two types of service termination:

  • Instance termination: In this case, a user or customer that has been subscribing to a service no longer has a need for it, but the service is still being offered. This type of termination should be included in the service delivery workflow and activities.
  • End-of-life termination: In this case, the service owner has decided to discontinue the service. This could be due to cost factors, competition, new product road maps, or new technology. This falls within the service level management process, since it has implications for the service catalog and the infrastructure, and it requires a lot of coordination with and through change management.

That’s it: A standardized approach for creating a service delivery business process. It may differ slightly from service to service, to accommodate specific delivery activities, but the flow and lifecycle should remain constant and consistent. In the next issue of SupportWorld, we will discuss the service interaction model in greater detail. It focuses on the more technical side of a service, and brings together engineering and support.

One of the most difficult aspects of service transformation is presenting it in a way that resonates with operations and the engineering and architecture communities; for example, a simple map that any IT employee could wrap their head around, regardless of where they are on the journey to service maturity, and that tells a multifaceted story about end-to-end service. Sound like fiction? Meet the service interaction model. Derived from a classic business interaction model, this artifact illustrates the integration of the key service players and their systems, providing a consistent view of how solutions and capabilities work together in the context of an end-to-end service.

Architectural artifacts typically focus on individual applications or capabilities. The documents are technically focused, depicting deployment footprints, communication protocols, routing architectures, application functionality, etc. This is all good information, but how does it accomplish the objectives of an end-to-end service? Let’s consider the model itself (illustrated below).


Consumers

The consumers may be users (i.e., people), infrastructure devices/applications, or services. For users, we can model the various ways that an individual might consume our services (e.g., phone, laptop, or both) and understand the impact this has on support. Note that you may want to segment your user community into, for example, external users, contract employees, full-time employees, etc.


Systems

If the systems are owned by the service being modeled, they are depicted by system-type icons. If they belong to another service, use the service icon. In the illustration above, we have a simple view of the e-mail service. The systems are represented by infrastructure icons, such as servers, storage, and network devices, and are all under the management of the e-mail service owner. For example, in the e-mail service model, the icon for global directory services indicates that the underlying systems are owned by that service owner and are not detailed in the model. (In situations where you do want to identify the key system, it can be added as a drawing object, but it should not be directly connected to the model. The interaction is with the service, not the drawing object.)

For those who use a common, structured model repository, such external services can be represented by a reuse glyph that indicates the availability of an underlying service interaction model (along with details on where it is located). In this way, you can discourage teams from redrawing the same services over and over again, unchecked, across various models. If large enterprises used a structured repository like this, they would know how many models featured a certain icon, which would determine the extent to which the architecture would need to be changed to accommodate a major release.
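A repository like that can answer the “how many models feature this icon” question mechanically. This is a hedged sketch; the model and service names are invented for illustration:

```python
# Illustrative model repository: which interaction models reference a service?
# Model names and service names below are made up for this sketch.

models = {
    "email-service": {"global-directory", "storage", "network"},
    "unified-messaging": {"email-service", "global-directory"},
    "video-conferencing": {"network", "global-directory"},
}

def models_referencing(service):
    """Scope a major release: every model featuring this service's icon
    may need an architecture review."""
    return sorted(name for name, refs in models.items() if service in refs)

print(models_referencing("global-directory"))
```

Even a simple index like this gives the architecture group a release-impact list instead of a manual hunt through drawings.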


Service Owner and Providers

The service owner is always in the left quadrant; he or she is the one who owns the agreements with other IT providers (OLAs) or vendors (UCs). If a provider’s services/systems are depicted in the model, then a corresponding OLA should be in place and represented in the provider quadrant. This may seem like overkill, but this activity often highlights potential gaps that could be trouble down the road.

Service Desk

Placing the service desk (or desks, if appropriate) in the top quadrant ensures that everyone is clear about how consumers will contact support. If the consumer is a system, there should be processes in place to alert support personnel. If a third party provides support, or if the provider is operating multiple service desks, the model should depict specific events or user scenarios and should include corresponding knowledge articles and/or troubleshooting scripts, as needed.

Once the model has been drafted and made available for review by the key IT stakeholders, an interesting phenomenon occurs: The parties discuss the model, provide input, and eventually agree that this is how the service actually works. Service owners love it because roles and responsibilities become clearer, they can see how the provider’s systems truly enable the service, and they can identify and mitigate the risks associated with each interaction. Let’s explore that process.

First, start with the services and their taxonomy (i.e., service descriptions and models), then check the service catalog for information, objectives, and components. It is best to do this as you are defining or redefining a service, and it may actually prompt service owners to change the way they offer their services.

Next, identify your subject matter (i.e., topic) experts. This should be done in partnership with the service owner; he or she should identify the engineering and support personnel that can best support the effort. Start by asking the experts to provide any documentation they may have, including (but not limited to) technical drawings and/or formal solution/reference architectures. The experts should be able to answer the following questions:

  • Who are the consumers (i.e., machines, applications, people, etc.)?
  • How do they consume the service(s)?
  • What are the key systems?
  • What are the key transactions?
  • Who are the providers?

This process may take several reviews and updates, but it is time well spent. In my experience, as the model evolves, even reluctant parties begin to take a real interest. This is key, because everyone must agree to the deliverables and relationships set down in the model; no connecting line should be left blank.

What Do Participants Learn from This Exercise?

Consumers: Is a business process analysis or improvement indicated? Are there policies in place that affect access, security, or usage? Are the correct troubleshooting documents available? For example, does the service desk know whether consumers access their e-mail via phone, the Internet, or the LAN? If not, that should be one of the first questions an agent asks, or the first prompt in the self-service system.

Systems: Do we really understand what constitutes “nonavailability”? Are the correct knowledge articles available? Do we have the right event management monitors in place? Is additional analysis required? Are there any automation opportunities? Have we identified the systems and components to include in the CMDB? Is there justification for going deeper/wider?

Providers: Have we identified all our providers and do we have agreements in place? Do we have the right support model? Are there any gaps for second- or third-level escalations? Do the providers’ objectives enable the objectives of the end-to-end service (i.e., the service owner promises 99.5 percent availability, but a key provider is offering only 98 percent)?
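The availability-gap example above (a 99.5 percent promise resting on a 98 percent provider) can be checked mechanically. This sketch assumes serially dependent providers, so their availabilities multiply; real topologies with redundancy change the math:

```python
# Hedged sketch: can the provider chain support the end-to-end objective?
# Assumes providers are in series (availabilities multiply); redundant
# footprints would require a different calculation.

def chain_availability_pct(provider_objectives_pct):
    """Best-case end-to-end availability for serially dependent providers."""
    result = 1.0
    for pct in provider_objectives_pct:
        result *= pct / 100.0
    return result * 100.0

def objectives_align(service_objective_pct, provider_objectives_pct):
    """Flag the gap: do the providers' OLA objectives enable the
    service owner's end-to-end promise, even in the best case?"""
    return chain_availability_pct(provider_objectives_pct) >= service_objective_pct

# A 99.5% promise over providers at 99.9% and 98.0%: best case ~97.9%.
print(objectives_align(99.5, [99.9, 98.0]))  # → False
```

Running this during the model review turns “do the providers’ objectives enable the end-to-end objective?” from an opinion into an arithmetic check.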

Build Your Own Interaction Model

There are many ways to approach the interaction model, but I have found that there are three standard methods. Here are some examples.

1. Service offering (e.g., e-mail, unified messaging, etc.): The sample model on the previous page illustrates the service offering approach. This approach may require deep dives to identify underlying system interaction details, like gateways. For inbound/outbound e-mails from outside the company, there is a series of gateways through which the e-mail travels. Since they are owned by the e-mail service owner, they would need to be detailed in an underlying system interaction model.

2. Layered technology (e.g., internal and external application hosting, infrastructure, storage, backup and recovery, etc.): For more complex service offerings, like hosting, we take a layered approach, illustrating the interactions at various levels.

3. Activities (e.g., request, join, or deliver a video conference, etc.): The activities approach works best when you need to illustrate the consumer’s and the provider’s view of the systems in play, and it would be too messy to address both in a single model. Take video conferencing, for example. There may be different processes and infrastructure/providers for scheduling, joining, and delivering video conferences. Adding the delivery pieces for the conference itself may be too complex to fit onto a page and still be easy to follow.

At this point, there is no process, method, or technology that eliminates the painful and time-consuming transformation process through which IT shops become truly service-focused. But the service interaction model has certainly helped us find that “end-to-end” point of view that unites architects, engineers, and operators. Perhaps it will help you navigate the winding road of transformation in your organizations.

Terri Richards joined Intel’s IT operations group in 2000 and the Strategy, Architecture, and Innovation (SAI) group in early 2004, just as Intel was beginning to consider ITIL adoption. During that period, Terri and her team developed ITIL education and implementation strategies for the IT organization. She is an ITIL v3 Expert and has been recognized as one of Intel’s ITSM subject matter experts. Terri is currently an IT enterprise architect, where she continues to pursue the deployment of ITSM through strategic planning, methods, and architecture processes.


