University of California Approach

Introduction
Performance measurement is a cornerstone of the contracts between the University of California and the U.S. Department of Energy for the operation of the DOE laboratories. Performance metrics should be constructed to encourage performance improvement, effectiveness, efficiency, and appropriate levels of internal controls. They should incorporate "best practices" related to the performance being measured and, where appropriate, cost/risk/benefit analysis.

The Department of Energy has promulgated a set of Total Quality Management guidelines that indicate that performance metrics should lead to a quantitative assessment of gains in:

  • Customer Satisfaction
  • Organizational Performance
  • Workforce Excellence

The key elements of performance metrics developed under these guidelines should address:

  • Alignment with Organizational Mission
  • Cost Reduction and/or Avoidance
  • Meeting DOE Requirements
  • Quality of Product
  • Cycle Time Reduction
  • Meeting Commitments
  • Timely Delivery
  • Customer Satisfaction

The Process
The first step in developing performance metrics is to involve the people who are responsible for the work to be measured because they are the most knowledgeable about the work. Once these people are identified and involved, it is necessary to:

  1. Identify critical work processes and customer requirements.

  2. Identify critical results desired and align them to customer requirements.

  3. Develop measurements for the critical work processes or critical results.

  4. Establish performance goals, standards, or benchmarks.

Performance goals are best specified when they are defined at three primary levels:

Objectives: Broad, general areas of review. These generally reflect the end goals based on the mission of a function.

Criteria: Specific areas of accomplishment that satisfy major divisions of responsibility within a function.

Measures: Metrics designed to drive improvement and characterize progress made under each criterion. These are specific, quantifiable goals based on individual expected work outputs.
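
To illustrate how the three levels nest, the following Python sketch records a hypothetical objective, criterion, and measure. The procurement example, names, and target value are invented for illustration only and are not drawn from any UC or DOE document.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Measure:
        """A specific, quantifiable goal based on an expected work output."""
        name: str
        target: float        # goal value
        unit: str            # e.g., "days", "percent"

    @dataclass
    class Criterion:
        """A specific area of accomplishment within a function."""
        name: str
        measures: List[Measure] = field(default_factory=list)

    @dataclass
    class Objective:
        """A broad, general area of review reflecting the mission of a function."""
        name: str
        criteria: List[Criterion] = field(default_factory=list)

    # Hypothetical example for a procurement function.
    objective = Objective(
        name="Provide timely, cost-effective procurement support",
        criteria=[Criterion(
            name="Timely delivery",
            measures=[Measure(name="Average purchase-order cycle time",
                              target=10.0, unit="days")])])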

The SMART test is frequently used to provide a quick reference to determine the quality of a particular performance metric:

S = Specific: clear and focused to avoid misinterpretation. Should include measure assumptions and definitions and be easily interpreted.

M = Measurable: can be quantified and compared to other data. It should allow for meaningful statistical analysis. Avoid "yes/no" measures except in limited cases, such as start-up or systems-in-place situations.

A = Attainable: achievable, reasonable, and credible under conditions expected.

R = Realistic: fits into the organization's constraints and is cost-effective.

T = Timely: doable within the time frame given.

Types of Metrics
Quality performance metrics allow for the collection of meaningful data for trending and analysis of rate-of-change over time. Examples are:

  • Trending against known standards: the standards may come from either internal or external sources and may include benchmarks.
  • Trending with standards to be established: usually this type of metric is used in conjunction with establishing a baseline.
  • Milestones achieved.
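
To make the trending examples above concrete, the short Python sketch below fits a least-squares slope to monthly observations of a metric and compares the latest value to a known standard. The metric, the data, and the standard are hypothetical, chosen only to show the rate-of-change calculation.

    from statistics import mean

    # Hypothetical monthly observations of a metric (e.g., average cycle time
    # in days) and a known standard it is trended against.
    months = [1, 2, 3, 4, 5, 6]
    values = [14.2, 13.8, 13.1, 12.9, 12.4, 11.8]
    standard = 12.0   # benchmark from an internal or external source

    def slope(xs, ys):
        """Least-squares slope: the rate of change of the metric per month."""
        x_bar, y_bar = mean(xs), mean(ys)
        num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
        den = sum((x - x_bar) ** 2 for x in xs)
        return num / den

    rate = slope(months, values)
    print(f"Rate of change: {rate:+.2f} per month")
    print(f"Latest value {values[-1]} vs. standard {standard}: "
          f"{'meets' if values[-1] <= standard else 'does not meet'} standard")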

Yes/No metrics are used in certain situations, usually involving establishing trends, baselines, or targets, or in start-up cases. Because there is no valid calibration of the level of performance for this type of measure, they should be used sparingly. Examples are:

  • Establish/implement a system.
  • Reporting achieved (without analyses).
  • System is in place (without regard to effectiveness).
  • Threshold achieved (arbitrary standards).
  • Analysis performed (without criteria).

Determining the Quality of Metrics
The following questions serve as a checklist to determine the quality of the performance metrics that have been defined.

  1. Is the metric objectively measurable?

  2. Does the metric include a clear statement of the end results expected?

  3. Does the metric support customer requirements, including compliance issues where appropriate?

  4. Does the metric focus on effectiveness and/or efficiency of the system being measured?

  5. Does the metric allow for meaningful trend or statistical analysis?

  6. Have appropriate industry or other external standards been applied?

  7. Does the metric include milestones and/or indicators to express qualitative criteria?

  8. Are the metrics challenging but at the same time attainable?

  9. Are assumptions and definitions specified for what constitutes satisfactory performance?

  10. Have those who are responsible for the performance being measured been fully involved in the development of this metric?

  11. Has the metric been mutually agreed upon by you and your customers?
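
As an informal illustration of applying this checklist (the wording below paraphrases the eleven questions, and the review function and sample answers are assumptions made for the example), a draft metric could be screened as follows:

    # The eleven checklist questions above, phrased for a yes/no review.
    CHECKLIST = [
        "Objectively measurable?",
        "Clear statement of the end results expected?",
        "Supports customer requirements (including compliance)?",
        "Focuses on effectiveness and/or efficiency?",
        "Allows meaningful trend or statistical analysis?",
        "Appropriate industry or external standards applied?",
        "Milestones and/or indicators for qualitative criteria?",
        "Challenging but attainable?",
        "Assumptions and definitions for satisfactory performance?",
        "Those responsible for the work involved in development?",
        "Mutually agreed upon with customers?",
    ]

    def review_metric(answers):
        """Return the questions answered 'no'; answers is a list of booleans
        in the same order as CHECKLIST."""
        return [q for q, ok in zip(CHECKLIST, answers) if not ok]

    # Hypothetical review of a draft metric that fails the last question.
    gaps = review_metric([True] * 10 + [False])
    for q in gaps:
        print("Needs work:", q)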
 

 

Copyright 2005 Oak Ridge Associated Universities