3.0   Evaluation Design

3.1    Purpose and Scope

This evaluation was designed to address standard evaluation issues pertaining to program relevance and performance, as well as senior management information requirements regarding the effectiveness of decision making related to departmental scientific activity in support of weather prediction. The period examined is from 2007-08 to 2009-10.

The evaluation focused on the relevance and performance of scientific activities supporting the successful delivery of weather prediction products and services to clients. As described in section 2.2, science is defined, for the purpose of this evaluation, as comprising research, development, production and monitoring activities, further categorized as "R&D" and "related science activities."

To properly assess the relevance and performance of science activities, evaluators judged it necessary to examine all activities along the MSC’s "innovation chain"15 (see Figure 3), including all R&D and related science activities, as well as the delivery of weather prediction products and services. Service delivery was included in the scope of the evaluation on the assumption that the fundamental "business" of weather prediction is the generation of information or knowledge that is transferred to enable users to make informed decisions. The success of the science component of weather prediction was therefore assessed by the degree to which it meets the needs of its users.

Figure 3: Weather Prediction Innovation Chain

Consequently, the evaluation focused on measuring the adequacy and effectiveness of the processes and functions enabling the transfer of science knowledge throughout the weather prediction innovation chain and to its target end users, rather than on the quality of the weather prediction products and services themselves.

As shown in Table 1, this "outside-in" perspective informed the types of evaluation questions examined as well as the analysis of the findings.

Table 1: Evaluation Issues and Questions16

Relevance

1. To what extent is there a continued need for R&D, production, monitoring and service delivery in support of weather prediction?

2. To what extent is R&D, production, monitoring and service delivery in support of weather prediction consistent and aligned with federal government priorities and departmental strategic outcomes?

3. Is there a legitimate and necessary role for government to undertake scientific activity in support of weather prediction?

Performance: Achievement of Expected Outcomes

4. To what extent and in what circumstances are the needs of internal and external clients of R&D, production, monitoring and service delivery in support of weather prediction being met?

5. In what situations, contexts, and circumstances does the current approach to knowledge transfer used by weather prediction work, and for whom?

6. To what extent and in what circumstances are priority-setting and decision-making mechanisms and practices pertaining to R&D, production, monitoring and service delivery in support of weather prediction effective for meeting weather prediction client needs?

7. To what extent and in what circumstances does the QMS contribute to effective R&D, production, monitoring and service delivery in support of weather prediction?

8. What external factors influence the capacity of R&D, production, monitoring and service delivery to support weather prediction, and how?

Performance: Demonstration of Economy and Efficiency

9. Does the level of effort expended on weather prediction activities yield commensurate value? Are there more efficient ways of delivering science to support weather prediction?

3.2    Evaluation Methodology

The following data collection methods were used for this evaluation.

Review of Documents: The evaluation team reviewed key documents, including relevant legislation, federal budgets, Reports on Plans and Priorities (RPPs), Departmental Performance Reports (DPRs), departmental and other relevant websites, internal program documentation, relevant publications, and past audit and evaluation reports. A list of key documents reviewed is contained in Annex 2.17

Key Informant Interviews: Key informant interviews were conducted to gather the views of a range of program stakeholders and to collect detailed information related to most evaluation questions and issues. Interviews were conducted in person (in the National Capital Region) or by telephone, using interview guides tailored to the specific key informant groups.18 A qualitative analysis of the interview findings was conducted.19

Table 2 shows the total number of interviews completed by directorate / interviewee category.

Table 2: Interviews Completed by Directorate / Interviewee Category

Interview Group | Number of Interviews
Assistant Deputy Ministers | 2
MSC Weather and Environmental Monitoring Directorate | 5
MSC WEO Directorate | 8
MSC Business Policy Directorate | 2
MSC WEPS | 11
S&TB (ASTD) | 7
CIOB | 2
External clients and peers | 6
Total | 43

3.3    Methodological Limitations

This evaluation relied on two lines of evidence (interviews and document review) and did not benefit from directly relevant performance data, since the topic (science support) is not tied to a functional or results structure within the department for which performance data could reasonably have been expected. Findings and conclusions were therefore based mostly on qualitative data from key informant interviews, supported by extensive documentation.

Only a limited number of external weather prediction clients were interviewed as part of this evaluation, owing to difficulties in identifying and reaching such clients within the data collection timeframe. Indirect information on client needs was, however, obtained through interviews with representatives involved in the delivery of the MSC’s services and through a review of secondary program data and external documentation.

Due to the unusual focus of the evaluation (a cross-cutting support function rather than a clearly delineated organizational function or a departmental result), an innovative approach had to be developed, and standard evaluation questions and tools had to be custom-tailored to adequately address the evaluation issue of performance. The conceptual approach taken for this evaluation entailed measuring success in terms of a process of knowledge translation that should support effective priority setting and decision making and, ultimately, result in external clients’ needs being met. While this approach did not allow for the extensive coverage that traditional evaluation questions provide, it enabled the evaluation team to produce a useful portrait of the circumstances in which science activities effectively support weather prediction.

The scope of this evaluation excluded measurement of the quality of the science conducted in support of weather prediction, as such measurement requires expert skills and knowledge best provided by an external panel of academics. However, such a study was conducted in 200220 and was taken into consideration in the current evaluation.


15 In the context of this evaluation, the MSC’s "innovation chain" refers to the end-to-end process, from R&D to production to services, that enables the design and delivery of MSC products and services that remain relevant and adapted to scientific advancements and client needs.

16 An evaluation matrix including evaluation questions, indicators and methods/sources is included in Annex 1.

17 A complete list of all documents reviewed is included in a technical appendices report.

18 Separate guides were prepared for the following groups: EC staff working in research, development, production, monitoring and service; EC enabling functions; EC ADMs; external clients; peers and collaborators; and international peers and collaborators. These guides are included in a technical appendices report.

19 In summarizing the degree of consensus in key informant interview findings, the following guidelines were used: "a few interviewees": less than 25%; "a minority of interviewees": 25-44%; "approximately half of interviewees": 45-55%; "a majority of interviewees": 56-75%; "most interviewees": 76-94%; and "almost all interviewees": 95% or more.
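For illustration only, the following minimal Python sketch shows how these consensus bands could be applied when summarizing interview findings. The function name and the treatment of band boundaries as simple thresholds are assumptions made for illustration; this is not part of the evaluation's actual tooling.

# Hypothetical sketch: map the percentage of interviewees expressing a view
# onto the consensus labels defined in footnote 19.
def consensus_label(pct: float) -> str:
    if pct >= 95:
        return "almost all interviewees"
    if pct >= 76:
        return "most interviewees"
    if pct >= 56:
        return "a majority of interviewees"
    if pct >= 45:
        return "approximately half of interviewees"
    if pct >= 25:
        return "a minority of interviewees"
    return "a few interviewees"

# Example: a view shared in 30 of the 43 interviews (about 70%)
# would be reported as held by "a majority of interviewees".
print(consensus_label(30 / 43 * 100))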

20 Elbert W. Friday, Jr. (Panel Chair) et al., Independent Review of the Research and Development Program of the Meteorological Service of Canada, January 2002.
