Evaluation of the Canada-U.S. Clean Energy Dialogue

February 2011


3.0 Evaluation Design

3.1 Purpose, Objectives and Scope of the Evaluation

Evaluation Purpose and Scope

The purpose of this evaluation is to fulfill Treasury Board Secretariat requirements and to meet the information needs of senior management for the Clean Energy Dialogue (CED) under the Clean Air Agenda (CAA).

Although CED activities, outputs and outcomes are associated with the CAA International Actions Theme, this initiative was introduced into the CAA architecture after the International Actions Theme was in its third year of implementation and after the CAA thematic evaluations were already underway. As well, separate funding for CED activities over two years (2009–2010 and 2010–2011) was obtained, as the CED was not part of the submission for the International Actions Theme. As a result, the evaluation of the CED was conducted separately from the evaluation of the CAA International Actions Theme, and CED evaluation results will contribute to reporting at the CAA level.

This evaluation examined all activities included in the CED that were conducted from its start in 2009–2010 to the summer of 2010–2011. The focus of the evaluation was on issues related to relevance and performance—specifically on outputs and progress towards immediate outcomes. Given that the evaluation was conducted early in the implementation of the program, performance information for final and some intermediate outcomes was not available. Where processes are ongoing, the evaluation examined the extent to which outcomes have been achieved and/or the likelihood that planned activities will achieve expected results. In addition, a key aspect was to examine how external factors have impacted, or are likely to impact, the theme’s ability to achieve its expected outcomes.

Evaluation Issues

This evaluation examined five issues:

  1. Continued need for the program
  2. Alignment with government priorities
  3. Alignment with federal roles and responsibilities
  4. Achievement of expected outcomes
  5. Demonstrated efficiency and economy1

Evaluation Audience

The primary audience for this evaluation includes deputy ministers from the departments involved in delivering the CED: EC, DFAIT and NRCan.

The secondary audience for this evaluation includes senior managers in each of the involved departments, CED Co-chairs and Working Group members, Departmental Evaluation Committees (DECs) responsible for recommending approval of the evaluation, and the Treasury Board Secretariat. The tertiary audience for this evaluation includes members of Parliament and the Canadian public.2


3.2 Evaluation Approach and Methodology

Two main activities3 were undertaken to develop the evaluation findings: document review and key informant interviews.

1. Document Review:

An in-depth review of all relevant documents provided by the departments engaged in the CED was conducted and further documents were requested where gaps were identified or where additional information was needed to supplement the evidence base. The project team reviewed each document to understand the mandated requirements, goals and objectives, governance structure, authorities, activities, outputs, and outcomes for each component. The overall adequacy of the document evidence base (e.g., with regard to availability of performance information) was also assessed. All evidence from the documentation was recorded and checked against the defined evaluation issues. This data collection method addressed evaluation questions 1 to 14.

2. Key Informant Interviews:

Interviews were conducted with key informants identified by the departments involved in the CED, as well as with additional informants suggested by the Evaluation Consultant Team (e.g., external contacts, U.S. partners). Interviews were conducted either in person or by phone and provided information on the relevance and performance of the program components. Additional interviews were conducted as the evaluation progressed, based on feedback from interviewees and information learned through the process. Twenty-eight interviews were conducted in total.

Interviews addressed evaluation questions 1 to 14.


3.3 Limitations

The evaluation focused on the completion of outputs/deliverables and early outcomes over the first 18 months (February 2009–present), plus anticipated outcomes to March 2011. As such, this evaluation assesses the achievement of, or progress towards achieving, the intended immediate outcomes. An assessment of the full success of the CED and whether it has achieved all its intermediate and final outcomes cannot be made until all activities have been completed and their impact has been realized over time.

Given that dialogue among its key players is fundamental to the CED, quantitative data alone do not capture the full performance story of the program. As a result, the perspectives of staff, partners and stakeholders were obtained through interviews to provide a more complete understanding of the program's relevance and performance. Where available, program documents were used to substantiate the interview data.

Access to U.S. and private-sector stakeholders was limited. As a result, it was challenging to develop an external perspective on the ongoing relevance and performance of the CED. This is particularly relevant to assessing the impact of the DFAIT Outreach component, which was targeted at U.S. audiences.

 


1 The specific evaluation issues, questions and indicators are provided in Annex 1.

2 Final evaluation reports become publicly available documents once approved, and will be accessible from Environment Canada’s website.

3 The interview guides used for data collection and a bibliography of documents reviewed are provided under separate cover in a technical appendix.

 
