Chief Audit Executive / Director General, Audit and Evaluation Branch – Annual Report – 2009-2010
June 2010
4 Performance Summary: Evaluation
4.1 Progress Against Plan
Progress Against Plan: Evaluation
Evaluation Projects Planned for this Period: 34
- 24 evaluations
- 7 evaluation plans
- 3 other projects
The plan consisted of 33 evaluation projects, of which 15 were carried over from the previous year. One unplanned interdepartmental evaluation was introduced in the fourth quarter of 2009-10:
- Genomics Research and Design Initiatives
Three projects do not lead to a final product and as such are not listed in the tables below: CAA Evaluation Oversight is an ongoing interdepartmental role; recommendations follow-up is an ongoing function; and one project consisted of the time needed to complete the posting of an already-approved evaluation report.
Evaluation Projects Completed during this Period: 17
- 12 evaluations
- 5 evaluation plans
Of the 21 evaluation projects planned for completion in 2009-10, 14 were carried over from the previous fiscal year and have been completed.
The following were reviewed and approved by DEC:
Carried-over from 2008-09
- EcoAction Funding (May 2009)
- Environmental Damages Fund (July 2009)
- Grants to support Environmental Research and Development (July 2009)
- Contributions to support Environmental Research and Development (July 2009)
- Contributions to support Environmental and Sustainable Development Initiatives (July 2009)
- Contributions to support Canada's International Commitments (July 2009)
- National Agri-environmental Standards Initiatives (July 2009)
- Clean Air Regulatory Agenda Evaluation Plan (October 2009)
- Weather Predictions Evaluation Plan (October 2009)
- Habitat Stewardship Contribution Program (October 2009)
- Invasive Alien Species Strategy (October 2009)
- Enforcement Program (October 2009)
- Chemicals Management Plan – Performance Measurement and Evaluation Plan (October 2009)
- Strategic Environmental Assessment (October 2009)
Planned for 2009-10
- Evaluation of the National Air Quality Health Index (February 2010)
- Federal Species at Risk Evaluation Plan (February 2010)
- Renewable Fuels Evaluation Plan (April 2010)
Evaluation Projects on Hold
- Clean Air Agenda Community Partnerships Theme – Program not yet implemented.
Planned Carried-Over Evaluation Projects: 10
- 9 evaluations
- 1 evaluation plan
- Building Public Confidence in Pesticide Regulation and Improving Access to Pest Management Products (tabling Q1 – 2010-11)
- Wildlife Habitat Canada Foundation (tabling Q2 – 2010-11)
- Freshwater Initiatives Evaluation Plan (tabling Q1 – 2010-11)
- Clean Air Regulatory Agenda Theme (tabling Q1 – 2010-11)
- Clean Air Agenda International Actions Theme (tabling Q1 – 2010-11)
- Clean Air Agenda Management and Accountability Theme (tabling Q1 – 2010-11)
- Research and Development, Production and Monitoring in Support of Weather Predictions (Delayed from Q1 to Q2 2010-11)
- Genomics Research and Design Initiatives (interdepartmental, led by Natural Resources Canada) (tabling Q4 – 2010-11)
- Meteorological Program for the 2010 Winter Games (interdepartmental, led by Canadian Heritage) (tabling Q4 – 2010-11)
- Chemicals Management Plan Evaluation (interdepartmental, with Health Canada) (tabling Q4 – 2010-11)
Unplanned Carried-Over Evaluation Projects: 3
- 2 evaluations
- 1 evaluation plan
- Water Management Evaluation Plan (tabling Q1 – 2010-11)
- Great Lakes Action Plan (tabling Q1 – 2010-11)
- Improved Climate Change Scenarios (tabling Q1 – 2010-11)
Carry-overs were due to a variety of factors, such as complexity arising from evaluating interdepartmental initiatives or uncovered during the design phase, external events, and delays in the approval process.
Percentage of Plan completed
Evaluation recommendations followed up
Follow-up was conducted on 41 evaluation recommendations: of these, 6 had no documentation, 1 demonstrated some progress, 10 demonstrated moderate progress, 12 demonstrated significant progress, and 12 were fully implemented.
Twenty-nine percent of the 41 followed-up recommendations were fully implemented.
A detailed report on the status of management action plans for evaluation is provided in Appendix D.
4.2 Professional and Practices Development
In April 2009, the TB Policy on Evaluation came into effect and has been a major driver of the activities of the Evaluation Division. While many aspects of the new Policy were already in place at EC, the new Policy also brought with it a number of new requirements for the Evaluation Division. To address these requirements, an implementation plan was created to set timelines and milestones for the following activities and to ensure compliance with the new Policy:
- Estimate of resource requirements: A refined estimate of resource requirements is currently being developed using data from the Time Recording System (TRS). The estimate will illustrate the level of resources required to meet the evaluation coverage requirement for all direct program spending over five years beginning in 2013-14, as well as the increased responsibility in the area of performance measurement.
- Five-year evaluation plan: Full evaluation coverage on a rolling five-year cycle will be required beginning in 2013-14. In 2009-10, a five-year departmental evaluation plan was created for 2010-11 to 2014-15, linked to the new 2010-11 PAA and including all direct program spending and grant and contribution programs.
- Revised departmental Evaluation Policy: The departmental Evaluation Policy was revised to reflect the provisions and requirements of the new Policy on Evaluation, to incorporate the new Directive on the Evaluation Function and Standard on Evaluation and to include Terms of Reference for Departmental Evaluation Committee (DEC) members.[3]
- Revised generic evaluation framework: The generic evaluation framework was revised to include a focus on value-for-money through inclusion of the required evaluation issues of relevance and performance outlined in the new Directive on the Evaluation Function.
- Performance measurement activities: The Evaluation Division has begun working with the Corporate Management Directorate (CMD) to develop a strategy to ensure that the department meets the new Evaluation Policy requirement for the availability of appropriate performance measurement in support of evaluation. Preparatory work was undertaken for the first annual report on the state of performance measurement in support of evaluation.
- Revised IQA protocol: The IQA protocol was revised to reflect the provisions and requirements of the new Policy on Evaluation and to incorporate the new Directive on the Evaluation Function and Standard on Evaluation. The revised protocol was completed for application in the IQA for 2009-10.
As well, to enable the implementation of the Policy on Evaluation, the Department has developed and implemented a strategy whereby department managers are informed of the requirements of the Evaluation Policy pertaining to performance management. The Department continues to monitor management's responses to previous evaluations. The Department's Evaluation Plan has also been expanded to include a monitoring component related to performance measurement to support future evaluations.
Evaluation leadership has been demonstrated through participation on the TBS working group on competencies; hosting of the second annual Environmental Evaluators Network (Canada) Forum; presentations at the Canadian Evaluation Society Annual National Conference and to the Heads of Evaluation on best practices highlighted in the OAG Audit of Evaluation; and engagement in the development of guidelines and a handbook for Performance Measurement and Evaluation Plans for regulatory initiatives.
4.3 Follow-up on Recommendations and Management Action Plans
As noted above under 3.5, similar procedures have been used to manage and follow up on evaluation recommendations and to regularly report to the DEC on the status of the management actions taken in response to evaluation recommendations. The Evaluation Division is currently transferring its evaluation follow-up recommendations to TeamMate. Follow-up on recommendations is conducted on an ongoing basis to ascertain the degree to which the action plans in response to recommendations made in previous evaluations have been implemented, and to determine outstanding gaps/risks. Progress reports on follow-ups are provided on a regular basis to the DEC and the Deputy Minister.
4.4 Other Accomplishments
In 2009-10, a progress reporting tool was developed and implemented to track the progress of evaluation projects against the risk-based evaluation plan, including timelines for completion and a rationale for deviations from the plan. The progress report is presented at every DEC meeting.
4.5 Client Surveys
After each evaluation, the Division sends surveys to clients in order to assess evaluation products and services. Evaluation Committee members are asked to provide feedback on the quality of evaluation products and services and the value of the evaluations themselves, and ADMs with functional responsibility for the program that was evaluated are asked about the value of the evaluation. During this year, a question was added to both surveys asking whether "The evaluation contributed to information required in the context of program renewal." Surveys were sent out for eight evaluations; the Evaluation Division received responses from Evaluation Committee members for 100 percent of the evaluations and from ADMs for 50 percent of the evaluations.
The Division received recurrent comments praising the professionalism and skill of the evaluators, the quality of evaluation reports, and the applicability and transferability of evaluation report findings, conclusions, and recommendations.
The Division also received repeated comments indicating problems in the working relationship with consultants, the quality of first drafts produced by consultants, and the length of the online publishing timeline. These statements reflect anomalous experiences this year with our consultants; the Division is aware of and managing these risks. In terms of the survey answers (which ranged from Strongly Disagree to Strongly Agree, as well as "No Answer / Don't Know"), most of the answers were positive. For the questions on the quality of evaluation services and products (asked of Evaluation Committee members), the proportion of positive responses (i.e., those responding Agree or Strongly Agree) ranged from 79 to 100 percent. For the questions related to the value of the evaluation, the proportion of positive responses ranged from 50 to 93 percent for Evaluation Committee members and 60 to 100 percent for ADM respondents.
4.6 Quality Assurance
The Evaluation Division undertakes an annual IQA process to ensure that the Division is meeting accepted professional standards. The IQA standards and criteria for assessment have traditionally been based on the IIA Standards in order to establish a relatively similar self-assessment process for both the audit and evaluation functions. Since the last IQA, however, both the Evaluation Policy and the Evaluation Standards have been updated, and an Environment Canada-specific Evaluation Policy has been developed. While the current IQA continues to reflect IIA standards where possible, it also integrates these new and updated evaluation policies and professional standards. The IQA is conducted by an Evaluation Manager within the Evaluation Division and validated by the Division Director. The IQA results for 2009-10 indicate that the Evaluation Division is meeting all identified standards.
The OAG's Fall 2009 Report examined the effectiveness of evaluation functions within a number of government departments; EC was one of those departments. The Department was cited for good practices, including having formal processes to identify opportunities for improvement and a formal time reporting system. Two recommendations did, however, result; they related to the need to develop and implement action plans to ensure that ongoing program performance information is collected to support effective evaluation.
4.7 Lessons Learned
Lessons Learned by Evaluation relevant to the overall Department:
- There is a continued need to invest in developing clearly articulated roles and responsibilities early in the design of programs, both within a program and with respect to other programs and departments. These roles and responsibilities must be clearly identified, finalized, consistently applied and appropriately updated, with any changes being effectively communicated to all implicated program staff.
- Reliable financial information is essential for evaluators to understand the cost-effectiveness and efficiency of programs; however, programs still have some difficulty presenting a clear financial picture of their activities. Despite recent advancements, issues surrounding financial reporting within program evaluations remain. Although the tools for coding financial information are in place for A-base and time-limited funding, issues arise in the clear and consistent application of these tools to track where resources come from and how they are expended within the department.
- Evaluation experiences, as well as the recent audit of the evaluation function by the OAG, have identified a weakness in performance measurement. The absence of adequate performance measurement information may result in a weak performance story and difficulties in reaching conclusions about the achievement of program outcomes. Furthermore, the availability of sufficient performance information to support evaluations is now a requirement under the new TB Policy on Evaluation for all direct program spending. In addition, there will be a broader need for programs to be able to demonstrate results in the current context of governmental fiscal restraint. While steps have been taken to inform program managers and staff about performance measurement requirements, there are still lessons that have been learned from recent evaluation experience:
- There needs to be a stronger focus on obtaining quality performance measurement information that is appropriately linked to outcomes and that is validated by the program.
- Several programs have not been able to develop and record performance measurement information in a consistent, ongoing way, for example because of competing work demands. As a result, some programs have not realized the full benefits of performance measures as a tool to support informed decision making.
- The presence of uniform and common performance measures contributes to timely and accurate reporting of performance data on the part of individual partner organizations which then supports the overall horizontal performance story.
- Evaluations of interdepartmental initiatives have provided some insight with respect to lessons learned regarding the implementation of this type of initiative.
- Engagement can be a challenge when implementing an interdepartmental initiative. Evaluation experiences have indicated that the support and buy-in of organizations involved in an interdepartmental initiative is essential to the success of the initiative and that certain aspects can have implications for the level of engagement. For example, engagement can be enhanced by the degree to which the intent and objectives of an interdepartmental initiative are clearly outlined and understood at its outset; by the alignment of vertical (departmental) and horizontal (shared) objectives and a clear understanding of this alignment among partner organizations; and by the level of materiality, government priority, and public interest associated with an initiative.
- Evaluation experiences have also indicated that interdepartmental initiatives can benefit from a strong governance framework. This can be aided through the establishment of a secretariat that serves as a point of contact to facilitate communication and coordination and to ensure that individual roles and responsibilities are clear and universally understood by all involved parties. In addition, rather than focusing on the completion of individual outcomes, leadership and effective governance mechanisms that guide the management of horizontal initiatives in a collaborative manner can facilitate the achievement of horizontal objectives.
- Interdepartmental initiatives may have longer timelines for the expected achievement of results, due to potential constraints and longer timelines associated with communication, decision-making, or implementation processes.
Lessons Learned by Evaluation relevant to the AEB:
- Evaluation committees can have a high frequency of turnover, occasionally resulting in issues related to the successful implementation and timeliness of evaluations (e.g., process, briefing senior management). This may be resolved by identifying key points within an evaluation where information needs to be briefed up to senior levels.
- The Evaluation Division has participated in multiple interdepartmental evaluations. Through these experiences, several challenges have arisen, such as diverse levels of engagement due to different degrees of vested interest. Participants in interdepartmental evaluations are accountable to their own departments, each with its respective priorities, and therefore have vertical accountability structures that impose hurdles when trying to obtain horizontal agreement across all groups during the evaluation process. Through these experiences, the Division has developed the following lessons learned:
- The necessity of working with other evaluation groups during horizontal evaluations at times results in increased delays in obtaining deliverables due to variable levels of capacity. This could be mitigated by clearly identifying roles and responsibilities and solidifying them in records of decision that are prepared after every meeting and signed by the responsible authorities, to ensure clarity of the process.
- Interdepartmental evaluations are more time consuming, both in terms of effort and duration, due to additional approval processes; therefore, the Evaluation Division should anticipate additional time and resources for interdepartmental evaluations.
- There exists an opportunity for AEB to use the rich database of multiple completed evaluations as a source of evidence from which we can further examine and assess trends and lessons learned over the longer-term.
- In light of increased requirements related to the 2009 Policy on Evaluation, the Evaluation Division has identified the opportunity to streamline the evaluation process to make better use of time and resources in the development of formal evaluation plans. The Division has identified some recurring issues and is starting to take action on streamlining the process, for example:
- Strategically requesting program documentation and refining document request templates to avoid "data dumps" by programs;
- Applying stricter timelines and "cut-off" dates for requests;
- Creating a binder to present to program staff and managers to inform them about the evaluation process, thereby reducing delays associated with the programs trying to understand the process; and
- Consulting early with the Directors General and Associate Deputy Ministers to allow for a better understanding of the key program issues.
[3] The revised departmental Evaluation Policy was approved by the DEC in October 2009 and tabled for the Executive Management Committee (EMC) in January 2010.