Formative Evaluation of the Federal Contaminated Sites Action Plan

Final Evaluation Report



2.0 Evaluation Methodology

This section provides an overview of the approach and methods used to conduct the formative evaluation of the FCSAP.

2.1 Overview of Approach [8]

The evaluation team employed a phased approach: a design phase comprising project initiation and method development (Phase 1), followed by data collection and technical reporting (Phase 2), and finally analysis and integrated reporting (Phase 3).  Specific deliverables were produced throughout each phase to give the project authority input into the evaluation process and to ensure that the evaluation met the needs of participating FCSAP Custodians.

The evaluation approach was based on multiple lines of evidence (qualitative and quantitative), with data triangulated from several sources (interviews, case studies, an online survey, administrative data, and a document review).  Comments were weighted according to the number of people who made them and the source.  Single responses or opinions on evaluation questions that ran contrary to the majority were given less weight.  Suggestions and recommendations made by only one or a few persons were sometimes retained, however, since they may be insightful in ways not apparent to most interviewees.  Where possible, we attempted to balance qualitative and quantitative methods, recognizing that this is a formative evaluation with a heavy emphasis on design and delivery questions, which by their nature often require more qualitative techniques to be adequately addressed.

The methods used for the evaluation are described below.

Document Review [9]

A document review was undertaken of 22 key program documents using a standardized review grid based on the evaluation questions and indicators.

Key informant interviews [10]

A total of 75 key informant interviews were conducted.  Interviewees consisted of the FCSAP Secretariat (7); Expert Support (21); Custodians (25); external scientific experts (5); liability experts (3); industry representatives (10); community representatives (1); and other partners such as provincial governments and utilities (3).

Administrative data review

A review of the IDEA and FCSI databases was undertaken, consisting of interviews with the representatives responsible for the databases and an analysis of their contents.

Because FCSI contains sites that are not funded by FCSAP, federal site identifiers in IDEA were matched with records in FCSI to identify the FCSAP-funded sites contained therein.  Data for all 16 Custodians were analyzed.  Projects that received FCSAP funding in the 2003–2004 and 2004–2005 submission years were included in the analysis, since these years contained some sites with large liabilities and offered an opportunity to view project and site progress over a three- to four-year period.  Projects that first began receiving FCSAP funding in the 2005–2006 and 2006–2007 submission years were also included, but those that first began receiving funding in 2007–2008 were excluded, since it was believed that there had not been sufficient time for these projects to demonstrate progress.  The analysis therefore covered some 7,000 FCSAP-funded sites in FCSI.
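The matching and filtering logic described above can be sketched as follows. This is an illustrative sketch only: the actual analysis was performed against the program's own databases, and the record structures, field names, and site identifiers shown here are hypothetical.

```python
# Hypothetical extracts keyed by federal site identifier.
# (Real IDEA and FCSI records contain many more fields.)
idea_records = {
    "S-001": {"first_funding_year": "2003-2004"},
    "S-002": {"first_funding_year": "2007-2008"},
    "S-003": {"first_funding_year": "2005-2006"},
}
fcsi_records = {
    "S-001": {"custodian": "DFO"},
    "S-002": {"custodian": "DND"},
    "S-004": {"custodian": "TC"},  # in FCSI but not IDEA: not FCSAP funded
}

# Submission years retained for analysis; projects first funded in
# 2007-2008 are excluded, as described in the text.
included_years = {"2003-2004", "2004-2005", "2005-2006", "2006-2007"}

# Match IDEA identifiers against FCSI to isolate FCSAP-funded sites,
# then filter by first funding year.
fcsap_funded = {
    site_id: fcsi_records[site_id]
    for site_id, rec in idea_records.items()
    if site_id in fcsi_records
    and rec["first_funding_year"] in included_years
}

print(sorted(fcsap_funded))  # -> ['S-001']
```

In this toy example, S-002 is matched but dropped because its funding began in 2007–2008, and S-004 is dropped because it has no IDEA match (i.e., it is not FCSAP funded).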

Online consultations with industry 

Using a list, provided by the Treasury Board Secretariat, of people who attended the 2008 Contaminated Sites National Workshop in Vancouver, the evaluation team compiled a list of employees from 51 consulting/engineering firms that have had a role in FCSAP projects, having been contracted to work on contaminated sites.  An email invitation was sent to these employees requesting that each firm provide one consolidated response.  Of the 51 firms invited to participate in the online consultation, 23 (45%) completed the online questionnaire.

Thematic case studies

During the evaluation, the evaluation team noted several key issues, raised by key informants and identified in the document review, that could be explored more thoroughly in thematic case studies.  The themes proposed to and accepted by the evaluation working group were site selection and prioritization; the role of the Third Party Expert Panel; awareness, usefulness, and use of tools developed by FCSAP; custodial challenges with FCSAP funding; and the role of the IRWG.

The thematic case studies were developed from key informant interviews, documents, and administrative data.

Project/site case studies

Custodian departments (DFO, DND, TC, and INAC NAO) were each asked to recommend four projects or sites that met as many of the following criteria as possible:  

  1. Very successful remediation;
  2. Significant and positive community involvement; 
  3. Significant challenges overcome;
  4. Important lessons learned that could help others remediating sites;
  5. Innovative technology used in remediation; and
  6. Very large liability/scope.

From the sixteen projects/sites proposed by departments, seven were selected to ensure that as many as possible of the above criteria were met and that there was a mix of geographic locations across Canada.  The seven selected sites/projects were:

Panel of Experts 

The panel of experts provided informed external perspectives on the quality and content of the evidence, and on the findings, interpretations, and conclusions of the evaluation team.  Three individuals, recognized experts in the fields of ecological risk, human health risk, and financial liability estimation, were chosen to participate in the expert panel.

The experts reviewed components and findings specific to their field of knowledge following a review guide developed during the design phase. The expert review panel provided guidance by vetting the revised findings, contextualizing issues, and interpreting technical details.

Response Scale

The following quantitative scale is used throughout the report to indicate the relative weight of qualitative responses within respondent groups (the interview programme, case studies, and industry consultations).

2.2 Limitations

Certain limitations, including the reliability of the data gathered in the evaluation, should be noted.  First, the number of firms (n=23) that completed the online consultation is relatively low and does not constitute a statistically representative sample of the Canadian industry as a whole.  It became apparent during the design phase of the evaluation that no survey frame of Canadian firms that assess and remediate contaminated sites had been identified.  Given our past experience with similar situations, we advised against the use of statistical surveys in the absence of a reliable, comprehensive survey frame.  Developing such a frame, while possible, can be very resource- and time-intensive, and is not likely to be suitable for a formative evaluation.  While we did not believe that a statistically reliable survey could be completed for the evaluation, we did think that a broader consultation of groups outside the federal government would be desirable and would assist in addressing some of the evaluation questions.  The online consultation therefore complements the key informant interviews with industry stakeholders.

Second, insights gained from the analysis of the FCSI and IDEA databases depend on the completeness of the data that they contain.  It is the responsibility of Custodians to ensure that information in the databases is correct and up to date.  During the analysis of the FCSI database in particular, it became apparent that some data from the Custodians have yet to be accepted by TBS as being complete and accurate.

Third, it was hoped that the evaluation would provide an opportunity to solicit input from communities located near federal contaminated sites.  Evaluators attempted to obtain names of community representatives from Custodians and other interviewees in order to interview these persons.  Unfortunately, only six names were provided, resulting in only one interview.  As a result, the evaluation received limited input from community representatives in the interview programme.




[8] The evaluation approach is based upon the Methodology Report, approved in April 2008.

[9] Appendix B presents a list of all documents examined as part of the evaluation.

[10] Appendix A presents a more detailed list of key informant interviews.