On April 5, 2017, Canada released the report of the independent panel on environmental assessment processes. I was privileged to be a member of this panel.
The panel’s report, Building Common Ground, devotes attention to a number of topics that arose from the panel’s national consultation and then provides a vision to guide the way forward.
This article in no way replaces or substitutes for what is set out in the report. I am not the panel and I am not speaking for the panel. My purpose is simply to introduce seven topics that illustrate the new vision set out in the report. I hope this article will lead you to read the panel’s report.
What should trigger project assessment?
CEAA 2012 implements a project-based trigger for EA. It replaced decades of reliance on decision-based triggers. Thus, CEAA now applies to a list of projects, not a list of federal decisions. This list has made the rules for triggering EA very clear. However, the project list does not align with the constitutional division of powers or the full range of federal interests, creating many instances where a project is listed for EA but has no federal regulator, or has a federal regulator but is not listed for EA. The panel recommends continued emphasis on a “project” trigger, but recommends defining “project” to include only those physical activities that affect at least one recognized federal interest.
The panel’s approach was directed at minimizing unwarranted assessments as well as gaps in what requires federal assessment. The key reform to address these two objectives was the panel recommendation to make federal assessment mandatory where a project may have “consequential” impacts on a recognized federal interest. Further, the panel recommended that this term be defined in statute and framed so that its meaning was clear and objective, not subjective or discretionary. The panel report recognizes that the term “consequential” is novel. Nevertheless, this novel term is assigned a key role in the new approach: if narrowly defined, then few projects will be triggered; if broadly defined, then many projects will be triggered. The panel recommended an approach that would provide a middle ground between the CEAA 2012 approach of triggering assessments for dozens of projects per year and the CEAA 1992 approach of triggering assessments for thousands of projects per year.
How should federal assessment fit with other assessment processes?
CEAA 2012 favours a unilateral approach to federalism. This replaced decades of assessment implementing cooperative federalism. Examples of this new approach include narrowing the range of projects that are available for joint federal-provincial panels, imposing timelines that are incompatible with provincial processes, and implementing a new approach to “substitution” that promotes “provincial-only” reviews in place of joint reviews. These changes have also given rise to other problems. One problem is duplicative assessments by different governments because there is no longer any ability to harmonize the different processes. Another problem is that substituted assessments create assessment gaps because a province cannot fully substitute for federal participation on interests like fisheries. The panel recommends renewed federal support for cooperative assessments and the longstanding principle of “one project, one assessment,” where possible.
The panel also recognized that the fit of federal assessment with other government assessment processes is about more than old-CEAA, new-CEAA differences or federal-provincial relations. For more than two decades, the applicable CEAA has permitted coordination or cooperation by defining “jurisdiction” to include the provinces, many types of aboriginal governments, and additional government bodies created by provincial law. Yet neither CEAA regime has promoted or implemented a broadened approach to jurisdictional cooperation. Most recently, in 2016, Canada decided to fully support the United Nations Declaration on the Rights of Indigenous Peoples, and the panel’s mandate demanded that it propose how federal assessment could “reflect” the principles of UNDRIP. There was no precedent to guide the panel, but the report’s specific section on “Indigenous Considerations” expressly addresses this mandate. Further, the report promotes new tools for multi-jurisdictional cooperation, led by regional offices of a new assessment authority to provide an “on the ground” presence to facilitate discussions and cooperation among all interested governments.
How can public trust be restored?
In its terms of reference, the panel was expressly tasked with providing recommendations “to regain public trust” in federal assessment processes. The panel’s response to this challenge was to re-think federal environmental assessment. The report sets out a process where every participant would do better: proponents, provinces, indigenous peoples, residents, ENGOs, and experts.
The key to this effort was envisioning an assessment process that is based on common ground, reflected in the title of the report. One aspect of the panel’s vision for common ground was an inclusive process – inclusive both in who is encouraged to participate and in the scope of relevant issues. The report provides a specific section on how to improve public participation, and a specific section on using the best evidence – science, as well as indigenous knowledge and community knowledge. However, though the report presents a vision for future assessment, it is not a complete code providing details for future legislation. There is much more to be done even if the panel’s vision is accepted.
What should assessments focus on?
The panel recommends a major change in the focus of assessment. Many submissions to the panel expressed concern with the longstanding federal test of avoiding significant adverse environmental effects. Many people think that federal assessment should demand more than just that a project avoids causing “significant” harm. Multiple sources asked the panel to focus on assessing sustainability. The panel agreed and made this another important recommendation.
Related to this new focus on sustainability, the panel also recommended that the federal process give attention to a broader range of impacts – positive and negative – than the longstanding focus on biophysical effects. It recommends that assessment address impacts on and across five pillars of sustainability – environmental, health, social, cultural, and economic. This gives rise to the panel’s further recommendation that federal assessment be renamed “impact assessment” instead of environmental assessment.
What is the right starting point for assessments?
A cornerstone of the panel’s recommendations for an inclusive process is a new “planning” phase to start the assessment process. This approach responds to many concerns that the current process starts too late – after proponents have locked down all project details and formalized them in a detailed “project description report.” As set out in the report, the panel recommends a process that will seek to bring people together early in project planning, so that they can share knowledge and agree on what does and does not require detailed study later in the assessment. I return to this topic below.
The panel also advises of its expectation that this effort to build common ground will alter the assessment process fundamentally. Right now, assessments seem to start with a narrow focus and then expand in scope as new participants and new issues emerge. This creates many difficulties for assessment decisions. The panel has sought a very different approach: instead of accepting this “broadening” approach of current EAs, the panel has recommended a “narrowing down” process.
These contrasting approaches are set out in an illustration in the report (Figure 4, p.75). Under the proposed vision for a ‘narrowing down’ process, each IA process should start broadly and, through discussion and study, systematically work through topics to narrow issues. By the end of the process there is either consensus or a hearing that is focused on a few key issues. This narrowing down assessment process should also narrow the key questions for a decision.
What is the best way to advance federal assessments to trusted results?
The panel also recognized that its vision for a new assessment process will not work unless it is properly managed. The panel has therefore recommended that there be a new federal assessment authority with responsibility to facilitate assessment discussions from the outset and throughout the process. The panel report suggests that this approach should work well for most projects; however, the panel also recognizes that some projects can raise fundamental issues from the outset. These projects may not advance through consensus. The panel therefore also recommends that the new assessment authority have the decision-making powers of a quasi-judicial tribunal. This would provide assessments with independent and expert decisions on issues of non-consensus.
Providing decision-making authority to the new IA authority and tribunal is an important reform. It reflects the panel’s view that an independent and expert tribunal is best placed to make the vast majority of assessment decisions. Moreover, the panel does not seek to prevent current options for going to court or engaging cabinet; however, the panel seeks to make these legal and political options appeals, not original decisions. This approach to decision-making contrasts with the current approach. Today, cabinet is the decision-maker on many assessments, and the Federal Court (not the Federal Court of Appeal) is the court responsible for hearing most federal EA court challenges.
How can this new vision meet proponent needs for predictable timelines and costs?
Many topics introduced above should assist proponents, but the panel report also devoted attention to what proponents told the panel they wanted: predictable timeframes, costs, and results. The current CEAA 2012 reformed many aspects of the earlier CEAA 1992 regime to promote these proponent objectives; however, the last few years of federal assessment have made apparent various difficulties with achieving these objectives.
Much was expected of legislative reforms to set out binding timelines. However, these reforms have not resulted in clear timeframes for an assessment. One problem is that the legislated timeframes are not absolute. In particular, they do not apply where government departments, agencies, or panels seek more information on an assessment: these information requests trigger a “time out” from the legislated timelines throughout the process. Every time out adds time to the stated timeline. As recent assessments have produced multiple time outs, there is no predictable timeframe. Yet beyond this formal problem, a bigger issue is that projects are not all equal. Different projects present different challenges, so it is difficult for every project to have the same timeline. A further source of uncertainty is the increasing range of events happening at the end of assessment processes that add time to the process. These include court challenges, parallel consultations, additional studies, and approval documents that translate general mitigation measures into hundreds of detailed conditions.
These end-of-assessment events do more than create an uncertain timeframe. They also add costs. The panel has recommended major reforms to provide timeframe and cost discipline to each assessment. There are many possible ways to achieve this, but unlike the present approach, the proposed reforms to timeframes and costs are intended to be comprehensive, not piecemeal.
Likely the most important reform to address timing and cost issues is a reform designed to improve public trust: the panel recommendation that the impact statement – the core document in every assessment – be prepared by or for the new authority instead of by the proponent. This approach is novel in Canada, but it has been part of federal impact assessment in the United States for almost 50 years. The panel made this recommendation to improve trust in the impact statement, but increased trust should reduce the time required for stakeholder and expert review and thus improve timelines. The panel also recommends that the new authority be tasked with providing early timelines and budgets for each assessment. Coupled with its recommendation that hearings be limited to issues of non-consensus, the panel expects greater certainty of timeframes and costs.
These reforms may also improve the current uncertainty about assessment results. The many end-of-assessment events now in play raise difficult questions about the merits of the assessment that has occurred. Some events, like court challenges, threaten to send the process backwards to revisit completed assessment steps; other events, like parallel consultations, try to enhance the assessment that was completed. All of these events undermine the assessment process and trust in its results.
The panel believes that its vision for assessment and its efforts to build common ground will also restore trust. If trust is restored, assessment decisions will be respected and many of these quagmires will end.
This panel report presents a new vision of federal assessment, but it is not a complete code. It is simply a first step. The release of the report was followed by a period of public comment. Future steps will include the federal government’s response to this report, release of the independent review of the National Energy Board, nation-to-nation consultation with indigenous peoples, release of draft legislation and regulations, and future parliamentary actions.
Did the panel succeed in recommending means to restore trust to federal assessments? It should be clear to all that trust must be earned, not presumed. I now look forward to seeing whether there is support for how the panel has addressed the many concerns and ideas it received, and how the panel’s work guides future legislation to deliver better federal assessments.