On October 12, 2016, the U.S. Department of Education issued its long-awaited final rule regarding teacher preparation programs. After nearly five years and a fair amount of compromise, the final regulation has garnered some praise, but leaves many dissatisfied.

Under the new rule, states are now required to rate each teacher preparation program as “effective,” “at-risk,” or “low performing.” The regulation permits states to determine exactly what metrics to use when evaluating such programs, provided the metrics are “relevant to student outcomes.” Examples of acceptable metrics include teacher evaluations, job placement rates, graduation rates, and student test scores (meaning the scores of students taught by graduates of the teacher preparation program).

Programs must be rated “effective” for at least two of the three previous years or risk losing access to Teacher Education Assistance for College and Higher Education (TEACH) Grant program funding. As those involved in the space are well aware, this is a meaningful “hook”: TEACH grants provide important support for future educators considering a career in high-need areas, offering $4,000 of tuition assistance per year to individuals who agree to work in high-need areas for at least four years following graduation.

Regardless of the metrics used for program evaluation, states are also required to report annually each program’s placement and retention rates, results of graduate and employer surveys, and performance data for students taught by teacher preparation program graduates. The new rule requires that all reporting systems be in place by the 2018-2019 school year. TEACH grant eligibility will be based on program performance starting in the 2021-2022 school year.

Mixed reaction

Those who have followed the rulemaking process will not be surprised by the mixed reaction that has greeted the issuance of the final regulation. Indeed, since the rule was first proposed in 2014, it has provoked controversy. In its original form, the regulation was chastised for being too similar to the No Child Left Behind legislation, relying too heavily on flawed metrics dictated to the states by the Department. As noted above, the final rule, in contrast, affords states more flexibility in determining which metrics to use to evaluate their programs.

Despite this increased flexibility, criticism of the final rule still abounds. For example, Sen. Lamar Alexander (R-TN), chair of the Senate committee that oversees higher education, has openly questioned whether the regulation oversteps federal authority, while Rep. John Kline (R-MN), chair of the House education committee, has criticized the rule as being nearly impossible to implement effectively, noting that teacher preparation should have been addressed during the reauthorization of the Higher Education Act.

Outside the Beltway, reaction to the final regulation has been mixed, at best. Some argue that the information gathered under the new rule will enable schools to better assess their graduates, prepare prospective teachers, and generate data on the impact of their teacher preparation programs. This praise is balanced against concerns that the rule places unfounded emphasis on faulty metrics and will penalize institutions that place graduates in low-performing schools or operate in depressed markets.

Beyond familiarizing themselves with the rule and watching for guidance from the Department, institutions offering covered teacher preparation programs should plan, directly or through their trade and lobbying associations, to work closely with their states to ensure they are engaged in the metrics discussion. In addition, institutions should continue to engage with lawmakers on the Hill as the reauthorization of the Higher Education Act moves forward. It is possible that Congress could undo much of what the regulation has done, particularly if Republicans retain control of the Senate following November’s elections.

Special thanks to Gregory W. Coutros, our Lobbying and Policy Intern, who contributed significantly to this post.