Summary

Earlier this month the English Technology and Construction Court1 considered a legal challenge to a tender evaluation process conducted by the Milton Keynes Council. In his judgment, Justice Coulson was highly critical of the way in which the Council conducted the tender evaluation.

The Court found:

  • the Council failed to properly document its decision-making and its evaluation of tender responses against the evaluation criteria
  • the Council’s evaluation process was so flawed that a different outcome should probably have ensued.

This article considers key aspects of the flawed tender process and identifies a number of points directly relevant to the 2014 Commonwealth Procurement Rules.

Background

The Milton Keynes Council (Council), the defendant in this case, conducted a tender process to select a contractor to undertake asbestos removal and site remediation services. The potential contract value was some £8 million over four years.

Five tenderers bid. The plaintiff in these proceedings, Woods Building Services (Woods), submitted the cheapest bid but, in the Council’s evaluation, lost to a rival, European Asbestos Services (EAS), on quality grounds.2 Woods was the incumbent.

Woods challenged the outcome of the Council’s tender process on a number of grounds, including alleging that:

  • there was a lack of transparency in the process and in the outcomes
  • the Council did not treat the tenderers equally
  • the Council made manifest errors in conducting the process and in evaluating the tender responses.

While not agreeing with the arguments raised by Woods on all counts, Justice Coulson of the English Technology and Construction Court (Court) did substantially find in favour of the plaintiff and strongly suggested the victory of EAS over Woods should be reversed.3

So what went wrong, and what are the lessons from this case relevant to the 2014 Commonwealth Procurement Rules (CPRs)?4

Transparency

First, transparency. The legal requirement that procurement processes and associated decision-making be transparent is enshrined in the CPRs (see, for example, CPR 4.4(c)).

Transparency requires, among other things, appropriate documentation of decisions made by the procuring agency regarding the procurement.

CPR 7.2 sets out relevant requirements, including that documentation provide ‘accurate and concise information on… relevant decisions and the basis of those decisions’ (see CPR 7.2e). Adequate documentation of decision-making processes also supports debriefing activities under CPR 7.1.5 and is important in dealing with any disputes or issues raised in respect of the tender process.

In this case, the Court was highly critical of the Council’s failure to properly document its decisions and the reasons for them. Justice Coulson noted that, despite there having been three tender evaluations – the initial evaluation, followed by two internal reviews – ‘the process produced next to no contemporaneous documentation or notes’. (More on the internal review process below.)

Such written records as there were tended to amount to no more than a paraphrase or repetition of the relevant selection criteria – there was no statement of reasons of the kind needed to satisfy the ‘transparency’ requirement.

The Council’s failure to prepare adequate contemporaneous documentation meant it could not properly respond to Woods’ challenge to the decision-making process, and the Council subsequently found itself involved in these proceedings.

Take Home #1:

Adequate contemporaneous documentation of decisions made in connection with the procurement process is key.

When drafting comments against evaluation criteria, do not simply paraphrase or repeat the text of the relevant criterion. To satisfy the ‘transparency’ requirement, and to protect the procuring agency in the event of any dispute, ensure an explanation is given for the relevant decision or outcome.

Equal treatment

Together with transparency, it is fundamental to any procurement process that respondents are treated fairly, equitably and in a non-discriminatory manner (see for example CPRs 4.4a, 6.5, 6.6a and 10.8). Here too the Council appears to have fallen into error. Justice Coulson identified a number of areas of concern in this regard.

First, Justice Coulson was concerned that a former employee of one of the tenderers (Woods, the plaintiff) had been involved in the evaluation. In his view, this person should not have taken part. The judgment does not set out his reasons, but presumably the concern was that this person’s previous employment relationship held the potential for the process to be biased – whether for or against Woods. It would be difficult for a dispassionate observer to say there was not at least the potential for an apprehension of bias.

Take Home #2:

Carefully consider any relationships, whether past or present, that may exist between members of the evaluation team and any of the respondents. Consider excluding from the evaluation team any potential member with any such relationship, remembering to consider the potential for perceived or apprehended bias as well as any actual bias.

Second, looking at the tender responses themselves, the Court was clearly troubled by the Council’s apparent preference for, or bias towards, EAS over Woods. To begin with, the tender response prepared by EAS appeared to be the inferior one. Justice Coulson felt that, on the whole, the EAS response was ‘almost studiedly vague’, aspirational in tone and ‘light on detail’.5 He went so far as to express the view that in its response, EAS ‘commits to and promises nothing’.6 By contrast, Justice Coulson felt the Woods response ‘could fairly be said to bristle with detail and comment’.7

Next, the Council itself appeared to prefer EAS to Woods. Justice Coulson felt the Council’s response to Woods’ complaints about the evaluation process was ‘motivated by a simple desire to stick with EAS, whatever the circumstances’8 and remarked that ‘after the first flawed stage of the process, the EAS tender was regarded by Council as being in pole position, a position which it never properly reviewed’.9

Justice Coulson suggested that when the results of the tender evaluation were made known to Council management, management itself was concerned that the incumbent had not won, despite tendering the lowest price.10 An internal review process then commenced, but it appears that error then began to pile upon error. Rather than starting the evaluation again from scratch, the reviewers simply revisited and revised the initial evaluation. While the gap between Woods and EAS narrowed as a consequence of this review, EAS remained the preferred bidder.

Justice Coulson felt that by taking the initial evaluation as a benchmark, the review ‘was inevitably going to start with the subconscious assumption that the EAS tender was better than the Woods tender’.11 In fact the Court expressed a concern that this ‘subconscious assumption’ had not only resulted in a flawed review process but had ‘informed the Council’s approach throughout’.12

Take Home #3A:

A tender process must be genuine and fair in the sense that all respondents are treated equally. An evaluation will be legally flawed if it is conducted on the basis of pre-conceptions about the outcome, or with the intent of excluding a particular respondent (eg, an incumbent with whom the procuring agency may have become dissatisfied).

Take Home #3B:

To address potential or perceived flaws in the original evaluation process, any review should be ‘de novo’ to ensure there is no bias (actual or apprehended) in the mind of the reviewer or members of the review team.

A second aspect of fairness and equitable treatment considered by the Court relates to the drafting of evaluation criteria. CPR 7.10 requires that evaluation criteria ‘enable the proper identification, assessment and comparison of submissions on a fair, common and appropriately transparent basis’. In this case, the Council failed to treat tenderers fairly and equitably because it applied evaluation sub-criteria that had not been disclosed as part of the evaluation criteria published in the RFT documentation.

Section 6.3 of the judgment13 provides a good illustration of this error. The relevant tender question asked each tenderer to ‘specify the members of [its] delivery/project team, including their roles and responsibilities’. EAS was ranked higher against this requirement than Woods. The reason for the higher score was the Council’s perception (a perception, incidentally, not supported by the relevant tender responses) that EAS was offering a dedicated project manager. The Council apparently placed significant importance on the contractor providing a person responsible for the work on a day-to-day basis, but this was not specified as a requirement in the RFT documentation.

Justice Coulson stated as follows:

‘In my view, if this was an important matter to the Council, such as to justify giving different scores… solely on this basis, then it ought to have been disclosed in the scoring criteria. I find that the failure to do so was a breach of the rule as to transparency… It was an undisclosed sub-criterion.’14

Take Home #4:

All criteria used to evaluate a response should be disclosed to tenderers in the RFT documentation. The evaluation team must not introduce additional considerations into its evaluation where respondents have not been asked to address those considerations in their tender responses.

Scored evaluations

In this case, the Council applied scores based on a table15 ranging from:

  • zero points for a response that did not meet the tender requirements and/or was unacceptable, or that contained insufficient information to demonstrate the tenderer’s ability to deliver the services, to
  • ten points for a response ‘meet[ing] requirements to a very high standard with clear and credible added values and/or innovation’.

It is common practice to score evaluations by applying numerical scores to individual criteria and to overall outcomes. Such scoring provides a readily accessible snapshot of the outcomes of an evaluation process and facilitates the ranking of respondents. However, numbered scoring is not without potential pitfalls if misused, as this case demonstrates. It can be tempting to write up evaluations and then use the numbered scores themselves to differentiate between bidders. That approach is wrong.

A great deal of care needs to be taken to ensure that the scores applied have a justifiable and transparent basis, to be found in the procuring agency’s written reasons. Numbered scoring is not a mechanism for giving effect to substantive views or emergent preferences for one bidder’s response over another’s.

An example from this case illustrates the point. One of the evaluation criteria asked respondents to provide details of their ‘proposed communication procedures in relation to the requirements of the contract’.16 Despite some significant omissions from the EAS response against this criterion, the Council gave EAS the highest possible score of 10. By contrast, the Council gave Woods a score of six. No reason for this differential was given in the Council’s written evaluation (and, tellingly, no rational explanation for the different scores was given by the Council in the ensuing proceedings). In comparing the two responses, the Court felt the Woods response was ‘more detailed’17 and in fact ‘better’18. The Court reduced the EAS score so that both respondents were awarded six points against this criterion.

Another issue to be careful of when using numbered scoring is to ensure that scores are properly applied in accordance with the scoring criteria. A failure to do so will result in manifest error in the evaluation process. Justice Coulson noted that such scoring criteria can ‘suggest a degree of rigidity which may not [be] the intention’,19 indicating that careful consideration needs to be given to how such criteria are drafted to ensure they do not result in unwanted inflexibility. In this case, significant inflexibility appears to have arisen from the requirement to award zero points where a response had any of the characteristics referred to above.

The Court was unhappy with the way the Council applied its own scoring system. In a number of instances, the Court reduced to zero quite high scores the Council had given to certain EAS responses.20 Other EAS scores were also reduced. The result of the Court’s adjustments was to decrease EAS’s overall score from 104 to 64 and to increase Woods’ score from 88 to 94. Thus, in the Court’s view, EAS went from clear winner to clear runner-up.

Take Home #5A:

Numbered scoring is not, of itself, a tool for differentiating between bidders. Numbered scores must be supported by written reasons and objective explanations. In particular, where respondents’ scores against the same criterion differ, the reasons for the difference must be recorded and supported by a logical explanation.

Take Home #5B:

The basis for numbered scoring must be correctly and consistently applied. Overly restrictive or inflexible definitions for scoring criteria should be avoided.

Conclusions

There is a strong inference in this case that the Council was, from the beginning, predisposed to selecting EAS over Woods. From the outset, therefore, the process could probably never have been transparent or fair. The Council’s failure to properly document decisions and outcomes does not of itself prove this was the case, but such failures:

  • probably left Woods dissatisfied with the Council’s account of itself, contributing to this litigation
  • prompted the Court to be ‘sceptical’21 about the Council’s evaluation.

We have sought to identify above where the lessons from this case are directly applicable to the Commonwealth Procurement Rules. Nothing can save the legality of a tender process that treats tenderers inequitably from the outset. If only one lesson is drawn from the unhappy experience of the Milton Keynes Council, it should be that adequate contemporaneous documentation of decisions made in a tender process is vital.