Ok, excuse that bad joke. But the recent decision in In re Biomet, the hip-replacement multidistrict litigation out of the Northern District of Indiana, is noteworthy because it discusses proportionality and predictive coding in the same opinion.
The mere fact that predictive coding is an available tool doesn’t mean that it should be applied to every document in a client’s possession, custody or control. Rather, it can be ok – defensible, even – to use it only on a subset of what is identified as potentially relevant. The open question is how and when to create the universe of documents subject to machine learning.
In Biomet, the defendant identified nearly 20 million documents for possible review, and then reduced that universe by using keyword searches. After deduping, that left 2.5 million documents and attachments.
Biomet did some random sampling on the excluded universe and determined that less than 1.33 percent of that universe would be responsive. It then used Axcelerate, a predictive coding tool by Recommind, on the remaining 2.5 million documents.
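The logic of that sampling step can be sketched in a few lines. The sample size and hit count below are hypothetical, chosen only so the resulting confidence interval lands near the 1.33 percent figure reported in the opinion; the opinion itself does not disclose Biomet's actual sample parameters.

```python
import math

def estimate_responsiveness(sample_size, responsive_found, z=1.96):
    """Estimate the responsiveness rate of an excluded document set
    from a simple random sample, with a normal-approximation (Wald)
    confidence interval. z=1.96 corresponds to 95% confidence."""
    p = responsive_found / sample_size
    margin = z * math.sqrt(p * (1 - p) / sample_size)
    return p, max(0.0, p - margin), min(1.0, p + margin)

# Hypothetical example: a 5,000-document sample in which 50 documents
# (1%) are judged responsive. The upper confidence bound then falls
# below roughly 1.3%, in the neighborhood of the figure in the opinion.
rate, low, high = estimate_responsiveness(5000, 50)
print(f"point estimate {rate:.2%}, 95% CI [{low:.2%}, {high:.2%}]")
```

The point of the exercise in the case was the same as here: a modest random sample lets a producing party put a defensible upper bound on how many responsive documents the keyword cull left behind, without reviewing millions of excluded files.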
Some predictive-coding experts, like Ralph Losey, have opined that keyword searching has a critical place in the predictive coding process. He calls it “multimodal because, although the predictive coding type of searches predominate . . . other modes of search are also employed.”
But the Plaintiffs’ Steering Committee in Biomet contended that the initial use of keywords “tainted” the results, because keyword searches typically capture only about 20 percent of the responsive documents. Thus, by using keywords in the first instance, Biomet excluded responsive documents from the predictive coding universe.
The Steering Committee, which faulted Biomet for starting the process before the cases were centralized, wanted Biomet to go back to the original 19.5 million documents (which the Court called Square 2, since Square 1 was everything within the four walls of the company) and run predictive coding there. Biomet’s discovery costs had already topped $1 million, and a Square 2 review would have cost a seven-figure sum.
Judge Robert L. Miller Jr. looked at numerous Sedona Conference papers, the Seventh Circuit Principles Relating to the Discovery of Electronically Stored Information, and the rules themselves and concluded that Biomet fully complied with its discovery obligations and its approach was proportionate given the issues at stake in the case.
“Even in light of the needs of the hundreds of plaintiffs in this case, the very large amount in controversy, the parties’ resources, the importance of the issues at stake, and the importance of this discovery in resolving the issues, I can’t find that the likely benefits of the discovery proposed by the Steering Committee equals or outweighs its additional burden on, and additional expense to, Biomet,” the Court wrote.
Further, the duty of cooperation does not require “counsel from both sides to sit in adjoining seats while rummaging through millions of files.”
The Court concluded that if the Steering Committee wanted more, it was going to have to pay for it.
(On a side note, one issue apparently not argued by the parties, as it is not addressed in the opinion, is that Biomet used eight contract attorneys to educate the machine about the case. The usual predictive coding workflow is to have a subject-matter expert on the case, typically a senior associate or junior partner, make the decisions on the seed set. Even Axcelerate’s website says that the tool requires “input from a case expert.” The opinion doesn’t say whether the contract attorneys were trained to be experts on the case, but we hope they have long contracts. We are firm believers that subject-matter experts must be involved for predictive coding to be at its best.)