In his March 2, 2015 order in Rio Tinto PLC v. Vale S.A., et al., No. 14-Civ-3042 (S.D.N.Y.), Magistrate Judge Andrew Peck brought the world of predictive coding back to the future.  Quoting the first line of the order directly: “[i]t has been three years since my February 24, 2012 decision in Da Silva Moore v. Publicis Groupe & MSL Grp., 287 F.R.D. 182 (S.D.N.Y. 2012) (Peck, M.J.), aff’d, 2012 WL 1446534 (S.D.N.Y. Apr. 26, 2012).”  There is a strong element of déjà vu here: Peck also began his seminal Da Silva Moore decision by quoting himself, in that instance from articles he had authored on the subject.  As you may know, Peck’s Da Silva Moore decision had a snowball effect, including one party’s effort to request that Peck recuse himself and, when that request failed, to force him to do so.  The judiciary was unreceptive, and the unhappy party’s efforts up the chain, culminating in a petition asking the United States Supreme Court to weigh in (which was denied), were fruitless.

Putting aside the theatrics, the recent Rio Tinto order may prove to be as important as Peck’s willingness to go out on a limb in Da Silva Moore.  Back then he was the first judge to formally bless the use of predictive coding—deeming it “judicially-approved in appropriate cases”—paving the way for widespread use and acceptance of technology assisted review (“TAR”).  Peck aptly notes in Rio Tinto that since his Da Silva Moore order, the case law has developed to the point that it is now “black letter law that where the producing party wants to utilize TAR for document review, courts will permit it,” citing courts in jurisdictions across the country that have followed his lead.

Now Peck has gone further, proving once again to be the pioneer for the advancement of predictive coding and, more broadly, for the need to understand and respect the interrelatedness of technological advancement and the law.  Specifically, his Rio Tinto order suggests that it is an open issue whether a party utilizing predictive coding must share the results of the phases of iterative review (a/k/a “seed sets”) used to train the predictive coding software.  The principles of cooperation and transparency might favor requiring disclosure; on the other hand, the need to protect against inadvertent disclosure of privileged information cuts against it.

Just as in Da Silva Moore, Peck offers important food for thought.  As he states, “while I generally believe in cooperation, requesting parties can [e]nsure that training and review was done appropriately by other means, such as statistical estimation of recall at the conclusion of the review as well as whether there are gaps in the production, and quality control review of samples from the documents categorized []as now responsive.”  Peck wisely deferred the ultimate question to another day because he issued the order in the context of approving a jointly proposed protocol for the use of TAR by which the parties agreed to disclose all non-privileged documents in the seed sets.

But yet again, Peck is at the forefront and has started an important discussion—are we at a point where parties should be permitted to unilaterally implement the use of predictive coding in discovery?  In my view, Peck is correct that “it is inappropriate to hold TAR to a higher standard than keywords or manual review…[d]oing so discourages parties from using TAR for fear of spending more in motion practice than the savings from using TAR for review.”  And his supporting foundation is solid: Rule 1, Rule 26, proportionality, and judicial economy and efficiency, to name a few bases.  In any event, the debates over predictive coding are likely to start again, because the person at the top of the pecking order on this subject has spoken.