Published in The Legal Intelligencer and included in the Special Section: E-Discovery 2023

By Jordan Blumenthal, Sarah Mahoney, and Matt Rotert

In an ongoing case in federal court in Washington state, Amazon has been waging a campaign to cast the plaintiffs’ discovery requests for electronically stored information as too heavy a burden to bear. See Garner v. Amazon.com, No. C21-0750RSL, 2022 WL 16553158 (W.D. Wash. Oct. 31, 2022). In its response to a motion to compel, Amazon argued that the plaintiffs’ requests were not proportional to the needs of the case and therefore should not require a response. Such arguments have become standard fare since Rule 26(b)(1) was amended in 2015 to emphasize proportionality as a limit to discovery. But in an order late last year, the court rejected those arguments, finding that Amazon, on this occasion, failed to deliver the goods.

In particular, the court reasoned that Amazon did not point to specific evidence to defend its assertion that responding to plaintiffs’ requests would impose an undue burden. The court pointed to the absence of certain “key data”—such as time and money likely to be expended—that could have substantiated Amazon’s burden claim.

“Key data” of the type sought by the court are one type of what can be thought of as defensibility data, or—in the discovery context—the variables involved in a party’s calculus of whether, when, and how to collect, review, and produce potentially discoverable material. In short, defensibility data are the specific facts and reasoning underlying the choices a party makes in discovery. As closely held information, these data points are instrumental to internal strategic planning of the discovery process in the first instance. But when facing a motion to compel (or a motion for sanctions)—as highlighted by the court’s order in Garner v. Amazon—pulling back the curtain a bit on defensibility data can be essential to fashioning a winning argument.

Defensibility Should Guide Discovery Practice

Defensibility is a crucial facet of discovery practice that is too often neglected. When members of a litigation team speed through discussions of defensibility with a shrug or a wink, the team is doing itself and its client a disservice. Defensibility should not be a veil shrouding parties’ earnest hunches or incongruous choices that they hope are never forced into the light of day. Nor can it be merely a buzzword that litigators pepper into conversations with their clients to quell bubbling anxiety.

Defensibility should be the foundation of the discovery edifice in any case. Each decision the litigator makes during discovery must be assessed against that foundation: Will it hold? Is the current blueprint workable? Do any structural supports need shoring up? Defensibility data provide the substantive answers to those questions, allowing litigators to make evidence-based decisions instead of relying on instinct and self-confidence.

The stage of discovery, along with the particular issue being assessed, will determine which form (or forms) of defensibility data are most relevant.

Some examples of defensibility data applicable to various stages of discovery include:

Time requirements: Time typically is a significant concern at the document review stage, when considering the hours needed for review and therefore the expected date of completion. But time can also be influential at other stages, such as when weighing the need to conduct a laborious collection of stored hard-copy material. And, of course, time is money.

Expected costs: Document review costs tend to be top of mind due to the high visibility of hourly rates for attorney reviewers. But costs related to the preservation, collection, and processing stages can also be significant, particularly when broad litigation holds are required or legacy technologies are involved.

Document culling metrics: Though largely relevant to the processing stage, document culling metrics may also be applicable at other stages. Two illustrative data types in this bucket are hit counts for search term variations (which inform the iterative development of search terms) and comparisons of a search term’s global hit count and its unique document hit count (which can reveal the usefulness of a particular term); a brief sketch following this list walks through that comparison. If using technology-assisted review (TAR) mechanisms for culling, an additional domain of metrics and variables will come into play (e.g., predictive coding ranks, responsiveness cutoff points, and model certainty scores).

Document review/analysis metrics: Metrics relating to the document review process are, of course, most relevant to the review stage, but they may occasionally be of use in other stages. Examples include document-focused data, such as responsiveness rates and rates of use for particular issue tags, as well as reviewer-focused data, such as reviewer coding rates (which can inform revised timeline and budget models; a brief sketch following this list illustrates the arithmetic).

Quality control metrics: Generally, quality control metrics will involve comparisons of actual results to expected results, often but not exclusively as part of the collection stage (e.g., to assess whether additional documents may have been missed), the processing stage (e.g., to identify potential errors), and the review stage (e.g., to detect possible coding anomalies). Here, again, the use of TAR opens up additional possibilities, such as error rates derived from testing samples of machine-coded documents against human coding decisions on the same documents; a brief sketch following this list illustrates that comparison.

Articulable reasoning: The concept of defensibility data must be broad enough to contend with those times when litigators’ analysis is not readily reducible to numbers or charts. Recognizing articulable reasoning as a type of defensibility data fills those gaps. For example, for a quality control metric to have value, the baseline figures used must be reasonable; the articulable reason a particular baseline value is chosen (e.g., previous experience, expert advice, or accepted wisdom) is a defensibility data point. Other examples would include the reasons that initial search terms are chosen (before any measurable testing) or the method of identifying which client employees to place on a litigation hold.
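
To make the document culling metrics described above concrete, the following minimal Python sketch compares each term’s global hit count with its unique hit count over a handful of entirely hypothetical documents and terms. In practice, review platforms report these figures directly; the sketch simply makes the underlying arithmetic explicit.

```python
# Illustrative sketch only: hypothetical documents and search terms,
# not tied to any particular review platform or matter.

# Map each document ID to the search terms that hit it.
doc_hits = {
    "DOC-001": {"breach", "incident"},
    "DOC-002": {"breach"},
    "DOC-003": {"incident", "exposure"},
    "DOC-004": {"exposure"},
    "DOC-005": {"breach", "exposure"},
}

terms = {"breach", "incident", "exposure"}

for term in sorted(terms):
    # Global hit count: every document the term hits, regardless of other terms.
    global_hits = sum(1 for hits in doc_hits.values() if term in hits)
    # Unique hit count: documents hit by this term and no other.
    unique_hits = sum(1 for hits in doc_hits.values() if hits == {term})
    print(f"{term}: global hits = {global_hits}, unique hits = {unique_hits}")
```

In this toy data set, "incident" has global hits but no unique hits, suggesting it adds little beyond the other terms; that is the kind of specific, documentable observation that can support (or undercut) a proposed term.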
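
The document review metrics described above can likewise be illustrated with a back-of-the-envelope timeline estimate. Every figure below (document counts, coding rates, reviewer headcount, responsiveness rate) is a hypothetical assumption chosen for illustration, not a benchmark.

```python
# Hypothetical figures for illustration only; real values should come from
# the team's own observed metrics and documented assumptions.
docs_for_review = 120_000        # documents promoted to first-pass review
docs_per_reviewer_hour = 50      # assumed or observed coding rate
reviewers = 10                   # assumed review team size
hours_per_reviewer_day = 7       # assumed productive hours per reviewer per day
responsiveness_rate = 0.22       # assumed share of documents coded responsive

review_hours = docs_for_review / docs_per_reviewer_hour              # 2,400 hours
calendar_days = review_hours / (reviewers * hours_per_reviewer_day)  # roughly 34 days
expected_responsive = round(docs_for_review * responsiveness_rate)   # roughly 26,400 documents

print(f"Estimated review hours: {review_hours:,.0f}")
print(f"Estimated calendar days with {reviewers} reviewers: {calendar_days:.0f}")
print(f"Expected responsive documents: {expected_responsive:,}")
```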
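
The quality control comparison described above can be illustrated with a simple sample-based check of machine coding decisions against human coding decisions on the same documents. The sample size and disagreement counts are hypothetical, and the statistics are deliberately simplified (no confidence intervals or elusion testing).

```python
# Hypothetical QC sample: 400 machine-coded documents re-reviewed by human attorneys.
sample_size = 400
machine_responsive_human_not = 8   # machine was over-inclusive on these documents
machine_not_human_responsive = 4   # potentially missed responsive documents

overall_disagreement = (machine_responsive_human_not + machine_not_human_responsive) / sample_size
missed_responsive_rate = machine_not_human_responsive / sample_size

print(f"Overall disagreement rate: {overall_disagreement:.1%}")                          # 3.0%
print(f"Rate of potentially missed responsive documents: {missed_responsive_rate:.1%}")  # 1.0%
```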

Importantly, all these data points should be understood to be internal information in the first instance—and most will remain internal-only throughout the life of a case. In the ordinary course of discovery, defensibility data are not an appropriate target of interrogatories or requests for production (i.e., “discovery on discovery”).

But precisely because defensibility data are to be used to guide a litigation team’s discovery-related decision-making, they may at times be strategically revealed. Certain defensibility data should, for example, be shared with opposing counsel to inform Rule 26(f) conferences and to provide the specificity required for objections to discovery requests. When a team’s decisions become the subject of discovery motions practice, though, defensibility data may—either by choice or by coercion—end up exposed not just to opposing parties, but also to courts and potentially to the public. Still, as Garner v. Amazon demonstrates, keeping defensibility data under wraps when push comes to shove in a discovery dispute may effectively preclude a court from agreeing to limit discovery—and that, of course, can be extremely costly.

Case in Point: Proportionality and Defensibility Data

When push came to shove for Amazon at the end of last year in Garner v. Amazon, the ultimate issue was proportionality and possible undue burden during the data processing and review stages.

The plaintiffs proposed a series of search term iterations, each of which Amazon rejected due to the volume of resulting hits. When the dispute reached the court, Amazon proffered a hit count estimate, but no additional defensibility data points: no estimate of attorney review hours; no quote for costs of review; no hit counts for search term alternatives; no responsiveness rate estimate from the use of available analytics or even from basic review of a sample set of the proposed terms’ hits; no sufficiently articulated reasoning for why plaintiffs’ proposed terms were immaterial. Without any such data presented, the court held that it was “impossible to conclude that the burden imposed … is ‘undue.’”

In other words, bringing defensibility data to bear is not just helpful—it’s essential.

On that point, the order in Garner v. Amazon sounds a useful warning bell for litigators, but by no means a unique one. As proportionality arguments proliferate in the wake of the 2015 amendments to Rule 26 (which clarified proportionality’s shared primacy with relevancy in analyzing discovery obligations), courts continue to reject boilerplate or otherwise superficial appeals to purportedly undue burdens—the insistence on specifics is unabating. See, e.g., Human Rights Defense Center v. Jeffreys, No. 18 C 1136, 2022 WL 4386666, at *3 (N.D. Ill. Sept. 22, 2022) (“A specific showing of burden is commonly required by district judges faced with objections to the scope of discovery.”). This is nothing new—litigators should not be caught off guard. But, more to the point: if the proper groundwork is done, providing specific support for such arguments should not pose a problem.

Evidence in defense of discovery decisions should be readily available because the litigation team should be engaged with such evidence throughout the discovery lifecycle. Defensibility data are not just the currency of discovery disputes; they should also be the foundational support of the overall discovery plan.

Practice Tips

Develop a comprehensive discovery plan in advance that includes anticipated defensibility data types, and identify those data points that you are prepared to proactively share and discuss with opposing parties.

Document your process for each discovery stage and each unique data source—and be specific. Documentation might include memoranda, decision logs, audit trails, or iteration charts. Where appropriate (e.g., search term iterations), include rejected alternatives and reasoning.

Build various review workflow models during the budgeting process—and, importantly, document the assumptions used to develop those models (e.g., data expansion rates, document review rates, responsiveness rates, and volume reduction from de-duplication, email threading, or predictive coding). Evidence-based, realistic budgets can provide invaluable support for burden and proportionality arguments; a simple illustrative model appears after these tips.

Work to understand what advanced analytics and other TAR options may offer not just as a replacement for human analysis and review but as evidentiary support for appropriately limiting human review.

Don’t guess. Work with discovery experts, including technology vendors and/or discovery counsel, to understand discovery burdens—whether related to a particular data source or a case in its entirety.
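
As a hedged illustration of the budgeting tip above, the sketch below walks a hypothetical collection through expansion, culling, and first-pass review to arrive at a rough cost estimate. Every rate and price in it is an assumption to be documented alongside the model and revised as actual metrics arrive; none is a recommended value.

```python
# Hypothetical budgeting sketch: every figure below is an assumption that should be
# documented with the model and updated as real metrics become available.
collected_gb = 150
docs_per_gb_after_expansion = 4_000    # assumed expansion rate for this data mix
dedupe_threading_reduction = 0.35      # assumed volume reduction from de-duplication/threading
search_term_promotion_rate = 0.40      # assumed share of remaining documents hitting terms
docs_per_reviewer_hour = 50            # assumed first-pass coding rate
blended_hourly_rate = 60               # assumed blended reviewer rate (USD)

docs_processed = collected_gb * docs_per_gb_after_expansion             # 600,000 documents
docs_after_culling = docs_processed * (1 - dedupe_threading_reduction)  # 390,000 documents
docs_for_review = docs_after_culling * search_term_promotion_rate       # 156,000 documents
review_hours = docs_for_review / docs_per_reviewer_hour                 # 3,120 hours
estimated_cost = review_hours * blended_hourly_rate                     # about $187,200

print(f"Documents promoted to review: {docs_for_review:,.0f}")
print(f"Estimated review hours: {review_hours:,.0f}")
print(f"Estimated first-pass review cost: ${estimated_cost:,.0f}")
```

Documented this way, even a simple model supplies the time-and-cost specifics of the kind the Garner court found missing.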

Jordan Blumenthal is counsel with the law firm Redgrave LLP. Blumenthal focuses his practice on e-discovery issues in complex litigation and investigations in the private and public sectors. He can be reached at jblumenthal@redgravellp.com.

Sarah Mahoney is a managing director with the firm. She focuses on e-discovery consulting, evaluation of legal technology software, and process improvement for the discovery phase of litigation and information governance. She can be reached at smahoney@redgravellp.com.

Matt Rotert is counsel with the firm. He focuses his practice on complex e-discovery and data privacy issues, including developing and refining discovery processes from identification and production through trial use. He can be reached at mrotert@redgravellp.com.