
Foreword: Productivity in Public Adjudication
JORDAN M. SINGER*

There is a certain conceit to measuring the work of the American
judiciary. The act of measurement is premised on objectivity and
universality: to measure something is to say both that its qualities
can be measured (that is, the chosen metric meaningfully reflects empirical
reality) and should be measured (that is, the chosen qualities meaningfully
reflect the work being done). These assumptions hold up well if we are
measuring the work of an Olympic sprinter (how much time does it take to
run 100 meters?) or a competitive eater (how many hot dogs can be
consumed in twelve minutes?). They also hold up well for measuring
manufacturing work (how much time and money does it take to produce
each widget, and how many widgets pass uniform quality standards?). But
the work of the judiciary is fundamentally different from that of an athlete,
competitive eater, or manufacturer. Not every case looks alike, and not
every result should be identical. Rather than aiming for a fixed set of
criteria, judges must balance competing concerns about cost, delay, due
process, fealty to the substantive law, and equitable considerations in every
case that comes before them.
The measurement of judicial work also presents a host of definitional
and methodological questions. Should we evaluate courts as a unit or focus
on individual judges? What are the proper criteria for measurement? For
that matter, what does it mean to say that a court is efficient? Productive?
Accurate? What does excellent (or adequate, or inadequate) judicial
performance look like? If we cannot measure all aspects of the judicial
process to our satisfaction, should we move forward with what we can
measure, or wait until sufficient metrics are in place across the board? And
in an era when budget pressures are an everyday reality, how should we
pay for any of this?
Given these questions and challenges, it may be tempting to save
measurement of the judiciary for another day. And yet the public nature of
judicial work demands assessment now. Courts are public institutions, and
judges are public servants. Their work affects not only the parties before
them, but also the perceptions of every other person who comes into
contact with the system, whether as a juror, witness, interested onlooker,
or simply an engaged member of the public. Those perceptions matter,
both for courts' own legitimacy and for ongoing public confidence in the
judicial system. In short, courts must undertake the challenge of
self-assessment and self-improvement to fulfill their public mission. Even as
cases reach different resolutions, courts must continue to be guided by
common standards of adjudicative excellence.

* Associate Professor of Law, New England Law | Boston; A.B., Harvard College; J.D., Harvard Law School.

New England Law Review | v. 48 | 445
This symposium issue of the New England Law Review examines efforts
to evaluate the judiciary's work in light of these complex realities, and the
contributing authors wrestle with these issues eloquently, intelligently, and
with an array of different approaches. For several contributors, the starting
point is a productivity model for federal district courts that Judge William
Young and I proposed in a recent pair of articles.1 That model posits that
court productivity measures must go beyond longstanding efficiency
metrics like time to disposition and docket clearance, and explicitly account
for the quality of adjudication as well.2 We further suggested that
adjudicative quality should be defined as a function of both the accuracy of
case outcomes and the fairness of the procedures used to reach those
outcomes.3 We then proposed that procedural fairness can be meaningfully
approximated at the federal district court level through a new metric called
bench presence, which reflects the total number of hours that an active judge
in a given district spends on the bench, adjudicating issues in open court.4
The bench presence metric is intended to capture the essential
relationship between open court adjudication and the primary
determinants of procedural fairness: opportunities for litigants'
participation and voice, neutrality of the forum, trustworthiness of legal
authorities, and the degree to which all people are treated with dignity and
respect.5 Not only do these values reach their highest expression in the
open courtroom, but in some instances (involving litigant participation or
public demonstrations of dignity and impartiality) they can barely exist
outside the courtroom. The primary purpose of introducing the bench
presence metric and our productivity model, then, was to advance the
conversation as to what productive trial courts should really look like. Our
sense was that measuring bench presence (and combining it with existing

1 See Hon. William G. Young & Jordan M. Singer, Bench Presence: Toward a More
Comprehensive Model of Federal District Court Productivity, 118 PENN ST. L. REV. 55 (2013)
[hereinafter Bench Presence]; Jordan M. Singer & Hon. William G. Young, Measuring Bench
Presence: Federal District Judges in the Courtroom, 2008–2012, 118 PENN ST. L. REV. 243 (2013).
2 Bench Presence, supra note 1, at 57–58.
3 See id.
4 Id. at 89.
5 Id. at 80.


efficiency measures now and to-be-developed accuracy measures later)
brings us closer to describing the real work of the federal district
courts, both descriptively and normatively.
Our productivity model is new, but Judge Young has laid its
groundwork for more than a quarter-century. He is one of our country's
most passionate defenders of the jury trial, an articulate and inexhaustible
advocate for the notion that "[p]roperly charged, American jury verdicts
come closer to genuine justice than any other human institution ever
conceived."6 Judge Young also originated the push for better federal
district court productivity measures. In his keynote speech for this
symposium, contained in full in this volume, he describes his own efforts
to identify America's most productive federal district courts, based on a
combination of bench presence, trial hours, and actual trials.7 These
listings, which he continues to update every year, were the forerunners of
our productivity model and in a sense represent "Bench Presence 1.0."
Judge Young's keynote also reminds us, in his inimitable style, why court
measurement matters: the experience of fair, accurate, efficient, and public
justice in the courtroom affects every user or potential user of the justice
system.
Our productivity proposal drew a range of highly thoughtful reactions.
In their contribution to this symposium, Professor Steven Gensler and
Judge Lee Rosenthal remind us that the core elements of procedural
fairness (participation, neutrality, trustworthiness, and dignity) are fully
consistent with active case management in the district courts. Good
judicial case management, they tell us, "is participatory and interactive,
not remote and dictatorial. . . . The best case-management practices do not
cause judges to vanish from view; they cause them to reappear: to the
parties, to the lawyers that represent them, and to the public."8 To illustrate
the point, they offer a range of tools that combine case management with
direct interaction between judges and parties: "live" Rule 16(b)
conferences, pre-motion conferences for discovery and dispositive motions,
and hearing oral argument on motions that only later proceed to full
briefing.9 These tools and others underscore the point that good
management and traditional adjudication go hand in hand.10

6 Hon. William G. Young, Keynote: Mustering Holmes' Regiments, 48 NEW ENG. L. REV. 451, 463 (2014).
7 Id. at 455.
8 Steven S. Gensler & Hon. Lee H. Rosenthal, Pretrial Bench Presence, 48 NEW ENG. L. REV. 475, 477 (2014).
9 Id. at 490–91. See generally Steven S. Gensler & Lee H. Rosenthal, The Reappearing Judge, 61 U. KAN. L. REV. 849, 857–65 (2013).
10 Gensler & Rosenthal, supra note 8, at 487.


The court productivity model proposed in the Bench Presence articles
included accuracy of outcomes as a central component in adjudicative
quality, but deliberately set aside the question of how such accuracy might
be consistently and universally measured. In his contribution to the
symposium, Professor Chad Oldfather challenges this piece of the model,
arguing that outcome accuracy cannot (or at least should not) be
included in a court productivity measure. He explains that "[a]ccuracy, in
cases where it counts, depends on too many assessments that are too
contestable or indeterminable in too many respects."11 Using an extended
case study to illustrate his point, Professor Oldfather concludes that efforts
to objectively assess the accuracy of an outcome would require replicating
the entire case, and even then would be subject to the very real possibility
that different people would come to different conclusions about the same
evidence, or even that the same person would come to different
conclusions at different times.12
While Professor Oldfather worries about accuracy on its own terms,
Professor Mark Spottswood worries about accuracy's sublimation to more
easily accessible efficiency metrics. After cataloging the wide range of
federal and state efforts to boost the speed of case processing over the last
half-century, he asks what has been gained, or lost, as a result. Among
his many insights, Spottswood argues that emphasizing (and sometimes
exclusively measuring) case processing speed comes at a cost to outcome
accuracy, does not necessarily result in happier or more satisfied litigants,
and does not guarantee lower litigation costs.13 In a word, advantaging
efficiency over other measures, at least without concluding that delay has
a harmful impact on more fundamental litigation values, is perilous.14
Public courts operate within a political environment, and their work is
influenced by legislative goals and policies. In her contribution, Professor
Carolyn Dubay reminds us of that broader picture, cautioning that "in the
current economic climate, measurements of federal court productivity
must reconcile increasing pressure on the courts to do more with less."15
That is, one cannot reasonably expect courts to continually improve their
efficiency and productivity when judicial vacancies remain unfilled,
courthouses are closed, and budgets are slashed. Professor Dubay offers no
magic bullet for the current predicament, but provides something all the

11

Chad M. Oldfather, Against Accuracy (as a Measure of Judicial Performance), 48 NEW ENG.
L. REV. 493, 494 (2014).
12 Id. at 500.
13 Mark Spottswood, The Perils of Productivity, 48 NEW ENG. L. REV. 503, 528 (2014).
14 Id.
15 Carolyn A. Dubay, A Country Without Courts: Doing More with Less in the Twenty-First
Century Federal Courts, 48 NEW ENG. L. REV. 531, 543 (2014).


more useful: an analytical framework for evaluating proposed judicial
productivity measures. In particular, she argues that productivity
measures must account for both the extent to which such measures
incentivize judges to engage in activity that promotes prudential and
constitutional goals, and the impact of the measures on judicial resources
in the current climate of aggressive cost containment.16
Another broadening perspective comes from Malia Reddick, who
helpfully turns the spotlight to the appellate courts. Dr. Reddick, the
former Director of the Quality Judges Initiative at the Institute for the
Advancement of the American Legal System at the University of Denver
(IAALS), explains that just as trial court evaluation should measure the
primary work product of trial judges (courtroom activity, periodic rulings,
and trials), appellate evaluation should measure the primary work product
of appellate judges: the written opinion.17 She goes on to describe a
carefully constructed opinion review process, developed by IAALS over
the course of two years. The opinion review is designed to fit neatly into
judicial performance evaluation programs that already existor are being
consideredin a number of states.
Judge Young and I conclude this issue with a brief response to the rich
set of ideas that are contained herein.18 We offer a few additional
observations about the future of the bench presence metric, our
productivity model, and the general measurement of judicial activity. But
"response" is not quite the right word for our essay, because it
inadequately captures our delight at the sophistication and insight of the
other contributions. I am personally thankful to all of the symposium
participants for their perceptive comments and ideas, and I am also deeply
grateful to the editors of the New England Law Review for providing a forum
for discussing these important issues. Our hope at the outset of the bench
presence project was that it would spark a national conversation on court
productivity and performance measures, and this symposium issue makes
clear that the conversation has begun in earnest.

16 Id.
17 Malia Reddick, Evaluating the Written Opinions of Appellate Judges: Toward a More Qualitative Measure of Judicial Productivity, 48 NEW ENG. L. REV. 547, 552 (2014).
18 Jordan M. Singer & Hon. William G. Young, Bench Presence 2014: An Updated Look at Federal District Court Productivity, 48 NEW ENG. L. REV. 565, 565–78 (2014).
