We’re back!

Oral argument

On to a presentation on oral argument from our friend Mark Fleming at WilmerHale – how can practitioners optimize their performances so that judges will want to hear oral argument?

First, the moot. One point Mark made was the usefulness of having someone there to take notes, since if you are the one being mooted, you may not be in a position to remember specific takeaways. Another good suggestion was to stop the moot when you're giving a really bad answer and work out the good answer then and there, because otherwise the bad answer might get wired into your brain. The Q&A that followed included a discussion of how to make mooting less expensive. In keeping with a recent blog entry, participants pointed to the availability of law school moot programs. Even if a moot is not in the cards, you can probably convince a lawyer buddy to just read the briefs and jot down questions for you.

An obscure point mentioned was that the U.S. Supreme Court has a guide for oral argument in which it opines that flipping through a notebook looks more professional than flipping through a legal pad.

Both Mark and the judges in other sessions noted that it's important to recognize and articulate where the specific issue addressed in the appeal sits in the broad pantheon of the law, and what the ripple effects of the decision will be. That's where those pesky hypotheticals come from. That situational point was also much easier to grasp with old-fashioned book research, using key notes. With computer research, a pinpoint question can be asked and answered, and sometimes a newbie won't even read the whole decision in which the issue is discussed, let alone give any thought to how the proposed answer will affect the larger law.

A.I.

This was perhaps the most thought-provoking session, with the speaker, Professor Linna, from my alma mater, Northwestern. Good for him for standing up in front of a room of middle-aged appellate practitioners, including many proud Luddites, and explaining to them how artificial intelligence is going to replace a lot of what we now do as lawyers.

It’s already happening. British Columbia has a wholly computer-based dispute-resolution system for landlord-tenant disputes and small claims and is moving on to motor vehicle injuries. The UK is investing $1 billion in its online systems, which, unlike the BC system, will still leave some limited role for lawyers. Estonia is experimenting with AI for online decision-making. We all know about LegalZoom. Big firms now have processes for data breaches. A company called Legal Nation is selling software to various clients: you can load in a simple complaint, like a slip and fall, and it will almost instantly spit out an answer and a discovery request.

There are three phases of A.I. The first is rule-driven – like TurboTax, you fill in the blanks and a response comes out based on governing rules. The next phase is data-driven, where the computers learn and get better based on the data you give them – you train the system, just as your spam filter improves over time. The last phase is where the computer not only learns to improve its work over time, but gets better at it than we are. Fortunately, it sounds like I’ll be retired before that happens.
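For the non-Luddites, that first "rule-driven" phase is nothing mysterious. It is just fixed rules applied to filled-in blanks, which a few lines of code can illustrate (this is my own toy sketch – the claim types and dollar thresholds are invented, not drawn from any real tribunal's rules):

```python
# Toy illustration of phase-one, rule-driven legal tech:
# hard-coded rules applied to fill-in-the-blanks inputs, TurboTax-style.
# The claim types and the $5,000 threshold are hypothetical.

def route_dispute(claim_type: str, amount: float) -> str:
    """Route a dispute to a forum based on fixed, hand-written rules."""
    if claim_type == "small_claims" and amount <= 5000:
        return "online tribunal"
    if claim_type == "small_claims":
        return "civil court"
    return "refer to counsel"

print(route_dispute("small_claims", 3000))   # online tribunal
print(route_dispute("small_claims", 9000))   # civil court
print(route_dispute("motor_vehicle", 2000))  # refer to counsel
```

The system never gets smarter than its rules; the data-driven second phase replaces the hand-written conditions with patterns learned from examples.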

Professor Linna’s basic theme was that there’s no point in just rejecting A.I. There are good things from it – computers can help fight human biases, reduce costs and improve access to justice. There are a lot of bad things that can happen, too – biases can get wired into that computer system, for example – so lawyers need to stop simply being hostile and start participating to try to stop the bad stuff from happening. The bottom line is lawyers need to think more like engineers than artists.

Given the large number of attendees who went to law school because they couldn’t handle math and science, the reaction to his presentation wasn’t exactly a warm hug. But the world is changing – the liberal arts are fading, while grammar school students now learn to code.

Alea iacta est – the die is cast.

Ethics

The final session was on ethics — isn’t it always, to make sure you stick around? But this was one of the most interesting lectures on the topic I’ve ever been to, because it wasn’t about how conduct X violated rule Y and so on. Instead, it was about why people violate the rules – behavioral psychology. See Curated Resources > Behavioral Ethics, Ethics Unwrapped, McCombs School of Business, University of Texas.

The idea is to understand what motivates people to make the wrong decisions and to make them more aware of those pressures, so that they will, hopefully, pause and deal with them better.

Most of us know about the whole “thinking fast and slow” phenomenon – our brains make two distinct kinds of decisions: quick, gut ones and more deliberate ones. The first kind vastly outnumbers the second. Importantly, once you make a gut decision, your deliberative system is wired to want to justify it – to dig in. This is why it’s so hard to change people’s minds politically. And ethically, people are wired to cheat as much as they can while still thinking of themselves as honest people.

Hence the importance of a conscious understanding of all the pressures – e.g., the boss told me to do it; money – and the excuses – e.g., no harm, no foul; everybody else does it. We tend to frame a question with a self-serving bias, and we are also wired to please people in authority. In short, it’s hard to resist if your boss is telling you to do something, other people do it, there’s a reward for you or the firm for doing it, and little risk of getting caught.

These are straightforward points, but the more we understand our motivations, the more we can control our conduct or put processes in place to reduce their influence. For example, one of the speakers’ firms has a system in which the person who decides whether a conflict exists is not told the monetary value of the representation.

In sum, it was an excellent meeting.