Listen to Karl Schieneman, Founder and President of Review Less and a predictive coding consultant, talk with his hand-picked panel of predictive coding "power users" about different approaches to using predictive coding and analytical review tools. The panel comprises:
- Maura Grossman – Wachtell Lipton, TREC Co-Coordinator & Co-Author of Technology-Assisted Review in E-Discovery Can Be More Effective and Efficient Than Exhaustive and Manual Review
- Tom Gricks – Schnader Harrison, Carnegie Mellon-trained chemical engineer & lead litigator in Global Aerospace, i.e., "The Predictive Coding Virginia Case"
- Bennett Borden – Williams Mullen & Co-Author of The Demise of Linear Review
- Dr. Herb Roitblat – Chief Scientist, OrcaTec, Chairperson of the E-Discovery Institute & Co-Author of Document Categorization in Legal Discovery: Computer Categorization v. Manual Review, and
- Dr. David Lewis – Computer scientist, consultant, and expert in the Kleen Products case.
This is one of the most entertaining shows I have ever had, and it was an appropriate bookend to the judicial panel on predictive coding podcast, which is also available on ESIBytes. The underlying subplot was Tom Gricks' and Maura Grossman's "heated" debate over the best approach for training predictive coding software. Is it better to train the software using a random selection of documents from a collection or by identifying seed documents? With the aid of a 4th C wine and a beer-consuming audience, we had all the ingredients for the equivalent of boxing's "Thriller in Manila," which I now dub the "Clash in Carmel." Listen to these two predictive coding heavyweights take off the gloves and debate each other, using all sorts of litigation tricks to convince the audience that their approach is preferable. Add into the mix Herb Roitblat and David Lewis, two prominent computer scientists, and Bennett Borden, a top-notch litigator who leverages advanced analytics, and you have the makings of a "free-for-all" of ideas and experiences.
The key common thread among us is that we are all experienced power users of predictive coding and advanced analytics. The biggest testament to the strength of the panel is that they all survived this debate while making good points, underscoring the observation made by one of the panelists at the end that predictive coding is in many ways like a GPS: there are any number of appropriate ways to use these tools. The key for the field is to experiment and move beyond linear review.
You know the panel has delivered an exceptional program when the time slot you are given is 4PM to 5PM on the second day of a conference, 85% of the conference, well over 200 attendees, shows up for your session, and the presentation runs 20 minutes long because of audience questions. Or it could be that 5:20PM is when we ran out of beer. So grab a beer or glass of wine if you have one handy, listen to the show and the round of applause at the end, and you be the judge as to whether or not we delivered. Also, a special thanks goes out to Chris La Cour, the creator of the Carmel Valley E-Discovery Retreat, for enabling us to record this session and post it on ESIBytes.