Two Forms of AI: What Expert Witnesses Need to Know

Wayne Bennett, DC, DABCC, DABCO

Out here in the West where I live, whether you are riding fence or reviewing records, one rule holds: tools matter—but judgment matters more.

Artificial intelligence has come into the expert witness arena fast, and with real utility. For those of us sorting through volumes of medical records, depositions, and literature, AI offers something undeniably valuable: speed, organization, and pattern recognition at scale. It can summarize records, flag inconsistencies, identify documentation gaps, and even suggest lines of inquiry that might otherwise take hours to uncover.

Used properly, it’s like a good horse. Strong. Efficient. Worth having in the barn.

The trouble starts when you let it pick the trail.

There is a growing temptation—especially when timelines tighten—to allow artificial intelligence to move beyond its proper role as a tool and into something closer to an analyst. That shift is subtle. A clean summary reads like understanding. A list of “key findings” feels complete. Citations—confidently presented—can give the impression of verification.

But impressions are not evidence.

Artificial intelligence does not know anything in the way an expert witness must know it. It predicts and assembles language based on patterns. It may omit critical context, overstate conclusions, or—most concerningly—produce hallucinated citations that look entirely legitimate but do not exist, or do not support what they claim.

In clinical practice, that’s a problem.

In litigation, it’s a liability.

Experts who lean too heavily on AI-generated analysis risk more than getting a detail wrong. They risk undermining the foundation of their opinions, particularly under deposition, where opposing counsel will test every assertion with the patience of someone who has nowhere else to be.

Which brings us—quietly, but importantly—to the second form of AI.

Actual intelligence.

This is the part you were retained for.

Actual intelligence is the ability to read between the lines, to recognize what is missing, to weigh competing explanations, and to apply clinical and regulatory judgment to complex facts. It is what turns information into opinion—and opinion into something defensible.

The best practice is not to reject artificial intelligence, but to keep it in its proper place and pair it with actual intelligence:

  • Use artificial intelligence to gather, organize, and accelerate
  • Use actual intelligence to verify, interpret, and conclude

That pairing requires discipline. Every AI-assisted output should be treated as a draft, not a determination. Citations must be independently verified. Conclusions must be checked against the record. Language must be refined until it reflects what is supportable—not just what sounds right.

Because out here, you don’t stake your reputation on what sounds right. You stake it on what holds up.

And if that sounds a little old-fashioned, it’s because it is.

The alternative is not theoretical. It’s sitting in a deposition, steady as you please, referencing a study that turns out not to exist—while opposing counsel slides a document across the table showing exactly that. In that moment, the difference between the two forms of AI becomes painfully clear.

One helped you write it.

The other should have stopped you.

Wayne Bennett, DC

Chiropractic Expert Witness & Consultant

Diplomate, American Board of Chiropractic Consultants

Diplomate, American Board of Quality Assurance & Utilization Review Physicians

Diplomate, American Board of Chiropractic Orthopedics

Objective, evidence-based standard-of-care and documentation opinions from a former regulatory board chairman with 30+ years of clinical experience

Cogent Chiropractic Witness