
The Disqualification of an Expert Witness for Artificial Intelligence Use

In a 2024 New York court decision, an expert witness was disqualified over concerns about his reliance on artificial intelligence (AI) in preparing his testimony. The ruling reflects growing judicial scrutiny of the admissibility of AI-related evidence and of the qualifications of witnesses who interpret AI data in legal proceedings.

Written by: Consolidated Consultants · Oct 15, 2024 · 4 minute read

A Court’s Perspective on Emerging Technology and Artificial Intelligence (AI)

In a recent ruling, a court addressed a growing issue related to the use of artificial intelligence (AI) in legal proceedings. The decision sheds light on the complexities trial courts face as they navigate the evolving role of AI in expert testimony.

In Matter of Weber (2024 NY Slip Op 24258), the Surrogate’s Court addressed a dispute involving the administration of a trust. Susan Weber, the trustee, was accused by Owen K. Weber, the beneficiary, of breaching her fiduciary duties by retaining and using a property in the Bahamas. The court evaluated whether the retention violated the Prudent Investor Act and whether her trips constituted self-dealing. Ultimately, the court found no breach of fiduciary duty and no damages, as the trustee’s actions did not negatively affect the trust’s financial performance.

The case involved Mr. Ranson, an expert witness whose reliance on Microsoft Copilot, an AI-powered chatbot, raised serious questions about the credibility and admissibility of his testimony.

While the court ultimately found Mr. Ranson’s overall testimony to be unreliable, a key portion of his evidence required closer examination. Mr. Ranson admitted to using Copilot to cross-check his calculations for a Supplemental Damages Report. However, he was unable to provide critical details about the inputs he used, the sources Copilot relied on, or how the tool reached its conclusions. This lack of transparency posed a significant problem for the court.

To underscore its concerns, the court conducted its own experiment, using Copilot to perform a similar financial calculation to the one presented by Mr. Ranson. The court received three different results across three different computers—none of which matched Mr. Ranson’s figure. Although the variations were relatively minor, the inconsistency called into question the reliability of AI-generated calculations as admissible evidence in legal proceedings.
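The court’s experiment turns on reproducibility: a conventional financial calculation, given the same inputs, returns the same answer on every machine, every time, whereas Copilot returned three different figures. As a minimal sketch of the property the court was testing for (the actual inputs and formula from Mr. Ranson’s report were never disclosed, so the figures below are purely hypothetical):

```python
# Purely hypothetical illustration: these numbers are invented and are not
# the actual inputs from the Weber Supplemental Damages Report.

def compound_value(principal: float, annual_rate: float, years: int) -> float:
    """Grow a principal at a fixed annual rate, compounded yearly."""
    return principal * (1 + annual_rate) ** years

# A deterministic calculation is reproducible across machines and runs --
# the property the court could not obtain from the chatbot's output.
baseline = compound_value(250_000.00, 0.05, 10)  # hypothetical trust corpus
assert baseline == compound_value(250_000.00, 0.05, 10)
print(f"Result: ${baseline:,.2f}")  # always $407,223.66
```

Because the chatbot’s figures varied from run to run, the court had no comparable way to check them against a fixed, repeatable computation.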

Moreover, when queried directly, Copilot acknowledged its limitations. In response to questions about its accuracy and reliability, the AI tool emphasized that its outputs should always be verified by experts and should not be solely relied upon in critical matters, such as legal proceedings. This self-admission reinforced the court’s view that AI tools like Copilot should be used cautiously and under human oversight, particularly in contexts where accuracy is paramount.

Mr. Ranson defended his use of AI, arguing that tools like Copilot represent the future of fiduciary analysis. However, he failed to provide any authoritative sources or industry publications to support the claim that AI-generated calculations are generally accepted within the field. This lack of evidence further weakened his testimony.

Under New York State law, expert testimony must meet the standards set forth in Frye v. United States (1923), which requires that the methods used be generally accepted in the relevant field. In this case, the court found no evidence to suggest that Copilot’s use met the Frye standard. While AI is increasingly prevalent across industries, its mere presence does not automatically make it admissible in court. Courts have expressed concerns about due process when software programs, rather than human analysts, generate critical decisions.

The court cited prior decisions, including People v. Wakefield (2019, 2022), where AI-assisted technology was admitted as evidence after a full Frye hearing that involved expert testimony and peer-reviewed studies. In Mr. Ranson’s case, no such supporting evidence was presented regarding Copilot’s reliability. Without this, the court could not accept the AI-generated calculations as accurate or admissible.

In addressing the broader issue, the court defined AI as any technology using machine learning or computational processes to simulate human intelligence, such as document generation, evidence analysis, or legal research. It also distinguished between “generative AI,” which creates new content, and “assistive AI,” which supports human-generated materials.

In what may be a groundbreaking decision for surrogate’s court practice, the court ruled that AI-generated evidence must be disclosed by counsel before being introduced and should be subject to a Frye hearing to determine its admissibility. This hearing, either pre-trial or at the time the evidence is offered, will allow the court to assess the scope and reliability of AI-generated materials before they are used in court.

As AI continues to evolve and its use expands, this decision underscores the need for clear legal standards and human oversight when it comes to integrating AI technology into the legal system.

About the Author

Consolidated Consultants

We are an expert witness referral company based in Chula Vista, California. Since 1995, our team has been dedicated to locating quality expert witnesses for our clients. We believe that by listening intently and asking the right questions, we can find the right experts and make a positive impact on people’s lives. We strive to create a website that is both useful and enjoyable to use, along with tools that help those in the legal industry find the right expert for their needs.


Find An Expert Witness

Search our directory of 30,000+ categories of CVs from highly qualified and hard-to-find experts and consultants.

We take care of the details and search for you. We respond in as little as 1 hour.*