After an attorney used artificial intelligence to prepare a filing that presented false information in federal court last week, the judge in the case is taking steps he hopes will prevent it from happening again.
Attorney Steven Schwartz used ChatGPT to “supplement” his legal briefing, but the popular AI tool supplied him with several cases that were completely fabricated. Schwartz apologized, saying he “greatly regrets” the error, but Judge Brantley Starr is taking steps to make sure it remains a one-time incident.
The judge said any lawyer presenting a case at the U.S. District Court for the Northern District…