By Alexander M. Slizewski, Esq., Lewis Brisbois, LLP, Co-Chair of WBASNY Technology Committee
For years, AI has been used in spam filters, fraud detection systems, recommendation engines, and search tools: day-to-day applications that most people never interacted with directly. Today, AI allows people to engage with it in a direct, conversational way. As a result, AI is no longer something that operates quietly in the background. It is becoming a tool that professionals, including lawyers, interact with directly on a recurring basis.
For lawyers, the most widely discussed concern with using AI is the phenomenon commonly called “hallucination.” A generative AI system may invent a case citation, misstate the holding of a real case, or produce a quotation that does not exist. However, lawyers should resist reducing AI risk to hallucinations alone. Even when an AI system does not invent authority, it may still produce incomplete, stale, or skewed output. For example, if the information the system relies on is outdated, the resulting answer may be outdated as well. This phenomenon is known as “data drift,” and it can erode the accuracy of an AI system’s output over time.
More broadly, AI systems can reinforce common assumptions and smooth over outliers, creating what is called an “echo chamber”: the AI develops a bias by repeatedly reinforcing the same ideas, opinions, or assumptions over time, often based on limited data or user input, without adequately considering alternative perspectives.
Confidentiality presents a separate and equally serious concern. Lawyers cannot assume that information entered into an AI system is automatically confidential. How that information is handled depends on the platform, its settings, the contract terms, and the product tier. Some systems may retain data, monitor it, or use it for training unless proper controls are in place. A lawyer who enters confidential information may effectively be disclosing it to a third party.
There is also a subtler danger: overreliance, where users trust AI too quickly and accept its output without sufficient scrutiny. Recent research highlighted by the American Psychological Association suggests that heavy reliance on AI at work may weaken independent thinking and reduce a person’s sense of ownership over ideas. Overreliance on AI Programs May Undermine Confidence at Work, Am. Psych. Ass’n (Apr. 16, 2026). That concern should matter to lawyers. Legal practice is not just the production of text; it requires judgment, analysis, and accountability. If a lawyer accepts AI output too passively, the risk is not only that the answer may be wrong, but that the lawyer’s critical thinking skills may erode over time.
To paraphrase Frank Herbert’s Dune, technology is a useful servant but a poor substitute for human judgment. Before using any AI tool, a lawyer should determine whether it is a closed system and whether it retains data or uses data for training. Lawyers should also stay in the loop, ensuring meaningful human review by checking every output for accuracy and completeness, and should avoid overreliance on AI outputs in order to preserve their own critical thinking and case-analysis skills.
