AI Transcription Bots: The Legal Risks and the Need for Organizational AI Policies
12/18/2025
It is now commonplace when participating in a video call that a recording agent (sometimes referred to as “bot”) will pop up. Sometimes the bot will appear automatically and other times a participant will have “invited” the bot. The purpose of the agent is to record the call and prepare an AI-generated summary. Many organizations have become quite accustomed to relying on such bots and enjoy the apparent benefit of not having to take careful notes.
What could go wrong?
Confidentiality Risks
If the conversation is intended to be confidential – such as an attorney-client communication, discussion of a publicly traded company’s material non-public financial information, or the sharing of HIPAA-protected medical information – inviting the bot will almost certainly destroy confidentiality. Understanding why requires understanding how the summary is generated. The bot sends the digital recording of the conversation to a third-party Large Language Model (LLM) provider (such as OpenAI), which uses that recording to prepare a summary of the conversation and, depending on the provider’s terms of service, may also use it to “train” the model and thereby to generate content for other users of the service. In short, by using the bot, the participants are disclosing their conversation to third parties, albeit indirectly.
The risk of such inadvertent disclosure is not minuscule or hypothetical. The U.S. Department of Health and Human Services, the governmental entity that enforces HIPAA, has warned that “generative AI tools have produced in their output the names and personal information of persons included in the tools’ sources of training data. Similar uses of generative AI by regulated entities, including the training of AI models on patient data, could result in impermissible uses and disclosures, including exposure to bad actors that can exploit the information.” No matter the industry, this likely creates regulatory compliance risk.
Inaccurate or Misleading Summaries
The AI-generated summary will most likely contain inaccuracies. It may miss important nuance or context, over-simplify complex discussions, misidentify speakers, invent or infer details that were not stated on the call (i.e., “hallucinations”), and misrepresent decisions and action items. While estimates vary and different AI models perform differently, AI-generated content is reported to be inaccurate anywhere from 15% to 39% of the time. The lack of careful notetaking during the call only exacerbates the problem and invites frustration: one participant will likely defend the AI-generated summary as infallible while another dismisses it as workslop.
Potential Expansion of Liability
AI-generated call summaries become written artifacts, subject to discovery in litigation if relevant to the claims asserted. As such, they have the potential to create unintended evidence, undermine witness credibility by conflicting with participants’ recollections of the call, and be inconsistent with official minutes or documentation.
Consent and Notice Issues
Some participants may be in jurisdictions that legally require all-party consent to the recording of a call. The participant deploying the bot may be violating the rights of the other participants if the consent of all participants has not been obtained. A series of lawsuits currently pending against one of the better-known AI transcription bots, Otter.ai, makes exactly that claim. In re Otter.AI Privacy Litigation, No. 5:25-cv-06911 (N.D. Cal.).
Managing the Risks
Given the expansive and disruptive nature of AI in the workplace, employers are in dire need of carefully considered, plainly structured AI policies for staff. The use of transcription bots should fall within such a policy, which should take into account the legal risks described above.
For support in developing your organization’s approach to AI, including managing employee use of AI transcription bots, developing AI policies and procedures, and reviewing AI vendor contracts, please contact:
Jay Sabin, Esq., Member, Labor and Employment Practice at 917.596.8987 or jsabin@bracheichler.com
Related Practices: Healthcare Law, Labor and Employment
Related Attorneys: Lani M. Dornfeld, Jay Sabin
Related Industry: Healthcare
