Zoom Faces Legal Dispute Over Use of Customer Data for AI Model Training
Zoom is embroiled in a dispute over its use of customer data to train AI models, raising questions about privacy and data rights.
Three years ago, Zoom reached a settlement with the FTC after being accused of deceptive marketing around its security claims, having exaggerated the strength of its encryption. Now the company faces a potentially similar issue in Europe over its privacy terms and conditions.
The recent controversy revolves around a clause added to Zoom's legal terms in March 2023. A Hacker News post drew attention to the clause, claiming it permitted Zoom to use customer data to train AI models with no way to opt out. The post sparked outrage on social media.
Upon closer examination, some experts suggested that the "no opt out" provision might apply only to "service generated data," such as telemetry, product usage, and diagnostics data, and would therefore exclude customers' activities and conversations on the platform.
Despite this clarification, discontent persisted. Many found it disconcerting that their personal data could be repurposed to fuel AI models, potentially hastening an AI-driven future in which their own jobs become redundant.
The contentious clauses in Zoom's terms and conditions are found in sections 10.2 through 10.4. Notably, the final bolded line states that users consent to Zoom processing "audio, video, or chat customer content" for AI model training. This consent claim follows a lengthy passage in which users grant Zoom extensive rights over various types of usage data, including for purposes unrelated to AI training.
Beyond the evident reputational risk from customer backlash, Zoom also faces potential legal exposure in the European Union, where data protection laws, including the General Data Protection Regulation (GDPR) and the ePrivacy Directive, impose privacy-related obligations on the company.
The ePrivacy Directive, which has been extended to cover over-the-top services like Zoom, prohibits interception or surveillance of communications without user consent. In response to the controversy, Zoom published an updated version of the terms, highlighted in a blog post, and claimed that it would not use customer content to train AI models without consent.
However, Zoom's communication was criticized as vague and evasive, failing to address concerns about data usage head-on. Questions about the legal basis for training AI models on EU users' data, and about how that data usage relates to Zoom's generative AI features, remained unanswered.
Despite these attempts at clarification, Zoom's response left many uncertain. The company appears to let account administrators consent on behalf of an entire group, with other participants merely receiving notice of the admin's decision. That approach raises further questions about whether such consent is valid, given that the GDPR requires consent to be freely given by the individual concerned.