Zoom, the videoconferencing company, is facing legal challenges in Europe over its use of customer data to train artificial intelligence models. A clause added to Zoom’s terms and conditions in March 2023 attracted European Union (EU) scrutiny because it appeared to grant the company the right to use customer data for AI training without offering an opt-out. The public reaction was strong, with concerns centering on the consent required to process customer information for AI model training, particularly under clauses 10.2 through 10.4 of Zoom’s terms and conditions.
Zoom’s attempts to clarify its position and respond to the controversy were met with further criticism and suspicion that the company might be withholding information. The dispute implicates major EU privacy laws, including the General Data Protection Regulation (GDPR) and the ePrivacy Directive, which set strict standards for what counts as valid, clearly and explicitly given user consent to personal data processing. Zoom’s practices, such as pre-checking consent boxes and bundling distinct data processing purposes into a single consent, most likely do not meet those standards. Additionally, Zoom’s assertion that users’ metadata can be used without consent contradicts EU law.
Although Zoom claims it relies on customer consent as the legal basis for its AI data mining, legal experts argue that the company is actually relying on “performance of a contract” as its legal basis, which permits processing user data only where it is essential to delivering the contracted service; non-essential processing for services outside the contract, such as AI training, may therefore not be lawful. This situation parallels that of OpenAI’s chatbot service, ChatGPT, which had to switch to the legal basis of “legitimate interests” to process data for AI model training. It remains for EU courts to decide whether Zoom, too, must reform its approach in order to comply with EU privacy laws.
The whytry.ai article you just read is a brief synopsis; the original article can be found here: Read the Full Article…