The potential of Artificial Intelligence (AI) tools, such as Google's Bard and OpenAI's ChatGPT, has triggered a mixed response in the tech world. These tools generate human-like responses by drawing on large databases of content, but that capability has raised concerns about privacy, security, and intellectual property rights. In a research paper, Mathieu Gorge, VigiTrust's founder, shared the results of interviews he conducted with 15 Chief Information Security Officers (CISOs), all of whom expressed apprehension about generative AI. Possible IP leakage and confidentiality issues were among the most pressing worries, along with the risk that employees adopting these tools without oversight will create shadow IT.
These tools often process data over the internet without specifying where that data is stored. They may also retain user input, such as individual preferences and query phrasing, to personalize future interactions. The terms and conditions of these services raise further doubts, underscoring the need to read them before sharing any personal information.
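As a minimal sketch of one common safeguard against this, the snippet below redacts obvious personal data from a prompt before it ever leaves the organization. The redact_sensitive function and its patterns are illustrative assumptions, not part of any vendor's API; a real deployment would use a vetted PII-detection library with patterns tuned to the business.

```python
import re

# Hypothetical patterns for common PII; illustrative only.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "credit_card": re.compile(r"\b(?:\d[ -]*?){13,16}\b"),
}

def redact_sensitive(prompt: str) -> str:
    """Replace anything matching a PII pattern before the prompt
    is sent to an external generative AI service."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Draft a reply to jane.doe@example.com about invoice 4417."
    print(redact_sensitive(raw))
    # -> "Draft a reply to [REDACTED EMAIL] about invoice 4417."
```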
The behavior of generative AI tools can differ considerably: some rely on dated training data, while others actively search for current information. Google's Bard and OpenAI's ChatGPT both have data privacy policies that allow users to delete individual conversations, all of their data, or their accounts, with deletion completed within a 30-day timeframe. Both also monitor queries to prevent abuse. Even so, these measures may not be enough to protect businesses, which have far less control over the training data of public tools than they do over enterprise-based machine learning solutions.
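To make that retention mechanic concrete, here is a minimal sketch, under the assumption that deletions take effect after a 30-day window, of how a provider might sweep out conversations a user has marked for deletion. None of this is actual Bard or ChatGPT code; the data layout is hypothetical.

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)  # mirrors the 30-day window described above

def sweep_conversations(conversations: list[dict]) -> list[dict]:
    """Drop any conversation the user marked for deletion once the
    retention window has elapsed."""
    now = datetime.now(timezone.utc)
    return [
        c for c in conversations
        if not (c["deleted_at"] and now - c["deleted_at"] >= RETENTION)
    ]

convos = [
    {"id": 1, "deleted_at": datetime.now(timezone.utc) - timedelta(days=45)},
    {"id": 2, "deleted_at": None},  # never marked for deletion, so kept
]
print([c["id"] for c in sweep_conversations(convos)])  # -> [2]
```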
Industry experts recommend that generative AI services publish clear guidelines and policies on data privacy, including transparency about where data is stored and whether it is used for training and improvement. Obtaining consent to share client data with internally operated systems is usually straightforward; with public chatbots it is far more complicated. Italy and the UK have recently taken steps to limit the use of such tools, citing security and compliance issues.
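A short sketch can illustrate that consent gap. The consent ledger and both model stubs below are hypothetical, but they show how an explicit opt-in check might decide whether client data ever reaches a public chatbot rather than staying on an internally operated system.

```python
from dataclasses import dataclass

@dataclass
class ConsentRecord:
    client_id: str
    allows_external_ai: bool  # explicit opt-in for public chatbot use

# Hypothetical in-memory ledger; in practice this would live in a
# consent-management platform with audit logging.
CONSENT_LEDGER = {
    "acme-corp": ConsentRecord("acme-corp", allows_external_ai=False),
    "globex": ConsentRecord("globex", allows_external_ai=True),
}

def run_on_internal_model(query: str) -> str:
    return f"[internal model] {query}"

def forward_to_public_chatbot(query: str) -> str:
    return f"[public chatbot] {query}"

def submit_query(client_id: str, query: str) -> str:
    record = CONSENT_LEDGER.get(client_id)
    if record is None or not record.allows_external_ai:
        # Without explicit consent, the query never leaves the
        # internally operated system.
        return run_on_internal_model(query)
    return forward_to_public_chatbot(query)

print(submit_query("acme-corp", "Summarize the contract terms."))
# -> "[internal model] Summarize the contract terms."
```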
Despite these concerns, many experts acknowledge the potential of generative AI, and CIOs and chief data officers are looking forward to what the technology makes possible. It needs further development before it is ready for corporate settings, however: the quality and consistency of responses should be monitored, enterprises should have control over fine-tuning and governing the models, and ethical considerations, including bias mitigation, should be addressed to ensure fairness.
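As one way an enterprise might monitor quality and consistency, the sketch below replays the same prompt several times and flags runs whose answers diverge. The ask_model stub, the number of runs, and the 0.8 threshold are all assumptions to be tuned per deployment, not part of any vendor's tooling.

```python
from difflib import SequenceMatcher

def ask_model(prompt: str) -> str:
    # Stub standing in for a call to whichever model the enterprise runs.
    return "Generative AI output for: " + prompt

def consistency_score(prompt: str, runs: int = 3) -> float:
    """Ask the same question several times and return the average
    pairwise similarity of the answers (1.0 = identical)."""
    answers = [ask_model(prompt) for _ in range(runs)]
    pairs = [(a, b) for i, a in enumerate(answers) for b in answers[i + 1:]]
    return sum(SequenceMatcher(None, a, b).ratio() for a, b in pairs) / len(pairs)

if __name__ == "__main__":
    score = consistency_score("Summarize our refund policy.")
    if score < 0.8:  # threshold is an assumption, tuned per use case
        print(f"Flag for review: consistency {score:.2f}")
    else:
        print(f"Consistent: {score:.2f}")
```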
The whytry.ai article you just read is a brief synopsis; the original article can be found here: Read the Full Article…