Artificial intelligence (AI) raises a wide range of considerations, including privacy, security, ethics, and learned bias. The field is expanding so rapidly that businesses must keep reassessing their policies and practices, even if they have not officially adopted AI yet. At the Fortune Brainstorm Tech conference in Deer Valley, Utah, Check Point’s Chief Technology Officer, Dorit Dor, noted that many companies are unaware of how much internal data is already flowing into AI tools within their organizations. Their data and information, she emphasized, are already available to these tools.
At the conference, executives from companies such as PagerDuty, Salesforce, and Signal shared their perspectives and experiences on data in the age of AI. One concern discussed was employees pasting internal data into tools like ChatGPT, which risks leaking both proprietary competitive information and personal customer data. Clara Shih, CEO of Salesforce’s AI business, pointed out that some of the AI tools employees reach for lack the clear data separation enterprises expect from secure databases. Large language models, which power generative AI tools, need as much context as possible from the user to produce relevant, accurate responses; without careful architecture, however, the context supplied in a prompt can be absorbed by the model itself, compromising security.
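To illustrate the kind of safeguard this concern points toward, the sketch below redacts obviously sensitive substrings from a prompt before it is ever sent to an external model. It is a minimal, hypothetical example: the patterns, placeholder labels, and sample text are assumptions for illustration, not anything described by the speakers, and a real deployment would rely on a proper data-loss-prevention or classification service.

```python
import re

# Hypothetical patterns for data an organization might consider sensitive.
# A real deployment would use a dedicated DLP/classification service.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(sk|pk)_\w{16,}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace likely-sensitive substrings with placeholders before the
    prompt leaves the company network for an external LLM."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarize the complaint from jane.doe@example.com, account key sk_live_ABCDEF1234567890."
    print(redact_prompt(raw))
    # -> Summarize the complaint from [REDACTED EMAIL], account key [REDACTED API_KEY].
```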
Sean Scott, chief product development officer at PagerDuty, echoed these concerns and emphasized following security best practices: defining policies, identifying valuable data, and educating employees are crucial steps in protecting it, and consistent monitoring is needed to ensure those policies are actually followed.
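One way such monitoring could look in practice is a routine scan of outbound prompts or logs against terms the company has classified as valuable. The policy terms and sample prompts below are purely hypothetical assumptions used to keep the sketch self-contained.

```python
# Toy monitor that flags prompts containing terms a company has classified
# as sensitive. A real program would hook into a proxy or DLP pipeline.

# Hypothetical policy: terms the (fictional) company has identified as valuable data.
CLASSIFIED_TERMS = {"project aurora", "q3 roadmap", "customer_churn_model"}

def violations(prompt: str) -> list[str]:
    """Return the classified terms found in a prompt, if any."""
    lowered = prompt.lower()
    return [term for term in CLASSIFIED_TERMS if term in lowered]

def review(prompts: list[str]) -> None:
    """Print a simple allow/block decision for each prompt."""
    for p in prompts:
        hits = violations(p)
        status = f"BLOCKED ({', '.join(hits)})" if hits else "ok"
        print(f"{status}  |  {p}")

if __name__ == "__main__":
    review([
        "Draft a press release about Project Aurora pricing.",
        "Explain the difference between TCP and UDP.",
    ])
```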
Alongside protecting internal data, companies also struggle to assess the quality of the external data they incorporate. Signal’s President, Meredith Whittaker, noted that most off-the-shelf large language models are opaque to the people who use them: the models reflect whatever data they were trained on, but users have no visibility into what that data contains. Deploying such tools therefore carries a risk of incorrect or offensive results rooted in data no one can inspect. Whittaker stressed that fine-tuning these models with additional data suited to a specific domain or purpose can push outputs toward being less offensive, but she argued that users need greater clarity and agency, and called for more regulation to curtail the circulation of problematic data.
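As a rough illustration of what curating data for domain fine-tuning can involve, the sketch below filters a set of example records through a crude content check before writing them to a JSONL file, a format many fine-tuning pipelines accept. The field names, blocklist terms, and file name are assumptions made for the example; a production pipeline would use trained content classifiers and human review rather than a static term list.

```python
import json

# Hypothetical blocklist; a real pipeline would use a trained content classifier.
DISALLOWED = {"slur_example", "harassment_example"}

def acceptable(record: dict) -> bool:
    """Keep only examples whose prompt and completion pass a simple content check."""
    text = f"{record['prompt']} {record['completion']}".lower()
    return not any(term in text for term in DISALLOWED)

def build_dataset(records: list[dict], path: str = "domain_finetune.jsonl") -> int:
    """Write acceptable domain examples to a JSONL file and return how many were kept."""
    kept = [r for r in records if acceptable(r)]
    with open(path, "w", encoding="utf-8") as fh:
        for r in kept:
            fh.write(json.dumps(r) + "\n")
    return len(kept)

if __name__ == "__main__":
    examples = [
        {"prompt": "Summarize ticket #123", "completion": "Customer reports a billing error."},
        {"prompt": "Reply rudely", "completion": "contains harassment_example"},
    ]
    print(f"{build_dataset(examples)} examples written to domain_finetune.jsonl")
```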
Although regulation is a starting point, Dor of Check Point cautioned that it would only raise the minimum bar, not guarantee safety. For now, the burden of handling data in the AI era falls largely on chief information security officers (CISOs), who, Dor noted, are already stretched thin and must now also navigate the legal questions surrounding data in the AI landscape.
The ever-expanding intersection of AI and data poses numerous considerations for businesses. From safeguarding internal data to scrutinizing external data sources, companies need to be vigilant in their practices. Regulation can help mitigate the risks, but it is not sufficient on its own; collaboration among industry experts, regulators, and businesses is needed to adapt to the multifaceted challenges posed by data in the era of AI.