OpenAI recently unveiled Sora, a remarkable “text-to-video” tool that generates a realistic video from a user’s prompt almost instantly. The public did not get immediate access to Sora. Reportedly, OpenAI has a red team of security experts assessing the model for its potential to produce deepfake videos, misinformation, bias, and hateful content, and for its susceptibility to compromise. Red teaming, now a staple of cybersecurity, is a military concept that was not originally intended for broad application in the private sector. It has recently proven valuable for identifying technology vulnerabilities, but it does not address the lack of government regulation of technology.
The concept of red teaming originated in Sun Tzu’s military strategy in “The Art of War.” In that framing, the red team plays the adversary and probes for hidden vulnerabilities in the blue team’s defenses. More recently, red teaming has become a trusted cybersecurity technique: it aims to find existing computer and network vulnerabilities so they can be fixed. In the fast-paced world of AI development, however, this approach may not always be effective. It is unclear who plays the red and blue teams in this context, what each team’s role is, and whether and how the adversarial exercise ultimately benefits the public.
President Joe Biden directed the U.S. National Institute of Standards and Technology (NIST) to develop science-based guidelines to support the deployment of safe, secure, and trustworthy AI systems. NIST has begun implementing its new duties and has also launched a consortium to assess AI systems and improve their safety and trustworthiness. Red teaming can help protect AI models from cyber threats, such as data theft or sabotage, by identifying and ranking vulnerabilities. However, it is not a comprehensive solution and should be used alongside other evaluation, assessment, and mitigation techniques. In these early days of AI development, red teaming alone is insufficient to address the risks these models introduce; a comprehensive set of strategies is essential, rather than only a recycled military game popular with cybersecurity groups.
The whytry.ai article you just read is a brief synopsis; the original article can be found here: Read the Full Article…