Artificial intelligence (AI) programs, like their human developers, are not perfect and can make errors, produce inaccurate information, and display biases. New research suggests that human users may unconsciously absorb these biases. The study showed that bias introduced by an AI model can persist in a person’s behavior even after they stop using the AI program.
Understanding how AI can influence human decisions, and how people learn from biased models, is therefore important. The study simulated a medical diagnostic task and found that participants who received biased suggestions from a simulated AI assistant incorporated that bias into their own decision-making. While the research offers insight into how people learn from biased AI, its implications for practicing physicians are limited.
The study suggests that AI can influence human behavior for the worse and that interactions with problematic AI models can have lasting effects. AI models can become even more biased than the humans who train them, and there is a risk of attributing undue objectivity to machine-learning tools. Algorithms also lack the subtle cues of uncertainty present in human communication, which can lead to misunderstandings. Transparency from AI developers is crucial to addressing AI bias, but obtaining that transparency remains a challenge. Increased knowledge, understanding, and reporting of AI systems are necessary to minimize the impacts of AI bias. The study notes that it is important to prevent a cycle in which biased humans create increasingly biased algorithms.
The whytry.ai article you just read is a brief synopsis; the original article can be found here: Read the Full Article…