Imagine a room that has one locked door with a single mail slot in the door, offering no view of what’s in the room. Outside the room is a person who is fluent in Chinese, and who will occasionally push a Chinese message written on paper into the room through the mail slot. Inside the room is a person who doesn’t understand a word of Chinese, but is able to respond in Chinese by following a set of instructions. (Those instructions might follow this type of algorithm: “When you see this set of input characters, ‘a, b, c,’ respond with this set of output characters, ‘x, y, z.’”) Outsiders might think the person in the room is fluent in Chinese, but in reality, that person is merely executing pre-defined instructions quickly. This thought experiment, philosopher John Searle’s ‘Chinese Room’ argument, raises questions about what it means to truly understand language. It leads us to wonder: how can we determine whether AI (the person in the room) truly comprehends language?
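To make that rule-following concrete, here is a toy sketch in Python of the kind of lookup-table instructions the person in the room might follow; the specific messages and replies are invented purely for illustration.

```python
# Toy model of the Chinese Room: a lookup table maps input
# characters to scripted output characters. The entries below
# are illustrative, not from the original thought experiment.
RULE_BOOK = {
    "你好吗": "我很好",    # "How are you?" -> "I am well"
    "谢谢": "不客气",      # "Thank you"   -> "You're welcome"
}

def room_reply(message: str) -> str:
    """Return the scripted response, understanding nothing."""
    # Fallback: "Please say that again"
    return RULE_BOOK.get(message, "请再说一遍")

print(room_reply("你好吗"))  # -> 我很好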
When we interact with AI such as ChatGPT, it often appears as though the AI genuinely grasps our language. Beneath the surface, however, these models are following complex instructions, so we cannot rely on their outward performance alone to judge their understanding. To gain further clarity, let us revisit the Chinese Room scenario, but this time picture two rooms: one housing a person fluent in Chinese, the other an imposter. Although the input and output of both rooms may look identical, the processes inside couldn’t be more different. In one room, the fluent speaker uses their brain as a ‘scratch pad’ to read the Chinese message and compose a reply. In the other, the non-Chinese-speaking person uses an actual scratch pad to look up the best pre-programmed response for the message’s characters.
If we wish to determine whether AI comprehends language the way humans do, we need insight into its inner workings. Just as the imposter’s room hid a scratch pad, the brain has a metaphorical scratch pad of its own. Techniques like fMRI and EEG let us capture snapshots of the brain while a person reads, providing fuzzy yet meaningful glimpses into how the brain processes and organizes linguistic information.
Similarly, AI has its own scratch pad, housed within neural networks: complex systems of interconnected artificial neurons. When we feed a word into the network, each artificial neuron computes a numerical value, and together these values form the model’s internal representation, or ordering, of language. Collectively, these computations are the AI’s scratch pad, portraying how it perceives and represents language. Luckily, just as fMRI and EEG do for the brain, there are techniques that can capture the AI’s computational processes, too.
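As a concrete illustration, here is a minimal sketch of how one might capture this scratch pad. It uses the Hugging Face transformers library and GPT-2 as the example model; the article does not name a particular model or toolkit, so those choices are assumptions.

```python
# Minimal sketch: capture a language model's "scratch pad",
# i.e. its per-token hidden-state vectors, using Hugging Face
# transformers. Assumes `pip install transformers torch`.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")

inputs = tokenizer("The cat sat on the mat", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# outputs.hidden_states is a tuple with one tensor per layer,
# each shaped (batch, tokens, hidden_size). These numbers are
# the network's internal representation of the input words.
for layer_idx, layer in enumerate(outputs.hidden_states):
    print(f"layer {layer_idx}: {tuple(layer.shape)}")
```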
To map the similarities and differences between the brain’s processes and the AI’s, researchers have devised a novel approach: they train a model to predict the brain’s scratch pad (its recorded activity) from the neural network’s scratch pad (its internal activations) for a given word. Astonishingly, these predictions beat random chance, suggesting some degree of similarity between the brain and AI! This holds true not only for individual words, but also for more complex linguistic constructs like sentences and stories.
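A schematic sketch of this encoding-model idea follows. Synthetic data stands in for real brain recordings, and the ridge-regression setup is one common choice for such analyses, not necessarily the exact method the researchers used.

```python
# Schematic sketch of a "brain encoding" analysis: fit a linear
# model that predicts brain responses from network activations,
# then check whether held-out predictions beat chance.
# Synthetic data stands in for real fMRI recordings here.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_words, n_features, n_voxels = 500, 768, 100

X = rng.normal(size=(n_words, n_features))          # network "scratch pad" per word
true_map = rng.normal(size=(n_features, n_voxels))  # hidden brain/AI relationship
Y = X @ true_map + rng.normal(scale=5.0, size=(n_words, n_voxels))  # noisy "brain" data

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
model = Ridge(alpha=10.0).fit(X_tr, Y_tr)  # ridge handles many correlated features
Y_hat = model.predict(X_te)

# Score each voxel by the correlation between predicted and
# actual responses; above-zero held-out correlations indicate
# the two "scratch pads" share structure beyond chance.
corrs = [np.corrcoef(Y_hat[:, v], Y_te[:, v])[0, 1] for v in range(n_voxels)]
print(f"mean held-out correlation: {np.mean(corrs):.3f}")
```

In real studies the rows of X would be the model activations for each word a participant read, and Y the simultaneously recorded fMRI or EEG responses; evaluating on held-out words is what rules out the model simply memorizing the data.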
While AI does not replicate the brain’s functioning precisely, neither does it operate randomly. As neural networks become more accurate and more structurally similar to biological nervous systems, we should not be surprised if their ‘scratch pad’ activity comes to mimic brain-like patterns even more strongly. These findings extend beyond language comprehension; similar trends have been observed in AI vision, too. Exploring the inner workings of AI and comparing them to human patterns is therefore crucial: it illuminates our understanding of AI’s grasp of language, and of our own.
The whytry.ai article you just read is a brief synopsis; the original article can be found here: Read the Full Article…