In 2020, two scientists at the Massachusetts Institute of Technology spearheaded a project introducing an innovative neural network inspired by the biological intelligence of the microscopic roundworm C. elegans. This development led to the creation of liquid neural networks (LNNs). Following a significant advancement last year, these unique networks have demonstrated the potential to replace conventional ones in certain cases. Researchers recognized that C. elegans could serve as an ideal organism for devising robust neural networks that adapt to unexpected situations. This minute ground-dwelling creature, with its fully mapped nervous system, exhibits a diverse array of sophisticated behaviors and can learn from prior experience, making it an ideal model for a lightweight, adaptable, and efficient neural network.
Currently, artificial intelligence (AI) seems fully focused on large language models (LLMs) and on ever-bigger neural networks. However, not all applications can support the computational infrastructure and memory demands of these large models. To address this issue, researchers from MIT CSAIL are developing LNNs as an alternative. LNNs are compact, adaptable, and efficient, which makes them better suited to certain AI problems, such as robotics and self-driving cars.
Traditional deep learning models are a poor fit for applications that demand a small, portable footprint, because they require a great deal of computing power, storage, and cooling. LNNs, on the other hand, are designed to be small, accurate, and efficient, allowing them to run on small portable computers, such as those on board robots, without a connection to the cloud.
LNNs use algorithms that are less computationally expensive and that keep neurons stable during training. They can also adapt to new situations after training, an important capability not found in conventional neural networks. In addition, the LNN architecture incorporates continuous-time models, which allows their behavior to be adjusted in real time.
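To make the continuous-time idea concrete, here is a minimal, heavily simplified sketch of a liquid time-constant style update integrated with a plain Euler step. The ltc_step helper, the parameter values, and the toy sensor stream are invented for illustration; they are an assumption about the general form of such models, not the MIT team's actual formulation.

```python
import numpy as np

def ltc_step(x, u, dt, tau, W, A, b):
    """One Euler-integration step of a simplified liquid time-constant cell.

    x   : hidden state vector
    u   : input vector at this instant
    dt  : time elapsed since the previous sample (may vary per step)
    tau : base time constants of the neurons
    W, A, b : toy weight, equilibrium, and bias parameters
    """
    # Input-dependent gate: this is what makes the effective time constant "liquid".
    f = np.tanh(W @ np.concatenate([x, u]) + b)
    # Simplified dynamics: dx/dt = -(1/tau + f) * x + f * A
    dxdt = -(1.0 / tau + f) * x + f * A
    return x + dt * dxdt

rng = np.random.default_rng(0)
n_hidden, n_in = 4, 1
tau = np.ones(n_hidden)
W = 0.1 * rng.standard_normal((n_hidden, n_hidden + n_in))
A = rng.standard_normal(n_hidden)
b = np.zeros(n_hidden)

# Irregularly sampled "sensor" stream: (elapsed seconds, reading) pairs.
stream = [(0.1, 21.0), (0.5, 21.4), (0.05, 23.9), (1.2, 22.1)]

x = np.zeros(n_hidden)
for dt, reading in stream:
    x = ltc_step(x, np.array([reading]), dt, tau, W, A, b)
    print(dt, x.round(3))
```

Because the elapsed time dt is passed in explicitly, samples can arrive at irregular intervals, which is part of why continuous-time models suit streaming sensor data.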
Currently, ‘interpretability’ is a significant challenge for LLM-based AI. Interpretability is the ability to trace how an AI arrived at a particular decision. One of the key criticisms of current AI systems, such as chatbots, is that the reasoning behind their responses is often opaque and inscrutable. LLMs are frequently described as ‘black boxes’, making it almost impossible to understand how the model arrived at a particular answer. Interpretability is not an architectural obstacle for LNNs.
LNNs have a better grasp of causal relationships and can generalize to unseen situations. They can focus on the task at hand (rather than on its context), which allows them to perform well even when underlying conditions change. LNNs are best suited to continuous, time-sensitive data streams, such as video, audio, or temperature measurements, and to computationally constrained, safety-critical applications like robotics, hospital systems, and autonomous vehicles.
The MIT CSAIL team plans to continue testing and developing LNNs, using them in multi-robot systems to further explore both their capabilities and limitations.
The whytry.ai article you just read is a brief synopsis; the original article can be found here: Read the Full Article…