Artificial intelligence startup and MIT spinoff Liquid AI Inc. today launched its first wave of generative AI models, which stand out from competing models because they're built on a fundamentally new architecture.
The new models are called 'Liquid Foundation Models,' or LFMs, and are said to deliver performance comparable or even superior to some of the best large language models available today.
The Boston-based startup was founded by a team of researchers from the Massachusetts Institute of Technology, including Ramin Hasani, Mathias Lechner, Alexander Amini and Daniela Rus. They are said to have pioneered the concept of 'liquid neural networks,' a class of AI models that are very different from the Generative Pre-trained Transformer-based models we know and love today, such as OpenAI's GPT series and Google LLC's Gemini models.
The company’s mission is to create highly capable and efficient general purpose models that can be used by organizations of all sizes. To do that, it is building LFM-based AI systems that can operate at any scale, from the network edge to enterprise-level deployments.
What are LFMs?
According to Liquid, the LFMs represent a new generation of AI systems designed with both performance and efficiency in mind. They use minimal system memory while delivering exceptional computing power, the company explains.
They are based on dynamical systems, numerical linear algebra and signal processing. That makes them ideal for processing different types of sequential data, including text, audio, images, video and signals.
Liquid AI first made headlines in December when it raised $37.6 million in seed funding. At the time, it explained that its LFMs are based on a newer architecture called liquid neural networks, or LNNs, originally developed at MIT's Computer Science and Artificial Intelligence Laboratory. Like other neural networks, LNNs are built from artificial neurons, or nodes, that transform data.
While traditional deep learning models require thousands of neurons to perform computational tasks, LNNs can achieve the same performance with significantly fewer. They do this by combining those neurons with innovative mathematical formulations, allowing them to do much more with less.
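The idea behind those formulations can be sketched with a liquid time-constant (LTC) neuron, whose effective time constant changes with the input it receives. The snippet below is a minimal illustration based on the publicly published LTC equations from the MIT researchers' academic work, not Liquid AI's actual LFM code; all names, dimensions and parameter values here are hypothetical.

```python
import numpy as np

def ltc_step(x, u, tau, W, A, dt=0.01):
    """One Euler integration step of a liquid time-constant neuron.

    Illustrative sketch of the published LTC update
        dx/dt = -(1/tau + f(x, u)) * x + f(x, u) * A,
    where f is a learned nonlinearity that makes the effective
    time constant depend on the current input.
    """
    f = np.tanh(W @ np.concatenate([x, u]))  # learned gating nonlinearity
    dx = -(1.0 / tau + f) * x + f * A        # input-dependent dynamics
    return x + dt * dx

# Tiny usage example with hypothetical sizes: 4 neurons, 2 inputs
rng = np.random.default_rng(0)
n, m = 4, 2
x = np.zeros(n)                        # hidden state
tau = np.ones(n)                       # base time constants
W = rng.normal(size=(n, n + m)) * 0.1  # combined recurrent/input weights
A = np.ones(n)                         # equilibrium bias
for _ in range(5):
    u = rng.normal(size=m)             # one step of sequential input
    x = ltc_step(x, u, tau, W, A)
print(x.shape)  # (4,)
```

Because the same small set of neurons adapts its dynamics to the input rather than relying on sheer parameter count, a network of such units can, in principle, model sequential data with far fewer nodes.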
The startup says its LFMs retain this adaptable and efficient capability, allowing them to make real-time adjustments during inference without the massive computational overhead associated with traditional LLMs. As a result, they can efficiently process up to 1 million tokens without any noticeable impact on memory usage.
Liquid AI will launch with a family of three models, including LFM-1B, a compact model with 1.3 billion parameters designed for resource-constrained environments. Slightly more powerful is LFM-3B, which has 3.1 billion parameters and is aimed at edge deployments, such as mobile applications, robots and drones. Finally, there is LFM-40B, a much more powerful 'mixture of experts' model with 40.3 billion parameters, designed for deployment on cloud servers to handle the most complex use cases.
The startup says its new models have already shown 'state-of-the-art results' on a number of key AI benchmarks, and it believes they will become formidable competitors to existing generative AI models such as ChatGPT.
While traditional LLMs see a sharp increase in memory usage when performing long-context processing, the LFM-3B model notably maintains a much smaller memory footprint (above), making it an excellent choice for applications that require processing large amounts of sequential data. Example use cases include chatbots and document analysis, the company said.
Strong performance on benchmarks
In terms of performance, the LFMs delivered some impressive results, with LFM-1B outperforming transformer-based models in the same size category. Meanwhile, LFM-3B can compete well against models such as Microsoft Corp.'s Phi-3.5 and Meta Platforms Inc.'s Llama family. As for LFM-40B, the company says it can even outperform larger models while maintaining an unrivaled balance between performance and efficiency.
Liquid AI said the LFM-1B model performed particularly strongly on benchmarks such as MMLU and ARC-C, setting a new standard for 1B-parameter models.
The company is making its models available in early access through platforms such as Liquid Playground, Lambda — via both its chat and application programming interfaces — and Perplexity Labs. That gives organizations a chance to integrate the models into various AI systems and see how they perform in different deployment scenarios, including on edge devices and on-premises.
One of the things it's working on now is optimizing the LFM models for specific hardware built by Nvidia Corp., Advanced Micro Devices Inc., Apple Inc., Qualcomm Inc. and Cerebras Systems Inc., so users can get even more performance out of them by the time they are generally available.
The company says it will release a series of technical blog posts that will delve deep into how each model works ahead of the official launch. Furthermore, it encourages red-teaming and invites the AI community to test its LFMs to the limit, to see what they can and cannot do.
Image: SiliconANGLE/Microsoft Designer