LFMs (Liquid Foundation Models)

A new category of generative AI models designed by Liquid AI, optimized for both efficiency and scalability across data types such as text, audio, and video.

Liquid Foundation Models (LFMs) stand out for performing demanding tasks with fewer resources than conventional transformer-based models. Their dynamic architecture lets them handle long-context inputs (up to 32,000 tokens) without a significant increase in memory consumption, making them suitable for tasks such as document analysis and autonomous systems, and deployable both on large cloud servers and on smaller edge devices.
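To see why a fixed-size recurrent state keeps long-context memory roughly flat while a transformer's key-value cache grows with every token, consider the back-of-the-envelope sketch below. The layer count, hidden size, and precision are hypothetical placeholders, not LFM specifications; the point is only the scaling behavior.

```python
# Illustrative memory comparison (all sizes hypothetical, not LFM measurements).
def kv_cache_bytes(tokens, layers=32, hidden_dim=4096, bytes_per_val=2):
    # A transformer stores keys and values per layer, per token seen so far,
    # so cache size grows linearly with context length.
    return 2 * layers * tokens * hidden_dim * bytes_per_val

def recurrent_state_bytes(layers=32, state_dim=4096, bytes_per_val=2):
    # A recurrent/dynamical-systems model keeps a fixed-size state per layer,
    # independent of how many tokens it has processed.
    return layers * state_dim * bytes_per_val

for tokens in (1_000, 32_000):
    print(f"{tokens:>6} tokens: KV cache ~ {kv_cache_bytes(tokens) / 1e9:.1f} GB, "
          f"recurrent state ~ {recurrent_state_bytes() / 1e6:.2f} MB")
```

With these placeholder numbers the cache grows from roughly 0.5 GB at 1,000 tokens to about 17 GB at 32,000 tokens, while the fixed recurrent state stays well under a megabyte at any context length.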

LFMs' architecture is based on principles of dynamical systems and numerical linear algebra, giving them strong performance on sequential data. The models are self-regulating, adapting their complexity to task demands and incorporating mechanisms for continuous learning. This adaptability, combined with their reduced memory footprint, makes them particularly effective for long-context tasks such as summarization and conversational AI. LFMs have posted competitive benchmark results against larger models such as Meta's LLaMA and OpenAI's GPT series while using fewer parameters, improving both efficiency and cost-effectiveness.
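LFMs' exact internals are not fully public, but the liquid neural network line of research they build on is. The sketch below shows a liquid time-constant (LTC) style cell in the spirit of that earlier work: the hidden state follows an ODE whose effective time constant depends on the current input, which is the "liquid" adaptivity referred to above. The `LTCCell` class, its random parameters, and the dimensions are illustrative assumptions, not the LFM implementation.

```python
import numpy as np

class LTCCell:
    """Toy liquid time-constant cell (illustrative parameters, not trained)."""

    def __init__(self, input_dim, hidden_dim, tau=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0.0, 0.1, (hidden_dim, input_dim))
        self.W_rec = rng.normal(0.0, 0.1, (hidden_dim, hidden_dim))
        self.b = np.zeros(hidden_dim)
        self.A = np.ones(hidden_dim)   # target state the dynamics are pulled toward
        self.tau = tau                 # base time constant

    def step(self, x, u, dt=0.1):
        # Input-dependent gate: f also modulates the effective time constant,
        # so the state dynamics themselves adapt to the input.
        f = 1.0 / (1.0 + np.exp(-(self.W_rec @ x + self.W_in @ u + self.b)))
        # One fused (semi-implicit) Euler step of
        #   dx/dt = -(1/tau + f(x, u)) * x + f(x, u) * A
        return (x + dt * f * self.A) / (1.0 + dt * (1.0 / self.tau + f))

cell = LTCCell(input_dim=4, hidden_dim=8)
state = np.zeros(8)
rng = np.random.default_rng(1)
for _ in range(1000):                              # memory stays O(hidden_dim),
    state = cell.step(state, rng.normal(size=4))   # regardless of sequence length
print(state.round(3))
```

Because the whole history is compressed into this fixed-size state, processing a longer sequence costs more time but not more memory, which is the property the paragraph above attributes to LFMs' long-context behavior.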

First introduced in 2024, LFMs quickly attracted attention for their innovative design and potential to reshape AI scalability. The foundational work for these models was developed by MIT researchers Ramin Hasani, Mathias Lechner, and Daniela Rus, whose earlier work on liquid neural networks influenced the architecture. These researchers remain key figures in the advancement of LFMs.
