Increasingly, the AI industry is moving toward generative AI models with longer contexts. But models with large context windows tend to be compute-intensive. Ori Goshen, the CEO of AI startup AI21 Labs, asserts that this doesn’t have to be the case — and his company is releasing a generative model to prove it.
Contexts, or context windows, refer to input data (e.g. text) that a model considers before generating output (more text). Models with small context windows tend to forget the content of even very recent conversations, while models with larger contexts avoid this pitfall — and, as an added benefit, better grasp the flow of data they take in.
AI21 Labs’ Jamba, a new text-generating and -analyzing model, can perform many of the same tasks that models like OpenAI’s ChatGPT and Google’s Gemini can. Trained on a mix of public and proprietary data, Jamba can write text in English, French, Spanish and Portuguese.
Jamba can handle up to 140,000 tokens while running on a single GPU with at least 80GB of memory (like a high-end Nvidia A100). That translates to around 105,000 words, or 210 pages — a decent-sized novel.
Meta’s Llama 2, by comparison, has a 4,096-token context window — small by today’s standards — but requires only a GPU with ~12GB of memory to run. (Context windows are typically measured in tokens, which are bits of raw text and other data.)
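The token-to-word-to-page arithmetic above can be sketched as follows. Note that the ~0.75 words-per-token ratio and the ~500 words-per-page figure are common rules of thumb for English text, not exact values — actual ratios vary by tokenizer and content:

```python
# Back-of-the-envelope conversion from a context window (in tokens)
# to words and pages. The ratios are rough heuristics, not constants.

def tokens_to_words(tokens, words_per_token=0.75):
    # ~0.75 English words per token is a common rule of thumb
    return int(tokens * words_per_token)

def words_to_pages(words, words_per_page=500):
    # ~500 words per printed page is a common rule of thumb
    return words // words_per_page

jamba_context = 140_000
words = tokens_to_words(jamba_context)  # 105,000 words
pages = words_to_pages(words)           # 210 pages
print(words, pages)
```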
On its face, Jamba is unremarkable. Loads of freely available, downloadable generative AI models exist, from Databricks’ recently released DBRX to the aforementioned Llama 2.
But what makes Jamba unique is what’s under the hood. It uses a combination of two model architectures: transformers and state space models (SSMs).
Transformers are the architecture of choice for complex reasoning tasks, powering models like GPT-4 and Google’s Gemini, for example. They have several unique characteristics, but by far transformers’ defining feature is their “attention mechanism.” For every piece of input data (e.g. a sentence), transformers weigh the relevance of every other input (other sentences) and draw from them to generate the output (a new sentence).
SSMs, on the other hand, combine several qualities of older types of AI models, such as recurrent neural networks and convolutional neural networks, to create a more computationally efficient architecture capable of handling long sequences of data.
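The core idea can be illustrated with a toy linear state-space recurrence. The matrices below are random stand-ins, not Mamba's learned, input-dependent parameters — the point is only that the state has a fixed size, so cost grows linearly with sequence length rather than quadratically as with attention:

```python
import numpy as np

def ssm_scan(A, B, C, xs):
    """Discrete linear state-space recurrence:
        h_t = A @ h_{t-1} + B @ x_t
        y_t = C @ h_t
    The hidden state h is fixed-size regardless of sequence length."""
    h = np.zeros(A.shape[0])
    ys = []
    for x in xs:           # one cheap update per timestep
        h = A @ h + B @ x
        ys.append(C @ h)
    return np.array(ys)

rng = np.random.default_rng(0)
d_state, d_in, d_out = 16, 4, 2
A = 0.9 * np.eye(d_state)                       # toy, stable transition
B = 0.1 * rng.standard_normal((d_state, d_in))
C = 0.1 * rng.standard_normal((d_out, d_state))
xs = rng.standard_normal((1000, d_in))          # a 1,000-step sequence
ys = ssm_scan(A, B, C, xs)
print(ys.shape)  # one output per timestep
```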
Now, SSMs have their limitations. But some of the early incarnations, including an open source model from Princeton and Carnegie Mellon researchers called Mamba, can handle larger inputs than their transformer-based equivalents while outperforming them on language generation tasks.
Jamba in fact uses Mamba as part of its core model — and Goshen claims it delivers three times the throughput on long contexts compared to transformer-based models of comparable size.
“While there are a few initial academic examples of SSM models, this is the first commercial-grade, production-scale model,” Goshen said in an interview with TechCrunch. “This architecture, in addition to being innovative and interesting for further research by the community, opens up great efficiency and throughput possibilities.”
Now, while Jamba has been released under the Apache 2.0 license, an open source license with relatively few usage restrictions, Goshen stresses that it’s a research release not intended to be used commercially. The model doesn’t have safeguards to prevent it from generating toxic text or mitigations to address potential bias; a fine-tuned, ostensibly “safer” version will be made available in the coming weeks.
But Goshen asserts that Jamba demonstrates the promise of the SSM architecture even at this early stage.
“The added value of this model, both because of its size and its innovative architecture, is that it can be easily fitted onto a single GPU,” he said. “We believe performance will further improve as Mamba gets additional tweaks.”