Alibaba released more than 100 open-source AI models, including Qwen 2.5 72B, which beats other open-source models in math and coding benchmarks.
Much of the AI industry's attention on open-source models has focused on Meta's efforts with Llama 3, but Alibaba's Qwen 2.5 has closed the gap significantly. The freshly released Qwen 2.5 family of models ranges in size from 0.5 to 72 billion parameters, with generalized base models as well as models focused on very specific tasks.
Alibaba says these models come with "enhanced knowledge and stronger capabilities in math and coding," with specialized models focused on coding, math, and multiple modalities including language, audio, and vision.
Alibaba Cloud also announced an upgrade to its proprietary flagship model, Qwen-Max, which it has not released as open-source. The Qwen 2.5 Max benchmarks look good, but it's the Qwen 2.5 72B model that has been generating most of the excitement among open-source fans.
The benchmarks show Qwen 2.5 72B beating Meta's much larger flagship Llama 3.1 405B model on several fronts, particularly in math and coding. The gap between open-source models and proprietary ones like those from OpenAI and Google is also closing fast.
Early users of Qwen 2.5 72B show the model coming just short of Sonnet 3.5 and even beating OpenAI's o1 models in coding.
Open source Qwen 2.5 beats o1 models on coding 🤯🤯
Qwen 2.5 scores higher than the o1 models on coding on Livebench AI
Qwen is just below Sonnet 3.5, and for an open-source model, that's awesome!!
o1 is good at some hard coding but terrible at code completion problems and… pic.twitter.com/iazam61eP9
— Bindu Reddy (@bindureddy) September 20, 2024
Alibaba says these new models were all trained on its large-scale dataset encompassing up to 18 trillion tokens. The Qwen 2.5 models come with a context window of up to 128K tokens and can generate outputs of up to 8K tokens.
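In practical terms, those limits mean a prompt and its reply must share the context budget. A minimal sketch of that arithmetic, assuming the 128K-context/8K-output figures from Alibaba's announcement (the helper function itself is hypothetical, not part of any Qwen API):

```python
# Hypothetical helper: check whether a prompt leaves room for a reply
# within Qwen 2.5's advertised limits (128K-token context, 8K-token output).
MAX_CONTEXT_TOKENS = 128 * 1024  # 131072
MAX_OUTPUT_TOKENS = 8 * 1024     # 8192

def fits_context(prompt_tokens: int, output_tokens: int = MAX_OUTPUT_TOKENS) -> bool:
    """Return True if the prompt plus the requested output fit the window."""
    if output_tokens > MAX_OUTPUT_TOKENS:
        return False  # the model cannot generate more than 8K tokens per reply
    return prompt_tokens + output_tokens <= MAX_CONTEXT_TOKENS

print(fits_context(100_000))  # True: 100000 + 8192 <= 131072
print(fits_context(125_000))  # False: 125000 + 8192 > 131072
```

In other words, a prompt near the full 128K leaves little or no room for generation, which matters when feeding the model long documents or large codebases.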
The move to smaller, more capable, open-source models will likely have a wider impact on more users than more advanced models like o1. The edge and on-device capabilities of these models mean you can get a lot of mileage out of a free model running on your laptop.
The smaller Qwen 2.5 model delivers GPT-4-level coding for a fraction of the cost, or even for free if you've got a decent laptop to run it locally.
We've got GPT-4 for coding at home! I looked up @OpenAI GPT-4 0613 results for various benchmarks and compared them with @Alibaba_Qwen 2.5 7B coder. 👀
> 15 months after the release of GPT-0613, we have an open LLM under Apache 2.0, which performs just as well. 🤯
> GPT-4 pricing… pic.twitter.com/2szw5kwTe5
— Philipp Schmid (@_philschmid) September 22, 2024
In addition to the LLMs, Alibaba released a significant update to its vision language model with the introduction of Qwen2-VL. Qwen2-VL can comprehend videos lasting over 20 minutes and supports video-based question-answering.
It's designed for integration into cell phones, cars, and robots to enable automation of operations that require visual understanding.
Alibaba also unveiled a new text-to-video model as part of its image generator, the Tongyi Wanxiang large model family. Tongyi Wanxiang AI Video can produce cinematic-quality video content and 3D animation with various artistic styles based on text prompts.
The demos look impressive, and the tool is free to use, although you'll need a Chinese mobile number to sign up for it here. Sora is going to have some serious competition when, or if, OpenAI eventually releases it.