
New artificial intelligence (AI) tools routinely draw applause on arrival, but Elon Musk's xAI has attracted particular attention with the release of its Grok-2 large language model (LLM).
Grok-2 is available for an $8 monthly subscription on X (formerly Twitter).
Much to the delight of users who found earlier Grok models sluggish in generating responses, Grok-2 and Grok-2 mini have received a significant upgrade that makes them faster at analysing information and answering queries.
The improvement came after two developers at xAI rewrote the inference code stack for Grok-2 and Grok-2 mini in just three days.
Taking to X, xAI developer Igor Babuschkin posted: “Grok 2 mini is now 2x faster than it was yesterday. In the last three days, @lm_zheng and @MalekiSaeed rewrote our inference stack from scratch using SGLang. This has also allowed us to serve the big Grok 2 model, which requires multi-host inference, at a reasonable speed. Both models didn’t just get faster, but also slightly more accurate. Stay tuned for further speed improvements!”
According to Babuschkin’s post, the developers behind the feat are Lianmin Zheng and Saeed Maleki.
The post adds that the new stack was built on SGLang, an open-source, highly efficient system for executing complex language model programs, which claims up to 6.4 times higher throughput than existing serving systems.
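For readers curious what an SGLang “language model program” looks like, below is a minimal sketch based on SGLang's public Python frontend. The server address, model, and prompt are illustrative assumptions for demonstration; none of this reflects xAI's actual Grok-2 deployment.

```python
# Minimal SGLang sketch: define a language-model program and run it against
# a locally served model. The endpoint URL and prompt are illustrative
# assumptions, not details of xAI's Grok-2 setup.
import sglang as sgl

@sgl.function
def answer(s, question):
    # Build the conversation step by step; sgl.gen marks where the
    # model generates text, stored under the name "reply".
    s += sgl.system("You are a helpful assistant.")
    s += sgl.user(question)
    s += sgl.assistant(sgl.gen("reply", max_tokens=128))

# Point the frontend at a running SGLang server (hypothetical local address),
# e.g. started with: python -m sglang.launch_server --model-path <model> --port 30000
sgl.set_default_backend(sgl.RuntimeEndpoint("http://localhost:30000"))

state = answer.run(question="Why does batching improve inference throughput?")
print(state["reply"])
```

Programs written this way let the SGLang runtime batch requests and reuse cached computation across calls that share a prefix, which is broadly where throughput gains like the cited 6.4x figure come from.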