Multiverse Computing shrinks OpenAI language model by half, cutting memory use

techradar.com

Spanish AI company Multiverse Computing has released a compressed version of one of OpenAI's large language models, significantly reducing its memory requirements. The new model, HyperNova 60B 2602, cuts memory usage by 50% while maintaining near-equivalent tool-calling performance. This advance could lower infrastructure costs and enable deployment on less powerful hardware. Multiverse's proprietary CompactifAI technology uses quantum-inspired tensor networks to restructure model weights into a more efficient representation, rather than removing parameters outright as traditional compression methods do.
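The article does not disclose how CompactifAI's tensor networks work internally, but the general idea of factorizing a weight matrix into smaller components can be illustrated with the simplest possible case: a rank-truncated SVD, which is effectively a two-core tensor network. This is a hedged sketch of the parameter-counting arithmetic, not Multiverse's actual method; the function name and the choice of rank are illustrative.

```python
import numpy as np

def low_rank_compress(W, rank):
    """Approximate W (m x n) as U_r @ V_r with U_r (m x r) and V_r (r x n),
    via truncated SVD. This is the simplest tensor-network-style
    factorization; CompactifAI's actual decomposition is proprietary
    and more elaborate."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * s[:rank]   # absorb singular values into the left factor
    V_r = Vt[:rank, :]
    return U_r, V_r

# Demo: parameter count before and after factorization.
# (A random matrix is full-rank, so the *approximation* here would be
# poor; the point is only the storage arithmetic.)
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512))
U_r, V_r = low_rank_compress(W, rank=128)

original = W.size                  # 512 * 512 = 262,144 parameters
compressed = U_r.size + V_r.size   # 2 * 512 * 128 = 131,072 parameters
print(compressed / original)       # 0.5 — a 50% reduction at this rank
```

Choosing the rank r = 128 for a 512x512 matrix halves the parameter count (2mr vs. mn); real compression pipelines pick per-layer ranks to balance memory savings against accuracy loss.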


With a significance score of 4.2, this news ranks in the top 4.7% of today's 32,040 analyzed articles.
