ByteDance's Doubao AI team has open-sourced COMET, a Mixture of Experts (MoE) optimization framework that improves large language model (LLM) training efficiency while reducing costs. Already ...
A Nature paper describes an innovative analog in-memory computing (IMC) architecture tailored to the attention mechanism in large language models (LLMs), aiming to drastically reduce latency and ...
A Chinese AI company's more frugal approach to training large language models could point toward a less energy-intensive—and more climate-friendly—future for AI, according to some energy analysts. "It ...