
The Huawei Technologies lab in charge of large language models (LLMs) has defended its latest open-source Pro MoE model as indigenous, denying allegations that it was developed through incremental training of third-party models.

The Shenzhen-based telecoms equipment giant, considered the poster child for China's resilience against US tech sanctions, is fighting to maintain its relevance in the LLM field, as open-source models developed by the likes of DeepSeek and Alibaba Group Holding gain...