Kavli Affiliate: Ke Wang | First 5 Authors: GLM-4.5 Team, GLM-4.5 Team, , , | Summary: We present GLM-4.5, an open-source Mixture-of-Experts (MoE) large language model with 355B total parameters and 32B activated parameters, featuring a hybrid reasoning method that supports both thinking and direct response modes. Through multi-stage training on 23T tokens […]
Continue reading: GLM-4.5: Agentic, Reasoning, and Coding (ARC) Foundation Models
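
The summary's distinction between 355B total and 32B activated parameters comes from MoE routing: every expert contributes to the total parameter count, but each token only passes through the few experts a router selects for it. The sketch below is a generic top-k MoE layer in NumPy, not GLM-4.5's implementation; the expert count, hidden sizes, and `top_k` value are illustrative placeholders chosen only to show why the activated parameter count is a small fraction of the total.

```python
# Minimal sketch of top-k Mixture-of-Experts routing (illustrative only,
# not GLM-4.5's architecture or configuration).
import numpy as np

rng = np.random.default_rng(0)

d_model, d_ff = 64, 256      # hypothetical layer sizes
n_experts, top_k = 8, 2      # route each token to top_k of n_experts

# Each expert is a small feed-forward block; together they hold the layer's
# "total" parameters, but only top_k experts run for any given token.
experts = [
    (rng.standard_normal((d_model, d_ff)) * 0.02,
     rng.standard_normal((d_ff, d_model)) * 0.02)
    for _ in range(n_experts)
]
router = rng.standard_normal((d_model, n_experts)) * 0.02

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route each token (row of x) to its top_k experts and mix their outputs."""
    logits = x @ router                               # (tokens, n_experts)
    top = np.argsort(logits, axis=-1)[:, -top_k:]     # ids of chosen experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        chosen = logits[t, top[t]]
        weights = np.exp(chosen - chosen.max())
        weights /= weights.sum()                      # softmax over chosen experts
        for w, e in zip(weights, top[t]):
            w_in, w_out = experts[e]
            out[t] += w * (np.maximum(x[t] @ w_in, 0.0) @ w_out)
    return out

tokens = rng.standard_normal((4, d_model))
print(moe_layer(tokens).shape)                        # (4, 64)
```

With these placeholder numbers, the layer stores 8 experts' worth of weights but only evaluates 2 per token, which is the same total-versus-activated distinction the summary draws at 355B versus 32B.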