Kavli Affiliate: Hsiaowen Chen | First 5 Authors: | Summary: Large language models (LLMs) exhibit complementary strengths arising from differences in pretraining data, model architectures, and decoding behaviors. Inference-time ensembling provides a practical way to combine these capabilities without retraining. However, existing ensemble approaches suffer from fundamental limitations. Most rely on fixed fusion […]
Continue reading: AdaFuse: Adaptive Ensemble Decoding with Test-Time Scaling for LLMs
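To make the setting concrete, below is a minimal sketch of the fixed-fusion, inference-time ensembling the summary critiques: at each decoding step, the next-token distributions of two models sharing a vocabulary are averaged with static weights, and a token is picked greedily from the mixture. This is not the paper's AdaFuse method; the model names (`gpt2`, `distilgpt2`) and the 0.5/0.5 weights are illustrative assumptions.

```python
# Sketch of fixed-weight token-level ensemble decoding (the baseline
# the summary critiques, not AdaFuse). Models and weights are
# illustrative assumptions; both models must share one tokenizer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
models = [
    AutoModelForCausalLM.from_pretrained("gpt2").eval(),
    AutoModelForCausalLM.from_pretrained("distilgpt2").eval(),  # shares gpt2's vocab
]
weights = [0.5, 0.5]  # fixed fusion weights, chosen once and never adapted


@torch.no_grad()
def ensemble_generate(prompt: str, max_new_tokens: int = 40) -> str:
    ids = tok(prompt, return_tensors="pt").input_ids
    for _ in range(max_new_tokens):
        # Mix each model's next-token probabilities with the fixed weights.
        probs = sum(
            w * torch.softmax(m(ids).logits[:, -1, :], dim=-1)
            for w, m in zip(weights, models)
        )
        next_id = probs.argmax(dim=-1, keepdim=True)  # greedy pick from the mixture
        ids = torch.cat([ids, next_id], dim=-1)
        if next_id.item() == tok.eos_token_id:
            break
    return tok.decode(ids[0], skip_special_tokens=True)


print(ensemble_generate("Inference-time ensembling of language models"))
```

The rigidity is visible in `weights`: the mixture is the same for every token and every input, regardless of which model is locally more reliable, which is the limitation adaptive schemes like the one the title describes aim to address.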