SCALM: Towards Semantic Caching for Automated Chat Services with Large Language Models

Kavli Affiliate: Feng Wang

| First 5 Authors: Jiaxing Li, Chi Xu, Feng Wang, Isaac M von Riedemann, Cong Zhang

| Summary:

Large Language Models (LLMs) have become increasingly popular, transforming a
wide range of applications across various domains. However, the real-world
effectiveness of their query cache systems has not been thoroughly
investigated. In this work, we conduct the first analysis of real-world
human-to-LLM interaction data, identifying key challenges in
existing caching solutions for LLM-based chat services. Our findings reveal
that current caching methods fail to leverage semantic connections, leading to
inefficient cache performance and extra token costs. To address these issues,
we propose SCALM, a new cache architecture that emphasizes semantic analysis
and identifies significant cache entries and patterns. We also detail the
implementations of the corresponding cache storage and eviction strategies. Our
evaluations show that SCALM increases cache hit ratios and reduces operational
costs for LLM chat services. Compared with other state-of-the-art solutions in
GPTCache, SCALM achieves, on average, a relative increase of 63% in cache hit
ratio and a relative improvement of 77% in token savings.
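
For context, the following is a minimal sketch of the core idea behind semantic caching that the summary refers to: rather than matching queries by exact text, the cache embeds each query and serves a stored response when a new query is semantically close enough to a previously cached one. The embedding model, similarity threshold, and class structure below are illustrative assumptions for exposition, not the actual SCALM or GPTCache implementation.

```python
# Illustrative embedding-based semantic cache lookup (assumed design, not SCALM's).
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model


class SemanticCache:
    def __init__(self, threshold: float = 0.85):
        self.threshold = threshold          # similarity cutoff for a cache hit
        self.embeddings: list[np.ndarray] = []
        self.responses: list[str] = []

    def lookup(self, query: str) -> str | None:
        """Return a cached response if a semantically similar query exists."""
        if not self.embeddings:
            return None
        q = model.encode(query, normalize_embeddings=True)
        sims = np.stack(self.embeddings) @ q   # cosine similarity (vectors are normalized)
        best = int(np.argmax(sims))
        if sims[best] >= self.threshold:
            return self.responses[best]        # cache hit: no LLM call, tokens saved
        return None

    def insert(self, query: str, response: str) -> None:
        """Store a query/response pair after an LLM call (a cache miss)."""
        self.embeddings.append(model.encode(query, normalize_embeddings=True))
        self.responses.append(response)
```

A lexical cache would miss paraphrases such as "How do I reset my password?" versus "What's the way to change my password?", whereas an embedding-based lookup like the sketch above can serve both from one stored answer; the paper's contribution lies in which entries to keep and evict, which this sketch does not cover.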

| Search Query: ArXiv Query: search_query=au:"Feng Wang"&id_list=&start=0&max_results=3

Read More