Kavli Affiliate: Jia Liu
| First 5 Authors: Yuchen Ling, Shengcheng Yu, Chunrong Fang, Guobin Pan, Jun Wang
| Summary:
Context: Crowdsourced testing has gained popularity in software testing, especially for mobile app testing, because it brings diversity and helps tackle fragmentation issues. However, its openness creates challenges, particularly the manual review of large numbers of test reports, which is time-consuming and labor-intensive.

Objective: The primary goal of this research is to improve the efficiency of the review process in crowdsourced testing. Traditional test report prioritization approaches lack a deep understanding of the semantic information in the reports' textual descriptions. This paper introduces LLMPrior, a novel approach for prioritizing crowdsourced test reports using large language models (LLMs).

Method: LLMPrior uses LLMs to analyze and cluster crowdsourced test reports according to the types of bugs revealed in their textual descriptions, applying prompt engineering techniques to improve LLM performance. After clustering, a recurrent selection algorithm is applied to prioritize the reports.

Results: Empirical experiments evaluate the effectiveness of LLMPrior. The findings indicate that LLMPrior not only outperforms current state-of-the-art approaches but is also more feasible, efficient, and reliable. This success is attributed to the prompt engineering techniques and the cluster-based prioritization strategy.

Conclusion: LLMPrior represents a significant advancement in crowdsourced test report prioritization. By effectively combining large language models with a cluster-based strategy, it addresses the challenges of traditional prioritization approaches and offers app developers a more efficient and reliable way to handle crowdsourced test reports.
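
To make the method paragraph concrete, here is a minimal Python sketch: reports are first labeled with a bug type (standing in for the LLM clustering step) and then interleaved across clusters so distinct bug types surface early in the review queue. The classify_bug_type helper and the round-robin interpretation of the recurrent selection algorithm are illustrative assumptions, not the paper's actual implementation.

```python
from collections import defaultdict

def classify_bug_type(report_text: str) -> str:
    """Hypothetical stand-in for the LLM call that labels a report with a
    bug type via prompt engineering; the real prompt and model are not
    specified here."""
    return "crash" if "crash" in report_text.lower() else "other"

def prioritize(reports: list[str]) -> list[str]:
    """Cluster reports by their (assumed) LLM-assigned bug type, then
    interleave the clusters -- one plausible reading of the recurrent
    selection step -- so that distinct bug types appear early."""
    clusters: dict[str, list[str]] = defaultdict(list)
    for report in reports:
        clusters[classify_bug_type(report)].append(report)

    queues = [list(group) for group in clusters.values()]
    ordered: list[str] = []
    while any(queues):
        for queue in queues:  # take one report from each cluster per pass
            if queue:
                ordered.append(queue.pop(0))
    return ordered

if __name__ == "__main__":
    sample = [
        "App crashes when tapping the login button",
        "Profile picture is misaligned on small screens",
        "Crash on screen rotation during video playback",
    ]
    print(prioritize(sample))
```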
| Search Query: ArXiv Query: search_query=au:"Jia Liu"&id_list=&start=0&max_results=3