Graph-Theoretic Analysis of $n$-Replica Time Evolution in the Brownian Gaussian Unitary Ensemble

Kavli Affiliate: Cheng Peng | First 5 Authors: Tingfei Li, Cheng Peng, Jianghui Yu | Summary: In this paper, we investigate the $n$-replica time evolution operator $\mathcal{U}_n(t) \equiv e^{\mathcal{L}_n t}$ for the Brownian Gaussian Unitary Ensemble (BGUE) using a graph-theoretic approach. We examine the moments of the generating operator $\mathcal{L}_n$, which governs the Euclidean time […]



Phase transition in a doubly holographic model of closed $\mathbf{dS_2}$ spacetime

Kavli Affiliate: Cheng Peng | First 5 Authors: Wen-Hao Jiang, Cheng Peng, Yun-Song Piao | Summary: Double holography has proven to be a powerful method for understanding spacetime entanglement. In this paper we investigate the doubly holographic construction in $\mathrm{dS_2}$ spacetime. We find that in this model there exists a new […]



Simulation as Reality? The Effectiveness of LLM-Generated Data in Open-ended Question Assessment

Kavli Affiliate: Long Zhang | First 5 Authors: Long Zhang, Meng Zhang, Wei Lin Wang, Yu Luo | Summary: The advancement of Artificial Intelligence (AI) has created opportunities for e-learning, particularly in automated assessment systems that reduce educators’ workload and provide timely feedback to students. However, developing effective AI-based assessment tools remains challenging due to […]



Delta — Contrastive Decoding Mitigates Text Hallucinations in Large Language Models

Kavli Affiliate: Cheng Peng | First 5 Authors: Cheng Peng Huang, Hao-Yuan Chen | Summary: Large language models (LLMs) demonstrate strong capabilities in natural language processing but remain prone to hallucinations, generating factually incorrect or fabricated content. This issue undermines their reliability, particularly in high-stakes domains such as healthcare and legal advisory. To […]



Superconducting Properties of the Titanium-Based Oxides Compounds: A Review

Kavli Affiliate: Yi Zhou | First 5 Authors: Junqi He, Yi Zhou | Summary: In recent years, superconductivity was discovered in a novel family of layered materials, the titanium-based pnictide oxides. Because they possess properties of both cuprate and iron-based superconductors, these compounds have attracted the interest of researchers. Titanium pnictide oxides were reported to […]



MedMimic: Physician-Inspired Multimodal Fusion for Early Diagnosis of Fever of Unknown Origin

Kavli Affiliate: Yi Zhou | First 5 Authors: Minrui Chen, Yi Zhou, Huidong Jiang, Yuhan Zhu, Guanjie Zou | Summary: Fever of unknown origin (FUO) remains a diagnostic challenge. MedMimic is introduced as a multimodal framework inspired by real-world diagnostic processes. It uses pretrained models such as DINOv2, Vision Transformer, and ResNet-18 to convert high-dimensional […]




Beyond Prompt Content: Enhancing LLM Performance via Content-Format Integrated Prompt Optimization

Kavli Affiliate: Cheng Peng | First 5 Authors: Yuanye Liu, Jiahang Xu, Li Lyna Zhang, Qi Chen, Xuan Feng | Summary: Large Language Models (LLMs) have shown significant capability across various tasks, with their real-world effectiveness often driven by prompt design. While recent research has focused on optimizing prompt content, the role of prompt formatting, […]



ShapeShifter: 3D Variations Using Multiscale and Sparse Point-Voxel Diffusion

Kavli Affiliate: Matthew Fisher | First 5 Authors: Nissim Maruani, Wang Yifan, Matthew Fisher, Pierre Alliez, Mathieu Desbrun | Summary: This paper proposes ShapeShifter, a new 3D generative model that learns to synthesize shape variations based on a single reference model. While generative methods for 3D objects have recently attracted much attention, current techniques often […]



Psychometric-Based Evaluation for Theorem Proving with Large Language Models

Kavli Affiliate: Long Zhang | First 5 Authors: Jianyu Zhang, Yongwang Zhao, Long Zhang, Jilin Hu, Xiaokun Luan | Summary: Large language models (LLMs) for formal theorem proving have become a prominent research focus. At present, the proving ability of these LLMs is mainly evaluated through proof pass rates on datasets such as miniF2F. However, […]

