Add-One-In: Incremental Sample Selection for Large Language Models via a Choice-Based Greedy Paradigm

Kavli Affiliate: Zhuo Li | First 5 Authors: Zhuo Li, Yuhao Du, Xiaoqi Jiao, Yiwen Guo, Yuege Feng | Summary: Selecting high-quality and diverse training samples from extensive datasets plays a crucial role in reducing training overhead and enhancing the performance of Large Language Models (LLMs). However, existing studies fall short in assessing the overall […]


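The truncated summary describes choosing samples one at a time under a choice-based greedy criterion. Purely as a generic illustration of incremental greedy selection, not the paper's actual algorithm, the following Python sketch adds one sample per step by trading off an assumed per-sample quality score against embedding-space diversity; the scoring scheme, weight, and data are all hypothetical.

import numpy as np

def greedy_select(embeddings, quality, k, diversity_weight=0.5):
    """Incrementally pick k samples, each step adding the candidate whose
    quality plus distance-to-selected-set gain is largest.
    embeddings: (n, d) array; quality: (n,) per-sample scores (hypothetical)."""
    n = embeddings.shape[0]
    selected = []
    remaining = set(range(n))
    while len(selected) < k and remaining:
        best, best_score = None, -np.inf
        for i in remaining:
            if selected:
                # Diversity gain: distance to the closest already-selected sample.
                dists = np.linalg.norm(embeddings[selected] - embeddings[i], axis=1)
                diversity = dists.min()
            else:
                diversity = 0.0
            score = quality[i] + diversity_weight * diversity
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy usage with random data (illustrative only).
rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 8))
qual = rng.uniform(size=100)
print(greedy_select(emb, qual, k=10))

In practice the quality and diversity signals would come from the selection model itself; the toy example only shows the add-one-in loop structure that the title refers to.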

A Hybrid CNN-Transformer Model for Heart Disease Prediction Using Life History Data

Kavli Affiliate: Ting Xu | First 5 Authors: Ran Hao, Yanlin Xiang, Junliang Du, Qingyuan He, Jiacheng Hu | Summary: This study proposed a hybrid model of a convolutional neural network (CNN) and a Transformer to predict and diagnose heart disease. Based on CNN’s strength in detecting local features and the Transformer’s high capacity in […]


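The summary describes pairing a CNN for local feature detection with a Transformer for longer-range dependencies. The PyTorch sketch below shows one generic way such a hybrid could be wired for sequential life-history features; the input shape, layer sizes, and classification head are assumptions, not the paper's architecture.

import torch
import torch.nn as nn

class CNNTransformerClassifier(nn.Module):
    """Generic CNN front-end + Transformer encoder + binary classification head.
    Input: (batch, seq_len) sequence of life-history features (assumed shape)."""
    def __init__(self, seq_len=64, d_model=32, nhead=4, num_layers=2):
        super().__init__()
        # 1D convolutions capture local patterns across neighboring features.
        self.cnn = nn.Sequential(
            nn.Conv1d(1, d_model, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(d_model, d_model, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # The Transformer encoder models longer-range dependencies between positions.
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, 2)   # healthy vs. heart disease

    def forward(self, x):                   # x: (batch, seq_len)
        x = x.unsqueeze(1)                  # (batch, 1, seq_len) for Conv1d
        x = self.cnn(x)                     # (batch, d_model, seq_len)
        x = x.transpose(1, 2)               # (batch, seq_len, d_model)
        x = self.encoder(x)                 # contextualized features
        return self.head(x.mean(dim=1))     # pool over positions, then classify

logits = CNNTransformerClassifier()(torch.randn(8, 64))
print(logits.shape)  # torch.Size([8, 2])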

Portable transcranial therapeutic ultrasound enhances targeted gene delivery for Parkinson’s disease: from rodent models to non-human primates

Kavli Affiliate: Vincent Ferrera | Authors: Alec J. Batts, Robin Ji, Sua Bae, Fotios N. Tsitsos, Sergio Jiménez-Gambín, Nancy Kwon, Samantha L. Gorman, Deny Tsakri, Rebecca L. Noel, Jonas Bendig, Daniella A. Jimenez, Melody DiBenedetto, Sofia A. Del Castillo, Filimon B. Keleta, James Caicedo, Alexander Romanov, Colleen T. Curley, Yulia Dzhashiashvili, Greglynn D. Walton-Gibbs, Bradley […]



The Stochastic Siren: Astrophysical Gravitational-Wave Background Measurements of the Hubble Constant

Kavli Affiliate: Daniel E. Holz | First 5 Authors: Bryce Cousins, Kristen Schumacher, Adrian Ka-Wai Chung, Colm Talbot, Thomas Callister | Summary: Gravitational waves from individually resolved compact object mergers can be used as standard sirens, offering a novel self-calibrating precision probe of cosmology. While the standard siren method has been well-explored, the gravitational-wave background […]


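For context, the standard siren idea underlying this abstract (well established in the literature, not specific to this paper) is that the gravitational waveform directly yields the luminosity distance $d_L$, so that at low redshift a redshift estimate $z$ for the source gives the Hubble constant via $d_L \simeq c z / H_0$, i.e. $H_0 \simeq c z / d_L$. The "stochastic siren" of the title applies this self-calibrating logic to the astrophysical gravitational-wave background rather than to individually resolved mergers.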

No [CII] or dust detection in two Little Red Dots at $z_{\rm spec} > 7$

Kavli Affiliate: Kohei Inayoshi | First 5 Authors: Mengyuan Xiao, Pascal A. Oesch, Longji Bing, David Elbaz, Jorryt Matthee | Summary: Little Red Dots (LRDs) are compact, point-like sources characterized by their red color and broad Balmer lines, which have been debated to be either dominated by active galactic nuclei (AGN) or dusty star-forming galaxies […]



Mapping the merging zone of late infall in the AB Aur planet-forming system

Kavli Affiliate: Ruobing Dong | First 5 Authors: Jessica Speedie, Ruobing Dong, Richard Teague, Dominique Segura-Cox, Jaime E. Pineda | Summary: Late infall events challenge the traditional view that planet formation occurs without external influence. Here we present deep ALMA $^{12}$CO $J=2-1$ and SO $J_{N}=5_6-4_5$ observations toward AB Aurigae, a Class II disk system with […]



Reweighting and Analysing Event Generator Systematics by Neural Networks on High-Level Features

Kavli Affiliate: Mihoko M. Nojiri | First 5 Authors: Amon Furuichi, Sung Hak Lim, Mihoko M. Nojiri | Summary: The state-of-the-art deep learning (DL) models for jet classification use jet constituent information directly, improving performance tremendously. This draws attention to interpretability, namely, the decision-making process, correlations contributing to the classification, and high-level features (HLFs) […]


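The summary is cut off, but a common way to reweight between event-generator settings with a neural network on high-level features is the classifier-based likelihood-ratio trick; whether this matches the paper's exact procedure is an assumption. The sketch below uses toy Gaussian stand-ins for the HLFs of a nominal and a varied generator setup.

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Toy stand-ins for high-level features from two generator configurations.
nominal = rng.normal(loc=0.0, scale=1.0, size=(5000, 4))
varied  = rng.normal(loc=0.2, scale=1.1, size=(5000, 4))

X = np.vstack([nominal, varied])
y = np.concatenate([np.zeros(len(nominal)), np.ones(len(varied))])

# Train a classifier to separate the two samples in HLF space.
clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0)
clf.fit(X, y)

# Per-event weights mapping nominal events onto the varied setup:
# w(x) = p(varied | x) / p(nominal | x), the classifier's likelihood-ratio estimate.
p = clf.predict_proba(nominal)[:, 1]
weights = p / (1.0 - p)
print(weights.mean(), weights.std())

Applying the weights to nominal events approximately reproduces the HLF distributions of the varied sample, which is the sense in which generator systematics can be "reweighted" at the level of high-level features.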

Global Neutrino Constraints on the Minimal U(1)$_{L_\mu-L_\tau}$ Model

Kavli Affiliate: Satoshi Shirai | First 5 Authors: Masahiro Ibe, Satoshi Shirai, Keiichi Watanabe | Summary: We examine the minimal U(1)$_{L_\mu-L_\tau}$ gauge model in light of the latest neutrino data, including neutrino oscillations, cosmological observations, direct mass measurements, and neutrinoless double-beta decay. Using the most conservative oscillation data, we find that normal ordering is […]


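For reference, the gauged U(1)$_{L_\mu-L_\tau}$ symmetry named in the title charges Standard Model leptons by muon number minus tau number: $(\nu_\mu, \mu)$ carry charge $+1$, $(\nu_\tau, \tau)$ carry charge $-1$, and first-generation leptons and all quarks are neutral, which is why the minimal model is anomaly-free without additional fermions.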

OptMetaOpenFOAM: Large Language Model Driven Chain of Thought for Sensitivity Analysis and Parameter Optimization based on CFD

Kavli Affiliate: Long Zhang | First 5 Authors: Yuxuan Chen, Long Zhang, Xu Zhu, Hua Zhou, Zhuyin Ren | Summary: Merging natural language interfaces with computational fluid dynamics (CFD) workflows presents transformative opportunities for both industry and research. In this study, we introduce OptMetaOpenFOAM – a novel framework that bridges MetaOpenFOAM with external analysis and […]


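As a rough, framework-agnostic illustration of the kind of parameter-sensitivity loop such a system would automate, here is a minimal Python sketch; every name below is a hypothetical placeholder, not part of OptMetaOpenFOAM, MetaOpenFOAM, or OpenFOAM's API, and the analytic "solver" only stands in so the sketch runs.

def run_cfd_case(params):
    # Placeholder for a real solver run; a toy analytic response so the sketch executes.
    return 0.5 * params["inlet_velocity"] ** 2 + 1e4 * params["viscosity"]

def sensitivity(baseline, perturbation=0.05):
    """Finite-difference sensitivity of the quantity of interest to each parameter."""
    base = run_cfd_case(baseline)
    out = {}
    for name, value in baseline.items():
        perturbed = dict(baseline, **{name: value * (1 + perturbation)})
        out[name] = (run_cfd_case(perturbed) - base) / (value * perturbation)
    return out

print(sensitivity({"inlet_velocity": 2.0, "viscosity": 1e-5}))

An LLM-driven framework of the kind described would presumably use such sensitivities to decide which parameters are worth optimizing next, with the chain-of-thought reasoning orchestrating the analysis steps.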

ReaderLM-v2: Small Language Model for HTML to Markdown and JSON

Kavli Affiliate: Feng Wang | First 5 Authors: Feng Wang, Zesheng Shi, Bo Wang, Nan Wang, Han Xiao | Summary: We present ReaderLM-v2, a compact 1.5 billion parameter language model designed for efficient web content extraction. Our model processes documents up to 512K tokens, transforming messy HTML into clean Markdown or JSON formats with high […]


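Since the summary describes a compact causal language model for HTML-to-Markdown conversion, a typical way to run such a model locally would look like the following Hugging Face transformers sketch; the model identifier and prompt format are assumptions here, not documentation of ReaderLM-v2's actual interface.

from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical model identifier; check the actual release for the real repository name.
MODEL_ID = "jinaai/ReaderLM-v2"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

html = "<html><body><h1>Title</h1><p>Hello <b>world</b>.</p></body></html>"

# Assumed chat-style prompt asking for Markdown; the real model may expect a
# different instruction format.
messages = [{"role": "user", "content": f"Convert this HTML to Markdown:\n{html}"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
                                       return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))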