Add-One-In: Incremental Sample Selection for Large Language Models via a Choice-Based Greedy Paradigm

Kavli Affiliate: Zhuo Li | First 5 Authors: Zhuo Li, Yuhao Du, Xiaoqi Jiao, Yiwen Guo, Yuege Feng | Summary: Selecting high-quality and diverse training samples from extensive datasets plays a crucial role in reducing training overhead and enhancing the performance of Large Language Models (LLMs). However, existing studies fall short in assessing the overall […]
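The incremental "add-one-in" greedy idea gestured at in the summary can be illustrated with a minimal sketch: starting from an empty pool, each step adds the single candidate that best trades off individual quality against diversity relative to the already-selected set. The function names, the quality-plus-min-distance scoring rule, and the toy data below are illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch of incremental ("add one in") greedy sample selection.
# Scoring rule and names are illustrative, not the paper's actual algorithm.

def greedy_select(samples, quality, distance, k):
    """Pick k samples, each step adding the one with the best combined
    quality and diversity (distance to the nearest already-chosen sample)."""
    selected = []
    remaining = list(range(len(samples)))
    while len(selected) < k and remaining:
        def score(i):
            if not selected:
                return quality[i]
            # diversity term: distance to the closest already-selected sample
            div = min(distance(samples[i], samples[j]) for j in selected)
            return quality[i] + div
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy usage: 1-D "samples" with absolute-difference distance.
points = [0.0, 0.1, 5.0, 5.1, 10.0]
quality = [1.0, 0.9, 0.8, 0.7, 0.6]
chosen = greedy_select(points, quality, lambda a, b: abs(a - b), 3)
# Picks the highest-quality point first, then spreads out across the range.
```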



The Stochastic Siren: Astrophysical Gravitational-Wave Background Measurements of the Hubble Constant

Kavli Affiliate: Daniel E. Holz | First 5 Authors: Bryce Cousins, Kristen Schumacher, Adrian Ka-Wai Chung, Colm Talbot, Thomas Callister | Summary: Gravitational waves from individually resolved compact object mergers can be used as standard sirens, offering a novel self-calibrating precision probe of cosmology. While the standard siren method has been well-explored, the gravitational-wave background […]
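For context, the standard-siren logic the summary builds on is textbook: the gravitational waveform fixes the source's luminosity distance $d_L$ directly, so pairing it with a redshift $z$ from electromagnetic or statistical data yields the Hubble constant at low redshift. The symbols below are the standard ones, not notation taken from this paper:

```latex
% Low-redshift Hubble law: H_0 from a GW-measured distance and an EM redshift
H_0 \simeq \frac{c\, z}{d_L}
```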



Mapping the merging zone of late infall in the AB Aur planet-forming system

Kavli Affiliate: Ruobing Dong | First 5 Authors: Jessica Speedie, Ruobing Dong, Richard Teague, Dominique Segura-Cox, Jaime E. Pineda | Summary: Late infall events challenge the traditional view that planet formation occurs without external influence. Here we present deep ALMA $^{12}$CO $J=2-1$ and SO $J_{N}=5_6-4_5$ observations toward AB Aurigae, a Class II disk system with […]



No [CII] or dust detection in two Little Red Dots at z$_{\rm spec}$ > 7

Kavli Affiliate: Kohei Inayoshi | First 5 Authors: Mengyuan Xiao, Pascal A. Oesch, Longji Bing, David Elbaz, Jorryt Matthee | Summary: Little Red Dots (LRDs) are compact, point-like sources characterized by their red color and broad Balmer lines, which have been debated to be either dominated by active galactic nuclei (AGN) or dusty star-forming galaxies […]



Reweighting and Analysing Event Generator Systematics by Neural Networks on High-Level Features

Kavli Affiliate: Mihoko M. Nojiri | First 5 Authors: Amon Furuichi, Sung Hak Lim, Mihoko M. Nojiri | Summary: The state-of-the-art deep learning (DL) models for jet classification use jet constituent information directly, improving performance tremendously. This draws attention to interpretability, namely, the decision-making process, correlations contributing to the classification, and high-level features (HLFs) […]
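The reweighting named in the title is commonly realized with the classifier-based likelihood-ratio trick: a network trained to separate samples from two event generators yields, per event, the density ratio needed to reweight one generator into the other. The sketch below substitutes known toy densities for a trained classifier; the names and example distributions are assumptions for illustration, not this paper's setup.

```python
# Hedged sketch of the classifier-based likelihood-ratio reweighting trick.
# We plug in the *true* densities of two toy distributions in place of a
# trained network, since the math is identical for the optimal classifier.
import math

def classifier_output(p_target, p_source, x):
    """Optimal classifier score s(x) = p_target / (p_target + p_source)."""
    a, b = p_target(x), p_source(x)
    return a / (a + b)

def event_weight(s):
    """Per-event weight recovered from the score: w = s / (1 - s),
    which equals the density ratio p_target / p_source."""
    return s / (1.0 - s)

# Toy 1-D "high-level feature" densities for two hypothetical generators.
p_source = lambda x: math.exp(-x)               # generator A: Exp(1)
p_target = lambda x: 2.0 * math.exp(-2.0 * x)   # generator B: Exp(2)

s = classifier_output(p_target, p_source, 0.5)
w = event_weight(s)  # recovers p_target(0.5) / p_source(0.5) exactly
```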



Global Neutrino Constraints on the Minimal U(1)$_{L_μ-L_τ}$ Model

Kavli Affiliate: Satoshi Shirai | First 5 Authors: Masahiro Ibe, Satoshi Shirai, Keiichi Watanabe | Summary: We examine the minimal U(1)$_{L_\mu-L_\tau}$ gauge model in light of the latest neutrino data, including neutrino oscillations, cosmological observations, direct mass measurements, and neutrinoless double-beta decay. Using the most conservative oscillation data, we find that normal ordering is […]



Output Length Effect on DeepSeek-R1’s Safety in Forced Thinking

Kavli Affiliate: Zhuo Li | First 5 Authors: Xuying Li, Zhuo Li, Yuji Kosuga, Victor Bian | Summary: Large Language Models (LLMs) have demonstrated strong reasoning capabilities, but their safety under adversarial conditions remains a challenge. This study examines the impact of output length on the robustness of DeepSeek-R1, particularly in Forced Thinking scenarios. We […]



COSMOS Spectroscopic Redshift Compilation (First Data Release): 165k Redshifts Encompassing Two Decades of Spectroscopy

Kavli Affiliate: Yingjie Peng | First 5 Authors: Ali Ahmad Khostovan, Jeyhan S. Kartaltepe, Mara Salvato, Olivier Ilbert, Caitlin M. Casey | Summary: We present the COSMOS Spectroscopic Redshift Compilation encompassing ~ 20 years of spectroscopic redshifts within the 2 deg$^2$ COSMOS legacy field. This compilation contains 165,312 redshifts of 97,929 unique objects from 108 […]

