Using artificial intelligence to document the hidden RNA virosphere

Kavli Affiliate: Li Zhao | Authors: Xin Hou, Yong He, Pan Fang, Shi-Qiang Mei, Zan Xu, Wei-Chen Wu, Jun-Hua Tian, Shun Zhang, Zhen-Yu Zeng, Qin-Yu Gou, Gen-Yang Xin, Shi-Jia Le, Yin-Yue Xia, Yu-Lan Zhou, Feng-Ming Hui, Yuan-Fei Pan, John-Sebastian Eden, Zhao-Hui Yang, Chong Han, Yue-Long Shu, Deyin Guo, Jun Li, Edward C Holmes, Zhao-Rong Li […]


Drone-assisted Road Gaussian Splatting with Cross-view Uncertainty

Kavli Affiliate: Cheng Peng | First 5 Authors: Saining Zhang, Baijun Ye, Xiaoxue Chen, Yuantao Chen, Zongzheng Zhang | Summary: Robust and realistic rendering for large-scale road scenes is essential in autonomous driving simulation. Recently, 3D Gaussian Splatting (3D-GS) has made groundbreaking progress in neural rendering, but the general fidelity of large-scale road scene renderings […]


Why are optical coronal lines faint in active galactic nuclei?

Kavli Affiliate: Claudio Ricci | First 5 Authors: Jeffrey D. McKaig, Shobita Satyapal, Ari Laor, Nicholas P. Abel, Sara M. Doan | Summary: Forbidden collisionally excited optical atomic transitions from high ionization potential (IP ≥ 54.8 eV) ions, such as Ca⁴⁺, Ne⁴⁺, Fe⁶⁺, Fe¹⁰⁺, Fe¹³⁺, Ar⁹⁺, and S¹¹⁺, are known as optical coronal lines (CLs). The spectral energy […]


Axion Dark Matter eXperiment around 3.3 μeV with Dine-Fischler-Srednicki-Zhitnitsky Discovery Ability

Kavli Affiliate: Chao-Lin Kuo | First 5 Authors: C. Bartram, C. Boutan, T. Braine, J. H. Buckley, T. J. Caligiure | Summary: We report the results of a QCD axion dark matter search with discovery ability for Dine-Fischler-Srednicki-Zhitnitsky (DFSZ) axions using an axion haloscope. Sub-Kelvin noise temperatures are reached with an ultra low-noise Josephson parametric […]


The Fluorescence Camera of the POEMMA-Balloon with Radio (PBR): Design and Scientific goals

Kavli Affiliate: Angela Olinto | First 5 Authors: Matteo Battisti, Johannes Eser, George Filippatos, Angela Olinto, Giuseppe Osteria | Summary: The POEMMA-Balloon with Radio (PBR) is a proposed payload to fly on a NASA Super Pressure Balloon (SPB). It will act as a pathfinder of the Probe Of Extreme Multi-Messenger Astrophysics (POEMMA) detector. PBR will […]


Atoxia: Red-teaming Large Language Models with Target Toxic Answers

Kavli Affiliate: Zhuo Li | First 5 Authors: Yuhao Du, Zhuo Li, Pengyu Cheng, Xiang Wan, Anningzhe Gao | Summary: Despite the substantial advancements in artificial intelligence, large language models (LLMs) remain challenged by generation safety. With adversarial jailbreaking prompts, one can effortlessly induce LLMs to output harmful content, causing unexpected negative social impacts. […]


Detecting AI Flaws: Target-Driven Attacks on Internal Faults in Language Models

Kavli Affiliate: Zhuo Li | First 5 Authors: Yuhao Du, Zhuo Li, Pengyu Cheng, Xiang Wan, Anningzhe Gao | Summary: Large Language Models (LLMs) have become a focal point in the rapidly evolving field of artificial intelligence. However, a critical concern is the presence of toxic content within the pre-training corpus of these models, which […]

