One-loop Renormalization of BPS String Masses in Pseudo-anomalous Heterotic String

Kavli Affiliate: Jeffrey A. Harvey | First 5 Authors: Jeffrey A. Harvey, Tai Wai Hu | Summary: Compactification of the heterotic string on a Calabi-Yau threefold can lead to a four-dimensional low-energy effective theory containing a $U(1)$ gauge theory which is pseudo-anomalous, meaning that the fermion content is anomalous, but that the fermion […]



Characterization of a TES-based Anti-Coincidence Detector for Future Large Field-of-View X-ray Calorimetry Missions

Kavli Affiliate: Noah Kurinsky | First 5 Authors: Samuel V. Hull, Joseph S. Adams, Simon R. Bandler, Matthew Cherry, James A. Chervenak | Summary: Microcalorimeter instruments aboard future X-ray observatories will require an anti-coincidence (anti-co) detector to veto charged particle events and reduce the non-X-ray background. We have developed a large-format, TES-based prototype anti-coincidence detector […]



NVR: Vector Runahead on NPUs for Sparse Memory Access

Kavli Affiliate: Jing Wang | First 5 Authors: Hui Wang, Zhengpeng Zhao, Jing Wang, Yushu Du, Yuan Cheng | Summary: Deep Neural Networks increasingly leverage sparsity to curb the growth of model parameter size. However, reducing wall-clock time through sparsity and pruning remains challenging due to irregular memory access patterns, leading to frequent […]
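The bottleneck the summary describes, irregular gather accesses induced by sparsity, can be sketched with a minimal CSR sparse matrix-vector product. This is an illustrative example, not code from the paper; the names (`csr_spmv`, `col_idx`, `row_ptr`) are hypothetical:

```python
def csr_spmv(values, col_idx, row_ptr, x):
    """Sparse matrix-vector product in CSR (compressed sparse row) format.

    The gather x[col_idx[j]] is the data-dependent, irregular memory
    access pattern that defeats simple sequential prefetchers.
    """
    y = []
    for r in range(len(row_ptr) - 1):
        acc = 0.0
        for j in range(row_ptr[r], row_ptr[r + 1]):
            acc += values[j] * x[col_idx[j]]  # indirect (gather) access
        y.append(acc)
    return y


# Example: the 2x3 matrix [[1, 0, 2], [0, 3, 0]] stored in CSR form.
result = csr_spmv([1.0, 2.0, 3.0], [0, 2, 1], [0, 2, 3], [1.0, 1.0, 1.0])
```

Because `col_idx` is only known at run time, the addresses touched by the inner loop cannot be predicted statically, which is the access pattern runahead-style prefetching targets.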



SAFEERASER: Enhancing Safety in Multimodal Large Language Models through Multimodal Machine Unlearning

Kavli Affiliate: Jia Liu | First 5 Authors: Junkai Chen, Zhijie Deng, Kening Zheng, Yibo Yan, Shuliang Liu | Summary: As Multimodal Large Language Models (MLLMs) develop, their potential security issues have become increasingly prominent. Machine Unlearning (MU), as an effective strategy for forgetting specific knowledge in training data, has been widely used in privacy […]



EPO: Explicit Policy Optimization for Strategic Reasoning in LLMs via Reinforcement Learning

Kavli Affiliate: Ke Wang | First 5 Authors: Xiaoqian Liu, Ke Wang, Yongbin Li, Yuchuan Wu, Wentao Ma | Summary: Large Language Models (LLMs) have shown impressive reasoning capabilities in well-defined problems with clear solutions, such as mathematics and coding. However, they still struggle with complex real-world scenarios like business negotiations, which require strategic reasoning, an […]



EquiBench: Benchmarking Code Reasoning Capabilities of Large Language Models via Equivalence Checking

Kavli Affiliate: Ke Wang | First 5 Authors: Anjiang Wei, Jiannan Cao, Ran Li, Hongyu Chen, Yuhui Zhang | Summary: Equivalence checking, i.e., determining whether two programs produce identical outputs for all possible inputs, underpins a broad range of applications, including software refactoring, testing, and optimization. We present the task of equivalence checking as a […]
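As a concrete illustration of the equivalence-checking task itself (not the benchmark's methodology), two syntactically different programs can be tested for agreement by brute force over a finite input domain. All names here are hypothetical; true equivalence over all inputs is undecidable in general, so this finite check is only a sketch:

```python
from itertools import product

def prog_a(x: int, y: int) -> int:
    # Direct formulation: (x + y)^2.
    return (x + y) * (x + y)

def prog_b(x: int, y: int) -> int:
    # Algebraically refactored version: x^2 + 2xy + y^2.
    return x * x + 2 * x * y + y * y

def equivalent_on(domain) -> bool:
    """Check whether both programs agree on every input pair in `domain`."""
    return all(prog_a(x, y) == prog_b(x, y) for x, y in product(domain, repeat=2))
```

A model asked to judge equivalence must reason about program semantics (here, a binomial identity) rather than surface syntax, which is what makes the task a probe of code reasoning.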

