Self-Edit: Fault-Aware Code Editor for Code Generation

Kavli Affiliate: Zhuo Li | First 5 Authors: Kechi Zhang, Zhuo Li, Jia Allen Li, Ge Li, Zhi Jin | Summary: Large language models (LLMs) have demonstrated an impressive ability to generate code on competitive programming tasks. However, with limited sample numbers, LLMs still suffer from poor accuracy. Inspired by the process of human programming, […]
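
The truncated summary points toward an execute-then-edit workflow: run each generated candidate on an example test and capture its output and errors as fault information for a later editing step. A minimal sketch of that execution step (the helper name and details are illustrative assumptions, not the paper's API):

```python
import subprocess
import sys
import tempfile

def run_candidate(code, stdin_text, timeout=2.0):
    """Execute a candidate program and capture stdout/stderr as fault info
    (hypothetical helper, not from the paper)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run(
            [sys.executable, path],
            input=stdin_text,
            capture_output=True,
            text=True,
            timeout=timeout,
        )
        return proc.stdout, proc.stderr
    except subprocess.TimeoutExpired:
        return "", "timeout"

# A passing candidate produces output and an empty error stream;
# a faulty one would yield a traceback the editor model could consume.
out, err = run_candidate("print(int(input()) * 2)\n", "21\n")
```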



ToolCoder: Teach Code Generation Models to use API search tools

Kavli Affiliate: Zhuo Li | First 5 Authors: Kechi Zhang, Huangzhao Zhang, Ge Li, Jia Li, Zhuo Li | Summary: Automatically generating source code from natural language descriptions has been a growing field of research in recent years. However, current large-scale code generation models often encounter difficulties when selecting appropriate APIs for specific contexts. These […]



Feasibility of Passive Sounding of Uranian Moons using Uranian Kilometric Radiation

Kavli Affiliate: Dustin M. Schroeder | First 5 Authors: Andrew Romero-Wolf, Gregor Steinbruegge, Julie Castillo-Rogez, Corey J. Cochrane, Tom A. Nordheim | Summary: We present a feasibility study for passive sounding of Uranian icy moons using Uranian Kilometric Radiation (UKR) emissions in the 100–900 kHz band. We provide a summary description of the […]
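
For context on the stated 100–900 kHz band, the quantities a passive sounder measures can be sketched with standard radar relations (the depth and the ice refractive index below are assumed example values, not figures from the study):

```python
# Back-of-envelope passive-sounding quantities for an icy moon.
c = 299_792_458.0   # speed of light in vacuum, m/s
n_ice = 1.78        # refractive index of cold water ice (standard value)
d = 1_000.0         # hypothetical ice thickness being probed, m
B = 800e3           # usable bandwidth, Hz (900 kHz - 100 kHz)

# Two-way travel time of a subsurface echo relative to the surface return:
# the correlator looks for the UKR signal repeated at this delay.
echo_delay = 2 * d * n_ice / c          # ~12 microseconds for d = 1 km

# Range resolution in ice for a correlated signal of bandwidth B.
range_resolution = c / (2 * B * n_ice)  # ~105 m
```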



Gravity of gluonic fluctuations and the value of the cosmological constant

Kavli Affiliate: Craig Hogan | First 5 Authors: Kris Mackewicz, Craig Hogan | Summary: We analyze the classical linear gravitational effect of idealized pion-like dynamical systems, consisting of light quarks connected by attractive gluonic material with a stress-energy $p=-\rho c^2$ in one or more dimensions. In one orbit of a system of total […]



Seeing is Believing: Brain-Inspired Modular Training for Mechanistic Interpretability

Kavli Affiliate: Max Tegmark | First 5 Authors: Ziming Liu, Eric Gan, Max Tegmark | Summary: We introduce Brain-Inspired Modular Training (BIMT), a method for making neural networks more modular and interpretable. Inspired by brains, BIMT embeds neurons in a geometric space and augments the loss function with a cost proportional to the length […]
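
The stated idea, a loss term proportional to connection length in a geometric embedding, can be sketched as follows; the 1-D neuron layout, the penalty form, and the coefficient are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny layer: neurons placed at 1-D coordinates
# (the geometric embedding the summary describes).
pos_in = np.linspace(0.0, 1.0, 4)   # input-neuron coordinates
pos_out = np.linspace(0.0, 1.0, 3)  # output-neuron coordinates
W = rng.normal(size=(3, 4))         # connection weights, out x in

# Wiring cost: each connection pays |w_ij| times the distance between
# the neurons it links, pushing training toward local, modular wiring.
dist = np.abs(pos_out[:, None] - pos_in[None, :])
lam = 1e-2                          # assumed trade-off coefficient
wiring_cost = lam * np.sum(np.abs(W) * dist)

x = rng.normal(size=4)
task_loss = np.mean((W @ x) ** 2)       # stand-in for the task objective
total_loss = task_loss + wiring_cost    # augmented loss, as described
```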



Correcting for Interference in Experiments: A Case Study at Douyin

Kavli Affiliate: Huawei Zhang | First 5 Authors: Vivek F. Farias, Hao Li, Tianyi Peng, Xinyuyang Ren, Huawei Zhang | Summary: Interference is a ubiquitous problem in experiments conducted on two-sided content marketplaces, such as Douyin (China’s analog of TikTok). In many cases, creators are the natural unit of experimentation, but creators interfere with each […]

