Self-Instructed Derived Prompt Generation Meets In-Context Learning: Unlocking New Potential of Black-Box LLMs

Kavli Affiliate: Zhuo Li | First 5 Authors: Zhuo Li, Yuhao Du, Jinpeng Hu, Xiang Wan, Anningzhe Gao | Summary: Large language models (LLMs) have shown success in generating high-quality responses. To better align LLMs with human preferences, various works have been proposed based on a specific optimization process, which, however, is not […]



Broad-line Region of the Quasar PG 2130+099. II. Doubling the Size Over Four Years?

Kavli Affiliate: Luis C. Ho | First 5 Authors: Zhu-Heng Yao, Sen Yang, Wei-Jian Guo, Yong-Jie Chen, Yu-Yang Songsheng | Summary: Over the past three decades, multiple reverberation mapping (RM) campaigns conducted for the quasar PG 2130+099 have exhibited inconsistent findings with time delays ranging from $\sim$10 to $\sim$200 days. To achieve a comprehensive understanding […]



VisionTS: Visual Masked Autoencoders Are Free-Lunch Zero-Shot Time Series Forecasters

Kavli Affiliate: Zhuo Li | First 5 Authors: Mouxiang Chen, Lefei Shen, Zhuo Li, Xiaoyun Joy Wang, Jianling Sun | Summary: Foundation models have emerged as a promising approach in time series forecasting (TSF). Existing approaches either fine-tune large language models (LLMs) or build large-scale time-series datasets to develop TSF foundation models. However, these methods […]




Wind from the Hot Accretion Flow and Super-Eddington Accretion Flow

Kavli Affiliate: Feng Yuan | First 5 Authors: Hai Yang, Feng Yuan | Summary: Wind is believed to be widespread in various black hole accretion flows. However, unlike the wind from thin disks, which has substantial observational evidence, the wind from hot accretion flows is difficult to observe due to the extremely high […]



Coherent Information Phase Transition in a Noisy Quantum Circuit

Kavli Affiliate: Jing Wang | First 5 Authors: Dongheng Qian, Jing Wang | Summary: Coherent information quantifies the transmittable quantum information through a channel and is directly linked to the channel’s quantum capacity. In the context of dynamical purification transitions, scrambling dynamics sustain extensive and positive coherent information at low measurement rates, but […]
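For background, the standard textbook definition of coherent information and its link to quantum capacity can be sketched as follows (these formulas are not taken from the paper; the symbols are the conventional ones):

```latex
% Coherent information of input state \rho through channel \mathcal{N},
% where |\psi_\rho\rangle_{RA} is a purification of \rho on a reference system R:
I_c(\rho, \mathcal{N}) = S\bigl(\mathcal{N}(\rho)\bigr)
  - S\bigl((\mathrm{id}_R \otimes \mathcal{N})(|\psi_\rho\rangle\langle\psi_\rho|)\bigr)

% The quantum capacity is its regularized maximum (the LSD theorem):
Q(\mathcal{N}) = \lim_{n \to \infty} \frac{1}{n}
  \max_{\rho} \, I_c\bigl(\rho, \mathcal{N}^{\otimes n}\bigr)
```

Positive coherent information indicates that quantum information survives transmission through the channel, which is why the abstract ties it to purification transitions in monitored circuits.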




Detecting AI Flaws: Target-Driven Attacks on Internal Faults in Language Models

Kavli Affiliate: Zhuo Li | First 5 Authors: Yuhao Du, Zhuo Li, Pengyu Cheng, Xiang Wan, Anningzhe Gao | Summary: Large Language Models (LLMs) have become a focal point in the rapidly evolving field of artificial intelligence. However, a critical concern is the presence of toxic content within the pre-training corpus of these models, which […]

