Superposition- and interference-induced optical spectrum distortion in the figure-9 fiber laser

Kavli Affiliate: Xiang Zhang | First 5 Authors: Xiang Zhang, Guochao Wang, Kangrui Chang, Haobin Zheng, Yongzhuang Zhou | Summary: The spectrum of the output pulses from the figure-9 laser typically exhibits more distortion than both the spectra of mode-locked lasers based on other saturable absorbers and the spectrum of its own intracavity pulses. Here, we demonstrate […]
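
The distortion mechanism named in the title lends itself to a toy model: if the laser's output port coherently superposes two replicas of the intracavity pulse with a relative delay and phase, the measured spectrum picks up an interference fringe that the intracavity spectrum lacks. A minimal numerical sketch, assuming a Gaussian pulse and illustrative values for the delay, phase, and amplitude ratio (none taken from the paper):

    import numpy as np

    # Toy model: the output superposes the intracavity pulse E(t) with a
    # delayed, phase-shifted replica a*E(t - tau)*exp(i*phi).  The measured
    # spectrum |FFT|^2 then carries a fringe with period 1/tau that the
    # intracavity spectrum lacks.  All parameter values are illustrative.
    t = np.linspace(-20e-12, 20e-12, 4096)       # time grid, s
    T0 = 0.5e-12                                 # pulse duration, s
    E = np.exp(-t**2 / (2 * T0**2))              # Gaussian intracavity pulse
    tau, phi, a = 1.0e-12, 0.3 * np.pi, 0.6      # assumed delay, phase, ratio

    E_out = E + a * np.exp(1j * phi) * np.exp(-(t - tau)**2 / (2 * T0**2))

    f = np.fft.fftshift(np.fft.fftfreq(t.size, t[1] - t[0]))  # frequency axis
    S_intra = np.abs(np.fft.fftshift(np.fft.fft(E)))**2       # smooth spectrum
    S_out = np.abs(np.fft.fftshift(np.fft.fft(E_out)))**2     # fringed spectrum

    print(f"expected fringe spacing: {1 / tau / 1e12:.2f} THz")

The fringe period 1/tau is set only by the relative delay, which is why the output spectrum can appear strongly modulated even when the intracavity spectrum is smooth.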


EACO-RAG: Edge-Assisted and Collaborative RAG with Adaptive Knowledge Update

Kavli Affiliate: Feng Wang | First 5 Authors: Jiaxing Li, Chi Xu, Lianchen Jia, Feng Wang, Cong Zhang | Summary: Large Language Models are revolutionizing Web, mobile, and Web of Things systems, driving intelligent and scalable solutions. However, as Retrieval-Augmented Generation (RAG) systems expand, they encounter significant challenges related to scalability, including increased delay and […]
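
The edge-assisted, collaborative retrieval described here can be pictured as a tiered routing policy: serve a query from the local edge knowledge base when retrieval confidence is high, consult collaborating edge nodes next, and fall back to the cloud LLM last, trading delay against accuracy. A minimal sketch under assumed names and thresholds (illustrative only, not EACO-RAG's actual design):

    import numpy as np

    def cosine(u, v):
        """Cosine similarity between two embedding vectors."""
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

    class EdgeRAGRouter:
        """Illustrative tiered router: local edge KB -> neighbor edges -> cloud.

        Escalates when the best retrieval score falls below a threshold, so
        easy queries stay on the edge (low delay) and hard ones reach the
        cloud LLM (higher accuracy, higher delay).  Thresholds are assumed.
        """
        def __init__(self, local_kb, neighbor_kbs, t_local=0.80, t_neighbor=0.65):
            self.local_kb = local_kb          # list of (embedding, passage)
            self.neighbor_kbs = neighbor_kbs  # one such list per neighbor edge
            self.t_local, self.t_neighbor = t_local, t_neighbor

        @staticmethod
        def best(kb, q):
            return max(((cosine(e, q), p) for e, p in kb), default=(0.0, None))

        def route(self, q):
            score, passage = self.best(self.local_kb, q)
            if score >= self.t_local:
                return "edge-local", passage
            for kb in self.neighbor_kbs:      # collaborative edge lookup
                score, passage = self.best(kb, q)
                if score >= self.t_neighbor:
                    return "edge-neighbor", passage
            return "cloud", None              # escalate to the cloud LLM

The two thresholds are the knobs that trade edge latency against cloud accuracy; a deployed system would tune them against a delay or cost budget.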


EACO-RAG: Towards Distributed Tiered LLM Deployment using Edge-Assisted and Collaborative RAG with Adaptive Knowledge Update

Kavli Affiliate: Feng Wang | First 5 Authors: Jiaxing Li, Chi Xu, Lianchen Jia, Feng Wang, Cong Zhang | Summary: Large language models (LLMs) have demonstrated impressive capabilities in language tasks, but they require high computing power and rely on static knowledge. To overcome these limitations, Retrieval-Augmented Generation (RAG) incorporates up-to-date external information into LLMs […]
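
The adaptive knowledge update in the title implies that edge knowledge bases must be refreshed as external information changes. One simple way to picture this is per-entry freshness bookkeeping that triggers a re-fetch once an entry grows stale or repeatedly fails to satisfy queries; the class, policy, and parameters below are invented for illustration:

    import time

    class AdaptiveKBEntry:
        """Illustrative freshness bookkeeping for one edge KB entry.

        Re-fetch from the upstream source when the entry is old or keeps
        missing; max_age and miss_limit are assumptions, not values from
        the paper.
        """
        def __init__(self, key, value, max_age=3600.0, miss_limit=3):
            self.key, self.value = key, value
            self.fetched_at, self.misses = time.time(), 0
            self.max_age, self.miss_limit = max_age, miss_limit

        def record_miss(self):
            """Call when this entry failed to answer a routed query."""
            self.misses += 1

        def needs_update(self):
            stale = time.time() - self.fetched_at > self.max_age
            unhelpful = self.misses >= self.miss_limit
            return stale or unhelpful

        def refresh(self, fetch_fn):
            """fetch_fn(key) -> fresh value from the authoritative source."""
            self.value = fetch_fn(self.key)
            self.fetched_at, self.misses = time.time(), 0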


Counting Ability of Large Language Models and Impact of Tokenization

Kavli Affiliate: Xiang Zhang | First 5 Authors: Xiang Zhang, Juntai Cao, Chenyu You | Summary: Transformers, the backbone of modern large language models (LLMs), face inherent architectural limitations that impede their reasoning capabilities. Unlike recurrent networks, Transformers lack recurrent connections, confining them to constant-depth computation. This restriction places them in the complexity class […]
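
The tokenization effect on counting is easy to reproduce: whether a model can count letters depends on whether tokenization exposes them one by one or buries them inside merged subwords. A toy contrast between character-level tokens and a BPE-style greedy segmentation (the merge vocabulary is invented, not any real tokenizer's):

    # Character-level tokens expose every letter; BPE-style merges hide
    # letters inside multi-character tokens, turning "count the r's" into
    # a non-local task.  The merge list is invented for illustration.
    word = "strawberry"
    char_tokens = list(word)                  # one token per letter

    merges = ["straw", "berry", "st", "raw"]  # assumed merge vocabulary

    def toy_bpe(s, vocab):
        """Greedy longest-match segmentation, a stand-in for real BPE."""
        out, i = [], 0
        while i < len(s):
            for m in sorted(vocab, key=len, reverse=True):
                if s.startswith(m, i):
                    out.append(m)
                    i += len(m)
                    break
            else:                             # no merge applies: emit a char
                out.append(s[i])
                i += 1
        return out

    bpe_tokens = toy_bpe(word, merges)        # -> ['straw', 'berry']
    print(char_tokens, "->", word.count("r"), "r's, each its own token")
    print(bpe_tokens, "-> the same r's are hidden inside 2 tokens")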


Modeling the Superlattice Phase Diagram of Transition Metal Intercalation in Bilayer 2H-TaS$_2$

Kavli Affiliate: David T. Limmer | First 5 Authors: Isaac M. Craig, B. Junsuh Kim, David T. Limmer, D. Kwabena Bediako, Sinéad M. Griffin | Summary: Van der Waals hosts intercalated with transition metal (TM) ions exhibit a range of magnetic properties strongly influenced by the structural order of the intercalants. However, predictive computational models […]
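
Superlattice phase diagrams of intercalants are commonly mapped by sampling a lattice-gas Hamiltonian on the interstitial sites with Monte Carlo. The sketch below shows that generic approach on a triangular lattice with nearest-neighbor repulsion; the coupling J, chemical potential, and temperature are assumed values (not the paper's fitted model) chosen so a sqrt(3) x sqrt(3)-type ordering near 1/3 filling is favored:

    import numpy as np

    # Lattice-gas Monte Carlo: occupations n_i in {0, 1} on a triangular
    # lattice with H = J * sum_<ij> n_i n_j - mu * sum_i n_i.  Repulsive J
    # with mu < 3J favors the sqrt(3) x sqrt(3) superlattice at 1/3 filling.
    rng = np.random.default_rng(0)
    L, J, mu, T = 24, 1.0, 2.5, 0.15          # illustrative parameters

    n = rng.integers(0, 2, size=(L, L))       # random initial occupations
    NEIGHBORS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

    def occupied_neighbors(i, j):
        return sum(n[(i + di) % L, (j + dj) % L] for di, dj in NEIGHBORS)

    for _ in range(200_000):                  # single-site Metropolis moves
        i, j = rng.integers(0, L, size=2)
        dE = (1 - 2 * n[i, j]) * (J * occupied_neighbors(i, j) - mu)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            n[i, j] ^= 1                      # accept the flip

    print(f"filling = {n.mean():.3f}")        # ~1/3 once ordered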


Semi-supervised Chinese Poem-to-Painting Generation via Cycle-consistent Adversarial Networks

Kavli Affiliate: Feng Wang | First 5 Authors: Zhengyang Lu, Tianhao Guo, Feng Wang | Summary: Classical Chinese poetry and painting represent the epitome of artistic expression, but the abstract and symbolic nature of their relationship poses a significant challenge for computational translation. Most existing methods rely on large-scale paired datasets, which are scarce […]
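
Cycle-consistent adversarial training, named in the title, pairs two generators (poem-to-painting and painting-to-poem) and penalizes the round trip for drifting from its starting point, which is what substitutes for scarce paired data. A minimal PyTorch sketch of the loss wiring on toy vector embeddings, following the original CycleGAN recipe rather than this paper's architecture (the Linear stand-ins and the 10x cycle weight are assumptions):

    import torch
    import torch.nn as nn

    # Toy cycle-consistency wiring: G maps poem embeddings to painting
    # embeddings, F maps back.  Real models would pair a text encoder with
    # an image generator; Linear layers are stand-ins.
    d = 64
    G = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))  # poem -> painting
    F = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))  # painting -> poem
    D_paint = nn.Linear(d, 1)                                       # painting critic

    adv, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
    opt = torch.optim.Adam([*G.parameters(), *F.parameters()], lr=1e-4)

    poem = torch.randn(8, d)       # unpaired poem embeddings
    painting = torch.randn(8, d)   # unpaired painting embeddings

    fake_paint = G(poem)
    loss_adv = adv(D_paint(fake_paint), torch.ones(8, 1))   # fool the critic
    # Round trips must return near their start; this replaces paired labels.
    loss_cycle = l1(F(fake_paint), poem) + l1(G(F(painting)), painting)

    loss = loss_adv + 10.0 * loss_cycle   # 10x cycle weight, as in CycleGAN
    opt.zero_grad()
    loss.backward()
    opt.step()                            # generator step only; critic omitted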


Humanizing the Machine: Proxy Attacks to Mislead LLM Detectors

Kavli Affiliate: Xiang Zhang | First 5 Authors: Tianchun Wang, Yuanzhou Chen, Zichuan Liu, Zhanwen Chen, Haifeng Chen | Summary: The advent of large language models (LLMs) has revolutionized the field of text generation, producing outputs that closely mimic human-like writing. Although academic and industrial institutions have developed detectors to prevent the malicious usage of […]
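
Statistical LLM detectors of the kind such proxy attacks target often score text by its predictability under a language model, flagging unusually predictable (low-perplexity) text as machine-written. A toy sketch of that decision statistic, with a smoothed bigram model standing in for a real LLM (all data illustrative; neither the paper's detectors nor its attack is reproduced here):

    import math
    from collections import Counter

    def avg_logprob(tokens, unigrams, bigrams, vocab_size):
        """Add-one-smoothed bigram log-probability per token."""
        total = 0.0
        for prev, cur in zip(tokens, tokens[1:]):
            p = (bigrams[(prev, cur)] + 1) / (unigrams[prev] + vocab_size)
            total += math.log(p)
        return total / max(len(tokens) - 1, 1)

    # "Train" the stand-in language model on a tiny reference corpus.
    reference = "the cat sat on the mat and the dog sat on the rug".split()
    unigrams = Counter(reference)
    bigrams = Counter(zip(reference, reference[1:]))

    predictable = "the cat sat on the rug".split()   # low surprise
    surprising = "rug the on sat dog cat".split()    # high surprise
    for text in (predictable, surprising):
        score = avg_logprob(text, unigrams, bigrams, len(unigrams))
        print(" ".join(text), f"-> {score:.2f}")     # higher = more machine-like

In this toy picture, a humanizing attack would succeed by pushing generated text toward the "surprising" side of such a statistic without degrading fluency.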


Foundation Models in Electrocardiogram: A Review

Kavli Affiliate: Xiang Zhang | First 5 Authors: Yu Han, Xiaofeng Liu, Xiang Zhang, Cheng Ding | Summary: The electrocardiogram (ECG) is ubiquitous across various healthcare domains, such as cardiac arrhythmia detection and sleep monitoring, making ECG analysis essential. Traditional deep learning models for ECG are task-specific, with a narrow scope of functionality and […]


LEO-based Positioning: Foundations, Signal Design, and Receiver Enhancements for 6G NTN

Kavli Affiliate: Feng Wang | First 5 Authors: Harish K. Dureppagari, Chiranjib Saha, Harikumar Krishnamurthy, Xiao Feng Wang, Alberto Rico-Alvariño | Summary: The integration of non-terrestrial networks (NTN) into 5G new radio (NR) has opened up the possibility of developing a new positioning infrastructure using NR signals from Low-Earth Orbit (LEO) satellites. LEO-based cellular positioning […]
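
Whatever the signal design, LEO-based positioning ultimately reduces to estimating a receiver position and clock bias from pseudoranges to satellites with known ephemerides. A minimal Gauss-Newton sketch of that generic solve (the geometry and noise level are invented for illustration, and Doppler, which LEO links can also exploit, is ignored):

    import numpy as np

    # Solve rho_i = ||s_i - p|| + b + noise for position p and clock bias b.
    # Satellite positions (km) are invented; a real receiver would take them
    # from broadcast ephemerides at each measurement epoch.
    rng = np.random.default_rng(1)
    sats = np.array([[7000.0, 0.0, 0.0],
                     [0.0, 7000.0, 0.0],
                     [0.0, 0.0, 7000.0],
                     [4000.0, 4000.0, 4000.0]])
    p_true, b_true = np.array([1000.0, 2000.0, 500.0]), 0.3
    rho = np.linalg.norm(sats - p_true, axis=1) + b_true \
        + 0.01 * rng.standard_normal(len(sats))    # ~10 m pseudorange noise

    x = np.zeros(4)                        # unknowns: [p_x, p_y, p_z, b]
    for _ in range(10):                    # Gauss-Newton iterations
        d = np.linalg.norm(sats - x[:3], axis=1)
        r = rho - (d + x[3])               # measurement residuals
        H = np.hstack([-(sats - x[:3]) / d[:, None],   # d(rho)/d(p)
                       np.ones((len(sats), 1))])       # d(rho)/d(b)
        x += np.linalg.lstsq(H, r, rcond=None)[0]

    print("position error (km):", np.linalg.norm(x[:3] - p_true))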

