TsqLoRA: Towards Sensitivity and Quality Low-Rank Adaptation for Efficient Fine-Tuning

Kavli Affiliate: Long Zhang | First 5 Authors: Yu Chen | Summary: Fine-tuning large pre-trained models for downstream tasks has become a fundamental approach in natural language processing. Fully fine-tuning all model parameters is computationally expensive and memory-intensive, especially in resource-constrained environments. Existing parameter-efficient fine-tuning methods reduce the number of trainable […]
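
For readers unfamiliar with the underlying idea, low-rank adaptation freezes the pre-trained weight matrix and learns only a rank-r update BA, so a d-by-k layer trains r(d + k) parameters instead of d*k. The sketch below illustrates that generic mechanism in PyTorch; it is not the paper's sensitivity- and quality-aware TsqLoRA method, and the layer size, rank, and scaling used here are arbitrary choices.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA-style layer: frozen base weight plus a trainable
    low-rank update (illustrative sketch, not the TsqLoRA method)."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                    # freeze the pre-trained weights
        d_out, d_in = base.weight.shape
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, rank))  # zero init: update starts at zero
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

layer = LoRALinear(nn.Linear(768, 768), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 2 * 8 * 768 = 12288, vs. 590592 parameters in the frozen base layer
```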



CPCLDETECTOR: Knowledge Enhancement and Alignment Selection for Chinese Patronizing and Condescending Language Detection

Kavli Affiliate: Long Zhang | First 5 Authors: Jiaxun Yang | Summary: Chinese Patronizing and Condescending Language (CPCL) is implicitly discriminatory toxic speech targeting vulnerable groups on Chinese video platforms. The existing dataset lacks user comments, which are a direct reflection of video content. This undermines the model’s understanding of […]



Spin PN Junctions: Giant Magnetoresistance, Tunable Circular Polarization, and Spin Zener Filter

Kavli Affiliate: Gang Su | First 5 Authors: Chun-Yi Xue | Summary: We demonstrate that spin PN junctions (magnetic semiconductor homojunctions with spin-splitting-induced band offsets) fundamentally redefine carrier transport via spin-dependent recombination probabilities. By integrating this mechanism into the Shockley model, we predict a near 100 enhancement in magnetoresistance sensitivity under […]
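
For context, the Shockley model mentioned in the summary gives the diode current as a single exponential of the applied bias. A minimal spin-resolved reading, written here only as an illustration and not as the paper's derivation, assigns each spin channel its own saturation current and compares the junction current for parallel (P) and antiparallel (AP) magnetization alignments:

```latex
% Standard Shockley diode equation, plus an illustrative spin-resolved split.
% The per-channel saturation currents I_{0,\sigma} and the MR definition are
% assumptions of this sketch, not quantities taken from the paper.
\begin{align}
  I &= I_{0}\left(e^{qV/k_{B}T} - 1\right), \\
  I &= \sum_{\sigma=\uparrow,\downarrow} I_{0,\sigma}\left(e^{qV/k_{B}T} - 1\right),
  \qquad \mathrm{MR} = \frac{I_{\mathrm{P}} - I_{\mathrm{AP}}}{I_{\mathrm{AP}}}.
\end{align}
```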



MS-GS: Multi-Appearance Sparse-View 3D Gaussian Splatting in the Wild

Kavli Affiliate: Cheng Peng | First 5 Authors: Deming Li | Summary: In-the-wild photo collections often contain limited volumes of imagery and exhibit multiple appearances, e.g., taken at different times of day or seasons, posing significant challenges to scene reconstruction and novel view synthesis. Although recent adaptations of Neural Radiance Field […]
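
A common way to handle multiple appearances in the wild, shown below as a generic sketch rather than MS-GS's actual mechanism, is to learn a per-image appearance embedding and let a small MLP modulate rendered colors conditioned on it. The embedding size and MLP width here are arbitrary.

```python
import torch
import torch.nn as nn

class AppearanceModulator(nn.Module):
    """Per-image appearance embedding that modulates rendered colors.
    Illustrative sketch of a common in-the-wild trick, not MS-GS itself."""
    def __init__(self, num_images: int, embed_dim: int = 32):
        super().__init__()
        self.embeddings = nn.Embedding(num_images, embed_dim)
        self.mlp = nn.Sequential(
            nn.Linear(3 + embed_dim, 64), nn.ReLU(),
            nn.Linear(64, 3),
        )

    def forward(self, base_rgb: torch.Tensor, image_idx: torch.Tensor):
        # base_rgb: (N, 3) colors from the splatting renderer
        # image_idx: (N,) index of the source image for each sample
        emb = self.embeddings(image_idx)                      # (N, embed_dim)
        delta = self.mlp(torch.cat([base_rgb, emb], dim=-1))  # per-sample color shift
        return (base_rgb + delta).clamp(0.0, 1.0)

mod = AppearanceModulator(num_images=50)
rgb = torch.rand(1024, 3)
idx = torch.randint(0, 50, (1024,))
print(mod(rgb, idx).shape)  # torch.Size([1024, 3])
```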



Detail Across Scales: Multi-Scale Enhancement for Full Spectrum Neural Representations

Kavli Affiliate: Cheng Peng | First 5 Authors: Yuan Ni | Summary: Implicit neural representations (INRs) have emerged as a compact and parametric alternative to discrete array-based data representations, encoding information directly in neural network weights to enable resolution-independent representation and memory efficiency. However, existing INR approaches, when constrained to compact […]
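
The basic INR setup is a coordinate network: an MLP maps a continuous coordinate to a signal value, so the data lives in the network weights and can be queried at any resolution. Below is a minimal Fourier-feature sketch of that generic setup; it does not include the paper's multi-scale enhancement, and the frequency count and layer widths are arbitrary choices.

```python
import torch
import torch.nn as nn

class FourierINR(nn.Module):
    """Minimal implicit neural representation: 2D coordinate -> scalar value.
    Fourier features help the MLP fit high-frequency detail."""
    def __init__(self, num_freqs: int = 16, hidden: int = 128):
        super().__init__()
        self.register_buffer("freqs", 2.0 ** torch.arange(num_freqs))
        in_dim = 2 * 2 * num_freqs  # (sin, cos) per frequency per coordinate
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, coords):                   # coords: (N, 2) in [0, 1]
        phase = coords[..., None] * self.freqs   # (N, 2, num_freqs)
        feats = torch.cat([phase.sin(), phase.cos()], dim=-1).flatten(-2)
        return self.net(feats)

inr = FourierINR()
xy = torch.rand(4096, 2)
print(inr(xy).shape)  # torch.Size([4096, 1]); query at any resolution
```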



LLM-OREF: An Open Relation Extraction Framework Based on Large Language Models

Kavli Affiliate: Long Zhang | First 5 Authors: Hongyao Tu | Summary: The goal of open relation extraction (OpenRE) is to develop an RE model that can generalize to new relations not encountered during training. Existing studies primarily formulate OpenRE as a clustering task. They first cluster all test instances based […]
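
The clustering formulation described above can be sketched in a few lines: embed every test instance, cluster the embeddings, and treat each cluster as one unseen relation. The toy example below uses TF-IDF features and k-means purely for illustration; it is a generic baseline, not the paper's LLM-based framework.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy relation instances (placeholder data); in practice these would be
# sentences with marked entity pairs and a stronger encoder than TF-IDF.
instances = [
    "Marie Curie was born in Warsaw.",
    "Alan Turing was born in London.",
    "Apple was founded by Steve Jobs.",
    "Microsoft was founded by Bill Gates.",
]

embeddings = TfidfVectorizer().fit_transform(instances)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)
print(labels)  # instances sharing a cluster are treated as one unseen relation
```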



Annotating Training Data for Conditional Semantic Textual Similarity Measurement using Large Language Models

Kavli Affiliate: Yi Zhou | First 5 Authors: Gaifan Zhang | Summary: Semantic similarity between two sentences depends on the aspects considered between those sentences. To study this phenomenon, Deshpande et al. (2023) proposed the Conditional Semantic Textual Similarity (C-STS) task and annotated a human-rated similarity dataset containing pairs of sentences […]
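
To make the conditional aspect concrete, the same sentence pair can be scored under two different conditions. The sketch below uses a naive strategy, prepending the condition to each sentence before encoding with an off-the-shelf sentence encoder; this is an assumption made for illustration, not the annotation pipeline proposed in the paper.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder works here

s1 = "The player hit the ball over the fence."
s2 = "The batter smashed a home run out of the park."
conditions = ["the sport being played", "the location of the event"]

for cond in conditions:
    # Naive conditioning: prepend the condition before encoding each sentence.
    a, b = model.encode([f"{cond}: {s1}", f"{cond}: {s2}"], convert_to_tensor=True)
    print(cond, float(util.cos_sim(a, b)))  # similarity varies with the condition
```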



10-W Sub-100-fs Ultrafast Cr:ZnS/ZnSe MOPA System enabled by doping gradient engineering

Kavli Affiliate: Long Zhang | First 5 Authors: Guangzi Feng | Summary: We report on a high-power mid-infrared femtosecond master oscillator power amplifier (MOPA) system, employing Cr:ZnS and Cr:ZnSe polycrystals with fine-tuned doping profiles. Based on the soft-aperture Kerr-lens mode-locking in the soliton regime, the seed oscillator generates ~40-fs pulses with […]
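
As a point of reference, a transform-limited sech^2 soliton pulse obeys the time-bandwidth product, which fixes the minimum spectral bandwidth the ~40-fs seed pulses must carry. The estimate below assumes a 2.3-um center wavelength, typical for Cr:ZnS, and is a back-of-the-envelope figure rather than a value from the paper:

```latex
% Time-bandwidth product for a transform-limited sech^2 pulse (assumed shape),
% evaluated at an assumed 2.3-um center wavelength:
\Delta\nu\,\Delta\tau \approx 0.315
\;\Rightarrow\;
\Delta\nu \approx \frac{0.315}{40\,\mathrm{fs}} \approx 7.9\,\mathrm{THz},
\qquad
\Delta\lambda = \frac{\lambda^{2}}{c}\,\Delta\nu
  \approx \frac{(2.3\,\mu\mathrm{m})^{2}}{3\times 10^{8}\,\mathrm{m/s}}\times 7.9\,\mathrm{THz}
  \approx 140\,\mathrm{nm}.
```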



MemGS: Memory-Efficient Gaussian Splatting for Real-Time SLAM

Kavli Affiliate: Yi Zhou | First 5 Authors: Yinlong Bai | Summary: Recent advancements in 3D Gaussian Splatting (3DGS) have made a significant impact on rendering and reconstruction techniques. Current research predominantly focuses on improving rendering performance and reconstruction quality using high-performance desktop GPUs, largely overlooking applications for embedded platforms like […]
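
The memory pressure on embedded platforms is easy to see from the per-Gaussian parameter count in the vanilla 3DGS parameterization. The arithmetic below uses the standard attribute layout (position, scale, rotation, opacity, degree-3 spherical harmonics) and an assumed scene size; it does not reflect MemGS's actual storage scheme.

```python
# Rough per-Gaussian memory footprint in vanilla 3DGS at fp32, for context only.
# Attribute counts follow the original 3D Gaussian Splatting parameterization;
# the scene size below is an order-of-magnitude assumption, not a MemGS figure.
floats_per_gaussian = (
    3         # position (x, y, z)
    + 3       # anisotropic scale
    + 4       # rotation quaternion
    + 1       # opacity
    + 3 * 16  # spherical-harmonic color coefficients (degree 3, 3 channels)
)
bytes_per_gaussian = floats_per_gaussian * 4           # fp32
num_gaussians = 2_000_000                              # assumed scene size
total_mb = num_gaussians * bytes_per_gaussian / 2**20
print(floats_per_gaussian, bytes_per_gaussian, round(total_mb))  # 59 236 450
```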

