Facilitating Reinforcement Learning for Process Control Using Transfer Learning: Perspectives

Kavli Affiliate: Biao Huang | First 5 Authors: Runze Lin, Junghui Chen, Lei Xie, Hongye Su, Biao Huang | Summary: This paper provides insights into deep reinforcement learning (DRL) for process control from the perspective of transfer learning. We analyze the challenges of applying DRL in the field of process industries and the necessity of […]
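
The truncated summary does not describe the authors' specific transfer-learning scheme, so the following is only a minimal sketch of the general idea it alludes to: reuse a policy network trained on a related source process and fine-tune part of it on the data-scarce target process. All names and sizes (PolicyNet, layer widths, learning rate) are hypothetical.

```python
# Minimal, illustrative sketch (not the authors' method): transfer a policy
# network pretrained on a source process to a related target process by
# reusing its weights and fine-tuning only the output head.
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    """Small MLP mapping a process state to a control action."""
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.head = nn.Linear(hidden, action_dim)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(state))

# 1) Pretrain on the source task (e.g., a cheap simulator); training loop omitted.
source_policy = PolicyNet(state_dim=4, action_dim=1)

# 2) Transfer: copy weights, freeze the shared feature extractor,
#    and fine-tune only the head on the data-scarce target process.
target_policy = PolicyNet(state_dim=4, action_dim=1)
target_policy.load_state_dict(source_policy.state_dict())
for p in target_policy.features.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(target_policy.head.parameters(), lr=1e-4)
# ... target-task fine-tuning loop (RL or behaviour cloning) would go here ...
```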


TAIL: A Terrain-Aware Multi-Modal SLAM Dataset for Robot Locomotion in Deformable Granular Environments

Kavli Affiliate: Zheng Zhu | First 5 Authors: Chen Yao, Yangtao Ge, Guowei Shi, Zirui Wang, Ningbo Yang | Summary: Terrain-aware perception holds the potential to improve the robustness and accuracy of autonomous robot navigation in the wilds, thereby facilitating effective off-road traversals. However, the lack of multi-modal perception across various motion patterns hinders the […]


Self-dual solution of 3D incompressible Navier-Stokes equations

Kavli Affiliate: Yi Zhou | First 5 Authors: Ning-An Lai, Yi Zhou | Summary: Whether the 3D incompressible Navier-Stokes equations will have a global smooth solution for all smooth, finite energy initial data is a Millennium Prize problem. One of the main difficulties of this problem is that the Navier-Stokes equations are actually […]
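
For reference, the system the summary refers to is the standard 3D incompressible Navier-Stokes problem; the self-dual ansatz the paper constructs is not shown in the truncated excerpt.

```latex
% Standard 3D incompressible Navier-Stokes system for velocity u and pressure p,
% with viscosity \nu > 0 and smooth, finite-energy initial data u_0
% (the paper's self-dual ansatz is not given in the excerpt):
\begin{align}
  \partial_t u + (u \cdot \nabla) u &= -\nabla p + \nu \Delta u, \\
  \nabla \cdot u &= 0, \\
  u(x, 0) &= u_0(x).
\end{align}
```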


Local operator quench induced by two-dimensional inhomogeneous and homogeneous CFT Hamiltonians

Kavli Affiliate: Masahiro Nozaki | First 5 Authors: Weibo Mao, Masahiro Nozaki, Kotaro Tamaoka, Mao Tian Tan | Summary: We explore non-equilibrium processes in two-dimensional conformal field theories (2d CFTs) due to the growth of operators induced by inhomogeneous and homogeneous Hamiltonians by investigating the time dependence of the partition function, energy density, and entanglement […]
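
The excerpt does not specify which envelope functions the paper uses. In this 2d CFT literature, inhomogeneous Hamiltonians are commonly written by weighting the Hamiltonian density with a smooth profile f(x), with f = 1 recovering the homogeneous case and the sine-square deformation (SSD) being a standard inhomogeneous choice; whether this matches the paper's setup cannot be confirmed from the truncated summary.

```latex
% Common template for inhomogeneous 2d CFT Hamiltonians: the Hamiltonian
% density h(x) is weighted by a smooth envelope f(x) on a circle of size L.
% The paper's specific choice of f(x) is not given in the excerpt.
\begin{equation}
  H_f = \int_0^L \mathrm{d}x \, f(x)\, h(x), \qquad
  f(x) = 1 \;\; (\text{homogeneous}), \qquad
  f(x) = 2\sin^2\!\left(\frac{\pi x}{L}\right) \;\; (\text{SSD}).
\end{equation}
```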


Evaluating Unsupervised Dimensionality Reduction Methods for Pretrained Sentence Embeddings

Kavli Affiliate: Yi Zhou | First 5 Authors: Gaifan Zhang, Yi Zhou, Danushka Bollegala | Summary: Sentence embeddings produced by Pretrained Language Models (PLMs) have received wide attention from the NLP community due to their superior performance when representing texts in numerous downstream applications. However, the high dimensionality of the sentence embeddings produced by […]
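
The excerpt does not list which unsupervised reduction methods the paper evaluates. As one concrete example of the kind of pipeline such a study compares, the sketch below applies PCA to a matrix of precomputed sentence embeddings; the random matrix is only a stand-in for embeddings from a pretrained sentence encoder, and the 768/128 dimensions are arbitrary.

```python
# Minimal, illustrative sketch: unsupervised dimensionality reduction of
# precomputed sentence embeddings with PCA. The random matrix stands in for
# embeddings that would normally come from a pretrained sentence encoder.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 768))   # placeholder: 1000 sentences x 768 dims

pca = PCA(n_components=128)                 # target dimensionality is a free choice
reduced = pca.fit_transform(embeddings)     # shape: (1000, 128)

print(reduced.shape)
print("variance retained:", pca.explained_variance_ratio_.sum())
```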


Automatic Summarization of Doctor-Patient Encounter Dialogues Using Large Language Model through Prompt Tuning

Kavli Affiliate: Cheng Peng | First 5 Authors: Mengxian Lyu, Cheng Peng, Xiaohan Li, Patrick Balian, Jiang Bian | Summary: Automatic text summarization (ATS) is an emerging technology to assist clinicians in providing continuous and coordinated care. This study presents an approach to summarize doctor-patient dialogues using generative large language models (LLMs). We developed prompt-tuning […]
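
The excerpt does not name the backbone LLM or the prompt-tuning library used. As a rough illustration of what "prompt tuning" usually means in this setting, the sketch below prepends a small matrix of learnable prompt vectors to a frozen embedding table and trains only those vectors; the vocabulary size, embedding width, and prompt length are placeholders.

```python
# Minimal, illustrative sketch of soft prompt tuning (not the authors' exact
# setup): a small matrix of learnable prompt vectors is prepended to the
# frozen model's token embeddings, and only those vectors are trained.
import torch
import torch.nn as nn

vocab_size, embed_dim, num_prompt_tokens = 32000, 512, 20

# Stand-in for a frozen pretrained LLM's input embedding table.
token_embedding = nn.Embedding(vocab_size, embed_dim)
token_embedding.weight.requires_grad = False

# The only trainable parameters: continuous prompt embeddings.
soft_prompt = nn.Parameter(torch.randn(num_prompt_tokens, embed_dim) * 0.02)

def embed_with_prompt(input_ids: torch.Tensor) -> torch.Tensor:
    """Prepend the learned prompt to the embedded input sequence."""
    batch = input_ids.size(0)
    tok = token_embedding(input_ids)                          # (batch, seq, dim)
    prompt = soft_prompt.unsqueeze(0).expand(batch, -1, -1)   # (batch, P, dim)
    return torch.cat([prompt, tok], dim=1)                    # (batch, P+seq, dim)

# The prompt-augmented embeddings would then be fed to the frozen LLM, and a
# summarization loss would update only `soft_prompt`.
optimizer = torch.optim.AdamW([soft_prompt], lr=3e-3)

example_ids = torch.randint(0, vocab_size, (2, 16))
print(embed_with_prompt(example_ids).shape)   # torch.Size([2, 36, 512])
```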


Intention Action Anticipation Model with Guide-Feedback Loop Mechanism

Kavli Affiliate: Fuchun Zhang | First 5 Authors: Zongnan Ma, Fuchun Zhang, Zhixiong Nan, Yao Ge, | Summary: Anticipating human intention from videos has broad applications, such as automatic driving, robot assistive technology, and virtual reality. This study addresses the problem of intention action anticipation using egocentric video sequences to estimate actions that indicate human […]


Improving Generalizability of Extracting Social Determinants of Health Using Large Language Models through Prompt-tuning

Kavli Affiliate: Cheng Peng | First 5 Authors: Cheng Peng, Zehao Yu, Kaleb E Smith, Wei-Hsuan Lo-Ciganic, Jiang Bian | Summary: The progress in natural language processing (NLP) using large language models (LLMs) has greatly improved patient information extraction from clinical narratives. However, most methods based on the fine-tuning strategy have limited transfer learning ability […]

