NT-LLM: A Novel Node Tokenizer for Integrating Graph Structure into Large Language Models

Kavli Affiliate: Dan Luo

| First 5 Authors: Yanbiao Ji, Chang Liu, Xin Chen, Yue Ding, Dan Luo

| Summary:

Graphs are a fundamental data structure for representing relationships in
real-world scenarios. With the success of Large Language Models (LLMs) across
various natural language processing (NLP) tasks, there has been growing
interest in integrating LLMs for graph learning. However, applying LLMs to
graph-related tasks poses significant challenges, as these models are not
inherently designed to capture the complex structural information present in
graphs. Existing approaches address this challenge through two strategies: the
chain-of-tasks approach, which uses Graph Neural Networks (GNNs) to encode the
graph structure so that LLMs are relieved of the need to interpret spatial
positions directly; and Graph-to-Text Conversion, which translates graph
structures into semantic text representations that LLMs can process. Despite
their progress, these
methods often struggle to fully preserve the topological information of graphs
or require extensive computational resources, limiting their practical
applicability.
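
To illustrate the second strategy, here is a toy, hypothetical example of graph-to-text conversion (not taken from the paper): each edge is verbalized into a sentence so that a standard LLM can consume the graph as plain text.

    # Toy illustration of graph-to-text conversion; the edge list and wording
    # are hypothetical, not the paper's actual prompting scheme.
    edges = [("Alice", "Bob"), ("Bob", "Carol"), ("Carol", "Alice")]
    prompt = " ".join(f"{u} is connected to {v}." for u, v in edges)
    print(prompt)
    # Alice is connected to Bob. Bob is connected to Carol. Carol is connected to Alice.

Verbalizations like this grow quickly with graph size, which is one reason such methods can demand extensive computational resources.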
In this work, we introduce Node Tokenizer for Large Language Models (NT-LLM),
a novel framework that efficiently encodes graph structures by selecting key
nodes as anchors and representing each node based on its relative distance to
these anchors. This position-anchored encoding effectively captures the graph
topology, enabling enhanced reasoning capabilities in LLMs over graph data.
Additionally, we implement a task-specific tuning procedure to further improve
structural understanding within LLMs. Through extensive empirical evaluations,
NT-LLM demonstrates significant performance improvements across a variety of
graph-related tasks.
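
As a rough sketch of the idea (not the authors' implementation), the position-anchored encoding can be thought of as follows: pick a small set of anchor nodes, run a breadth-first search from each anchor, and describe every node by its vector of shortest-path distances to the anchors. The anchor-selection heuristic below (highest-degree nodes) and all names are assumptions made for illustration only.

    # Minimal sketch of distance-to-anchor node encoding; anchor selection by
    # degree is a placeholder heuristic, not the paper's actual criterion.
    import networkx as nx
    import numpy as np

    def anchor_position_encoding(G: nx.Graph, num_anchors: int = 4) -> dict:
        # Choose anchors (here: the highest-degree nodes).
        anchors = [n for n, _ in sorted(G.degree, key=lambda x: x[1], reverse=True)[:num_anchors]]
        # BFS from each anchor yields shortest-path distances to reachable nodes.
        dist = {a: nx.single_source_shortest_path_length(G, a) for a in anchors}
        # Represent each node by its distances to the anchors (-1 if unreachable).
        return {
            v: np.array([dist[a].get(v, -1) for a in anchors], dtype=np.float32)
            for v in G.nodes
        }

    G = nx.karate_club_graph()
    print(anchor_position_encoding(G)[0])  # distances from node 0 to the chosen anchors

In the full framework these distance vectors would then be turned into node tokens consumed by the LLM; that step is omitted here.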

| Search Query: ArXiv Query: search_query=au:"Dan Luo"&id_list=&start=0&max_results=3
