Evaluating Unsupervised Dimensionality Reduction Methods for Pretrained Sentence Embeddings

Kavli Affiliate: Yi Zhou

| First 5 Authors: Gaifan Zhang, Yi Zhou, Danushka Bollegala

| Summary:

Sentence embeddings produced by Pretrained Language Models (PLMs) have
received wide attention from the NLP community due to their superior
performance when representing texts in numerous downstream applications.
However, the high dimensionality of the sentence embeddings produced by PLMs is
problematic when representing large numbers of sentences in memory- or
compute-constrained devices. As a solution, we evaluate unsupervised
dimensionality reduction methods to reduce the dimensionality of sentence
embeddings produced by PLMs. Our experimental results show that simple methods
such as Principal Component Analysis (PCA) can reduce the dimensionality of
sentence embeddings by almost $50\%$, without incurring a significant loss in
performance in multiple downstream tasks. Surprisingly, for some PLMs and some
tasks, reducing the dimensionality even improves performance over the original
high-dimensional embeddings.
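
As a rough illustration of the kind of pipeline the paper evaluates, the sketch below applies scikit-learn's PCA to sentence embeddings and keeps roughly half of the dimensions. The corpus size, embedding dimensionality, and the random stand-in vectors are illustrative assumptions, not the authors' actual data or setup; in practice the embeddings would come from a PLM-based sentence encoder.

```python
# Minimal sketch (an assumption, not the paper's exact pipeline): apply PCA to
# pretrained sentence embeddings and keep roughly half of the dimensions.
# Random vectors stand in for PLM embeddings here; in practice they would be
# produced by a sentence encoder such as SBERT.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_sentences, original_dim = 10_000, 768   # e.g. BERT-base-sized sentence embeddings
embeddings = rng.normal(size=(n_sentences, original_dim)).astype(np.float32)

# Fit PCA and project onto the top half of the principal components (~50% reduction).
reduced_dim = original_dim // 2
pca = PCA(n_components=reduced_dim)
reduced = pca.fit_transform(embeddings)

print(embeddings.shape, "->", reduced.shape)  # (10000, 768) -> (10000, 384)
```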

| Search Query: ArXiv Query: search_query=au:"Yi Zhou"&id_list=&start=0&max_results=3
