Multi-Task Contrastive Learning for 8192-Token Bilingual Text Embeddings

Kavli Affiliate: Feng Wang

| First 5 Authors: Isabelle Mohr, Markus Krimmel, Saba Sturua, Mohammad Kalim Akram, Andreas Koukounas

| Summary:

We introduce a novel suite of state-of-the-art bilingual text embedding
models designed to support English together with one other target language.
These models can process lengthy text inputs of up to 8192 tokens, making
them versatile for a range of natural language processing tasks such as text
retrieval, clustering, and semantic textual similarity (STS) calculations.
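
Since the models behave like standard embedding encoders, a minimal usage sketch (assuming a sentence-transformers-compatible checkpoint; the model name below is illustrative, not prescribed by the paper) looks like this:

```python
# Minimal sketch: encoding long bilingual inputs and scoring similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("jinaai/jina-embeddings-v2-base-de",  # name assumed
                            trust_remote_code=True)
model.max_seq_length = 8192  # use the full 8192-token context window

texts = [
    "Berlin is the capital of Germany.",
    "Berlin ist die Hauptstadt Deutschlands.",  # German paraphrase of the first
    "The stock market fell sharply today.",
]
emb = model.encode(texts, normalize_embeddings=True)

# Cosine similarity serves both as an STS score and a retrieval ranking signal.
print(util.cos_sim(emb[0], emb[1]))  # high: cross-lingual paraphrase
print(util.cos_sim(emb[0], emb[2]))  # low: unrelated content
```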
By focusing on bilingual models and introducing a unique multi-task learning
objective, we significantly improve performance on STS tasks, surpassing
existing multilingual models in both target-language understanding and
cross-lingual evaluation. Moreover, our bilingual models are more efficient,
requiring fewer parameters and less memory because they need smaller
vocabularies.
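
A minimal sketch of such a multi-task objective, assuming (as one plausible instantiation, not the paper's exact recipe) an InfoNCE contrastive loss on query-document pairs combined with a CoSENT-style ranking loss on annotated STS pairs:

```python
# Sketch of a multi-task embedding objective: contrastive retrieval loss
# plus an STS ranking loss. The combination and weighting are assumptions.
import torch
import torch.nn.functional as F

def info_nce_loss(queries: torch.Tensor, docs: torch.Tensor, tau: float = 0.05) -> torch.Tensor:
    """Contrastive loss with in-batch negatives: row i of `queries`
    should match row i of `docs` against every other row."""
    q = F.normalize(queries, dim=-1)
    d = F.normalize(docs, dim=-1)
    logits = q @ d.T / tau                             # (B, B) cosine similarities
    labels = torch.arange(q.size(0), device=q.device)  # diagonal entries are positives
    return F.cross_entropy(logits, labels)

def cosent_loss(a: torch.Tensor, b: torch.Tensor, gold: torch.Tensor, tau: float = 0.05) -> torch.Tensor:
    """CoSENT-style STS loss: for every pair of examples (i, j) where the
    gold score says i is more similar than j, penalize cos_j exceeding cos_i."""
    sims = F.cosine_similarity(a, b) / tau               # (N,) scaled cosine scores
    diffs = sims.unsqueeze(0) - sims.unsqueeze(1)        # diffs[i, j] = sims[j] - sims[i]
    mask = gold.unsqueeze(1) > gold.unsqueeze(0)         # mask[i, j]: gold_i > gold_j
    terms = torch.cat([sims.new_zeros(1), diffs[mask]])  # leading 0 yields log(1 + sum(exp))
    return torch.logsumexp(terms, dim=0)

# Combined objective; the 1:1 weighting here is an assumption.
# loss = info_nce_loss(q_emb, d_emb) + cosent_loss(a_emb, b_emb, sts_scores)
```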
Furthermore, we have expanded the Massive Text Embedding Benchmark (MTEB) to
include benchmark tasks for German and Spanish. This integration aims to
stimulate further research and advancement in text embedding technology for
these languages.
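
With those tasks registered in MTEB, evaluation follows the benchmark's standard workflow; a hedged sketch (checkpoint and task names are illustrative, and the actual German and Spanish task lists ship with the benchmark itself):

```python
# Sketch: running the extended MTEB benchmark on a German task.
from mteb import MTEB
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("jinaai/jina-embeddings-v2-base-de",  # name assumed
                            trust_remote_code=True)

evaluation = MTEB(tasks=["GermanSTSBenchmark"])    # illustrative task selection
evaluation.run(model, output_folder="results/de")  # writes per-task JSON scores
```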

| Search Query: ArXiv Query: search_query=au:"Feng Wang"&id_list=&start=0&max_results=3
