WorldCuisines: A Massive-Scale Benchmark for Multilingual and Multicultural Visual Question Answering on Global Cuisines

Kavli Affiliate: Yi Zhou

| First 5 Authors: Genta Indra Winata, Frederikus Hudi, Patrick Amadeus Irawan, David Anugraha, Rifki Afina Putri

| Summary:

Vision Language Models (VLMs) often struggle with culture-specific knowledge,
particularly in languages other than English and in underrepresented cultural
contexts. To evaluate their understanding of such knowledge, we introduce
WorldCuisines, a massive-scale benchmark for multilingual and multicultural,
visually grounded language understanding. This benchmark includes a visual
question answering (VQA) dataset with text-image pairs across 30 languages and
dialects, spanning 9 language families and featuring over 1 million data
points, making it the largest multicultural VQA benchmark to date. The benchmark covers
two tasks: identifying dish names and identifying their origins. We provide evaluation
datasets in two sizes (12k and 60k instances) alongside a training dataset (1
million instances). Our findings show that while VLMs perform better with
correct location context, they struggle with adversarial contexts and
predicting specific regional cuisines and languages. To support future
research, we release a knowledge base with annotated food entries and images
along with the VQA data.
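
As an illustration of how the released VQA data might be consumed, the sketch below loads an evaluation split and iterates over question-image pairs. The dataset identifier (`worldcuisines/vqa`), split name, and field names are assumptions for illustration only, not confirmed by the abstract; the released data should be consulted for the actual schema.

```python
# Minimal sketch of iterating over WorldCuisines VQA evaluation data.
# Assumptions (not stated in the abstract): the data is published as a
# Hugging Face dataset named "worldcuisines/vqa", exposes a "test_small"
# split (the 12k evaluation set), and uses the field names below.
from datasets import load_dataset

dataset = load_dataset("worldcuisines/vqa", split="test_small")  # hypothetical identifier/split

for example in dataset.select(range(5)):
    image = example["image"]        # hypothetical field: image of the dish
    question = example["question"]  # hypothetical field: question text in one of the 30 languages
    answer = example["answer"]      # hypothetical field: gold dish name or origin
    # A VLM under evaluation would be prompted with (image, question) and its
    # output compared against `answer` for the two tasks (dish name, origin).
    print(question, "->", answer)
```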

| Search Query: ArXiv Query: search_query=au:"Yi Zhou"&id_list=&start=0&max_results=3
