Subtle Data Crimes: Naively training machine learning algorithms could lead to overly optimistic results

Kavli Affiliate: Ke Wang

| First 5 Authors: Efrat Shimron, Jonathan I. Tamir, Ke Wang, Michael Lustig

| Summary:

While open databases are an important resource in the Deep Learning (DL) era,
they are sometimes used "off-label": data published for one task are used for
training algorithms for a different one. This work aims to highlight that in
some cases, this common practice may lead to biased, overly optimistic results.
We demonstrate this phenomenon for inverse problem solvers and show how their
biased performance stems from hidden data preprocessing pipelines. We describe
two preprocessing pipelines typical of open-access databases and study their
effects on three well-established algorithms developed for Magnetic Resonance
Imaging (MRI) reconstruction: Compressed Sensing (CS), Dictionary Learning
(DictL), and DL. This large-scale study involved extensive computations.
Our results demonstrate that the CS, DictL and DL algorithms yield
systematically biased results when naively trained on seemingly appropriate
data: the Normalized Root Mean Square Error (NRMSE) improves consistently with
the preprocessing extent, showing an artificial increase of 25%-48% in some
cases. Since this phenomenon is generally unknown, biased results are sometimes
published as state-of-the-art; we refer to these as subtle data crimes. This
work hence raises a red flag regarding naive off-label usage of Big Data and
reveals the vulnerability of modern inverse problem solvers to the resulting
bias.
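The summary reports bias in terms of the Normalized Root Mean Square Error (NRMSE). For readers unfamiliar with the metric, below is a minimal sketch of one common NRMSE definition, normalizing the L2 reconstruction error by the L2 norm of the reference image (normalization conventions vary between papers; the function name and toy data here are illustrative, not taken from the paper's code):

```python
import numpy as np

def nrmse(reference, reconstruction):
    """NRMSE: L2 error of the reconstruction, normalized by the
    L2 norm of the reference. One common convention; others
    normalize by the reference's dynamic range instead."""
    reference = np.asarray(reference, dtype=float)
    reconstruction = np.asarray(reconstruction, dtype=float)
    return np.linalg.norm(reconstruction - reference) / np.linalg.norm(reference)

# Toy example: a random "image" and a noisy reconstruction of it
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
rec = ref + 0.05 * rng.standard_normal((64, 64))
print(f"NRMSE: {nrmse(ref, rec):.4f}")
```

A lower NRMSE reads as better reconstruction quality, which is why preprocessing that artificially lowers it (the 25%-48% effect described above) can make an algorithm appear stronger than it is.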

| Search Query: ArXiv Query: search_query=au:"Ke Wang"&id_list=&start=0&max_results=10
