Split personalities in Bayesian Neural Networks: the case for full marginalisation

Kavli Affiliate: Anthony Lasenby

| First 5 Authors: David Yallup, Will Handley, Mike Hobson, Anthony Lasenby, Pablo Lemos

| Summary:

The true posterior distribution of a Bayesian neural network is massively
multimodal. Whilst most of these modes are functionally equivalent, we
demonstrate that there remains a level of real multimodality that manifests in
even the simplest neural network setups. It is only by fully marginalising over
all posterior modes, using appropriate Bayesian sampling tools, that we can
capture the split personalities of the network. The ability of a network
trained in this manner to reason between multiple candidate solutions
dramatically improves the generalisability of the model, a feature we contend
is not consistently captured by alternative approaches to training Bayesian
neural networks. We present a concise minimal example of this, which offers
lessons and a path forward for correctly utilising the explainability and
interpretability of Bayesian neural networks.
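
To make full marginalisation concrete, below is a minimal sketch (not the authors' code) of sampling the complete posterior of a tiny one-hidden-layer network with nested sampling via the `dynesty` library, then forming posterior-weighted predictions that average over all modes rather than committing to a single optimiser basin. The toy data, network size, prior bounds, and noise level are all illustrative assumptions.

```python
import numpy as np
import dynesty

# Toy 1D regression data (hypothetical, for illustration only)
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 20)
y = np.sin(3 * x) + 0.1 * rng.standard_normal(x.size)

def forward(theta, xs):
    # One-hidden-layer tanh network with 2 hidden units:
    # theta = [w1 (2), b1 (2), w2 (2), b2 (1)] -> 7 parameters
    w1, b1 = theta[0:2], theta[2:4]
    w2, b2 = theta[4:6], theta[6]
    h = np.tanh(np.outer(xs, w1) + b1)  # shape (n, 2)
    return h @ w2 + b2

def loglike(theta):
    # Gaussian likelihood with an assumed fixed noise sigma = 0.1
    resid = y - forward(theta, x)
    return -0.5 * np.sum((resid / 0.1) ** 2)

def prior_transform(u):
    # Uniform prior on [-5, 5] for all 7 weights and biases
    return 10.0 * u - 5.0

# Nested sampling explores all posterior modes simultaneously
sampler = dynesty.NestedSampler(loglike, prior_transform, ndim=7, nlive=500)
sampler.run_nested()
res = sampler.results

# Full marginalisation: posterior-weighted average over ALL modes
weights = np.exp(res.logwt - res.logz[-1])
x_test = np.linspace(-1.5, 1.5, 50)
preds = np.array([forward(t, x_test) for t in res.samples])
posterior_mean = np.average(preds, axis=0, weights=weights)
```

Because nested sampling maintains many live points across the prior, it retains samples from every surviving posterior mode, so `posterior_mean` blends the network's "split personalities" instead of collapsing onto one of them.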

| Search Query: ArXiv Query: search_query=au:"Anthony Lasenby"&id_list=&start=0&max_results=3