Kavli Affiliate: Max Tegmark
| First 5 Authors: Alexander Zlokapa, Andrew K. Tan, John M. Martyn, Max Tegmark, Isaac L. Chuang
| Summary:
It has been an open question in deep learning whether fault-tolerant computation
is possible: can arbitrarily reliable computation be achieved using only
unreliable neurons? In the mammalian cortex, analog error correction codes
known as grid codes have been observed to protect states against neural spiking
noise, but their role in information processing is unclear. Here, we use these
biological codes to show that a universal fault-tolerant neural network can be
achieved if the faultiness of each neuron lies below a sharp threshold, which
we find coincides in order of magnitude with the noise levels observed in
biological neurons. The discovery of a sharp phase transition from faulty to
fault-tolerant neural computation opens a path towards understanding noisy
analog systems in artificial intelligence and neuroscience.
| Search Query: ArXiv Query: search_query=au:"Max Tegmark"&id_list=&start=0&max_results=10