Transferring Core Knowledge via Learngenes

Kavli Affiliate: Jing Wang

| First 5 Authors: Fu Feng, Jing Wang, Xin Geng

| Summary:

The pre-training paradigm fine-tunes models trained on large-scale datasets
for downstream tasks to improve performance. However, it transfers all of the
pre-trained knowledge to the downstream task without distinguishing the
necessary parts from the unnecessary ones, which may lead to negative
transfer. In comparison, knowledge transfer in nature is far more efficient:
when passing genetic information to descendants, ancestors encode only the
essential knowledge into genes, which serve as the medium. Inspired by this,
we adopt the recently proposed concept of the “learngene” and refine its
structure by mimicking that of natural genes. We propose Genetic Transfer
Learning (GTL), a framework that replicates the evolutionary process of
organisms in neural networks. GTL trains a population of networks, selects
superior learngenes through tournaments, applies learngene mutations, and
passes the learngenes on to the next generation. We successfully extract the
learngenes of VGG11 and ResNet12 and show that they endow descendant networks
with instincts and strong learning ability: with only 20% of the parameters,
the learngenes yield accuracy improvements of 12% on CIFAR-FS and 16% on
miniImageNet. Moreover, the learngenes are scalable and adaptable across
downstream network structures and datasets. Overall, we offer the novel
insight that transferring core knowledge via learngenes may be both
sufficient and efficient for neural networks.
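The abstract describes GTL as an evolutionary loop: train a population, select superior learngenes by tournament, mutate them, and inherit them in the next generation. The sketch below illustrates that loop under several assumptions of our own; the toy network, the choice of the first block as the learngene, the fitness proxy, and the Gaussian mutation noise are illustrative placeholders, not the authors' implementation.

```python
import copy
import random
import torch
import torch.nn as nn

# Toy stand-in for a network in the population: the first block is treated as
# the "learngene" (the inherited part); the head is reinitialized per individual.
class ToyNet(nn.Module):
    def __init__(self, learngene=None):
        super().__init__()
        self.learngene = learngene if learngene is not None else nn.Linear(32, 64)
        self.head = nn.Linear(64, 10)  # task-specific part, fresh each generation

    def forward(self, x):
        return self.head(torch.relu(self.learngene(x)))

def fitness(net, x, y):
    """Proxy fitness: negative loss on a batch (an assumption, not the paper's metric)."""
    with torch.no_grad():
        return -nn.functional.cross_entropy(net(x), y).item()

def mutate(learngene, noise_std=0.01):
    """Learngene mutation sketched as a small Gaussian weight perturbation (assumed)."""
    mutant = copy.deepcopy(learngene)
    with torch.no_grad():
        for p in mutant.parameters():
            p.add_(noise_std * torch.randn_like(p))
    return mutant

# Synthetic data standing in for the ancestry tasks.
x, y = torch.randn(256, 32), torch.randint(0, 10, (256,))

population = [ToyNet() for _ in range(8)]
for generation in range(5):
    # Briefly train every individual on the current task.
    for net in population:
        opt = torch.optim.SGD(net.parameters(), lr=0.1)
        for _ in range(20):
            opt.zero_grad()
            nn.functional.cross_entropy(net(x), y).backward()
            opt.step()

    # Tournament selection: keep the learngenes of the fitter of two random individuals.
    winners = []
    for _ in range(len(population) // 2):
        a, b = random.sample(population, 2)
        winners.append(a if fitness(a, x, y) >= fitness(b, x, y) else b)

    # Pass mutated copies of the winning learngenes to the next generation,
    # while the rest of each descendant network is freshly initialized.
    population = [ToyNet(learngene=mutate(w.learngene)) for w in winners for _ in range(2)]
```

In the paper the learngenes are extracted from VGG11 and ResNet12 and carried across generations; the sketch only conveys the shape of that selection-mutation-inheritance cycle.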

| Search Query: ArXiv Query: search_query=au:”Jing Wang”&id_list=&start=0&max_results=3

Read More