Controlling Neural Networks with Rule Representations

Kavli Affiliate: Xiang Zhang

| First 5 Authors: Sungyong Seo, Sercan O. Arik, Jinsung Yoon, Xiang Zhang, Kihyuk Sohn

| Summary:

We propose a novel training method that integrates rules into deep learning,
such that the strength of the rules is controllable at inference. Deep Neural
Networks with Controllable Rule Representations (DeepCTRL) incorporates a rule
encoder into the model coupled with a rule-based objective, enabling a shared
representation for decision making. DeepCTRL is agnostic to data type and model
architecture. It can be applied to any kind of rule defined for inputs and
outputs. The key aspect of DeepCTRL is that it does not require retraining to
adapt the rule strength — at inference, the user can adjust it based on the
desired operation point on accuracy vs. rule verification ratio. In real-world
domains where incorporating rules is critical — such as Physics, Retail and
Healthcare — we show the effectiveness of DeepCTRL in teaching rules for deep
learning. DeepCTRL improves the trust and reliability of the trained models by
significantly increasing their rule verification ratio, while also providing
accuracy gains on downstream tasks. Additionally, DeepCTRL enables novel use
cases such as hypothesis testing of the rules on data samples, and unsupervised
adaptation based on shared rules between datasets.
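The controllable rule strength described above can be sketched with a blended representation: a rule encoder and a data encoder produce embeddings that are mixed by a coefficient alpha, which the user sets at inference time without retraining. This is a minimal NumPy sketch, not the authors' exact architecture; the encoder shapes, the linear/tanh layers, and the function names (`deepctrl_forward`, `rule_encoder`, `data_encoder`) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def data_encoder(x, W):
    # Hypothetical data branch: a single linear layer with tanh.
    return np.tanh(x @ W)

def rule_encoder(x, W):
    # Hypothetical rule branch: same output shape, separate weights.
    return np.tanh(x @ W)

def deepctrl_forward(x, W_data, W_rule, W_out, alpha):
    """Blend rule and data representations with strength alpha in [0, 1].

    alpha is a runtime knob: changing it requires no retraining, which
    is the key property the abstract describes. During training, a
    matching combined objective would weight the rule-based and task
    losses, e.g. L = alpha * L_rule + (1 - alpha) * L_task (a sketch,
    not the paper's exact formulation).
    """
    z = alpha * rule_encoder(x, W_rule) + (1.0 - alpha) * data_encoder(x, W_data)
    return z @ W_out

# Toy dimensions with random (untrained) weights, just to show the knob.
x = rng.normal(size=(4, 3))
W_data = rng.normal(size=(3, 8))
W_rule = rng.normal(size=(3, 8))
W_out = rng.normal(size=(8, 1))

y_data_only = deepctrl_forward(x, W_data, W_rule, W_out, alpha=0.0)  # ignore rules
y_rule_only = deepctrl_forward(x, W_data, W_rule, W_out, alpha=1.0)  # rules dominate
```

Intermediate alpha values trade off accuracy against the rule verification ratio, giving the operating-point control mentioned above.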

| Search Query: ArXiv Query: search_query=au:"Xiang Zhang"&id_list=&start=0&max_results=10
