Teacher-Critical Training Strategies for Image Captioning

Kavli Affiliate: Jiansheng Chen

| First 5 Authors: Yiqing Huang, Jiansheng Chen

| Summary:

Existing image captioning models are usually trained with cross-entropy (XE)
loss and reinforcement learning (RL), which set ground-truth words as hard
targets and force the captioning model to learn from them. However, these widely
adopted training strategies suffer from misalignment in XE training and
inappropriate reward assignment in RL training. To tackle these problems, we
introduce a teacher model that serves as a bridge between the ground-truth
caption and the caption model by generating easier-to-learn word proposals
as soft targets. The teacher model is constructed by incorporating the
ground-truth image attributes into the baseline caption model. To learn
effectively from the teacher model, we propose Teacher-Critical Training
Strategies (TCTS) for both XE and RL training to facilitate better learning
processes for the caption model. Experimental evaluations of several widely
adopted caption models on the benchmark MSCOCO dataset show that the proposed
TCTS comprehensively improves most evaluation metrics, especially the Bleu and
Rouge-L scores, in both training stages. TCTS achieves the best published
single-model Bleu-4 and Rouge-L scores to date, 40.2% and 59.4% respectively,
on the MSCOCO Karpathy test split. Our code and pre-trained models will be
open-sourced.
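As a rough illustration of the soft-target idea described in the summary (not the authors' released implementation), the sketch below blends the usual hard-target cross-entropy with a KL term toward a teacher's word distribution. The function name `teacher_guided_xe_loss`, the mixing weight `alpha`, and the exact loss form are assumptions for illustration; the paper's actual TCTS formulation for XE and RL training may differ.

```python
import torch
import torch.nn.functional as F

def teacher_guided_xe_loss(student_logits, teacher_logits, target_ids, alpha=0.5):
    """Hypothetical sketch: blend hard-target XE with soft targets from a teacher.

    student_logits: (batch, seq_len, vocab) scores from the caption model
    teacher_logits: (batch, seq_len, vocab) scores from the attribute-augmented teacher
    target_ids:     (batch, seq_len) ground-truth word indices
    alpha:          weight on the soft-target term (assumed hyperparameter)
    """
    vocab = student_logits.size(-1)

    # Standard hard-target cross-entropy against the ground-truth words.
    hard_loss = F.cross_entropy(
        student_logits.reshape(-1, vocab), target_ids.reshape(-1)
    )

    # Soft-target term: KL divergence from the teacher's word proposals
    # to the student's predicted distribution (teacher is not updated).
    soft_loss = F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        F.softmax(teacher_logits.detach(), dim=-1),
        reduction="batchmean",
    )

    return (1.0 - alpha) * hard_loss + alpha * soft_loss
```

This only sketches the XE stage; the teacher-guided reward assignment that TCTS applies in the RL stage is not covered here.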

| Search Query: ArXiv Query: search_query=au:"Jiansheng Chen"&id_list=&start=0&max_results=10
