
Proceedings Paper

A general fine-tune method for catastrophic forgetting
Author(s): Yang Tao; Mingming Zhu; Hao Li; Cao Yuan

Paper Abstract

When a model begins a new task, the challenge known as "catastrophic forgetting" limits the scalability of deep learning networks: the network quickly loses the capabilities it has already learned. Fine-tuning retains the original feature extractor to extract features for the new task and thereby learn the new classes. However, this approach degrades performance on previously learned tasks, because the shared parameters change without any guidance for the original task-specific prediction parameters. This paper proposes a general fine-tune method to reduce catastrophic forgetting in sequential task learning scenarios. The critical idea of the method is to fine-tune the parameters in every layer, unlike traditional fine-tuning, which adjusts only the last layer. The experimental results show that the new method is superior to standard fine-tuning in the accuracy of the old task, and its performance on the new task is better than that of EWC (Elastic Weight Consolidation). A distinct advantage is that old tasks do not limit the performance of new tasks but instead provide some support for them.
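The contrast the abstract draws — updating only the last layer versus updating the parameters in every layer — can be illustrated with a minimal sketch. This is an illustrative toy, not the paper's actual algorithm or network: it uses a two-parameter scalar model `y = w2 * relu(w1 * x)` and plain SGD on squared error, where a flag decides whether the earlier layer's weight is also updated.

```python
def relu(z):
    return z if z > 0.0 else 0.0

def fine_tune(w1, w2, data, lr, tune_all_layers, steps=100):
    """SGD on squared error for the toy model y = w2 * relu(w1 * x).
    If tune_all_layers is False, only w2 (the last layer) is updated,
    mimicking traditional last-layer fine-tuning; if True, every layer
    is fine-tuned, as in the method the abstract describes."""
    for _ in range(steps):
        for x, y in data:
            h = relu(w1 * x)
            pred = w2 * h
            err = pred - y
            # Gradients of the loss 0.5 * err**2
            g2 = err * h
            g1 = err * w2 * (x if w1 * x > 0.0 else 0.0)
            w2 -= lr * g2
            if tune_all_layers:
                w1 -= lr * g1
    return w1, w2

# Hypothetical new-task data that the "pretrained" weights (1.0, 1.0) fit poorly.
new_task = [(1.0, 3.0), (2.0, 6.0)]

w1_last, w2_last = fine_tune(1.0, 1.0, new_task, lr=0.01, tune_all_layers=False)
w1_all, w2_all = fine_tune(1.0, 1.0, new_task, lr=0.01, tune_all_layers=True)

# Last-layer tuning leaves w1 at exactly 1.0; all-layer tuning moves it too.
```

Both variants fit the new task here, but only the all-layer variant redistributes the adaptation across layers — the property the paper exploits to balance old-task retention against new-task accuracy.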

Paper Details

Date Published: 14 February 2020
PDF: 7 pages
Proc. SPIE 11429, MIPPR 2019: Automatic Target Recognition and Navigation, 1142912 (14 February 2020); doi: 10.1117/12.2540725
Author Affiliations:
Yang Tao, Wuhan Polytechnic Univ. (China)
Mingming Zhu, Wuhan Polytechnic Univ. (China)
Hao Li, Wuhan Polytechnic Univ. (China)
Cao Yuan, Wuhan Polytechnic Univ. (China)

Published in SPIE Proceedings Vol. 11429:
MIPPR 2019: Automatic Target Recognition and Navigation
Jianguo Liu; Hanyu Hong; Xia Hua, Editor(s)

© SPIE.