
Proceedings Paper

Squeeze-SegNet: a new fast deep convolutional neural network for semantic segmentation
Author(s): Geraldin Nanfack; Azeddine Elhassouny; Rachid Oulad Haj Thami

Paper Abstract

Recent research on deep convolutional neural networks has focused on improving accuracy, yielding significant advances. Once limited to classification tasks, these networks have, with growing contributions from the scientific community, become very useful for higher-level tasks such as object detection and pixel-wise semantic segmentation. Deep-learning approaches have thus pushed the state of the art in segmentation accuracy, but the resulting architectures remain difficult to deploy on embedded systems, as in autonomous driving. We present a new deep fully convolutional neural network for pixel-wise semantic segmentation, which we call Squeeze-SegNet. The architecture follows an encoder-decoder design: a SqueezeNet-like encoder and a decoder built from our proposed squeeze-decoder module, with upsampling layers that reuse the downsampling (pooling) indices as in SegNet, followed by a deconvolution layer that produces the final multi-channel feature map. On datasets such as CamVid and Cityscapes, our network achieves SegNet-level accuracy with about ten times fewer parameters than SegNet.
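The abstract outlines the overall design: a SqueezeNet-style encoder, a decoder built from squeeze-decoder modules, max-unpooling that reuses the encoder's pooling indices (as in SegNet), and a final deconvolution producing the per-class feature map. The sketch below illustrates that pattern in PyTorch; the number of stages, the layer widths, and the internal make-up of the squeeze-decoder module are illustrative assumptions, not the configuration published in the paper.

# Minimal PyTorch sketch of the encoder-decoder pattern described in the abstract.
# Layer sizes and the exact composition of the "squeeze-decoder" module are assumptions;
# only the overall idea (SqueezeNet-style fire modules, max-pooling with stored indices,
# unpooling in the decoder, final deconvolution) comes from the text.
import torch
import torch.nn as nn

class Fire(nn.Module):
    """SqueezeNet-style fire module: 1x1 squeeze, then parallel 1x1/3x3 expand."""
    def __init__(self, in_ch, squeeze_ch, expand_ch):
        super().__init__()
        self.squeeze = nn.Conv2d(in_ch, squeeze_ch, kernel_size=1)
        self.expand1x1 = nn.Conv2d(squeeze_ch, expand_ch, kernel_size=1)
        self.expand3x3 = nn.Conv2d(squeeze_ch, expand_ch, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.squeeze(x))
        return torch.cat([self.relu(self.expand1x1(x)),
                          self.relu(self.expand3x3(x))], dim=1)

class SqueezeSegNetSketch(nn.Module):
    """Toy two-stage encoder-decoder; pooling indices are reused for unpooling."""
    def __init__(self, num_classes=12):  # 12 classes assumed (e.g. CamVid incl. void)
        super().__init__()
        self.enc1 = Fire(3, 16, 32)       # -> 64 channels
        self.enc2 = Fire(64, 32, 64)      # -> 128 channels
        self.pool = nn.MaxPool2d(2, stride=2, return_indices=True)
        self.unpool = nn.MaxUnpool2d(2, stride=2)
        self.dec2 = Fire(128, 32, 32)     # stand-in for the squeeze-decoder module
        self.dec1 = Fire(64, 16, 16)
        # Final deconvolution producing the multi-channel (per-class) feature map.
        self.classifier = nn.ConvTranspose2d(32, num_classes, kernel_size=3, padding=1)

    def forward(self, x):
        x = self.enc1(x); x, idx1 = self.pool(x)
        x = self.enc2(x); x, idx2 = self.pool(x)
        x = self.unpool(x, idx2); x = self.dec2(x)   # upsample with stored indices
        x = self.unpool(x, idx1); x = self.dec1(x)
        return self.classifier(x)                    # per-pixel class scores

scores = SqueezeSegNetSketch()(torch.randn(1, 3, 64, 64))  # -> shape (1, 12, 64, 64)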

Paper Details

Date Published: 13 April 2018
PDF: 8 pages
Proc. SPIE 10696, Tenth International Conference on Machine Vision (ICMV 2017), 106962O (13 April 2018); doi: 10.1117/12.2309497
Geraldin Nanfack, Mohammed V Univ. in Rabat (Morocco)
Azeddine Elhassouny, Mohammed V Univ. in Rabat (Morocco)
Rachid Oulad Haj Thami, Mohammed V Univ. in Rabat (Morocco)


Published in SPIE Proceedings Vol. 10696:
Tenth International Conference on Machine Vision (ICMV 2017)
Antanas Verikas; Petia Radeva; Dmitry Nikolaev; Jianhong Zhou, Editor(s)
