
Proceedings Paper

Digital implementation of a neural network for imaging
Author(s): Richard Wood; Alex McGlashan; Jay Yatulis; Peter Mascher; Ian Bruce

Paper Abstract

This paper outlines the design and testing of a digital imaging system that uses an artificial neural network, combining unsupervised and supervised learning, to convert streaming (real-time) input images into parameter space. The primary objective of this work is to investigate how effectively a neural network can reduce the information density of streaming images, so that objects are readily identified by a limited set of primary parameters and the system can act as an enhanced human-machine interface (HMI). Many applications are envisioned, including biomedical imaging, anomaly detection, and assistive devices for the visually impaired. A digital circuit was designed and tested using a Field-Programmable Gate Array (FPGA) and an off-the-shelf digital camera. Our results indicate that the network can be readily trained on limited object sets, such as the letters of the alphabet, and can separate such sets with rotational and positional invariance. The results also show that limited visual fields form with only local connectivity.
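The abstract's core idea, reducing a high-dimensional image to a compact parameter vector via an unsupervised learning stage, can be illustrated with a minimal sketch. This is not the authors' FPGA design; it assumes a simple winner-take-all competitive-learning layer (one common unsupervised scheme) as a stand-in for the paper's unspecified network, and uses toy 16-pixel "images":

```python
import numpy as np

rng = np.random.default_rng(0)

def train_unsupervised(patches, n_units=4, epochs=20, lr=0.1):
    """Competitive (winner-take-all) learning: each unit's weight vector
    drifts toward the cluster of inputs it wins most often.
    (Illustrative stand-in for the paper's unsupervised stage.)"""
    w = rng.normal(size=(n_units, patches.shape[1]))
    w /= np.linalg.norm(w, axis=1, keepdims=True)
    for _ in range(epochs):
        for x in patches:
            winner = np.argmax(w @ x)          # best-matching unit
            w[winner] += lr * (x - w[winner])  # move winner toward input
            w[winner] /= np.linalg.norm(w[winner])
    return w

def encode(image_vec, w):
    """Project an image onto the learned units: the compact
    'parameter space' representation of the input."""
    return w @ image_vec

# Toy data: two 16-pixel "object" prototypes plus noise.
proto = np.stack([np.repeat([1.0, 0.0], 8), np.repeat([0.0, 1.0], 8)])
data = np.concatenate([p + 0.05 * rng.normal(size=(50, 16)) for p in proto])
data /= np.linalg.norm(data, axis=1, keepdims=True)

w = train_unsupervised(data, n_units=4)
codes = np.array([encode(x, w) for x in data])
# Each 16-pixel image is now summarized by a 4-number parameter vector,
# which a supervised stage could then map to object identities.
```

In the paper's pipeline, the information-density reduction happens at this encoding step; the supervised stage only ever sees the short parameter vectors, not the raw pixel stream.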

Paper Details

Date Published: 24 October 2012
PDF: 8 pages
Proc. SPIE 8412, Photonics North 2012, 84121H (24 October 2012); doi: 10.1117/12.2000727
Author Affiliations:
Richard Wood, McMaster Univ. (Canada)
Alex McGlashan, Niagara College (Canada)
Jay Yatulis, Niagara College (Canada)
Peter Mascher, McMaster Univ. (Canada)
Ian Bruce, McMaster Univ. (Canada)

Published in SPIE Proceedings Vol. 8412:
Photonics North 2012
Jean-Claude Kieffer, Editor(s)

© SPIE.