
Proceedings Paper

Memory-efficient large-scale linear support vector machine
Author(s): Abdullah Alrajeh; Akiko Takeda; Mahesan Niranjan

Paper Abstract

Stochastic gradient descent has been advanced as a computationally efficient method for large-scale problems. For classification, several proposed linear support vector machines are highly effective. However, they assume that the data is already in memory, which might not always be the case. Recent work suggests a classical method that divides such a problem into smaller blocks and then solves the sub-problems iteratively. We show that a simple modification, shrinking the dataset early, produces significant savings in computation and memory. We further find that on problems larger than previously considered, our approach is able to reach solutions on top-end desktop machines while competing methods cannot.
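The abstract's idea of block-wise training with early shrinking can be illustrated with a short sketch. The block sizes, Pegasos-style step size, and the `shrink_margin` threshold below are illustrative assumptions, not the paper's exact algorithm: data arrives as blocks (standing in for chunks too large to hold in memory at once), a linear SVM is trained by stochastic subgradient steps on the hinge loss, and points lying far outside the margin are dropped early since they no longer contribute subgradients.

```python
import numpy as np

def blockwise_svm_sgd(blocks, n_features, lam=0.01, epochs=5, shrink_margin=2.0):
    """Sketch of block-wise linear SVM training with early shrinking.

    `blocks` is a mutable list of (X, y) chunks; names and thresholds
    are hypothetical, chosen only to demonstrate the general scheme.
    """
    w = np.zeros(n_features)
    t = 0
    for _ in range(epochs):
        for i, (X, y) in enumerate(blocks):
            for x_j, y_j in zip(X, y):
                t += 1
                eta = 1.0 / (lam * t)        # Pegasos-style decaying step size
                margin = y_j * (w @ x_j)
                w *= (1.0 - eta * lam)       # regularization shrinks w
                if margin < 1:               # hinge-loss subgradient step
                    w += eta * y_j * x_j
            # Early shrinking: drop points far outside the margin; they
            # contribute no subgradient, so removing them saves memory
            # and computation on later passes.
            margins = y * (X @ w)
            keep = margins < shrink_margin
            blocks[i] = (X[keep], y[keep])
    return w

# Usage on synthetic separable data split into blocks.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(2.0, 0.6, (200, 2)), rng.normal(-2.0, 0.6, (200, 2))])
y = np.array([1.0] * 200 + [-1.0] * 200)
perm = rng.permutation(400)
X, y = X[perm], y[perm]
blocks = [(X[i:i + 100], y[i:i + 100]) for i in range(0, 400, 100)]
w = blockwise_svm_sgd(blocks, n_features=2)
accuracy = np.mean(np.sign(X @ w) == y)
remaining = sum(len(b[0]) for b in blocks)
```

After training, `remaining` is typically far below the original 400 points, which is the memory saving the abstract refers to: confidently classified examples are discarded early rather than revisited every pass.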

Paper Details

Date Published: 14 February 2015
PDF: 6 pages
Proc. SPIE 9445, Seventh International Conference on Machine Vision (ICMV 2014), 944527 (14 February 2015); doi: 10.1117/12.2180925
Author Affiliations:
Abdullah Alrajeh, King Abdulaziz City for Science and Technology (Saudi Arabia) and Univ. of Southampton (United Kingdom)
Akiko Takeda, Univ. of Tokyo (Japan)
Mahesan Niranjan, Univ. of Southampton (United Kingdom)

Published in SPIE Proceedings Vol. 9445:
Seventh International Conference on Machine Vision (ICMV 2014)
Antanas Verikas; Branislav Vuksanovic; Petia Radeva; Jianhong Zhou, Editor(s)

© SPIE.