
Proceedings Paper

Linguistic attention-based model for aspect extraction
Author(s): Yunjie Ji; Jie Li; Yanhua Yu

Paper Abstract

Aspect extraction plays an important role in aspect-level sentiment analysis. Most existing approaches focus on explicit aspect extraction and either rely heavily on syntactic rules or use neural networks without linguistic knowledge. This paper proposes a linguistic attention-based model (LABM) to perform explicit and implicit aspect extraction jointly. The linguistic attention mechanism incorporates linguistic knowledge, which has proven very useful in aspect extraction. We also propose a novel unsupervised training approach, distributed aspect learning (DAL). The core idea of DAL is that the aspect vector should align closely with the neural word embeddings of nouns that are tightly associated with valid aspect indicators. Experimental results on six datasets demonstrate that our model is explainable and outperforms baseline models on evaluation tasks.
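The DAL idea in the abstract, aligning an aspect vector with the embeddings of its indicator nouns, can be illustrated with a minimal sketch. Everything below (the toy embeddings, the noun vocabulary, the averaging update) is an illustrative assumption, not the authors' implementation or training objective:

```python
import numpy as np

# Hypothetical sketch of the DAL idea: an aspect vector is driven
# toward the word embeddings of nouns that act as aspect indicators.
# Dimensions, vocabulary, and update rule are illustrative only.

rng = np.random.default_rng(0)
dim = 50

# Toy embeddings for candidate nouns (e.g. "battery", "screen", "price").
noun_embeddings = {
    "battery": rng.normal(size=dim),
    "screen": rng.normal(size=dim),
    "price": rng.normal(size=dim),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Nudge a randomly initialized aspect vector toward the centroid of
# its indicator nouns -- a crude stand-in for the learned alignment.
aspect = rng.normal(size=dim)
indicators = ["battery", "screen"]  # nouns tied to one aspect
target = np.mean([noun_embeddings[w] for w in indicators], axis=0)
for _ in range(100):
    aspect += 0.1 * (target - aspect)

# After the updates, the aspect vector sits closer to its indicator
# nouns than to an unrelated noun.
sim_indicator = cosine(aspect, noun_embeddings["battery"])
sim_other = cosine(aspect, noun_embeddings["price"])
```

In the paper's actual setting the alignment is learned jointly with the attention mechanism rather than by this toy averaging step, but the geometric intuition, aspect vectors clustering near their indicator-noun embeddings, is the same.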

Paper Details

Date Published: 29 October 2018
PDF: 6 pages
Proc. SPIE 10836, 2018 International Conference on Image and Video Processing, and Artificial Intelligence, 1083619 (29 October 2018); doi: 10.1117/12.2513845
Author Affiliations:
Yunjie Ji, Beijing Univ. of Posts and Telecommunications (China)
Jie Li, Beijing Univ. of Posts and Telecommunications (China)
Yanhua Yu, Beijing Univ. of Posts and Telecommunications (China)


Published in SPIE Proceedings Vol. 10836:
2018 International Conference on Image and Video Processing, and Artificial Intelligence
Ruidan Su, Editor(s)

© SPIE.