Title: | Hybrid context enriched deep learning model for fine-grained sentiment analysis in textual and visual semiotic modality social data
Authors: | Kumar, Akshi; Srinivasan, Kathiravan; Cheng, Wen-Huang; Zomaya, Albert Y.
Department: | Department of Electronics Engineering and Institute of Electronics
Keywords: | Multimodal;Sentiment analysis;Deep learning;Context;BoVW
Issue Date: | 1-Jan-2020
Abstract: | Detecting sentiment in natural language is tricky even for humans, making its automated detection more complicated. This research proffers a hybrid deep learning model for fine-grained sentiment prediction in real-time multimodal data. It reinforces the strengths of deep learning nets in combination with machine learning to deal with two specific semiotic systems, namely the textual (written text) and the visual (still images), and their combination within online content, using decision-level multimodal fusion. The proposed contextual ConvNet-SVMBoVW model has four modules: discretization, text analytics, image analytics, and decision. The input to the model is multimodal content, m ∈ {text, image, infographic}. The discretization module uses Google Lens to separate the text from the image; the two are then processed as discrete entities and sent to the respective text analytics and image analytics modules. The text analytics module determines the sentiment using a hybrid of a convolutional neural network (ConvNet) enriched with the contextual semantics of SentiCircle; an aggregation scheme is introduced to compute the hybrid polarity. A support vector machine (SVM) classifier trained on bag-of-visual-words (BoVW) features predicts the sentiment of the visual content. A Boolean decision module with a logical OR operation is appended to the architecture; it validates and categorizes the output into five fine-grained sentiment categories (truth values), namely 'highly positive,' 'positive,' 'neutral,' 'negative,' and 'highly negative.' The accuracy achieved by the proposed model is nearly 91%, an improvement over the accuracy obtained by the text and image modules individually.
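The abstract states that an aggregation scheme combines the ConvNet prediction with SentiCircle's contextual polarity into a hybrid score, but does not give the formula here. The sketch below uses a simple convex combination as a stand-in; the weight alpha and the [-1, 1] scaling are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical stand-in for the paper's aggregation scheme: a convex
# combination of the ConvNet score and the SentiCircle contextual score.
# alpha and the [-1, 1] scaling are assumptions, not the paper's scheme.
def hybrid_polarity(convnet_score: float, senticircle_score: float,
                    alpha: float = 0.5) -> float:
    """Both inputs are assumed to lie in [-1, 1]; alpha weights the ConvNet."""
    return alpha * convnet_score + (1.0 - alpha) * senticircle_score
```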
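For the image analytics module, a conventional BoVW pipeline clusters local descriptors into a visual vocabulary, encodes each image as a histogram of visual words, and trains an SVM on those histograms. A minimal Python sketch follows; the SIFT descriptors, the vocabulary size, and the RBF kernel are assumptions, since the abstract does not specify them.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

VOCAB_SIZE = 200  # assumed size of the visual vocabulary


def _bovw_histogram(descriptors, kmeans):
    """Quantize one image's descriptors into an L1-normalized word histogram."""
    words = kmeans.predict(descriptors)
    hist = np.bincount(words, minlength=VOCAB_SIZE).astype(float)
    return hist / max(hist.sum(), 1.0)


def train_visual_sentiment_classifier(image_paths, labels):
    """Build a BoVW vocabulary from local descriptors, then fit an SVM.

    Assumes every path points at a readable image; descriptor choice (SIFT)
    is an assumption, not specified in the abstract.
    """
    sift = cv2.SIFT_create()
    per_image = []
    for path in image_paths:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = sift.detectAndCompute(img, None)
        if desc is None:  # images with no keypoints get a dummy descriptor
            desc = np.zeros((1, 128), dtype=np.float32)
        per_image.append(desc)
    # Cluster all descriptors into VOCAB_SIZE "visual words".
    kmeans = KMeans(n_clusters=VOCAB_SIZE, n_init=10).fit(np.vstack(per_image))
    X = np.array([_bovw_histogram(d, kmeans) for d in per_image])
    svm = SVC(kernel="rbf").fit(X, labels)
    return kmeans, svm
```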
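The decision module then fuses the two modality-level predictions at decision level and emits one of the five fine-grained labels. Below is a minimal sketch, assuming the fused polarity is an equal-weight average thresholded into five bands; the paper's OR-based validation and exact thresholds may differ.

```python
# Decision-level fusion sketch. The equal-weight average and the band
# thresholds are assumptions; only the five output labels come from the paper.
def fuse(text_score, image_score):
    """Map modality scores in [-1, 1] (or None if a modality is absent)
    to one of five fine-grained sentiment labels."""
    scores = [s for s in (text_score, image_score) if s is not None]
    if not scores:  # OR-style validation: need at least one modality
        return "neutral"
    fused = sum(scores) / len(scores)
    if fused > 0.6:
        return "highly positive"
    if fused > 0.2:
        return "positive"
    if fused >= -0.2:
        return "neutral"
    if fused >= -0.6:
        return "negative"
    return "highly negative"
```

For example, fuse(0.9, 0.5) averages to 0.7 and yields 'highly positive' under these assumed thresholds.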
URI: | http://dx.doi.org/10.1016/j.ipm.2019.102141 http://hdl.handle.net/11536/153457 |
ISSN: | 0306-4573 |
DOI: | 10.1016/j.ipm.2019.102141 |
Journal: | INFORMATION PROCESSING & MANAGEMENT
Volume: | 57 |
Issue: | 1 |
Start Page: | 0
End Page: | 0
Appears in Collections: | Articles |