Full metadata record
DC Field | Value | Language
dc.contributor.author | Kumar, Akshi | en_US
dc.contributor.author | Srinivasan, Kathiravan | en_US
dc.contributor.author | Cheng, Wen-Huang | en_US
dc.contributor.author | Zomaya, Albert Y. | en_US
dc.date.accessioned | 2020-01-02T00:04:25Z | -
dc.date.available | 2020-01-02T00:04:25Z | -
dc.date.issued | 2020-01-01 | en_US
dc.identifier.issn | 0306-4573 | en_US
dc.identifier.uri | http://dx.doi.org/10.1016/j.ipm.2019.102141 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/153457 | -
dc.description.abstract | Detecting sentiments in natural language is tricky even for humans, making its automated detection more complicated. This research proffers a hybrid deep learning model for fine-grained sentiment prediction in real-time multimodal data. It reinforces the strengths of deep learning nets in combination with machine learning to deal with two specific semiotic systems, namely the textual (written text) and visual (still images), and their combination within online content using decision-level multimodal fusion. The proposed contextual ConvNet-SVMBoVW model has four modules, namely the discretization, text analytics, image analytics, and decision modules. The input to the model is multimodal content, m ∈ {text, image, info-graphic}. The discretization module uses Google Lens to separate the text from the image; the two are then processed as discrete entities and sent to the respective text analytics and image analytics modules. The text analytics module determines sentiment using a hybrid of a convolutional neural network (ConvNet) enriched with the contextual semantics of SentiCircle; an aggregation scheme is introduced to compute the hybrid polarity. A support vector machine (SVM) classifier trained on a bag-of-visual-words (BoVW) representation predicts the sentiment of the visual content. A Boolean decision module with a logical OR operation is appended to the architecture; it validates and categorizes the output into five fine-grained sentiment categories (truth values), namely 'highly positive,' 'positive,' 'neutral,' 'negative,' and 'highly negative.' The accuracy achieved by the proposed model is nearly 91%, an improvement over the accuracy obtained by the text and image modules individually. | en_US
dc.language.iso | en_US | en_US
dc.subject | Multimodal | en_US
dc.subject | Sentiment analysis | en_US
dc.subject | Deep learning | en_US
dc.subject | Context | en_US
dc.subject | BoVW | en_US
dc.title | Hybrid context enriched deep learning model for fine-grained sentiment analysis in textual and visual semiotic modality social data | en_US
dc.type | Article | en_US
dc.identifier.doi | 10.1016/j.ipm.2019.102141 | en_US
dc.identifier.journal | INFORMATION PROCESSING & MANAGEMENT | en_US
dc.citation.volume | 57 | en_US
dc.citation.issue | 1 | en_US
dc.citation.spage | 0 | en_US
dc.citation.epage | 0 | en_US
dc.contributor.department | 電子工程學系及電子研究所 | zh_TW
dc.contributor.department | Department of Electronics Engineering and Institute of Electronics | en_US
dc.identifier.wosnumber | WOS:000500387400016 | en_US
dc.citation.woscount | 0 | en_US
Appears in Collections: Journal Articles
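The abstract describes a decision module that fuses text and image sentiment scores into five fine-grained categories via a logical OR. The sketch below is a minimal, hypothetical illustration of such decision-level fusion; the function name `fuse_sentiment`, the score range [-1, 1], the averaging rule, and all thresholds are assumptions, not the paper's exact formulation.

```python
from typing import Optional


def fuse_sentiment(text_score: Optional[float],
                   image_score: Optional[float]) -> str:
    """Fuse per-modality polarity scores into one of five labels.

    Scores are assumed to lie in [-1, 1]; None marks a missing modality.
    """
    # Decision-level fusion: average the scores of the modalities that
    # are present; an OR-like rule lets a single modality's verdict
    # stand when the other is absent.
    scores = [s for s in (text_score, image_score) if s is not None]
    if not scores:
        return "neutral"
    combined = sum(scores) / len(scores)

    # Threshold the combined polarity into five fine-grained categories
    # (cutoffs are illustrative assumptions).
    if combined > 0.6:
        return "highly positive"
    if combined > 0.2:
        return "positive"
    if combined >= -0.2:
        return "neutral"
    if combined >= -0.6:
        return "negative"
    return "highly negative"
```

In a full pipeline, `text_score` would come from the ConvNet-plus-SentiCircle hybrid and `image_score` from the SVM-BoVW classifier; here they are plain floats so the fusion step can be tested in isolation.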