Full metadata record
DC Field | Value | Language
dc.contributor.author | Cheng, Pei-Cheng | en_US
dc.contributor.author | Chien, Been-Chian | en_US
dc.contributor.author | Ke, Hao-Ren | en_US
dc.contributor.author | Yang, Wei-Pang | en_US
dc.date.accessioned | 2014-12-08T15:17:49Z | -
dc.date.available | 2014-12-08T15:17:49Z | -
dc.date.issued | 2006 | en_US
dc.identifier.isbn | 3-540-45697-X | en_US
dc.identifier.issn | 0302-9743 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/12905 | -
dc.description.abstract | In this paper we describe the technologies and experimental results for the medical image retrieval task and the automatic annotation task. We combine textual and content-based approaches to retrieve relevant medical images. A content-based approach using four image features and a text-based approach using word expansion were developed to accomplish these tasks. Experimental results show that combining the content-based and text-based approaches performs better than using either approach alone. In the automatic annotation task we use Support Vector Machines (SVM) to learn image feature characteristics for assisting image classification. Based on the SVM model, we analyze which image features are most promising for medical image retrieval. The results show that the spatial relationship between pixels is an important feature in medical image data, because medical images always contain similar anatomic regions. Therefore, image features that emphasize spatial relationships achieve better results than others. | en_US
dc.language.iso | en_US | en_US
dc.title | Combining textual and visual features for cross-language medical image retrieval | en_US
dc.type | Article; Proceedings Paper | en_US
dc.identifier.journal | ACCESSING MULTILINGUAL INFORMATION REPOSITORIES | en_US
dc.citation.volume | 4022 | en_US
dc.citation.spage | 712 | en_US
dc.citation.epage | 723 | en_US
dc.contributor.department | 資訊工程學系 | zh_TW
dc.contributor.department | Department of Computer Science | en_US
dc.identifier.wosnumber | WOS:000241359000078 | -
Appears in Collections: Conferences Paper
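The abstract describes using SVMs to learn image-feature characteristics for classifying medical images. The sketch below is purely illustrative and is not the authors' implementation: it trains a minimal linear SVM (hinge loss plus L2 regularization, via the Pegasos sub-gradient method, a standard substitute) on tiny synthetic feature vectors standing in for image descriptors; all data and names here are hypothetical.

```python
# Minimal linear-SVM sketch (Pegasos-style sub-gradient training).
# Illustrative only: the paper's SVM setup, features, and data are not
# reproduced here; the "feature vectors" below are synthetic stand-ins.
import random

def train_linear_svm(data, labels, lam=0.01, epochs=200, seed=0):
    """Learn weights w for the decision rule sign(w . x)."""
    rng = random.Random(seed)
    dim = len(data[0])
    w = [0.0] * dim
    t = 0
    for _ in range(epochs):
        for i in rng.sample(range(len(data)), len(data)):
            t += 1
            eta = 1.0 / (lam * t)  # decaying learning rate
            x, y = data[i], labels[i]
            margin = y * sum(wj * xj for wj, xj in zip(w, x))
            # Regularization shrinks w every step; on a margin violation
            # (margin < 1) the hinge-loss sub-gradient pushes w toward y*x.
            w = [(1 - eta * lam) * wj for wj in w]
            if margin < 1:
                w = [wj + eta * y * xj for wj, xj in zip(w, x)]
    return w

def predict(w, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1

# Two hypothetical, linearly separable feature clusters standing in for
# descriptors of two image classes.
pos = [[2.0 + 0.1 * i, 1.5] for i in range(5)]    # class +1
neg = [[-1.8 - 0.1 * i, -1.2] for i in range(5)]  # class -1
w = train_linear_svm(pos + neg, [1] * 5 + [-1] * 5)
print([predict(w, x) for x in pos + neg])
```

In practice one would use a mature SVM library with a kernel and proper feature extraction; this sketch only shows the core max-margin training loop the abstract alludes to.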