Full metadata record
DC Field: Value [Language]
dc.contributor.author: Cheng, PC [en_US]
dc.contributor.author: Yeh, JY [en_US]
dc.contributor.author: Ke, HR [en_US]
dc.contributor.author: Chien, BC [en_US]
dc.contributor.author: Yang, WP [en_US]
dc.date.accessioned: 2014-12-08T15:36:39Z
dc.date.available: 2014-12-08T15:36:39Z
dc.date.issued: 2005 [en_US]
dc.identifier.isbn: 3-540-27420-0 [en_US]
dc.identifier.issn: 0302-9743 [en_US]
dc.identifier.uri: http://hdl.handle.net/11536/25011
dc.description.abstract: This paper focuses on the user-centered search task at ImageCLEF 2004. In this work, we combine textual and visual features for cross-language image retrieval and propose two interactive retrieval systems, T_ICLEF and VCT_ICLEF. The first incorporates a relevance feedback mechanism based on textual information, while the second combines textual and image information to help users find a target image. The experimental results show that VCT_ICLEF performed better in almost all cases. Overall, it helped users find the topic image in fewer iterations, saving up to 2 iterations. Our user survey also reported that combining textual and visual information helps users indicate to the system what they really have in mind. [en_US]
dc.language.iso: en_US [en_US]
dc.title: Comparison and combination of textual and visual features for interactive cross-language image retrieval [en_US]
dc.type: Article; Proceedings Paper [en_US]
dc.identifier.journal: MULTILINGUAL INFORMATION ACCESS FOR TEXT, SPEECH AND IMAGES [en_US]
dc.citation.volume: 3491 [en_US]
dc.citation.spage: 793 [en_US]
dc.citation.epage: 804 [en_US]
dc.contributor.department: 資訊工程學系 (Department of Computer Science) [zh_TW]
dc.contributor.department: 圖書館 (Library) [zh_TW]
dc.contributor.department: Department of Computer Science [en_US]
dc.contributor.department: Library [en_US]
dc.identifier.wosnumber: WOS:000231117600077
Appears in Collections: Conference Papers