Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Cheng, PC | en_US |
| dc.contributor.author | Yeh, JY | en_US |
| dc.contributor.author | Ke, HR | en_US |
| dc.contributor.author | Chien, BC | en_US |
| dc.contributor.author | Yang, WP | en_US |
| dc.date.accessioned | 2014-12-08T15:36:39Z | - |
| dc.date.available | 2014-12-08T15:36:39Z | - |
| dc.date.issued | 2005 | en_US |
| dc.identifier.isbn | 3-540-27420-0 | en_US |
| dc.identifier.issn | 0302-9743 | en_US |
| dc.identifier.uri | http://hdl.handle.net/11536/25011 | - |
| dc.description.abstract | This paper concentrates on the user-centered search task at ImageCLEF 2004. In this work, we combine textual and visual features for cross-language image retrieval, and propose two interactive retrieval systems, T_ICLEF and VCT_ICLEF. The first incorporates a relevance feedback mechanism based on textual information, while the second combines textual and image information to help users find a target image. The experimental results show that VCT_ICLEF performed better in almost all cases. Overall, it helped users find the topic image in fewer iterations, saving a maximum of 2 iterations. Our user survey also reported that a combination of textual and visual information is helpful for indicating to the system what a user really has in mind. | en_US |
| dc.language.iso | en_US | en_US |
| dc.title | Comparison and combination of textual and visual features for interactive cross-language image retrieval | en_US |
| dc.type | Article; Proceedings Paper | en_US |
| dc.identifier.journal | MULTILINGUAL INFORMATION ACCESS FOR TEXT, SPEECH AND IMAGES | en_US |
| dc.citation.volume | 3491 | en_US |
| dc.citation.spage | 793 | en_US |
| dc.citation.epage | 804 | en_US |
| dc.contributor.department | 資訊工程學系 | zh_TW |
| dc.contributor.department | 圖書館 | zh_TW |
| dc.contributor.department | Department of Computer Science | en_US |
| dc.contributor.department | Library | en_US |
| dc.identifier.wosnumber | WOS:000231117600077 | - |
Appears in Collections: Conference Papers