Full metadata record
DC Field    Value    Language
dc.contributor.author    翁慶年    zh_TW
dc.contributor.author    莊仁輝    zh_TW
dc.contributor.author    李嘉晃    zh_TW
dc.contributor.author    劉建良    zh_TW
dc.contributor.author    Wong, Ching-Nian    en_US
dc.contributor.author    Chuang, Jen-Hui    en_US
dc.contributor.author    Lee, Chia-Hoang    en_US
dc.contributor.author    Liu, Chien-Liang    en_US
dc.date.accessioned    2018-01-24T07:42:05Z    -
dc.date.available    2018-01-24T07:42:05Z    -
dc.date.issued    2017    en_US
dc.identifier.uri    http://etd.lib.nctu.edu.tw/cdrfb3/record/nctu/#GT070456008    en_US
dc.identifier.uri    http://hdl.handle.net/11536/142357    -
dc.description.abstract    Translation between images and texts can be viewed as a combination of two tasks from computer vision and natural language processing: generating images conditioned on text, and generating text conditioned on images. Traditional supervised learning algorithms require not only labels but also the pairing information between samples and labels in order to learn the correspondence between images and their text labels. The data accepted by such algorithms may pair each sample with a single label or, in many cases, with multiple labels, and multi-label classification has long been an active research area. Labeling, however, is time-consuming, and in many settings the labels and the sample-label pairing information may be unavailable. This thesis focuses on learning in the absence of pairing information. Without pairing information, translation between images and texts can be regarded as learning the latent relationship between two different datasets, where, notably, one dataset is represented by continuous values and the other by discrete values. We propose a model trained in an adversarial manner to address this task, and show that, without using pairing information during training, it can describe the features of an image with text and can generate images exhibiting the features described by a given text.    zh_TW
dc.description.abstract    Translation between images and texts can be regarded as a combination of two tasks: generating images conditioned on texts, and generating texts conditioned on images. Traditional supervised learning algorithms require not only the labels but also the pairing information between samples and labels to learn the relations between images and their corresponding text labels. Moreover, while traditional supervised learning algorithms allow a single label for each sample, multi-label outcomes also occur in many application settings, which explains why multi-label classification has drawn researchers' attention for decades. Labeling is a time-consuming and labor-intensive task, and in particular, the labeling and pairing information may be unavailable in many settings. This thesis focuses on the condition in which pair information is absent from the data. The task of translating between images and texts without pair information can be considered one of learning the implicit relationship between two different datasets, where one lies in a continuous domain and the other in a discrete domain. We propose a model to deal with this task and demonstrate that, trained without pair information, it can describe images with attribute tokens and generate images according to attribute tokens.    en_US
dc.language.iso    en_US    en_US
dc.subject    Neural networks    zh_TW
dc.subject    Deep learning    zh_TW
dc.subject    Generative adversarial networks    zh_TW
dc.subject    Neural networks    en_US
dc.subject    Deep learning    en_US
dc.subject    Generative adversarial networks    en_US
dc.title    Unpaired Translation between Images and Texts Using Generative Adversarial Networks    zh_TW
dc.title    Unpaired Translation between Images and Texts Using Generative Adversarial Networks    en_US
dc.type    Thesis    en_US
dc.contributor.department    Institute of Computer Science and Engineering    zh_TW
Appears in Collections: Thesis
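
The abstract above describes an adversarially trained model that learns a latent relationship between a continuous image domain and discrete attribute tokens without paired supervision. This record contains no code, so the following is only a minimal, hypothetical PyTorch sketch of that general idea (a generator conditioned on multi-hot attribute vectors and a discriminator trained on unpaired real images), not the thesis's actual architecture; every module, dimension, and hyperparameter here is an illustrative assumption.

# Minimal, illustrative sketch (NOT the thesis model): a generator maps
# discrete attribute tokens (multi-hot vectors) plus noise to images, and a
# discriminator is trained adversarially against unpaired real images.
# Sizes and hyperparameters below are assumptions for demonstration only.
import torch
import torch.nn as nn

N_ATTR, IMG_DIM, NOISE_DIM = 10, 28 * 28, 64  # assumed sizes

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_ATTR + NOISE_DIM, 256), nn.ReLU(),
            nn.Linear(256, IMG_DIM), nn.Tanh(),
        )
    def forward(self, attrs, noise):
        # Condition generation on the attribute tokens by concatenation.
        return self.net(torch.cat([attrs, noise], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),  # real/fake logit
        )
    def forward(self, images):
        return self.net(images)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(100):  # toy loop on random stand-in data
    real = torch.rand(32, IMG_DIM) * 2 - 1             # unpaired "real" images
    attrs = torch.randint(0, 2, (32, N_ATTR)).float()  # unpaired attribute tokens
    noise = torch.randn(32, NOISE_DIM)
    fake = G(attrs, noise)

    # Discriminator step: distinguish real images from generated ones.
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: produce images the discriminator accepts as real.
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

This only shows the standard adversarial objective common to GAN-based approaches; how the thesis handles the reverse direction (describing images with attribute tokens) and the complete absence of pairing information is not recoverable from the abstract alone.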