Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | 蔡秉宸 | en_US |
dc.contributor.author | Tsai, Bing-Chen | en_US |
dc.contributor.author | 王聖智 | en_US |
dc.contributor.author | 簡鳳村 | en_US |
dc.contributor.author | Wang, Sheng-Jyh | en_US |
dc.contributor.author | Chien, Feng-Tsun | en_US |
dc.date.accessioned | 2014-12-12T02:38:40Z | - |
dc.date.available | 2014-12-12T02:38:40Z | - |
dc.date.issued | 2013 | en_US |
dc.identifier.uri | http://140.113.39.130/cdrfb3/record/nctu/#GT070050238 | en_US |
dc.identifier.uri | http://hdl.handle.net/11536/73715 | - |
dc.description.abstract | 在這篇論文中,我們將去研究如何快速有效地訓練出一個具有深度結構的模型,而參數的選擇將是主要影響整個模型的關鍵。在此,我們會比較不同的參數對整個模型的影響,包含了給定的初始值、學習速度和權重成本等等不同因素對於整個模型的影響。其中,在這個具有深度結構的模型中分成兩部份,分別為無監督式的預先學習和有監督式的微調模型。這邊提到的無監督式的預先學習是最近非常有效的學習方法「Restricted Boltzmann Machines」,而有監督式的微調方法是用「Wake-Sleep Algorithm」來建立一個可產生資料的深層類神經網路模型,並討論無監督式的預先學習對於有監督式的微調的影響,最後會看到深層類神經網路模型產生出來的資料。 | zh_TW |
dc.description.abstract | In this thesis, we study how to train a deep architecture model efficiently; the choice of parameters plays a key role in the resulting model. We compare the influence of different parameters, including the initial weights, the learning rate, and the weight cost. The deep architecture consists of two parts: unsupervised pre-training and supervised fine-tuning. In unsupervised pre-training, we use an efficient learning method, Restricted Boltzmann Machines, to extract features from the data. In supervised fine-tuning, we use the Wake-Sleep algorithm to build a deep neural network that can generate data, discuss how the unsupervised pre-training affects the supervised fine-tuning, and finally examine the data generated by the deep neural network. | en_US |
dc.language.iso | zh_TW | en_US |
dc.subject | 限制性波茲曼機 | zh_TW |
dc.subject | Restricted Boltzmann Machines | en_US |
dc.title | 基於限制性波茲曼機的訓練深度類神經網路研究 | zh_TW |
dc.title | A Study on Training Deep Neural Nets Based on Restricted Boltzmann Machines | en_US |
dc.type | Thesis | en_US |
dc.contributor.department | 電子工程學系 電子研究所 | zh_TW |
Appears in Collections: | Theses |
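The abstract describes unsupervised pre-training with Restricted Boltzmann Machines, which is typically done with contrastive divergence. As a rough illustration only (this is not the thesis's code; all names and the tiny demo data are hypothetical), one CD-1 update for a binary RBM might be sketched as:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sample(probs):
    # Draw a binary vector from independent Bernoulli probabilities.
    return [1 if random.random() < p else 0 for p in probs]

def hidden_probs(v, W, b_h):
    # P(h_j = 1 | v) = sigmoid(b_h[j] + sum_i v[i] * W[i][j])
    return [sigmoid(b_h[j] + sum(v[i] * W[i][j] for i in range(len(v))))
            for j in range(len(b_h))]

def visible_probs(h, W, b_v):
    # P(v_i = 1 | h) = sigmoid(b_v[i] + sum_j h[j] * W[i][j])
    return [sigmoid(b_v[i] + sum(h[j] * W[i][j] for j in range(len(h))))
            for i in range(len(b_v))]

def cd1_update(v0, W, b_v, b_h, lr=0.1):
    # Positive phase: hidden activations driven by the data.
    ph0 = hidden_probs(v0, W, b_h)
    h0 = sample(ph0)
    # Negative phase: one Gibbs step back to a reconstruction.
    pv1 = visible_probs(h0, W, b_v)
    ph1 = hidden_probs(pv1, W, b_h)
    # CD-1 gradient estimate: <v h>_data - <v h>_reconstruction.
    for i in range(len(v0)):
        for j in range(len(b_h)):
            W[i][j] += lr * (v0[i] * ph0[j] - pv1[i] * ph1[j])
    for i in range(len(v0)):
        b_v[i] += lr * (v0[i] - pv1[i])
    for j in range(len(b_h)):
        b_h[j] += lr * (ph0[j] - ph1[j])
    # Return squared reconstruction error as a rough progress measure.
    return sum((v0[i] - pv1[i]) ** 2 for i in range(len(v0)))

# Tiny demo: 4 visible units, 2 hidden units, one repeated pattern.
n_v, n_h = 4, 2
W = [[0.1 * (random.random() - 0.5) for _ in range(n_h)] for _ in range(n_v)]
b_v, b_h = [0.0] * n_v, [0.0] * n_h
data = [1, 0, 1, 0]
for epoch in range(200):
    err = cd1_update(data, W, b_v, b_h)
print(round(err, 4))
```

After a few hundred updates the reconstruction error on the single training pattern becomes small; in a real deep-belief-net setup, stacks of such RBMs would be trained layer by layer before the supervised fine-tuning stage mentioned in the abstract.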