Full metadata record

DC Field | Value | Language
dc.contributor.author | 蔡秉宸 | en_US
dc.contributor.author | Tsai, Bing-Chen | en_US
dc.contributor.author | 王聖智 | en_US
dc.contributor.author | 簡鳳村 | en_US
dc.contributor.author | Wang, Sheng-Jyh | en_US
dc.contributor.author | Chien, Feng-Tsun | en_US
dc.date.accessioned | 2014-12-12T02:38:40Z | -
dc.date.available | 2014-12-12T02:38:40Z | -
dc.date.issued | 2013 | en_US
dc.identifier.uri | http://140.113.39.130/cdrfb3/record/nctu/#GT070050238 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/73715 | -
dc.description.abstract | In this thesis, we study how to train a model with a deep architecture quickly and effectively; the choice of parameters is the key factor affecting the whole model. We compare how different parameters influence the model, including the given initial values, the learning rate, and the weight cost. The deep model consists of two parts: unsupervised pre-training and supervised fine-tuning. The unsupervised pre-training uses "Restricted Boltzmann Machines", a recently very effective learning method, while the supervised fine-tuning uses the "Wake-Sleep Algorithm" to build a generative deep neural network model. We also discuss the influence of the unsupervised pre-training on the supervised fine-tuning, and finally show data generated by the deep neural network model. | zh_TW
dc.description.abstract | In this thesis, we discuss how to train a deep architecture model efficiently; the parameters play an important role in our model. We examine the influence of different parameters, including the initial weights and the learning rate. The deep architecture has two parts: unsupervised pre-training and supervised fine-tuning. In unsupervised pre-training, we use an efficient learning method, "Restricted Boltzmann Machines", to extract features from the data. In supervised fine-tuning, we use the "Wake-Sleep algorithm" to build a deep neural network. | en_US
dc.language.iso | zh_TW | en_US
dc.subject | 限制性波茲曼機 | zh_TW
dc.subject | Restricted Boltzmann Machines | en_US
dc.title | 基於限制性波茲曼機的訓練深度類神經網路研究 | zh_TW
dc.title | A Study on Training Deep Neural Nets Based on Restricted Boltzmann Machines | en_US
dc.type | Thesis | en_US
dc.contributor.department | 電子工程學系 電子研究所 (Department of Electronics Engineering, Institute of Electronics) | zh_TW

Appears in Collections: Theses
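The abstracts above describe unsupervised pre-training with Restricted Boltzmann Machines, controlled by the initial weights, the learning rate, and the weight cost. As a minimal illustrative sketch (not the thesis's own code), a Bernoulli-Bernoulli RBM trained with one-step contrastive divergence (CD-1) might look like this; the class name, hyperparameter values, and toy data below are assumptions for illustration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Bernoulli-Bernoulli Restricted Boltzmann Machine trained with CD-1."""

    def __init__(self, n_visible, n_hidden, rng=None):
        self.rng = rng or np.random.default_rng(0)
        # Small random initial weights; the thesis studies how this choice matters.
        self.W = 0.01 * self.rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible biases
        self.b_h = np.zeros(n_hidden)    # hidden biases

    def sample_h(self, v):
        # Hidden probabilities and binary samples given visible units.
        p = sigmoid(v @ self.W + self.b_h)
        return p, (self.rng.random(p.shape) < p).astype(float)

    def sample_v(self, h):
        # Visible probabilities and binary samples given hidden units.
        p = sigmoid(h @ self.W.T + self.b_v)
        return p, (self.rng.random(p.shape) < p).astype(float)

    def cd1_step(self, v0, lr=0.1, weight_cost=1e-4):
        # Positive phase: hidden activations driven by the data.
        ph0, h0 = self.sample_h(v0)
        # Negative phase: one Gibbs step (the CD-1 reconstruction).
        pv1, _ = self.sample_v(h0)
        ph1, _ = self.sample_h(pv1)
        n = v0.shape[0]
        # Gradient approximation <v h>_data - <v h>_model, plus an L2 "weight cost".
        grad_W = (v0.T @ ph0 - pv1.T @ ph1) / n - weight_cost * self.W
        self.W += lr * grad_W
        self.b_v += lr * (v0 - pv1).mean(axis=0)
        self.b_h += lr * (ph0 - ph1).mean(axis=0)
        # Reconstruction error, a rough monitor of training progress.
        return float(np.mean((v0 - pv1) ** 2))
```

In the pre-training scheme the abstracts describe, several such RBMs would be stacked (each layer's hidden activations become the next layer's data) before the wake-sleep fine-tuning adjusts the whole deep network.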