Full metadata record
DC Field | Value | Language
dc.contributor.author | Huang, Kou-Yuan | en_US
dc.contributor.author | Shen, Liang-Chi | en_US
dc.contributor.author | You, Jiun-Der | en_US
dc.date.accessioned | 2017-04-21T06:49:04Z | -
dc.date.available | 2017-04-21T06:49:04Z | -
dc.date.issued | 2015 | en_US
dc.identifier.isbn | 978-1-4799-1959-8 | en_US
dc.identifier.issn | 2161-4393 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/134624 | -
dc.description.abstract | In the multilayer perceptron (MLP), there is a theorem on the maximum number of separable regions (M) given the number of hidden nodes (H) in the d-dimensional input space. We propose a recurrence relation in the high-dimensional space and prove the theorem by expanding the recurrence relation instead of by induction. The MLP model has an input layer, one hidden layer, and an output layer. We use different MLP models on well log data inversion to test the number of hidden nodes determined by the theorem. The inputs are the first-order, second-order, and third-order features. A higher order neural network (HONN) has the property of more nonlinear mapping. In the experiments, we have 31 simulated well log data sets; 25 are used for training and 6 for testing. The experimental results support the number of hidden nodes determined by the theorem. | en_US
dc.language.iso | en_US | en_US
dc.subject | Multilayer perceptron | en_US
dc.subject | hidden node number | en_US
dc.subject | recurrence formula | en_US
dc.subject | higher order neural network | en_US
dc.subject | well log data inversion | en_US
dc.title | Proof of Hidden Node Number in MLP and Experiments on Well Log Data Inversion | en_US
dc.type | Proceedings Paper | en_US
dc.identifier.journal | 2015 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN) | en_US
dc.contributor.department | 資訊工程學系 | zh_TW
dc.contributor.department | Department of Computer Science | en_US
dc.identifier.wosnumber | WOS:000370730601018 | en_US
dc.citation.woscount | 0 | en_US
Appears in Collections: Conference Papers
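The abstract concerns the maximum number of regions (M) that H hidden nodes can separate in a d-dimensional input space. The paper's exact recurrence is not reproduced in this record; as a hedged illustration, the sketch below uses the classical hyperplane-arrangement recurrence R(H, d) = R(H-1, d) + R(H-1, d-1), whose closed form is the sum of binomial coefficients C(H, i) for i = 0..d. The function name `max_regions` is our own choice, not from the paper.

```python
from math import comb

def max_regions(H: int, d: int) -> int:
    """Classical bound on regions cut by H hyperplanes in d-dim space.

    Closed form of the recurrence R(H, d) = R(H-1, d) + R(H-1, d-1)
    with base cases R(0, d) = R(H, 0) = 1 (assumed here as an
    illustration; the paper's own recurrence may differ in detail).
    """
    return sum(comb(H, i) for i in range(d + 1))

# Example: 3 lines in the plane (H=3, d=2) cut it into at most 7 regions.
print(max_regions(3, 2))
```

With this bound, one can invert the relation to estimate how many hidden nodes are needed to separate a desired number of regions, which is the role the theorem plays in the abstract's experiments.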