Full Metadata Record
DC Field | Value | Language
dc.contributor.author | Liu, CS | en_US
dc.contributor.author | Tseng, CH | en_US
dc.date.accessioned | 2014-12-08T15:27:19Z | -
dc.date.available | 2014-12-08T15:27:19Z | -
dc.date.issued | 1998 | en_US
dc.identifier.isbn | 0-7803-5214-9 | en_US
dc.identifier.issn | 1082-3409 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/19566 | -
dc.description.abstract | A two-level learning algorithm that decomposes multilayer neural networks into a set of sub-networks is presented. Many popular optimization methods, such as conjugate-gradient and quasi-Newton methods, can be utilized to train these sub-networks. In addition, if the activation functions are hard-limiting functions, the multilayer neural networks can be trained by the perceptron learning rule in this two-level learning algorithm. Two experimental problems are given as examples for this algorithm. | en_US
dc.language.iso | en_US | en_US
dc.title | Two-level learning algorithm for multilayer neural networks | en_US
dc.type | Proceedings Paper | en_US
dc.identifier.journal | TENTH IEEE INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE, PROCEEDINGS | en_US
dc.citation.spage | 97 | en_US
dc.citation.epage | 102 | en_US
dc.contributor.department | 機械工程學系 | zh_TW
dc.contributor.department | Department of Mechanical Engineering | en_US
dc.identifier.wosnumber | WOS:000079563400013 | -
Appears in Collections: Conference Papers
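The abstract notes that with hard-limiting activation functions the sub-networks can be trained by the perceptron learning rule. The record does not detail the paper's two-level decomposition, so the sketch below only illustrates the classic perceptron rule it refers to, on an assumed toy dataset (logical AND) with an illustrative learning rate.

```python
# Minimal sketch of the classic perceptron learning rule referenced in the
# abstract. This is NOT the paper's two-level algorithm (which is not detailed
# in this record); the AND data and learning rate are illustrative assumptions.
import numpy as np

def perceptron_train(X, y, epochs=20, lr=1.0):
    """Train a single hard-limiting unit with the perceptron rule."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0  # hard-limiting activation
            err = target - pred                # 0 or +/-1
            w += lr * err * xi                 # weight update on mistakes only
            b += lr * err
    return w, b

# Linearly separable toy data: logical AND
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = perceptron_train(X, y)
preds = [1 if xi @ w + b > 0 else 0 for xi in X]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees the rule finds a separating hyperplane, which is why hard-limiting sub-networks admit this kind of training.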