Full metadata record
DC Field | Value | Language
dc.contributor.author | Tien, Po-Lung | en_US
dc.date.accessioned | 2018-08-21T05:52:48Z | -
dc.date.available | 2018-08-21T05:52:48Z | -
dc.date.issued | 2017-11-01 | en_US
dc.identifier.issn | 2162-237X | en_US
dc.identifier.uri | http://dx.doi.org/10.1109/TNNLS.2016.2600410 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/143960 | -
dc.description.abstract | In this paper, we propose a novel discrete-time recurrent neural network aiming to resolve a new class of multi-constrained K-winner-take-all (K-WTA) problems. By employing specially designed asymmetric neuron weights, the proposed model is capable of operating in a fully parallel manner, thereby allowing true digital implementation. This paper also provides theorems that delineate the theoretical upper bound of the convergence latency, which is merely O(K). Importantly, via simulations, the average convergence time is close to O(1) in most general cases. Moreover, as the multi-constrained K-WTA problem degenerates to a traditional single-constrained problem, the upper bound becomes exactly two parallel iterations, which significantly outperforms existing K-WTA models. By associating the neurons and neuron weights with routing paths and path priorities, respectively, we then apply the model to a prioritized flow scheduler for data center networks. Through extensive simulations, we demonstrate that the proposed scheduler converges to the equilibrium state within near-constant time for different scales of networks while achieving maximal throughput, quality-of-service priority differentiation, and minimum energy consumption, subject to the flow contention-free constraints. | en_US
dc.language.iso | en_US | en_US
dc.subject | Energy saving | en_US
dc.subject | K-winner take all | en_US
dc.subject | parallel computation | en_US
dc.subject | prioritized scheduling | en_US
dc.subject | quality of service (QoS) | en_US
dc.subject | recurrent neural network | en_US
dc.title | A New Discrete-Time Multi-Constrained K-Winner-Take-All Recurrent Network and Its Application to Prioritized Scheduling | en_US
dc.type | Article | en_US
dc.identifier.doi | 10.1109/TNNLS.2016.2600410 | en_US
dc.identifier.journal | IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS | en_US
dc.citation.volume | 28 | en_US
dc.citation.spage | 2674 | en_US
dc.citation.epage | 2685 | en_US
dc.contributor.department | Published under the name of NCTU (交大名義發表) | zh_TW
dc.contributor.department | National Chiao Tung University | en_US
dc.identifier.wosnumber | WOS:000413403900018 | en_US
Appears in Collections: Journal Articles
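For illustration only: the abstract above centres on K-winner-take-all (K-WTA) selection, that is, activating exactly the K neurons with the largest inputs. The minimal Python sketch below shows the generic K-WTA operation via a search over a shared inhibition threshold; the function name, the bisection approach, and the top-k fallback are assumptions made here for illustration and do not reproduce the paper's multi-constrained recurrent model, its asymmetric weights, or its O(K) parallel-iteration bound.

```python
import numpy as np

def k_winner_take_all(x, k, num_iters=50):
    """Generic K-WTA selection: return a binary vector with exactly k ones
    at the k largest entries of x.

    Illustrative sketch only; NOT the multi-constrained recurrent model
    proposed in the paper.
    """
    x = np.asarray(x, dtype=float)
    lo, hi = x.min() - 1.0, x.max() + 1.0
    for _ in range(num_iters):
        t = 0.5 * (lo + hi)                 # candidate inhibition level
        winners = (x > t).astype(float)     # neurons above the threshold
        n_active = int(winners.sum())
        if n_active == k:
            return winners
        if n_active > k:
            lo = t                          # too many winners: raise threshold
        else:
            hi = t                          # too few winners: lower threshold
    # Ties can keep the threshold search from isolating exactly k units;
    # fall back to a direct top-k selection in that case.
    winners = np.zeros_like(x)
    winners[np.argsort(x)[-k:]] = 1.0
    return winners

if __name__ == "__main__":
    activations = np.array([0.2, 0.9, 0.4, 0.7, 0.1])
    print(k_winner_take_all(activations, k=2))   # ones at indices 1 and 3
```

In the paper's setting, each neuron would additionally correspond to a routing path and the weights would encode path priorities and flow-contention constraints; none of that structure is modelled in this sketch.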