Title: Dynamics Modeling of Musical String by ANN
Author: Liang, Sheng-Fu (梁勝富)
Chin-Teng Lin (林進燈), Alvin Su (蘇文鈺)
Institute of Electrical and Control Engineering
Keywords: Music Synthesis; Physical Modeling; FM; Wavetable; Neural Network; Nonlinearity
Issue Date: 1995
Abstract: Physical modeling has become the major research direction in music synthesis, as FM and wavetable synthesis can no longer satisfy increasingly demanding quality requirements. By combining the properties of wave propagation with the associated discrete-time implementation techniques, realistic and dynamic musical tones can be generated. We first extend the Karplus-Strong plucked-string algorithm to a two-dimensional membrane structure for modeling an instrument's soundboard. To model a real instrument effectively, we propose a new class of neural network, the Linear Scattering Recurrent Network (LSRN), which learns the physical characteristics of the instrument. The network uses measurements of the vibrating string's response as training data, so that after sufficient training it can serve as a counterpart of the string in the synthesis domain. The corresponding learning algorithm is derived, and computer simulation results are presented to demonstrate the feasibility of the approach. Instrument nonlinearity, which points to our future work, is also discussed.
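For context, a minimal Python sketch of the basic one-dimensional Karplus-Strong plucked-string algorithm referenced in the abstract is given below; the noise-burst excitation, the two-point averaging loop filter, and all parameter values (frequency, sample rate, duration) are illustrative assumptions rather than details taken from the thesis.

```python
import numpy as np

def karplus_strong(frequency_hz=220.0, sample_rate=44100, duration_s=1.0):
    """Minimal 1-D Karplus-Strong plucked-string synthesis sketch.

    A delay line of length N ~ sample_rate / frequency is filled with
    white noise (the "pluck"), then repeatedly filtered by a two-point
    average that models frequency-dependent loss on the string.
    """
    n_delay = int(sample_rate / frequency_hz)           # delay-line length sets the pitch
    delay_line = np.random.uniform(-1.0, 1.0, n_delay)  # initial excitation: white-noise burst
    n_samples = int(sample_rate * duration_s)
    out = np.empty(n_samples)
    for i in range(n_samples):
        out[i] = delay_line[0]
        # average the outgoing sample with the next one (simple low-pass loop filter)
        new_sample = 0.5 * (delay_line[0] + delay_line[1])
        delay_line = np.roll(delay_line, -1)             # advance the delay line by one sample
        delay_line[-1] = new_sample                      # feed the filtered sample back in
    return out

# Example: synthesize one second of a plucked tone near 220 Hz.
tone = karplus_strong(frequency_hz=220.0)
```

The thesis extends this delay-line idea to a two-dimensional structure and replaces hand-derived filters with a trained LSRN; the sketch above only illustrates the underlying plucked-string recursion.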
URI: http://140.113.39.130/cdrfb3/record/nctu/#NT840327038
http://hdl.handle.net/11536/60295
Appears in Collections: Thesis