Full metadata record
DC Field | Value | Language
dc.contributor.author | 陳勝豪 | en_US
dc.contributor.author | Chen, Sheng-Hao | en_US
dc.contributor.author | 吳重雨 | en_US
dc.contributor.author | Wu, Chung-Yu | en_US
dc.date.accessioned | 2014-12-12T02:26:40Z | -
dc.date.available | 2014-12-12T02:26:40Z | -
dc.date.issued | 2008 | en_US
dc.identifier.uri | http://140.113.39.130/cdrfb3/record/nctu/#GT009211821 | en_US
dc.identifier.uri | http://hdl.handle.net/11536/67902 | -
dc.description.abstract | This dissertation studies cellular nonlinear networks (CNN), a class of neural networks, and their applications. A CNN imitates the way neurons are interconnected and can be viewed as an array of analog processing units well suited to image processing. Although present-day digital processors reach clock rates of several GHz, processing an image pixel by pixel still requires considerable time, whereas the parallel operation of a CNN array achieves high-speed computation. Based on the characteristics, strengths, and weaknesses of such networks, two designs are realized with analog circuits: 1. The design and analysis of the core of a programmable large-neighborhood CNN universal machine. 2. The design and analysis of a learnable ratio-memory CNN without a leakage period and a ratio-memory CNN with recursive learning. Current CNN universal machines can process only 3x3 templates, i.e., coefficients couple only adjacent pixels. The main idea of the large-neighborhood CNN (LNCNN) is that extending the coupling to more distant cells increases the capability of the network. Other groups have decomposed LNCNN templates into several 3x3 templates to realize the same functions; an LNCNN that completes a large-neighborhood function in a single step therefore saves processing time and power. Because an LNCNN is a large array, the circuit design focuses on power consumption and chip area, and a propagating-connection architecture is used to realize the LNCNN functions. Many LNCNN templates are realized in simulation. The designed LNCNN array is 20x20, the chip size is 1543 um x 1248 um, and the power consumption is only 0.7 mW on standby and 18 mW in normal operation at a clock frequency of 20 MHz; a human visual-illusion template is verified in measurement. The purpose of the learnable ratio-memory CNN is to learn various patterns and recover noisy versions of them. The correlation between two pixels is stored on a capacitor in the ratio memory, and the leakage of the capacitor, normally a drawback, is exploited to enhance the correlations; the template around each pixel is normalized, hence the name ratio memory, which raises the recognition rate. However, because cells differ, enhancing the correlations with a single fixed leakage time may destroy a correlation or enhance it insufficiently, so each correlation is instead kept or discarded by comparing it with the mean of the correlations around the pixel; this removes the divider and simplifies the ratio-memory CNN. In addition, probability and statistics show that a threshold template is necessary, and recursive learning is proposed to gather the statistics of the noise and of the recognition results for this threshold, further increasing the recognition rate. The main contributions of this dissertation are: a complete LNCNN architecture realized with simple circuits, achieving small area and low power, and verified by measurement on binary image operations; a learnable ratio-memory CNN that needs no leakage period, which simplifies the circuits and eases realization; and a statistical model of the learnable ratio-memory CNN, from which threshold-template learning is derived to further improve the recognition rate. | zh_TW
dc.description.abstract | This dissertation focuses on the study and applications of cellular neural/nonlinear networks (CNN). A CNN is an analog processing-unit array that imitates the operation of neural connections and is well suited to image processing. Although recent digital CPUs run at clock rates above several GHz, processing an image pixel by pixel still takes a great deal of time; the parallel processing of a CNN array is therefore exploited to achieve high-speed operation. According to the properties of CNN, two major topics are realized with analog circuit design: I. The design and analysis of a CMOS low-power, large-neighborhood CNN with propagating connections. II. The design and analysis of a ratio-memory CNN. At present, the CNN universal machine (CNNUM) supports only 3x3 templates, i.e., correlations between nearest-neighbor cells. The main concept of the large-neighborhood CNN (LNCNN) is to extend these correlations to more distant cells and thus increase the capability of the CNN. Some studies have decomposed LNCNN templates into several 3x3 templates to realize the same functions, but this raises the cost of each LNCNN operation; a dedicated LNCNN for templates larger than 3x3 is therefore needed. Because an LNCNN is a very large array, power consumption and chip area are considered first. With propagating connections, the LNCNN functions are realized by the designed 20x20 LNCNN array; the chip size is 1543 um x 1248 um, and the power consumption is 0.7 mW on standby and 18 mW in operation at a system clock frequency of 20 MHz. The purpose of the learnable ratio-memory CNN is to learn patterns of any kind and to recover noisy versions of the learned patterns. The concept is to store the correlation of two neighboring cells on a capacitor in the ratio memory and to use the intrinsic leakage to enhance the common characteristics. The templates are normalized by the correlations with neighboring cells to increase the recognition rate, hence the name ratio memory. However, because any two cells differ, applying the same elapsed leakage time to enhance the characteristics may leave only the self-feedback term or enhance the common characteristics too little. Hence, each template value is decided from the correlation and the mean of the four correlations around the cell, which makes the design much simpler and removes the divider. Besides, a derivation from probability and statistics shows that a DC term exists in addition to the templates: a threshold template is required, and it is learned by recursive learning, which gathers the statistics of the noisy patterns to increase the recognition rate. The main contribution of this dissertation is that a complete LNCNN architecture has been established and realized with a simple circuit design; a small-size, low-power LNCNN chip has been fabricated and measured, and the experimental results show that it can be applied to binary image processing. Moreover, the statistical and probabilistic models of the learnable ratio-memory CNN have been derived and, according to the results, learning of the threshold templates is used to increase the recognition rate. Furthermore, a learnable ratio-memory CNN without an elapsed leakage time has been proposed to simplify the circuits for realization. | en_US
dc.language.iso | en_US | en_US
dc.subject | Large-neighborhood cellular nonlinear networks | zh_TW
dc.subject | Ratio-memory cellular nonlinear networks | zh_TW
dc.subject | Cellular nonlinear networks | zh_TW
dc.subject | Propagating connections | zh_TW
dc.subject | Recursive learning | zh_TW
dc.subject | Large-neighborhood CNN | en_US
dc.subject | Ratio-memory CNN | en_US
dc.subject | cellular nonlinear networks | en_US
dc.subject | propagating connections | en_US
dc.subject | recursive learning | en_US
dc.title | The Design and Analysis of Large-Neighborhood Cellular Nonlinear Networks and Ratio-Memory Cellular Nonlinear Networks | zh_TW
dc.title | The Design and Analysis of Large-Neighborhood Cellular Nonlinear Networks and Ratio-Memory Cellular Nonlinear Networks | en_US
dc.type | Thesis | en_US
dc.contributor.department | Institute of Electronics | zh_TW
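The template operation and ratio-memory normalization summarized in the abstracts can be sketched as a discrete-time CNN update: each cell combines its neighbors' outputs through a feedback template A, the inputs through a control template B, and a threshold z, while the ratio memory divides each stored correlation by the sum of the correlation magnitudes around the cell. The following is a minimal NumPy sketch under assumptions: the classic 3x3 edge-detection template from the standard CNN template library is used as an illustration, and the function names (`cnn_step`, `ratio_weights`) are hypothetical, not from the dissertation, whose LNCNN templates are larger and not reproduced here.

```python
import numpy as np

def cnn_step(x, u, A, B, z):
    """One discrete-time CNN update over the whole array:
    x'[i,j] = z + sum(A * y_patch) + sum(B * u_patch),
    where y = sat(x) is the standard piecewise-linear output."""
    y = np.clip(x, -1.0, 1.0)            # CNN output nonlinearity
    r = A.shape[0] // 2                  # neighborhood radius (1 for 3x3)
    n = A.shape[0]
    yp = np.pad(y, r, mode="edge")
    up = np.pad(u, r, mode="edge")
    x_new = np.full_like(x, float(z))
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            x_new[i, j] += (A * yp[i:i+n, j:j+n]).sum() \
                         + (B * up[i:i+n, j:j+n]).sum()
    return x_new

def ratio_weights(w):
    """Simplified ratio-memory normalization: each stored correlation
    is divided by the sum of the correlation magnitudes around the
    cell, so the common features dominate after learning."""
    s = np.abs(w).sum()
    return w / s if s > 0 else w

# Classic 3x3 edge-detection template (illustrative assumption).
A = np.zeros((3, 3)); A[1, 1] = 1.0
B = np.full((3, 3), -1.0); B[1, 1] = 8.0
z = -1.0

u = np.full((4, 4), -1.0)                # uniform "white" input image
x = cnn_step(u.copy(), u, A, B, z)
# A uniform input contains no edges, so every cell settles toward the
# white state (output saturates at -1).
```

A full simulation would iterate `cnn_step` until the state converges; the sketch shows a single step only, which is enough to see how the A/B templates and the threshold enter the computation.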
Appears in Collections: Theses


Files in This Item:

  1. 182101.pdf
