Full metadata record
DC Field: Value (Language)
dc.contributor.author: Yang, CT (en_US)
dc.contributor.author: Tseng, SS (en_US)
dc.contributor.author: Fann, YW (en_US)
dc.contributor.author: Tsai, TK (en_US)
dc.contributor.author: Hsieh, MH (en_US)
dc.contributor.author: Wu, CT (en_US)
dc.date.accessioned: 2014-12-08T15:44:10Z
dc.date.available: 2014-12-08T15:44:10Z
dc.date.issued: 2001-03-01 (en_US)
dc.identifier.issn: 1532-0626 (en_US)
dc.identifier.uri: http://dx.doi.org/10.1002/cpe.563 (en_US)
dc.identifier.uri: http://hdl.handle.net/11536/29833
dc.description.abstract: The main function of parallelizing compilers is to analyze sequential programs, in particular the loop structure, to detect hidden parallelism and automatically restructure sequential programs into parallel subtasks that are executed on a multiprocessor. This article describes the design and implementation of an efficient parallelizing compiler to parallelize loops and achieve high speedup rates on multiprocessor systems. It is well known that the execution efficiency of a loop can be enhanced if the loop is executed in parallel or partially in parallel, as in a DOALL or DOACROSS loop. This article also reviews a practical parallel loop detector (PPD), implemented in our PFPC, for finding parallelism in loops. The PPD can extract the potential DOALL and DOACROSS loops in a program by verifying array subscripts. In addition, a new model using a knowledge-based approach is proposed in this paper to exploit more loop parallelism. The knowledge-based approach integrates existing loop transformations and loop scheduling algorithms to make good use of their ability to extract loop parallelism. Two rule-based systems, called KPLT and IPLS, are then developed using repertory grid analysis and attribute-ordering tables, respectively, to construct the knowledge bases. These systems can choose an appropriate transform and loop schedule, and then apply the resulting methods to perform loop parallelization and obtain a high speedup rate. For example, the IPLS system can choose an appropriate loop schedule for running on multiprocessor systems. Finally, a runtime technique based on the inspector/executor scheme is proposed in this article for finding available parallelism in loops. Our inspector can determine the wavefronts of a loop with any complex indirect array-indexing pattern by building a DEF-USE table. The inspector is fully parallel, without any synchronization. Experimental results show that the new method can resolve complex data dependence patterns that no previous method could. One of the ultimate goals is to construct a high-performance and portable FORTRAN parallelizing compiler on shared-memory multiprocessors. We believe that our research may provide more insight into the development of a high-performance parallelizing compiler. Copyright (C) 2001 John Wiley & Sons, Ltd. (en_US)
dc.language.iso: en_US (en_US)
dc.subject: parallelizing compiler (en_US)
dc.subject: knowledge-based system (en_US)
dc.subject: loop parallelization (en_US)
dc.subject: multithreaded OS (en_US)
dc.subject: program restructuring (en_US)
dc.title: Using knowledge-based systems for research on parallelizing compilers (en_US)
dc.type: Article (en_US)
dc.identifier.doi: 10.1002/cpe.563 (en_US)
dc.identifier.journal: CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE (en_US)
dc.citation.volume: 13 (en_US)
dc.citation.issue: 3 (en_US)
dc.citation.spage: 181 (en_US)
dc.citation.epage: 208 (en_US)
dc.contributor.department: 資訊工程學系 (zh_TW)
dc.contributor.department: Department of Computer Science (en_US)
dc.identifier.wosnumber: WOS:000168248400002
dc.citation.woscount: 2
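The abstract above describes an inspector/executor runtime scheme that assigns loop iterations to wavefronts by tracking definitions and uses of indirectly indexed array elements. As a rough illustration only, here is a minimal, sequential C sketch of that idea. The names (inspector, executor, last_touch, wave) and the toy loop body are assumptions for illustration; the paper's actual inspector is fully parallel and builds a DEF-USE table, whereas this sketch uses a simplified "last wavefront to touch each element" table.

/*
 * Simplified inspector/executor sketch for a loop with indirect indexing,
 * roughly:
 *
 *     DO i = 1, n
 *        x(w(i)) = x(r(i)) + 1.0
 *     END DO
 *
 * The inspector assigns each iteration a wavefront number; the executor then
 * runs all iterations of the same wavefront together.  This is an
 * illustrative sketch, not the algorithm from the paper.
 */
#include <stdio.h>
#include <stdlib.h>

static int imax(int a, int b) { return a > b ? a : b; }

/* Inspector: compute wave[i] for every iteration; returns the wavefront count.
 * last_touch[e] remembers the last wavefront that read or wrote x[e]. */
static int inspector(int n, const int *r, const int *w, int m, int *wave)
{
    int *last_touch = calloc((size_t)m, sizeof *last_touch); /* 0 = untouched */
    int nwaves = 0;
    for (int i = 0; i < n; i++) {
        /* Iteration i must follow every earlier iteration that touched
         * x[r[i]] or x[w[i]] (a conservative dependence rule). */
        int wf = 1 + imax(last_touch[r[i]], last_touch[w[i]]);
        wave[i] = wf;
        last_touch[r[i]] = wf;
        last_touch[w[i]] = wf;
        nwaves = imax(nwaves, wf);
    }
    free(last_touch);
    return nwaves;
}

/* Executor: iterations within one wavefront touch disjoint elements of x,
 * so each inner loop could run as a DOALL loop (e.g. an OpenMP parallel for). */
static void executor(int n, const int *r, const int *w,
                     const int *wave, int nwaves, double *x)
{
    for (int wf = 1; wf <= nwaves; wf++) {
        /* #pragma omp parallel for  -- safe: no two iterations here conflict */
        for (int i = 0; i < n; i++)
            if (wave[i] == wf)
                x[w[i]] = x[r[i]] + 1.0;
    }
}

int main(void)
{
    enum { N = 6, M = 5 };
    int r[N] = { 0, 1, 2, 0, 3, 4 };   /* read  indices r(i) */
    int w[N] = { 1, 2, 3, 4, 4, 0 };   /* write indices w(i) */
    double x[M] = { 1, 1, 1, 1, 1 };
    int wave[N];

    int nwaves = inspector(N, r, w, M, wave);
    printf("wavefronts: %d\n", nwaves);
    for (int i = 0; i < N; i++)
        printf("iteration %d -> wavefront %d\n", i, wave[i]);
    executor(N, r, w, wave, nwaves, x);
    return 0;
}

Because the inspector serializes any pair of iterations that share an array element, each wavefront contains only independent iterations, which is what lets the executor treat every wavefront as a DOALL loop.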
Appears in Collections: Journal Articles


Files in This Item:

  1. 000168248400002.pdf

If the file is a zip archive, download and extract it, then open the index.html in the extracted folder with a browser to view the full text.