Full metadata record
DC Field    Value    Language
dc.contributor.author    Peng, Shih-Jung    en_US
dc.contributor.author    Chen, Deng-Jyi    en_US
dc.date.accessioned    2014-12-08T15:13:28Z    -
dc.date.available    2014-12-08T15:13:28Z    -
dc.date.issued    2007    en_US
dc.identifier.isbn    978-1-4244-0982-2    en_US
dc.identifier.uri    http://hdl.handle.net/11536/10411    -
dc.description.abstract    Application systems that use recognition technologies such as speech recognition provide human-machine interfaces that help people operate devices more easily, and assist those who are physically unable to interact with computers through traditional input devices such as a mouse or keyboard. Speech recognition technology is widely used between device interfaces and humans. The common approach to adding speech recognition to a device is a low-level programmed wrapper, which requires obtaining the system's source code and having programming knowledge. Moreover, the speech commands are pre-defined for a particular application system, so users cannot set or modify the commands as they wish; even for the designer, adding or deleting speech commands is difficult and time-consuming. In this research, we provide a generic interfacing framework under which users can set or modify speech commands conveniently and easily, without detailed system code, system design, or programming knowledge. After the end user sets speech commands, the interface stores them in a database. When the end user speaks a command through the interface, the interface analyzes and recognizes it, and then interacts directly with the application system by calling pre-built API functions that control mouse movement and keyboard pressing. The proposed system can be applied to GUI-based commercial software without access to its internal code. After installing and executing the proposed interfacing framework under a Windows environment, the user can interact with most application systems through speech commands that control mouse moving, mouse jumping, mouse clicking, keyboard pressing, and compound keyboard pressing, just as is normally done with physical devices. Finally, we present examples that demonstrate the applicability and feasibility of the proposed interfacing framework.    en_US
dc.language.iso    en_US    en_US
dc.subject    speech recognition    en_US
dc.subject    recognizer    en_US
dc.subject    speech interface    en_US
dc.subject    e-learning    en_US
dc.title    A generic interface methodology for bridging application systems and speech recognizers    en_US
dc.type    Proceedings Paper    en_US
dc.identifier.journal    2007 6TH INTERNATIONAL CONFERENCE ON INFORMATION, COMMUNICATIONS & SIGNAL PROCESSING, VOLS 1-4    en_US
dc.citation.spage    25    en_US
dc.citation.epage    29    en_US
dc.contributor.department    資訊工程學系    zh_TW
dc.contributor.department    Department of Computer Science    en_US
dc.identifier.wosnumber    WOS:000256699500006    -
Appears in Collections: Conference Papers
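The abstract describes an interface that, once a spoken command is recognized, drives the target application by synthesizing mouse and keyboard events. The paper's own implementation is not reproduced in this record; the following is a minimal C sketch of that idea using the standard Win32 SetCursorPos and SendInput APIs. The command mappings ("copy", "click OK") and screen coordinates are hypothetical, chosen only to illustrate the mouse-jumping, mouse-clicking, and compound-keypress actions the abstract names.

/* Minimal sketch (not the authors' code): inject mouse and keyboard
 * events on Windows after a speech command has been recognized. */
#include <windows.h>

/* "Mouse jumping" + "mouse clicking": move the cursor to absolute
 * screen coordinates, then synthesize a left-button click. */
static void click_at(int x, int y)
{
    SetCursorPos(x, y);

    INPUT in[2] = {0};
    in[0].type = INPUT_MOUSE;
    in[0].mi.dwFlags = MOUSEEVENTF_LEFTDOWN;  /* press left button   */
    in[1].type = INPUT_MOUSE;
    in[1].mi.dwFlags = MOUSEEVENTF_LEFTUP;    /* release left button */
    SendInput(2, in, sizeof(INPUT));
}

/* "Compound keyboard pressing": hold Ctrl, tap a key, release Ctrl. */
static void send_ctrl_key(WORD vk)
{
    INPUT in[4] = {0};
    in[0].type = INPUT_KEYBOARD; in[0].ki.wVk = VK_CONTROL;
    in[1].type = INPUT_KEYBOARD; in[1].ki.wVk = vk;
    in[2].type = INPUT_KEYBOARD; in[2].ki.wVk = vk;
    in[2].ki.dwFlags = KEYEVENTF_KEYUP;
    in[3].type = INPUT_KEYBOARD; in[3].ki.wVk = VK_CONTROL;
    in[3].ki.dwFlags = KEYEVENTF_KEYUP;
    SendInput(4, in, sizeof(INPUT));
}

int main(void)
{
    /* Hypothetical mapping: the recognized command "copy" presses
     * Ctrl+C in whatever application currently has focus. */
    send_ctrl_key('C');

    /* Hypothetical mapping: "click OK" jumps to a screen position
     * the end user stored for that command and clicks there. */
    click_at(640, 480);
    return 0;
}

Because the events go through the operating system's input queue rather than the target program's internals, this style of injection works against GUI-based commercial software without access to its source code, which is the property the abstract claims for the proposed framework.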