Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Peng, Shih-Jung | en_US |
dc.contributor.author | Chen, Deng-Jyi | en_US |
dc.date.accessioned | 2014-12-08T15:13:28Z | - |
dc.date.available | 2014-12-08T15:13:28Z | - |
dc.date.issued | 2007 | en_US |
dc.identifier.isbn | 978-1-4244-0982-2 | en_US |
dc.identifier.uri | http://hdl.handle.net/11536/10411 | - |
dc.description.abstract | Application systems that utilize recognition technologies, such as speech recognition, provide human-machine interfaces that help people operate devices more easily, or that assist those who are physically unable to interact with computers through traditional input devices such as a mouse or keyboard. Speech recognition technology is widely used to bridge devices and their human users. The common approach to adding speech recognition functionality to a device is through low-level programmed wrappers; this requires access to the system's source code and programming expertise. Moreover, the speech commands are pre-defined for particular application systems, so users cannot set or modify the commands to suit their needs, and even for the designer, adding or deleting speech commands is difficult and time-consuming. In this research, we provide a generic interfacing framework under which users can set or modify speech commands conveniently and easily, without detailed knowledge of the system's code, design, or programming. After the end user sets speech commands, the interface stores them in a database. When the end user speaks a command, the interface analyzes and recognizes it and then interacts directly with the application system by calling pre-built API functions that control mouse movement and keyboard input. The proposed system can be applied to GUI-based commercial software without access to its internal code. After installing and running the proposed interfacing framework under the Windows environment, users can interact with most application systems through speech commands that control mouse movement, mouse jumping, mouse clicking, single key presses, and compound key presses, just as they would with conventional input. Finally, we present examples to demonstrate the applicability and feasibility of the proposed interfacing framework. | en_US |
dc.language.iso | en_US | en_US |
dc.subject | speech recognition | en_US |
dc.subject | recognizer | en_US |
dc.subject | speech interface | en_US |
dc.subject | e-learning | en_US |
dc.title | A generic interface methodology for bridging application systems and speech recognizers | en_US |
dc.type | Proceedings Paper | en_US |
dc.identifier.journal | 2007 6TH INTERNATIONAL CONFERENCE ON INFORMATION, COMMUNICATIONS & SIGNAL PROCESSING, VOLS 1-4 | en_US |
dc.citation.spage | 25 | en_US |
dc.citation.epage | 29 | en_US |
dc.contributor.department | 資訊工程學系 | zh_TW |
dc.contributor.department | Department of Computer Science | en_US |
dc.identifier.wosnumber | WOS:000256699500006 | - |
Appears in Collections: | Conference Papers |