Full metadata record
DC Field | Value | Language
dc.contributor.author | 林珊如 | zh_TW
dc.contributor.author | LIN SUNNY S.J. | en_US
dc.date.accessioned | 2016-03-28T08:17:26Z | -
dc.date.available | 2016-03-28T08:17:26Z | -
dc.date.issued | 2015 | en_US
dc.identifier.govdoc | NSC102-2511-S009-005-MY3 | zh_TW
dc.identifier.uri | http://hdl.handle.net/11536/130011 | -
dc.identifier.uri | https://www.grb.gov.tw/search/planDetail?id=11259847&docId=452189 | en_US
dc.description.abstract | In this electronic era, in which online instruction is widespread and multimedia combinations of text and images are common teaching materials, course content is presented as multimedia mixing text and graphics, whether one examines Udacity's free university-level online courses (the cover story of The Economist in December 2012: courses offered voluntarily by professors from leading American universities, with excellent lectures, detailed handouts, and rigorous examinations) or online materials for junior and senior high school. We also analyzed the sample items of PISA 2009 (which assesses the reading, mathematics, and science literacy of 15-year-old students, who are about to complete compulsory education and constitute the basic entry-level workforce). Among the reading literacy items, 60% of the texts contained graphics; of the remaining 40% that were plain text, half presented a graphic in the multiple-choice options (that is, 80% of the reading items required integrated text-graphic comprehension in either the passage or the question). Of the mathematics literacy sample items, 100% paired the text with graphics; of the science literacy sample items, 80% did. Integrated text-graphic comprehension is thus a basic competency that countries worldwide expect of junior and senior high school students (the entry-level workforce).

This subproject belongs to the integrated project "Junior and Senior High School Students' Reading Comprehension of Various Internet Image-Text Materials and Related Problem Solving: Learning Motivation, Online-Offline Cognitive Processes, and Gaze-Driven Scaffolds." Cooperation with the other six subprojects includes a shared thematic direction and the joint development and adoption of the platform system. This subproject will analyze junior high school students' reading processes (especially text-image cross-reference and integration) and learning outcomes when reading science image-text materials online. The three years are described in turn below.

The aim of the first year is to analyze junior high school students' text-image integration with digital texts, together with instruction and eye-movement data collection. In the instructional experiment, five classes of junior high school students will read a science text with accompanying graphics; the planned topic is human blood circulation (or sound production or polar temperatures, pending further evaluation). The graphics form three conditions: a topographical diagram, a topological (logical) diagram, or no diagram. The text comes in two lengths. The long version, the complete cross-reference text, describes the structure and functions of the circulatory system with every line of text corresponding to the accompanying diagram, so that the verbal description fully matches the graphic. The short version, the incomplete cross-reference text, randomly removes two concepts from the long version's descriptions of the diagram and comes in two forms (A and B). Students will be assigned to five groups according to the configuration of text and graphic materials. The dependent variables include process data (eye-movement data and cognitive load measures) and learning outcomes (factual items, inference items, and mental models). In addition, 30 students with high or low prior science knowledge will be invited to take part in the eye-movement study. The first-year research questions are: (1) Graphic effect: do the three groups (two diagram types versus no diagram) differ in eye-movement data, cognitive load, and learning outcomes (factual items, inference items, and mental models)? Do students fixate all the main ideas in the two diagram types, and in what order? (2) Text cross-reference effect: what effects do the complete and incomplete cross-reference texts have on the dependent variables? Do students fixate all the main ideas in the two text versions? (3) Text-image integration strategy: do students read text and graphics in particular sequential patterns, and how many text-image integration strategies appear in digital reading?

The aim of the second year is for students to take digital notes with an annotation system, divided into written notes and drawn notes, while reading the digital science text and graphics; it includes note-taking instruction and eye-movement data collection, in cooperation with Subprojects 2, 4, and 5. The topological diagram and the expository text will be slightly revised according to the first-year results. Four groups will be compared: topological diagram with complete cross-reference text followed by written notes; topological diagram with complete cross-reference text followed by drawn notes; topological diagram with incomplete cross-reference text followed by drawn notes; and complete text without a diagram followed by drawn notes, with the topological diagram provided as feedback only afterwards. The research questions are: (1) develop a drawing-note system that can be connected to the eye tracker and field-test its usability; (2) analyze the interaction between the cross-reference factor and the note-taking factor; (3) eye-movement data: do students attend to all the main ideas while reading the text and graphics? Do written notes and drawn notes yield different eye-movement patterns? In particular, between which locations do the eyes shuttle while reading the diagram and making drawn notes?

The goal of the third year is to develop a gaze-driven learning support system and test it in instruction, in cooperation with most of the subprojects. The main ideas of the complete cross-reference text will be marked as n areas of interest (AOI-text 1 to n) and the main ideas of the topological diagram as m areas of interest (AOI-image 1 to m). Building on the results of the previous two years (the duration and frequency of fixations on the main ideas of the text and the diagram), the gaze-driven learning support system will be designed. To test the system, 30 participants of high and low ability will read the topological diagram and the complete cross-reference text. If, within a set period after a student begins reading the text (to be computed from the eye-movement data of low-performing learners in the previous two years, provisionally assumed to be 60 seconds), the student has not fixated (>300 ms) a given main idea of the text (e.g., AOI-text 4), a cue for that idea will be triggered (a suitable form will be chosen from boldface, yellow highlighting, blinking, or a verbal hint). Likewise, if a student has not fixated a main idea of the diagram within a set period after beginning to read it, a cue will be shown. If the first two years yield sufficiently precise information about gaze shuttling between text and graphics, coordinated text-graphic cues will also be designed (e.g., a reminder that too little time has been spent on the diagram). Whether to provide an on/off switch for the gaze-driven support will also be examined. | zh_TW
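The first- and second-year questions above repeatedly ask how readers' gazes shuttle between the text and the accompanying diagram. As a rough illustration only, the sketch below shows one common way such shuttling could be quantified from AOI-coded fixations, by counting direct gaze transitions between text AOIs and image AOIs; the AOI labels, the input format, and the transition count itself are assumptions for illustration, not the project's specified analysis.

# Illustrative sketch (assumed names and data format, not the project's code):
# count direct gaze transitions between text AOIs and image AOIs as a rough
# index of text-image integration during digital reading.

from typing import List


def region(aoi_label: str) -> str:
    """Map an AOI label such as 'AOI-text-3' or 'AOI-image-1' to its region."""
    if aoi_label.startswith("AOI-text"):
        return "text"
    if aoi_label.startswith("AOI-image"):
        return "image"
    return "outside"


def integration_transitions(fixated_aois: List[str]) -> int:
    """Count switches between a text AOI and an image AOI (either direction)."""
    regions = [region(a) for a in fixated_aois]
    return sum(1 for prev, cur in zip(regions, regions[1:])
               if {prev, cur} == {"text", "image"})


if __name__ == "__main__":
    # Hypothetical scanpath: the reader alternates between text and diagram.
    scanpath = ["AOI-text-1", "AOI-text-2", "AOI-image-1", "AOI-text-2",
                "AOI-image-1", "AOI-image-2", "AOI-text-3"]
    print(integration_transitions(scanpath))  # prints 4

In the project itself, any such sequence measure would presumably come from the eye tracker's AOI reports rather than a hand-coded list; the snippet only makes the idea of a text-image shuttling index concrete.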
dc.description.abstract | As a subproject in a group of seven, I concentrate on one particular situation: in reading digital materials, the process and performance of reading images accompanied by written text, and the integrative sense-making across the cross-references between image and text.

The first aim (for the first year) of this study is to investigate junior high school students' processes and outcomes in reading images along with digital science texts. There will be an instructional experiment and a separate study to collect eye data. In the instructional experiment, five groups of participants will be invited to read texts in webpage format describing the structure and functions of the human circulatory system (人體血液循環系統組織架構與路徑功能). The texts will be accompanied by one of two types of image (a "topographical diagram 實物圖" or a "topological diagram 邏輯示意圖") or by no image. There will also be two digital text versions: the long version contains verbal descriptions all relevant to the image (complete cross-reference text), while the two short versions randomly omit two descriptions of important graphical elements (incomplete cross-reference texts). The reading process (eye tracking, cognitive load appraisal) and learning outcomes (comprehension, inference, and mental model) will be recorded and compared. A separate study will collect eye data from 30 students, including participants with high and with low prior knowledge. The issues to be investigated in the first year are: (1) Graphic effect: does reading the two diagram types versus no diagram produce different effects on cognitive load, comprehension, inference, mental model, and eye data? Do students attend to all of the main ideas in the two diagrams? (2) Text cross-reference effect: does reading texts with different degrees of cross-reference produce different effects on the dependent variables? Do students attend to all of the main ideas in the two digital texts? (3) Text-image integration strategy: what is the gaze sequence across text and image? Do the process and outcome data show that readers adopt text-image integration strategies in digital reading?

The aim of the second year, in cooperation with Subprojects 2, 4, and 5, is to investigate drawing with an electronic writing pad while reading digital science materials. There will be an instructional experiment and a study to collect eye data. The issues to be investigated in the second year are: (1) the design of a user-friendly digital drawing-note tool that can be connected to the eye tracker; (2) the interaction between the cross-reference factor and the annotation factor; (3) eye data: while annotating, do students attend to all of the main ideas in the text and the diagram? Do they show different eye patterns when making written notes and drawn notes? Do they show particular reading-noting integration patterns for each kind of note?

For the third year, I will cooperate with all peer subprojects to design various types of innovative gaze-driven learning supports or scaffolds. In my project, a reader's eye movements will be scanned and taken as input, and a reading support mechanism will decide whether the reader needs help. Based on the eye data collected in the previous two years and the current reader's gazes, if the reader cannot effectively attend to the main ideas in the text or the image within a certain period, the mechanism will activate a highlight function to show the main ideas the reader has not attended to. The main ideas in a text can be plotted as areas of interest (AOI-text 1 to n), as can the main ideas in an image (AOI-image 1 to m), and the reader's eye behaviors can be recorded with the eye tracker. If the gazes do not fall into one of the text or image AOIs, the system can provide orderly scaffolds to support reading. If a reader shows poor image-reading competence, for example spending an inadequate proportion of time on the image that accompanies the written text, the system can prompt short instructions to help the reader overcome the reading gap. | en_US
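The third-year mechanism is specified above only at the level of AOIs, a fixation threshold, and a timed cue. The following minimal sketch illustrates how such a gaze-driven check could work, taking the values mentioned in the abstract (a 60-second window and fixations longer than 300 ms) as provisional thresholds; the class names, the AOI layout, and the simplified dwell-based fixation criterion are assumptions for illustration, not the project's implementation.

# Minimal sketch of a gaze-driven reading support (illustrative assumptions only:
# the AOI layout, thresholds, and cueing step are placeholders, not the project's
# actual system). After 60 s of reading, any main idea never looked at for at
# least 300 ms is cued.

from dataclasses import dataclass
from typing import List

FIXATION_MS = 300        # dwell needed to count a main idea as attended (provisional)
CHECK_AFTER_MS = 60_000  # check for unattended main ideas after 60 s (provisional)


@dataclass
class AOI:
    """A rectangular area of interest for a main idea in the text or the image."""
    name: str              # e.g. "AOI-text-4" or "AOI-image-2"
    x: float
    y: float
    w: float
    h: float
    dwell_ms: float = 0.0  # accumulated gaze time inside this AOI

    def contains(self, gx: float, gy: float) -> bool:
        return self.x <= gx <= self.x + self.w and self.y <= gy <= self.y + self.h


@dataclass
class GazeSample:
    t_ms: float    # time since reading started
    x: float
    y: float
    dur_ms: float  # duration the tracker attributes to this sample


def accumulate(aois: List[AOI], samples: List[GazeSample]) -> None:
    """Add each gaze sample's duration to the first AOI it falls in, if any."""
    for s in samples:
        for aoi in aois:
            if aoi.contains(s.x, s.y):
                aoi.dwell_ms += s.dur_ms
                break


def unattended_main_ideas(aois: List[AOI], elapsed_ms: float) -> List[str]:
    """After the check window has passed, list AOIs still below the dwell threshold."""
    if elapsed_ms < CHECK_AFTER_MS:
        return []
    return [a.name for a in aois if a.dwell_ms < FIXATION_MS]


def cue(aoi_name: str) -> None:
    """Placeholder for the cueing step (boldface, yellow highlight, blinking, or a hint)."""
    print(f"Highlight {aoi_name}: this main idea has not been attended to yet.")


if __name__ == "__main__":
    # Hypothetical layout: four text AOIs and two image AOIs.
    aois = [AOI(f"AOI-text-{i}", 50, 80 * i, 400, 60) for i in range(1, 5)]
    aois += [AOI(f"AOI-image-{j}", 500, 120 * j, 200, 100) for j in range(1, 3)]

    # Fake gaze stream: the reader dwells only on AOI-text-1 and AOI-image-1.
    samples = [GazeSample(t, 60, 90, 50) for t in range(0, 2000, 50)]
    samples += [GazeSample(t, 550, 150, 50) for t in range(2000, 4000, 50)]

    accumulate(aois, samples)
    for name in unattended_main_ideas(aois, elapsed_ms=61_000):
        cue(name)

As the abstract states, the real thresholds would be derived from the eye data of low-performing learners in the first two years, and the cue format (boldface, yellow highlighting, blinking, or a verbal hint) would be selected empirically.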
dc.description.sponsorship | Ministry of Science and Technology (科技部) | zh_TW
dc.language.iso | zh_TW | en_US
dc.subject | digital reading | zh_TW
dc.subject | graphic reading | zh_TW
dc.subject | text-image integration strategy | zh_TW
dc.subject | eye-movement research | zh_TW
dc.subject | gaze-driven learning support | zh_TW
dc.subject | reading digital text | en_US
dc.subject | graphic reading | en_US
dc.subject | text-image integration | en_US
dc.subject | eye tracking | en_US
dc.subject | gaze | en_US
dc.title | Junior and Senior High School Students' Reading Comprehension of Various Internet Image-Text Materials and Related Problem Solving: Learning Motivation, Online-Offline Cognitive Processes, and Gaze-Driven Scaffolds. Subproject 1: Digital Reading and Annotation of Science Image-Text with Gaze-Driven Support: Knowledge Acquisition Processes and Learning Outcomes | zh_TW
dc.title | Reading, Annotating Digitized Science Image-Text for Solving Problems with Gaze Driven Supports: Knowledge Acquisition Process and Outcomes | en_US
dc.type | Plan | en_US
dc.contributor.department | 國立交通大學教育研究所 (Institute of Education, National Chiao Tung University) | zh_TW
Appears in Collections: Research Plans