Title: Cloud Component Removal and Shallow Water Depth Retrieval with Multi-spectral Satellite Image (多光譜衛星影像之雲成分移除及水深反演)
Authors: Tsou, Po-Yao (鄒博堯); Shih, Tian-Yuan (史天元)
Department of Civil Engineering
Keywords: Bathymetry; Cloud Component Removal; Artificial Neural Network; Semi-analytical Model
Date of Issue: 2017
Abstract: Water depth and underwater topography provide important information for nearshore human activities, and with recent developments in the international situation, bathymetric mapping has also become a focus of government attention. With its wide coverage acquired in a short time, optical satellite imagery offers an efficient alternative to traditional field surveys for estimating shallow water depth. On the other hand, cloud and haze contaminate the spectral signatures of water pixels and introduce errors into the retrieved depths. In this research, contaminated water pixels are treated as mixtures of a water component and a cloud component. Linear Spectral Unmixing (LSU) is applied to estimate the cloud abundance in each mixed pixel, the cloud component is then removed with a linear relation, and the "purified" water component is used for depth retrieval. WorldView imagery of Dongsha Atoll, together with airborne LiDAR bathymetry, is used for retrieval and validation. Depth is estimated with two methods: an artificial neural network (ANN) and a semi-analytical physical model solved simultaneously across spectral bands. The former requires in-situ bathymetric samples for training; the latter relies on the site's inherent optical properties (IOPs), derived with the aid of the in-situ depths. The experiments show that both methods are suitable for depth retrieval; the ANN is more accurate than the physical model, though it produces a few errors that are extremely large relative to the physical model. For mixed pixels, the depth error tends to grow as the original cloud abundance increases. After cloud-component removal, depth retrieval is more accurate for mixed pixels on the reef flat along the atoll rim, where depths are within 10 meters, than for mixed pixels in the lagoon toward the atoll interior, where depths are about 20 meters; for the reef-flat pixels, the accuracy is comparable to that obtained from water pixels free of cloud and haze.
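To make the unmixing step concrete, the following is a minimal sketch of two-endmember LSU and linear cloud-component removal. The per-band endmember spectra `e_cloud` and `e_water` are assumed inputs (for example, sampled from thick-cloud and clear-water pixels), and the clipping and renormalization used to approximate the abundance constraints are illustrative, not the thesis's exact procedure.

```python
import numpy as np

def unmix_two_endmembers(pixel, e_cloud, e_water):
    """Estimate cloud and water abundances of one pixel by least squares.
    A fully constrained LSU would enforce non-negativity and sum-to-one
    exactly; here the unconstrained solution is clipped and renormalized."""
    E = np.column_stack([e_cloud, e_water])        # (n_bands, 2) endmember matrix
    a, *_ = np.linalg.lstsq(E, pixel, rcond=None)  # unconstrained abundances
    a = np.clip(a, 0.0, None)
    a = a / a.sum() if a.sum() > 0 else np.array([0.0, 1.0])
    return a                                        # [a_cloud, a_water]

def remove_cloud_component(pixel, a_cloud, e_cloud, eps=1e-6):
    """One plausible linear correction: subtract the estimated cloud
    contribution and rescale the residual toward a pure-water spectrum."""
    a_water = max(1.0 - a_cloud, eps)
    return (pixel - a_cloud * e_cloud) / a_water
```

A pixel with estimated cloud abundance near zero is left essentially unchanged, while strongly contaminated pixels are corrected and rescaled more aggressively.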
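For the neural-network branch, a minimal regression sketch is shown below, assuming the inputs are cloud-corrected band values and the targets are co-located airborne LiDAR depths. The network size, activation, and the use of scikit-learn's MLPRegressor are stand-ins, since the abstract does not specify the thesis's architecture or training scheme.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def train_depth_ann(X, y):
    """X: (n_samples, n_bands) cloud-corrected reflectances;
    y: (n_samples,) reference depths from airborne LiDAR.
    Hidden-layer sizes and solver settings are illustrative only."""
    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(16, 8), activation="tanh",
                     max_iter=5000, random_state=0),
    )
    model.fit(X, y)
    return model

# Usage with hypothetical arrays:
# depths = train_depth_ann(X_train, y_train).predict(X_mixed)
```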
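The simultaneous model-solution approach can be illustrated with a Lee-type semi-analytical forward model inverted for depth by least squares over all bands. This is a sketch under stated assumptions: the empirical constants are those commonly published for this family of models, the geometry is reduced to subsurface reflectance with fixed viewing angles, and the thesis's exact formulation, IOP values, and bottom-albedo handling are not given in the abstract.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def rrs_shallow(H, a, bb, rho_bottom, theta_w=0.0, theta_v=0.0):
    """Subsurface remote-sensing reflectance of optically shallow water at
    depth H, given per-band absorption a, backscattering bb, and bottom
    albedo rho_bottom (Lee-type formulation with commonly cited constants)."""
    kappa = a + bb
    u = bb / kappa
    rrs_dp = (0.084 + 0.170 * u) * u            # deep-water reflectance term
    du_c = 1.03 * np.sqrt(1.0 + 2.4 * u)        # water-column path factor
    du_b = 1.04 * np.sqrt(1.0 + 5.4 * u)        # bottom path factor
    inv_cw, inv_cv = 1.0 / np.cos(theta_w), 1.0 / np.cos(theta_v)
    col = rrs_dp * (1.0 - np.exp(-(inv_cw + du_c * inv_cv) * kappa * H))
    bot = (rho_bottom / np.pi) * np.exp(-(inv_cw + du_b * inv_cv) * kappa * H)
    return col + bot

def invert_depth(rrs_obs, a, bb, rho_bottom, h_max=30.0):
    """Retrieve depth by least-squares matching of observed and modelled
    spectra over all bands, holding the IOPs and bottom albedo fixed."""
    cost = lambda H: np.sum((rrs_shallow(H, a, bb, rho_bottom) - rrs_obs) ** 2)
    res = minimize_scalar(cost, bounds=(0.0, h_max), method="bounded")
    return res.x
```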
URI: http://etd.lib.nctu.edu.tw/cdrfb3/record/nctu/#GT070451276
http://hdl.handle.net/11536/141817
Appears in Collections: Thesis