Journal Articles
2 articles found
1. A hyperspectral imaging model for estimating soil organic carbon that accounts for surface roughness (Cited by: 1)
Authors: XU Lu, CHEN Yiyun, HONG Yongsheng, WEI Yu, GUO Long, Marc Linderman. Spectroscopy and Spectral Analysis (SCIE, EI, CAS, CSCD, PKU Core), 2022, No. 9, pp. 2788-2794 (7 pages)
Abstract: Visible and near-infrared (Vis-NIR) non-imaging spectroscopy is widely used to estimate soil organic carbon (SOC) content, but the technique is sensitive to soil surface roughness and requires extensive sample pretreatment, which limits the practicality of the resulting models. To address this problem, farmland soils from Iowa, USA were studied: Vis-NIR reflectance spectra of the soil samples were acquired with both imaging and non-imaging spectrometers before and after grinding. Five spectral preprocessing methods were applied: continuum removal (CR), absorbance transformation (AB), Savitzky-Golay smoothing (SG), standard normal variate (SNV), and multiplicative scatter correction (MSC). Partial least squares regression (PLSR) and support vector regression (SVR) were then used to build and compare SOC estimation models and to test the feasibility of estimating SOC content in high-roughness samples from imaging spectral data. The results show that imaging spectral data can estimate SOC content in high-roughness samples, whereas non-imaging spectral data cannot: for high-roughness samples, the best PLSR model built from imaging spectra reached R^2 = 0.739 and the best SVR model R^2 = 0.712, while the best PLSR and SVR models built from non-imaging spectra reached only R^2 = 0.344 and 0.311, respectively. With the AB, SG, SNV, and MSC preprocessing methods, PLSR models built from imaging spectra of unground samples outperformed those built after grinding, while the SVR models showed the opposite pattern. For non-imaging spectra, PLSR and SVR models built after grinding were always more accurate than those built before grinding. Across the two spectral data types and the two algorithms, the preprocessing methods differed in how much they improved estimation accuracy. Both before and after grinding, PLSR and SVR models built from imaging spectra outperformed those built from non-imaging spectra. Imaging spectroscopy strengthens the correlation between Vis-NIR spectra and SOC for high-roughness samples, thereby improving estimation accuracy, overcoming the effect of soil roughness, and providing a new means for large-scale field estimation of SOC content.
Keywords: imaging spectroscopy; soil roughness; visible and near-infrared spectroscopy; spectral preprocessing; soil organic carbon
2. Identifying disaster-related tweets and their semantic, spatial and temporal context using deep learning, natural language processing and spatial analysis: a case study of Hurricane Irma (Cited by: 2)
Authors: Muhammed Ali Sit, Caglar Koylu, Ibrahim Demir. International Journal of Digital Earth (SCIE, EI), 2019, No. 11, pp. 1205-1229 (25 pages)
Abstract: We introduce an analytical framework for analyzing tweets to (1) identify and categorize fine-grained details about a disaster such as affected individuals, damaged infrastructure and disrupted services; and (2) distinguish impact areas and time periods, and the relative prominence of each category of disaster-related information across space and time. We first identify disaster-related tweets by generating a human-labeled training dataset and experimenting with a series of deep learning and machine learning methods for a binary classification of disaster-relatedness. We employ LSTM (Long Short-Term Memory) networks for the classification task because LSTM networks outperform other methods by considering the whole text structure using long-term semantic word and feature dependencies. Second, we employ an unsupervised multi-label classification of tweets using Latent Dirichlet Allocation (LDA), and identify latent categories of tweets such as affected individuals and disrupted services. Third, we employ spatially adaptive kernel smoothing and density-based spatial clustering to identify the relative prominence and impact areas for each information category, respectively. Using Hurricane Irma as a case study, we analyze a keyword-based and geo-located collection of over 500 million tweets before, during and after the disaster. Our results highlight potential areas with a high density of affected individuals and infrastructure damage throughout the temporal progression of the disaster.
Keywords: social sensing; Twitter; deep learning; natural language processing; spatial analysis; hurricane
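The third step of the framework, density-based spatial clustering of geo-located tweets into candidate impact areas, can be sketched with DBSCAN. The coordinates, `eps`, and `min_samples` values below are illustrative assumptions, not parameters from the study.

```python
# Sketch of density-based spatial clustering of tweet coordinates (lon, lat):
# dense groups of geo-located tweets become candidate impact areas, sparse
# background tweets are marked as noise.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)

# Two synthetic "impact" hot spots plus scattered background tweets.
hotspot_a = rng.normal([-80.2, 25.8], 0.05, (60, 2))   # around Miami
hotspot_b = rng.normal([-81.7, 30.3], 0.05, (40, 2))   # around Jacksonville
background = rng.uniform([-87.6, 24.5], [-80.0, 31.0], (30, 2))
points = np.vstack([hotspot_a, hotspot_b, background])

# eps is in degrees here (~0.15 deg, roughly 15 km of latitude); real work
# would project to a metric CRS or use haversine distances instead.
labels = DBSCAN(eps=0.15, min_samples=10).fit_predict(points)
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print(f"{n_clusters} impact clusters, {int((labels == -1).sum())} noise points")
```

DBSCAN's noise label (`-1`) is what makes it suitable here: isolated tweets do not get forced into an impact area, unlike with k-means.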