Funding: Jointly funded by the Canada Foundation for Innovation, the Alberta Economic Development and Trade organization, and the University of Calgary; supported by the Canadian Space Agency.
Abstract: The recently deployed Transition Region Explorer (TREx)-RGB (red-green-blue) all-sky imager (ASI) is designed to capture "true color" images of the aurora and airglow. Because the 557.7 nm green line is usually the brightest emission line in visible auroras, the green channel of a TREx-RGB camera is usually dominated by the 557.7 nm emission. Under this rationale, the TREx mission does not include a dedicated 557.7 nm imager and is designed to use the RGB green-channel data as a proxy for the 557.7 nm aurora. In this study, we present an initial effort to establish the conversion ratio or formula linking the RGB green-channel data to the absolute intensity of 557.7 nm auroras, which is crucial for quantitative uses of the RGB data. We illustrate two approaches: (1) comparison with collocated measurements of green-line auroras from the TREx spectrograph, and (2) comparison with the green-line intensity modeled from realistic electron precipitation flux measurements by low-Earth-orbit satellites, with the aid of an auroral transport model. We demonstrate the procedures and provide initial results for the TREx-RGB ASIs at the Rabbit Lake and Lucky Lake stations. The RGB response is found to be nonlinear. Empirical conversion ratios or formulas between RGB green-channel data and green-line auroral intensity are given and can be applied immediately by TREx-RGB data users. The methodology established in this study will also be applicable to the upcoming SMILE ASI mission, which will adopt a similar RGB camera system in its deployment.
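The abstract reports a nonlinear camera response without giving the fitting procedure. As a purely illustrative sketch (not the authors' method), one common way to derive an empirical nonlinear conversion between green-channel counts and 557.7 nm intensity in Rayleighs is a power-law fit, `I = a * C**b`, obtained by linear least squares in log-log space; all data values below are synthetic.

```python
import math

def fit_power_law(counts, intensities):
    """Fit I = a * C**b by linear least squares in log-log space.

    counts      -- green-channel camera counts (hypothetical units)
    intensities -- co-located 557.7 nm intensities in Rayleighs (synthetic)
    Returns the fitted coefficients (a, b).
    """
    xs = [math.log(c) for c in counts]
    ys = [math.log(i) for i in intensities]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Slope of the log-log regression line gives the exponent b;
    # the intercept gives log(a).
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b

# Synthetic example of a nonlinear response: I = 2.5 * C**0.9
counts = [100, 300, 1000, 3000, 10000]
intensities = [2.5 * c ** 0.9 for c in counts]
a, b = fit_power_law(counts, intensities)
```

In practice the fit would be made against collocated spectrograph measurements or modeled intensities, as the two approaches in the abstract describe, and the functional form itself (ratio vs. formula) is an empirical choice per station.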
Abstract: To address the mismatch in information representation between the RGB (Red Green Blue) modality and the thermal modality, which prevents feature information from being effectively mined and fused, a new joint attention-reinforced network, FCNet (Feature Sharpening and Cross-modal Feature Fusion Net), is proposed. First, a dual-dimension attention mechanism improves the image feature-mapping capability; then, a cross-modal feature fusion mechanism captures the target region; finally, a layer-by-layer decoding structure removes background interference and refines the detected target. Experimental results show that the improved algorithm uses fewer parameters and less computation time, and that its overall detection performance exceeds that of existing multimodal detection models.
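The abstract names the mechanisms but not their implementation. As a simplified, parameter-free stand-in (the real FCNet modules use learned convolution/MLP layers), channel-then-spatial "dual-dimension" attention followed by additive cross-modal fusion can be sketched with NumPy as follows; all shapes and weighting choices here are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dual_dimension_attention(feat):
    """Apply channel attention, then spatial attention, to a (C, H, W) map.

    Parameter-free illustration: channel weights come from global average
    pooling; spatial weights from the cross-channel mean. A learned module
    would replace both pooling steps with trained layers.
    """
    # Channel attention: one weight per channel from its global average.
    chan_w = sigmoid(feat.mean(axis=(1, 2)))   # shape (C,)
    feat = feat * chan_w[:, None, None]
    # Spatial attention: one weight per pixel from the cross-channel mean.
    spat_w = sigmoid(feat.mean(axis=0))        # shape (H, W)
    return feat * spat_w[None, :, :]

def cross_modal_fusion(rgb_feat, thermal_feat):
    """Sharpen each modality with attention, then fuse by elementwise sum."""
    return dual_dimension_attention(rgb_feat) + dual_dimension_attention(thermal_feat)

rgb = np.random.rand(4, 8, 8)      # toy RGB feature map (C=4, H=8, W=8)
thermal = np.random.rand(4, 8, 8)  # toy thermal feature map
fused = cross_modal_fusion(rgb, thermal)
```

Because each attention weight lies in (0, 1), the fused map is bounded above by the plain sum of the two inputs; the actual FCNet fusion and its layer-by-layer decoder are defined in the paper itself.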