Remaining useful life (RUL) prediction is a key component of prognostics and health management (PHM) for large-scale equipment, and is of great significance for reducing maintenance costs and avoiding catastrophic failures. For RUL prediction, this work proposes, for the first time, a temporal graph reasoning model based on multivariate analysis (multivariate similarity temporal knowledge graph, MSTKG). By capturing the coupling relationships among the operating states of equipment components and their evolving trends, the model mines the performance-degradation information they contain, providing an effective basis for life prediction. First, a temporal graph structure is designed to formally represent the relationships among components across different operating cycles. Second, a deep reasoning network combining a graph convolutional neural network (CNN) and gated recurrent units (GRU) is proposed to model and learn the spatio-temporal evolution of the components' working states; combined with regression analysis, it produces the RUL prediction. Finally, compared with existing prediction methods, the proposed method can explicitly model and exploit changes in the coupling relationships among equipment components, and simulation results verify its superiority.
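The pipeline the abstract describes (a graph convolution over component relations in each operating cycle, a GRU over cycles, and a regression head) can be sketched generically. The following NumPy sketch is illustrative only, under assumed shapes, random parameters, and mean-pooling over graph nodes; it is not the paper's MSTKG implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def graph_conv(A, H, W):
    """One graph-convolution layer: ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])                  # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))   # degree normalization
    A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    return np.maximum(0.0, A_norm @ H @ W)

def gru_step(x, h, p):
    """Standard GRU update; p holds the weight matrices."""
    z = sigmoid(x @ p["Wz"] + h @ p["Uz"])              # update gate
    r = sigmoid(x @ p["Wr"] + h @ p["Ur"])              # reset gate
    h_tilde = np.tanh(x @ p["Wh"] + (r * h) @ p["Uh"])  # candidate state
    return (1.0 - z) * h + z * h_tilde

def predict_rul(A_seq, X_seq, Wg, gru_params, w_out, b_out):
    """Per cycle: graph conv -> mean-pool nodes -> GRU step; then regress RUL."""
    h = np.zeros(gru_params["Uz"].shape[0])
    for A, X in zip(A_seq, X_seq):
        g = graph_conv(A, X, Wg)                    # component embeddings this cycle
        h = gru_step(g.mean(axis=0), h, gru_params) # pooled graph feature drives GRU
    return float(h @ w_out + b_out)                 # scalar RUL estimate
```

In a trained model the parameters `Wg`, the GRU matrices, and `w_out`/`b_out` would be learned by minimizing regression error against ground-truth RUL labels; here they are placeholders.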
Dense captioning aims to simultaneously localize and describe regions of interest (RoIs) in images in natural language. Specifically, we identify three key problems: 1) dense and highly overlapping RoIs, which make accurate localization of each target region challenging; 2) visually ambiguous target regions that are hard to recognize from appearance alone; 3) the need for a very deep image representation, which is of central importance for visual recognition. To tackle these three challenges, we propose a novel end-to-end dense captioning framework consisting of a joint localization module, a contextual reasoning module, and a deep convolutional neural network (CNN). We also evaluate five deep CNN structures to explore the benefits of each. Extensive experiments on the Visual Genome (VG) dataset demonstrate the effectiveness of our approach, which compares favorably with state-of-the-art methods.
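The first challenge above, dense and highly overlapping RoIs, is commonly handled at inference time by intersection-over-union (IoU) scoring and non-maximum suppression. The sketch below is a generic illustration of that standard technique, not the paper's joint localization module.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    drop every remaining box overlapping it by IoU >= thresh, repeat."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = int(order[0])
        keep.append(i)
        order = np.array([j for j in order[1:] if iou(boxes[i], boxes[j]) < thresh])
    return keep
```

For example, with two heavily overlapping boxes and one distant box, only the higher-scoring box of the overlapping pair and the distant box survive.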
Funding: Project (2020A1515010718) supported by the Basic and Applied Basic Research Foundation of Guangdong Province, China.