Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 62175156, 81827807, 81770940), the Science and Technology Commission of Shanghai Municipality (22S31903000, 16DZ0501100), and the Collaborative Innovation Project of Shanghai Institute of Technology (XTCX2022-27).
Abstract: Diabetic retinopathy (DR) is one of the major causes of visual impairment in adults with diabetes. Optical coherence tomography angiography (OCTA) is now widely used as the gold standard for diagnosing DR. Recently, wide-field OCTA (WF-OCTA) has provided richer information, including peripheral retinal degenerative changes, and can contribute to more accurate diagnosis of DR. The need for an automatic DR diagnostic system based on WF-OCTA images is attracting increasing attention due to the large diabetic population and the prevalence of retinopathy. In this study, automatic diagnosis of DR was performed with a vision transformer using WF-OCTA images (12 mm×12 mm single scan) centered on the fovea as the dataset. WF-OCTA images were automatically classified into four classes: no DR, mild nonproliferative diabetic retinopathy (NPDR), moderate to severe NPDR, and proliferative diabetic retinopathy (PDR). The proposed method achieves an accuracy of 99.55%, sensitivity of 99.49%, and specificity of 99.57% for detecting DR on the test set. The accuracy of the method for DR staging reaches 99.20%, higher than that attained by classical convolutional neural network models. The results show that automatic diagnosis of DR based on a vision transformer and WF-OCTA images is effective for detecting and staging DR.
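The detection figures above are standard binary-classification metrics. As a reminder of how they are computed from confusion-matrix counts, here is a minimal sketch (the counts below are illustrative, not the study's data):

```python
def binary_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (true-positive rate), and specificity
    (true-negative rate) from binary confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # fraction of DR cases correctly flagged
    specificity = tn / (tn + fp)   # fraction of no-DR cases correctly cleared
    return accuracy, sensitivity, specificity

# Illustrative counts for a DR-vs-no-DR test set (not the study's data)
acc, sen, spe = binary_metrics(tp=195, fp=2, tn=248, fn=1)
print(f"accuracy={acc:.4f} sensitivity={sen:.4f} specificity={spe:.4f}")
```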
Abstract: To address the inadequate use of local information when applying Vision Transformer to PolSAR image classification in existing studies, this paper proposes LIViT, a Vision Transformer method that incorporates local information. The method replaces the image patch sequence with a polarimetric feature sequence in the feature embedding and uses convolution for the mapping to preserve spatial detail. In addition, a wavelet transform branch enables the network to attend more closely to the shape and edge information of target features, improving the extraction of local edge information. Results on data from Wuhan, China and Flevoland, the Netherlands show that considering local information when using Vision Transformer for PolSAR image classification effectively improves classification accuracy.
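To illustrate why a wavelet branch emphasizes edge information, here is a minimal sketch of a one-level 2D Haar decomposition (a common choice for illustration; the abstract does not specify which wavelet LIViT uses). The LH/HL subbands respond to horizontal/vertical intensity steps, which is the kind of local edge structure such a branch can feed to the network:

```python
def haar2d(img):
    """One-level 2D Haar decomposition of an even-sized grayscale image.
    Returns (LL, LH, HL, HH) subbands; LH/HL carry horizontal/vertical
    edge responses, HH carries diagonal detail."""
    h, w = len(img), len(img[0])
    LL = [[0.0] * (w // 2) for _ in range(h // 2)]
    LH = [[0.0] * (w // 2) for _ in range(h // 2)]
    HL = [[0.0] * (w // 2) for _ in range(h // 2)]
    HH = [[0.0] * (w // 2) for _ in range(h // 2)]
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            LL[i // 2][j // 2] = (a + b + c + d) / 4.0  # local average
            LH[i // 2][j // 2] = (a + b - c - d) / 4.0  # horizontal edges
            HL[i // 2][j // 2] = (a - b + c - d) / 4.0  # vertical edges
            HH[i // 2][j // 2] = (a - b - c + d) / 4.0  # diagonal detail
    return LL, LH, HL, HH

# A 4x4 image with a vertical intensity step: HL fires, LH stays zero.
LL, LH, HL, HH = haar2d([[0, 4, 8, 8]] * 4)
print(HL)  # → [[-2.0, 0.0], [-2.0, 0.0]]
```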
Funding: National Natural Science Foundation of China (Grant No. 62101138), Shandong Natural Science Foundation (Grant No. ZR2021QD148), Guangdong Natural Science Foundation (Grant No. 2022A1515012573), and Guangzhou Basic and Applied Basic Research Project (Grant No. 202102020701) for providing funds for publishing this paper.
Abstract: As positioning sensors, edge computation power, and communication technologies continue to develop, a moving agent can now sense its surroundings and communicate with other agents. By receiving spatial information from both its environment and other agents, an agent can use various methods and sensor types to localize itself. With its high flexibility and robustness, collaborative positioning has become a widely used method in both military and civilian applications. This paper introduces the fundamental concepts and applications of collaborative positioning and reviews recent progress in the field based on cameras, LiDAR (Light Detection and Ranging), wireless sensors, and their integration. The paper compares current methods with respect to sensor type, summarizes their main paradigms, and analyzes their evaluation experiments. Finally, it discusses the main challenges and open issues that require further research.
Funding: Supported by the Innovation and Entrepreneurship Project for College Students of the First Affiliated Hospital of Guangxi Medical University in 2022 and the Development and Application of Appropriate Medical and Health Technologies in Guangxi (No. S2021093).
Abstract: AIM: To investigate the frequency and associated factors of accommodation and non-strabismic binocular vision dysfunction among medical university students. METHODS: A total of 158 student volunteers underwent routine vision examination in the optometry clinic of Guangxi Medical University. Their data were used to identify the different types of accommodation and non-strabismic binocular vision dysfunction and to determine their frequency. Correlation analysis and logistic regression were used to examine the factors associated with these abnormalities. RESULTS: Overall, 36.71% of the subjects had accommodation and non-strabismic binocular vision issues, with 8.86% attributed to accommodation dysfunction and 27.85% to binocular abnormalities. Convergence insufficiency (CI) was the most common abnormality, accounting for 13.29%. Subjects with these abnormalities experienced higher levels of eyestrain (χ2=69.518, P<0.001). Linear correlations were observed between the difference in binocular spherical equivalent (SE) and both the index of horizontal esotropia at a distance (r=0.231, P=0.004) and the asthenopia survey scale (ASS) score (r=0.346, P<0.001). Furthermore, the right eye's SE was inversely correlated with the convergence of positive and negative fusion images at close range (r=-0.321, P<0.001), the convergence of negative fusion images at close range (r=-0.294, P<0.001), vergence facility (VF; r=-0.234, P=0.003), and the set of negative fusion images at far range (r=-0.237, P=0.003). Logistic regression analysis indicated that gender, age, and the difference between right-eye and binocular SE did not influence the emergence of these abnormalities. CONCLUSION: Binocular vision abnormalities are more prevalent than accommodation dysfunction, with CI being the most frequent type. Greater binocular refractive disparity leads to more severe eyestrain symptoms.
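The r values reported above are Pearson correlation coefficients. For reference, a minimal sketch of how such a coefficient is computed (the sample data are illustrative, not the study's measurements):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative pairing of SE difference vs. eyestrain score (not real data)
se_diff = [0.0, 0.5, 1.0, 1.5, 2.0]
ass_score = [10, 12, 15, 14, 19]
print(pearson_r(se_diff, ass_score))
```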
Funding: Supported by the Basic Science Research Program through the National Research Foundation of Korea funded by the Ministry of Education (No. 2020R1I1A3068274), received by Junho Ahn (https://www.nrf.re.kr/); and by the Korea Agency for Infrastructure Technology Advancement (KAIA) of the Ministry of Land, Infrastructure and Transport under Grant No. 22QPWO-C152223-04, received by Chulsu Kim (https://www.kaia.re.kr/).
Abstract: Existing firefighting robots are focused on simple storage or fire suppression outside buildings rather than detection or recognition. Utilizing a large number of robots with expensive equipment is challenging. This study aims to increase the efficiency of search and rescue operations and the safety of firefighters by detecting and identifying the disaster site, recognizing collapsed areas, obstacles, and rescuers on-site. A fusion algorithm combining a camera and three-dimensional light detection and ranging (3D LiDAR) is proposed to detect and localize the interiors of disaster sites. The algorithm detects obstacles by analyzing floor segmentation and edge patterns using a mask regional convolutional neural network (Mask R-CNN) feature model, based on visual data collected from a camera and 3D LiDAR connected in parallel. People are detected in the image data using You Only Look Once version 4 (YOLOv4) to localize persons requiring rescue. The 3D LiDAR point cloud data are clustered using the density-based spatial clustering of applications with noise (DBSCAN) algorithm, and the distance to the actual object is estimated from the center point of the clustering result. The proposed artificial intelligence (AI) algorithm was verified for individual sensors using a sensor-mounted robot in an actual building to detect floor surfaces, atypical obstacles, and persons requiring rescue. The fused AI algorithm was then comparatively verified.
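The clustering-and-ranging step can be sketched as follows. This is a naive, illustrative DBSCAN over a toy 3D point cloud, with range taken from the sensor origin to the cluster centroid; it is not the authors' implementation, and the eps/min_pts values and points are assumptions for the demo:

```python
import math

def dbscan(points, eps, min_pts):
    """Naive DBSCAN: returns one label per point (-1 = noise, 0..k-1 = cluster)."""
    n = len(points)
    labels = [None] * n

    def neighbors(i):
        return [j for j in range(n) if math.dist(points[i], points[j]) <= eps]

    cluster = 0
    for i in range(n):
        if labels[i] is not None:
            continue
        nb = neighbors(i)
        if len(nb) < min_pts:
            labels[i] = -1            # provisionally noise
            continue
        labels[i] = cluster
        seeds = list(nb)
        while seeds:                  # expand the cluster from core points
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster   # noise reachable from a core -> border
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nb_j = neighbors(j)
            if len(nb_j) >= min_pts:  # j is itself a core point: keep expanding
                seeds.extend(nb_j)
        cluster += 1
    return labels

def centroid_range(points, labels, cid):
    """Range to an object: distance from the sensor origin to the cluster centroid."""
    pts = [p for p, l in zip(points, labels) if l == cid]
    centroid = tuple(sum(c) / len(pts) for c in zip(*pts))
    return math.dist((0.0, 0.0, 0.0), centroid)

# Toy cloud: one object ~2 m ahead, one ~5 m to the side, one stray return.
cloud = [(2.0, 0.0, 0.0), (2.1, 0.0, 0.0), (2.0, 0.1, 0.0), (1.9, 0.0, 0.0),
         (0.0, 5.0, 0.0), (0.1, 5.0, 0.0), (0.0, 5.1, 0.0), (0.0, 4.9, 0.0),
         (10.0, 10.0, 10.0)]
labels = dbscan(cloud, eps=0.5, min_pts=3)
print(max(labels) + 1, labels.count(-1))            # → 2 1
print(round(centroid_range(cloud, labels, 0), 2))   # → 2.0
```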
Abstract: To address the insufficient sampling of spectral-temporal and spatial information features by deep learning models in current remote sensing crop classification research, where crop extraction still suffers from blurred boundaries, missed extractions, and false extractions, a deep learning model named Vision Transformer-long short-term memory (ViTL) is proposed. The ViTL model integrates three key modules: dual-path Vision Transformer feature extraction, spatiotemporal feature fusion, and long short-term memory (LSTM) temporal classification. The dual-path Vision Transformer feature extraction module captures the spatiotemporal feature correlations of the images, with one path extracting spatial classification features and the other extracting temporal change features; the spatiotemporal feature fusion module cross-fuses multi-temporal feature information; and the LSTM temporal classification module captures multi-temporal dependencies and outputs the classification. Using remote sensing theory and methods based on multi-temporal satellite imagery, crop information was extracted for Nehe City, Qiqihar, Heilongjiang Province. The results show that the ViTL model performs well, achieving an overall accuracy (OA), mean intersection over union (MIoU), and F1 score of 0.8676, 0.6987, and 0.8175, respectively. Compared with other widely used deep learning methods, including the three-dimensional convolutional neural network (3-D CNN), two-dimensional convolutional neural network (2-D CNN), and LSTM, the F1 score of the ViTL model is 9%-12% higher, showing a significant advantage. The ViTL model overcomes the insufficient sampling of temporal and spatial information features in crop classification from multi-temporal remote sensing imagery and provides a new approach for accurate and efficient crop classification.
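The OA, MIoU, and F1 figures above follow their standard definitions over a class confusion matrix. A minimal sketch (the matrix below is illustrative, not the paper's results):

```python
def seg_metrics(conf):
    """Overall accuracy, mean IoU, and macro F1 from a KxK confusion
    matrix conf[true_class][predicted_class]."""
    k = len(conf)
    total = sum(sum(row) for row in conf)
    oa = sum(conf[c][c] for c in range(k)) / total
    ious, f1s = [], []
    for c in range(k):
        tp = conf[c][c]
        fn = sum(conf[c]) - tp                        # class-c pixels missed
        fp = sum(conf[r][c] for r in range(k)) - tp   # pixels wrongly called c
        ious.append(tp / (tp + fp + fn))
        f1s.append(2 * tp / (2 * tp + fp + fn))
    return oa, sum(ious) / k, sum(f1s) / k

# Illustrative 3-class crop confusion matrix (not the paper's data)
conf = [[90, 5, 5],
        [10, 80, 10],
        [0, 10, 90]]
oa, miou, f1 = seg_metrics(conf)
print(f"OA={oa:.4f} MIoU={miou:.4f} F1={f1:.4f}")
```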
Funding: NCHRP Project IDEA 223: Fatigue Crack Inspection using Computer Vision and Augmented Reality.
Abstract: Fatigue cracks that develop in civil infrastructure such as steel bridges due to repetitive loads pose a major threat to structural integrity. Despite being the most common practice for fatigue crack detection, human visual inspection is known to be labor intensive, time-consuming, and prone to error. In this study, a computer vision-based fatigue crack detection approach is presented that uses a short video recorded under live loads by a moving consumer-grade camera. The method detects fatigue cracks by tracking surface motion and identifying the differential motion pattern caused by the opening and closing of the crack. However, the global motion introduced by a moving camera in the recorded video is typically far greater than the actual motion associated with crack opening/closing, leading to false detection results. To overcome this challenge, global motion compensation (GMC) techniques are introduced to compensate for camera-induced movement. In particular, hierarchical model-based motion estimation is adopted for 2D videos with simple geometry, and a new method is developed by extending the bundled camera paths approach for 3D videos with complex geometry. The proposed methodology is validated using two laboratory test setups covering both in-plane and out-of-plane fatigue cracks. The results confirm the importance of motion compensation for both 2D and 3D videos and demonstrate the effectiveness of the proposed GMC methods as well as the subsequent crack detection algorithm.
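The core idea of global motion compensation can be sketched with a deliberately simplified, translation-only model: estimate the dominant camera-induced displacement robustly, subtract it, and look for points whose residual motion stands out. The paper's hierarchical and bundled-camera-paths methods handle full 2D/3D geometry; the numbers and threshold below are illustrative assumptions:

```python
import math
import statistics

def compensate_global_motion(tracks):
    """Remove the dominant (camera-induced) displacement from tracked points.

    tracks: list of (dx, dy) frame-to-frame displacements of tracked surface
    points. The median displacement serves as a robust global-motion estimate,
    since only a few points near a crack move differently; the residuals then
    expose that local differential motion.
    """
    gx = statistics.median(dx for dx, _ in tracks)
    gy = statistics.median(dy for _, dy in tracks)
    return [(dx - gx, dy - gy) for dx, dy in tracks]

# Nine background points share the camera motion; point 9 also carries
# crack-opening motion (all numbers illustrative).
tracks = [(5.0, -2.0)] * 9 + [(5.6, -2.0)]
residuals = compensate_global_motion(tracks)
suspect = [i for i, (rx, ry) in enumerate(residuals) if math.hypot(rx, ry) > 0.3]
print(suspect)  # → [9]
```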
Abstract: With the rapid development of drones and autonomous vehicles, miniaturized and lightweight vision sensors that can track targets are of great interest. Limited by their flat structure, conventional image sensors require a large number of lenses to achieve the corresponding functions, increasing the overall volume and weight of the system.