Funding: Supported by the National Science Fund for Distinguished Young Scholars of China (52025056), the China Postdoctoral Science Foundation (2023M732789), the China Postdoctoral Innovative Talents Support Program (BX20230290), and the Fundamental Research Funds for the Central Universities (xzy012022062).
Abstract: Intelligent machinery fault diagnosis methods have been developed extensively and successfully over the past decades, and the vibration acceleration data collected by contact accelerometers have been widely investigated. In many industrial scenarios, however, contactless sensors are preferred. The event camera is an emerging bio-inspired vision sensing technology that asynchronously records the polarity of per-pixel brightness changes with high temporal resolution and low latency, offering a promising tool for contactless machine vibration sensing and fault diagnosis. However, dynamic vision-based methods suffer from variations in practical factors such as camera position and machine operating condition. Furthermore, as this is a new sensing technology, labeled dynamic vision data are limited and generally cannot cover a wide range of machine fault modes. To address these challenges, a novel dynamic vision-based machinery fault diagnosis method is proposed in this paper. It is motivated by the idea of exploiting abundant vibration acceleration data to enhance the performance of the dynamic vision-based model. A cross-modality feature alignment method is thus proposed with deep adversarial neural networks to achieve fault diagnosis knowledge transfer. An event erasing method is further proposed to improve model robustness against variations. The proposed method can effectively identify unseen fault modes from dynamic vision data. Experiments on two rotating machine monitoring datasets are carried out for validation, and the results suggest that the proposed method is promising for generalized contactless machinery fault diagnosis.
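A minimal sketch (not the authors' code) of how adversarial cross-modality feature alignment of this kind is commonly set up, using a gradient reversal layer and a modality discriminator in PyTorch; the encoder names f_vib/f_event, the feature dimension, and the loss combination are illustrative assumptions.

```python
# Illustrative sketch of adversarial feature alignment between two modalities
# (vibration vs. event-camera features). Dimensions and training details are assumptions.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips (and scales) gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

class ModalityDiscriminator(nn.Module):
    """Predicts whether a feature came from vibration data (class 0) or event data (class 1)."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, feat, lam=1.0):
        return self.net(GradReverse.apply(feat, lam))

# Hypothetical training step with encoders f_vib / f_event and a fault classifier clf:
#   feat_v, feat_e = f_vib(x_vib), f_event(x_event)
#   loss = ce(clf(feat_v), y_vib) + ce(disc(feat_v), zeros) + ce(disc(feat_e), ones)
# Minimizing this pushes the two modalities toward an indistinguishable feature space,
# so diagnosis knowledge learned from labeled vibration data transfers to event data.
```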
Abstract: Prompt radiation emitted during accelerator operation poses a significant health risk, necessitating a thorough search and securing of hazardous areas prior to initiation. Currently, manual sweep methods are employed. However, the limitations of manual sweeps have become increasingly evident with the implementation of large-scale accelerators. By leveraging advancements in machine vision technology, the automatic identification of stranded personnel in controlled areas through camera imagery presents a viable solution for efficient search and securing. Given the criticality of personal safety for stranded individuals, the search-and-secure process must be sufficiently reliable. To ensure comprehensive coverage, 180° camera groups were strategically positioned on both sides of the accelerator tunnel to eliminate blind spots within the monitoring range. The YOLOv8 network model was modified to enable the detection of small targets, such as hands and feet, as well as larger targets formed by individuals near the cameras. Furthermore, the system incorporates a pedestrian recognition model that detects human body parts, and an information fusion strategy is used to integrate the detected head, hands, and feet with the identified pedestrian as a cohesive unit. This strategy enhanced the capability of the model to identify pedestrians obstructed by equipment, resulting in a notable improvement in the recall rate. Specifically, recall rates of 0.915 and 0.82 were obtained for Datasets 1 and 2, respectively. Although there was a slight decrease in accuracy, this aligned with the intended purpose of the search-and-secure software design. Experimental tests conducted within an accelerator tunnel demonstrated the effectiveness of this approach in achieving reliable recognition outcomes.
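The part-to-pedestrian fusion idea can be illustrated with a simple containment test; the rule below, the 0.5 threshold, the box format, and the fallback that keeps unmatched limbs as evidence of an occluded person are assumptions, since the exact fusion criterion is not given in the abstract.

```python
# Illustrative sketch of fusing body-part detections (head/hands/feet) with pedestrian
# detections. Boxes are (x1, y1, x2, y2) tuples in pixel coordinates.
def box_overlap_ratio(part, person):
    """Fraction of the part box that lies inside the person box."""
    x1, y1 = max(part[0], person[0]), max(part[1], person[1])
    x2, y2 = min(part[2], person[2]), min(part[3], person[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = max(1e-6, (part[2] - part[0]) * (part[3] - part[1]))
    return inter / area

def fuse(persons, parts, thr=0.5):
    """Attach head/hand/foot boxes to pedestrian boxes; keep unmatched parts as
    evidence of an occluded person so that recall is not lost."""
    units = [{"person": p, "parts": []} for p in persons]
    for part in parts:
        best = max(units, key=lambda u: box_overlap_ratio(part, u["person"]), default=None)
        if best and box_overlap_ratio(part, best["person"]) >= thr:
            best["parts"].append(part)
        else:
            units.append({"person": None, "parts": [part]})  # e.g. only a hand visible behind equipment
    return units
```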
Abstract: Epipolar rectification is a projective transformation applied to the raw image pair of a binocular camera so that corresponding epipolar lines in the rectified images lie on the same horizontal scanline, eliminating vertical disparity and reducing stereo matching to a one-dimensional search. To address the shortcomings of existing epipolar rectification methods, this paper proposes a rectification method based on the translation matrix of the binocular camera: first, singular value decomposition (SVD) is applied to the translation matrix to obtain the new post-rectification rotation matrix; second, a new camera intrinsic matrix is established from the relationship between the images before and after rectification, completing the epipolar rectification. The method was validated on multiple stereo image pairs of different scenes from the SYNTIM database. The experimental results show an average rectification error within 0.6 pixel, almost no image distortion, an average skew of about 2.4°, and an average running time of 0.2302 s. The method has practical value, fully meets the requirements of epipolar rectification, and eliminates both the errors caused by mechanical deviations of the cameras during stereo matching and the cumbersome computation otherwise involved.
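The core step, deriving a rectifying rotation from the SVD of the stereo baseline, can be sketched numerically as follows; the sign handling and the example baseline vector are assumptions, and the paper's construction of the new intrinsic matrix is not reproduced here.

```python
# Minimal numeric sketch: the SVD of the 3x1 baseline (translation) vector yields an
# orthonormal basis whose first axis is the baseline direction, which can serve as the
# row space of the new rotation so that epipolar lines become horizontal.
import numpy as np

def rectifying_rotation(t):
    """Return a proper rotation whose x-axis is aligned with the baseline t (3-vector)."""
    U, _, _ = np.linalg.svd(t.reshape(3, 1), full_matrices=True)
    R = U.T.copy()               # rows form an orthonormal basis, row 0 parallel to t
    if R[0] @ t < 0:             # make the new x-axis point along +t
        R[0] = -R[0]
    if np.linalg.det(R) < 0:     # keep det(R) = +1 (a proper rotation)
        R[2] = -R[2]
    return R

t = np.array([120.0, 3.0, -1.5])        # hypothetical baseline in millimetres
R_new = rectifying_rotation(t)
print(R_new @ (t / np.linalg.norm(t)))  # approximately [1, 0, 0]
```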
Abstract: The slip status of a drill string is a system-generated binary value computed by comparing the sensor-generated real-time hook load with a minimum hook-load threshold stored in the measurement system. This research article describes a novel method of slip status detection using machine vision technology, which helps overcome the constraints of slip status detection with the legacy measurement method. It also helps improve real-time drilling data quality and supports the optimization and automation of drilling operations. A method to detect drill string slip status with a high-resolution digital camera installed on the mast near the rig floor is described, along with backend vision processing and communication modules that generate binary slip status values. The binary values are transferred in real time to the rig's drilling measurement system to compute other drilling parameters such as bit depth, hole depth, and stand counters. The method includes deploying active optical sensors at the rig floor, obtaining 1-D, 2-D, or 3-D image data, and processing it to obtain the status of the drill string. Reliable measurement of slip status by machine vision helps reduce non-productive time (NPT) through reliable real-time surveillance of drilling operations.
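For reference, the legacy threshold logic that the vision-based system is compared against can be written in a few lines; the function name, units, and example values below are hypothetical.

```python
# Sketch of the legacy slip-status computation: the string is considered "in slips"
# when the measured hook load drops below the stored minimum threshold.
def slip_status(hook_load: float, min_hook_load: float) -> int:
    """Return 1 (in slips) when the hook load is below the threshold, else 0 (out of slips)."""
    return 1 if hook_load < min_hook_load else 0

# The vision module described in the article publishes the same 1/0 value to the rig's
# measurement system, which then drives bit depth, hole depth, and stand counters.
print(slip_status(hook_load=42.0, min_hook_load=55.0))  # -> 1 (in slips)
```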