This paper emphasizes a faster digital processing time while presenting an accurate method for identifying spine fractures in X-ray pictures. The study focuses on efficiency by utilizing many methods that include picture segmentation, feature reduction, and image classification. Two important elements are investigated to reduce the classification time: using feature reduction software and leveraging the capabilities of sophisticated digital processing hardware. The researchers use different algorithms for picture enhancement, including the Wiener and Kalman filters, and they look into two background correction techniques. The article presents a technique for extracting textural features and evaluates three picture segmentation algorithms and three fractured spine detection algorithms using the transform domain, Power Density Spectrum (PDS), and Higher-Order Statistics (HOS) for feature extraction. With an emphasis on reducing digital processing time, this all-encompassing method helps to create a simplified system for classifying spine fractures. A feature reduction program has been built to improve the processing speed for picture classification. Overall, the proposed approach shows great potential for significantly reducing classification time in clinical settings where time is critical. In comparison to other transform domains, the discrete cosine transform (DCT) of the texture features yielded an exceptional classification rate, and the process of extracting features from the transform domain took less time. More capable hardware can also result in quicker execution times for the feature extraction algorithms.
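As an illustration of DCT-based texture features like those the abstract highlights, the sketch below computes the 2-D DCT of a grayscale patch and keeps the low-frequency block as a feature vector. The function names and the choice to keep the top-left coefficient block are our own assumptions, not the paper's implementation.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II transform matrix."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    scale = np.full(n, np.sqrt(2.0 / n))
    scale[0] = np.sqrt(1.0 / n)  # DC row gets the smaller normalisation
    return C * scale[:, None]

def dct_texture_features(patch, keep=4):
    """2-D DCT of a square patch; keep the low-frequency keep x keep block."""
    C = dct_matrix(patch.shape[0])
    coeffs = C @ patch.astype(float) @ C.T  # separable 2-D transform
    return coeffs[:keep, :keep].ravel()
```

For a constant patch all energy lands in the DC coefficient, which is one reason truncating to the low-frequency block compacts texture information so cheaply.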
The Soft X-ray Imager (SXI) is part of the scientific payload of the Solar wind Magnetosphere Ionosphere Link Explorer (SMILE) mission. SMILE is a joint science mission between the European Space Agency (ESA) and the Chinese Academy of Sciences (CAS) and is due for launch in 2025. SXI is a compact X-ray telescope with a wide field-of-view (FOV) capable of encompassing large portions of Earth's magnetosphere from the vantage point of the SMILE orbit. SXI is sensitive to the soft X-rays produced by the Solar Wind Charge eXchange (SWCX) process, which occurs when heavy ions of solar wind origin interact with neutral particles in Earth's exosphere. SWCX provides a mechanism for boundary detection within the magnetosphere, such as the position of Earth's magnetopause, because the solar wind heavy ions have a very low density in regions of closed magnetic field lines. The sensitivity of the SXI is such that it can potentially track movements of the magnetopause on timescales of a few minutes, and the orbit of SMILE will enable such movements to be tracked for segments lasting many hours. SXI is led by the University of Leicester in the United Kingdom (UK) with collaborating organisations on hardware, software and science support within the UK, Europe, China and the United States.
Throughout the SMILE mission the satellite will be bombarded by radiation which gradually damages the focal plane devices and degrades their performance. In order to understand the changes of the CCD370s within the Soft X-ray Imager, an initial characterisation of the devices has been carried out to give a baseline performance level. Three CCDs have been characterised: the two flight devices and the flight spare. This has been carried out at the Open University in a bespoke cleanroom measurement facility. The results show that there is a cluster of bright pixels in the flight spare which increases in size with temperature. However, at the nominal operating temperature (-120 °C) it is within the procurement specifications. Overall, the devices meet the specifications when operating at -120 °C in 6 × 6 binned frame transfer science mode. The serial charge transfer inefficiency degrades with temperature in full frame mode. However, any charge losses are recovered when binning/frame transfer is implemented.
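The 6 × 6 binned mode mentioned above sums blocks of pixels into super-pixels, which is why per-pixel charge losses are recovered in the aggregate signal. A software sketch of such binning (our illustration, not the flight firmware):

```python
import numpy as np

def bin_frame(frame, b=6):
    """b x b binning: sum each b x b block of the frame into one super-pixel.
    Rows/columns that do not fill a complete block are dropped."""
    h, w = frame.shape
    h2, w2 = h - h % b, w - w % b
    return frame[:h2, :w2].reshape(h2 // b, b, w2 // b, b).sum(axis=(1, 3))
```

Summing before readout trades spatial resolution for signal per super-pixel, so small per-pixel transfer losses become negligible relative to the binned charge packet.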
Astronomical imaging technologies are basic tools for the exploration of the universe, providing basic data for the research of astronomy and space physics. The Soft X-ray Imager (SXI) carried by the Solar wind Magnetosphere Ionosphere Link Explorer (SMILE) aims to capture two-dimensional (2-D) images of the Earth's magnetosheath by using soft X-ray imaging. However, the observed 2-D images are affected by many noise factors, destroying the contained information, which is not conducive to the subsequent reconstruction of the three-dimensional (3-D) structure of the magnetopause. The analysis of SXI-simulated observation images shows that such damage cannot be evaluated with traditional restoration models. This makes it difficult to establish the mapping relationship between SXI-simulated observation images and target images by using mathematical models. We propose an image restoration algorithm for SXI-simulated observation images that can recover large-scale structure information on the magnetosphere. The idea is to train a patch estimator by selecting noise–clean patch pairs with the same distribution through the Classification–Expectation Maximization algorithm to achieve the restoration estimation of the SXI-simulated observation image, whose mapping relationship with the target image is established by the patch estimator. The Classification–Expectation Maximization algorithm is used to select multiple patch clusters with the same distribution and then train different patch estimators so as to improve the accuracy of the estimator. Experimental results showed that our image restoration algorithm is superior to other classical image restoration algorithms in the SXI-simulated observation image restoration task, according to the peak signal-to-noise ratio and structural similarity. The restoration results of SXI-simulated observation images are used in the tangent fitting approach and the computed tomography approach toward magnetospheric reconstruction techniques, significantly improving the reconstruction results. Hence, the proposed technology may be feasible for processing SXI-simulated observation images.
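The cluster-then-estimate idea can be pictured with a toy version: group noisy patches (here a small k-means stands in for the Classification–Expectation Maximization step, which fits a mixture model rather than hard clusters) and fit one least-squares patch estimator per cluster. All names and the linear-estimator choice are our simplifications of the approach described above.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Tiny k-means: a hard-clustering stand-in for the C-EM step."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

def train_patch_estimators(noisy, clean, k=2):
    """Fit one least-squares map from noisy to clean patches per cluster."""
    labels, centers = kmeans(noisy, k)
    maps = {}
    for j in range(k):
        idx = labels == j
        if not np.any(idx):
            maps[j] = np.eye(noisy.shape[1])  # empty cluster: identity fallback
            continue
        A, *_ = np.linalg.lstsq(noisy[idx], clean[idx], rcond=None)
        maps[j] = A
    return centers, maps

def restore_patch(patch, centers, maps):
    """Restore one patch with the estimator of its nearest cluster."""
    j = int(np.argmin(((centers - patch) ** 2).sum(-1)))
    return patch @ maps[j]
```

Training separate estimators per cluster is what lets each mapping specialise to one noise/structure regime, which is the accuracy argument the abstract makes.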
The Solar wind Magnetosphere Ionosphere Link Explorer (SMILE) Soft X-ray Imager (SXI) will shine a spotlight on magnetopause dynamics during magnetic reconnection. We simulate an event with a southward interplanetary magnetic field turning and produce SXI count maps with a 5-minute integration time. By making assumptions about the magnetopause shape, we find the magnetopause standoff distance from the count maps and compare it with the one obtained directly from the magnetohydrodynamic (MHD) simulation. The root mean square deviations between the reconstructed and MHD standoff distances do not exceed 0.2 RE (Earth radii), and the maximal difference equals 0.24 RE during the 25-minute interval around the southward turning.
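One common shape assumption for this kind of standoff-distance extraction is a Shue-style functional form. The sketch below fits the standoff distance r0 to sampled boundary points by linear least squares; the paper's actual shape assumptions and fitting procedure may differ, so this is only a hedged illustration.

```python
import numpy as np

def shue_radius(theta, r0, alpha=0.58):
    """Shue-style magnetopause shape: r(theta) = r0 * (2 / (1 + cos(theta)))**alpha."""
    return r0 * (2.0 / (1.0 + np.cos(theta))) ** alpha

def fit_standoff(theta, r, alpha=0.58):
    """Least-squares standoff distance r0 from boundary samples r(theta).
    With alpha fixed the model is linear in r0, so the fit is closed-form."""
    basis = (2.0 / (1.0 + np.cos(theta))) ** alpha
    return float((basis @ r) / (basis @ basis))
```

Because r0 enters linearly once the flaring exponent alpha is fixed, the standoff distance can be re-estimated every integration period at negligible cost, matching the minutes-scale tracking cadence quoted above.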
Diagnosing various diseases such as glaucoma, age-related macular degeneration, cardiovascular conditions, and diabetic retinopathy involves segmenting retinal blood vessels. The task is particularly challenging when dealing with color fundus images due to issues like non-uniform illumination, low contrast, and variations in vessel appearance, especially in the presence of different pathologies. Furthermore, the speed of the retinal vessel segmentation system is of utmost importance: with the surge of now-available big data, the speed of the algorithm carries almost as much weight as its accuracy. To address these challenges, we present a novel approach for retinal vessel segmentation, leveraging efficient and robust techniques based on multiscale line detection and mathematical morphology. Our algorithm's performance is evaluated on two publicly available datasets, namely the Digital Retinal Images for Vessel Extraction (DRIVE) dataset and the Structured Analysis of the Retina (STARE) dataset. The experimental results demonstrate the effectiveness of our method, with mean accuracy values of 0.9467 for DRIVE and 0.9535 for STARE, as well as sensitivity values of 0.6952 for DRIVE and 0.6809 for STARE. Notably, our algorithm exhibits competitive performance with state-of-the-art methods. Importantly, it operates at an average speed of 3.73 s per image for DRIVE and 3.75 s for STARE. It is worth noting that these results were achieved using Matlab scripts containing multiple loops, which suggests that the processing time can be further reduced by replacing loops with vectorization; thus the proposed algorithm can be deployed in real-time applications. In summary, our proposed system strikes a fine balance between swift computation and accuracy that is on par with the best available methods in the field.
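A single-scale version of the line-detection idea underlying the method above can be sketched as follows: for each pixel, take the maximum mean intensity along short oriented line segments and subtract the local window mean, so elongated bright structures (vessels, after suitable inversion of the fundus green channel) respond strongly. The multiscale variant repeats this over several lengths; names and parameters here are our own assumptions.

```python
import numpy as np

def line_detector(img, length=5, n_angles=8):
    """Single-scale line detector: max mean along centred oriented lines
    of `length` pixels, minus the local (length x length) window mean."""
    h, w = img.shape
    half = length // 2
    pad = np.pad(img.astype(float), half, mode="edge")
    # Local window mean.
    win_mean = np.zeros((h, w))
    for dy in range(-half, half + 1):
        for dx in range(-half, half + 1):
            win_mean += pad[half + dy : half + dy + h, half + dx : half + dx + w]
    win_mean /= length * length
    # Best oriented line mean (nearest-pixel sampling along each direction).
    best = np.full((h, w), -np.inf)
    for a in range(n_angles):
        ang = np.pi * a / n_angles
        line_sum = np.zeros((h, w))
        for t in range(-half, half + 1):
            dy = int(round(t * np.sin(ang)))
            dx = int(round(t * np.cos(ang)))
            line_sum += pad[half + dy : half + dy + h, half + dx : half + dx + w]
        best = np.maximum(best, line_sum / length)
    return best - win_mean
```

The loops above mirror the abstract's point about Matlab scripts: the same computation vectorizes readily (e.g. precomputing shifted views), which is where the claimed further speedup would come from.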
In clinical practice, the microscopic examination of urine sediment is considered an important in vitro examination with many broad applications. Measuring the amount of each type of urine sediment allows for screening, diagnosis and evaluation of kidney and urinary tract disease, providing insight into the specific type and severity. However, manual urine sediment examination is labor-intensive, time-consuming, and subjective. Traditional machine learning based object detection methods require hand-crafted features for localization and classification, have poor generalization capabilities, and make it difficult to quickly and accurately count urine sediments. Deep learning based object detection methods have the potential to address these challenges, but they require access to large urine sediment image datasets, and only a limited number of such datasets are currently publicly available. To alleviate the lack of urine sediment datasets in medical image analysis, we propose a new dataset named UriSed2K, which contains 2465 high-quality images annotated with expert guidance. Two main challenges are associated with our dataset: a large number of small objects and the occlusion between these small objects. Our manuscript focuses on applying deep learning object detection methods to the urine sediment dataset and addressing the challenges presented by this dataset. Specifically, our goal is to improve the accuracy and efficiency of the detection algorithm and, in doing so, provide medical professionals with an automatic detector that saves time and effort. We propose an improved lightweight one-stage object detection algorithm called Discriminatory-YOLO. The proposed algorithm comprises a local context attention module and a global background suppression module, which aid the detector in distinguishing urine sediment features in the image. The local context attention module captures context information beyond the object region, while the global background suppression module emphasizes objects in uninformative backgrounds. We comprehensively evaluate our method on the UriSed2K dataset, which includes seven categories of urine sediments: erythrocytes (red blood cells), leukocytes (white blood cells), epithelial cells, crystals, mycetes, broken erythrocytes, and broken leukocytes, achieving the best average precision (AP) of 95.3% while taking only 10 ms per image. The source code and dataset are available at https://github.com/binghuiwu98/discriminatoryyolov5.
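The "context beyond the object region" idea can be pictured as enlarging each candidate box by a relative margin before features are pooled, so small objects are seen together with their surroundings. This helper is hypothetical, purely to illustrate the geometry; it is not the paper's module.

```python
def expand_box(box, margin, img_w, img_h):
    """Expand an (x1, y1, x2, y2) box by a relative margin, clipped to the image.
    margin=0.5 grows the box by half its width/height on each side."""
    x1, y1, x2, y2 = box
    dw = margin * (x2 - x1)
    dh = margin * (y2 - y1)
    return (max(0, x1 - dw), max(0, y1 - dh),
            min(img_w, x2 + dw), min(img_h, y2 + dh))
```

For small objects like urine sediments, this extra context is often what disambiguates, say, a broken erythrocyte from background debris.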
A novel image encryption scheme based on parallel compressive sensing and edge detection embedding technology is proposed to improve visual security. Firstly, the plain image is sparsely represented using the discrete wavelet transform. Then, the coefficient matrix is scrambled and compressed to obtain a size-reduced image using the Fisher–Yates shuffle and parallel compressive sensing. Subsequently, to increase the security of the proposed algorithm, the compressed image is re-encrypted through permutation and diffusion to obtain a noise-like secret image. Finally, an adaptive embedding method based on edge detection for different carrier images is proposed to generate a visually meaningful cipher image. To improve the plaintext sensitivity of the algorithm, the counter mode is combined with the hash function to generate keys for chaotic systems. Additionally, an effective permutation method is designed to scramble the pixels of the compressed image in the re-encryption stage. The simulation results and analyses demonstrate that the proposed algorithm performs well in terms of visual security and decryption quality.
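Two of the building blocks named above can be sketched generically: a Fisher–Yates shuffle driven by a chaotic keystream, and column-wise ("parallel") compressive sensing with a random measurement matrix. The logistic map, the Gaussian measurement matrix, and all parameter values are our illustrative assumptions, not the paper's key schedule.

```python
import numpy as np

def logistic_stream(x0=0.61, mu=3.99):
    """Logistic-map keystream yielding floats in (0, 1)."""
    x = x0
    while True:
        x = mu * x * (1.0 - x)
        yield x

def fisher_yates(n, keystream):
    """Keyed Fisher–Yates permutation of range(n); keystream gives floats in [0, 1)."""
    perm = list(range(n))
    for i in range(n - 1, 0, -1):
        j = int(next(keystream) * (i + 1))  # j in [0, i]
        perm[i], perm[j] = perm[j], perm[i]
    return perm

def parallel_cs(X, ratio=0.5, seed=0):
    """Column-wise compressive sensing: one Gaussian matrix measures every
    column independently, shrinking the row dimension by `ratio`."""
    m = int(X.shape[0] * ratio)
    Phi = np.random.default_rng(seed).normal(size=(m, X.shape[0])) / np.sqrt(m)
    return Phi @ X
```

Because every column is measured independently, the compression stage parallelises trivially, which is the "parallel" in parallel compressive sensing.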
The intelligent detection technology driven by X-ray images and deep learning represents the forefront of advanced techniques and development trends in flaw detection and automated evaluation of light alloy castings. However, the efficacy of deep learning models hinges upon a substantial abundance of flaw samples. The existing research on X-ray image augmentation for flaw detection suffers from shortcomings such as poor diversity of flaw samples and low reliability of quality evaluation. To this end, a novel approach was put forward, which involves the creation of the Interpolation-Deep Convolutional Generative Adversarial Network (I-DCGAN) for flaw detection image generation and a comprehensive evaluation algorithm named TOPSIS-IFP. I-DCGAN enables the generation of high-resolution, diverse simulated images with multiple appearances, achieving an improvement in sample diversity and quality while maintaining a relatively low computational complexity. TOPSIS-IFP facilitates multi-dimensional quality evaluation, including aspects such as diversity, authenticity, image distribution difference, and image distortion degree. The results indicate that the X-ray radiographic images of magnesium and aluminum alloy castings achieve optimal performance when trained up to the 800th and 600th epochs, respectively, with TOPSIS-IFP values reaching 78.7% and 73.8% similarity to the ideal solution. Compared to single-index evaluation, the TOPSIS-IFP algorithm achieves higher-quality simulated images at the optimal training epoch and successfully mitigates the issue of unreliable quality associated with single-index evaluation. The image generation and comprehensive quality evaluation method developed in this paper provides a novel approach for image augmentation in flaw recognition, holding significant importance for enhancing the robustness of subsequent flaw recognition networks.
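The "similarity to the ideal solution" figures quoted above are the standard TOPSIS closeness coefficient. A generic TOPSIS sketch is below; the IFP-specific criteria and weights of the paper's variant are not reproduced here, so treat this as the textbook baseline only.

```python
import numpy as np

def topsis(scores, weights, benefit):
    """TOPSIS closeness to the ideal solution.
    scores: (n_alternatives, n_criteria); benefit[i] True if larger is better.
    Returns closeness in [0, 1], 1 = ideal."""
    norm = scores / np.linalg.norm(scores, axis=0)   # vector-normalise criteria
    v = norm * weights                               # weighted decision matrix
    ideal = np.where(benefit, v.max(0), v.min(0))
    worst = np.where(benefit, v.min(0), v.max(0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - worst, axis=1)
    return d_neg / (d_pos + d_neg)
```

Aggregating several quality indices this way is exactly what lets the method avoid the unreliability of ranking generated images by a single index.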
Transformer-based models have facilitated significant advances in object detection. However, their extensive computational consumption and suboptimal detection of dense small objects curtail their applicability in unmanned aerial vehicle (UAV) imagery. Addressing these limitations, we propose a hybrid transformer-based detector, H-DETR, and enhance it for dense small objects, leading to an accurate and efficient model. Firstly, we introduce a hybrid transformer encoder, which integrates a convolutional neural network-based cross-scale fusion module with the original encoder to handle multi-scale feature sequences more efficiently. Furthermore, we propose two novel strategies to enhance detection performance without incurring additional inference computation. The query filter is designed to cope with the dense clustering inherent in drone-captured images by counteracting similar queries with a training-aware non-maximum suppression. Adversarial denoising learning is a novel enhancement method inspired by adversarial learning, which improves the detection of numerous small targets by counteracting the effects of artificial spatial and semantic noise. Extensive experiments on the VisDrone and UAVDT datasets substantiate the effectiveness of our approach, achieving a significant improvement in accuracy with a reduction in computational complexity. Our method achieves 31.9% and 21.1% AP on the VisDrone and UAVDT datasets, respectively, and has a faster inference speed, making it a competitive model in UAV image object detection.
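The query filter above builds on non-maximum suppression. For reference, here is plain greedy NMS over boxes and scores; the paper's training-aware variant applies this kind of duplicate suppression to decoder queries during training, which is not shown here.

```python
import numpy as np

def iou(box, boxes):
    """IoU between one (x1, y1, x2, y2) box and an array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(np.asarray(box)) + area(boxes) - inter)

def nms(boxes, scores, thresh=0.5):
    """Greedy NMS: keep the best-scoring box, drop overlapping near-duplicates."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) <= thresh]
    return keep
```

In dense drone scenes the IoU threshold is the sensitive knob: too low and adjacent small objects suppress each other, which is the failure mode the query filter is designed to avoid.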
Oscillation detection has been a hot research topic in industries due to the high incidence of oscillation loops and their negative impact on plant profitability. Although numerous automatic detection techniques have been proposed, most of them can only address part of the practical difficulties. An oscillation is heuristically defined as a visually apparent periodic variation; however, manual visual inspection is labor-intensive and prone to missed detection. Convolutional neural networks (CNNs), inspired by animal visual systems, offer powerful feature extraction capabilities. In this work, an exploration of typical CNN models for visual oscillation detection is performed. Specifically, we tested the MobileNet-V1, ShuffleNet-V2, EfficientNet-B0, and GhostNet models, and found that such a visual framework is well-suited for oscillation detection. The feasibility and validity of this framework are verified using extensive numerical and industrial cases. Compared with state-of-the-art oscillation detectors, the suggested framework is more straightforward and more robust to noise and mean nonstationarity. In addition, this framework generalizes well and is capable of handling features that are not present in the training data, such as multiple oscillations and outliers.
Camouflaged people are extremely expert in actively concealing themselves by effectively utilizing cover and the surrounding environment. Despite advancements in optical detection capabilities through imaging systems, including spectral, polarization, and infrared technologies, there is still a lack of effective real-time methods for accurately detecting small camouflaged people in complex real-world scenes. Here, this study proposes a snapshot multispectral image-based camouflaged detection model, multispectral YOLO (MS-YOLO), which utilizes the SPD-Conv and SimAM modules to effectively represent targets and suppress background interference by exploiting spatial-spectral target information. Besides, the study constructs the first real-shot multispectral camouflaged people dataset (MSCPD), which encompasses diverse scenes, target scales, and attitudes. To minimize information redundancy, MS-YOLO selects an optimal subset of 12 bands with strong feature representation and minimal inter-band correlation as input. Through experiments on the MSCPD, MS-YOLO achieves a mean Average Precision of 94.31% and real-time detection at 65 frames per second, which confirms the effectiveness and efficiency of our method in detecting camouflaged people in various typical desert and forest scenes. Our approach offers valuable support for improving the perception capabilities of unmanned aerial vehicles in detecting enemy forces and rescuing personnel on the battlefield.
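Band selection with "strong feature representation and minimal inter-band correlation" can be approximated greedily: seed with the most informative band, then repeatedly add the band least correlated with those already chosen. The variance-as-informativeness proxy and the greedy rule are our assumptions; the paper's actual selection criterion may differ.

```python
import numpy as np

def select_bands(cube, k):
    """Greedy band subset from a (bands, H, W) cube: start from the
    highest-variance band, then add the band with the lowest mean
    absolute correlation to the bands already chosen."""
    bands = cube.reshape(cube.shape[0], -1)
    corr = np.abs(np.corrcoef(bands))
    chosen = [int(np.argmax(bands.var(axis=1)))]
    while len(chosen) < k:
        rest = [b for b in range(len(bands)) if b not in chosen]
        chosen.append(min(rest, key=lambda b: corr[b, chosen].mean()))
    return sorted(chosen)
```

Dropping near-duplicate bands this way keeps the 12-band input compact without discarding complementary spectral cues.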
We propose a novel image segmentation algorithm to tackle the challenge of limited recognition and segmentation performance in identifying welding seam images during robotic intelligent operations. Initially, to enhance the capability of deep neural networks in extracting geometric attributes from depth images, we developed a novel deep geometric convolution operator (DGConv). DGConv is utilized to construct a deep local geometric feature extraction module, facilitating a more comprehensive exploration of the intrinsic geometric information within depth images. Secondly, we integrate the newly proposed deep geometric feature module with the Fully Convolutional Network (FCN8) to establish a high-performance deep neural network algorithm tailored for depth image segmentation. Concurrently, we enhance the FCN8 detection head by separating the segmentation and classification processes. This enhancement significantly boosts the network's overall detection capability. Thirdly, for a comprehensive assessment of our proposed algorithm and its applicability in real-world industrial settings, we curated a line-scan image dataset featuring weld seams. This dataset, named the Standardized Linear Depth Profile (SLDP) dataset, was collected from actual industrial sites where autonomous robots are in operation. Ultimately, we conducted experiments utilizing the SLDP dataset, achieving an average accuracy of 92.7%. Our proposed approach exhibited a remarkable performance improvement over the prior method on the identical dataset. Moreover, we have successfully deployed the proposed algorithm in genuine industrial environments, fulfilling the prerequisites of unmanned robot operations.
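One concrete example of the "intrinsic geometric information" a depth image carries, which ordinary convolutions ignore but a geometry-aware operator can exploit, is the per-pixel surface normal recoverable from depth gradients. This sketch is only an illustration of that cue, not the DGConv operator itself.

```python
import numpy as np

def depth_normals(depth, fx=1.0, fy=1.0):
    """Per-pixel surface normals from a depth map via central differences.
    fx, fy are (assumed) scale factors converting pixel steps to metric steps."""
    dzdx = np.gradient(depth, axis=1) * fx
    dzdy = np.gradient(depth, axis=0) * fy
    n = np.dstack([-dzdx, -dzdy, np.ones_like(depth)])
    return n / np.linalg.norm(n, axis=2, keepdims=True)
```

A weld seam shows up as a sharp crease in this normal field even when raw depth values vary smoothly, which is the intuition for feeding geometric features rather than raw depth into the segmentation network.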
Objective This study aimed to compare the performance of standard-definition white-light endoscopy (SD-WL), high-definition white-light endoscopy (HD-WL), and high-definition narrow-band imaging (HD-NBI) in detecting colorectal lesions in the Chinese population. Methods This was a multicenter, single-blind, randomized, controlled trial with a non-inferiority design. Patients undergoing endoscopy for physical examination, screening, and surveillance were enrolled from July 2017 to December 2020. The primary outcome measure was the adenoma detection rate (ADR), defined as the proportion of patients with at least one adenoma detected. The factors associated with adenoma detection were assessed using univariate and multivariate logistic regression. Results Of the 653 eligible patients enrolled, data from 596 patients were analyzed. The ADRs were 34.5% in the SD-WL group, 33.5% in the HD-WL group, and 37.5% in the HD-NBI group (P = 0.72). The advanced neoplasm detection rates (ANDRs) in the three arms were 17.1%, 15.5%, and 10.4% (P = 0.17). No significant differences were found between the SD group and the HD group regarding ADR or ANDR (ADR: 34.5% vs. 35.6%, P = 0.79; ANDR: 17.1% vs. 13.0%, P = 0.16). Similar results were observed between the HD-WL group and the HD-NBI group (ADR: 33.5% vs. 37.7%, P = 0.45; ANDR: 15.5% vs. 10.4%, P = 0.18). In the univariate and multivariate logistic regression analyses, neither HD-WL nor HD-NBI led to a significant difference in overall adenoma detection compared to SD-WL (HD-WL: OR 0.91, P = 0.69; HD-NBI: OR 1.15, P = 0.80). Conclusion HD-NBI and HD-WL are comparable to SD-WL for overall adenoma detection among Chinese outpatients. Neither HD-NBI nor HD-WL is superior to SD-WL, and more effective instruction may be needed to guide the selection of different endoscopic methods in the future. Our study's conclusions may aid in the efficient allocation and utilization of limited colonoscopy resources, especially advanced imaging technologies.
Breast cancer is a significant threat to the global population, affecting not only women but the population at large. With recent advancements in digital pathology, hematoxylin and eosin images provide enhanced clarity in examining microscopic features of breast tissues based on their staining properties. Early cancer detection facilitates the quickening of the therapeutic process, thereby increasing survival rates. The analysis made by medical professionals, especially pathologists, is time-consuming and challenging, and there arises a need for automated breast cancer detection systems. The upcoming artificial intelligence platforms, especially deep learning models, play an important role in image diagnosis and prediction. Initially, the histopathology biopsy images are taken from standard data sources. Further, the gathered images are given as input to the Multi-Scale Dilated Vision Transformer, where the essential features are acquired. Subsequently, the features are subjected to a Bidirectional Long Short-Term Memory (Bi-LSTM) network for classifying the breast cancer disorder. The efficacy of the model is evaluated using divergent metrics. When compared with other methods, the proposed work reveals that it offers impressive results for detection.
Unmanned aerial vehicles (UAVs) have been widely used in military, medical, wireless communication, and aerial surveillance applications. One key topic involving UAVs is pose estimation in autonomous navigation. A standard procedure for this process is to combine inertial navigation system sensor information with the global navigation satellite system (GNSS) signal. However, some factors can interfere with the GNSS signal, such as ionospheric scintillation, jamming, or spoofing. One alternative method to avoid using the GNSS signal is to apply an image processing approach by matching UAV images with georeferenced images, but a high effort is required for image edge extraction. Here a support vector regression (SVR) model is proposed to reduce this computational load and processing time. The dynamic partial reconfiguration (DPR) of part of the SVR datapath is implemented to accelerate the process, reduce the area, and analyze its granularity by increasing the grain size of the reconfigurable region. Results show that the implementation in hardware is 68 times faster than that in software. This architecture with DPR also achieves a low power consumption of 4 mW, a 57% reduction compared with the architecture without DPR, and the lowest power consumption among current machine learning hardware implementations. Besides, the circuitry area is 41 times smaller. SVR with a Gaussian kernel shows a success rate of 99.18% and a minimum square error of 0.0146 when tested on the planned trajectory. This system is useful for adaptive applications where the user/designer can modify/reconfigure the hardware layout during its application, thus contributing to lower power consumption, smaller hardware area, and shorter execution time.
In computer vision, object recognition and image categorization have proven to be difficult challenges. They have, nevertheless, generated responses to a wide range of difficult issues from a variety of fields. Convolutional Neural Networks (CNNs) have recently been identified as the most widely proposed deep learning (DL) algorithms in the literature. CNNs have unquestionably delivered cutting-edge achievements, particularly in the areas of image classification, speech recognition, and video processing. However, it has been noticed that the CNN training task demands a large amount of data, which is in low supply, especially in the medical industry, and as a result, the training process takes longer. In this paper, we describe an attention-aware CNN architecture for classifying chest X-ray images to diagnose pneumonia in order to address the aforementioned difficulties. Attention modules provide attention-aware properties to the attention network. The attention-aware features of the various modules change as the layers become deeper. Using a bottom-up top-down feedforward structure, the feedforward and feedback attention processes are integrated into a single feedforward process inside each attention module. In the present work, a deep neural network (DNN) is combined with an attention mechanism to test the prediction of pneumonia disease using chest X-ray pictures. To produce attention-aware features, the suggested network was built by merging channel and spatial attention modules in the DNN architecture. With this network, we worked on a publicly available Kaggle chest X-ray dataset. Extensive testing was carried out to validate the suggested model. In the experimental results, we attained an accuracy of 95.47% and an F-score of 0.92, indicating that the suggested model outperformed the baseline models.
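The channel and spatial attention ideas mentioned above can be sketched in plain numpy on a (channels, H, W) feature map: channel attention reweights whole channels from their pooled statistics, while spatial attention gates individual locations. These are generic textbook forms (squeeze-and-excitation-like and a sigmoid spatial gate), not the paper's exact modules.

```python
import numpy as np

def channel_attention(fmap):
    """Reweight channels by softmax over their global average-pooled means,
    rescaled by the channel count so the overall magnitude is preserved."""
    w = fmap.mean(axis=(1, 2))
    w = np.exp(w - w.max())
    w /= w.sum()
    return fmap * w[:, None, None] * fmap.shape[0]

def spatial_attention(fmap):
    """Gate each spatial location by a sigmoid of its cross-channel mean,
    emphasising locations with strong average activation."""
    s = fmap.mean(axis=0)
    gate = 1.0 / (1.0 + np.exp(-s))
    return fmap * gate[None]
```

Stacking both lets the network emphasise which feature types (channels) and which lung regions (locations) matter for the pneumonia decision.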
X-ray security equipment is currently one of the most commonly used tools for detecting dangerous goods. With increasing security workloads, using object detection technology to assist security personnel has become an inevitable trend. With the development of deep learning, object detection technology is becoming more and more mature, and object detection frameworks based on convolutional neural networks have been widely used in industrial, medical and military fields. In order to improve the efficiency of security staff and reduce the risk of missed detection of dangerous goods, this paper uses a method of inserting dangerous goods into empty-package images, based on the data collected by X-ray security equipment, to balance the classes of dangerous goods and expand the dataset. The high- and low-energy images are combined using a high-low energy feature fusion method. Finally, dangerous goods detection based on the YOLOv7 model is used for model training. After introducing the above methods, the detection accuracy is improved by 6% compared with directly using the original dataset, and the speed is 93 FPS, which can meet the requirements of an online security system, greatly improve the work efficiency of security personnel, and eliminate the security risks caused by missed detection.
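The insert-into-empty-package augmentation can be sketched as follows. X-ray transmission through stacked materials is multiplicative, so a physically plausible way to composite a dangerous-goods crop is a pixel-wise product of normalised intensities; this multiplicative rule is our assumption, as the paper does not state its exact insertion formula.

```python
import numpy as np

def insert_object(package, obj, top, left):
    """Paste a dangerous-goods crop into an empty-package X-ray image by
    multiplying normalised transmission intensities over the target region."""
    out = package.astype(float) / 255.0
    h, w = obj.shape
    out[top : top + h, left : left + w] *= obj.astype(float) / 255.0
    return np.round(out * 255.0).astype(np.uint8)
```

Generating balanced numbers of each threat class this way is what lets rare categories reach parity with common ones before YOLOv7 training.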
COVID-19 is a respiratory illness caused by the SARS-CoV-2 virus, first identified in 2019. The primary mode of transmission is through respiratory droplets when an infected person coughs or sneezes. Symptoms can range from mild to severe, and timely diagnosis is crucial for effective treatment. Chest X-ray imaging is one diagnostic tool used for COVID-19, and a Convolutional Neural Network (CNN) is a popular technique for image classification. In this study, we proposed a CNN-based approach for detecting COVID-19 in chest X-ray images. The model was trained on a dataset containing both COVID-19 positive and negative cases and evaluated on a separate test dataset to measure its accuracy. Our results indicated that the CNN approach could accurately detect COVID-19 in chest X-ray images, with an overall accuracy of 97%. This approach could potentially serve as an early diagnostic tool to reduce the spread of the virus.
Funding: The authors extend their appreciation to the Deanship of Postgraduate Studies and Scientific Research at Majmaah University for funding this research work through Project Number R-2024-922.
Abstract: This paper emphasizes faster digital processing while presenting an accurate method for identifying spine fractures in X-ray images. The study focuses on efficiency by utilizing several methods, including image segmentation, feature reduction, and image classification. Two important elements are investigated to reduce the classification time: feature reduction software and the capabilities of sophisticated digital processing hardware. The researchers use different algorithms for image enhancement, including the Wiener and Kalman filters, and examine two background correction techniques. The article presents a technique for extracting textural features and evaluates three image segmentation algorithms and three fractured-spine detection algorithms, using the transform domain, the Power Density Spectrum (PDS), and Higher-Order Statistics (HOS) for feature extraction. With an emphasis on reducing digital processing time, this all-encompassing method helps to create a simplified system for classifying spine fractures. A feature reduction program has been built to improve the processing speed for image classification. Overall, the proposed approach shows great potential for significantly reducing classification time in clinical settings where time is critical. In comparison with other transform domains, the discrete cosine transform (DCT) of the texture features yielded an exceptional classification rate, and extracting features from the transform domain took less time. More capable hardware can also yield quicker execution times for the feature extraction algorithms.
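The abstract above credits DCT-domain texture features with the best classification rate. A minimal sketch of such a feature extractor is shown below; the 8×8 block size, the number of retained low-frequency coefficients, and the use of coefficient magnitudes as the feature vector are illustrative assumptions, not details from the paper.

```python
import numpy as np

def dct2(block):
    """2-D orthonormal DCT-II of a square block, built from the DCT matrix."""
    n = block.shape[0]
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    c[0, :] /= np.sqrt(2.0)          # DC row scaling makes the matrix orthonormal
    return c @ block @ c.T

def dct_texture_features(patch, keep=4):
    """Magnitudes of the low-frequency `keep` x `keep` DCT coefficients,
    a compact texture descriptor (hypothetical feature choice)."""
    coeffs = dct2(patch.astype(float))
    return np.abs(coeffs[:keep, :keep]).ravel()
```

Varying `keep` trades feature dimensionality against texture detail; since the orthonormal DCT preserves block energy (Parseval), the low-frequency magnitudes summarize most natural-image texture.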
Funding: funding and support from the United Kingdom Space Agency (UKSA) and the European Space Agency (ESA); funded and supported through the ESA PRODEX scheme (PRODEX PEA 4000123238); the Research Council of Norway grant 223252; Spanish MCIN/AEI/10.13039/501100011033 grant PID2019-107061GB-C61; funding and support from the Chinese Academy of Sciences (CAS) and the National Aeronautics and Space Administration (NASA).
Abstract: The Soft X-ray Imager (SXI) is part of the scientific payload of the Solar wind Magnetosphere Ionosphere Link Explorer (SMILE) mission. SMILE is a joint science mission between the European Space Agency (ESA) and the Chinese Academy of Sciences (CAS) and is due for launch in 2025. SXI is a compact X-ray telescope with a wide field-of-view (FOV) capable of encompassing large portions of Earth's magnetosphere from the vantage point of the SMILE orbit. SXI is sensitive to the soft X-rays produced by the Solar Wind Charge eXchange (SWCX) process, which occurs when heavy ions of solar wind origin interact with neutral particles in Earth's exosphere. SWCX provides a mechanism for boundary detection within the magnetosphere, such as the position of Earth's magnetopause, because the solar wind heavy ions have a very low density in regions of closed magnetic field lines. The sensitivity of the SXI is such that it can potentially track movements of the magnetopause on timescales of a few minutes, and the orbit of SMILE will enable such movements to be tracked for segments lasting many hours. SXI is led by the University of Leicester in the United Kingdom (UK) with collaborating organisations on hardware, software and science support within the UK, Europe, China and the United States.
Abstract: Throughout the SMILE mission the satellite will be bombarded by radiation which gradually damages the focal plane devices and degrades their performance. In order to understand the changes of the CCD370s within the Soft X-ray Imager, an initial characterisation of the devices has been carried out to give a baseline performance level. Three CCDs have been characterised: the two flight devices and the flight spare. This was carried out at the Open University in a bespoke cleanroom measurement facility. The results show that there is a cluster of bright pixels in the flight spare which increases in size with temperature. However, at the nominal operating temperature (-120℃) it is within the procurement specifications. Overall, the devices meet the specifications when operating at -120℃ in 6 × 6 binned frame transfer science mode. The serial charge transfer inefficiency degrades with temperature in full frame mode; however, any charge losses are recovered when binning/frame transfer is implemented.
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 42322408, 42188101, 41974211, and 42074202); the Key Research Program of Frontier Sciences, Chinese Academy of Sciences (Grant No. QYZDJ-SSW-JSC028); the Strategic Priority Program on Space Science, Chinese Academy of Sciences (Grant Nos. XDA15052500, XDA15350201, and XDA15014800); and the Youth Innovation Promotion Association of the Chinese Academy of Sciences (Grant No. Y202045).
Abstract: Astronomical imaging technologies are basic tools for the exploration of the universe, providing basic data for research in astronomy and space physics. The Soft X-ray Imager (SXI) carried by the Solar wind Magnetosphere Ionosphere Link Explorer (SMILE) aims to capture two-dimensional (2-D) images of the Earth's magnetosheath by using soft X-ray imaging. However, the observed 2-D images are affected by many noise factors, destroying the contained information, which is not conducive to the subsequent reconstruction of the three-dimensional (3-D) structure of the magnetopause. The analysis of SXI-simulated observation images shows that such damage cannot be evaluated with traditional restoration models. This makes it difficult to establish the mapping relationship between SXI-simulated observation images and target images by using mathematical models. We propose an image restoration algorithm for SXI-simulated observation images that can recover large-scale structure information on the magnetosphere. The idea is to train a patch estimator by selecting noise-clean patch pairs with the same distribution through the Classification-Expectation Maximization algorithm to achieve the restoration estimation of the SXI-simulated observation image, whose mapping relationship with the target image is established by the patch estimator. The Classification-Expectation Maximization algorithm is used to select multiple patch clusters with the same distribution and then train different patch estimators so as to improve the accuracy of the estimator. Experimental results showed that our image restoration algorithm is superior to other classical image restoration algorithms in the SXI-simulated observation image restoration task, according to the peak signal-to-noise ratio and structural similarity. The restoration results of SXI-simulated observation images are used in the tangent fitting approach and the computed tomography approach toward magnetospheric reconstruction techniques, significantly improving the reconstruction results. Hence, the proposed technology may be feasible for processing SXI-simulated observation images.
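The clustering step of the Classification-Expectation Maximization pipeline described above can be illustrated with a heavily simplified sketch: patches are hard-assigned to clusters by a scalar statistic, so that each cluster could later train its own estimator. The scalar statistic (patch mean intensity), the deterministic quantile initialisation, and the plain k-means update are all assumptions made for illustration, not the paper's algorithm, which works on full patch distributions.

```python
import numpy as np

def classify_patches(patch_means, k=2, iters=20):
    """Hard-assignment clustering of patches by mean intensity: a 1-D
    k-means sketch of the classification step in a Classification-EM
    pipeline (simplified stand-in, not the paper's method)."""
    x = np.asarray(patch_means, float)
    # Deterministic spread initialisation: evenly spaced quantiles of the data.
    centers = np.quantile(x, np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        # Assignment step: nearest center wins.
        z = np.abs(x[:, None] - centers[None, :]).argmin(axis=1)
        # Update step: recompute each cluster's center.
        for j in range(k):
            if np.any(z == j):
                centers[j] = x[z == j].mean()
    return z, centers
```

Each resulting cluster then gets its own patch estimator, which is the mechanism the abstract credits for the accuracy gain.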
Funding: support from the UK Space Agency under Grant Number ST/T002964/1; partly supported by the International Space Science Institute (ISSI) in Bern, through ISSI International Team Project Number 523 ("Imaging the Invisible: Unveiling the Global Structure of Earth's Dynamic Magnetosphere").
Abstract: The Solar wind Magnetosphere Ionosphere Link Explorer (SMILE) Soft X-ray Imager (SXI) will shine a spotlight on magnetopause dynamics during magnetic reconnection. We simulate an event with a southward interplanetary magnetic field turning and produce SXI count maps with a 5-minute integration time. By making assumptions about the magnetopause shape, we find the magnetopause standoff distance from the count maps and compare it with the one obtained directly from the magnetohydrodynamic (MHD) simulation. The root mean square deviations between the reconstructed and MHD standoff distances do not exceed 0.2 RE (Earth radii), and the maximal difference equals 0.24 RE during the 25-minute interval around the southward turning.
Abstract: Diagnosing various diseases such as glaucoma, age-related macular degeneration, cardiovascular conditions, and diabetic retinopathy involves segmenting retinal blood vessels. The task is particularly challenging when dealing with color fundus images due to issues like non-uniform illumination, low contrast, and variations in vessel appearance, especially in the presence of different pathologies. Furthermore, the speed of the retinal vessel segmentation system is of utmost importance. With the surge of now-available big data, the speed of the algorithm becomes increasingly important, carrying almost equivalent weight to its accuracy. To address these challenges, we present a novel approach for retinal vessel segmentation, leveraging efficient and robust techniques based on multiscale line detection and mathematical morphology. Our algorithm's performance is evaluated on two publicly available datasets, namely the Digital Retinal Images for Vessel Extraction (DRIVE) dataset and the Structured Analysis of the Retina (STARE) dataset. The experimental results demonstrate the effectiveness of our method, with mean accuracy values of 0.9467 for DRIVE and 0.9535 for STARE, as well as sensitivity values of 0.6952 for DRIVE and 0.6809 for STARE. Notably, our algorithm exhibits competitive performance with state-of-the-art methods. Importantly, it operates at an average speed of 3.73 s per image for DRIVE and 3.75 s for STARE. It is worth noting that these results were achieved using Matlab scripts containing multiple loops, which suggests that the processing time can be further reduced by replacing loops with vectorization; thus the proposed algorithm can be deployed in real-time applications. In summary, our proposed system strikes a fine balance between swift computation and accuracy on par with the best available methods in the field.
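A basic version of the line detection underlying the vessel segmentation above can be sketched as follows: the response at a pixel is the strongest average intensity along a short oriented segment through it, minus the local window mean. Running it at several segment lengths and combining responses gives the multiscale variant; the segment length, the number of orientations, and the box-mean normalisation here are illustrative choices, not the paper's parameters.

```python
import numpy as np

def line_detector(img, length=7, n_angles=8):
    """Single-scale line detector: for each pixel, the best average response
    along a short line segment over n_angles orientations, minus the local
    window mean. On an inverted fundus green channel, vessels respond strongly."""
    h, w = img.shape
    pad = length // 2
    padded = np.pad(img, pad, mode="reflect")
    # Local box mean over a length x length window.
    win_mean = sum(
        padded[i:i + h, j:j + w] for i in range(length) for j in range(length)
    ) / length ** 2
    best = np.full((h, w), -np.inf)
    for a in range(n_angles):
        theta = np.pi * a / n_angles
        line = np.zeros((h, w))
        for t in range(-pad, pad + 1):
            di = pad + int(round(t * np.sin(theta)))
            dj = pad + int(round(t * np.cos(theta)))
            line += padded[di:di + h, dj:dj + w]
        best = np.maximum(best, line / length)
    return best - win_mean
```

Thresholding the response (optionally after morphological cleanup, as the abstract mentions) yields a binary vessel map.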
Funding: This work was partially supported by the National Natural Science Foundation of China (Grant Nos. 61906168, U20A20171), the Zhejiang Provincial Natural Science Foundation of China (Grant Nos. LY23F020023, LY21F020027), and the Construction of Hubei Provincial Key Laboratory for Intelligent Visual Monitoring of Hydropower Projects (Grant No. 2022SDSJ01).
Abstract: In clinical practice, the microscopic examination of urine sediment is considered an important in vitro examination with many broad applications. Measuring the amount of each type of urine sediment allows for screening, diagnosis and evaluation of kidney and urinary tract disease, providing insight into the specific type and severity. However, manual urine sediment examination is labor-intensive, time-consuming, and subjective. Traditional machine learning based object detection methods require hand-crafted features for localization and classification, have poor generalization capabilities, and find it difficult to quickly and accurately detect the number of urine sediments. Deep learning based object detection methods have the potential to address these challenges, but they require access to large urine sediment image datasets. Unfortunately, only a limited number of publicly available urine sediment datasets currently exist. To alleviate the lack of urine sediment datasets in medical image analysis, we propose a new dataset named UriSed2K, which contains 2465 high-quality images annotated with expert guidance. Two main challenges are associated with our dataset: a large number of small objects and the occlusion between these small objects. Our manuscript focuses on applying deep learning object detection methods to the urine sediment dataset and addressing the challenges presented by this dataset. Specifically, our goal is to improve the accuracy and efficiency of the detection algorithm and, in doing so, provide medical professionals with an automatic detector that saves time and effort. We propose an improved lightweight one-stage object detection algorithm called Discriminatory-YOLO. The proposed algorithm comprises a local context attention module and a global background suppression module, which aid the detector in distinguishing urine sediment features in the image. The local context attention module captures context information beyond the object region, while the global background suppression module emphasizes objects in uninformative backgrounds. We comprehensively evaluate our method on the UriSed2K dataset, which includes seven categories of urine sediments, such as erythrocytes (red blood cells), leukocytes (white blood cells), epithelial cells, crystals, mycetes, broken erythrocytes, and broken leukocytes, achieving the best average precision (AP) of 95.3% while taking only 10 ms per image. The source code and dataset are available at https://github.com/binghuiwu98/discriminatoryyolov5.
Funding: supported by the Key Area R&D Program of Guangdong Province (Grant No. 2022B0701180001); the National Natural Science Foundation of China (Grant No. 61801127); the Science and Technology Planning Project of Guangdong Province, China (Grant Nos. 2019B010140002 and 2020B111110002); and the Guangdong-Hong Kong-Macao Joint Innovation Field Project (Grant No. 2021A0505080006).
Abstract: A novel image encryption scheme based on parallel compressive sensing and edge detection embedding technology is proposed to improve visual security. Firstly, the plain image is sparsely represented using the discrete wavelet transform. Then, the coefficient matrix is scrambled and compressed to obtain a size-reduced image using the Fisher–Yates shuffle and parallel compressive sensing. Subsequently, to increase the security of the proposed algorithm, the compressed image is re-encrypted through permutation and diffusion to obtain a noise-like secret image. Finally, an adaptive embedding method based on edge detection for different carrier images is proposed to generate a visually meaningful cipher image. To improve the plaintext sensitivity of the algorithm, the counter mode is combined with the hash function to generate keys for the chaotic systems. Additionally, an effective permutation method is designed to scramble the pixels of the compressed image in the re-encryption stage. The simulation results and analyses demonstrate that the proposed algorithm performs well in terms of visual security and decryption quality.
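The Fisher–Yates scrambling stage mentioned above can be sketched with a key-driven shuffle. In the paper the key stream comes from chaotic systems seeded via a hash of the plaintext; this sketch simply seeds Python's PRNG, so it illustrates only the permutation mechanics and its exact inversion at decryption.

```python
import random

def fisher_yates_perm(n, key):
    """Key-driven Fisher-Yates permutation of indices 0..n-1 (sketch; a real
    scheme would derive the swap indices from a chaotic key stream)."""
    rng = random.Random(key)
    perm = list(range(n))
    for i in range(n - 1, 0, -1):
        j = rng.randint(0, i)
        perm[i], perm[j] = perm[j], perm[i]
    return perm

def scramble(flat_coeffs, key):
    """Scramble a flattened coefficient matrix; returns data and permutation."""
    perm = fisher_yates_perm(len(flat_coeffs), key)
    return [flat_coeffs[p] for p in perm], perm

def unscramble(scrambled, perm):
    """Invert the permutation, recovering the original coefficient order."""
    out = [0] * len(scrambled)
    for i, p in enumerate(perm):
        out[p] = scrambled[i]
    return out
```

Because the shuffle is a bijection driven only by the key, the receiver regenerates the same permutation from the shared key and inverts it losslessly.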
Funding: funded by the National Key R&D Program of China (2020YFB1710100) and the National Natural Science Foundation of China (Nos. 52275337, 52090042, 51905188).
Abstract: The intelligent detection technology driven by X-ray images and deep learning represents the forefront of advanced techniques and development trends in flaw detection and automated evaluation of light alloy castings. However, the efficacy of deep learning models hinges upon a substantial abundance of flaw samples. The existing research on X-ray image augmentation for flaw detection suffers from shortcomings such as poor diversity of flaw samples and low reliability of quality evaluation. To this end, a novel approach was put forward, which involves the creation of the Interpolation-Deep Convolutional Generative Adversarial Network (I-DCGAN) for flaw detection image generation and a comprehensive evaluation algorithm named TOPSIS-IFP. I-DCGAN enables the generation of high-resolution, diverse simulated images with multiple appearances, achieving an improvement in sample diversity and quality while maintaining relatively low computational complexity. TOPSIS-IFP facilitates multi-dimensional quality evaluation, including aspects such as diversity, authenticity, image distribution difference, and image distortion degree. The results indicate that the X-ray radiographic images of magnesium and aluminum alloy castings achieve optimal performance when trained up to the 800th and 600th epochs, respectively, with TOPSIS-IFP values reaching 78.7% and 73.8% similarity to the ideal solution. Compared to single-index evaluation, the TOPSIS-IFP algorithm achieves higher-quality simulated images at the optimal training epoch and successfully mitigates the issue of unreliable quality associated with single-index evaluation. The image generation and comprehensive quality evaluation method developed in this paper provides a novel approach for image augmentation in flaw recognition, holding significant importance for enhancing the robustness of subsequent flaw recognition networks.
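The "similarity to the ideal solution" figures quoted above (78.7% and 73.8%) are the kind of score a TOPSIS ranking produces. The sketch below implements plain TOPSIS with uniform weights; the paper's TOPSIS-IFP adds its own quality indices and weighting, which are not reproduced here.

```python
import numpy as np

def topsis(scores, benefit):
    """Plain TOPSIS: rank alternatives by relative closeness to the ideal
    solution. `scores` is (alternatives x criteria); `benefit[j]` is True when
    criterion j is better-when-larger. Uniform weights (a simplification)."""
    m = np.asarray(scores, float)
    m = m / np.linalg.norm(m, axis=0)              # vector-normalise each criterion
    ideal = np.where(benefit, m.max(axis=0), m.min(axis=0))
    worst = np.where(benefit, m.min(axis=0), m.max(axis=0))
    d_best = np.linalg.norm(m - ideal, axis=1)     # distance to ideal point
    d_worst = np.linalg.norm(m - worst, axis=1)    # distance to anti-ideal point
    return d_worst / (d_best + d_worst)            # closeness in [0, 1]
```

A closeness of 1.0 means the alternative coincides with the ideal solution on every criterion; the paper reports its best training epochs by exactly this kind of closeness score.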
Funding: This research was funded by the Natural Science Foundation of Hebei Province (F2021506004).
Abstract: Transformer-based models have facilitated significant advances in object detection. However, their extensive computational consumption and suboptimal detection of dense small objects curtail their applicability in unmanned aerial vehicle (UAV) imagery. Addressing these limitations, we propose a hybrid transformer-based detector, H-DETR, and enhance it for dense small objects, leading to an accurate and efficient model. Firstly, we introduce a hybrid transformer encoder, which integrates a convolutional neural network-based cross-scale fusion module with the original encoder to handle multi-scale feature sequences more efficiently. Furthermore, we propose two novel strategies to enhance detection performance without incurring additional inference computation. The query filter is designed to cope with the dense clustering inherent in drone-captured images by counteracting similar queries with a training-aware non-maximum suppression. Adversarial denoising learning is a novel enhancement method inspired by adversarial learning, which improves the detection of numerous small targets by counteracting the effects of artificial spatial and semantic noise. Extensive experiments on the VisDrone and UAVDT datasets substantiate the effectiveness of our approach, achieving a significant improvement in accuracy with a reduction in computational complexity. Our method achieves 31.9% and 21.1% AP on the VisDrone and UAVDT datasets, respectively, and has a faster inference speed, making it a competitive model in UAV image object detection.
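The query filter described above counteracts near-duplicate decoder queries with a training-aware non-maximum suppression. The classical greedy NMS it builds on can be sketched as follows; applying it to queries during training, rather than to boxes at inference, is the paper's contribution and is not reproduced here.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, thr=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box, drop
    near-duplicates whose IoU with a kept box exceeds the threshold, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[k]) <= thr for k in keep):
            keep.append(i)
    return keep
```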
Funding: the National Natural Science Foundation of China (62003298, 62163036); the Major Project of Science and Technology of Yunnan Province (202202AD080005, 202202AH080009); the Yunnan University Professional Degree Graduate Practice Innovation Fund Project (ZC-22222770).
Abstract: Oscillation detection has been a hot research topic in industry due to the high incidence of oscillation loops and their negative impact on plant profitability. Although numerous automatic detection techniques have been proposed, most of them can only address part of the practical difficulties. An oscillation is heuristically defined as a visually apparent periodic variation. However, manual visual inspection is labor-intensive and prone to missed detection. Convolutional neural networks (CNNs), inspired by animal visual systems, offer powerful feature extraction capabilities. In this work, an exploration of typical CNN models for visual oscillation detection is performed. Specifically, we tested the MobileNet-V1, ShuffleNet-V2, EfficientNet-B0, and GhostNet models, and found that such a visual framework is well suited for oscillation detection. The feasibility and validity of this framework are verified utilizing extensive numerical and industrial cases. Compared with state-of-the-art oscillation detectors, the suggested framework is more straightforward and more robust to noise and mean-nonstationarity. In addition, this framework generalizes well and is capable of handling features that are not present in the training data, such as multiple oscillations and outliers.
Funding: support by the National Natural Science Foundation of China (Grant No. 62005049); the Natural Science Foundation of Fujian Province (Grant Nos. 2020J01451, 2022J05113); and the Education and Scientific Research Program for Young and Middle-aged Teachers in Fujian Province (Grant No. JAT210035).
Abstract: Camouflaged people are extremely expert in actively concealing themselves by effectively utilizing cover and the surrounding environment. Despite advancements in optical detection capabilities through imaging systems, including spectral, polarization, and infrared technologies, there is still a lack of effective real-time methods for accurately and efficiently detecting small-size camouflaged people in complex real-world scenes. Here, this study proposes a snapshot multispectral image-based camouflaged detection model, multispectral YOLO (MS-YOLO), which utilizes the SPD-Conv and SimAM modules to effectively represent targets and suppress background interference by exploiting spatial-spectral target information. Besides, the study constructs the first real-shot multispectral camouflaged people dataset (MSCPD), which encompasses diverse scenes, target scales, and attitudes. To minimize information redundancy, MS-YOLO selects an optimal subset of 12 bands with strong feature representation and minimal inter-band correlation as input. Through experiments on the MSCPD, MS-YOLO achieves a mean average precision of 94.31% and real-time detection at 65 frames per second, which confirms the effectiveness and efficiency of our method in detecting camouflaged people in various typical desert and forest scenes. Our approach offers valuable support for improving the perception capabilities of unmanned aerial vehicles in detecting enemy forces and rescuing personnel on the battlefield.
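The band-selection step above (an optimal 12-band subset with strong feature representation and minimal inter-band correlation) can be approximated with a greedy rule. The variance-based seed and the minimax-correlation criterion below are assumptions standing in for the paper's actual selection procedure.

```python
import numpy as np

def select_bands(cube, k):
    """Greedy band selection sketch: seed with the highest-variance band, then
    repeatedly add the band whose worst-case correlation with the bands already
    chosen is smallest. `cube` is (bands, H, W)."""
    flat = cube.reshape(cube.shape[0], -1).astype(float)
    corr = np.abs(np.corrcoef(flat))               # inter-band correlation matrix
    chosen = [int(flat.var(axis=1).argmax())]      # strongest "feature" proxy
    while len(chosen) < k:
        rest = [b for b in range(len(flat)) if b not in chosen]
        nxt = min(rest, key=lambda b: corr[b, chosen].max())
        chosen.append(nxt)
    return sorted(chosen)
```

With `k=12` on a full multispectral cube this yields an informative, decorrelated input subset in the spirit of the abstract's description.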
Funding: This work was supported by the National Natural Science Foundation of China (Grant No. U20A20197).
Abstract: We propose a novel image segmentation algorithm to tackle the challenge of limited recognition and segmentation performance in identifying welding seam images during robotic intelligent operations. Initially, to enhance the capability of deep neural networks in extracting geometric attributes from depth images, we developed a novel deep geometric convolution operator (DGConv). DGConv is utilized to construct a deep local geometric feature extraction module, facilitating a more comprehensive exploration of the intrinsic geometric information within depth images. Secondly, we integrate the newly proposed deep geometric feature module with the Fully Convolutional Network (FCN8) to establish a high-performance deep neural network algorithm tailored for depth image segmentation. Concurrently, we enhance the FCN8 detection head by separating the segmentation and classification processes. This enhancement significantly boosts the network's overall detection capability. Thirdly, for a comprehensive assessment of our proposed algorithm and its applicability in real-world industrial settings, we curated a line-scan image dataset featuring weld seams. This dataset, named the Standardized Linear Depth Profile (SLDP) dataset, was collected from actual industrial sites where autonomous robots are in operation. Ultimately, we conducted experiments utilizing the SLDP dataset, achieving an average accuracy of 92.7%. Our proposed approach exhibited a remarkable performance improvement over the prior method on the identical dataset. Moreover, we have successfully deployed the proposed algorithm in genuine industrial environments, fulfilling the prerequisites of unmanned robot operations.
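A concrete example of the kind of geometric attribute a depth-aware convolution can exploit is the per-pixel surface normal. The finite-difference sketch below is a generic illustration of extracting geometry from a depth image, not the paper's DGConv operator.

```python
import numpy as np

def depth_normals(depth):
    """Per-pixel surface normals from a depth map via finite differences:
    one simple geometric attribute a geometry-aware convolution could consume
    (generic sketch, not DGConv)."""
    dz_dy, dz_dx = np.gradient(depth.astype(float))
    # Normal of the surface z = depth(x, y) is proportional to (-dz/dx, -dz/dy, 1).
    n = np.stack([-dz_dx, -dz_dy, np.ones_like(depth, dtype=float)], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    return n  # (H, W, 3) unit vectors
```

Flat regions map to normals pointing straight at the camera, while a weld-seam groove produces characteristic tilted normals that a downstream network can separate from the surrounding plate.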
Funding: supported by the Beijing Municipal Science and Technology Commission (BMSTC, No. D171100002617001).
Abstract: Objective This study aimed to compare the performance of standard-definition white-light endoscopy (SD-WL), high-definition white-light endoscopy (HD-WL), and high-definition narrow-band imaging (HD-NBI) in detecting colorectal lesions in the Chinese population. Methods This was a multicenter, single-blind, randomized, controlled trial with a non-inferiority design. Patients undergoing endoscopy for physical examination, screening, and surveillance were enrolled from July 2017 to December 2020. The primary outcome measure was the adenoma detection rate (ADR), defined as the proportion of patients with at least one adenoma detected. The factors associated with detecting adenomas were assessed using univariate and multivariate logistic regression. Results Out of 653 eligible patients enrolled, data from 596 patients were analyzed. The ADRs were 34.5% in the SD-WL group, 33.5% in the HD-WL group, and 37.5% in the HD-NBI group (P=0.72). The advanced neoplasm detection rates (ANDRs) in the three arms were 17.1%, 15.5%, and 10.4% (P=0.17). No significant differences were found between the SD group and the HD group regarding ADR or ANDR (ADR: 34.5% vs. 35.6%, P=0.79; ANDR: 17.1% vs. 13.0%, P=0.16). Similar results were observed between the HD-WL group and the HD-NBI group (ADR: 33.5% vs. 37.7%, P=0.45; ANDR: 15.5% vs. 10.4%, P=0.18). In the univariate and multivariate logistic regression analyses, neither HD-WL nor HD-NBI led to a significant difference in overall adenoma detection compared to SD-WL (HD-WL: OR 0.91, P=0.69; HD-NBI: OR 1.15, P=0.80). Conclusion HD-NBI and HD-WL are comparable to SD-WL for overall adenoma detection among Chinese outpatients. It can be concluded that HD-NBI or HD-WL is not superior to SD-WL, but more effective instruction may be needed to guide the selection of different endoscopic methods in the future. Our study's conclusions may aid in the efficient allocation and utilization of limited colonoscopy resources, especially advanced imaging technologies.
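The two headline statistics in this abstract, the adenoma detection rate and the odds ratio from a univariate logistic regression on arm membership, reduce to simple proportions over a 2x2 table. The sketch below uses illustrative counts only, not the trial's patient-level data.

```python
def adr(n_with_adenoma, n_total):
    """Adenoma detection rate: share of patients with at least one adenoma."""
    return n_with_adenoma / n_total

def odds_ratio(a_events, a_total, b_events, b_total):
    """Unadjusted odds ratio of arm A vs arm B; equals the exponentiated
    coefficient of a univariate logistic regression on arm membership."""
    a_odds = a_events / (a_total - a_events)
    b_odds = b_events / (b_total - b_events)
    return a_odds / b_odds
```

An OR near 1 with a large P value, as reported above for both HD-WL and HD-NBI versus SD-WL, indicates no detectable difference in adenoma detection between arms.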
Funding: the Deanship of Research and Graduate Studies at King Khalid University, for funding this work through the Small Group Research Project under Grant Number RGP1/261/45.
Abstract: Breast cancer is a significant threat not only to women but to the entire global population. With recent advancements in digital pathology, eosin and hematoxylin images provide enhanced clarity in examining microscopic features of breast tissues based on their staining properties. Early cancer detection quickens the therapeutic process, thereby increasing survival rates. The analysis made by medical professionals, especially pathologists, is time-consuming and challenging, so there arises a need for automated breast cancer detection systems. Emerging artificial intelligence platforms, especially deep learning models, play an important role in image diagnosis and prediction. Initially, the histopathology biopsy images are taken from standard data sources. Further, the gathered images are given as input to the Multi-Scale Dilated Vision Transformer, where the essential features are acquired. Subsequently, the features are subjected to a Bidirectional Long Short-Term Memory (Bi-LSTM) network for classifying the breast cancer disorder. The efficacy of the model is evaluated using divergent metrics. When compared with other methods, the proposed work offers impressive detection results.
Funding: financially supported by the National Council for Scientific and Technological Development (CNPq, Brazil), the Swedish-Brazilian Research and Innovation Centre (CISB), and Saab AB under Grant No. CNPq: 200053/2022-1; and by the National Council for Scientific and Technological Development (CNPq, Brazil) under Grants No. CNPq: 312924/2017-8 and No. CNPq: 314660/2020-8.
Abstract: Unmanned aerial vehicles (UAVs) have been widely used in military, medical, wireless communication, and aerial surveillance applications, among others. One key topic involving UAVs is pose estimation in autonomous navigation. A standard procedure for this process is to combine inertial navigation system sensor information with the global navigation satellite system (GNSS) signal. However, some factors can interfere with the GNSS signal, such as ionospheric scintillation, jamming, or spoofing. One alternative method that avoids the GNSS signal is an image processing approach that matches UAV images with georeferenced images, but a high effort is required for image edge extraction. Here a support vector regression (SVR) model is proposed to reduce this computational load and processing time. The dynamic partial reconfiguration (DPR) of part of the SVR datapath is implemented to accelerate the process, reduce the area, and analyze its granularity by increasing the grain size of the reconfigurable region. Results show that the implementation in hardware is 68 times faster than that in software. This architecture with DPR also achieves a low power consumption of 4 mW, a 57% reduction compared with the implementation without DPR, and the lowest power consumption among current machine learning hardware implementations. Besides, the circuitry area is 41 times smaller. SVR with a Gaussian kernel shows a success rate of 99.18% and a minimum square error of 0.0146 when tested with the planned trajectory. This system is useful for adaptive applications where the user/designer can modify/reconfigure the hardware layout during its application, thus contributing to lower power consumption, smaller hardware area, and shorter execution time.
Abstract: In computer vision, object recognition and image categorization have proven to be difficult challenges. They have, nevertheless, generated responses to a wide range of difficult issues from a variety of fields. Convolutional Neural Networks (CNNs) have recently been identified as the most widely proposed deep learning (DL) algorithms in the literature. CNNs have unquestionably delivered cutting-edge achievements, particularly in the areas of image classification, speech recognition, and video processing. However, it has been noticed that CNN training demands a large amount of data, which is in low supply, especially in the medical industry, and as a result the training process takes longer. In this paper, we describe an attention-aware CNN architecture for classifying chest X-ray images to diagnose pneumonia in order to address the aforementioned difficulties. Attention modules provide attention-aware properties to the attention network, and the attention-aware features of the various modules change as the layers become deeper. Using a bottom-up top-down feedforward structure, the feedforward and feedback attention processes are integrated into a single feedforward process inside each attention module. In the present work, a deep neural network (DNN) is combined with an attention mechanism to test the prediction of pneumonia disease using chest X-ray pictures. To produce attention-aware features, the suggested network was built by merging channel and spatial attention modules in the DNN architecture. With this network, we worked on a publicly available Kaggle chest X-ray dataset. Extensive testing was carried out to validate the suggested model. In the experimental results, we attained an accuracy of 95.47% and an F-score of 0.92, indicating that the suggested model outperformed the baseline models.
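The channel and spatial attention modules described above can be sketched in a framework-free way. The tiny weight matrix standing in for the shared MLP and the fixed average fusion in the spatial branch are simplifications for illustration, not the paper's architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(fmap, w):
    """Channel attention sketch: squeeze each channel by global average
    pooling, pass through a small learned map `w` (stand-in for the shared
    MLP), and rescale the channels. `fmap` is (C, H, W); `w` is (C, C)."""
    squeeze = fmap.mean(axis=(1, 2))            # (C,) per-channel descriptor
    weights = sigmoid(w @ squeeze)              # (C,) gating in (0, 1)
    return fmap * weights[:, None, None]

def spatial_attention(fmap):
    """Spatial attention sketch: build a per-pixel mask from channel-wise
    average and max pooling, fused here by a fixed mean instead of a
    learned convolution."""
    avg = fmap.mean(axis=0)
    mx = fmap.max(axis=0)
    mask = sigmoid((avg + mx) / 2.0)
    return fmap * mask[None, :, :]
```

Stacking the two, channel gating followed by spatial gating, yields the attention-aware feature maps the abstract refers to; in the paper both gates are learned jointly with the backbone.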
Abstract: X-ray security equipment is currently one of the most commonly used tools for detecting dangerous goods. With security workloads increasing, using object detection technology to assist security personnel has become an inevitable trend. With the development of deep learning, object detection technology has matured, and object detection frameworks based on convolutional neural networks have been widely used in industrial, medical, and military fields. To improve the efficiency of security staff and reduce the risk of missed detection of dangerous goods, this paper uses a method of inserting dangerous goods into images of empty packages, based on the data collected by X-ray security equipment, to balance the dangerous goods classes and expand the dataset. The high- and low-energy images are combined using a high-low energy feature fusion method. Finally, a dangerous goods detector based on the YOLOv7 model is trained. With these methods, the detection accuracy is improved by 6% compared with training directly on the original dataset, and the speed is 93 FPS, which meets the requirements of an online security system, greatly improves the work efficiency of security personnel, and eliminates the security risks caused by missed detection.
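The two data-side steps, inserting an item into an empty-package image and fusing the high- and low-energy channels, can be sketched as follows. Both blending rules below are assumptions: the paper does not specify its fusion or insertion formulas, so a convex combination and an element-wise minimum (a common Beer-Lambert-style simplification, since X-ray attenuation darkens overlapping material) are used as placeholders.

```python
import numpy as np

def fuse_high_low(high, low, alpha=0.5):
    """Hypothetical weighted fusion of high- and low-energy X-ray images;
    a simple convex combination stands in for the paper's fusion method."""
    fused = alpha * high.astype(np.float64) + (1.0 - alpha) * low.astype(np.float64)
    return np.clip(fused, 0, 255).astype(np.uint8)

def insert_item(package, item, top, left):
    """Paste a dangerous-goods patch into an empty-package image.
    Element-wise min approximates multiplicative X-ray attenuation:
    the inserted object can only darken the background (assumption)."""
    out = package.copy()
    h, w = item.shape
    region = out[top:top + h, left:left + w]
    out[top:top + h, left:left + w] = np.minimum(region, item)
    return out

package = np.full((64, 64), 255, dtype=np.uint8)   # empty (bright) package
item = np.full((16, 16), 80, dtype=np.uint8)       # dense (dark) object patch
augmented = insert_item(package, item, 10, 20)
fused = fuse_high_low(augmented, augmented)
print(augmented[10, 20], fused.shape)
```

The augmented and fused images would then be fed to the YOLOv7 training pipeline like any other labeled sample.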
Funding: Supported by the Fundamental Research Funds for the Central Universities (No. 500421126).
Abstract: Detecting double Joint Photographic Experts Group (JPEG) compression of color images is vital in the field of image forensics. Previous research has taken various approaches to detecting double JPEG compression with different quantization matrices; however, detecting double-compressed JPEG color images with the same quantization matrix remains a challenging task. This paper proposes an effective detection approach that extracts features by combining traditional analysis with Convolutional Neural Networks (CNNs). On the one hand, the number of nonzero pixels and the sum of pixel values of the color space conversion error provide 12-dimensional features, determined through experiments. On the other hand, the rounding error, the truncation error, and the quantization coefficient matrix are used to generate a total of 128-dimensional features via a specially designed CNN. In this CNN, convolutional layers with a fixed 1×1 kernel and Dropout layers are adopted to prevent overfitting, and an average pooling layer is used to extract local characteristics. A Support Vector Machine (SVM) classifier is then applied to distinguish whether a given color image has been compressed once or twice. The approach also accommodates customized requirements. Experimental results show that the proposed approach is more effective than several existing ones when the compression quality factors are low.
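The hand-crafted branch can be sketched by round-tripping an RGB image through the JPEG YCbCr color transform and summarizing the reconstruction error. This simplified sketch produces only 6 values (per-channel nonzero count and absolute sum); the paper's full 12-dimensional feature design is not reproduced here.

```python
import numpy as np

# Full-range JPEG RGB <-> YCbCr matrices (ITU-T T.871).
_FWD = np.array([[ 0.299,     0.587,     0.114   ],
                 [-0.168736, -0.331264,  0.5     ],
                 [ 0.5,      -0.418688, -0.081312]])
_INV = np.array([[1.0,  0.0,       1.402   ],
                 [1.0, -0.344136, -0.714136],
                 [1.0,  1.772,     0.0     ]])
_OFFSET = np.array([0.0, 128.0, 128.0])

def conversion_error_features(img):
    """Per-channel nonzero count and absolute sum of the RGB->YCbCr->RGB
    round-trip error; a simplified stand-in for the abstract's 12-D features."""
    x = img.astype(np.float64)
    ycc = np.round(x @ _FWD.T + _OFFSET)     # rounding happens in YCbCr space
    back = (ycc - _OFFSET) @ _INV.T          # convert back to RGB
    err = np.abs(np.round(back) - x)         # conversion error per pixel
    nonzero = (err > 0).sum(axis=(0, 1))     # error pixel count per channel
    sums = err.sum(axis=(0, 1))              # error magnitude per channel
    return np.concatenate([nonzero, sums])

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
feats = conversion_error_features(img)
print(feats.shape)  # (6,)
```

In the full pipeline these statistics would be concatenated with the 128-dimensional CNN features before the SVM classifier.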
Abstract: COVID-19 is a respiratory illness caused by the SARS-CoV-2 virus, first identified in 2019. The primary mode of transmission is through respiratory droplets when an infected person coughs or sneezes. Symptoms range from mild to severe, and timely diagnosis is crucial for effective treatment. Chest X-ray imaging is one diagnostic tool used for COVID-19, and a Convolutional Neural Network (CNN) is a popular technique for image classification. In this study, we propose a CNN-based approach for detecting COVID-19 in chest X-ray images. The model was trained on a dataset containing both COVID-19-positive and COVID-19-negative cases and evaluated on a separate test dataset to measure its accuracy. Our results indicate that the CNN approach can accurately detect COVID-19 in chest X-ray images, with an overall accuracy of 97%. This approach could potentially serve as an early diagnostic tool to reduce the spread of the virus.
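The building blocks of such a CNN, convolution, ReLU activation, and max-pooling, can be shown in a minimal NumPy sketch. This is a generic illustration of one convolutional stage, not the study's actual architecture; the image and kernel are random toy data.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution (cross-correlation) of a grayscale image."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Elementwise rectified linear activation."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max-pooling that halves each spatial dimension."""
    H, W = x.shape
    H2, W2 = H // size, W // size
    return x[:H2 * size, :W2 * size].reshape(H2, size, W2, size).max(axis=(1, 3))

# One convolution -> ReLU -> pooling stage on a toy 28x28 "X-ray".
rng = np.random.default_rng(42)
image = rng.random((28, 28))
kernel = rng.standard_normal((3, 3))
feature_map = max_pool(relu(conv2d(image, kernel)))
print(feature_map.shape)  # (13, 13)
```

Stacking several such stages and ending with fully connected layers and a two-way softmax yields the kind of binary COVID-19/normal classifier the study evaluates.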