Lower back pain is one of the most common medical problems in the world, experienced by a large proportion of people everywhere. Due to its ability to produce a detailed view of the soft tissues, including the spinal cord, nerves, intervertebral discs, and vertebrae, Magnetic Resonance Imaging is considered the most effective method for imaging the spine. The semantic segmentation of vertebrae plays a major role in the diagnostic process of lumbar diseases. It is difficult to semantically partition the vertebrae in Magnetic Resonance Images from the surrounding variety of tissues, including muscles, ligaments, and intervertebral discs. U-Net is a powerful deep-learning architecture for handling the challenges of medical image analysis tasks and achieves high segmentation accuracy. This work proposes a modified U-Net architecture, namely MU-Net, consisting of a Meijering convolutional layer that incorporates the Meijering filter, to perform the semantic segmentation of lumbar vertebrae L1 to L5 and sacral vertebra S1. Pseudo-colour mask images were generated and used as ground truth for training the model. The work was carried out on 1312 images expanded from T1-weighted mid-sagittal MRI images of 515 patients in the Lumbar Spine MRI Dataset, publicly available from Mendeley Data. The proposed MU-Net model for the semantic segmentation of the lumbar vertebrae gives better performance, with 98.79% pixel accuracy (PA), 98.66% dice similarity coefficient (DSC), 97.36% Jaccard coefficient, and 92.55% mean Intersection over Union (mean IoU) on this dataset.
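The abstract does not detail how the Meijering filter is wired into the convolutional layer, so the sketch below only illustrates the filter response itself, computed with scikit-image and stacked as an extra input channel that a U-Net variant could consume; the file name and the two-channel arrangement are assumptions, not the MU-Net implementation.

```python
# Minimal sketch: Meijering ridge response as an extra channel for a U-Net-style model.
# This illustrates the filter only, not the paper's trainable Meijering layer.
import numpy as np
from skimage import io, img_as_float
from skimage.filters import meijering

def meijering_channel(path, sigmas=(1, 2, 3, 4)):
    """Return a (2, H, W) array: the MR slice and its Meijering ridge response."""
    image = img_as_float(io.imread(path, as_gray=True))
    ridge = meijering(image, sigmas=sigmas, black_ridges=False)  # enhance bright ridges
    return np.stack([image, ridge], axis=0)

# Hypothetical usage:
# channels = meijering_channel("lumbar_midsagittal.png")
# print(channels.shape)  # (2, H, W)
```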
A measurement system for the scattering characteristics of warhead fragments based on high-speed imaging systems offers advantages such as simple deployment, flexible maneuverability, and high spatiotemporal resolution, enabling the acquisition of full-process data of the fragment scattering process. However, mismatches between camera frame rates and target velocities can lead to long motion-blur tails on high-speed fragment targets, resulting in low signal-to-noise ratios and rendering conventional detection algorithms ineffective in dynamic, strong-interference testing environments. In this study, we propose a detection framework centered on the separation and suppression of dynamic strong-interference disturbance signals. We introduce a Gaussian mixture model constrained under a joint spatial-temporal-transform-domain Dirichlet process, combined with total variation regularization, to achieve disturbance signal suppression. Experimental results demonstrate that the proposed disturbance suppression method can be integrated with certain conventional moving-target detection tasks, enabling adaptation to real-world data to a certain extent. Moreover, we provide a specific implementation of this process, which achieves a detection rate close to 100% with an approximately 0% false alarm rate on multiple sets of real target field test data. This research effectively advances the development of the field of damage parameter testing.
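As a hedged illustration of the total-variation component only (the Dirichlet-process-constrained mixture model is not reproduced here), the sketch below splits a frame into a TV-smooth background estimate and a residual using scikit-image's Chambolle denoiser; the weight value and the residual interpretation are assumptions for demonstration.

```python
# Sketch of the total-variation regularization step only: split a noisy frame into a
# TV-smooth background estimate and a residual containing fast-varying content.
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def tv_split(frame, weight=0.1):
    """Return (smooth_background, residual) for a single grayscale frame in [0, 1]."""
    background = denoise_tv_chambolle(frame, weight=weight)
    residual = frame - background
    return background, residual

# Hypothetical usage on a synthetic frame:
rng = np.random.default_rng(0)
frame = np.clip(0.2 + 0.05 * rng.standard_normal((128, 128)), 0, 1)
frame[60:64, 30:90] += 0.5  # a bright streak standing in for a blurred fragment
background, residual = tv_split(frame)
print(residual.max())
```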
Astronomical imaging technologies are basic tools for the exploration of the universe, providing basic data for the research of astronomy and space physics. The Soft X-ray Imager (SXI) carried by the Solar wind Magnetosphere Ionosphere Link Explorer (SMILE) aims to capture two-dimensional (2-D) images of the Earth's magnetosheath by using soft X-ray imaging. However, the observed 2-D images are affected by many noise factors, destroying the contained information, which is not conducive to the subsequent reconstruction of the three-dimensional (3-D) structure of the magnetopause. The analysis of SXI-simulated observation images shows that such damage cannot be evaluated with traditional restoration models. This makes it difficult to establish the mapping relationship between SXI-simulated observation images and target images by using mathematical models. We propose an image restoration algorithm for SXI-simulated observation images that can recover large-scale structure information on the magnetosphere. The idea is to train a patch estimator by selecting noise–clean patch pairs with the same distribution through the Classification–Expectation Maximization algorithm to achieve the restoration estimation of the SXI-simulated observation image, whose mapping relationship with the target image is established by the patch estimator. The Classification–Expectation Maximization algorithm is used to select multiple patch clusters with the same distribution and then train different patch estimators so as to improve the accuracy of the estimator. Experimental results showed that our image restoration algorithm is superior to other classical image restoration algorithms in the SXI-simulated observation image restoration task, according to the peak signal-to-noise ratio and structural similarity. The restoration results of SXI-simulated observation images are used in the tangent fitting approach and the computed tomography approach toward magnetospheric reconstruction techniques, significantly improving the reconstruction results. Hence, the proposed technology may be feasible for processing SXI-simulated observation images.
Global images of auroras obtained by cameras on spacecraft are a key tool for studying the near-Earth environment. However, the cameras are sensitive not only to auroral emissions produced by precipitating particles, but also to dayglow emissions produced by photoelectrons induced by sunlight. Nightglow emissions and scattered sunlight can contribute to the background signal. To fully utilize such images in space science, background contamination must be removed to isolate the auroral signal. Here we outline a data-driven approach to modeling the background intensity in multiple images by formulating linear inverse problems based on B-splines and spherical harmonics. The approach is robust, flexible, and iteratively deselects outliers, such as auroral emissions. The final model is smooth across the terminator and accounts for slow temporal variations and large-scale asymmetries in the dayglow. We demonstrate the model by using the three far-ultraviolet cameras on the Imager for Magnetopause-to-Aurora Global Exploration (IMAGE) mission. The method can be applied to historical missions and is relevant for upcoming missions, such as the Solar wind Magnetosphere Ionosphere Link Explorer (SMILE) mission.
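The basis construction on B-splines and spherical harmonics is not reproduced here; the sketch below only illustrates the iterative outlier-deselection idea for a generic linear inverse problem, where points whose residuals are strongly positive (bright auroral emission, under this assumption) are dropped from the fit on each pass. The design matrix, threshold factor, and iteration count are placeholders.

```python
# Sketch of iterative outlier deselection for a linear background model y ≈ G m.
# G would be built from B-spline / spherical-harmonic basis functions; here it is generic.
import numpy as np

def robust_background_fit(G, y, n_iter=5, kappa=2.0):
    """Least-squares fit that repeatedly drops strongly positive residuals (e.g. aurora)."""
    keep = np.ones(len(y), dtype=bool)
    m = np.zeros(G.shape[1])
    for _ in range(n_iter):
        m, *_ = np.linalg.lstsq(G[keep], y[keep], rcond=None)
        residual = y - G @ m
        sigma = residual[keep].std()
        keep = residual < kappa * sigma  # deselect bright outliers above the background
    return m, keep

# Hypothetical usage with a toy quadratic background plus a bright "auroral" bump:
x = np.linspace(-1, 1, 200)
G = np.column_stack([np.ones_like(x), x, x**2])
y = 1.0 + 0.5 * x - 0.3 * x**2
y[90:110] += 2.0
m, keep = robust_background_fit(G, y)
print(m, keep.sum())
```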
Limited by the dynamic range of the detector, saturation artifacts usually occur in optical coherence tomography (OCT) imaging of highly scattering media. Available methods struggle to remove saturation artifacts and completely restore texture in OCT images. In this paper, we propose a deep-learning-based inpainting method for saturation artifacts. The generation mechanism of saturation artifacts was analyzed, and experimental and simulated datasets were built based on this mechanism. Enhanced super-resolution generative adversarial networks were trained on clear–saturated phantom image pairs. The well-reconstructed results on experimental zebrafish and thyroid OCT images demonstrate the method's feasibility, strong generalization, and robustness.
In the context of high compression rates applied to Joint Photographic Experts Group (JPEG) images through lossy compression techniques, image-blocking artifacts may manifest. This necessitates the restoration of the image to its original quality. The challenge lies in regenerating significantly compressed images into a state in which these become identifiable. Therefore, this study focuses on the restoration of JPEG images subjected to substantial degradation caused by maximum lossy compression using Generative Adversarial Networks (GAN). The generator in this network is based on the U-Net architecture. It features a new hourglass structure that preserves the characteristics of the deep layers. In addition, the network incorporates two loss functions to generate natural and high-quality images: Low Frequency (LF) loss and High Frequency (HF) loss. HF loss uses a pretrained VGG-16 network and is configured using a specific layer that best represents features. This can enhance the performance in the high-frequency region. In contrast, LF loss is used to handle the low-frequency region. The two loss functions facilitate the generation of images by the generator, which can mislead the discriminator while accurately generating high- and low-frequency regions. Consequently, by removing the blocking effects from maximum lossy compressed images, images in which identities can be recognized are generated. This study represents a significant improvement over previous research in terms of the image resolution performance.
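The abstract does not state which VGG-16 layer the HF loss taps, so the sketch below is a generic PyTorch pairing of an L1 low-frequency loss with a VGG-16 feature (perceptual) loss; the chosen layer index, the loss weight, and the ImageNet weights string are assumptions rather than the paper's configuration.

```python
# Sketch of an LF (pixel-space L1) loss combined with an HF (VGG-16 feature) loss.
# The layer cut-off and weighting are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.models import vgg16

class LFHFLoss(nn.Module):
    def __init__(self, feature_layer=16, hf_weight=0.1):
        super().__init__()
        features = vgg16(weights="IMAGENET1K_V1").features[:feature_layer].eval()
        for p in features.parameters():
            p.requires_grad = False
        self.features = features
        self.l1 = nn.L1Loss()
        self.hf_weight = hf_weight

    def forward(self, restored, target):
        lf = self.l1(restored, target)                                 # low-frequency term
        hf = self.l1(self.features(restored), self.features(target))   # feature (HF) term
        return lf + self.hf_weight * hf

# Hypothetical usage with random 3-channel images:
loss_fn = LFHFLoss()
restored = torch.rand(1, 3, 224, 224)
target = torch.rand(1, 3, 224, 224)
print(loss_fn(restored, target).item())
```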
Introduction: Rapid recent developments in artificial intelligence (AI) have multiplied concerns regarding the future of robotic autonomy in surgery. However, the literature on the topic is still scarce. Aim: To test a novel, commercially available AI tool for image analysis on a series of laparoscopic scenes. Methods: The research tools included OpenAI ChatGPT 4.0 with its corresponding image-recognition plugin, which was fed a list of 100 selected laparoscopic snapshots from common surgical procedures. In order to score the reliability of the responses received from the image-recognition bot, two corresponding scales were developed, ranging from 0 - 5. The set of images was divided into two groups: unlabeled (Group A) and labeled (Group B), and according to the type of surgical procedure or image resolution. Results: The AI was able to correctly recognize the context of surgery-related images in 97% of its reports. For the labeled surgical pictures, the image-processing bot scored 3.95/5 (79%), whilst for the unlabeled, it scored 2.905/5 (58.1%). Phases of the procedure were commented on in detail after all successful interpretations. With rates of 4 - 5/5, the chatbot was able to discuss in detail the indications, contraindications, stages, instrumentation, complications and outcome rates of the operation discussed. Conclusion: Interaction between surgeon and chatbot appears to be an interesting front end for further research by clinicians, in parallel with the evolution of its complex underlying infrastructure. In this early phase of using artificial intelligence for image recognition in surgery, no safe conclusions can be drawn from small cohorts with commercially available software. Further development of medically-oriented AI software and clinical-world awareness are expected to bring fruitful information on the topic in the years to come.
Transformer-based models have facilitated significant advances in object detection. However, their extensive computational consumption and suboptimal detection of dense small objects curtail their applicability in unmanned aerial vehicle (UAV) imagery. Addressing these limitations, we propose a hybrid transformer-based detector, H-DETR, and enhance it for dense small objects, leading to an accurate and efficient model. Firstly, we introduce a hybrid transformer encoder, which integrates a convolutional neural network-based cross-scale fusion module with the original encoder to handle multi-scale feature sequences more efficiently. Furthermore, we propose two novel strategies to enhance detection performance without incurring additional inference computation. The query filter is designed to cope with the dense clustering inherent in drone-captured images by counteracting similar queries with a training-aware non-maximum suppression. Adversarial denoising learning is a novel enhancement method inspired by adversarial learning, which improves the detection of numerous small targets by counteracting the effects of artificial spatial and semantic noise. Extensive experiments on the VisDrone and UAVDT datasets substantiate the effectiveness of our approach, achieving a significant improvement in accuracy with a reduction in computational complexity. Our method achieves 31.9% and 21.1% AP on the VisDrone and UAVDT datasets, respectively, and has a faster inference speed, making it a competitive model in UAV image object detection.
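The training-aware part of the query filter is not detailed in the abstract; the sketch below only shows the basic suppression step it builds on, i.e. running non-maximum suppression over decoded query boxes and scores with torchvision so that near-duplicate queries for the same densely packed object are discarded. The IoU threshold is an assumed placeholder.

```python
# Sketch of the plain NMS step underlying a query filter: drop near-duplicate
# query predictions (boxes in xyxy format) that overlap above an IoU threshold.
import torch
from torchvision.ops import nms

def filter_queries(boxes, scores, iou_threshold=0.7):
    """Return indices of queries kept after suppressing overlapping duplicates."""
    return nms(boxes, scores, iou_threshold)

# Hypothetical usage: three queries, two of them almost identical.
boxes = torch.tensor([[10., 10., 50., 50.],
                      [12., 11., 52., 49.],
                      [100., 100., 140., 150.]])
scores = torch.tensor([0.9, 0.8, 0.7])
print(filter_queries(boxes, scores))  # keeps the first and third query
```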
Diagnosing various diseases such as glaucoma, age-related macular degeneration, cardiovascular conditions, and diabetic retinopathy involves segmenting retinal blood vessels. The task is particularly challenging when dealing with color fundus images due to issues like non-uniform illumination, low contrast, and variations in vessel appearance, especially in the presence of different pathologies. Furthermore, the speed of the retinal vessel segmentation system is of utmost importance. With the surge of now-available big data, the speed of the algorithm becomes increasingly important, carrying almost as much weight as its accuracy. To address these challenges, we present a novel approach for retinal vessel segmentation, leveraging efficient and robust techniques based on multiscale line detection and mathematical morphology. Our algorithm's performance is evaluated on two publicly available datasets, namely the Digital Retinal Images for Vessel Extraction (DRIVE) dataset and the Structured Analysis of the Retina (STARE) dataset. The experimental results demonstrate the effectiveness of our method, with mean accuracy values of 0.9467 for DRIVE and 0.9535 for STARE, as well as sensitivity values of 0.6952 for DRIVE and 0.6809 for STARE. Notably, our algorithm exhibits competitive performance with state-of-the-art methods. Importantly, it operates at an average speed of 3.73 s per image for DRIVE and 3.75 s for STARE. It is worth noting that these results were achieved using Matlab scripts containing multiple loops, which suggests that the processing time can be further reduced by replacing loops with vectorization; thus the proposed algorithm can be deployed in real-time applications. In summary, our proposed system strikes a fine balance between swift computation and accuracy that is on par with the best available methods in the field.
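As a hedged sketch of the multiscale line-detection idea (the morphological post-processing and the exact line lengths used by the authors are not given in the abstract), the code below convolves the inverted green channel with oriented line kernels of several lengths and keeps the strongest response relative to the local mean; kernel lengths, angles, and window size are illustrative assumptions.

```python
# Sketch of a basic multiscale line detector for vessel enhancement:
# respond strongly where some oriented line kernel fits better than the local mean.
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def line_kernel(length, angle_deg, size=15):
    """Binary kernel with a centred line of the given length and orientation."""
    k = np.zeros((size, size))
    c = size // 2
    t = np.linspace(-length / 2, length / 2, length)
    rows = np.clip(np.round(c - t * np.sin(np.deg2rad(angle_deg))).astype(int), 0, size - 1)
    cols = np.clip(np.round(c + t * np.cos(np.deg2rad(angle_deg))).astype(int), 0, size - 1)
    k[rows, cols] = 1.0
    return k / k.sum()

def multiscale_line_response(green_inverted, lengths=(5, 9, 13), n_angles=12):
    local_mean = uniform_filter(green_inverted, size=15)
    response = np.zeros_like(green_inverted)
    for length in lengths:
        best = np.full_like(green_inverted, -np.inf)
        for a in np.arange(0, 180, 180 / n_angles):
            line_mean = convolve(green_inverted, line_kernel(length, a))
            best = np.maximum(best, line_mean - local_mean)
        response += best
    return response / len(lengths)

# Hypothetical usage: vessels = multiscale_line_response(1.0 - green_channel) > threshold
```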
Artificial Intelligence (AI) is being increasingly used for diagnosing Vision-Threatening Diabetic Retinopathy (VTDR), which is a leading cause of visual impairment and blindness worldwide. However, previous automated VTDR detection methods have mainly relied on manual feature extraction and classification, leading to errors. This paper proposes a novel VTDR detection and classification model that combines different models through majority voting. Our proposed methodology involves preprocessing, data augmentation, feature extraction, and classification stages. We use a hybrid convolutional neural network-singular value decomposition (CNN-SVD) model for feature extraction and selection and an improved SVM-RBF with a Decision Tree (DT) and K-Nearest Neighbor (KNN) for classification. We tested our model on the IDRiD dataset and achieved an accuracy of 98.06%, a sensitivity of 83.67%, and a specificity of 100% for the DR detection and evaluation tests, respectively. Our proposed approach outperforms baseline techniques and provides a more robust and accurate method for VTDR detection.
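A minimal sketch of the majority-voting stage only, assuming feature vectors have already been produced by the CNN-SVD step (random placeholders are used here); the classifier hyperparameters are illustrative, not those tuned in the paper.

```python
# Sketch: hard majority voting over an RBF-kernel SVM, a decision tree, and a KNN,
# applied to pre-extracted feature vectors (stand-ins for the CNN-SVD features).
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 32))             # placeholder feature vectors
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # placeholder labels

ensemble = VotingClassifier(
    estimators=[
        ("svm_rbf", SVC(kernel="rbf", C=1.0)),
        ("tree", DecisionTreeClassifier(max_depth=5)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
    ],
    voting="hard",  # each model casts one vote per sample
)
ensemble.fit(X[:150], y[:150])
print("held-out accuracy:", ensemble.score(X[150:], y[150:]))
```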
Rapid and accurate acquisition of soil organic matter (SOM) information in cultivated land is important for sustainable agricultural development and carbon balance management. This study proposed a novel approach to predict SOM with high accuracy using multiyear synthetic remote sensing variables on a monthly scale. We obtained 12 monthly synthetic Sentinel-2 images covering the study area from 2016 to 2021 through the Google Earth Engine (GEE) platform, and reflectance bands and vegetation indices were extracted from these composite images. Then the random forest (RF), support vector machine (SVM) and gradient boosting regression tree (GBRT) models were tested to investigate the difference in SOM prediction accuracy under different combinations of monthly synthetic variables. Results showed that, firstly, all monthly synthetic spectral bands of Sentinel-2 showed a significant correlation with SOM (P<0.05) for the months of January, March, April, October, and November. Secondly, in terms of single-month composite variables, the prediction accuracy was relatively poor, with the highest R² value of 0.36 being observed in January. When monthly synthetic environmental variables were grouped in accordance with the four quarters of the year, the first quarter and the fourth quarter showed good performance, and any combination of three quarters was similar in estimation accuracy. The overall best performance was observed when all monthly synthetic variables were incorporated into the models. Thirdly, among the three models compared, the RF model was consistently more accurate than the SVM and GBRT models, achieving an R² value of 0.56. Except for band 12 in December, the importance of the remaining bands did not exhibit significant differences. This research offers a new attempt to map SOM with high accuracy and fine spatial resolution based on monthly synthetic Sentinel-2 images.
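A minimal sketch of the model-comparison step, assuming a table of monthly composite band/index values per soil sample is already in hand (random placeholders are used here); the feature count, train/test split, and hyperparameters are assumptions, not the study's configuration.

```python
# Sketch: compare RF, SVM and GBRT regressors on monthly-composite predictors of SOM,
# scoring each with R^2 on a held-out split. Inputs are random stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)
X = rng.random((300, 12 * 6))                                 # 12 months x 6 bands/indices
y = X[:, :12].mean(axis=1) + 0.1 * rng.standard_normal(300)   # placeholder SOM values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
models = {
    "RF": RandomForestRegressor(n_estimators=300, random_state=0),
    "SVM": SVR(kernel="rbf", C=10.0),
    "GBRT": GradientBoostingRegressor(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "R2 =", round(r2_score(y_te, model.predict(X_te)), 3))
```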
The pancreas is neither part of the five Zang organs (五脏) nor the six Fu organs (六腑). Thus, it has received little attention in Chinese medical literature. In the late 19th century, medical missionaries in China started translating and introducing anatomical and physiological knowledge about the pancreas. As for the word pancreas, an early and influential translation was “sweet meat” (甜肉), proposed by Benjamin Hobson (合信). The translation “sweet meat” is not faithful to the original meaning of “pancreas”, but is a term coined by Hobson based on his personal habits, and the word “sweet” appeared by chance. However, in the decades since the term “sweet meat” became popular, Chinese medicine practitioners, such as Tang Zonghai (唐宗海), reinterpreted it by drawing new medical illustrations for “sweet meat” and giving new connotations to the word “sweet”. This discussion and interpretation of “sweet meat” in modern China, particularly among Chinese medicine professionals, is not only a dissemination and interpretation of the knowledge of the “pancreas”, but also a construction of knowledge around the term “sweet meat”.
An algorithm for detecting a moving IR point target against a complex background is proposed, based on the Reverse Phase Feature of Neighborhood (RPFN) of the target in the difference between neighboring frame images: the target's two positions in the difference image lie close together, and their gray values are close in absolute value but opposite in sign. Firstly, pairs of points with the RPFN are detected in the difference image between neighboring frames, from which a virtual vector graph is made; the moving point target can then be detected from the sequence of vectors accumulated over the vector graphs. In addition, a theorem for the convergence of target-contrail detection by this algorithm is given and proved, affording a solid guarantee for practical applications of the algorithm proposed in this paper. Finally, simulation results with 1000 frames from 10 typical images with complex backgrounds show that moving point targets with an SNR not lower than 1.5 can be detected effectively.
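A small sketch of the reverse-phase idea as described in the abstract: in the difference of two consecutive frames, a moving point target leaves one positive and one negative response of similar magnitude at nearby pixels. The magnitude tolerance, search radius, and amplitude threshold below are assumptions for illustration.

```python
# Sketch: find pixel pairs in a frame difference that are spatially close and have
# gray values of similar magnitude but opposite sign (the RPFN cue for a moving point).
import numpy as np

def rpfn_pairs(frame_prev, frame_curr, amp_thresh=20.0, radius=3, mag_tol=0.3):
    diff = frame_curr.astype(float) - frame_prev.astype(float)
    pos = np.argwhere(diff > amp_thresh)
    neg = np.argwhere(diff < -amp_thresh)
    pairs = []
    for p in pos:
        for n in neg:
            if np.linalg.norm(p - n) <= radius:
                a, b = diff[tuple(p)], -diff[tuple(n)]
                if abs(a - b) <= mag_tol * max(a, b):
                    pairs.append((tuple(n), tuple(p)))  # vector from old to new position
    return pairs

# Hypothetical usage: a point target moving one pixel to the right between frames.
prev = np.zeros((32, 32)); prev[16, 10] = 100.0
curr = np.zeros((32, 32)); curr[16, 11] = 100.0
print(rpfn_pairs(prev, curr))  # [((16, 10), (16, 11))]
```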
Road traffic monitoring is an imperative topic widely discussed among researchers. Systems used to monitor traffic frequently rely on cameras mounted on bridges or roadsides. However, aerial images provide the flexibility to use mobile platforms to detect the location and motion of vehicles over a larger area. To this end, different models have shown the ability to recognize and track vehicles. However, these methods are not mature enough to produce accurate results in complex road scenes. Therefore, this paper presents an algorithm that combines state-of-the-art techniques for identifying and tracking vehicles in conjunction with image bursts. The extracted frames were converted to grayscale, followed by the application of a georeferencing algorithm to embed coordinate information into the images. A masking technique eliminated irrelevant data and reduced the computational cost of the overall monitoring system. Next, Sobel edge detection combined with Canny edge detection and the Hough line transform was applied for noise reduction. After preprocessing, a blob detection algorithm helped detect the vehicles. Vehicles of varying sizes were detected by implementing a dynamic thresholding scheme. Detection was done on the first image of every burst. Then, to track vehicles, a model of each vehicle was matched in the succeeding images using the template matching algorithm. To further improve the tracking accuracy by incorporating motion information, Scale Invariant Feature Transform (SIFT) features were used to find the best possible match among multiple matches. An accuracy rate of 87% for detection and 80% for tracking was achieved on the A1 Motorway Netherlands dataset. For the Vehicle Aerial Imaging from Drone (VAID) dataset, an accuracy rate of 86% for detection and 78% for tracking was achieved.
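A hedged OpenCV sketch of the detect-then-track core only (georeferencing, masking, and the dynamic threshold are omitted): blobs are detected on the first frame of a burst, and each blob patch is then located in the next frame with normalized template matching; SIFT matching, as the abstract describes, would arbitrate between multiple similar template matches. Parameter values are placeholders.

```python
# Sketch: detect vehicle-like blobs in the first frame of a burst, then find each
# blob's patch in the next frame via normalized template matching.
import cv2
import numpy as np

def detect_blobs(gray):
    params = cv2.SimpleBlobDetector_Params()
    params.filterByArea = True
    params.minArea, params.maxArea = 50, 5000   # assumed vehicle size range in pixels
    detector = cv2.SimpleBlobDetector_create(params)
    return detector.detect(gray)

def track_patch(prev_gray, next_gray, keypoint, half=16):
    x, y = map(int, keypoint.pt)
    patch = prev_gray[max(y - half, 0):y + half, max(x - half, 0):x + half]
    result = cv2.matchTemplate(next_gray, patch, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)
    return top_left, score  # best match location in the next frame and its confidence

# Hypothetical usage with two consecutive grayscale burst frames:
# blobs = detect_blobs(frame0)
# for kp in blobs:
#     print(track_patch(frame0, frame1, kp))
```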
In light of the limited efficacy of conventional methods for identifying pavement cracks and the absence of comprehensive depth and location data in two-dimensional photographs, this study presents an intelligent strategy for extracting road cracks. The methodology integrates laser point cloud data obtained from a vehicle-mounted system with a panoramic image sequence. The study employs a vehicle-mounted LiDAR measurement system to acquire laser point cloud and panoramic sequence image data simultaneously. A convolutional neural network is utilized to extract cracks from the panoramic sequence images. The extracted sequence images are then aligned with the laser point cloud, enabling the assignment of RGB information to the vehicle-mounted three-dimensional (3D) point cloud and location information to the two-dimensional (2D) panoramic images. Additionally, a threshold value is set based on the crack elevation change to extract the aligned roadway point cloud, from which the three-dimensional data pertaining to the cracks can be acquired. The experimental findings demonstrate that the use of convolutional neural networks yields noteworthy outcomes in the extraction of road cracks. The utilization of point cloud and image alignment techniques enables the extraction of precise location data pertaining to road cracks, with superior accuracy compared to conventional methods. Moreover, it facilitates rapid and accurate identification and localization of road cracks, thereby playing a crucial role in road maintenance and traffic safety. Consequently, this technique shows significant promise for extensive application in the domains of intelligent transportation and urban development.
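The alignment between the LiDAR points and the panoramic images is not specified beyond "aligned", so the sketch below shows the generic step such a pipeline relies on: projecting 3D points into an image with an assumed pinhole camera model and copying the pixel colour onto each point. The intrinsic and extrinsic matrices here are placeholders, not the vehicle system's calibration, and a true panoramic sensor would use a different projection model.

```python
# Sketch: colourize LiDAR points by projecting them into an image with a pinhole model.
# K (3x3 intrinsics) and T (4x4 extrinsics) are assumed placeholders for real calibration.
import numpy as np

def colorize_points(points_xyz, image, K, T):
    """Return an (N, 6) array of x, y, z, r, g, b for points that land inside the image."""
    homo = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    cam = (T @ homo.T).T[:, :3]                 # world -> camera coordinates
    in_front = cam[:, 2] > 0
    uv = (K @ cam[in_front].T).T
    uv = (uv[:, :2] / uv[:, 2:3]).astype(int)   # perspective divide -> pixel coords
    h, w = image.shape[:2]
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    pts = points_xyz[in_front][valid]
    rgb = image[uv[valid, 1], uv[valid, 0]]
    return np.hstack([pts, rgb])

# Hypothetical usage:
# colored = colorize_points(lidar_xyz, panoramic_rgb, K, T)
```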
Image-based root phenotyping technologies, including the minirhizotron (MR), have expanded our understanding of in situ root responses to changing environmental conditions. The conventional manual methods used to analyze MR images are time-consuming, limiting their implementation. This study presents an adaptation of our previously developed convolutional neural network-based models to estimate the total (cumulative) root length (TRL) per MR image without requiring segmentation. Training data were derived from manual annotations in Rootfly, commonly used software for MR image analysis. We compared TRL estimation with 2 models, a regression-based model and a detection-based model that detects the annotated points along the roots. Notably, the detection-based model can assist in examining human annotations by providing a visual inspection of roots in MR images. The models were trained and tested with 4,015 images acquired using 2 MR system types (manual and automated) and from 4 crop species (corn, pepper, melon, and tomato) grown under various abiotic stresses. These datasets are made publicly available as part of this publication. The coefficients of determination (R²) between the measurements made using Rootfly and the suggested TRL estimation models were 0.929 to 0.986 for the main datasets, demonstrating that this tool is accurate and robust. Additional analyses were conducted to examine the effects of (a) the data acquisition system and thus the image quality on the models' performance, (b) automated differentiation between images with and without roots, and (c) the use of the transfer learning technique. These approaches can support precision agriculture by providing real-time root growth information.
This paper emphasizes a faster digital processing time while presenting an accurate method for identifying spine fractures in X-ray pictures. The study focuses on efficiency by utilizing several methods that include picture segmentation, feature reduction, and image classification. Two important elements are investigated to reduce the classification time: using feature reduction software and leveraging the capabilities of sophisticated digital processing hardware. The researchers use different algorithms for picture enhancement, including the Wiener and Kalman filters, and they investigate two background correction techniques. The article presents a technique for extracting textural features and evaluates three picture segmentation algorithms and three fractured-spine detection algorithms using the transform domain, Power Density Spectrum (PDS), and Higher-Order Statistics (HOS) for feature extraction. With an emphasis on reducing digital processing time, this all-encompassing method helps to create a simplified system for classifying spine fractures. A feature reduction program code has been built to improve the processing speed for picture classification. Overall, the proposed approach shows great potential for significantly reducing classification time in clinical settings where time is critical. In comparison to other transform domains, the discrete cosine transform (DCT) of the texture features yielded an exceptional classification rate, and the process of extracting features from the transform domain took less time. More capable hardware can also result in quicker execution times for the feature extraction algorithms.
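As a hedged illustration of transform-domain texture features (the paper's exact coefficient selection is not given in the abstract), the sketch below takes the 2-D discrete cosine transform of an image patch with SciPy and keeps a small top-left block of low-frequency coefficients as a feature vector; the patch size and block size are assumptions.

```python
# Sketch: DCT-based texture features for a grayscale patch — keep the low-frequency
# top-left block of the 2-D DCT as a compact descriptor.
import numpy as np
from scipy.fft import dctn

def dct_features(patch, block=8):
    """Return a flattened block x block array of low-frequency DCT coefficients."""
    coeffs = dctn(patch.astype(float), norm="ortho")
    return coeffs[:block, :block].ravel()

# Hypothetical usage on a 64x64 patch cropped around a vertebra:
rng = np.random.default_rng(0)
patch = rng.random((64, 64))
features = dct_features(patch)
print(features.shape)  # (64,)
```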
The act of transmitting photos via the Internet has become a routine and significant activity. Enhancing the security measures to safeguard these images from counterfeiting and modification is a critical domain that can still be further improved. This study presents a system that employs a range of approaches and algorithms to ensure the security of transmitted venous images. The main goal of this work is to create a very effective system for compressing individual biometrics in order to improve the overall accuracy and security of digital photographs by means of image compression. This paper introduces a content-based image authentication mechanism that is suitable for use across an untrusted network and resistant to data loss during transmission. By employing scale attributes and a key-dependent parametric Long Short-Term Memory (LSTM), it is feasible to improve the resilience of digital signatures against image deterioration and strengthen their security against malicious actions. Furthermore, the transmission of biometric data in a compressed format over a wireless network has been successfully implemented, which suits applications involving the transmission and sharing of images across a network. The suggested technique utilizes the scalability of a structural digital signature to attain a satisfactory equilibrium between security and picture transfer. An effective adaptive compression strategy was created to lengthen the overall lifetime of the network by sharing the processing responsibilities. This scheme ensures a large reduction in computational and energy requirements while minimizing image quality loss. The approach employs multi-scale characteristics to improve the resistance of signatures against image deterioration. The proposed system attained a Gaussian noise value of 98% and a rotation accuracy surpassing 99%.
Deep learning is capable of greatly promoting the progress of super-resolution imaging technology in terms of imaging and reconstruction speed, imaging resolution, and imaging flux. This paper proposes a deep neural network based on a generative adversarial network (GAN). The generator employs a U-Net-based network, which integrates DenseNet for the downsampling component. The proposed method has excellent properties; for example, the network model is trained with several different datasets of biological structures, the trained model can improve the imaging resolution of different microscopy imaging modalities such as confocal imaging and wide-field imaging, and the model demonstrates a generalized ability to improve the resolution of different biological structures even outside the training datasets. In addition, experimental results showed that the method improved the resolution of caveolin-coated pits (CCPs) structures from 264 nm to 138 nm, a 1.91-fold increase, and nearly doubled the resolution of DNA molecules imaged while being transported through microfluidic channels.
In blood or bone marrow, leukemia is a form of cancer. A person with leukemia has an expansion of white blood cells (WBCs). It primarily affects children and rarely affects adults. Treatment depends on the type of leukemia and the extent to which cancer has established throughout the body. Identifying leukemia in the initial stage is vital to providing timely patient care. Medical image-analysis-related approaches grant safer, quicker, and less costly solutions while avoiding the difficulties of invasive procedures. Computer vision (CV)-based and image-processing techniques can be simple to generalize and can eradicate human error. Many researchers have implemented computer-aided diagnostic methods and machine learning (ML) for laboratory image analysis, hopefully overcoming the limitations of late leukemia detection and determining its subgroups. This study establishes a Marine Predators Algorithm with Deep Learning Leukemia Cancer Classification (MPADL-LCC) algorithm on medical images. The projected MPADL-LCC system uses a bilateral filtering (BF) technique to pre-process medical images. The MPADL-LCC system uses Faster SqueezeNet with the Marine Predators Algorithm (MPA) as a hyperparameter optimizer for feature extraction. Lastly, the denoising autoencoder (DAE) methodology can be executed to accurately detect and classify leukemia cancer. The hyperparameter tuning process using MPA helps enhance leukemia cancer classification performance. Simulation results are compared with other recent approaches concerning various measurements, and the MPADL-LCC algorithm exhibits the best results over other recent approaches.
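A minimal sketch of the bilateral-filtering pre-processing step with OpenCV; the filter diameter and the colour/space sigmas are assumed values, not those used in the MPADL-LCC pipeline, and the rest of the pipeline (Faster SqueezeNet, MPA tuning, DAE classifier) is not reproduced.

```python
# Sketch: bilateral filtering as edge-preserving smoothing of a blood-smear image
# before feature extraction. Parameter values are illustrative assumptions.
import cv2

def preprocess_smear(path, diameter=9, sigma_color=75, sigma_space=75):
    image = cv2.imread(path)                      # BGR blood-smear image
    if image is None:
        raise FileNotFoundError(path)
    return cv2.bilateralFilter(image, diameter, sigma_color, sigma_space)

# Hypothetical usage:
# smoothed = preprocess_smear("smear_0001.png")
# cv2.imwrite("smear_0001_bf.png", smoothed)
```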
文摘Lower back pain is one of the most common medical problems in the world and it is experienced by a huge percentage of people everywhere.Due to its ability to produce a detailed view of the soft tissues,including the spinal cord,nerves,intervertebral discs,and vertebrae,Magnetic Resonance Imaging is thought to be the most effective method for imaging the spine.The semantic segmentation of vertebrae plays a major role in the diagnostic process of lumbar diseases.It is difficult to semantically partition the vertebrae in Magnetic Resonance Images from the surrounding variety of tissues,including muscles,ligaments,and intervertebral discs.U-Net is a powerful deep-learning architecture to handle the challenges of medical image analysis tasks and achieves high segmentation accuracy.This work proposes a modified U-Net architecture namely MU-Net,consisting of the Meijering convolutional layer that incorporates the Meijering filter to perform the semantic segmentation of lumbar vertebrae L1 to L5 and sacral vertebra S1.Pseudo-colour mask images were generated and used as ground truth for training the model.The work has been carried out on 1312 images expanded from T1-weighted mid-sagittal MRI images of 515 patients in the Lumbar Spine MRI Dataset publicly available from Mendeley Data.The proposed MU-Net model for the semantic segmentation of the lumbar vertebrae gives better performance with 98.79%of pixel accuracy(PA),98.66%of dice similarity coefficient(DSC),97.36%of Jaccard coefficient,and 92.55%mean Intersection over Union(mean IoU)metrics using the mentioned dataset.
文摘A measurement system for the scattering characteristics of warhead fragments based on high-speed imaging systems offers advantages such as simple deployment,flexible maneuverability,and high spatiotemporal resolution,enabling the acquisition of full-process data of the fragment scattering process.However,mismatches between camera frame rates and target velocities can lead to long motion blur tails of high-speed fragment targets,resulting in low signal-to-noise ratios and rendering conventional detection algorithms ineffective in dynamic strong interference testing environments.In this study,we propose a detection framework centered on dynamic strong interference disturbance signal separation and suppression.We introduce a mixture Gaussian model constrained under a joint spatialtemporal-transform domain Dirichlet process,combined with total variation regularization to achieve disturbance signal suppression.Experimental results demonstrate that the proposed disturbance suppression method can be integrated with certain conventional motion target detection tasks,enabling adaptation to real-world data to a certain extent.Moreover,we provide a specific implementation of this process,which achieves a detection rate close to 100%with an approximate 0%false alarm rate in multiple sets of real target field test data.This research effectively advances the development of the field of damage parameter testing.
基金supported by the National Natural Science Foundation of China(Grant Nos.42322408,42188101,41974211,and 42074202)the Key Research Program of Frontier Sciences,Chinese Academy of Sciences(Grant No.QYZDJ-SSW-JSC028)+1 种基金the Strategic Priority Program on Space Science,Chinese Academy of Sciences(Grant Nos.XDA15052500,XDA15350201,and XDA15014800)supported by the Youth Innovation Promotion Association of the Chinese Academy of Sciences(Grant No.Y202045)。
文摘Astronomical imaging technologies are basic tools for the exploration of the universe,providing basic data for the research of astronomy and space physics.The Soft X-ray Imager(SXI)carried by the Solar wind Magnetosphere Ionosphere Link Explorer(SMILE)aims to capture two-dimensional(2-D)images of the Earth’s magnetosheath by using soft X-ray imaging.However,the observed 2-D images are affected by many noise factors,destroying the contained information,which is not conducive to the subsequent reconstruction of the three-dimensional(3-D)structure of the magnetopause.The analysis of SXI-simulated observation images shows that such damage cannot be evaluated with traditional restoration models.This makes it difficult to establish the mapping relationship between SXIsimulated observation images and target images by using mathematical models.We propose an image restoration algorithm for SXIsimulated observation images that can recover large-scale structure information on the magnetosphere.The idea is to train a patch estimator by selecting noise–clean patch pairs with the same distribution through the Classification–Expectation Maximization algorithm to achieve the restoration estimation of the SXI-simulated observation image,whose mapping relationship with the target image is established by the patch estimator.The Classification–Expectation Maximization algorithm is used to select multiple patch clusters with the same distribution and then train different patch estimators so as to improve the accuracy of the estimator.Experimental results showed that our image restoration algorithm is superior to other classical image restoration algorithms in the SXI-simulated observation image restoration task,according to the peak signal-to-noise ratio and structural similarity.The restoration results of SXI-simulated observation images are used in the tangent fitting approach and the computed tomography approach toward magnetospheric reconstruction techniques,significantly improving the reconstruction results.Hence,the proposed technology may be feasible for processing SXI-simulated observation images.
基金supported by the Research Council of Norway under contracts 223252/F50 and 300844/F50the Trond Mohn Foundation。
文摘Global images of auroras obtained by cameras on spacecraft are a key tool for studying the near-Earth environment.However,the cameras are sensitive not only to auroral emissions produced by precipitating particles,but also to dayglow emissions produced by photoelectrons induced by sunlight.Nightglow emissions and scattered sunlight can contribute to the background signal.To fully utilize such images in space science,background contamination must be removed to isolate the auroral signal.Here we outline a data-driven approach to modeling the background intensity in multiple images by formulating linear inverse problems based on B-splines and spherical harmonics.The approach is robust,flexible,and iteratively deselects outliers,such as auroral emissions.The final model is smooth across the terminator and accounts for slow temporal variations and large-scale asymmetries in the dayglow.We demonstrate the model by using the three far ultraviolet cameras on the Imager for Magnetopause-to-Aurora Global Exploration(IMAGE)mission.The method can be applied to historical missions and is relevant for upcoming missions,such as the Solar wind Magnetosphere Ionosphere Link Explorer(SMILE)mission.
基金supported by the National Natural Science Foundation of China(62375144 and 61875092)Tianjin Foundation of Natural Science(21JCYBJC00260)Beijing-Tianjin-Hebei Basic Research Cooperation Special Program(19JCZDJC65300).
文摘Limited by the dynamic range of the detector,saturation artifacts usually occur in optical coherence tomography(OCT)imaging for high scattering media.The available methods are difficult to remove saturation artifacts and restore texture completely in OCT images.We proposed a deep learning-based inpainting method of saturation artifacts in this paper.The generation mechanism of saturation artifacts was analyzed,and experimental and simulated datasets were built based on the mechanism.Enhanced super-resolution generative adversarial networks were trained by the clear–saturated phantom image pairs.The perfect reconstructed results of experimental zebrafish and thyroid OCT images proved its feasibility,strong generalization,and robustness.
基金supported by the Technology Development Program(S3344882)funded by the Ministry of SMEs and Startups(MSS,Korea).
文摘In the context of high compression rates applied to Joint Photographic Experts Group(JPEG)images through lossy compression techniques,image-blocking artifacts may manifest.This necessitates the restoration of the image to its original quality.The challenge lies in regenerating significantly compressed images into a state in which these become identifiable.Therefore,this study focuses on the restoration of JPEG images subjected to substantial degradation caused by maximum lossy compression using Generative Adversarial Networks(GAN).The generator in this network is based on theU-Net architecture.It features a newhourglass structure that preserves the characteristics of the deep layers.In addition,the network incorporates two loss functions to generate natural and high-quality images:Low Frequency(LF)loss and High Frequency(HF)loss.HF loss uses a pretrained VGG-16 network and is configured using a specific layer that best represents features.This can enhance the performance in the high-frequency region.In contrast,LF loss is used to handle the low-frequency region.The two loss functions facilitate the generation of images by the generator,which can mislead the discriminator while accurately generating high-and low-frequency regions.Consequently,by removing the blocking effects frommaximum lossy compressed images,images inwhich identities could be recognized are generated.This study represents a significant improvement over previous research in terms of the image resolution performance.
文摘Introduction: Ultrafast latest developments in artificial intelligence (ΑΙ) have recently multiplied concerns regarding the future of robotic autonomy in surgery. However, the literature on the topic is still scarce. Aim: To test a novel AI commercially available tool for image analysis on a series of laparoscopic scenes. Methods: The research tools included OPENAI CHATGPT 4.0 with its corresponding image recognition plugin which was fed with a list of 100 laparoscopic selected snapshots from common surgical procedures. In order to score reliability of received responses from image-recognition bot, two corresponding scales were developed ranging from 0 - 5. The set of images was divided into two groups: unlabeled (Group A) and labeled (Group B), and according to the type of surgical procedure or image resolution. Results: AI was able to recognize correctly the context of surgical-related images in 97% of its reports. For the labeled surgical pictures, the image-processing bot scored 3.95/5 (79%), whilst for the unlabeled, it scored 2.905/5 (58.1%). Phases of the procedure were commented in detail, after all successful interpretations. With rates 4 - 5/5, the chatbot was able to talk in detail about the indications, contraindications, stages, instrumentation, complications and outcome rates of the operation discussed. Conclusion: Interaction between surgeon and chatbot appears to be an interesting frontend for further research by clinicians in parallel with evolution of its complex underlying infrastructure. In this early phase of using artificial intelligence for image recognition in surgery, no safe conclusions can be drawn by small cohorts with commercially available software. Further development of medically-oriented AI software and clinical world awareness are expected to bring fruitful information on the topic in the years to come.
基金This research was funded by the Natural Science Foundation of Hebei Province(F2021506004).
文摘Transformer-based models have facilitated significant advances in object detection.However,their extensive computational consumption and suboptimal detection of dense small objects curtail their applicability in unmanned aerial vehicle(UAV)imagery.Addressing these limitations,we propose a hybrid transformer-based detector,H-DETR,and enhance it for dense small objects,leading to an accurate and efficient model.Firstly,we introduce a hybrid transformer encoder,which integrates a convolutional neural network-based cross-scale fusion module with the original encoder to handle multi-scale feature sequences more efficiently.Furthermore,we propose two novel strategies to enhance detection performance without incurring additional inference computation.Query filter is designed to cope with the dense clustering inherent in drone-captured images by counteracting similar queries with a training-aware non-maximum suppression.Adversarial denoising learning is a novel enhancement method inspired by adversarial learning,which improves the detection of numerous small targets by counteracting the effects of artificial spatial and semantic noise.Extensive experiments on the VisDrone and UAVDT datasets substantiate the effectiveness of our approach,achieving a significant improvement in accuracy with a reduction in computational complexity.Our method achieves 31.9%and 21.1%AP on the VisDrone and UAVDT datasets,respectively,and has a faster inference speed,making it a competitive model in UAV image object detection.
文摘Diagnosing various diseases such as glaucoma,age-related macular degeneration,cardiovascular conditions,and diabetic retinopathy involves segmenting retinal blood vessels.The task is particularly challenging when dealing with color fundus images due to issues like non-uniformillumination,low contrast,and variations in vessel appearance,especially in the presence of different pathologies.Furthermore,the speed of the retinal vessel segmentation system is of utmost importance.With the surge of now available big data,the speed of the algorithm becomes increasingly important,carrying almost equivalent weightage to the accuracy of the algorithm.To address these challenges,we present a novel approach for retinal vessel segmentation,leveraging efficient and robust techniques based on multiscale line detection and mathematical morphology.Our algorithm’s performance is evaluated on two publicly available datasets,namely the Digital Retinal Images for Vessel Extraction dataset(DRIVE)and the Structure Analysis of Retina(STARE)dataset.The experimental results demonstrate the effectiveness of our method,withmean accuracy values of 0.9467 forDRIVE and 0.9535 for STARE datasets,aswell as sensitivity values of 0.6952 forDRIVE and 0.6809 for STARE datasets.Notably,our algorithmexhibits competitive performance with state-of-the-art methods.Importantly,it operates at an average speed of 3.73 s per image for DRIVE and 3.75 s for STARE datasets.It is worth noting that these results were achieved using Matlab scripts containing multiple loops.This suggests that the processing time can be further reduced by replacing loops with vectorization.Thus the proposed algorithm can be deployed in real time applications.In summary,our proposed system strikes a fine balance between swift computation and accuracy that is on par with the best available methods in the field.
基金This research was funded by the National Natural Science Foundation of China(Nos.71762010,62262019,62162025,61966013,12162012)the Hainan Provincial Natural Science Foundation of China(Nos.823RC488,623RC481,620RC603,621QN241,620RC602,121RC536)+1 种基金the Haikou Science and Technology Plan Project of China(No.2022-016)the Project supported by the Education Department of Hainan Province,No.Hnky2021-23.
文摘Artificial Intelligence(AI)is being increasingly used for diagnosing Vision-Threatening Diabetic Retinopathy(VTDR),which is a leading cause of visual impairment and blindness worldwide.However,previous automated VTDR detection methods have mainly relied on manual feature extraction and classification,leading to errors.This paper proposes a novel VTDR detection and classification model that combines different models through majority voting.Our proposed methodology involves preprocessing,data augmentation,feature extraction,and classification stages.We use a hybrid convolutional neural network-singular value decomposition(CNN-SVD)model for feature extraction and selection and an improved SVM-RBF with a Decision Tree(DT)and K-Nearest Neighbor(KNN)for classification.We tested our model on the IDRiD dataset and achieved an accuracy of 98.06%,a sensitivity of 83.67%,and a specificity of 100%for DR detection and evaluation tests,respectively.Our proposed approach outperforms baseline techniques and provides a more robust and accurate method for VTDR detection.
基金National Key Research and Development Program of China(2022YFB3903302 and 2021YFC1809104)。
文摘Rapid and accurate acquisition of soil organic matter(SOM)information in cultivated land is important for sustainable agricultural development and carbon balance management.This study proposed a novel approach to predict SOM with high accuracy using multiyear synthetic remote sensing variables on a monthly scale.We obtained 12 monthly synthetic Sentinel-2 images covering the study area from 2016 to 2021 through the Google Earth Engine(GEE)platform,and reflectance bands and vegetation indices were extracted from these composite images.Then the random forest(RF),support vector machine(SVM)and gradient boosting regression tree(GBRT)models were tested to investigate the difference in SOM prediction accuracy under different combinations of monthly synthetic variables.Results showed that firstly,all monthly synthetic spectral bands of Sentinel-2 showed a significant correlation with SOM(P<0.05)for the months of January,March,April,October,and November.Secondly,in terms of single-monthly composite variables,the prediction accuracy was relatively poor,with the highest R^(2)value of 0.36 being observed in January.When monthly synthetic environmental variables were grouped in accordance with the four quarters of the year,the first quarter and the fourth quarter showed good performance,and any combination of three quarters was similar in estimation accuracy.The overall best performance was observed when all monthly synthetic variables were incorporated into the models.Thirdly,among the three models compared,the RF model was consistently more accurate than the SVM and GBRT models,achieving an R^(2)value of 0.56.Except for band 12 in December,the importance of the remaining bands did not exhibit significant differences.This research offers a new attempt to map SOM with high accuracy and fine spatial resolution based on monthly synthetic Sentinel-2 images.
基金financed by the grant from the Youth Fund for Humanities and Social Sciences Research of the Ministry of Education (No. 19YJCZH040)。
文摘The pancreas is neither part of the five Zang organs(五脏) nor the six Fu organs(六腑).Thus,it has received little attention in Chinese medical literature.In the late 19th century,medical missionaries in China started translating and introducing anatomical and physiological knowledge about the pancreas.As for the word pancreas,an early and influential translation was “sweet meat”(甜肉),proposed by Benjamin Hobson(合信).The translation “sweet meat” is not faithful to the original meaning of “pancreas”,but is a term coined by Hobson based on his personal habits,and the word “sweet” appeared by chance.However,in the decades since the term “sweet meat” became popular,Chinese medicine practitioners,such as Tang Zonghai(唐宗海),reinterpreted it by drawing new medical illustrations for “sweet meat” and giving new connotations to the word “sweet”.This discussion and interpretation of “sweet meat” in modern China,particularly among Chinese medicine professionals,is not only a dissemination and interpretation of the knowledge of “pancreas”,but also a construction of knowledge around the term “sweet meat”.
文摘An algorithm for detecting moving IR point target in complex background is proposed, which is based on the Reverse Phase Feature of Neighborhood (RPFN) of target in difference between neighbor frame images that two positions of the target in the difference image are near and the gray values of them are close to in absolute value but with inverse sign. Firstly, pairs of points with RPFN are detected in the difference image between neighbor frame images, with which a virtual vector graph is made, and then the moving point target can be detected by the vectors' sequence cumulated in vector graphs. In addition, a theorem for the convergence of detection of target contrail by this algorithm is given and proved so as to afford a solid guarantee for practical applications of the algorithm proposed in this paper. Finally, some simulation results with 1000 frames from 10 typical images in complex background show that moving point targets with SNR not lower than 1.5 can be detected effectively.
基金supported by a grant from the Basic Science Research Program through the National Research Foundation(NRF)(2021R1F1A1063634)funded by the Ministry of Science and ICT(MSIT),Republic of KoreaThe authors are thankful to the Deanship of Scientific Research at Najran University for funding this work under the Research Group Funding Program Grant Code(NU/RG/SERC/13/40)+2 种基金Also,the authors are thankful to Prince Satam bin Abdulaziz University for supporting this study via funding from Prince Satam bin Abdulaziz University project number(PSAU/2024/R/1445)This work was also supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number(PNURSP2023R54)Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.
文摘Road traffic monitoring is an imperative topic widely discussed among researchers.Systems used to monitor traffic frequently rely on cameras mounted on bridges or roadsides.However,aerial images provide the flexibility to use mobile platforms to detect the location and motion of the vehicle over a larger area.To this end,different models have shown the ability to recognize and track vehicles.However,these methods are not mature enough to produce accurate results in complex road scenes.Therefore,this paper presents an algorithm that combines state-of-the-art techniques for identifying and tracking vehicles in conjunction with image bursts.The extracted frames were converted to grayscale,followed by the application of a georeferencing algorithm to embed coordinate information into the images.The masking technique eliminated irrelevant data and reduced the computational cost of the overall monitoring system.Next,Sobel edge detection combined with Canny edge detection and Hough line transform has been applied for noise reduction.After preprocessing,the blob detection algorithm helped detect the vehicles.Vehicles of varying sizes have been detected by implementing a dynamic thresholding scheme.Detection was done on the first image of every burst.Then,to track vehicles,the model of each vehicle was made to find its matches in the succeeding images using the template matching algorithm.To further improve the tracking accuracy by incorporating motion information,Scale Invariant Feature Transform(SIFT)features have been used to find the best possible match among multiple matches.An accuracy rate of 87%for detection and 80%accuracy for tracking in the A1 Motorway Netherland dataset has been achieved.For the Vehicle Aerial Imaging from Drone(VAID)dataset,an accuracy rate of 86%for detection and 78%accuracy for tracking has been achieved.
Funding: Funded by the National Key R&D Program of China (No. 2021YFB2601200), the National Natural Science Foundation of China (No. 42171416), and the Teacher Support Program for the Pyramid Talent Training Project of Beijing University of Civil Engineering and Architecture (No. JDJQ20200307).
Abstract: In light of the limited efficacy of conventional methods for identifying pavement cracks and the absence of depth and location information in two-dimensional photographs, this study presents an intelligent strategy for extracting road cracks that integrates laser point cloud data from a vehicle-mounted system with a panoramic image sequence. A vehicle-mounted LiDAR measurement system acquires laser point cloud and panoramic sequence image data simultaneously. A convolutional neural network is used to extract cracks from the panoramic images. The extracted images are then aligned with the laser point cloud, enabling RGB information to be assigned to the vehicle-mounted three-dimensional (3D) point cloud and location information to be assigned to the two-dimensional (2D) panoramic images. Additionally, a threshold based on the crack elevation change is set to extract the aligned roadway point cloud, so that three-dimensional data describing the cracks can be acquired. The experimental findings demonstrate that the convolutional neural network yields noteworthy results in extracting road cracks, and that the point cloud and image alignment technique extracts precise location data for the cracks. The approach exhibits superior accuracy compared to conventional methods and enables rapid, accurate identification and localization of road cracks, thereby playing a crucial role in road maintenance and traffic safety. Consequently, this technique holds significant promise for applications in intelligent transportation and urban development.
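A minimal sketch of the alignment and elevation-threshold step might look as follows; the projection callback, the crack mask produced by the CNN, and the threshold value are all illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def extract_crack_points(points_xyz, crack_mask, project, dz_thresh=0.01):
    """Keep LiDAR points that project onto CNN-labelled crack pixels and lie
    more than dz_thresh metres below a crude estimate of the road surface.
    `project` maps a 3-D point to (row, col) in the registered panoramic image."""
    road_z = np.median(points_xyz[:, 2])          # rough local road elevation
    h, w = crack_mask.shape
    crack_points = []
    for p in points_xyz:
        r, c = project(p)
        if 0 <= r < h and 0 <= c < w and crack_mask[r, c] and (road_z - p[2]) > dz_thresh:
            crack_points.append(p)
    return np.asarray(crack_points)
```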
Funding: Supported by the Israeli Ministry of Agriculture and Rural Development (Eugene Kandel Knowledge Centers) as part of the “Root of the Matter - The Root Zone Knowledge Center for Leveraging Modern Agriculture”, grant number 16-38-0044 of the Israeli Ministry of Agriculture and Rural Development, and the European Union's Horizon 2020 Research and Innovation Program (grant agreement no. 777222) (ATTRACT project “NextMR-IAA”). Partial support was provided by Ben-Gurion University of the Negev and the W. Gunther Plaut Chair in Manufacturing Engineering.
Abstract: Image-based root phenotyping technologies, including the minirhizotron (MR), have expanded our understanding of in situ root responses to changing environmental conditions. The conventional manual methods used to analyze MR images are time-consuming, limiting their implementation. This study presents an adaptation of our previously developed convolutional neural network-based models to estimate the total (cumulative) root length (TRL) per MR image without requiring segmentation. Training data were derived from manual annotations in Rootfly, a commonly used software package for MR image analysis. We compared TRL estimation with two models: a regression-based model and a detection-based model that detects the annotated points along the roots. Notably, the detection-based model can assist in examining human annotations by providing a visual inspection of roots in MR images. The models were trained and tested with 4,015 images acquired using two MR system types (manual and automated) and from four crop species (corn, pepper, melon, and tomato) grown under various abiotic stresses. These datasets are made publicly available as part of this publication. The coefficients of determination (R2) between the measurements made using Rootfly and the suggested TRL estimation models were 0.929 to 0.986 for the main datasets, demonstrating that this tool is accurate and robust. Additional analyses were conducted to examine the effects of (a) the data acquisition system and thus the image quality on the models' performance, (b) automated differentiation between images with and without roots, and (c) the use of the transfer learning technique. These approaches can support precision agriculture by providing real-time root growth information.
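For readers unfamiliar with segmentation-free length estimation, the regression variant can be pictured as a small CNN that maps an MR image directly to one TRL value; the PyTorch sketch below is an illustrative stand-in, not the authors' architecture.

```python
import torch
import torch.nn as nn

class TRLRegressor(nn.Module):
    """Toy regression model: convolutional features pooled to a single
    total-root-length prediction per minirhizotron image."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):                               # x: (batch, 3, H, W)
        return self.head(self.features(x).flatten(1))   # (batch, 1) TRL estimates
```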
Funding: The authors extend their appreciation to the Deanship of Postgraduate Studies and Scientific Research at Majmaah University for funding this research work through Project Number R-2024-922.
Abstract: This paper presents an accurate method for identifying spine fractures in X-ray pictures while emphasizing a faster digital processing time. The study focuses on efficiency by utilizing several methods, including image segmentation, feature reduction, and image classification. Two elements are investigated to reduce the classification time: feature reduction software and the capabilities of sophisticated digital processing hardware. The researchers use different algorithms for image enhancement, including the Wiener and Kalman filters, and investigate two background correction techniques. The article presents a technique for extracting textural features and evaluates three image segmentation algorithms and three fractured-spine detection algorithms, using the transform domain, the Power Density Spectrum (PDS), and Higher-Order Statistics (HOS) for feature extraction. With an emphasis on reducing digital processing time, this comprehensive method helps to create a simplified system for classifying spine fractures. A feature reduction program has been built to improve the processing speed of image classification. Overall, the proposed approach shows great potential for significantly reducing classification time in clinical settings where time is critical. In comparison to other transform domains, the discrete cosine transform (DCT) of the texture features yielded an exceptional classification rate, and the process of extracting features from the transform domain took less time. More capable hardware can also result in quicker execution times for the feature extraction algorithms.
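The DCT-based texture features mentioned above can be sketched as keeping a low-frequency block of the 2-D DCT of a region of interest; the block size and normalization below are illustrative choices, not the paper's exact settings.

```python
import numpy as np
from scipy.fft import dctn

def dct_texture_features(roi_gray, k=16):
    """Compact texture descriptor: 2-D DCT of a grayscale ROI, keeping the
    k x k low-frequency coefficients as the feature vector."""
    coeffs = dctn(roi_gray.astype(np.float64), norm='ortho')
    return coeffs[:k, :k].ravel()
```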
Abstract: Transmitting photos over the Internet has become a routine and significant activity, and enhancing the security measures that safeguard these images from counterfeiting and modification remains an area that can be further improved. This study presents a system that employs a range of approaches and algorithms to ensure the security of transmitted venous images. The main goal of this work is to create a highly effective system for compressing individual biometrics in order to improve the overall accuracy and security of digital photographs through image compression. The paper introduces a content-based image authentication mechanism that is suitable for use across an untrusted network and resistant to data loss during transmission. By employing scale attributes and a key-dependent parametric Long Short-Term Memory (LSTM), it is feasible to improve the resilience of digital signatures against image deterioration and strengthen their security against malicious actions. Furthermore, the transmission of biometric data in a compressed format over a wireless network has been successfully implemented. For applications involving the transmission and sharing of images across a network, the suggested technique exploits the scalability of a structural digital signature to attain a satisfactory balance between security and image transfer. An effective adaptive compression strategy was created to lengthen the overall lifetime of the network by sharing the processing responsibilities; this scheme ensures a large reduction in computational and energy requirements while minimizing image quality loss. The approach employs multi-scale characteristics to improve the resistance of signatures against image deterioration. The proposed system attained an accuracy of 98% under Gaussian noise and a rotation accuracy surpassing 99%.
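Without reproducing the paper's key-dependent LSTM, the general idea of a key-dependent, multi-scale content signature can be illustrated with a toy stand-in that hashes coarse pyramid statistics under a secret key; everything in this sketch (names, pyramid depth, quantization) is an assumption for illustration only.

```python
import hashlib
import hmac
import cv2
import numpy as np

def multiscale_signature(image_bgr, key: bytes, levels: int = 3) -> str:
    """Toy content signature: quantize coarse statistics of a Gaussian pyramid
    and sign them with a secret key (a stand-in for the key-dependent
    parametric LSTM signature described in the paper)."""
    level = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    digests = []
    for _ in range(levels):
        coarse = cv2.resize(level, (8, 8), interpolation=cv2.INTER_AREA)
        digests.append((coarse // 32).astype(np.uint8).tobytes())  # coarse quantization
        level = cv2.pyrDown(level)
    return hmac.new(key, b"".join(digests), hashlib.sha256).hexdigest()
```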
Funding: Supported by the National Natural Science Foundation of China (Nos. 62275216 and 61775181), the Natural Science Basic Research Programme of Shaanxi Province - Major Basic Research Special Project (Nos. S2018-ZC-TD-0061 and TZ0393), and the Special Project for the Development of National Key Scientific Instruments and Equipment (No. 51927804).
Abstract: Deep learning can greatly advance super-resolution imaging technology in terms of imaging and reconstruction speed, imaging resolution, and imaging flux. This paper proposes a deep neural network based on a generative adversarial network (GAN). The generator employs a U-Net-based network that integrates a DenseNet into the downsampling component. The proposed method has several desirable properties: the network model is trained with several different datasets of biological structures; the trained model can improve the imaging resolution of different microscopy imaging modalities, such as confocal imaging and wide-field imaging; and the model demonstrates a generalized ability to improve the resolution of different biological structures even outside the training datasets. In addition, experimental results showed that the method improved the resolution of caveolin-coated pit (CCP) structures from 264 nm to 138 nm, a 1.91-fold increase, and nearly doubled the resolution of DNA molecules imaged while being transported through microfluidic channels.
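The generator described above can be pictured as a U-Net with skip connections; the minimal PyTorch sketch below stands in for it, with the depth, channel widths, and DenseNet downsampling blocks simplified away as assumptions.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNetGenerator(nn.Module):
    """Very small U-Net-style generator with one skip connection, mapping a
    low-resolution single-channel image to a super-resolved estimate."""
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)
        self.dec1 = conv_block(64 + 32, 32)
        self.out = nn.Conv2d(32, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)               # full-resolution features
        e2 = self.enc2(self.pool(e1))   # downsampled features
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.out(d1)
```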
Funding: Funded by the Researchers Supporting Program at King Saud University (RSPD2024R809).
Abstract: Leukemia is a form of cancer of the blood or bone marrow. A person with leukemia has an expansion of white blood cells (WBCs). It primarily affects children and rarely affects adults. Treatment depends on the type of leukemia and the extent to which the cancer has spread throughout the body. Identifying leukemia at an early stage is vital to providing timely patient care. Medical image-analysis approaches offer safer, quicker, and less costly solutions while avoiding the difficulties of invasive procedures. Computer vision (CV)-based and image-processing techniques can be generalized easily and help eliminate human error. Many researchers have implemented computer-aided diagnostic methods and machine learning (ML) for laboratory image analysis, hoping to overcome the limitations of late leukemia detection and to determine its subgroups. This study establishes a Marine Predators Algorithm with Deep Learning Leukemia Cancer Classification (MPADL-LCC) algorithm on medical images. The proposed MPADL-LCC system uses a bilateral filtering (BF) technique to pre-process medical images and a Faster SqueezeNet with the Marine Predators Algorithm (MPA) as a hyperparameter optimizer for feature extraction. Lastly, a denoising autoencoder (DAE) is applied to accurately detect and classify leukemia cancer. The hyperparameter tuning process using MPA helps enhance the classification performance. Simulation results are compared with other recent approaches on various measures, and the MPADL-LCC algorithm exhibits the best results among them.
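The bilateral-filtering pre-processing step maps directly onto a short OpenCV call; the sketch below shows it with illustrative parameter values that are not taken from the paper.

```python
import cv2

def preprocess_smear(image_bgr, d=9, sigma_color=75, sigma_space=75):
    """Edge-preserving denoising of a blood-smear image with a bilateral filter,
    used here as the pre-processing stage before feature extraction."""
    return cv2.bilateralFilter(image_bgr, d, sigma_color, sigma_space)
```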