Lower back pain is one of the most common medical problems in the world, experienced by a large proportion of people everywhere. Because it produces a detailed view of the soft tissues, including the spinal cord, nerves, intervertebral discs, and vertebrae, Magnetic Resonance Imaging (MRI) is considered the most effective method for imaging the spine. Semantic segmentation of the vertebrae plays a major role in the diagnosis of lumbar diseases, yet it is difficult to semantically separate the vertebrae in MR images from the surrounding variety of tissues, including muscles, ligaments, and intervertebral discs. U-Net is a powerful deep-learning architecture for medical image analysis that achieves high segmentation accuracy. This work proposes a modified U-Net architecture, MU-Net, whose Meijering convolutional layer incorporates the Meijering filter to perform semantic segmentation of lumbar vertebrae L1 to L5 and sacral vertebra S1. Pseudo-colour mask images were generated and used as ground truth for training the model. The work was carried out on 1312 images expanded from T1-weighted mid-sagittal MRI images of 515 patients in the Lumbar Spine MRI Dataset, publicly available from Mendeley Data. On this dataset, the proposed MU-Net model achieves 98.79% pixel accuracy (PA), 98.66% dice similarity coefficient (DSC), 97.36% Jaccard coefficient, and 92.55% mean Intersection over Union (mean IoU).
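The "Meijering convolutional layer" suggests a ridge-enhancing preprocessing stage. As a hedged illustration (not the authors' implementation), the sketch below uses scikit-image's Meijering neuriteness filter to build a two-channel input for a U-Net; the channel layout, sigma range, and bright-ridge setting are assumptions.

```python
# Hypothetical sketch: stack an MRI slice with its Meijering filter response
# as a 2-channel U-Net input. Sigma range and bright-ridge choice are assumptions.
import numpy as np
from skimage.filters import meijering

def meijering_channel(slice_2d: np.ndarray) -> np.ndarray:
    """Return the normalized slice stacked with its Meijering ridge response."""
    x = (slice_2d - slice_2d.min()) / (np.ptp(slice_2d) + 1e-8)   # scale to [0, 1]
    ridge = meijering(x, sigmas=range(1, 6), black_ridges=False)  # enhance bright ridges
    return np.stack([x, ridge], axis=0)  # (2, H, W), ready for a 2-channel U-Net
```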
Aiming to solve the bottleneck of electromagnetic scattering simulation for extremely large-scale scenes of seas and ships, a high-frequency method accelerated by graphics processing unit (GPU) parallel computing is proposed. For the physical optics (PO), shooting and bouncing ray (SBR), and physical theory of diffraction (PTD) methods, a CPU-GPU parallel computing scheme is realized to balance computing tasks. A multi-GPU framework is further proposed to handle the computational difficulty caused by the massive number of ray tubes in the ray-tracing process. Using the established simulation platform, signals of ships in different sea states are simulated and their radar images obtained. The results show that higher sea states degrade the averaged peak signal-to-noise ratio (PSNR) of the radar image.
Traditional large-scale multi-objective optimization algorithms (LSMOEAs) encounter difficulties with sparse large-scale multi-objective optimization problems (SLMOPs), where most decision variables are zero. As a result, many algorithms use a two-layer encoding that optimizes the binary variable Mask and the real variable Dec separately. Existing optimizers often focus on locating non-zero variable positions to optimize the binary Mask; however, approximating the sparse distribution of the real Pareto optimal solutions does not necessarily optimize the objective function. In data mining, frequent itemsets that appear together in a dataset are commonly mined to reveal correlations in the data. Inspired by this, we propose a novel two-layer encoding learning swarm optimizer based on frequent itemsets (TELSO) to address SLMOPs. TELSO mines the frequent items of multiple particles with better objective values to find Mask combinations that yield better objectives and faster convergence. Experimental results on five real-world problems and eight benchmark sets demonstrate that TELSO outperforms existing state-of-the-art sparse large-scale multi-objective evolutionary algorithms (SLMOEAs) in both performance and convergence speed.
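To make the mining step concrete, here is a minimal sketch of frequent-item mining over elite binary Masks. It is an illustrative reconstruction of the idea, not TELSO's actual procedure; the elite size and support threshold are assumptions.

```python
# Illustrative sketch: positions that frequently appear among the best
# particles' Masks are merged into a candidate Mask (min_support is assumed).
import numpy as np

def frequent_mask(masks: np.ndarray, fitness: np.ndarray,
                  top_k: int = 10, min_support: float = 0.5) -> np.ndarray:
    """masks: (pop, dim) binary array; fitness: (pop,) values to minimize."""
    elite = masks[np.argsort(fitness)[:top_k]]   # particles with better objectives
    support = elite.mean(axis=0)                 # per-position occurrence frequency
    return (support >= min_support).astype(int)  # frequent positions form the new Mask
```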
Astronomical imaging technologies are basic tools for the exploration of the universe, providing fundamental data for research in astronomy and space physics. The Soft X-ray Imager (SXI) carried by the Solar wind Magnetosphere Ionosphere Link Explorer (SMILE) aims to capture two-dimensional (2-D) soft X-ray images of the Earth's magnetosheath. However, the observed 2-D images are affected by many noise factors that destroy the contained information, hindering the subsequent reconstruction of the three-dimensional (3-D) structure of the magnetopause. Analysis of SXI-simulated observation images shows that this damage cannot be evaluated with traditional restoration models, which makes it difficult to establish the mapping between SXI-simulated observation images and target images using mathematical models. We propose an image restoration algorithm for SXI-simulated observation images that recovers large-scale structural information on the magnetosphere. The idea is to select noise-clean patch pairs with the same distribution through the Classification-Expectation Maximization algorithm and use them to train a patch estimator that establishes the mapping to the target image. The algorithm selects multiple patch clusters with the same distribution and trains a separate patch estimator for each cluster, improving the accuracy of the estimator. Experimental results show that our algorithm outperforms classical image restoration algorithms on the SXI-simulated observation restoration task in terms of peak signal-to-noise ratio and structural similarity. Using the restored images in the tangent fitting and computed tomography approaches to magnetospheric reconstruction significantly improves the reconstruction results. Hence, the proposed technique appears feasible for processing SXI-simulated observation images.
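As a rough sketch of the clustering stage, the code below groups co-located noisy/clean patch pairs with a Gaussian mixture, standing in for the paper's Classification-Expectation Maximization step; patch size, cluster count, and sample count are assumptions.

```python
# Sketch: sample aligned noisy/clean patch pairs, cluster the noisy patches,
# and return per-cluster pair sets for training separate patch estimators.
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.mixture import GaussianMixture

def cluster_patch_pairs(noisy, clean, patch=8, k=4, n=5000, seed=0):
    # Identical integer seeds make both calls sample the same patch locations.
    p_noisy = extract_patches_2d(noisy, (patch, patch), max_patches=n, random_state=seed)
    p_clean = extract_patches_2d(clean, (patch, patch), max_patches=n, random_state=seed)
    labels = GaussianMixture(n_components=k, random_state=0).fit_predict(
        p_noisy.reshape(len(p_noisy), -1))
    return [(p_noisy[labels == c], p_clean[labels == c]) for c in range(k)]
```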
Assessment of past-climate simulations of regional climate models (RCMs) is important for understanding their reliability when projecting future regional climate. Here, we assess the performance of, and discuss possible causes of biases in, a WRF-based RCM with a grid spacing of 50 km (WRFG) from the North American Regional Climate Change Assessment Program (NARCCAP) in simulating wet-season precipitation over the Central United States for a period with available observational data. The RCM reproduces key features of the precipitation distribution during late spring to early summer, although it tends to underestimate the magnitude of precipitation. This dry bias is partly due to the model's lack of skill in simulating nocturnal precipitation, related to the absence of eastward-propagating convective systems in the simulation. Inaccuracy in reproducing the large-scale circulation and environmental conditions is another contributing factor. The simulated pressure gradient between the Rocky Mountains and the Gulf of Mexico is too weak, producing weaker southerly winds in between and reducing the transport of warm moist air from the Gulf to the Central Great Plains. The simulated low-level horizontal convergence fields are also less favorable for upward motion than in the NARR, and hence for the development of moist convection. A careful examination of an RCM's deficiencies and identification of the sources of its errors are therefore important when using the RCM to project precipitation changes under future climate scenarios.
Global images of auroras obtained by cameras on spacecraft are a key tool for studying the near-Earth environment. However, the cameras are sensitive not only to auroral emissions produced by precipitating particles, but also to dayglow emissions produced by photoelectrons induced by sunlight; nightglow emissions and scattered sunlight further contribute to the background signal. To fully utilize such images in space science, this background contamination must be removed to isolate the auroral signal. Here we outline a data-driven approach that models the background intensity in multiple images by formulating linear inverse problems based on B-splines and spherical harmonics. The approach is robust, flexible, and iteratively deselects outliers, such as auroral emissions. The final model is smooth across the terminator and accounts for slow temporal variations and large-scale asymmetries in the dayglow. We demonstrate the model using the three far-ultraviolet cameras on the Imager for Magnetopause-to-Aurora Global Exploration (IMAGE) mission. The method can be applied to historical missions and is relevant for upcoming missions, such as the Solar wind Magnetosphere Ionosphere Link Explorer (SMILE) mission.
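The outlier deselection can be pictured as an iteratively re-fitted least-squares problem. A minimal sketch, assuming a precomputed design matrix of B-spline/spherical-harmonic basis functions and a one-sided threshold (aurora only adds counts above the background):

```python
# Minimal sketch: fit the linear background model, drop observations far
# above the fit (likely aurora), and refit until the active set stabilizes.
import numpy as np

def robust_background_fit(G: np.ndarray, d: np.ndarray, k: float = 3.0, iters: int = 10):
    """G: (n_obs, n_basis) design matrix; d: (n_obs,) observed intensities."""
    keep = np.ones(d.size, dtype=bool)
    for _ in range(iters):
        m, *_ = np.linalg.lstsq(G[keep], d[keep], rcond=None)
        residual = d - G @ m
        new_keep = residual < k * residual[keep].std()  # one-sided: deselect bright outliers
        if np.array_equal(new_keep, keep):
            break
        keep = new_keep
    return m, keep
```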
Limited by the dynamic range of the detector, saturation artifacts usually occur in optical coherence tomography (OCT) imaging of highly scattering media. Existing methods struggle to remove saturation artifacts and restore texture completely in OCT images. We propose a deep learning-based inpainting method for saturation artifacts. The generation mechanism of saturation artifacts was analyzed, and experimental and simulated datasets were built based on this mechanism. Enhanced super-resolution generative adversarial networks were trained on clear-saturated phantom image pairs. The high-quality reconstructions of experimental zebrafish and thyroid OCT images demonstrate the method's feasibility, strong generalization, and robustness.
Introduction: Ultrafast developments in artificial intelligence (AI) have recently multiplied concerns regarding the future of robotic autonomy in surgery; however, the literature on the topic is still scarce. Aim: To test a novel, commercially available AI tool for image analysis on a series of laparoscopic scenes. Methods: The research tools included OpenAI ChatGPT 4.0 with its image recognition plugin, which was fed a list of 100 selected laparoscopic snapshots from common surgical procedures. To score the reliability of the responses received from the image-recognition bot, two corresponding 0 - 5 scales were developed. The set of images was divided into two groups, unlabeled (Group A) and labeled (Group B), and further according to the type of surgical procedure and image resolution. Results: The AI correctly recognized the context of surgery-related images in 97% of its reports. For the labeled surgical pictures, the image-processing bot scored 3.95/5 (79%), whilst for the unlabeled it scored 2.905/5 (58.1%). After every successful interpretation, the phases of the procedure were described in detail. With scores of 4 - 5/5, the chatbot was able to discuss in detail the indications, contraindications, stages, instrumentation, complications, and outcome rates of the operation in question. Conclusion: Interaction between surgeon and chatbot appears to be an interesting front end for further research by clinicians, in parallel with the evolution of its complex underlying infrastructure. In this early phase of using artificial intelligence for image recognition in surgery, no safe conclusions can be drawn from small cohorts using commercially available software. Further development of medically oriented AI software and growing clinical awareness are expected to bring fruitful information on the topic in the years to come.
Transformer-based models have facilitated significant advances in object detection. However, their extensive computational consumption and suboptimal detection of dense small objects curtail their applicability in unmanned aerial vehicle (UAV) imagery. Addressing these limitations, we propose a hybrid transformer-based detector, H-DETR, enhanced for dense small objects, yielding an accurate and efficient model. First, we introduce a hybrid transformer encoder that integrates a convolutional neural network-based cross-scale fusion module with the original encoder to handle multi-scale feature sequences more efficiently. Furthermore, we propose two novel strategies that enhance detection performance without incurring additional inference computation. The query filter copes with the dense clustering inherent in drone-captured images by counteracting similar queries with a training-aware non-maximum suppression. Adversarial denoising learning, a novel enhancement method inspired by adversarial learning, improves the detection of numerous small targets by counteracting the effects of artificial spatial and semantic noise. Extensive experiments on the VisDrone and UAVDT datasets substantiate the effectiveness of our approach, achieving a significant improvement in accuracy with a reduction in computational complexity. Our method achieves 31.9% and 21.1% AP on the VisDrone and UAVDT datasets, respectively, with a faster inference speed, making it a competitive model for UAV image object detection.
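The query filter can be read as score-aware suppression of near-duplicate queries. A hedged one-function sketch using torchvision's NMS (the IoU threshold and box format are assumptions, and this is not the authors' exact training-aware formulation):

```python
# Hypothetical sketch: keep only the highest-scoring of overlapping queries
# so duplicates do not compete for the same dense small objects.
import torch
from torchvision.ops import nms

def query_filter(boxes: torch.Tensor, scores: torch.Tensor, iou_thr: float = 0.7):
    """boxes: (Q, 4) xyxy predictions from decoder queries; scores: (Q,)."""
    return nms(boxes, scores, iou_thr)  # indices of queries retained
```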
Artificial Intelligence (AI) is increasingly used for diagnosing Vision-Threatening Diabetic Retinopathy (VTDR), a leading cause of visual impairment and blindness worldwide. However, previous automated VTDR detection methods have mainly relied on manual feature extraction and classification, leading to errors. This paper proposes a novel VTDR detection and classification model that combines different models through majority voting. Our methodology involves preprocessing, data augmentation, feature extraction, and classification stages. We use a hybrid convolutional neural network-singular value decomposition (CNN-SVD) model for feature extraction and selection, and an improved SVM-RBF with a Decision Tree (DT) and K-Nearest Neighbor (KNN) for classification. We tested our model on the IDRiD dataset and achieved 98.06% accuracy, 83.67% sensitivity, and 100% specificity on the DR detection and evaluation tests. Our approach outperforms baseline techniques and provides a more robust and accurate method for VTDR detection.
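A minimal majority-voting skeleton matching the ensemble described above; the three classifiers stand in for the paper's improved SVM-RBF, DT, and KNN, and the hyperparameters are placeholders.

```python
# Hard majority voting over three classifiers (hyperparameters are assumptions).
from sklearn.ensemble import VotingClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

ensemble = VotingClassifier(
    estimators=[("svm", SVC(kernel="rbf")),
                ("dt", DecisionTreeClassifier()),
                ("knn", KNeighborsClassifier(n_neighbors=5))],
    voting="hard",  # each model votes; the majority label wins
)
# Usage: ensemble.fit(X_train, y_train); y_pred = ensemble.predict(X_test)
```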
The pancreas is neither part of the five Zang organs (五脏) nor the six Fu organs (六腑). Thus, it has received little attention in Chinese medical literature. In the late 19th century, medical missionaries in China started translating and introducing anatomical and physiological knowledge about the pancreas. As for the word pancreas, an early and influential translation was "sweet meat" (甜肉), proposed by Benjamin Hobson (合信). The translation "sweet meat" is not faithful to the original meaning of "pancreas", but is a term coined by Hobson based on his personal habits, and the word "sweet" appeared by chance. However, in the decades since the term "sweet meat" became popular, Chinese medicine practitioners, such as Tang Zonghai (唐宗海), reinterpreted it by drawing new medical illustrations for "sweet meat" and giving new connotations to the word "sweet". This discussion and interpretation of "sweet meat" in modern China, particularly among Chinese medicine professionals, is not only a dissemination and interpretation of knowledge about the pancreas, but also a construction of knowledge around the term "sweet meat".
Sparse large-scale multi-objective optimization problems (SLMOPs) are common in science and engineering. The large scale of these problems means a high-dimensional decision space, requiring algorithms to traverse a vast expanse with limited computational resources. Furthermore, because the problems are sparse, most variables in Pareto optimal solutions are zero, making it difficult for algorithms to identify non-zero variables efficiently. This paper addresses the challenges posed by SLMOPs. To start, we introduce innovative objective functions customized to mine maximum and minimum candidate sets, substantially improving the efficacy of frequent pattern mining. In this way, candidate sets are selected not by the quantity of non-zero variables they contain but by a higher proportion of non-zero variables within specific dimensions. Additionally, we present a novel approach to association rule mining that delves into the relationships between non-zero variables, helping to identify sparse distributions that can expedite reductions in the objective function value. We extensively tested our algorithm across eight benchmark problems and four real-world SLMOPs. The results demonstrate that our approach achieves competitive solutions across various challenges.
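The association-rule step can be illustrated with a simple pairwise confidence estimate over good solutions' masks; this toy computation conveys the idea only and is not the paper's mining algorithm.

```python
# Toy sketch: conf[i, j] ~ P(x_j != 0 | x_i != 0) across a set of good masks,
# hinting at which decision variables tend to be non-zero together.
import numpy as np

def pairwise_confidence(masks: np.ndarray) -> np.ndarray:
    """masks: (pop, dim) binary array of non-zero indicators."""
    co = (masks.T @ masks).astype(float)      # co-occurrence counts
    support = np.maximum(np.diag(co), 1.0)    # occurrences of each single variable
    return co / support[:, None]              # row i: confidence of rules i -> j
```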
Road traffic monitoring is an imperative topic widely discussed among researchers. Systems used to monitor traffic frequently rely on cameras mounted on bridges or roadsides. Aerial images, however, provide the flexibility of mobile platforms for detecting vehicle location and motion over a larger area. To this end, different models have shown the ability to recognize and track vehicles, but these methods are not yet mature enough to produce accurate results in complex road scenes. This paper therefore presents an algorithm that combines state-of-the-art techniques for identifying and tracking vehicles from image bursts. The extracted frames were converted to grayscale, followed by a georeferencing algorithm that embeds coordinate information into the images. A masking technique eliminated irrelevant data and reduced the computational cost of the overall monitoring system. Next, Sobel edge detection combined with Canny edge detection and the Hough line transform was applied for noise reduction. After preprocessing, a blob detection algorithm detected the vehicles, with a dynamic thresholding scheme accommodating vehicles of varying sizes. Detection was performed on the first image of every burst. To track vehicles, a template of each detected vehicle was matched in the succeeding images using the template matching algorithm; to further improve tracking accuracy with motion information, Scale Invariant Feature Transform (SIFT) features were used to select the best among multiple matches. The method achieved 87% detection accuracy and 80% tracking accuracy on the A1 Motorway Netherlands dataset, and 86% detection accuracy and 78% tracking accuracy on the Vehicle Aerial Imaging from Drone (VAID) dataset.
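A condensed sketch of the detection/tracking chain using real OpenCV primitives; the parameter values and the blob-to-template handoff are illustrative assumptions, not the authors' tuned pipeline.

```python
# Sketch: edges/lines for noise suppression, blob detection for vehicles,
# then template matching of each detection into the next frame of the burst.
import cv2
import numpy as np

def detect_and_track(frame, next_frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    nxt = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    _ = cv2.HoughLinesP(edges, 1, np.pi / 180, 100)         # road lines (masking omitted here)
    tracks = []
    for kp in cv2.SimpleBlobDetector_create().detect(gray):  # candidate vehicles
        x, y, s = int(kp.pt[0]), int(kp.pt[1]), max(int(kp.size), 8)
        tmpl = gray[max(y - s, 0):y + s, max(x - s, 0):x + s]
        if tmpl.size == 0:
            continue
        score_map = cv2.matchTemplate(nxt, tmpl, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(score_map)           # best match in next frame
        tracks.append(((x, y), loc, score))
    return tracks
```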
A bedding slope is a typical heterogeneous slope consisting of different soil/rock layers and is likely to slide along the weakest interface. Conventional protection methods for bedding slopes, such as retaining walls, stabilizing piles, and anchors, are time-consuming and labor- and energy-intensive. This study proposes an innovative polymer grout method to improve the bearing capacity and reduce the displacement of bedding slopes. A series of large-scale model tests were carried out to verify the effectiveness of polymer grout in protecting bedding slopes. Specifically, load-displacement relationships and failure patterns were analyzed for testing slopes with various dosages of polymer. The results show the great potential of polymer grout in improving bearing capacity, reducing settlement, and protecting slopes from being crushed under shearing. The polymer-treated slopes remained structurally intact, while the untreated slope exhibited considerable damage when subjected to loads surpassing its bearing capacity. It is also found that polymer-cemented soils concentrate around the injection pipe, forming a fan-shaped, sheet-like structure. This study demonstrates the improvement polymer grouting offers for bedding slope treatment and will contribute to the development of a fast method to protect bedding slopes from landslides.
This paper presents an accurate method for identifying spine fractures in X-ray pictures while emphasizing a faster digital processing time. The study focuses on efficiency by utilizing several methods, including picture segmentation, feature reduction, and image classification. Two elements are investigated to reduce the classification time: feature reduction software and the capabilities of sophisticated digital processing hardware. Different algorithms are used for picture enhancement, including the Wiener and Kalman filters, and two background correction techniques are examined. The article presents a technique for extracting textural features and evaluates three picture segmentation algorithms and three fractured-spine detection algorithms, using the transform domain, Power Density Spectrum (PDS), and Higher-Order Statistics (HOS) for feature extraction. With its emphasis on reducing digital processing time, this all-encompassing method helps create a simplified system for classifying spine fractures. A feature reduction program was built to improve the processing speed of picture classification. Overall, the proposed approach shows great potential for significantly reducing classification time in clinical settings where time is critical. Compared with other transform domains, the discrete cosine transform (DCT) of the texture features yielded an exceptional classification rate, and extracting features from the transform domain took less time. More capable hardware can also yield quicker execution times for the feature extraction algorithms.
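For the transform-domain step, a short sketch of DCT texture-feature extraction (the block size and the number of retained coefficients are assumptions):

```python
# Sketch: keep the low-frequency corner of a block's 2-D DCT as its texture
# feature vector; low-order coefficients carry most of the texture energy.
import numpy as np
from scipy.fft import dctn

def dct_features(block: np.ndarray, keep: int = 8) -> np.ndarray:
    coeffs = dctn(block.astype(float), norm="ortho")  # 2-D type-II DCT
    return coeffs[:keep, :keep].ravel()               # keep*keep low-frequency features
```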
This article introduces the concept of load aggregation, which involves a comprehensive analysis of loads to acquire their external characteristics for modeling and analyzing power systems. Online identification is a computer-based approach to data collection, processing, and system identification, commonly used for adaptive control and prediction. This paper proposes a method for dynamically aggregating large-scale adjustable loads to support high proportions of new-energy integration, studying the aggregation characteristics of regional large-scale adjustable loads with online identification techniques and feature extraction methods. The experiment selected 300 central air conditioners as the research subject and analyzed their regulation characteristics, economic efficiency, and comfort. The results show that as the adjustment time of the air conditioners increases from 5 minutes to 35 minutes, the stable adjustment quantity during the adjustment period decreases from 28.46 to 3.57, indicating that air-conditioning loads can be controlled over a long period and have better adjustment effects in the short term. Overall, the experimental results demonstrate that analyzing the aggregation characteristics of regional large-scale adjustable loads using online identification techniques and feature extraction algorithms is effective.
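Online identification commonly takes the form of a recursive estimator updated with each new measurement. A generic recursive least-squares sketch under that assumption (the regressor layout and forgetting factor are placeholders, not the paper's model):

```python
# Generic recursive least squares with exponential forgetting: one possible
# online-identification building block for tracking aggregate load behaviour.
import numpy as np

class RLS:
    def __init__(self, n_params: int, lam: float = 0.98):
        self.theta = np.zeros(n_params)   # current parameter estimate
        self.P = 1e3 * np.eye(n_params)   # inverse information matrix
        self.lam = lam                    # forgetting factor (< 1 tracks drift)

    def update(self, phi: np.ndarray, y: float) -> np.ndarray:
        """phi: regressor vector for this sample; y: measured output."""
        Pphi = self.P @ phi
        gain = Pphi / (self.lam + phi @ Pphi)
        self.theta = self.theta + gain * (y - phi @ self.theta)
        self.P = (self.P - np.outer(gain, Pphi)) / self.lam
        return self.theta
```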
Accurate positioning is one of the essential requirements for numerous applications of remote sensing data, especially when the satellite signal is noisy or unreliable. Toward this end, we present a novel framework for large-area aircraft geo-localization that requires only a downward-facing monocular camera, an altimeter, a compass, and an open-source Vector Map (VMAP). The algorithm combines matching and particle filter methods. A shape vector and the correlation between two building contour vectors are defined, and a coarse-to-fine building vector matching (CFBVM) method is proposed for the matching stage, in which the original matching results are described by a Gaussian mixture model (GMM). Subsequently, an improved resampling strategy is designed to reduce the computational expense of a huge number of initial particles, and a credibility indicator is designed to avoid location mistakes in the particle filter stage. An experimental evaluation of the approach based on flight data is provided. On a flight at a height of 0.2 km over a flight distance of 2 km, the aircraft is geo-localized within a reference map of 11,025 km² using 0.09 km² aerial images without any prior information. The absolute localization error is less than 10 m.
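One standard realization of a low-variance resampling step, of the kind an "improved resampling strategy" builds on (this is textbook systematic resampling, not the paper's exact variant):

```python
# Systematic (low-variance) resampling: cheap for huge particle sets because
# it draws a single random offset instead of n independent samples.
import numpy as np

def systematic_resample(weights: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """weights: (n,) normalized particle weights. Returns resampled indices."""
    n = weights.size
    positions = (rng.uniform() + np.arange(n)) / n
    return np.searchsorted(np.cumsum(weights), positions)

# Usage: idx = systematic_resample(w, np.random.default_rng(0)); particles = particles[idx]
```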
Transmitting photos via the Internet has become a routine and significant activity, and safeguarding these images from counterfeiting and modification remains a domain with room for improvement. This study presents a system that employs a range of approaches and algorithms to secure transmitted venous images. The main goal of this work is to create an effective system for compressing individual biometrics in order to improve the overall accuracy and security of digital photographs through image compression. The paper introduces a content-based image authentication mechanism that is suitable for use across an untrusted network and resistant to data loss during transmission. By employing scale attributes and a key-dependent parametric Long Short-Term Memory (LSTM), it is feasible to improve the resilience of digital signatures against image deterioration and strengthen their security against malicious actions. Furthermore, the transmission of biometric data in compressed form over a wireless network has been successfully implemented. For applications involving the transmission and sharing of images across a network, the suggested technique utilizes the scalability of a structural digital signature to strike a satisfactory balance between security and picture transfer. An effective adaptive compression strategy was created to lengthen the overall lifetime of the network by sharing the processing responsibilities. This scheme ensures a large reduction in computational and energy requirements while minimizing the loss of image quality, and it employs multi-scale characteristics to improve the resistance of signatures against image deterioration. The proposed system attained a Gaussian-noise score of 98% and a rotation accuracy surpassing 99%.
Deep learning can greatly advance super-resolution imaging technology in terms of imaging and reconstruction speed, imaging resolution, and imaging flux. This paper proposes a deep neural network based on a generative adversarial network (GAN). The generator employs a U-Net-based network that integrates DenseNet in the downsampling component. The proposed method has several attractive properties: the network model is trained on several different datasets of biological structures; the trained model can improve the imaging resolution of different microscopy modalities, such as confocal and wide-field imaging; and the model generalizes to improve the resolution of biological structures even outside the training datasets. Experimental results showed that the method improved the resolution of caveolin-coated pits (CCPs) structures from 264 nm to 138 nm, a 1.91-fold increase, and nearly doubled the resolution of DNA molecules imaged while being transported through microfluidic channels.
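A toy dense block of the kind the generator's downsampling path might use; the layer widths are assumptions, and only the dense-connectivity pattern is the point.

```python
# Minimal DenseNet-style block (PyTorch): every layer sees the concatenation
# of all earlier feature maps, which preserves fine detail for the U-Net skips.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_ch: int, growth: int = 16, n_layers: int = 3):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(in_ch + i * growth, growth, kernel_size=3, padding=1)
            for i in range(n_layers)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [x]
        for conv in self.convs:
            feats.append(torch.relu(conv(torch.cat(feats, dim=1))))
        return torch.cat(feats, dim=1)  # in_ch + n_layers * growth channels out
```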
Leukemia is a form of cancer of the blood or bone marrow in which white blood cells (WBCs) proliferate. It primarily affects children and more rarely adults. Treatment depends on the type of leukemia and the extent to which the cancer has spread throughout the body. Identifying leukemia at an initial stage is vital to providing timely patient care. Medical image-analysis approaches offer safer, quicker, and less costly solutions while avoiding the difficulties of invasive procedures. Computer vision (CV) and image-processing techniques can be generalized readily and reduce human error. Many researchers have implemented computer-aided diagnostic methods and machine learning (ML) for laboratory image analysis, aiming to overcome the limitations of late leukemia detection and to determine its subgroups. This study establishes a Marine Predators Algorithm with Deep Learning Leukemia Cancer Classification (MPADL-LCC) algorithm on medical images. The proposed MPADL-LCC system uses a bilateral filtering (BF) technique to pre-process medical images and Faster SqueezeNet with the Marine Predators Algorithm (MPA) as a hyperparameter optimizer for feature extraction. Lastly, a denoising autoencoder (DAE) is used to accurately detect and classify leukemia cancer. The hyperparameter tuning process using MPA helps enhance leukemia cancer classification performance. Simulation results compared with other recent approaches across various measurements show that the MPADL-LCC algorithm achieves the best results.
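The pre-processing stage maps to a single OpenCV call; a hedged sketch with assumed parameter values:

```python
# Edge-preserving smoothing of a blood-smear image before feature extraction;
# d and the two sigmas are illustrative values, not the paper's settings.
import cv2

def preprocess(img_bgr):
    return cv2.bilateralFilter(img_bgr, d=9, sigmaColor=75, sigmaSpace=75)
```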
文摘Lower back pain is one of the most common medical problems in the world and it is experienced by a huge percentage of people everywhere.Due to its ability to produce a detailed view of the soft tissues,including the spinal cord,nerves,intervertebral discs,and vertebrae,Magnetic Resonance Imaging is thought to be the most effective method for imaging the spine.The semantic segmentation of vertebrae plays a major role in the diagnostic process of lumbar diseases.It is difficult to semantically partition the vertebrae in Magnetic Resonance Images from the surrounding variety of tissues,including muscles,ligaments,and intervertebral discs.U-Net is a powerful deep-learning architecture to handle the challenges of medical image analysis tasks and achieves high segmentation accuracy.This work proposes a modified U-Net architecture namely MU-Net,consisting of the Meijering convolutional layer that incorporates the Meijering filter to perform the semantic segmentation of lumbar vertebrae L1 to L5 and sacral vertebra S1.Pseudo-colour mask images were generated and used as ground truth for training the model.The work has been carried out on 1312 images expanded from T1-weighted mid-sagittal MRI images of 515 patients in the Lumbar Spine MRI Dataset publicly available from Mendeley Data.The proposed MU-Net model for the semantic segmentation of the lumbar vertebrae gives better performance with 98.79%of pixel accuracy(PA),98.66%of dice similarity coefficient(DSC),97.36%of Jaccard coefficient,and 92.55%mean Intersection over Union(mean IoU)metrics using the mentioned dataset.
基金supported by the Opening Foundation of the Agile and Intelligence Computing Key Laboratory of Sichuan Province under Grant No.H23004the Chengdu Municipal Science and Technology Bureau Technological Innovation R&D Project(Key Project)under Grant No.2024-YF08-00106-GX.
文摘Aiming to solve the bottleneck problem of electromagnetic scattering simulation in the scenes of extremely large-scale seas and ships,a high-frequency method by using graphics processing unit(GPU)parallel acceleration technique is proposed.For the implementation of different electromagnetic methods of physical optics(PO),shooting and bouncing ray(SBR),and physical theory of diffraction(PTD),a parallel computing scheme based on the CPU-GPU parallel computing scheme is realized to balance computing tasks.Finally,a multi-GPU framework is further proposed to solve the computational difficulty caused by the massive number of ray tubes in the ray tracing process.By using the established simulation platform,signals of ships at different seas are simulated and their images are achieved as well.It is shown that the higher sea states degrade the averaged peak signal-to-noise ratio(PSNR)of radar image.
基金supported by the Scientific Research Project of Xiang Jiang Lab(22XJ02003)the University Fundamental Research Fund(23-ZZCX-JDZ-28)+5 种基金the National Science Fund for Outstanding Young Scholars(62122093)the National Natural Science Foundation of China(72071205)the Hunan Graduate Research Innovation Project(ZC23112101-10)the Hunan Natural Science Foundation Regional Joint Project(2023JJ50490)the Science and Technology Project for Young and Middle-aged Talents of Hunan(2023TJ-Z03)the Science and Technology Innovation Program of Humnan Province(2023RC1002)。
文摘Traditional large-scale multi-objective optimization algorithms(LSMOEAs)encounter difficulties when dealing with sparse large-scale multi-objective optimization problems(SLM-OPs)where most decision variables are zero.As a result,many algorithms use a two-layer encoding approach to optimize binary variable Mask and real variable Dec separately.Nevertheless,existing optimizers often focus on locating non-zero variable posi-tions to optimize the binary variables Mask.However,approxi-mating the sparse distribution of real Pareto optimal solutions does not necessarily mean that the objective function is optimized.In data mining,it is common to mine frequent itemsets appear-ing together in a dataset to reveal the correlation between data.Inspired by this,we propose a novel two-layer encoding learning swarm optimizer based on frequent itemsets(TELSO)to address these SLMOPs.TELSO mined the frequent terms of multiple particles with better target values to find mask combinations that can obtain better objective values for fast convergence.Experi-mental results on five real-world problems and eight benchmark sets demonstrate that TELSO outperforms existing state-of-the-art sparse large-scale multi-objective evolutionary algorithms(SLMOEAs)in terms of performance and convergence speed.
基金supported by the National Natural Science Foundation of China(Grant Nos.42322408,42188101,41974211,and 42074202)the Key Research Program of Frontier Sciences,Chinese Academy of Sciences(Grant No.QYZDJ-SSW-JSC028)+1 种基金the Strategic Priority Program on Space Science,Chinese Academy of Sciences(Grant Nos.XDA15052500,XDA15350201,and XDA15014800)supported by the Youth Innovation Promotion Association of the Chinese Academy of Sciences(Grant No.Y202045)。
文摘Astronomical imaging technologies are basic tools for the exploration of the universe,providing basic data for the research of astronomy and space physics.The Soft X-ray Imager(SXI)carried by the Solar wind Magnetosphere Ionosphere Link Explorer(SMILE)aims to capture two-dimensional(2-D)images of the Earth’s magnetosheath by using soft X-ray imaging.However,the observed 2-D images are affected by many noise factors,destroying the contained information,which is not conducive to the subsequent reconstruction of the three-dimensional(3-D)structure of the magnetopause.The analysis of SXI-simulated observation images shows that such damage cannot be evaluated with traditional restoration models.This makes it difficult to establish the mapping relationship between SXIsimulated observation images and target images by using mathematical models.We propose an image restoration algorithm for SXIsimulated observation images that can recover large-scale structure information on the magnetosphere.The idea is to train a patch estimator by selecting noise–clean patch pairs with the same distribution through the Classification–Expectation Maximization algorithm to achieve the restoration estimation of the SXI-simulated observation image,whose mapping relationship with the target image is established by the patch estimator.The Classification–Expectation Maximization algorithm is used to select multiple patch clusters with the same distribution and then train different patch estimators so as to improve the accuracy of the estimator.Experimental results showed that our image restoration algorithm is superior to other classical image restoration algorithms in the SXI-simulated observation image restoration task,according to the peak signal-to-noise ratio and structural similarity.The restoration results of SXI-simulated observation images are used in the tangent fitting approach and the computed tomography approach toward magnetospheric reconstruction techniques,significantly improving the reconstruction results.Hence,the proposed technology may be feasible for processing SXI-simulated observation images.
文摘Assessment of past-climate simulations of regional climate models(RCMs)is important for understanding the reliability of RCMs when used to project future regional climate.Here,we assess the performance and discuss possible causes of biases in a WRF-based RCM with a grid spacing of 50 km,named WRFG,from the North American Regional Climate Change Assessment Program(NARCCAP)in simulating wet season precipitation over the Central United States for a period when observational data are available.The RCM reproduces key features of the precipitation distribution characteristics during late spring to early summer,although it tends to underestimate the magnitude of precipitation.This dry bias is partially due to the model’s lack of skill in simulating nocturnal precipitation related to the lack of eastward propagating convective systems in the simulation.Inaccuracy in reproducing large-scale circulation and environmental conditions is another contributing factor.The too weak simulated pressure gradient between the Rocky Mountains and the Gulf of Mexico results in weaker southerly winds in between,leading to a reduction of warm moist air transport from the Gulf to the Central Great Plains.The simulated low-level horizontal convergence fields are less favorable for upward motion than in the NARR and hence,for the development of moist convection as well.Therefore,a careful examination of an RCM’s deficiencies and the identification of the source of errors are important when using the RCM to project precipitation changes in future climate scenarios.
基金supported by the Research Council of Norway under contracts 223252/F50 and 300844/F50the Trond Mohn Foundation。
文摘Global images of auroras obtained by cameras on spacecraft are a key tool for studying the near-Earth environment.However,the cameras are sensitive not only to auroral emissions produced by precipitating particles,but also to dayglow emissions produced by photoelectrons induced by sunlight.Nightglow emissions and scattered sunlight can contribute to the background signal.To fully utilize such images in space science,background contamination must be removed to isolate the auroral signal.Here we outline a data-driven approach to modeling the background intensity in multiple images by formulating linear inverse problems based on B-splines and spherical harmonics.The approach is robust,flexible,and iteratively deselects outliers,such as auroral emissions.The final model is smooth across the terminator and accounts for slow temporal variations and large-scale asymmetries in the dayglow.We demonstrate the model by using the three far ultraviolet cameras on the Imager for Magnetopause-to-Aurora Global Exploration(IMAGE)mission.The method can be applied to historical missions and is relevant for upcoming missions,such as the Solar wind Magnetosphere Ionosphere Link Explorer(SMILE)mission.
基金supported by the National Natural Science Foundation of China(62375144 and 61875092)Tianjin Foundation of Natural Science(21JCYBJC00260)Beijing-Tianjin-Hebei Basic Research Cooperation Special Program(19JCZDJC65300).
文摘Limited by the dynamic range of the detector,saturation artifacts usually occur in optical coherence tomography(OCT)imaging for high scattering media.The available methods are difficult to remove saturation artifacts and restore texture completely in OCT images.We proposed a deep learning-based inpainting method of saturation artifacts in this paper.The generation mechanism of saturation artifacts was analyzed,and experimental and simulated datasets were built based on the mechanism.Enhanced super-resolution generative adversarial networks were trained by the clear–saturated phantom image pairs.The perfect reconstructed results of experimental zebrafish and thyroid OCT images proved its feasibility,strong generalization,and robustness.
文摘Introduction: Ultrafast latest developments in artificial intelligence (ΑΙ) have recently multiplied concerns regarding the future of robotic autonomy in surgery. However, the literature on the topic is still scarce. Aim: To test a novel AI commercially available tool for image analysis on a series of laparoscopic scenes. Methods: The research tools included OPENAI CHATGPT 4.0 with its corresponding image recognition plugin which was fed with a list of 100 laparoscopic selected snapshots from common surgical procedures. In order to score reliability of received responses from image-recognition bot, two corresponding scales were developed ranging from 0 - 5. The set of images was divided into two groups: unlabeled (Group A) and labeled (Group B), and according to the type of surgical procedure or image resolution. Results: AI was able to recognize correctly the context of surgical-related images in 97% of its reports. For the labeled surgical pictures, the image-processing bot scored 3.95/5 (79%), whilst for the unlabeled, it scored 2.905/5 (58.1%). Phases of the procedure were commented in detail, after all successful interpretations. With rates 4 - 5/5, the chatbot was able to talk in detail about the indications, contraindications, stages, instrumentation, complications and outcome rates of the operation discussed. Conclusion: Interaction between surgeon and chatbot appears to be an interesting frontend for further research by clinicians in parallel with evolution of its complex underlying infrastructure. In this early phase of using artificial intelligence for image recognition in surgery, no safe conclusions can be drawn by small cohorts with commercially available software. Further development of medically-oriented AI software and clinical world awareness are expected to bring fruitful information on the topic in the years to come.
基金This research was funded by the Natural Science Foundation of Hebei Province(F2021506004).
文摘Transformer-based models have facilitated significant advances in object detection.However,their extensive computational consumption and suboptimal detection of dense small objects curtail their applicability in unmanned aerial vehicle(UAV)imagery.Addressing these limitations,we propose a hybrid transformer-based detector,H-DETR,and enhance it for dense small objects,leading to an accurate and efficient model.Firstly,we introduce a hybrid transformer encoder,which integrates a convolutional neural network-based cross-scale fusion module with the original encoder to handle multi-scale feature sequences more efficiently.Furthermore,we propose two novel strategies to enhance detection performance without incurring additional inference computation.Query filter is designed to cope with the dense clustering inherent in drone-captured images by counteracting similar queries with a training-aware non-maximum suppression.Adversarial denoising learning is a novel enhancement method inspired by adversarial learning,which improves the detection of numerous small targets by counteracting the effects of artificial spatial and semantic noise.Extensive experiments on the VisDrone and UAVDT datasets substantiate the effectiveness of our approach,achieving a significant improvement in accuracy with a reduction in computational complexity.Our method achieves 31.9%and 21.1%AP on the VisDrone and UAVDT datasets,respectively,and has a faster inference speed,making it a competitive model in UAV image object detection.
基金This research was funded by the National Natural Science Foundation of China(Nos.71762010,62262019,62162025,61966013,12162012)the Hainan Provincial Natural Science Foundation of China(Nos.823RC488,623RC481,620RC603,621QN241,620RC602,121RC536)+1 种基金the Haikou Science and Technology Plan Project of China(No.2022-016)the Project supported by the Education Department of Hainan Province,No.Hnky2021-23.
文摘Artificial Intelligence(AI)is being increasingly used for diagnosing Vision-Threatening Diabetic Retinopathy(VTDR),which is a leading cause of visual impairment and blindness worldwide.However,previous automated VTDR detection methods have mainly relied on manual feature extraction and classification,leading to errors.This paper proposes a novel VTDR detection and classification model that combines different models through majority voting.Our proposed methodology involves preprocessing,data augmentation,feature extraction,and classification stages.We use a hybrid convolutional neural network-singular value decomposition(CNN-SVD)model for feature extraction and selection and an improved SVM-RBF with a Decision Tree(DT)and K-Nearest Neighbor(KNN)for classification.We tested our model on the IDRiD dataset and achieved an accuracy of 98.06%,a sensitivity of 83.67%,and a specificity of 100%for DR detection and evaluation tests,respectively.Our proposed approach outperforms baseline techniques and provides a more robust and accurate method for VTDR detection.
基金financed by the grant from the Youth Fund for Humanities and Social Sciences Research of the Ministry of Education (No. 19YJCZH040)。
文摘The pancreas is neither part of the five Zang organs(五脏) nor the six Fu organs(六腑).Thus,it has received little attention in Chinese medical literature.In the late 19th century,medical missionaries in China started translating and introducing anatomical and physiological knowledge about the pancreas.As for the word pancreas,an early and influential translation was “sweet meat”(甜肉),proposed by Benjamin Hobson(合信).The translation “sweet meat” is not faithful to the original meaning of “pancreas”,but is a term coined by Hobson based on his personal habits,and the word “sweet” appeared by chance.However,in the decades since the term “sweet meat” became popular,Chinese medicine practitioners,such as Tang Zonghai(唐宗海),reinterpreted it by drawing new medical illustrations for “sweet meat” and giving new connotations to the word “sweet”.This discussion and interpretation of “sweet meat” in modern China,particularly among Chinese medicine professionals,is not only a dissemination and interpretation of the knowledge of “pancreas”,but also a construction of knowledge around the term “sweet meat”.
基金support by the Open Project of Xiangjiang Laboratory(22XJ02003)the University Fundamental Research Fund(23-ZZCX-JDZ-28,ZK21-07)+5 种基金the National Science Fund for Outstanding Young Scholars(62122093)the National Natural Science Foundation of China(72071205)the Hunan Graduate Research Innovation Project(CX20230074)the Hunan Natural Science Foundation Regional Joint Project(2023JJ50490)the Science and Technology Project for Young and Middle-aged Talents of Hunan(2023TJZ03)the Science and Technology Innovation Program of Humnan Province(2023RC1002).
文摘Sparse large-scale multi-objective optimization problems(SLMOPs)are common in science and engineering.However,the large-scale problem represents the high dimensionality of the decision space,requiring algorithms to traverse vast expanse with limited computational resources.Furthermore,in the context of sparse,most variables in Pareto optimal solutions are zero,making it difficult for algorithms to identify non-zero variables efficiently.This paper is dedicated to addressing the challenges posed by SLMOPs.To start,we introduce innovative objective functions customized to mine maximum and minimum candidate sets.This substantial enhancement dramatically improves the efficacy of frequent pattern mining.In this way,selecting candidate sets is no longer based on the quantity of nonzero variables they contain but on a higher proportion of nonzero variables within specific dimensions.Additionally,we unveil a novel approach to association rule mining,which delves into the intricate relationships between non-zero variables.This novel methodology aids in identifying sparse distributions that can potentially expedite reductions in the objective function value.We extensively tested our algorithm across eight benchmark problems and four real-world SLMOPs.The results demonstrate that our approach achieves competitive solutions across various challenges.
基金supported by a grant from the Basic Science Research Program through the National Research Foundation(NRF)(2021R1F1A1063634)funded by the Ministry of Science and ICT(MSIT),Republic of KoreaThe authors are thankful to the Deanship of Scientific Research at Najran University for funding this work under the Research Group Funding Program Grant Code(NU/RG/SERC/13/40)+2 种基金Also,the authors are thankful to Prince Satam bin Abdulaziz University for supporting this study via funding from Prince Satam bin Abdulaziz University project number(PSAU/2024/R/1445)This work was also supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number(PNURSP2023R54)Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.
文摘Road traffic monitoring is an imperative topic widely discussed among researchers.Systems used to monitor traffic frequently rely on cameras mounted on bridges or roadsides.However,aerial images provide the flexibility to use mobile platforms to detect the location and motion of the vehicle over a larger area.To this end,different models have shown the ability to recognize and track vehicles.However,these methods are not mature enough to produce accurate results in complex road scenes.Therefore,this paper presents an algorithm that combines state-of-the-art techniques for identifying and tracking vehicles in conjunction with image bursts.The extracted frames were converted to grayscale,followed by the application of a georeferencing algorithm to embed coordinate information into the images.The masking technique eliminated irrelevant data and reduced the computational cost of the overall monitoring system.Next,Sobel edge detection combined with Canny edge detection and Hough line transform has been applied for noise reduction.After preprocessing,the blob detection algorithm helped detect the vehicles.Vehicles of varying sizes have been detected by implementing a dynamic thresholding scheme.Detection was done on the first image of every burst.Then,to track vehicles,the model of each vehicle was made to find its matches in the succeeding images using the template matching algorithm.To further improve the tracking accuracy by incorporating motion information,Scale Invariant Feature Transform(SIFT)features have been used to find the best possible match among multiple matches.An accuracy rate of 87%for detection and 80%accuracy for tracking in the A1 Motorway Netherland dataset has been achieved.For the Vehicle Aerial Imaging from Drone(VAID)dataset,an accuracy rate of 86%for detection and 78%accuracy for tracking has been achieved.
基金supported by the Fujian Science Foundation for Outstanding Youth(Grant No.2023J06039)the National Natural Science Foundation of China(Grant No.41977259 and No.U2005205)Fujian Province natural resources science and technology innovation project(Grant No.KY-090000-04-2022-019)。
文摘Bedding slope is a typical heterogeneous slope consisting of different soil/rock layers and is likely to slide along the weakest interface.Conventional slope protection methods for bedding slopes,such as retaining walls,stabilizing piles,and anchors,are time-consuming and labor-and energy-intensive.This study proposes an innovative polymer grout method to improve the bearing capacity and reduce the displacement of bedding slopes.A series of large-scale model tests were carried out to verify the effectiveness of polymer grout in protecting bedding slopes.Specifically,load-displacement relationships and failure patterns were analyzed for different testing slopes with various dosages of polymer.Results show the great potential of polymer grout in improving bearing capacity,reducing settlement,and protecting slopes from being crushed under shearing.The polymer-treated slopes remained structurally intact,while the untreated slope exhibited considerable damage when subjected to loads surpassing the bearing capacity.It is also found that polymer-cemented soils concentrate around the injection pipe,forming a fan-shaped sheet-like structure.This study proves the improvement of polymer grouting for bedding slope treatment and will contribute to the development of a fast method to protect bedding slopes from landslides.
基金the appreciation to the Deanship of Postgraduate Studies and ScientificResearch atMajmaah University for funding this research work through the Project Number R-2024-922.
文摘This paper emphasizes a faster digital processing time while presenting an accurate method for identifying spinefractures in X-ray pictures. The study focuses on efficiency by utilizing many methods that include picturesegmentation, feature reduction, and image classification. Two important elements are investigated to reducethe classification time: Using feature reduction software and leveraging the capabilities of sophisticated digitalprocessing hardware. The researchers use different algorithms for picture enhancement, including theWiener andKalman filters, and they look into two background correction techniques. The article presents a technique forextracting textural features and evaluates three picture segmentation algorithms and three fractured spine detectionalgorithms using transformdomain, PowerDensity Spectrum(PDS), andHigher-Order Statistics (HOS) for featureextraction.With an emphasis on reducing digital processing time, this all-encompassing method helps to create asimplified system for classifying fractured spine fractures. A feature reduction program code has been built toimprove the processing speed for picture classification. Overall, the proposed approach shows great potential forsignificantly reducing classification time in clinical settings where time is critical. In comparison to other transformdomains, the texture features’ discrete cosine transform (DCT) yielded an exceptional classification rate, and theprocess of extracting features from the transform domain took less time. More capable hardware can also result inquicker execution times for the feature extraction algorithms.
Funding: Supported by the State Grid Science & Technology Project (5100-202114296A-0-0-00).
Abstract: This article introduces the concept of load aggregation, which involves a comprehensive analysis of loads to acquire their external characteristics for the purpose of modeling and analyzing power systems. Online identification is a computer-based approach to data collection, processing, and system identification, commonly used for adaptive control and prediction. This paper proposes a method for dynamically aggregating large-scale adjustable loads to support high proportions of new energy integration, aiming to study the aggregation characteristics of regional large-scale adjustable loads using online identification techniques and feature extraction methods. The experiment selected 300 central air conditioners as the research subject and analyzed their regulation characteristics, economic efficiency, and comfort. The experimental results show that as the adjustment time of the air conditioners increases from 5 minutes to 35 minutes, the stable adjustment quantity during the adjustment period decreases from 28.46 to 3.57, indicating that air-conditioning loads can be controlled over a long period and have better adjustment effects in the short term. Overall, the experimental results demonstrate that analyzing the aggregation characteristics of regional large-scale adjustable loads using online identification techniques and feature extraction algorithms is effective.
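The abstract does not name the specific online identification algorithm used. Recursive least squares (RLS) with a forgetting factor is one standard online identification method for tracking slowly varying characteristics, sketched below purely for orientation; the class and its parameters are illustrative assumptions, not the paper's method.

```python
# Generic online identification via recursive least squares (RLS),
# here fitting an aggregate load signal y_t from a feature vector x_t.
import numpy as np

class RecursiveLeastSquares:
    def __init__(self, n_params, forgetting=0.98):
        self.theta = np.zeros(n_params)      # parameter estimate
        self.P = np.eye(n_params) * 1e3      # inverse-covariance matrix
        self.lam = forgetting                # forgetting factor < 1

    def update(self, x, y):
        """Fold one new observation (x, y) into the running estimate."""
        x = np.asarray(x, dtype=float)
        Px = self.P @ x
        gain = Px / (self.lam + x @ Px)
        self.theta += gain * (y - x @ self.theta)
        self.P = (self.P - np.outer(gain, Px)) / self.lam
        return self.theta
```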
Abstract: Accurate positioning is one of the essential requirements for numerous applications of remote sensing data, especially in the event of a noisy or unreliable satellite signal. Toward this end, we present a novel framework for aircraft geo-localization over a large area that requires only a downward-facing monocular camera, an altimeter, a compass, and an open-source Vector Map (VMAP). The algorithm combines matching and particle filter methods. A shape vector and the correlation between two building contour vectors are defined, and a coarse-to-fine building vector matching (CFBVM) method is proposed for the matching stage, in which the original matching results are described by a Gaussian mixture model (GMM). Subsequently, an improved resampling strategy is designed to reduce the computational expense of a huge number of initial particles, and a credibility indicator is designed to avoid localization mistakes in the particle filter stage. An experimental evaluation of the approach based on flight data is provided. On a flight at a height of 0.2 km over a flight distance of 2 km, the aircraft is geo-localized in a reference map of 11,025 km² using 0.09 km² aerial images without any prior information. The absolute localization error is less than 10 m.
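The paper's improved resampling strategy is not detailed in the abstract. Systematic (low-variance) resampling is the usual baseline that such strategies refine, and a minimal version is sketched below for orientation only.

```python
# Systematic (low-variance) resampling: a common baseline for curbing
# particle degeneracy when the initial particle set is very large.
import numpy as np

def systematic_resample(particles, weights, rng=None):
    """Resample particles proportionally to their weights."""
    rng = rng or np.random.default_rng()
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n   # one draw, n strata
    cumsum = np.cumsum(weights)
    cumsum[-1] = 1.0                                # guard against round-off
    idx = np.searchsorted(cumsum, positions)
    return particles[idx], np.full(n, 1.0 / n)      # reset to uniform weights
```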
Abstract: Transmitting photos via the Internet has become a routine and significant activity. Enhancing the security measures that safeguard these images from counterfeiting and modification is a critical domain that can still be further improved. This study presents a system that employs a range of approaches and algorithms to ensure the security of transmitted venous images. The main goal of this work is to create a highly effective system for compressing individual biometrics in order to improve the overall accuracy and security of digital photographs by means of image compression. This paper introduces a content-based image authentication mechanism that is suitable for use across an untrusted network and resistant to data loss during transmission. By employing scale attributes and a key-dependent parametric Long Short-Term Memory (LSTM), it is feasible to improve the resilience of digital signatures against image deterioration and strengthen their security against malicious actions. Furthermore, the transmission of biometric data in a compressed format over a wireless network has been successfully implemented, for applications involving the transmission and sharing of images across a network. The suggested technique utilizes the scalability of a structural digital signature to attain a satisfactory equilibrium between security and image transfer. An effective adaptive compression strategy was created to lengthen the overall lifetime of the network by distributing the processing responsibilities. This scheme ensures a large reduction in computational and energy requirements while minimizing image quality loss. The approach employs multi-scale characteristics to improve the resistance of signatures to image deterioration. The proposed system attained 98% accuracy under Gaussian noise and a rotation accuracy surpassing 99%.
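One way to read "key-dependent parametric LSTM" is that a secret key parameterizes the network, so that only key holders can reproduce a given signature. The toy sketch below illustrates that idea only; the class name, dimensions, and the sign-quantization step are all assumptions, not the paper's construction.

```python
# Toy key-dependent LSTM signer: a secret key seeds the weight
# initialization, so the mapping from features to signature bits is
# reproducible only by holders of the key. Illustrative, not the
# paper's scheme.
import hashlib
import torch

class KeyedLSTMSignature(torch.nn.Module):
    def __init__(self, key: bytes, feat_dim=64, hidden=32):
        super().__init__()
        seed = int.from_bytes(hashlib.sha256(key).digest()[:8], 'big')
        torch.manual_seed(seed)          # key determines the parameters
        self.lstm = torch.nn.LSTM(feat_dim, hidden, batch_first=True)

    def forward(self, multiscale_feats):
        """multiscale_feats: (1, n_scales, feat_dim), coarse to fine."""
        out, _ = self.lstm(multiscale_feats)
        # Sign quantization gives bits that tolerate mild degradation.
        return torch.sign(out[:, -1, :])
```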
Funding: Funded by the National Natural Science Foundation of China (Nos. 62275216 and 61775181), the Natural Science Basic Research Program of Shaanxi Province (Major Basic Research Special Project, Nos. S2018-ZC-TD-0061 and TZ0393), and the Special Project for the Development of National Key Scientific Instruments and Equipment (No. 51927804).
Abstract: Deep learning is capable of greatly promoting the progress of super-resolution imaging technology in terms of imaging and reconstruction speed, imaging resolution, and imaging flux. This paper proposes a deep neural network based on a generative adversarial network (GAN). The generator employs a U-Net-based network that integrates DenseNet in the downsampling component. The proposed method has several notable properties: the network model is trained with several different datasets of biological structures; the trained model can improve the imaging resolution of different microscopy imaging modalities, such as confocal imaging and wide-field imaging; and the model demonstrates a generalized ability to improve the resolution of different biological structures even outside the training datasets. In addition, experimental results showed that the method improved the resolution of caveolin-coated pits (CCPs) structures from 264 nm to 138 nm, a 1.91-fold increase, and nearly doubled the resolution of DNA molecules imaged while being transported through microfluidic channels.
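A minimal PyTorch sketch of a generator in this spirit: a U-Net shape whose encoder stages are dense blocks. The channel counts, growth rate, and single down/up level are illustrative choices, not the paper's architecture, and input height/width are assumed even.

```python
# Sketch of a U-Net-shaped generator with DenseNet-style encoder blocks.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Each conv sees the concatenation of all earlier feature maps."""
    def __init__(self, ch, growth=16, layers=3):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(ch + i * growth, growth, 3, padding=1)
            for i in range(layers))

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(torch.relu(conv(torch.cat(feats, dim=1))))
        return torch.cat(feats, dim=1)       # ch + layers * growth

class DenseUNetGenerator(nn.Module):
    def __init__(self, in_ch=1, base=32):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, base, 3, padding=1),
                                  DenseBlock(base))       # -> 80 channels
        self.down = nn.MaxPool2d(2)
        self.enc2 = DenseBlock(base + 48)                  # -> 128 channels
        self.up = nn.Upsample(scale_factor=2, mode='bilinear',
                              align_corners=False)
        # 208 = 128 (upsampled decoder path) + 80 (skip connection)
        self.dec = nn.Conv2d(208, in_ch, 3, padding=1)

    def forward(self, x):
        s1 = self.enc1(x)                    # skip connection, full res
        s2 = self.enc2(self.down(s1))        # dense features, half res
        u = self.up(s2)
        return self.dec(torch.cat([u, s1], dim=1))
```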
Funding: Funded by the Researchers Supporting Program at King Saud University (RSPD2024R809).
Abstract: Leukemia is a form of cancer of the blood or bone marrow. A person with leukemia has an expansion of white blood cells (WBCs). It primarily affects children and rarely affects adults. Treatment depends on the type of leukemia and the extent to which the cancer has spread throughout the body. Identifying leukemia at the initial stage is vital to providing timely patient care. Medical image-analysis approaches offer safer, quicker, and less costly solutions while avoiding the difficulties of invasive procedures. Computer vision (CV)-based and image-processing techniques are straightforward to generalize and can eliminate human error. Many researchers have implemented computer-aided diagnostic methods and machine learning (ML) for laboratory image analysis, hoping to overcome the limitations of late leukemia detection and to determine its subgroups. This study establishes a Marine Predators Algorithm with Deep Learning Leukemia Cancer Classification (MPADL-LCC) algorithm for medical images. The projected MPADL-LCC system uses a bilateral filtering (BF) technique to pre-process medical images and uses Faster SqueezeNet, with the Marine Predators Algorithm (MPA) as a hyperparameter optimizer, for feature extraction. Lastly, the denoising autoencoder (DAE) methodology is executed to accurately detect and classify leukemia cancer. The hyperparameter tuning process using MPA helps enhance classification performance. Simulation results are compared with other recent approaches across various measurements, and the MPADL-LCC algorithm exhibits the best results.
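Bilateral filtering, the preprocessing step named above, smooths stain and sensor noise while preserving cell boundaries, because its kernel weights combine spatial distance with intensity distance. A minimal sketch using OpenCV's standard implementation; the parameter values are typical defaults, not the paper's settings.

```python
# Bilateral-filter preprocessing for blood-smear images: edge-preserving
# smoothing followed by normalization for a downstream feature extractor.
import cv2
import numpy as np

def preprocess_blood_smear(img_bgr, d=9, sigma_color=75, sigma_space=75):
    """Denoise while keeping WBC boundaries sharp, then scale to [0, 1]."""
    filtered = cv2.bilateralFilter(img_bgr, d, sigma_color, sigma_space)
    return filtered.astype(np.float32) / 255.0
```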