With the increasing popularity of artificial intelligence applications, machine learning is playing an increasingly important role in the Internet of Things (IoT) and the Internet of Vehicles (IoV). As an essential part of the IoV, smart transportation relies heavily on information obtained from images. However, inclement weather, such as snow, can hinder the regular operation of imaging equipment and the acquisition of conventional image information. Moreover, snow can cause intelligent transportation systems to misjudge road conditions, which adversely affects the entire IoV system. This paper addresses the single-image snow removal task using a vision-transformer-based generative adversarial network. The algorithm adopts a residual structure, and the generator of the generative adversarial network uses a Transformer structure, which improves the accuracy of the snow removal task. Moreover, the vision transformer scales well to larger models and has a stronger fitting ability than the previously popular convolutional neural networks. The Snow100K dataset is used for training, testing, and comparison, with peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) as evaluation indicators. The experimental results show that the improved snow removal algorithm performs well and obtains high-quality snow-free images.
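PSNR, used here and in several of the abstracts below as an evaluation indicator, is computed from the mean squared error between the restored image and the ground truth; a minimal sketch, assuming 8-bit images (peak value 255):

```python
import numpy as np

def psnr(reference: np.ndarray, restored: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two same-shaped images."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images score infinitely high
    return 10.0 * np.log10(peak ** 2 / mse)

# Example: a uniform pixel error of 10 gray levels gives MSE = 100.
clean = np.zeros((8, 8))
degraded = np.full((8, 8), 10.0)
value = psnr(clean, degraded)  # 10 * log10(255**2 / 100), roughly 28.13 dB
```

Higher PSNR means the restored image is closer to the clean reference, which is why deraining and desnowing papers report it alongside SSIM.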
Computer-aided diagnosis based on image color rendering promotes medical image analysis and doctor-patient communication by highlighting important information for medical diagnosis. To overcome the limitations of deep-learning-based color rendering methods, such as poor model stability, poor rendering quality, fuzzy boundaries, and crossed color boundaries, we propose a novel hinge-cross-entropy generative adversarial network (HCEGAN). A self-attention mechanism is added and improved to focus on the important information in the image, and the hinge-cross-entropy loss function is used to stabilize the training process of GAN models. In this study, we implement the HCEGAN model for image color rendering on the DIV2K and COCO datasets and evaluate the results using SSIM and PSNR. The experimental results show that the proposed HCEGAN automatically re-renders images, significantly improves the quality of color rendering, and greatly improves the stability of prior GAN models.
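The abstract does not spell out the hinge-cross-entropy loss itself. As a rough illustration of the two ingredients its name suggests, the sketch below combines the standard hinge loss on discriminator scores with binary cross-entropy; the mixing coefficient `alpha` and the specific blend are hypothetical, not the paper's formula:

```python
import numpy as np

def hinge_loss_d(real_scores: np.ndarray, fake_scores: np.ndarray) -> float:
    """Standard hinge loss for a GAN discriminator."""
    return float(np.mean(np.maximum(0.0, 1.0 - real_scores))
                 + np.mean(np.maximum(0.0, 1.0 + fake_scores)))

def binary_cross_entropy(probs: np.ndarray, labels: np.ndarray, eps: float = 1e-7) -> float:
    probs = np.clip(probs, eps, 1.0 - eps)
    return float(-np.mean(labels * np.log(probs) + (1 - labels) * np.log(1 - probs)))

def combined_loss(real_scores: np.ndarray, fake_scores: np.ndarray, alpha: float = 0.5) -> float:
    """Hypothetical blend of hinge and cross-entropy terms (alpha is illustrative)."""
    scores = np.concatenate([real_scores, fake_scores])
    probs = 1.0 / (1.0 + np.exp(-scores))  # sigmoid turns scores into probabilities
    labels = np.concatenate([np.ones_like(real_scores), np.zeros_like(fake_scores)])
    return alpha * hinge_loss_d(real_scores, fake_scores) + \
           (1 - alpha) * binary_cross_entropy(probs, labels)

real = np.array([2.0, 3.0])    # confident scores on real samples
fake = np.array([-2.0, -3.0])  # confident scores on fakes
loss = combined_loss(real, fake)
```

The hinge term saturates for well-separated scores, which is one reason hinge-style GAN losses tend to train more stably than a pure cross-entropy objective.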
A precise dating of a mafic dyke of a swarm in shield areas offers a great advantage in identifying Large Igneous Provinces (LIPs; short-lived, mantle-generated magmatic events) (Bryan and Ernst, 2008; Ernst et al., 2010). Such
Internet of Things (IoT) and blockchain receive significant interest owing to their applicability in different application areas such as healthcare, finance, and transportation. Medical image security and privacy have become a critical part of the healthcare sector, where digital images and related patient details are communicated over public networks. This paper presents a new wind-driven optimization algorithm based medical image encryption (WDOA-MIE) technique for blockchain-enabled IoT environments. The WDOA-MIE model involves four major processes, namely data collection, image encryption, optimal key generation, and data transmission. Initially, the medical images are captured from the patient using IoT devices. Then, the captured images are encrypted using a signcryption technique. In addition, to improve the performance of the signcryption technique, an optimal key generation procedure is applied by the WDOA algorithm. The goal of the WDOA-MIE algorithm is to derive a fitness function dependent upon peak signal-to-noise ratio (PSNR). Upon successful encryption of the images, the IoT devices transmit them to the closest server for secure storage in the blockchain. The performance of the presented method was analyzed on a benchmark medical image dataset. The security and performance analysis shows that the presented technique offers better security, with a maximum PSNR of 60.7036 dB.
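The idea of treating PSNR as a fitness function for key selection can be sketched with stand-ins: a toy XOR stream cipher instead of signcryption, and plain random search instead of the wind-driven optimization algorithm. Here we score a key by the PSNR between plain and cipher images and prefer low values (less residual structure); the paper's exact fitness direction is not specified beyond depending on PSNR, so this is an assumption:

```python
import numpy as np

def xor_encrypt(image: np.ndarray, key: int) -> np.ndarray:
    """Toy XOR stream cipher keyed by a seed (stand-in for signcryption)."""
    rng = np.random.default_rng(key)
    stream = rng.integers(0, 256, size=image.shape, dtype=np.uint8)
    return image ^ stream  # applying the same key twice decrypts

def fitness(image: np.ndarray, key: int) -> float:
    """PSNR between plain and cipher image; lower = less visible structure."""
    cipher = xor_encrypt(image, key)
    mse = np.mean((image.astype(np.float64) - cipher.astype(np.float64)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)  # mse > 0 for any realistic cipher

# Random search over candidate keys; WDOA would steer this search instead.
img = np.arange(64, dtype=np.uint8).reshape(8, 8)
best_key = min(range(50), key=lambda k: fitness(img, k))
```

Because XOR is an involution, decryption is the same operation with the same key, which keeps the toy example round-trippable.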
This paper applies narrow-band Internet of Things communication technology to develop wireless network equipment and a communication system that can quickly set up a network with a radius of 100 km on the water surface. A disposable micro buoy based on narrow-band Internet of Things and the BeiDou positioning function is also developed and used to collect surface hydrodynamic data online. In addition, a web-based public service platform is designed for the analysis and visualization of the data collected by the buoys. Combined with satellite remote sensing data, the study carries out a series of marine experiments and studies, such as sediment deposition tracking and floating garbage tracking.
Recently, the Internet of Medical Things (IoMT) has become a research hotspot due to its wide applicability in the medical field. However, data analysis and management in the IoMT remain challenging owing to the massive number of devices linked to the server environment, which generate a massive quantity of healthcare data. In such cases, cognitive computing can be employed, which uses many intelligent technologies, such as machine learning (ML), deep learning (DL), artificial intelligence (AI), and natural language processing (NLP), to comprehend data expansively. Furthermore, breast cancer (BC) has been found to be a major cause of mortality among women globally. Earlier detection and classification of BC using digital mammograms can decrease the mortality rate. This paper presents a novel deep-learning-enabled multi-objective mayfly optimization algorithm (DL-MOMFO) for BC diagnosis and classification in the IoMT environment. The goal of this paper is to integrate deep learning (DL) and cognitive-computing-based techniques for e-healthcare applications as a part of IoMT technology to detect and classify BC. The proposed DL-MOMFO algorithm involves adaptive weighted mean filter (AWMF)-based noise removal and contrast-limited adaptive histogram equalization (CLAHE)-based contrast improvement techniques to improve the quality of the digital mammograms. In addition, a U-Net-architecture-based segmentation method is utilized to detect diseased regions in the mammograms. Moreover, a SqueezeNet-based feature extraction and a fuzzy support vector machine (FSVM) classifier are used in the presented technique. To enhance the diagnostic performance of the presented method, the MOMFO algorithm is used to effectively tune the parameters of the SqueezeNet and FSVM techniques. The DL-MOMFO technique was tested on the MIAS database, and the experimental outcomes revealed that it outperformed existing techniques.
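The weighted-mean-filter denoising step can be pictured, in simplified non-adaptive form, as a sliding window whose pixels are averaged under a weight kernel; the uniform kernel below is an illustrative choice, not the AWMF's adaptive weighting:

```python
import numpy as np

def weighted_mean_filter(image: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Simplified (non-adaptive) weighted mean filter with edge replication."""
    kh, kw = weights.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image.astype(np.float64), ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros(image.shape, dtype=np.float64)
    norm = weights.sum()
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            # Weighted average of the window centered at (i, j).
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * weights) / norm
    return out

# An isolated spike of 9 in a 3x3 zero image is averaged down to 1.
spike = np.zeros((3, 3))
spike[1, 1] = 9.0
smoothed = weighted_mean_filter(spike, np.ones((3, 3)))
```

An adaptive variant would choose the weights (or window size) per pixel based on local noise statistics, which is what lets AWMF suppress impulse noise while preserving edges.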
Extracting useful details from images is essential for Internet of Things projects. However, in real life, various external environments, such as bad weather conditions, cause the occlusion of key target information and image distortion. This makes the extraction of key information difficult, affects the judgment of the real situation in IoT processes, and causes system decision-making errors and accidents. In this paper, we mainly address the problem of rain occlusion in images, removing the rain streaks to obtain a clear, rain-free image. To this end, the single-image deraining algorithm is studied, and a dual-branch network structure based on an attention module and a convolutional neural network (CNN) module is proposed to accomplish the rain removal task. To achieve high-quality rain removal from a single image, we apply a spatial attention module, a channel attention module, and a CNN module in the network structure, and build the network using an encoder-decoder structure. In the experiments, with structural similarity (SSIM) and peak signal-to-noise ratio (PSNR) as evaluation indexes, the training and testing results on the deraining dataset show that the proposed structure performs well on the single-image deraining task.
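Channel attention of the squeeze-and-excitation kind is one plausible reading of the channel attention module named above: pool each feature channel to a scalar, pass the vector through a small gating function, and rescale the channels. A numpy sketch under that assumption (the weight shapes and reduction ratio are illustrative):

```python
import numpy as np

def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(features: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """features: (C, H, W); w1: (C//r, C) and w2: (C, C//r) are gating weights."""
    squeeze = features.mean(axis=(1, 2))                 # global average pool -> (C,)
    gate = sigmoid(w2 @ np.maximum(0.0, w1 @ squeeze))   # per-channel gate in (0, 1)
    return features * gate[:, None, None]                # rescale each channel map

rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 4, 4))
w1 = rng.standard_normal((2, 8))  # reduction ratio r = 4
w2 = rng.standard_normal((8, 2))
out = channel_attention(feats, w1, w2)
```

Since the gate lies in (0, 1), attention can only attenuate channels here; a spatial attention module applies the same idea per pixel location instead of per channel.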
The Internet of Things (IoT) offers a new era of connectivity that goes beyond laptops and smart connected devices to connected vehicles, smart homes, smart cities, and connected healthcare. The massive quantity of data gathered from numerous IoT devices poses security and privacy concerns for users. With the increasing use of multimedia in communications, the content security of remote sensing images has attracted much attention in academia and industry. Image encryption is important for securing remote sensing images in the IoT environment, and researchers have recently introduced many algorithms for encrypting images. This study introduces an Improved Sine Cosine Algorithm with Chaotic Encryption based Remote Sensing Image Encryption (ISCACE-RSI) technique for the IoT environment. The proposed model follows a three-stage process, namely pre-processing, encryption, and optimal key generation. The remote sensing images are preprocessed at the initial stage to enhance image quality. Next, the ISCACE-RSI technique exploits the double-layer remote sensing image encryption (DLRSIE) algorithm for encrypting the images. The DLRSIE methodology incorporates chaotic maps and a deoxyribonucleic acid (DNA) strand displacement (DNASD) approach. The chaotic map is employed to generate pseudorandom sequences and to implement routine scrambling and diffusion processes on the plaintext images. Then, the study presents three DNASD-related encryption rules based on the variety of DNASD, and those rules are applied to encrypt the images at the DNA sequence level. For optimal key generation in the DLRSIE technique, the ISCA is applied with an objective function that maximizes the peak signal-to-noise ratio (PSNR). To examine the performance of the ISCACE-RSI model, a detailed set of simulations was conducted. The comparative study reported the better performance of the ISCACE-RSI model over other existing approaches.
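The chaotic scrambling step can be illustrated with the classic logistic map (an assumption; the abstract does not name which map the paper uses): iterate x ← r·x·(1−x), rank the resulting sequence to obtain a permutation, and shuffle the pixels with it. The initial value x0 acts as the secret key:

```python
import numpy as np

def logistic_sequence(x0: float, n: int, r: float = 3.99) -> np.ndarray:
    """Iterate the logistic map n times from x0 (r = 3.99 is in the chaotic regime)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def scramble(image: np.ndarray, x0: float) -> np.ndarray:
    flat = image.ravel()
    perm = np.argsort(logistic_sequence(x0, flat.size))  # key-dependent permutation
    return flat[perm].reshape(image.shape)

def unscramble(scrambled: np.ndarray, x0: float) -> np.ndarray:
    flat = scrambled.ravel()
    perm = np.argsort(logistic_sequence(x0, flat.size))
    out = np.empty_like(flat)
    out[perm] = flat                                      # invert the permutation
    return out.reshape(scrambled.shape)

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
scrambled = scramble(img, x0=0.3)
restored = unscramble(scrambled, x0=0.3)
```

Scrambling only permutes pixel positions; the diffusion step mentioned in the abstract would additionally alter pixel values so that a one-pixel change in the plaintext spreads across the ciphertext.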
Microvasculature of the retina is considered an alternative marker of cerebral vascular risk in healthy populations. However, the ability of retinal vasculature changes, specifically retinal vessel diameter, to predict the recurrence of cerebrovascular events in patients with ischemic stroke has not been determined comprehensively. While previous studies have shown a link between retinal vessel diameter and recurrent cerebrovascular events, they have not incorporated this information into a predictive model. Therefore, this study aimed to investigate the relationship between retinal vessel diameter and subsequent cerebrovascular events in patients with acute ischemic stroke. Additionally, we sought to establish a predictive model by combining retinal vessel diameter with traditional risk factors. We performed a prospective observational study of 141 patients with acute ischemic stroke who were admitted to the First Affiliated Hospital of Jinan University. All of these patients underwent digital retinal imaging within 72 hours of admission and were followed up for 3 years. We found that, after adjusting for related risk factors, patients with acute ischemic stroke with a mean arteriolar diameter within 0.5-1.0 disc diameters of the disc margin (MAD_(0.5-1.0DD)) of ≥74.14 μm and a mean venular diameter within 0.5-1.0 disc diameters of the disc margin (MVD_(0.5-1.0DD)) of ≥83.91 μm tended to experience recurrent cerebrovascular events. We established three multivariate Cox proportional hazard regression models: model 1 included traditional risk factors, model 2 added MAD_(0.5-1.0DD) to model 1, and model 3 added MVD_(0.5-1.0DD) to model 1. Model 3 had the greatest potential to predict subsequent cerebrovascular events, followed by model 2 and finally model 1. These findings indicate that combining retinal venular or arteriolar diameter with traditional risk factors could improve the prediction of recurrent cerebrovascular events in patients with acute ischemic stroke, and that retinal imaging could be a useful and non-invasive method for identifying high-risk patients who require closer monitoring and more aggressive management.
BACKGROUND Liver transplant (LT) patients have become older and sicker. The rate of post-LT major adverse cardiovascular events (MACE) has increased, and this in turn raises 30-d post-LT mortality. Noninvasive cardiac stress testing loses accuracy when applied to pre-LT cirrhotic patients. AIM To assess the feasibility and accuracy of a machine learning model used to predict post-LT MACE in a regional cohort. METHODS This retrospective cohort study involved 575 LT patients from a Southern Brazilian academic center. We developed a predictive model for post-LT MACE (defined as a composite outcome of stroke, new-onset heart failure, severe arrhythmia, and myocardial infarction) using the extreme gradient boosting (XGBoost) machine learning model. We addressed missing data (below 20%) for relevant variables using the k-nearest neighbor imputation method, calculating the mean from the ten nearest neighbors for each case. The modeling dataset included 83 features, encompassing patient and laboratory data, cirrhosis complications, and pre-LT cardiac assessments. Model performance was assessed using the area under the receiver operating characteristic curve (AUROC). We also employed Shapley additive explanations (SHAP) to interpret feature impacts. The dataset was split into training (75%) and testing (25%) sets. Calibration was evaluated using the Brier score. We followed the Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis guidelines for reporting. Scikit-learn and SHAP in Python 3 were used for all analyses. The supplementary material includes code for model development and a user-friendly online MACE prediction calculator. RESULTS Of the 537 included patients, 23 (4.46%) developed in-hospital MACE, with a mean age at transplantation of 52.9 years. The majority, 66.1%, were male. The XGBoost model achieved an AUROC of 0.89 during the training stage. The model exhibited accuracy, precision, recall, and F1-score values of 0.84, 0.85, 0.80, and 0.79, respectively. Calibration, as assessed by the Brier score, indicated excellent model calibration with a score of 0.07. Furthermore, SHAP values highlighted the significance of certain variables in predicting postoperative MACE, with negative noninvasive cardiac stress testing, use of nonselective beta-blockers, direct bilirubin levels, blood type O, and dynamic alterations on myocardial perfusion scintigraphy being the most influential factors at the cohort-wide level. These results highlight the predictive capability of the XGBoost model in assessing the risk of post-LT MACE, making it a valuable tool for clinical practice. CONCLUSION Our study successfully assessed the feasibility and accuracy of the XGBoost machine learning model in predicting post-LT MACE using both cardiovascular and hepatic variables. The model demonstrated strong performance, aligning with literature findings, and exhibited excellent calibration. Notably, our cautious approach to preventing overfitting and data leakage suggests the stability of the results when applied to prospective data, reinforcing the model's value as a reliable tool for predicting post-LT MACE in clinical practice.
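The Brier score used for calibration above is simply the mean squared difference between predicted probabilities and observed 0/1 outcomes; a minimal sketch, together with the precision/recall/F1 metrics the abstract reports (toy data, not the study's):

```python
import numpy as np

def brier_score(probs: np.ndarray, outcomes: np.ndarray) -> float:
    """Mean squared error between predicted probabilities and binary outcomes."""
    return float(np.mean((probs - outcomes) ** 2))

def precision_recall_f1(pred: np.ndarray, truth: np.ndarray):
    """Binary classification metrics from hard 0/1 predictions."""
    tp = np.sum((pred == 1) & (truth == 1))
    fp = np.sum((pred == 1) & (truth == 0))
    fn = np.sum((pred == 0) & (truth == 1))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Toy example: a well-calibrated, confident model has a low Brier score.
probs = np.array([0.9, 0.8, 0.2, 0.1])
outcomes = np.array([1, 1, 0, 0])
bs = brier_score(probs, outcomes)  # ((0.1)^2 + (0.2)^2 + (0.2)^2 + (0.1)^2) / 4 = 0.025
```

A Brier score of 0 would mean perfect probabilistic predictions, so the study's 0.07 indicates predicted risks that track observed event rates closely.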
In March 2020, the World Health Organization declared the coronavirus disease (COVID-19) outbreak a pandemic due to its uncontrolled global spread. Reverse transcription polymerase chain reaction is a laboratory test that is widely used for the diagnosis of this deadly disease. However, the limited availability of testing kits and qualified staff and the drastically increasing number of cases have hampered massive testing. To handle COVID-19 testing problems, we apply the Internet of Things and artificial intelligence to achieve self-adaptive, secure, and fast resource allocation, real-time tracking, remote screening, and patient monitoring. In addition, we implement a cloud platform for efficient spectrum utilization. Thus, we propose a cloud-based intelligent system for remote COVID-19 screening using cognitive-radio-based Internet of Things and deep learning. Specifically, a deep learning technique recognizes radiographic patterns in chest computed tomography (CT) scans. To this end, contrast-limited adaptive histogram equalization is applied to an input CT scan, followed by bilateral filtering to enhance the spatial quality. The image quality of the CT scan is assessed using the blind/referenceless image spatial quality evaluator. Then, a deep transfer learning model, VGG-16, is trained to diagnose a suspected CT scan as either COVID-19 positive or negative. Experimental results demonstrate that the proposed VGG-16 model outperforms existing COVID-19 screening models in accuracy, sensitivity, and specificity. The results obtained from the proposed system can be verified by doctors and sent to remote places through the Internet.
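CLAHE limits contrast amplification within local tiles; as a simplified stand-in for that preprocessing step, plain global histogram equalization (no tiling, no clip limit) can be sketched as a cumulative-distribution lookup table:

```python
import numpy as np

def histogram_equalize(image: np.ndarray) -> np.ndarray:
    """Global histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(image.ravel(), minlength=256)
    cdf = np.cumsum(hist)
    cdf_min = cdf[cdf > 0].min()                 # cdf at the darkest occurring level
    denom = max(cdf[-1] - cdf_min, 1)            # guard against constant images
    lut = np.clip(np.round((cdf - cdf_min) / denom * 255), 0, 255).astype(np.uint8)
    return lut[image]                            # map every pixel through the table

# A low-contrast ramp (levels 100..129) gets stretched to the full 0..255 range.
img = np.tile(np.arange(100, 130, dtype=np.uint8), (16, 1))
eq = histogram_equalize(img)
```

CLAHE improves on this by equalizing each tile separately and clipping the histogram before building the lookup table, which prevents noise in flat regions from being over-amplified.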
Based on analysis of the high-order compatibility optimization method proposed by predecessors, a new training image optimization method based on data event repetition probability is proposed. The basic idea is to extract the data events contained in the conditioning data and count the repetitions of the extracted data events and their repetition probability in each training image, yielding two statistical indicators: the unmatched ratio and the repetition probability variance of the data events. These two indicators characterize the diversity and stability of the sedimentary model in the training image and evaluate how well it matches the geological spatial structure contained in the data of the well block to be modeled. The unmatched ratio reflects the completeness of the geological model in the training image and is the first-choice index. The repetition probability variance reflects the stationarity of the geological model of each training image and is an auxiliary index. The two indexes are then integrated to achieve the optimization of the training image. Multiple sets of theoretical model tests show that the training image with small variance and a low unmatched ratio is the optimal training image. The method was used to optimize the training image of a turbidite channel in the Plutonio oilfield in Angola. The geological model established by this method is in good agreement with the seismic attributes and better reproduces the morphological characteristics of the channels and the distribution pattern of the sands.
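Under a simplified reading (1-D data events, exact matching), the two indicators might be computed as: the fraction of conditioning-data events that never occur in the training image (unmatched ratio), and the variance of the occurrence probabilities of those that do. The event length and facies coding below are illustrative assumptions:

```python
import numpy as np

def extract_events(sequence, length):
    """All contiguous data events (patterns) of a given length."""
    return [tuple(sequence[i:i + length]) for i in range(len(sequence) - length + 1)]

def training_image_scores(cond_data, training_image, length=2):
    """Return (unmatched ratio, repetition probability variance) for one TI."""
    cond_events = set(extract_events(cond_data, length))
    ti_events = extract_events(training_image, length)
    counts = {e: ti_events.count(e) for e in cond_events}
    unmatched = sum(1 for c in counts.values() if c == 0) / len(cond_events)
    probs = [c / len(ti_events) for c in counts.values() if c > 0]
    variance = float(np.var(probs)) if probs else 0.0
    return unmatched, variance

# Toy facies sequences (0/1 codes); a TI that repeats the conditioning pattern
# matches every event (unmatched ratio 0) with equal probabilities (variance 0).
cond = [0, 1, 1, 0]
ti = [0, 1, 1, 0, 0, 1, 1, 0]
u, v = training_image_scores(cond, ti)
```

Ranking candidate training images by low unmatched ratio first and low variance second mirrors the abstract's use of the two indexes as primary and auxiliary criteria.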
Widespread deployment of the Internet of Things (IoT) has changed the way that network services are developed, deployed, and operated. Many advanced IoT devices are equipped with visual sensors, forming the so-called visual IoT. Typically, the sender compresses images and transmits them through the communication network; the receiver decodes the images and then analyzes them for applications. However, image compression and semantic inference are generally conducted separately, and thus current compression algorithms cannot be transplanted directly for use in semantic inference. A collaborative image compression and classification framework for visual IoT applications is proposed, which combines image compression with semantic inference by using multi-task learning. In particular, multi-task Generative Adversarial Networks (GANs) are described, which include an encoder, quantizer, generator, discriminator, and classifier to conduct image compression and classification simultaneously. The key to the proposed framework is the quantized latent representation shared by compression and classification. GANs with perceptual quality can achieve low-bitrate compression and reduce the amount of data transmitted. In addition, the design in which the two tasks share the same features can greatly reduce computing resources, which is especially applicable to environments with extremely limited resources. Extensive experiments show that the collaborative compression and classification framework is effective and useful for visual IoT applications.
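The quantizer that produces the shared latent representation can be pictured as uniform scalar quantization of the encoder output onto a small set of centers (the level count and the [-1, 1] latent range are illustrative choices, not the paper's configuration):

```python
import numpy as np

def quantize(latent: np.ndarray, levels: int = 8) -> np.ndarray:
    """Snap each latent value in [-1, 1] to the nearest of `levels` uniform centers."""
    centers = np.linspace(-1.0, 1.0, levels)
    idx = np.argmin(np.abs(latent[..., None] - centers), axis=-1)  # nearest center
    return centers[idx]

latent = np.array([-0.95, -0.2, 0.1, 0.7])
q = quantize(latent, levels=8)
```

With 8 levels each latent value costs only 3 bits to transmit, which is where the bitrate saving comes from; the classifier then operates on the same quantized codes the decoder receives.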
The efficient processing of the large amounts of data collected by a microseismic monitoring system (MMS), especially the rapid discrimination of microseismic events from blasts and noise, is essential for mine disaster prevention. Currently, this work is primarily performed by skilled technicians, which results in heavy workloads and inefficiency. In this paper, CNN-based transfer learning combined with computer vision technology was used to achieve automatic recognition and classification of multichannel microseismic signal waveforms. First, the data collected by the MMS were organized by event into 6-channel original waveforms. After that, sample datasets of microseismic events, blasts, drillings, and noise were established through manual identification. These datasets were split into training and test sets according to a certain proportion, and transfer learning was performed on the AlexNet, GoogLeNet, and ResNet50 pre-trained network models, respectively. After training and tuning, the optimal models were retained and compared with support vector machine classification. Results show that the transfer learning models perform well on different test sets. Overall, GoogLeNet performed best, with a recognition accuracy of 99.8%. Finally, the possible effects of the number of training samples and the imbalance of different types of sample data on the accuracy and effectiveness of the classification models are discussed.
Biomedical images are captured for the diagnosis process and to examine the present condition of organs or tissues. Biomedical image processing concepts are similar to biomedical signal processing and include the investigation, improvement, and exhibition of images gathered using X-ray, ultrasound, MRI, etc. At the same time, cervical cancer has become a major cause of increased mortality among women. However, cervical cancer can be identified at an earlier stage using regular Pap smear images. In this respect, this paper devises a new biomedical Pap smear image classification using cascaded deep forest (BPSIC-CDF) model for the Internet of Things (IoT) environment. The BPSIC-CDF technique enables IoT devices to acquire Pap smear images. In addition, the pre-processing of Pap smear images takes place using an adaptive weighted mean filtering (AWMF) technique. Moreover, a sailfish optimizer with Tsallis entropy (SFO-TE) approach has been implemented for the segmentation of Pap smear images. Furthermore, a deep-learning-based residual network (ResNet50) was executed as a feature extractor and the CDF as a classifier to determine the class labels of the input Pap smear images. To showcase the improved diagnostic outcome of the BPSIC-CDF technique, a comprehensive set of simulations was run on the Herlev database. The experimental results highlighted the superiority of the BPSIC-CDF technique over recent state-of-the-art techniques in terms of different performance measures.
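Tsallis entropy, the quantity the SFO-TE segmentation step optimizes over, generalizes Shannon entropy with an order parameter q: for a probability histogram p it is S_q = (1 − Σ p_i^q)/(q − 1), recovering the Shannon entropy in the limit q → 1. A sketch:

```python
import numpy as np

def tsallis_entropy(p: np.ndarray, q: float) -> float:
    """Tsallis entropy of order q for a discrete distribution p (q != 1)."""
    p = p[p > 0]  # ignore empty histogram bins
    return float((1.0 - np.sum(p ** q)) / (q - 1.0))

# As q -> 1, Tsallis entropy approaches the Shannon entropy -sum(p ln p).
p = np.array([0.5, 0.25, 0.25])
shannon = -np.sum(p * np.log(p))
approx = tsallis_entropy(p, q=1.0001)
```

In entropy-based thresholding, one searches for the gray-level threshold whose foreground/background histograms jointly maximize this entropy; the sailfish optimizer would perform that search instead of exhaustive scanning.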
Magnetic Resonance Imaging (MRI) is a noninvasive, nonradioactive, and meticulous diagnostic modality in the field of medical imaging. However, the efficiency of MR image reconstruction is affected by bulky image sets and slow process implementation. Therefore, to obtain high-quality reconstructed images, we present a sparse-aware noise removal technique that uses a convolutional neural network (SANR_CNN) for eliminating noise and improving MR image reconstruction quality. The proposed denoising technique adopts a fast CNN architecture that aids in training larger datasets with improved quality, and the SARN algorithm is used to build a dictionary learning technique for denoising large image datasets. The proposed SANR_CNN model also preserves the details and edges in the image during reconstruction. An experiment was conducted to compare the performance of SANR_CNN with a few existing models with regard to peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and mean squared error (MSE). The proposed SANR_CNN model achieved better PSNR, SSIM, and MSE than the other noise removal techniques. The proposed architecture also provides for the transmission of the denoised medical images through a secured IoT architecture.
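SSIM, the second quality metric used throughout these abstracts, compares luminance, contrast, and structure between two images. A single-window sketch of the standard formula with the usual constants for 8-bit data; real implementations average this over sliding local windows rather than computing it globally:

```python
import numpy as np

def ssim_global(x: np.ndarray, y: np.ndarray, data_range: float = 255.0) -> float:
    """Global (single-window) SSIM; library versions average local windows."""
    c1 = (0.01 * data_range) ** 2  # stabilizes the luminance term
    c2 = (0.03 * data_range) ** 2  # stabilizes the contrast/structure term
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return float(((2 * mx * my + c1) * (2 * cov + c2))
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

img = np.arange(64, dtype=np.float64).reshape(8, 8)
inverted = 255.0 - img  # structurally anti-correlated counterpart
```

SSIM equals 1 only for identical images and drops as structure diverges, which makes it a better proxy for perceived quality than pixelwise MSE alone.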
A new image reconstruction method was developed for a Compton camera. A simulation to determine a γ-ray source position was performed using the simulation tool GEANT4. The image reconstruction was made in two steps. First, a three-dimensional image was constructed and projected onto one selected plane; then, points from each ellipse were picked up by taking the peak points of the density distribution of crossing points between the ellipse and the first-step image. The second-step procedure improved the accuracy and the spatial resolution of the position determination significantly compared with the image obtained by the first step alone. The accuracy and the resolution for a point source were found to be about 0.02 mm and (1.35±0.15) mm, respectively. The same procedure was applied to the imaging of a distributed γ-ray source.
In electronic confrontation, Synthetic Aperture Radar (SAR) is vulnerable to different types of electronic jamming. Research on SAR jamming image quality assessment can provide a prerequisite for SAR jamming and anti-jamming technology, which is an urgent problem that researchers need to solve. Traditional SAR image quality assessment metrics analyze the statistical error between the reference image and the jamming image only in the pixel domain; therefore, they cannot effectively reflect the visual perceptual properties of SAR jamming images. In this demo, we develop a SAR image quality assessment system based on human visual perception for an aircraft electromagnetic countermeasures simulation platform. Internet of Things and cloud computing techniques for big data are applied in our system. In the demonstration, we present the assessment result interface of the SAR image quality assessment system.
Funding (snow removal study): supported by the School of Computer Science and Technology, Shandong University of Technology; the Shandong Provincial Natural Science Foundation, China (Grant Number ZR2019BF022); and the National Natural Science Foundation of China (Grant Number 62001272).
Funding (color rendering study): Foundation of China (No. 61902311); supported in part by the Natural Science Foundation of Shaanxi Province, China, under Grants 2022JM-508, 2022JM-317, and 2019JM-162.
Abstract: Precise dating of a mafic dyke within a swarm in shield areas is of great advantage for identifying Large Igneous Provinces (LIPs; short-lived, mantle-generated magmatic events) (Bryan and Ernst, 2008; Ernst et al., 2010). Such
Abstract: The Internet of Things (IoT) and blockchain receive significant interest owing to their applicability in different areas such as healthcare, finance and transportation. Medical image security and privacy have become a critical concern in the healthcare sector, where digital images and related patient details are communicated over public networks. This paper presents a new wind driven optimization algorithm based medical image encryption (WDOA-MIE) technique for blockchain-enabled IoT environments. The WDOA-MIE model involves four major processes, namely data collection, image encryption, optimal key generation and data transmission. Initially, medical images are captured from the patient using IoT devices. The captured images are then encrypted using a signcryption technique. In addition, to improve the performance of the signcryption technique, optimal key generation is performed by the WDOA algorithm, whose fitness function is based on peak signal-to-noise ratio (PSNR). Upon successful encryption, the IoT devices transmit the images to the closest server for secure storage in the blockchain. The performance of the presented method was analyzed using a benchmark medical image dataset. The security and performance analyses determine that the presented technique offers better security, with a maximum PSNR of 60.7036 dB.
Funding: The National Natural Science Foundation of China under contract No. 41606004.
Abstract: This paper applies narrowband Internet of Things communication technology to develop wireless network equipment and a communication system that can quickly set up a network with a radius of 100 km on the water surface. A disposable micro buoy based on the narrowband Internet of Things and the BeiDou positioning function is also developed and used to collect surface hydrodynamic data online. In addition, a web-based public service platform is designed for the analysis and visualization of the data collected by the buoys. Combined with satellite remote sensing data, the study carries out a series of marine experiments, such as sediment deposition tracking and floating garbage tracking.
Funding: We deeply acknowledge Taif University for supporting this study through Taif University Researchers Supporting Project Number (TURSP-2020/328), Taif University, Taif, Saudi Arabia.
Abstract: Recently, the Internet of Medical Things (IoMT) has become a research hotspot due to its broad applicability in the medical field. However, data analysis and management in the IoMT remain challenging owing to the massive number of devices linked to the server environment, which generate a massive quantity of healthcare data. In such cases, cognitive computing can be employed, which uses intelligent technologies such as machine learning (ML), deep learning (DL), artificial intelligence (AI) and natural language processing (NLP) to comprehend data expansively. Furthermore, breast cancer (BC) is a major cause of mortality among women globally. Early detection and classification of BC using digital mammograms can decrease the mortality rate. This paper presents a novel deep-learning-enabled multi-objective mayfly optimization algorithm (DL-MOMFO) for BC diagnosis and classification in the IoMT environment. The goal of this paper is to integrate deep learning (DL) and cognitive-computing-based techniques for e-healthcare applications as a part of IoMT technology to detect and classify BC. The proposed DL-MOMFO algorithm involves adaptive weighted mean filter (AWMF)-based noise removal and contrast-limited adaptive histogram equalisation (CLAHE)-based contrast improvement to enhance the quality of the digital mammograms. In addition, a U-Net-based segmentation method is used to detect diseased regions in the mammograms. Moreover, SqueezeNet-based feature extraction and a fuzzy support vector machine (FSVM) classifier are used in the presented technique. To enhance the diagnostic performance of the presented method, the MOMFO algorithm is used to effectively tune the parameters of the SqueezeNet and FSVM techniques. The DL-MOMFO technique was tested on the MIAS database, and the experimental outcomes revealed that it outperformed existing techniques.
Funding: Supported by the National Natural Science Foundation of China (No. 62001272).
Abstract: Extracting useful details from images is essential for Internet of Things projects. In real life, however, external conditions such as bad weather can occlude key target information and distort images, hindering the extraction of key information, affecting judgments about the real situation in IoT processes, and causing system decision-making errors and accidents. In this paper, we address the problem of rain occluding images: we remove the rain streaks in an image to obtain a clear, rain-free image. To this end, the single-image deraining algorithm is studied, and a dual-branch network structure based on an attention module and a convolutional neural network (CNN) module is proposed to accomplish the rain removal task. To remove rain from a single image with high quality, we apply a spatial attention module, a channel attention module and a CNN module in the network, which is built on an encoder-decoder structure. In the experiments, with structural similarity (SSIM) and peak signal-to-noise ratio (PSNR) as evaluation indexes, the training and testing results on the deraining dataset show that the proposed structure performs well on the single-image deraining task.
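The SSIM index cited here compares two images through their means, variances and covariance. The sketch below shows the single-window (global) form of the statistic on toy pixel lists; the full metric averages this quantity over local windows, so this is an illustration of the formula rather than a drop-in implementation:

```python
def ssim_global(x, y, max_val=255.0):
    """Single-window (global) form of the SSIM index between two
    equal-sized images given as flat lists of pixel values; the full
    metric averages this statistic over local windows."""
    n = len(x)
    mu_x, mu_y = sum(x) / n, sum(y) / n
    var_x = sum((v - mu_x) ** 2 for v in x) / n
    var_y = sum((v - mu_y) ** 2 for v in y) / n
    cov = sum((a - mu_x) * (b - mu_y) for a, b in zip(x, y)) / n
    c1 = (0.01 * max_val) ** 2  # standard stabilizing constants
    c2 = (0.03 * max_val) ** 2
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

patch = [10, 20, 30, 40]
print(ssim_global(patch, patch))  # identical images score exactly 1.0
```

Values closer to 1 indicate greater structural similarity between the derained output and the ground-truth image.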
Funding: Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R319), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work through Grant Code (22UQU4210118DSR48).
Abstract: The Internet of Things (IoT) offers a new era of connectivity, which goes beyond laptops and smart connected devices to connected vehicles, smart homes, smart cities and connected healthcare. The massive quantity of data gathered from numerous IoT devices poses security and privacy concerns for users. With the increasing use of multimedia in communications, the content security of remote sensing images has attracted much attention in academia and industry. Image encryption is important for securing remote sensing images in the IoT environment. Recently, researchers have introduced many algorithms for encrypting images. This study introduces an Improved Sine Cosine Algorithm with Chaotic Encryption based Remote Sensing Image Encryption (ISCACE-RSI) technique for the IoT environment. The proposed model follows a three-stage process, namely pre-processing, encryption and optimal key generation. The remote sensing images are preprocessed in the initial stage to enhance image quality. Next, the ISCACE-RSI technique exploits the double-layer remote sensing image encryption (DLRSIE) algorithm for encrypting the images. The DLRSIE methodology incorporates chaotic maps and a deoxyribonucleic acid (DNA) strand displacement (DNASD) approach. The chaotic map is employed to generate pseudorandom sequences and to implement routine scrambling and diffusion processes on the plaintext images. The study then presents three DNASD-related encryption rules based on the variety of DNASD, and those rules are applied to encrypt the images at the DNA sequence level. For optimal key generation in the DLRSIE technique, the ISCA is applied with an objective function that maximizes the peak signal-to-noise ratio (PSNR). To examine the performance of the ISCACE-RSI model, a detailed set of simulations was conducted. The comparative study reported the better performance of the ISCACE-RSI model over other existing approaches.
Funding: Supported by the Youth Fund of the Fundamental Research Fund for the Central Universities of Jinan University, No. 11622303 (to YZ).
Abstract: The microvasculature of the retina is considered an alternative marker of cerebral vascular risk in healthy populations. However, the ability of retinal vasculature changes, specifically retinal vessel diameter, to predict the recurrence of cerebrovascular events in patients with ischemic stroke has not been determined comprehensively. While previous studies have shown a link between retinal vessel diameter and recurrent cerebrovascular events, they have not incorporated this information into a predictive model. Therefore, this study aimed to investigate the relationship between retinal vessel diameter and subsequent cerebrovascular events in patients with acute ischemic stroke. Additionally, we sought to establish a predictive model by combining retinal vessel diameter with traditional risk factors. We performed a prospective observational study of 141 patients with acute ischemic stroke who were admitted to the First Affiliated Hospital of Jinan University. All of these patients underwent digital retinal imaging within 72 hours of admission and were followed up for 3 years. We found that, after adjusting for related risk factors, patients with acute ischemic stroke with a mean arteriolar diameter within 0.5-1.0 disc diameters of the disc margin (MAD_(0.5-1.0DD)) of ≥74.14 μm and a mean venular diameter within 0.5-1.0 disc diameters of the disc margin (MVD_(0.5-1.0DD)) of ≥83.91 μm tended to experience recurrent cerebrovascular events. We established three multivariate Cox proportional hazards regression models: model 1 included traditional risk factors, model 2 added MAD_(0.5-1.0DD) to model 1, and model 3 added MVD_(0.5-1.0DD) to model 1. Model 3 had the greatest potential to predict subsequent cerebrovascular events, followed by model 2 and, finally, model 1. These findings indicate that combining retinal venular or arteriolar diameter with traditional risk factors could improve the prediction of recurrent cerebrovascular events in patients with acute ischemic stroke, and that retinal imaging could be a useful and non-invasive method for identifying high-risk patients who require closer monitoring and more aggressive management.
Abstract: BACKGROUND Liver transplant (LT) patients have become older and sicker. The rate of post-LT major adverse cardiovascular events (MACE) has increased, and this in turn raises 30-day post-LT mortality. Noninvasive cardiac stress testing loses accuracy when applied to pre-LT cirrhotic patients. AIM To assess the feasibility and accuracy of a machine learning model used to predict post-LT MACE in a regional cohort. METHODS This retrospective cohort study involved 575 LT patients from a Southern Brazilian academic center. We developed a predictive model for post-LT MACE (defined as a composite outcome of stroke, new-onset heart failure, severe arrhythmia, and myocardial infarction) using the extreme gradient boosting (XGBoost) machine learning model. We addressed missing data (below 20%) for relevant variables using the k-nearest neighbor imputation method, calculating the mean from the ten nearest neighbors for each case. The modeling dataset included 83 features, encompassing patient and laboratory data, cirrhosis complications, and pre-LT cardiac assessments. Model performance was assessed using the area under the receiver operating characteristic curve (AUROC). We also employed Shapley additive explanations (SHAP) to interpret feature impacts. The dataset was split into training (75%) and testing (25%) sets. Calibration was evaluated using the Brier score. We followed the Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis guidelines for reporting. Scikit-learn and SHAP in Python 3 were used for all analyses. The supplementary material includes code for model development and a user-friendly online MACE prediction calculator. RESULTS Of the 537 included patients, 23 (4.46%) developed in-hospital MACE, with a mean age at transplantation of 52.9 years. The majority, 66.1%, were male. The XGBoost model achieved an AUROC of 0.89 during the training stage. This model exhibited accuracy, precision, recall, and F1-score values of 0.84, 0.85, 0.80, and 0.79, respectively. Calibration, as assessed by the Brier score, indicated excellent model calibration with a score of 0.07. Furthermore, SHAP values highlighted the significance of certain variables in predicting postoperative MACE, with negative noninvasive cardiac stress testing, use of nonselective beta-blockers, direct bilirubin levels, blood type O, and dynamic alterations on myocardial perfusion scintigraphy being the most influential factors at the cohort-wide level. These results highlight the predictive capability of the XGBoost model in assessing the risk of post-LT MACE, making it a valuable tool for clinical practice. CONCLUSION Our study assessed the feasibility and accuracy of the XGBoost machine learning model in predicting post-LT MACE using both cardiovascular and hepatic variables. The model demonstrated strong performance, aligning with literature findings, and exhibited excellent calibration. Notably, our cautious approach to preventing overfitting and data leakage suggests the stability of the results when applied to prospective data, reinforcing the model's value as a reliable tool for predicting post-LT MACE in clinical practice.
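The Brier score used above for calibration is simply the mean squared difference between predicted probabilities and observed binary outcomes. A minimal sketch with hypothetical predictions (not the study's data):

```python
def brier_score(probs, outcomes):
    """Mean squared difference between predicted probabilities and
    observed binary outcomes; 0 is perfect, lower is better."""
    if len(probs) != len(outcomes):
        raise ValueError("inputs must be the same length")
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)

# Hypothetical predicted MACE probabilities vs. observed outcomes (1 = MACE).
preds = [0.9, 0.1, 0.2, 0.8]
truth = [1, 0, 0, 1]
print(round(brier_score(preds, truth), 3))  # low score = good calibration
```

A score of 0.07, as reported, indicates that predicted probabilities track observed outcomes closely; scikit-learn exposes the same statistic as `brier_score_loss`.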
Funding: This study was supported by a grant from the National Research Foundation of Korea (NRF 2016M3A9E9942010), grants from the Korea Health Technology R&D Project through the Korea Health Industry Development Institute (KHIDI) funded by the Ministry of Health & Welfare (HI18C1216), and the Soonchunhyang University Research Fund.
Abstract: In March 2020, the World Health Organization declared the coronavirus disease (COVID-19) outbreak a pandemic due to its uncontrolled global spread. Reverse transcription polymerase chain reaction is a laboratory test that is widely used for the diagnosis of this deadly disease. However, the limited availability of testing kits and qualified staff and the drastically increasing number of cases have hampered massive testing. To handle COVID-19 testing problems, we apply the Internet of Things and artificial intelligence to achieve self-adaptive, secure and fast resource allocation, real-time tracking, remote screening and patient monitoring. In addition, we implement a cloud platform for efficient spectrum utilization. Thus, we propose a cloud-based intelligent system for remote COVID-19 screening using a cognitive-radio-based Internet of Things and deep learning. Specifically, a deep learning technique recognizes radiographic patterns in chest computed tomography (CT) scans. To this end, contrast-limited adaptive histogram equalization is applied to an input CT scan, followed by bilateral filtering to enhance the spatial quality. The image quality of the CT scan is assessed using the blind/referenceless image spatial quality evaluator. Then, a deep transfer learning model, VGG-16, is trained to diagnose a suspected CT scan as either COVID-19 positive or negative. Experimental results demonstrate that the proposed VGG-16 model outperforms existing COVID-19 screening models in accuracy, sensitivity and specificity. The results obtained from the proposed system can be verified by doctors and sent to remote places through the Internet.
Funding: Supported by the China National Science and Technology Major Project (2016ZX05015001-001, 2016ZX05033-003-002).
Abstract: Based on an analysis of the high-order compatibility optimization method proposed by predecessors, a new training image optimization method based on the repetition probability of data events is proposed. The basic idea is to extract the data events contained in the condition data and to count the repetitions of those events, and their repetition probability, in each training image, yielding two statistical indicators: the unmatched ratio and the repetition probability variance of data events. The two indicators characterize the diversity and stability of the sedimentary model in a training image and evaluate how well it matches the geological spatial structure contained in the data of the well block to be modeled. The unmatched ratio reflects the completeness of the geological model in the training image and is the primary index. The repetition probability variance reflects the stationarity of the geological model of each training image and is an auxiliary index. The two indexes are then integrated to optimize the choice of training image. Multiple sets of theoretical model tests show that the training image with small variance and a low unmatched ratio is the optimal training image. The method is used to optimize the training image of the turbidite channel in the Plutonio oilfield in Angola. The geological model established by this method is in good agreement with the seismic attributes and better reproduces the morphological characteristics of the channels and the distribution pattern of the sands.
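The two indicators described above can be sketched in a few lines; the tuple encoding of data events and the toy condition data below are illustrative assumptions, not the paper's implementation:

```python
from collections import Counter

def ti_indicators(condition_events, training_image_events):
    """Unmatched ratio and repetition-probability variance for a
    candidate training image (TI). Data events are encoded here as
    hashable tuples; this encoding is an illustrative assumption."""
    ti_counts = Counter(training_image_events)
    total = len(training_image_events)
    # Unmatched ratio: fraction of condition-data events never seen in the TI.
    unmatched = sum(1 for e in condition_events if ti_counts[e] == 0)
    unmatched_ratio = unmatched / len(condition_events)
    # Repetition probability of each matched event, and the variance of
    # those probabilities (the stationarity indicator).
    probs = [ti_counts[e] / total for e in condition_events if ti_counts[e] > 0]
    if not probs:
        return unmatched_ratio, None
    mean_p = sum(probs) / len(probs)
    variance = sum((p - mean_p) ** 2 for p in probs) / len(probs)
    return unmatched_ratio, variance

# Toy condition data and training image, each given as a list of data events.
cond = [("A", "B"), ("B", "C"), ("C", "D")]
ti = [("A", "B"), ("A", "B"), ("B", "C"), ("B", "B")]
ratio, var = ti_indicators(cond, ti)
print(ratio, var)  # lower values suggest a more complete, more stationary TI
```

Under the selection rule stated in the abstract, the training image minimizing both the unmatched ratio and the variance would be preferred.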
Funding: Supported by the National Key R&D Program of China (No. 2019YFB1803400), the National Natural Science Foundation of China (Nos. 61925105, 61801260 and U1633121), and the Fundamental Research Funds for the Central Universities, China (No. FRF-NP-2003); also supported by the Tsinghua University-China Mobile Communications Group Co., Ltd. Joint Institute.
Abstract: The widespread deployment of the Internet of Things (IoT) has changed the way network services are developed, deployed and operated. Many advanced onboard IoT devices are equipped with visual sensors that form the so-called visual IoT. Typically, the sender compresses images and, through the communication network, the receiver decodes the images and then analyzes them for applications. However, image compression and semantic inference are generally conducted separately, and thus current compression algorithms cannot be transplanted directly for use in semantic inference. A collaborative image compression and classification framework for visual IoT applications is proposed, which combines image compression with semantic inference by using multi-task learning. In particular, multi-task Generative Adversarial Networks (GANs) are described, which include an encoder, quantizer, generator, discriminator and classifier to conduct image compression and classification simultaneously. The key to the proposed framework is the quantized latent representation shared by compression and classification. GANs with perceptual quality can achieve low-bitrate compression and reduce the amount of data transmitted. In addition, the design in which the two tasks share the same features can greatly reduce computing resources, which is especially applicable to environments with extremely limited resources. Extensive experiments show that the collaborative compression and classification framework is effective and useful for visual IoT applications.
Funding: The National Key R&D Program of China (No. 2021YFC2900500).
Abstract: The efficient processing of the large amounts of data collected by a microseismic monitoring system (MMS), especially the rapid discrimination of microseismic events from explosions and noise, is essential for mine disaster prevention. Currently, this work is primarily performed by skilled technicians, which results in severe workloads and inefficiency. In this paper, CNN-based transfer learning combined with computer vision technology was used to achieve automatic recognition and classification of multichannel microseismic signal waveforms. First, the data collected by the MMS were converted into 6-channel original waveforms organized by event. Sample datasets of microseismic events, blasts, drilling and noise were then established through manual identification. These datasets were split into training and test sets in a certain proportion, and transfer learning was performed on the AlexNet, GoogLeNet and ResNet50 pre-trained network models, respectively. After training and tuning, the optimal models were retained and compared with support vector machine classification. The results show that the transfer learning models perform well on different test sets. Overall, GoogLeNet performed best, with a recognition accuracy of 99.8%. Finally, the possible effects of the size of the training sets and the imbalance between different types of sample data on the accuracy and effectiveness of the classification models were discussed.
Funding: The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work under grant number (RGP 2/209/42). This research was also funded by the Deanship of Scientific Research at Princess Nourah bint Abdulrahman University through the Fast-Track Path of the Research Funding Program.
Abstract: Biomedical images are used to capture images for the diagnosis process and to examine the present condition of organs or tissues. Biomedical image processing is analogous to biomedical signal processing and includes the analysis, enhancement and display of images gathered using X-ray, ultrasound, MRI, etc. At the same time, cervical cancer has become a major cause of women's mortality. However, cervical cancer can be identified at an early stage using regular Pap smear images. In this regard, this paper devises a new biomedical Pap smear image classification using cascaded deep forest (BPSIC-CDF) model for the Internet of Things (IoT) environment. The BPSIC-CDF technique enables IoT devices to acquire Pap smear images. In addition, the Pap smear images are pre-processed using an adaptive weighted mean filtering (AWMF) technique. Moreover, a sailfish optimizer with Tsallis entropy (SFO-TE) approach is implemented for the segmentation of Pap smear images. Furthermore, a deep-learning-based residual network (ResNet50) is executed as a feature extractor and a CDF as a classifier to determine the class labels of the input Pap smear images. To showcase the improved diagnostic outcome of the BPSIC-CDF technique, a comprehensive set of simulations was performed on the Herlev database. The experimental results highlighted the superiority of the BPSIC-CDF technique over recent state-of-the-art techniques in terms of different performance measures.
Funding: This research was financially supported in part by the Ministry of Trade, Industry and Energy (MOTIE) and the Korea Institute for Advancement of Technology (KIAT) through the International Cooperative R&D program (Project No. P0016038), and in part by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2021-2016-0-00312) supervised by the IITP (Institute for Information & communications Technology Planning & Evaluation).
Abstract: Magnetic resonance imaging (MRI) is a noninvasive, nonradioactive and meticulous diagnostic modality in the field of medical imaging. However, the efficiency of MR image reconstruction is affected by bulky image sets and slow process implementation. Therefore, to obtain high-quality reconstructed images, we present a sparse-aware noise removal technique that uses a convolutional neural network (SANR_CNN) to eliminate noise and improve MR image reconstruction quality. The proposed denoising technique adopts a fast CNN architecture that aids in training larger datasets with improved quality, and the SANR algorithm is used to build a dictionary learning technique for denoising large image datasets. The proposed SANR_CNN model also preserves the details and edges in the image during reconstruction. An experiment was conducted to compare the performance of SANR_CNN with a few existing models in terms of peak signal-to-noise ratio (PSNR), structural similarity index (SSIM) and mean squared error (MSE). The proposed SANR_CNN model achieved higher PSNR, SSIM and MSE efficiency than the other noise removal techniques. The proposed architecture also provides for the transmission of these denoised medical images through a secured IoT architecture.
Abstract: A new image reconstruction method was developed for a Compton camera. A simulation to determine a γ-ray source position was performed using the simulation tool GEANT4. The image reconstruction was performed in two steps. First, a three-dimensional image was constructed and projected onto one selected plane; then, the points from each ellipse were picked up by taking the peak points of the density distribution of crossing points between the ellipse and the first-step image. The second step improved the accuracy and spatial resolution of the position determination significantly compared with the image obtained by the first step alone. The accuracy and resolution for a point source were found to be about 0.02 mm and (1.35 ± 0.15) mm, respectively. The same procedure was applied to the imaging of a distributed γ-ray source.
Abstract: In electronic confrontation, Synthetic Aperture Radar (SAR) is vulnerable to different types of electronic jamming. Research on SAR jamming image quality assessment can provide a prerequisite for SAR jamming and anti-jamming technology, which is an urgent problem for researchers to solve. Traditional SAR image quality assessment metrics analyze the statistical error between the reference image and the jamming image only in the pixel domain; therefore, they cannot effectively reflect the visual perceptual properties of SAR jamming images. In this demo, we develop a SAR image quality assessment system based on human visual perception for an aircraft electromagnetic countermeasures simulation platform. Internet of Things and cloud computing techniques for big data are applied in our system. In the demonstration, we will present the assessment result interface of the SAR image quality assessment system.