The Internet of Multimedia Things (IoMT) refers to a network of interconnected multimedia devices that communicate with each other over the Internet. Recently, smart healthcare has emerged as a significant application of the IoMT, particularly in the context of knowledge-based learning systems. Smart healthcare systems leverage knowledge-based learning to become more context-aware, adaptable, and auditable while maintaining the ability to learn from historical data. In smart healthcare systems, devices capture images such as X-rays and Magnetic Resonance Imaging (MRI) scans. The security and integrity of these images are crucial for the databases used in knowledge-based learning systems to foster structured decision-making and enhance the learning abilities of AI. Moreover, in knowledge-driven systems, the storage and transmission of HD medical images burden the limited bandwidth of the communication channel, leading to data transmission delays. To address these security and latency concerns, this paper presents a lightweight medical image encryption scheme based on bit-plane decomposition and chaos theory. The experiments yield entropy, energy, and correlation values of 7.999, 0.0156, and 0.0001, respectively, validating the effectiveness of the proposed encryption system, which offers high-quality encryption, a large key space, key sensitivity, and resistance to statistical attacks.
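The abstract does not reproduce the cipher itself, but its two named building blocks are standard. The sketch below is a minimal illustration, not the authors' exact scheme: it decomposes an 8-bit image into bit planes and XORs each plane with a keystream generated by a logistic map. The key parameters `x0` and `r` and the single-round structure are assumptions made purely for demonstration.

```python
import numpy as np

def logistic_keystream(x0, r, n):
    """Generate n pseudo-random bytes from the logistic map x_{k+1} = r*x_k*(1-x_k)."""
    x = x0
    out = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

def encrypt_bitplanes(img, x0=0.3731, r=3.99):
    """Decompose an 8-bit image into bit planes, XOR each with a chaotic keystream, recombine."""
    planes = [(img >> b) & 1 for b in range(8)]                  # bit-plane decomposition
    ks = logistic_keystream(x0, r, img.size)                     # one keystream byte per pixel
    ks_planes = [((ks >> b) & 1).reshape(img.shape) for b in range(8)]
    cipher_planes = [p ^ k for p, k in zip(planes, ks_planes)]   # per-plane XOR diffusion
    cipher = np.zeros_like(img)
    for b, p in enumerate(cipher_planes):
        cipher |= (p << b).astype(img.dtype)
    return cipher

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)    # stand-in for a medical image
    enc = encrypt_bitplanes(img)
    dec = encrypt_bitplanes(enc)                                 # XOR with the same keystream decrypts
    assert np.array_equal(dec, img)
```

Because the operation is a plain XOR, running the same function with the same key parameters recovers the original image, which is the property a lightweight symmetric scheme of this kind relies on.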
Attitude is one of the crucial parameters for space objects and plays a vital role in collision prediction and debris removal. Analyzing light curves to determine attitude is the most commonly used method. In photometric observations, outliers may exist in the obtained light curves due to various reasons. Therefore, preprocessing is required to remove these outliers and obtain high-quality light curves. Through statistical analysis, the reasons leading to outliers can be categorized into two main types: first, the brightness of the object significantly increases due to the passage of a star nearby, referred to as "stellar contamination," and second, the brightness markedly decreases due to cloud cover, referred to as "cloudy contamination." The traditional approach of manually inspecting images for contamination is time-consuming and labor-intensive. Instead, we propose the use of machine learning methods as a substitute. Convolutional Neural Networks and SVMs are employed to identify cases of stellar contamination and cloudy contamination, achieving F1 scores of 1.00 and 0.98 on a test set, respectively. We also explore other machine learning methods such as ResNet-18 and Light Gradient Boosting Machine, and then conduct comparative analyses of the results.
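As a rough illustration of the classification step only, and not the paper's actual pipeline, the sketch below trains an SVM on simple hand-crafted features extracted from synthetic light-curve segments with injected brightening and dimming events. The synthetic curves and the feature set are assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(42)

def synth_curve(kind, n=200):
    """Toy light-curve residuals: 'clean', a brightening spike ('stellar'), or a dimming dip ('cloudy')."""
    mag = rng.normal(0.0, 0.05, n)
    if kind == "stellar":
        mag[90:110] -= 1.0          # brightness up => magnitude down
    elif kind == "cloudy":
        mag[80:140] += 1.5          # brightness down => magnitude up
    return mag

def features(mag):
    """Simple summary statistics; the paper feeds images/curves to CNNs and SVMs instead."""
    return [mag.min(), mag.max(), mag.std(), np.median(mag)]

kinds = ["clean", "stellar", "cloudy"]
X = np.array([features(synth_curve(k)) for k in kinds for _ in range(200)])
y = np.repeat(np.arange(3), 200)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = SVC(kernel="rbf", C=10.0).fit(Xtr, ytr)
print("macro F1:", round(f1_score(yte, clf.predict(Xte), average="macro"), 3))
```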
Road traffic monitoring is an imperative topic widely discussed among researchers. Systems used to monitor traffic frequently rely on cameras mounted on bridges or roadsides. However, aerial images provide the flexibility to use mobile platforms to detect the location and motion of vehicles over a larger area. To this end, different models have shown the ability to recognize and track vehicles. However, these methods are not mature enough to produce accurate results in complex road scenes. Therefore, this paper presents an algorithm that combines state-of-the-art techniques for identifying and tracking vehicles in conjunction with image bursts. The extracted frames were converted to grayscale, followed by the application of a georeferencing algorithm to embed coordinate information into the images. A masking technique eliminated irrelevant data and reduced the computational cost of the overall monitoring system. Next, Sobel edge detection combined with Canny edge detection and the Hough line transform was applied for noise reduction. After preprocessing, a blob detection algorithm helped detect the vehicles. Vehicles of varying sizes were detected by implementing a dynamic thresholding scheme. Detection was performed on the first image of every burst. Then, to track vehicles, a template of each vehicle was matched against the succeeding images using the template matching algorithm. To further improve the tracking accuracy by incorporating motion information, Scale Invariant Feature Transform (SIFT) features were used to find the best possible match among multiple candidates. An accuracy rate of 87% for detection and 80% for tracking was achieved on the A1 Motorway Netherlands dataset. For the Vehicle Aerial Imaging from Drone (VAID) dataset, an accuracy rate of 86% for detection and 78% for tracking was achieved.
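The tracking step can be illustrated with OpenCV's template matching. The sketch below is a simplified stand-in for the paper's tracker (the SIFT-based disambiguation among multiple matches is omitted): it searches for a vehicle template inside a window around its previous position. The window size, scene size, and test data are assumptions.

```python
import cv2
import numpy as np

def track_by_template(frame_gray, template, prev_xy, search_radius=40):
    """Match a vehicle template inside a window centred on its previous (x, y) position."""
    h, w = template.shape
    x, y = prev_xy
    y0, y1 = max(0, y - search_radius), min(frame_gray.shape[0], y + h + search_radius)
    x0, x1 = max(0, x - search_radius), min(frame_gray.shape[1], x + w + search_radius)
    window = frame_gray[y0:y1, x0:x1]
    scores = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(scores)
    return (x0 + max_loc[0], y0 + max_loc[1]), max_val

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    frame0 = rng.integers(0, 255, (240, 320), dtype=np.uint8)
    template = frame0[100:120, 150:180].copy()            # "vehicle" patch from the first burst image
    frame1 = np.roll(frame0, shift=(3, 5), axis=(0, 1))   # same scene, shifted by a few pixels
    (nx, ny), score = track_by_template(frame1, template, prev_xy=(150, 100))
    print("new position:", (nx, ny), "score:", round(float(score), 3))
```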
Multimodal medical image fusion has attained immense popularity in recent years due to its robust technology for clinical diagnosis. It fuses multiple images into a single image to improve image quality by retaining significant information and aiding diagnostic practitioners in diagnosing and treating many diseases. However, recent image fusion techniques have encountered several challenges, including fusion artifacts, algorithm complexity, and high computing costs. To solve these problems, this study presents a novel medical image fusion strategy that combines the benefits of pixel significance with edge-preserving processing to achieve the best fusion performance. First, the method employs a cross-bilateral filter (CBF) that utilizes one image to determine the kernel and the other for filtering, and vice versa, by considering both the geometric closeness and the gray-level similarities of neighboring pixels without smoothing edges. The outputs of the CBF are then subtracted from the original images to obtain detail images. It further proposes edge-preserving processing that combines linear low-pass filtering with a non-linear technique enabling the selection of relevant regions in the detail images while maintaining structural properties. These regions are selected using morphologically processed linear filter residuals to identify the significant regions with high-amplitude edges and adequate size. The outputs of low-pass filtering are fused with the meaningfully restored regions to reconstruct the original shape of the edges. In addition, weight computations are performed using these reconstructed images, and the weights are then fused with the original input images to produce the final fusion result by estimating the strength of horizontal and vertical details. Numerous standard quality evaluation metrics with complementary properties are used for objective comparison with existing, well-known algorithms to validate the fusion results. Experimental results exhibit superior performance compared to other competing techniques in both qualitative and quantitative evaluation. In addition, the proposed method requires less computational complexity and execution time while improving diagnostic computing accuracy. Owing to the lower complexity of the fusion algorithm, the method is efficient in practical applications. The results reveal that the proposed method exceeds the latest state-of-the-art methods in terms of providing detailed information, edge contours, and overall contrast.
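A minimal NumPy sketch of the cross-bilateral filtering step is given below: the weights combine a spatial kernel (geometric closeness) with a range kernel computed from the gray levels of the *guide* image, and the filter output is subtracted from the input to obtain a detail image. The kernel radius and sigma values are illustrative assumptions, not the paper's settings, and the subsequent edge-preserving fusion stages are not shown.

```python
import numpy as np

def cross_bilateral_filter(target, guide, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Filter `target` with weights computed from the `guide` image (cross/joint bilateral filter).
    Both images are float arrays in [0, 1] with the same shape."""
    h, w = target.shape
    pad_t = np.pad(target, radius, mode="reflect")
    pad_g = np.pad(guide, radius, mode="reflect")
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    spatial = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))        # geometric-closeness kernel
    out = np.zeros_like(target)
    for i in range(h):
        for j in range(w):
            g_patch = pad_g[i:i + 2*radius + 1, j:j + 2*radius + 1]
            t_patch = pad_t[i:i + 2*radius + 1, j:j + 2*radius + 1]
            rng_w = np.exp(-((g_patch - guide[i, j])**2) / (2 * sigma_r**2))  # gray-level similarity
            w_ij = spatial * rng_w
            out[i, j] = (w_ij * t_patch).sum() / w_ij.sum()
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img_a = rng.random((32, 32))      # toy stand-in for, e.g., a CT slice
    img_b = rng.random((32, 32))      # toy stand-in for, e.g., an MRI slice
    detail_a = img_a - cross_bilateral_filter(img_a, img_b)      # detail image, as in the CBF step
    detail_b = img_b - cross_bilateral_filter(img_b, img_a)
    print(detail_a.shape, detail_b.shape)
```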
Image steganography is a technique for concealing confidential information within an image without dramatically changing its outward appearance. Vehicular ad hoc networks (VANETs), which enable vehicles to communicate with one another and with roadside infrastructure to enhance safety and traffic flow, provide a range of value-added services and are an essential component of modern smart transportation systems. Steganography in VANETs has been suggested by many authors for secure, reliable message transfer from terminal/hop to terminal/hop and to protect the transfer against attacks for privacy protection. This paper aims to determine whether steganography can improve data security and secrecy in VANET applications and to analyze effective steganography techniques for embedding data into images while minimizing visual quality loss. According to simulations in the literature and real-world studies, image steganography has proved to be an effective method for secure communication in VANETs, even under difficult network conditions. In this research, we also explore a variety of steganography approaches for vehicular ad hoc network transportation systems, such as vector embedding, statistical, spatial domain (SD), transform domain (TD), distortion, masking, and filtering techniques. This study may help researchers improve the ability of vehicular networks to communicate securely and pave the way for innovative steganography methods.
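As a concrete example of the spatial-domain (SD) family mentioned above, the sketch below hides a short message in the least significant bits of a grayscale cover image. It is a textbook LSB scheme for illustration only, not a method proposed in the surveyed VANET literature, and the message payload is a made-up example.

```python
import numpy as np

def embed_lsb(cover, message_bits):
    """Hide a bit sequence in the least significant bits of a grayscale cover image."""
    flat = cover.flatten()
    if len(message_bits) > flat.size:
        raise ValueError("message too long for this cover image")
    stego = flat.copy()
    stego[:len(message_bits)] = (stego[:len(message_bits)] & 0xFE) | message_bits
    return stego.reshape(cover.shape)

def extract_lsb(stego, n_bits):
    """Recover the first n_bits hidden bits."""
    return stego.flatten()[:n_bits] & 1

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    cover = rng.integers(0, 256, (64, 64), dtype=np.uint8)     # stand-in for an image sent over a VANET
    msg = np.frombuffer(b"speed advisory: 60", dtype=np.uint8)  # hypothetical payload
    bits = np.unpackbits(msg)
    stego = embed_lsb(cover, bits)
    recovered = np.packbits(extract_lsb(stego, bits.size)).tobytes()
    print(recovered)                                            # b'speed advisory: 60'
    print("max pixel change:", int(np.abs(stego.astype(int) - cover.astype(int)).max()))  # at most 1
```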
Recently, deep image-hiding techniques have attracted considerable attention in covert communication and high-capacity information hiding. However, these approaches have some limitations; for example, they may lack cover-image self-adaptability, leak information, or offer weak concealment. To address these issues, this study proposes a universal and adaptable image-hiding method. First, a domain attention mechanism is designed by combining Atrous convolution, which makes better use of the relationship between the secret image domain and the cover image domain. Second, to improve perceived human similarity, perceptual loss is incorporated into the training process. The experimental results are promising, with the proposed method achieving an average pixel discrepancy (APD) of 1.83 and a peak signal-to-noise ratio (PSNR) value of 40.72 dB between the cover and stego images, indicative of its high-quality output. Furthermore, the structural similarity index measure (SSIM) reaches 0.985, while the learned perceptual image patch similarity (LPIPS) registers at a remarkable 0.0001. Moreover, self-testing and cross-experiments demonstrate the model's adaptability and generalization in unknown hidden spaces, making it suitable for diverse computer vision tasks.
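The hiding network itself is not reproduced here, but the two headline metrics are easy to state. The sketch below computes the average pixel discrepancy (APD) and PSNR between a cover image and a perturbed stand-in for a stego image; the toy perturbation is an assumption used only to exercise the metrics.

```python
import numpy as np

def apd(cover, stego):
    """Average pixel discrepancy: mean absolute difference between cover and stego images."""
    return float(np.mean(np.abs(cover.astype(np.float64) - stego.astype(np.float64))))

def psnr(cover, stego, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((cover.astype(np.float64) - stego.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    cover = rng.integers(0, 256, (128, 128), dtype=np.uint8)
    stego = np.clip(cover.astype(int) + rng.integers(-2, 3, cover.shape), 0, 255).astype(np.uint8)
    print(f"APD  = {apd(cover, stego):.2f}")
    print(f"PSNR = {psnr(cover, stego):.2f} dB")
```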
In the era of internet proliferation, safeguarding digital media copyright and integrity, especially for images, is imperative. Digital watermarking stands out as a pivotal solution for image security. With the advent of deep learning, watermarking has seen significant advancements. Our review focuses on the innovative deep watermarking approaches that employ neural networks to identify robust embedding spaces, resilient to various attacks. These methods, characterized by a streamlined encoder-decoder architecture, have shown enhanced performance through the incorporation of novel training modules. This article offers an in-depth analysis of deep watermarking's core technologies, current status, and prospective trajectories, evaluating recent scholarly contributions across diverse frameworks. It concludes with an overview of the technical hurdles and prospects, providing essential insights for ongoing and future research endeavors in digital image watermarking.
The Solar Polar-orbit Observatory (SPO), proposed by Chinese scientists, is designed to observe the solar polar regions in an unprecedented way with a spacecraft traveling at a large solar inclination angle and a small ellipticity. However, one of the most significant challenges lies in ultra-long-distance data transmission, particularly for the Magnetic and Helioseismic Imager (MHI), which is the most important payload and generates the largest volume of data in SPO. In this paper, we propose a tailored lossless data compression method based on the measurement mode and characteristics of MHI data. The background outside the solar disk is removed to decrease the number of pixels in an image under compression. Multiple predictive coding methods are combined to eliminate redundancy by utilizing the correlations (spatial, spectral, and polarization) in the data set, improving the compression ratio. Experimental results demonstrate that our method achieves an average compression ratio of 3.67. The compression time is also less than the general observation period. The method exhibits strong feasibility and can be easily adapted to MHI.
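As a toy illustration of the predictive-coding idea (not the MHI pipeline, which combines several predictors across space, spectrum, and polarization), the sketch below applies a previous-pixel predictor to a synthetic disk image with the background zeroed out and uses zlib as a stand-in entropy coder to estimate a compression ratio. The synthetic image and the resulting ratio are assumptions; they will not match the reported 3.67.

```python
import zlib
import numpy as np

def predictive_residuals(img):
    """Horizontal previous-pixel predictor: residual[i, j] = img[i, j] - img[i, j-1] (mod 256)."""
    res = img.astype(np.int16)
    res[:, 1:] -= img[:, :-1].astype(np.int16)
    return (res % 256).astype(np.uint8)          # modular residuals keep the coding lossless

def compression_ratio(img):
    raw = img.tobytes()
    packed = zlib.compress(predictive_residuals(img).tobytes(), 9)
    return len(raw) / len(packed)

if __name__ == "__main__":
    # Smooth synthetic "solar disk" with the off-disk background set to zero, as in the MHI pipeline.
    y, x = np.mgrid[0:256, 0:256]
    disk = ((x - 128)**2 + (y - 128)**2) < 100**2
    img = (disk * (128 + 60 * np.sin(x / 20.0) * np.cos(y / 25.0))).astype(np.uint8)
    print(f"compression ratio ~ {compression_ratio(img):.2f}")
```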
Aeromagnetic data over the Mamfe Basin have been processed. A regional magnetic gridded dataset was obtained from the Total Magnetic Intensity (TMI) data grid using a 3 × 3 convolution (Hanning) filter to remove regional trends. Major similarities in magnetic field orientation and intensities were observed at identical locations on both the regional and TMI data grids. From the regional and TMI gridded datasets, the residual dataset was generated, which represents the very shallow geological features of the basin. Processing this residual data grid using Source Parameter Imaging (SPI) for magnetic depth suggests that the estimated depths to magnetic sources in the basin range from about 271 m to 3552 m. The highest depths are located in two main locations somewhere around the central portion of the study area, which correspond to the area with positive magnetic susceptibilities, as well as the areas extending outwards across the eastern boundary of the study area. Shallow magnetic depths are prominent towards the NW portion of the basin and also correspond to areas of negative magnetic susceptibilities. The basin generally exhibits a variation in depth of magnetic sources with high, average and shallow depths. The presence of intrusive igneous rocks was also observed in this basin. This characteristic is a pointer to the existence of geologic resources of interest for exploration in the basin.
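A minimal sketch of the regional/residual separation is shown below, assuming the commonly used 3 × 3 Hanning weights (the normalized outer product of [1, 2, 1] with itself); the paper's exact normalization, number of filter passes, and grid spacing are not stated in the abstract and are assumptions here.

```python
import numpy as np
from scipy.ndimage import convolve

# 3 x 3 Hanning (low-pass) kernel often used to smooth gridded magnetic data.
HANNING_3X3 = np.array([[1.0, 2.0, 1.0],
                        [2.0, 4.0, 2.0],
                        [1.0, 2.0, 1.0]]) / 16.0

def regional_and_residual(tmi_grid):
    """Split a Total Magnetic Intensity grid into a smooth regional field and a residual field."""
    regional = convolve(tmi_grid, HANNING_3X3, mode="nearest")
    residual = tmi_grid - regional
    return regional, residual

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y, x = np.mgrid[0:200, 0:200]
    tmi = 50.0 * np.sin(x / 60.0) + rng.normal(0.0, 5.0, (200, 200))   # synthetic TMI grid (nT)
    regional, residual = regional_and_residual(tmi)
    print("residual std (nT):", round(float(residual.std()), 2))
```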
With the development of artificial intelligence-related technologies such as deep learning, various organizations, including governments, are making efforts to generate and manage big data for use in artificial intelligence. However, it is difficult to acquire big data due to various social problems and restrictions such as personal information leakage. Many problems arise when introducing the technology in fields that lack the training data needed to apply deep learning. Therefore, this study proposes a mixed contour data augmentation technique, a data augmentation technique using contour images, to solve the problem caused by a lack of data. ResNet, a well-known convolutional neural network (CNN) architecture, and CIFAR-10, a benchmark data set, are used for experimental performance evaluation to demonstrate the superiority of the proposed method. To show that a large performance improvement can be achieved even with a small training dataset, the ratio of the training dataset was varied among 70%, 50%, and 30% for comparative analysis. By applying the mixed contour data augmentation technique, a classification accuracy improvement of up to 4.64% was achieved, along with high accuracy even with a small amount of data. In addition, the mixed contour data augmentation technique is expected to be applicable in various fields, as its excellence is demonstrated on benchmark datasets.
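The abstract does not spell out the exact mixing rule, so the sketch below is only a guess at the idea: it extracts a contour image with the Canny detector and blends it back into the original sample. The blend weight `alpha` and the Canny thresholds are assumptions, and the real technique may mix contours across samples differently.

```python
import cv2
import numpy as np

def mixed_contour_augment(image, alpha=0.5, low=50, high=150):
    """Blend an image with its own contour (edge) map; a hypothetical reading of 'mixed contour'."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) if image.ndim == 3 else image
    contours = cv2.Canny(gray, low, high)                                   # contour image
    if image.ndim == 3:
        contours = cv2.cvtColor(contours, cv2.COLOR_GRAY2BGR)
    return cv2.addWeighted(image, 1.0 - alpha, contours, alpha, 0.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, (32, 32, 3), dtype=np.uint8)                 # stand-in for a CIFAR-10 image
    aug = mixed_contour_augment(img, alpha=0.4)
    print(aug.shape, aug.dtype)
```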
In recent years, huge volumes of healthcare data have been generated in various forms. The advancements made in medical imaging are tremendous, owing to which biomedical image acquisition has become easier and quicker. Due to such massive generation of big data, the utilization of new methods based on Big Data Analytics (BDA), Machine Learning (ML), and Artificial Intelligence (AI) has become essential. In this respect, the current research work develops a new Big Data Analytics with Cat Swarm Optimization based Deep Learning (BDA-CSODL) technique for medical image classification in an Apache Spark environment. The aim of the proposed BDA-CSODL technique is to classify medical images and diagnose disease accurately. The BDA-CSODL technique involves different stages of operation such as preprocessing, segmentation, feature extraction, and classification. In addition, the BDA-CSODL technique follows a multi-level thresholding-based image segmentation approach for the detection of infected regions in medical images. Moreover, a deep convolutional neural network-based Inception v3 method is utilized in this study as the feature extractor. A Stochastic Gradient Descent (SGD) model is used for the parameter tuning process. Furthermore, a CSO with Long Short-Term Memory (CSO-LSTM) model is employed as the classification model to determine the appropriate class labels. Both the SGD and CSO design approaches help in improving the overall image classification performance of the proposed BDA-CSODL technique. A wide range of simulations was conducted on benchmark medical image datasets, and the comprehensive comparative results demonstrate the supremacy of the proposed BDA-CSODL technique under different measures.
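As a stand-in for the multi-level thresholding segmentation stage (the abstract does not state the thresholding criterion, so quantile-derived thresholds are an assumption here), the sketch below partitions a grayscale image into several intensity classes.

```python
import numpy as np

def multilevel_threshold(img, k=3):
    """Segment a grayscale image into k+1 intensity classes using quantile-based thresholds."""
    thresholds = np.quantile(img, [i / (k + 1) for i in range(1, k + 1)])
    return np.digitize(img, thresholds), thresholds

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    img = rng.normal(120, 40, (64, 64)).clip(0, 255)     # toy stand-in for a medical image
    labels, th = multilevel_threshold(img, k=3)
    print("thresholds:", np.round(th, 1), "classes:", np.unique(labels))
```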
Data hiding (DH) is an important technology for securely transmitting secret data in networks and has increasingly become a research hotspot throughout the world. However, for Joint Photographic Experts Group (JPEG) images, it is difficult to balance the contradiction among embedding capacity, visual quality, and file size increment in existing data hiding schemes. Thus, to deal with this problem, a high-imperceptibility data hiding scheme for JPEG images is proposed based on direction modification. First, the proposed scheme sorts all of the quantized discrete cosine transform (DCT) blocks in ascending order according to the number of non-consecutive-zero alternating current (AC) coefficients. Then it selects non-consecutive-zero AC coefficients with absolute values less than or equal to 1 at the same frequency position in two adjacent blocks for pairing. Finally, 2 bits of secret data can be embedded into a coefficient pair by using the filled reference matrix and the designed direction modification rules. The experiments were conducted on 5 standard test images and 1000 images from the BOSSbase dataset, respectively. The experimental results showed that the visual quality of the proposed scheme was improved by 1-4 dB compared with the comparison schemes, and the file size increment was reduced to at most 15% of that of the comparison schemes.
In this paper the application of image enhancement techniques to potential field data is briefly described and two improved enhancement methods are introduced. One method is derived from the histogram equalization technique and automatically determines the color spectra of geophysical maps. Colors can be properly distributed, and visual effects and resolution can be enhanced by the method. The other method is based on the modified Radon transform and gradient calculation and is used to detect and enhance linear features in gravity and magnetic images. The method facilitates the detection of line segments in the transform domain. Tests with synthetic images and real data show the methods to be effective in feature enhancement.
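A minimal sketch of the histogram-equalization idea for color mapping is given below: grid values are replaced by their ranks so that each color index in the spectrum is used about equally often. The 256-level color table is an assumption; the paper's own method additionally determines the color spectra automatically.

```python
import numpy as np

def histogram_equalize(grid, levels=256):
    """Map grid values to color indices so that each index is used about equally often,
    spreading the color spectrum evenly over a geophysical map."""
    flat = grid.ravel()
    ranks = np.argsort(np.argsort(flat))                   # rank of each grid value
    indices = (ranks * levels // flat.size).astype(np.int32)
    return indices.reshape(grid.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    gravity = rng.lognormal(mean=0.0, sigma=1.0, size=(100, 100))   # skewed synthetic anomaly grid
    idx = histogram_equalize(gravity)
    counts = np.bincount(idx.ravel(), minlength=256)
    print("min/max bin counts:", counts.min(), counts.max())        # roughly uniform use of colors
```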
[Objective] To extract desertification information for the Hulun Buir region based on MODIS image data. [Method] Based on MODIS image data with a spatial resolution of 1 km, 5 indicators that could reflect different desertification features were selected for inversion. The desertification information of the Hulun Buir region was extracted by decision tree classification. [Result] The desertification area of the Hulun Buir region is 33 862 km2, accounting for 24% of the total area, and it is mainly dominated by sandy desertification. Through field verification and mining-point validation against high-resolution interpretation data, the overall accuracy of this evaluation is above 89%. [Conclusion] The evaluation method used in this study is not only effective for large-scale regional desertification monitoring but also shows good evaluation performance.
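A hedged sketch of the decision-tree classification step is shown below, using scikit-learn on synthetic stand-ins for the five MODIS-derived indicators. The indicator definitions, labels, and tree depth are assumptions for illustration, not the paper's actual inversion products or field-verified classes.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Toy stand-ins for 5 desertification indicators (e.g., vegetation index, albedo, surface temperature ...).
rng = np.random.default_rng(0)
n = 2000
X = rng.random((n, 5))
# Synthetic labeling rule: low "vegetation" and high "albedo" => desertified (label 1).
y = ((X[:, 0] < 0.35) & (X[:, 1] > 0.5)).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(Xtr, ytr)
print("accuracy:", round(accuracy_score(yte, tree.predict(Xte)), 3))
```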
This study enhances image authentication by securing hidden watermarking data via shares generated from counting-based secret sharing. The trustfulness of the shares makes secret sharing an applicable privacy tool for the authentication of real-life complex platforms. This research adjusts the embedding of the watermarking data by innovatively redistributing the shares so that they are spread over all the images. The proposed watermarking technique scatters the share bits, implanting them at different least significant bits of the image pixels, thereby boosting the overall trust and authentication practicality. The experimental performance analysis shows that this improved image watermarking authentication (capacity) is on average 33%-67% better than other related exclusive-OR-oriented and octagon approaches. Interestingly, these measured improvements did not degrade the robustness and security of the system, inspiring a novel track of future counting-based secret-sharing authentication research.
Scientists are dedicated to studying the detection of Alzheimer's disease onset to find a cure, or at the very least, medication that can slow the progression of the disease. This article explores the effectiveness of longitudinal data analysis, artificial intelligence, and machine learning approaches based on magnetic resonance imaging and positron emission tomography neuroimaging modalities for progression estimation and the detection of Alzheimer's disease onset. The significance of feature extraction in highly complex neuroimaging data, the identification of vulnerable brain regions, and the determination of threshold values for plaques, tangles, and neurodegeneration of these regions will be evaluated extensively. Developing automated methods to improve the aforementioned research areas would enable specialists to determine the progression of the disease and find the link between the biomarkers and more accurate detection of Alzheimer's disease onset.
Assessment of reservoir and fracture parameters is necessary to optimize oil production, especially in heterogeneous reservoirs. Cores and image logs are regarded as two of the best methods for this aim; however, due to core limitations, using image logs is considered the best approach. This study uses electrical image logs from the carbonate Asmari Formation reservoir in the Zagros Basin, SW Iran, to evaluate natural fractures, the porosity system, the permeability profile, and the heterogeneity index, and compares the results with core and well data. The results indicate that electrical image logs are reliable for evaluating fracture and reservoir parameters when no core is available for a well. Based on the results from the formation micro-imager (FMI) and electrical micro-imager (EMI), the Asmari Formation was recognized as a completely fractured reservoir in the studied field, and the reservoir parameters are mainly controlled by fractures. Furthermore, core and image logs indicate that the secondary porosity varies from 0% to 10%. The permeability indicator shows that zones 3 and 5 have higher permeability indices. The image log permeability index shows a very reasonable permeability profile after scaling against core and modular dynamics tester mobility, mud loss, and production index, varying between 1 and 1000 md. In addition, no relationship was observed between core porosity and permeability, while the permeability relied heavily on fracture aperture. Therefore, fracture aperture was considered the most important parameter for the determination of permeability. Sudden changes were also observed in the permeability trend at zones 1-1 and 5, due to the high fracture aperture. It can be concluded that electrical image logs (FMI and EMI) are usable for evaluating both reservoir and fracture parameters in wells with no core data in the Zagros Basin, SW Iran.
A nonlinear data analysis algorithm, namely empirical data decomposition (EDD), is proposed, which can perform adaptive analysis of observed data. The analysis filter, which is not a linear constant-coefficient filter, is automatically determined by the observed data and is able to implement multi-resolution analysis, as the wavelet transform does. The algorithm is suitable for analyzing non-stationary data and can effectively remove the correlation in the observed data. Then, through discussing the applications of EDD in image compression, the paper presents a two-dimensional data decomposition framework and makes some modifications to the contexts used by Embedded Block Coding with Optimized Truncation (EBCOT). Simulation results show that EDD is more suitable for non-stationary image data compression.
Recently, reversible data hiding in encrypted images (RDHEI) has attracted extensive attention and can be used effectively in secure cloud computing and privacy protection. In this paper, a novel RDHEI scheme based on block classification and permutation is proposed. The content owner first divides the original image into non-overlapping blocks and then sets a threshold to classify these blocks into smooth and non-smooth blocks. After block classification, the content owner utilizes a specific encryption method, including stream cipher encryption and block permutation, to protect the image content securely. For the encrypted image, the data hider embeds additional secret information in the most significant bits (MSB) of the encrypted pixels in smooth blocks, and the final marked image is obtained. At the receiver side, the secret data can be extracted correctly with the data-hiding key. When the receiver only has the encryption key, the decrypted image is obtained after stream cipher decryption, block scrambling decryption, and MSB error prediction with the threshold. When both the data-hiding key and the encryption key are available, the receiver can identify the smooth and non-smooth blocks correctly and the MSBs in smooth blocks will be predicted correctly; hence, the receiver can recover the marked image losslessly. Experimental results demonstrate that our scheme achieves better rate-distortion performance than some state-of-the-art schemes.
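The block-classification step can be sketched as follows: the image is tiled into non-overlapping blocks, and each block is labeled smooth or non-smooth by comparing a simple fluctuation measure against a threshold. The fluctuation measure and the threshold value are assumptions standing in for the paper's criterion; the encryption, permutation, and MSB embedding stages are not shown.

```python
import numpy as np

def classify_blocks(img, block=8, threshold=20.0):
    """Label each non-overlapping block as smooth (True) or non-smooth (False)."""
    h, w = img.shape
    labels = {}
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            b = img[i:i + block, j:j + block].astype(np.float64)
            # Fluctuation: mean absolute difference between neighbouring pixels within the block.
            fluct = (np.abs(np.diff(b, axis=0)).mean() + np.abs(np.diff(b, axis=1)).mean()) / 2.0
            labels[(i, j)] = fluct < threshold
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(9)
    img = np.tile(np.linspace(0, 255, 64), (64, 1))          # smooth gradient image
    img[:32, :32] = rng.integers(0, 256, (32, 32))           # one noisy (non-smooth) quadrant
    labels = classify_blocks(img.astype(np.uint8))
    print("smooth blocks:", sum(labels.values()), "of", len(labels))
```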
Intelligent identification of sandstone slice images using deep learning technology is the development trend of mineral identification, and accurate mineral particle segmentation is the most critical step for intelligent identification. A typical identification model requires many training samples to learn as many distinguishable features as possible. However, limited by the difficulty of data acquisition, the high cost of labeling, and privacy protection, the number of available samples is sparse and cannot meet the training requirements of deep learning image identification models. In order to increase the number of samples and improve the training effect of deep learning models, this paper proposes a tight sandstone image data augmentation method that combines the advantages of the data deformation method and the data oversampling method, taking the Putaohua reservoir in the Sanzhao Sag of the Songliao Basin as the target area. First, the Style Generative Adversarial Network (StyleGAN) is improved to generate high-resolution tight sandstone images and improve data diversity. Second, we improve the Automatic Data Augmentation (AutoAugment) algorithm to search for the optimal augmentation strategy and expand the data scale. Finally, we design comparison experiments to demonstrate that this method has obvious advantages in generated image quality and in improving the identification effect of deep learning models in real application scenarios.