Many fields, such as neuroscience, are experiencing a vast proliferation of cellular data, underscoring the need for organizing and interpreting large datasets. A popular approach partitions data into manageable subsets via hierarchical clustering, but objective methods to determine the appropriate classification granularity are missing. We recently introduced a technique to systematically identify when to stop subdividing clusters, based on the fundamental principle that cells must differ more between than within clusters. Here we present the corresponding protocol to classify cellular datasets by combining data-driven unsupervised hierarchical clustering with statistical testing. These general-purpose functions are applicable to any cellular dataset that can be organized as two-dimensional matrices of numerical values, including molecular, physiological, and anatomical datasets. We demonstrate the protocol using cellular data from the Janelia MouseLight project to characterize morphological aspects of neurons.
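The stopping principle above — accept a split only if cells differ more between clusters than within them — can be sketched in miniature on synthetic data. This is an illustrative sketch of the general idea, not the published protocol's actual functions; all variable names and the choice of a Mann-Whitney test are assumptions.

```python
# Sketch of the stopping rule: accept a split only if between-cluster
# distances statistically exceed within-cluster distances.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, squareform
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
# Two synthetic "cell types", each described by 5 numerical features
cells = np.vstack([rng.normal(0, 1, (30, 5)), rng.normal(4, 1, (30, 5))])

Z = linkage(cells, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")   # candidate 2-way split

D = squareform(pdist(cells))
same = labels[:, None] == labels[None, :]
iu = np.triu_indices_from(D, k=1)                 # each pair counted once
within = D[iu][same[iu]]
between = D[iu][~same[iu]]

# One-sided test: do between-cluster distances exceed within-cluster ones?
stat, p = mannwhitneyu(between, within, alternative="greater")
accept_split = p < 0.05
print(accept_split)
```

If the test fails, the subdivision would be rejected and the parent cluster kept, which is the essence of the granularity criterion described in the abstract.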
There is a growing body of clinical research on the utility of synthetic data derivatives, an emerging research tool in medicine. In nephrology, clinicians can use machine learning and artificial intelligence as powerful aids in their clinical decision-making while also preserving patient privacy. This is especially important given the epidemiology of chronic kidney disease, renal oncology, and hypertension worldwide. However, there remains a need to create a framework for guidance regarding how to better utilize synthetic data as a practical application in this research.
Time-domain airborne electromagnetic (AEM) data are frequently subject to interference from various types of noise, which can reduce the data quality and affect data inversion and interpretation. Traditional denoising methods primarily deal with the data directly, without analyzing them in detail; thus, the results are not always satisfactory. In this paper, we propose a method based on dictionary learning for EM data denoising. This method uses dictionary learning to perform feature analysis and to extract and reconstruct the true signal; in the process of dictionary learning, the random noise is filtered out as residuals. To verify the effectiveness of this dictionary learning approach for denoising, we use a fixed overcomplete discrete cosine transform (ODCT) dictionary algorithm, the method-of-optimal-directions (MOD) dictionary learning algorithm, and the K-singular value decomposition (K-SVD) dictionary learning algorithm to denoise decay curves at single points and to denoise profile data for different time channels in time-domain AEM. The results show obvious differences among the three dictionaries for denoising AEM data, with the K-SVD dictionary achieving the best performance.
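The dictionary-denoising idea — learn atoms from the noisy curves, keep the sparse reconstruction as signal, and leave noise in the residual — can be sketched with scikit-learn's MOD-style dictionary learner. This is a hedged stand-in: K-SVD itself is not in scikit-learn, and the synthetic decay curves and all parameters below are illustrative assumptions, not the paper's setup.

```python
# Learn an overcomplete dictionary on noisy AEM-like decay curves and
# reconstruct each curve from a few atoms; the un-representable part
# (mostly random noise) stays in the residual.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 64)
# 200 synthetic decay curves with varying decay rates
clean = np.exp(-5 * t[None, :] * rng.uniform(0.5, 2.0, (200, 1)))
noisy = clean + rng.normal(0, 0.1, clean.shape)

dico = MiniBatchDictionaryLearning(n_components=32, alpha=0.1,
                                   transform_algorithm="omp",
                                   transform_n_nonzero_coefs=3,
                                   random_state=0)
codes = dico.fit(noisy).transform(noisy)   # sparse code: 3 atoms per curve
denoised = codes @ dico.components_

err_noisy = np.mean((noisy - clean) ** 2)
err_denoised = np.mean((denoised - clean) ** 2)
print(err_denoised < err_noisy)
```

Because each curve is rebuilt from only three learned atoms, the reconstruction cannot follow the point-to-point noise, which is exactly the "noise filtered out as residuals" behavior the abstract describes.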
In this paper, the application of an algorithm for precipitation retrieval based on Himawari-8 (H8) satellite infrared data is studied. Based on GPM precipitation data and H8 infrared-channel brightness temperature data, a corresponding "precipitation field dictionary" and "channel brightness temperature dictionary" are formed. The retrieval of the precipitation field from brightness temperature data is studied through the k-nearest neighbor (KNN) classification rule and regularization constraints. Firstly, the corresponding "dictionary" is constructed according to the training sample database of matched GPM precipitation data and H8 brightness temperature data. Secondly, exploiting the fact that small-scale precipitation features often repeat across different storm environments, KNN is used to identify the spectral brightness temperature signal of "precipitation" and "non-precipitation" based on the dictionary. Finally, the precipitation field retrieval is carried out in the precipitation-signal "subspace" based on the regularization-constraint method. In the retrieval process, the contribution rate of brightness temperature retrieval for different channels is determined by a Bayesian model averaging (BMA) model. Preliminary experimental results based on "quantitative" evaluation indexes show that the H8-retrieved precipitation has a good correlation with the GPM truth value, with a small error and similar structure.
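The KNN screening step described above can be sketched as a two-class brightness-temperature classifier. The synthetic temperatures below stand in for matched H8/GPM training pairs and are purely illustrative; the only physical assumption used is that colder cloud tops (lower IR brightness temperature) are more likely to precipitate.

```python
# Minimal sketch of the KNN step: label a pixel "precipitation" (1) or
# "non-precipitation" (0) from multichannel brightness temperatures.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)
rain = rng.normal(210, 5, (100, 3))      # cold cloud tops (K), 3 IR channels
no_rain = rng.normal(280, 5, (100, 3))   # warm, clear-ish scenes
X = np.vstack([rain, no_rain])
y = np.array([1] * 100 + [0] * 100)

knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
pixel = np.array([[212.0, 208.0, 215.0]])   # a cold, likely-raining pixel
print(knn.predict(pixel)[0])
```

In the paper's pipeline this label would then gate the regularized retrieval, which runs only in the "precipitation" subspace.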
Enhancing seismic resolution is a key component in seismic data processing, which plays a valuable role in raising the prospecting accuracy of oil reservoirs. However, in noisy situations, existing resolution enhancement methods struggle to yield satisfactory processing outcomes for reservoir characterization. To solve this problem, we develop a new approach for simultaneous denoising and resolution enhancement of seismic data based on convolution dictionary learning. First, an elastic convolution dictionary learning algorithm is presented to efficiently learn a convolution dictionary with stronger representation capability from the noisy data to be processed. Specifically, the algorithm introduces the elastic L1/2 norm as a sparsity constraint and employs a steepest gradient descent strategy to efficiently solve the computationally expensive frequency-domain linear system within a half-quadratic splitting framework. Then, based on the learned convolution dictionary, a weighted convolutional sparse representation paradigm is designed to encode the noisy data and acquire an optimal sparse approximation of the effective signal. Subsequently, a high-resolution dictionary with a broadband spectrum is constructed by the proposed parameter scaling strategy and matched filtering technique on the basis of atomic spectrum modeling. Finally, the optimal sparse approximation of the effective signal and the constructed high-resolution dictionary are used for data reconstruction to obtain a seismic signal with high resolution and a high signal-to-noise ratio. Synthetic and field dataset examples are executed to check the effectiveness and reliability of the developed method. The results indicate that this method has a more competitive performance in seismic applications compared with conventional deconvolution and spectral whitening methods.
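The convolutional sparse-coding step at the heart of this approach can be illustrated with a tiny ISTA (iterative soft-thresholding) solver: a noisy trace is encoded as a wavelet atom convolved with a sparse code. This is a simplified stand-in — the paper's elastic L1/2 norm, half-quadratic splitting, and dictionary update are not reproduced, and the Ricker wavelet, step size, and penalty are illustrative choices.

```python
# Convolutional sparse coding via ISTA: find sparse x so that atom * x ≈ y.
import numpy as np

rng = np.random.default_rng(3)
n = 256
t = np.arange(-16, 17) / 8.0
atom = (1 - 2 * (np.pi * t) ** 2) * np.exp(-(np.pi * t) ** 2)  # Ricker wavelet
atom /= np.linalg.norm(atom)

x_true = np.zeros(n)
x_true[[60, 128, 200]] = [1.0, -0.8, 0.6]          # sparse reflectivity spikes
y = np.convolve(x_true, atom, mode="same") + rng.normal(0, 0.02, n)

x, step, lam = np.zeros(n), 0.1, 0.1
for _ in range(300):
    # gradient step on 0.5*||atom * x - y||^2, then soft threshold (L1 prox)
    resid = np.convolve(x, atom, mode="same") - y
    grad = np.correlate(resid, atom, mode="same")
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

recon = np.convolve(x, atom, mode="same")
print(recon.shape)
```

The reconstruction follows the wavelet-shaped signal while the soft threshold keeps the code sparse, so most of the random noise is left in the residual — the same separation the weighted convolutional sparse representation performs at larger scale.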
The critical nature of satellite network traffic provides a challenging environment in which to detect intrusions. The intrusion detection method presented aims to raise an alert whenever satellite network signals begin to exhibit anomalous patterns, as determined by a Euclidean distance metric. In line with anomaly-based intrusion detection systems, the method relies heavily on building a model of "normal" through the creation of a signal dictionary using windowing and k-means clustering. The results for three signals from our case study are discussed to highlight the benefits and drawbacks of the method. Our preliminary results demonstrate that the clustering technique used has great potential for intrusion detection for non-periodic satellite network signals.
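The signal-dictionary idea can be sketched in a few lines: window a "normal" telemetry signal, cluster the windows with k-means to form the dictionary, and flag any new window whose Euclidean distance to every cluster centre exceeds a threshold. The signal, window size, cluster count, and threshold rule below are all illustrative assumptions, not the case-study configuration.

```python
# Build a k-means "signal dictionary" from normal windows, then score a
# new window by its distance to the nearest dictionary entry.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
win = 32
normal = np.sin(np.linspace(0, 40 * np.pi, 4000)) + rng.normal(0, 0.05, 4000)
windows = np.array([normal[i:i + win] for i in range(0, 4000 - win, win)])

km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(windows)
# Threshold: worst distance from any training window to its own centre
d_train = np.linalg.norm(windows - km.cluster_centers_[km.labels_], axis=1)
threshold = d_train.max()

anomaly = rng.normal(0, 3, win)               # a noise burst, not sinusoidal
d_new = np.linalg.norm(km.cluster_centers_ - anomaly, axis=1).min()
print(d_new > threshold)                      # raise an alert?
</n```

Setting the threshold from training-set distances is one simple choice; a production system would likely calibrate it against an acceptable false-alarm rate instead.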
In order to address the problems of single encryption algorithms, such as low encryption efficiency and unreliable metadata for static data storage on big data platforms in the cloud computing environment, we propose a Hadoop-based big data secure storage scheme. Firstly, in order to disperse the NameNode service from a single server to multiple servers, we combine the HDFS federation and HDFS high-availability mechanisms, and use the ZooKeeper distributed coordination mechanism to coordinate each node to achieve dual-channel storage. Then, we improve the ECC encryption algorithm for the encryption of ordinary data, and adopt a homomorphic encryption algorithm to encrypt data that needs to be calculated. To accelerate the encryption, we adopt a dual-thread encryption mode. Finally, the HDFS control module is designed to combine the encryption algorithm with the storage model. Experimental results show that the proposed solution solves the problem of a single point of failure of metadata, performs well in terms of metadata reliability, and can realize the fault tolerance of the server. The improved encryption algorithm integrates the dual-channel storage mode, and the encryption storage efficiency improves by 27.6% on average.
With the development of Industry 4.0 and big data technology, the Industrial Internet of Things (IIoT) is hampered by inherent issues such as privacy, security, and fault tolerance, which pose certain challenges to its rapid development. Blockchain technology has immutability, decentralization, and autonomy, which can greatly remedy the inherent defects of the IIoT. In the traditional blockchain, data is stored in a Merkle tree. As data continues to grow, the scale of the proofs used to validate it grows as well, threatening the efficiency, security, and reliability of blockchain-based IIoT. Accordingly, this paper first analyzes the inefficiency of the traditional blockchain structure in verifying the integrity and correctness of data. To solve this problem, a new vector commitment (VC) structure, Partition Vector Commitment (PVC), is proposed by improving the traditional VC structure. Secondly, this paper uses PVC instead of the Merkle tree to store big data generated by IIoT; PVC improves the efficiency of traditional VC in the commitment and opening processes. Finally, this paper uses PVC to build a blockchain-based IIoT data security storage mechanism and carries out a comparative experimental analysis. This mechanism can greatly reduce communication loss and maximize the rational use of storage space, which is of great significance for maintaining the security and stability of blockchain-based IIoT.
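The motivation can be shown in miniature: a Merkle inclusion proof carries one sibling hash per tree level, so proof size grows as log2(n) with the number of data blocks, which is the growth that vector commitments (with constant-size openings) aim to avoid. The sketch below is a generic binary Merkle tree, not the paper's PVC construction.

```python
# Merkle root over n leaves, plus the size of a single-leaf inclusion proof.
import hashlib
import math

def merkle_proof_len(n_leaves: int) -> int:
    """Sibling hashes needed to prove one leaf: one per tree level."""
    return math.ceil(math.log2(n_leaves))

def merkle_root(leaves):
    level = [hashlib.sha256(x).digest() for x in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

leaves = [f"iiot-record-{i}".encode() for i in range(1024)]
root = merkle_root(leaves)
# 32-byte root; 10 hashes to prove 1 of 1024 leaves, 20 for 1 of ~1M
print(len(root), merkle_proof_len(1024), merkle_proof_len(1024 * 1024))
```

Doubling the dataset adds another hash to every proof, so validation bandwidth in a blockchain-based IIoT grows with the data volume — exactly the inefficiency the PVC structure targets.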
Time-series data provide important information in many fields, and their processing and analysis have been the focus of much research. However, detecting anomalies is very difficult due to data imbalance, temporal dependence, and noise. Therefore, methodologies for data augmentation and for converting time-series data into images for analysis have been studied. This paper proposes a fault detection model that uses time-series data augmentation and transformation to address the problems of data imbalance, temporal dependence, and robustness to noise. Data augmentation is performed by adding Gaussian noise, with the noise level set to 0.002, to maximize the generalization performance of the model. In addition, we use the Markov transition field (MTF) method to effectively visualize the dynamic transitions of the data while converting the time-series data into images; this enables the identification of patterns in time-series data and assists in capturing their sequential dependencies. For anomaly detection, the PatchCore model is applied and shows excellent performance, and the detected anomaly areas are represented as heat maps; by applying an anomaly map to the original image, it is possible to capture the areas where anomalies occur. The performance evaluation shows that both F1-score and accuracy are high when time-series data are converted to images. Additionally, when processed as images rather than as time series, there was a significant reduction in both the size of the data and the training time. The proposed method can provide an important springboard for research in the field of anomaly detection using time-series data, and it helps make the analysis of complex patterns in the data lightweight.
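The MTF conversion itself is compact enough to sketch: quantile-bin the series, estimate the first-order Markov transition matrix between bins, then map every time-index pair (i, j) to the transition probability between their bins. The bin count and test signal below are illustrative, and this sketch omits the blurring/aggregation step some MTF implementations add.

```python
# Markov Transition Field: turn a 1-D series into an (n x n) image of
# bin-to-bin transition probabilities.
import numpy as np

def markov_transition_field(x, n_bins=8):
    # Quantile binning so each bin holds roughly the same number of points
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    q = np.digitize(x, edges)                     # bin index per time step
    W = np.zeros((n_bins, n_bins))
    for a, b in zip(q[:-1], q[1:]):               # count i -> j transitions
        W[a, b] += 1
    W /= np.maximum(W.sum(axis=1, keepdims=True), 1)   # row-normalize
    return W[np.ix_(q, q)]                        # (len(x), len(x)) image

t = np.linspace(0, 4 * np.pi, 128)
mtf = markov_transition_field(np.sin(t))
print(mtf.shape)
```

The resulting image is what an image-domain detector such as PatchCore would consume, with anomalous dynamics showing up as unusual transition-probability patterns.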
In electromagnetic countermeasure circumstances, synthetic aperture radar (SAR) imagery usually suffers severe quality degradation from modulated interrupted-sampling repeater jamming (MISRJ), which usually exhibits considerable coherence with the SAR transmission waveform together with periodic modulation patterns. This paper develops an MISRJ suppression algorithm for SAR imagery based on online dictionary learning. In the algorithm, the temporal properties of the jamming modulation are exploited by extracting and sorting MISRJ slices using fast-time autocorrelation. Online dictionary learning is then applied to separate real signals from the jamming slices. Under the learned representation, time-varying MISRJs are suppressed effectively. Both simulated and real-measured SAR data are used to confirm the advantages of the method in suppressing time-varying MISRJs over traditional methods.
The word dictionary is very significant, especially in countries with old traditions of using dictionaries in education, literature, and other fields, as well as in the process of personal growth and development. For a majority, dictionaries are authoritative and concentrated information sources (Baldunčiks, 2012a, p. 7). As almost half a century has passed since the first publication of P. Galenieks's Botanical Dictionary (Botaniskā vārdnīca) in 1950, it is necessary to develop a new dictionary of botanical terms in which Latvian would be one of the languages involved, as no new dictionaries of this type have been published lately. There is no doubt that it is necessary to develop such a dictionary, and several studies have been carried out to acknowledge this fact (e.g., Balode, 2012, pp. 16-61; Helviga & Peina, 2016, pp. 127-158; Sviķe, 2015, pp. 131-140; 2016). The analysis of the lexicographic survey described in this article is a justification thereof. A dictionary of botanical terms would be a great aid not only in the work of professional translators; it would be useful to anyone who encounters terms of this field in daily work and translates such terms, for example, translators, teachers of specialised subjects (natural sciences, geography, botany), gardeners, florists, or nature enthusiasts. The study examines the results of a survey of the potential users of the dictionary.
Mg alloys possess an inherent plastic anisotropy owing to the selective activation of deformation mechanisms depending on the loading condition. This characteristic results in a diverse range of flow curves that vary with the deformation condition. This study proposes a novel approach for accurately predicting the anisotropic deformation behavior of wrought Mg alloys using machine learning (ML) with data augmentation. The developed model combines four key strategies from data science: learning the entire flow curves, generative adversarial networks (GAN), algorithm-driven hyperparameter tuning, and a gated recurrent unit (GRU) architecture. The proposed model, namely GAN-aided GRU, was extensively evaluated for various predictive scenarios, such as interpolation, extrapolation, and a limited dataset size. The model exhibited significant predictability and improved generalizability for estimating the anisotropic compressive behavior of ZK60 Mg alloys under 11 annealing conditions and for three loading directions. The GAN-aided GRU results were superior to those of previous ML models and constitutive equations. The superior performance was attributed to hyperparameter optimization, GAN-based data augmentation, and the inherent predictivity of the GRU for extrapolation. As a first attempt to employ ML techniques other than artificial neural networks, this study proposes a novel perspective on predicting the anisotropic deformation behaviors of wrought Mg alloys.
There are challenges in reliability evaluation for insulated gate bipolar transistors (IGBTs) on electric vehicles, such as junction temperature measurement and limited computational and storage resources. In this paper, a junction temperature estimation approach based on a neural network, at no additional cost, is proposed, and the lifetime calculation for IGBTs using electric vehicle big data is performed. The direct current (DC) voltage, operating current, switching frequency, negative-temperature-coefficient (NTC) thermistor temperature, and IGBT lifetime are the inputs, and the junction temperature (T_j) is the output. With the rainflow counting method, the classified irregular temperature swings are fed into the lifetime model to obtain the cycles to failure. A fatigue accumulation method is then used to calculate the IGBT lifetime. To work around the limited computational and storage resources of electric vehicle controllers, the IGBT lifetime calculation runs on a big data platform; the lifetime is then transmitted wirelessly to the electric vehicles as an input to the neural network. Thus the junction temperature of the IGBT under long-term operating conditions can be accurately estimated. A test platform comprising the motor controller and the vehicle big data server is built for the IGBT accelerated aging test. Subsequently, IGBT lifetime predictions are derived from the junction temperature estimates of the neural network method and the thermal network method. The experiment shows that lifetime prediction based on a neural network with big data achieves higher accuracy than the thermal network approach, which improves the reliability evaluation of the system.
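The lifetime step — rainflow-counted temperature swings fed into a cycles-to-failure law, then accumulated with Miner's rule — reduces to a few lines once the counting is done. The Coffin-Manson-style coefficients and the daily swing histogram below are made-up illustrative values, not measured IGBT parameters, and the rainflow counting itself is assumed to have happened upstream.

```python
# Back-of-the-envelope Miner's-rule lifetime from a daily histogram of
# (already rainflow-counted) junction-temperature swings.

def cycles_to_failure(delta_tj: float, a: float = 3.0e14, n: float = 5.0) -> float:
    """Nf = a * dTj^-n  (Coffin-Manson form; constants are illustrative)."""
    return a * delta_tj ** (-n)

# (swing amplitude in K, cycle count per day), as rainflow counting outputs
daily_histogram = [(20.0, 500), (40.0, 120), (60.0, 10)]

# Miner's rule: sum each bin's fractional damage; failure when D reaches 1
daily_damage = sum(count / cycles_to_failure(dtj)
                   for dtj, count in daily_histogram)
lifetime_days = 1.0 / daily_damage
print(lifetime_days > 0)
```

Note the fifth-power sensitivity: the ten 60 K swings per day contribute damage comparable to the five hundred 20 K swings, which is why accurate junction temperature estimation matters so much for the lifetime result.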
Effective reflection information of many different kinds is often contaminated by external and random noise concealed in seismic data. A traditional single or fixed transform is not suited to exploiting the complicated characteristics of such signals or to attenuating the noise. In recent years, a novel method called morphological component analysis (MCA) has been put forward to separate different geometrical components by amalgamating several mutually irrelevant transforms. By studying the locally singular and smooth linear components of seismic data, we propose a noise suppression method that integrates the advantages of the adaptive K-singular value decomposition (K-SVD) and wave atom dictionaries to depict the diverse morphological features of seismic signals. Numerical results indicate that our method can dramatically suppress undesired noise, preserve the information of geologic bodies and geological structures, and improve the signal-to-noise ratio of the data. We also demonstrate the superior performance of this approach by comparing it with other dictionaries such as the discrete cosine transform (DCT), the undecimated discrete wavelet transform (UDWT), and the curvelet transform. This algorithm provides new ideas for data processing to improve the quality and signal-to-noise ratio of seismic data.
As the risks associated with air turbulence are intensified by climate change and the growth of the aviation industry, it has become imperative to monitor and mitigate these threats to ensure civil aviation safety. The eddy dissipation rate (EDR) has been established as the standard metric for quantifying turbulence in civil aviation. This study aims to explore a universally applicable symbolic classification approach based on genetic programming to detect turbulence anomalies using quick access recorder (QAR) data. The detection of atmospheric turbulence is approached as an anomaly detection problem. Comparative evaluations demonstrate that this approach performs on par with direct EDR calculation methods in identifying turbulence events. Moreover, comparisons with alternative machine learning techniques indicate that the proposed technique is the optimal methodology currently available. In summary, the use of symbolic classification via genetic programming enables accurate turbulence detection from QAR data, comparable to that with established EDR approaches and surpassing that achieved with machine learning algorithms. This finding highlights the potential of integrating symbolic classifiers into turbulence monitoring systems to enhance civil aviation safety amidst rising environmental and operational hazards.
Since the impoundment of the Three Gorges Reservoir (TGR) in 2003, numerous slopes have experienced noticeable movement or destabilization owing to reservoir level changes and seasonal rainfall. One case is the Outang landslide, a large-scale and active landslide on the south bank of the Yangtze River. The latest monitoring data and site investigations available are analyzed to establish the spatial and temporal deformation characteristics of the landslide. Data mining technology, including two-step clustering and the Apriori algorithm, is then used to identify the dominant triggers of landslide movement. In the data mining process, the two-step clustering method clusters the candidate triggers and the displacement rate into several groups, and the Apriori algorithm generates correlation criteria for the cause and effect. The analysis considers multiple locations on the landslide and incorporates two time scales: long-term deformation on a monthly basis and short-term deformation on a daily basis. This analysis shows that the deformations of the Outang landslide are driven by both rainfall and reservoir water, while the deformation varies spatiotemporally mainly due to differences in local responses to hydrological factors. The data mining results reveal different dominant triggering factors depending on the monitoring frequency: the monthly and bi-monthly cumulative rainfall control the monthly deformation, and the 10-day cumulative rainfall and the 5-day cumulative drop of the reservoir water level dominate the daily deformation of the landslide. It is concluded that the spatiotemporal deformation pattern and the data mining rules associated with precipitation and reservoir water level have the potential to be broadly implemented for improving landslide prevention and control in dam reservoirs and other landslide-prone areas.
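The Apriori-style rule mining above boils down to support and confidence over discretized daily observations. The toy transactions below are invented labels (e.g., "heavy_rain", "fast_move") standing in for the clustered trigger and displacement-rate categories; they are not the Outang monitoring data.

```python
# Support and confidence for one candidate association rule of the form
# {heavy rainfall} -> {fast landslide movement}, Apriori-style.

transactions = [
    {"heavy_rain", "fast_move"}, {"heavy_rain", "fast_move"},
    {"heavy_rain", "slow_move"}, {"light_rain", "slow_move"},
    {"light_rain", "slow_move"}, {"drawdown", "fast_move"},
]

def support(itemset):
    """Fraction of transactions containing the whole itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """P(consequent | antecedent) estimated from the transactions."""
    return support(antecedent | consequent) / support(antecedent)

conf = confidence({"heavy_rain"}, {"fast_move"})
print(round(conf, 3))  # 2 of the 3 heavy-rain days also moved fast
```

Apriori would enumerate all frequent itemsets before scoring rules; here only the single rule of interest is evaluated, which is enough to show how the "dominant trigger" criteria are quantified.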
A benchmark experiment on ^(238)U slab samples was conducted using a deuterium-tritium neutron source at the China Institute of Atomic Energy. The leakage neutron spectra within the 0.8-16 MeV energy range at 60° and 120° were measured using the time-of-flight method. The samples were prepared as rectangular slabs with a 30 cm square base and thicknesses of 3, 6, and 9 cm. The leakage neutron spectra were also calculated using the MCNP-4C program based on the latest evaluated ^(238)U neutron data files from CENDL-3.2, ENDF/B-VIII.0, JENDL-5.0, and JEFF-3.3. Based on the comparison, the deficiencies and potential improvements in the ^(238)U evaluated nuclear data were analyzed. The results showed the following. (1) The calculated results for CENDL-3.2 significantly overestimated the measurements in the elastic scattering energy interval at 60° and 120°. (2) The calculated results for CENDL-3.2 overestimated the measurements in the inelastic scattering energy interval at 120°. (3) The calculated results for CENDL-3.2 significantly overestimated the measurements in the 3-8.5 MeV energy interval at 60° and 120°. (4) The calculated results with JENDL-5.0 were generally consistent with the measurements.
When building a classification model, the scenario where the samples of one class significantly outnumber those of the other class is called data imbalance. Data imbalance causes the trained classification model to favor the majority class (usually defined as the negative class), which may harm the accuracy of the minority class (usually defined as the positive class) and then lead to poor overall performance of the model. This article proposes a method called MSHR-FCSSVM for imbalanced data classification, based on a new hybrid resampling approach (MSHR) and a new fine cost-sensitive support vector machine (CS-SVM) classifier (FCSSVM). The MSHR measures the separability of each negative sample through its Silhouette value, calculated using the Mahalanobis distance between samples; on this basis, so-called pseudo-negative samples are screened out to generate new positive samples through linear interpolation (over-sampling step) and are finally deleted (under-sampling step). This approach replaces pseudo-negative samples with generated new positive samples one by one to clear up the inter-class overlap on the borderline, without changing the overall scale of the dataset. The FCSSVM is an improved version of the traditional CS-SVM. It simultaneously considers the influences of both the imbalance in sample numbers and the class distribution on classification, and it finely tunes the class cost weights using an efficient optimization algorithm based on the physical phenomenon of rime ice (the RIME algorithm), with cross-validation accuracy as the fitness function, to accurately adjust the classification borderline. To verify the effectiveness of the proposed method, a series of experiments is carried out on 20 imbalanced datasets, including both mildly and extremely imbalanced ones. The experimental results show that the MSHR-FCSSVM method performs better than the comparison methods in most cases, and that both the MSHR and the FCSSVM play significant roles.
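The over-sampling step — generating a new positive sample by linear interpolation — is the easiest part of MSHR to sketch. The snippet below shows only that interpolation on synthetic minority points; the Silhouette/Mahalanobis screening of pseudo-negative samples and the paired deletion step are omitted, and all names are illustrative.

```python
# Create one synthetic positive sample by linear interpolation between a
# minority-class point and another minority point (SMOTE-style step).
import numpy as np

rng = np.random.default_rng(5)
positives = rng.normal(0, 1, (10, 4))        # minority (positive) class

def interpolate_new_positive(X, rng):
    i, j = rng.choice(len(X), size=2, replace=False)
    lam = rng.uniform()                      # position along the segment
    return X[i] + lam * (X[j] - X[i])

new = interpolate_new_positive(positives, rng)
# A convex combination stays inside the minority class's bounding box
print(bool(np.all(new >= positives.min(0)) and np.all(new <= positives.max(0))))
```

In MSHR each such synthetic positive replaces one screened-out pseudo-negative, which is how the dataset's overall size stays constant while the borderline overlap is cleared.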
Accurate prediction of formation pore pressure is essential to predicting fluid flow and managing hydrocarbon production in petroleum engineering. Recent deep learning techniques have been receiving more interest due to their great potential for pore pressure prediction. However, most traditional deep learning models are less effective at addressing generalization problems. To fill this technical gap, in this work we developed a new adaptive physics-informed deep learning model with high generalization capability to predict pore pressure values directly from seismic data. Specifically, the new model, named CGP-NN, consists of a novel parametric feature extraction approach (1DCPP), a stacked multilayer gated recurrent model (multilayer GRU), and an adaptive physics-informed loss function. Through machine training, the developed model can automatically select the optimal physical model to constrain the results for each pore pressure prediction. The CGP-NN model generalizes best when the physics-related metric λ = 0.5. A hybrid approach combining the Eaton and Bowers methods is also proposed to build machine-learnable labels, addressing the problem of scarce labels. To validate the developed model and methodology, a case study on a complex reservoir in the Tarim Basin was performed, demonstrating high accuracy in pore pressure prediction for new wells along with strong generalization ability. The adaptive physics-informed deep learning approach presented here has potential application in the prediction of pore pressures governed by multiple genesis mechanisms using seismic data.
Funding: supported in part by NIH grants R01NS39600, U01MH114829, and RF1MH128693 (to GAA).
Funding: financially supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (No. XDA14020102), the National Natural Science Foundation of China (Nos. 41774125, 41530320, and 41804098), and the Key National Research Projects of China (Nos. 2016YFC0303100 and 2017YFC0601900).
Abstract: Time-domain airborne electromagnetic (AEM) data are frequently subject to interference from various types of noise, which can reduce the data quality and affect data inversion and interpretation. Traditional denoising methods primarily deal with the data directly, without analyzing them in detail; thus, the results are not always satisfactory. In this paper, we propose a method based on dictionary learning for EM data denoising. This method uses dictionary learning to perform feature analysis and to extract and reconstruct the true signal. In the process of dictionary learning, the random noise is filtered out as residuals. To verify the effectiveness of this dictionary learning approach for denoising, we use a fixed overcomplete discrete cosine transform (ODCT) dictionary algorithm, the method-of-optimal-directions (MOD) dictionary learning algorithm, and the K-singular value decomposition (K-SVD) dictionary learning algorithm to denoise decay curves at single points and to denoise profile data for different time channels in time-domain AEM. The results show obvious differences among the three dictionaries for denoising AEM data, with the K-SVD dictionary achieving the best performance.
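The fixed-dictionary variant above can be illustrated in a few lines: sparse-code a noisy decay curve over an overcomplete DCT dictionary, and whatever the few selected atoms cannot represent is left behind as (mostly) noise. The atom counts, sparsity budget, and synthetic decay curve below are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch: denoising by sparse coding over a fixed ODCT dictionary.
import numpy as np

def odct_dictionary(n, k):
    """Overcomplete DCT dictionary: n-sample atoms, k atoms, unit column norm."""
    t = np.arange(n)
    D = np.cos(np.pi * np.outer(t + 0.5, np.arange(k)) / k)
    return D / np.linalg.norm(D, axis=0)

def omp(D, y, n_atoms):
    """Greedy orthogonal matching pursuit with a fixed sparsity budget."""
    residual, support = y.copy(), []
    for _ in range(n_atoms):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n = 128
clean = np.exp(-np.arange(n) / 25.0)       # idealized single-point decay curve
noisy = clean + rng.normal(0.0, 0.05, n)   # additive random noise
D = odct_dictionary(n, 4 * n)              # 4x overcomplete dictionary
denoised = D @ omp(D, noisy, n_atoms=8)    # residual = filtered-out noise
err_noisy = np.linalg.norm(noisy - clean)
err_denoised = np.linalg.norm(denoised - clean)
```

MOD and K-SVD differ from this sketch in that the dictionary itself is updated from the data between sparse-coding passes rather than held fixed.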
Funding: Supported by the National Natural Science Foundation of China (41805080), the Natural Science Foundation of Anhui Province, China (1708085QD89), the Key Research and Development Program Projects of Anhui Province, China (201904a07020099), and the Open Foundation Project of the Shenyang Institute of Atmospheric Environment, China Meteorological Administration (2016SYIAE14).
Abstract: In this paper, the application of an algorithm for precipitation retrieval based on Himawari-8 (H8) satellite infrared data is studied. Based on GPM precipitation data and H8 infrared channel brightness temperature data, a corresponding "precipitation field dictionary" and "channel brightness temperature dictionary" are formed. The retrieval of the precipitation field from brightness temperature data is studied through the k-nearest neighbors (KNN) classification rule and regularization constraints. Firstly, the corresponding "dictionary" is constructed from the training sample database of matched GPM precipitation data and H8 brightness temperature data. Secondly, exploiting the fact that precipitation characteristics of small organizations in different storm environments are often repeated, KNN is used to identify the spectral brightness temperature signal of "precipitation" and "non-precipitation" based on the dictionary. Finally, the precipitation field retrieval is carried out in the precipitation signal "subspace" using the regularization constraint method. In the retrieval process, the contribution rate of the brightness temperature of different channels is determined by a Bayesian model averaging (BMA) model. Preliminary experimental results based on quantitative evaluation indexes show that the H8-retrieved precipitation correlates well with the GPM truth, with small errors and similar structure.
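The KNN screening step above can be sketched as follows: classify each pixel's multi-channel brightness-temperature vector as "precipitation" or "non-precipitation" by majority vote among its k nearest dictionary entries. The toy dictionary below is synthetic (cold cloud tops labeled rain); actual H8/GPM data handling, the subspace retrieval, and BMA weighting are omitted.

```python
# Illustrative KNN rain / no-rain screening over a brightness-temperature dictionary.
import numpy as np

def knn_classify(dictionary, labels, query, k=3):
    d = np.linalg.norm(dictionary - query, axis=1)  # Euclidean distance to entries
    nearest = np.argsort(d)[:k]
    return int(np.round(labels[nearest].mean()))    # majority vote, labels in {0, 1}

# Synthetic "dictionary": cold cloud tops (low TB, in K) tend to mean rain (label 1).
rng = np.random.default_rng(0)
rain = rng.normal(210.0, 5.0, (50, 3))  # cold pixels, 3 IR channels
dry = rng.normal(280.0, 5.0, (50, 3))   # warm pixels
tb_dict = np.vstack([rain, dry])
lab = np.array([1] * 50 + [0] * 50)
```

In the paper's pipeline, pixels flagged as "precipitation" would then be passed to the regularized retrieval in the precipitation subspace.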
Funding: This work is supported by the Laoshan National Laboratory of Science and Technology Foundation (No. LSKj202203400) and the National Natural Science Foundation of China (No. 41874146).
Abstract: Enhancing seismic resolution is a key component in seismic data processing, which plays a valuable role in raising the prospecting accuracy of oil reservoirs. However, in noisy situations, existing resolution enhancement methods struggle to yield satisfactory processing outcomes for reservoir characterization. To solve this problem, we develop a new approach for simultaneous denoising and resolution enhancement of seismic data based on convolution dictionary learning. First, an elastic convolution dictionary learning algorithm is presented to efficiently learn a convolution dictionary with stronger representation capability from the noisy data to be processed. Specifically, the algorithm introduces the elastic L1/2 norm as a sparsity constraint and employs a steepest gradient descent strategy to efficiently solve the computationally demanding frequency-domain linear system in a half-quadratic splitting framework. Then, based on the learned convolution dictionary, a weighted convolutional sparse representation paradigm is designed to encode the noisy data and acquire an optimal sparse approximation of the effective signal. Subsequently, a high-resolution dictionary with a broadband spectrum is constructed by the proposed parameter scaling strategy and matched filtering technique on the basis of atomic spectrum modeling. Finally, the optimal sparse approximation of the effective signal and the constructed high-resolution dictionary are used for data reconstruction to obtain a seismic signal with high resolution and a high signal-to-noise ratio. Synthetic and field dataset examples are executed to check the effectiveness and reliability of the developed method. The results indicate that this method is more competitive in seismic applications than conventional deconvolution and spectral whitening methods.
Abstract: The critical nature of satellite network traffic provides a challenging environment for detecting intrusions. The intrusion detection method presented aims to raise an alert whenever satellite network signals begin to exhibit anomalous patterns determined by a Euclidean distance metric. In line with anomaly-based intrusion detection systems, the method relies heavily on building a model of "normal" through the creation of a signal dictionary using windowing and k-means clustering. The results for three signals from our case study are discussed to highlight the benefits and drawbacks of the method. Our preliminary results demonstrate that the clustering technique used has great potential for intrusion detection for non-periodic satellite network signals.
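The "model of normal" described above can be sketched end to end: slide a window over a training signal, cluster the windows with k-means to form a small dictionary, then flag a new window as anomalous when its Euclidean distance to the nearest centroid exceeds a threshold learned from training data. The window size, cluster count, and 10% threshold margin below are assumptions for illustration.

```python
# Sketch: windowing + k-means signal dictionary with a distance-based alert rule.
import numpy as np

def windows(signal, w):
    return np.array([signal[i:i + w] for i in range(len(signal) - w + 1)])

def kmeans(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]     # random windows as init
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - C[None], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):                 # guard against empty clusters
                C[j] = X[assign == j].mean(axis=0)
    return C

def nearest_dist(C, x):
    return np.linalg.norm(C - x, axis=1).min()

w, k = 16, 4
t = np.arange(2000)
train = np.sin(2 * np.pi * t / 50)                  # stand-in "normal" telemetry
W = windows(train, w)
dictionary = kmeans(W, k)
thresh = max(nearest_dist(dictionary, x) for x in W) * 1.1  # 10% margin

normal_win = np.sin(2 * np.pi * np.arange(w) / 50)  # matches training behavior
anom_win = np.full(w, 3.0)                          # out-of-range flat-lined burst
```

An alert would fire whenever `nearest_dist(dictionary, window) > thresh` on live data.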
Abstract: In order to address the problems of a single encryption algorithm, such as low encryption efficiency and unreliable metadata, for static data storage on big data platforms in the cloud computing environment, we propose a Hadoop-based big data secure storage scheme. Firstly, in order to disperse the NameNode service from a single server to multiple servers, we combine the HDFS federation and HDFS high-availability mechanisms and use the Zookeeper distributed coordination mechanism to coordinate each node to achieve dual-channel storage. Then, we improve the ECC encryption algorithm for the encryption of ordinary data and adopt a homomorphic encryption algorithm to encrypt data that needs to be calculated. To accelerate the encryption, we adopt a dual-thread encryption mode. Finally, the HDFS control module is designed to combine the encryption algorithm with the storage model. Experimental results show that the proposed solution solves the problem of a single point of failure of metadata, performs well in terms of metadata reliability, and can realize server fault tolerance. The improved encryption algorithm, integrated with the dual-channel storage mode, improves encryption storage efficiency by 27.6% on average.
基金supported by China’s National Natural Science Foundation(Nos.62072249,62072056)This work is also funded by the National Science Foundation of Hunan Province(2020JJ2029).
Abstract: With the development of Industry 4.0 and big data technology, the Industrial Internet of Things (IIoT) is hampered by inherent issues such as privacy, security, and fault tolerance, which pose certain challenges to its rapid development. Blockchain technology has immutability, decentralization, and autonomy, which can greatly remedy the inherent defects of the IIoT. In a traditional blockchain, data is stored in a Merkle tree. As data continues to grow, the scale of the proofs used to validate it grows, threatening the efficiency, security, and reliability of blockchain-based IIoT. Accordingly, this paper first analyzes the inefficiency of the traditional blockchain structure in verifying the integrity and correctness of data. To solve this problem, a new Vector Commitment (VC) structure, the Partition Vector Commitment (PVC), is proposed by improving the traditional VC structure. Secondly, this paper uses PVC instead of the Merkle tree to store big data generated by IIoT. PVC can improve the efficiency of traditional VC in the processes of commitment and opening. Finally, this paper uses PVC to build a blockchain-based IIoT data security storage mechanism and carries out a comparative experimental analysis. This mechanism can greatly reduce communication loss and maximize the rational use of storage space, which is of great significance for maintaining the security and stability of blockchain-based IIoT.
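The inefficiency being targeted is easy to see concretely: in a Merkle tree, the proof for one leaf contains a sibling hash at every level, so proof size grows with log2(n) of the data size. The sketch below shows that baseline with `hashlib`; it is background for the abstract's motivation, and the PVC construction itself is not reproduced here. The eight-leaf dataset is a toy stand-in for IIoT records.

```python
# Minimal Merkle tree with inclusion proofs (leaf count a power of two for brevity).
import hashlib

def h(b):
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    level = [h(x) for x in leaves]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, idx):
    level, proof = [h(x) for x in leaves], []
    while len(level) > 1:
        sib = idx ^ 1
        proof.append((level[sib], sib < idx))  # sibling hash, and whether it is on the left
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return proof

def verify(root, leaf, proof):
    node = h(leaf)
    for sib, sib_is_left in proof:
        node = h(sib + node) if sib_is_left else h(node + sib)
    return node == root

data = [f"iiot-record-{i}".encode() for i in range(8)]
root = merkle_root(data)
proof = merkle_proof(data, 5)  # 8 leaves -> log2(8) = 3 sibling hashes
```

A vector commitment replaces this per-level sibling chain with a single constant- or sub-logarithmic-size opening, which is the efficiency gap PVC exploits.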
Funding: This research was financially supported by the Ministry of Trade, Industry, and Energy (MOTIE), Korea, under the "Project for Research and Development with Middle Markets Enterprises and DNA (Data, Network, AI) Universities" (AI-based Safety Assessment and Management System for Concrete Structures) (Reference Number P0024559), supervised by the Korea Institute for Advancement of Technology (KIAT).
Abstract: Time-series data provide important information in many fields, and their processing and analysis have been the focus of much research. However, detecting anomalies is very difficult due to data imbalance, temporal dependence, and noise. Therefore, methodologies for data augmentation and for converting time-series data into images for analysis have been studied. This paper proposes a fault detection model that uses time-series data augmentation and transformation to address the problems of data imbalance, temporal dependence, and robustness to noise. Data augmentation is performed by adding Gaussian noise, with the noise level set to 0.002, to maximize the generalization performance of the model. In addition, we use the Markov Transition Field (MTF) method to effectively visualize the dynamic transitions of the data while converting the time-series data into images; this enables the identification of patterns in time-series data and assists in capturing their sequential dependencies. For anomaly detection, the PatchCore model is applied and shows excellent performance, with the detected anomalous areas represented as heat maps. By applying an anomaly map to the original image, it is possible to localize the areas where anomalies occur. The performance evaluation shows that both F1-score and accuracy are high when time-series data are converted to images. Additionally, when processed as images rather than as time series, both the size of the data and the training time were significantly reduced. The proposed method can provide an important springboard for research in the field of anomaly detection using time-series data, and it supports lightweight analysis of complex patterns in data.
基金supported by the National Natural Science Foundation of China(61771372,61771367,62101494)the National Outstanding Youth Science Fund Project(61525105)+1 种基金Shenzhen Science and Technology Program(KQTD20190929172704911)the Aeronautic al Science Foundation of China(2019200M1001)。
Abstract: In electromagnetic countermeasure circumstances, synthetic aperture radar (SAR) imagery usually suffers severe quality degradation from modulated interrupt sampling repeater jamming (MISRJ), which usually exhibits considerable coherence with the SAR transmission waveform together with periodic modulation patterns. This paper develops an MISRJ suppression algorithm for SAR imagery based on online dictionary learning. In the algorithm, the temporal properties of the jamming modulation are exploited by extracting and sorting MISRJ slices using fast-time autocorrelation. Online dictionary learning then separates real signals from the jamming slices. Under the learned representation, time-varying MISRJs are suppressed effectively. Both simulated and real-measured SAR data are used to confirm the advantages over traditional methods in suppressing time-varying MISRJs.
Abstract: The word dictionary is very significant, especially in countries with old traditions of using dictionaries in education, literature, and other fields, as well as in the process of personal growth and development. For a majority, dictionaries are authoritative and concentrated information sources (Baldunčiks, 2012a, p. 7). As almost half a century has passed since the first publication of P. Galenieks's Botanical Dictionary (Botaniskā vārdnīca) in 1950, it is necessary to develop a new dictionary of botanical terms in which Latvian would be one of the languages involved, as no new dictionaries of this type have been published lately. There is no doubt that such a dictionary is necessary, and several studies have been carried out to acknowledge this fact (e.g., Balode, 2012, pp. 16-61; Helviga & Peina, 2016, pp. 127-158; Sviķe, 2015, pp. 131-140; 2016). The analysis of the lexicographic survey described in this article is a justification thereof. A dictionary of botanical terms would be a great aid not only in the work of professional translators; it would be useful to anyone who encounters terms of this field in daily work and translates them, for example, translators, teachers of specialised subjects (natural sciences, geography, botany), gardeners, florists, or nature enthusiasts. The study examines the results of a survey of the potential users of the dictionary.
Funding: Supported by a Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant funded by the Korea government (Grant No. 20214000000140, Graduate School of Convergence for Clean Energy Integrated Power Generation), a Korea Basic Science Institute (National Research Facilities and Equipment Center) grant funded by the Ministry of Education (2021R1A6C101A449), and a National Research Foundation of Korea grant funded by the Ministry of Science and ICT (2021R1A2C1095139), Republic of Korea.
Abstract: Mg alloys possess an inherent plastic anisotropy owing to the selective activation of deformation mechanisms depending on the loading condition. This characteristic results in a diverse range of flow curves that vary with the deformation condition. This study proposes a novel approach for accurately predicting the anisotropic deformation behavior of wrought Mg alloys using machine learning (ML) with data augmentation. The developed model combines four key strategies from data science: learning the entire flow curves, generative adversarial networks (GAN), algorithm-driven hyperparameter tuning, and a gated recurrent unit (GRU) architecture. The proposed model, namely GAN-aided GRU, was extensively evaluated for various predictive scenarios, such as interpolation, extrapolation, and a limited dataset size. The model exhibited significant predictability and improved generalizability in estimating the anisotropic compressive behavior of ZK60 Mg alloys under 11 annealing conditions and for three loading directions. The GAN-aided GRU results were superior to those of previous ML models and constitutive equations. The superior performance was attributed to hyperparameter optimization, GAN-based data augmentation, and the inherent predictivity of the GRU for extrapolation. As a first attempt to employ ML techniques other than artificial neural networks, this study proposes a novel perspective on predicting the anisotropic deformation behaviors of wrought Mg alloys.
Abstract: There are challenges in the reliability evaluation of insulated gate bipolar transistors (IGBT) on electric vehicles, such as junction temperature measurement and limited computational and storage resources. In this paper, a junction temperature estimation approach based on a neural network, without additional cost, is proposed, and the lifetime calculation for IGBT using electric vehicle big data is performed. The direct current (DC) voltage, operating current, switching frequency, negative temperature coefficient (NTC) thermistor temperature, and IGBT lifetime are the inputs, and the junction temperature (T_j) is the output. With the rain-flow counting method, the classified irregular temperatures are fed into the lifetime model to obtain the failure cycles. The fatigue accumulation method is then used to calculate the IGBT lifetime. To work around the limited computational and storage resources of electric vehicle controllers, the IGBT lifetime calculation runs on a big data platform. The lifetime is then transmitted wirelessly to electric vehicles as an input for the neural network, so that the junction temperature of IGBT under long-term operating conditions can be accurately estimated. A test platform combining the motor controller with the vehicle big data server is built for the IGBT accelerated aging test. Subsequently, IGBT lifetime predictions are derived from the junction temperature estimated by the neural network method and by the thermal network method. The experiment shows that the lifetime prediction based on a neural network with big data achieves higher accuracy than that of the thermal network, which improves the reliability evaluation of the system.
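The lifetime step above (counted temperature cycles into a life model, then fatigue accumulation) can be sketched with Miner's rule. A trivial list of pre-counted swings stands in for rain-flow counting, and the Coffin-Manson-type constants are placeholders, not device data from the paper.

```python
# Simplified sketch: cycles-to-failure law + Miner's-rule damage accumulation.
def cycles_to_failure(delta_tj, a=3.0e14, n=5.0):
    """Coffin-Manson-type law: Nf = a * dTj^-n (constants are placeholders)."""
    return a * delta_tj ** (-n)

def miner_damage(swings):
    """Fraction of life consumed by a list of junction-temperature swings (K)."""
    return sum(1.0 / cycles_to_failure(d) for d in swings)

# e.g. one drive cycle producing a mix of small and large Tj swings:
swings = [10.0] * 500 + [40.0] * 20 + [80.0] * 2
damage_per_trip = miner_damage(swings)
trips_to_failure = 1.0 / damage_per_trip  # failure predicted when damage sums to 1
```

The strong dTj^-n dependence is why a handful of large swings can dominate the damage of hundreds of small ones.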
Funding: Sponsored by the National Natural Science Foundation of China (Nos. 41672325 and 41602334) and the National Key Research and Development Program of China (No. 2017YFC0601505).
Abstract: Effective reflection information in seismic data is often contaminated by external and random noise concealed in the data. A traditional single or fixed transform is not suited to exploiting its complicated characteristics and attenuating the noise. In recent years, a novel method called morphological component analysis (MCA) has been put forward to separate different geometrical components by combining several mutually incoherent transforms. By studying the locally singular and smooth linear components of seismic data, we propose a noise suppression method that integrates the advantages of adaptive K-singular value decomposition (K-SVD) and wave atom dictionaries to depict the diverse morphological features of seismic signals. Numerical results indicate that our method can dramatically suppress undesired noise, preserve the information of geologic bodies and geological structures, and improve the signal-to-noise ratio of the data. We also demonstrate the superior performance of this approach by comparing it with other dictionaries such as the discrete cosine transform (DCT), the undecimated discrete wavelet transform (UDWT), and the curvelet transform. This algorithm provides new ideas for data processing to improve the quality and signal-to-noise ratio of seismic data.
Funding: Supported by the Meteorological Soft Science Project (Grant No. 2023ZZXM29), the Natural Science Fund Project of Tianjin, China (Grant No. 21JCYBJC00740), and the Key Research and Development-Social Development Program of Jiangsu Province, China (Grant No. BE2021685).
Abstract: As the risks associated with air turbulence are intensified by climate change and the growth of the aviation industry, it has become imperative to monitor and mitigate these threats to ensure civil aviation safety. The eddy dissipation rate (EDR) has been established as the standard metric for quantifying turbulence in civil aviation. This study explores a universally applicable symbolic classification approach based on genetic programming to detect turbulence anomalies using quick access recorder (QAR) data. The detection of atmospheric turbulence is approached as an anomaly detection problem. Comparative evaluations demonstrate that this approach performs on par with direct EDR calculation methods in identifying turbulence events. Moreover, comparisons with alternative machine learning techniques indicate that the proposed technique is the optimal methodology currently available. In summary, symbolic classification via genetic programming enables accurate turbulence detection from QAR data, comparable to established EDR approaches and surpassing machine learning algorithms. This finding highlights the potential of integrating symbolic classifiers into turbulence monitoring systems to enhance civil aviation safety amidst rising environmental and operational hazards.
基金supported by the Natural Science Foundation of Shandong Province,China(Grant No.ZR2021QD032)。
Abstract: Since the impoundment of the Three Gorges Reservoir (TGR) in 2003, numerous slopes have experienced noticeable movement or destabilization owing to reservoir level changes and seasonal rainfall. One case is the Outang landslide, a large-scale and active landslide on the south bank of the Yangtze River. The latest available monitoring data and site investigations are analyzed to establish the spatial and temporal deformation characteristics of the landslide. Data mining technology, including two-step clustering and the Apriori algorithm, is then used to identify the dominant triggers of landslide movement. In the data mining process, the two-step clustering method clusters the candidate triggers and the displacement rate into several groups, and the Apriori algorithm generates correlation criteria for cause and effect. The analysis considers multiple locations on the landslide and incorporates two time scales: long-term deformation on a monthly basis and short-term deformation on a daily basis. This analysis shows that the deformations of the Outang landslide are driven by both rainfall and reservoir water, while the deformation varies spatiotemporally mainly due to differences in local responses to hydrological factors. The data mining results reveal different dominant triggering factors depending on the monitoring frequency: the monthly and bi-monthly cumulative rainfall control the monthly deformation, and the 10-day cumulative rainfall and the 5-day cumulative drop in reservoir water level dominate the daily deformation of the landslide. It is concluded that the spatiotemporal deformation pattern and the data mining rules associated with precipitation and reservoir water level have the potential to be broadly implemented for improving landslide prevention and control in dam reservoirs and other landslide-prone areas.
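The trigger-mining step above can be sketched with a minimal Apriori pass: find itemsets of discretized (trigger, displacement) states that co-occur in at least a minimum fraction of monitoring records. The transactions below are toy stand-ins for the clustered rainfall / drawdown / displacement categories, and the support threshold is an illustrative choice.

```python
# Minimal Apriori frequent-itemset mining over discretized monitoring records.
from itertools import combinations

def apriori(transactions, min_support=0.5):
    n = len(transactions)
    sets = [frozenset(t) for t in transactions]
    items = {i for t in sets for i in t}
    freq, current = {}, [frozenset([i]) for i in sorted(items)]
    while current:
        counts = {c: sum(c <= t for t in sets) for c in current}
        kept = {c: v / n for c, v in counts.items() if v / n >= min_support}
        freq.update(kept)                      # itemset -> support
        keys = list(kept)
        # candidate generation: unions of surviving k-itemsets that add one item
        current = list({a | b for a, b in combinations(keys, 2)
                        if len(a | b) == len(a) + 1})
    return freq

records = [
    {"heavy_rain", "fast_drawdown", "high_velocity"},
    {"heavy_rain", "high_velocity"},
    {"light_rain", "slow_drawdown", "low_velocity"},
    {"heavy_rain", "fast_drawdown", "high_velocity"},
]
freq = apriori(records, min_support=0.5)
```

Association rules such as "heavy rain implies high displacement velocity" would then be derived from the frequent itemsets by comparing supports.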
Funding: This work was supported by the general program (No. 1177531) and joint funding (No. U2067205) from the National Natural Science Foundation of China.
Abstract: A benchmark experiment on ^(238)U slab samples was conducted using a deuterium-tritium neutron source at the China Institute of Atomic Energy. The leakage neutron spectra within the 0.8-16 MeV energy range at 60° and 120° were measured using the time-of-flight method. The samples were prepared as rectangular slabs with a 30 cm square base and thicknesses of 3, 6, and 9 cm. The leakage neutron spectra were also calculated using the MCNP-4C program based on the latest evaluated files of ^(238)U neutron data from CENDL-3.2, ENDF/B-VIII.0, JENDL-5.0, and JEFF-3.3. Based on the comparison, the deficiencies and improvements in the ^(238)U evaluated nuclear data were analyzed. The results showed the following. (1) The calculated results for CENDL-3.2 significantly overestimated the measurements in the energy interval of elastic scattering at 60° and 120°. (2) The calculated results for CENDL-3.2 overestimated the measurements in the energy interval of inelastic scattering at 120°. (3) The calculated results for CENDL-3.2 significantly overestimated the measurements in the 3-8.5 MeV energy interval at 60° and 120°. (4) The calculated results with JENDL-5.0 were generally consistent with the measurements.
基金supported by the Yunnan Major Scientific and Technological Projects(Grant No.202302AD080001)the National Natural Science Foundation,China(No.52065033).
Abstract: When building a classification model, the scenario in which the samples of one class significantly outnumber those of the other class is called data imbalance. Data imbalance causes the trained classification model to favor the majority class (usually defined as the negative class), which may harm the accuracy of the minority class (usually defined as the positive class) and lead to poor overall performance of the model. This article proposes a method called MSHR-FCSSVM for imbalanced data classification, based on a new hybrid resampling approach (MSHR) and a new fine cost-sensitive support vector machine (CS-SVM) classifier (FCSSVM). The MSHR measures the separability of each negative sample through its Silhouette value calculated with the Mahalanobis distance between samples; based on this, so-called pseudo-negative samples are screened out to generate new positive samples through linear interpolation (over-sampling step) and are finally deleted (under-sampling step). This approach replaces pseudo-negative samples with generated new positive samples one by one to clear up the inter-class overlap on the borderline without changing the overall scale of the dataset. The FCSSVM is an improved version of the traditional CS-SVM. It simultaneously considers the influences of both the imbalance in sample numbers and the class distribution on classification, and it finely tunes the class cost weights using an efficient optimization algorithm based on the physical phenomenon of rime ice (the RIME algorithm), with cross-validation accuracy as the fitness function, to accurately adjust the classification borderline. To verify the effectiveness of the proposed method, a series of experiments was carried out on 20 imbalanced datasets, including both mildly and extremely imbalanced ones. The experimental results show that the MSHR-FCSSVM method performs better than the comparison methods in most cases, and that both the MSHR and the FCSSVM play significant roles.
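The over-sampling half of the MSHR idea can be sketched as follows: each screened-out pseudo-negative point is replaced by a synthetic positive created by linear interpolation toward a nearby positive point, leaving the dataset size unchanged. The screening rule itself (Mahalanobis-based Silhouette) is reduced here to "closer on average to the positive class", purely for illustration.

```python
# Sketch: replace pseudo-negatives with interpolated positives, one for one.
import numpy as np

def mean_dist(x, X):
    return np.linalg.norm(X - x, axis=1).mean()

def resample(pos, neg, seed=0):
    rng = np.random.default_rng(seed)
    keep_neg, new_pos = [], []
    for x in neg:
        if mean_dist(x, pos) < mean_dist(x, neg):      # crude pseudo-negative test
            mate = pos[np.linalg.norm(pos - x, axis=1).argmin()]
            t = rng.uniform(0.0, 1.0)
            new_pos.append(x + t * (mate - x))          # over-sampling step
        else:
            keep_neg.append(x)                          # under-sampling deletes x
    return np.vstack([pos] + new_pos) if new_pos else pos, np.array(keep_neg)

rng = np.random.default_rng(1)
pos = rng.normal(0.0, 0.5, (10, 2))                     # minority class
neg = np.vstack([rng.normal(4.0, 0.5, (40, 2)),         # well-separated negatives
                 rng.normal(0.2, 0.3, (3, 2))])         # overlapping pseudo-negatives
new_pos, new_neg = resample(pos, neg)
```

The one-for-one replacement is what keeps the overall dataset scale constant while clearing the borderline overlap.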
Funding: Funded by the National Natural Science Foundation of China (General Program Nos. 52074314 and U19B6003-05) and the National Key Research and Development Program of China (2019YFA0708303-05).
Abstract: Accurate prediction of formation pore pressure is essential to predict fluid flow and manage hydrocarbon production in petroleum engineering. Recently, deep learning techniques have been receiving more interest due to their great potential for pore pressure prediction. However, most traditional deep learning models are less efficient at addressing generalization problems. To fill this technical gap, in this work we developed a new adaptive physics-informed deep learning model with high generalization capability to predict pore pressure values directly from seismic data. Specifically, the new model, named CGP-NN, consists of a novel parametric feature extraction approach (1DCPP), a stacked multilayer gated recurrent model (multilayer GRU), and an adaptive physics-informed loss function. Through machine training, the developed model can automatically select the optimal physical model to constrain the results for each pore pressure prediction. The CGP-NN model has the best generalization when the physics-related metric λ = 0.5. A hybrid approach combining the Eaton and Bowers methods is also proposed to build machine-learnable labels, solving the problem of few labels. To validate the developed model and methodology, a case study on a complex reservoir in the Tarim Basin was performed, demonstrating high accuracy in the pore pressure prediction of new wells along with strong generalization ability. The adaptive physics-informed deep learning approach presented here has potential application in the prediction of pore pressures coupled with multiple genesis mechanisms using seismic data.
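A physics-informed loss of the kind described above can be sketched conceptually: a data-misfit term plus a physics-residual term weighted by the physics-related metric λ (the abstract reports best generalization at λ = 0.5). The "physics" residual here uses an Eaton-style relation as a stand-in; the exponent and the overburden/hydrostatic values are placeholders, not calibrated Tarim Basin parameters, and the adaptive model-selection step is omitted.

```python
# Conceptual sketch: data loss + lambda * physics loss with an Eaton-style constraint.
import numpy as np

def eaton_pressure(v_obs, v_normal, p_over, p_hydro, n=3.0):
    """Eaton's relation: Pp = Pov - (Pov - Phyd) * (v_obs / v_normal)^n."""
    return p_over - (p_over - p_hydro) * (v_obs / v_normal) ** n

def physics_informed_loss(pred, label, v_obs, v_normal, p_over, p_hydro, lam=0.5):
    data_term = np.mean((pred - label) ** 2)
    physics_term = np.mean((pred - eaton_pressure(v_obs, v_normal, p_over, p_hydro)) ** 2)
    return data_term + lam * physics_term

v_obs = np.array([2800.0, 2600.0, 2400.0])      # observed interval velocity, m/s
v_normal = np.array([3000.0, 3000.0, 3000.0])   # normal-compaction trend, m/s
p_over, p_hydro = 60.0, 30.0                    # MPa, placeholder values
label = eaton_pressure(v_obs, v_normal, p_over, p_hydro)
```

During training, the λ-weighted physics term penalizes predictions that drift from the selected physical model even where labels are sparse.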