In order to reduce the risk of non-performing loans and losses, and to improve loan approval efficiency, it is necessary to establish an intelligent loan risk and approval prediction system. A hybrid deep learning model combining a 1DCNN-attention network with enhanced preprocessing techniques is proposed for loan approval prediction. The proposed model consists of enhanced data preprocessing and a stack of multiple hybrid modules. Initially, the enhanced data preprocessing combines methods such as standardization, SMOTE oversampling, feature construction, recursive feature elimination (RFE), information value (IV), and principal component analysis (PCA); this not only eliminates the effects of data jitter and class imbalance, but also removes redundant features while improving the representation of features. Subsequently, a hybrid module that combines a 1DCNN with an attention mechanism is proposed to extract local and global spatio-temporal features. Finally, comprehensive experiments validate that the proposed model surpasses state-of-the-art baseline models across various performance metrics, including accuracy, precision, recall, F1 score, and AUC. The proposed model helps to automate the loan approval process and provides scientific guidance to financial institutions for loan risk control.
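As a hedged illustration of the modeling idea above, the following numpy sketch applies a 1D convolution (local feature extraction) followed by softmax attention pooling (global weighting). The kernel values, feature vector, and pooling scheme are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def conv1d_attention(x, kernel):
    """Apply a valid 1D convolution, then softmax attention pooling.

    x      : (T,) input sequence of engineered features (illustrative)
    kernel : (k,) convolution filter (illustrative)
    Returns a single attention-weighted feature value.
    """
    k = len(kernel)
    # Valid 1D convolution: local feature extraction
    conv = np.array([np.dot(x[i:i + k], kernel) for i in range(len(x) - k + 1)])
    # Softmax attention weights over the convolved features: global weighting
    scores = np.exp(conv - conv.max())
    weights = scores / scores.sum()
    return float(np.dot(weights, conv))

feats = np.array([0.2, 0.8, 0.5, 0.1, 0.9, 0.4])
out = conv1d_attention(feats, np.array([0.5, 0.5]))
print(round(out, 4))
```

The attention output is a convex combination of the convolved features, so it always lies between their minimum and maximum.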
The Moon-based Ultraviolet Telescope (MUVT) is one of the payloads on the Chang'e-3 (CE-3) lunar lander. Because of the advantages of having no atmospheric disturbances and the slow rotation of the Moon, we can make long-term continuous observations of a series of important celestial objects in the near-ultraviolet band (245-340 nm), and perform a sky survey of selected areas, which cannot be completed on Earth. We can find characteristic changes in celestial brightness with time by analyzing image data from the MUVT, and deduce the radiation mechanism and physical properties of these celestial objects after comparing with a physical model. In order to explain the scientific purposes of the MUVT, this article analyzes the preprocessing of MUVT image data and makes a preliminary evaluation of data quality. The results demonstrate that the methods used for data collection and preprocessing are effective, and that the Level 2A and 2B image data satisfy the requirements of follow-up scientific research.
Quantum Machine Learning (QML) techniques have recently attracted massive interest. However, reported applications usually employ synthetic or well-known datasets. One of these techniques, based on a hybrid approach combining quantum and classical devices, is the Variational Quantum Classifier (VQC), whose development seems promising. Albeit largely studied, VQC implementations for "real-world" datasets are still challenging on Noisy Intermediate-Scale Quantum (NISQ) devices. In this paper we propose a preprocessing pipeline based on Stokes parameters for data mapping. This pipeline enhances the prediction rates when applying VQC techniques, improving the feasibility of solving classification problems using NISQ devices. By including feature selection techniques and geometrical transformations, enhanced quantum state preparation is achieved. Also, a representation based on the Stokes parameters in the Poincaré sphere makes it possible to visualize the data. Our results show that by using the proposed techniques we improve the classification score for the incidence of acute comorbid diseases in Type 2 Diabetes Mellitus patients. We used the implementation of VQC available in IBM's Qiskit framework, and obtained accuracies of 70% and 72% with two and three qubits, respectively.
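The Stokes-parameter mapping can be sketched as follows. The `feature_to_poincare` encoding of a scalar feature into a two-component amplitude is a hypothetical choice for illustration, not necessarily the paper's pipeline, and sign conventions for S3 vary in the literature.

```python
import numpy as np

def stokes_parameters(ex, ey):
    """Map a two-component complex amplitude (ex, ey) to the Stokes
    parameters (S0, S1, S2, S3). One common sign convention is used."""
    s0 = abs(ex) ** 2 + abs(ey) ** 2
    s1 = abs(ex) ** 2 - abs(ey) ** 2
    s2 = 2.0 * (ex * np.conj(ey)).real
    s3 = -2.0 * (ex * np.conj(ey)).imag
    return s0, s1, s2, s3

def feature_to_poincare(x):
    """Encode a scalar feature x in [0, 1] as a qubit-like state
    cos(pi*x/2)|H> + sin(pi*x/2)|V> and return its Poincare-sphere point."""
    ex, ey = np.cos(np.pi * x / 2), np.sin(np.pi * x / 2)
    s0, s1, s2, s3 = stokes_parameters(complex(ex), complex(ey))
    return s1 / s0, s2 / s0, s3 / s0  # unit-sphere coordinates

p = feature_to_poincare(0.5)  # lands on the sphere's equator
print(p)
```

For a normalized pure state the normalized Stokes vector has unit length, which is what places each sample on the Poincaré sphere.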
Many classifiers and methods have been proposed to deal with the letter recognition problem. Among them, clustering is a widely used method, but clustering only once is not adequate. Here, we adopt data preprocessing and a re-kernel clustering method to tackle the letter recognition problem. In order to validate the effectiveness and efficiency of the proposed method, we introduce re-kernel clustering into Kernel Nearest Neighbor classification (KNN), Radial Basis Function Neural Network (RBFNN), and Support Vector Machine (SVM). Furthermore, we compare re-kernel clustering with one-time kernel clustering, denoted as kernel clustering for short. Experimental results validate that re-kernel clustering forms fewer and more feasible kernels and attains higher classification accuracy.
Due to frequent changes of wind speed and wind direction, the accuracy of wind turbine (WT) power prediction using traditional data preprocessing methods is low. This paper proposes a data preprocessing method which combines POT with DBSCAN (POT-DBSCAN) to improve the efficiency of the wind power prediction model. Firstly, according to data from a WT in normal operating condition, the power prediction model of the WT is established based on the Particle Swarm Optimization algorithm combined with a BP neural network (PSO-BP). Secondly, the wind-power data obtained from the supervisory control and data acquisition (SCADA) system are preprocessed by the POT-DBSCAN method. Then, power prediction on the preprocessed data is carried out by the PSO-BP model. Finally, the necessity of preprocessing is verified by the evaluation indexes. This case analysis shows that the prediction result with POT-DBSCAN preprocessing is better than that with the Quartile method. Therefore, the accuracy of the data and of the prediction model can be improved by using this method.
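The outlier-removal step can be illustrated, under simplifying assumptions, by flagging SCADA points whose deviation from a binned median power curve exceeds a high quantile threshold. This numpy sketch is a peaks-over-threshold style stand-in for the paper's full POT-DBSCAN combination; the bin count and quantile are illustrative.

```python
import numpy as np

def pot_filter(speed, power, n_bins=5, q=0.90):
    """Flag wind-power points whose deviation from the binned median
    power curve exceeds the q-quantile of all deviations.
    Returns a boolean mask of points to keep."""
    bins = np.linspace(speed.min(), speed.max(), n_bins + 1)
    idx = np.clip(np.digitize(speed, bins) - 1, 0, n_bins - 1)
    medians = np.array([np.median(power[idx == b]) if np.any(idx == b) else 0.0
                        for b in range(n_bins)])
    dev = np.abs(power - medians[idx])          # deviation from power curve
    threshold = np.quantile(dev, q)             # peaks-over-threshold cutoff
    return dev <= threshold

rng = np.random.default_rng(0)
speed = rng.uniform(3, 15, 200)
power = 10 * speed + rng.normal(0, 2, 200)      # toy linear power curve
power[:5] += 100                                # inject obvious anomalies
keep = pot_filter(speed, power)
print(keep[:5], int(keep.sum()))
```

The injected anomalies deviate from the curve by roughly 100 units and are dropped, while about 90% of the ordinary points survive the quantile cutoff.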
Liquid chromatography–mass spectrometry (LC–MS) has enabled the detection of thousands of metabolite features from a single biological sample, producing large and complex datasets. One of the key issues in LC–MS-based metabolomics is the comprehensive and accurate analysis of this enormous amount of data. Many free data preprocessing tools, such as XCMS, MZmine, MAVEN, and MetaboAnalyst, as well as commercial software, have been developed to facilitate data processing. However, researchers are challenged by the inevitable yield of numerous false-positive peaks, and by the human errors introduced while manually removing such false peaks. Even with continuous improvements of data processing tools, many mistakes can still be generated during data preprocessing. In addition, many data preprocessing software packages exist, and every tool has its own advantages and disadvantages. A researcher therefore needs to judge which software or tools best suit their vendors' proprietary formats and the goals of downstream analysis. Here, we provide a brief introduction to the general steps of raw MS data processing and the properties of automated data processing tools. Then, the characteristics of the main free data preprocessing software packages are summarized for researchers' consideration when conducting a metabolomics study.
The tendency toward achieving more sustainable and green buildings has turned several passive buildings into more dynamic ones. Mosques are a type of building with a unique energy usage pattern. Nevertheless, these buildings receive minimal consideration in ongoing energy efficiency applications. This is due to the unpredictability of mosques' electrical consumption, which affects the stability of distribution networks. Therefore, this study addresses this issue by developing a framework for short-term electricity load forecasting for a mosque load located in Riyadh, Saudi Arabia. In this study, harvesting the load consumption of the mosque and meteorological datasets, the performance of four forecasting algorithms is investigated, namely an Artificial Neural Network and Support Vector Regression (SVR) based on three kernel functions: Radial Basis (RB), Polynomial, and Linear. In addition, this research work examines the impact of 13 different combinations of input attributes, since selecting the optimal features has a major influence on yielding precise forecasting outcomes. For the mosque load, the RB-kernel SVR (SVR-RB) with eleven features proved to be the best forecasting model, with the lowest forecasting error metrics: RMSE, nRMSE, MAE, and nMAE values of 4.207 kW, 2.522%, 2.938 kW, and 1.761%, respectively.
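The four error metrics reported above can be computed as follows. Normalization by installed capacity (or peak load) is a common convention for nRMSE and nMAE, though the exact normalizer used in the study is not stated here; the numbers below are illustrative.

```python
import numpy as np

def forecast_errors(y_true, y_pred, capacity):
    """RMSE, nRMSE, MAE, nMAE as commonly defined for load forecasting.
    nRMSE and nMAE are expressed in percent of `capacity` (assumed
    normalizer; conventions vary)."""
    err = np.asarray(y_pred) - np.asarray(y_true)
    rmse = float(np.sqrt(np.mean(err ** 2)))
    mae = float(np.mean(np.abs(err)))
    return {"RMSE": rmse, "nRMSE": 100 * rmse / capacity,
            "MAE": mae, "nMAE": 100 * mae / capacity}

y_true = np.array([100.0, 120.0, 90.0])   # observed load, kW (toy values)
y_pred = np.array([104.0, 116.0, 92.0])   # forecast load, kW (toy values)
m = forecast_errors(y_true, y_pred, capacity=160.0)
print(m)
```

Reporting both the absolute (kW) and normalized (%) forms, as the study does, lets results be compared across loads of different sizes.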
Cancer is one of the most dangerous diseases, with high mortality. One of the principal treatments is radiotherapy, which uses radiation beams to destroy cancer cells, and this workflow requires a lot of experience and skill from doctors and technicians. In our study, we focused on the 3D dose prediction problem in radiotherapy by applying a deep-learning approach to computed tomography (CT) images of cancer patients. Medical image data has more complex characteristics than ordinary image data, and this research aims to explore the effectiveness of data preprocessing and augmentation in the context of the 3D dose prediction problem. We proposed four strategies to clarify our hypothesis regarding different aspects of applying data preprocessing and augmentation. In these strategies, we trained a custom convolutional neural network whose structure is inspired by the U-net, with residual blocks also applied to the architecture. The output of the network passes through a rectified linear unit (ReLU) for each pixel to ensure there are no negative values, which would be physically meaningless for radiation doses. Our experiments were conducted on the dataset of the Open Knowledge-Based Planning Challenge, which was collected from head and neck cancer patients treated with radiation therapy. The results of the four strategies show that our hypothesis is rational, as evaluated by metrics in terms of the Dose-score and the Dose-volume histogram score (DVH-score). In the best training cases, the Dose-score is 3.08 and the DVH-score is 1.78. In addition, we also conducted a comparison with the results of another study in the same context regarding the loss function.
Network intrusion detection systems need to be updated due to the rise in cyber threats. In order to improve detection accuracy, this research presents a strong strategy that makes use of a stacked ensemble method, which combines the advantages of several machine learning models. The ensemble is made up of various base models, such as Decision Trees, K-Nearest Neighbors (KNN), Multi-Layer Perceptrons (MLP), and Naive Bayes, each of which offers a distinct perspective on the properties of the data. The research adheres to a methodical workflow that begins with thorough data preprocessing to guarantee the accuracy and applicability of the data. Feature engineering is used to extract useful attributes from network traffic data, which are essential for efficient model training. The ensemble approach combines these models by training a Logistic Regression meta-learner on the base models' predictions. In addition to increasing prediction accuracy, this tiered approach helps get around the drawbacks that come with using individual models. The model's evaluation on a network intrusion dataset shows high accuracy, precision, and recall, indicating its efficacy in identifying malicious activity. Cross-validation is used to make sure the models are reliable and generalize well to new, untested data. In addition to advancing cybersecurity, the research establishes a foundation for the implementation of flexible and scalable intrusion detection systems. This hybrid, stacked ensemble model has strong potential for improving cyberattack prevention, lowering the likelihood of successful attacks, and offering a scalable solution that can be adjusted to meet new threats and technological advancements.
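The stacking step, training a logistic-regression meta-learner on base-model predictions, can be sketched in plain numpy. The toy base-model probabilities and the gradient-descent training loop are illustrative assumptions, not the research's actual configuration or dataset.

```python
import numpy as np

def train_meta_learner(base_probs, y, lr=0.5, epochs=500):
    """Train a logistic-regression meta-learner on stacked base-model
    probabilities (n_samples x n_models) via plain gradient descent."""
    X = np.hstack([base_probs, np.ones((len(y), 1))])  # add bias column
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # sigmoid
        w -= lr * X.T @ (p - y) / len(y)       # logistic-loss gradient step
    return w

def meta_predict(base_probs, w):
    X = np.hstack([base_probs, np.ones((len(base_probs), 1))])
    return (1.0 / (1.0 + np.exp(-X @ w)) > 0.5).astype(int)

# Toy attack-probability outputs from two hypothetical base detectors
base = np.array([[0.9, 0.8], [0.8, 0.7], [0.2, 0.3], [0.1, 0.2]])
y = np.array([1, 1, 0, 0])                      # 1 = intrusion, 0 = benign
w = train_meta_learner(base, y)
pred = meta_predict(base, w)
print(pred)
```

The meta-learner learns how much to trust each base detector, which is the mechanism by which stacking can outperform any single model.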
It is difficult to detect anomalies whose matching relationships among some data attributes are very different from the others' in a dataset. Aiming at this problem, an approach based on wavelet analysis for detecting and amending anomalous samples is proposed. Taking full advantage of wavelet analysis' properties of multi-resolution and local analysis, this approach is able to detect and amend anomalous samples effectively. To realize rapid numerical computation of the wavelet transform for a discrete sequence, a modified algorithm based on the Newton-Cotes formula is also proposed. The experimental results show that the approach is feasible, with good results and good practicality.
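A minimal sketch of the idea, assuming a level-1 Haar decomposition and a robust (MAD-based) threshold, neither of which is claimed to match the paper's exact algorithm: large detail coefficients localize samples that break from the surrounding pattern.

```python
import numpy as np

def haar_detail_anomalies(x, k=3.0):
    """Flag anomalous sample pairs via level-1 Haar wavelet detail
    coefficients: large local differences stand out against a robust
    (median/MAD) estimate of the typical detail magnitude."""
    x = np.asarray(x, dtype=float)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # level-1 Haar details
    med = np.median(detail)
    mad = np.median(np.abs(detail - med)) + 1e-12  # robust scale estimate
    flags = np.abs(detail - med) > k * 1.4826 * mad
    return detail, flags

sig = np.array([1.0, 1.1, 0.9, 1.0, 1.05, 8.0, 1.0, 0.95])  # one outlier
detail, flags = haar_detail_anomalies(sig)
print(flags)
```

Only the pair containing the injected value 8.0 is flagged; "amending" could then replace the flagged samples by, e.g., local medians.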
In general, the material properties, loads, and resistance of a prestressed concrete continuous rigid frame bridge are time-varying across different construction stages, so it is essential to monitor the internal force state while the bridge is under construction. Among the related tasks, assessing safety is one of the main challenges. Since continuous monitoring over a long-term period can increase the reliability of the assessment, a calculation method for the punctiform time-varying reliability is proposed in this paper, based on a large number of monitored strain data collected by the structural health monitoring system (SHMS) during construction, to evaluate the stress state of this type of bridge in the cantilever construction stage using basic reliability theory. At the same time, the optimal stress distribution function in the mid-span base plate is determined when the bridge is closed. This method can provide a basis and direction for the internal force control of this type of bridge during construction, and can thus reduce safety and quality accidents in the construction stages.
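The basic reliability theory invoked above can be illustrated with the textbook first-order reliability index for independent normal resistance and load effect; the numbers are illustrative, and the paper's time-varying formulation is more elaborate than this.

```python
import math

def reliability_index(mu_r, sigma_r, mu_s, sigma_s):
    """First-order reliability index beta for independent normal
    resistance R and load effect S, and the implied failure
    probability Pf = Phi(-beta)."""
    beta = (mu_r - mu_s) / math.sqrt(sigma_r ** 2 + sigma_s ** 2)
    pf = 0.5 * (1.0 + math.erf(-beta / math.sqrt(2.0)))  # standard normal CDF
    return beta, pf

# Toy values: mean/std of resistance and of the monitored stress effect
beta, pf = reliability_index(mu_r=30.0, sigma_r=3.0, mu_s=18.0, sigma_s=4.0)
print(round(beta, 3), pf)
```

Evaluating beta at each monitoring instant, with moments estimated from the SHMS strain data, is the "punctiform time-varying" idea in miniature.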
Wind power is one of the sustainable ways to generate renewable energy. In recent years, some countries have set renewable-energy targets to meet future energy needs, with the primary goal of reducing emissions and promoting sustainable growth, primarily through the use of wind and solar power. To predict wind power generation, several deep and machine learning models are constructed in this article as base models. These regression models are a deep neural network (DNN), k-nearest neighbor (KNN) regressor, long short-term memory (LSTM), an averaging model, random forest (RF) regressor, bagging regressor, and gradient boosting (GB) regressor. In addition, data cleaning and preprocessing were performed on the data. The dataset used in this study includes 4 features and 50530 instances. To accurately predict the wind power values, we propose a new optimization technique based on stochastic fractal search and particle swarm optimization (SFS-PSO) to optimize the parameters of the LSTM network. Five evaluation criteria were utilized to estimate the efficiency of the regression models, namely mean absolute error (MAE), Nash-Sutcliffe Efficiency (NSE), mean square error (MSE), coefficient of determination (R2), and root mean squared error (RMSE). The experimental results illustrated that the proposed SFS-PSO optimization of the LSTM achieved the best results, with R2 equal to 99.99% in predicting the wind power values.
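The NSE criterion listed above can be computed as follows. Note that for the residual-based definition of R2 the two formulas coincide; that equivalence is an observation of ours, not a claim from the article, and the toy values are illustrative.

```python
import numpy as np

def nse(y_true, y_pred):
    """Nash-Sutcliffe efficiency: 1 - SS_res / SS_tot.
    Equals 1 for a perfect model; can be negative for a model worse
    than predicting the mean."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    ss_res = np.sum((y_true - y_pred) ** 2)         # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares
    return float(1.0 - ss_res / ss_tot)

y_true = [2.0, 4.0, 6.0, 8.0]   # observed wind power (toy values)
y_pred = [2.1, 3.9, 6.2, 7.8]   # model output (toy values)
print(round(nse(y_true, y_pred), 4))
```

An NSE (or R2) near 1, as the article's 99.99% result, means the residual variance is a tiny fraction of the variance of the observations.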
Artificial intelligence (AI) relies on data and algorithms. State-of-the-art (SOTA) AI algorithms have been developed to improve the performance of AI-oriented structures. However, model-centric approaches are limited by the absence of high-quality data. Data-centric AI is an emerging approach for solving machine learning (ML) problems. It is a collection of various data manipulation techniques that allow ML practitioners to systematically improve the quality of the data used in an ML pipeline. However, data-centric AI approaches are not well documented. Researchers have conducted various experiments without a clear set of guidelines. This survey highlights six major data-centric AI aspects that researchers are already using, intentionally or unintentionally, to improve the quality of AI systems. These include big data quality assessment, data preprocessing, transfer learning, semi-supervised learning, machine learning operations (MLOps), and the effect of adding more data. In addition, it highlights recent data-centric techniques adopted by ML practitioners. We address how adding data might harm datasets and how HoloClean can be used to restore and clean them. Finally, we discuss the causes of technical debt in AI. Technical debt builds up when software design and implementation decisions run into, or outright collide with, business goals and timelines. This survey lays the groundwork for future data-centric AI discussions by summarizing various data-centric approaches.
Feature selection methods have been successfully applied to text categorization but seldom to text clustering, due to the unavailability of class label information. In this paper, a new feature selection method for text clustering based on expectation maximization and cluster validity is proposed. It applies a supervised feature selection method to the intermediate clustering result generated during iterative clustering in order to perform feature selection for text clustering; meanwhile, the Davies-Bouldin index is used to evaluate the intermediate feature subsets indirectly. Feature subsets are then selected according to the curve of the Davies-Bouldin index. Experiments are carried out on several popular datasets and the results show the advantages of the proposed method.
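The Davies-Bouldin index used as the validity criterion can be computed directly from a clustering; this is a standard-definition sketch in numpy, not the paper's implementation, and the toy data is illustrative.

```python
import numpy as np

def davies_bouldin(X, labels):
    """Davies-Bouldin index: mean over clusters of the worst-case
    ratio (s_i + s_j) / d(c_i, c_j). Lower values indicate more
    compact, better-separated clusters."""
    X = np.asarray(X, dtype=float)
    ks = np.unique(labels)
    cents = np.array([X[labels == k].mean(axis=0) for k in ks])
    scatter = np.array([np.mean(np.linalg.norm(X[labels == k] - c, axis=1))
                        for k, c in zip(ks, cents)])
    db = 0.0
    for i in range(len(ks)):
        ratios = [(scatter[i] + scatter[j]) / np.linalg.norm(cents[i] - cents[j])
                  for j in range(len(ks)) if j != i]
        db += max(ratios)
    return db / len(ks)

X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
labels = np.array([0, 0, 1, 1])   # two tight, well-separated clusters
print(round(davies_bouldin(X, labels), 4))
```

In the proposed method, each candidate feature subset would be scored this way on the intermediate clustering, and subsets would be chosen from the resulting curve.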
In this paper, we give a systematic description of the 1st Wireless Communication Artificial Intelligence (AI) Competition (WAIC), which is hosted by the IMT-2020 (5G) Promotion Group 5G+AI Work Group. Firstly, the framework of the full channel state information (F-CSI) feedback problem and its corresponding channel dataset are provided. Then the enhancing schemes for DL-based F-CSI feedback, including i) channel data analysis and preprocessing, ii) neural network design, and iii) quantization enhancement, are elaborated. The final competition results composed of different enhancing schemes are presented. Based on the valuable experience of the 1st WAIC, we also list some challenges and potential study areas for the design of AI-based wireless communication systems.
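The quantization-enhancement step can be illustrated with a minimal uniform scalar quantizer for the compressed feedback values; real CSI-feedback schemes use more refined codebooks and bit allocations, so this is a sketch under simplifying assumptions.

```python
import numpy as np

def quantize_uniform(x, n_bits):
    """Uniform scalar quantization of values in [0, 1] to n_bits.
    Returns the integer codewords (what would be fed back over the
    air) and their dequantized reconstructions."""
    levels = 2 ** n_bits
    idx = np.clip(np.round(x * (levels - 1)), 0, levels - 1).astype(int)
    dequant = idx / (levels - 1)
    return idx, dequant

x = np.array([0.0, 0.34, 0.5, 0.99])    # toy encoder outputs in [0, 1]
idx, xq = quantize_uniform(x, n_bits=2)
print(idx, xq)
```

With n_bits = 2 there are four reconstruction levels and the worst-case quantization error is half a step; increasing the bit budget trades feedback overhead for reconstruction fidelity.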
Complex industrial processes often contain multiple operating modes, and the challenge of multimode process monitoring has recently gained much attention. However, most multivariate statistical process monitoring (MSPM) methods are based on the assumption that the process has only one nominal mode. When the process data contain different distributions, these methods may not function as well as in single-mode processes. To address this issue, an improved partial least squares (IPLS) method is proposed for multimode process monitoring. By utilizing a novel local standardization strategy, the normal data in multiple modes can be centralized after being standardized, and the fundamental assumption of partial least squares (PLS) becomes valid again in the multimode process. In this way, the PLS method is extended to be suitable not only for single-mode processes but also for multimode processes. The efficiency of the proposed method is illustrated by comparing the monitoring results of PLS and IPLS on the Tennessee Eastman (TE) process.
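The local standardization strategy can be sketched as follows, assuming mode labels are known: standardizing each mode by its own mean and standard deviation centers the multimode data into one common distribution, which is the idea the IPLS method builds on (details simplified here).

```python
import numpy as np

def local_standardize(X, modes):
    """Standardize each operating mode by its own statistics so that
    data from different modes share one centered, unit-scale
    distribution before PLS modeling."""
    X = np.asarray(X, dtype=float).copy()
    for m in np.unique(modes):
        sel = modes == m
        X[sel] = (X[sel] - X[sel].mean(axis=0)) / (X[sel].std(axis=0) + 1e-12)
    return X

# Two modes with very different operating levels (toy data)
X = np.array([[1.0], [3.0], [101.0], [103.0]])
modes = np.array([0, 0, 1, 1])
Z = local_standardize(X, modes)
print(Z.ravel())
```

After local standardization the two modes are indistinguishable in distribution, so a single PLS model can be fitted to the pooled data.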
The method of cross-ocean GPS long-distance rapid static positioning has become one of the main technical means of GPS static positioning away from the mainland. The key technologies were analyzed, including data preprocessing and quality control, long-distance integer ambiguity resolution, and static Kalman filter parameter estimation. An effective data processing method for cross-ocean GPS long-baseline rapid static positioning was proposed. Through the analysis of practical coastal and ocean examples, the feasibility of cross-ocean GPS long-distance rapid static positioning based on this method is tested and verified. The results show that the accuracy of one-hour single-baseline static positioning over 500-600 km distances can be better than 10 cm in three-dimensional coordinates in the ocean environment, which satisfies the static positioning accuracy required in this special environment.
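For a constant parameter, static Kalman filter estimation reduces to a simple recursive update; this one-dimensional sketch with a diffuse prior is illustrative, not the paper's full multi-parameter baseline filter.

```python
def static_kalman(measurements, r):
    """Static Kalman estimate of a constant parameter (e.g. a baseline
    coordinate) from noisy measurements with variance r. With a
    diffuse prior this converges to the running sample mean."""
    x, p = 0.0, 1e6          # diffuse prior: large initial uncertainty
    for z in measurements:
        k = p / (p + r)      # Kalman gain
        x = x + k * (z - x)  # state update toward the new measurement
        p = (1 - k) * p      # covariance shrinks with each observation
    return x, p

zs = [10.2, 9.8, 10.1, 9.9]          # toy noisy coordinate observations
x, p = static_kalman(zs, r=0.04)
print(round(x, 3))
```

The posterior variance p shrinks roughly as r/n, which is why longer observation sessions tighten the positioning accuracy.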
The rapid development of location-based social networks (LBSN) makes it more convenient for researchers to carry out studies related to social networks, among which mining potential social relationships in LBSN is the most important. Traditionally, researchers use the topological relations of a social network or telecommunication network to mine potential social relationships, but the effect is unsatisfactory because the network cannot provide complete information on topological relations. In this work, a new model called PSRMAL is proposed for mining potential social relationships with LBSN. With the model, better performance is obtained and guaranteed, and experiments verify its effectiveness.
Embedding the original high-dimensional data in a low-dimensional space helps to overcome the curse of dimensionality and removes noise. The aim of this work is to evaluate the performance of three different linear dimensionality reduction (DR) techniques, namely principal component analysis (PCA), multidimensional scaling (MDS), and linear discriminant analysis (LDA), on the classification of cardiac arrhythmias using a probabilistic neural network (PNN) classifier. The design phase of the classification model comprises the following stages: preprocessing of the cardiac signal by eliminating detail coefficients that contain noise, feature extraction through the Daubechies wavelet transform, dimensionality reduction through the specified linear DR techniques, and arrhythmia classification using the PNN. Linear dimensionality reduction techniques have simple geometric representations and simple computational properties. The entire MIT-BIH arrhythmia database is used for experimentation. The experimental results demonstrate that the combination of the PNN classifier (spread parameter σ = 0.08) and the PCA DR technique exhibits the highest sensitivity and F score, of 78.84% and 78.82% respectively, with a minimum of 8 dimensions.
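The PCA stage can be sketched via SVD of the centered data matrix. The random data below is a placeholder for the wavelet features, and the two-component choice is illustrative (the study reports needing a minimum of 8 dimensions).

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project data onto its top principal components via SVD of the
    centered data matrix; returns the scores in the reduced space."""
    Xc = X - X.mean(axis=0)                      # center each feature
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T              # rows of Vt = components

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 8))    # placeholder for 8-dim wavelet features
Z = pca_reduce(X, n_components=2)
print(Z.shape)
```

Because SVD orders singular values in decreasing order, the first score column always carries at least as much variance as the second, which is the property that makes truncation a principled reduction.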
Laser-induced breakdown spectroscopy (LIBS) has attracted much attention in terms of both scientific research and industrial application. An important branch of LIBS research in Asia, the development of data processing methods for LIBS, is reviewed. First, the basic principle of LIBS and the characteristics of its spectral data are briefly introduced. Next, two aspects of research on data processing methods, and the associated problems, are described: i) the basic principles of data preprocessing methods are elaborated in detail on the basis of the characteristics of the spectral data; ii) the performance of data analysis methods in qualitative and quantitative analysis of LIBS is described. Finally, a direction for the future development of data processing methods for LIBS is proposed.
文摘In order to reduce the risk of non-performing loans, losses, and improve the loan approval efficiency, it is necessary to establish an intelligent loan risk and approval prediction system. A hybrid deep learning model with 1DCNN-attention network and the enhanced preprocessing techniques is proposed for loan approval prediction. Our proposed model consists of the enhanced data preprocessing and stacking of multiple hybrid modules. Initially, the enhanced data preprocessing techniques using a combination of methods such as standardization, SMOTE oversampling, feature construction, recursive feature elimination (RFE), information value (IV) and principal component analysis (PCA), which not only eliminates the effects of data jitter and non-equilibrium, but also removes redundant features while improving the representation of features. Subsequently, a hybrid module that combines a 1DCNN with an attention mechanism is proposed to extract local and global spatio-temporal features. Finally, the comprehensive experiments conducted validate that the proposed model surpasses state-of-the-art baseline models across various performance metrics, including accuracy, precision, recall, F1 score, and AUC. Our proposed model helps to automate the loan approval process and provides scientific guidance to financial institutions for loan risk control.
文摘The Moon-based Ultraviolet Telescope (MUVT) is one of the payloads on the Chang'e-3 (CE-3) lunar lander. Because of the advantages of having no at- mospheric disturbances and the slow rotation of the Moon, we can make long-term continuous observations of a series of important celestial objects in the near ultra- violet band (245-340 nm), and perform a sky survey of selected areas, which can- not be completed on Earth. We can find characteristic changes in celestial brightness with time by analyzing image data from the MUVT, and deduce the radiation mech- anism and physical properties of these celestial objects after comparing with a phys- ical model. In order to explain the scientific purposes of MUVT, this article analyzes the preprocessing of MUVT image data and makes a preliminary evaluation of data quality. The results demonstrate that the methods used for data collection and prepro- cessing are effective, and the Level 2A and 2B image data satisfy the requirements of follow-up scientific researches.
基金funded by eVIDA Research group IT-905-16 from Basque Government.
文摘Quantum Machine Learning(QML)techniques have been recently attracting massive interest.However reported applications usually employ synthetic or well-known datasets.One of these techniques based on using a hybrid approach combining quantum and classic devices is the Variational Quantum Classifier(VQC),which development seems promising.Albeit being largely studied,VQC implementations for“real-world”datasets are still challenging on Noisy Intermediate Scale Quantum devices(NISQ).In this paper we propose a preprocessing pipeline based on Stokes parameters for data mapping.This pipeline enhances the prediction rates when applying VQC techniques,improving the feasibility of solving classification problems using NISQ devices.By including feature selection techniques and geometrical transformations,enhanced quantum state preparation is achieved.Also,a representation based on the Stokes parameters in the PoincaréSphere is possible for visualizing the data.Our results show that by using the proposed techniques we improve the classification score for the incidence of acute comorbid diseases in Type 2 Diabetes Mellitus patients.We used the implemented version of VQC available on IBM’s framework Qiskit,and obtained with two and three qubits an accuracy of 70%and 72%respectively.
基金Supported by the National Science Foundation(No.IIS-9988642)the Multidisciplinary Research Program
文摘Many classifiers and methods are proposed to deal with letter recognition problem. Among them, clustering is a widely used method. But only one time for clustering is not adequately. Here, we adopt data preprocessing and a re kernel clustering method to tackle the letter recognition problem. In order to validate effectiveness and efficiency of proposed method, we introduce re kernel clustering into Kernel Nearest Neighbor classification(KNN), Radial Basis Function Neural Network(RBFNN), and Support Vector Machine(SVM). Furthermore, we compare the difference between re kernel clustering and one time kernel clustering which is denoted as kernel clustering for short. Experimental results validate that re kernel clustering forms fewer and more feasible kernels and attain higher classification accuracy.
基金National Natural Science Foundation of China(Nos.51875199 and 51905165)Hunan Natural Science Fund Project(2019JJ50186)the Ke7y Research and Development Program of Hunan Province(No.2018GK2073).
文摘Due to the frequent changes of wind speed and wind direction,the accuracy of wind turbine(WT)power prediction using traditional data preprocessing method is low.This paper proposes a data preprocessing method which combines POT with DBSCAN(POT-DBSCAN)to improve the prediction efficiency of wind power prediction model.Firstly,according to the data of WT in the normal operation condition,the power prediction model ofWT is established based on the Particle Swarm Optimization(PSO)Arithmetic which is combined with the BP Neural Network(PSO-BP).Secondly,the wind-power data obtained from the supervisory control and data acquisition(SCADA)system is preprocessed by the POT-DBSCAN method.Then,the power prediction of the preprocessed data is carried out by PSO-BP model.Finally,the necessity of preprocessing is verified by the indexes.This case analysis shows that the prediction result of POT-DBSCAN preprocessing is better than that of the Quartile method.Therefore,the accuracy of data and prediction model can be improved by using this method.
Funding: National Natural Science Foundation of China (31371515, 31671226).
Abstract: Liquid chromatography-mass spectrometry (LC-MS) has enabled the detection of thousands of metabolite features from a single biological sample, producing large and complex datasets. One of the key issues in LC-MS-based metabolomics is the comprehensive and accurate analysis of this enormous amount of data. Many free data preprocessing tools, such as XCMS, MZmine, MAVEN, and MetaboAnalyst, as well as commercial software, have been developed to facilitate data processing. However, researchers are challenged by the inevitable yields of numerous false-positive peaks, and by human errors made while manually removing such false peaks. Even with continuous improvements of data processing tools, many mistakes can still be generated during data preprocessing. In addition, many data preprocessing software packages exist, and every tool has its own advantages and disadvantages. Therefore, researchers need to judge which software or tools best suit their vendors' proprietary formats and the goals of downstream analysis. Here, we provide a brief introduction to the general steps of raw MS data processing and the properties of automated data processing tools. Then, the characteristics of the main free data preprocessing software packages are summarized for researchers' consideration in conducting metabolomics studies.
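To make the peak-picking step concrete, here is a minimal sketch (not tied to any of the tools named above) using `scipy.signal.find_peaks` on a synthetic chromatogram; the height and prominence thresholds are exactly the kind of settings whose tuning produces or suppresses false-positive peaks.

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic chromatogram: three Gaussian peaks on a noisy baseline,
# standing in for an extracted-ion trace from an LC-MS run.
t = np.linspace(0, 10, 2000)
rng = np.random.default_rng(1)
signal = sum(h * np.exp(-((t - c) ** 2) / (2 * 0.05 ** 2))
             for h, c in [(1.0, 2.0), (0.6, 5.0), (0.8, 7.5)])
noisy = signal + rng.normal(0, 0.02, t.size)

# Height and prominence thresholds trade sensitivity against
# false-positive peak calls.
peaks, props = find_peaks(noisy, height=0.3, prominence=0.3)
```

Lowering the thresholds toward the noise level would start admitting spurious local maxima, which is the manual-curation burden the abstract describes.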
Funding: The author extends his appreciation to the Deputyship for Research & Innovation, Ministry of Education, and Qassim University, Saudi Arabia, for funding this research work through Project Number QU-IF-4-3-3-30013.
Abstract: The tendency toward achieving more sustainable and green buildings has turned several passive buildings into more dynamic ones. Mosques are a type of building with a unique energy usage pattern. Nevertheless, these buildings receive minimal consideration in ongoing energy efficiency applications, owing to the unpredictability of mosques' electrical consumption, which affects the stability of distribution networks. Therefore, this study addresses the issue by developing a framework for short-term electricity load forecasting for a mosque located in Riyadh, Saudi Arabia. Using the mosque's load consumption and meteorological datasets, the performance of four forecasting algorithms is investigated: an Artificial Neural Network, and Support Vector Regression (SVR) based on three kernel functions, Radial Basis (RB), Polynomial, and Linear. In addition, this work examines the impact of 13 different combinations of input attributes, since selecting the optimal features has a major influence on yielding precise forecasting outcomes. For the mosque load, SVR-RB with eleven features appeared to be the best forecasting model, with the lowest error metrics: RMSE, nRMSE, MAE, and nMAE values of 4.207 kW, 2.522%, 2.938 kW, and 1.761%, respectively.
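A minimal sketch of the SVR-RB setup, assuming scikit-learn and a toy two-feature dataset invented for illustration (the study used up to thirteen attribute combinations and eleven features in the best model):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Hypothetical stand-in for the mosque data: load driven by
# hour-of-day and outdoor temperature.
hour = rng.uniform(0, 24, 500)
temp = rng.uniform(15, 45, 500)
load = 20 + 10 * np.sin(2 * np.pi * hour / 24) + 0.5 * temp
load += rng.normal(0, 1, 500)
X = np.column_stack([hour, temp])

# RBF-kernel SVR behind a scaler, mirroring the SVR-RB variant.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10))
model.fit(X[:400], load[:400])
pred = model.predict(X[400:])
mae = mean_absolute_error(load[400:], pred)
```

The polynomial and linear variants differ only in the `kernel` argument.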
Funding: Sponsored by the Institute of Information Technology (Vietnam Academy of Science and Technology) under Project Code "CS24.01".
Abstract: Cancer is one of the most dangerous diseases, with high mortality. One of the principal treatments is radiotherapy, which uses radiation beams to destroy cancer cells; this workflow requires a great deal of experience and skill from doctors and technicians. In this study, we focus on the 3D dose prediction problem in radiotherapy by applying a deep-learning approach to computed tomography (CT) images of cancer patients. Medical image data have more complex characteristics than ordinary image data, and this research aims to explore the effectiveness of data preprocessing and augmentation in the context of the 3D dose prediction problem. We propose four strategies to test our hypothesis on different aspects of applying data preprocessing and augmentation. In each strategy, we train a custom convolutional neural network whose structure is inspired by the U-net, with residual blocks also applied to the architecture. The output of the network passes through a rectified linear unit (ReLU) for each pixel to ensure there are no negative values, which would be physically meaningless for radiation doses. Our experiments were conducted on the dataset of the Open Knowledge-Based Planning Challenge, collected from head and neck cancer patients treated with radiation therapy. The results of the four strategies show that our hypothesis is sound, as evaluated by the Dose-score and the Dose-volume histogram score (DVH-score). In the best training cases, the Dose-score is 3.08 and the DVH-score is 1.78. We also compare our results with those of another study that used the same loss function in the same context.
Abstract: Network intrusion detection systems need to be updated due to the rise in cyber threats. To improve detection accuracy, this research presents a strong strategy that makes use of a stacked ensemble method, which combines the advantages of several machine learning models. The ensemble is made up of various base models, such as Decision Trees, K-Nearest Neighbors (KNN), Multi-Layer Perceptrons (MLP), and Naive Bayes, each of which offers a distinct perspective on the properties of the data. The research adheres to a methodical workflow that begins with thorough data preprocessing to guarantee the accuracy and applicability of the data. Feature engineering is used to extract useful attributes from network traffic data, which are essential for efficient model training. The ensemble approach combines these models by training a Logistic Regression meta-learner on the base models' predictions. In addition to increasing prediction accuracy, this tiered approach helps overcome the drawbacks of using individual models. Evaluation on a network intrusion dataset shows high accuracy, precision, and recall, indicating the model's efficacy in identifying malicious activity. Cross-validation is used to ensure the models are reliable and generalize well to new, untested data. In addition to advancing cybersecurity, the research establishes a foundation for implementing flexible and scalable intrusion detection systems. This hybrid, stacked ensemble model has considerable potential for improving cyberattack prevention, lowering the likelihood of cyberattacks, and offering a scalable solution that can be adjusted to meet new threats and technological advancements.
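The stacking workflow can be sketched with scikit-learn's `StackingClassifier`, shown here on synthetic data as a stand-in for a real intrusion dataset; the model settings are illustrative assumptions, not the paper's configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a labelled (normal vs malicious) dataset.
X, y = make_classification(n_samples=1000, n_features=20,
                           n_informative=8, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# The base learners named in the abstract, stacked under a Logistic
# Regression meta-learner trained on cross-validated base predictions.
stack = StackingClassifier(
    estimators=[
        ("dt", DecisionTreeClassifier(random_state=0)),
        ("knn", KNeighborsClassifier()),
        ("nb", GaussianNB()),
        ("mlp", MLPClassifier(max_iter=500, random_state=0)),
    ],
    final_estimator=LogisticRegression(),
    cv=5,
)
stack.fit(Xtr, ytr)
acc = stack.score(Xte, yte)
```

The internal `cv=5` split is what keeps the meta-learner from overfitting to base-model training predictions.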
Funding: Project (50374079) supported by the National Natural Science Foundation of China.
Abstract: It is difficult to detect anomalies whose matching relationship among some data attributes differs greatly from that of the others in a dataset. Aiming at this problem, an approach based on wavelet analysis for detecting and amending anomalous samples is proposed. Taking full advantage of the multi-resolution and local analysis properties of wavelet analysis, this approach can detect and amend anomalous samples effectively. To realize rapid numeric computation of the wavelet translation for a discrete sequence, a modified algorithm based on the Newton-Cotes formula is also proposed. The experimental results show that the approach is feasible, with good results and good practicality.
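A minimal sketch of the detection-and-amendment idea, using one level of the Haar wavelet (the simplest basis; the paper does not specify which it used) implemented directly in NumPy: detail coefficients flag isolated spikes, which are then amended from neighbouring samples. All thresholds are illustrative.

```python
import numpy as np

# Smooth test signal with two injected anomalous samples.
x = np.sin(np.linspace(0, 4 * np.pi, 256))
x[60] += 2.0
x[200] -= 2.0

# One level of the Haar wavelet transform: detail coefficients are
# scaled differences of adjacent sample pairs, so isolated spikes
# stand out sharply against the smooth background.
pairs = x.reshape(-1, 2)
detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)

# Flag coefficients far outside the robust spread of the details,
# then amend both samples of a flagged pair from their neighbours.
mad = np.median(np.abs(detail - np.median(detail)))
flagged = np.where(np.abs(detail) > 6 * mad)[0]
for i in flagged:
    left = x[2 * i - 1] if i > 0 else x[2 * i + 3]
    right = x[2 * i + 2] if 2 * i + 2 < x.size else x[2 * i - 2]
    x[2 * i:2 * i + 2] = (left + right) / 2
```

The median-absolute-deviation threshold is robust to the anomalies themselves, which a plain standard-deviation threshold would not be.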
Abstract: In general, the material properties, loads, and resistance of a prestressed concrete continuous rigid-frame bridge vary over the construction stages, so it is essential to monitor the internal force state while the bridge is under construction. Assessing safety from these measurements is one of the main challenges. Because continuous monitoring over a long period increases the reliability of the assessment, this paper proposes a calculation method for punctiform time-varying reliability, based on a large number of monitored strain data collected from the structural health monitoring system (SHMS) during construction, to evaluate the stress state of this type of bridge in the cantilever construction stage using basic reliability theory. At the same time, the optimal stress distribution function in the mid-span base plate is determined at bridge closure. This method can provide a basis and direction for internal force control of this type of bridge during construction, thereby reducing safety and quality accidents in the construction stages.
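As a hedged illustration of the basic reliability theory invoked above (not the paper's punctiform formulation), the snippet below computes the first-order reliability index and the corresponding failure probability for independent normal resistance and load effect; all numbers are hypothetical.

```python
import math

# First-order reliability index for resistance R and load effect S,
# both assumed normal and independent: beta = (muR - muS) / sqrt(sR^2 + sS^2).
def reliability_index(mu_r, sigma_r, mu_s, sigma_s):
    return (mu_r - mu_s) / math.sqrt(sigma_r ** 2 + sigma_s ** 2)

def failure_probability(beta):
    # Standard normal CDF evaluated at -beta, via the error function.
    return 0.5 * (1 + math.erf(-beta / math.sqrt(2)))

# Hypothetical monitored stress capacity vs demand, in MPa.
beta = reliability_index(mu_r=20.0, sigma_r=2.0, mu_s=12.0, sigma_s=1.5)
pf = failure_probability(beta)
```

A time-varying version would re-evaluate `beta` at each monitoring instant as the strain statistics update through the construction stages.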
Abstract: Wind power is one of the sustainable ways to generate renewable energy. In recent years, some countries have set renewable energy targets to meet future energy needs, with the primary goal of reducing emissions and promoting sustainable growth, primarily through the use of wind and solar power. To predict wind power generation, several deep learning and machine learning models are constructed in this article as base models. These regression models are a deep neural network (DNN), a k-nearest neighbor (KNN) regressor, long short-term memory (LSTM), an averaging model, a random forest (RF) regressor, a bagging regressor, and a gradient boosting (GB) regressor. In addition, data cleaning and preprocessing were performed on the data. The dataset used in this study includes 4 features and 50530 instances. To accurately predict the wind power values, we propose a new optimization technique based on stochastic fractal search and particle swarm optimization (SFS-PSO) to optimize the parameters of the LSTM network. Five evaluation criteria were utilized to estimate the efficiency of the regression models: mean absolute error (MAE), Nash-Sutcliffe efficiency (NSE), mean square error (MSE), coefficient of determination (R2), and root mean squared error (RMSE). The experimental results illustrate that the proposed optimization of LSTM using the SFS-PSO model achieved the best results, with R2 equal to 99.99% in predicting the wind power values.
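The five evaluation criteria can be written out directly; note that when NSE is benchmarked against the mean of the observations on the same evaluation data, it coincides numerically with R2. A small sketch with made-up numbers:

```python
import numpy as np

# The five criteria listed above, assuming 1-D NumPy arrays of
# observed and predicted wind-power values.
def metrics(obs, pred):
    err = obs - pred
    mae = np.mean(np.abs(err))
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot
    # Nash-Sutcliffe efficiency with the observed mean as benchmark;
    # identical to R2 when computed on the same split.
    nse = 1 - ss_res / ss_tot
    return mae, mse, rmse, r2, nse

# Made-up observed/predicted values for illustration only.
obs = np.array([1.0, 2.0, 3.0, 4.0])
pred = np.array([1.1, 1.9, 3.2, 3.8])
mae, mse, rmse, r2, nse = metrics(obs, pred)
```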
Abstract: Artificial intelligence (AI) relies on data and algorithms. State-of-the-art (SOTA) AI algorithms have been developed to improve the performance of AI-oriented structures. However, model-centric approaches are limited by the absence of high-quality data. Data-centric AI is an emerging approach for solving machine learning (ML) problems. It is a collection of data manipulation techniques that allow ML practitioners to systematically improve the quality of the data used in an ML pipeline. However, data-centric AI approaches are not well documented, and researchers have conducted various experiments without a clear set of guidelines. This survey highlights six major data-centric AI aspects that researchers are already using, intentionally or unintentionally, to improve the quality of AI systems: big data quality assessment, data preprocessing, transfer learning, semi-supervised learning, machine learning operations (MLOps), and the effect of adding more data. It also highlights recent data-centric techniques adopted by ML practitioners. We address how adding data might harm datasets and how HoloClean can be used to restore and clean them. Finally, we discuss the causes of technical debt in AI, which builds up when software design and implementation decisions run into "or outright collide with" business goals and timelines. This survey lays the groundwork for future data-centric AI discussions by summarizing various data-centric approaches.
Funding: Supported by the National Natural Science Foundation of China (60503020, 60373066), the Outstanding Young Scientist's Fund (60425206), the Natural Science Foundation of Jiangsu Province (BK2005060), and the Opening Foundation of the Jiangsu Key Laboratory of Computer Information Processing Technology at Soochow University.
Abstract: Feature selection methods have been successfully applied to text categorization but seldom to text clustering, due to the unavailability of class label information. In this paper, a new feature selection method for text clustering based on expectation maximization and cluster validity is proposed. It applies a supervised feature selection method to the intermediate clustering results generated during iterative clustering; meanwhile, the Davies-Bouldin index is used to evaluate the intermediate feature subsets indirectly. Feature subsets are then selected according to the curve of the Davies-Bouldin index. Experiments are carried out on several popular datasets, and the results show the advantages of the proposed method.
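The cluster validity part can be sketched with scikit-learn's `davies_bouldin_score` (lower is better): evaluating the index on different feature subsets, as below, shows why the curve of the index can guide subset selection. The data and subsets are invented for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import davies_bouldin_score

rng = np.random.default_rng(0)

# Two informative dimensions plus eight pure-noise features; the
# Davies-Bouldin index should prefer the subset without the noise.
X_inf, _ = make_blobs(n_samples=300, centers=3, random_state=0)
X_noise = rng.normal(0, 5, (300, 8))
X = np.hstack([X_inf, X_noise])

def db_for_subset(X, cols, k=3):
    sub = X[:, cols]
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(sub)
    return davies_bouldin_score(sub, labels)

db_all = db_for_subset(X, list(range(10)))
db_informative = db_for_subset(X, [0, 1])
```

Tracing `db_for_subset` over a nested family of subsets yields the index curve from which a subset is picked.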
Abstract: In this paper, we give a systematic description of the 1st Wireless Communication Artificial Intelligence (AI) Competition (WAIC), hosted by the IMT-2020 (5G) Promotion Group 5G+AI Work Group. First, the framework of the full channel state information (F-CSI) feedback problem and its corresponding channel dataset are provided. Then the enhancing schemes for DL-based F-CSI feedback, including i) channel data analysis and preprocessing, ii) neural network design, and iii) quantization enhancement, are elaborated. The final competition results composed of different enhancing schemes are presented. Based on the valuable experience of the 1st WAIC, we also list some challenges and potential study areas for the design of AI-based wireless communication systems.
Funding: National Natural Science Foundation of China (No. 61074079); Shanghai Leading Academic Discipline Project, China (No. B504).
Abstract: Complex industrial processes often contain multiple operating modes, and the challenge of multimode process monitoring has recently gained much attention. However, most multivariate statistical process monitoring (MSPM) methods are based on the assumption that the process has only one nominal mode. When the process data contain different distributions, these methods may not function as well as in single-mode processes. To address this issue, an improved partial least squares (IPLS) method is proposed for multimode process monitoring. By utilizing a novel local standardization strategy, normal data in multiple modes can be centralized after being standardized, so that the fundamental assumption of partial least squares (PLS) becomes valid again in multimode processes. In this way, the PLS method is extended to be suitable not only for single-mode but also for multimode processes. The efficiency of the proposed method is illustrated by comparing the monitoring results of PLS and IPLS on the Tennessee Eastman (TE) process.
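The local standardization strategy can be sketched as follows, assuming each mode's samples can be grouped: standardizing every mode by its own mean and standard deviation centralizes the pooled data, so the single-distribution assumption behind PLS holds again. The two-mode data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two operating modes with very different means and spreads.
mode1 = rng.normal(10.0, 1.0, (100, 3))
mode2 = rng.normal(50.0, 5.0, (100, 3))

# Local standardization: scale each mode by its own statistics so the
# pooled data become a single zero-mean, unit-variance population.
def local_standardize(blocks):
    return np.vstack([(b - b.mean(axis=0)) / b.std(axis=0) for b in blocks])

Xs = local_standardize([mode1, mode2])
```

An ordinary PLS model can then be fitted to `Xs` exactly as in the single-mode case; globally standardizing the raw pooled data would instead leave a bimodal distribution.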
Abstract: Cross-ocean GPS long-distance rapid static positioning has become one of the main technical means of GPS static positioning away from the mainland. The key technologies are analyzed, including data preprocessing and quality control, long-distance integer ambiguity resolution, and static Kalman filter parameter estimation, and an effective data processing method for cross-ocean GPS long-baseline rapid static positioning is proposed. Through the analysis of practical coastal and ocean examples, the feasibility of cross-ocean GPS long-distance rapid static positioning based on this method is verified. The results show that the accuracy of one-hour single-baseline static positioning over distances of 500-600 km can be better than 10 cm in three-dimensional coordinates, which can satisfy the static positioning accuracy requirements of the special ocean environment.
Funding: Supported by the National Natural Science Foundation of China (No. 61501457).
Abstract: The rapid development of location-based social networks (LBSNs) makes it more convenient for researchers to carry out studies related to social networks, among which mining potential social relationships is the most important. Traditionally, researchers use the topological relations of a social network or telecommunication network to mine potential social relationships, but the effect is unsatisfactory because the network cannot provide complete topological information. In this work, a new model called PSRMAL is proposed for mining potential social relationships with LBSNs, and experiments verify that the model obtains better performance and is effective.
Abstract: Embedding the original high-dimensional data in a low-dimensional space helps to overcome the curse of dimensionality and removes noise. The aim of this work is to evaluate the performance of three linear dimensionality reduction (DR) techniques, namely principal component analysis (PCA), multidimensional scaling (MDS), and linear discriminant analysis (LDA), on the classification of cardiac arrhythmias using a probabilistic neural network (PNN) classifier. The design phase of the classification model comprises the following stages: preprocessing of the cardiac signal by eliminating detail coefficients that contain noise, feature extraction through the Daubechies wavelet transform, dimensionality reduction through the linear DR techniques specified, and arrhythmia classification using the PNN. Linear dimensionality reduction techniques have simple geometric representations and simple computational properties. The entire MIT-BIH arrhythmia database is used for experimentation. The experimental results demonstrate that the combination of the PNN classifier (spread parameter σ = 0.08) and the PCA DR technique exhibits the highest sensitivity and F score, of 78.84% and 78.82% respectively, with a minimum of 8 dimensions.
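Since scikit-learn has no PNN, the sketch below implements the classic PNN decision rule directly (class score = sum of Gaussian kernels centred on that class's training points) after PCA, using Iris as a stand-in for the ECG features; the spread value here is an assumption for the toy data, not the paper's σ = 0.08.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split

# Tiny probabilistic neural network: each class's score is the sum of
# Gaussian kernels (spread = sigma) over its training points.
def pnn_predict(Xtr, ytr, Xte, sigma):
    classes = np.unique(ytr)
    preds = []
    for x in Xte:
        d2 = np.sum((Xtr - x) ** 2, axis=1)
        k = np.exp(-d2 / (2 * sigma ** 2))
        scores = [k[ytr == c].sum() for c in classes]
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)

# Stand-in data (Iris) for the ECG features; standardize, then PCA
# down to 2 dimensions before classification.
X, y = load_iris(return_X_y=True)
X = (X - X.mean(axis=0)) / X.std(axis=0)
Xp = PCA(n_components=2).fit_transform(X)
Xtr, Xte, ytr, yte = train_test_split(Xp, y, random_state=0)
acc = (pnn_predict(Xtr, ytr, Xte, sigma=0.3) == yte).mean()
```

Swapping PCA for MDS or LDA changes only the projection step of the pipeline.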
Funding: The authors are grateful for the financial support from the National Special Fund for the Development of Major Research Equipment and Instruments (Grant No. 2011YQ160017), the National Natural Science Foundation of China (Grant Nos. 61575073, 51429501, and 61378031), the Natural Science Foundation of Hubei Province (Grant No. 2015CFB298), and the Fundamental Research Funds for the Central Universities (HUST: 2014QNRC024 and 2015MS002).
Abstract: Laser-induced breakdown spectroscopy (LIBS) has attracted much attention in terms of both scientific research and industrial application. The development of data processing methods for LIBS, an important branch of LIBS research in Asia, is reviewed. First, the basic principle of LIBS and the characteristics of spectral data are briefly introduced. Next, two aspects of research on data processing methods are described: i) the basic principles of data preprocessing methods are elaborated in detail on the basis of the characteristics of spectral data; ii) the performance of data analysis methods in qualitative and quantitative analysis of LIBS is described. Finally, a direction for the future development of data processing methods for LIBS is proposed.