This paper discusses some aspects of finite element computation, such as the automatic generation of finite elements, mesh refinement, the handling of node density, load distribution, optimum design, and the drawing of stress contours, and describes the development process of software for a planar 8-node element.
New adaptive preprocessing algorithms based on the polar coordinate system were put forward to obtain high-precision corneal topography calculation results. Adaptive locating algorithms for the concentric circle center were created to accurately capture the circle center of the original Placido-based image, expand the image into a matrix centered on the circle center, and convert the matrix into the polar coordinate system with the circle center as the pole. Adaptive image smoothing was then applied, and the characteristics of useful circles were extracted via horizontal edge detection, based on the observation that useful circles present approximately horizontal lines while noise signals present vertical lines or lines at other angles. Effective combinations of different morphological operators were designed to remedy data loss caused by noise disturbances and to obtain complete circle edge detection images that satisfy the requirements of precise calculation of follow-up parameters. The experimental data show that the algorithms meet the requirements of practical detection, with the characteristics of less data loss, higher data accuracy and easier availability.
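The core of the expansion step described above is resampling the image onto a polar grid around the detected center, so that the concentric Placido rings become horizontal lines amenable to horizontal edge detection. A minimal numpy-only sketch (the function name and nearest-neighbour sampling are illustrative choices, not the paper's implementation):

```python
import numpy as np

def to_polar(image, center, n_radii=64, n_angles=360):
    """Resample a 2D image onto a polar grid centered at `center`
    (row, col), so concentric rings become horizontal lines."""
    rows, cols = image.shape
    cr, cc = center
    max_r = min(cr, cc, rows - 1 - cr, cols - 1 - cc)
    radii = np.linspace(0, max_r, n_radii)
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    # Nearest-neighbour sampling keeps the sketch short; real code
    # would interpolate.
    rr = (cr + radii[:, None] * np.sin(angles)[None, :]).round().astype(int)
    ccol = (cc + radii[:, None] * np.cos(angles)[None, :]).round().astype(int)
    return image[rr, ccol]

# A synthetic ring of radius ~10 around (32, 32) becomes a roughly
# horizontal band in the polar image.
img = np.zeros((64, 64))
y, x = np.mgrid[0:64, 0:64]
img[np.abs(np.hypot(y - 32, x - 32) - 10) < 1] = 1.0
polar = to_polar(img, (32, 32))
```

With the ring mapped to a near-constant row, noise (which maps to vertical or slanted streaks) can be separated by a horizontal edge detector, as the abstract describes.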
In order to reduce the risk of non-performing loans and losses, and to improve loan approval efficiency, it is necessary to establish an intelligent loan risk and approval prediction system. A hybrid deep learning model with a 1DCNN-attention network and enhanced preprocessing techniques is proposed for loan approval prediction. The proposed model consists of enhanced data preprocessing and the stacking of multiple hybrid modules. Initially, the enhanced data preprocessing combines methods such as standardization, SMOTE oversampling, feature construction, recursive feature elimination (RFE), information value (IV) and principal component analysis (PCA), which not only eliminates the effects of data jitter and class imbalance but also removes redundant features while improving the representation of features. Subsequently, a hybrid module that combines a 1DCNN with an attention mechanism is proposed to extract local and global spatio-temporal features. Finally, comprehensive experiments validate that the proposed model surpasses state-of-the-art baseline models across various performance metrics, including accuracy, precision, recall, F1 score, and AUC. The proposed model helps to automate the loan approval process and provides scientific guidance to financial institutions for loan risk control.
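Of the preprocessing steps listed above, SMOTE is the one that addresses class imbalance (approved vs. rejected loans). A minimal numpy-only sketch of the SMOTE idea, interpolating between minority samples and their nearest minority neighbours (the function name and parameters are illustrative, not the paper's code):

```python
import numpy as np

def smote_oversample(X_min, n_new, k=3, rng=None):
    """Minimal SMOTE: create `n_new` synthetic minority samples by
    interpolating between each sample and one of its k nearest
    minority-class neighbours."""
    rng = np.random.default_rng(rng)
    n = len(X_min)
    # Pairwise distances within the minority class.
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nn = np.argsort(d, axis=1)[:, :k]           # k nearest neighbours
    synth = np.empty((n_new, X_min.shape[1]))
    for i in range(n_new):
        j = rng.integers(n)                     # pick a minority sample
        nb = X_min[rng.choice(nn[j])]           # and one of its neighbours
        lam = rng.random()                      # interpolation factor in [0, 1)
        synth[i] = X_min[j] + lam * (nb - X_min[j])
    return synth

X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
new = smote_oversample(X_min, n_new=10, rng=0)
```

Because every synthetic point is a convex combination of two minority samples, the oversampled data stays inside the minority class's convex hull.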
As one of the main methods of measuring microbial community functional diversity, the Biolog method is favored by many researchers for its simple operation, high sensitivity, strong resolution and rich data. However, the preprocessing methods reported in the literature are not the same. In order to screen the best preprocessing method, this paper applied three typical treatments to explore the effect of different preprocessing methods on soil microbial community functional diversity. The results showed that method B's overall trend of AWCD values was better than A's and C's. Method B's microbial utilization of six carbon sources was higher, and the result was relatively stable. The Simpson index, Shannon richness index and carbon source utilization richness index of the treatments were ranked B > C > A, while the McIntosh index and Shannon evenness were not very stable; however, the difference in the analysis of variance was not significant, and method B always had the smallest variance. Method B's principal component analysis was also better than A's and C's. In a word, the method using 250 r/min shaking for 30 minutes and cultivating at 28 °C was the best one, because it was simple, convenient, and had good repeatability.
IoT usage in healthcare is one of the fastest growing domains all over the world and applies to every age group. The Internet of Medical Things (IoMT) bridges the gap between the medical and IoT fields, where medical devices communicate with each other through a wireless communication network. Advancements in IoMT make human lives easier and better. This paper provides a comprehensive and detailed literature survey investigating different IoMT-driven applications, methodologies, and techniques to ensure the sustainability of IoMT-driven systems. The limitations of existing IoMT frameworks are also analyzed with respect to their applicability in real-time systems or applications. In addition, various issues (gaps), challenges, and needs in the context of such systems are highlighted. The purpose of this paper is to present a rigorous review of IoMT and its significant contributions across the research community. Lastly, this paper discusses the opportunities and prospects of IoMT and various open research problems.
With the high-speed development of the transportation industry, highway traffic safety has become a considerable problem. Meanwhile, with the development of embedded systems and hardware chips, human eye detection, eye tracking and positioning technology have in recent years been more and more widely used in man-machine interaction, security access control and visual detection. In this paper, the high parallelism of FPGA was utilized to realize an elliptical-approximation real-time human eye tracking system, achieved by a series register structure and random sample consensus (RANSAC), thus improving the speed of image processing without using external memory. Because eye images acquired by the camera often contain a lot of noise due to uneven light and dark backgrounds, preprocessing technologies such as color conversion, image filtering, histogram modification and image sharpening were adopted. For feature extraction, the eye tracking algorithm in this paper adopted a seven-section rectangular eye tracking characteristic method, which adds a section between the mouth and the nose to the traditional six-section method, so its recognition accuracy is much higher, and it is convenient for the realization of a hardware parallel system on FPGA. Finally, to assess the accuracy and real-time performance of the designed system, a comprehensive simulation test was carried out. The human eye tracking system was verified on the DE2-115 multimedia development platform, and its performance on 8-bit grayscale VGA (640×480) images was tested. The results showed that the detection speed of this system was about 47 frames per second with a human face detection rate (front face, no inclination) of 93%, which reached the real-time detection level. Additionally, the accuracy of eye tracking based on the FPGA system was more than 95%, achieving ideal results in real-time performance and robustness.
The conventional poststack inversion uses standard recursion formulas to obtain impedance in a single trace; it cannot allow for lateral regularization. In this paper, the 1D edge-preserving smoothing (EPS) filter is extended to 2D/3D for preconditioning the impedance model in impedance inversion. The EPS filter incorporates a priori knowledge into the seismic inversion. The a priori knowledge incorporated through EPS filter preconditioning relates to the blocky features of the impedance model, which makes the formation interfaces and geological edges precise and keeps the inversion procedure robust. The proposed method is first performed on two 2D models to show its feasibility and stability. Finally, it is applied to a real 3D seismic work area from Southwest China to predict reef reservoirs in practice.
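A common form of the 1D EPS filter referenced above outputs, for each sample, the mean of the lowest-variance window containing it, which smooths within blocks while leaving sharp interfaces untouched. A short sketch under that assumption (not the paper's exact 2D/3D extension):

```python
import numpy as np

def eps_1d(trace, width=5):
    """1D edge-preserving smoothing: for each sample, consider every
    length-`width` window containing it, and output the mean of the
    window with the smallest variance. Blocky signals stay blocky."""
    n = len(trace)
    out = np.empty(n)
    for i in range(n):
        best_var, best_mean = np.inf, trace[i]
        # Windows [s, s + width) that contain index i and fit in the trace.
        for s in range(max(0, i - width + 1), min(i, n - width) + 1):
            w = trace[s:s + width]
            v = w.var()
            if v < best_var:
                best_var, best_mean = v, w.mean()
        out[i] = best_mean
    return out

# A clean impedance-like step: the edge is preserved exactly, because
# every sample finds a zero-variance window on its own side of the step.
step = np.array([1.0] * 10 + [5.0] * 10)
smoothed = eps_1d(step)
```

This is why the abstract can claim that preconditioning with the EPS filter keeps formation interfaces precise: unlike a running mean, the filter never averages across a sharp boundary.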
Quantum Machine Learning (QML) techniques have recently been attracting massive interest. However, reported applications usually employ synthetic or well-known datasets. One of these techniques, based on a hybrid approach combining quantum and classical devices, is the Variational Quantum Classifier (VQC), whose development seems promising. Albeit largely studied, VQC implementations for "real-world" datasets are still challenging on Noisy Intermediate-Scale Quantum (NISQ) devices. In this paper we propose a preprocessing pipeline based on Stokes parameters for data mapping. This pipeline enhances the prediction rates when applying VQC techniques, improving the feasibility of solving classification problems on NISQ devices. By including feature selection techniques and geometrical transformations, enhanced quantum state preparation is achieved. In addition, a representation based on the Stokes parameters on the Poincaré sphere makes it possible to visualize the data. Our results show that the proposed techniques improve the classification score for the incidence of acute comorbid diseases in Type 2 Diabetes Mellitus patients. We used the implementation of VQC available in IBM's Qiskit framework, and obtained accuracies of 70% and 72% with two and three qubits, respectively.
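The Stokes parameters referenced above are the standard polarization quantities; for a two-component amplitude they place a state on the Poincaré sphere, which is presumably how a two-feature sample gets a geometric embedding. A sketch of the standard formulas (the feature-to-amplitude encoding shown is a hypothetical example, not the paper's pipeline):

```python
import numpy as np

def stokes(ex, ey):
    """Stokes parameters (S0, S1, S2, S3) of a two-component field
    (Ex, Ey). For a pure state, (S1, S2, S3)/S0 lies on the unit
    Poincare sphere, giving a 3D visualization of 2D complex data."""
    s0 = np.abs(ex) ** 2 + np.abs(ey) ** 2
    s1 = np.abs(ex) ** 2 - np.abs(ey) ** 2
    s2 = 2 * np.real(ex * np.conj(ey))
    s3 = -2 * np.imag(ex * np.conj(ey))
    return np.array([s0, s1, s2, s3])

# Hypothetical encoding: two normalized features used as amplitudes,
# with a phase on the second component.
s = stokes(0.6, 0.8j)
```

For such a pure state, S0² = S1² + S2² + S3² holds exactly, so the normalized vector always lands on the sphere's surface.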
The Moon-based Ultraviolet Telescope (MUVT) is one of the payloads on the Chang'e-3 (CE-3) lunar lander. Because of the advantages of having no atmospheric disturbances and the slow rotation of the Moon, we can make long-term continuous observations of a series of important celestial objects in the near-ultraviolet band (245-340 nm), and perform a sky survey of selected areas, which cannot be completed on Earth. We can find characteristic changes in celestial brightness with time by analyzing image data from the MUVT, and deduce the radiation mechanism and physical properties of these celestial objects after comparing with a physical model. In order to explain the scientific purposes of the MUVT, this article analyzes the preprocessing of MUVT image data and makes a preliminary evaluation of data quality. The results demonstrate that the methods used for data collection and preprocessing are effective, and the Level 2A and 2B image data satisfy the requirements of follow-up scientific research.
One of the major challenges for face recognition is to minimize the disadvantage of illumination variations of face images in different scenarios. The Local Binary Pattern (LBP) has been proved successful for face recognition; however, it is still very rare to use LBP as an illumination preprocessing approach. In this paper, we propose a new LBP-based multi-scale illumination preprocessing method. This method mainly includes three aspects: threshold adjustment, multi-scale addition, and symmetry restoration/neighborhood replacement. Our experimental results show that the proposed method performs better than existing LBP-based methods for illumination preprocessing. Moreover, compared with face image preprocessing methods such as histogram equalization, Gamma transformation, Retinex, and the simplified LBP operator, our method can effectively improve the robustness of face recognition against illumination variation and achieve a higher recognition rate.
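The basic LBP operator underlying the method above compares each pixel with its 3x3 neighbours and packs the comparisons into an 8-bit code; because the code depends only on relative intensities, it is largely invariant to monotonic illumination change. A minimal sketch of the standard operator with the adjustable threshold the abstract mentions (bit ordering is a conventional choice, not the paper's):

```python
import numpy as np

def lbp_pixel(patch, threshold=0):
    """Basic 3x3 LBP code for the centre pixel: each neighbour that
    exceeds the centre by more than `threshold` sets one bit."""
    c = patch[1, 1]
    # Clockwise neighbour order starting at the top-left corner.
    idx = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = [1 if patch[i, j] > c + threshold else 0 for i, j in idx]
    return sum(b << k for k, b in enumerate(bits))

patch = np.array([[9, 9, 9],
                  [1, 5, 1],
                  [1, 1, 1]])
code = lbp_pixel(patch)  # only the three top neighbours exceed the centre
```

Raising `threshold` suppresses codes triggered by small intensity fluctuations, which is the kind of adjustment the paper's "threshold adjustment" aspect refers to.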
Text classification is an essential task of natural language processing. Preprocessing, which determines the representation of text features, is one of the key steps in a text classification architecture. This paper proposes a novel efficient and effective preprocessing algorithm with three methods for text classification, combined with the Orthogonal Matching Pursuit algorithm to perform the classification. The main idea of the novel preprocessing strategy is to combine stopword removal and/or regular filtering with tokenization and lowercase conversion, which can effectively reduce the feature dimension and improve the quality of the text feature matrix. Simulation tests on the 20 Newsgroups dataset show that, compared with the existing state-of-the-art method, the new method reduces the number of features by 19.85%, 34.35%, 26.25% and 38.67%, improves accuracy by 7.36%, 8.8%, 5.71% and 7.73%, and increases the speed of text classification by 17.38%, 25.64%, 23.76% and 33.38% on the four datasets, respectively.
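The preprocessing combination described above (lowercase conversion, tokenization, optional regular filtering, optional stopword removal) can be sketched in a few lines; the tiny stopword list and the alphabetic-only regular filter below are illustrative stand-ins, not the paper's actual configuration:

```python
import re

STOPWORDS = {"the", "a", "an", "of", "and", "to", "is", "in"}

def preprocess(text, remove_stopwords=True, regex_filter=True):
    """Combined preprocessing in the spirit of the paper: lowercase
    conversion, tokenization, optional regular filtering (drop tokens
    that are not purely alphabetic) and optional stopword removal."""
    tokens = text.lower().split()
    if regex_filter:
        tokens = [t for t in tokens if re.fullmatch(r"[a-z]+", t)]
    if remove_stopwords:
        tokens = [t for t in tokens if t not in STOPWORDS]
    return tokens

toks = preprocess("The cost is 42 dollars, and falling")
# toks == ["cost", "falling"]
```

Toggling the two flags yields the variant pipelines whose feature counts and accuracies the abstract compares.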
Artificial intelligence (AI) relies on data and algorithms. State-of-the-art (SOTA) AI algorithms have been developed to improve the performance of AI-oriented structures. However, model-centric approaches are limited by the absence of high-quality data. Data-centric AI is an emerging approach for solving machine learning (ML) problems. It is a collection of various data manipulation techniques that allow ML practitioners to systematically improve the quality of the data used in an ML pipeline. However, data-centric AI approaches are not well documented, and researchers have conducted various experiments without a clear set of guidelines. This survey highlights six major data-centric AI aspects that researchers are already using to intentionally or unintentionally improve the quality of AI systems: big data quality assessment, data preprocessing, transfer learning, semi-supervised learning, machine learning operations (MLOps), and the effect of adding more data. In addition, it highlights recent data-centric techniques adopted by ML practitioners. We address how adding data might harm datasets and how HoloClean can be used to restore and clean them. Finally, we discuss the causes of technical debt in AI; technical debt builds up when software design and implementation decisions run into, or outright collide with, business goals and timelines. This survey lays the groundwork for future data-centric AI discussions by summarizing various data-centric approaches.
Bordered linear systems arise from many industrial applications, such as reservoir simulation and structural engineering. Traditional ILU preconditioners, which throw away the additional equations, are often too crude for these systems. We describe a practical implementation of ILU preconditioners that is more accurate and more robust. The emphasis of this paper is on implementation rather than on theory.
The expansion of internet-connected services has increased cyberattacks, many of which have grave and disastrous repercussions. An Intrusion Detection System (IDS) plays an essential role in network security since it helps to protect the network from vulnerabilities and attacks. Although extensive research has been reported on IDS, detecting novel intrusions with optimal features and reducing false alarm rates are still challenging. Therefore, we developed a novel fusion-based feature importance method to reduce the high-dimensional feature space, which helps to identify attacks accurately with a lower false alarm rate. Initially, to improve training data quality, various preprocessing techniques are utilized; the Adaptive Synthetic oversampling technique generates synthetic samples for minority classes. In the proposed fusion-based feature importance, we use different approaches from the filter, wrapper, and embedded families, such as mutual information, random forest importance, permutation importance, Shapley Additive exPlanations (SHAP)-based feature importance, and statistical feature importance methods such as the difference of mean and median and the standard deviation, to rank each feature. Then, by simple plurality voting, the most optimal features are retrieved and fed to various models: Extra Trees (ET), Logistic Regression (LR), Support Vector Machine (SVM), Decision Tree (DT), and Extreme Gradient Boosting Machine (XGBM). The hyperparameters of the classification models are tuned with Halving Random Search cross-validation to enhance performance. The experiments were carried out on both the original imbalanced data and the balanced data; the outcomes demonstrate that the balanced-data scenario outperformed the imbalanced one. Finally, the experimental analysis proved that our proposed fusion-based feature importance performed well with XGBM, giving accuracies of 99.86%, 99.68%, and 92.4% with 9, 7 and 8 features and training times of 1.5, 4.5 and 5.5 s on the Network Security Laboratory-Knowledge Discovery in Databases (NSL-KDD), Canadian Institute for Cybersecurity (CIC-IDS 2017), and UNSW-NB15 datasets, respectively. In addition, the suggested technique has been examined and contrasted with state-of-the-art methods on the three datasets.
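The plurality-voting fusion step described above can be sketched as follows: each importance method contributes a ranking, each ranking votes for its top features, and the most-voted features are kept. This is a simplified reading of the paper's fusion (the `top_k`/`keep` parameters and the example rankings are hypothetical):

```python
from collections import Counter

def plurality_vote(rankings, top_k=3, keep=3):
    """Fuse several feature-importance rankings: each method votes for
    its `top_k` features, and the `keep` features with the most votes
    are retained."""
    votes = Counter()
    for ranked in rankings:          # each list: features ordered best-first
        votes.update(ranked[:top_k])
    return [f for f, _ in votes.most_common(keep)]

rankings = [
    ["f1", "f2", "f3", "f4"],   # e.g. mutual information
    ["f2", "f1", "f4", "f3"],   # e.g. random-forest importance
    ["f2", "f3", "f1", "f4"],   # e.g. SHAP-based importance
]
selected = plurality_vote(rankings)  # f1, f2, f3 collect the most votes
```

Voting over heterogeneous rankings sidesteps the problem that filter, wrapper, and embedded methods produce scores on incomparable scales.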
At low bitrates, all block discrete cosine transform (BDCT) based video coding algorithms suffer from visible blocking and ringing artifacts in the reconstructed images, because the quantization is too coarse and high-frequency DCT coefficients tend to be quantized to zero. Preprocessing algorithms can enhance coding efficiency, and thus reduce the likelihood of blocking and ringing artifacts generated in the video coding process, by applying a low-pass filter before video encoding to remove some relatively insignificant high-frequency components. In this paper, we introduce a new adaptive preprocessing algorithm, which employs an improved bilateral filter to provide adaptive edge-preserving low-pass filtering adjusted according to the quantization parameters. Whether at low or high bitrate, the preprocessing provides proper filtering to make the video encoder more efficient and yield better reconstructed image quality. Experimental results demonstrate that our proposed preprocessing algorithm can significantly improve both subjective and objective quality.
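The bilateral filter at the heart of the method above weights each neighbour by both spatial distance and intensity difference, so texture is smoothed while strong edges survive. A minimal 1D version for illustration (the paper works on 2D frames and ties the sigmas to the quantization parameters; the constants here are arbitrary):

```python
import numpy as np

def bilateral_1d(signal, radius=2, sigma_s=2.0, sigma_r=0.5):
    """Minimal 1D bilateral filter: each output sample is a weighted
    mean of its neighbours, with weights combining a spatial Gaussian
    and an intensity-difference (range) Gaussian."""
    n = len(signal)
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        idx = np.arange(lo, hi)
        w = (np.exp(-((idx - i) ** 2) / (2 * sigma_s ** 2))          # spatial
             * np.exp(-((signal[idx] - signal[i]) ** 2)
                      / (2 * sigma_r ** 2)))                         # range
        out[i] = np.sum(w * signal[idx]) / np.sum(w)
    return out

# A sharp step survives filtering; the small ripples are smoothed away.
sig = np.array([0.0, 0.05, 0.0, 1.0, 0.95, 1.0])
filtered = bilateral_1d(sig)
```

Shrinking `sigma_r` makes the filter more edge-preserving; growing it approaches a plain Gaussian blur, which is the knob an encoder-aware scheme would tie to the quantization parameter.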
A parallel hybrid linear solver based on the Schur complement method has the potential to balance the robustness of direct solvers with the efficiency of preconditioned iterative solvers. However, when solving large-scale highly indefinite linear systems, this hybrid solver often suffers from either slow convergence or large memory requirements in solving the Schur complement systems. To overcome this challenge, in this paper we discuss techniques to preprocess the Schur complement systems in parallel. Numerical results from solving large-scale highly indefinite linear systems arising in various applications demonstrate that these techniques improve the reliability and performance of the hybrid solver and enable efficient solutions of these linear systems on hundreds of processors, which was previously infeasible using existing state-of-the-art solvers.
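For readers unfamiliar with the Schur complement method the abstract builds on: partitioning a system into blocks [[A, B], [C, D]] reduces it to a smaller system in S = D - C A⁻¹B, which is the part the hybrid solver treats iteratively (and the paper preprocesses). A dense toy version with made-up numbers, just to show the algebra:

```python
import numpy as np

# Block system [[A, B], [C, D]] [x; y] = [f; g].  Eliminating x gives
# the Schur complement system  S y = g - C A^{-1} f,  S = D - C A^{-1} B,
# after which x is recovered by a back-solve with A.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
B = np.array([[1.0], [0.0]])
C = B.T
D = np.array([[2.0]])
f = np.array([1.0, 2.0])
g = np.array([1.0])

S = D - C @ np.linalg.solve(A, B)                 # Schur complement
y = np.linalg.solve(S, g - C @ np.linalg.solve(A, f))
x = np.linalg.solve(A, f - B @ y)                 # back-substitution
```

In the parallel setting, each processor owns an interior block like A and solves it directly; only the (much smaller, but often ill-conditioned) S system couples the processors, which is why its preprocessing dominates robustness.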
Many classifiers and methods have been proposed to deal with the letter recognition problem. Among them, clustering is a widely used method, but clustering only once is often not adequate. Here, we adopt data preprocessing and a re-kernel clustering method to tackle the letter recognition problem. In order to validate the effectiveness and efficiency of the proposed method, we introduce re-kernel clustering into Kernel Nearest Neighbor classification (KNN), Radial Basis Function Neural Networks (RBFNN), and Support Vector Machines (SVM). Furthermore, we compare re-kernel clustering with one-time kernel clustering, denoted as kernel clustering for short. Experimental results validate that re-kernel clustering forms fewer and more feasible kernels and attains higher classification accuracy.
Due to the frequent changes of wind speed and wind direction, the accuracy of wind turbine (WT) power prediction using traditional data preprocessing methods is low. This paper proposes a data preprocessing method which combines POT with DBSCAN (POT-DBSCAN) to improve the efficiency of the wind power prediction model. Firstly, according to data from a WT in the normal operation condition, the power prediction model of the WT is established based on the Particle Swarm Optimization algorithm combined with a BP Neural Network (PSO-BP). Secondly, the wind-power data obtained from the supervisory control and data acquisition (SCADA) system are preprocessed by the POT-DBSCAN method. Then, power prediction on the preprocessed data is carried out by the PSO-BP model. Finally, the necessity of preprocessing is verified by the evaluation indexes. This case analysis shows that the prediction result with POT-DBSCAN preprocessing is better than that with the Quartile method. Therefore, the accuracy of the data and of the prediction model can be improved by using this method.
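The DBSCAN half of the preprocessing above rests on a density criterion: points with too few neighbours within a radius are treated as noise and dropped. A deliberately simplified numpy sketch of that screening (this is only the core-point test, not full DBSCAN, and the POT thresholding step is omitted; parameters are illustrative):

```python
import numpy as np

def density_filter(points, eps=1.0, min_pts=3):
    """Simplified density screening in the spirit of DBSCAN: keep only
    points with at least `min_pts` neighbours (including themselves)
    within radius `eps`; isolated outliers fail the test."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    counts = (d <= eps).sum(axis=1)
    return points[counts >= min_pts]

# A dense cluster of normal (wind speed, power)-like samples plus two
# isolated outliers, which the filter removes.
pts = np.array([[0.0, 0.0], [0.3, 0.1], [0.1, 0.4], [0.2, 0.2],
                [5.0, 5.0], [-4.0, 3.0]])
kept = density_filter(pts)
```

Applied to SCADA wind-power scatter data, this kind of screening strips curtailment and sensor-fault points that would otherwise distort the power curve the PSO-BP model is trained on.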
Two blind multiuser detection algorithms for antenna arrays in Code Division Multiple Access (CDMA) systems, which apply a linearly constrained condition to the Least Squares Constant Modulus Algorithm (LSCMA), are proposed in this paper. One is the Linearly Constrained LSCMA (LC-LSCMA); the other is the Preprocessing LC-LSCMA (PLC-LSCMA). The two algorithms are compared with the conventional LSCMA. The results show that the two proposed algorithms are superior to the conventional LSCMA, and the best one is the PLC-LSCMA.
In this study, we propose a data preprocessing algorithm called D-IMPACT, inspired by the IMPACT clustering algorithm. D-IMPACT iteratively moves data points based on attraction and density to detect and remove noise and outliers and to separate clusters. Our experimental results on two-dimensional datasets and practical datasets show that this algorithm can produce new datasets on which the performance of the clustering algorithm is improved.
Funding: Project (20120321028-01) supported by the Scientific and Technological Key Project of Shanxi Province, China; Project (20113101) supported by the Postgraduate Innovative Key Project of Shanxi Province, China.
Funding: Supported by the National and International Scientific and Technological Cooperation Project "The Application of Microbial Agents on Mining Reclamation and Ecological Recovery" (2011DFR31230); the Key Project of Shanxi Academy of Agricultural Sciences "The Research and Application of Bio-organic Fertilizer on Mining Reclamation and Soil Remediation" (2013zd12); and the Major Science and Technology Program of Shanxi Province "Key Technology Research and Demonstration of Mining Wasteland Ecosystem Restoration and Reconstruction" (20121101009).
Funding: The National Key S&T Special Projects (No. 2017ZX05008004-008); the National Natural Science Foundation of China (Nos. 41874146 and 41704134); the Innovation Team of Youth Scientific and Technological Research in Southwest Petroleum University (No. 2017CXTD08); the Initiative Projects for Ph.D. in China West Normal University (No. 19E063)
Abstract: Conventional poststack inversion uses standard recursion formulas to obtain impedance trace by trace and therefore cannot apply lateral regularization. In this paper, the 1D edge-preserving smoothing (EPS) filter is extended to 2D/3D to precondition the impedance model in impedance inversion. The EPS filter incorporates a priori knowledge into the seismic inversion: the preconditioning encodes the blocky character of the impedance model, which sharpens formation interfaces and geological edges and keeps the inversion procedure robust. The proposed method is first applied to two 2D models to show its feasibility and stability, and then to a real 3D seismic survey area from Southwest China to predict reef reservoirs in practice.
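The 1D EPS filter that the method extends can be illustrated with a minimal sketch: for each sample, among all fixed-length windows containing it, average the one with the smallest variance, so smoothing never straddles a sharp impedance jump and blocky edges survive. The window length `w` is an assumed parameter:

```python
def eps_filter_1d(x, w=5):
    """1D edge-preserving smoothing: replace each sample with the mean
    of the minimum-variance length-w window containing it."""
    n = len(x)
    out = []
    for i in range(n):
        best_var, best_mean = None, None
        # candidate windows [s, s+w) that contain index i
        for s in range(max(0, i - w + 1), min(i, n - w) + 1):
            win = x[s:s + w]
            m = sum(win) / w
            v = sum((t - m) ** 2 for t in win) / w
            if best_var is None or v < best_var:
                best_var, best_mean = v, m
        out.append(best_mean)
    return out
```

On a clean step signal this filter reproduces the input exactly, which is the edge-preserving property the inversion preconditioning relies on; the paper's 2D/3D extension would apply the same selection over multi-dimensional neighborhoods.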
Funding: Funded by the eVIDA Research Group (IT-905-16) of the Basque Government.
Abstract: Quantum Machine Learning (QML) techniques have recently attracted massive interest. However, reported applications usually employ synthetic or well-known datasets. One such technique, based on a hybrid approach combining quantum and classical devices, is the Variational Quantum Classifier (VQC), whose development looks promising. Although widely studied, VQC implementations for "real-world" datasets remain challenging on Noisy Intermediate-Scale Quantum (NISQ) devices. In this paper we propose a preprocessing pipeline based on Stokes parameters for data mapping. This pipeline enhances prediction rates when applying VQC techniques, improving the feasibility of solving classification problems on NISQ devices. By including feature selection techniques and geometrical transformations, enhanced quantum state preparation is achieved. The Stokes-parameter representation also allows the data to be visualized on the Poincaré sphere. Our results show that the proposed techniques improve the classification score for the incidence of acute comorbid diseases in Type 2 Diabetes Mellitus patients. Using the VQC implementation available in IBM's Qiskit framework, we obtained accuracies of 70% and 72% with two and three qubits, respectively.
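The Stokes-parameter mapping can be sketched as follows. This is a hypothetical encoding of a feature pair into a Jones vector, not the paper's exact pipeline, and the sign conventions for S2/S3 vary across texts:

```python
import cmath
import math

def stokes_from_features(f1, f2, phase=0.0):
    """Treat a normalized feature pair as the amplitudes of a Jones
    vector E = (f1, f2 * e^{i*phase}) and compute its Stokes
    parameters (S0, S1, S2, S3); a fully polarized state satisfies
    S1^2 + S2^2 + S3^2 = S0^2, so it lies on the Poincare sphere."""
    norm = math.hypot(f1, f2) or 1.0
    ex = complex(f1 / norm, 0.0)
    ey = (f2 / norm) * cmath.exp(1j * phase)
    s0 = abs(ex) ** 2 + abs(ey) ** 2
    s1 = abs(ex) ** 2 - abs(ey) ** 2
    s2 = 2 * (ex * ey.conjugate()).real
    s3 = -2 * (ex * ey.conjugate()).imag
    return s0, s1, s2, s3
```

Because the normalized state always lands on the unit Poincaré sphere, (S1, S2, S3) doubles as both a visualization coordinate and a natural amplitude-encoding target for quantum state preparation.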
Abstract: The Moon-based Ultraviolet Telescope (MUVT) is one of the payloads on the Chang'e-3 (CE-3) lunar lander. Thanks to the absence of atmospheric disturbance and the slow rotation of the Moon, it can make long-term continuous observations of a series of important celestial objects in the near-ultraviolet band (245-340 nm) and perform a sky survey of selected areas, which cannot be done on Earth. By analyzing image data from the MUVT, we can find characteristic changes in celestial brightness over time and deduce the radiation mechanism and physical properties of these celestial objects by comparison with physical models. To explain the scientific purposes of the MUVT, this article analyzes the preprocessing of MUVT image data and makes a preliminary evaluation of data quality. The results demonstrate that the methods used for data collection and preprocessing are effective, and that the Level 2A and 2B image data satisfy the requirements of follow-up scientific research.
Abstract: Minimizing the effect of illumination variation across different scenarios is one of the major challenges in face recognition. The Local Binary Pattern (LBP) has proved successful for face recognition, yet it is still rarely used as an illumination preprocessing approach. In this paper, we propose a new LBP-based multi-scale illumination preprocessing method. The method comprises three components: threshold adjustment, multi-scale addition, and symmetry restoration/neighborhood replacement. Our experimental results show that the proposed method outperforms existing LBP-based methods for illumination preprocessing. Moreover, compared with face-image preprocessing methods such as histogram equalization, Gamma transformation, Retinex, and the simplified LBP operator, our method effectively improves the robustness of face recognition against illumination variation and achieves a higher recognition rate.
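The basic 3x3 LBP operator that the multi-scale method builds on can be sketched as follows (the clockwise bit ordering is one common convention, not necessarily the paper's):

```python
def lbp_code(patch):
    """Basic 3x3 Local Binary Pattern: threshold the 8 neighbors
    against the center pixel and pack the resulting bits, clockwise
    from the top-left neighbor, into one byte. Because the code
    depends only on sign comparisons, it is largely invariant to
    monotonic illumination changes."""
    c = patch[1][1]
    # neighbor coordinates, clockwise from top-left
    coords = [(0, 0), (0, 1), (0, 2), (1, 2),
              (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, col) in enumerate(coords):
        if patch[r][col] >= c:
            code |= 1 << bit
    return code
```

The paper's contributions (threshold adjustment, multi-scale addition, symmetry restoration) would modify this basic operator, e.g. by comparing against `c + t` for a tuned threshold `t` and summing codes over several neighborhood radii.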
Abstract: Text classification is an essential task in natural language processing. Preprocessing, which determines the representation of text features, is one of the key steps in a text classification architecture. This paper proposes a novel, efficient, and effective preprocessing algorithm with three methods for text classification, combined with the Orthogonal Matching Pursuit algorithm to perform the classification. The main idea of the strategy is to combine stopword removal and/or regular-expression filtering with tokenization and lowercase conversion, which effectively reduces the feature dimension and improves the quality of the text feature matrix. Simulation tests on the 20 Newsgroups dataset show that, compared with the existing state-of-the-art method, the new method reduces the number of features by 19.85%, 34.35%, 26.25%, and 38.67%, improves accuracy by 7.36%, 8.8%, 5.71%, and 7.73%, and increases the speed of text classification by 17.38%, 25.64%, 23.76%, and 33.38% on the four datasets, respectively.
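A minimal sketch of the combined preprocessing strategy; the stopword list and the alphabetic-token regex are illustrative assumptions, not the paper's exact choices:

```python
import re

# hypothetical minimal stopword list, for illustration only
STOPWORDS = {"the", "a", "an", "is", "are", "of", "to", "and", "in"}

def preprocess(text, remove_stopwords=True, regex_filter=True):
    """Combined strategy: lowercase + whitespace tokenization,
    optional regex filtering of non-alphabetic tokens, and optional
    stopword removal. Each switch shrinks the feature vocabulary."""
    tokens = text.lower().split()
    if regex_filter:
        tokens = [t for t in tokens if re.fullmatch(r"[a-z]+", t)]
    if remove_stopwords:
        tokens = [t for t in tokens if t not in STOPWORDS]
    return tokens
```

Toggling the two switches yields the method variants being compared: each removal step shrinks the vocabulary, which is what drives the reported reductions in feature count.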
Abstract: Artificial intelligence (AI) relies on data and algorithms. State-of-the-art (SOTA) smart algorithms have been developed to improve the performance of AI-oriented systems; however, model-centric approaches are limited by the absence of high-quality data. Data-centric AI is an emerging approach to solving machine learning (ML) problems: a collection of data manipulation techniques that allow ML practitioners to systematically improve the quality of the data used in an ML pipeline. Yet data-centric AI approaches are not well documented, and researchers have conducted experiments without a clear set of guidelines. This survey highlights six major data-centric AI aspects that researchers already use, intentionally or unintentionally, to improve the quality of AI systems: big data quality assessment, data preprocessing, transfer learning, semi-supervised learning, machine learning operations (MLOps), and the effect of adding more data. It also highlights recent data-centric techniques adopted by ML practitioners, addresses how adding data might harm datasets, and shows how HoloClean can be used to restore and clean them. Finally, we discuss the causes of technical debt in AI, which builds up when software design and implementation decisions run into, or outright collide with, business goals and timelines. This survey lays the groundwork for future data-centric AI discussions by summarizing various data-centric approaches.
Abstract: Bordered linear systems arise in many industrial applications, such as reservoir simulation and structural engineering. Traditional ILU preconditioners, which simply discard the additional equations, are often too crude for these systems. We describe a practical implementation of ILU preconditioners that is more accurate and more robust. The emphasis of this paper is on implementation rather than theory.
Abstract: The expansion of internet-connected services has increased cyberattacks, many of which have grave and disastrous repercussions. An Intrusion Detection System (IDS) plays an essential role in network security, helping to protect the network from vulnerabilities and attacks. Although IDS research is extensive, detecting novel intrusions with optimal features and reducing false alarm rates remain challenging. We therefore developed a novel fusion-based feature importance method that reduces the high-dimensional feature space and helps to identify attacks accurately with a lower false alarm rate. Initially, various preprocessing techniques are applied to improve training-data quality, and the Adaptive Synthetic oversampling technique generates synthetic samples for minority classes. The proposed fusion combines approaches from filter, wrapper, and embedded methods (mutual information, random forest importance, permutation importance, Shapley Additive exPlanations (SHAP)-based feature importance, and statistical measures such as the difference of mean and median and the standard deviation) to rank each feature; simple plurality voting then retrieves the most optimal features. The selected features are fed to Extra Trees (ET), Logistic Regression (LR), Support Vector Machine (SVM), Decision Tree (DT), and Extreme Gradient Boosting Machine (XGBM) classifiers, whose hyperparameters are tuned with Halving Random Search cross-validation to enhance performance. Experiments were carried out on both the original imbalanced data and the balanced data, and the balanced scenario clearly outperformed the imbalanced one. The analysis showed that the proposed fusion-based feature importance performed best with XGBM, giving accuracies of 99.86%, 99.68%, and 92.4% with 9, 7, and 8 features and training times of 1.5, 4.5, and 5.5 s on the Network Security Laboratory-Knowledge Discovery in Databases (NSL-KDD), Canadian Institute for Cybersecurity (CIC-IDS 2017), and UNSW-NB15 datasets, respectively. The suggested technique was also examined and contrasted with state-of-the-art methods on the three datasets.
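The plurality-voting fusion step can be sketched as follows; `rankings` holds the per-method feature orderings, and the tie-breaking rule (first appearance order) is an assumption:

```python
from collections import Counter

def plurality_vote(rankings, k):
    """Fusion sketch: each ranking method nominates its top-k
    features; features are then kept in order of how many methods
    voted for them, with ties broken by first appearance."""
    votes = Counter()
    order = {}
    for ranking in rankings:
        for feat in ranking[:k]:
            votes[feat] += 1
            order.setdefault(feat, len(order))
    return sorted(votes, key=lambda f: (-votes[f], order[f]))[:k]
```

Because each of the filter, wrapper, and embedded methods scores features on a different scale, voting on ranks rather than averaging raw scores keeps the fusion scale-free.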
Funding: Project (No. 2006CB303104) supported by the National Basic Research Program (973) of China
Abstract: At low bitrates, all block discrete cosine transform (BDCT) based video coding algorithms suffer from visible blocking and ringing artifacts in the reconstructed images, because the quantization is too coarse and high-frequency DCT coefficients tend to be quantized to zero. Preprocessing can enhance coding efficiency, and thus reduce the likelihood of blocking and ringing artifacts in the coding process, by applying a low-pass filter before encoding to remove relatively insignificant high-frequency components. In this paper, we introduce a new adaptive preprocessing algorithm that employs an improved bilateral filter to provide adaptive edge-preserving low-pass filtering, adjusted according to the quantization parameters. At both low and high bitrates, the preprocessing provides appropriate filtering, making the video encoder more efficient and yielding better reconstructed image quality. Experimental results demonstrate that the proposed algorithm significantly improves both subjective and objective quality.
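The bilateral filter idea can be sketched in 1D; the paper's filter is 2D, and the coupling of the range parameter to the encoder's quantization parameter shown in the comment is a hypothetical simplification:

```python
import math

def bilateral_1d(x, sigma_s=2.0, sigma_r=10.0, radius=3):
    """1D bilateral filter: each sample becomes a weighted average of
    its neighbors, with weights falling off with both spatial distance
    (sigma_s) and intensity difference (sigma_r). Flat regions are
    smoothed while strong edges are preserved. In an adaptive
    preprocessing scheme, sigma_r could be raised with the encoder's
    quantization parameter so that coarser quantization allows
    stronger prefiltering (assumed coupling, for illustration)."""
    out = []
    n = len(x)
    for i in range(n):
        acc = wsum = 0.0
        for j in range(max(0, i - radius), min(n, i + radius + 1)):
            w = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)
                         - ((x[i] - x[j]) ** 2) / (2 * sigma_r ** 2))
            acc += w * x[j]
            wsum += w
        out.append(acc / wsum)
    return out
```

On a sharp step the intensity term suppresses cross-edge weights almost entirely, which is why this prefilter removes noise-like high frequencies without blurring the edges whose ringing it is meant to prevent.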
Funding: Supported in part by the Director, Office of Science, Office of Advanced Scientific Computing Research, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.
Abstract: A parallel hybrid linear solver based on the Schur complement method has the potential to balance the robustness of direct solvers with the efficiency of preconditioned iterative solvers. However, when solving large-scale highly indefinite linear systems, this hybrid solver often suffers from either slow convergence or large memory requirements for solving the Schur complement systems. To overcome this challenge, we discuss in this paper techniques for preprocessing the Schur complement systems in parallel. Numerical results for large-scale highly indefinite linear systems from various applications demonstrate that these techniques improve the reliability and performance of the hybrid solver and enable efficient solution of these systems on hundreds of processors, which was previously infeasible with existing state-of-the-art solvers.
Funding: Supported by the National Science Foundation (No. IIS-9988642) and the Multidisciplinary Research Program
Abstract: Many classifiers and methods have been proposed for the letter recognition problem; among them, clustering is widely used. However, clustering only once is not adequate. Here, we adopt data preprocessing and a re-kernel clustering method to tackle the letter recognition problem. To validate the effectiveness and efficiency of the proposed method, we introduce re-kernel clustering into Kernel Nearest Neighbor classification (KNN), Radial Basis Function Neural Networks (RBFNN), and Support Vector Machines (SVM). Furthermore, we compare re-kernel clustering with one-time kernel clustering, denoted as kernel clustering for short. Experimental results validate that re-kernel clustering forms fewer and more feasible kernels and attains higher classification accuracy.
Funding: National Natural Science Foundation of China (Nos. 51875199 and 51905165); Hunan Natural Science Fund Project (2019JJ50186); the Key Research and Development Program of Hunan Province (No. 2018GK2073).
Abstract: Because wind speed and wind direction change frequently, wind turbine (WT) power prediction with traditional data preprocessing methods has low accuracy. This paper proposes a data preprocessing method that combines POT with DBSCAN (POT-DBSCAN) to improve the efficiency of the wind power prediction model. First, using data from a WT in normal operating condition, the WT power prediction model is established as a BP neural network optimized by Particle Swarm Optimization (PSO-BP). Second, the wind-power data obtained from the supervisory control and data acquisition (SCADA) system are preprocessed by the POT-DBSCAN method. Then, power prediction on the preprocessed data is carried out with the PSO-BP model. Finally, the necessity of preprocessing is verified by the evaluation indexes. The case analysis shows that the prediction result with POT-DBSCAN preprocessing is better than that with the Quartile method; the accuracy of both the data and the prediction model can therefore be improved by this method.
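The DBSCAN half of the preprocessing can be sketched in pure Python; the POT thresholding step is omitted, and all parameter values here are assumptions for illustration:

```python
def dbscan(points, eps=1.5, min_pts=3):
    """Minimal DBSCAN sketch for 2D (wind speed, power) samples.
    Returns one label per point; -1 marks noise/outliers that a
    preprocessing step would discard before model training."""
    def neighbors(i):
        xi, yi = points[i]
        return [j for j, (xj, yj) in enumerate(points)
                if (xi - xj) ** 2 + (yi - yj) ** 2 <= eps ** 2]

    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1            # provisionally noise
            continue
        cluster += 1                  # found a new core point
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster   # border point, reclaim from noise
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nbrs = neighbors(j)
            if len(nbrs) >= min_pts:  # core point: expand the cluster
                queue.extend(k for k in nbrs if labels[k] is None)
    return labels
```

Points labeled -1 (scattered wind-power samples far from the dense operating curve) would be dropped before training the PSO-BP model.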
Funding: Supported by the National Natural Science Foundation of China (No. 60472104) and the Doctoral Innovation Fund of Jiangsu Province (xm04-32).
Abstract: Two blind multiuser detection algorithms for antenna arrays in Code Division Multiple Access (CDMA) systems, which apply a linearly constrained condition to the Least Squares Constant Modulus Algorithm (LSCMA), are proposed in this paper: the Linearly Constrained LSCMA (LC-LSCMA) and the Preprocessing LC-LSCMA (PLC-LSCMA). The two algorithms are compared with the conventional LSCMA. The results show that both proposed algorithms are superior to the conventional LSCMA, with PLC-LSCMA performing best.
Abstract: In this study, we propose a data preprocessing algorithm called D-IMPACT, inspired by the IMPACT clustering algorithm. D-IMPACT iteratively moves data points based on attraction and density to detect and remove noise and outliers and to separate clusters. Our experimental results on two-dimensional and practical datasets show that the algorithm produces new datasets on which clustering performance is improved.