In order to reduce the risk of non-performing loans, reduce losses, and improve loan approval efficiency, it is necessary to establish an intelligent loan risk and approval prediction system. A hybrid deep learning model combining a 1DCNN-attention network with enhanced preprocessing techniques is proposed for loan approval prediction. The proposed model consists of enhanced data preprocessing and a stack of multiple hybrid modules. Initially, the enhanced data preprocessing combines standardization, SMOTE oversampling, feature construction, recursive feature elimination (RFE), information value (IV) analysis and principal component analysis (PCA); this not only eliminates the effects of data jitter and class imbalance, but also removes redundant features while improving the feature representation. Subsequently, a hybrid module that combines a 1DCNN with an attention mechanism is proposed to extract local and global spatio-temporal features. Finally, comprehensive experiments validate that the proposed model surpasses state-of-the-art baseline models across various performance metrics, including accuracy, precision, recall, F1 score, and AUC. The proposed model helps to automate the loan approval process and provides scientific guidance to financial institutions for loan risk control.
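The information value (IV) screening step mentioned above is a standard credit-scoring technique: each binned feature is scored by how well it separates good from bad loans via weight of evidence (WoE). A minimal pure-Python sketch, with illustrative made-up data (the paper's own binning and smoothing choices are not specified):

```python
import math

def information_value(feature_bins, labels):
    """Information Value of a binned feature against a binary target
    (1 = bad loan, 0 = good loan).  The 0.5 floor is a common smoothing
    convention to avoid log(0) on empty cells; it is an assumption here."""
    total_good = sum(1 for y in labels if y == 0)
    total_bad = sum(1 for y in labels if y == 1)
    iv = 0.0
    for b in set(feature_bins):
        good = sum(1 for x, y in zip(feature_bins, labels) if x == b and y == 0)
        bad = sum(1 for x, y in zip(feature_bins, labels) if x == b and y == 1)
        pg = max(good, 0.5) / total_good
        pb = max(bad, 0.5) / total_bad
        woe = math.log(pg / pb)          # weight of evidence of this bin
        iv += (pg - pb) * woe            # bin's contribution to the IV
    return iv

bins   = ["low", "low", "high", "high", "high", "low"]
labels = [0,      0,     1,      1,      0,      0]
print(round(information_value(bins, labels), 3))  # → 1.589
```

Features whose IV falls below a chosen cutoff would be dropped before the RFE and PCA stages.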
Text classification is an essential task of natural language processing. Preprocessing, which determines the representation of text features, is one of the key steps of a text classification architecture. This paper proposes a novel, efficient and effective preprocessing algorithm with three methods for text classification, combined with the Orthogonal Matching Pursuit algorithm to perform the classification. The main idea of the novel preprocessing strategy is to combine stopword removal and/or regular-expression filtering with tokenization and lowercase conversion, which effectively reduces the feature dimension and improves the quality of the text feature matrix. Simulation tests on the 20 Newsgroups dataset show that, compared with the existing state-of-the-art method, the new method reduces the number of features by 19.85%, 34.35%, 26.25% and 38.67%, improves accuracy by 7.36%, 8.8%, 5.71% and 7.73%, and increases the speed of text classification by 17.38%, 25.64%, 23.76% and 33.38% on the four datasets, respectively.
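The combined strategy described above (tokenization and lowercasing, plus optional stopword removal and regular-expression filtering) can be sketched in a few lines. The stopword list and the alphabetic-token regex below are illustrative placeholders, not the paper's actual resources:

```python
import re

STOPWORDS = {"the", "a", "an", "of", "and", "is", "to", "in"}

def preprocess(text, remove_stopwords=True, regex_filter=True):
    tokens = text.lower().split()            # tokenization + lowercase conversion
    if regex_filter:
        # Keep purely alphabetic tokens; drops numbers and punctuation-bearing tokens.
        tokens = [t for t in tokens if re.fullmatch(r"[a-z]+", t)]
    if remove_stopwords:
        tokens = [t for t in tokens if t not in STOPWORDS]
    return tokens

print(preprocess("The 2 cats sat on a mat, purring."))  # → ['cats', 'sat', 'on']
```

Each combination of the two flags yields a different feature set, which is how the four reported configurations differ in feature count.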
Analyzing human facial expressions using machine vision systems is a challenging yet fascinating problem in the fields of computer vision and artificial intelligence. Facial expressions are a primary means through which humans convey emotions, making their automated recognition valuable for various applications including human-computer interaction, affective computing, and psychological research. Pre-processing techniques are applied to every image with the aim of standardizing the images; frequently used techniques include scaling, blurring, rotating, altering the contour of the image, grayscale conversion and normalization. Feature extraction follows, and then traditional classifiers are applied to infer facial expressions. Improving the performance of such a system is difficult in the typical machine learning approach because the feature extraction and classification phases are separate, whereas in Deep Neural Networks (DNNs) the two phases are combined into one. Therefore, Convolutional Neural Network (CNN) models give better accuracy in facial expression recognition than traditional classifiers, but the performance of CNNs is still hampered by noisy and deviated images in the dataset. Motivated by these drawbacks, this work studies the use of image pre-processing techniques (resizing, grayscale conversion and normalization) to enhance the performance of deep learning methods for facial expression recognition, and shows the influence of data pre-processing on the further processing of images. The accuracy obtained with each pre-processing method is compared, combinations of them are analysed, and the appropriate preprocessing techniques are identified and implemented to examine the variability of accuracies in predicting facial expressions.
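Two of the pre-processing steps named above, grayscale conversion and normalization, can be sketched on a tiny nested-list "image" (the luma coefficients are the standard ITU-R BT.601 weights; the 2×2 image is a made-up example):

```python
def to_grayscale(rgb_image):
    # Weighted sum of the R, G, B channels with the standard luma weights.
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in rgb_image]

def normalize(gray_image):
    # Min-max normalization of all pixel values into [0, 1].
    flat = [p for row in gray_image for p in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1.0                  # avoid division by zero on flat images
    return [[(p - lo) / span for p in row] for row in gray_image]

img = [[(255, 0, 0), (0, 255, 0)],
       [(0, 0, 255), (255, 255, 255)]]
norm = normalize(to_grayscale(img))
```

In practice these operations run on whole arrays via an image library; the point here is only the order and effect of the two transforms.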
As one of the main methods for measuring microbial community functional diversity, the Biolog method is favored by many researchers for its simple operation, high sensitivity, strong resolution and rich data output. However, the preprocessing methods reported in the literature are not the same. In order to screen for the best preprocessing method, this paper applied three typical treatments to explore the effect of different preprocessing methods on measured soil microbial community functional diversity. The results showed that the overall trend of method B's AWCD values was better than that of methods A and C. Method B gave higher microbial utilization of the six carbon sources, and its results were relatively stable. The Simpson index, Shannon richness index and carbon source utilization richness index of the treatments ranked B > C > A, while the McIntosh index and Shannon evenness were not very stable; the differences in the analysis of variance were not significant, but method B always had the smallest variance. Method B's principal component analysis results were also better than those of A and C. In summary, the method using shaking at 250 r/min for 30 minutes and cultivating at 28 °C was the best, because it was simple, convenient, and highly repeatable.
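The AWCD (average well color development) values compared above are conventionally computed as the mean optical-density difference between each substrate well and the control well of a Biolog plate, with negative differences set to zero. A pure-Python sketch with made-up readings:

```python
def awcd(well_od, control_od):
    """Average well color development from a list of substrate-well optical
    densities and the control-well optical density.  Treating negative
    differences as zero is a common convention, assumed here."""
    diffs = [max(od - control_od, 0.0) for od in well_od]
    return sum(diffs) / len(diffs)

readings = [0.35, 0.60, 0.12, 0.05, 0.48]      # illustrative OD values
print(round(awcd(readings, control_od=0.10), 3))
```

Tracking AWCD over incubation time gives the trend curves that distinguish preprocessing methods A, B and C.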
This paper discusses some aspects of finite element computation, such as the automatic generation of finite element meshes, mesh refinement, handling of node density, load distribution, optimum design and the drawing of stress contours, and describes the development process of software for a planar 8-node element.
Conventional poststack inversion uses standard recursion formulas to obtain impedance trace by trace, and thus cannot allow for lateral regularization. In this paper, the 1D edge-preserving smoothing (EPS) filter is extended to 2D/3D for preconditioning the impedance model in impedance inversion. The EPS filter incorporates a priori knowledge into the seismic inversion; the a priori knowledge introduced by EPS preconditioning relates to the blocky features of the impedance model, which makes the formation interfaces and geological edges precise and keeps the inversion procedure robust. The proposed method is first applied to two 2D models to show its feasibility and stability, and then to a real 3D seismic work area from Southwest China to predict reef reservoirs in practice.
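A minimal 1D edge-preserving smoothing filter of the kind extended to 2D/3D above can be sketched as follows: for every sample, examine all length-L windows containing it, pick the window with the smallest variance, and output that window's mean. This smooths within layers while leaving blocky interfaces sharp (a generic EPS formulation, not necessarily the authors' exact variant):

```python
def eps_filter(signal, window=3):
    n = len(signal)
    out = []
    for i in range(n):
        best = None
        # All windows of the given length that contain sample i.
        for start in range(max(0, i - window + 1), min(i, n - window) + 1):
            w = signal[start:start + window]
            mean = sum(w) / window
            var = sum((x - mean) ** 2 for x in w) / window
            if best is None or var < best[0]:
                best = (var, mean)           # keep the least-variable window
        out.append(best[1])
    return out

step = [1.0, 1.0, 1.0, 5.0, 5.0, 5.0]        # a blocky impedance "interface"
print(eps_filter(step))                       # → [1.0, 1.0, 1.0, 5.0, 5.0, 5.0]
```

Note that the step edge survives unchanged, which is exactly the blocky prior the inversion preconditioning exploits.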
The Moon-based Ultraviolet Telescope (MUVT) is one of the payloads on the Chang'e-3 (CE-3) lunar lander. Because of the advantages of having no atmospheric disturbances and the slow rotation of the Moon, it can make long-term continuous observations of a series of important celestial objects in the near-ultraviolet band (245-340 nm) and perform a sky survey of selected areas, which cannot be completed on Earth. Characteristic changes in celestial brightness over time can be found by analyzing image data from the MUVT, and the radiation mechanism and physical properties of these celestial objects can be deduced by comparison with a physical model. To explain the scientific purposes of the MUVT, this article analyzes the preprocessing of MUVT image data and makes a preliminary evaluation of data quality. The results demonstrate that the methods used for data collection and preprocessing are effective, and that the Level 2A and 2B image data satisfy the requirements of follow-up scientific research.
Quantum Machine Learning (QML) techniques have recently been attracting massive interest. However, reported applications usually employ synthetic or well-known datasets. One of these techniques, based on a hybrid approach combining quantum and classical devices, is the Variational Quantum Classifier (VQC), whose development seems promising. Although largely studied, VQC implementations for "real-world" datasets are still challenging on Noisy Intermediate-Scale Quantum (NISQ) devices. In this paper we propose a preprocessing pipeline based on Stokes parameters for data mapping. This pipeline enhances the prediction rates when applying VQC techniques, improving the feasibility of solving classification problems on NISQ devices. By including feature selection techniques and geometrical transformations, enhanced quantum state preparation is achieved. In addition, a representation based on the Stokes parameters on the Poincaré sphere makes it possible to visualize the data. Our results show that the proposed techniques improve the classification score for the incidence of acute comorbid diseases in Type 2 Diabetes Mellitus patients. Using the implementation of the VQC available in IBM's Qiskit framework, we obtained accuracies of 70% and 72% with two and three qubits, respectively.
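One way to picture the Stokes-parameter mapping used in the pipeline: two feature values act as field amplitudes and a third as their relative phase, giving a normalized point (S1, S2, S3)/S0 on the Poincaré sphere. The exact mapping in the paper may differ; this sketch shows only the standard optics formulas and their sphere invariant:

```python
import math

def stokes(a1, a2, delta):
    """Stokes parameters of a fully polarized state with amplitudes a1, a2
    and relative phase delta (standard textbook definitions)."""
    s0 = a1 ** 2 + a2 ** 2
    s1 = a1 ** 2 - a2 ** 2
    s2 = 2 * a1 * a2 * math.cos(delta)
    s3 = 2 * a1 * a2 * math.sin(delta)
    return s0, s1, s2, s3

s0, s1, s2, s3 = stokes(0.8, 0.6, math.pi / 4)
# For a fully polarized state the point lies on the sphere: S1²+S2²+S3² = S0².
print(s1 ** 2 + s2 ** 2 + s3 ** 2, s0 ** 2)
```

Because the mapped point always lands on the unit sphere (after dividing by S0), it doubles as a natural Bloch-sphere-style state preparation for the qubits.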
Minimizing the disadvantage of illumination variations in face images across different scenarios is one of the major challenges for face recognition. The Local Binary Pattern (LBP) has been proven successful for face recognition; however, it is still rare to use LBP as an illumination preprocessing approach. In this paper, we propose a new LBP-based multi-scale illumination preprocessing method. The method mainly includes three aspects: threshold adjustment, multi-scale addition, and symmetry restoration/neighborhood replacement. Our experimental results show that the proposed method performs better than existing LBP-based methods for illumination preprocessing. Moreover, compared with face image preprocessing methods such as histogram equalization, Gamma transformation, Retinex, and the simplified LBP operator, our method can effectively improve the robustness of face recognition against illumination variation and achieve a higher recognition rate.
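The building block behind all such methods is the basic 3×3 LBP operator: each neighbour is thresholded against the centre pixel and the resulting bits form one 8-bit code. A pure-Python sketch (the bit ordering is a convention choice, and the multi-scale and threshold-adjustment extensions of the paper are not shown):

```python
def lbp_code(patch):
    """patch: 3×3 list of lists of grey levels; returns the LBP code 0-255."""
    c = patch[1][1]
    # Neighbours taken clockwise starting from the top-left corner.
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, value in enumerate(neighbours):
        if value >= c:                       # threshold against the centre
            code |= 1 << bit
    return code

patch = [[9, 9, 9],
         [1, 5, 1],
         [1, 1, 1]]
print(lbp_code(patch))  # → 7
```

Because the code depends only on orderings relative to the centre pixel, it is largely invariant to monotonic illumination changes, which is why LBP is attractive as an illumination-preprocessing step rather than only as a feature.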
New adaptive preprocessing algorithms based on the polar coordinate system were put forward to obtain high-precision corneal topography calculation results. Adaptive algorithms for locating the concentric-circle center were created to accurately capture the center of the original Placido-based image, expand the image into a matrix centered on the circle center, and convert the matrix into the polar coordinate system with the circle center as the pole. Adaptive image smoothing followed, and the characteristic useful circles were extracted via horizontal edge detection, exploiting the fact that useful circles appear as approximately horizontal lines while noise signals appear as vertical lines or lines at other angles. An effective combination of morphological operators was designed to remedy the data loss caused by noise disturbances and to obtain a complete circle-edge-detection image satisfying the requirements of precise calculation of the follow-up parameters. The experimental data show that the algorithms meet the requirements of practical detection, with less data loss, higher data accuracy and easier availability.
At low bitrates, all block discrete cosine transform (BDCT) based video coding algorithms suffer from visible blocking and ringing artifacts in the reconstructed images, because the quantization is too coarse and high-frequency DCT coefficients tend to be quantized to zero. Preprocessing algorithms can enhance coding efficiency, and thus reduce the likelihood of blocking and ringing artifacts generated in the video coding process, by applying a low-pass filter before encoding to remove some relatively insignificant high-frequency components. In this paper, we introduce a new adaptive preprocessing algorithm that employs an improved bilateral filter to provide adaptive edge-preserving low-pass filtering, adjusted according to the quantization parameters. Whether at a low or high bit rate, the preprocessing provides appropriate filtering that makes the video encoder more efficient and yields better reconstructed image quality. Experimental results demonstrate that the proposed preprocessing algorithm significantly improves both subjective and objective quality.
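The edge-preserving low-pass idea can be illustrated with a 1D bilateral filter: each output sample is a weighted average whose weights decay with both spatial distance and intensity difference, so flat regions are smoothed while strong edges survive. In the paper's adaptive scheme the range weight would be tied to the quantization parameter; the fixed sigmas below are illustrative assumptions:

```python
import math

def bilateral_1d(signal, radius=2, sigma_s=1.0, sigma_r=10.0):
    out = []
    n = len(signal)
    for i in range(n):
        acc = norm = 0.0
        for j in range(max(0, i - radius), min(n, i + radius + 1)):
            # Spatial weight × range (intensity-difference) weight.
            w = (math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)) *
                 math.exp(-((signal[i] - signal[j]) ** 2) / (2 * sigma_r ** 2)))
            acc += w * signal[j]
            norm += w
        out.append(acc / norm)
    return out

noisy_edge = [10, 11, 9, 10, 100, 101, 99, 100]
smoothed = bilateral_1d(noisy_edge)
```

The small fluctuations on each plateau are averaged away, but the 10-to-100 step is left essentially intact because its range weight is near zero.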
Many classifiers and methods have been proposed to deal with the letter recognition problem. Among them, clustering is a widely used method, but clustering only once is often not adequate. Here, we adopt data preprocessing and a re-kernel clustering method to tackle the letter recognition problem. To validate the effectiveness and efficiency of the proposed method, we introduce re-kernel clustering into Kernel Nearest Neighbor classification (KNN), Radial Basis Function Neural Networks (RBFNN), and Support Vector Machines (SVM). Furthermore, we compare re-kernel clustering with one-time kernel clustering (denoted kernel clustering for short). Experimental results validate that re-kernel clustering forms fewer and more feasible kernels and attains higher classification accuracy.
Due to frequent changes of wind speed and wind direction, the accuracy of wind turbine (WT) power prediction using traditional data preprocessing methods is low. This paper proposes a data preprocessing method which combines POT with DBSCAN (POT-DBSCAN) to improve the prediction efficiency of a wind power prediction model. Firstly, based on data from a WT in the normal operation condition, the power prediction model of the WT is established using the Particle Swarm Optimization algorithm combined with a BP Neural Network (PSO-BP). Secondly, the wind-power data obtained from the supervisory control and data acquisition (SCADA) system are preprocessed by the POT-DBSCAN method. Then, power prediction on the preprocessed data is carried out by the PSO-BP model. Finally, the necessity of preprocessing is verified by the evaluation indexes. This case analysis shows that the prediction result with POT-DBSCAN preprocessing is better than that with the Quartile method. Therefore, the accuracy of the data and of the prediction model can be improved by using this method.
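The outlier-removal half of the POT-DBSCAN preprocessing can be sketched with a minimal pure-Python DBSCAN: SCADA points whose neighbourhood is too sparse are labelled noise and would be dropped before training. The POT threshold-selection step is omitted, and the toy (wind speed, power) data and parameters are illustrative:

```python
def dbscan(points, eps, min_pts):
    """Minimal DBSCAN; returns one cluster label per point, -1 = noise.
    min_pts counts the point itself, as in the usual formulation."""
    def neighbours(i):
        return [j for j, q in enumerate(points)
                if sum((a - b) ** 2 for a, b in zip(points[i], q)) <= eps ** 2]

    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbours(i)
        if len(seeds) < min_pts:
            labels[i] = -1                   # provisionally noise
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster          # former noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nb = neighbours(j)
            if len(nb) >= min_pts:           # core point: expand the cluster
                queue.extend(nb)
    return labels

# Two dense groups of (wind speed, power) pairs plus one stray reading.
data = [(5.0, 100), (5.1, 102), (5.2, 101),
        (9.0, 400), (9.1, 402), (9.2, 401),
        (7.0, 900)]
labels = dbscan(data, eps=5.0, min_pts=3)
```

The stray (7.0, 900) reading gets label -1 and would be discarded, while the two operating-regime clusters are kept.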
In this study, we propose a data preprocessing algorithm called D-IMPACT inspired by the IMPACT clustering algorithm. D-IMPACT iteratively moves data points based on attraction and density to detect and remove noise and outliers, and separate clusters. Our experimental results on two-dimensional datasets and practical datasets show that this algorithm can produce new datasets such that the performance of the clustering algorithm is improved.
Manuscript preprocessing is the earliest stage in the transliteration process of manuscripts in Javanese script. It aims to produce the images of the individual letters forming the manuscript, to be processed further in the manuscript transliteration system. There are four main steps in manuscript preprocessing: manuscript binarization, noise reduction, line segmentation, and character segmentation for every line image produced by line segmentation. Testing on parts of the PB.A57 manuscript, which contain 291 character images, concluded with a 95% level of confidence that the success rate of preprocessing in producing Javanese character images ranged from 85.9% to 94.82%.
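Manuscript binarization, the first of the four steps, is commonly done with a global threshold; the abstract does not say which method was used, so the Otsu-style sketch below is an assumption. Otsu's method picks the threshold that maximizes the between-class variance of the grey-level histogram:

```python
def otsu_threshold(grey_levels):
    """grey_levels: flat list of 0-255 ints; returns the threshold that
    maximizes between-class variance (Otsu's criterion)."""
    hist = [0] * 256
    for g in grey_levels:
        hist[g] += 1
    total = len(grey_levels)
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 = sum(hist[:t + 1])               # pixels at or below the threshold
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0 = sum(i * hist[i] for i in range(t + 1)) / w0
        mu1 = sum(i * hist[i] for i in range(t + 1, 256)) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2     # between-class variance (scaled)
        if var > best_var:
            best_t, best_var = t, var
    return best_t

pixels = [20, 22, 25, 30, 200, 205, 210, 220]   # dark ink vs. light paper
t = otsu_threshold(pixels)
binary = [1 if p <= t else 0 for p in pixels]    # 1 = ink (foreground)
```

On a bimodal ink-versus-paper histogram like this, the threshold lands in the gap between the two modes, cleanly separating foreground from background.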
In the image acquisition process, the quality of microscopic images is degraded by electrical noise, quantization noise, uneven light illumination, etc. Hence, image preprocessing is necessary and important for improving image quality. Background noise and pulse noise are two common types of noise in microscopic images. In this paper, a gradient-based anisotropic filtering algorithm is proposed which can filter out the background noise while effectively preserving object boundaries. The filtering performance was evaluated by comparison with several other filtering algorithms.
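A gradient-based anisotropic filter in one dimension (Perona-Malik style) illustrates the principle: the diffusion coefficient shrinks where the local gradient is large, so background noise is smoothed while object boundaries are preserved. This is a generic sketch, not the authors' exact algorithm, and the parameters are illustrative:

```python
import math

def anisotropic_diffuse(signal, iterations=20, kappa=10.0, dt=0.2):
    s = list(signal)
    for _ in range(iterations):
        nxt = list(s)
        for i in range(1, len(s) - 1):
            ge = s[i + 1] - s[i]              # east gradient
            gw = s[i - 1] - s[i]              # west gradient
            ce = math.exp(-(ge / kappa) ** 2) # conductance ≈ 0 across strong edges
            cw = math.exp(-(gw / kappa) ** 2)
            nxt[i] = s[i] + dt * (ce * ge + cw * gw)
        s = nxt
    return s

noisy = [0, 2, 1, 0, 2, 50, 52, 51, 49, 50]   # noisy background | bright object
out = anisotropic_diffuse(noisy)
```

After a few iterations the fluctuations on each side are damped, but diffusion across the 2-to-50 boundary is essentially switched off, so the object edge stays sharp.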
Over the last decades, infantile brain networks have received increased scientific attention due to the elevated need to better understand the maturational processes of the human brain and the early forms of neural abnormalities. Electroencephalography (EEG) is becoming a popular tool for the investigation of functional connectivity (FC) of the immature brain, as it is easily applied in awake, non-sedated infants. However, there are still no universally accepted standards regarding the preprocessing and processing analyses which address the peculiarities of infantile EEG data, resulting in comparability difficulties between different studies. Nevertheless, during the last few years there has been a growing effort to overcome these issues through the creation of age-appropriate pipelines. Although FC in infants has mostly been measured via linear metrics, particularly coherence analysis, non-linear methods such as cross-frequency coupling (CFC) may be more valuable for the investigation of network communication and early network development. Additionally, graph theory analysis often accompanies linear and non-linear FC computation, offering a more comprehensive understanding of the infantile network architecture. The current review attempts to gather the basic information on the preprocessing and processing techniques usually employed by infantile FC studies, while providing guidelines for future studies.
Cancer is one of the most dangerous diseases, with high mortality. One of the principal treatments is radiotherapy, which uses radiation beams to destroy cancer cells; this workflow requires a lot of experience and skill from doctors and technicians. In our study, we focus on the 3D dose prediction problem in radiotherapy by applying a deep learning approach to computed tomography (CT) images of cancer patients. Medical image data has more complex characteristics than ordinary image data, and this research aims to explore the effectiveness of data preprocessing and augmentation in the context of the 3D dose prediction problem. We propose four strategies to test our hypothesis on different aspects of applying data preprocessing and augmentation. In these strategies, we train a custom convolutional neural network whose structure is inspired by the U-Net, with residual blocks also applied to the architecture. The output of the network is passed through a rectified linear unit (ReLU) for each pixel to ensure there are no negative values, which would be absurd for radiation doses. Our experiments were conducted on the dataset of the Open Knowledge-Based Planning Challenge, collected from head-and-neck cancer patients treated with radiation therapy. The results of the four strategies show that our hypothesis is rational, as evaluated by metrics in terms of the Dose score and the Dose-volume histogram score (DVH score). In the best training cases, the Dose score is 3.08 and the DVH score is 1.78. In addition, we also conducted a comparison with the results of another study in the same context of using the loss function.
Network intrusion detection systems need to be updated due to the rise in cyber threats. To improve detection accuracy, this research presents a robust strategy that makes use of a stacked ensemble method, combining the advantages of several machine learning models. The ensemble is made up of various base models, such as Decision Trees, K-Nearest Neighbors (KNN), Multi-Layer Perceptrons (MLP), and Naive Bayes, each of which offers a distinct perspective on the properties of the data. The research adheres to a methodical workflow that begins with thorough data preprocessing to guarantee the accuracy and applicability of the data. Feature engineering is used to extract useful attributes from the network traffic data, which are essential for efficient model training. The ensemble approach combines these models by training a Logistic Regression meta-learner on the base models' predictions. In addition to increasing prediction accuracy, this tiered approach helps circumvent the drawbacks of individual models. Evaluation on a network intrusion dataset shows high accuracy, precision, and recall, indicating the model's efficacy in identifying malicious activity. Cross-validation is used to make sure the models are reliable and generalize well to new, untested data. In addition to advancing cybersecurity, the research establishes a foundation for the implementation of flexible and scalable intrusion detection systems. This hybrid, stacked ensemble model has considerable potential for improving cyberattack prevention, lowering the likelihood of successful attacks, and offering a scalable solution that can be adjusted to meet new threats and technological advancements.
The paper presents comprehensive, newly developed software, poROSE (poROus materials examination SoftwarE), for the qualitative and quantitative assessment of porous materials, together with analysis methodologies developed by the authors as a solution to emerging challenges. A low-porosity rock sample was analyzed and, thanks to the methodologies developed and implemented in the poROSE software, its main geometrical properties were calculated. The tool was also used in the preprocessing part of the computational analysis to prepare a geometrical representation of the porous material. Basic functions such as the elimination of blind pores in the geometrical model were completed, and the geometrical model was exported to CFD software. As a result, it was possible to calculate the basic properties of the analyzed porous material sample. The developed tool allows quantitative and qualitative analysis to determine the most important properties characterizing porous materials. The input data can be grey-level images from X-ray computed tomography (CT), a scanning electron microscope (SEM), or a focused ion beam with scanning electron microscope (FIB-SEM). A geometric model exported in the proper format can be used as input for modeling mass, momentum and heat transfer, as well as in strength or thermo-strength analysis of any porous material. In this example, a thermal analysis was carried out on the skeleton of the rock sample, and thermal conductivity was estimated using empirical equations.
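The "elimination of blind pores" step can be sketched on a binary pixel/voxel grid (1 = pore, 0 = solid): any pore region not connected to the sample boundary cannot carry flow, so it is filled in before the geometry is exported to CFD. The 2D grid and 4-neighbour connectivity below are illustrative simplifications of what would be a 3D operation in poROSE:

```python
def remove_blind_pores(grid):
    """Keep only pore cells reachable from the grid boundary (open pores);
    fill everything else with solid.  Flood fill from all boundary pores."""
    rows, cols = len(grid), len(grid[0])
    reachable = [[False] * cols for _ in range(rows)]
    stack = [(r, c) for r in range(rows) for c in range(cols)
             if grid[r][c] == 1 and (r in (0, rows - 1) or c in (0, cols - 1))]
    for r, c in stack:
        reachable[r][c] = True
    while stack:
        r, c = stack.pop()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 1 and not reachable[nr][nc]):
                reachable[nr][nc] = True
                stack.append((nr, nc))
    # Unreached pore cells are blind pores; they become solid (0).
    return [[1 if reachable[r][c] else 0 for c in range(cols)]
            for r in range(rows)]

sample = [[0, 1, 0, 0],
          [0, 1, 0, 0],
          [0, 0, 1, 0],
          [0, 0, 0, 0]]
open_pores = remove_blind_pores(sample)
```

Here the two-cell channel touching the top edge survives, while the isolated interior pore at (2, 2) is recognized as blind and filled.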
Funding for the soil microbial community functional diversity study: the National and International Scientific and Technological Cooperation Project "The Application of Microbial Agents on Mining Reclamation and Ecological Recovery" (2011DFR31230); the Key Project of the Shanxi Academy of Agricultural Sciences "The Research and Application of Bio-organic Fertilizer on Mining Reclamation and Soil Remediation" (2013zd12); and the Major Science and Technology Program of Shanxi Province "Key Technology Research and Demonstration of Mining Waste Land Ecosystem Restoration and Reconstruction" (20121101009).
文摘As one of the main methods of microbial community functional diversity measurement, biolog method was favored by many researchers for its simple oper- ation, high sensitivity, strong resolution and rich data. But the preprocessing meth- ods reported in the literatures were not the same. In order to screen the best pre- processing method, this paper took three typical treatments to explore the effect of different preprocessing methods on soil microbial community functional diversity. The results showed that, method B's overall trend of AWCD values was better than A and C's. Method B's microbial utilization of six carbon sources was higher, and the result was relatively stable. The Simpson index, Shannon richness index and Car- bon source utilization richness index of the two treatments were B〉C〉A, while the Mclntosh index and Shannon evenness were not very stable, but the difference of variance analysis was not significant, and the method B was always with a smallest variance. Method B's principal component analysis was better than A and C's. In a word, the method using 250 r/min shaking for 30 minutes and cultivating at 28 ℃ was the best one, because it was simple, convenient, and with good repeatability.
Abstract: This paper discusses some aspects of finite element computation, such as the automatic generation of finite element meshes, mesh refinement, node density processing, load distribution, optimum design, and the drawing of stress contours, and describes the development process of software for a planar 8-node element.
Funding: The National Key S&T Special Projects (No. 2017ZX05008004-008), the National Natural Science Foundation of China (Nos. 41874146 and 41704134), the Innovation Team of Youth Scientific and Technological Research in Southwest Petroleum University (No. 2017CXTD08), and the Initiative Projects for Ph.D. in China West Normal University (No. 19E063).
Abstract: Conventional poststack inversion uses standard recursion formulas to obtain impedance in a single trace and cannot allow for lateral regularization. In this paper, the 1D edge-preserving smoothing (EPS) filter is extended to 2D/3D to precondition the impedance model in impedance inversion. The EPS filter incorporates a priori knowledge into the seismic inversion; the knowledge incorporated through EPS preconditioning relates to the blocky features of the impedance model, which makes formation interfaces and geological edges precise and keeps the inversion procedure robust. The proposed method is first performed on two 2D models to show its feasibility and stability, and then on a real 3D seismic work area from Southwest China to predict reef reservoirs in practice.
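For reference, the 1D EPS filter the paper extends to 2D/3D can be sketched as follows: each output sample is the mean of the least-variance window containing it, so averaging never straddles a sharp edge, which is what produces the blocky, edge-preserving result. This is a generic textbook version, not the authors' implementation:

```python
import numpy as np

def eps_1d(x, win=3):
    """1D edge-preserving smoothing: each output sample is the mean of the
    least-variance length-`win` window that contains it."""
    x = np.asarray(x, float)
    n = len(x)
    out = np.empty(n)
    for i in range(n):
        best_var, best_mean = np.inf, x[i]
        # every length-`win` window that contains sample i
        for s in range(max(0, i - win + 1), min(i, n - win) + 1):
            w = x[s:s + win]
            if w.var() < best_var:
                best_var, best_mean = w.var(), w.mean()
        out[i] = best_mean
    return out
```

On a step function the filter reproduces the step exactly, since the minimum-variance window on either side of the edge lies entirely within one block.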
Abstract: The Moon-based Ultraviolet Telescope (MUVT) is one of the payloads on the Chang'e-3 (CE-3) lunar lander. Because of the advantages of having no atmospheric disturbances and the slow rotation of the Moon, we can make long-term continuous observations of a series of important celestial objects in the near-ultraviolet band (245-340 nm) and perform a sky survey of selected areas, which cannot be completed on Earth. We can find characteristic changes in celestial brightness with time by analyzing image data from the MUVT, and deduce the radiation mechanism and physical properties of these celestial objects after comparing with a physical model. In order to explain the scientific purposes of the MUVT, this article analyzes the preprocessing of MUVT image data and makes a preliminary evaluation of data quality. The results demonstrate that the methods used for data collection and preprocessing are effective, and that the Level 2A and 2B image data satisfy the requirements of follow-up scientific research.
基金funded by eVIDA Research group IT-905-16 from Basque Government.
Abstract: Quantum Machine Learning (QML) techniques have recently attracted massive interest; however, reported applications usually employ synthetic or well-known datasets. One of these techniques, based on a hybrid approach combining quantum and classical devices, is the Variational Quantum Classifier (VQC), whose development seems promising. Although largely studied, VQC implementations for "real-world" datasets are still challenging on Noisy Intermediate-Scale Quantum (NISQ) devices. In this paper we propose a preprocessing pipeline based on Stokes parameters for data mapping. This pipeline enhances prediction rates when applying VQC techniques, improving the feasibility of solving classification problems using NISQ devices. By including feature selection techniques and geometrical transformations, enhanced quantum state preparation is achieved, and a representation based on the Stokes parameters on the Poincaré sphere makes it possible to visualize the data. Our results show that the proposed techniques improve the classification score for the incidence of acute comorbid diseases in Type 2 Diabetes Mellitus patients. Using the implementation of the VQC available in IBM's Qiskit framework, we obtained accuracies of 70% and 72% with two and three qubits, respectively.
Abstract: One of the major challenges for face recognition is to minimize the disadvantage of illumination variations of face images in different scenarios. Local Binary Pattern (LBP) has been proved successful for face recognition; however, it is still rare to use LBP as an illumination preprocessing approach. In this paper, we propose a new LBP-based multi-scale illumination preprocessing method. The method mainly includes three aspects: threshold adjustment, multi-scale addition, and symmetry restoration/neighborhood replacement. Our experimental results show that the proposed method performs better than existing LBP-based methods for illumination preprocessing. Moreover, compared with face image preprocessing methods such as histogram equalization, Gamma transformation, Retinex, and the simplified LBP operator, our method can effectively improve robustness against illumination variation for face recognition and achieve a higher recognition rate.
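The basic 3x3 LBP operator underlying such methods thresholds each pixel's eight neighbours against the centre and packs the results into an 8-bit code, which is largely invariant to monotonic illumination changes. A minimal sketch, without the paper's threshold adjustment or multi-scale extensions:

```python
import numpy as np

def lbp_basic(img):
    """Basic 3x3 LBP code per interior pixel: threshold the 8 neighbours
    against the centre pixel and pack the bits into a byte (0..255)."""
    img = np.asarray(img, float)
    c = img[1:-1, 1:-1]                      # centre pixels
    # neighbour offsets, clockwise from top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offs):
        nb = img[1 + dy: img.shape[0] - 1 + dy,
                 1 + dx: img.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.uint8) << bit)
    return code
```

Because the comparison `nb >= c` depends only on the ordering of intensities, multiplying the whole image by an illumination gain leaves the codes unchanged.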
Funding: Project (20120321028-01) supported by the Scientific and Technological Key Project of Shanxi Province, China; Project (20113101) supported by the Postgraduate Innovative Key Project of Shanxi Province, China.
Abstract: New adaptive preprocessing algorithms based on the polar coordinate system were put forward to obtain high-precision corneal topography calculation results. Adaptive algorithms for locating the concentric-circle center were created to accurately capture the circle center of the original Placido-based image, expand the image into a matrix centered on the circle center, and convert the matrix into the polar coordinate system with the circle center as the pole. Adaptive image smoothing followed, and the characteristics of the useful circles were extracted via horizontal edge detection, based on useful circles presenting as approximately horizontal lines while noise signals present as vertical lines or lines at other angles. Effective combinations of different morphological operators were designed to remedy data loss caused by noise disturbances and obtain a complete circle-edge-detection image satisfying the requirements of precise calculation of follow-up parameters. The experimental data show that the algorithms meet the requirements of practical detection, with the characteristics of less data loss, higher data accuracy, and easier availability.
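The Cartesian-to-polar conversion at the heart of this pipeline can be sketched with nearest-neighbour resampling: in polar coordinates the Placido rings become approximately horizontal lines, which is what makes the subsequent horizontal edge detection work. A simplified illustration (the paper's adaptive centre-location step is not reproduced; the centre is passed in):

```python
import numpy as np

def to_polar(img, center, n_r, n_theta):
    """Resample a grayscale image into (radius x angle) polar coordinates
    around `center` (row, col), using nearest-neighbour lookup."""
    cy, cx = center
    h, w = img.shape
    r_max = min(cy, cx, h - 1 - cy, w - 1 - cx)   # stay inside the image
    r = np.linspace(0, r_max, n_r)
    t = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    R, T = np.meshgrid(r, t, indexing="ij")
    rows = np.clip(np.round(cy + R * np.sin(T)).astype(int), 0, h - 1)
    cols = np.clip(np.round(cx + R * np.cos(T)).astype(int), 0, w - 1)
    return img[rows, cols]
```

A ring of constant radius in the input maps to one row of the output, so ring edges can be found with a purely horizontal edge detector.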
Funding: Project (No. 2006CB303104) supported by the National Basic Research Program (973) of China.
Abstract: At low bitrates, all block discrete cosine transform (BDCT) based video coding algorithms suffer from visible blocking and ringing artifacts in the reconstructed images, because the quantization is too coarse and high-frequency DCT coefficients are inclined to be quantized to zero. Preprocessing algorithms can enhance coding efficiency, and thus reduce the likelihood of blocking and ringing artifacts generated in the video coding process, by applying a low-pass filter before video encoding to remove some relatively insignificant high-frequency components. In this paper, we introduce a new adaptive preprocessing algorithm, which employs an improved bilateral filter to provide adaptive edge-preserving low-pass filtering adjusted according to the quantization parameters. Whether at low or high bitrate, the preprocessing can provide proper filtering to make the video encoder more efficient and yield better reconstructed image quality. Experimental results demonstrate that our proposed preprocessing algorithm can significantly improve both subjective and objective quality.
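A plain (non-adaptive) bilateral filter, the building block the paper improves on, weights each neighbour by both spatial distance and intensity difference, so smoothing stops at strong edges. A brute-force NumPy sketch with assumed default parameters (the paper's quantization-parameter adaptation is not reproduced):

```python
import numpy as np

def bilateral(img, half=2, sigma_s=1.5, sigma_r=0.1):
    """Brute-force bilateral filter for a grayscale image in [0, 1]:
    weighted average over a (2*half+1)^2 window, with Gaussian falloff in
    both space (sigma_s) and intensity (sigma_r)."""
    img = np.asarray(img, float)
    pad = np.pad(img, half, mode="edge")
    out = np.zeros_like(img)
    wsum = np.zeros_like(img)
    h, w = img.shape
    for dy in range(-half, half + 1):
        for dx in range(-half, half + 1):
            nb = pad[half + dy: half + dy + h, half + dx: half + dx + w]
            wgt = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2)
                         - (nb - img) ** 2 / (2 * sigma_r ** 2))
            out += wgt * nb
            wsum += wgt
    return out / wsum
```

With a small `sigma_r`, pixels across a strong edge receive a near-zero weight, which is why the filter removes texture detail while leaving sharp transitions (the edges the encoder must preserve) almost untouched.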
Funding: Supported by the National Science Foundation (No. IIS-9988642) and the Multidisciplinary Research Program.
Abstract: Many classifiers and methods have been proposed to deal with the letter recognition problem. Among them, clustering is a widely used method, but clustering only once is not adequate. Here, we adopt data preprocessing and a re-kernel clustering method to tackle the letter recognition problem. In order to validate the effectiveness and efficiency of the proposed method, we introduce re-kernel clustering into Kernel Nearest Neighbor classification (KNN), Radial Basis Function Neural Networks (RBFNN), and Support Vector Machines (SVM). Furthermore, we compare re-kernel clustering with one-time kernel clustering, denoted as kernel clustering for short. Experimental results validate that re-kernel clustering forms fewer and more feasible kernels and attains higher classification accuracy.
Funding: National Natural Science Foundation of China (Nos. 51875199 and 51905165), Hunan Natural Science Fund Project (2019JJ50186), and the Key Research and Development Program of Hunan Province (No. 2018GK2073).
Abstract: Due to frequent changes of wind speed and wind direction, the accuracy of wind turbine (WT) power prediction using traditional data preprocessing methods is low. This paper proposes a data preprocessing method which combines POT with DBSCAN (POT-DBSCAN) to improve the efficiency of the wind power prediction model. Firstly, according to data from the WT in the normal operating condition, the power prediction model of the WT is established based on the Particle Swarm Optimization (PSO) algorithm combined with a BP Neural Network (PSO-BP). Secondly, the wind-power data obtained from the supervisory control and data acquisition (SCADA) system are preprocessed by the POT-DBSCAN method. Then, power prediction on the preprocessed data is carried out by the PSO-BP model. Finally, the necessity of preprocessing is verified by the evaluation indexes. The case analysis shows that the prediction result with POT-DBSCAN preprocessing is better than that with the Quartile method; therefore, the accuracy of both the data and the prediction model can be improved by using this method.
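The DBSCAN half of this preprocessing can be illustrated with a minimal density mask: points with enough neighbours within `eps` (core points), plus points within `eps` of a core point, are kept; isolated points are dropped as outliers. The POT step, which would set the outlier threshold from the tail of the residual distribution, is omitted here, and `eps`/`min_pts` are illustrative values, not the paper's settings:

```python
import numpy as np

def dbscan_mask(X, eps, min_pts):
    """Minimal DBSCAN-style keep-mask: True for core points (>= min_pts
    neighbours within eps, counting the point itself) and for border points
    within eps of a core point; False for noise."""
    X = np.asarray(X, float)
    d = np.linalg.norm(X[:, None] - X[None, :], axis=2)  # pairwise distances
    neigh = d <= eps
    core = neigh.sum(1) >= min_pts
    keep = core | (neigh & core[None, :]).any(1)
    return keep
```

Applied to a wind-speed/power scatter, points belonging to the dense normal-operation curve survive while scattered anomalous records (curtailment, sensor faults) are removed before training the prediction model.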
Abstract: In this study, we propose a data preprocessing algorithm called D-IMPACT, inspired by the IMPACT clustering algorithm. D-IMPACT iteratively moves data points based on attraction and density to detect and remove noise and outliers and to separate clusters. Our experimental results on two-dimensional and practical datasets show that this algorithm can produce new datasets on which the performance of clustering algorithms is improved.
Abstract: Manuscript preprocessing is the earliest stage in the transliteration process of manuscripts in Javanese script. It aims to produce the images of the individual letters forming the manuscript, to be processed further in the manuscript transliteration system. There are four main steps in manuscript preprocessing: manuscript binarization, noise reduction, line segmentation, and character segmentation for every line image produced by line segmentation. Tests on parts of the PB.A57 manuscript, which contains 291 character images, concluded with a 95% level of confidence that the success rate of preprocessing in producing Javanese character images ranged from 85.9% to 94.82%.
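A common choice for the binarization step is Otsu's method, which picks the grey-level threshold maximizing the between-class variance of the image histogram. The abstract does not name its exact binarization algorithm, so this is a representative stand-in:

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's threshold for a uint8 image: the grey level that maximizes the
    between-class variance of the (foreground, background) split."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                     # class-0 probability
    mu = np.cumsum(p * np.arange(256))       # class-0 mean times omega
    mu_t = mu[-1]                            # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return int(np.argmax(sigma_b))
```

Binarizing with `gray <= otsu_threshold(gray)` (ink darker than paper) yields the black-and-white image that the line and character segmentation steps consume.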
Abstract: In the image acquisition process, the quality of microscopic images is degraded by electrical noise, quantization noise, uneven illumination, etc. Hence, image preprocessing is necessary and important to improve quality. Background noise and pulse noise are two common types of noise in microscopic images. In this paper, a gradient-based anisotropic filtering algorithm is proposed, which can filter out background noise while effectively preserving object boundaries. The filtering performance was evaluated by comparison with several other filtering algorithms.
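The classic gradient-based anisotropic filter is Perona-Malik diffusion: the conduction coefficient shrinks where the local gradient is large, so flat background regions are smoothed while object boundaries are preserved. A minimal sketch with assumed parameter values, offered as a textbook stand-in rather than the authors' exact algorithm:

```python
import numpy as np

def perona_malik(img, n_iter=10, kappa=0.1, lam=0.2):
    """Perona-Malik anisotropic diffusion: explicit 4-neighbour scheme with
    conduction g(d) = exp(-(d/kappa)^2), zero-flux borders. lam <= 0.25 keeps
    the explicit update stable."""
    u = np.asarray(img, float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)  # small where the gradient is large
    for _ in range(n_iter):
        # one-sided differences to the four neighbours
        dN = np.zeros_like(u); dN[1:, :] = u[:-1, :] - u[1:, :]
        dS = np.zeros_like(u); dS[:-1, :] = u[1:, :] - u[:-1, :]
        dW = np.zeros_like(u); dW[:, 1:] = u[:, :-1] - u[:, 1:]
        dE = np.zeros_like(u); dE[:, :-1] = u[:, 1:] - u[:, :-1]
        u += lam * (g(dN) * dN + g(dS) * dS + g(dW) * dW + g(dE) * dE)
    return u
```

Low-amplitude background noise produces small gradients (strong diffusion, noise removed), while cell or particle boundaries produce large gradients (weak diffusion, edges kept).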
Abstract: Over the last decades, infantile brain networks have received increased scientific attention due to the elevated need to better understand the maturational processes of the human brain and the early forms of neural abnormalities. Electroencephalography (EEG) is becoming a popular tool for the investigation of functional connectivity (FC) of the immature brain, as it is easily applied in awake, non-sedated infants. However, there are still no universally accepted standards regarding the preprocessing and processing analyses that address the peculiarities of infantile EEG data, resulting in difficulties in comparing different studies. Nevertheless, during the last few years there has been a growing effort to overcome these issues through the creation of age-appropriate pipelines. Although FC in infants has mostly been measured via linear metrics, particularly coherence analysis, non-linear methods such as cross-frequency coupling (CFC) may be more valuable for the investigation of network communication and early network development. Additionally, graph theory analysis often accompanies linear and non-linear FC computation, offering a more comprehensive understanding of the infantile network architecture. The current review attempts to gather the basic information on the preprocessing and processing techniques usually employed by infantile FC studies, while providing guidelines for future studies.
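The coherence analysis mentioned above reduces, in its simplest form, to averaging cross- and auto-spectra over data segments and normalizing. A minimal Welch-style estimate of the magnitude-squared coherence between two channels (real EEG pipelines add windowing, segment overlap, and artifact rejection, which are omitted here):

```python
import numpy as np

def msc(x, y, nperseg=128):
    """Magnitude-squared coherence between two signals via averaged
    periodograms over non-overlapping segments of length nperseg."""
    n = (len(x) // nperseg) * nperseg
    xs = np.asarray(x[:n], float).reshape(-1, nperseg)
    ys = np.asarray(y[:n], float).reshape(-1, nperseg)
    X = np.fft.rfft(xs, axis=1)
    Y = np.fft.rfft(ys, axis=1)
    pxx = (X * X.conj()).real.mean(0)        # auto-spectrum of x
    pyy = (Y * Y.conj()).real.mean(0)        # auto-spectrum of y
    pxy = (X * Y.conj()).mean(0)             # cross-spectrum
    return np.abs(pxy) ** 2 / (pxx * pyy)
```

The estimate lies in [0, 1] per frequency bin; averaging over several segments is essential, since a single-segment estimate is identically 1 regardless of the true coupling.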
基金sponsored by the Institute of Information Technology(Vietnam Academy of Science and Technology)with Project Code“CS24.01”.
Abstract: Cancer is one of the most dangerous diseases, with high mortality. One of the principal treatments is radiotherapy, which uses radiation beams to destroy cancer cells; this workflow requires a lot of experience and skill from doctors and technicians. In our study, we focused on the 3D dose prediction problem in radiotherapy by applying a deep learning approach to computed tomography (CT) images of cancer patients. Medical image data has more complex characteristics than ordinary image data, and this research aims to explore the effectiveness of data preprocessing and augmentation in the context of the 3D dose prediction problem. We proposed four strategies to test our hypothesis on different aspects of applying data preprocessing and augmentation. In each strategy, we trained our custom convolutional neural network model, which has a structure inspired by the U-Net, with residual blocks also applied to the architecture. The output of the network passes through a rectified linear unit (ReLU) for each pixel to ensure there are no negative values, which would be physically meaningless for radiation doses. Our experiments were conducted on the dataset of the Open Knowledge-Based Planning Challenge, which was collected from head and neck cancer patients treated with radiation therapy. The results of the four strategies show that our hypothesis is rational, as evaluated by the Dose-score and the Dose-volume histogram score (DVH-score). In the best training cases, the Dose-score is 3.08 and the DVH-score is 1.78. In addition, we also conducted a comparison with the results of another study in the same context of using the loss function.
Abstract: Network intrusion detection systems need to be updated due to the rise in cyber threats. In order to improve detection accuracy, this research presents a strong strategy that makes use of a stacked ensemble method, which combines the advantages of several machine learning models. The ensemble is made up of various base models, such as Decision Trees, K-Nearest Neighbors (KNN), Multi-Layer Perceptrons (MLP), and Naive Bayes, each of which offers a distinct perspective on the properties of the data. The research follows a methodical workflow that begins with thorough data preprocessing to guarantee the accuracy and applicability of the data. Feature engineering is used to extract from the network traffic data the useful attributes that are essential for efficient model training. The ensemble approach combines these models by training a Logistic Regression meta-learner on the base models' predictions. In addition to increasing prediction accuracy, this tiered approach helps overcome the drawbacks of individual models. Evaluation on a network intrusion dataset shows high accuracy, precision, and recall, indicating the model's efficacy in identifying malicious activity. Cross-validation is used to make sure the models are reliable and generalize well to new, untested data. In addition to advancing cybersecurity, the research establishes a foundation for the implementation of flexible and scalable intrusion detection systems. This hybrid, stacked ensemble model has great potential for improving cyberattack prevention, lowering the likelihood of successful cyberattacks, and offering a scalable solution that can be adjusted to meet new threats and technological advancements.
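The stacking scheme described above can be sketched end-to-end in plain NumPy: base learners produce out-of-fold predictions, which become the input features of a logistic-regression meta-learner. The two toy base learners below (nearest centroid and a one-feature stump) are simplified stand-ins for the paper's Decision Tree/KNN/MLP/Naive Bayes bases, and all parameter values are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class Centroid:
    """Nearest-centroid base learner (binary labels 0/1)."""
    def fit(self, X, y):
        self.c = [X[y == k].mean(0) for k in (0, 1)]
        return self
    def predict(self, X):
        d = [np.linalg.norm(X - c, axis=1) for c in self.c]
        return (d[1] < d[0]).astype(float)

class Stump:
    """One-feature mean-threshold base learner."""
    def fit(self, X, y):
        errs = [np.mean((X[:, j] > X[:, j].mean()) != y) for j in range(X.shape[1])]
        self.j = int(np.argmin([min(e, 1 - e) for e in errs]))
        self.t = X[:, self.j].mean()
        self.flip = errs[self.j] > 0.5
        return self
    def predict(self, X):
        p = (X[:, self.j] > self.t).astype(float)
        return 1 - p if self.flip else p

def stack_fit_predict(X, y, X_test, bases=(Centroid, Stump), k=2):
    """Stacking: out-of-fold base predictions -> logistic meta-learner."""
    n = len(X)
    folds = np.arange(n) % k
    Z = np.zeros((n, len(bases)))            # meta-features (out-of-fold)
    for f in range(k):
        tr, va = folds != f, folds == f
        for j, B in enumerate(bases):
            Z[va, j] = B().fit(X[tr], y[tr]).predict(X[va])
    # logistic-regression meta-learner trained by gradient descent
    Zb = np.c_[Z, np.ones(n)]
    w = np.zeros(Zb.shape[1])
    for _ in range(300):
        w -= 0.5 * Zb.T @ (sigmoid(Zb @ w) - y) / n
    # refit the bases on all training data for test-time meta-features
    Zt = np.column_stack([B().fit(X, y).predict(X_test) for B in bases])
    return (sigmoid(np.c_[Zt, np.ones(len(X_test))] @ w) > 0.5).astype(float)
```

Using out-of-fold predictions for the meta-features (rather than in-sample predictions) is the key design choice: it prevents the meta-learner from simply trusting an overfit base model.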
基金Project is financed by the National Centre for Research and Development in Poland,program LIDER VI,project no. LIDER/319/L–6/14/NCBR/2015: Innovative method of unconventional oil and gas reservoirs interpretation using computed X-ray tomography
Abstract: The paper presents comprehensive, newly developed software, poROSE (poROus materials examination SoftwarE), for the qualitative and quantitative assessment of porous materials, together with analysis methodologies developed by the authors as a solution to emerging challenges. A low-porosity rock sample was analyzed, and thanks to the methodologies developed and implemented in the poROSE software, its main geometrical properties were calculated. The tool was also used in the preprocessing part of the computational analysis to prepare a geometrical representation of the porous material. Basic functions such as the elimination of blind pores in the geometrical model were completed, and the geometrical model was exported to CFD software. As a result, it was possible to calculate the basic properties of the analyzed porous material sample. The developed tool allows quantitative and qualitative analysis to determine the most important properties characterizing porous materials. In the presented tool, the input data can be grey-level images from X-ray computed tomography (CT), scanning electron microscopy (SEM), or focused ion beam with scanning electron microscopy (FIB-SEM). A geometric model developed in the proper format can be used as an input to the modeling of mass, momentum, and heat transfer, as well as in strength or thermo-strength analysis of any porous material. In this example, thermal analysis was carried out on the skeleton of the rock sample, and thermal conductivity was estimated using empirical equations.