Significant advancements have been achieved in road surface extraction based on high-resolution remote sensing image processing. Most current methods rely on fully supervised learning, which necessitates enormous human effort to label images. Within this field, other research efforts use weakly supervised methods, which aim to reduce annotation costs by leveraging sparsely annotated data such as scribbles. This paper presents a novel technique called a weakly supervised network using scribble supervision and edge masks (WSSE-net). The network has a three-branch architecture in which each branch is equipped with a distinct decoder module dedicated to road extraction. One branch generates edge masks using edge detection algorithms and refines road edge details. The other two branches supervise the model's training by employing scribble labels and spreading scribble information throughout the image. To address the long-standing flaw of pseudo-labels that are never updated during network training, we use mixup to blend prediction results dynamically and continually generate new pseudo-labels to steer training. Our solution operates efficiently by simultaneously exploiting edge-mask assistance and dynamic pseudo-label support. Experiments are conducted on three separate road datasets consisting primarily of high-resolution remote-sensing satellite photos and drone images. The findings suggest that our methodology outperforms advanced scribble-supervised approaches and certain traditional fully supervised methods.
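The dynamic pseudo-label idea above can be sketched as a mixup-style convex blend of the previous pseudo-label map with the network's current prediction. This is a minimal illustration under assumed details: the abstract does not give the exact blending rule, so the Beta-sampled mixing coefficient and the `update_pseudo_labels` helper are hypothetical.

```python
import numpy as np

def update_pseudo_labels(old_pseudo, new_pred, alpha=0.8, seed=None):
    """Mixup-style convex blend of the previous pseudo-label map with the
    network's current prediction, so pseudo-labels keep evolving with
    training instead of staying fixed."""
    rng = np.random.default_rng(seed)
    lam = rng.beta(alpha, alpha)              # mixing coefficient in (0, 1)
    return lam * old_pseudo + (1.0 - lam) * new_pred

rng = np.random.default_rng(0)
old = rng.random((4, 4))                      # previous per-pixel pseudo-labels
new = rng.random((4, 4))                      # current network prediction
mixed = update_pseudo_labels(old, new, seed=1)
```

Because the blend is convex, every updated pseudo-label stays between the old value and the new prediction, which keeps the targets stable while still letting them drift toward the model's improving output.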
Recently, machine learning (ML) has been considered a powerful technological element in many areas of society. To turn the computer into a decision maker, several sophisticated methods and algorithms are constantly being created and analyzed. In geophysics, both supervised and unsupervised ML methods have dramatically contributed to the development of seismic and well-log data interpretation. In well logging, ML algorithms are well suited to lithologic reconstruction problems, since there are no analytical expressions for computing the well-log response produced by a particular rock unit. Supervised ML methods, however, depend strongly on an accurately labeled training data set, which is not simple to obtain because of data absence or corruption. When adequate supervision is provided, the classification outputs tend to be more accurate than those of unsupervised methods. This work presents a supervised version of a Self-Organizing Map, named SSOM, to solve a lithologic reconstruction problem from well-log data. First, we address a more controlled problem and simulate well-log data directly from an interpreted geologic cross-section. We then define two specific training data sets composed of density (RHOB), sonic (DT), spontaneous potential (SP) and gamma-ray (GR) logs, all simulated through a Gaussian distribution function per lithology. Once the training data set is created, we simulate a particular pseudo-well, referred to as the classification well, to define controlled tests. The first test uses a training data set with no labeled log data from the simulated fault zone; in the second, we intentionally enrich the training data set with the fault. To evaluate the results of each test, we analyze confusion matrices, log plots, accuracy and precision. Apart from very thin layer misclassifications, the SSOM provides reasonable lithologic reconstructions, especially when the improved training data set is used for supervision. The set of numerical experiments shows that our SSOM is extremely well suited to supervised lithologic reconstruction, especially for recovering lithotypes that are weakly sampled in the training log data. On the other hand, some misclassifications are also observed when the cortex cannot group slightly different lithologies.
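The per-lithology Gaussian simulation of the RHOB, DT, SP and GR logs can be sketched as below. The means and standard deviations are illustrative placeholders, not values from the study.

```python
import numpy as np

# Hypothetical per-lithology (mean, std) for each log; illustrative only.
LITHO_STATS = {
    "shale":     {"RHOB": (2.55, 0.05), "DT": (90.0, 5.0), "SP": (-20.0, 4.0), "GR": (110.0, 10.0)},
    "sandstone": {"RHOB": (2.35, 0.05), "DT": (70.0, 5.0), "SP": (-60.0, 4.0), "GR": (40.0, 8.0)},
}

def simulate_well(litho_column, seed=42):
    """Draw one Gaussian sample per depth point from the model of the
    lithology interpreted at that depth."""
    rng = np.random.default_rng(seed)
    logs = {name: [] for name in ("RHOB", "DT", "SP", "GR")}
    for litho in litho_column:
        for name in logs:
            mu, sigma = LITHO_STATS[litho][name]
            logs[name].append(rng.normal(mu, sigma))
    return {name: np.array(vals) for name, vals in logs.items()}

# A toy column: 50 depth points of shale over 50 of sandstone.
column = ["shale"] * 50 + ["sandstone"] * 50
well = simulate_well(column)
```

The simulated curves then serve both as training data (with lithology labels) and as the unlabeled "classification well" used for controlled tests.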
Text classification, the automatic categorization of texts, is one of the foundational elements of natural language processing (NLP) applications. This study investigates how text classification performance can be improved through the integration of entity-relation information obtained from the Wikidata database and BERT-based pre-trained Named Entity Recognition (NER) models. Focusing on a significant challenge in NLP, the research evaluates the potential of using entity and relational information to extract deeper meaning from texts. The methodology encompasses text preprocessing, entity detection, and the integration of relational information. Experiments conducted on text datasets in both Turkish and English assess the performance of various classification algorithms, such as Support Vector Machine, Logistic Regression, Deep Neural Network, and Convolutional Neural Network. The results indicate that integrating entity-relation information can significantly enhance algorithm performance in text classification tasks and offers new perspectives for information extraction and semantic analysis in NLP applications. The contributions of this work include the use of distantly supervised entity-relation information in Turkish text classification, the development of a Turkish relational text classification approach, and the creation of a relational database. By demonstrating the performance improvements attainable by integrating distantly supervised entity-relation information into Turkish text classification, this research aims to support the effectiveness of text-based artificial intelligence (AI) tools. It also contributes to the development of multilingual text classification systems by adding deeper meaning to text content, providing a valuable addition to current NLP studies and an important reference point for future research.
Recently, weak supervision has received growing attention in salient object detection due to the convenience of labelling. However, there is a large performance gap between weakly supervised and fully supervised salient object detectors, because scribble annotations provide only very limited foreground/background information. An intuitive remedy is therefore to infer annotations that cover more complete object and background regions for training. To this end, a label inference strategy is proposed based on the assumption that pixels with similar colours and close positions should have consistent labels. Specifically, the k-means clustering algorithm is first run on both the colours and the coordinates of the original annotations; the same labels are then assigned to points whose colours are similar to a colour cluster centre and which lie near a coordinate cluster centre. Finally, identical annotations are set for pixels with similar colours within each kernel neighbourhood. Extensive experiments on six benchmarks demonstrate that the method significantly improves performance and achieves state-of-the-art results.
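A simplified sketch of the label-inference idea follows. It clusters a single joint colour-position feature rather than running the two separate clusterings the abstract describes, and the `max_dist` threshold and `infer_labels` helper are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def infer_labels(feat_labeled, labels, feat_unlabeled, n_clusters=2, max_dist=0.3):
    """Cluster annotated pixels in a joint (colour, position) feature space,
    give each cluster the majority label of its members, then propagate that
    label to unlabeled pixels lying close to a cluster centre (-1 = left
    unlabeled)."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(feat_labeled)
    majority = np.array([np.bincount(labels[km.labels_ == c]).argmax()
                         for c in range(n_clusters)])
    d = np.linalg.norm(feat_unlabeled[:, None, :] - km.cluster_centers_[None], axis=2)
    nearest = d.argmin(axis=1)
    return np.where(d.min(axis=1) <= max_dist, majority[nearest], -1)

rng = np.random.default_rng(1)
fg = 0.9 + 0.01 * rng.standard_normal((20, 5))   # (r, g, b, x, y) scaled to [0, 1]
bg = 0.1 + 0.01 * rng.standard_normal((20, 5))
X, y = np.vstack([fg, bg]), np.array([1] * 20 + [0] * 20)
query = np.array([[0.9] * 5, [0.1] * 5, [0.5] * 5])
out = infer_labels(X, y, query)   # propagates 1 and 0; far point stays -1
```

Pixels far from every cluster centre are deliberately left unlabeled, so the inferred annotations only grow where the colour-position assumption is trustworthy.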
A supervised genetic algorithm (SGA) is proposed to solve quality-of-service (QoS) routing problems in computer networks. Supervised rules drawn from the problem domain are introduced into genetic algorithms (GAs) to solve the constrained optimization problem. One of the main characteristics of the SGA is that its search space can be confined to feasible regions rather than wandering into infeasible ones. The SGA's advantage over other GAs is that it incorporates supervised search rules whose information comes from the problem itself. Simulation results show that the SGA improves the ability to find an optimal solution and accelerates convergence by up to a factor of 20.
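The feasible-region restriction can be illustrated on a toy constrained problem rather than QoS routing: offspring are resampled until they satisfy the constraint, so the population never leaves the feasible set. The GA below is a deliberately minimal sketch; the operators and parameters are assumptions, not the paper's algorithm.

```python
import random

def supervised_ga(fitness, feasible, sample, mutate,
                  pop_size=30, generations=50, seed=0):
    """Tiny elitist GA whose search stays inside the feasible region:
    infeasible mutations are rejected (the 'supervised rule')."""
    rng = random.Random(seed)

    def feasible_variant(x):
        for _ in range(100):          # reject infeasible moves
            y = mutate(x, rng)
            if feasible(y):
                return y
        return x                      # fall back to the parent

    pop = [sample(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]
        pop = elite + [feasible_variant(rng.choice(elite)) for _ in elite]
    return max(pop, key=fitness)

# Toy problem: maximize x + y subject to x^2 + y^2 <= 1
# (optimum is (sqrt(2)/2, sqrt(2)/2), value about 1.414).
best = supervised_ga(
    fitness=lambda p: p[0] + p[1],
    feasible=lambda p: p[0] ** 2 + p[1] ** 2 <= 1.0,
    sample=lambda rng: (rng.uniform(-0.7, 0.7), rng.uniform(-0.7, 0.7)),
    mutate=lambda p, rng: (p[0] + rng.gauss(0, 0.1), p[1] + rng.gauss(0, 0.1)),
)
```

Keeping every candidate feasible avoids wasting fitness evaluations on solutions that would be discarded anyway, which is the intuition behind the reported speed-up.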
Feature selection (FS) is the process of selecting the most informative features and is one of the important steps in knowledge discovery. The problem is that not all features are important: some may be redundant, and others may be irrelevant and noisy. Conventional supervised FS methods evaluate various feature subsets using an evaluation function or metric to select only those features related to the decision classes of the data under consideration. For many data mining applications, however, decision class labels are often unknown or incomplete, which underlines the significance of unsupervised feature selection, where no decision class labels are provided. In this paper, we propose a new unsupervised quick reduct (QR) algorithm based on rough set theory. The quality of the reduced data is measured by classification performance, evaluated with the WEKA classifier tool. The method is compared with existing supervised methods, and the results demonstrate the efficiency of the proposed algorithm.
The coronavirus disease 2019 (COVID-19) has severely disrupted both human life and the health care system. Timely diagnosis and treatment have become increasingly important; however, the distribution and size of lesions vary widely among individuals, making it challenging to diagnose the disease accurately. This study proposes a deep-learning diagnosis model based on weakly supervised learning and clustering visualization (W_CVNet) that fuses classification with segmentation. First, the data are preprocessed: an optimizable weakly supervised segmentation preprocessing method (O-WSSPM) removes redundant data and addresses the class imbalance problem. Second, a deep-learning fusion method is used for feature extraction and classification: a dual asymmetric complementary bilinear feature extraction method (D-CBM) fully extracts complementary features, overcoming the insufficient feature extraction of a single deep network. Third, an unsupervised method based on Fuzzy C-Means (FCM) clustering segments and visualizes COVID-19 lesions, enabling physicians to assess lesion distribution and disease severity accurately. Using 5-fold cross-validation, the network achieved an average classification accuracy of 85.8%, outperforming six recent advanced classification models. W_CVNet can effectively assist physicians with automated diagnosis, determining whether disease is present and, for COVID-19 patients, further predicting the lesion area.
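The unsupervised stage relies on Fuzzy C-Means. A minimal NumPy sketch of the standard FCM update (not the paper's specific segmentation pipeline) alternates between recomputing cluster centres and the fuzzy membership matrix:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Standard FCM: alternate between updating cluster centres and the
    fuzzy membership matrix U (each row of U sums to 1)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Toy pixel features: two well-separated intensity clusters.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.2, (50, 2)),
               rng.normal(3.0, 0.2, (50, 2))])
centers, U = fuzzy_c_means(X)
```

The soft memberships in U are what make FCM attractive for visualization: a pixel's degree of belonging to the "lesion" cluster can be rendered directly as a heat map.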
To enable automatic defect detection and process control in welding and wire-arc additive manufacturing, this paper monitors current, voltage, audio, and other data during the welding process. It extracts the minimum, standard deviation, and deviation from the voltage and current data, and spectral features such as root mean square, spectral centroid, and zero-crossing rate from the audio data. The features extracted from the multiple sensor signals are fused, and several supervised and unsupervised machine learning models are built to detect anomalies in the welding process. Experimental results show that the models achieve high accuracy: among the supervised models, AdaBoost reaches a balanced accuracy of 0.957, while the unsupervised Isolation Forest model reaches a balanced accuracy of 0.909.
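The three audio features named above have standard definitions, sketched here on a synthetic tone standing in for a welding-audio frame (the real pipeline would window the recorded signal first):

```python
import numpy as np

def audio_features(x, fs):
    """RMS, spectral centroid (Hz) and zero-crossing rate of a 1-D signal."""
    rms = np.sqrt(np.mean(x ** 2))
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    centroid = np.sum(freqs * spectrum) / np.sum(spectrum)   # magnitude-weighted mean frequency
    zcr = np.mean(np.abs(np.diff(np.signbit(x).astype(int)))) # fraction of sign changes
    return rms, centroid, zcr

fs = 8000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)        # stand-in for one audio frame
rms, centroid, zcr = audio_features(tone, fs)
# For a pure 440 Hz tone: rms ≈ 0.707, centroid ≈ 440 Hz, zcr ≈ 880/8000
```

Feature vectors like `(rms, centroid, zcr, ...)` from each sensor are then concatenated and fed to the supervised (AdaBoost) or unsupervised (Isolation Forest) detectors.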
We present an integrated approach based on supervised and unsupervised learning techniques to improve the accuracy of six predictive models developed to predict the outcome of a tuberculosis treatment course; their accuracy needs improvement because they are not as precise as necessary. The integrated supervised and unsupervised learning method (ISULM) is proposed as a new way to improve model accuracy. A dataset of 6450 Iranian TB patients under DOTS therapy was used first to select the significant predictors and then to develop six predictive models using decision tree, Bayesian network, logistic regression, multilayer perceptron, radial basis function, and support vector machine algorithms. The models were integrated with k-means clustering analysis to produce more accurate predictions of the treatment outcome. The results were then evaluated to compare prediction accuracy before and after applying ISULM. Recall, precision, F-measure, and ROC area were used to assess model validity, along with the percentage change between models before and after ISULM. ISULM improved prediction accuracy for all classifiers, with gains ranging between 4% and 10%; the largest and smallest improvements were shown by logistic regression and support vector machine, respectively. Pre-learning with k-means clustering, which relocates objects and groups similar cases together, can thus improve classification accuracy when integrating supervised and unsupervised learning.
N-11-azaartemisinins potentially active against Plasmodium falciparum are designed by combining molecular electrostatic potential (MEP), ligand-receptor interaction, and models built with supervised machine learning methods (PCA, HCA, KNN, SIMCA, and SDA). Molecular structures were optimized using the B3LYP/6-31G* approach. MEP maps and ligand-receptor interactions were used to investigate key structural features required for biological activity and likely interactions between N-11-azaartemisinins and heme, respectively. The supervised machine learning methods separated the investigated compounds into two classes, cha and cla, with the properties ε_LUMO+1 (the energy one level above the lowest unoccupied molecular orbital), d(C6-C5) (the distance between the C6 and C5 atoms in the ligands), and TSA (total surface area) responsible for the classification. The insights extracted from this investigation, together with chemical intuition, enabled the design of sixteen new N-11-azaartemisinins (the prediction set), to which the supervised machine learning models were then applied. The application identified twelve promising new N-11-azaartemisinins for synthesis and biological evaluation.
Detecting naturally arising structure in data is central to knowledge extraction. In most applications, the main challenge lies in choosing an appropriate model for exploring the data features; the choice is generally poorly understood, and any tentative choice may be too restrictive. Growing data volumes, disparate data sources and diverse modelling techniques call for model optimization via adaptability rather than comparability. We propose a novel two-stage algorithm for modelling continuous data, consisting of an unsupervised stage in which the algorithm searches the data for optimal parameter values, and a supervised stage that adapts those parameters for predictive modelling. The method is implemented on the sunspots data, which has inherently Gaussian distributional properties and assumed bi-modality. Optimal values separating high from low cycles are obtained via multiple simulations. Early patterns for each recorded cycle reveal that the first three years provide a sufficient basis for predicting the peak. Multiple Support Vector Machine runs using repeatedly improved data parameters show that the approach yields greater accuracy and reliability than conventional approaches and provides a good basis for model selection. Model reliability is established via multiple simulations of this type.
The purpose of this study was to explore the effects of supervised movie appreciation on improving the sense of life meaning among college students. An intervention combining "video first, counseling afterwards" was conducted with the experimental group, while the control group received no intervention. Results show that scores on the subscales of will to meaning, life purpose, life control, and suffering acceptance, as well as on the total scale, improved significantly. No gender difference was found in the intervention effect, and participants who received the intervention maintained higher levels on the related subscales a week later, indicating that supervised movie appreciation is an effective way to improve the sense of life meaning among college students.
To obtain a high-quality weld in the laser welding process, extracting the characteristic parameters of the weld pool is an important issue for automated welding. In this paper, type 304 austenitic stainless steel is welded with a 5 kW high-power fiber laser, and a high-speed camera captures topside images of the weld pools. We then propose a robust visual-detection approach for the molten pool based on the supervised descent method. It provides an elegant framework for representing the outline of a weld pool and is especially efficient for weld pool detection in the presence of strong uncertainties and disturbances. Welding experiments verified that the proposed approach can extract the weld pool boundary accurately, laying a solid foundation for controlling weld quality in fiber laser welding.
Human action recognition in complex environments is challenging. Recently, sparse representation has achieved excellent results on human action recognition under varied conditions. The main idea of sparse representation classification is to build a general classification scheme in which the training samples of each class serve as a dictionary for expressing a query, and the minimal reconstruction error indicates the corresponding class. However, learning a discriminative dictionary remains difficult. This work makes two contributions. First, we build a new and robust human action recognition framework by combining a modified sparse classification model with deep convolutional neural network (CNN) features. Second, we construct a novel classification model consisting of a representation-constrained term and a coefficient-incoherence term. Experimental results on benchmark datasets show that the modified model achieves competitive results compared with other state-of-the-art models.
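The minimal-reconstruction-error idea can be sketched in a few lines. For simplicity this uses plain least-squares coefficients per class dictionary instead of a true sparse solver, so it is a collaborative-representation-style stand-in, not the paper's constrained model.

```python
import numpy as np

def src_classify(query, dictionaries):
    """Assign the query to the class whose training dictionary
    reconstructs it with the smallest residual."""
    residuals = {}
    for label, D in dictionaries.items():        # D: (n_features, n_atoms)
        coef, *_ = np.linalg.lstsq(D, query, rcond=None)
        residuals[label] = np.linalg.norm(query - D @ coef)
    return min(residuals, key=residuals.get)

rng = np.random.default_rng(0)
basis_a = rng.standard_normal((20, 3))           # class "a" training samples
basis_b = rng.standard_normal((20, 3))           # class "b" training samples
dicts = {"a": basis_a, "b": basis_b}
query = basis_a @ np.array([1.0, -2.0, 0.5])     # lies in class "a" subspace
pred = src_classify(query, dicts)
```

Since the query is built from class "a" atoms, its class-"a" residual is essentially zero while the class-"b" residual is large, so the classifier returns "a".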
Recently, video object segmentation has received great attention in the computer vision community. Most existing methods rely heavily on pixel-wise human annotations, which are expensive and time-consuming to obtain. To tackle this problem, we make an early attempt at video object segmentation with scribble-level supervision, which can spare large amounts of human labor for collecting manual annotations. However, conventional network architectures and learning objectives do not work well in this scenario because the supervision is highly sparse and incomplete. To address this issue, this paper introduces two novel elements for learning the video object segmentation model. The first is a scribble attention module, which captures more accurate context information and learns an effective attention map to enhance the contrast between foreground and background. The second is a scribble-supervised loss, which can optimize the unlabeled pixels and dynamically correct inaccurately segmented areas during training. To evaluate the proposed method, we run experiments on two video object segmentation benchmarks, YouTube-VOS (video object segmentation) and densely annotated video segmentation (DAVIS) 2017. We first generate scribble annotations from the original per-pixel annotations, then train our model and compare its test performance with baseline models and other existing works. Extensive experiments demonstrate that the proposed method works effectively and approaches methods that require dense per-pixel annotations.
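One plausible building block of a scribble-supervised loss is a partial cross-entropy computed only over the scribble-annotated pixels, shown below in NumPy. This is a common baseline form, not necessarily the paper's full loss, which additionally handles unlabeled pixels.

```python
import numpy as np

def partial_cross_entropy(probs, scribble):
    """Binary cross-entropy over scribble-annotated pixels only.

    probs:    (H, W) predicted foreground probabilities.
    scribble: (H, W) with 1 = foreground scribble, 0 = background
              scribble, -1 = unlabeled (ignored by the loss).
    """
    mask = scribble >= 0
    p = np.clip(probs[mask], 1e-7, 1 - 1e-7)   # numerical safety
    y = scribble[mask]
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

probs = np.array([[0.99, 0.01],
                  [0.50, 0.50]])
scribble = np.array([[1, 0],
                     [-1, -1]])                # only the top row is annotated
loss = partial_cross_entropy(probs, scribble)
```

Because the bottom row is marked -1, the uncertain 0.5 predictions there contribute nothing: the loss is driven entirely by the annotated pixels, which is exactly why extra mechanisms (attention, dynamic correction) are needed for the unlabeled regions.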
AIM: To evaluate the effect of 12 months of supervised aerobic and resistance training on renal function and exercise capacity compared with usual care recommendations. METHODS: Ninety-nine kidney transplant recipients (KTRs) were assigned to interventional exercise (Group A; n = 52) or a usual care cohort (Group B; n = 47). Blood and urine chemistry, exercise capacity, muscular strength, anthropometric measures and health-related quality of life (HRQoL) were assessed at baseline and after 6 and 12 months. Group A underwent supervised training three times per week for 12 months; Group B received only general recommendations about home-based physical activity. RESULTS: Eighty-five KTRs completed the study (Group A, n = 44; Group B, n = 41). After 12 months, renal function remained stable in both groups. Group A significantly increased maximum workload (+13 W, P = 0.0003), V'O2 peak (+3.1 mL/kg per minute, P = 0.0099), plantar flexor strength (+12 kg, P = 0.0368) and countermovement jump height (+1.9 cm, P = 0.0293), and decreased Body Mass Index (-0.5 kg/m^2, P = 0.0013). HRQoL improved significantly on the physical function (P = 0.0019), physical-role limitation (P = 0.0321) and social functioning scales (P = 0.0346). No improvements were found in Group B. CONCLUSION: Twelve months of supervised aerobic and resistance training improves physiological variables related to physical fitness and cardiovascular risk without consequences for renal function. Recommendations alone are not sufficient to induce changes in the exercise capacity of KTRs. Our study is an example of collaborative working between transplant centres, sports medicine and exercise facilities.
Log-linear models and, more recently, neural network models used for supervised relation extraction require substantial amounts of training data and time, limiting their portability to new relations and domains. To this end, we propose a training representation based on the dependency paths between entities in a dependency tree, which we call lexicalized dependency paths (LDPs). We show that this representation is fast, efficient and transparent. We further propose representations utilizing entity types and subtypes to refine the model and alleviate the data sparsity problem. We apply lexicalized dependency paths to supervised learning on the ACE corpus and show that they can achieve performance similar to other state-of-the-art methods, even surpassing them in several categories.
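The core of a dependency-path representation is the shortest path between the two entity tokens in the parse tree. A minimal sketch, using a hand-written toy parse (the edge list and relation names are illustrative, not from the ACE corpus):

```python
from collections import deque

def dependency_path(edges, src, dst):
    """Shortest path between two tokens in an (undirected view of a)
    dependency tree; the relation labels along the way become the
    features of the lexicalized dependency path."""
    graph = {}
    for head, dep, rel in edges:
        graph.setdefault(head, []).append((dep, rel))
        graph.setdefault(dep, []).append((head, rel))
    queue, seen = deque([(src, [src])]), {src}
    while queue:
        node, path = queue.popleft()
        if node == dst:
            return path
        for nxt, rel in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [f"-{rel}-", nxt]))
    return None

# Toy parse of "Smith, president of Acme, resigned":
edges = [("resigned", "Smith", "nsubj"),
         ("Smith", "president", "appos"),
         ("president", "Acme", "nmod")]
path = dependency_path(edges, "Smith", "Acme")
# → ['Smith', '-appos-', 'president', '-nmod-', 'Acme']
```

Paths like this one are short and interpretable, which is what makes the LDP representation fast and transparent compared with training a full neural extractor.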
This study proposes a supervised learning method that does not rely on direct labels. We use variables associated with the label as indirect labels and construct an indirect physics-constrained loss based on the physical mechanism to train the model. During training, the model prediction is mapped through a projection matrix into the space of values that conform to the physical mechanism, and the model is then trained against the indirect labels. The final prediction conforms to the physical mechanism linking indirect label and label, and also meets the constraints of the indirect label. The study also develops projection matrix normalization and prediction covariance analysis to ensure that the model can be fully trained. Finally, the effectiveness of physics-constrained indirect supervised learning is verified on a well-log generation problem.
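The training scheme can be illustrated with a linear toy model: the hidden target y is never observed, only an indirect label z = P @ y for a known physical mapping P, and the loss compares the projected prediction with z. The matrix P and the gradient-descent setup below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])        # physics: indirect label = P @ y
W_true = rng.standard_normal((3, 4))   # hidden ground-truth mapping
X = rng.standard_normal((200, 4))      # inputs
Z = X @ W_true.T @ P.T                 # only indirect labels are observed

W = np.zeros((3, 4))                   # linear model parameters
lr = 0.05
for _ in range(500):
    Y_hat = X @ W.T                    # direct prediction (never supervised)
    residual = Y_hat @ P.T - Z         # indirect, physics-constrained error
    grad = P.T @ residual.T @ X / len(X)
    W -= lr * grad
loss = np.mean((X @ W.T @ P.T - Z) ** 2)
```

Note that only the component of the prediction visible through P is constrained; the null-space component of W is left free, which is why the paper needs the extra normalization and covariance analysis to make training well-posed.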
In the supervised classification of remotely sensed imagery, the quantity of samples is one of the important factors affecting classification accuracy, as well as a key element in evaluating the classification. In general, samples are acquired on the basis of prior knowledge, experience and higher-resolution images. With the same sample size and the same sampling model, several training sample sets can be obtained; which of them best reflects the spectral characteristics and ensures classification accuracy can be known only after the classification accuracy has been assessed. Measuring and assessing sample quality before classification, to guide and optimize the subsequent classification process, is therefore a meaningful line of research. Based on rough set theory, a new index for measuring sample quality is proposed. The experimental data are the Landsat TM imagery of the Chinese Yellow River Delta acquired on August 8th, 1999. The experiment compares the Bhattacharyya distance matrices and the rough-set-based purity indices zl and △x of five sample sets, and analyzes their effect on sample quality.
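The Bhattacharyya distance used for comparison has a closed form when each class is modelled as a multivariate Gaussian estimated from its training samples; a sketch (the toy means and covariances are illustrative):

```python
import numpy as np

def bhattacharyya(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two Gaussian class models,
    a standard measure of spectral separability between sample sets."""
    cov = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

# Two toy spectral classes with equal covariance, means 3 units apart.
mu1, mu2 = np.array([0.0, 0.0]), np.array([3.0, 0.0])
cov = np.eye(2)
bd = bhattacharyya(mu1, cov, mu2, cov)   # 0.125 * 9 = 1.125
```

Larger values indicate more separable classes, so a sample set whose pairwise Bhattacharyya matrix has large off-diagonal entries is, by this criterion, a higher-quality training set.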
In the soft sensor field, just-in-time learning (JITL) is an effective approach to modelling nonlinear and time-varying processes. However, most similarity criteria in JITL are computed in the input space only, ignoring important output information, which may lead to an inaccurately constructed relevant-sample set. To solve this problem, we propose a novel supervised feature extraction method suitable for regression, called supervised local and non-local structure preserving projections (SLNSPP), in which both input and output information can be easily and effectively incorporated through a newly defined similarity index. SLNSPP not only retains the virtue of locality preserving projections but also prevents faraway points from drawing near after projection, endowing it with powerful discriminating ability. These two properties are desirable for JITL, as they are expected to enhance the accuracy of similar-sample selection. Accordingly, we present an SLNSPP-JITL framework for developing adaptive soft sensors, including a sparse learning strategy to limit the scale and update frequency of the database. Finally, two case studies on benchmark datasets evaluate the performance of the proposed schemes, and the results demonstrate the effectiveness of LNSPP and SLNSPP.
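The basic JITL step, simplified to omit the SLNSPP projection, selects the k most similar historical samples and fits a local model on them. The input-output similarity mix below (with a rough global output estimate standing in for the unknown query output) is an illustrative assumption:

```python
import numpy as np

def jitl_predict(query, X_hist, y_hist, y_rough, k=20, beta=0.5):
    """Just-in-time learning: pick the k historical samples most similar
    to the query and fit a local linear model on them. Similarity mixes
    input distance with output distance, where the query's unknown output
    is approximated by a rough global estimate (y_rough)."""
    d_in = np.linalg.norm(X_hist - query, axis=1)
    d_out = np.abs(y_hist - y_rough)
    score = beta * d_in / (d_in.max() + 1e-12) \
        + (1 - beta) * d_out / (d_out.max() + 1e-12)
    idx = np.argsort(score)[:k]
    A = np.column_stack([X_hist[idx], np.ones(k)])   # local linear model
    coef, *_ = np.linalg.lstsq(A, y_hist[idx], rcond=None)
    return np.append(query, 1.0) @ coef

# Toy noise-free process: y = 2*x0 - x1.
rng = np.random.default_rng(0)
X_hist = rng.standard_normal((200, 2))
y_hist = 2 * X_hist[:, 0] - X_hist[:, 1]
query = np.array([0.3, -0.2])
pred = jitl_predict(query, X_hist, y_hist, y_rough=0.8)   # true value: 0.8
```

In the full SLNSPP-JITL framework the distances would instead be computed in the learned projection space, where input and output structure are already fused.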
Funding: the National Natural Science Foundation of China (42001408, 61806097).
文摘Significant advancements have been achieved in road surface extraction based on high-resolution remote sensingimage processing. Most current methods rely on fully supervised learning, which necessitates enormous humaneffort to label the image. Within this field, other research endeavors utilize weakly supervised methods. Theseapproaches aim to reduce the expenses associated with annotation by leveraging sparsely annotated data, such asscribbles. This paper presents a novel technique called a weakly supervised network using scribble-supervised andedge-mask (WSSE-net). This network is a three-branch network architecture, whereby each branch is equippedwith a distinct decoder module dedicated to road extraction tasks. One of the branches is dedicated to generatingedge masks using edge detection algorithms and optimizing road edge details. The other two branches supervise themodel’s training by employing scribble labels and spreading scribble information throughout the image. To addressthe historical flaw that created pseudo-labels that are not updated with network training, we use mixup to blendprediction results dynamically and continually update new pseudo-labels to steer network training. Our solutiondemonstrates efficient operation by simultaneously considering both edge-mask aid and dynamic pseudo-labelsupport. The studies are conducted on three separate road datasets, which consist primarily of high-resolutionremote-sensing satellite photos and drone images. The experimental findings suggest that our methodologyperforms better than advanced scribble-supervised approaches and specific traditional fully supervised methods.
Abstract: Recently, machine learning (ML) has been considered a powerful technological element in many areas of society. To turn the computer into a decision maker, sophisticated methods and algorithms are constantly created and analyzed. In geophysics, both supervised and unsupervised ML methods have contributed dramatically to the development of seismic and well-log data interpretation. In well logging, ML algorithms are well suited to lithologic reconstruction problems, since there is no analytical expression for computing the well-log response of a particular rock unit. Additionally, supervised ML methods depend strongly on an accurately labeled training dataset, which is not simple to obtain owing to missing or corrupted data. Once adequate supervision is performed, the classification outputs tend to be more accurate than those of unsupervised methods. This work presents a supervised version of a Self-Organizing Map, named SSOM, to solve a lithologic reconstruction problem from well-log data. First, we choose a more controlled problem and simulate well-log data directly from an interpreted geologic cross-section. We then define two specific training datasets composed of density (RHOB), sonic (DT), spontaneous potential (SP), and gamma-ray (GR) logs, all simulated through a Gaussian distribution function per lithology. Once the training dataset is created, we simulate a particular pseudo-well, referred to as the classification well, for controlled tests. The first test uses a training dataset with no labeled log data for the simulated fault zone; in the second, we intentionally augment the training dataset with the fault. To assess the results of each test, we analyze confusion matrices, log plots, accuracy, and precision. Apart from misclassifications in very thin layers, the SSOM provides reasonable lithologic reconstructions, especially when the augmented training dataset is used for supervision. The set of numerical experiments shows that our SSOM is extremely well suited to supervised lithologic reconstruction, especially for recovering lithotypes that are weakly sampled in the training log data. On the other hand, some misclassifications are observed when the cortex cannot separate slightly different lithologies.
Abstract: Text classification, the automatic categorization of texts, is one of the foundational elements of natural language processing (NLP) applications. This study investigates how text classification performance can be improved by integrating entity-relation information obtained from the Wikidata (Wikipedia database) database with BERT-based pre-trained Named Entity Recognition (NER) models. Focusing on a significant challenge in NLP, the research evaluates the potential of using entity and relational information to extract deeper meaning from texts. The methodology comprises text preprocessing, entity detection, and the integration of relational information. Experiments conducted on Turkish and English text datasets assess the performance of classification algorithms such as Support Vector Machine, Logistic Regression, Deep Neural Network, and Convolutional Neural Network. The results indicate that integrating entity-relation information can significantly enhance algorithm performance in text classification tasks and offer new perspectives for information extraction and semantic analysis in NLP applications. Contributions of this work include the use of distantly supervised entity-relation information in Turkish text classification, the development of a Turkish relational text classification approach, and the creation of a relational database. By demonstrating the performance improvements attainable by integrating distantly supervised entity-relation information into Turkish text classification, this research aims to support the effectiveness of text-based artificial intelligence (AI) tools. It also contributes to the development of multilingual text classification systems by adding deeper meaning to text content, providing a valuable addition to current NLP studies and an important reference point for future research.
Abstract: Recently, weak supervision has received growing attention in salient object detection because of the convenience of labelling. However, there is a large performance gap between weakly supervised and fully supervised salient object detectors, because scribble annotation provides only very limited foreground/background information. An intuitive idea is therefore to infer annotations that cover more complete object and background regions for training. To this end, a label inference strategy is proposed based on the assumption that pixels with similar colours and close positions should have consistent labels. Specifically, k-means clustering is first performed on both the colours and the coordinates of the original annotations; the same labels are then assigned to points that have colours similar to a colour cluster centre and lie near a coordinate cluster centre. Finally, identical annotations are assigned to pixels with similar colours within each kernel neighbourhood. Extensive experiments on six benchmarks demonstrate that our method significantly improves performance and achieves state-of-the-art results.
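The colour-and-coordinate label inference can be illustrated with a toy nearest-class-centre propagation. This simplification (one centre per scribble class instead of k-means with several centres per class) and all names are assumptions for illustration only:

```python
import numpy as np

def propagate_scribbles(features, labels):
    """Nearest-class-centre label propagation: a simplified stand-in for
    clustering pixel colours and coordinates together.
    features: (N, 5) rows of [r, g, b, x, y];
    labels: (N,) with 1 = foreground scribble, 0 = background, -1 = unlabelled."""
    out = labels.copy()
    # one centre per scribble class, computed in the joint colour/position space
    centres = np.stack([features[labels == c].mean(axis=0) for c in (0, 1)])
    unlab = np.flatnonzero(labels == -1)
    # distance of every unlabelled pixel to each class centre
    d = np.linalg.norm(features[unlab, None, :] - centres[None, :, :], axis=2)
    out[unlab] = d.argmin(axis=1)   # index 0 -> background, 1 -> foreground
    return out
```

Pixels close to a foreground centre in both colour and position inherit the foreground label, which is the intuition behind the paper's inference strategy.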
Funding: China Postdoctoral Foundation (No. 2005037529); Doctoral Foundation of the Education Ministry of China (No. 2003005607); Tianjin High Education Science Development Foundation (No. 20041325).
Abstract: A supervised genetic algorithm (SGA) is proposed to solve quality-of-service (QoS) routing problems in computer networks. Supervised rules based on intelligent concepts are introduced into genetic algorithms (GAs) to solve the constrained optimization problem. One of the main characteristics of the SGA is that its search space can be limited to feasible regions rather than infeasible ones. The superiority of the SGA over other GAs lies in its incorporation of supervised search rules whose information comes from the problem itself. Simulation results show that the SGA improves the ability to find an optimal solution and accelerates convergence by up to 20 times.
Funding: supported by the UGC, SERO, Hyderabad under FDP during the XI plan period, and by the UGC, New Delhi under major research project Grant No. F-34-105/2008.
Abstract: Feature selection (FS) is the process of selecting the most informative features and is one of the important steps in knowledge discovery. The problem is that not all features are important: some may be redundant, and others may be irrelevant and noisy. Conventional supervised FS methods evaluate various feature subsets using an evaluation function or metric to select only those features related to the decision classes of the data under consideration. However, in many data mining applications, decision class labels are often unknown or incomplete, which highlights the significance of unsupervised feature selection, where no decision class labels are provided. In this paper, we propose a new unsupervised quick reduct (QR) algorithm based on rough set theory. The quality of the reduced data is measured by classification performance, evaluated with the WEKA classifier tool. The method is compared with existing supervised methods, and the results demonstrate the efficiency of the proposed algorithm.
Funding: funded by the Open Foundation of the Anhui Engineering Research Center of Intelligent Perception and Elderly Care, Chuzhou University (No. 2022OPA03); the Higher Education Natural Science Foundation of Anhui Province (No. KJ2021B01); and the Innovation Team Projects of Universities in Guangdong (No. 2022KCXTD057).
Abstract: The coronavirus disease 2019 (COVID-19) has severely disrupted both human life and the health care system. Timely diagnosis and treatment have become increasingly important; however, the distribution and size of lesions vary widely among individuals, making accurate diagnosis challenging. This study proposed a deep-learning disease diagnosis model based on weakly supervised learning and clustering visualization (W_CVNet) that fuses classification with segmentation. First, the data were preprocessed: an optimizable weakly supervised segmentation preprocessing method (O-WSSPM) was used to remove redundant data and address the category imbalance problem. Second, a deep-learning fusion method was used for feature extraction and classification: a dual asymmetric complementary bilinear feature extraction method (D-CBM) fully extracts complementary features, solving the problem of insufficient feature extraction by a single deep learning network. Third, an unsupervised method based on Fuzzy C-Means (FCM) clustering was used to segment and visualize COVID-19 lesions, enabling physicians to assess lesion distribution and disease severity accurately. Using 5-fold cross-validation, the network achieved an average classification accuracy of 85.8%, outperforming six recent advanced classification models. W_CVNet can effectively provide physicians with automated diagnostic aid to determine whether the disease is present and, for COVID-19 patients, to further predict the lesion area.
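The FCM step can be sketched with a minimal textbook Fuzzy C-Means in NumPy; this is a generic implementation under stated defaults, not the paper's code (which applies clustering to deep features for lesion visualization):

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, n_iter=50, seed=0):
    """Minimal Fuzzy C-Means: returns (centres, membership matrix U).
    X: (N, d) feature vectors; m > 1 is the fuzzifier; illustrative only."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per point
    for _ in range(n_iter):
        W = U ** m
        centres = (W.T @ X) / W.sum(axis=0)[:, None]   # weighted cluster means
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))       # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return centres, U
```

Taking the argmax of each row of U yields a hard segmentation, while the soft memberships themselves support the kind of lesion-severity visualization the abstract describes.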
Abstract: To solve the problem of automatic defect detection and process control in the welding and arc additive process, this paper monitors the current, voltage, audio, and other data during welding. It extracts features such as the minimum value and standard deviation from the voltage and current data, and spectral features such as the root mean square, spectral centroid, and zero-crossing rate from the audio data. The features extracted from the multiple sensor signals are fused, and several supervised and unsupervised machine learning models are established to detect abnormalities in the welding process. The experimental results show that the established models achieve high accuracy: among the supervised models, AdaBoost reaches a balanced accuracy of 0.957, while the unsupervised Isolation Forest reaches a balanced accuracy of 0.909.
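Three of the named audio features can be computed on a 1-D frame as below; the function name and defaults are illustrative, not the paper's pipeline:

```python
import numpy as np

def audio_features(x, sr=44100):
    """Root mean square, zero-crossing rate, and spectral centroid of a
    1-D audio frame x sampled at sr Hz."""
    rms = float(np.sqrt(np.mean(x ** 2)))
    # fraction of adjacent sample pairs where the signal changes sign
    zcr = float(np.mean(np.abs(np.diff(np.sign(x))) > 0))
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, d=1.0 / sr)
    # magnitude-weighted mean frequency
    centroid = float((freqs * mag).sum() / (mag.sum() + 1e-12))
    return rms, zcr, centroid
```

Feature vectors like these, concatenated with statistics from the current and voltage channels, are the kind of fused input the abstract's classifiers would consume.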
Abstract: We present an integrated approach based on supervised and unsupervised learning techniques to improve the accuracy of six predictive models developed to predict the outcome of a tuberculosis treatment course, whose accuracy needs improvement because they are not as precise as necessary. The integrated supervised and unsupervised learning method (ISULM) is proposed as a new way to improve model accuracy. A dataset of 6450 Iranian TB patients under DOTS therapy was used first to select the significant predictors and then to develop six predictive models using decision tree, Bayesian network, logistic regression, multilayer perceptron, radial basis function, and support vector machine algorithms. The developed models were then integrated with k-means clustering analysis to produce more accurate predictions of the treatment outcome, and the results were evaluated to compare prediction accuracy before and after applying ISULM. Recall, precision, F-measure, and ROC area were used to assess model validity, along with the percentage change between models before and after ISULM. ISULM improved prediction accuracy for all applied classifiers, by between 4% and 10%; the largest and smallest improvements were shown by logistic regression and support vector machine, respectively. Pre-learning by k-means clustering, which relocates objects and places similar cases in the same group, can improve classification accuracy when integrating supervised and unsupervised learning.
Abstract: N-11-azaartemisinins potentially active against Plasmodium falciparum are designed by combining molecular electrostatic potential (MEP), ligand-receptor interaction, and models built with supervised machine learning methods (PCA, HCA, KNN, SIMCA, and SDA). Molecular structures were optimized using the B3LYP/6-31G* approach. MEP maps and ligand-receptor interactions were used to investigate key structural features required for biological activity and likely interactions between N-11-azaartemisinins and heme, respectively. The supervised machine learning methods separated the investigated compounds into two classes, cha and cla, with the properties ε<sub>LUMO+1</sub> (energy one level above the lowest unoccupied molecular orbital), d(C<sub>6</sub>-C<sub>5</sub>) (distance between the C<sub>6</sub> and C<sub>5</sub> atoms in the ligands), and TSA (total surface area) responsible for the classification. The insights extracted from this investigation, together with chemical intuition, enabled the design of sixteen new N-11-azaartemisinins (the prediction set), to which the models built with the supervised machine learning methods were then applied. This application identified twelve new promising N-11-azaartemisinins for synthesis and biological evaluation.
Abstract: Detecting naturally arising structures in data is central to knowledge extraction. In most applications, the main challenge is the choice of an appropriate model for exploring the data features; the choice is generally poorly understood, and any tentative choice may be too restrictive. Growing data volumes, disparate data sources, and diverse modelling techniques entail the need for model optimization via adaptability rather than comparability. We propose a novel two-stage algorithm for modelling continuous data, consisting of an unsupervised stage in which the algorithm searches the data for optimal parameter values, and a supervised stage that adapts the parameters for predictive modelling. The method is implemented on the sunspots data, which have inherently Gaussian distributional properties and assumed bi-modality. Optimal values separating high from low cycles are obtained via multiple simulations. Early patterns for each recorded cycle reveal that the first three years provide a sufficient basis for predicting the peak. Multiple Support Vector Machine runs using repeatedly improved data parameters show that the approach yields greater accuracy and reliability than conventional approaches and provides a good basis for model selection. Model reliability is established via multiple simulations of this type.
Abstract: The purpose of this study was to explore the effects of supervised movie appreciation on improving the sense of life meaning among college students. An intervention combining "pre-video, post counseling" was conducted on the experimental group, while the control group received no intervention. The results show that scores on the subscales of will to meaning, life purpose, life control, and suffering acceptance, and on the total scale, improved significantly. No gender difference was found in the intervention effect, and participants receiving the intervention maintained a higher level on the related subscales a week later, indicating that supervised movie appreciation is an effective way to improve the sense of life meaning among college students.
Funding: Project supported by the National Key R&D Program of China (Grant No. 2017YFB1104404).
Abstract: In order to obtain a high-quality weld during the laser welding process, extracting the characteristic parameters of the weld pool is an important issue for automated welding. In this paper, type 304 austenitic stainless steel is welded by a 5 kW high-power fiber laser, and a high-speed camera is employed to capture topside images of the weld pools. We then propose a robust visual-detection approach for the molten pool based on the supervised descent method. It provides an elegant framework for representing the outline of a weld pool and is especially efficient for weld pool detection in the presence of strong uncertainties and disturbances. Welding experiments verify that the proposed approach can extract the weld pool boundary accurately, which lays a solid foundation for controlling weld quality in the fiber laser welding process.
Funding: This research was funded by the National Natural Science Foundation of China (21878124, 31771680 and 61773182).
Abstract: Human action recognition in complex environments is a challenging task. Recently, sparse representation has achieved excellent results on human action recognition under varied conditions. The main idea of sparse representation classification is to construct a general classification scheme in which the training samples of each class are treated as a dictionary to express the query sample, and the class yielding the minimal reconstruction error indicates the predicted label. However, learning a discriminative dictionary remains difficult. This work makes two contributions. First, we build a new, robust human action recognition framework by combining a modified sparse classification model with deep convolutional neural network (CNN) features. Second, we construct a novel classification model consisting of a representation-constrained term and a coefficient incoherence term. Experimental results on benchmark datasets show that our modified model obtains competitive results in comparison with other state-of-the-art models.
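The minimal-reconstruction-error rule can be sketched as follows, with plain least-squares standing in for true sparse coding and without the paper's representation-constrained and incoherence terms; all names are illustrative:

```python
import numpy as np

def src_classify(query, class_dicts):
    """Classify by minimal reconstruction error: fit the query with each
    class's dictionary (least-squares here, in place of sparse coding)
    and pick the class whose atoms reconstruct it best.
    query: (d,) feature vector; class_dicts: list of (d, n_atoms) arrays."""
    errors = []
    for D in class_dicts:
        coef, *_ = np.linalg.lstsq(D, query, rcond=None)  # fit coefficients
        errors.append(np.linalg.norm(query - D @ coef))   # residual error
    return int(np.argmin(errors))
```

In the paper's setting the columns of each dictionary would be CNN features of that class's training clips, and an l1 penalty on the coefficients would enforce sparsity.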
Funding: supported in part by the National Key R&D Program of China (2017YFB0502904) and the National Science Foundation of China (61876140).
Abstract: Recently, video object segmentation has received great attention in the computer vision community. Most existing methods rely heavily on pixel-wise human annotations, which are expensive and time-consuming to obtain. To tackle this problem, we make an early attempt at video object segmentation with scribble-level supervision, which can spare large amounts of human labor for collecting annotations. However, conventional network architectures and learning objectives do not work well in this scenario because the supervision is highly sparse and incomplete. To address this issue, this paper introduces two novel elements for learning the video object segmentation model. The first is a scribble attention module, which captures more accurate context information and learns an effective attention map to enhance the contrast between foreground and background. The second is a scribble-supervised loss, which optimizes the unlabeled pixels and dynamically corrects inaccurately segmented areas during training. To evaluate the proposed method, we conduct experiments on two video object segmentation benchmarks, YouTube-VOS (video object segmentation) and DAVIS-2017 (densely annotated video segmentation). We first generate scribble annotations from the original per-pixel annotations, then train our model and compare its test performance with baseline models and other existing works. Extensive experiments demonstrate that the proposed method works effectively and approaches the performance of methods requiring dense per-pixel annotations.
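A scribble-supervised loss in its simplest form is a cross-entropy restricted to annotated pixels. The sketch below is that generic partial cross-entropy (the 255 ignore index is an assumption), not the paper's full loss with dynamic correction:

```python
import numpy as np

def partial_cross_entropy(probs, scribbles, ignore=255):
    """Binary cross-entropy averaged over scribble-annotated pixels only;
    pixels marked `ignore` contribute nothing, so sparse scribbles can
    still supervise training. probs: (H, W) foreground probabilities;
    scribbles: (H, W) with values in {0, 1, ignore}."""
    mask = scribbles != ignore
    y = scribbles[mask].astype(float)
    p = np.clip(probs[mask], 1e-7, 1 - 1e-7)   # numerical safety
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))
```

Because unlabeled pixels are masked out entirely, they receive no gradient here; the paper's contribution is precisely to go further and optimize those pixels as well.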
Abstract: AIM: To evaluate the effect of 12 months of supervised aerobic and resistance training on renal function and exercise capacity compared with usual care recommendations. METHODS: Ninety-nine kidney transplant recipients (KTRs) were assigned to an interventional exercise cohort (Group A; n = 52) or a usual care cohort (Group B; n = 47). Blood and urine chemistry, exercise capacity, muscular strength, anthropometric measures, and health-related quality of life (HRQoL) were assessed at baseline and after 6 and 12 months. Group A underwent supervised training three times per week for 12 months; Group B received only general recommendations about home-based physical activities. RESULTS: Eighty-five KTRs completed the study (Group A, n = 44; Group B, n = 41). After 12 months, renal function remained stable in both groups. Group A significantly increased maximum workload (+13 W, P = 0.0003), VO2 peak (+3.1 mL/kg per minute, P = 0.0099), plantar flexor strength (+12 kg, P = 0.0368), and countermovement jump height (+1.9 cm, P = 0.0293), and decreased Body Mass Index (-0.5 kg/m^2, P = 0.0013). HRQoL improved significantly on the physical function (P = 0.0019), physical-role limitation (P = 0.0321), and social functioning scales (P = 0.0346). No improvements were found in Group B. CONCLUSION: Twelve months of supervised aerobic and resistance training improves the physiological variables related to physical fitness and cardiovascular risk without consequences for renal function. Recommendations alone are not sufficient to induce changes in the exercise capacity of KTRs. Our study is an example of collaborative working between transplant centres, sports medicine, and exercise facilities.
Abstract: Log-linear models and, more recently, neural network models used for supervised relation extraction require substantial amounts of training data and time, limiting their portability to new relations and domains. To this end, we propose a training representation based on the dependency paths between entities in a dependency tree, which we call lexicalized dependency paths (LDPs). We show that this representation is fast, efficient, and transparent. We further propose representations utilizing entity types and their subtypes to refine our model and alleviate the data sparsity problem. We apply lexicalized dependency paths to supervised learning using the ACE corpus and show that they can achieve a performance level similar to other state-of-the-art methods and even surpass them in several categories.
Funding: partially funded by the National Natural Science Foundation of China (Grants 51520105005 and U1663208).
Abstract: This study proposes a supervised learning method that does not rely on labels. We use variables associated with the label as indirect labels and construct an indirect physics-constrained loss based on the physical mechanism to train the model. During training, the model prediction is mapped through a projection matrix into the space of values that conform to the physical mechanism, and the model is then trained against the indirect labels. The final prediction conforms to the physical relation between the indirect label and the label, and also satisfies the constraints of the indirect label. The study also develops projection matrix normalization and prediction covariance analysis to ensure that the model can be fully trained. Finally, the effectiveness of physics-constrained indirect supervised learning is verified on a well-log generation problem.
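The core idea can be sketched as a loss on the projected prediction; the shapes, names, and the simple mean-squared form are assumptions for illustration (the paper additionally develops projection matrix normalization and covariance analysis):

```python
import numpy as np

def indirect_loss(prediction, indirect_label, A):
    """Indirect supervised loss: the model prediction is mapped by a
    projection matrix A (encoding the physical relation between the
    unobserved label and the measured variable) and compared with the
    observed indirect label, so no direct labels are needed."""
    return float(np.mean((A @ prediction - indirect_label) ** 2))
```

If the physics says the measured quantity is a known linear combination of the unlabeled targets, driving this loss to zero trains the model without ever observing the targets themselves.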
Funding: Supported in part by the National Natural Science Foundation of China (No. 40671136), the Open Research Fund of the State Key Laboratory of Remote Sensing Science (No. LRSS0610), and the National 863 Program of China (No. 2006AA12Z215).
Abstract: In the supervised classification of remotely sensed imagery, the quantity of samples is one of the important factors affecting classification accuracy, as well as one of the keys to evaluating the classification. In general, samples are acquired on the basis of prior knowledge, experience, and higher-resolution images. With the same sample size and the same sampling model, several sets of training sample data can be obtained; which of these sets best reflects the spectral characteristics and ensures classification accuracy can be known only after the accuracy of the classification has been assessed. Measuring and assessing sample quality before classification, to guide and optimize the subsequent classification process, is therefore meaningful research. Based on rough set theory, a new measuring index for sample quality is proposed. The experimental data are Landsat TM imagery of the Chinese Yellow River Delta acquired on August 8th, 1999. The experiment compares the Bhattacharyya distance matrices and the rough-set-based purity indices zl and △x of five sample datasets and analyzes their effect on sample quality.
Funding: Supported by the National Natural Science Foundation of China (61273160) and the Fundamental Research Funds for the Central Universities (14CX06067A, 13CX05021A).
Abstract: In the soft sensor field, just-in-time learning (JITL) is an effective approach to modeling nonlinear and time-varying processes. However, most similarity criteria in JITL are computed in the input space only, ignoring important output information, which may lead to inaccurate construction of the relevant sample set. To solve this problem, we propose a novel supervised feature extraction method for regression problems, called supervised local and non-local structure preserving projections (SLNSPP), in which both input and output information can be easily and effectively incorporated through a newly defined similarity index. SLNSPP not only retains the virtue of locality preserving projections but also prevents faraway points from drawing near after projection, endowing SLNSPP with powerful discriminating ability. These two properties are desirable for JITL, as they are expected to enhance the accuracy of similar-sample selection. We accordingly present an SLNSPP-JITL framework for developing adaptive soft sensors, including a sparse learning strategy to limit the scale and update frequency of the database. Finally, two case studies are conducted on benchmark datasets to evaluate the performance of the proposed schemes. The results demonstrate the effectiveness of LNSPP and SLNSPP.