The COVID-19 pandemic has had a widespread negative impact globally. It shares symptoms with other respiratory illnesses such as pneumonia and influenza, making rapid and accurate diagnosis essential to treat individuals and halt further transmission. X-ray imaging of the lungs is one of the most reliable diagnostic tools. Utilizing deep learning, we can train models to recognize the signs of infection, thus aiding in the identification of COVID-19 cases. For our project, we developed a deep learning model based on the ResNet50 architecture, pre-trained on the ImageNet and CheXNet datasets. We tackled the class imbalance of the CoronaHack Chest X-Ray dataset provided by Kaggle through both binary and multi-class classification approaches. Additionally, we evaluated the performance impact of using Focal loss versus Cross-entropy loss in our model.
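Where this abstract compares Focal loss with Cross-entropy loss on an imbalanced dataset, a minimal sketch of the standard focal loss may help; the PyTorch formulation, the gamma value, and the two-class toy batch below are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=None):
    """Multi-class focal loss: scale each sample's cross-entropy by (1 - p_t)^gamma
    so that easy, well-classified examples contribute less than hard ones."""
    ce = F.cross_entropy(logits, targets, weight=alpha, reduction="none")
    p_t = torch.exp(-ce)              # model probability of the true class
    return ((1.0 - p_t) ** gamma * ce).mean()

# Toy usage: logits could come from a ResNet50 classification head.
logits = torch.randn(8, 2)            # batch of 8, two classes (e.g. normal vs. COVID-19)
targets = torch.randint(0, 2, (8,))
print(focal_loss(logits, targets), F.cross_entropy(logits, targets))
```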
The aim of this paper is to broaden the application of the Stochastic Configuration Network (SCN) to the semi-supervised domain by utilizing the unlabeled data that is common in daily life. Doing so can enhance the classification accuracy of decentralized SCN algorithms while effectively protecting user privacy. To this end, we propose a decentralized semi-supervised learning algorithm for SCN, called DMT-SCN, which introduces teacher and student models and combines them with the idea of consistency regularization to improve the response speed of model iterations. To reduce the possible negative impact of unlabeled data on the model, we deliberately change the way noise is added to the unlabeled data. Simulation results show that the algorithm can effectively utilize unlabeled data to improve the classification accuracy of SCN training and is robust under different simulation environments.
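Since the abstract does not give the DMT-SCN update at code level, the following is a hedged sketch of the general teacher-student consistency-regularization idea it builds on, written for a generic differentiable model rather than an SCN; the noise level, EMA decay, and toy dimensions are assumptions.

```python
import torch
import torch.nn.functional as F

def consistency_loss(student_logits, teacher_logits):
    """Penalize disagreement between student and teacher predictions on the same unlabeled batch."""
    return F.mse_loss(student_logits.softmax(dim=1), teacher_logits.softmax(dim=1))

@torch.no_grad()
def ema_update(teacher, student, decay=0.99):
    """The teacher's weights track the student through an exponential moving average."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(decay).add_(s, alpha=1.0 - decay)

# Unlabeled batch: the student sees a noisier view of the data than the teacher.
x = torch.randn(16, 10)
student, teacher = torch.nn.Linear(10, 3), torch.nn.Linear(10, 3)
loss = consistency_loss(student(x + 0.1 * torch.randn_like(x)), teacher(x).detach())
loss.backward()                      # gradient step would update the student
ema_update(teacher, student)         # then the teacher is refreshed
```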
In recent years, the place occupied by the various manifestations of cyber-crime in companies has become considerable. Indeed, due to the rapid evolution of telecommunications technologies, companies, regardless of their size or sector of activity, are now the target of advanced persistent threats. The Work 2035 study also revealed that cyber-crimes (such as critical infrastructure hacks) and massive data breaches are major sources of concern. It is therefore important for organizations to guarantee a minimum level of security to avoid potential attacks that can cause paralysis of systems, loss of sensitive data, exposure to blackmail, damage to reputation or even commercial harm. To do this, among other means, hardening is used, the main objective of which is to reduce the attack surface within a company. Hardening configurations are applied and verified on servers and network equipment with the aim of reducing the number of openings present, keeping only those necessary for proper operation. Nowadays, however, in many companies these tasks are done manually. As a result, the execution and verification of hardening configurations are very often subject to errors and consume considerable human and financial resources. Since operators must maintain an optimal level of security while minimizing costs, there is a clear interest in automating the hardening of servers and network equipment and its verification. It is in this logic that, within the framework of this work, we propose to reinforce the security of information systems (IS) by automating hardening mechanisms. In our work, we have, on the one hand, set up a hardening procedure compliant with international security standards for servers, routers and switches and, on the other hand, designed and built a functional application which makes it possible to: 1) apply the hardening configurations; 2) verify them; 3) correct the non-conformities; 4) write and send by email a verification report for the configurations; and 5) update the hardening procedures. The web application thus created performs in less than fifteen (15) minutes actions that previously took at least five (5) hours. This allows supervised network operators to save time and money, and also to improve their security standards in line with international standards.
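As one concrete illustration of automated hardening verification (the paper's own application is not described at code level), the sketch below checks a few hypothetical SSH baseline directives and reports non-conformities; the baseline values and file path are assumptions, not the authors' standard.

```python
# Hypothetical hardening baseline for the SSH daemon; real baselines come from
# standards such as CIS benchmarks and would cover far more directives.
BASELINE = {"PermitRootLogin": "no", "PasswordAuthentication": "no", "X11Forwarding": "no"}

def check_sshd(path="/etc/ssh/sshd_config"):
    """Parse the config file and return directives that are missing or non-conforming."""
    found = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            parts = line.split(None, 1)
            if len(parts) == 2 and parts[0] in BASELINE:
                found[parts[0]] = parts[1]
    return {k: found.get(k, "<absent>") for k, v in BASELINE.items() if found.get(k) != v}

# Example (on a host that has an sshd_config): print(check_sshd())
# An empty dict means the checked directives conform to the baseline.
```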
Significant advancements have been achieved in road surface extraction based on high-resolution remote sensing image processing. Most current methods rely on fully supervised learning, which necessitates enormous human effort to label the images. Within this field, other research efforts employ weakly supervised methods. These approaches aim to reduce annotation costs by leveraging sparsely annotated data, such as scribbles. This paper presents a novel technique called a weakly supervised network using scribble supervision and edge masks (WSSE-net). This network has a three-branch architecture, in which each branch is equipped with a distinct decoder module dedicated to road extraction tasks. One branch is dedicated to generating edge masks using edge detection algorithms and optimizing road edge details. The other two branches supervise the model's training by employing scribble labels and spreading scribble information throughout the image. To address the long-standing flaw of pseudo-labels that are not updated as the network trains, we use mixup to blend prediction results dynamically and continually update new pseudo-labels to steer network training. Our solution operates efficiently by simultaneously considering both edge-mask assistance and dynamic pseudo-label support. The studies are conducted on three separate road datasets, which consist primarily of high-resolution remote-sensing satellite photos and drone images. The experimental findings suggest that our methodology performs better than advanced scribble-supervised approaches and certain traditional fully supervised methods.
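The mixup-based pseudo-label update is not specified at code level in the abstract; the following is a hedged sketch of one plausible reading, blending the previous pseudo-label map with the current prediction using a Beta-distributed coefficient. The alpha value and array shapes are illustrative, not the authors' settings.

```python
import numpy as np

def update_pseudo_labels(old_pseudo, new_pred, alpha=0.2):
    """Blend the previous pseudo-label map with the current prediction using a
    mixup-style coefficient drawn from Beta(alpha, alpha)."""
    lam = np.random.beta(alpha, alpha)
    return lam * old_pseudo + (1.0 - lam) * new_pred

# Toy example: per-pixel road probabilities on a 4x4 patch.
old_pseudo = np.random.rand(4, 4)
new_pred = np.random.rand(4, 4)
print(update_pseudo_labels(old_pseudo, new_pred))
```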
Recently, weak supervision has received growing attention in the field of salient object detection due to the convenience of labelling. However, there is a large performance gap between weakly supervised and fully supervised salient object detectors because scribble annotations can only provide very limited foreground/background information. Therefore, an intuitive idea is to infer annotations that cover more complete object and background regions for training. To this end, a label inference strategy is proposed based on the assumption that pixels with similar colours and close positions should have consistent labels. Specifically, the k-means clustering algorithm is first performed on both the colours and the coordinates of the original annotations, and the same labels are then assigned to pixels whose colours are similar to a colour cluster centre and whose positions are near a coordinate cluster centre. Next, the same annotations are further assigned to pixels with similar colours within each kernel neighbourhood. Extensive experiments on six benchmarks demonstrate that our method can significantly improve performance and achieve state-of-the-art results.
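A simplified sketch of the label-inference idea follows; for brevity it clusters annotated pixels jointly on colour and scaled position instead of running separate colour and coordinate clusterings as described, and the cluster count and position weight are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def infer_labels(colors, coords, labels, unl_colors, unl_coords, k=8, pos_w=0.5):
    """Cluster annotated pixels on (colour, scaled position), give each cluster the
    majority label of its members, then label unlabeled pixels by nearest cluster centre."""
    feats = np.hstack([colors, pos_w * coords])
    km = KMeans(n_clusters=k, n_init=10).fit(feats)
    cluster_label = np.array(
        [np.bincount(labels[km.labels_ == c], minlength=2).argmax() for c in range(k)]
    )
    unl_feats = np.hstack([unl_colors, pos_w * unl_coords])
    return cluster_label[km.predict(unl_feats)]

colors, coords = np.random.rand(200, 3), np.random.rand(200, 2)
labels = np.random.randint(0, 2, 200)          # 0 = background, 1 = salient
print(infer_labels(colors, coords, labels, np.random.rand(50, 3), np.random.rand(50, 2)))
```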
Text classification, by automatically categorizing texts, is one of the foundational elements of natural language processing applications. This study investigates how text classification performance can be improved through the integration of entity-relation information obtained from the Wikidata database (the structured knowledge base of the Wikimedia projects) and BERT-based pre-trained Named Entity Recognition (NER) models. Focusing on a significant challenge in the field of natural language processing (NLP), the research evaluates the potential of using entity and relational information to extract deeper meaning from texts. The adopted methodology encompasses a comprehensive approach that includes text preprocessing, entity detection, and the integration of relational information. Experiments conducted on text datasets in both Turkish and English assess the performance of various classification algorithms, such as Support Vector Machine, Logistic Regression, Deep Neural Network, and Convolutional Neural Network. The results indicate that the integration of entity-relation information can significantly enhance algorithm performance in text classification tasks and offer new perspectives for information extraction and semantic analysis in NLP applications. Contributions of this work include the utilization of distantly supervised entity-relation information in Turkish text classification, the development of a Turkish relational text classification approach, and the creation of a relational database. By demonstrating potential performance improvements through the integration of distantly supervised entity-relation information into Turkish text classification, this research aims to support the effectiveness of text-based artificial intelligence (AI) tools. Additionally, it makes significant contributions to the development of multilingual text classification systems by adding deeper meaning to text content, thereby providing a valuable addition to current NLP studies and setting an important reference point for future research.
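As a toy illustration of the enrichment idea (not the authors' pipeline), the sketch below stands in for the Wikidata lookup and BERT-based NER with a tiny hypothetical entity-relation dictionary, appends relation strings to the raw text, and trains a TF-IDF + SVM classifier.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical entity-relation lookup standing in for Wikidata + a BERT NER model.
ENTITY_RELATIONS = {"ankara": "capital_of Turkey", "python": "instance_of programming_language"}

def enrich(text):
    """Append relation strings for any recognised entity mention to the raw text."""
    extra = [rel for ent, rel in ENTITY_RELATIONS.items() if ent in text.lower()]
    return text + " " + " ".join(extra)

texts = ["Ankara hosted the summit", "Python makes data analysis easy"]
labels = ["politics", "technology"]
clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit([enrich(t) for t in texts], labels)
print(clf.predict([enrich("A new Python release was announced")]))
```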
The dynamic transformation of land use and land cover has emerged as a crucial aspect of the effective management of natural resources and the continual monitoring of environmental shifts. This study focused on the land use and land cover (LULC) changes within the catchment area of the Godavari River, assessing the repercussions of land and water resource exploitation. Utilizing LANDSAT satellite images from 2009, 2014, and 2019, this research employed supervised classification through the Quantum Geographic Information System (QGIS) software's SCP plugin, with the maximum likelihood classification algorithm used for the supervised land use classification. Seven distinct LULC classes—forest, irrigated cropland, agricultural land (fallow), barren land, shrub land, water, and urban land—were delineated for classification purposes. The study revealed substantial changes in the Godavari basin's land use patterns over the ten-year period from 2009 to 2019. Spatial and temporal dynamics of land use/cover changes (2009-2019) were quantified using the three Landsat images, a supervised classification algorithm and the post-classification change detection technique in GIS. The total study area of the Godavari basin in Maharashtra encompasses 5,138,175.48 hectares. Notably, the built-up area increased from 0.14% in 2009 to 1.94% in 2019. The proportion of irrigated cropland, which was 62.32% in 2009, declined to 41.52% in 2019. Shrub land witnessed a noteworthy increase from 0.05% to 2.05% over the last decade. The key findings underscored significant declines in barren land, agricultural land, and irrigated cropland, juxtaposed with an expansion in forest land, shrub land, and urban land. The classification methodology achieved an overall accuracy of 80%, with a Kappa statistic of 71.9% for the satellite images. The overall classification accuracy and the Kappa values for the 2009, 2014 and 2019 supervised land use land cover classifications were good enough to detect the changing scenarios of the Godavari River basin under study. These findings provide valuable insights for discerning land utilization across various categories, facilitating the adoption of appropriate strategies for sustainable land use in the region.
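The reported overall accuracy and Kappa statistic come from a standard confusion-matrix computation; a short sketch of that computation on a toy matrix follows (the matrix values are illustrative, not the study's).

```python
import numpy as np

def overall_accuracy_and_kappa(cm):
    """Compute overall accuracy and Cohen's kappa from a confusion matrix
    (rows = reference classes, columns = classified classes)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                 # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # agreement expected by chance
    return po, (po - pe) / (1.0 - pe)

# Toy 3-class confusion matrix (counts of reference vs. classified pixels).
cm = [[50, 5, 2], [4, 60, 6], [3, 7, 40]]
acc, kappa = overall_accuracy_and_kappa(cm)
print(f"overall accuracy = {acc:.3f}, kappa = {kappa:.3f}")
```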
Hydrological models are developed to simulate river flows over a watershed for many practical applications in the field of water resource management. The present paper compares the performance of two recurrent neural networks for rainfall-runoff modeling in the Zou River basin at the Atchérigbé outlet. To this end, we used daily precipitation data over the period 1988-2010 as input to the models, namely the Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks, to simulate river discharge in the study area. The investigated models give good results in calibration (R2 = 0.888, NSE = 0.886, and RMSE = 0.42 for LSTM; R2 = 0.9, NSE = 0.9 and RMSE = 0.397 for GRU) and in validation (R2 = 0.865, NSE = 0.851, and RMSE = 0.329 for LSTM; R2 = 0.9, NSE = 0.865 and RMSE = 0.301 for GRU). The good performance of the LSTM and GRU models confirms the importance of machine learning models in modeling hydrological phenomena for better decision-making.
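The quoted evaluation metrics can be reproduced from observed and simulated discharge series with the standard formulas; the sketch below shows NSE and RMSE on illustrative values (not the study's data).

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations about their mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(obs, sim):
    """Root mean squared error between observed and simulated series."""
    return float(np.sqrt(np.mean((np.asarray(obs, float) - np.asarray(sim, float)) ** 2)))

obs = [12.0, 15.0, 9.0, 20.0, 18.0]     # observed daily discharge (illustrative values)
sim = [11.5, 14.0, 10.0, 19.0, 17.5]    # simulated discharge from e.g. an LSTM/GRU model
print(nse(obs, sim), rmse(obs, sim))
```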
As educational reforms intensify and societal emphasis shifts towards empowerment, the traditional discourse paradigm of management and control in educational supervision faces growing challenges. This paper explores the transformation of this discourse paradigm through the lens of empowerment, analyzing its distinct characteristics, potential pathways, and effective strategies. The paper begins by reviewing the concept of empowerment and examining the current research landscape surrounding the discourse paradigm in educational supervision. Subsequently, we conduct a comparative analysis of the "control" and "empowerment" paradigms, highlighting their essential differences. This analysis illuminates the key characteristics of an empowerment-oriented approach to educational supervision, particularly its emphasis on dialogue, collaboration, participation, and, crucially, empowerment itself. Ultimately, this research advocates for a shift in educational supervision towards an empowerment-oriented discourse system. This entails a multi-pronged approach: transforming ingrained beliefs, embracing renewed pedagogical concepts, fostering methodological innovation, and optimizing existing mechanisms and strategies within educational supervision. These changes are proposed to facilitate the more effective alignment of educational supervision with the pursuit of high-quality education.
The effective operation of a design assurance system cannot be achieved without the effective performance of the independent supervision function. As one of the core functions of the design assurance system, the purpose of the independent supervision function is to ensure that the system operates within the scope of its procedures and manuals. At present, the independent supervision function is a difficult and confusing issue for various original equipment manufacturers as well as suppliers, and there is an urgent need to put forward relevant requirements and to form relevant methods. With this objective, the basic requirements of the independent supervision function of a design assurance system were studied, the problems and deficiencies in the organization, staffing, and methods of the current independent supervision function were analyzed, and suggestions and measures for improving the performance of the independent supervision function were put forward in terms of organization, staffing, procedures, and suppliers. The present work and conclusions provide guidance and direction for the effective operation of the design assurance system.
N-11-azaartemisinins potentially active against Plasmodium falciparum are designed by combining molecular electrostatic potential (MEP), ligand-receptor interaction, and models built with supervised machine learning methods (PCA, HCA, KNN, SIMCA, and SDA). The optimization of molecular structures was performed using the B3LYP/6-31G* approach. MEP maps and ligand-receptor interactions were used to investigate key structural features required for biological activities and the likely interactions between N-11-azaartemisinins and heme, respectively. The supervised machine learning methods allowed the separation of the investigated compounds into two classes, cha and cla, with the properties ε_LUMO+1 (the energy one level above the lowest unoccupied molecular orbital), d(C6-C5) (the distance between the C6 and C5 atoms in the ligands), and TSA (total surface area) responsible for the classification. The insights extracted from the investigation, together with chemical intuition, enabled the design of sixteen new N-11-azaartemisinins (the prediction set), and the models built with the supervised machine learning methods were applied to this prediction set. The result of this application revealed twelve new promising N-11-azaartemisinins for synthesis and biological evaluation.
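As an illustration of how such supervised classification models can be applied to a prediction set (a generic sketch, not the authors' exact PCA/KNN settings), the code below standardizes toy descriptor values, reduces them with PCA, and classifies new compounds with KNN; all numbers are random placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Rows are compounds, columns are descriptors such as epsilon_LUMO+1, d(C6-C5) and TSA
# (values here are random placeholders, not measured descriptors).
X_train = np.random.rand(20, 3)
y_train = np.random.randint(0, 2, 20)       # 0 = "cla", 1 = "cha"
X_pred = np.random.rand(16, 3)              # the 16 designed compounds (prediction set)

model = make_pipeline(StandardScaler(), PCA(n_components=2), KNeighborsClassifier(n_neighbors=3))
model.fit(X_train, y_train)
print(model.predict(X_pred))                # predicted activity class for each new compound
```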
Nowadays, in data science, supervised learning algorithms are frequently used to perform text classification. However, African textual data, in general, have been studied very little with these methods. This article notes the particularity of such data and measures the precision of the predictions of naive Bayes, decision tree, and SVM (Support Vector Machine) algorithms on a corpus of computing job offers taken from the internet. The study is related to the data imbalance problem in machine learning; however, that problem usually concerns the distribution of the number of documents in each class or subclass, whereas here we delve deeper, down to the word-count distribution within a set of documents. The results are compared with those obtained on a set of French IT offers. It appears that the precision of the classification varies between 88% and 90% for the French offers against at most 67% for the Cameroonian offers. The contribution of this study is twofold. First, it clearly shows that, within a similar job category, job offers on the internet in Cameroon are more unstructured than those available in France, for example. Second, it supports the strong hypothesis that sets of texts with a symmetrical distribution of word counts obtain better results with supervised learning algorithms.
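A minimal sketch of the two measurements discussed, on toy data: the skewness of the per-document word-count distribution (near zero for a roughly symmetric distribution) and a simple TF-IDF + naive Bayes classifier; the example texts and labels are invented for illustration.

```python
import numpy as np
from scipy.stats import skew
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Skewness of the per-document word-count distribution: values near 0 suggest a
# roughly symmetric distribution, which the study links to better classifier precision.
docs = ["java developer needed", "data analyst position open in Douala",
        "recrute un developpeur web", "network administrator wanted"]
print("word-count skewness:", skew([len(d.split()) for d in docs]))

labels = ["dev", "data", "dev", "network"]
nb = make_pipeline(TfidfVectorizer(), MultinomialNB()).fit(docs, labels)
print(nb.predict(["looking for a java web developer"]))
```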
Coronavirus has infected more than 753 million people, with severity varying from one person to another, and more than six million infected people have died worldwide. Computer-aided diagnosis (CAD) with artificial intelligence (AI) has shown outstanding performance in effectively diagnosing this virus in real time. Computed tomography is a complementary diagnostic tool for clarifying the damage of COVID-19 in the lungs even before symptoms appear in patients. This paper conducts a systematic literature review of deep learning methods for segmenting COVID-19 infection in the lungs, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow methodology. This research aims to systematically analyze the supervised deep learning methods, open-resource datasets, data augmentation methods, and loss functions used for segmenting the various shapes of COVID-19 infection from computed tomography (CT) chest images. We selected 56 primary studies relevant to the topic of the paper and compared different aspects of the algorithms used to segment infected areas in the CT images. Deep learning methods for segmenting infected areas still need further development to predict smaller regions of infection at the beginning of their appearance.
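Dice-based metrics and losses are commonly reported in the segmentation literature such reviews survey; as a hedged aside, the sketch below computes the Dice coefficient between a predicted and a reference infection mask on toy arrays (a common choice in the field, not a claim about any specific reviewed paper).

```python
import numpy as np

def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    """Dice overlap between a predicted and a reference infection mask (binary arrays)."""
    pred, true = np.asarray(pred_mask, bool), np.asarray(true_mask, bool)
    inter = np.logical_and(pred, true).sum()
    return (2.0 * inter + eps) / (pred.sum() + true.sum() + eps)

# Toy 4x4 masks: 1 marks pixels segmented as infected lung tissue.
pred = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
true = np.array([[0, 1, 1, 0], [0, 0, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
print(dice_coefficient(pred, true))   # 1.0 would mean perfect overlap
```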
During the reign of Emperor Qianlong of the Qing Dynasty, political rule was relatively stable, the economy continued to develop, and both civil and military affairs flourished. The emperor's love for jade articles made this period a peak in the development of Chinese jade culture. Based on the jade articles of Emperor Qianlong collected in the Palace Museum and on related documents and archives, this paper explores the characteristics of this collection and Emperor Qianlong's collecting methods, and then explains that the collection not only served aesthetic functions but also carried moral, religious and political meaning.
The federated self-supervised framework is a distributed machine learning method that combines federated learning and self-supervised learning, and it can effectively address the difficulty that traditional federated learning has in processing large-scale unlabeled data. Existing federated self-supervised frameworks suffer from low communication efficiency and high communication delay between clients and the central server. Therefore, we added edge servers to the federated self-supervised framework to reduce the pressure on the central server caused by frequent communication between the two ends. A communication compression scheme using gradient quantization and sparsification was proposed to optimize the communication of the entire framework, and the algorithm of the sparse communication compression module was improved. Experiments show that the learning-rate changes of the improved sparse communication compression module are smoother and more stable. Our communication compression scheme effectively reduced the overall communication overhead.
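The abstract does not detail the compression algorithm, so the following is a hedged sketch of generic top-k gradient sparsification followed by 8-bit quantization, illustrating what a client might transmit; the keep ratio and number of levels are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def sparsify_and_quantize(grad, keep_ratio=0.1, levels=256):
    """Keep only the largest-magnitude gradient entries, then quantize them to 8-bit
    levels spanning the retained range; returns what a client would transmit."""
    flat = grad.ravel()
    k = max(1, int(keep_ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]          # indices of the top-k entries
    vals = flat[idx]
    scale = np.abs(vals).max() / (levels // 2 - 1)
    q = np.round(vals / scale).astype(np.int8)            # quantized values
    return idx, q, scale

def reconstruct(shape, idx, q, scale):
    """Server-side reconstruction of the (sparse, dequantized) gradient."""
    out = np.zeros(int(np.prod(shape)), dtype=np.float32)
    out[idx] = q.astype(np.float32) * scale
    return out.reshape(shape)

g = np.random.randn(64, 32).astype(np.float32)
idx, q, scale = sparsify_and_quantize(g)
g_hat = reconstruct(g.shape, idx, q, scale)
```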
This paper compares two different rural management methods, "emperor's power far away from the countryside" and "town in charge of village affairs", showing that an extreme grass-roots management system is not conducive to rural development. The paper also points out that rural development needs to find a road of sustainable development in line with its own characteristics, which is the fundamental path for escaping poverty and achieving lasting prosperity.
The bronze acupuncture model was produced by Emperor Qianlong's order in 1744 A.D. In the 274 years since then, the model has always been well kept and handed down with a full record. It is of great value to traditional medicine and culture and, owing to its rarity and intactness, is regarded both at home and abroad as the treasure of the Shanghai Museum of Traditional Chinese Medicine.
The cultural heritage of the emperor's tomb at the Wulingyuan Mausoleum lies in Xianyang, which is located in the north-central part of the Guanzhong Plain and is the central area of the Guanzhong-Tianshui economic development zone. With its special geographical position and excellent location, it is an important tourism resource and archaeological remain in Shaanxi Province. Drawing on relevant tourism knowledge and considering the development principles, necessity, feasibility, and construction strategy of a top-quality tourism corridor that takes the imperial-tomb cultural heritage of the Wulingyuan Mausoleum as its theme experience, the author systematically explains the elements and value of experiential tourism products and, to a certain extent, publicizes the tourism resources. On this basis, the author strives to provide a reference for the sustainable development of the economy and ecology in this region.