Abstract: This comprehensive study investigates the multifaceted impact of AI-powered personalization on strategic communications, examining its opportunities, challenges, and future directions. Employing a mixed-methods approach, we analyze the effects of AI-driven personalization on audience engagement, brand perception, and conversion rates across various industries and communication channels. Our findings reveal that while AI-powered personalization significantly enhances communication effectiveness and offers unprecedented opportunities for audience connection, it also raises critical ethical considerations and implementation challenges. The study contributes to the growing body of literature on AI in communications, offering both theoretical insights and practical guidelines for professionals navigating this rapidly evolving landscape. Furthermore, we propose a framework for ethical AI implementation in strategic communications and outline an agenda for future research in this dynamic field.
Funding: Supported by a Korea Agency for Infrastructure Technology Advancement (KAIA) grant funded by the Ministry of Land, Infrastructure, and Transport (Grant 22CTAP-C163951-02).
Abstract: Recently, convolutional neural network (CNN)-based visual inspection has been developed to detect defects on building surfaces automatically. The CNN model demonstrates remarkable accuracy in image data analysis; however, the predicted results carry uncertainty in providing accurate information to users because of the "black box" problem in deep learning models. Therefore, this study proposes a visual explanation method to overcome the uncertainty limitation of CNN-based defect identification. The gradient-weighted class activation mapping (Grad-CAM) method is adopted to provide visually explainable information. A visualizing evaluation index is proposed to quantitatively analyze visual representations; this index reflects a rough estimate of the concordance rate between the visualized heat map and the intended defects. In addition, an ablation study, adopting three-branch combinations with VGG16, is implemented to identify performance variations by visualizing predicted results. Experiments reveal that the proposed model, combined with hybrid pooling, batch normalization, and multi-attention modules, achieves the best performance with an accuracy of 97.77%, corresponding to an improvement of 2.49% compared with the baseline model. Consequently, this study demonstrates that reliable results from an automatic defect classification model can be provided to an inspector through the visual representation of the predicted results using CNN models.
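Where the abstract relies on Grad-CAM heat maps over a VGG16 backbone, a minimal sketch of that computation may help. The plain torchvision VGG16, the chosen target layer, and the random input below are illustrative assumptions, not the authors' model or data.

```python
# Minimal Grad-CAM sketch (not the authors' exact pipeline).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.vgg16(weights=None)  # stand-in backbone (torchvision >= 0.13 API); the paper's model adds extra modules
model.eval()

feats, grads = {}, {}
layer = model.features[28]  # last convolutional layer of VGG16

def fwd_hook(module, inp, out):
    feats["a"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    grads["a"] = grad_out[0].detach()

layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)        # placeholder for a preprocessed defect image
score = model(x)[0].max()              # score of the predicted class
model.zero_grad()
score.backward()

weights = grads["a"].mean(dim=(2, 3), keepdim=True)        # global-average-pooled gradients
cam = F.relu((weights * feats["a"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # heat map normalized to [0, 1]
```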
Funding: This work was partially supported by the Beijing Natural Science Foundation (No. 4222038), the Open Research Project of the State Key Laboratory of Media Convergence and Communication (Communication University of China), the National Key R&D Program of China (No. 2021YFF0307600), and the Fundamental Research Funds for the Central Universities.
Abstract: Existing explanation methods for Convolutional Neural Networks (CNNs) lack pixel-level visualization explanations that generate reliable fine-grained decision features. Since there are inconsistencies between the explanation and the actual behavior of the model to be interpreted, we propose a Fine-Grained Visual Explanation for CNNs, namely F-GVE, which produces a fine-grained explanation with higher consistency to the decision of the original model. The exact backward class-specific gradients with respect to the input image are obtained to highlight the object-related pixels the model used to make its prediction. In addition, for better visualization and less noise, F-GVE selects an appropriate threshold to filter the gradient during the calculation, and the explanation map is obtained by element-wise multiplying the gradient and the input image to show fine-grained classification decision features. Experimental results demonstrate that F-GVE has good visual performance and highlights the importance of fine-grained decision features. Moreover, the faithfulness of the explanation is high, and the method is effective and practical for troubleshooting and debugging detection.
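A minimal sketch of a gradient-times-input explanation with threshold filtering, in the spirit of F-GVE as summarized above; the backbone, the 90th-percentile threshold rule, and the random input are assumptions for illustration, not the paper's exact settings.

```python
# Gradient-times-input saliency with threshold filtering (illustrative sketch).
import torch
from torchvision import models

model = models.resnet18(weights=None).eval()          # any differentiable classifier works here
x = torch.randn(1, 3, 224, 224, requires_grad=True)   # placeholder input image

logits = model(x)
cls = logits.argmax(dim=1).item()
logits[0, cls].backward()                             # exact class-specific gradient w.r.t. the input

grad = x.grad.detach()
thresh = grad.abs().quantile(0.9)                     # assumed rule: keep only the strongest 10% of gradients
grad = torch.where(grad.abs() >= thresh, grad, torch.zeros_like(grad))

explanation = (grad * x.detach()).sum(dim=1)          # element-wise product, collapsed over channels
explanation = explanation.clamp(min=0)                # highlight positively contributing pixels
```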
Abstract: The flow regimes of a GLCC (gas-liquid cylindrical cyclone) with a horizontal inlet and a vertical pipe are investigated in experiments, and velocity and pressure-drop data labeled with the corresponding flow regimes are collected. Combined with flow regime data for other GLCC configurations from the existing literature, the gas and liquid superficial velocities and the pressure drops are used, respectively, as inputs to machine learning algorithms applied to identify the flow regimes. The choice of input data types takes into account the availability of data in practical industrial settings, and twelve machine learning algorithms are chosen from the classical and popular classification algorithms, including typical ensemble models, SVM, KNN, a Bayesian model, and MLP. The identification results show that gas and liquid superficial velocities are the ideal type of input data for flow regime identification by machine learning. Most of the ensemble models can identify the GLCC flow regimes from gas and liquid velocities with an accuracy of 0.99 or higher. Pressure drops are not as suitable an input as gas and liquid velocities; only XGBoost and Bagging Tree can identify the GLCC flow regimes accurately from them. The successes and confusions of each algorithm are analyzed and explained based on the experimental flow regime evolution processes, the flow regime map, and the principles of the algorithms. The applicability and feasibility of each algorithm for GLCC flow regime identification with different types of data are discussed.
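As a rough illustration of the comparison described above, the sketch below trains a few standard classifiers on two-column (gas, liquid) superficial-velocity inputs; the synthetic data, toy labels, and shortened model list are assumptions, not the experimental data or the full twelve-algorithm suite.

```python
# Compare several standard classifiers on (gas, liquid) superficial velocities.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, BaggingClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.uniform(0.01, 10.0, size=(500, 2))   # columns: gas and liquid superficial velocity (m/s)
y = (X[:, 0] > X[:, 1]).astype(int)          # toy regime label standing in for experimental labels

models = {
    "RandomForest": RandomForestClassifier(n_estimators=200, random_state=0),
    "GradientBoosting": GradientBoostingClassifier(random_state=0),
    "BaggingTree": BaggingClassifier(random_state=0),
    "SVM": SVC(),
    "KNN": KNeighborsClassifier(),
    "MLP": MLPClassifier(max_iter=2000, random_state=0),
}
for name, clf in models.items():
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```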
Abstract: Digital technologies that run on user-generated content provide a platform for users to air their opinions on various aspects of a particular subject or product. Recommendation agents play a crucial role in personalizing to the needs of individual users, so it is essential to improve the user experience. A recommender system focuses on recommending a set of items to a user to support decision-making and is prevalent across e-commerce and media websites. In Context-Aware Recommender Systems (CARS), several influential contextual variables are identified to provide effective recommendations. A substantial trade-off is applied in context to achieve the accuracy and coverage required for collaborative recommendation. CARS generate more relevant recommendations by adapting them to the user's specific contextual situation. However, the key issue is how contextual information is used to create good and intelligent recommender systems. This paper proposes an Artificial Neural Network (ANN) to achieve contextual recommendations based on user-generated reviews. The ability of ANNs to learn events and make decisions based on similar events makes them effective for personalized recommendations in CARS. Thus, the most appropriate contexts in which a user should choose an item or service are identified. This work converts every label set into a Multi-Label Classification (MLC) problem to enhance recommendations. Experimental results show that the proposed ANN performs better than the Binary Relevance (BR) Instance-Based Classifier, the BR Decision Tree, and the Multi-label SVM on the Trip Advisor and LDOS-CoMoDa datasets. Furthermore, the proposed ANN improves accuracy by 1.1% to 6.1% compared with other existing methods.
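A hedged sketch of the multi-label formulation described above, with a small scikit-learn MLP standing in for the paper's ANN; the synthetic review-derived features and the three context labels are illustrative assumptions.

```python
# Multi-label context prediction with an MLP (illustrative stand-in for the paper's ANN).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import hamming_loss

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))                                          # e.g. review-derived user/item features
Y = (X[:, :3] + rng.normal(scale=0.5, size=(1000, 3)) > 0).astype(int)   # 3 binary context labels

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)
ann = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
ann.fit(X_tr, Y_tr)                      # a 2-D binary Y is treated as a multi-label problem
print("Hamming loss:", hamming_loss(Y_te, ann.predict(X_te)))
```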
Funding: Supported by the National High Technology Plan of China (2013AA102502), the National Natural Science Foundation of China (313300453), and the National Key Basic Research Program of China (2015CB943101).
Abstract: With the rapid development of molecular biology and related disciplines, animal breeding has moved from conventional breeding to molecular breeding. Marker-assisted selection and genomic selection have become mainstream practices in the molecular breeding of livestock. However, these techniques use only information from genomic variation, not multi-omics information, and thus do not fully explain the molecular basis of phenotypic variation in complex traits. In addition, the accuracy of breeding value estimation based on these techniques is occasionally controversial across different populations or varieties. Given the rapid development of high-throughput sequencing techniques and functional genomics, and dramatic reductions in the overall cost of sequencing, it is now possible to clarify the interactions between genes and the formation of phenotypes using massive omic-level data sets from studies of the transcriptome, proteome, epigenome, and metabolome. During livestock breeding, multi-omics information on breeding populations and individuals should be taken into account. The interactive regulatory networks governing gene regulation and phenotype formation in diverse livestock populations, varieties, and species should be analyzed, and a multi-omics regulatory breeding model should be constructed. Precision, population-personalized breeding is expected to become a crucial practice in future livestock breeding. Precision breeding of individuals can be achieved by combining population genomic information at multi-omics levels with genomic selection and genome editing techniques.
Funding: Support provided by The Science and Technology Development Fund, Macao SAR, China (File Nos. 0057/2020/AGJ and SKL-IOTSC-2021-2023) and the Science and Technology Program of Guangdong Province, China (Grant No. 2021A0505080009).
Abstract: Accurate prediction of shield tunneling-induced settlement is a complex problem that requires consideration of many influential parameters. Recent studies reveal that machine learning (ML) algorithms can predict the settlement caused by tunneling. However, well-performing ML models are usually less interpretable, and irrelevant input features decrease both the performance and the interpretability of an ML model. Nonetheless, feature selection, a critical step in the ML pipeline, is usually ignored in studies focused on predicting tunneling-induced settlement. This study applies four techniques, i.e. the Pearson correlation method, sequential forward selection (SFS), sequential backward selection (SBS), and the Boruta algorithm, to investigate the effect of feature selection on model performance when predicting the tunneling-induced maximum surface settlement (S_max). The data set used in this study was compiled from two metro tunnel projects excavated in Hangzhou, China using earth pressure balance (EPB) shields and consists of 14 input features and a single output (S_max). The ML model trained on the features selected by the Boruta algorithm demonstrates the best performance in both the training and testing phases. The relevant features chosen by the Boruta algorithm further indicate that tunneling-induced settlement is affected by parameters related to tunnel geometry, geological conditions, and shield operation. The recently proposed Shapley additive explanations (SHAP) method explores how the input features contribute to the output of a complex ML model. It is observed that larger settlements are induced during shield tunneling in silty clay. Moreover, the SHAP analysis reveals that low magnitudes of face pressure at the top of the shield increase the model's output.
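A hedged sketch of the feature-selection-plus-explanation workflow summarized above, assuming the third-party boruta and shap packages are available; the random-forest surrogate, synthetic features, and stand-in S_max target are illustrative, not the study's data or tuned models.

```python
# Boruta feature selection followed by SHAP attributions (illustrative sketch).
import numpy as np
import shap
from boruta import BorutaPy
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 14))                  # 14 candidate features (geometry, geology, operation)
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.3, size=300)  # stand-in for S_max

rf = RandomForestRegressor(n_estimators=200, random_state=0)
selector = BorutaPy(rf, n_estimators="auto", random_state=0)
selector.fit(X, y)                              # Boruta keeps features that beat their shadow copies
X_sel = X[:, selector.support_]

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_sel, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_sel)      # per-sample feature contributions to the predicted S_max
print("selected feature indices:", np.where(selector.support_)[0])
```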
Funding: Supported in part by the National Science Foundation of China under Grants No. 61303105 and 61402304, the Humanity & Social Science general project of the Ministry of Education under Grant No. 14YJAZH046, the Beijing Natural Science Foundation under Grant No. 4154065, the Beijing Educational Committee Science and Technology Development Plan under Grant No. KM201410028017, and Academic Degree Graduate Courses group projects.
Abstract: Collaborative Filtering (CF) is a leading approach to building recommender systems and has gained considerable development and popularity. A predominant approach to CF is the rating prediction algorithm, which aims to predict a user's rating for items the user has not yet rated. However, with the increasing number of items and users, the data become sparse, and it is difficult to detect latent close relations among items or users for predicting user behavior. In this paper, we enhance the rating prediction approach, leading to a substantial improvement in prediction accuracy, by categorizing movies according to their genres. The probabilities that a user is interested in each genre are then computed and used to integrate the predictions of the individual genre clusters. A novel probabilistic approach based on sentiment analysis of user reviews is also proposed to give intuitive explanations of why an item is recommended. To test the new recommendation approach, a new corpus of user reviews on movies obtained from the Internet Movie Database (IMDb) has been generated. Experimental results show that the proposed framework is effective and achieves better prediction performance.
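A tiny worked sketch of the genre-weighted integration step described above: per-genre predictions for one (user, movie) pair are blended by the user's genre-interest probabilities. All numbers and names are hypothetical.

```python
# Blend per-genre rating predictions by the user's genre-interest probabilities.

# hypothetical per-genre cluster predictions for one (user, movie) pair
genre_predictions = {"drama": 4.2, "thriller": 3.6, "romance": 3.9}

# hypothetical interest probabilities estimated from the user's rating history
user_genre_prob = {"drama": 0.5, "thriller": 0.3, "romance": 0.2}

def blend_rating(preds: dict, probs: dict) -> float:
    """Integrate genre-cluster predictions, weighted by genre interest."""
    total = sum(probs[g] for g in preds)
    return sum(preds[g] * probs[g] for g in preds) / total

print(round(blend_rating(genre_predictions, user_genre_prob), 2))  # -> 3.96
```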
Abstract: [Objective] The research aimed to analyze the effect of interpreting European numerical prediction products for temperature forecasting. [Method] Based on the CMSVM regression method, using 850 hPa grid point data from the European numerical prediction for 2003-2009 and observed maximum and minimum temperatures at 8 automatic stations in Qingyang City, a temperature prediction model was established, and its operational performance from 2008 to 2010 was tested and evaluated. [Result] The method provided very good guidance for real-time operational temperature prediction. The evaluation found that as forecast lead time lengthened, the prediction accuracies of the maximum and minimum temperatures declined. When the temperature anomaly was higher (actual temperature above the historical mean), prediction accuracy increased, and the influence of the European numerical prediction was larger. [Conclusion] Compared with other methods, the prediction method was convenient to operate, modeling was automatic, running time was short, the system was stable, and prediction accuracy was high. It is suitable for implementing the interpretation of numerical prediction products at meteorological stations.
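A hedged sketch of this kind of regression-based interpretation model, with scikit-learn's SVR standing in for CMSVM; the synthetic 850 hPa-style predictors and station temperatures are placeholders, not the Qingyang data.

```python
# SVR regression from grid-point predictors to station temperature (illustrative sketch).
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(800, 9))          # e.g. 850 hPa grid-point values surrounding the station
y = 10 + 3 * X[:, 0] - 2 * X[:, 4] + rng.normal(scale=1.0, size=800)  # stand-in station max temperature

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(X_tr, y_tr)
print("MAE (deg C):", round(mean_absolute_error(y_te, model.predict(X_te)), 2))
```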
Funding: Supported by the National Basic Research Program of China under Grant No. 2007AA01Z333 and the National Special Program of China under Grant No. 2009ZX02204-008.
Abstract: The Kirk test has good precision for measuring stray light in optical lithography and is the usual method of measuring stray light. However, Kirk did not provide a theoretical explanation for his simulation model. We attempt to give Kirk's model a theoretical explanation and a slight improvement based on a point spread function model of scattering and the theory of statistical optics. Simulation indicates that the improved model fits Kirk's measurement data better.
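For orientation, a schematic form of the kind of scattering-PSF model referred to here (a common way to write flare, not Kirk's or the authors' exact formulation):

```latex
% Generic stray-light model: the measured image is the ideal aerial image plus
% a scattered component given by convolution with a scattering PSF.
\[
  I(x, y) \;=\; (1 - s)\, I_0(x, y) \;+\; s \,\bigl(I_0 * \mathrm{PSF}_{\mathrm{scatter}}\bigr)(x, y),
\]
% where I_0 is the ideal (flare-free) aerial image, s is the total stray-light
% fraction, and PSF_scatter is normalized to unit area.
```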
Abstract: There is a puzzling astrophysical result concerning the latest observation of the absorption profile of the redshifted 21 cm radio line from the early Universe (as described in Bowman et al.). The amplitude of the profile was more than a factor of two greater than the largest predictions. This could mean that the primordial hydrogen gas was much cooler than expected. Some explanations in the literature suggested a possible cooling of baryons either by unspecified dark matter particles or by exotic dark matter particles with a charge a million times smaller than the electron charge. Other explanations required an additional radio background. In the present paper, we entertain a different possible explanation for this puzzling observational result, based on the alternative kind of hydrogen atoms (AKHA), whose existence was previously demonstrated theoretically as well as by the analysis of atomic experiments. Namely, the AKHA are expected to decouple from the cosmic microwave background (CMB) much earlier (in the course of the Universe's expansion) than usual hydrogen atoms, so that the AKHA temperature is significantly lower than that of usual hydrogen atoms. This appears to lower the excitation (spin) temperature of the hyperfine doublet (responsible for the 21 cm line) sufficiently to explain the puzzling observational result. This possible explanation appears to be more specific and natural than the previous ones. Further observational studies of the redshifted 21 cm radio line from the early Universe could help verify which explanation is the most relevant.
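For context, a schematic reminder of why a lower spin temperature deepens the absorption (standard 21 cm radiative-transfer scaling, not a result of this paper):

```latex
% Schematic scaling of the 21 cm differential brightness temperature against a
% background radiation field at temperature T_R:
\[
  \delta T_b \;\propto\; x_{\mathrm{HI}}\,(1+z)^{1/2}\Bigl(1 - \frac{T_R}{T_S}\Bigr),
\]
% so when the spin temperature T_S of the hydrogen gas falls well below T_R,
% the absorption (negative delta T_b) becomes deeper.
```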
Abstract: With large-scale engineering projects being carried out in China, a large number of fossil localities have been discovered and excavated by responsible agencies, but some important fossils of great value have still been removed and smuggled into foreign countries. In the last three years, more than 1345 fossil specimens have been intercepted by Customs in Shenzhen, Shanghai, Tianjin, Beijing and elsewhere, and more than 5000 fossils, most of which are listed as key fossils,
Abstract: Plastic has been accumulating on the beaches of Henderson Island in vast quantities, according to recent news reports. It is proposed that the plastic is brought to the island by a very broad permanent surface current flowing southeastward past the island. Other characteristics of the flow are that its temperature is relatively high, its depth is shallow (about 100 m), its speed is sluggish (10-20 cm/sec), and by broad is meant more than 5000 km along 28°S. Henderson Island is located at the east/west midpoint of this wide warm current (130°W). By knowing more definitely where the plastic is coming from than the vague suggestions provided by the news sources, it may be possible in the future to slow down or stop the piling up of trash on what were pristine beaches of this World Heritage Site.
Abstract: Majorana zero modes in the hybrid semiconductor-superconductor nanowire are one of the promising candidates for topological quantum computing. Recently, in nanowires with a superconducting island, the signature of Majorana zero modes can be revealed as a subgap state whose energy oscillates around zero in a magnetic field. This oscillation was interpreted as overlapping Majoranas. However, as the magnetic field increases, the oscillation amplitude either dies away after an overshoot or decays, in sharp contrast to the theoretically predicted enhanced oscillations for Majorana bound states. Several theoretical studies have tried to address this discrepancy, but are only partially successful. This discrepancy has raised concerns about the conclusive identification of Majorana bound states, and has even endangered the scheme of Majorana qubits based on such nanowires.
Abstract: Often, the explanatory power of a learned model must be traded off against model performance. In the case of predicting runaway software projects, we show that the twin goals of high performance and good explanatory power are achievable after applying a variety of data mining techniques (discrimination, feature subset selection, rule covering algorithms). This result is a new high-water mark in predicting runaway projects. Measured in terms of precision, this new model is as good as can be expected for our data. Other methods might outperform our result (e.g. by generating a smaller, more explainable model), but no other method could outperform the precision of our learned model.
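A hedged sketch of the general pipeline flavour described above, with scikit-learn components standing in for the paper's discrimination, feature subset selection, and rule-covering tools; the questionnaire-style data and labels are synthetic placeholders.

```python
# Feature subset selection plus a small, explainable classifier, scored by precision.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.integers(0, 4, size=(200, 30)).astype(float)   # questionnaire-style project features
y = (X[:, 2] + X[:, 7] > 4).astype(int)                # toy "runaway project" label

model = make_pipeline(
    SelectKBest(mutual_info_classif, k=5),                 # keep a small, explainable feature subset
    DecisionTreeClassifier(max_depth=3, random_state=0),   # shallow tree as a readable rule set
)
print("mean CV precision:", cross_val_score(model, X, y, cv=5, scoring="precision").mean().round(3))
```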
Abstract: The DyTiFe_(11) compound is a ferromagnetic substance. It has a tetragonal body-centered ThMn_(12)-type crystallographic structure. At room temperature, the easy magnetization direction is the c-axis. A spin reorientation begins to appear at about 175 K. The contribution of the Fe sublattice to the magnetocrystalline anisotropy was determined by experiments, and that of the Dy sublattice was obtained using a single-ion model calculation. The results show that the spin reorientation arises from the competition of anisotropy between the Fe and Dy sublattices.
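A schematic sketch of the single-ion-style reasoning behind "competition of anisotropy between sublattices" (generic phenomenology with assumed signs, not the paper's fitted parameters):

```latex
% Schematic anisotropy-energy competition between the Fe and Dy sublattices:
\[
  E_a(\theta, T) \;\approx\; \bigl[K_1^{\mathrm{Fe}}(T) + K_1^{\mathrm{Dy}}(T)\bigr]\sin^2\theta
  \;+\; \bigl[K_2^{\mathrm{Fe}}(T) + K_2^{\mathrm{Dy}}(T)\bigr]\sin^4\theta,
\]
% with theta the angle of the magnetization from the c-axis. If K_1^{Fe} > 0
% (easy c-axis) while the Dy single-ion term turns negative at low temperature,
% the easy direction tilts away from the c-axis below the temperature at which
% the bracketed K_1 sum changes sign, i.e. a spin reorientation sets in.
```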
Abstract: This picture for the "International Space Year", provided by Geo-Space Co. of Austria and obtained by accurate geometric correction and digital mosaicking of NOAA/AVHRR meteorological satellite imagery, displays the vegetation rates of the European continent in detail. The snow cover of the Alps, Scandinavia, and Iceland and the dry and hot features of the Mediterranean area are very clear. The vegetation rates give multi-temporal information for the study of environmental and seasonal change, biomass estimation, and crop production, which has been the