Ceramic relief mural is a contemporary landscape art that is carefully designed based on human nature, culture, and architectural wall space, combined with social customs, visual sensibility, and art. It may also become the main axis of ceramic art in the future. Taiwan's public ceramic relief murals (PCRM) are most distinctive, with PCRM pioneered by Pan-Hsiung Chu of Meinong Kiln in 1987. In addition to breaking through the limitations of traditional public ceramic murals, Chu leveraged local culture and sensibility as artistic themes, giving PCRM a unique style and innovative value throughout the Taiwan region. This study mainly analyzes and seeks to understand the design image of public ceramic murals, taking Taiwan PCRM's design and creation as its scope. It applies STEEP analysis, in which the social, technological, economic, ecological, and political-legal environments are analyzed as core dimensions, and evaluates eight important factors in the artistic design image of ceramic murals. Then, interpretive structural modeling (ISM) is used to establish five levels, to analyze the four main problems in the core factor area and the four main target results in the affected factor area, and to examine the problem points and target points as well as their causal relationships. The study is expected to sort out the relationships among these factors, obtain the hierarchical relationship of each factor, and provide a reference basis and research methods.
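A common way to operationalise the split between a "core (driving) factor area" and an "affected factor area" in ISM-based studies is a MICMAC-style driving power and dependence analysis. The sketch below illustrates that computation on a small hypothetical reachability matrix for eight factors; the matrix values are invented and do not reflect the study's data.

```python
import numpy as np

# Hypothetical final reachability matrix for 8 factors (1 = factor i reaches factor j).
# These values are illustrative only, not the matrix from the study.
R = np.array([
    [1, 1, 1, 1, 0, 0, 0, 0],
    [0, 1, 1, 1, 0, 0, 0, 0],
    [0, 0, 1, 1, 1, 0, 0, 0],
    [0, 0, 0, 1, 1, 1, 0, 0],
    [0, 0, 0, 0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0, 1, 1, 1],
    [0, 0, 0, 0, 0, 0, 1, 1],
    [0, 0, 0, 0, 0, 0, 0, 1],
])

driving_power = R.sum(axis=1)   # how many factors each factor influences (incl. itself)
dependence = R.sum(axis=0)      # how many factors influence each factor (incl. itself)

for i, (dp, de) in enumerate(zip(driving_power, dependence), start=1):
    role = "core (driving) factor" if dp > de else "affected (dependent) factor"
    print(f"F{i}: driving power={dp}, dependence={de} -> {role}")
```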
Through a review of the related literature, we identify six major factors influencing China's forestry enterprises' financing: insufficient national support; regulatory and institutional environmental factors; narrow financing channels; an inappropriate existing mortgage-backed approach; forestry production characteristics; and forestry enterprises' own defects. We then use interpretive structural modeling (ISM) from systems engineering to analyze the structure of the six factors and set up a ladder-type structure. Three factors (forestry production characteristics, forestry enterprises' defects, and regulatory and institutional environmental factors) are treated as basic factors, and the other three as important factors. From the perspective of the government and enterprises, we put forward suggestions and ideas based on the basic and important factors to ease the financing difficulties of forestry enterprises.
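For readers unfamiliar with ISM, the level-partitioning step that produces this kind of ladder-type structure can be sketched as follows. The adjacency matrix below encodes hypothetical pairwise influences among six factors (F1-F6) and is not the matrix derived in the paper.

```python
import numpy as np

def ism_levels(adjacency: np.ndarray):
    """Partition factors into ISM hierarchy levels from a 0/1 adjacency matrix."""
    n = adjacency.shape[0]
    # Final reachability matrix via Boolean transitive closure (Warshall).
    R = (adjacency | np.eye(n, dtype=int)).astype(bool)
    for k in range(n):
        R = R | (R[:, [k]] & R[[k], :])
    levels, remaining = [], set(range(n))
    while remaining:
        level = []
        for i in remaining:
            reach = {j for j in remaining if R[i, j]}
            ante = {j for j in remaining if R[j, i]}
            if reach <= ante:      # reachability set contained in antecedent set -> top of remaining
                level.append(i)
        levels.append(level)
        remaining -= set(level)
    return levels

# Hypothetical influence structure among 6 financing factors (F1..F6).
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 0, 1, 1, 0],
    [0, 0, 0, 0, 1, 1],
    [0, 0, 0, 0, 0, 1],
    [0, 0, 0, 0, 0, 0],
], dtype=int)

for depth, level in enumerate(ism_levels(A), start=1):
    print(f"Level {depth}: {[f'F{i+1}' for i in level]}")
```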
Alarm flood is one of the main problems in the alarm systems of industrial processes. Alarm root-cause analysis and alarm prioritization are effective means of alarm flood reduction. This paper proposes a systematic rationalization method for multivariate correlated alarms to realize root-cause analysis and alarm prioritization. An information-fusion-based interpretive structural model is constructed according to data-driven partial correlation coefficient calculation and process knowledge modification. This hierarchical multi-layer model is helpful in abnormality propagation path identification and root-cause analysis. A revised Likert scale method is adopted to determine the alarm priority and reduce the blindness of alarm handling. As a case study, the Tennessee Eastman process is utilized to show the effectiveness and validity of the proposed approach. An alarm system performance comparison shows that the rationalization methodology can reduce the alarm flood to some extent and improve performance. Funding: Supported by the National Natural Science Foundation of China (61473026, 61104131) and the Fundamental Research Funds for the Central Universities (JD1413).
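The data-driven part of the model relies on partial correlation coefficients between alarm-related process variables, which can be obtained from the inverse covariance (precision) matrix. The sketch below demonstrates this on synthetic data; the variables and the 0.3 threshold are illustrative, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic process data: three correlated variables plus one independent variable.
n = 2000
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + 0.2 * rng.normal(size=n)
x3 = 0.6 * x2 + 0.4 * rng.normal(size=n)
x4 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3, x4])

# Partial correlations from the precision matrix:
# rho_ij = -P_ij / sqrt(P_ii * P_jj), controlling for all other variables.
P = np.linalg.inv(np.cov(X, rowvar=False))
d = np.sqrt(np.diag(P))
partial_corr = -P / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)

# Keep only strong direct links as candidate edges of the alarm correlation structure.
threshold = 0.3
edges = [(i, j, round(partial_corr[i, j], 2))
         for i in range(4) for j in range(i + 1, 4)
         if abs(partial_corr[i, j]) > threshold]
print(edges)
```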
The aperture of natural rock fractures significantly affects the deformation and strength properties of rock masses, as well as the hydrodynamic properties of fractured rock masses. Conventional measurement methods are inadequate for collecting data on high-steep rock slopes in complex mountainous regions. This study establishes a high-resolution three-dimensional model of a rock slope using unmanned aerial vehicle (UAV) multi-angle nap-of-the-object photogrammetry to obtain the edge feature points of fractures. Fracture opening morphology is characterized using coordinate projection and transformation. The fracture central axis is determined using vertical measuring lines, allowing the aperture to be interpreted adaptively to the fracture shape. The feasibility and reliability of the new method are verified at the construction site of a railway in southeast Tibet, China. The study shows that fracture aperture has significant interval and size effects. The optimal sampling length for fractures is approximately 0.5-1 m, and the best aperture interpretation results are achieved when the measuring line spacing is 1% of the sampling length. Tensile fractures in the study area generally have larger apertures than shear fractures, and their tendency to increase with slope height is also greater than that of shear fractures. The aperture of tensile fractures is generally positively correlated with their trace length, while the correlation between the aperture of shear fractures and their trace length appears to be weak. Fractures of different orientations exhibit certain differences in their aperture distributions, but generally follow normal, log-normal, and gamma distributions. This study provides essential data support for rock and slope stability evaluation, which is of significant practical importance. Funding: This work was supported by the National Natural Science Foundation of China (Grant Nos. 42177139 and 41941017) and the Natural Science Foundation Project of Jilin Province, China (Grant No. 20230101088JC). The authors thank the anonymous reviewers for their comments and suggestions.
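Conceptually, the aperture-interpretation step measures the separation between the two fracture edges along closely spaced measuring lines. The 2D sketch below illustrates that idea with synthetic edge profiles and the 1% line spacing suggested by the study; the geometry is invented and greatly simplified relative to the paper's 3D workflow.

```python
import numpy as np

# Synthetic upper and lower fracture edges (projected 2D coordinates, metres).
x = np.linspace(0.0, 1.0, 101)                  # sampling length of 1 m
upper = 0.010 + 0.002 * np.sin(6 * np.pi * x)   # upper edge profile
lower = -0.010 + 0.002 * np.cos(6 * np.pi * x)  # lower edge profile

# Central axis (mid-curve between the edges); in the full method it orients the measuring lines.
axis = 0.5 * (upper + lower)

# Measuring lines spaced at 1% of the sampling length.
spacing = 0.01
stations = np.arange(0.0, 1.0 + 1e-9, spacing)
aperture = np.interp(stations, x, upper) - np.interp(stations, x, lower)

print(f"mean aperture = {aperture.mean()*1000:.2f} mm, "
      f"max aperture = {aperture.max()*1000:.2f} mm, "
      f"axis elevation range = {axis.ptp()*1000:.2f} mm")
```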
The possible risk factors during SAP Business One implementation were studied through in-depth interviews, and the results were then adjusted by experts. Twenty categories of risk factors, comprising 49 factors in total, were found. Based on these risk factors, a questionnaire was used to study the key risk factors of SAP Business One implementation. The results identify ten key risk factors: risk of senior managers' leadership, risk of project management, risk of process improvement, risk of implementation team organization, risk of process analysis, risk of basic data, risk of personnel coordination, risk of change management, risk of secondary development, and risk of data import. Focusing on these key risks, the interpretive structural modeling approach is used to study the relationships between the factors and establish a seven-level hierarchical structure. The study shows that the structure is olive-like, with the risk of data import at the top and the risk of senior managers' leadership at the bottom; these are the most important risk factors.
Interpretive theory puts forward three phases of interpretation: understanding, deverbalization, and re-expression. It requires both linguistic and non-linguistic knowledge. This essay discusses the application of interpretive theory to business interpretation from the perspectives of theory and practice.
This paper aims to explore the teaching of interpreting today by starting from the interpretive theory and its characteristics. The author believes that the theory is mainly based on the study of interpretation practice, and that its core concept, "deverbalization", has made great strides and breakthroughs in translation theory; when we examine translation, or rather interpretation, once again from the dual perspective of language and culture, we come across new thoughts on translation as well as on the teaching of interpreting.
The interpretive theory of translation (ITT) is a school of theory that originated in the late 1960s in France, focusing on the theory and teaching of interpreting and non-literary translation. ITT holds that what the translator should convey is not the meaning of linguistic notation but the non-verbal sense. In this paper, the author briefly introduces ITT and analyzes several examples to show the different situations in which ITT is either useful or unsuitable.
Predicting the motion of other road agents enables autonomous vehicles to perform safe and efficient path planning. This task is very complex, as the behaviour of road agents depends on many factors and the number of possible future trajectories can be considerable (multi-modal). Most prior approaches proposed to address multi-modal motion prediction are based on complex machine learning systems that have limited interpretability. Moreover, the metrics used in current benchmarks do not evaluate all aspects of the problem, such as the diversity and admissibility of the output. The authors aim to advance towards the design of trustworthy motion prediction systems, based on some of the requirements for the design of Trustworthy Artificial Intelligence. The focus is on evaluation criteria, robustness, and interpretability of outputs. First, the evaluation metrics are comprehensively analysed, the main gaps of current benchmarks are identified, and a new holistic evaluation framework is proposed. Then, a method for the assessment of spatial and temporal robustness is introduced by simulating noise in the perception system. To enhance the interpretability of the outputs and generate more balanced results in the proposed evaluation framework, an intent prediction layer that can be attached to multi-modal motion prediction models is proposed. The effectiveness of this approach is assessed through a survey that explores different elements in the visualisation of the multi-modal trajectories and intentions. The proposed approach and findings make a significant contribution to the development of trustworthy motion prediction systems for autonomous vehicles, advancing the field towards greater safety and reliability. Funding: European Commission, Joint Research Center (Grant/Award Number: HUMAINT); Ministerio de Ciencia e Innovación (Grant/Award Number: PID2020-114924RB-I00); Comunidad de Madrid (Grant/Award Number: S2018/EMT-4362 SEGVAUTO 4.0-CM).
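Multi-modal motion prediction benchmarks are commonly scored with minimum-over-modes displacement errors, which is the kind of metric the authors analyse. The sketch below computes a minimum average and final displacement error for K predicted trajectories against a ground-truth trajectory on random data; the shapes and names (minADE/minFDE) are generic assumptions, not the authors' proposed holistic framework.

```python
import numpy as np

def min_ade_fde(pred: np.ndarray, gt: np.ndarray):
    """pred: (K, T, 2) candidate trajectories; gt: (T, 2) ground-truth trajectory."""
    dists = np.linalg.norm(pred - gt[None, :, :], axis=-1)  # (K, T) pointwise errors
    ade = dists.mean(axis=1)       # average displacement error per mode
    fde = dists[:, -1]             # final displacement error per mode
    return ade.min(), fde.min()

rng = np.random.default_rng(1)
gt = np.cumsum(rng.normal(0.0, 0.5, size=(12, 2)), axis=0)   # 12 future steps
pred = gt[None] + rng.normal(0.0, 1.0, size=(6, 12, 2))      # 6 predicted modes

min_ade, min_fde = min_ade_fde(pred, gt)
print(f"minADE = {min_ade:.2f} m, minFDE = {min_fde:.2f} m")
```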
Hyperspectral imagery encompasses spectral and spatial dimensions, reflecting the material properties of objects. Its application proves crucial in search and rescue, concealed target identification, and crop growth analysis. Clustering is an important method of hyperspectral analysis. The vast data volume of hyperspectral imagery, coupled with redundant information, poses significant challenges in swiftly and accurately extracting features for subsequent analysis. Current hyperspectral feature clustering methods, which mostly work in the spatial or spectral domain, lack strong interpretability, resulting in poor comprehensibility of the algorithms. This research therefore introduces a feature clustering algorithm for hyperspectral imagery from an interpretability perspective. It commences with a simulated perception process, proposing an interpretable band selection algorithm to reduce data dimensions. Following this, a multi-dimensional clustering algorithm, rooted in fuzzy and kernel clustering, is developed to highlight intra-class similarities and inter-class differences. An optimized P system is then introduced to enhance computational efficiency. This system coordinates all cells within a mapping space to compute optimal cluster centers, facilitating parallel computation. This approach diminishes sensitivity to initial cluster centers and augments global search capabilities, thus preventing entrapment in local minima and enhancing clustering performance. Experiments were conducted on 300 datasets comprising both real and simulated data. The results show that the average accuracy (ACC) of the proposed algorithm is 0.86 and the combination measure (CM) is 0.81. Funding: Yulin Science and Technology Bureau Production Project "Research on Smart Agricultural Product Traceability System" (No. CXY-2022-64); Light of West China (No. XAB2022YN10); the China Postdoctoral Science Foundation (No. 2023M740760); Shaanxi Province Key Research and Development Plan (No. 2024SF-YBXM-678).
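As a point of reference for the fuzzy-and-kernel clustering component, a plain fuzzy c-means update loop looks like the sketch below; it omits the kernel mapping and the P-system parallelisation described in the paper, and the data are synthetic.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=50, seed=0):
    """Plain fuzzy c-means: returns cluster centers and membership matrix U (n x c)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.dirichlet(np.ones(c), size=n)           # random fuzzy memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1) + 1e-12
        U = 1.0 / (dist ** (2 / (m - 1)))           # inverse-distance memberships
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

rng = np.random.default_rng(0)
# Synthetic "pixel spectra": three groups in a 10-band feature space.
X = np.vstack([rng.normal(loc=k, scale=0.3, size=(100, 10)) for k in range(3)])
centers, U = fuzzy_c_means(X, c=3)
print("hard labels of first 5 pixels:", U.argmax(axis=1)[:5])
```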
Association rule learning (ARL) is a widely used technique for discovering relationships within datasets. However, it often generates excessive irrelevant or ambiguous rules. Therefore, post-processing is crucial not only for removing irrelevant or redundant rules but also for uncovering hidden associations that impact other factors. Recently, several post-processing methods have been proposed, each with its own strengths and weaknesses. In this paper, we propose THAPE (Tunable Hybrid Associative Predictive Engine), which combines descriptive and predictive techniques. By leveraging both techniques, our aim is to enhance the quality of analyzing generated rules. This includes removing irrelevant or redundant rules, uncovering interesting and useful rules, exploring hidden association rules that may affect other factors, and providing backtracking ability for a given product. The proposed approach offers a tailored method that suits specific goals for retailers, enabling them to gain a better understanding of customer behavior based on factual transactions in the target market. We applied THAPE to a real dataset as a case study in this paper to demonstrate its effectiveness. Through this application, we successfully mined a concise set of highly interesting and useful association rules. Out of the 11,265 rules generated, we identified 125 rules that are particularly relevant to the business context. These identified rules significantly improve the interpretability and usefulness of association rules for decision-making purposes.
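A typical descriptive post-processing step is pruning redundant rules: a rule is dropped when a more general rule with the same consequent and a subset antecedent already achieves at least the same confidence. The sketch below implements that generic filter on hand-made rules; it illustrates the idea only and is not the THAPE engine.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    antecedent: frozenset
    consequent: frozenset
    confidence: float

rules = [
    Rule(frozenset({"bread"}), frozenset({"butter"}), 0.62),
    Rule(frozenset({"bread", "milk"}), frozenset({"butter"}), 0.60),  # redundant
    Rule(frozenset({"bread", "jam"}), frozenset({"butter"}), 0.81),   # adds information
    Rule(frozenset({"diapers"}), frozenset({"beer"}), 0.55),
]

def prune_redundant(rules):
    kept = []
    for r in rules:
        redundant = any(
            g is not r
            and g.consequent == r.consequent
            and g.antecedent < r.antecedent          # strictly more general rule
            and g.confidence >= r.confidence
            for g in rules
        )
        if not redundant:
            kept.append(r)
    return kept

for r in prune_redundant(rules):
    print(set(r.antecedent), "->", set(r.consequent), r.confidence)
```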
Electrocatalytic nitrogen reduction to ammonia has garnered significant attention with the blooming of single-atom catalysts (SACs), showcasing their potential for sustainable and energy-efficient ammonia production. However, cost-effectively designing and screening efficient electrocatalysts remains a challenge. In this study, we have successfully established interpretable machine learning (ML) models to evaluate the catalytic activity of SACs by directly and accurately predicting reaction Gibbs free energy. Our models were trained using non-density-functional-theory (DFT) calculated features from a dataset comprising 90 graphene-supported SACs. Our results underscore the superior prediction accuracy of the gradient boosting regression (GBR) model for both ΔG(N₂→NNH) and ΔG(NH₂→NH₃), with coefficient of determination (R²) scores of 0.972 and 0.984, along with root mean square errors (RMSE) of 0.051 and 0.085 eV, respectively. Moreover, feature importance analysis elucidates that the high accuracy of the GBR model stems from its adept capture of characteristics pertinent to the active center and coordination environment, unveiling the significance of elementary descriptors, with the covalent radius playing a dominant role. Additionally, Shapley additive explanations (SHAP) analysis provides global and local interpretation of the working mechanism of the GBR model. Our analysis identifies that a pyrrole-type coordination (flag = 0), d-orbitals with a moderate occupation (N_d = 5), and a moderate difference in covalent radius (r_TM-ave near 140 pm) are conducive to achieving high activity. Furthermore, we extend the prediction of activity to more catalysts without additional DFT calculations, validating the reliability of our feature engineering, model training, and design strategy. These findings not only highlight a new opportunity for accelerating catalyst design using non-DFT calculated features, but also shed light on the working mechanism of "black box" ML models. Moreover, the model provides valuable guidance for catalytic material design in multiple proton-electron coupling reactions, particularly in driving sustainable CO₂, O₂, and N₂ conversion. Funding: Supported by the Research Grants Council of Hong Kong (CityU 11305919 and 11308620), the NSFC/RGC Joint Research Scheme N_CityU104/19, and the Hong Kong Research Grants Council Collaborative Research Fund (C1002-21G and C1017-22G).
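A gradient boosting regression workflow of the kind described, predicting reaction free energies from simple descriptors, can be assembled with scikit-learn as below. The features and targets are random stand-ins for the 90-SAC dataset, so the printed scores are meaningless; only the pipeline structure is illustrative.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(42)
# Stand-in descriptors: covalent radius, d-electron count, electronegativity, coordination flag.
X = rng.normal(size=(90, 4))
y = 0.6 * X[:, 0] - 0.3 * X[:, 1] + 0.1 * rng.normal(size=90)   # fake free-energy values (eV)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
rmse = mean_squared_error(y_te, pred) ** 0.5
print(f"R2 = {r2_score(y_te, pred):.3f}, RMSE = {rmse:.3f} eV")
print("feature importances:", np.round(model.feature_importances_, 3))
```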
Gas chromatography-mass spectrometry (GC-MS) is an extremely important analytical technique that is widely used in organic geochemistry. It is the only approach to capture biomarker features of organic matter and provides key evidence for oil-source correlation and thermal maturity determination. However, the conventional way of processing and interpreting the mass chromatogram is both time-consuming and labor-intensive, which increases the research cost and restrains extensive application of this method. To overcome this limitation, a correlation model is developed based on a convolutional neural network (CNN) to link the mass chromatogram and biomarker features of samples from the Triassic Yanchang Formation, Ordos Basin, China. In this way, the mass chromatogram can be automatically interpreted. This research first performs dimensionality reduction for 15 biomarker parameters via factor analysis and then quantifies the biomarker features using two indexes (i.e., MI and PMI) that represent the organic matter thermal maturity and parent material type, respectively. Subsequently, training, interpretation, and validation are performed multiple times using different CNN models to optimize the model structure and hyper-parameter settings, with the mass chromatogram used as the input and the obtained MI and PMI values used for supervision (labels). The optimized model presents high accuracy in automatically interpreting the mass chromatogram, with R² values typically above 0.85 and 0.80 for the thermal maturity and parent material interpretation results, respectively. The significance of this research is twofold: (i) developing an efficient technique for geochemical research; (ii) more importantly, demonstrating the potential of artificial intelligence in organic geochemistry and providing vital references for future related studies. Funding: Financially supported by the China Postdoctoral Science Foundation (Grant No. 2023M730365) and the Natural Science Foundation of Hubei Province of China (Grant No. 2023AFB232).
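A correlation model of this kind treats the mass chromatogram as a one-dimensional signal and regresses the two supervision indexes (MI and PMI). The PyTorch sketch below shows one minimal form of such a network; the input length, channel counts, and layer sizes are placeholders rather than the optimized architecture reported in the paper.

```python
import torch
import torch.nn as nn

class ChromatogramCNN(nn.Module):
    """Tiny 1-D CNN mapping a mass chromatogram to two indexes (MI, PMI)."""
    def __init__(self, signal_length: int = 1024):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (signal_length // 16), 64), nn.ReLU(),
            nn.Linear(64, 2),                      # outputs: [MI, PMI]
        )

    def forward(self, x):                           # x: (batch, 1, signal_length)
        return self.head(self.features(x))

model = ChromatogramCNN()
x = torch.randn(8, 1, 1024)                         # batch of 8 synthetic chromatograms
y = torch.randn(8, 2)                               # fake MI/PMI labels
loss = nn.MSELoss()(model(x), y)
loss.backward()
print("example loss:", float(loss))
```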
With the successful application and breakthrough of deep learning technology in image segmentation, there has been continuous development in the field of seismic facies interpretation using convolutional neural networks. These intelligent and automated methods significantly reduce manual labor, particularly in the laborious task of manually labeling seismic facies. However, the extensive demand for training data imposes limitations on their wider application. To overcome this challenge, we adopt the UNet architecture as the foundational network structure for seismic facies classification, which has demonstrated effective segmentation results even with small-sample training data. Additionally, we integrate spatial pyramid pooling and dilated convolution modules into the network architecture to enhance the perception of spatial information across a broader range. The seismic facies classification test on the public data from the F3 block verifies the superior performance of our proposed improved network structure in delineating seismic facies boundaries. Comparative analysis against the traditional UNet model reveals that our method achieves more accurate predictive classification results, as evidenced by various evaluation metrics for image segmentation; the classification accuracy reaches 96%. Furthermore, the results of seismic facies classification in the seismic slice dimension provide further confirmation of the superior performance of our proposed method, which accurately defines the range of different seismic facies. This approach holds significant potential for analyzing geological patterns and extracting valuable depositional information. Funding: Funded by the Fundamental Research Project of CNPC Geophysical Key Lab (2022DQ0604-4) and the Strategic Cooperation Technology Projects of China National Petroleum Corporation and China University of Petroleum-Beijing (ZLZX 202003).
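The additions mentioned here, spatial pyramid pooling and dilated convolutions, are typically inserted at the UNet bottleneck to enlarge the receptive field. Below is a minimal dilated-convolution pyramid block in PyTorch as one plausible reading of that design; the dilation rates and channel counts are assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class DilatedPyramidBlock(nn.Module):
    """Parallel dilated convolutions fused by a 1x1 convolution (ASPP-style)."""
    def __init__(self, in_ch: int, out_ch: int, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

# Example: bottleneck feature map of a UNet-style network on a seismic patch.
block = DilatedPyramidBlock(in_ch=256, out_ch=128)
feat = torch.randn(2, 256, 32, 32)
print(block(feat).shape)   # torch.Size([2, 128, 32, 32])
```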
The rapid evolution of scientific and technological advancements and industrial changes has profoundly interconnected countries and regions in the digital information era, creating a globalized environment where effective communication is paramount. Consequently, the demand for proficient interpreting skills within the scientific and technology sectors has surged, making effective language communication increasingly crucial. This paper explores the potential impact of translation universals on enhancing sci-tech simultaneous interpreter education. By examining the selection of teaching materials, methods, and activities through the lens of translation universals, this study aims to improve the quality of teaching content, innovate instructional approaches, and, ultimately, enhance the effectiveness of interpreter education. The findings of this research are expected to provide valuable insights for curriculum development and pedagogical strategies in interpreter education.
The Pennsylvanian unconformity, which is a detrital surface, separates the beds of the Permian-aged strata from the Lower Paleozoic in the Central Basin Platform. Seismic data interpretation indicates that the unconformity is an angular unconformity, overlying multiple normal faults and accompanied by a thrust fault which maximizes the region's structural complexity. Additionally, the Pennsylvanian angular unconformity creates pinch-outs between the beds above and below. We computed the spectral decomposition and reflector convergence attributes and analyzed them to characterize the angular unconformity and faults. The spectral decomposition attribute divides the broadband seismic data into different spectral bands to resolve thin beds and show thickness variations. In contrast, the reflector convergence attribute highlights the location and direction of the pinch-outs as they dip south at angles between 2° and 6°. After reviewing findings from RGB blending of the spectrally decomposed frequencies along the Pennsylvanian unconformity, we observed channel-like features and multiple linear bands in addition to the faults and pinch-outs. It can be inferred that the identified linear bands could be the result of different lithologies associated with the tilting of the beds, and the faults may influence hydrocarbon migration or act as flow barriers that entrap hydrocarbon accumulations. The identification of this angular unconformity and the associated features in the study area is vital for the following reasons: 1) the unconformity surface represents a natural stratigraphic boundary; 2) the stratigraphic pinch-outs act as fluid flow connectivity boundaries; 3) the areal extent of compartmentalized reservoirs' boundaries created by the angular unconformity is better defined; and 4) fault displacements are better understood when planning well locations, as faults can be flow barriers or permeability conduits depending on facies heterogeneity and/or the seal effectiveness of a fault, which can affect hydrocarbon production. The methodology utilized in this study is a further step in the characterization of reservoirs and can be used to expand our knowledge and obtain more information about the Goldsmith Field.
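RGB blending of spectrally decomposed amplitudes normalises three frequency-band amplitude maps and assigns them to the red, green, and blue display channels so that thickness-related tuning shows up as colour variation. The sketch below does this with synthetic amplitude maps; the 20/40/60 Hz band centres are illustrative choices, not those used in the study.

```python
import numpy as np

def normalise(band: np.ndarray) -> np.ndarray:
    """Scale one spectral-amplitude map to the 0-1 range for display."""
    lo, hi = np.percentile(band, [2, 98])        # clip extremes for a stable display
    return np.clip((band - lo) / (hi - lo + 1e-12), 0.0, 1.0)

rng = np.random.default_rng(0)
shape = (200, 200)                               # map view along the unconformity surface
amp_20hz = rng.random(shape)                     # stand-ins for spectral decomposition
amp_40hz = rng.random(shape)                     # outputs at three band centres
amp_60hz = rng.random(shape)

rgb = np.dstack([normalise(amp_20hz),            # red   <- low frequency (thicker beds)
                 normalise(amp_40hz),            # green <- mid frequency
                 normalise(amp_60hz)])           # blue  <- high frequency (thinner beds)
print(rgb.shape, rgb.min(), rgb.max())           # (200, 200, 3) image ready for display
```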
A new approach is proposed in this study for accountable capability improvement based on interpretable capability evaluation using the belief rule base (BRB). Firstly, a capability evaluation model is constructed and optimized. Then, the key sub-capabilities are identified by quantitatively calculating the contribution made by each sub-capability to the overall capability. Finally, the overall capability is improved by optimizing the identified key sub-capabilities. The theoretical contributions of the proposed approach are as follows. (i) An interpretable capability evaluation model is constructed by employing BRB, which provides complete access for decision-makers. (ii) Key sub-capabilities are identified according to the quantitative contribution analysis results. (iii) Accountable capability improvement is carried out by optimizing only the identified key sub-capabilities. Case study results show that "Surveillance", "Positioning", and "Identification" are identified as key sub-capabilities with a summed contribution of 75.55% in an analytical and deducible fashion based on the interpretable capability evaluation model. As a result, the overall capability is improved by optimizing only the identified key sub-capabilities: it can be greatly improved from 59.20% to 81.80% with a minimum cost of 397. Furthermore, this paper also investigates how optimizing the BRB with more collected data would affect the evaluation results: optimizing only "Surveillance" and "Positioning" can also improve the overall capability, to 81.34% at a cost of 370, which validates the efficiency of the proposed approach. Funding: Supported by the National Natural Science Foundation of China (72471067, 72431011, 72471238, 72231011, 62303474, 72301286) and the Fundamental Research Funds for the Provincial Universities of Zhejiang (GK239909299001-010).
Model checking is an automated formal verification method for verifying whether epistemic multi-agent systems adhere to property specifications. Although there is an extensive literature on qualitative properties such as safety and liveness, there is still a lack of quantitative and uncertain property verification for these systems. In uncertain environments, agents must make judicious decisions based on subjective epistemic knowledge. To verify epistemic and measurable properties in multi-agent systems, this paper extends fuzzy computation tree logic by introducing epistemic modalities and proposing a new Fuzzy Computation Tree Logic of Knowledge (FCTLK). We represent fuzzy multi-agent systems as distributed knowledge bases with fuzzy epistemic interpreted systems. In addition, we provide a transformation algorithm from fuzzy epistemic interpreted systems to fuzzy Kripke structures, as well as transformation rules from FCTLK formulas to Fuzzy Computation Tree Logic (FCTL) formulas. Accordingly, we transform the FCTLK model checking problem into FCTL model checking. This enables the verification of FCTLK formulas by using the fuzzy model checking algorithm of FCTL without additional computational overhead. Finally, we present correctness proofs and complexity analyses of the proposed algorithms, and we further illustrate the practical application of our approach through an example of a train control system. Funding: The work is partially supported by the Natural Science Foundation of Ningxia (Grant No. AAC03300), the National Natural Science Foundation of China (Grant No. 61962001), and the Graduate Innovation Project of North Minzu University (Grant No. YCX23152).
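In a fuzzy Kripke structure, formulas take truth values in [0, 1] and temporal operators are evaluated with max-min compositions over fuzzy transitions. The sketch below evaluates EX and an until-style operator on a toy three-state structure; the structure and values are invented for illustration and are unrelated to the paper's train control example.

```python
import numpy as np

# Toy fuzzy Kripke structure: fuzzy transition degrees between 3 states
# and a fuzzy valuation of an atomic proposition p at each state.
R = np.array([
    [0.0, 0.8, 0.3],
    [0.5, 0.0, 0.9],
    [0.7, 0.2, 0.0],
])
p = np.array([0.2, 0.9, 0.6])

def ex(R: np.ndarray, phi: np.ndarray) -> np.ndarray:
    """Fuzzy EX phi: at each state, max over successors of min(transition degree, phi)."""
    return np.max(np.minimum(R, phi[None, :]), axis=1)

def eu(R, phi, psi, iters=20):
    """Fuzzy E[phi U psi] via fixed-point iteration: Z = psi OR (phi AND EX Z)."""
    Z = psi.copy()
    for _ in range(iters):
        Z = np.maximum(psi, np.minimum(phi, ex(R, Z)))
    return Z

print("EX p       :", ex(R, p))
print("E[true U p]:", eu(R, np.ones(3), p))   # fuzzy reachability of p
```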
To equip data-driven dynamic chemical process models with strong interpretability, we develop a light attention-convolution-gate recurrent unit (LACG) architecture with three sub-modules (a basic module, a brand-new light attention module, and a residue module) that are specially designed to learn the general dynamic behavior, transient disturbances, and other input factors of chemical processes, respectively. Combined with a hyperparameter optimization framework, Optuna, the effectiveness of the proposed LACG is tested by distributed control system data-driven modeling experiments on the discharge flowrate of an actual deethanization process. The LACG model provides significant advantages in prediction accuracy and model generalization compared with other models, including the feedforward neural network, convolutional neural network, long short-term memory (LSTM), and attention-LSTM. Moreover, compared with the simulation results of a deethanization model built using Aspen Plus Dynamics V12.1, the LACG parameters are demonstrated to be interpretable, and more details on the variable interactions can be observed from the model parameters in comparison with the traditional interpretable model attention-LSTM. This contribution enriches interpretable machine learning knowledge and provides a reliable method with high accuracy for actual chemical process modeling, paving a route to intelligent manufacturing. Funding: Support provided by the National Natural Science Foundation of China (22122802, 22278044, and 21878028), the Chongqing Science Fund for Distinguished Young Scholars (CSTB2022NSCQ-JQX0021), and the Fundamental Research Funds for the Central Universities (2022CDJXY-003).
The periphery of the Qinghai-Tibet Plateau is renowned for its susceptibility to landslides. However, the northwestern margin of this region, characterised by limited human activities and challenging transportation, remains insufficiently explored concerning landslide occurrence and dispersion. With the planning and construction of the Xinjiang-Tibet Railway, a comprehensive investigation into disastrous landslides in this area is essential for effective disaster preparedness and mitigation strategies. Using a human-computer interaction interpretation approach, the authors established a landslide database encompassing 13,003 landslides, collectively spanning an area of 3351.24 km² (36°N-40°N, 73°E-78°E). The database incorporates diverse topographical and environmental parameters, including regional elevation, slope angle, slope aspect, distance to faults, distance to roads, distance to rivers, annual precipitation, and stratum. The statistical characteristics of the number and area of landslides, landslide number density (LND), and landslide area percentage (LAP) are analyzed. The authors found a predominant concentration of landslide origins within high slope angle regions, with the highest incidence observed in intervals characterised by average slopes of 20° to 30°, maximum slope angles above 80°, and orientations towards the north (N), northeast (NE), and southwest (SW). Additionally, elevations above 4.5 km, distances to rivers below 1 km, and annual precipitation of 20-30 mm and 30-40 mm emerge as particularly susceptible to landslide development. The study area's geological composition primarily comprises Mesozoic and Upper Paleozoic outcrops. Both faults and human engineering activities have different degrees of influence on landslide development. Furthermore, the significance of the landslide database, the relationship between landslide distribution and environmental factors, and the geometric and morphological characteristics of landslides are discussed. The landslide H/L ratios in the study area are mainly concentrated between 0.4 and 0.64, meaning that landslide mobility in the region is relatively low; the authors speculate that landslides in this region were more likely triggered by earthquakes or are located in the meizoseismal area. Funding: Supported by the National Key Research and Development Program of China (2021YFB3901205) and the National Institute of Natural Hazards, Ministry of Emergency Management of China (2023-JBKY-57).
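Landslide number density (LND, landslides per km²) and landslide area percentage (LAP, landslide area divided by class area) are simple per-class ratios. The sketch below computes both for a few slope-angle classes using made-up tallies, purely to show the bookkeeping behind the statistics reported here.

```python
# Hypothetical tallies per slope-angle class (not the study's figures).
classes = {
    "0-10 deg":  {"class_area_km2": 900.0, "n_landslides": 310,  "landslide_area_km2": 60.0},
    "10-20 deg": {"class_area_km2": 850.0, "n_landslides": 1200, "landslide_area_km2": 310.0},
    "20-30 deg": {"class_area_km2": 700.0, "n_landslides": 2600, "landslide_area_km2": 720.0},
    "30-40 deg": {"class_area_km2": 500.0, "n_landslides": 1900, "landslide_area_km2": 540.0},
}

for name, c in classes.items():
    lnd = c["n_landslides"] / c["class_area_km2"]                # landslides per km^2
    lap = 100.0 * c["landslide_area_km2"] / c["class_area_km2"]  # % of class area
    print(f"{name}: LND = {lnd:.2f} /km2, LAP = {lap:.1f} %")
```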