Abstract: This paper studies a strongly convergent inertial forward-backward-forward algorithm for the variational inequality problem in Hilbert spaces. In our convergence analysis, we do not assume the on-line rule of the inertial parameters and the iterates, which has been assumed by several authors whenever a strongly convergent algorithm with an inertial extrapolation step is proposed for a variational inequality problem. Consequently, our proof arguments differ from those in the relevant literature. Finally, we give numerical tests to confirm the theoretical analysis and show that our proposed algorithm is superior to related ones in the literature.
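As an illustration of the algorithm family studied here, the following minimal sketch combines the classical Tseng forward-backward-forward step with an inertial extrapolation term and a Halpern-type anchor (a standard device for strong convergence); the paper's actual parameter rules are not reproduced, and the operator `A`, the projection `proj_C`, and all step sizes below are illustrative assumptions.

```python
import numpy as np

def inertial_fbf(A, proj_C, x0, x1, lam=0.4, theta=0.3, n_iters=500):
    """Sketch of a Tseng-type forward-backward-forward iteration with an
    inertial extrapolation step for the variational inequality VI(C, A).

    A      : monotone, L-Lipschitz operator (callable)
    proj_C : metric projection onto the feasible set C (callable)
    lam    : step size (should satisfy lam * L < 1)
    theta  : inertial parameter in [0, 1)
    """
    x_prev, x = x0, x1
    for n in range(n_iters):
        w = x + theta * (x - x_prev)      # inertial extrapolation
        y = proj_C(w - lam * A(w))        # forward-backward step
        z = y + lam * (A(w) - A(y))       # forward correction (Tseng)
        alpha = 1.0 / (n + 2)             # Halpern anchoring -> strong convergence
        x_prev, x = x, alpha * x1 + (1 - alpha) * z
    return x

# Toy VI(C, A): A(x) = Mx + q with monotone M, C the unit box.
M = np.array([[2.0, 1.0], [-1.0, 2.0]])
q = np.array([-1.0, 0.5])
A = lambda x: M @ x + q
proj_C = lambda x: np.clip(x, -1.0, 1.0)
print(inertial_fbf(A, proj_C, np.zeros(2), np.zeros(2)))
```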
Funding: Funded in part by the Major Projects of the National Social Science Fund of China (16ZDA054), the Postgraduate Research & Practice Innovation Program of Jiangsu Province, China (No. KYCX18_0999), and the Engineering Research Center for Software Testing and Evaluation of Fujian Province, China (ST2018004).
Abstract: With the proliferation of the internet, big data continues to grow exponentially, and video has become the largest source. Video big data introduces many technological challenges, including compression, storage, transmission, analysis, and recognition. The increase in the number of multimedia resources has brought an urgent need to develop intelligent methods to organize and process them. The integration between Semantic Link Networks and multimedia resources provides a new prospect for organizing them with their semantics. The tags and surrounding texts of multimedia resources are used to measure their semantic association. Two evaluation methods, clustering and retrieval, are performed to measure the semantic relatedness between images accurately and robustly. A Fuzzy Rule-Based Model for Semantic Content Extraction is designed, which performs classification with fuzzy rules. The extracted features are trained with neural networks, where each network contains several layers and each layer of neurons is dedicated to measuring the weight towards a different semantic event. Each neuron measures its weight according to features such as shape, size, direction, and speed. The object is identified by subtracting the background features and is trained to be detected based on features such as size, shape, and direction. Weight measurement is performed according to the fuzzy rules, and classification is based on these weight measures. These frameworks enhance the video analytics capability and help video surveillance systems achieve better accuracy and precision.
Funding: Key Development Program of Science and Technology of Heilongjiang Province, China (GB05A501).
Abstract: At present, most commercial computer-aided manufacturing (CAM) systems are deficient in efficiency and performance when generating tool paths for machining impellers. To solve the problem, this article develops special software to plan the cutting path for ruled surface impellers. An approximation algorithm to generate the cutting path for machining integral ruled surface impellers is proposed. By fitting sampling data points of an impeller blade into a curve, a model of the ruled surface blade of an impeller is built up. Furthermore, by calculating the points where the cutter axis vector intersects the free-form hub surface of an impeller, problems such as ambiguity in calculation and machining a wide blade surface with a short flute cutter are solved. Finally, an integral impeller cutting path is planned by way of an integrated cutter location control algorithm. Simulation and machining tests with an impeller are performed on a 5-axis computer numerically controlled (CNC) milling machine, which shows the feasibility of the proposed algorithm.
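By definition, a ruled surface is swept by straight lines joining two boundary curves, which is what makes flank cutting along the rulings with a short flute cutter possible. The sketch below evaluates points on such a blade surface from two fitted boundary curves; the curve functions are toy stand-ins, not the paper's fitted data.

```python
import numpy as np

def ruled_surface(c_hub, c_shroud, u, v):
    """Evaluate a ruled blade surface S(u, v) = (1 - v) * C_hub(u) + v * C_shroud(u),
    where C_hub and C_shroud are the fitted hub-side and shroud-side boundary
    curves; each line u = const is a ruling (a candidate cutter contact line)."""
    p0 = np.asarray(c_hub(u))
    p1 = np.asarray(c_shroud(u))
    return (1.0 - v) * p0 + v * p1

# Toy boundary curves standing in for curves fitted to sampled blade points.
c_hub = lambda u: np.array([np.cos(u), np.sin(u), 0.0])
c_shroud = lambda u: np.array([1.2 * np.cos(u + 0.1), 1.2 * np.sin(u + 0.1), 1.0])
print(ruled_surface(c_hub, c_shroud, u=0.5, v=0.25))
```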
Abstract: Considering the efficiency and veracity of rules-based optical proximity correction (OPC), the importance of rules in rules-based OPC is pointed out, and how to select, construct, and apply a more concise and practical rules base is discussed. Based on those ideas, four primary rules are suggested, and some data from the resulting rules base are shown in a table. The patterns on the wafer are clearly improved by applying these rules to correct the mask. OPCL, the automatic construction of the rules base, is an important part of the whole rules-based OPC system.
Abstract: Association rule learning (ARL) is a widely used technique for discovering relationships within datasets. However, it often generates an excess of irrelevant or ambiguous rules. Therefore, post-processing is crucial not only for removing irrelevant or redundant rules but also for uncovering hidden associations that impact other factors. Recently, several post-processing methods have been proposed, each with its own strengths and weaknesses. In this paper, we propose THAPE (Tunable Hybrid Associative Predictive Engine), which combines descriptive and predictive techniques. By leveraging both techniques, our aim is to enhance the quality of the analysis of the generated rules. This includes removing irrelevant or redundant rules, uncovering interesting and useful rules, exploring hidden association rules that may affect other factors, and providing backtracking ability for a given product. The proposed approach offers a tailored method that suits retailers' specific goals, enabling them to gain a better understanding of customer behavior based on factual transactions in the target market. We applied THAPE to a real dataset as a case study to demonstrate its effectiveness. Through this application, we successfully mined a concise set of highly interesting and useful association rules. Out of the 11,265 rules generated, we identified 125 rules that are particularly relevant to the business context. These identified rules significantly improve the interpretability and usefulness of association rules for decision-making purposes.
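One common descriptive post-processing step of the kind THAPE's pipeline includes is pruning redundant rules: a rule is typically dropped when a simpler rule with a subset antecedent, the same consequent, and at least equal confidence exists. A minimal sketch under that definition, with hypothetical rule tuples:

```python
def prune_redundant(rules):
    """Drop rule (A -> C) when some (A' -> C) with A' a strict subset of A has
    confidence >= it. rules: list of (antecedent frozenset, consequent
    frozenset, confidence). This is only one descriptive filter; THAPE's full
    pipeline also covers predictive filtering and backtracking."""
    kept = []
    for ant, con, conf in rules:
        redundant = any(
            a2 < ant and c2 == con and conf2 >= conf   # strict subset, same consequent
            for a2, c2, conf2 in rules
        )
        if not redundant:
            kept.append((ant, con, conf))
    return kept

rules = [
    (frozenset({"bread"}), frozenset({"milk"}), 0.70),
    (frozenset({"bread", "eggs"}), frozenset({"milk"}), 0.65),   # redundant
    (frozenset({"bread", "eggs"}), frozenset({"butter"}), 0.40),
]
print(prune_redundant(rules))
```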
Abstract: The consensus of the automotive industry and traffic management authorities is that autonomous vehicles must follow the same traffic laws as human drivers. Using formal or digital methods, natural language traffic rules can be translated into machine language and used by autonomous vehicles. In this paper, a translation flow is designed. Beyond the translation, a deeper examination is required, because the semantics of natural languages are rich and complex and frequently contain hidden assumptions. The issue of how to ensure that digital rules are accurate and consistent with the original intent of the traffic rules they represent is both significant and unresolved. In response, we propose a method of formal verification that combines equivalence verification with model checking. Reasonable and reassuring digital traffic rules can be obtained by utilizing the proposed traffic rule digitization flow and verification method. In addition, we offer a number of simulation applications that employ digital traffic rules to assess vehicle violations. The experimental findings indicate that our digital rules utilizing metric temporal logic (MTL) can be easily incorporated into simulation platforms and autonomous driving systems (ADS).
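For a flavor of how a digitized MTL rule can be checked in simulation, the sketch below monitors a bounded-response obligation of the form G(trigger -> F_[0,k] response) on a finite sampled trace; the predicates and the lane-change rule are hypothetical, not taken from the paper's rule set.

```python
def violates_bounded_response(trace, trigger, response, k):
    """Check the MTL rule G(trigger -> F_[0,k] response) on a finite,
    uniformly sampled trace: whenever `trigger` holds at step i, `response`
    must hold at some step in [i, i + k]. Returns indices of violations
    (the window is truncated at the end of the finite trace)."""
    violations = []
    for i, state in enumerate(trace):
        if trigger(state):
            window = trace[i : i + k + 1]
            if not any(response(s) for s in window):
                violations.append(i)
    return violations

# Hypothetical digitized rule: after signalling a lane change, the vehicle
# must complete it within 3 time steps.
trace = [
    {"signal": True, "lane_changed": False},
    {"signal": False, "lane_changed": False},
    {"signal": False, "lane_changed": True},
]
print(violates_bounded_response(trace, lambda s: s["signal"],
                                lambda s: s["lane_changed"], k=3))
```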
Funding: Funded by the National Natural Science Foundation of China (Grant No. 11875031) and the Key Research Projects of Natural Science of Anhui Provincial Colleges and Universities (Grant No. 2022AH050252).
Abstract: As a common transportation facility, speed humps can control the speed of vehicles on special road sections to reduce traffic risks. At the same time, they also cause instantaneous traffic emissions. Based on the classic instantaneous traffic emission model and the limited deceleration capacity microscopic traffic flow model with slow-to-start rules, this paper investigates the impact of speed humps on traffic flow and the instantaneous emissions of vehicle pollutants in a single-lane situation. The numerical simulation results show that speed humps have significant effects on traffic flow and traffic emissions. In a free-flow region, the increase of speed humps leads to a continuous rise of CO_2, NO_x, and PM emissions. Within some density ranges, these pollutant emissions can evolve into higher values under some random seeds and into lower values under others. In a wide moving jam region, the emission values of these pollutants sometimes appear as a continuous or intermittent phenomenon. Compared to the refined NaSch model, the present model has lower instantaneous CO_2, NO_x, and PM emissions and higher volatile organic component (VOC) emissions. Compared to the limited deceleration capacity model without slow-to-start rules, the present model also has lower instantaneous CO_2, NO_x, and PM emissions and higher VOC emissions in a wide moving jam region. These results can also be confirmed or explained by the statistical values of vehicle velocity and acceleration.
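A minimal sketch of the kind of cellular automaton involved, assuming classic NaSch rules plus a slow-to-start probability and a crude per-step deceleration cap; the paper's limited-deceleration-capacity model derives safe velocities from the leader's possible braking, which this sketch does not reproduce.

```python
import numpy as np

def ca_step(pos, vel, L, rng, v_max=5, D=2, p=0.3, p_s=0.5):
    """One parallel update of a simplified single-lane cellular automaton:
    NaSch rules plus a slow-to-start rule, with per-step deceleration crudely
    capped at D cells/step^2 (the no-collision rule overrides the cap when
    the two conflict). pos must be sorted positions on a ring of L cells."""
    n = len(pos)
    gap = (np.roll(pos, -1) - pos - 1) % L             # empty cells to the leader
    v = np.minimum(np.minimum(vel + 1, v_max), gap)    # accelerate, then brake
    v = np.maximum(v, vel - D)                         # limited deceleration capacity
    v = np.minimum(v, gap)                             # safety overrides the cap
    stay = (vel == 0) & (rng.random(n) < p_s)          # slow-to-start
    dawdle = ~stay & (rng.random(n) < p)               # random dawdling
    v[stay] = 0
    v[dawdle] = np.maximum(v[dawdle] - 1, 0)
    pos = (pos + v) % L
    order = np.argsort(pos)                            # keep ring order sorted
    return pos[order], v[order]

rng = np.random.default_rng(0)
L, n = 100, 20
pos = np.sort(rng.choice(L, size=n, replace=False))
vel = np.zeros(n, dtype=int)
for _ in range(100):
    pos, vel = ca_step(pos, vel, L, rng)
print("mean speed:", vel.mean())
```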
Funding: Supported by the Health and Medical Research Fund of the Food and Health Bureau of the Hong Kong Special Administrative Region (Project No. 19201161) and a Seed Fund from the University of Hong Kong.
Abstract: BACKGROUND: This study aimed to evaluate the discriminatory performance of 11 vital sign-based early warning scores (EWSs) and three shock indices in early sepsis prediction in the emergency department (ED). METHODS: We performed a retrospective study on consecutive adult patients with an infection over 3 months in a public ED in Hong Kong. The primary outcome was sepsis (Sepsis-3 definition) within 48 h of ED presentation. Using c-statistics and the DeLong test, we compared 11 EWSs, including the National Early Warning Score 2 (NEWS2), Modified Early Warning Score, and Worthing Physiological Scoring System (WPS), among others, and three shock indices (the shock index [SI], modified shock index [MSI], and diastolic shock index [DSI]) with Systemic Inflammatory Response Syndrome (SIRS) and quick Sequential Organ Failure Assessment (qSOFA) in predicting the primary outcome, intensive care unit admission, and mortality at different time points. RESULTS: We analyzed 601 patients, of whom 166 (27.6%) developed sepsis. NEWS2 had the highest point estimate (area under the receiver operating characteristic curve [AUROC] 0.75, 95% CI 0.70-0.79) and was significantly better than SIRS, qSOFA, and the other EWSs and shock indices, except WPS, at predicting the primary outcome. However, the pooled sensitivity and specificity of NEWS2 ≥ 5 for the prediction of sepsis were 0.45 (95% CI 0.37-0.52) and 0.88 (95% CI 0.85-0.91), respectively. The discriminatory performance of all EWSs and shock indices declined when used to predict mortality at a more remote time point. CONCLUSION: NEWS2 compared favorably with other EWSs and shock indices in early sepsis prediction, but its low sensitivity at the usual cut-off point requires further modification for sepsis screening.
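The c-statistic (AUROC) used throughout this comparison equals the Mann-Whitney probability that a randomly chosen septic patient receives a higher score than a randomly chosen non-septic one; a minimal computation with made-up scores:

```python
import numpy as np

def c_statistic(scores, labels):
    """AUROC as the Mann-Whitney U estimate: P(score_pos > score_neg)
    + 0.5 * P(tie), computed by pairwise comparison (fine for n ~ 600)."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# e.g., NEWS2-style integer scores against a binary sepsis outcome
print(c_statistic([7, 3, 5, 6, 2, 1, 4], [1, 1, 0, 1, 0, 0, 0]))  # ~0.833
```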
Funding: National Natural Science Foundation of China (No. 62073212).
Abstract: Improving the cooperative scheduling efficiency of equipment is the key for automated container terminals to cope with the development trend of large-scale ships. In order to improve the solution efficiency of the existing space-time network (STN) model for the cooperative scheduling problem of yard cranes (YCs) and automated guided vehicles (AGVs) and extend its application scenarios, two improved STN models are proposed. The flow balance constraints in the original model are decomposed, and the trajectory constraints of YCs and AGVs are added to acquire the model STN_A. The coupling constraint in STN_A is updated, and buffer constraints are added to STN_A so that the model STN_B is built. As the size of the problem increases, the solution speed of CPLEX becomes the bottleneck, so a heuristic method containing three groups of heuristic rules is designed to obtain a near-optimal solution quickly. Experimental results show that the computation time of STN_A is shortened by 49.47% on average and the gap is reduced by 1.69% on average compared with the original model. The gap between the solution of the heuristic rules and the solution of CPLEX is less than 3.50%, and the solution time of the heuristic rules is on average 99.85% less than that of CPLEX. Compared with STN_A, the computation time for solving STN_B increases by 58.93% on average.
Funding: National Natural Science Foundation of China (Nos. 61962054 and 62372353).
Abstract: Traditional clustering algorithms often struggle to produce satisfactory results when dealing with datasets with uneven density. Additionally, they incur substantial computational costs when applied to high-dimensional data due to the calculation of similarity matrices. To alleviate these issues, we employ a KD-Tree to partition the dataset and compute the K-nearest neighbors (KNN) density for each point, thereby avoiding the computation of similarity matrices. Moreover, we apply the rules of voting elections, treating each data point as a voter that casts a vote for the point with the highest density among its KNN. By utilizing the vote counts of each point, we develop a strategy for classifying noise points and potential cluster centers, allowing the algorithm to identify clusters with uneven density and complex shapes. Additionally, we define the concept of "adhesive points" between two clusters to merge adjacent clusters that have similar densities. This process helps us identify the optimal number of clusters automatically. Experimental results indicate that our algorithm not only improves the efficiency of clustering but also increases its accuracy.
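A minimal sketch of the density-and-voting idea, assuming a 1/(mean KNN distance) density estimate and SciPy's KD-tree; the adhesive-point merging step is omitted:

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_density_votes(X, k=10):
    """Estimate each point's density from its KNN distances via a KD-tree
    (no full similarity matrix), then let every point vote for the densest
    point among its neighbourhood (here including the point itself). High
    vote counts mark candidate cluster centres; zero-vote, low-density
    points are noise candidates."""
    tree = cKDTree(X)
    dist, idx = tree.query(X, k=k + 1)            # column 0 is the point itself
    density = 1.0 / (dist[:, 1:].mean(axis=1) + 1e-12)
    votes = np.zeros(len(X), dtype=int)
    for i in range(len(X)):
        neighbours = idx[i]
        votes[neighbours[np.argmax(density[neighbours])]] += 1
    return density, votes

X = np.random.default_rng(1).normal(size=(200, 2))
density, votes = knn_density_votes(X)
print("candidate centres:", np.argsort(votes)[-3:])
```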
Funding: Funded by the National Science Foundation of China (62006068), the Hebei Natural Science Foundation (A2021402008), the Natural Science Foundation of Scientific Research Projects of Higher Education in Hebei Province (ZD2020185, QN2020188), and the 333 Talent Supported Project of Hebei Province (C20221026).
Abstract: Imbalanced datasets are common in practical applications, and oversampling methods using fuzzy rules have been shown to enhance the classification performance of imbalanced data by taking into account the relationship between data attributes. However, the creation of fuzzy rules typically depends on expert knowledge, which may not fully leverage the label information in the training data and may be subjective. To address this issue, a novel fuzzy rule oversampling approach is developed based on the learning vector quantization (LVQ) algorithm. In this method, the label information of the training data is utilized to determine the antecedent part of If-Then fuzzy rules by dynamically dividing attribute intervals using LVQ. Subsequently, fuzzy rules are generated and adjusted to calculate rule weights. The number of new samples to be synthesized for each rule is then computed, and samples from the minority class are synthesized based on the newly generated fuzzy rules. This results in the establishment of a fuzzy rule oversampling method based on LVQ. To evaluate the effectiveness of this method, comparative experiments are conducted on 12 publicly available imbalanced datasets against five other sampling techniques in combination with the support function machine. The experimental results demonstrate that the proposed method can significantly enhance the classification algorithm across seven performance indicators, including a boost of 2.15% to 12.34% in accuracy, 6.11% to 27.06% in G-mean, and 4.69% to 18.78% in AUC. These results show that the proposed method is capable of more efficiently improving the classification performance of imbalanced data.
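The LVQ ingredient is the classical LVQ1 update: the nearest prototype is pulled toward a sample of its own class and pushed away otherwise. A plain sketch of that step only; the paper's interval division and rule-weight adjustment built on top of it are not shown.

```python
import numpy as np

def lvq1_fit(X, y, prototypes, proto_labels, lr=0.05, epochs=30, seed=0):
    """Plain LVQ1: for each sample, move the nearest prototype toward it if
    their classes match, away from it otherwise. The learned prototype
    positions can then drive the dynamic division of attribute intervals
    that forms the antecedents of the If-Then fuzzy rules."""
    rng = np.random.default_rng(seed)
    P = prototypes.astype(float).copy()
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            j = np.argmin(((P - X[i]) ** 2).sum(axis=1))   # nearest prototype
            sign = 1.0 if proto_labels[j] == y[i] else -1.0
            P[j] += sign * lr * (X[i] - P[j])
    return P

X = np.array([[0.0], [0.2], [0.9], [1.1]])
y = np.array([0, 0, 1, 1])
P = lvq1_fit(X, y, prototypes=np.array([[0.5], [0.6]]), proto_labels=np.array([0, 1]))
print(P)   # prototypes drift toward their own class regions
```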
Abstract: This article presents an innovative approach to automatic rule discovery for data transformation tasks leveraging XGBoost, a machine learning algorithm renowned for its efficiency and performance. The framework proposed herein utilizes the fusion of diversified feature formats, specifically metadata, textual, and pattern features. The goal is to enhance the system's ability to discern and generalize transformation rules from source to destination formats in varied contexts. Firstly, the article delves into the methodology for extracting these distinct features from raw data and the pre-processing steps undertaken to prepare the data for the model. Subsequent sections expound on the mechanism of feature optimization using Recursive Feature Elimination (RFE) with linear regression, aiming to retain the most contributive features and eliminate redundant or less significant ones. The core of the research revolves around the deployment of the XGBoost model for training, using the prepared and optimized feature sets. The article presents a detailed overview of the mathematical model and algorithmic steps behind this procedure. Finally, the process of rule discovery (the prediction phase) by the trained XGBoost model is explained, underscoring its role in real-time, automated data transformations. By employing machine learning, and particularly the XGBoost model, in the context of Business Rule Engine (BRE) data transformation, the article underscores a paradigm shift towards more scalable, efficient, and less human-dependent data transformation systems. This research opens doors for further exploration into automated rule discovery systems and their applications in various sectors.
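A minimal sketch of the pipeline shape the article describes: RFE with linear regression for feature optimization, then an XGBoost classifier on the selected features. The synthetic data and hyperparameters are placeholders, not the article's.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression
from xgboost import XGBClassifier  # pip install xgboost

# Stand-in for the extracted metadata/textual/pattern features.
X, y = make_classification(n_samples=500, n_features=40, n_informative=8,
                           random_state=0)

# Feature optimization: RFE with linear regression keeps the most
# contributive features and drops redundant or less significant ones.
selector = RFE(estimator=LinearRegression(), n_features_to_select=15)
X_sel = selector.fit_transform(X, y)

# Rule discovery model: XGBoost trained on the optimized feature set.
model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1,
                      eval_metric="logloss")
model.fit(X_sel, y)
print("training accuracy:", model.score(X_sel, y))
```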
Funding: The authors extend their appreciation to King Saud University for funding the publication of this research through the Researchers Supporting Project (No. RSPD2024R809), King Saud University, Riyadh, Saudi Arabia.
Abstract: The security of the wireless sensor network-Internet of Things (WSN-IoT) network is more challenging due to its randomness and self-organized nature. Intrusion detection is one of the key methodologies utilized to ensure the security of the network. Conventional intrusion detection mechanisms have issues such as higher misclassification rates, increased model complexity, insignificant feature extraction, increased training time, increased run-time complexity, computation overhead, failure to identify new attacks, and increased energy consumption, along with a variety of other factors that limit the performance of the intrusion detection model. In this research, a security framework for WSN-IoT based on a deep learning technique is introduced using the Modified Fuzzy-Adaptive DenseNet (MF_AdaDenseNet) and is benchmarked with datasets such as NSL-KDD, UNSW-NB15, CIDDS-001, Edge-IIoT, and BoT-IoT. In this framework, optimal feature selection using Capturing Dingo Optimization (CDO) is devised to acquire relevant features by removing redundant ones. The proposed MF_AdaDenseNet intrusion detection model offers significant benefits by utilizing optimal feature selection with the CDO algorithm. This results in enhanced detection capacity with minimal computational complexity, as well as a reduction in the false alarm rate (FAR) due to the consideration of classification error in the fitness estimation. As a result, the combined CDO-based feature selection and MF_AdaDenseNet intrusion detection mechanism outperforms other state-of-the-art techniques, achieving maximal detection capacity, precision, recall, and F-measure of 99.46%, 99.54%, 99.91%, and 99.68%, respectively, along with a minimal FAR and mean absolute error (MAE) of 0.9% and 0.11.
Funding: This work was supported by the Youth Foundation of the National Science Foundation of China (62001503) and the Special Fund for the Taishan Scholar Project (ts201712072).
Abstract: To solve the problem that existing situation awareness research focuses on multi-sensor data fusion while expert knowledge is not fully utilized, a heterogeneous information fusion recognition method based on a belief rule structure is proposed. By defining continuous probabilistic hesitation fuzzy linguistic term sets (CPHFLTS) and establishing a CPHFLTS distance measure, the belief rule base describing the relationship between the feature space and the category space is constructed through information integration, and evidence reasoning is carried out on the input samples. The experimental results show that the proposed method can make full use of sensor data and expert knowledge for recognition. Compared with other methods, the proposed method has a higher correct recognition rate under different noise levels.
Funding: Central University Basic Research Fund of China (FWNX04), Ningxia Natural Science Foundation (2021AAC03203), and National Natural Science Foundation of China (61662001).
Abstract: Three-way concept analysis is an important tool for information processing, and rule acquisition is one of the research hotspots of three-way concept analysis. However, compared with three-way concept lattices, three-way semi-concept lattices have three-way operators with weaker constraints, which can generate more concepts. In this article, the problem of rule acquisition for three-way semi-concept lattices is discussed in general. The authors construct the finer relation of three-way semi-concept lattices and propose a method of rule acquisition for three-way semi-concept lattices. The authors also discuss the set of decision rules and the relationships of decision rules among object-induced three-way semi-concept lattices, object-induced three-way concept lattices, classical concept lattices, and semi-concept lattices. Finally, examples are provided to illustrate the validity of our conclusions.
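For orientation, the sketch below implements the classical positive derivation operator together with the negative operator that, paired with it, induces object-induced three-way concepts; the weaker semi-concept operators discussed in the article are not reproduced, and the toy context is hypothetical.

```python
def pos(X, I, M):
    """Attributes shared by every object in X (classical derivation X*);
    by convention, the empty object set derives the full attribute set."""
    return set(M) if not X else set.intersection(*(I[g] for g in X))

def neg(X, I, M):
    """Attributes possessed by no object in X (the negative operator used
    alongside pos to build object-induced three-way concepts)."""
    return set(M) - (set() if not X else set.union(*(I[g] for g in X)))

# Toy formal context: objects mapped to their attribute sets.
M = {"a", "b", "c", "d"}
I = {1: {"a", "b"}, 2: {"a", "c"}, 3: {"b", "d"}}
X = {1, 2}
print(pos(X, I, M), neg(X, I, M))   # {'a'} {'d'}
```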
Funding: National College Students' Training Programs of Innovation and Entrepreneurship (S202210022060), the CACMS Innovation Fund (CI2021A00512), and the National Natural Science Foundation of China (62206021).
Abstract: Media convergence works by processing information from different modalities and applying it to different domains. It is difficult for a conventional knowledge graph to utilise multi-media features because the introduction of a large amount of information from other modalities reduces the effectiveness of representation learning and makes knowledge graph inference less effective. To address the issue, an inference method based on the Media Convergence and Rule-guided Joint Inference model (MCRJI) has been proposed. The authors not only converge multi-media features of entities but also introduce logic rules to improve the accuracy and interpretability of link prediction. First, a multi-headed self-attention approach is used to obtain the attention of different media features of entities during semantic synthesis. Second, logic rules of different lengths are mined from the knowledge graph to learn new entity representations. Finally, knowledge graph inference is performed based on representations of entities that converge multi-media features. Numerous experimental results show that MCRJI outperforms other advanced baselines in using multi-media features and knowledge graph inference, demonstrating that MCRJI provides an excellent approach for knowledge graph inference with converged multi-media features.
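The semantic-synthesis step relies on standard multi-headed self-attention; a plain NumPy sketch, treating an entity's media features as tokens (the shapes and random weights here are arbitrary, not MCRJI's):

```python
import numpy as np

def multi_head_self_attention(X, Wq, Wk, Wv, n_heads):
    """Standard multi-headed scaled dot-product self-attention.
    X: (n_tokens, d_model); Wq/Wk/Wv: (d_model, d_model); d_model % n_heads == 0."""
    n, d = X.shape
    dh = d // n_heads
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    heads = []
    for h in range(n_heads):
        q, k, v = (m[:, h * dh:(h + 1) * dh] for m in (Q, K, V))
        scores = q @ k.T / np.sqrt(dh)
        attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
        attn /= attn.sum(axis=-1, keepdims=True)      # row-wise softmax
        heads.append(attn @ v)
    return np.concatenate(heads, axis=-1)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                            # 5 media-feature tokens
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(multi_head_self_attention(X, Wq, Wk, Wv, n_heads=2).shape)  # (5, 8)
```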
Funding: Funded by the National Natural Science Foundation of China (Grant Nos. 42272333 and 42277147).
Abstract: Refined 3D modeling of mine slopes is pivotal for precise prediction of geological hazards. Aiming at the inadequacy of existing single modeling methods in comprehensively representing the overall and localized characteristics of mining slopes, this study introduces a new method that fuses model data from unmanned aerial vehicle (UAV) tilt photogrammetry and 3D laser scanning through a data alignment algorithm based on control points. First, the mini-batch K-Medoids algorithm is utilized to cluster the point cloud data from ground 3D laser scanning. Then, the elbow rule is applied to determine the optimal cluster number (K0), and the feature points are extracted. Next, the nearest neighbor point algorithm is employed to match the feature points obtained from UAV tilt photogrammetry, and the internal point coordinates are adjusted through a distance-weighted average to construct a 3D model. Finally, by integrating an engineering case study, the K0 value is determined to be 8, with a matching accuracy between the two model datasets ranging from 0.0669 to 1.0373 mm. Therefore, compared with the modeling method utilizing the K-Medoids clustering algorithm, the new modeling method significantly enhances the computational efficiency, the accuracy of selecting the optimal number of feature points in 3D laser scanning, and the precision of the 3D model derived from UAV tilt photogrammetry. This method provides a research foundation for constructing mine slope models.
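Two of these steps are easy to make concrete: one common formalization of the elbow rule picks the K where the cost curve bends most, and the feature-point matching is a nearest-neighbour query between the two point clouds. A sketch under those assumptions, with synthetic costs standing in for the clustering results:

```python
import numpy as np
from scipy.spatial import cKDTree

def elbow_k(costs, ks):
    """A common formalization of the elbow rule: choose the K where the
    within-cluster cost curve bends most, i.e. the largest second-order
    difference of the costs."""
    return ks[int(np.argmax(np.diff(costs, n=2))) + 1]

def match_feature_points(scan_pts, uav_pts):
    """Nearest-neighbour matching of laser-scan feature points to UAV
    tilt-photogrammetry points; the returned distances are what the paper's
    matching-accuracy figures (0.0669-1.0373 mm in the case study) measure."""
    dist, idx = cKDTree(uav_pts).query(scan_pts)
    return idx, dist

ks = np.arange(2, 13)
costs = 1000.0 / ks + np.random.default_rng(0).normal(0, 1, len(ks))
print("K0 =", elbow_k(costs, ks))
```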
Funding: Supported by the National Natural Science Foundation of China (72471067, 72431011, 72471238, 72231011, 62303474, 72301286) and the Fundamental Research Funds for the Provincial Universities of Zhejiang (GK239909299001-010).
Abstract: A new approach is proposed in this study for accountable capability improvement based on interpretable capability evaluation using the belief rule base (BRB). Firstly, a capability evaluation model is constructed and optimized. Then, the key sub-capabilities are identified by quantitatively calculating the contributions made by each sub-capability to the overall capability. Finally, the overall capability is improved by optimizing the identified key sub-capabilities. The theoretical contributions of the proposed approach are as follows. (i) An interpretable capability evaluation model is constructed by employing BRB, which can provide complete access for decision-makers. (ii) Key sub-capabilities are identified according to the quantitative contribution analysis results. (iii) Accountable capability improvement is carried out by optimizing only the identified key sub-capabilities. Case study results show that "Surveillance", "Positioning", and "Identification" are identified as key sub-capabilities, with a summed contribution of 75.55%, in an analytical and deducible fashion based on the interpretable capability evaluation model. As a result, the overall capability is improved by optimizing only the identified key sub-capabilities: it can be greatly raised from 59.20% to 81.80% with a minimum cost of 397. Furthermore, this paper also investigates how optimizing the BRB with more collected data would affect the evaluation results: optimizing only "Surveillance" and "Positioning" can also improve the overall capability, to 81.34% at a cost of 370, which thus validates the efficiency of the proposed approach.
Abstract: The aim of this research is to demonstrate a novel scheme for approximating the Riemann-Liouville fractional integral operator. This is achieved by first establishing a fractional-order version of the 2-point Trapezoidal rule and then by proposing another fractional-order version of the (n+1)-composite Trapezoidal rule. In particular, the so-called divided-difference formula is typically employed to derive the 2-point Trapezoidal rule, which has accordingly been used to derive a more accurate fractional-order formula called the (n+1)-composite Trapezoidal rule. Additionally, in order to increase the accuracy of the proposed approximations by reducing the true errors, we incorporate the so-called Romberg integration, an extrapolation formula for the Trapezoidal rule, into our proposed approaches. Several numerical examples are provided and compared with a modern definition of the Riemann-Liouville fractional integral operator to illustrate the efficacy of our scheme.
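The paper's exact (n+1)-composite formula is not reproduced here, but the standard product-trapezoidal discretization of the Riemann-Liouville integral I^alpha f(t) = (1/Gamma(alpha)) * int_0^t (t-s)^(alpha-1) f(s) ds conveys the idea: f is taken piecewise linear on a uniform grid and the weakly singular kernel is integrated exactly.

```python
import numpy as np
from math import gamma

def rl_integral(f, t, alpha, n):
    """Product-trapezoidal approximation of the Riemann-Liouville integral
    I^alpha f(t): f is piecewise linear on n uniform subintervals and the
    kernel (t-s)^(alpha-1) is integrated exactly (Diethelm-style weights).
    A standard discretization, not the paper's (n+1)-composite formula."""
    h = t / n
    j = np.arange(1, n)
    w = np.empty(n + 1)
    w[0] = (n - 1) ** (alpha + 1) - (n - 1 - alpha) * n ** alpha
    w[1:n] = ((n - j + 1) ** (alpha + 1) - 2 * (n - j) ** (alpha + 1)
              + (n - j - 1) ** (alpha + 1))
    w[n] = 1.0
    s = np.linspace(0.0, t, n + 1)
    return h ** alpha / gamma(alpha + 2) * np.dot(w, f(s))

# Sanity check: I^alpha of f = 1 is t^alpha / Gamma(alpha + 1) exactly.
alpha, t = 0.5, 2.0
print(rl_integral(np.ones_like, t, alpha, n=64), t ** alpha / gamma(alpha + 1))
```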
Abstract: Recent advancements in science and technology, coupled with the proliferation of data, have urged laboratory medicine to integrate with the era of artificial intelligence (AI) and machine learning (ML). In the current practice of evidence-based medicine, laboratory tests analysing disease patterns through association rule mining (ARM) have emerged as a modern tool for risk assessment and disease stratification, with the potential to reduce cardiovascular disease (CVD) mortality. CVDs are the well-recognised leading global cause of mortality, with higher fatality rates in the Indian population due to associated factors like hypertension, diabetes, and lifestyle choices. AI-driven algorithms have offered deep insights in this field while addressing various challenges, such as healthcare systems grappling with physician shortages. Personalized medicine, driven by big data, necessitates the integration of ML techniques and high-quality electronic health records to deliver meaningful outcomes. These technological advancements enhance computational analyses for both research and clinical practice. ARM plays a pivotal role by uncovering meaningful relationships within databases, aiding in patient survival prediction and risk factor identification. AI's potential in laboratory medicine is vast, and it must be cautiously integrated while considering potential ethical, legal, and privacy concerns. Thus, an AI ethics framework is essential to guide its responsible use. Aligning AI algorithms with existing lab practices, promoting education among healthcare professionals, and fostering careful integration into clinical settings are imperative for harnessing the benefits of this transformative technology.