Background With the development of information technology, there has been a significant increase in the number of network traffic logs mixed with various types of cyberattacks. Traditional intrusion detection systems (IDSs) are limited in detecting new, inconstant patterns and identifying malicious traffic traces in real time. Therefore, there is an urgent need to implement more effective intrusion detection technologies to protect computer security. Methods In this study, we designed a hybrid IDS by combining our incremental learning model (KAN-SOINN) and active learning to learn new log patterns and detect various network anomalies in real time. Conclusions Experimental results on the NSL-KDD dataset showed that KAN-SOINN can be continuously improved and effectively detect malicious logs. Meanwhile, comparative experiments proved that using a hybrid query strategy in active learning can improve the model learning efficiency.
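The abstract mentions a hybrid query strategy for active learning without detailing it. The sketch below shows one common form such a strategy can take, mixing uncertainty sampling with random sampling over unlabeled log entries; the function name, the 50/50 mix, and the probability scores are illustrative assumptions, not the paper's actual design.

```python
import random

def hybrid_query(pool, scores, k, mix=0.5, seed=0):
    """Hybrid active-learning query batch: a portion (`mix`) of the batch
    is chosen by uncertainty (predicted probability closest to 0.5), and
    the rest uniformly at random, so the learner both refines its decision
    boundary and keeps exploring unfamiliar traffic patterns."""
    rng = random.Random(seed)
    by_uncertainty = sorted(range(len(pool)), key=lambda i: abs(scores[i] - 0.5))
    n_unc = int(k * mix)
    chosen = by_uncertainty[:n_unc]              # most uncertain logs
    remaining = [i for i in range(len(pool)) if i not in set(chosen)]
    chosen += rng.sample(remaining, k - n_unc)   # random exploration
    return [pool[i] for i in chosen]
```

The queried logs would then be labeled by an analyst and fed back to the incremental learner.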
Multispecies forests have received increased scientific attention, driven by the hypothesis that biodiversity improves ecological resilience. However, greater species diversity presents challenges for forest management and research. Our study aims to develop basal area growth models for tree species cohorts. The analysis is based on a dataset of 423 permanent plots (2,500 m²) located in temperate forests in Durango, Mexico. First, we define tree species cohorts based on individual and neighborhood-based variables using a combination of principal component and cluster analyses. Then, we estimate the basal area increment of each cohort through a generalized additive model to describe the effects of tree size, competition, stand density and site quality. The principal component and cluster analyses assign a total of 37 tree species to eight cohorts that differ primarily with regard to the distribution of tree size and vertical position within the community. The generalized additive models provide satisfactory estimates of tree growth for the species cohorts, explaining between 19 and 53 percent of the total variation of basal area increment, and highlight the following results: i) most cohorts show a "rise-and-fall" effect of tree size on tree growth; ii) surprisingly, the competition index "basal area of larger trees" showed a positive effect in four of the eight cohorts; iii) stand density had a negative effect on basal area increment, though the effect was minor in medium- and high-density stands; and iv) basal area growth was positively correlated with site quality except for an oak cohort. The developed species cohorts and growth models provide insight into their particular ecological features and growth patterns and may support the development of sustainable management strategies for temperate multispecies forests.
Humans are experiencing the inclusion of artificial agents in their lives, such as unmanned vehicles, service robots, voice assistants, and intelligent medical care. If artificial agents cannot align with social values or make ethical decisions, they may not meet the expectations of humans. Traditionally, an ethical decision-making framework is constructed by rule-based or statistical approaches. In this paper, we propose an ethical decision-making framework based on incremental ILP (Inductive Logic Programming), which can overcome the brittleness of rule-based approaches and the limited interpretability of statistical approaches. Because current incremental ILP systems have difficulty resolving conflicts, our framework is designed to handle conflicts explicitly, adopting our proposed incremental ILP system. The framework consists of two processes: the learning process and the deduction process. The first process records bottom clauses with their score functions and learns rules guided by entailment and the score function. The second process derives an ethical decision based on the rules. In an ethical scenario concerning chatbots for teenagers' mental health, we verify that our framework can learn ethical rules and make ethical decisions. Besides, we extract the incremental ILP component from the framework and compare it with state-of-the-art ILP systems based on ASP (Answer Set Programming), focusing on conflict resolution. The comparison results show that our proposed system can generate better-quality rules than most other systems.
The visions of Industry 4.0 and 5.0 have reinforced the industrial environment and made artificial intelligence a major facilitator. Diagnosing machine faults has become a solid foundation for automatically recognizing machine failure, so that timely maintenance can ensure safe operations. Transfer learning is a promising solution that can enhance a machine fault diagnosis model by borrowing pre-trained knowledge from a source model and applying it to a target model, which typically involves two datasets. In response to the availability of multiple datasets, this paper proposes selective and adaptive incremental transfer learning (SA-ITL), which fuses three algorithms: the hybrid selective algorithm, the transferability enhancement algorithm, and the incremental transfer learning algorithm. The selective algorithm enables selecting and ordering appropriate datasets for transfer learning and selecting useful knowledge to avoid negative transfer. The algorithm also adaptively adjusts the portion of training data to balance the learning rate and training time. The proposed algorithm is evaluated and analyzed using ten benchmark datasets. Compared with algorithms from existing works, SA-ITL improves the accuracy on all datasets. Ablation studies present the accuracy enhancements contributed by each component of SA-ITL: the hybrid selective algorithm (1.22%-3.82%), the transferability enhancement algorithm (1.91%-4.15%), and the incremental transfer learning algorithm (0.605%-2.68%). The results also show the benefits of enhancing the target model with heterogeneous image datasets, which widen the range of domain selection between source and target domains.
We investigated the parametric optimization of incremental sheet forming of stainless steel using Grey Relational Analysis (GRA) coupled with Principal Component Analysis (PCA). AISI 316L stainless steel sheets were used to form double-wall-angle pyramids with the aid of a tungsten carbide tool. GRA coupled with PCA was used to plan the experimental conditions. The effects of control factors such as Tool Diameter (TD), Step Depth (SD), Bottom Wall Angle (BWA), Feed Rate (FR) and Spindle Speed (SS) on Top Wall Angle (TWA) and Top Wall Angle Surface Roughness (TWASR) were studied. Wall angle increases with increasing tool diameter due to the larger contact area between tool and workpiece. As the step depth, feed rate and spindle speed increase, TWASR decreases with increasing tool diameter. As the step depth increases, the hydrostatic stress rises, causing severe cracks in the deformed surface. It was therefore concluded that the proposed hybrid method is suitable for optimizing the factors and responses.
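The core of GRA is the grey relational coefficient, which compares each normalized response with an ideal sequence, and the weighted mean of the coefficients (the grade) used to rank parameter settings. A minimal stand-alone sketch follows; in the study PCA would supply the response weights, whereas equal weights are used here as a stand-in assumption.

```python
def grey_relational_grades(responses, weights=None, zeta=0.5):
    """Grey relational grades for a set of experiments.

    Each row of `responses` holds responses normalized to [0, 1]
    (1 = best). The deviation from the ideal all-ones sequence is
    mapped to a coefficient in (0, 1]; `zeta` is the usual
    distinguishing coefficient. Higher grade = better setting."""
    n_resp = len(responses[0])
    weights = weights or [1.0 / n_resp] * n_resp
    deltas = [[abs(1.0 - v) for v in row] for row in responses]
    d_min = min(min(r) for r in deltas)
    d_max = max(max(r) for r in deltas)
    grades = []
    for row in deltas:
        coeffs = [(d_min + zeta * d_max) / (d + zeta * d_max) for d in row]
        grades.append(sum(w * c for w, c in zip(weights, coeffs)))
    return grades
```

Ranking the experimental runs by grade then identifies the preferred combination of TD, SD, BWA, FR and SS.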
To improve the prediction accuracy of chaotic time series and reconstruct a more reasonable phase-space structure for the prediction network, we propose a convolutional neural network-long short-term memory (CNN-LSTM) prediction model based on an incremental attention mechanism. First, a traversal search is conducted through the traversal layer for finite parameters in the phase space. Then, an incremental attention layer is used to judge parameters against the dimension weight criteria (DWC). The phase-space parameters that best meet the DWC are selected and fed into the input layer. Finally, the constructed CNN-LSTM network extracts spatio-temporal features and produces the final prediction results. The model is verified on the Logistic, Lorenz, and sunspot chaotic time series, and its performance is compared along two dimensions: prediction accuracy and network phase-space structure. Additionally, the CNN-LSTM network based on incremental attention is compared with long short-term memory (LSTM), convolutional neural network (CNN), recurrent neural network (RNN), and support vector regression (SVR) models for prediction accuracy. The experimental results indicate that the proposed composite network model possesses enhanced capability in extracting temporal features and achieves higher prediction accuracy. The algorithm for estimating the phase-space parameters is also compared with the traditional CAO, false nearest neighbor, and C-C methods, three typical methods for determining chaotic phase-space parameters. The experiments reveal that the phase-space parameter estimation algorithm based on the incremental attention mechanism yields higher prediction accuracy than the traditional phase-space reconstruction methods across five networks: CNN-LSTM, LSTM, CNN, RNN, and SVR.
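The phase-space reconstruction underlying such prediction networks is the classical time-delay embedding; the paper's contribution is choosing the embedding dimension and delay with an attention-based criterion, which is not reproduced here. A minimal sketch of the embedding itself, for given parameters:

```python
def delay_embed(series, dim, tau):
    """Time-delay phase-space reconstruction: each reconstructed point
    is (x[t], x[t+tau], ..., x[t+(dim-1)*tau]). `dim` and `tau` are the
    embedding dimension and delay that a selection criterion (CAO,
    false nearest neighbor, C-C, or the paper's attention-based DWC)
    would choose."""
    n = len(series) - (dim - 1) * tau
    return [[series[t + j * tau] for j in range(dim)] for t in range(n)]
```

The resulting points would form the input vectors fed to the prediction network.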
The integration of set-valued ordered rough set models and incremental learning signifies a progressive advancement of conventional rough set theory, with the objective of tackling the heterogeneity and ongoing transformation of information systems. In set-valued ordered decision systems, changes in the attribute value domain, such as the addition of conditional values, may alter the preference relation between objects and thereby indirectly change the approximations. In this paper, we address the issue of updating approximations when conditional values are added in set-valued ordered decision systems. First, we classify the research objects into two categories, objects whose conditional values change and objects whose values do not, conduct theoretical studies on updating approximations for both categories, and present approximation update theories for added conditional values. Subsequently, we present incremental algorithms corresponding to these update theories. We demonstrate the feasibility of the proposed incremental update method with numerical examples and show that our incremental algorithm outperforms the static algorithm. Comparative experiments on different datasets further show that the incremental algorithm efficiently reduces processing time. In conclusion, this study offers a promising strategy for handling set-valued ordered decision systems in dynamic environments.
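The approximations being updated are the classical rough-set lower and upper approximations. The sketch below shows them for a generic relation given as precomputed related-object sets; the paper's set-valued dominance relation and its incremental maintenance are more involved and are not reproduced here.

```python
def approximations(universe, related, target):
    """Lower and upper approximations of `target`.

    `related[x]` is the set of objects related to x (an indiscernibility
    class, or a dominating set in the ordered case). Incremental methods
    update these two sets when attribute values change, instead of
    recomputing them from scratch."""
    lower = {x for x in universe if related[x] <= target}  # class fully inside
    upper = {x for x in universe if related[x] & target}   # class overlaps
    return lower, upper
```

When adding a conditional value changes `related[x]` for a few objects only, just the membership of those objects in the two sets needs to be revisited, which is the source of the incremental speedup.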
Initialization of tropical cyclones plays an important role in typhoon numerical prediction. This study applied a typhoon initialization scheme based on the incremental analysis update (IAU) technique in a rapid refresh system to improve the prediction of Typhoon Lekima (2019). Two numerical sensitivity experiments, with and without application of the IAU technique after the vortex relocation and wind adjustment procedures, were conducted for comparison with the control experiment, which did not involve a typhoon initialization scheme. Analysis of the initial fields indicated that the relocation procedure shifted the typhoon circulation to the observed typhoon region, and the wind speeds became closer to the observations following the wind adjustment procedure. Comparison of the sensitivity and control experiments revealed that the vortex relocation and wind adjustment procedures could improve the prediction of typhoon track and intensity in the first 6-h period, and that the IAU technique extended these improvements throughout the first 12-h period of the prediction. The new typhoon initialization scheme also improved the simulated typhoon structure, in terms of not only the wind speed and warm-core prediction but also the organization of the eye of Typhoon Lekima. Diagnosis of the variable tendencies showed that using the IAU technique in a typhoon initialization scheme effectively resolves the spurious high-frequency noise problem, allowing the model to reach equilibrium as soon as possible.
A novel incremental nonlinear detection algorithm is presented for Multiple-Input Multiple-Output (MIMO) systems. In this algorithm, the data received at multiple receiver antennas are nonlinearly mapped and then summed with weights. The weight coefficients are computed incrementally to avoid directly computing a matrix inverse, which greatly reduces the computational complexity. Simulations and comparisons show that the proposed algorithm achieves a better Bit Error Rate (BER) than the linear Minimum Mean Square Error (MMSE) detector.
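A standard way to compute weights incrementally while avoiding a direct matrix inversion is the recursive least-squares (Sherman-Morrison) update sketched below. This illustrates the inverse-free idea only; it is not the paper's exact nonlinear detector, and the pure-Python list representation is just to keep the sketch dependency-free.

```python
def rls_update(w, P, x, d, lam=1.0):
    """One recursive least-squares step.

    Rather than re-inverting the regressor covariance each time a new
    sample (x, d) arrives, the inverse covariance P is updated in place
    via the Sherman-Morrison identity, so each sample costs O(n^2)
    instead of the O(n^3) of a fresh inversion."""
    n = len(w)
    Px = [sum(P[r][c] * x[c] for c in range(n)) for r in range(n)]
    denom = lam + sum(x[r] * Px[r] for r in range(n))
    k = [v / denom for v in Px]                       # gain vector
    e = d - sum(w[r] * x[r] for r in range(n))        # a-priori error
    w = [w[r] + k[r] * e for r in range(n)]
    P = [[(P[r][c] - k[r] * Px[c]) / lam for c in range(n)] for r in range(n)]
    return w, P
```

With forgetting factor `lam` below 1, the same update also tracks slowly varying channels.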
Maximum Power Point Tracking (MPPT) is an important process in Photovoltaic (PV) systems because of the need to extract maximum power from the PV panels used in these systems. Without the ability to track and operate PV panels at their maximum power point (MPP), power is lost, resulting in high cost, since more panels are required to provide the specified energy needs. To achieve high efficiency and low cost, MPPT has therefore become imperative in PV systems. In this study, an MPP tracker is modeled using the Incremental Conductance (IC) algorithm, and its behavior under rapidly changing environmental conditions of temperature and irradiation is investigated. Based on the variation of the conductance of the PV cells and of the operating point with respect to the panel voltage and current, the algorithm calculates the slope of the power characteristic to locate the MPP as the peak of the curve. A simple circuit model of a DC-DC boost converter connected to a PV panel is used in the simulation, and the output of the boost converter is fed through a 3-phase inverter to an electricity grid. The model was simulated and tested using MATLAB/Simulink. Simulation results show the effectiveness of the IC algorithm for tracking the MPP in PV systems operating under rapidly changing temperatures and irradiations, with a settling time of 2 seconds.
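The incremental conductance rule itself is compact: at the MPP, dP/dV = 0, which is equivalent to dI/dV = -I/V, so comparing the incremental conductance against the negative instantaneous conductance tells the tracker which way to move. A minimal sketch of one iteration (the study's Simulink converter model is not reproduced; the toy panel curve in the usage note is an assumption):

```python
def ic_mppt_step(v, i, v_prev, i_prev, v_ref, step=0.01):
    """One incremental conductance (IC) iteration.

    At the MPP, dP/dV = 0, i.e. dI/dV = -I/V. Left of the peak,
    dI/dV > -I/V, so the voltage reference is raised; right of the
    peak it is lowered; at the peak nothing changes."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        if di > 0:
            v_ref += step
        elif di < 0:
            v_ref -= step
    else:
        if di / dv > -i / v:       # operating point left of the MPP
            v_ref += step
        elif di / dv < -i / v:     # operating point right of the MPP
            v_ref -= step
    return v_ref
```

In a real system `v_ref` would drive the duty cycle of the boost converter; on a toy panel curve such as I = 5 - 0.05 V², repeated calls walk the operating point to the analytic peak and then oscillate within one step of it.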
Purpose: We propose InParTen2, a multi-aspect parallel factor analysis three-dimensional tensor decomposition algorithm based on the Apache Spark framework. The proposed method reduces re-decomposition cost and can handle large tensors. Design/methodology/approach: Considering that tensor addition increases the size of a given tensor along all axes, the proposed method decomposes incoming tensors using existing decomposition results without generating sub-tensors. Additionally, InParTen2 avoids the calculation of Khatri-Rao products and minimizes shuffling by using the Apache Spark platform. Findings: The performance of InParTen2 is evaluated by comparing its execution time and accuracy with those of existing distributed tensor decomposition methods on various datasets. The results confirm that InParTen2 can process large tensors and reduce the re-calculation cost of tensor decomposition. Consequently, the proposed method is faster than existing tensor decomposition algorithms and can significantly reduce re-decomposition cost. Research limitations: There are several Hadoop-based distributed tensor decomposition algorithms as well as MATLAB-based decomposition methods. However, the former require longer iteration times, and therefore their execution time cannot be compared with that of Spark-based algorithms, whereas the latter run on a single machine, limiting their ability to handle large data. Practical implications: The proposed algorithm can reduce re-decomposition cost when tensors are added to a given tensor by decomposing them based on existing decomposition results, without re-decomposing the entire tensor. Originality/value: The proposed method can handle large tensors and is fast within the limited-memory framework of Apache Spark. Moreover, InParTen2 can handle static as well as incremental tensor decomposition.
Many chronic disease prediction methods have been proposed to predict or evaluate diabetes through artificial neural networks. However, owing to the complexity of the human body, many challenges remain in that process. One of them is how to make the neural network prediction model continuously adapt to and learn, online, the disease data of different patients. This paper presents a novel chronic disease prediction system based on an incremental deep neural network. The propensity of users to suffer from chronic diseases can be evaluated continuously, in an incremental manner. Over time, the system can predict diabetes more and more accurately by processing the feedback information. Many diabetes prediction studies are based on a common dataset, the Pima Indians diabetes dataset, which has only eight input attributes. In order to determine the correlation between the pathological characteristics of diabetic patients and their daily living resources, we established an in-depth cooperation with a hospital and created a Chinese diabetes dataset with 575 diabetics. Users' data collected by different sensors were used to train the network model. We evaluated our system on a real-world diabetes dataset to confirm its effectiveness. The experimental results show that the proposed system can not only continuously monitor users but also give early warning of physiological data that may indicate future diabetic ailments.
In recent years, MIMO technology has emerged as one of the technical breakthroughs in the field of wireless communications. Two famous MIMO techniques have been investigated thoroughly in the literature: Spatial Multiplexing and Space-Time Block Coding. On one hand, Spatial Multiplexing offers high data rates; on the other hand, Space-Time Block Coding provides transmission fidelity. This imposes a fundamental tradeoff between capacity and reliability. Adaptive MIMO switching schemes have been proposed to select the MIMO scheme that best fits the channel conditions. However, the switching schemes presented in the literature switch directly between the MIMO endpoints. In this paper, an adaptive MIMO system that incrementally switches from multiplexing towards diversity is proposed. The proposed scheme is referred to as incremental diversity and can be set to operate in two different modes: Rate-Adaptive and Energy-Conservative Incremental Diversity. Results indicate that the proposed incremental diversity framework achieves the transmission reliability offered by MIMO diversity, while maintaining a gradual increase in spectral efficiency (in the Rate-Adaptive mode) or a reduction in the required number of received symbols (in the Energy-Conservative mode) as the SNR increases.
The technique of incremental updating, which can better guarantee the timeliness of the navigational map, is the developing direction of navigational road network updating. The data center of a vehicle navigation system is in charge of storing incremental data, and the spatio-temporal data model used to store incremental data affects how efficiently the data center responds to requests for incremental data from vehicle terminals. Based on an analysis of the shortcomings of several typical spatio-temporal data models used in data centers, and building on the base map with overlay model, the reverse map with overlay model (RMOM) is put forward to enable the data center to respond rapidly to incremental data requests. RMOM allows the data center to store not only the current complete road network data, but also the overlays of incremental data from the time of each road network change up to the current moment. Moreover, the storage mechanism and index structure of the incremental data were designed, and the implementation algorithm of RMOM was developed. Taking the navigational road network of Guangzhou City as an example, a simulation test was conducted to validate the efficiency of RMOM. Results show that with RMOM the navigation database in the data center can serve a request for incremental data with only one query, and costs less time. Compared with the base map with overlay model, the data center does not need to overlay incremental data on the fly under RMOM, so response time is significantly reduced. RMOM greatly improves response efficiency and provides strong support for the timeliness of the navigational road network.
Big data are often processed repeatedly with small changes, which is a major form of big data processing. This incremental-change characteristic of big data means that an incremental computing mode can greatly improve performance. HDFS is the distributed file system of Hadoop, the most popular platform for big data analytics, and it adopts a fixed-size chunking policy, which is inefficient for incremental computing. Therefore, in this paper, we propose iHDFS (incremental HDFS), a distributed file system that provides a basic guarantee for parallel processing of big data. iHDFS is implemented as an extension to HDFS. In iHDFS, the Rabin fingerprint algorithm is applied to achieve content-defined chunking. This policy makes data chunking much more stable, so intermediate processing results can be reused efficiently and the performance of incremental data processing can be improved significantly. The effectiveness and efficiency of iHDFS have been demonstrated by the experimental results.
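Content-defined chunking cuts a stream where a rolling hash of the last few bytes matches a boundary pattern, so boundaries depend on content rather than offsets; an insertion near the front shifts only nearby chunks, and later chunks (and results computed from them) can be reused. The sketch below uses a simple polynomial rolling hash as a stand-in for the Rabin fingerprint, and the window, mask, and size limits are illustrative values, not iHDFS parameters.

```python
def cdc_chunks(data, window=16, mask=0x3F, min_size=32, max_size=256):
    """Content-defined chunking with a polynomial rolling hash.

    A boundary is declared where the hash of the last `window` bytes
    satisfies (h & mask) == 0, subject to minimum and maximum chunk
    sizes; the expected chunk size is roughly min_size + mask + 1."""
    BASE, MOD = 257, (1 << 31) - 1
    pow_w = pow(BASE, window, MOD)
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = (h * BASE + b) % MOD
        if i >= window:                      # slide: drop the byte leaving the window
            h = (h - data[i - window] * pow_w) % MOD
        size = i - start + 1
        if (size >= min_size and (h & mask) == 0) or size >= max_size:
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])
    return chunks
```

Because a byte inserted at the front only perturbs boundaries until the next content-defined cut, almost all chunks of the shifted stream are byte-identical to chunks of the original, which is what makes reuse of intermediate results possible.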
Deep Convolutional Neural Networks (DCNNs) can capture discriminative features from large datasets. However, how to incrementally learn new samples without forgetting old ones, and to recognize novel classes that arise in a dynamically changing world, e.g., classifying newly discovered fish species, remains an open problem. We address an even more challenging and realistic setting of this problem in which new class samples are insufficient, i.e., Few-Shot Class-Incremental Learning (FSCIL). Current FSCIL methods augment the training data to alleviate overfitting on the novel classes. By contrast, we propose Filter Bank Networks (FBNs), which augment the learnable filters to capture fine-detailed features for adapting to future new classes. In the forward pass, FBNs augment each convolutional filter into a virtual filter bank containing the canonical filter, i.e., itself, and multiple transformed versions. During back-propagation, FBNs explicitly stimulate fine-detailed features to emerge and collectively align all gradients of each filter bank to learn the canonical filter. FBNs capture pattern variants that do not yet exist in the pretraining session, making it easy to incorporate new classes in the incremental learning phase. Moreover, FBNs introduce model-level prior knowledge to efficiently utilize the limited few-shot data. Extensive experiments on the MNIST, CIFAR100, CUB200, and Mini-ImageNet datasets show that FBNs consistently outperform the baseline by a significant margin, reporting new state-of-the-art FSCIL results. In addition, we contribute a challenging FSCIL benchmark, Fishshot1K, which contains 8,261 underwater images covering 1,000 ocean fish species. The code is included in the supplementary materials.
In the traditional incremental analysis update (IAU) process, all analysis increments are treated as constant forcing in a model's prognostic equations over a certain time window. This approach effectively reduces the high-frequency oscillations introduced by data assimilation. However, because increments of different scales have distinct evolutionary speeds and life histories in a numerical model, the traditional IAU scheme cannot fully meet the short-term forecasting requirements for damping high-frequency noise, and may even cause systematic drifts. Therefore, a multi-scale IAU scheme is proposed in this paper. Analysis increments are divided into parts of different scales using a spatial filtering technique. For each scale of increment, the optimal relaxation time in the IAU scheme is determined by the skill of the forecasting results. Finally, the different scales of analysis increments are added to the model integration during their respective optimal relaxation times. The multi-scale IAU scheme can effectively reduce noise and further improve the balance between large-scale and small-scale increments in the model initialization stage. To evaluate its performance, several numerical experiments simulating the path and intensity of Typhoon Mangkhut (2018) were conducted and showed that: (1) the multi-scale IAU scheme had an obvious noise-control effect at the initial stage of data assimilation; (2) the optimal relaxation times for large-scale and small-scale increments were estimated as 6 h and 3 h, respectively; and (3) the forecast performance of the multi-scale IAU scheme in predicting Typhoon Mangkhut (2018) was better than that of the traditional IAU scheme. The results demonstrate the superiority of the multi-scale IAU scheme.
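The noise-damping effect of IAU forcing can be seen on a two-variable toy "fast mode" (a stand-in for a gravity wave, not the actual forecast model): inserting an increment instantly leaves the full increment ringing as an oscillation, while spreading the same increment as constant forcing over a window near the mode's period leaves almost no oscillation behind. All constants below are illustrative assumptions.

```python
import math

def residual_noise(delta, window=0.0, omega=2 * math.pi, dt=0.001, t_end=2.0):
    """Toy fast mode: dx/dt = -omega*y + f, dy/dt = omega*x.

    The analysis increment `delta` on x is inserted instantly when
    window == 0, or applied as constant forcing delta/window over the
    window (the IAU idea). Returns the oscillation amplitude left after
    the window, i.e. the spurious high-frequency noise."""
    x = y = 0.0
    n_window = int(round(window / dt))
    if n_window == 0:
        x = delta                           # direct insertion
    amp = 0.0
    for n in range(int(round(t_end / dt))):
        f = delta / window if n < n_window else 0.0
        x = x + dt * (-omega * y + f)
        y = y + dt * omega * x              # semi-implicit Euler, stable
        if n >= n_window:
            amp = max(amp, math.hypot(x, y))
    return amp
```

With the window equal to the fast mode's period, the forced trajectory closes on itself and the post-window noise nearly vanishes; the multi-scale scheme's point is that the best window length differs between scales.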
Grid-connected reactive-load compensation and harmonic control are becoming a central topic as photovoltaic (PV) grid-connected systems diversify. This research aims to produce a high-performance inverter with a fast dynamic response for accurate reference tracking and a low total harmonic distortion (THD), even under nonlinear loads, by improving its control scheme. The proposed system is expected to operate in both stand-alone and grid-connected modes. In stand-alone mode, the proposed controller supplies power to critical loads; in grid-connected mode, it provides excess energy to the utility. A modified variable-step incremental conductance (VS-InCond) algorithm is designed to extract maximum power from the PV array, while the proposed inverter controller uses a modified PQ theory with a double-band hysteresis current controller (PQ-DBHCC) to produce a reference current based on a decomposition of the single-phase load current. Nonlinear rectifier loads often create significant distortion in the output voltage of single-phase inverters, owing to excessive current harmonics in the grid. Therefore, the proposed method generates a closed-loop reference current for the switching scheme, minimizing the inverter voltage distortion caused by excessive grid current harmonics. The simulation findings suggest that the proposed control technique can effectively yield more than 97% power conversion efficiency while suppressing the grid current THD to less than 2% and maintaining unity power factor at the grid side. The efficacy of the proposed controller is simulated using MATLAB/Simulink.
Global energy demand is growing rapidly owing to industrial growth and urbanization. The search for alternative energy sources is driven by the limited reserves and rapid depletion of conventional energy sources (e.g., fossil fuels). Solar photovoltaics (PV), as a source of electricity, has grown in popularity over the last few decades because it is clean, noise-free, and low-maintenance, and solar energy is abundantly available. There are two types of maximum power point tracking (MPPT) techniques: classical and evolutionary algorithm-based techniques. Among the classical techniques, the precise and less complex perturb and observe (P&O) and incremental conductance (INC) approaches are extensively employed. This study used a field-programmable gate array (FPGA)-based hardware arrangement for a grid-connected photovoltaic (PV) system. The PV panels, MPPT controllers, and battery management systems are all components of the proposed system. In the developed hardware prototype, various modes of operation of the grid-connected PV system were examined using the P&O and incremental conductance MPPT approaches.
Attribute reduction, also known as feature selection, for decision information systems is one of the most pivotal issues in machine learning and data mining. Approaches based on rough set theory and some of its extensions have proved efficient for dealing with the problem of attribute reduction. Unfortunately, methods based on intuitionistic fuzzy sets have not received much interest, although they are well known as a very powerful approach to noisy decision tables, i.e., data tables with low initial classification accuracy. Therefore, this paper provides a novel incremental attribute reduction method to deal more effectively with noisy decision tables, especially high-dimensional ones. In particular, we define a new reduct and then design an original attribute reduction method based on the distance measure between two intuitionistic fuzzy partitions. It should be noted that the intuitionistic fuzzy partition distance is well known as an effective measure for determining important attributes. More interestingly, an incremental formula is also developed to quickly compute the intuitionistic fuzzy partition distance when the number of objects in the decision table increases. This formula is then applied to construct an incremental attribute reduction algorithm for handling such dynamic tables. Besides, experiments conducted on real datasets show that our method is far superior to fuzzy rough set based methods in terms of the size of the reduct and the classification accuracy.
Funding: Supported by the SJTU-HUAWEI TECH Cybersecurity Innovation Lab.
Abstract: Background: With the development of information technology, there has been a significant increase in the number of network traffic logs mixed with various types of cyberattacks. Traditional intrusion detection systems (IDSs) are limited in detecting new, inconstant patterns and in identifying malicious traffic traces in real time. Therefore, there is an urgent need for more effective intrusion detection technologies to protect computer security. Methods: In this study, we designed a hybrid IDS by combining our incremental learning model (KAN-SOINN) with active learning to learn new log patterns and detect various network anomalies in real time. Conclusions: Experimental results on the NSL-KDD dataset showed that KAN-SOINN can be continuously improved and can effectively detect malicious logs. Meanwhile, comparative experiments proved that using a hybrid query strategy in active learning can improve model learning efficiency.
Funding: The National Forestry Commission of Mexico and the Mexican National Council for Science and Technology (CONAFOR-CONACYT-115900).
Abstract: Multispecies forests have received increased scientific attention, driven by the hypothesis that biodiversity improves ecological resilience. However, greater species diversity presents challenges for forest management and research. Our study aims to develop basal area growth models for tree species cohorts. The analysis is based on a dataset of 423 permanent plots (2,500 m^2) located in temperate forests in Durango, Mexico. First, we define tree species cohorts based on individual and neighborhood-based variables using a combination of principal component and cluster analyses. Then, we estimate the basal area increment of each cohort through generalized additive models to describe the effects of tree size, competition, stand density, and site quality. The principal component and cluster analyses assign a total of 37 tree species to eight cohorts that differ primarily with regard to the distribution of tree size and vertical position within the community. The generalized additive models provide satisfactory estimates of tree growth for the species cohorts, explaining between 19 and 53 percent of the total variation of basal area increment, and highlight the following results: i) most cohorts show a "rise-and-fall" effect of tree size on tree growth; ii) surprisingly, the competition index "basal area of larger trees" showed a positive effect in four of the eight cohorts; iii) stand density had a negative effect on basal area increment, though the effect was minor in medium- and high-density stands; and iv) basal area growth was positively correlated with site quality except for an oak cohort. The developed species cohorts and growth models provide insight into their particular ecological features and growth patterns, which may support the development of sustainable management strategies for temperate multispecies forests.
Funding: This work was funded by the National Natural Science Foundation of China (Nos. U22A2099, 61966009, and 62006057) and the Graduate Innovation Program (No. YCSW2022286).
Abstract: Humans are experiencing the inclusion of artificial agents in their lives, such as unmanned vehicles, service robots, voice assistants, and intelligent medical care. If artificial agents cannot align with social values or make ethical decisions, they may not meet the expectations of humans. Traditionally, an ethical decision-making framework is constructed by rule-based or statistical approaches. In this paper, we propose an ethical decision-making framework based on incremental ILP (Inductive Logic Programming), which can overcome the brittleness of rule-based approaches and the limited interpretability of statistical approaches. Because current incremental ILP systems have difficulty resolving conflicts, our framework explicitly considers conflicts and adopts our proposed incremental ILP system. The framework consists of two processes: the learning process and the deduction process. The first process records bottom clauses with their score functions and learns rules guided by entailment and the score function. The second process derives an ethical decision based on the rules. In an ethical scenario about chatbots for teenagers' mental health, we verify that our framework can learn ethical rules and make ethical decisions. Besides, we extract the incremental ILP system from the framework and compare it with state-of-the-art ILP systems based on ASP (Answer Set Programming), focusing on conflict resolution. The comparisons show that our proposed system can generate better-quality rules than most other systems.
Abstract: The visions of Industry 4.0 and 5.0 have reinforced the industrial environment and established artificial intelligence as a major facilitator. Diagnosing machine faults has become a solid foundation for automatically recognizing machine failure, so that timely maintenance can ensure safe operations. Transfer learning is a promising solution that can enhance a machine fault diagnosis model by borrowing pre-trained knowledge from a source model and applying it to a target model, which typically involves two datasets. In response to the availability of multiple datasets, this paper proposes selective and adaptive incremental transfer learning (SA-ITL), which fuses three algorithms: the hybrid selective algorithm, the transferability enhancement algorithm, and the incremental transfer learning algorithm. The hybrid selective algorithm selects and orders appropriate datasets for transfer learning and selects useful knowledge to avoid negative transfer. The algorithm also adaptively adjusts the portion of training data to balance learning rate and training time. The proposed algorithm is evaluated and analyzed using ten benchmark datasets. Compared with other algorithms from existing works, SA-ITL improves the accuracy on all datasets. Ablation studies present the accuracy enhancements of SA-ITL, including the hybrid selective algorithm (1.22%-3.82%), the transferability enhancement algorithm (1.91%-4.15%), and the incremental transfer learning algorithm (0.605%-2.68%). They also show the benefit of enhancing the target model with heterogeneous image datasets that widen the range of domain selection between source and target domains.
Abstract: We investigated the parametric optimization of incremental sheet forming of stainless steel using Grey Relational Analysis (GRA) coupled with Principal Component Analysis (PCA). AISI 316L stainless steel sheets were used to develop a double wall angle pyramid with the aid of a tungsten carbide tool. GRA coupled with PCA was used to plan the experimental conditions. The effects of control factors such as Tool Diameter (TD), Step Depth (SD), Bottom Wall Angle (BWA), Feed Rate (FR), and Spindle Speed (SS) on Top Wall Angle (TWA) and Top Wall Angle Surface Roughness (TWASR) have been studied. Wall angle increases with increasing tool diameter due to the larger contact area between tool and workpiece. As the step depth, feed rate, and spindle speed increase, TWASR decreases with increasing tool diameter. As the step depth increases, the hydrostatic stress rises, causing severe cracks in the deformed surface. Hence it was concluded that the proposed hybrid method is suitable for optimizing the factors and responses.
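As an illustration of the Grey Relational Analysis step described above, the following sketch computes grey relational grades for a small experiments-by-responses table. It is a plain GRA with equal response weights and a distinguishing coefficient of 0.5, omitting the study's PCA-derived weighting; the example data are hypothetical, not taken from the study.

```python
def grey_relational_grades(responses, larger_better, zeta=0.5):
    """Grey Relational Analysis over an experiments x responses table.

    Each response column is normalized to [0, 1] (direction depends on
    whether larger or smaller is better), deviations from the ideal
    sequence (all ones) give grey relational coefficients, and each
    experiment's grade is the mean coefficient across responses.
    Assumes every response column has at least two distinct values.
    """
    n_exp, n_resp = len(responses), len(responses[0])
    norm = [[0.0] * n_resp for _ in range(n_exp)]
    for j in range(n_resp):
        col = [row[j] for row in responses]
        lo, hi = min(col), max(col)
        for i in range(n_exp):
            if larger_better[j]:
                norm[i][j] = (responses[i][j] - lo) / (hi - lo)
            else:
                norm[i][j] = (hi - responses[i][j]) / (hi - lo)
    # deviation from the ideal sequence (normalized value 1 everywhere)
    dev = [[1.0 - norm[i][j] for j in range(n_resp)] for i in range(n_exp)]
    dmin = min(min(r) for r in dev)
    dmax = max(max(r) for r in dev)
    # grey relational coefficient, then equal-weight grade per experiment
    coef = [[(dmin + zeta * dmax) / (d + zeta * dmax) for d in r] for r in dev]
    return [sum(r) / n_resp for r in coef]
```

For example, with TWA (larger is better) and TWASR (smaller is better) as the two responses, the experiment that dominates both gets the highest grade and would be ranked first.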
Abstract: To improve the prediction accuracy of chaotic time series and reconstruct a more reasonable phase-space structure of the prediction network, we propose a convolutional neural network-long short-term memory (CNN-LSTM) prediction model based on an incremental attention mechanism. First, a traversal search is conducted through the traversal layer for finite parameters in the phase space. Then, an incremental attention layer is utilized for parameter judgment based on the dimension weight criteria (DWC). The phase-space parameters that best meet the DWC are selected and fed into the input layer. Finally, the constructed CNN-LSTM network extracts spatio-temporal features and provides the final prediction results. The model is verified using the Logistic, Lorenz, and sunspot chaotic time series, and its performance is compared along two dimensions: prediction accuracy and network phase-space structure. Additionally, the CNN-LSTM network based on incremental attention is compared with long short-term memory (LSTM), convolutional neural network (CNN), recurrent neural network (RNN), and support vector regression (SVR) models for prediction accuracy. The experimental results indicate that the proposed composite network model possesses enhanced capability in extracting temporal features and achieves higher prediction accuracy. Also, the algorithm to estimate the phase-space parameters is compared with the Cao, false nearest neighbor, and C-C methods, three typical methods for determining chaotic phase-space parameters. The experiments reveal that the phase-space parameter estimation algorithm based on the incremental attention mechanism achieves superior prediction accuracy compared with traditional phase-space reconstruction methods across five networks: CNN-LSTM, LSTM, CNN, RNN, and SVR.
Abstract: The integration of set-valued ordered rough set models and incremental learning signifies a progressive advancement of conventional rough set theory, with the objective of tackling the heterogeneity and ongoing transformations in information systems. In set-valued ordered decision systems, changes in the attribute value domain, such as adding conditional values, may alter the preference relation between objects and thereby indirectly change the approximations. In this paper, we effectively address the issue of updating approximations that arises from adding conditional values in set-valued ordered decision systems. First, we classify the research objects into two categories, objects with changes in conditional values and objects without changes, and then conduct theoretical studies on updating approximations for these two categories, presenting approximation update theories for adding conditional values. Subsequently, we present incremental algorithms corresponding to the approximation update theories. We demonstrate the feasibility of the proposed incremental update method with numerical examples and show that our incremental algorithm outperforms the static algorithm. Ultimately, comparison of experimental results on different datasets makes it evident that the incremental algorithm efficiently reduces processing time. In conclusion, this study offers a promising strategy for addressing the challenges of set-valued ordered decision systems in dynamic environments.
Funding: Science and Technology Project of Zhejiang Province (LGF20D050001); East China Regional Meteorological Science and Technology Innovation Fund Cooperation Project (QYHZ201805); Meteorological Science and Technology Project of Zhejiang Meteorological Service (2018ZD01, 2019ZD11).
Abstract: Initialization of tropical cyclones plays an important role in typhoon numerical prediction. This study applied a typhoon initialization scheme based on the incremental analysis update (IAU) technique in a rapid refresh system to improve the prediction of Typhoon Lekima (2019). Two numerical sensitivity experiments, with and without application of the IAU technique after performing vortex relocation and wind adjustment procedures, were conducted for comparison with the control experiment, which did not involve a typhoon initialization scheme. Analysis of the initial fields indicated that the relocation procedure shifted the typhoon circulation to the observed typhoon region, and the wind speeds became closer to the observations following the wind adjustment procedure. Comparison of the results of the sensitivity and control experiments revealed that the vortex relocation and wind adjustment procedures could improve the prediction of typhoon track and intensity in the first 6-h period, and that these improvements were extended throughout the first 12-h period of the prediction by the IAU technique. The new typhoon initialization scheme also improved the simulated typhoon structure in terms of not only the wind speed and warm core prediction but also the organization of the eye of Typhoon Lekima. Diagnosis of the tendencies of variables showed that use of the IAU technique in a typhoon initialization scheme is efficacious in resolving the spurious high-frequency noise problem, such that the model is able to reach equilibrium as soon as possible.
Abstract: A novel incremental nonlinear detection algorithm is presented for Multiple-Input Multiple-Output (MIMO) systems. In this algorithm, the data received at multiple receiver antennas are nonlinearly mapped and then summed with weights. The weight coefficients are computed incrementally to avoid direct computation of a matrix inverse, which greatly reduces the computational complexity. Simulation and comparison show that the proposed algorithm achieves a better Bit Error Rate (BER) than the linear Minimum Mean Square Error (MMSE) detector.
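The general idea of computing weights incrementally so that no matrix is ever inverted explicitly can be illustrated with a recursive-least-squares (RLS) update based on the matrix-inversion lemma. This is a generic sketch of that technique, not the paper's specific nonlinear MIMO detector; the variable names and initialization are illustrative.

```python
import numpy as np

def rls_update(w, P, x, d, lam=1.0):
    """One recursive-least-squares step.

    Updates the weight vector w and the inverse correlation matrix P
    for a new input vector x and desired output d via the
    matrix-inversion lemma, so no explicit matrix inverse is formed.
    lam is the forgetting factor (1.0 = no forgetting).
    """
    Px = P @ x
    k = Px / (lam + x @ Px)      # gain vector
    e = d - w @ x                # a-priori estimation error
    w = w + k * e                # incremental weight update
    P = (P - np.outer(k, Px)) / lam
    return w, P

# Fit a noiseless linear map d = [1, -2] . x sample by sample.
w = np.zeros(2)
P = np.eye(2) * 1e6              # large initial P ~ weak prior
samples = [(np.array([1.0, 0.0]), 1.0),
           (np.array([0.0, 1.0]), -2.0),
           (np.array([1.0, 1.0]), -1.0)]
for _ in range(5):
    for x, d in samples:
        w, P = rls_update(w, P, x, d)
```

Each update costs O(n^2) for n weights, versus the O(n^3) of re-solving the normal equations with a fresh inverse at every step, which is the complexity saving the abstract alludes to.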
Abstract: Maximum Power Point Tracking (MPPT) is an important process in Photovoltaic (PV) systems because of the need to extract maximum power from the PV panels used in these systems. Operating PV panels away from their maximum power point (MPP) entails power losses, resulting in high cost since more panels are required to provide the specified energy needs. To achieve high efficiency and low cost, MPPT has therefore become imperative in PV systems. In this study, an MPP tracker is modeled using the Incremental Conductance (IC) algorithm, and its behavior under rapidly changing environmental conditions of temperature and irradiation levels is investigated. This algorithm, based on knowledge of the variation of the conductance of PV cells and the operating point with respect to the voltage and current of the panel, calculates the slope of the power characteristic to determine the MPP as the peak of the curve. A simple circuit model of the DC-DC boost converter connected to a PV panel is used in the simulation, and the output of the boost converter is fed through a 3-phase inverter to an electricity grid. The model was simulated and tested using MATLAB/Simulink. Simulation results show the effectiveness of the IC algorithm for tracking the MPP in PV systems operating under rapidly changing temperatures and irradiations, with a settling time of 2 seconds.
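The decision rule of the incremental conductance algorithm described above, comparing dI/dV with -I/V to locate the peak of the P-V curve (where dP/dV = 0), can be sketched as follows. The function and the fixed voltage step are illustrative choices, not details from the study's Simulink model.

```python
def ic_mppt_step(v, i, v_prev, i_prev, v_ref, step=0.5):
    """One iteration of the incremental-conductance decision rule.

    At the MPP, dP/dV = 0, which is equivalent to dI/dV = -I/V, so the
    reference voltage is nudged toward the peak depending on which side
    of the MPP the operating point sits on.
    """
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        if di > 0:            # irradiance rose: MPP moved to higher voltage
            v_ref += step
        elif di < 0:          # irradiance fell: MPP moved to lower voltage
            v_ref -= step
    else:
        g = di / dv           # incremental conductance dI/dV
        if g > -i / v:        # left of the MPP: increase voltage
            v_ref += step
        elif g < -i / v:      # right of the MPP: decrease voltage
            v_ref -= step
    return v_ref              # unchanged when dI/dV == -I/V (at the MPP)
```

In a real tracker, v_ref drives the duty cycle of the DC-DC boost converter; the dv == 0 branch is what lets the rule react to irradiation changes that the classic P&O method tends to misread.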
Funding: Supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (NRF-2016R1D1A1B03931529).
Abstract: Purpose: We propose InParTen2, a multi-aspect parallel factor analysis three-dimensional tensor decomposition algorithm based on the Apache Spark framework. The proposed method reduces re-decomposition cost and can handle large tensors. Design/methodology/approach: Considering that tensor addition increases the size of a given tensor along all axes, the proposed method decomposes incoming tensors using existing decomposition results without generating sub-tensors. Additionally, InParTen2 avoids the calculation of Khatri-Rao products and minimizes shuffling by using the Apache Spark platform. Findings: The performance of InParTen2 is evaluated by comparing its execution time and accuracy with those of existing distributed tensor decomposition methods on various datasets. The results confirm that InParTen2 can process large tensors and reduce the re-calculation cost of tensor decomposition. Consequently, the proposed method is faster than existing tensor decomposition algorithms and can significantly reduce re-decomposition cost. Research limitations: There are several Hadoop-based distributed tensor decomposition algorithms as well as MATLAB-based decomposition methods. However, the former require longer iteration time, and therefore their execution time cannot be compared with that of Spark-based algorithms, whereas the latter run on a single machine, thus limiting their ability to handle large data. Practical implications: The proposed algorithm can reduce re-decomposition cost when tensors are added to a given tensor by decomposing them based on existing decomposition results without re-decomposing the entire tensor. Originality/value: The proposed method can handle large tensors and is fast within the limited-memory framework of Apache Spark. Moreover, InParTen2 can handle static as well as incremental tensor decomposition.
Funding: Funding from the Humanities and Social Sciences Projects of the Ministry of Education (Grant No. 18YJC760112, Bin Yang); the Social Science Fund of Jiangsu Province (Grant No. 18YSD002, Bin Yang); and the Open Fund of the Hunan Key Laboratory of Smart Roadway and Cooperative Vehicle Infrastructure Systems (Changsha University of Science and Technology) (Grant No. kfj180402, Lingyun Xiang).
Abstract: Many chronic disease prediction methods have been proposed to predict or evaluate diabetes through artificial neural networks. However, due to the complexity of the human body, there are still many challenges to face in that process. One of them is how to make the neural network prediction model continuously adapt to and learn the disease data of different patients online. This paper presents a novel chronic disease prediction system based on an incremental deep neural network. The propensity of users to suffer from chronic diseases can be continuously evaluated in an incremental manner. Over time, the system can predict diabetes more and more accurately by processing feedback information. Many diabetes prediction studies are based on a common dataset, the Pima Indians diabetes dataset, which has only eight input attributes. In order to determine the correlation between the pathological characteristics of diabetic patients and their daily living resources, we established an in-depth cooperation with a hospital and created a Chinese diabetes dataset with 575 diabetics. Users' data collected by different sensors were used to train the network model. We evaluated our system using a real-world diabetes dataset to confirm its effectiveness. The experimental results show that the proposed system can not only continuously monitor users, but also give early warning of physiological data that may indicate future diabetic ailments.
Abstract: In recent years, MIMO technology has emerged as one of the technical breakthroughs in the field of wireless communications. Two famous MIMO techniques have been investigated thoroughly throughout the literature: Spatial Multiplexing and Space Time Block Coding. On one hand, Spatial Multiplexing offers high data rates; on the other hand, Space Time Block Coding provides transmission fidelity. This imposes a fundamental tradeoff between capacity and reliability. Adaptive MIMO switching schemes have been proposed to select the MIMO scheme that best fits the channel conditions. However, the switching schemes presented in the literature switch directly between the MIMO endpoints. In this paper, an adaptive MIMO system that incrementally switches from multiplexing towards diversity is proposed. The proposed scheme is referred to as incremental diversity and can be set to operate in two different modes: Rate-Adaptive and Energy-Conservative Incremental Diversity. Results indicate that the proposed incremental diversity framework achieves the transmission reliability offered by MIMO diversity, while maintaining a gradual increase in spectral efficiency (in the Rate-Adaptive mode) or a reduction in the required number of received symbols (in the Energy-Conservative mode) as the SNR increases.
Funding: Under the auspices of the National High Technology Research and Development Program of China (No. 2007AA12Z242).
Abstract: The technique of incremental updating, which can better guarantee the real-time currency of the navigational map, is the development direction of navigational road network updating. The data center of a vehicle navigation system is in charge of storing incremental data, and the spatio-temporal data model used to store that data affects the efficiency of the data center's response to requests for incremental data from vehicle terminals. Based on an analysis of the shortcomings of several typical spatio-temporal data models used in data centers, and building on the base map with overlay model, the reverse map with overlay model (RMOM) was put forward to enable the data center to respond rapidly to incremental data requests. RMOM allows the data center to store not only the current complete road network data, but also the overlays of incremental data from the time each road network change occurred to the current moment. Moreover, the storage mechanism and index structure of the incremental data were designed, and the implementation algorithm of RMOM was developed. Taking the navigational road network in Guangzhou City as an example, a simulation test was conducted to validate the efficiency of RMOM. Results show that with RMOM the navigation database in the data center can respond to requests for incremental data with only one query, and costs less time. Compared with the base map with overlay model, the data center does not need to temporarily overlay incremental data with RMOM, so the response time is significantly reduced. RMOM greatly improves response efficiency and provides strong support for the real-time currency of the navigational road network.
Abstract: Big data are often processed repeatedly with small changes, which is a major form of big data processing. This incremental-change characteristic of big data suggests that an incremental computing mode can greatly improve performance. HDFS is a distributed file system on Hadoop, the most popular platform for big data analytics, and it adopts a fixed-size chunking policy, which is inefficient for incremental computing. Therefore, in this paper, we propose iHDFS (incremental HDFS), a distributed file system that provides a basic guarantee for efficient big data parallel processing. iHDFS is implemented as an extension to HDFS. In iHDFS, the Rabin fingerprint algorithm is applied to achieve content-defined chunking. This policy makes data chunking much more stable, so intermediate processing results can be reused efficiently and the performance of incremental data processing can be improved significantly. The effectiveness and efficiency of iHDFS have been demonstrated by the experimental results.
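Content-defined chunking of the kind iHDFS relies on can be sketched with a simple polynomial rolling hash standing in for the Rabin fingerprint. The window size, divisor, and chunk-size bounds below are illustrative values, not the actual iHDFS parameters.

```python
def cdc_chunks(data: bytes, window=16, divisor=64, min_size=32, max_size=256):
    """Content-defined chunking with a polynomial rolling hash.

    A chunk boundary is declared when the hash of the trailing window
    matches a fixed pattern, so boundaries move with the content rather
    than with byte offsets: a small insertion only perturbs nearby
    chunks instead of shifting every fixed-size block after it.
    """
    BASE, MOD = 257, 1 << 31
    pow_w = pow(BASE, window - 1, MOD)   # for removing the oldest byte
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        if i >= window:                  # slide the window: drop oldest byte
            h = (h - data[i - window] * pow_w) % MOD
        h = (h * BASE + b) % MOD
        size = i + 1 - start
        # boundary when the hash hits the target pattern, with size bounds
        if size >= min_size and (h % divisor == divisor - 1 or size >= max_size):
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):                # trailing partial chunk
        chunks.append(data[start:])
    return chunks
```

Because a boundary depends only on the last `window` bytes, identical regions of two file versions tend to produce identical chunks, which is what lets downstream incremental computation reuse earlier results.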
Funding: Support from the Strategic Priority Research Program of the Chinese Academy of Sciences under Grant No. XDA27000000.
Abstract: Deep Convolutional Neural Networks (DCNNs) can capture discriminative features from large datasets. However, how to incrementally learn new samples without forgetting old ones and recognize novel classes that arise in the dynamically changing world, e.g., classifying newly discovered fish species, remains an open problem. We address an even more challenging and realistic setting of this problem where new class samples are insufficient, i.e., Few-Shot Class-Incremental Learning (FSCIL). Current FSCIL methods augment the training data to alleviate the overfitting of novel classes. By contrast, we propose Filter Bank Networks (FBNs), which augment the learnable filters to capture fine-detailed features for adapting to future new classes. In the forward pass, FBNs augment each convolutional filter to a virtual filter bank containing the canonical one, i.e., itself, and multiple transformed versions. During back-propagation, FBNs explicitly stimulate fine-detailed features to emerge and collectively align all gradients of each filter bank to learn the canonical one. FBNs capture pattern variants that do not yet exist in the pretraining session, thus making it easy to incorporate new classes in the incremental learning phase. Moreover, FBNs introduce model-level prior knowledge to efficiently utilize the limited few-shot data. Extensive experiments on the MNIST, CIFAR100, CUB200, and Mini-ImageNet datasets show that FBNs consistently outperform the baseline by a significant margin, reporting new state-of-the-art FSCIL results. In addition, we contribute a challenging FSCIL benchmark, Fishshot1K, which contains 8261 underwater images covering 1000 ocean fish species. The code is included in the supplementary materials.
Funding: Jointly sponsored by the Shenzhen Science and Technology Innovation Commission (Grant No. KCXFZ20201221173610028) and the key program of the National Natural Science Foundation of China (Grant No. 42130605).
Abstract: In the traditional incremental analysis update (IAU) process, all analysis increments are treated as constant forcing in a model's prognostic equations over a certain time window. This approach effectively reduces the high-frequency oscillations introduced by data assimilation. However, as different scales of increments have unique evolutionary speeds and life histories in a numerical model, the traditional IAU scheme cannot fully meet the requirements of short-term forecasting for the damping of high-frequency noise and may even cause systematic drifts. Therefore, a multi-scale IAU scheme is proposed in this paper. Analysis increments were divided into different scale parts using a spatial filtering technique. For each scale of increment, the optimal relaxation time in the IAU scheme was determined by the skill of the forecasting results. Finally, the different scales of analysis increments were added to the model integration during their optimal relaxation times. The multi-scale IAU scheme can effectively reduce noise and further improve the balance between large-scale and small-scale increments in the model initialization stage. To evaluate its performance, several numerical experiments were conducted to simulate the path and intensity of Typhoon Mangkhut (2018) and showed that: (1) the multi-scale IAU scheme had an obvious effect on noise control at the initial stage of data assimilation; (2) the optimal relaxation times for large-scale and small-scale increments were estimated as 6 h and 3 h, respectively; and (3) the forecast performance of the multi-scale IAU scheme in the prediction of Typhoon Mangkhut (2018) was better than that of the traditional IAU scheme. The results demonstrate the superiority of the multi-scale IAU scheme.
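A minimal sketch of the multi-scale IAU idea described above, assuming a 1-D state vector, a moving-average filter for scale separation, and the 6-h/3-h relaxation times reported in the abstract; the model stepper is a placeholder supplied by the caller, and all names are illustrative.

```python
import numpy as np

def smooth(field, k=5):
    """Moving-average spatial filter used to extract the large-scale part."""
    return np.convolve(field, np.ones(k) / k, mode="same")

def multiscale_iau(state, increment, step_model, dt=1.0, t_large=6.0, t_small=3.0):
    """Multi-scale IAU sketch.

    The analysis increment is split into a large-scale part (spatially
    smoothed) and a small-scale residual; each part is applied as constant
    forcing over its own relaxation window (6 h and 3 h here) instead of
    forcing all scales over a single window as in the traditional IAU.
    """
    large = smooth(increment)
    small = increment - large
    n_steps = int(max(t_large, t_small) / dt)
    for n in range(n_steps):
        t = n * dt
        forcing = np.zeros_like(state)
        if t < t_large:                       # large scales: slow relaxation
            forcing += large / (t_large / dt)
        if t < t_small:                       # small scales: fast relaxation
            forcing += small / (t_small / dt)
        state = step_model(state, dt) + forcing
    return state
```

With identity dynamics the two forcings integrate back to exactly the full increment, so the scheme changes only *when* each scale is injected, not the total correction applied.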
Funding: Funded by Geran Galakan Penyelidik Muda GGPM-2020-004, Universiti Kebangsaan Malaysia.
Abstract: Grid-connected reactive-load compensation and harmonic control are becoming a central topic as photovoltaic (PV) grid-connected systems diversify. This research aims to produce a high-performance inverter with a fast dynamic response for accurate reference tracking and a low total harmonic distortion (THD), even under nonlinear load applications, by improving its control scheme. The proposed system is expected to operate in both stand-alone mode and grid-connected mode. In stand-alone mode, the proposed controller supplies power to critical loads; in grid-connected mode, it delivers excess energy to the utility. A modified variable step incremental conductance (VS-InCond) algorithm is designed to extract maximum power from the PV array, whereas the proposed inverter controller uses a modified PQ theory with a double-band hysteresis current controller (PQ-DBHCC) to produce a reference current based on a decomposition of the single-phase load current. Nonlinear rectifier loads often create significant distortion in the output voltage of single-phase inverters, due to excessive current harmonics in the grid. Therefore, the proposed method generates a closed-loop reference current for the switching scheme, thereby minimizing the inverter voltage distortion caused by excessive grid current harmonics. The simulation findings suggest the proposed control technique can effectively yield more than 97% power conversion efficiency while suppressing the grid current THD below 2% and maintaining unity power factor at the grid side. The efficacy of the proposed controller is simulated using MATLAB/Simulink.
Abstract: Global energy demand is growing rapidly owing to industrial growth and urbanization. The search for alternative energy sources is driven by the limited reserves and rapid depletion of conventional energy sources (e.g., fossil fuels). Solar photovoltaic (PV) power, as a source of electricity, has grown in popularity over the last few decades because of its clean, noise-free, low-maintenance character and the abundant availability of solar energy. There are two types of maximum power point tracking (MPPT) techniques: classical and evolutionary algorithm-based techniques. Among the classical techniques, the precise and less complex perturb and observe (P&O) and incremental conductance (INC) approaches are extensively employed. This study used a field-programmable gate array (FPGA)-based hardware arrangement for a grid-connected photovoltaic (PV) system. The PV panels, MPPT controllers, and battery management systems are all components of the proposed system. In the developed hardware prototype, various modes of operation of the grid-connected PV system were examined using the P&O and incremental conductance MPPT approaches.
Funding: Funded by Hanoi University of Industry under Grant Number 27-2022-RD/HD-DHCN (URL: https://www.haui.edu.vn/).
Abstract: Attribute reduction, also known as feature selection, for decision information systems is one of the most pivotal issues in machine learning and data mining. Approaches based on rough set theory and some of its extensions have proved efficient for dealing with the problem of attribute reduction. Unfortunately, methods based on intuitionistic fuzzy sets have not received much interest, although they are well known as a very powerful approach for noisy decision tables, i.e., data tables with low initial classification accuracy. Therefore, this paper provides a novel incremental attribute reduction method to deal more effectively with noisy decision tables, especially high-dimensional ones. In particular, we define a new reduct and then design an original attribute reduction method based on the distance measure between two intuitionistic fuzzy partitions. It should be noted that the intuitionistic fuzzy partition distance is well known as an effective measure for determining important attributes. More interestingly, an incremental formula is also developed to quickly compute the intuitionistic fuzzy partition distance when the decision table increases in the number of objects. This formula is then applied to construct an incremental attribute reduction algorithm for handling such dynamic tables. Besides, experiments conducted on real datasets show that our method is far superior to fuzzy rough set based methods in terms of the size of the reduct and the classification accuracy.