Abstract: Rule induction (RI) produces classifiers containing simple yet effective 'If-Then' rules for decision makers. RI algorithms based on PRISM normally suffer from a few drawbacks, mainly related to rule pruning and to items (attribute values) shared among rules in the training data instances. In response to these two issues, a new dynamic rule induction (DRI) method is proposed. Whenever a rule is produced and its related training data instances are discarded, DRI updates the frequencies of the attribute values used to build the next in-line rule so that they reflect the data deletion. The attribute value frequencies are therefore adjusted dynamically each time a rule is generated, rather than statically as in PRISM. This enables DRI to generate near-perfect rules and realistic classifiers. Experimental results on different University of California Irvine data sets show that DRI achieves competitive performance in terms of error rate and classifier size when compared to other RI algorithms.
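
The sketch below is a minimal Python illustration of the dynamic-counting idea described in the abstract, not the authors' implementation: after each rule is produced and its covered instances are discarded, attribute-value frequencies are recomputed from the remaining data before the next rule is grown. The function names, the (dict, label) instance representation, and the min_accuracy parameter are illustrative assumptions.

    from collections import Counter

    def attribute_value_counts(instances, target_class):
        """Count, for every (attribute, value) item, how often it appears
        overall and how often it co-occurs with the target class."""
        total, positive = Counter(), Counter()
        for features, label in instances:
            for item in features.items():
                total[item] += 1
                if label == target_class:
                    positive[item] += 1
        return total, positive

    def induce_rules(instances, target_class, min_accuracy=1.0):
        """Hypothetical PRISM-like learner with DRI-style dynamic counting:
        frequencies are recomputed from the remaining instances every time a
        rule is produced and its covered instances are discarded."""
        rules = []
        remaining = list(instances)  # each instance is (feature_dict, label)
        while any(label == target_class for _, label in remaining):
            rule = []            # conjunction of (attribute, value) conditions
            covered = remaining  # instances matching the rule built so far
            while True:
                # frequencies are always taken from the currently covered data
                total, positive = attribute_value_counts(covered, target_class)
                used = dict(rule)
                candidates = [item for item in total if item[0] not in used]
                if not candidates:
                    break
                # pick the item with the best accuracy, ties broken by coverage
                best = max(candidates,
                           key=lambda it: (positive[it] / total[it], positive[it]))
                rule.append(best)
                covered = [(f, l) for f, l in covered if f.get(best[0]) == best[1]]
                accuracy = sum(l == target_class for _, l in covered) / len(covered)
                if accuracy >= min_accuracy:
                    break
            rules.append((rule, target_class))
            # discard the covered instances; the next outer iteration recounts
            # attribute-value frequencies on what is left (the dynamic step)
            remaining = [(f, l) for f, l in remaining
                         if not all(f.get(a) == v for a, v in rule)]
        return rules

For example, induce_rules([({'outlook': 'sunny', 'windy': 'no'}, 'play'), ({'outlook': 'rainy', 'windy': 'yes'}, 'stay')], 'play') would return a single one-condition rule for the 'play' class; in a static PRISM-style learner the item counts would instead be fixed once from the full training set.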