Funding: National Defense Science and Technology Innovation Special Zone Project (No. 18-163-11-ZT-002-045-04).
Abstract: Neural networks are widely used in stock price forecasting, but they lack interpretability because of their "black box" characteristics. In this paper, an L1-orthogonal regularization method is applied to a GRU model. A decision tree, GRU-DT, is constructed to represent the prediction process of the neural network, and rule screening algorithms are proposed to identify the significant rules in the prediction. In the empirical study, data from 10 different industries in China's CSI 300 index are selected for stock price trend prediction, and the extracted rules are compared and analyzed. A technical indicator discretization method is used to make the rules easy to apply in decision-making. Empirical results show that the AUC of the model is stable between 0.72 and 0.74, and the F1 and Accuracy values are stable between 0.68 and 0.70, indicating that discretized technical indicators can effectively predict the short-term trend of stock prices. The fidelity of GRU-DT to the GRU model reaches 0.99. The prediction rules of different industries show both commonality and individuality.
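The abstract does not give the exact form of the L1-orthogonal penalty. A minimal sketch in PyTorch, assuming the common ||W·Wᵀ − I||₁ formulation, a hypothetical trade-off weight lam, and a simple binary up/down trend head, might look like this:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def l1_orthogonal_penalty(w: torch.Tensor) -> torch.Tensor:
    # L1 norm of (W·Wᵀ − I): zero when the rows of W are orthonormal
    wwt = w @ w.t()
    return (wwt - torch.eye(wwt.size(0), device=w.device)).abs().sum()

gru = nn.GRU(input_size=16, hidden_size=32, batch_first=True)
head = nn.Linear(32, 1)  # hypothetical binary up/down trend head

def training_loss(x, target, lam=1e-4):
    out, _ = gru(x)                          # (batch, seq, hidden)
    logits = head(out[:, -1]).squeeze(-1)    # predict from the last step
    base = F.binary_cross_entropy_with_logits(logits, target)
    # penalty applied to the stacked GRU weight matrices as a simplification
    reg = sum(l1_orthogonal_penalty(p)
              for name, p in gru.named_parameters() if name.startswith("weight"))
    return base + lam * reg

# usage with random stand-in data: (batch, seq, features) and 0/1 labels
loss = training_loss(torch.rand(8, 20, 16), torch.randint(0, 2, (8,)).float())
loss.backward()
```

Pushing the recurrent weights toward orthogonality is what makes a faithful decision-tree surrogate such as GRU-DT easier to extract, since it constrains the hidden-state dynamics.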
Funding: National Defense Science and Technology Innovation Special Zone Project (No. 18-163-11-ZT-002-045-04).
Abstract: Deep neural networks have been widely used in many fields, but there are growing concerns about their black-box nature. Previous interpretability studies provide four types of explanations: logical rules, revealing hidden semantics, sensitivity analysis, and providing examples as prototypes. In this paper, an interpretability method is proposed for revealing semantic representations at the hidden layers of CNNs through lightweight occlusion annotation. First, visual semantic configurations are defined for a certain class. Then, candidate filters whose activations are related to these specified visual semantics are probed by occlusion. Finally, lightweight occlusion annotation and a scoring mechanism are used to screen out the filters that recognize these semantics. The method is applied to datasets of mechanical equipment, animal, and clothing images, and it performs well in experiments assessing interpretability both qualitatively and quantitatively.
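A minimal sketch of the general occlusion-probing idea, not the paper's exact scoring mechanism: occlude the image region carrying a target visual semantic and rank filters by how much their mean activation drops. The backbone, layer choice, and zero-fill occlusion are all assumptions for illustration.

```python
import torch
import torchvision.models as models

model = models.vgg16(weights=None).eval()  # a pretrained backbone in practice
layer = model.features[28]                 # hypothetical high conv layer (512 filters)

activations = {}
def hook(_module, _inp, out):
    activations["feat"] = out.detach()
layer.register_forward_hook(hook)

def filter_drop_scores(image: torch.Tensor, box) -> torch.Tensor:
    """Score each filter by the drop in its mean activation when the
    region carrying the target semantic (box) is occluded with zeros."""
    x0, y0, x1, y1 = box
    occluded = image.clone()
    occluded[:, y0:y1, x0:x1] = 0.0
    with torch.no_grad():
        model(image.unsqueeze(0))
        base = activations["feat"].mean(dim=(0, 2, 3))  # per-filter mean
        model(occluded.unsqueeze(0))
        occ = activations["feat"].mean(dim=(0, 2, 3))
    return base - occ  # large drop => filter tied to the occluded semantic

# usage: rank candidate filters for one annotated image region
image = torch.rand(3, 224, 224)
scores = filter_drop_scores(image, box=(60, 60, 160, 160))
top_filters = torch.topk(scores, k=10).indices
```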
Funding: National Defense Science and Technology Innovation Special Zone Project (No. 18-163-11-ZT-002-045-04).
Abstract: Deep convolutional neural networks are widely used in image recognition, but their black-box property remains perplexing. In this paper, a method is proposed that uses visual annotation to interpret the internal structure of a CNN from a semantic perspective. First, filters are screened in the high layers of the CNN: for a given category, the important filters are selected by their activation values, activation frequencies, and classification contribution. Then, deconvolution is used to visualize these filters, and semantic interpretations of the filters are labelled by referring to the visualized activation regions in the original images; the CNN model is thus interpreted and analyzed through these filters. Finally, the visualization results of some important filters are shown, and the semantic accuracy of the filters is verified with reference to expert feature image sets. In addition, the results verify the semantic consistency of the same important filters across similar categories, which indicates the stability of the semantic annotation of these filters.
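A sketch of one plausible way to combine the three screening criteria named above (activation value, activation frequency, classification contribution). The equal weighting, normalization, and the source of the contribution scores are assumptions, not the paper's method.

```python
import torch

def screen_filters(feats: torch.Tensor, contrib: torch.Tensor, k: int = 10):
    """Rank the filters of one layer for one category.

    feats:   (N, C, H, W) activations over N images of the category
    contrib: (C,) per-filter classification-contribution scores
             (how these are computed is an assumption, e.g. gradient-based)
    """
    per_image_max = feats.amax(dim=(2, 3))           # (N, C) peak activation
    value = per_image_max.mean(dim=0)                # mean activation value
    freq = (per_image_max > 0).float().mean(dim=0)   # how often each filter fires
    # combine the three criteria; equal weighting is a placeholder choice
    score = (value / (value.max() + 1e-8)
             + freq
             + contrib / (contrib.abs().max() + 1e-8))
    return torch.topk(score, k).indices

# usage with random stand-in tensors for 50 category images, 512 filters
feats = torch.relu(torch.randn(50, 512, 14, 14))
contrib = torch.randn(512)
important = screen_filters(feats, contrib, k=10)
```

The filters returned this way would then be visualized by deconvolution and annotated against the expert feature image sets, as the abstract describes.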