Journal Articles
3 articles found
1. Stock Price Forecasting and Rule Extraction Based on L1-Orthogonal Regularized GRU Decision Tree Interpretation Model
Authors: Wenjun Wu, Yuechen Zhao, Yue Wang, Xiuli Wang. 《国际计算机前沿大会会议论文集》, 2020, No. 2, pp. 309-328 (20 pages).
Abstract: Neural networks are widely used in stock price forecasting, but they lack interpretability because of their "black box" characteristics. In this paper, an L1-orthogonal regularization method is applied to the GRU model. A decision tree, GRU-DT, was constructed to represent the prediction process of the neural network, and rule-screening algorithms were proposed to identify significant rules in the prediction. In the empirical study, data from 10 different industries in China's CSI 300 were selected for stock price trend prediction, and the extracted rules were compared and analyzed. Technical indicators were discretized to make the rules easy to use for decision-making. Empirical results show that the AUC of the model is stable between 0.72 and 0.74, and the F1 score and accuracy are stable between 0.68 and 0.70, indicating that discretized technical indicators can effectively predict the short-term trend of stock prices. The fidelity of GRU-DT to the GRU model reaches 0.99. The prediction rules of different industries show both commonalities and industry-specific characteristics. (An illustrative code sketch of the surrogate-tree idea is given after the keyword list below.)
Keywords: Explainable artificial intelligence; Neural network interpretability; Rule extraction; Stock forecasting; L1-orthogonal regularization
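A minimal sketch of the surrogate-tree idea described in the abstract above, assuming a trained PyTorch wrapper `gru_model` that maps a window of discretized technical indicators to a single logit. The penalty shown is one common formulation of L1-orthogonal regularization (the L1 norm of WᵀW − I) and may differ from the authors' exact training objective; all names here are illustrative, not the paper's released code.

```python
# Illustrative sketch only; `gru_model`, its interface, and the penalty form are
# assumptions rather than the authors' released code.
import numpy as np
import torch
import torch.nn as nn
from sklearn.tree import DecisionTreeClassifier


def l1_orthogonal_penalty(weight: torch.Tensor) -> torch.Tensor:
    """L1 penalty on the deviation of W^T W from the identity matrix.

    One common form of L1-orthogonal regularization: it pushes the weight
    columns toward a sparse, near-orthogonal basis, which makes the network
    easier for a shallow decision tree to mimic.
    """
    gram = weight.t() @ weight
    eye = torch.eye(gram.shape[0], device=weight.device)
    return (gram - eye).abs().sum()


def extract_surrogate_tree(gru_model: nn.Module, X: np.ndarray, max_depth: int = 5):
    """Fit a GRU-DT-style surrogate tree on the GRU's own predictions and
    report its fidelity (agreement with the network) on the same data."""
    gru_model.eval()
    with torch.no_grad():
        logits = gru_model(torch.as_tensor(X, dtype=torch.float32))
        gru_pred = (torch.sigmoid(logits).squeeze(-1) > 0.5).long().numpy()

    X_flat = X.reshape(len(X), -1)                  # flatten the indicator window
    tree = DecisionTreeClassifier(max_depth=max_depth)
    tree.fit(X_flat, gru_pred)                      # tree imitates the GRU, not the labels
    fidelity = (tree.predict(X_flat) == gru_pred).mean()
    return tree, fidelity
```

During training, the penalty would typically be added to the prediction loss, e.g. `loss = bce + lam * l1_orthogonal_penalty(gru.weight_hh_l0)`, with the weight `lam` tuned on validation data.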
2. Probing Filters to Interpret CNN Semantic Configurations by Occlusion
Authors: Qian Hong, Yue Wang, Huan Li, Yuechen Zhao, Weiyu Guo, Xiuli Wang. 《国际计算机前沿大会会议论文集》, 2021, No. 2, pp. 103-115 (13 pages).
Abstract: Deep neural networks have been widely used in many fields, but there are growing concerns about their black-box nature. Previous interpretability studies provide four types of explanations: logical rules, revealing hidden semantics, sensitivity analysis, and providing examples as prototypes. In this paper, an interpretability method is proposed for revealing the semantic representations at hidden layers of CNNs through lightweight occlusion annotation. First, visual semantic configurations are defined for a certain class. Then, candidate filters whose activations are related to these specified visual semantics are probed by occlusion. Finally, lightweight occlusion annotation and a scoring mechanism are used to screen out the filters that recognize these semantics. The method is applied to datasets of mechanical equipment, animal, and clothing images, and it performs well in experiments assessing interpretability both qualitatively and quantitatively. (An illustrative code sketch of occlusion-based filter probing is given after the keyword list below.)
Keywords: Image classification; CNN; Interpretability; Image occlusion
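A rough sketch of the occlusion probing step described in the abstract above, assuming a torchvision ResNet-18 stands in for the CNN and that a single rectangular box covers the annotated semantic part. The layer choice, box coordinates, and the drop-in-mean-activation score are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch of occlusion-based filter probing; the model, layer choice,
# occlusion box, and scoring rule here are illustrative assumptions.
import torch
import torchvision.models as models


def filter_occlusion_scores(model, layer, image, box, fill=0.0):
    """Score each filter in `layer` by how much its mean activation drops
    when the region `box` = (y0, y1, x0, x1) of `image` is occluded."""
    acts = {}

    def hook(_module, _inputs, output):
        acts["a"] = output.detach()

    handle = layer.register_forward_hook(hook)
    try:
        model.eval()
        with torch.no_grad():
            model(image)
            base = acts["a"].mean(dim=(0, 2, 3))        # per-filter mean activation

            occluded = image.clone()
            y0, y1, x0, x1 = box
            occluded[:, :, y0:y1, x0:x1] = fill         # constant occlusion patch
            model(occluded)
            occ = acts["a"].mean(dim=(0, 2, 3))
    finally:
        handle.remove()

    # Filters whose activation drops most are candidates for the occluded semantic part.
    return base - occ


# Usage example: probe the last convolutional block of a ResNet-18.
cnn = models.resnet18(weights=None)                      # load pretrained weights in practice
image = torch.rand(1, 3, 224, 224)                       # placeholder for a real, annotated image
scores = filter_occlusion_scores(cnn, cnn.layer4, image, box=(80, 160, 80, 160))
candidate_filters = torch.topk(scores, k=10).indices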
3. Analyzing Interpretability Semantically via CNN Visualization
Authors: Chunqi Qi, Yuechen Zhao, Yue Wang, Yapu Zhao, Qian Hong, Xiuli Wang, Weiyu Guo. 《国际计算机前沿大会会议论文集》, 2021, No. 2, pp. 88-102 (15 pages).
Abstract: Deep convolutional neural networks are widely used in image recognition, but their black-box property remains perplexing. In this paper, a method is proposed that uses visual annotation to interpret the internal structure of a CNN from a semantic perspective. First, filters are screened in the high layers of the CNN: for a certain category, the important filters are selected by their activation values, activation frequencies, and classification contributions. Then, deconvolution is used to visualize these filters, and semantic interpretations of the filters are labelled by referring to the visualized activation regions in the original images. Thus, the CNN model is interpreted and analyzed through these filters. Finally, the visualization results of some important filters are shown, and the semantic accuracy of the filters is verified with reference to expert feature image sets. In addition, the results verify the semantic consistency of the same important filters across similar categories, which indicates the stability of the semantic annotations of these filters. (An illustrative code sketch of the filter-selection step is given after the keyword list below.)
Keywords: CNN; Deconvolution visualization; Semantic annotations; Interpretability
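A simplified sketch of the filter-selection step described in the abstract above: for a batch of images from one category, filters at a chosen layer are ranked by a combination of mean peak activation and activation frequency. The "classification contribution" term and the deconvolution visualization are omitted, so this is only a partial illustration under assumed names (`model`, `layer`, `class_images`).

```python
# Rough sketch of ranking class-important filters by mean activation and
# activation frequency; the combined score is a simplification of the paper's criteria.
import torch


def rank_filters_for_class(model, layer, class_images, act_threshold=0.0):
    """Rank filters in `layer` for one category.

    class_images: tensor of shape (N, 3, H, W) holding images of that category.
    Returns filter indices sorted from most to least important.
    """
    acts = {}

    def hook(_module, _inputs, output):
        acts["a"] = output.detach()

    handle = layer.register_forward_hook(hook)
    try:
        model.eval()
        with torch.no_grad():
            model(class_images)
            a = acts["a"]                               # (N, C, H', W') feature maps
    finally:
        handle.remove()

    per_image_max = a.amax(dim=(2, 3))                  # (N, C) peak response per image
    mean_activation = per_image_max.mean(dim=0)         # average activation strength
    frequency = (per_image_max > act_threshold).float().mean(dim=0)
    score = mean_activation * frequency                 # simple combined importance score
    return torch.argsort(score, descending=True)
```

The top-ranked filters would then be visualized (e.g. by deconvolution, as the paper describes) and labelled with semantic interpretations by referring to their activation regions in the original images.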