Abstract
Accurately predicting internal control weaknesses (ICWs) is important for preventing major risks at listed firms and maintaining the stable development of the capital market. Prior research has ignored the complex nonlinear relationships among variables and the dynamic evolution of ICWs over time. To address these problems, we develop an ICW prediction model based on a recurrent neural network (RNN), one of the most powerful deep learning methods, and construct a comprehensive set of explanatory variables based on the Pressure-Opportunity-Predisposition (POP) framework. We show that the RNN model improves prediction accuracy over the logistic model by 14.11% to 28.26%, a gain that is also economically significant. We further investigate the mechanisms behind the RNN model's superiority. From the perspective of structural interpretability, the outperformance stems from both a nonlinear mechanism and an intertemporal mechanism. From the perspective of variable interpretability, Opportunity variables play a more important role in predicting ICWs than Pressure and Predisposition variables. In terms of application value, the RNN model's predictions better assess firms' future operating risk, market risk, and stock price crash risk. This paper is the first to combine deep learning algorithms with internal control research, providing regulators with an effective tool to prevent and mitigate major corporate risks.
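The paper does not publish its implementation, but the architecture the abstract describes (an RNN reading a firm's multi-year sequence of POP-framework features and emitting an ICW probability) can be sketched as follows. This is a minimal illustrative sketch in PyTorch under stated assumptions: the class name ICWPredictor, the choice of a GRU cell, and all dimensions and hyperparameters are hypothetical, not the authors' specification.

# Minimal illustrative sketch (not the authors' code): an RNN that reads a
# firm's multi-year sequence of POP-framework features and outputs the
# probability of an internal control weakness (ICW).
# All names and hyperparameters below are hypothetical assumptions.
import torch
import torch.nn as nn

class ICWPredictor(nn.Module):
    def __init__(self, num_features: int, hidden_size: int = 64):
        super().__init__()
        # The recurrent hidden state carries information across fiscal
        # years (the "intertemporal mechanism"); the gated nonlinear
        # updates capture feature interactions (the "nonlinear mechanism").
        self.rnn = nn.GRU(num_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, years, num_features) -- one feature row per fiscal year
        _, h_n = self.rnn(x)                 # h_n: (1, batch, hidden_size)
        logits = self.head(h_n.squeeze(0))   # (batch, 1)
        return torch.sigmoid(logits).squeeze(-1)  # ICW probability per firm

# Toy usage: 32 firms, 5 years of history, 20 POP-style features each.
model = ICWPredictor(num_features=20)
x = torch.randn(32, 5, 20)
p_icw = model(x)  # probabilities in (0, 1), one per firm

A logistic baseline, by contrast, would map a single year's features linearly to a log-odds score; the two mechanisms the paper credits for the RNN's outperformance correspond here to the recurrent state and the gated nonlinearities, respectively.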
Authors
Liu Chunli; Lin Bin
Source
Accounting Research (《会计研究》)
CSSCI; Peking University Core Journal
2024, No. 6, pp. 119-134 (16 pages)
Funding
National Natural Science Foundation of China (72342011)
Natural Science Foundation of Anhui Province (2308085MG226)
Anhui Provincial Philosophy and Social Science Planning Project (AHSKY2022D133)
Ministry of Finance Accounting Masters Training Program (2016)
Keywords
Deep Learning
Internal Control
Recurrent Neural Network
Interpretability