Funding: Supported by the National Natural Science Foundation of China.
Abstract: In this paper we investigate the L^1-norm inequalities of the p-square function and the maximal function of two-parameter B-valued strong martingales, which can be applied to characterize the p-smoothness and q-convexity of Banach spaces.
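For orientation, such inequalities typically take the following shape (an illustrative sketch in the classical one-parameter pattern, not the paper's precise statements): for a two-parameter B-valued martingale f = (f_{m,n}) with difference sequence (d_{m,n}f), write
\[
S^{(p)}(f)=\Big(\sum_{m,n}\|d_{m,n}f\|_{B}^{p}\Big)^{1/p},
\qquad
f^{*}=\sup_{m,n}\|f_{m,n}\|_{B}.
\]
An upper estimate of the form
\[
\|S^{(p)}(f)\|_{1}\le c_{p}\,\|f^{*}\|_{1}
\]
is the kind of inequality associated with p-smoothness, while a reverse estimate
\[
\|f^{*}\|_{1}\le c_{q}\,\|S^{(q)}(f)\|_{1}
\]
is associated with q-convexity; the strong-martingale hypothesis and the exact constants are specific to the paper and are not reproduced here.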
Funding: This work was supported by the National Institutes of Health [5P01CA142538].
Abstract: Recently, deep learning has successfully achieved state-of-the-art performance on many difficult tasks. Deep neural networks allow for model flexibility and process features without the need for domain knowledge. Advantage learning (A-learning) is a popular method in dynamic treatment regimes (DTR). It models the advantage function, which is of direct relevance to optimal treatment decisions, and makes no assumptions on the baseline function. However, there is a paucity of literature on deep A-learning. In this paper, we present a deep A-learning approach to estimate the optimal DTR. We use an inverse probability weighting method to estimate the difference between potential outcomes. Parameter sharing in convolutional neural networks (CNN) greatly reduces the number of parameters, which allows for high scalability. Convexified convolutional neural networks (CCNN) relax the constraints of CNN for optimisation purposes. Different architectures of CNN and CCNN are implemented for contrast function estimation. Both simulation results and an application to the STAR*D (Sequenced Treatment Alternatives to Relieve Depression) trial indicate that the proposed methods outperform the penalised least squares estimator.
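As a rough illustration of the estimation pipeline sketched in the abstract, the following minimal Python/PyTorch example combines the inverse-probability-weighted pseudo-outcome for the contrast between potential outcomes with a small 1-D convolutional regressor. The simulated data, the assumption of a known propensity score, and the ContrastCNN architecture are hypothetical stand-ins, not the paper's CNN/CCNN implementations.

import numpy as np
import torch
import torch.nn as nn

# --- Hypothetical simulated data: covariates X, binary treatment A, outcome Y ---
rng = np.random.default_rng(0)
n, p = 500, 16
X = rng.normal(size=(n, p)).astype(np.float32)
true_contrast = X[:, 0] - 0.5 * X[:, 1]              # C(x) = E[Y|A=1,x] - E[Y|A=0,x]
prop = 1.0 / (1.0 + np.exp(-0.3 * X[:, 2]))          # propensity P(A=1|x), assumed known here
A = rng.binomial(1, prop).astype(np.float32)
Y = (X[:, 3] + A * true_contrast + rng.normal(scale=0.5, size=n)).astype(np.float32)

# IPW pseudo-outcome: E[Y*(A - prop)/(prop*(1 - prop)) | X] equals the contrast C(X)
pseudo = (Y * (A - prop) / (prop * (1.0 - prop))).astype(np.float32)

# --- Small 1-D CNN regressing the pseudo-outcome on X (stand-in for CNN/CCNN) ---
class ContrastCNN(nn.Module):
    def __init__(self, n_features):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=3, padding=1),   # weight sharing across features
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(8 * n_features, 1),
        )

    def forward(self, x):                                # x: (batch, n_features)
        return self.net(x.unsqueeze(1)).squeeze(-1)

model = ContrastCNN(p)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
Xt, Pt = torch.from_numpy(X), torch.from_numpy(pseudo)

for _ in range(200):                                     # full-batch gradient descent
    optimiser.zero_grad()
    loss = nn.functional.mse_loss(model(Xt), Pt)
    loss.backward()
    optimiser.step()

# Estimated rule: treat when the fitted contrast is positive
with torch.no_grad():
    estimated_rule = (model(Xt) > 0).int()

The sign of the fitted contrast function then gives an estimated single-stage treatment rule; the multi-stage DTR setting and the comparison with the penalised least squares estimator are beyond this sketch.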