Abstract: Palaeoskapha sichuanensis gen. et sp. nov. of Menispermaceae is described here for the first time, based on a well-preserved fossil fruit. The specimen was found in the Relu Formation of western Sichuan, West China. The specimen, a straight, boat-shaped endocarp with a large ventral condyle, clearly belongs to the tribe Tinosporeae. The wide aperture of the double condyle, combined with an overall shape that is not deeply invaginated, indicates a genus distinct from those previously known in this tribe. This fossil extends the known Eocene distribution of the tribe from North America and Europe to Asia, where it was formerly unrecorded.
Abstract: Prediction of the rolling force of a temper mill is important for optimal control of the rolling process. To address the limited accuracy of existing temper mill rolling-force predictions, a neural network model using the ReLU (Rectified Linear Unit) activation function is proposed. After principal component analysis of the data, the main factors affecting the rolling force were identified and taken as the input layer of the network, with the temper mill rolling force as the output layer. Experiments programmed in Python screened the hidden-layer parameters and algorithms one variable at a time, establishing the neural network model with the highest rolling-force prediction accuracy. Experimental results show that, by tuning the number of hidden layers, the number of neurons, the propagation algorithm, and the regularization method, the model keeps the prediction error within 10%, and the method accurately predicts the temper mill rolling force under different input parameters.
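A minimal sketch of the kind of pipeline this abstract describes, assuming scikit-learn: PCA selects the dominant input factors, and a ReLU-activated multilayer perceptron maps them to rolling force. The features, data, and layer sizes below are synthetic placeholders, not the paper's mill measurements or tuned configuration.

```python
# Hedged sketch: PCA feature selection + ReLU MLP regression for rolling force.
# All data here is synthetic; real inputs would be mill process variables.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 8))  # placeholder process variables (width, thickness, speed, ...)
y = 2000.0 + X[:, :3] @ [120.0, 80.0, 40.0] + rng.normal(scale=5.0, size=500)  # synthetic force

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=4),                          # keep the principal factors
    MLPRegressor(hidden_layer_sizes=(32, 16),     # illustrative sizes, not the paper's
                 activation="relu", max_iter=2000, random_state=0),
)
model.fit(X[:400], y[:400])
rel_err = np.abs(model.predict(X[400:]) - y[400:]) / np.abs(y[400:])
print(f"mean relative error: {rel_err.mean():.2%}")
```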
Abstract: An ELM (extreme learning machine) algorithm with a ReLU-function kernel, combined with the Relief algorithm, is introduced to predict the maximum subsidence of a mining area. First, the field strata-movement data are screened and optimized with the Relief algorithm; then the hidden-layer size giving higher prediction accuracy is selected by iterating over the number of hidden-layer nodes; next, taking the screened and optimized parameters as inputs and the maximum subsidence as the target, ELM prediction models based on ReLU-function, Sigmoid-function, radial-basis-function, and Hardlim-function kernels are built; finally, the predictions of the four models are compared and analyzed. The results show that mining thickness, average mining depth, strike length, and dip length are significantly related to the maximum subsidence, and that the ELM with a ReLU-function kernel and 57 hidden-layer neurons is markedly more accurate than the comparison models.
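The ELM recipe this abstract relies on is simple enough to sketch in a few lines of numpy: random, untrained hidden weights, a ReLU hidden layer, and output weights solved in closed form via the pseudoinverse. The four inputs and the data below are illustrative assumptions, not the paper's field records; only the hidden-layer size of 57 comes from the abstract.

```python
# Hedged sketch of an Extreme Learning Machine with a ReLU hidden layer.
import numpy as np

class ReLUELM:
    def __init__(self, n_hidden=57, seed=0):  # 57 follows the abstract's best model
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        n_features = X.shape[1]
        self.W = self.rng.normal(size=(n_features, self.n_hidden))  # random, never trained
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.maximum(X @ self.W + self.b, 0.0)   # ReLU hidden activations
        self.beta = np.linalg.pinv(H) @ y          # closed-form output weights
        return self

    def predict(self, X):
        return np.maximum(X @ self.W + self.b, 0.0) @ self.beta

# Illustrative use on synthetic data (thickness, depth, strike, dip are made up):
rng = np.random.default_rng(1)
X = rng.uniform(0.5, 3.0, size=(100, 4))
y = 1.2 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.05, size=100)
model = ReLUELM().fit(X[:80], y[:80])
print("mean abs error:", np.abs(model.predict(X[80:]) - y[80:]).mean())
```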
Funding: This work is supported by the National Natural Science Foundation of China under Grant No. 61872371, the Open Fund from the State Key Laboratory of High Performance Computing of China (HPCL) under Grant No. 202001-07, and the National Key Research and Development Program of China under Grant No. 2018YFB0204301.
Abstract: Neural networks, as an important computing model, have wide application in the artificial intelligence (AI) domain. From the perspective of computer science, such a computing model requires a formal description of its behaviors, particularly the relation between input and output. In addition, such specifications ought to be verified automatically. ReLU (rectified linear unit) neural networks are intensively used in practice. In this paper, we present ReLU Temporal Logic (ReTL), whose semantics is defined with respect to ReLU neural networks and which can specify value-related properties of the network. We show that model checking for the Σ2 ∪ Π2 fragment of ReTL, which can express properties such as output reachability, is decidable in EXPSPACE. We have also implemented our algorithm in a prototype tool, and experimental results demonstrate the feasibility of the presented model checking approach.
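This is not the paper's ReTL model-checking algorithm, but a hedged sketch of the piecewise-linear view such verification builds on: for a one-hidden-layer ReLU network, output reachability can be decided exactly by enumerating activation patterns and solving a linear feasibility problem for each. The weights, bounds, and threshold below are illustrative assumptions.

```python
# Hedged sketch: exact output reachability for a one-hidden-layer ReLU network
# y = w2 . relu(W1 x + b1) + b2, by activation-pattern enumeration + LP feasibility.
import itertools
import numpy as np
from scipy.optimize import linprog

def reachable(W1, b1, w2, b2, x_lo, x_hi, target):
    """Is there an x in the box [x_lo, x_hi] with network output >= target?"""
    h, n = W1.shape
    for pattern in itertools.product([0, 1], repeat=h):
        s = np.array(pattern)
        # Pre-activation sign constraints, written as A @ x <= b:
        # active unit i:   W1_i x + b1_i >= 0  ->  -W1_i x <= b1_i
        # inactive unit i: W1_i x + b1_i <= 0  ->   W1_i x <= -b1_i
        A = np.where(s[:, None] == 1, -W1, W1)
        b = np.where(s == 1, b1, -b1)
        # Output constraint under this pattern: out_w @ x + out_b >= target.
        out_w = (s * w2) @ W1
        out_b = (s * w2) @ b1 + b2
        A = np.vstack([A, -out_w])
        b = np.append(b, out_b - target)
        res = linprog(c=np.zeros(n), A_ub=A, b_ub=b,
                      bounds=list(zip(x_lo, x_hi)), method="highs")
        if res.status == 0:  # feasible: a witness input exists
            return True, res.x
    return False, None

# Illustrative use on a tiny random network:
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)
w2, b2 = rng.normal(size=4), 0.0
ok, witness = reachable(W1, b1, w2, b2, x_lo=[-1, -1], x_hi=[1, 1], target=1.0)
print(ok, witness)
```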
Abstract: Deep Neural Networks (DNNs) have become the tool of choice for machine learning practitioners today. One important aspect of designing a neural network is the choice of the activation function to be used at the neurons of the different layers. In this work, we introduce a four-output activation function called the Reflected Rectified Linear Unit (RReLU) activation, which considers both a feature and its negation during computation. Our activation function is "sparse", in that only two of the four possible outputs are active at a given time. We test our activation function on the standard MNIST and CIFAR-10 datasets, which are classification problems, as well as on a novel Computational Fluid Dynamics (CFD) dataset, which is posed as a regression problem. On the baseline network for the MNIST dataset, having two hidden layers, our activation function improves the validation accuracy from 0.09 to 0.97 compared to the well-known ReLU activation. For the CIFAR-10 dataset, we use a deep baseline network that achieves 0.78 validation accuracy with 20 epochs but overfits the data. Using the RReLU activation, we can achieve the same accuracy without overfitting the data. For the CFD dataset, we show that the RReLU activation can reduce the number of epochs from 100 (using ReLU) to 10 while obtaining the same levels of performance.
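The abstract does not give the RReLU formula, so the following is one plausible reading, offered only as an assumption: each feature x is mapped to four channels built from x and its negation -x, of which exactly two are nonzero at a time. The paper's exact formulation may differ.

```python
# Hedged sketch of a four-output "reflected" ReLU consistent with the
# abstract's description; the precise definition is an assumption.
import numpy as np

def rrelu(x):
    """Map each feature x to four channels; exactly two are nonzero."""
    x = np.asarray(x, dtype=float)
    return np.stack([np.maximum(x, 0.0),     # relu(x)
                     -np.maximum(x, 0.0),    # reflection of relu(x)
                     np.maximum(-x, 0.0),    # relu of the negated feature
                     -np.maximum(-x, 0.0)],  # its reflection
                    axis=-1)

print(rrelu([2.0, -3.0]))
# For x = 2:  channels (2, -2, 0, 0) are the two active outputs.
# For x = -3: channels (0, 0, 3, -3) are the two active outputs.
```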
Funding: The research of the second author was partially supported by US NSF under grant award DMS-1945029. The research of the third author is supported in part by US NSF DMS-1719699 and the NSF TRIPODS program CCF-1704833.
Abstract: We prove a theorem concerning the approximation of generalized bandlimited multivariate functions by deep ReLU networks, for which the curse of dimensionality is overcome. Our theorem is based on a result by Maurey and on the ability of deep ReLU networks to approximate Chebyshev polynomials and analytic functions efficiently.
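As a toy illustration only (not the paper's construction or its approximation rates), the following sketch fits a simple bandlimited target, cos(3x), with a small deep ReLU network; the architecture and library choice are assumptions for demonstration.

```python
# Toy demonstration: a deep ReLU network approximating a bandlimited function.
import numpy as np
from sklearn.neural_network import MLPRegressor

x = np.linspace(-np.pi, np.pi, 2000).reshape(-1, 1)
y = np.cos(3 * x).ravel()  # a simple bandlimited target
net = MLPRegressor(hidden_layer_sizes=(64, 64, 64), activation="relu",
                   max_iter=5000, tol=1e-7, random_state=0).fit(x, y)
print("max abs error:", np.abs(net.predict(x) - y).max())
```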