The Bernoulli convolution ν λ measure is shown to be absolutely continuous with L 2 density for almost all 12<λ<1,and singular if λ -1 is a Pisot number. It is an open question whether the Pisot typ...The Bernoulli convolution ν λ measure is shown to be absolutely continuous with L 2 density for almost all 12<λ<1,and singular if λ -1 is a Pisot number. It is an open question whether the Pisot type Bernoulli convolutions are the only singular ones. In this paper,we construct a family of non-Pisot type Bernoulli convolutions ν λ such that their density functions,if they exist,are not L 2. We also construct other Bernolulli convolutions whose density functions,if they exist,behave rather badly.展开更多
Louis Pierre Gratiolet (1815-1865) was one of the first modern anatomists to pay attention to cerebral convolutions. Born in Sainte-Foy-la-Grande (Gironde), he moved to Paris in 1834 to study medicine, as well as comp...Louis Pierre Gratiolet (1815-1865) was one of the first modern anatomists to pay attention to cerebral convolutions. Born in Sainte-Foy-la-Grande (Gironde), he moved to Paris in 1834 to study medicine, as well as comparative anatomy under Henri de Blainville (1777-1850). In 1842, he accepted de Blainville’s offer to become his assistant at the Muséum d’histoire naturelle and progressively abandoned medicine for comparative anatomy. He undertook a detailed study of brains of human and nonhuman primates and soon realized that the organizational pattern of cerebral convolutions was so predictable that it could serve as a criterion to classify primate groups. He noted that only the deepest sulci exist in lower primate forms, while the complexity of cortical folding increases markedly in great apes and humans. Gratiolet provided the first cogent description of the lobular organization of primate cerebral hemispheres. He saw the insula as a central lobe around which revolved the frontal, parietal, temporal (temporo-sphenoidal) and occipital lobes. He correctly identified most gyri and sulci on all brain surfaces, introduced the term “plis de passage” for some interconnecting gyri, and provided the first description of the optic radiations. In the early 1860s, Gratiolet fought a highly publicized battle against Paul Broca (1824-1880) on the relationship between brain and intelligence. Gratiolet agreed that the brain was most likely the seat of intelligence, but he considered human cognition far too subtle to have any direct relationship with brain size. He argued that a detailed study of the human brain architecture would be more profitable than Broca’s vain speculations on the relationship between brain weight and intelligence, which he considered a monolithic entity. Despite remarkable scientific achievements and a unique teaching capacity, Gratiolet was unable to secure any academic position until three years before his sudden death in Paris at age 49.展开更多
Text format information is full of most of the resources of Internet,which puts forward higher and higher requirements for the accuracy of text classification.Therefore,in this manuscript,firstly,we design a hybrid mo...Text format information is full of most of the resources of Internet,which puts forward higher and higher requirements for the accuracy of text classification.Therefore,in this manuscript,firstly,we design a hybrid model of bidirectional encoder representation from transformers-hierarchical attention networks-dilated convolutions networks(BERT_HAN_DCN)which based on BERT pre-trained model with superior ability of extracting characteristic.The advantages of HAN model and DCN model are taken into account which can help gain abundant semantic information,fusing context semantic features and hierarchical characteristics.Secondly,the traditional softmax algorithm increases the learning difficulty of the same kind of samples,making it more difficult to distinguish similar features.Based on this,AM-softmax is introduced to replace the traditional softmax.Finally,the fused model is validated,which shows superior performance in the accuracy rate and F1-score of this hybrid model on two datasets and the experimental analysis shows the general single models such as HAN,DCN,based on BERT pre-trained model.Besides,the improved AM-softmax network model is superior to the general softmax network model.展开更多
A discrete algorithm suitable for the computation of complex frequency-domain convolution on computers was derived. The Durbin's numerical inversion of Laplace transforms can be used to figure out the time-domain ...A discrete algorithm suitable for the computation of complex frequency-domain convolution on computers was derived. The Durbin's numerical inversion of Laplace transforms can be used to figure out the time-domain digital solution of the result of complex frequency-domain convolutions. Compared with the digital solutions and corresponding analytical solutions, it is shown that the digital solutions have high precision.展开更多
Based on quantum mechanical representation and operator theory,this paper restates the two new convolutions of fractional Fourier transform(FrFT)by making full use of the conversion relationship between two mutual con...Based on quantum mechanical representation and operator theory,this paper restates the two new convolutions of fractional Fourier transform(FrFT)by making full use of the conversion relationship between two mutual conjugates:coordinate representation and momentum representation.This paper gives full play to the efficiency of Dirac notation and proves the convolutions of fractional Fourier transform from the perspective of quantum optics,a field that has been developing rapidly.These two new convolution methods have potential value in signal processing.展开更多
Graphconvolutional networks(GCNs)have become prevalent in recommender system(RS)due to their superiority in modeling collaborative patterns.Although improving the overall accuracy,GCNs unfortunately amplify popularity...Graphconvolutional networks(GCNs)have become prevalent in recommender system(RS)due to their superiority in modeling collaborative patterns.Although improving the overall accuracy,GCNs unfortunately amplify popularity bias-tail items are less likely to be recommended.This effect prevents the GCN-based RS from making precise and fair recommendations,decreasing the effectiveness of recommender systems in the long run.In this paper,we investigate how graph convolutions amplify the popularity bias in RS.Through theoretical analyses,we identify two fundamental factors:(1)with graph convolution(i.e.,neighborhood aggregation),popular items exert larger influence than tail items on neighbor users,making the users move towards popular items in the representation space;(2)after multiple times of graph convolution,popular items would affect more high-order neighbors and become more influential.The two points make popular items get closer to almost users and thus being recommended more frequently.To rectify this,we propose to estimate the amplified effect of popular nodes on each node's representation,and intervene the effect after each graph convolution.Specifically,we adopt clustering to discover highly-influential nodes and estimate the amplification effect of each node,then remove the effect from the node embeddings at each graph convolution layer.Our method is simple and generic-it can be used in the inference stage to correct existing models rather than training a new model from scratch,and can be applied to various GCN models.We demonstrate our method on two representative GCN backbones LightGCN and UltraGCN,verifying its ability in improving the recommendations of tail items without sacrificing the performance of popular items.Codes are open-sourced^(1)).展开更多
Let k be a local field.Let I_(v) and I_(v′)be smooth principal series representations of GLn(k)and GL_(n-1)(k),respectively.The Rankin-Selberg integrals yield a continuous bilinear map I_(v)×I_(v′)→C with a ce...Let k be a local field.Let I_(v) and I_(v′)be smooth principal series representations of GLn(k)and GL_(n-1)(k),respectively.The Rankin-Selberg integrals yield a continuous bilinear map I_(v)×I_(v′)→C with a certain invariance property.We study integrals over a certain open orbit that also yield a continuous bilinear map I_(v)×I_(v′)→C with the same invariance property and show that these integrals equal the Rankin-Selberg integrals up to an explicit constant.Similar results are also obtained for Rankin-Selberg integrals for GLn(k)×GLn(k).展开更多
Accurate prediction of formation pore pressure is essential to predict fluid flow and manage hydrocarbon production in petroleum engineering.Recent deep learning technique has been receiving more interest due to the g...Accurate prediction of formation pore pressure is essential to predict fluid flow and manage hydrocarbon production in petroleum engineering.Recent deep learning technique has been receiving more interest due to the great potential to deal with pore pressure prediction.However,most of the traditional deep learning models are less efficient to address generalization problems.To fill this technical gap,in this work,we developed a new adaptive physics-informed deep learning model with high generalization capability to predict pore pressure values directly from seismic data.Specifically,the new model,named CGP-NN,consists of a novel parametric features extraction approach(1DCPP),a stacked multilayer gated recurrent model(multilayer GRU),and an adaptive physics-informed loss function.Through machine training,the developed model can automatically select the optimal physical model to constrain the results for each pore pressure prediction.The CGP-NN model has the best generalization when the physicsrelated metricλ=0.5.A hybrid approach combining Eaton and Bowers methods is also proposed to build machine-learnable labels for solving the problem of few labels.To validate the developed model and methodology,a case study on a complex reservoir in Tarim Basin was further performed to demonstrate the high accuracy on the pore pressure prediction of new wells along with the strong generalization ability.The adaptive physics-informed deep learning approach presented here has potential application in the prediction of pore pressures coupled with multiple genesis mechanisms using seismic data.展开更多
Thunderstorm gusts are a common form of severe convective weather in the warm season in North China,and it is of great importance to correctly forecast them.At present,the forecasting of thunderstorm gusts is mainly b...Thunderstorm gusts are a common form of severe convective weather in the warm season in North China,and it is of great importance to correctly forecast them.At present,the forecasting of thunderstorm gusts is mainly based on traditional subjective methods,which fails to achieve high-resolution and high-frequency gridded forecasts based on multiple observation sources.In this paper,we propose a deep learning method called Thunderstorm Gusts TransU-net(TGTransUnet)to forecast thunderstorm gusts in North China based on multi-source gridded product data from the Institute of Urban Meteorology(IUM)with a lead time of 1 to 6 h.To determine the specific range of thunderstorm gusts,we combine three meteorological variables:radar reflectivity factor,lightning location,and 1-h maximum instantaneous wind speed from automatic weather stations(AWSs),and obtain a reasonable ground truth of thunderstorm gusts.Then,we transform the forecasting problem into an image-to-image problem in deep learning under the TG-TransUnet architecture,which is based on convolutional neural networks and a transformer.The analysis and forecast data of the enriched multi-source gridded comprehensive forecasting system for the period 2021–23 are then used as training,validation,and testing datasets.Finally,the performance of TG-TransUnet is compared with other methods.The results show that TG-TransUnet has the best prediction results at 1–6 h.The IUM is currently using this model to support the forecasting of thunderstorm gusts in North China.展开更多
Watermarks can provide reliable and secure copyright protection for optical coherence tomography(OCT)fundus images.The effective image segmentation is helpful for promoting OCT image watermarking.However,OCT images ha...Watermarks can provide reliable and secure copyright protection for optical coherence tomography(OCT)fundus images.The effective image segmentation is helpful for promoting OCT image watermarking.However,OCT images have a large amount of low-quality data,which seriously affects the performance of segmentationmethods.Therefore,this paper proposes an effective segmentation method for OCT fundus image watermarking using a rough convolutional neural network(RCNN).First,the rough-set-based feature discretization module is designed to preprocess the input data.Second,a dual attention mechanism for feature channels and spatial regions in the CNN is added to enable the model to adaptively select important information for fusion.Finally,the refinement module for enhancing the extraction power of multi-scale information is added to improve the edge accuracy in segmentation.RCNN is compared with CE-Net and MultiResUNet on 83 gold standard 3D retinal OCT data samples.The average dice similarly coefficient(DSC)obtained by RCNN is 6%higher than that of CE-Net.The average 95 percent Hausdorff distance(95HD)and average symmetric surface distance(ASD)obtained by RCNN are 32.4%and 33.3%lower than those of MultiResUNet,respectively.We also evaluate the effect of feature discretization,as well as analyze the initial learning rate of RCNN and conduct ablation experiments with the four different models.The experimental results indicate that our method can improve the segmentation accuracy of OCT fundus images,providing strong support for its application in medical image watermarking.展开更多
Accurately predicting fluid forces acting on the sur-face of a structure is crucial in engineering design.However,this task becomes particularly challenging in turbulent flow,due to the complex and irregular changes i...Accurately predicting fluid forces acting on the sur-face of a structure is crucial in engineering design.However,this task becomes particularly challenging in turbulent flow,due to the complex and irregular changes in the flow field.In this study,we propose a novel deep learning method,named mapping net-work-coordinated stacked gated recurrent units(MSU),for pre-dicting pressure on a circular cylinder from velocity data.Specifi-cally,our coordinated learning strategy is designed to extract the most critical velocity point for prediction,a process that has not been explored before.In our experiments,MSU extracts one point from a velocity field containing 121 points and utilizes this point to accurately predict 100 pressure points on the cylinder.This method significantly reduces the workload of data measure-ment in practical engineering applications.Our experimental results demonstrate that MSU predictions are highly similar to the real turbulent data in both spatio-temporal and individual aspects.Furthermore,the comparison results show that MSU predicts more precise results,even outperforming models that use all velocity field points.Compared with state-of-the-art methods,MSU has an average improvement of more than 45%in various indicators such as root mean square error(RMSE).Through comprehensive and authoritative physical verification,we estab-lished that MSU’s prediction results closely align with pressure field data obtained in real turbulence fields.This confirmation underscores the considerable potential of MSU for practical applications in real engineering scenarios.The code is available at https://github.com/zhangzm0128/MSU.展开更多
Geomechanical assessment using coupled reservoir-geomechanical simulation is becoming increasingly important for analyzing the potential geomechanical risks in subsurface geological developments.However,a robust and e...Geomechanical assessment using coupled reservoir-geomechanical simulation is becoming increasingly important for analyzing the potential geomechanical risks in subsurface geological developments.However,a robust and efficient geomechanical upscaling technique for heterogeneous geological reservoirs is lacking to advance the applications of three-dimensional(3D)reservoir-scale geomechanical simulation considering detailed geological heterogeneities.Here,we develop convolutional neural network(CNN)proxies that reproduce the anisotropic nonlinear geomechanical response caused by lithological heterogeneity,and compute upscaled geomechanical properties from CNN proxies.The CNN proxies are trained using a large dataset of randomly generated spatially correlated sand-shale realizations as inputs and simulation results of their macroscopic geomechanical response as outputs.The trained CNN models can provide the upscaled shear strength(R^(2)>0.949),stress-strain behavior(R^(2)>0.925),and volumetric strain changes(R^(2)>0.958)that highly agree with the numerical simulation results while saving over two orders of magnitude of computational time.This is a major advantage in computing the upscaled geomechanical properties directly from geological realizations without the need to perform local numerical simulations to obtain the geomechanical response.The proposed CNN proxybased upscaling technique has the ability to(1)bridge the gap between the fine-scale geocellular models considering geological uncertainties and computationally efficient geomechanical models used to assess the geomechanical risks of large-scale subsurface development,and(2)improve the efficiency of numerical upscaling techniques that rely on local numerical simulations,leading to significantly increased computational time for uncertainty quantification using numerous geological realizations.展开更多
How to use a few defect samples to complete the defect classification is a key challenge in the production of mobile phone screens.An attention-relation network for the mobile phone screen defect classification is pro...How to use a few defect samples to complete the defect classification is a key challenge in the production of mobile phone screens.An attention-relation network for the mobile phone screen defect classification is proposed in this paper.The architecture of the attention-relation network contains two modules:a feature extract module and a feature metric module.Different from other few-shot models,an attention mechanism is applied to metric learning in our model to measure the distance between features,so as to pay attention to the correlation between features and suppress unwanted information.Besides,we combine dilated convolution and skip connection to extract more feature information for follow-up processing.We validate attention-relation network on the mobile phone screen defect dataset.The experimental results show that the classification accuracy of the attentionrelation network is 0.9486 under the 5-way 1-shot training strategy and 0.9039 under the 5-way 5-shot setting.It achieves the excellent effect of classification for mobile phone screen defects and outperforms with dominant advantages.展开更多
Accurately estimating the ocean subsurface salinity structure(OSSS)is crucial for understanding ocean dynamics and predicting climate variations.We present a convolutional neural network(CNN)model to estimate the OSSS...Accurately estimating the ocean subsurface salinity structure(OSSS)is crucial for understanding ocean dynamics and predicting climate variations.We present a convolutional neural network(CNN)model to estimate the OSSS in the Indian Ocean using satellite data and Argo observations.We evaluated the performance of the CNN model in terms of its vertical and spatial distribution,as well as seasonal variation of OSSS estimation.Results demonstrate that the CNN model accurately estimates the most significant salinity features in the Indian Ocean using sea surface data with no significant differences from Argo-derived OSSS.However,the estimation accuracy of the CNN model varies with depth,with the most challenging depth being approximately 70 m,corresponding to the halocline layer.Validations of the CNN model’s accuracy in estimating OSSS in the Indian Ocean are also conducted by comparing Argo observations and CNN model estimations along two selected sections and four selected boxes.The results show that the CNN model effectively captures the seasonal variability of salinity,demonstrating its high performance in salinity estimation using sea surface data.Our analysis reveals that sea surface salinity has the strongest correlation with OSSS in shallow layers,while sea surface height anomaly plays a more significant role in deeper layers.These preliminary results provide valuable insights into the feasibility of estimating OSSS using satellite observations and have implications for studying upper ocean dynamics using machine learning techniques.展开更多
The motivation for this study is that the quality of deep fakes is constantly improving,which leads to the need to develop new methods for their detection.The proposed Customized Convolutional Neural Network method in...The motivation for this study is that the quality of deep fakes is constantly improving,which leads to the need to develop new methods for their detection.The proposed Customized Convolutional Neural Network method involves extracting structured data from video frames using facial landmark detection,which is then used as input to the CNN.The customized Convolutional Neural Network method is the date augmented-based CNN model to generate‘fake data’or‘fake images’.This study was carried out using Python and its libraries.We used 242 films from the dataset gathered by the Deep Fake Detection Challenge,of which 199 were made up and the remaining 53 were real.Ten seconds were allotted for each video.There were 318 videos used in all,199 of which were fake and 119 of which were real.Our proposedmethod achieved a testing accuracy of 91.47%,loss of 0.342,and AUC score of 0.92,outperforming two alternative approaches,CNN and MLP-CNN.Furthermore,our method succeeded in greater accuracy than contemporary models such as XceptionNet,Meso-4,EfficientNet-BO,MesoInception-4,VGG-16,and DST-Net.The novelty of this investigation is the development of a new Convolutional Neural Network(CNN)learning model that can accurately detect deep fake face photos.展开更多
Data-driven approaches such as neural networks are increasingly used for deep excavations due to the growing amount of available monitoring data in practical projects.However,most neural network models only use the da...Data-driven approaches such as neural networks are increasingly used for deep excavations due to the growing amount of available monitoring data in practical projects.However,most neural network models only use the data from a single monitoring point and neglect the spatial relationships between multiple monitoring points.Besides,most models lack flexibility in providing predictions for multiple days after monitoring activity.This study proposes a sequence-to-sequence(seq2seq)two-dimensional(2D)convolutional long short-term memory neural network(S2SCL2D)for predicting the spatiotemporal wall deflections induced by deep excavations.The model utilizes the data from all monitoring points on the entire wall and extracts spatiotemporal features from data by combining the 2D convolutional layers and long short-term memory(LSTM)layers.The S2SCL2D model achieves a long-term prediction of wall deflections through a recursive seq2seq structure.The excavation depth,which has a significant impact on wall deflections,is also considered using a feature fusion method.An excavation project in Hangzhou,China,is used to illustrate the proposed model.The results demonstrate that the S2SCL2D model has superior prediction accuracy and robustness than that of the LSTM and S2SCL1D(one-dimensional)models.The prediction model demonstrates a strong generalizability when applied to an adjacent excavation.Based on the long-term prediction results,practitioners can plan and allocate resources in advance to address the potential engineering issues.展开更多
Artificial intelligence(AI)technology has become integral in the realm of medicine and healthcare,particularly in human activity recognition(HAR)applications such as fitness and rehabilitation tracking.This study intr...Artificial intelligence(AI)technology has become integral in the realm of medicine and healthcare,particularly in human activity recognition(HAR)applications such as fitness and rehabilitation tracking.This study introduces a robust coupling analysis framework that integrates four AI-enabled models,combining both machine learning(ML)and deep learning(DL)approaches to evaluate their effectiveness in HAR.The analytical dataset comprises 561 features sourced from the UCI-HAR database,forming the foundation for training the models.Additionally,the MHEALTH database is employed to replicate the modeling process for comparative purposes,while inclusion of the WISDM database,renowned for its challenging features,supports the framework’s resilience and adaptability.The ML-based models employ the methodologies including adaptive neuro-fuzzy inference system(ANFIS),support vector machine(SVM),and random forest(RF),for data training.In contrast,a DL-based model utilizes one-dimensional convolution neural network(1dCNN)to automate feature extraction.Furthermore,the recursive feature elimination(RFE)algorithm,which drives an ML-based estimator to eliminate low-participation features,helps identify the optimal features for enhancing model performance.The best accuracies of the ANFIS,SVM,RF,and 1dCNN models with meticulous featuring process achieve around 90%,96%,91%,and 93%,respectively.Comparative analysis using the MHEALTH dataset showcases the 1dCNN model’s remarkable perfect accuracy(100%),while the RF,SVM,and ANFIS models equipped with selected features achieve accuracies of 99.8%,99.7%,and 96.5%,respectively.Finally,when applied to the WISDM dataset,the DL-based and ML-based models attain accuracies of 91.4%and 87.3%,respectively,aligning with prior research findings.In conclusion,the proposed framework yields HAR models with commendable performance metrics,exhibiting its suitability for integration into the healthcare services system through AI-driven applications.展开更多
While encryption technology safeguards the security of network communications,malicious traffic also uses encryption protocols to obscure its malicious behavior.To address the issues of traditional machine learning me...While encryption technology safeguards the security of network communications,malicious traffic also uses encryption protocols to obscure its malicious behavior.To address the issues of traditional machine learning methods relying on expert experience and the insufficient representation capabilities of existing deep learning methods for encrypted malicious traffic,we propose an encrypted malicious traffic classification method that integrates global semantic features with local spatiotemporal features,called BERT-based Spatio-Temporal Features Network(BSTFNet).At the packet-level granularity,the model captures the global semantic features of packets through the attention mechanism of the Bidirectional Encoder Representations from Transformers(BERT)model.At the byte-level granularity,we initially employ the Bidirectional Gated Recurrent Unit(BiGRU)model to extract temporal features from bytes,followed by the utilization of the Text Convolutional Neural Network(TextCNN)model with multi-sized convolution kernels to extract local multi-receptive field spatial features.The fusion of features from both granularities serves as the ultimate multidimensional representation of malicious traffic.Our approach achieves accuracy and F1-score of 99.39%and 99.40%,respectively,on the publicly available USTC-TFC2016 dataset,and effectively reduces sample confusion within the Neris and Virut categories.The experimental results demonstrate that our method has outstanding representation and classification capabilities for encrypted malicious traffic.展开更多
针对景区手写诗词存在背景纹理复杂、字体尺寸及风格多样等特点导致景区游客难以识别手写诗词的问题,首先,分析研究景区手写诗词的识别场景,设计景区诗词检测网络(detection of poetry in scenic areas-network,DPSA-Net)以提取景区手...针对景区手写诗词存在背景纹理复杂、字体尺寸及风格多样等特点导致景区游客难以识别手写诗词的问题,首先,分析研究景区手写诗词的识别场景,设计景区诗词检测网络(detection of poetry in scenic areas-network,DPSA-Net)以提取景区手写诗词不同尺度的特征,并结合手写诗词字符间的链接依赖关系实现景区手写诗词检测;其次,设计了卷积循环聚合网络(convolution recurrent aggregation network,CRA-Net)以对景区手写诗词进行识别,结合卷积神经网络(convolutional neural networks,CNN)和双向长短期记忆网络提取手写诗词图像的序列特征,并通过聚合交叉熵(aggregation cross-entropy,ACE)实现特征向文本的转换;最后,结合景区知识图谱对CRA-Net的输出进行校正,进而提高景区手写诗词的识别准确率。实验结果表明,通过景区手写诗词矫正技术对CRA-Net的识别结果矫正后,识别准确率达到了79.04%,同时,该技术具有较好的抗干扰能力和良好的应用前景。展开更多
Artificial Intelligence(AI)is being increasingly used for diagnosing Vision-Threatening Diabetic Retinopathy(VTDR),which is a leading cause of visual impairment and blindness worldwide.However,previous automated VTDR ...Artificial Intelligence(AI)is being increasingly used for diagnosing Vision-Threatening Diabetic Retinopathy(VTDR),which is a leading cause of visual impairment and blindness worldwide.However,previous automated VTDR detection methods have mainly relied on manual feature extraction and classification,leading to errors.This paper proposes a novel VTDR detection and classification model that combines different models through majority voting.Our proposed methodology involves preprocessing,data augmentation,feature extraction,and classification stages.We use a hybrid convolutional neural network-singular value decomposition(CNN-SVD)model for feature extraction and selection and an improved SVM-RBF with a Decision Tree(DT)and K-Nearest Neighbor(KNN)for classification.We tested our model on the IDRiD dataset and achieved an accuracy of 98.06%,a sensitivity of 83.67%,and a specificity of 100%for DR detection and evaluation tests,respectively.Our proposed approach outperforms baseline techniques and provides a more robust and accurate method for VTDR detection.展开更多
文摘The Bernoulli convolution ν λ measure is shown to be absolutely continuous with L 2 density for almost all 12<λ<1,and singular if λ -1 is a Pisot number. It is an open question whether the Pisot type Bernoulli convolutions are the only singular ones. In this paper,we construct a family of non-Pisot type Bernoulli convolutions ν λ such that their density functions,if they exist,are not L 2. We also construct other Bernolulli convolutions whose density functions,if they exist,behave rather badly.
文摘Louis Pierre Gratiolet (1815-1865) was one of the first modern anatomists to pay attention to cerebral convolutions. Born in Sainte-Foy-la-Grande (Gironde), he moved to Paris in 1834 to study medicine, as well as comparative anatomy under Henri de Blainville (1777-1850). In 1842, he accepted de Blainville’s offer to become his assistant at the Muséum d’histoire naturelle and progressively abandoned medicine for comparative anatomy. He undertook a detailed study of brains of human and nonhuman primates and soon realized that the organizational pattern of cerebral convolutions was so predictable that it could serve as a criterion to classify primate groups. He noted that only the deepest sulci exist in lower primate forms, while the complexity of cortical folding increases markedly in great apes and humans. Gratiolet provided the first cogent description of the lobular organization of primate cerebral hemispheres. He saw the insula as a central lobe around which revolved the frontal, parietal, temporal (temporo-sphenoidal) and occipital lobes. He correctly identified most gyri and sulci on all brain surfaces, introduced the term “plis de passage” for some interconnecting gyri, and provided the first description of the optic radiations. In the early 1860s, Gratiolet fought a highly publicized battle against Paul Broca (1824-1880) on the relationship between brain and intelligence. Gratiolet agreed that the brain was most likely the seat of intelligence, but he considered human cognition far too subtle to have any direct relationship with brain size. He argued that a detailed study of the human brain architecture would be more profitable than Broca’s vain speculations on the relationship between brain weight and intelligence, which he considered a monolithic entity. Despite remarkable scientific achievements and a unique teaching capacity, Gratiolet was unable to secure any academic position until three years before his sudden death in Paris at age 49.
基金Fundamental Research Funds for the Central University,China(No.2232018D3-17)。
文摘Text format information is full of most of the resources of Internet,which puts forward higher and higher requirements for the accuracy of text classification.Therefore,in this manuscript,firstly,we design a hybrid model of bidirectional encoder representation from transformers-hierarchical attention networks-dilated convolutions networks(BERT_HAN_DCN)which based on BERT pre-trained model with superior ability of extracting characteristic.The advantages of HAN model and DCN model are taken into account which can help gain abundant semantic information,fusing context semantic features and hierarchical characteristics.Secondly,the traditional softmax algorithm increases the learning difficulty of the same kind of samples,making it more difficult to distinguish similar features.Based on this,AM-softmax is introduced to replace the traditional softmax.Finally,the fused model is validated,which shows superior performance in the accuracy rate and F1-score of this hybrid model on two datasets and the experimental analysis shows the general single models such as HAN,DCN,based on BERT pre-trained model.Besides,the improved AM-softmax network model is superior to the general softmax network model.
文摘A discrete algorithm suitable for the computation of complex frequency-domain convolution on computers was derived. The Durbin's numerical inversion of Laplace transforms can be used to figure out the time-domain digital solution of the result of complex frequency-domain convolutions. Compared with the digital solutions and corresponding analytical solutions, it is shown that the digital solutions have high precision.
基金National Natural Science Foundation of China(Grant Number:11304126)College Students' Innovation Training Program(Grant Number:202110299696X)。
文摘Based on quantum mechanical representation and operator theory,this paper restates the two new convolutions of fractional Fourier transform(FrFT)by making full use of the conversion relationship between two mutual conjugates:coordinate representation and momentum representation.This paper gives full play to the efficiency of Dirac notation and proves the convolutions of fractional Fourier transform from the perspective of quantum optics,a field that has been developing rapidly.These two new convolution methods have potential value in signal processing.
基金This work was supported by the National Key R&D Program of China(2021ZD0111802)the National Natural Science Foundation of China(Grant No.19A2079)the CCCD Key Lab of Ministry of Culture and Tourism.
文摘Graphconvolutional networks(GCNs)have become prevalent in recommender system(RS)due to their superiority in modeling collaborative patterns.Although improving the overall accuracy,GCNs unfortunately amplify popularity bias-tail items are less likely to be recommended.This effect prevents the GCN-based RS from making precise and fair recommendations,decreasing the effectiveness of recommender systems in the long run.In this paper,we investigate how graph convolutions amplify the popularity bias in RS.Through theoretical analyses,we identify two fundamental factors:(1)with graph convolution(i.e.,neighborhood aggregation),popular items exert larger influence than tail items on neighbor users,making the users move towards popular items in the representation space;(2)after multiple times of graph convolution,popular items would affect more high-order neighbors and become more influential.The two points make popular items get closer to almost users and thus being recommended more frequently.To rectify this,we propose to estimate the amplified effect of popular nodes on each node's representation,and intervene the effect after each graph convolution.Specifically,we adopt clustering to discover highly-influential nodes and estimate the amplification effect of each node,then remove the effect from the node embeddings at each graph convolution layer.Our method is simple and generic-it can be used in the inference stage to correct existing models rather than training a new model from scratch,and can be applied to various GCN models.We demonstrate our method on two representative GCN backbones LightGCN and UltraGCN,verifying its ability in improving the recommendations of tail items without sacrificing the performance of popular items.Codes are open-sourced^(1)).
基金supported by the Natural Science Foundation of Zhejiang Province(Grant No.LZ22A010006)National Natural Science Foundation of China(Grant No.12171421)+2 种基金Feng Su was supported by National Natural Science Foundation of China(Grant No.11901466)the Qinglan Project of Jiangsu Provincesupported by the National Key Research and Development Program of China(Grant No.2020YFA0712600).
文摘Let k be a local field.Let I_(v) and I_(v′)be smooth principal series representations of GLn(k)and GL_(n-1)(k),respectively.The Rankin-Selberg integrals yield a continuous bilinear map I_(v)×I_(v′)→C with a certain invariance property.We study integrals over a certain open orbit that also yield a continuous bilinear map I_(v)×I_(v′)→C with the same invariance property and show that these integrals equal the Rankin-Selberg integrals up to an explicit constant.Similar results are also obtained for Rankin-Selberg integrals for GLn(k)×GLn(k).
基金funded by the National Natural Science Foundation of China(General Program:No.52074314,No.U19B6003-05)National Key Research and Development Program of China(2019YFA0708303-05)。
文摘Accurate prediction of formation pore pressure is essential to predict fluid flow and manage hydrocarbon production in petroleum engineering.Recent deep learning technique has been receiving more interest due to the great potential to deal with pore pressure prediction.However,most of the traditional deep learning models are less efficient to address generalization problems.To fill this technical gap,in this work,we developed a new adaptive physics-informed deep learning model with high generalization capability to predict pore pressure values directly from seismic data.Specifically,the new model,named CGP-NN,consists of a novel parametric features extraction approach(1DCPP),a stacked multilayer gated recurrent model(multilayer GRU),and an adaptive physics-informed loss function.Through machine training,the developed model can automatically select the optimal physical model to constrain the results for each pore pressure prediction.The CGP-NN model has the best generalization when the physicsrelated metricλ=0.5.A hybrid approach combining Eaton and Bowers methods is also proposed to build machine-learnable labels for solving the problem of few labels.To validate the developed model and methodology,a case study on a complex reservoir in Tarim Basin was further performed to demonstrate the high accuracy on the pore pressure prediction of new wells along with the strong generalization ability.The adaptive physics-informed deep learning approach presented here has potential application in the prediction of pore pressures coupled with multiple genesis mechanisms using seismic data.
基金supported in part by the Beijing Natural Science Foundation(Grant No.8222051)the National Key R&D Program of China(Grant No.2022YFC3004103)+2 种基金the National Natural Foundation of China(Grant Nos.42275003 and 42275012)the China Meteorological Administration Key Innovation Team(Grant Nos.CMA2022ZD04 and CMA2022ZD07)the Beijing Science and Technology Program(Grant No.Z221100005222012).
文摘Thunderstorm gusts are a common form of severe convective weather in the warm season in North China,and it is of great importance to correctly forecast them.At present,the forecasting of thunderstorm gusts is mainly based on traditional subjective methods,which fails to achieve high-resolution and high-frequency gridded forecasts based on multiple observation sources.In this paper,we propose a deep learning method called Thunderstorm Gusts TransU-net(TGTransUnet)to forecast thunderstorm gusts in North China based on multi-source gridded product data from the Institute of Urban Meteorology(IUM)with a lead time of 1 to 6 h.To determine the specific range of thunderstorm gusts,we combine three meteorological variables:radar reflectivity factor,lightning location,and 1-h maximum instantaneous wind speed from automatic weather stations(AWSs),and obtain a reasonable ground truth of thunderstorm gusts.Then,we transform the forecasting problem into an image-to-image problem in deep learning under the TG-TransUnet architecture,which is based on convolutional neural networks and a transformer.The analysis and forecast data of the enriched multi-source gridded comprehensive forecasting system for the period 2021–23 are then used as training,validation,and testing datasets.Finally,the performance of TG-TransUnet is compared with other methods.The results show that TG-TransUnet has the best prediction results at 1–6 h.The IUM is currently using this model to support the forecasting of thunderstorm gusts in North China.
基金the China Postdoctoral Science Foundation under Grant 2021M701838the Natural Science Foundation of Hainan Province of China under Grants 621MS042 and 622MS067the Hainan Medical University Teaching Achievement Award Cultivation under Grant HYjcpx202209.
文摘Watermarks can provide reliable and secure copyright protection for optical coherence tomography(OCT)fundus images.The effective image segmentation is helpful for promoting OCT image watermarking.However,OCT images have a large amount of low-quality data,which seriously affects the performance of segmentationmethods.Therefore,this paper proposes an effective segmentation method for OCT fundus image watermarking using a rough convolutional neural network(RCNN).First,the rough-set-based feature discretization module is designed to preprocess the input data.Second,a dual attention mechanism for feature channels and spatial regions in the CNN is added to enable the model to adaptively select important information for fusion.Finally,the refinement module for enhancing the extraction power of multi-scale information is added to improve the edge accuracy in segmentation.RCNN is compared with CE-Net and MultiResUNet on 83 gold standard 3D retinal OCT data samples.The average dice similarly coefficient(DSC)obtained by RCNN is 6%higher than that of CE-Net.The average 95 percent Hausdorff distance(95HD)and average symmetric surface distance(ASD)obtained by RCNN are 32.4%and 33.3%lower than those of MultiResUNet,respectively.We also evaluate the effect of feature discretization,as well as analyze the initial learning rate of RCNN and conduct ablation experiments with the four different models.The experimental results indicate that our method can improve the segmentation accuracy of OCT fundus images,providing strong support for its application in medical image watermarking.
基金supported by the Japan Society for the Promotion of Science(JSPS)KAKENHI(JP22H03643)Japan Science and Technology Agency(JST)Support for Pioneering Research Initiated by the Next Generation(SPRING)(JPMJSP2145)+2 种基金JST Through the Establishment of University Fellowships Towards the Creation of Science Technology Innovation(JPMJFS2115)the National Natural Science Foundation of China(52078382)the State Key Laboratory of Disaster Reduction in Civil Engineering(CE19-A-01)。
文摘Accurately predicting fluid forces acting on the sur-face of a structure is crucial in engineering design.However,this task becomes particularly challenging in turbulent flow,due to the complex and irregular changes in the flow field.In this study,we propose a novel deep learning method,named mapping net-work-coordinated stacked gated recurrent units(MSU),for pre-dicting pressure on a circular cylinder from velocity data.Specifi-cally,our coordinated learning strategy is designed to extract the most critical velocity point for prediction,a process that has not been explored before.In our experiments,MSU extracts one point from a velocity field containing 121 points and utilizes this point to accurately predict 100 pressure points on the cylinder.This method significantly reduces the workload of data measure-ment in practical engineering applications.Our experimental results demonstrate that MSU predictions are highly similar to the real turbulent data in both spatio-temporal and individual aspects.Furthermore,the comparison results show that MSU predicts more precise results,even outperforming models that use all velocity field points.Compared with state-of-the-art methods,MSU has an average improvement of more than 45%in various indicators such as root mean square error(RMSE).Through comprehensive and authoritative physical verification,we estab-lished that MSU’s prediction results closely align with pressure field data obtained in real turbulence fields.This confirmation underscores the considerable potential of MSU for practical applications in real engineering scenarios.The code is available at https://github.com/zhangzm0128/MSU.
基金financial support provided by the Future Energy System at University of Alberta and NSERC Discovery Grant RGPIN-2023-04084。
文摘Geomechanical assessment using coupled reservoir-geomechanical simulation is becoming increasingly important for analyzing the potential geomechanical risks in subsurface geological developments.However,a robust and efficient geomechanical upscaling technique for heterogeneous geological reservoirs is lacking to advance the applications of three-dimensional(3D)reservoir-scale geomechanical simulation considering detailed geological heterogeneities.Here,we develop convolutional neural network(CNN)proxies that reproduce the anisotropic nonlinear geomechanical response caused by lithological heterogeneity,and compute upscaled geomechanical properties from CNN proxies.The CNN proxies are trained using a large dataset of randomly generated spatially correlated sand-shale realizations as inputs and simulation results of their macroscopic geomechanical response as outputs.The trained CNN models can provide the upscaled shear strength(R^(2)>0.949),stress-strain behavior(R^(2)>0.925),and volumetric strain changes(R^(2)>0.958)that highly agree with the numerical simulation results while saving over two orders of magnitude of computational time.This is a major advantage in computing the upscaled geomechanical properties directly from geological realizations without the need to perform local numerical simulations to obtain the geomechanical response.The proposed CNN proxybased upscaling technique has the ability to(1)bridge the gap between the fine-scale geocellular models considering geological uncertainties and computationally efficient geomechanical models used to assess the geomechanical risks of large-scale subsurface development,and(2)improve the efficiency of numerical upscaling techniques that rely on local numerical simulations,leading to significantly increased computational time for uncertainty quantification using numerous geological realizations.
文摘How to use a few defect samples to complete the defect classification is a key challenge in the production of mobile phone screens.An attention-relation network for the mobile phone screen defect classification is proposed in this paper.The architecture of the attention-relation network contains two modules:a feature extract module and a feature metric module.Different from other few-shot models,an attention mechanism is applied to metric learning in our model to measure the distance between features,so as to pay attention to the correlation between features and suppress unwanted information.Besides,we combine dilated convolution and skip connection to extract more feature information for follow-up processing.We validate attention-relation network on the mobile phone screen defect dataset.The experimental results show that the classification accuracy of the attentionrelation network is 0.9486 under the 5-way 1-shot training strategy and 0.9039 under the 5-way 5-shot setting.It achieves the excellent effect of classification for mobile phone screen defects and outperforms with dominant advantages.
基金Supported by the National Key Research and Development Program of China(No.2022YFF0801400)the National Natural Science Foundation of China(No.42176010)the Natural Science Foundation of Shandong Province,China(No.ZR2021MD022)。
文摘Accurately estimating the ocean subsurface salinity structure(OSSS)is crucial for understanding ocean dynamics and predicting climate variations.We present a convolutional neural network(CNN)model to estimate the OSSS in the Indian Ocean using satellite data and Argo observations.We evaluated the performance of the CNN model in terms of its vertical and spatial distribution,as well as seasonal variation of OSSS estimation.Results demonstrate that the CNN model accurately estimates the most significant salinity features in the Indian Ocean using sea surface data with no significant differences from Argo-derived OSSS.However,the estimation accuracy of the CNN model varies with depth,with the most challenging depth being approximately 70 m,corresponding to the halocline layer.Validations of the CNN model’s accuracy in estimating OSSS in the Indian Ocean are also conducted by comparing Argo observations and CNN model estimations along two selected sections and four selected boxes.The results show that the CNN model effectively captures the seasonal variability of salinity,demonstrating its high performance in salinity estimation using sea surface data.Our analysis reveals that sea surface salinity has the strongest correlation with OSSS in shallow layers,while sea surface height anomaly plays a more significant role in deeper layers.These preliminary results provide valuable insights into the feasibility of estimating OSSS using satellite observations and have implications for studying upper ocean dynamics using machine learning techniques.
基金Science and Technology Funds from the Liaoning Education Department(Serial Number:LJKZ0104).
文摘The motivation for this study is that the quality of deep fakes is constantly improving,which leads to the need to develop new methods for their detection.The proposed Customized Convolutional Neural Network method involves extracting structured data from video frames using facial landmark detection,which is then used as input to the CNN.The customized Convolutional Neural Network method is the date augmented-based CNN model to generate‘fake data’or‘fake images’.This study was carried out using Python and its libraries.We used 242 films from the dataset gathered by the Deep Fake Detection Challenge,of which 199 were made up and the remaining 53 were real.Ten seconds were allotted for each video.There were 318 videos used in all,199 of which were fake and 119 of which were real.Our proposedmethod achieved a testing accuracy of 91.47%,loss of 0.342,and AUC score of 0.92,outperforming two alternative approaches,CNN and MLP-CNN.Furthermore,our method succeeded in greater accuracy than contemporary models such as XceptionNet,Meso-4,EfficientNet-BO,MesoInception-4,VGG-16,and DST-Net.The novelty of this investigation is the development of a new Convolutional Neural Network(CNN)learning model that can accurately detect deep fake face photos.
基金supported by the National Natural Science Foundation of China(Grant No.42307218)the Foundation of Key Laboratory of Soft Soils and Geoenvironmental Engineering(Zhejiang University),Ministry of Education(Grant No.2022P08)the Natural Science Foundation of Zhejiang Province(Grant No.LTZ21E080001).
文摘Data-driven approaches such as neural networks are increasingly used for deep excavations due to the growing amount of available monitoring data in practical projects.However,most neural network models only use the data from a single monitoring point and neglect the spatial relationships between multiple monitoring points.Besides,most models lack flexibility in providing predictions for multiple days after monitoring activity.This study proposes a sequence-to-sequence(seq2seq)two-dimensional(2D)convolutional long short-term memory neural network(S2SCL2D)for predicting the spatiotemporal wall deflections induced by deep excavations.The model utilizes the data from all monitoring points on the entire wall and extracts spatiotemporal features from data by combining the 2D convolutional layers and long short-term memory(LSTM)layers.The S2SCL2D model achieves a long-term prediction of wall deflections through a recursive seq2seq structure.The excavation depth,which has a significant impact on wall deflections,is also considered using a feature fusion method.An excavation project in Hangzhou,China,is used to illustrate the proposed model.The results demonstrate that the S2SCL2D model has superior prediction accuracy and robustness than that of the LSTM and S2SCL1D(one-dimensional)models.The prediction model demonstrates a strong generalizability when applied to an adjacent excavation.Based on the long-term prediction results,practitioners can plan and allocate resources in advance to address the potential engineering issues.
基金funded by the National Science and Technology Council,Taiwan(Grant No.NSTC 112-2121-M-039-001)by China Medical University(Grant No.CMU112-MF-79).
文摘Artificial intelligence(AI)technology has become integral in the realm of medicine and healthcare,particularly in human activity recognition(HAR)applications such as fitness and rehabilitation tracking.This study introduces a robust coupling analysis framework that integrates four AI-enabled models,combining both machine learning(ML)and deep learning(DL)approaches to evaluate their effectiveness in HAR.The analytical dataset comprises 561 features sourced from the UCI-HAR database,forming the foundation for training the models.Additionally,the MHEALTH database is employed to replicate the modeling process for comparative purposes,while inclusion of the WISDM database,renowned for its challenging features,supports the framework’s resilience and adaptability.The ML-based models employ the methodologies including adaptive neuro-fuzzy inference system(ANFIS),support vector machine(SVM),and random forest(RF),for data training.In contrast,a DL-based model utilizes one-dimensional convolution neural network(1dCNN)to automate feature extraction.Furthermore,the recursive feature elimination(RFE)algorithm,which drives an ML-based estimator to eliminate low-participation features,helps identify the optimal features for enhancing model performance.The best accuracies of the ANFIS,SVM,RF,and 1dCNN models with meticulous featuring process achieve around 90%,96%,91%,and 93%,respectively.Comparative analysis using the MHEALTH dataset showcases the 1dCNN model’s remarkable perfect accuracy(100%),while the RF,SVM,and ANFIS models equipped with selected features achieve accuracies of 99.8%,99.7%,and 96.5%,respectively.Finally,when applied to the WISDM dataset,the DL-based and ML-based models attain accuracies of 91.4%and 87.3%,respectively,aligning with prior research findings.In conclusion,the proposed framework yields HAR models with commendable performance metrics,exhibiting its suitability for integration into the healthcare services system through AI-driven applications.
基金This research was funded by National Natural Science Foundation of China under Grant No.61806171Sichuan University of Science&Engineering Talent Project under Grant No.2021RC15+2 种基金Open Fund Project of Key Laboratory for Non-Destructive Testing and Engineering Computer of Sichuan Province Universities on Bridge Inspection and Engineering under Grant No.2022QYJ06Sichuan University of Science&Engineering Graduate Student Innovation Fund under Grant No.Y2023115The Scientific Research and Innovation Team Program of Sichuan University of Science and Technology under Grant No.SUSE652A006.
文摘While encryption technology safeguards the security of network communications,malicious traffic also uses encryption protocols to obscure its malicious behavior.To address the issues of traditional machine learning methods relying on expert experience and the insufficient representation capabilities of existing deep learning methods for encrypted malicious traffic,we propose an encrypted malicious traffic classification method that integrates global semantic features with local spatiotemporal features,called BERT-based Spatio-Temporal Features Network(BSTFNet).At the packet-level granularity,the model captures the global semantic features of packets through the attention mechanism of the Bidirectional Encoder Representations from Transformers(BERT)model.At the byte-level granularity,we initially employ the Bidirectional Gated Recurrent Unit(BiGRU)model to extract temporal features from bytes,followed by the utilization of the Text Convolutional Neural Network(TextCNN)model with multi-sized convolution kernels to extract local multi-receptive field spatial features.The fusion of features from both granularities serves as the ultimate multidimensional representation of malicious traffic.Our approach achieves accuracy and F1-score of 99.39%and 99.40%,respectively,on the publicly available USTC-TFC2016 dataset,and effectively reduces sample confusion within the Neris and Virut categories.The experimental results demonstrate that our method has outstanding representation and classification capabilities for encrypted malicious traffic.
基金This research was funded by the National Natural Science Foundation of China(Nos.71762010,62262019,62162025,61966013,12162012)the Hainan Provincial Natural Science Foundation of China(Nos.823RC488,623RC481,620RC603,621QN241,620RC602,121RC536)+1 种基金the Haikou Science and Technology Plan Project of China(No.2022-016)the Project supported by the Education Department of Hainan Province,No.Hnky2021-23.
文摘Artificial Intelligence(AI)is being increasingly used for diagnosing Vision-Threatening Diabetic Retinopathy(VTDR),which is a leading cause of visual impairment and blindness worldwide.However,previous automated VTDR detection methods have mainly relied on manual feature extraction and classification,leading to errors.This paper proposes a novel VTDR detection and classification model that combines different models through majority voting.Our proposed methodology involves preprocessing,data augmentation,feature extraction,and classification stages.We use a hybrid convolutional neural network-singular value decomposition(CNN-SVD)model for feature extraction and selection and an improved SVM-RBF with a Decision Tree(DT)and K-Nearest Neighbor(KNN)for classification.We tested our model on the IDRiD dataset and achieved an accuracy of 98.06%,a sensitivity of 83.67%,and a specificity of 100%for DR detection and evaluation tests,respectively.Our proposed approach outperforms baseline techniques and provides a more robust and accurate method for VTDR detection.