The scarcity of in-situ ocean observations poses a challenge for real-time information acquisition in the ocean. Among the crucial hydroacoustic environmental parameters, ocean sound velocity exhibits significant spatial and temporal variability and is highly relevant to oceanic research. In this study, we propose a new data-driven approach, leveraging deep learning techniques, for the prediction of sound velocity fields (SVFs). Our novel spatiotemporal prediction model, ST-LSTM-SA, combines Spatiotemporal Long Short-Term Memory (ST-LSTM) with a self-attention mechanism to enable accurate and real-time prediction of SVFs. To circumvent the limited amount of observational data, we employ transfer learning by first training the model using reanalysis datasets, followed by fine-tuning it using in-situ analysis data to obtain the final prediction model. By utilizing the historical 12-month SVFs as input, our model predicts the SVFs for the subsequent three months. We compare the performance of five models: Artificial Neural Networks (ANN), Long Short-Term Memory (LSTM), Convolutional LSTM (ConvLSTM), ST-LSTM, and our proposed ST-LSTM-SA model in a test experiment spanning 2019 to 2022. Our results demonstrate that the ST-LSTM-SA model significantly improves the prediction accuracy and stability of sound velocity in both temporal and spatial dimensions. The ST-LSTM-SA model not only accurately predicts the ocean sound velocity field (SVF), but also provides valuable insights for spatiotemporal prediction of other oceanic environmental variables.
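As a rough illustration of the ST-LSTM-SA idea, the sketch below runs an LSTM over flattened historical sound velocity fields and refines the hidden states with self-attention before a 3-month forecast head. Layer sizes, grid resolution, and variable names are our assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an ST-LSTM + self-attention forecaster for SVFs.
import torch
import torch.nn as nn

class STLSTMSA(nn.Module):
    def __init__(self, h=16, w=16, hidden=256, heads=4):
        super().__init__()
        self.h, self.w = h, w
        self.lstm = nn.LSTM(input_size=h * w, hidden_size=hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.head = nn.Linear(hidden, 3 * h * w)  # 3 future months

    def forward(self, x):                      # x: (B, 12, H, W) past 12 months
        b, t, _, _ = x.shape
        seq, _ = self.lstm(x.flatten(2))       # (B, 12, hidden)
        ctx, _ = self.attn(seq, seq, seq)      # self-attention over time steps
        out = self.head(ctx[:, -1])            # predict from the last step
        return out.view(b, 3, self.h, self.w)  # (B, 3, H, W) future SVFs

svf_hist = torch.randn(2, 12, 16, 16)          # toy batch of historical SVFs
print(STLSTMSA()(svf_hist).shape)              # torch.Size([2, 3, 16, 16])
```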
Accurate soil moisture (SM) prediction is critical for understanding hydrological processes. Physics-based (PB) models exhibit large uncertainties in SM predictions arising from uncertain parameterizations and insufficient representation of land-surface processes. In addition to PB models, deep learning (DL) models have been widely used in SM predictions recently. However, few pure DL models have notably high success rates because they lack physical information. Thus, we developed hybrid models to effectively integrate the outputs of PB models into DL models to improve SM predictions. To this end, we first developed a hybrid model based on the attention mechanism to take advantage of PB models at each forecast time scale (attention model). We further built an ensemble model that combined the advantages of different hybrid schemes (ensemble model). We utilized SM forecasts from the Global Forecast System to enhance the convolutional long short-term memory (ConvLSTM) model for 1–16 days of SM predictions. The performances of the proposed hybrid models were investigated and compared with two existing hybrid models. The results showed that the attention model could leverage the benefits of PB models and achieved the best predictability of drought events among the different hybrid models. Moreover, the ensemble model performed best among all hybrid models at all forecast time scales and different soil conditions. Notably, the ensemble model outperformed the pure DL model over 79.5% of in situ stations for 16-day predictions. These findings suggest that our proposed hybrid models can adequately exploit the benefits of PB model outputs to aid DL models in making SM predictions.
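One loose reading of the attention-based hybrid scheme, sketched under our own assumptions (a learned per-lead-time softmax weighting of the PB and DL forecasts; not the paper's architecture):

```python
# Attention-weighted fusion of physics-based and DL soil moisture forecasts.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self):
        super().__init__()
        # a 2-way attention score per lead time, computed from both forecasts
        self.score = nn.Linear(2, 2)

    def forward(self, sm_pb, sm_dl):           # each: (B, leads) SM forecasts
        pair = torch.stack([sm_pb, sm_dl], dim=-1)   # (B, leads, 2)
        w = torch.softmax(self.score(pair), dim=-1)  # attention over sources
        return (w * pair).sum(dim=-1)                # fused (B, leads) forecast

pb, dl = torch.rand(4, 16), torch.rand(4, 16)        # 16 forecast lead days
print(AttentionFusion()(pb, dl).shape)               # torch.Size([4, 16])
```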
The amount of oxygen blown into the converter is one of the key parameters for the control of the converter blowing process, which directly affects the tap-to-tap time of the converter. In this study, a hybrid model based on an oxygen balance mechanism (OBM) and a deep neural network (DNN) was established for predicting the oxygen blowing time in the converter. A three-step method was utilized in the hybrid model. First, the oxygen consumption volume was predicted by the OBM model and the DNN model, respectively. Second, a more accurate oxygen consumption volume was obtained by integrating the OBM and DNN models. Finally, the converter oxygen blowing time was calculated according to the oxygen consumption volume and the oxygen supply intensity of each heat. The proposed hybrid model was verified using actual data collected from an integrated steel plant in China and compared with a multiple linear regression model, the OBM model, and neural network models including the extreme learning machine, the back propagation neural network, and the DNN. The test results indicate that the hybrid model with a network structure of 3 hidden layers, 32-16-8 neurons per hidden layer, and a 0.1 learning rate has the best prediction accuracy and stronger generalization ability compared with the other models. The predicted hit ratio of oxygen consumption volume within an error of ±300 m^(3) is 96.67%; the determination coefficient (R^(2)) and root mean square error (RMSE) are 0.6984 and 150.03 m^(3), respectively. The oxygen blowing time prediction hit ratio within an error of ±0.6 min is 89.50%; R^(2) and RMSE are 0.9486 and 0.3592 min, respectively. As a result, the proposed model can effectively predict the oxygen consumption volume and oxygen blowing time in the converter.
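A minimal sketch of the DNN branch with the reported 32-16-8 hidden structure and 0.1 learning rate, plus the final step of dividing the predicted oxygen volume by the supply intensity. The input feature count and the supply-intensity value are placeholders, not values from the paper:

```python
# DNN branch of the hybrid model and the blowing-time calculation.
import torch
import torch.nn as nn

dnn = nn.Sequential(
    nn.Linear(10, 32), nn.ReLU(),   # 10 input features is an assumption
    nn.Linear(32, 16), nn.ReLU(),
    nn.Linear(16, 8), nn.ReLU(),
    nn.Linear(8, 1),                # predicted oxygen consumption volume, m^3
)
optimizer = torch.optim.SGD(dnn.parameters(), lr=0.1)  # reported learning rate

heat_features = torch.randn(1, 10)
o2_volume = dnn(heat_features)                 # m^3 (untrained, illustrative)
supply_intensity = torch.tensor([[60.0]])      # m^3/min, hypothetical value
blowing_time = o2_volume / supply_intensity    # min
print(blowing_time.item())
```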
In recent years, deep learning methods have gradually been applied to prediction tasks related to Arctic sea ice concentration, but relatively little research has been conducted for larger spatial and temporal scales, mainly due to the limited time coverage of observations and reanalysis data. Meanwhile, deep learning predictions of sea ice thickness (SIT) have yet to receive ample attention. In this study, two data-driven deep learning (DL) models are built based on the ConvLSTM and fully convolutional U-net (FC-Unet) algorithms, trained on CMIP6 historical simulations for transfer learning, and fine-tuned using reanalysis/observations. These models enable monthly predictions of Arctic SIT without considering the complex physical processes involved. Through comprehensive assessments of prediction skills by season and region, the results suggest that using a broader set of CMIP6 data for transfer learning, as well as incorporating multiple climate variables as predictors, contributes to better prediction results, although both DL models can effectively predict the spatiotemporal features of SIT anomalies. Regarding the predicted SIT anomalies of the FC-Unet model, the spatial correlations with reanalysis reach an average level of 89% over all months, while the temporal anomaly correlation coefficients are close to unity in most cases. The models also demonstrate robust performance in predicting SIT and sea ice extent (SIE) during extreme events. The effectiveness and reliability of the proposed deep transfer learning models in predicting Arctic SIT can facilitate more accurate pan-Arctic predictions, aiding climate change research and real-time business applications.
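The two-stage transfer-learning recipe described above can be sketched as a pretrain/fine-tune loop. The model here is a trivial convolutional stand-in (not the paper's ConvLSTM or FC-Unet), the data are random tensors, and the learning-rate choices are our assumptions:

```python
# Pretrain on CMIP6-style simulations, then fine-tune on reanalysis.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 1, 3, padding=1))  # stand-in architecture
loss_fn = nn.MSELoss()

def train(model, loader, lr, epochs):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:                  # x: climate predictors, y: SIT
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

cmip6 = [(torch.randn(8, 4, 32, 32), torch.randn(8, 1, 32, 32))]   # toy data
reanalysis = [(torch.randn(8, 4, 32, 32), torch.randn(8, 1, 32, 32))]
train(model, cmip6, lr=1e-3, epochs=1)       # stage 1: transfer learning
train(model, reanalysis, lr=1e-4, epochs=1)  # stage 2: fine-tuning, smaller lr
```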
BACKGROUND: Deep learning provides an efficient automatic image recognition method for small bowel (SB) capsule endoscopy (CE) that can assist physicians in diagnosis. However, the existing deep learning models present some unresolved challenges. AIM: To propose a novel and effective classification and detection model to automatically identify various SB lesions and their bleeding risks, and to label the lesions accurately so as to enhance the diagnostic efficiency of physicians and the ability to identify high-risk bleeding groups. METHODS: The proposed model represents a two-stage method that combines image classification with object detection. First, we utilized the improved ResNet-50 classification model to classify endoscopic images into SB lesion images, normal SB mucosa images, and invalid images. Then, the improved YOLO-V5 detection model was utilized to detect the type of lesion and its risk of bleeding, and the location of the lesion was marked. We constructed training and testing sets and compared model-assisted reading with physician reading. RESULTS: The accuracy of the model constructed in this study reached 98.96%, which was higher than the accuracy of other systems using only a single module. The sensitivity, specificity, and accuracy of the model-assisted reading detection of all images were 99.17%, 99.92%, and 99.86%, respectively, which were significantly higher than those of the endoscopists' diagnoses. The image processing time of the model was 48 ms/image, and the image processing time of the physicians was 0.40±0.24 s/image (P<0.001). CONCLUSION: The deep learning model of image classification combined with object detection exhibits a satisfactory diagnostic effect on a variety of SB lesions and their bleeding risks in CE images, which enhances the diagnostic efficiency of physicians and improves the ability of physicians to identify high-risk bleeding groups.
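A schematic of the two-stage pipeline as we read it, with torchvision's stock ResNet-50 standing in for the paper's improved classifier and a hypothetical detector callable standing in for the improved YOLO-V5:

```python
# Stage 1 classifies each frame; only lesion frames go to stage 2 detection.
import torch
from torchvision import models

classifier = models.resnet50(weights=None)
classifier.fc = torch.nn.Linear(classifier.fc.in_features, 3)  # lesion/normal/invalid
classifier.eval()

def read_frame(img, detector=None):
    """img: (1, 3, 224, 224) tensor; returns class id and optional detections."""
    with torch.no_grad():
        cls = classifier(img).argmax(1).item()   # 0: lesion, 1: normal, 2: invalid
    if cls == 0 and detector is not None:
        return cls, detector(img)                # stage 2: locate lesion, bleed risk
    return cls, None

print(read_frame(torch.randn(1, 3, 224, 224)))
```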
The technology of the tunnel boring machine (TBM) has been widely applied for underground construction worldwide; however, how to keep the TBM tunneling process safe and efficient remains a major concern. The advance rate is a key parameter of TBM operation and reflects the TBM-ground interaction, for which a reliable prediction helps optimize TBM performance. Here, we develop a hybrid neural network model, called Attention-ResNet-LSTM, for accurate prediction of the TBM advance rate. A database including geological properties and TBM operational parameters from the Yangtze River Natural Gas Pipeline Project is used to train and test this deep learning model. The evolutionary polynomial regression method is adopted to aid the selection of input parameters. The results of numerical experiments show that our Attention-ResNet-LSTM model outperforms other commonly used intelligent models, with a lower root mean square error and a lower mean absolute percentage error. Further, parametric analyses are conducted to explore the effects of the sequence length of historical data and the model architecture on the prediction accuracy. A correlation analysis between the input and output parameters is also implemented to provide guidance for adjusting relevant TBM operational parameters. The performance of our hybrid intelligent model is demonstrated in a case study of TBM tunneling through complex ground with variable strata. Finally, data collected from the Baimang River Tunnel Project in Shenzhen, China are used to further test the generalization of our model. The results indicate that, compared to the conventional ResNet-LSTM model, our model has a better predictive capability for scenarios with unknown datasets due to its self-adaptive characteristic.
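A compressed sketch of the Attention-ResNet-LSTM idea as we read it: a residual transform of the operating-history features, an LSTM over the time window, and attention pooling over time steps. All layer sizes and the sequence length are assumptions:

```python
# Toy residual + LSTM + attention-pooling regressor for the advance rate.
import torch
import torch.nn as nn

class AttentionResNetLSTM(nn.Module):
    def __init__(self, n_feat=8, hidden=64):
        super().__init__()
        self.res = nn.Linear(n_feat, n_feat)          # toy residual block
        self.lstm = nn.LSTM(n_feat, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)              # additive attention scores
        self.out = nn.Linear(hidden, 1)               # predicted advance rate

    def forward(self, x):                             # x: (B, T, n_feat)
        h = torch.relu(x + self.res(x))               # residual connection
        seq, _ = self.lstm(h)                         # (B, T, hidden)
        w = torch.softmax(self.attn(seq), dim=1)      # (B, T, 1) time weights
        return self.out((w * seq).sum(dim=1))         # (B, 1)

print(AttentionResNetLSTM()(torch.randn(4, 30, 8)).shape)  # torch.Size([4, 1])
```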
Static Poisson's ratio (vs) is crucial for determining geomechanical properties in petroleum applications, namely sand production. Some models have been used to predict vs; however, the published models were limited to specific data ranges with an average absolute percentage relative error (AAPRE) of more than 10%. The published gated recurrent unit (GRU) models do not consider trend analysis to show physical behaviors. In this study, we aim to develop a GRU model using trend analysis and three inputs for predicting vs based on a broad range of data: vs (0.1627-0.4492), bulk formation density (RHOB) (0.315-2.994 g/mL), compressional time (DTc) (44.43-186.9 μs/ft), and shear time (DTs) (72.9-341.2 μs/ft). The GRU model was evaluated using different approaches, including statistical error analyses. The GRU model showed the proper trends, and the model data ranges were wider than previous ones. The GRU model has the largest correlation coefficient (R) of 0.967 and the lowest AAPRE, average percent relative error (APRE), root mean square error (RMSE), and standard deviation (SD) of 3.228%, 1.054%, 4.389, and 0.013, respectively, compared to other models. The GRU model has high accuracy for the different datasets: training, validation, testing, and the whole dataset, with R and AAPRE values of 0.981 and 2.601%, 0.966 and 3.274%, 0.967 and 3.228%, and 0.977 and 2.861%, respectively. The group error analyses of all inputs show that the GRU model has less than 5% AAPRE for all input ranges, which is superior to other models that have AAPRE values of more than 10% at various ranges of inputs.
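A minimal GRU regressor over the paper's three inputs (RHOB, DTc, DTs) with one output (static Poisson's ratio). The network width, the sequence-over-depth framing, and the toy values are our assumptions; the trained model and data are not reproduced here:

```python
# GRU regression from well-log inputs to static Poisson's ratio.
import torch
import torch.nn as nn

class GRUPoisson(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.gru = nn.GRU(input_size=3, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (B, depth_steps, [RHOB, DTc, DTs])
        seq, _ = self.gru(x)
        return self.out(seq[:, -1])    # v_s at the last depth step

logs = torch.tensor([[[2.5, 80.0, 150.0]] * 10])   # toy depth sequence
print(GRUPoisson()(logs).shape)                    # torch.Size([1, 1])
```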
Recent developments in computer vision have presented novel opportunities to tackle complex healthcare issues, particularly in the field of lung disease diagnosis. One promising avenue involves the use of chest X-rays, which are commonly utilized in radiology. To fully exploit their potential, researchers have suggested utilizing deep learning methods to construct computer-aided diagnostic systems. However, constructing and compressing these systems presents a significant challenge, as it relies heavily on the expertise of data scientists. To tackle this issue, we propose an automated approach that utilizes an evolutionary algorithm (EA) to optimize the design and compression of a convolutional neural network (CNN) for X-ray image classification. Our approach accurately classifies radiography images and detects potential chest abnormalities and infections, including COVID-19. Furthermore, our approach incorporates transfer learning, where a CNN model pre-trained on a vast dataset of chest X-ray images is fine-tuned for the specific task of detecting COVID-19. This method can help reduce the amount of labeled data required for the task and enhance the overall performance of the model. We have validated our method via a series of experiments against state-of-the-art architectures.
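A toy evolutionary loop in the spirit of the approach above. The fitness function here is a placeholder that only rewards smaller networks (compression pressure); a real run would score validation accuracy on chest X-rays. The genome fields and population sizes are invented for illustration:

```python
# Toy evolutionary search over CNN hyperparameters.
import random
import torch.nn as nn

def build(genome):
    return nn.Sequential(
        nn.Conv2d(3, genome["f1"], genome["k"], padding="same"), nn.ReLU(),
        nn.Conv2d(genome["f1"], genome["f2"], genome["k"], padding="same"),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(genome["f2"], 2),
    )

def fitness(genome):                        # placeholder: compression only
    return -sum(p.numel() for p in build(genome).parameters())

def mutate(g):
    g = dict(g)
    g[random.choice(["f1", "f2"])] = random.choice([8, 16, 32, 64])
    return g

pop = [{"f1": 32, "f2": 64, "k": 3} for _ in range(6)]
for _ in range(5):                          # a few generations
    pop.sort(key=fitness, reverse=True)     # keep the fittest genomes
    pop = pop[:3] + [mutate(g) for g in pop[:3]]
print(pop[0], fitness(pop[0]))
```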
To perform well, deep learning (DL) models have to be trained well. Which optimizer should be adopted? We answer this question by discussing how optimizers have evolved from traditional methods like gradient descent to more advanced techniques that address the challenges posed by high-dimensional and non-convex problem spaces. Ongoing challenges include hyperparameter sensitivity, balancing convergence against generalization performance, and improving the interpretability of optimization processes. Researchers continue to seek robust, efficient, and universally applicable optimizers to advance the field of DL across various domains.
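A small, self-contained comparison of two optimizer generations on a toy quadratic: plain SGD versus Adam. This is purely illustrative; the step counts and learning rates are arbitrary:

```python
# Compare optimizers on a convex stand-in for a training loss.
import torch

def minimize(opt_cls, steps=200, **kw):
    x = torch.tensor([5.0, -3.0], requires_grad=True)
    opt = opt_cls([x], **kw)
    for _ in range(steps):
        opt.zero_grad()
        loss = (x ** 2).sum()          # toy objective with minimum at 0
        loss.backward()
        opt.step()
    return (x.detach() ** 2).sum().item()

print("SGD :", minimize(torch.optim.SGD, lr=0.1))
print("Adam:", minimize(torch.optim.Adam, lr=0.1))
```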
Identifying fractures along a well trajectory is of immense significance in determining the subsurface fracture network distribution. Typically, conventional logs exhibit responses in fracture zones, and almost all wells have such logs. However, detecting fractures through logging responses can be challenging since the log response intensity is weak and complex. To address this problem, we propose a deep learning model for fracture identification using deep forest, which is based on a cascade structure comprising multi-layer random forests. Deep forest can extract complex nonlinear features of fractures in conventional logs through ensemble learning and deep learning. The proposed approach is tested using a dataset from the Oligocene to Miocene tight carbonate reservoirs in the D oilfield, Zagros Basin, Middle East, and eight logs are selected to construct the fracture identification model based on a sensitivity analysis of logging curves against fractures. The log package includes the gamma-ray, caliper, density, compensated neutron, acoustic transit time, and shallow, deep, and flushed zone resistivity logs. Experiments have shown that the deep forest obtains high recall and accuracy (>92%). In a blind well test, results from the deep forest model have a good correlation with fracture observations from cores. Compared to the random forest method, a widely used ensemble learning method, the proposed deep forest model improves accuracy by approximately 4.6%.
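A minimal cascade-forest sketch: each level's out-of-fold class probabilities are appended to the original log features before the next level, echoing the multi-layer random-forest structure described above. The cascade depth, forest sizes, and data are assumptions:

```python
# Cascade of random forests: level outputs augment the raw log features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))              # 8 conventional logs (toy values)
y = (X[:, 0] + rng.normal(size=500) > 0).astype(int)   # toy fracture labels

level_input = X
for level in range(3):                     # three cascade levels
    rf = RandomForestClassifier(n_estimators=100, random_state=level)
    proba = cross_val_predict(rf, level_input, y, cv=3, method="predict_proba")
    rf.fit(level_input, y)                 # refit this level for deployment
    level_input = np.hstack([X, proba])    # next level sees logs + probabilities

print("final-level OOF accuracy:", (proba.argmax(1) == y).mean())
```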
Objective: To observe the value of deep learning (DL) models for automatic classification of echocardiographic views. Methods: A total of 100 patients after heart transplantation were retrospectively enrolled and divided into a training set, a validation set, and a test set at a ratio of 7:2:1. ResNet18, ResNet34, Swin Transformer, and Swin Transformer V2 models were established based on the 2D apical two-chamber view, 2D apical three-chamber view, 2D apical four-chamber view, 2D subcostal view, parasternal long-axis view of the left ventricle, short-axis view of the great arteries, short-axis view of the apex of the left ventricle, short-axis view of the papillary muscle of the left ventricle, short-axis view of the mitral valve of the left ventricle, as well as 3D and CDFI views of echocardiography. The accuracy, precision, recall, F1 score, and confusion matrix were used to evaluate the performance of each model for automatically classifying echocardiographic views. An interactive interface was designed based on the Qt Designer software and deployed on the desktop. Results: The performance of the models for automatically classifying echocardiographic views in the test set was good overall, with relatively poor performance for the 2D short-axis views of the left ventricle and superior performance for the 3D and CDFI views. Swin Transformer V2 was the optimal model for automatically classifying echocardiographic views, with accuracy, precision, recall, and F1 score of 92.56%, 89.01%, 89.97%, and 89.31%, respectively; it also had the highest diagonal values in the confusion matrix and showed the best classification effect on the various views in the t-SNE figure. Conclusion: DL models performed well for automatically classifying echocardiographic views, and the Swin Transformer V2 model performed best. Using the interactive classification interface could improve the interpretability of prediction results to some extent.
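An illustrative evaluation scaffold: torchvision's Swin V2 tiny variant stands in for the paper's model, with accuracy, precision, recall, and F1 computed the way the study reports them. The 11-view label set, image size, and random inputs are assumptions:

```python
# Evaluate a Swin V2 stand-in classifier with the study's metrics.
import torch
from torchvision.models import swin_v2_t
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

model = swin_v2_t(weights=None)
model.head = torch.nn.Linear(model.head.in_features, 11)   # 11 echo views
model.eval()

images = torch.randn(8, 3, 256, 256)                       # toy test batch
y_true = torch.randint(0, 11, (8,))
with torch.no_grad():
    y_pred = model(images).argmax(1)

p, r, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0)
print(accuracy_score(y_true, y_pred), p, r, f1)
```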
Background: Gallbladder carcinoma (GBC) is highly malignant, and its early diagnosis remains difficult. This study aimed to develop a deep learning model based on contrast-enhanced computed tomography (CT) images to assist radiologists in identifying GBC. Methods: We retrospectively enrolled 278 patients with gallbladder lesions (>10 mm) who underwent contrast-enhanced CT and cholecystectomy, and divided them into the training (n=194) and validation (n=84) datasets. The deep learning model was developed based on the ResNet50 network. Radiomics and clinical models were built based on the support vector machine (SVM) method. We comprehensively compared the performance of the deep learning, radiomics, and clinical models, and of three radiologists. Results: Three radiomics features, including LoG_3.0 gray-level size zone matrix zone variance, HHL first-order kurtosis, and LHL gray-level co-occurrence matrix dependence variance, were significantly different between benign gallbladder lesions and GBC and were selected for developing the radiomics model. Multivariate regression analysis revealed that age ≥65 years [odds ratio (OR)=4.4, 95% confidence interval (CI): 2.1-9.1, P<0.001], lesion size (OR=2.6, 95% CI: 1.6-4.1, P<0.001), and CA-19-9 >37 U/mL (OR=4.0, 95% CI: 1.6-10.0, P=0.003) were significant clinical risk factors of GBC. The deep learning model achieved area under the receiver operating characteristic curve (AUC) values of 0.864 (95% CI: 0.814-0.915) and 0.857 (95% CI: 0.773-0.942) in the training and validation datasets, which were comparable with the radiomics and clinical models and the three radiologists. The sensitivity of the deep learning model was the highest in both the training [90% (95% CI: 82%-96%)] and validation [85% (95% CI: 68%-95%)] datasets. Conclusions: The deep learning model may be a useful tool for radiologists to distinguish between GBC and benign gallbladder lesions.
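A sketch of the radiomics arm only: an SVM over three selected radiomics features, scored by ROC-AUC as in the study. The feature values and labels below are simulated; only the modeling pattern is illustrated:

```python
# SVM radiomics model with a train/validation split and AUC scoring.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(278, 3))                  # 3 radiomics features (toy)
y = rng.integers(0, 2, size=278)               # 0: benign, 1: GBC (toy labels)
Xtr, Xva, ytr, yva = train_test_split(X, y, test_size=84, random_state=0)

svm = make_pipeline(StandardScaler(), SVC(probability=True))
svm.fit(Xtr, ytr)
print("validation AUC:", roc_auc_score(yva, svm.predict_proba(Xva)[:, 1]))
```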
Intrusion detection is a predominant task that monitors and protects the network infrastructure. Therefore, many datasets have been published and investigated by researchers to analyze and understand the problem of intrusion prediction and detection. In particular, the Network Security Laboratory-Knowledge Discovery in Databases (NSL-KDD) is an extensively used benchmark dataset for evaluating intrusion detection systems (IDSs), as it incorporates various network traffic attacks. It is worth mentioning that a large number of studies have tackled the problem of intrusion detection using machine learning models, but the performance of these models often decreases when evaluated on new attacks. This has led to the utilization of deep learning techniques, which have showcased significant potential for processing large datasets and thereby improving detection accuracy. For that reason, this paper focuses on the role of stacking deep learning models, including a convolutional neural network (CNN) and a deep neural network (DNN), in improving the intrusion detection rate on the NSL-KDD dataset. Each base model is trained on the NSL-KDD dataset to extract significant features. Once the base models have been trained, the stacking process proceeds to the second stage, where a simple meta-model is trained on the predictions generated by the proposed base models. Combining the predictions allows the meta-model to distinguish different classes of attacks and increase the detection rate. Our experimental evaluations using the NSL-KDD dataset have shown the efficacy of stacking deep learning models for intrusion detection. The performance of the ensemble of base models, combined with the meta-model, exceeds the performance of the individual models. Our stacking model has attained an accuracy of 99% and an average F1-score of 93% for the multi-classification scenario. Besides, the training time of the proposed ensemble model is lower than that of benchmark techniques, demonstrating its efficiency and robustness.
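The stacking pattern itself can be shown in a few lines: base-model class probabilities feed a simple meta-model. MLPs stand in for the paper's CNN/DNN base learners, and the NSL-KDD features and labels below are simulated:

```python
# Stacking: base-model probabilities feed a logistic-regression meta-model.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import StackingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 41))                # 41 NSL-KDD features (toy)
y = rng.integers(0, 5, size=1000)              # 5 traffic classes (toy labels)

stack = StackingClassifier(
    estimators=[
        ("cnn_like", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300)),
        ("dnn_like", MLPClassifier(hidden_layer_sizes=(128,), max_iter=300)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),  # the meta-model
    stack_method="predict_proba",
)
stack.fit(X, y)
print("train accuracy:", stack.score(X, y))
```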
Isogeometric analysis (IGA) is known to show advanced features compared to traditional finite element approaches. Using IGA, one may accurately obtain the geometrically nonlinear bending behavior of plates with functional grading (FG). However, the procedure is usually complex and often time-consuming. We thus put forward a deep learning method to model the geometrically nonlinear bending behavior of FG plates, bypassing the complex IGA simulation process. A bidirectional long short-term memory (BLSTM) recurrent neural network is trained using the load and gradient index as inputs and the displacement responses as outputs. The nonlinear relationship between the outputs and the inputs is constructed using machine learning so that the displacements can be directly estimated by the deep learning network. To provide enough training data, we use S-FSDT Von-Karman IGA and obtain the displacement responses for different loads and gradient indexes. Results show that the recognition error is low, and demonstrate the feasibility of the deep learning technique as a fast and accurate alternative to IGA for modeling the geometrically nonlinear bending behavior of FG plates.
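A sketch of the BLSTM surrogate idea: load level and gradient index in, displacement responses out, replacing the IGA solve at inference time. The sequence length, hidden size, and number of output nodes are our assumptions:

```python
# Bidirectional LSTM surrogate mapping (load, gradient index) to displacements.
import torch
import torch.nn as nn

class BLSTMSurrogate(nn.Module):
    def __init__(self, hidden=64, n_nodes=25):
        super().__init__()
        self.rnn = nn.LSTM(input_size=2, hidden_size=hidden,
                           batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_nodes)   # displacement field

    def forward(self, x):              # x: (B, load_steps, [load, grad_index])
        seq, _ = self.rnn(x)
        return self.out(seq)           # displacements at every load step

steps = torch.stack([torch.linspace(0, 1, 20),        # increasing load
                     torch.full((20,), 0.5)], dim=-1) # fixed gradient index
print(BLSTMSurrogate()(steps.unsqueeze(0)).shape)     # torch.Size([1, 20, 25])
```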
Cyberspace is extremely dynamic, with new attacks arising daily. Protecting cybersecurity controls is vital for network security. Deep learning (DL) models find widespread use across various fields, with cybersecurity being one of the most crucial due to their rapid cyberattack detection capabilities on networks and hosts. The capabilities of DL in feature learning and analyzing extensive data volumes lead to the recognition of network traffic patterns. This study presents novel lightweight DL models, known as Cybernet models, for the detection and recognition of various cyber Distributed Denial of Service (DDoS) attacks. These models were constructed to have a reasonable number of learnable parameters, i.e., fewer than 225,000, hence the name "lightweight." This not only helps reduce the number of computations required but also results in faster training and inference times. Additionally, these models were designed to extract features in parallel from 1D Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM), which makes them unique compared to earlier existing architectures and results in better performance measures. To validate their robustness and effectiveness, they were tested on the CIC-DDoS2019 dataset, which is an imbalanced and large dataset that contains different types of DDoS attacks. Experimental results revealed that both models yielded promising results, with 99.99% for the detection model and 99.76% for the recognition model in terms of accuracy, precision, recall, and F1 score. Furthermore, they outperformed the existing state-of-the-art models proposed for the same task. Thus, the proposed models can be used in cybersecurity research domains to successfully identify different types of attacks with a high detection and recognition rate.
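A sketch of the parallel 1D-CNN + LSTM design, with the parameter count checked against the stated "lightweight" budget of 225,000. The branch widths and the feature count are assumptions tuned only to stay under that budget:

```python
# Parallel 1D-CNN and LSTM branches, concatenated before the classifier.
import torch
import torch.nn as nn

class CybernetLike(nn.Module):
    def __init__(self, n_feat=77, n_classes=2, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv1d(1, 32, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.lstm = nn.LSTM(n_feat, hidden, batch_first=True)
        self.fc = nn.Linear(32 + hidden, n_classes)

    def forward(self, x):                        # x: (B, n_feat) flow features
        a = self.cnn(x.unsqueeze(1))             # branch 1: (B, 32)
        b, _ = self.lstm(x.unsqueeze(1))         # branch 2: (B, 1, hidden)
        return self.fc(torch.cat([a, b[:, -1]], dim=1))

m = CybernetLike()
print(sum(p.numel() for p in m.parameters()))    # well under 225,000
print(m(torch.randn(4, 77)).shape)               # torch.Size([4, 2])
```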
AIM: To establish pupil diameter measurement algorithms based on infrared images that can be used in real-world clinical settings. METHODS: A total of 188 patients from the outpatient clinic at He Eye Specialist Shenyang Hospital from September to December 2022 were included, and 13470 infrared pupil images were collected for the study. All infrared images for pupil segmentation were labeled using the Labelme software. The computation of pupil diameter is divided into four steps: image pre-processing, pupil identification and localization, pupil segmentation, and diameter calculation. Two major models are used in the computation process: the modified YoloV3 and DeeplabV3+ models, which must be trained beforehand. RESULTS: The test dataset included 1348 infrared pupil images. On the test dataset, the modified YoloV3 model had a detection rate of 99.98% and an average precision (AP) of 0.80 for pupils. The DeeplabV3+ model achieved a background intersection over union (IOU) of 99.23%, a pupil IOU of 93.81%, and a mean IOU of 96.52%. The pupil diameters in the test dataset ranged from 20 to 56 pixels, with a mean of 36.06±6.85 pixels. The absolute error in pupil diameters between predicted and actual values ranged from 0 to 7 pixels, with a mean absolute error (MAE) of 1.06±0.96 pixels. CONCLUSION: This study demonstrates a robust infrared image-based pupil diameter measurement algorithm, proven to be highly accurate and reliable for clinical application.
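The last step of the pipeline, diameter calculation, can be sketched from a binary segmentation mask: take the equivalent-circle diameter from the pupil's pixel area. The specific formula is our assumption about one reasonable implementation; the mask here is synthetic:

```python
# Equivalent-circle pupil diameter from a binary segmentation mask.
import numpy as np

def pupil_diameter_px(mask: np.ndarray) -> float:
    """mask: 2-D array, 1 inside the segmented pupil, 0 elsewhere."""
    area = float(mask.sum())
    return 2.0 * np.sqrt(area / np.pi)     # diameter of a circle of equal area

yy, xx = np.mgrid[:128, :128]
mask = ((yy - 64) ** 2 + (xx - 64) ** 2 <= 18 ** 2).astype(np.uint8)
print(round(pupil_diameter_px(mask), 2))   # ~36 px, matching an 18 px radius
```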
Objective: Matrix metalloproteinase 13 (MMP13) is an extracellular matrix protease that affects the progression of atherosclerotic plaques and arterial thrombi by degrading collagens, modifying protein structures, and regulating inflammatory responses, but its role in deep vein thrombosis (DVT) has not been determined. The purpose of this study was to investigate the potential effects of MMP13 and MMP13-related genes on the formation of DVT. Methods: We altered the expression level of MMP13 in vivo and conducted a transcriptome study to examine the expression of and relationship between MMP13 and MMP13-related genes in a mouse model of DVT. After screening genes possibly related to MMP13 in DVT mice, the expression levels of candidate genes in human umbilical vein endothelial cells (HUVECs) and the venous wall were evaluated. The effect of MMP13 on platelet aggregation in HUVECs was investigated in vitro. Results: Among the differentially expressed genes, interleukin 1 beta, podoplanin (Pdpn), and factor VIII von Willebrand factor (F8VWF) were selected for analysis in mice. When MMP13 was inhibited, the expression level of PDPN decreased significantly in vitro. In HUVECs, overexpression of MMP13 led to an increase in the expression level of PDPN and induced platelet aggregation, while transfection of PDPN-siRNA weakened the ability of MMP13 to increase platelet aggregation. Conclusions: Inhibiting the expression of MMP13 could reduce the burden of DVT in mice. The mechanism involves downregulation of Pdpn expression through MMP13, which could provide a novel gene target for DVT diagnosis and treatment.
This study was carried out to explore the mechanism underlying the inhibition of platelet activation by kelp fucoidans in deep venous thrombosis (DVT) mice. In the control and sham mice, the walls of the deep vein were regular and smooth with intact intima, myometrium, and adventitia. The blood vessel was wrapped with tissue and there was no thrombosis in the lumen. In the DVT model, the wall was uneven with thickened intima, myometrium, and adventitia. After treatment with the fucoidans LF1 and LF2, the thrombus was dissolved and the blood vessel was recanalized. Compared with the control group, the ROS content, the ET-1 and VWF content, and the expression of PKC-β and NF-κB in the model were significantly higher (P<0.05); these levels were significantly reduced following treatment with LF2 and LF1. Compared with H_(2)O_(2)-treated HUVECs, combined LF1 and LF2 treatment resulted in a significant decrease in the expression of PKC-β, NF-κB, VWF, and TM protein (P<0.05). It is clear that LF1 and LF2 reduce DVT-induced ET-1, VWF, and TM expression and the production of ROS, thus inhibiting the activation of the PKC-β/NF-κB signaling pathway and of the coagulation system, and ultimately reducing the formation of venous thrombus.
Recently, anomaly detection (AD) in streaming data has gained significant attention among research communities due to its applicability in finance, business, healthcare, education, etc. Recent developments in deep learning (DL) models have proven helpful in the detection and classification of anomalies. This article designs an oversampling with optimal deep learning-based streaming data classification (OS-ODLSDC) model. The aim of the OS-ODLSDC model is to recognize and classify the presence of anomalies in streaming data. The proposed OS-ODLSDC model initially undergoes a preprocessing step. Since streaming data is unbalanced, the support vector machine-Synthetic Minority Over-sampling Technique (SVM-SMOTE) is applied for the oversampling process. Besides, the OS-ODLSDC model employs bidirectional long short-term memory (BiLSTM) for AD and classification. Finally, the root mean square propagation (RMSProp) optimizer is applied for optimal hyperparameter tuning of the BiLSTM model. For ensuring the promising performance of the OS-ODLSDC model, a wide-ranging experimental analysis is performed using three benchmark datasets: CICIDS 2018, KDD-Cup 1999, and NSL-KDD.
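A sketch of the two key steps as described: SVM-SMOTE oversampling of the minority (anomaly) class, then a BiLSTM classifier trained with RMSProp. The data are simulated, the network sizes are assumptions, and imbalanced-learn's SVMSMOTE is assumed available:

```python
# SVM-SMOTE oversampling followed by one BiLSTM training step with RMSProp.
import numpy as np
import torch
import torch.nn as nn
from imblearn.over_sampling import SVMSMOTE

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 10)).astype(np.float32)
y = (rng.random(400) < 0.1).astype(int)          # ~10% anomalies (imbalanced)
X_bal, y_bal = SVMSMOTE(random_state=0).fit_resample(X, y)

class BiLSTM(nn.Module):
    def __init__(self, n_feat=10, hidden=32):
        super().__init__()
        self.rnn = nn.LSTM(n_feat, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, 2)
    def forward(self, x):
        seq, _ = self.rnn(x.unsqueeze(1))
        return self.fc(seq[:, -1])

model = BiLSTM()
opt = torch.optim.RMSprop(model.parameters(), lr=1e-3)   # RMSProp optimizer
xb, yb = torch.from_numpy(X_bal), torch.from_numpy(y_bal)
loss = nn.CrossEntropyLoss()(model(xb), yb)
loss.backward(); opt.step()
print("balanced samples:", len(y_bal), "loss:", loss.item())
```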
When existing deep learning models are used for road extraction tasks from high-resolution images, they are easily affected by noise factors such as tree and building occlusion and complex backgrounds, resulting in incomplete road extraction and low accuracy. We propose introducing spatial and channel attention modules into the convolutional neural network ConvNeXt. ConvNeXt is then used as the backbone network, which cooperates with the perceptual analysis network UPerNet, retains the semantic segmentation detection head, and builds a new model, ConvNeXt-UPerNet, to suppress noise interference. Training on the open-source DeepGlobe and CHN6-CUG datasets and introducing DiceLoss on top of CrossEntropyLoss solves the problem of positive and negative sample imbalance. Experimental results show that the new network model achieves the following performance on the DeepGlobe dataset: 79.40% for precision (Pre), 97.93% for accuracy (Acc), 69.28% for intersection over union (IoU), and 83.56% for mean intersection over union (MIoU). On the CHN6-CUG dataset, the model achieves respective values of 78.17% for Pre, 97.63% for Acc, 65.4% for IoU, and 81.46% for MIoU. Compared with other network models, the fused ConvNeXt-UPerNet model can better extract road information when faced with the noise contained in high-resolution remote sensing images. It also fuses multiscale image feature information with unified perception, ultimately improving the generalization ability of deep learning technology in extracting complex roads from high-resolution remote sensing images.
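A minimal sketch of the combined loss described above: CrossEntropyLoss plus a Dice term to counter the road/background class imbalance. The Dice implementation and the 1:1 weighting are our assumptions:

```python
# Combined cross-entropy + Dice loss for imbalanced road segmentation.
import torch
import torch.nn as nn
import torch.nn.functional as F

def dice_loss(logits, target, eps=1e-6):
    """logits: (B, C, H, W); target: (B, H, W) integer class labels."""
    probs = F.softmax(logits, dim=1)
    onehot = F.one_hot(target, logits.shape[1]).permute(0, 3, 1, 2).float()
    inter = (probs * onehot).sum(dim=(2, 3))
    denom = probs.sum(dim=(2, 3)) + onehot.sum(dim=(2, 3))
    return 1.0 - ((2 * inter + eps) / (denom + eps)).mean()

def combined_loss(logits, target):
    return F.cross_entropy(logits, target) + dice_loss(logits, target)

logits = torch.randn(2, 2, 64, 64, requires_grad=True)  # road vs background
target = torch.randint(0, 2, (2, 64, 64))
print(combined_loss(logits, target).item())
```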
基金supported by the National Natural Science Foundation of China(Grant No.42004030)Basic Scientific Fund for National Public Research Institutes of China(Grant No.2022S03)+1 种基金Science and Technology Innovation Project(LSKJ202205102)funded by Laoshan Laboratory,and the National Key Research and Development Program of China(2020YFB0505805).
文摘The scarcity of in-situ ocean observations poses a challenge for real-time information acquisition in the ocean.Among the crucial hydroacoustic environmental parameters,ocean sound velocity exhibits significant spatial and temporal variability and it is highly relevant to oceanic research.In this study,we propose a new data-driven approach,leveraging deep learning techniques,for the prediction of sound velocity fields(SVFs).Our novel spatiotemporal prediction model,STLSTM-SA,combines Spatiotemporal Long Short-Term Memory(ST-LSTM) with a self-attention mechanism to enable accurate and real-time prediction of SVFs.To circumvent the limited amount of observational data,we employ transfer learning by first training the model using reanalysis datasets,followed by fine-tuning it using in-situ analysis data to obtain the final prediction model.By utilizing the historical 12-month SVFs as input,our model predicts the SVFs for the subsequent three months.We compare the performance of five models:Artificial Neural Networks(ANN),Long ShortTerm Memory(LSTM),Convolutional LSTM(ConvLSTM),ST-LSTM,and our proposed ST-LSTM-SA model in a test experiment spanning 2019 to 2022.Our results demonstrate that the ST-LSTM-SA model significantly improves the prediction accuracy and stability of sound velocity in both temporal and spatial dimensions.The ST-LSTM-SA model not only accurately predicts the ocean sound velocity field(SVF),but also provides valuable insights for spatiotemporal prediction of other oceanic environmental variables.
基金supported by the Natural Science Foundation of China(Grant Nos.42088101 and 42205149)Zhongwang WEI was supported by the Natural Science Foundation of China(Grant No.42075158)+1 种基金Wei SHANGGUAN was supported by the Natural Science Foundation of China(Grant No.41975122)Yonggen ZHANG was supported by the National Natural Science Foundation of Tianjin(Grant No.20JCQNJC01660).
文摘Accurate soil moisture(SM)prediction is critical for understanding hydrological processes.Physics-based(PB)models exhibit large uncertainties in SM predictions arising from uncertain parameterizations and insufficient representation of land-surface processes.In addition to PB models,deep learning(DL)models have been widely used in SM predictions recently.However,few pure DL models have notably high success rates due to lacking physical information.Thus,we developed hybrid models to effectively integrate the outputs of PB models into DL models to improve SM predictions.To this end,we first developed a hybrid model based on the attention mechanism to take advantage of PB models at each forecast time scale(attention model).We further built an ensemble model that combined the advantages of different hybrid schemes(ensemble model).We utilized SM forecasts from the Global Forecast System to enhance the convolutional long short-term memory(ConvLSTM)model for 1–16 days of SM predictions.The performances of the proposed hybrid models were investigated and compared with two existing hybrid models.The results showed that the attention model could leverage benefits of PB models and achieved the best predictability of drought events among the different hybrid models.Moreover,the ensemble model performed best among all hybrid models at all forecast time scales and different soil conditions.It is highlighted that the ensemble model outperformed the pure DL model over 79.5%of in situ stations for 16-day predictions.These findings suggest that our proposed hybrid models can adequately exploit the benefits of PB model outputs to aid DL models in making SM predictions.
基金financially supported by the National Natural Science Foundation of China (Nos.51974023 and52374321)the funding of State Key Laboratory of Advanced Metallurgy,University of Science and Technology Beijing,China (No.41620007)。
文摘The amount of oxygen blown into the converter is one of the key parameters for the control of the converter blowing process,which directly affects the tap-to-tap time of converter. In this study, a hybrid model based on oxygen balance mechanism (OBM) and deep neural network (DNN) was established for predicting oxygen blowing time in converter. A three-step method was utilized in the hybrid model. First, the oxygen consumption volume was predicted by the OBM model and DNN model, respectively. Second, a more accurate oxygen consumption volume was obtained by integrating the OBM model and DNN model. Finally, the converter oxygen blowing time was calculated according to the oxygen consumption volume and the oxygen supply intensity of each heat. The proposed hybrid model was verified using the actual data collected from an integrated steel plant in China, and compared with multiple linear regression model, OBM model, and neural network model including extreme learning machine, back propagation neural network, and DNN. The test results indicate that the hybrid model with a network structure of 3 hidden layer layers, 32-16-8 neurons per hidden layer, and 0.1 learning rate has the best prediction accuracy and stronger generalization ability compared with other models. The predicted hit ratio of oxygen consumption volume within the error±300 m^(3)is 96.67%;determination coefficient (R^(2)) and root mean square error (RMSE) are0.6984 and 150.03 m^(3), respectively. The oxygen blow time prediction hit ratio within the error±0.6 min is 89.50%;R2and RMSE are0.9486 and 0.3592 min, respectively. As a result, the proposed model can effectively predict the oxygen consumption volume and oxygen blowing time in the converter.
基金supported by the National Natural Science Foundation of China(Grant Nos.41976193 and 42176243).
文摘In recent years,deep learning methods have gradually been applied to prediction tasks related to Arctic sea ice concentration,but relatively little research has been conducted for larger spatial and temporal scales,mainly due to the limited time coverage of observations and reanalysis data.Meanwhile,deep learning predictions of sea ice thickness(SIT)have yet to receive ample attention.In this study,two data-driven deep learning(DL)models are built based on the ConvLSTM and fully convolutional U-net(FC-Unet)algorithms and trained using CMIP6 historical simulations for transfer learning and fine-tuned using reanalysis/observations.These models enable monthly predictions of Arctic SIT without considering the complex physical processes involved.Through comprehensive assessments of prediction skills by season and region,the results suggest that using a broader set of CMIP6 data for transfer learning,as well as incorporating multiple climate variables as predictors,contribute to better prediction results,although both DL models can effectively predict the spatiotemporal features of SIT anomalies.Regarding the predicted SIT anomalies of the FC-Unet model,the spatial correlations with reanalysis reach an average level of 89%over all months,while the temporal anomaly correlation coefficients are close to unity in most cases.The models also demonstrate robust performances in predicting SIT and SIE during extreme events.The effectiveness and reliability of the proposed deep transfer learning models in predicting Arctic SIT can facilitate more accurate pan-Arctic predictions,aiding climate change research and real-time business applications.
基金The Shanxi Provincial Administration of Traditional Chinese Medicine,No.2023ZYYDA2005.
文摘BACKGROUND Deep learning provides an efficient automatic image recognition method for small bowel(SB)capsule endoscopy(CE)that can assist physicians in diagnosis.However,the existing deep learning models present some unresolved challenges.AIM To propose a novel and effective classification and detection model to automatically identify various SB lesions and their bleeding risks,and label the lesions accurately so as to enhance the diagnostic efficiency of physicians and the ability to identify high-risk bleeding groups.METHODS The proposed model represents a two-stage method that combined image classification with object detection.First,we utilized the improved ResNet-50 classification model to classify endoscopic images into SB lesion images,normal SB mucosa images,and invalid images.Then,the improved YOLO-V5 detection model was utilized to detect the type of lesion and its risk of bleeding,and the location of the lesion was marked.We constructed training and testing sets and compared model-assisted reading with physician reading.RESULTS The accuracy of the model constructed in this study reached 98.96%,which was higher than the accuracy of other systems using only a single module.The sensitivity,specificity,and accuracy of the model-assisted reading detection of all images were 99.17%,99.92%,and 99.86%,which were significantly higher than those of the endoscopists’diagnoses.The image processing time of the model was 48 ms/image,and the image processing time of the physicians was 0.40±0.24 s/image(P<0.001).CONCLUSION The deep learning model of image classification combined with object detection exhibits a satisfactory diagnostic effect on a variety of SB lesions and their bleeding risks in CE images,which enhances the diagnostic efficiency of physicians and improves the ability of physicians to identify high-risk bleeding groups.
基金The research was supported by the National Natural Science Foundation of China(Grant No.52008307)the Shanghai Sci-ence and Technology Innovation Program(Grant No.19DZ1201004)The third author would like to acknowledge the funding by the China Postdoctoral Science Foundation(Grant No.2023M732670).
文摘The technology of tunnel boring machine(TBM)has been widely applied for underground construction worldwide;however,how to ensure the TBM tunneling process safe and efficient remains a major concern.Advance rate is a key parameter of TBM operation and reflects the TBM-ground interaction,for which a reliable prediction helps optimize the TBM performance.Here,we develop a hybrid neural network model,called Attention-ResNet-LSTM,for accurate prediction of the TBM advance rate.A database including geological properties and TBM operational parameters from the Yangtze River Natural Gas Pipeline Project is used to train and test this deep learning model.The evolutionary polynomial regression method is adopted to aid the selection of input parameters.The results of numerical exper-iments show that our Attention-ResNet-LSTM model outperforms other commonly-used intelligent models with a lower root mean square error and a lower mean absolute percentage error.Further,parametric analyses are conducted to explore the effects of the sequence length of historical data and the model architecture on the prediction accuracy.A correlation analysis between the input and output parameters is also implemented to provide guidance for adjusting relevant TBM operational parameters.The performance of our hybrid intelligent model is demonstrated in a case study of TBM tunneling through a complex ground with variable strata.Finally,data collected from the Baimang River Tunnel Project in Shenzhen of China are used to further test the generalization of our model.The results indicate that,compared to the conventional ResNet-LSTM model,our model has a better predictive capability for scenarios with unknown datasets due to its self-adaptive characteristic.
基金The authors thank the Yayasan Universiti Teknologi PETRONAS(YUTP FRG Grant No.015LC0-428)at Universiti Teknologi PETRO-NAS for supporting this study.
文摘Static Poisson’s ratio(vs)is crucial for determining geomechanical properties in petroleum applications,namely sand production.Some models have been used to predict vs;however,the published models were limited to specific data ranges with an average absolute percentage relative error(AAPRE)of more than 10%.The published gated recurrent unit(GRU)models do not consider trend analysis to show physical behaviors.In this study,we aim to develop a GRU model using trend analysis and three inputs for predicting n s based on a broad range of data,n s(value of 0.1627-0.4492),bulk formation density(RHOB)(0.315-2.994 g/mL),compressional time(DTc)(44.43-186.9 μs/ft),and shear time(DTs)(72.9-341.2μ s/ft).The GRU model was evaluated using different approaches,including statistical error an-alyses.The GRU model showed the proper trends,and the model data ranges were wider than previous ones.The GRU model has the largest correlation coefficient(R)of 0.967 and the lowest AAPRE,average percent relative error(APRE),root mean square error(RMSE),and standard deviation(SD)of 3.228%,1.054%,4.389,and 0.013,respectively,compared to other models.The GRU model has a high accuracy for the different datasets:training,validation,testing,and the whole datasets with R and AAPRE values were 0.981 and 2.601%,0.966 and 3.274%,0.967 and 3.228%,and 0.977 and 2.861%,respectively.The group error analyses of all inputs show that the GRU model has less than 5% AAPRE for all input ranges,which is superior to other models that have different AAPRE values of more than 10% at various ranges of inputs.
基金via funding from Prince Sattam bin Abdulaziz University Project Number(PSAU/2023/R/1444).
文摘Recent developments in Computer Vision have presented novel opportunities to tackle complex healthcare issues,particularly in the field of lung disease diagnosis.One promising avenue involves the use of chest X-Rays,which are commonly utilized in radiology.To fully exploit their potential,researchers have suggested utilizing deep learning methods to construct computer-aided diagnostic systems.However,constructing and compressing these systems presents a significant challenge,as it relies heavily on the expertise of data scientists.To tackle this issue,we propose an automated approach that utilizes an evolutionary algorithm(EA)to optimize the design and compression of a convolutional neural network(CNN)for X-Ray image classification.Our approach accurately classifies radiography images and detects potential chest abnormalities and infections,including COVID-19.Furthermore,our approach incorporates transfer learning,where a pre-trainedCNNmodel on a vast dataset of chest X-Ray images is fine-tuned for the specific task of detecting COVID-19.This method can help reduce the amount of labeled data required for the task and enhance the overall performance of the model.We have validated our method via a series of experiments against state-of-the-art architectures.
基金supported by the Guangxi Universities and Colleges Young and Middle-aged Teachers’Scientific Research Basic Ability Enhancement Project(2023KY0055).
文摘TO perform well,deep learning(DL)models have to be trained well.Which optimizer should be adopted?We answer this question by discussing how optimizers have evolved from traditional methods like gradient descent to more advanced techniques to address challenges posed by highdimensional and non-convex problem space.Ongoing challenges include their hyperparameter sensitivity,balancing between convergence and generalization performance,and improving interpretability of optimization processes.Researchers continue to seek robust,efficient,and universally applicable optimizers to advance the field of DL across various domains.
基金funded by the National Natural Science Foundation of China(Grant No.42002134)China Postdoctoral Science Foundation(Grant No.2021T140735).
文摘Identifying fractures along a well trajectory is of immense significance in determining the subsurface fracture network distribution.Typically,conventional logs exhibit responses in fracture zones,and almost all wells have such logs.However,detecting fractures through logging responses can be challenging since the log response intensity is weak and complex.To address this problem,we propose a deep learning model for fracture identification using deep forest,which is based on a cascade structure comprising multi-layer random forests.Deep forest can extract complex nonlinear features of fractures in conventional logs through ensemble learning and deep learning.The proposed approach is tested using a dataset from the Oligocene to Miocene tight carbonate reservoirs in D oilfield,Zagros Basin,Middle East,and eight logs are selected to construct the fracture identification model based on sensitivity analysis of logging curves against fractures.The log package includes the gamma-ray,caliper,density,compensated neutron,acoustic transit time,and shallow,deep,and flushed zone resistivity logs.Experiments have shown that the deep forest obtains high recall and accuracy(>92%).In a blind well test,results from the deep forest learning model have a good correlation with fracture observation from cores.Compared to the random forest method,a widely used ensemble learning method,the proposed deep forest model improves accuracy by approximately 4.6%.
文摘Objective To observe the value of deep learning (DL) models for automatic classification of echocardiographic views. Methods Totally 100 patients after heart transplantation were retrospectively enrolled and divided into training set, validation set and test set at a ratio of 7 ∶ 2 ∶ 1. ResNet18, ResNet34, Swin Transformer and Swin Transformer V2 models were established based on 2D apical two chamber view, 2D apical three chamber view, 2D apical four chamber view, 2D subcostal view, parasternal long-axis view of left ventricle, short-axis view of great arteries, short-axis view of apex of left ventricle, short-axis view of papillary muscle of left ventricle, short-axis view of mitral valve of left ventricle, also 3D and CDFI views of echocardiography. The accuracy, precision, recall, F1 score and confusion matrix were used to evaluate the performance of each model for automatically classifying echocardiographic views. The interactive interface was designed based on Qt Designer software and deployed on the desktop. Results The performance of models for automatically classifying echocardiographic views in test set were all good, with relatively poor performance for 2D short-axis view of left ventricle and superior performance for 3D and CDFI views. Swin Transformer V2 was the optimal model for automatically classifying echocardiographic views, with high accuracy, precision, recall and F1 score was 92.56%, 89.01%, 89.97% and 89.31%, respectively, which also had the highest diagonal value in confusion matrix and showed the best classification effect on various views in t-SNE figure. Conclusion DL model had good performance for automatically classifying echocardiographic views, especially Swin Transformer V2 model had the best performance. Using interactive classification interface could improve the interpretability of prediction results to some extent.
基金the National Natural Science Foundation of China(81572975)Key Research and Devel-opment Project of Science and Technology Department of Zhejiang(2015C03053)+1 种基金Chen Xiao-Ping Foundation for the Development of Science and Technology of Hubei Province(CXPJJH11900009-07)Zhejiang Provincial Program for the Cultivation of High-level Innovative Health Talents.
文摘Background:Gallbladder carcinoma(GBC)is highly malignant,and its early diagnosis remains difficult.This study aimed to develop a deep learning model based on contrast-enhanced computed tomography(CT)images to assist radiologists in identifying GBC.Methods:We retrospectively enrolled 278 patients with gallbladder lesions(>10 mm)who underwent contrast-enhanced CT and cholecystectomy and divided them into the training(n=194)and validation(n=84)datasets.The deep learning model was developed based on ResNet50 network.Radiomics and clinical models were built based on support vector machine(SVM)method.We comprehensively compared the performance of deep learning,radiomics,clinical models,and three radiologists.Results:Three radiomics features including LoG_3.0 gray-level size zone matrix zone variance,HHL firstorder kurtosis,and LHL gray-level co-occurrence matrix dependence variance were significantly different between benign gallbladder lesions and GBC,and were selected for developing radiomics model.Multivariate regression analysis revealed that age≥65 years[odds ratios(OR)=4.4,95%confidence interval(CI):2.1-9.1,P<0.001],lesion size(OR=2.6,95%CI:1.6-4.1,P<0.001),and CA-19-9>37 U/mL(OR=4.0,95%CI:1.6-10.0,P=0.003)were significant clinical risk factors of GBC.The deep learning model achieved the area under the receiver operating characteristic curve(AUC)values of 0.864(95%CI:0.814-0.915)and 0.857(95%CI:0.773-0.942)in the training and validation datasets,which were comparable with radiomics,clinical models and three radiologists.The sensitivity of deep learning model was the highest both in the training[90%(95%CI:82%-96%)]and validation[85%(95%CI:68%-95%)]datasets.Conclusions:The deep learning model may be a useful tool for radiologists to distinguish between GBC and benign gallbladder lesions.
文摘Intrusion detection is a predominant task that monitors and protects the network infrastructure.Therefore,many datasets have been published and investigated by researchers to analyze and understand the problem of intrusion prediction and detection.In particular,the Network Security Laboratory-Knowledge Discovery in Databases(NSL-KDD)is an extensively used benchmark dataset for evaluating intrusion detection systems(IDSs)as it incorporates various network traffic attacks.It is worth mentioning that a large number of studies have tackled the problem of intrusion detection using machine learning models,but the performance of these models often decreases when evaluated on new attacks.This has led to the utilization of deep learning techniques,which have showcased significant potential for processing large datasets and therefore improving detection accuracy.For that reason,this paper focuses on the role of stacking deep learning models,including convolution neural network(CNN)and deep neural network(DNN)for improving the intrusion detection rate of the NSL-KDD dataset.Each base model is trained on the NSL-KDD dataset to extract significant features.Once the base models have been trained,the stacking process proceeds to the second stage,where a simple meta-model has been trained on the predictions generated from the proposed base models.The combination of the predictions allows the meta-model to distinguish different classes of attacks and increase the detection rate.Our experimental evaluations using the NSL-KDD dataset have shown the efficacy of stacking deep learning models for intrusion detection.The performance of the ensemble of base models,combined with the meta-model,exceeds the performance of individual models.Our stacking model has attained an accuracy of 99%and an average F1-score of 93%for the multi-classification scenario.Besides,the training time of the proposed ensemble model is lower than the training time of benchmark techniques,demonstrating its efficiency and robustness.
Funding: the National Natural Science Foundation of China (NSFC) under Grant Nos. 12272124 and 11972146.
Abstract: Isogeometric analysis (IGA) is known to offer advanced features compared with traditional finite element approaches. Using IGA, one may accurately obtain the geometrically nonlinear bending behavior of plates with functional grading (FG). However, the procedure is usually complex and often time-consuming. We therefore put forward a deep learning method to model the geometrically nonlinear bending behavior of FG plates, bypassing the complex IGA simulation process. A bidirectional long short-term memory (BLSTM) recurrent neural network is trained using the load and gradient index as inputs and the displacement responses as outputs. The nonlinear relationship between the outputs and the inputs is constructed by machine learning so that the displacements can be estimated directly by the deep learning network. To provide enough training data, we use S-FSDT Von-Karman IGA to obtain the displacement responses for different loads and gradient indexes. Results show that the recognition error is low and demonstrate the feasibility of the deep learning technique as a fast and accurate alternative to IGA for modeling the geometrically nonlinear bending behavior of FG plates.
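How the scalar load and gradient index are fed to a recurrent network is not spelled out in the abstract. One plausible reading, sketched below in PyTorch, treats the load path as a sequence of steps with the gradient index repeated at each step; the encoding, hidden size, and loss are our assumptions.

```python
# Surrogate-model sketch, assuming PyTorch; the input encoding is our assumption.
import torch
import torch.nn as nn

class BLSTMSurrogate(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        # Each step carries (load value, gradient index) -> 2 input features.
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)   # displacement at each load step

    def forward(self, x):                      # x: (batch, steps, 2)
        out, _ = self.lstm(x)
        return self.head(out).squeeze(-1)      # (batch, steps)

model = BLSTMSurrogate()
loss_fn = nn.MSELoss()   # regress against IGA-computed displacement responses
```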
Abstract: Cyberspace is extremely dynamic, with new attacks arising daily. Protecting cybersecurity controls is vital for network security. Deep learning (DL) models find widespread use across various fields, with cybersecurity being one of the most crucial due to their rapid cyberattack detection capabilities on networks and hosts. The capabilities of DL in feature learning and in analyzing extensive data volumes lead to the recognition of network traffic patterns. This study presents novel lightweight DL models, known as Cybernet models, for the detection and recognition of various cyber Distributed Denial of Service (DDoS) attacks. These models were constructed to have a reasonable number of learnable parameters, i.e., fewer than 225,000, hence the name "lightweight." This not only reduces the number of computations required but also results in faster training and inference times. Additionally, these models were designed to extract features in parallel from 1D Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM), which makes them unique compared with earlier architectures and yields better performance measures. To validate their robustness and effectiveness, they were tested on the CIC-DDoS2019 dataset, an imbalanced and large dataset that contains different types of DDoS attacks. Experimental results revealed that both models yielded promising results, with 99.99% for the detection model and 99.76% for the recognition model in terms of accuracy, precision, recall, and F1 score. Furthermore, they outperformed the existing state-of-the-art models proposed for the same task. Thus, the proposed models can be used in cybersecurity research domains to successfully identify different types of attacks with a high detection and recognition rate.
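A minimal sketch of the parallel CNN/LSTM feature-extraction idea is given below in PyTorch. The layer counts and sizes are illustrative placeholders chosen to stay well under the stated 225,000-parameter budget; they are not the published Cybernet configurations.

```python
# Parallel-branch sketch, assuming PyTorch; sizes are illustrative placeholders.
import torch
import torch.nn as nn

class ParallelCNNLSTM(nn.Module):
    def __init__(self, n_features, n_classes, hidden=32):
        super().__init__()
        self.cnn = nn.Sequential(                  # branch 1: local flow patterns
            nn.Conv1d(1, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),
        )
        self.lstm = nn.LSTM(1, hidden, batch_first=True)  # branch 2: feature ordering
        self.fc = nn.Linear(32 + hidden, n_classes)       # fused classifier

    def forward(self, x):                          # x: (batch, n_features)
        c = self.cnn(x.unsqueeze(1)).squeeze(-1)   # (batch, 32)
        h, _ = self.lstm(x.unsqueeze(-1))          # (batch, n_features, hidden)
        return self.fc(torch.cat([c, h[:, -1]], dim=1))
```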
Abstract: AIM: To establish pupil diameter measurement algorithms based on infrared images that can be used in real-world clinical settings. METHODS: A total of 188 patients from the outpatient clinic at He Eye Specialist Shenyang Hospital from September to December 2022 were included, and 13,470 infrared pupil images were collected for the study. All infrared images for pupil segmentation were labeled using the Labelme software. The computation of pupil diameter is divided into four steps: image pre-processing, pupil identification and localization, pupil segmentation, and diameter calculation. Two major models are used in the computation process, a modified YoloV3 and a DeeplabV3+ model, which must be trained beforehand. RESULTS: The test dataset included 1348 infrared pupil images. On the test dataset, the modified YoloV3 model had a detection rate of 99.98% and an average precision (AP) of 0.80 for pupils. The DeeplabV3+ model achieved a background intersection over union (IoU) of 99.23%, a pupil IoU of 93.81%, and a mean IoU of 96.52%. The pupil diameters in the test dataset ranged from 20 to 56 pixels, with a mean of 36.06±6.85 pixels. The absolute error in pupil diameters between predicted and actual values ranged from 0 to 7 pixels, with a mean absolute error (MAE) of 1.06±0.96 pixels. CONCLUSION: This study demonstrates a robust infrared image-based pupil diameter measurement algorithm, proven to be highly accurate and reliable for clinical application.
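Of the four steps, only the final diameter calculation is simple enough to sketch from the abstract alone. One plausible version, assuming the segmentation model outputs a binary pupil mask, reports the diameter of the equivalent-area circle; the authors' exact formula may differ.

```python
# Sketch of the diameter step, assuming a binary NumPy pupil mask; the
# equivalent-circle formula is one plausible choice, not necessarily theirs.
import numpy as np

def pupil_diameter_px(mask: np.ndarray) -> float:
    """Diameter (pixels) of the circle whose area equals the pupil mask area."""
    area = float(np.count_nonzero(mask))
    return 2.0 * np.sqrt(area / np.pi)
```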
Funding: supported by grants from the General Project of Yunnan Basic Research Program (No. 202301AT070104), the Joint Projects of Kunming Medical University and the Science and Technology Department of Yunnan Province (Nos. 202001AY070001-185 and 202101AY070001-119), and the Yunnan Provincial Orthopedic and Sports Rehabilitation Clinical Medicine Research Center (No. 202102AA310068).
Abstract: Objective: Matrix metalloproteinase 13 (MMP13) is an extracellular matrix protease that affects the progression of atherosclerotic plaques and arterial thrombi by degrading collagens, modifying protein structures, and regulating inflammatory responses, but its role in deep vein thrombosis (DVT) has not been determined. The purpose of this study was to investigate the potential effects of MMP13 and MMP13-related genes on the formation of DVT. Methods: We altered the expression level of MMP13 in vivo and conducted a transcriptome study to examine the expression of, and relationship between, MMP13 and MMP13-related genes in a mouse model of DVT. After screening genes possibly related to MMP13 in DVT mice, the expression levels of candidate genes in human umbilical vein endothelial cells (HUVECs) and the venous wall were evaluated. The effect of MMP13 on platelet aggregation in HUVECs was investigated in vitro. Results: Among the differentially expressed genes, interleukin 1 beta, podoplanin (Pdpn), and factor VIII von Willebrand factor (F8VWF) were selected for analysis in mice. When MMP13 was inhibited, the expression level of PDPN decreased significantly in vitro. In HUVECs, overexpression of MMP13 led to an increase in the expression level of PDPN and induced platelet aggregation, while transfection of PDPN-siRNA weakened the ability of MMP13 to increase platelet aggregation. Conclusions: Inhibiting the expression of MMP13 could reduce the burden of DVT in mice. The mechanism involves MMP13 downregulating the expression of Pdpn, which could provide a novel gene target for DVT diagnosis and treatment.
Funding: supported by the Special Fund for Clinical Scientific Research of Shandong Medical Association (No. YXH2020ZX058).
Abstract: This study was carried out to explore the mechanism underlying the inhibition of platelet activation by kelp fucoidans in a deep venous thrombosis (DVT) mouse model. In the control and sham mice, the walls of the deep vein were regular and smooth, with intact intima, myometrium, and adventitia. The blood vessel was wrapped by the surrounding tissue, and there was no thrombus in the lumen. In the DVT model, the wall was uneven, with thickened intima, myometrium, and adventitia. After treatment with the fucoidans LF1 and LF2, the thrombus was dissolved and the blood vessel was recanalized. Compared with the control group, the ROS content, the ET-1 and VWF contents, and the expression of PKC-β and NF-κB in the model were significantly higher (P<0.05); these levels were significantly reduced following treatment with LF2 and LF1. Compared with H₂O₂-treated HUVECs, combined LF1 and LF2 treatment resulted in a significant decrease in the expression of PKC-β, NF-κB, VWF, and TM proteins (P<0.05). It is clear that LF1 and LF2 reduce DVT-induced ET-1, VWF, and TM expression and ROS production, thus inhibiting the activation of the PKC-β/NF-κB signaling pathway and of the coagulation system, and ultimately reducing the formation of venous thrombus.
Abstract: Recently, anomaly detection (AD) in streaming data has gained significant attention among research communities due to its applicability in finance, business, healthcare, education, etc. Recent developments in deep learning (DL) models have proved helpful in the detection and classification of anomalies. This article designs an oversampling with optimal deep learning-based streaming data classification (OS-ODLSDC) model. The aim of the OS-ODLSDC model is to recognize and classify the presence of anomalies in streaming data. The proposed OS-ODLSDC model initially undergoes a preprocessing step. Since streaming data are unbalanced, the support vector machine (SVM)-Synthetic Minority Over-sampling Technique (SVM-SMOTE) is applied for oversampling. Besides, the OS-ODLSDC model employs bidirectional long short-term memory (BiLSTM) for AD and classification. Finally, the root mean square propagation (RMSProp) optimizer is applied for optimal hyperparameter tuning of the BiLSTM model. To verify the promising performance of the OS-ODLSDC model, a wide-ranging experimental analysis is performed using three benchmark datasets: CICIDS 2018, KDD-Cup 1999, and NSL-KDD.
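The oversampling-then-classification pipeline can be sketched as follows, assuming imbalanced-learn for SVM-SMOTE and PyTorch for the BiLSTM. The window length, hidden size, learning rate, and variable names are placeholders rather than the tuned values from the paper.

```python
# Pipeline sketch, assuming imbalanced-learn and PyTorch; sizes are placeholders.
import torch
import torch.nn as nn
from imblearn.over_sampling import SVMSMOTE

class BiLSTMClassifier(nn.Module):
    def __init__(self, n_features, n_classes, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True,
                            bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):             # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1])    # classify from the final time step

# Step 1 (hypothetical names): rebalance the flat records with SVM-SMOTE before
# windowing them into sequences: X_bal, y_bal = SVMSMOTE().fit_resample(X, y)
model = BiLSTMClassifier(n_features=41, n_classes=5)
optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-3)  # RMSProp tuning
```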
Funding: This work was supported in part by the Key Project of Natural Science Research of Anhui Provincial Department of Education under Grant KJ2017A416, and in part by the Fund of National Sensor Network Engineering Technology Research Center (No. NSNC202103).
Abstract: When existing deep learning models are used for road extraction from high-resolution images, they are easily affected by noise factors such as tree and building occlusion and complex backgrounds, resulting in incomplete road extraction and low accuracy. We propose introducing spatial and channel attention modules into the convolutional neural network ConvNeXt. ConvNeXt is then used as the backbone network, cooperating with the perceptual analysis network UPerNet and retaining the semantic segmentation head, to build a new model, ConvNeXt-UPerNet, that suppresses noise interference. The model is trained on the open-source DeepGlobe and CHN6-CUG datasets, and introducing DiceLoss on top of CrossEntropyLoss addresses the imbalance between positive and negative samples (see the loss sketch below). Experimental results show that the new network model achieves the following performance on the DeepGlobe dataset: 79.40% precision (Pre), 97.93% accuracy (Acc), 69.28% intersection over union (IoU), and 83.56% mean intersection over union (MIoU). On the CHN6-CUG dataset, the model achieves 78.17% Pre, 97.63% Acc, 65.4% IoU, and 81.46% MIoU. Compared with other network models, the fused ConvNeXt-UPerNet model extracts road information better in the presence of the noise contained in high-resolution remote sensing images. It also captures multiscale image feature information with unified perception, ultimately improving the generalization ability of deep learning techniques for extracting complex roads from high-resolution remote sensing images.
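A common way to combine the two loss terms mentioned above is a weighted sum of cross-entropy and soft Dice, sketched below in PyTorch for the two-class road/background case; the 0.5 weighting is our placeholder, since the abstract does not state how the terms are balanced.

```python
# Combined-loss sketch, assuming PyTorch; the 0.5 Dice weight is a placeholder.
import torch
import torch.nn as nn

class CEDiceLoss(nn.Module):
    def __init__(self, dice_weight=0.5, eps=1e-6):
        super().__init__()
        self.ce = nn.CrossEntropyLoss()
        self.dice_weight, self.eps = dice_weight, eps

    def forward(self, logits, target):
        # logits: (N, 2, H, W) background/road scores; target: (N, H, W) in {0, 1}.
        ce = self.ce(logits, target)
        prob = torch.softmax(logits, dim=1)[:, 1]          # road probability map
        tgt = target.float()
        inter = (prob * tgt).sum(dim=(1, 2))
        denom = prob.sum(dim=(1, 2)) + tgt.sum(dim=(1, 2))
        dice = 1.0 - (2.0 * inter + self.eps) / (denom + self.eps)
        return ce + self.dice_weight * dice.mean()
```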