Due to the lack of accurate data and complex parameterization, the prediction of groundwater depth is a challenge for numerical models. Machine learning can effectively solve this issue and has been proven useful in the prediction of groundwater depth in many areas. In this study, two new models are applied to the prediction of groundwater depth in the Ningxia area, China. The two models combine the improved dung beetle optimizer (DBO) algorithm with two deep learning models: the Multi-head Attention-Convolutional Neural Network-Long Short-Term Memory network (MH-CNN-LSTM) and the Multi-head Attention-Convolutional Neural Network-Gated Recurrent Unit (MH-CNN-GRU). The models with DBO show better prediction performance, with larger R (correlation coefficient) and RPD (residual prediction deviation) and lower RMSE (root-mean-square error). Compared with the models with the original DBO, the R and RPD of the models with the improved DBO increase by over 1.5%, and the RMSE decreases by over 1.8%, indicating better prediction results. In addition, compared with the multiple linear regression model, a traditional statistical model, the deep learning models have better prediction performance.
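To make the MH-CNN-LSTM architecture above concrete, the following is a minimal PyTorch sketch of one plausible arrangement (1-D convolution, multi-head self-attention over time steps, then an LSTM and a regression head). The layer sizes, head count, and toy input are illustrative assumptions, not the authors' configuration, and the DBO hyperparameter tuning loop is omitted.

import torch
import torch.nn as nn

class MHCNNLSTM(nn.Module):
    """Hypothetical MH-CNN-LSTM: Conv1d -> multi-head self-attention -> LSTM -> regression head."""
    def __init__(self, n_features, conv_channels=32, heads=4, lstm_hidden=64):
        super().__init__()
        self.conv = nn.Conv1d(n_features, conv_channels, kernel_size=3, padding=1)
        self.attn = nn.MultiheadAttention(conv_channels, num_heads=heads, batch_first=True)
        self.lstm = nn.LSTM(conv_channels, lstm_hidden, batch_first=True)
        self.head = nn.Linear(lstm_hidden, 1)                   # predicted groundwater depth

    def forward(self, x):                                        # x: (batch, seq_len, n_features)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)         # (batch, seq_len, conv_channels)
        h, _ = self.attn(h, h, h)                                # self-attention over time steps
        h, _ = self.lstm(h)
        return self.head(h[:, -1, :])                            # use the last time step

# Toy usage: 8 samples, 12 monthly steps, 5 assumed driving variables (rainfall, temperature, ...)
model = MHCNNLSTM(n_features=5)
x = torch.randn(8, 12, 5)
print(model(x).shape)                                            # torch.Size([8, 1])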
Prediction of stability in a smart grid (SG) is essential for maintaining the consistency and reliability of power supply in grid infrastructure. Analyzing the fluctuations in power generation and consumption patterns of smart cities assists in effectively managing continuous power supply in the grid. It also helps avert overloading and permits effective energy storage. Even though many traditional techniques have predicted the consumption rate for preserving stability, prediction measures still need to be enhanced with minimized loss. To overcome the complications in existing studies, this paper predicts stability from the smart grid stability prediction dataset using machine learning algorithms. To accomplish this, pre-processing is performed initially to handle missing values, since mishandled missing values produce biased models, and feature scaling is applied to normalize independent data features. The pre-processed data are then used for training and testing. Following that, regression is performed using a Modified PSO (Particle Swarm Optimization)-optimized XGBoost technique with a dynamic inertia weight update, which analyses variables such as gamma (G), reaction time (tau1-tau4), and power balance (p1-p4) to provide effective future stability in the SG. Since PSO attains the optimal solution by adjusting positions through dynamic inertia weights, it is integrated with XGBoost owing to its scalability and fast computational speed. The hyperparameters of XGBoost are fine-tuned during training to achieve promising prediction outcomes. Regression results are measured through evaluation metrics such as an MSE (Mean Square Error) of 0.011312781, an MAE (Mean Absolute Error) of 0.008596322, an RMSE (Root Mean Square Error) of 0.010636156, and a MAPE (Mean Absolute Percentage Error) of 0.0052, which demonstrate the efficacy of the system.
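The PSO-optimized XGBoost workflow described above can be sketched as follows. This is a minimal illustration on synthetic data that assumes a linearly decreasing inertia weight and a three-dimensional search space (max_depth, learning_rate, n_estimators); the paper's exact dynamic inertia update and the hyperparameters it tunes are assumptions here.

import numpy as np
from xgboost import XGBRegressor
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=500, n_features=12, noise=0.1, random_state=0)

# Search space: (max_depth, learning_rate, n_estimators)
lo, hi = np.array([2, 0.01, 50]), np.array([10, 0.5, 300])

def fitness(p):
    model = XGBRegressor(max_depth=int(p[0]), learning_rate=p[1],
                         n_estimators=int(p[2]), verbosity=0)
    # cross-validated MSE to be minimized
    return -cross_val_score(model, X, y, cv=3, scoring="neg_mean_squared_error").mean()

rng = np.random.default_rng(0)
n, T, dim = 8, 15, 3
w_max, w_min, c1, c2 = 0.9, 0.4, 2.0, 2.0
pos = rng.uniform(lo, hi, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for t in range(T):
    w = w_max - (w_max - w_min) * t / T              # dynamic (linearly decreasing) inertia weight
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([fitness(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("best CV MSE:", pbest_f.min(), "best params:", gbest)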
In this paper, to develop a lightweight front underrun protection device (FUPD) for heavy-duty trucks, plain-weave carbon fiber reinforced plastic (CFRP) is used instead of the original high-strength steel. First, the mechanical and structural properties of plain-weave carbon fiber composite anti-collision beams are comparatively analyzed from a multi-scale perspective. To study the design capability of carbon fiber composite materials, we investigate the effects of TC-33 carbon fiber diameter (D), fiber yarn width (W) and height (H), and fiber yarn density (N) on the front underrun protective beam made of carbon fiber composites. Based on this investigation, a material-structure matching strategy suitable for the front underrun protective beam of heavy-duty trucks is proposed. Next, the composite structure is optimized by applying size optimization and stacking sequence optimization methods to obtain a higher-performance carbon fiber composite front underrun protection beam for commercial vehicles. The results show that the fiber yarn height (H) has the greatest influence on the protective beam, and the H1 matching scheme for the front underrun protective beam with a carbon fiber composite structure exhibits superior performance. The proposed method achieves a weight reduction of 55.21% while still meeting regulatory requirements, which demonstrates its remarkable weight reduction effect.
The cutoff frequency is one of the crucial parameters that characterize the environment. In this paper, we estimate the cutoff frequency of the Ohmic spectral density by applying π-pulse sequences (both equidistant and optimized) to a quantum probe coupled to a bosonic environment. To demonstrate the precision of cutoff frequency estimation, we theoretically derive the quantum Fisher information (QFI) and quantum signal-to-noise ratio (QSNR) across sub-Ohmic, Ohmic, and super-Ohmic environments, and investigate their behaviors through numerical examples. The results indicate that, compared to the equidistant π-pulse sequence, the optimized π-pulse sequence significantly shortens the time to reach maximum QFI while enhancing the precision of cutoff frequency estimation, particularly in deep sub-Ohmic and deep super-Ohmic environments.
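For reference, the Ohmic family of spectral densities referred to above is commonly written as below; the exponential cutoff form and the particular QSNR convention are standard textbook choices rather than expressions taken from this paper.

\[
  J(\omega) = \eta\,\omega^{s}\,\omega_c^{\,1-s}\,e^{-\omega/\omega_c},
  \qquad
  \begin{cases}
    0 < s < 1 & \text{sub-Ohmic},\\
    s = 1     & \text{Ohmic},\\
    s > 1     & \text{super-Ohmic},
  \end{cases}
\]
where $\omega_c$ is the cutoff frequency to be estimated and $\eta$ is the coupling strength. The precision of any unbiased estimator of $\omega_c$ is bounded by the quantum Cramér-Rao inequality $\mathrm{Var}(\hat{\omega}_c) \ge 1/(\nu F_Q(\omega_c))$, with $F_Q$ the quantum Fisher information and $\nu$ the number of repetitions; one common dimensionless figure of merit is the quantum signal-to-noise ratio $R = \omega_c^{2} F_Q(\omega_c)$.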
Local and global optimization methods are widely used in geophysical inversion, but each has its own advantages and disadvantages. Combining the two methods makes it possible to overcome their weaknesses. Based on the simulated annealing genetic algorithm (SAGA) and the simplex algorithm, an efficient and robust 2-D nonlinear method for seismic travel-time inversion is presented in this paper. First we perform a global search over a large range by SAGA, and then a rapid local search using the simplex method. A multi-scale tomography method is adopted to reduce non-uniqueness. The velocity field is divided into different spatial scales, and velocities at the grid nodes are taken as unknown parameters. The model is parameterized by a bi-cubic spline function. The finite-difference method is used to solve the forward problem, while the hybrid method combining multi-scale SAGA and simplex algorithms is applied to the inverse problem. The algorithm has been applied to a numerical test and a travel-time perturbation test using an anomalous low-velocity body. As a practical example, it is used to study the upper crustal velocity structure of the A'nyemaqen suture zone at the northeastern edge of the Qinghai-Tibet Plateau. The model test and practical application both prove that the method is effective and robust.
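The global-then-local workflow described above can be illustrated on a toy multimodal objective. The sketch below substitutes SciPy's dual annealing for the SAGA stage and Nelder-Mead for the simplex stage, so it mirrors the two-stage strategy rather than reproducing the authors' tomography code.

import numpy as np
from scipy.optimize import dual_annealing, minimize

# Toy multimodal misfit standing in for the travel-time objective (Rastrigin-like).
def misfit(m):
    m = np.asarray(m)
    return 10 * m.size + np.sum(m**2 - 10 * np.cos(2 * np.pi * m))

bounds = [(-5.12, 5.12)] * 4                      # four "velocity" parameters

# Stage 1: global stochastic search over the full range (stand-in for SAGA).
coarse = dual_annealing(misfit, bounds=bounds, maxiter=200, seed=0, no_local_search=True)

# Stage 2: rapid local refinement from the global result using the simplex method.
fine = minimize(misfit, coarse.x, method="Nelder-Mead",
                options={"xatol": 1e-8, "fatol": 1e-8})

print("global stage :", coarse.x, coarse.fun)
print("simplex stage:", fine.x, fine.fun)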
The large number of nanopores and complex fracture structures in shale reservoirs results in multi-scale flow of oil. As shale oil reservoirs are developed, the permeability of multi-scale media changes due to stress sensitivity, which plays a crucial role in controlling pressure propagation and oil flow. This paper proposes a multi-scale coupled flow mathematical model of matrix nanopores, induced fractures, and hydraulic fractures. The model accounts for the micro-scale effects of shale oil flow in fractal nanopores, the fractal induced-fracture network, and the stress sensitivity of multi-scale media. We solved the model iteratively using the Pedrosa transform, a semi-analytic segmented Bessel function method, and the Laplace transform. The results of this model exhibit good agreement with the numerical solution and field production data, confirming the high accuracy of the model. The influence of stress sensitivity on permeability, pressure, and production is also analyzed. It is shown that permeability and production decrease significantly when induced fractures are weakly supported. Closed induced fractures can inhibit interporosity flow in the stimulated reservoir volume (SRV). Sensitivity analysis shows that hydraulic fractures benefit early production, while induced fractures in the SRV benefit middle production. The model can characterize the multi-scale flow characteristics of shale oil, providing theoretical guidance for rapid productivity evaluation.
Multi-scale systems remain a classical scientific problem in fluid dynamics, biology, and other fields. In the present study, a scheme of multi-scale physics-informed neural networks (msPINNs) is proposed to solve the boundary layer flow at high Reynolds numbers without any data. The flow is divided into several regions with different scales based on Prandtl's boundary layer theory. Different regions are solved with governing equations at different scales. The method of matched asymptotic expansions is used to make the flow field continuous across regions. Flow over a semi-infinite flat plate at a high Reynolds number is considered a multi-scale problem because the boundary layer scale is much smaller than the outer flow scale. The results are compared with reference numerical solutions, which show that the msPINNs can solve the multi-scale problem of the boundary layer in high Reynolds number flows. This scheme can be extended to more multi-scale problems in the future.
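As a concrete illustration of the physics-informed ingredient, the sketch below trains a small network on the one-dimensional singularly perturbed model problem eps*u'' + u' = 0, u(0) = 0, u(1) = 1, whose solution develops a thin layer near x = 0. The model problem, network size, and loss weights are illustrative assumptions, not the paper's boundary-layer formulation.

import torch
import torch.nn as nn

torch.manual_seed(0)
eps = 0.05                                      # small parameter -> thin layer at x = 0

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 1))

x = torch.rand(256, 1, requires_grad=True)      # collocation points in (0, 1)
xb = torch.tensor([[0.0], [1.0]])               # boundary points
ub = torch.tensor([[0.0], [1.0]])               # boundary values u(0)=0, u(1)=1

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(5000):
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    res = eps * d2u + du                        # PDE residual of eps*u'' + u' = 0
    loss = (res**2).mean() + 10.0 * ((net(xb) - ub)**2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print("final physics-informed loss:", loss.item())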
As massive underground projects have become common in dense urban cities, a question has arisen: which model best predicts Tunnel Boring Machine (TBM) performance in these tunneling projects? Estimating the performance level of TBMs in complex geological conditions remains a great challenge for practitioners and researchers, yet a reliable and accurate prediction of TBM performance is essential for planning an applicable tunnel construction schedule. TBM performance is very difficult to estimate due to various geotechnical and geological factors and machine specifications. The previously proposed intelligent techniques in this field are mostly based on a single or base model with a low level of accuracy. Hence, this study introduces a hybrid random forest (RF) technique optimized by global harmony search with generalized opposition-based learning (GOGHS) for forecasting TBM advance rate (AR). Optimizing the RF hyper-parameters, e.g., the number of trees and the maximum tree depth, is the main objective of using the GOGHS-RF model. In the modelling of this study, a comprehensive database with the most influential parameters on TBM, together with TBM AR, was used as the input and output variables, respectively. To examine the capability and power of the GOGHS-RF model, three more hybrid models, particle swarm optimization-RF, genetic algorithm-RF, and artificial bee colony-RF, were also constructed to forecast TBM AR. The developed models were evaluated by calculating several performance indices, including the determination coefficient (R2), root-mean-square error (RMSE), and mean absolute percentage error (MAPE). The results showed that GOGHS-RF is a more accurate technique for estimating TBM AR than the other applied models. The newly developed GOGHS-RF model achieved R2 = 0.9937 and 0.9844 for the train and test stages, respectively, which are higher than those of a pre-developed RF. Also, the importance of the input parameters was interpreted through the SHapley Additive exPlanations (SHAP) method, and it was found that thrust force per cutter is the most important variable for TBM AR. The GOGHS-RF model can be used in mechanized tunnel projects for predicting and checking performance.
This research presents a novel nature-inspired metaheuristic algorithm called Frilled Lizard Optimization (FLO), which emulates the unique hunting behavior of frilled lizards in their natural habitat. FLO draws its inspiration from the sit-and-wait hunting strategy of these lizards. The algorithm's core principles are meticulously detailed and mathematically structured into two distinct phases: (i) an exploration phase, which mimics the lizard's sudden attack on its prey, and (ii) an exploitation phase, which simulates the lizard's retreat to the treetops after feeding. To assess FLO's efficacy in addressing optimization problems, its performance is rigorously tested on fifty-two standard benchmark functions. These functions include unimodal, high-dimensional multimodal, and fixed-dimensional multimodal functions, as well as the challenging CEC 2017 test suite. FLO's performance is benchmarked against twelve established metaheuristic algorithms, providing a comprehensive comparative analysis. The simulation results demonstrate that FLO excels in both exploration and exploitation, effectively balancing these two critical aspects throughout the search process. This balanced approach enables FLO to outperform several competing algorithms in numerous test cases. Additionally, FLO is applied to twenty-two constrained optimization problems from the CEC 2011 test suite and four complex engineering design problems, further validating its robustness and versatility in solving real-world optimization challenges. Overall, the study highlights FLO's superior performance and its potential as a powerful tool for tackling a wide range of optimization problems.
This paper deals with the concurrent multi-scale optimization design of a frame structure composed of glass or carbon fiber reinforced polymer laminates. In the composite frame structure, the fiber winding angle at the micro material scale and the geometrical parameters of the frame components at the macro structural scale are introduced as independent variables on the two geometrical scales. Considering manufacturing requirements, discrete fiber winding angles are specified for the micro design variable. The improved Heaviside penalization discrete material optimization interpolation scheme is applied to achieve the discrete optimization design of the fiber winding angle. An optimization model based on minimum structural compliance and a specified fiber material volume constraint is established. The sensitivity information with respect to the design variables on the two geometrical scales is also derived, considering the characteristics of discrete fiber winding angles. The optimization results of the fiber winding angle or the macro structural topology on each single geometrical scale, together with the concurrent two-scale optimization, are separately studied and compared in the paper. Numerical examples show that the concurrent multi-scale optimization can further explore the coupling effect between the macro structure and the micro material of the composite to achieve an ultralight design of the composite frame structure. The novel two-scale optimization model provides a new opportunity for the design of composite structures in aerospace and other industries.
Second-generation high-temperature superconducting (HTS) conductors, specifically rare earth-barium-copper-oxide (REBCO) coated conductor (CC) tapes, are promising candidates for high-energy and high-field superconducting applications. For epoxy-impregnated REBCO composite magnets that comprise multilayer components, the thermomechanical characteristics of each component differ considerably under extremely low temperatures and strong electromagnetic fields. Traditional numerical models include homogenized orthotropic models, which simplify the overall field calculation but miss detailed multi-physics aspects, and full refinement (FR) models, which are thorough but computationally demanding. Herein, we propose an extended multi-scale approach for analyzing the multi-field characteristics of an epoxy-impregnated composite magnet assembled from HTS pancake coils. This approach combines a global homogenization (GH) scheme, based on the homogenized electromagnetic T-A model (a formulation that solves Maxwell's equations for superconducting materials using the current vector potential T and the magnetic vector potential A), with a homogenized orthotropic thermoelastic model to assess the electromagnetic and thermoelastic properties at the macroscopic scale. We then identify "dangerous regions" at the macroscopic scale and obtain finer details using a local refinement (LR) scheme to capture the responses of each component material in the HTS composite tapes at the mesoscopic scale. The results of the present GH-LR multi-scale approach agree well with those of the FR scheme and with experimental data in the literature, indicating that the present approach is accurate and efficient. The proposed GH-LR multi-scale approach can serve as a valuable tool for evaluating the risk of failure in large-scale HTS composite magnets.
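For readers unfamiliar with the T-A formulation mentioned above, its governing equations in the form commonly used in the HTS modelling literature (reproduced from that general literature, not from this paper) are:

\[
  \mathbf{J} = \nabla \times \mathbf{T}, \qquad
  \nabla \times \left( \rho\, \nabla \times \mathbf{T} \right) = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
  \nabla \times \left( \frac{1}{\mu}\, \nabla \times \mathbf{A} \right) = \mathbf{J}, \qquad
  \mathbf{B} = \nabla \times \mathbf{A},
\]
where the superconductor resistivity $\rho$ is usually taken from the $E$-$J$ power law $\rho = (E_c/J_c)\,(|J|/J_c)^{\,n-1}$, with T solved on the superconducting layers and A in the full domain.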
BACKGROUND: A cure for Helicobacter pylori (H. pylori) remains a problem of global concern. The prevalence of antimicrobial resistance is rising widely and becoming a challenging issue worldwide. Optimizing sequential therapy seems to be one of the most attractive strategies in terms of efficacy, tolerability and cost. The most common sequential therapy consists of a dual therapy [proton-pump inhibitor (PPI) and amoxicillin] for the first period (5 to 7 d), followed by a triple therapy for the second period (PPI, clarithromycin and metronidazole). PPIs play a key role in maintaining a gastric pH at a level that allows optimal efficacy of the antibiotics, hence the idea of using new-generation molecules. This open-label prospective study randomized 328 patients with confirmed H. pylori infection into three groups (1:1:1): the first group received a quadruple therapy consisting of twice-daily (bid) omeprazole 20 mg, amoxicillin 1 g, clarithromycin 500 mg and metronidazole 500 mg for 10 d (QT-10); the second group received a 14-d quadruple therapy following the same regimen (QT-14); and the third group received an optimized sequential therapy consisting of bid rabeprazole 20 mg plus amoxicillin 1 g for 7 d, followed by bid rabeprazole 20 mg, clarithromycin 500 mg and metronidazole 500 mg for the next 7 d (OST-14). Adverse events (AEs) were recorded throughout the study, and the H. pylori eradication rate was determined 4 to 6 wk after the end of treatment, using the 13C urea breath test. RESULTS: In the intention-to-treat and per-protocol analyses, the eradication rate was higher in the OST-14 group than in the QT-10 group: (93.5% vs 85.5%, P=0.04) and (96.2% vs 89.5%, P=0.03), respectively. However, there was no statistically significant difference in eradication rates between the OST-14 and QT-14 groups: (93.5% vs 91.8%, P=0.34) and (96.2% vs 94.4%, P=0.35), respectively. The overall incidence of AEs was significantly lower in the OST-14 group (P=0.01). Furthermore, OST-14 was the most cost-effective among the three groups. CONCLUSION: The optimized 14-d sequential therapy is a safe and effective alternative. Its eradication rate is comparable to that of the 14-d concomitant therapy while causing fewer AEs and allowing a gain in terms of cost.
With the development of information technology, a large amount of product quality data from the entire manufacturing process is accumulated, but it is not explored and used effectively. Traditional product quality prediction models have many disadvantages, such as high complexity and low accuracy. To overcome these problems, we propose an optimized data equalization method to pre-process the dataset and design a simple but effective product quality prediction model: a radial basis function model optimized by the firefly algorithm with a Levy flight mechanism (RBFFALM). First, the new data equalization method is introduced to pre-process the dataset, which reduces the dimension of the data, removes redundant features, and improves the data distribution. Then the RBFFALM is used to predict product quality. Comprehensive experiments conducted on real-world product quality datasets validate that the new model RBFFALM, combined with the new data pre-processing method, outperforms previous methods in predicting product quality.
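The firefly update with a Levy-flight perturbation underlying RBFFALM can be sketched on a toy objective as below. The attraction constants and the Mantegna recipe for Levy steps are standard choices, not values from this paper, and the RBF model whose validation error would serve as the fitness is replaced by a simple placeholder.

import numpy as np
from math import gamma, sin, pi

rng = np.random.default_rng(1)

def levy(dim, beta=1.5):
    """Mantegna's algorithm for a Levy-stable step of index beta."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def objective(x):                                # placeholder for the RBF validation error
    return np.sum(x ** 2)

n, dim, T = 15, 5, 100
beta0, gamma_abs, alpha = 1.0, 1.0, 0.1          # attraction and randomization constants
X = rng.uniform(-5, 5, (n, dim))
f = np.apply_along_axis(objective, 1, X)

for _ in range(T):
    for i in range(n):
        for j in range(n):
            if f[j] < f[i]:                      # move firefly i toward brighter firefly j
                r2 = np.sum((X[i] - X[j]) ** 2)
                X[i] += (beta0 * np.exp(-gamma_abs * r2) * (X[j] - X[i])
                         + alpha * levy(dim))    # Levy flight instead of uniform noise
                f[i] = objective(X[i])

print("best value found:", f.min())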
Network traffic identification is critical for maintaining network security and further meeting various demands of network applications. However, network traffic data typically possess high dimensionality and complexity, leading to practical problems in traffic identification data analytics. Since the original Dung Beetle Optimizer (DBO) algorithm, Grey Wolf Optimization (GWO) algorithm, Whale Optimization Algorithm (WOA), and Particle Swarm Optimization (PSO) algorithm have the shortcomings of slow convergence and easily falling into local optimal solutions, an Improved Dung Beetle Optimizer (IDBO) algorithm is proposed for network traffic identification. Firstly, the Sobol sequence is utilized to initialize the dung beetle population, laying the foundation for finding the global optimal solution. Next, an integration of Levy flight and the golden sine strategy is suggested to give dung beetles a greater probability of exploring unvisited areas, escaping from local optimal solutions, and converging more effectively towards a global optimal solution. Finally, an adaptive weight factor is utilized to enhance the search capabilities of the original DBO algorithm and accelerate convergence. With the improvements above, the proposed IDBO algorithm is applied to traffic identification data analytics and feature selection, so as to find the optimal subset for K-Nearest Neighbor (KNN) classification. The simulation experiments use the CICIDS2017 dataset to verify the effectiveness of the proposed IDBO algorithm and compare it with the original DBO, GWO, WOA, and PSO algorithms. The experimental results show that, compared with other algorithms, the accuracy and recall are improved by 1.53% and 0.88% in binary classification, and the Distributed Denial of Service (DDoS) class identification is the most effective in multi-classification, with improvements of 5.80% and 0.33% in accuracy and recall, respectively. Therefore, the proposed IDBO algorithm is effective in increasing the efficiency of traffic identification and in solving the problem that the original DBO algorithm converges slowly and falls into local optimal solutions when dealing with high-dimensional data analytics and feature selection for network traffic identification.
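Two of the IDBO ingredients above, the Sobol-sequence initialization and an adaptive weight factor, can be illustrated as follows; the quadratic weight schedule is one common choice, and the Levy-flight and golden-sine updates are omitted, so this is not the authors' exact algorithm.

import numpy as np
from scipy.stats import qmc

dim, n_pop, T = 20, 32, 100
lower, upper = np.zeros(dim), np.ones(dim)       # e.g. a [0, 1] feature-weight space

# Sobol-sequence initialization: low-discrepancy points cover the search space more
# evenly than uniform random sampling, which helps the initial global exploration.
sampler = qmc.Sobol(d=dim, scramble=True, seed=42)
population = qmc.scale(sampler.random(n_pop), lower, upper)

# Adaptive weight factor: large early (exploration), small late (exploitation).
def adaptive_weight(t, T, w_max=0.9, w_min=0.2):
    return w_max - (w_max - w_min) * (t / T) ** 2

weights = [round(adaptive_weight(t, T), 3) for t in (0, T // 2, T - 1)]
print("population:", population.shape, "weight early/mid/late:", weights)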
While emerging technologies such as the Internet of Things (IoT) have many benefits, they also pose considerable security challenges that require innovative solutions, including those based on artificial intelligence (AI), given that these techniques are increasingly being used by malicious actors to compromise IoT systems. Although an ample body of research focusing on conventional AI methods exists, there is a paucity of studies on advanced statistical and optimization approaches aimed at enhancing security measures. To contribute to this nascent research stream, a novel AI-driven security system denoted as "AI2AI" is presented in this work. AI2AI employs AI techniques to enhance the performance and optimize security mechanisms within the IoT framework. We also introduce the Genetic Algorithm Anomaly Detection and Prevention Deep Neural Networks (GAADPSDNN) system, which can be implemented to effectively identify, detect, and prevent cyberattacks targeting IoT devices. Notably, this system demonstrates adaptability to both federated and centralized learning environments, accommodating a wide array of IoT devices. Our evaluation of the GAADPSDNN system using the recently compiled WUSTL-IIoT and Edge-IIoT datasets underscores its efficacy. Achieving an impressive overall accuracy of 98.18% on the Edge-IIoT dataset, the GAADPSDNN outperforms the standard deep neural network (DNN) classifier, which attains 94.11% accuracy. Furthermore, with the proposed enhancements, the accuracy of the unoptimized random forest classifier (80.89%) is improved to 93.51%, while the overall accuracy (98.18%) surpasses the results (93.91%, 94.67%, 94.94%, and 94.96%) achieved when alternative systems based on diverse optimization techniques and the same dataset are employed. The proposed optimization techniques increase the effectiveness of the anomaly detection system by efficiently achieving high accuracy and reducing the computational load on IoT devices through the adaptive selection of active features.
Traditional large-scale multi-objective optimization algorithms (LSMOEAs) encounter difficulties when dealing with sparse large-scale multi-objective optimization problems (SLMOPs) in which most decision variables are zero. As a result, many algorithms use a two-layer encoding approach to optimize the binary variable Mask and the real variable Dec separately. Nevertheless, existing optimizers often focus on locating non-zero variable positions to optimize the binary variable Mask. However, approximating the sparse distribution of the real Pareto optimal solutions does not necessarily mean that the objective function is optimized. In data mining, it is common to mine frequent itemsets appearing together in a dataset to reveal correlations between data. Inspired by this, we propose a novel two-layer encoding learning swarm optimizer based on frequent itemsets (TELSO) to address these SLMOPs. TELSO mines the frequent items of multiple particles with better objective values to find Mask combinations that can obtain better objective values for fast convergence. Experimental results on five real-world problems and eight benchmark sets demonstrate that TELSO outperforms existing state-of-the-art sparse large-scale multi-objective evolutionary algorithms (SLMOEAs) in terms of performance and convergence speed.
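The frequent-itemset idea behind TELSO can be illustrated with a toy example: treat the indices of non-zero decision variables in the best particles as items, count co-occurring pairs, and assemble a candidate Mask from the frequent ones. The support threshold and the synthetic elite population below are assumptions for illustration only, not TELSO's actual operator.

from collections import Counter
from itertools import combinations
import numpy as np

rng = np.random.default_rng(0)

# Toy elite population: binary Masks of 30 decision variables for the 10 best particles.
elite_masks = (rng.random((10, 30)) < 0.1).astype(int)
elite_masks[:, [2, 7]] = 1          # pretend variables 2 and 7 are non-zero in every good solution

# Mine frequent 2-itemsets: pairs of variable indices that are jointly non-zero
# in many good solutions, i.e. co-occurring "items" in the data-mining sense.
min_support = 6
counts = Counter()
for mask in elite_masks:
    active = np.flatnonzero(mask)
    counts.update(combinations(active.tolist(), 2))
frequent_pairs = [pair for pair, c in counts.items() if c >= min_support]

# Build a candidate Mask that switches on every variable appearing in a frequent pair.
candidate = np.zeros(30, dtype=int)
for i, j in frequent_pairs:
    candidate[i] = candidate[j] = 1
print("frequent pairs:", frequent_pairs, "| active variables in candidate Mask:", candidate.sum())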
Computer-aided diagnosis of pneumonia based on deep learning is a research hotspot. However, when extracting features from lung X-ray images, features of different sizes and different directions are not captured sufficiently. A pneumonia classification model based on multi-scale directional feature enhancement, MSD-Net, is proposed in this paper. The main innovations are as follows. Firstly, the Multi-scale Residual Feature Extraction Module (MRFEM) is designed to effectively extract multi-scale features. The MRFEM uses dilated convolutions with different expansion rates to increase the receptive field and extract multi-scale features effectively. Secondly, the Multi-scale Directional Feature Perception Module (MDFPM) is designed, which uses a three-branch structure of convolutions of different sizes to transmit directional features layer by layer and focuses on the target region to enhance the feature information. Thirdly, the Axial Compression Former Module (ACFM) is designed to perform global calculations and enhance the perception of global features in different directions. To verify the effectiveness of MSD-Net, comparative experiments and ablation experiments are carried out. On the COVID-19 RADIOGRAPHY DATABASE, the Accuracy, Recall, Precision, F1 Score, and Specificity of MSD-Net are 97.76%, 95.57%, 95.52%, 95.52%, and 98.51%, respectively. On the chest X-ray dataset, the Accuracy, Recall, Precision, F1 Score, and Specificity of MSD-Net are 97.78%, 95.22%, 96.49%, 95.58%, and 98.11%, respectively. This model effectively improves the accuracy of lung image recognition and provides an important clinical reference for pneumonia computer-aided diagnosis.
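A dilated-convolution multi-scale residual block in the spirit of the MRFEM described above could look like the following sketch; the channel count and dilation rates are assumptions, not the authors' exact module.

import torch
import torch.nn as nn

class MultiScaleResidualBlock(nn.Module):
    """Parallel 3x3 convolutions with different dilation rates, fused and added back
    to the input (a sketch in the spirit of the MRFEM, not the published module)."""
    def __init__(self, channels, dilations=(1, 2, 3)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(channels), nn.ReLU(inplace=True))
            for d in dilations])
        self.fuse = nn.Conv2d(channels * len(dilations), channels, 1, bias=False)

    def forward(self, x):
        multi = torch.cat([b(x) for b in self.branches], dim=1)  # concatenate multi-scale features
        return torch.relu(self.fuse(multi) + x)                  # residual connection

x = torch.randn(2, 64, 56, 56)                    # e.g. intermediate features of a chest X-ray
print(MultiScaleResidualBlock(64)(x).shape)        # torch.Size([2, 64, 56, 56])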
The hands and face are the most important parts for expressing sign language morphemes in sign language videos. However, we find that existing Continuous Sign Language Recognition (CSLR) methods either lack the mining of hand and face information in their visual backbones or use expensive and time-consuming external extractors to explore this information. In addition, signs have different lengths, whereas previous CSLR methods typically use a fixed-length window to segment the video to capture sequential features and then perform global temporal modeling, which disturbs the perception of complete signs. In this study, we propose a Multi-Scale Context-Aware network (MSCA-Net) to solve the aforementioned problems. Our MSCA-Net contains two main modules: (1) Multi-Scale Motion Attention (MSMA), which uses the differences among frames to perceive information of the hands and face at multiple spatial scales, replacing the heavy feature extractors; and (2) Multi-Scale Temporal Modeling (MSTM), which explores crucial temporal information in the sign language video at different temporal scales. We conduct extensive experiments on three widely used sign language datasets, i.e., RWTH-PHOENIX-Weather-2014, RWTH-PHOENIX-Weather-2014T, and CSL-Daily. The proposed MSCA-Net achieves state-of-the-art performance, demonstrating the effectiveness of our approach.
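The frame-difference idea behind MSMA can be sketched as a simple gating step: adjacent-frame differences highlight moving regions such as the hands and face, and a sigmoid gate re-weights the per-frame features. This single-scale sketch is an illustrative simplification, not the paper's module.

import torch
import torch.nn as nn

class MotionGate(nn.Module):
    """Re-weight per-frame features using adjacent-frame differences (single-scale sketch)."""
    def __init__(self, channels):
        super().__init__()
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, feats):                            # feats: (batch, time, C, H, W)
        B, T, C, H, W = feats.shape
        diff = feats[:, 1:] - feats[:, :-1]               # motion between adjacent frames
        diff = torch.cat([diff, diff[:, -1:]], dim=1)     # pad last step to keep length T
        gate = torch.sigmoid(self.proj(diff.reshape(B * T, C, H, W)))
        out = feats.reshape(B * T, C, H, W) * gate        # emphasize moving regions (hands/face)
        return out.reshape(B, T, C, H, W)

feats = torch.randn(2, 16, 32, 28, 28)                    # toy 16-frame clip features
print(MotionGate(32)(feats).shape)                         # torch.Size([2, 16, 32, 28, 28])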
Remote sensing imagery, due to its high acquisition altitude, presents inherent challenges characterized by multiple scales, limited target areas, and intricate backgrounds. These inherent traits often lead to increased miss and false detection rates when applying object recognition algorithms tailored for remote sensing imagery. Additionally, these complexities contribute to inaccuracies in target localization and hinder precise target categorization. This paper addresses these challenges by proposing the YOLO-MFD model (YOLO-MFD: Remote Sensing Image Object Detection with Multi-scale Fusion Dynamic Head). We first review the prevalent issues faced in remote sensing imagery analysis, in particular the struggle of existing object recognition algorithms to comprehensively capture critical image features amidst varying scales and complex backgrounds. To resolve these issues, we introduce a novel approach. First, we propose a lightweight multi-scale module called CEF. This module significantly improves the model's ability to comprehensively capture important image features by merging multi-scale feature information, and it effectively addresses the missed detections and false alarms that are common in remote sensing imagery. Second, an additional layer of small-target detection heads is added, and a residual link is established with the higher-level feature extraction module in the backbone. This allows the model to incorporate shallower information, significantly improving the accuracy of target localization in remotely sensed images. Finally, a dynamic head attention mechanism is introduced, which allows the model to recognize shapes and targets of different sizes with greater flexibility and accuracy, significantly improving detection precision. The trial results show that the YOLO-MFD model improves on the original YOLOv8 model by 6.3%, 3.5%, and 2.5% in Precision, mAP@0.5, and mAP@0.5:0.95, respectively. These results illustrate the clear advantages of the method.
Rock fracture mechanisms can be inferred from moment tensors (MT) inverted from microseismic events. However, MT can only be inverted for events whose waveforms are acquired across a network of sensors. This is limiting for underground mines, where the microseismic stations often lack azimuthal coverage. Thus, there is a need for a method to invert fracture mechanisms from waveforms acquired by a sparse microseismic network. Here, we present a novel multi-scale framework to classify whether a rock crack contracts or dilates based on a single waveform. The framework consists of a deep learning model that is initially trained on 2,400,000+ manually labelled field-scale seismic and microseismic waveforms acquired across 692 stations. Transfer learning is then applied to fine-tune the model on 300,000+ MT-labelled lab-scale acoustic emission waveforms from 39 individual experiments instrumented with different sensor layouts, loading conditions, and rock types. The optimal model achieves over 86% F-score on unseen waveforms at both the lab and field scales. This model outperforms existing empirical methods in the classification of rock fracture mechanisms monitored by a sparse microseismic network, facilitating rapid assessment of, and early warning against, various rock engineering hazards such as induced earthquakes and rock bursts.
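The transfer-learning step described above (pre-train on field-scale waveforms, then fine-tune on MT-labelled lab-scale waveforms) follows the usual freeze-and-replace recipe sketched here; the 1-D CNN backbone, checkpoint name, and data shapes are placeholders, not the authors' network.

import torch
import torch.nn as nn

# Placeholder backbone standing in for the network pre-trained on the field-scale waveforms.
backbone = nn.Sequential(
    nn.Conv1d(1, 16, 7, stride=2), nn.ReLU(),
    nn.Conv1d(16, 32, 5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten())
# backbone.load_state_dict(torch.load("field_scale_pretrained.pt"))  # hypothetical checkpoint

# Transfer learning: freeze the pre-trained feature extractor ...
for p in backbone.parameters():
    p.requires_grad = False

# ... and train only a new head on the MT-labelled lab-scale waveforms
# (2 classes: crack contraction vs. dilation).
head = nn.Linear(32, 2)
model = nn.Sequential(backbone, head)
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

waveforms = torch.randn(8, 1, 2048)        # toy batch of single-channel waveforms
labels = torch.randint(0, 2, (8,))
loss = criterion(model(waveforms), labels)
loss.backward(); optimizer.step()
print("fine-tuning loss:", loss.item())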
基金supported by the National Natural Science Foundation of China [grant numbers 42088101 and 42375048]。
文摘Due to the lack of accurate data and complex parameterization,the prediction of groundwater depth is a chal-lenge for numerical models.Machine learning can effectively solve this issue and has been proven useful in the prediction of groundwater depth in many areas.In this study,two new models are applied to the prediction of groundwater depth in the Ningxia area,China.The two models combine the improved dung beetle optimizer(DBO)algorithm with two deep learning models:The Multi-head Attention-Convolution Neural Network-Long Short Term Memory networks(MH-CNN-LSTM)and the Multi-head Attention-Convolution Neural Network-Gated Recurrent Unit(MH-CNN-GRU).The models with DBO show better prediction performance,with larger R(correlation coefficient),RPD(residual prediction deviation),and lower RMSE(root-mean-square error).Com-pared with the models with the original DBO,the R and RPD of models with the improved DBO increase by over 1.5%,and the RMSE decreases by over 1.8%,indicating better prediction results.In addition,compared with the multiple linear regression model,a traditional statistical model,deep learning models have better prediction performance.
基金Prince Sattam bin Abdulaziz University project number(PSAU/2023/R/1445)。
文摘Prediction of stability in SG(Smart Grid)is essential in maintaining consistency and reliability of power supply in grid infrastructure.Analyzing the fluctuations in power generation and consumption patterns of smart cities assists in effectively managing continuous power supply in the grid.It also possesses a better impact on averting overloading and permitting effective energy storage.Even though many traditional techniques have predicted the consumption rate for preserving stability,enhancement is required in prediction measures with minimized loss.To overcome the complications in existing studies,this paper intends to predict stability from the smart grid stability prediction dataset using machine learning algorithms.To accomplish this,pre-processing is performed initially to handle missing values since it develops biased models when missing values are mishandled and performs feature scaling to normalize independent data features.Then,the pre-processed data are taken for training and testing.Following that,the regression process is performed using Modified PSO(Particle Swarm Optimization)optimized XGBoost Technique with dynamic inertia weight update,which analyses variables like gamma(G),reaction time(tau1–tau4),and power balance(p1–p4)for providing effective future stability in SG.Since PSO attains optimal solution by adjusting position through dynamic inertial weights,it is integrated with XGBoost due to its scalability and faster computational speed characteristics.The hyperparameters of XGBoost are fine-tuned in the training process for achieving promising outcomes on prediction.Regression results are measured through evaluation metrics such as MSE(Mean Square Error)of 0.011312781,MAE(Mean Absolute Error)of 0.008596322,and RMSE(Root Mean Square Error)of 0.010636156 and MAPE(Mean Absolute Percentage Error)value of 0.0052 which determine the efficacy of the system.
基金supported by the Guangxi Science and Technology Plan and Project(Grant Numbers 2021AC19131 and 2022AC21140)Guangxi University of Science and Technology Doctoral Fund Project(Grant Number 20Z40).
文摘In this paper,to present a lightweight-developed front underrun protection device(FUPD)for heavy-duty trucks,plain weave carbon fiber reinforced plastic(CFRP)is used instead of the original high-strength steel.First,the mechanical and structural properties of plain carbon fiber composite anti-collision beams are comparatively analyzed from a multi-scale perspective.For studying the design capability of carbon fiber composite materials,we investigate the effects of TC-33 carbon fiber diameter(D),fiber yarn width(W)and height(H),and fiber yarn density(N)on the front underrun protective beam of carbon fiber compositematerials.Based on the investigation,a material-structure matching strategy suitable for the front underrun protective beam of heavy-duty trucks is proposed.Next,the composite material structure is optimized by applying size optimization and stack sequence optimization methods to obtain the higher performance carbon fiber composite front underrun protection beam of commercial vehicles.The results show that the fiber yarn height(H)has the greatest influence on the protective beam,and theH1matching scheme for the front underrun protective beamwith a carbon fiber composite structure exhibits superior performance.The proposed method achieves a weight reduction of 55.21% while still meeting regulatory requirements,which demonstrates its remarkable weight reduction effect.
基金Project supported by the National Natural Science Foundation of China (Grant No. 62403150)the Innovation Project of Guangxi Graduate Education (Grant No. YCSW2024129)the Guangxi Science and Technology Base and Talent Project (Grant No. Guike AD23026208)。
文摘The cutoff frequency is one of the crucial parameters that characterize the environment. In this paper, we estimate the cutoff frequency of the Ohmic spectral density by applying the π-pulse sequences(both equidistant and optimized)to a quantum probe coupled to a bosonic environment. To demonstrate the precision of cutoff frequency estimation, we theoretically derive the quantum Fisher information(QFI) and quantum signal-to-noise ratio(QSNR) across sub-Ohmic,Ohmic, and super-Ohmic environments, and investigate their behaviors through numerical examples. The results indicate that, compared to the equidistant π-pulse sequence, the optimized π-pulse sequence significantly shortens the time to reach maximum QFI while enhancing the precision of cutoff frequency estimation, particularly in deep sub-Ohmic and deep super-Ohmic environments.
基金supported by the National Natural Science Foundation of China (Grant Nos.40334040 and 40974033)the Promoting Foundation for Advanced Persons of Talent of NCWU
文摘Local and global optimization methods are widely used in geophysical inversion but each has its own advantages and disadvantages. The combination of the two methods will make it possible to overcome their weaknesses. Based on the simulated annealing genetic algorithm (SAGA) and the simplex algorithm, an efficient and robust 2-D nonlinear method for seismic travel-time inversion is presented in this paper. First we do a global search over a large range by SAGA and then do a rapid local search using the simplex method. A multi-scale tomography method is adopted in order to reduce non-uniqueness. The velocity field is divided into different spatial scales and velocities at the grid nodes are taken as unknown parameters. The model is parameterized by a bi-cubic spline function. The finite-difference method is used to solve the forward problem while the hybrid method combining multi-scale SAGA and simplex algorithms is applied to the inverse problem. The algorithm has been applied to a numerical test and a travel-time perturbation test using an anomalous low-velocity body. For a practical example, it is used in the study of upper crustal velocity structure of the A'nyemaqen suture zone at the north-east edge of the Qinghai-Tibet Plateau. The model test and practical application both prove that the method is effective and robust.
基金This study was supported by the National Natural Science Foundation of China(U22B2075,52274056,51974356).
文摘A large number of nanopores and complex fracture structures in shale reservoirs results in multi-scale flow of oil. With the development of shale oil reservoirs, the permeability of multi-scale media undergoes changes due to stress sensitivity, which plays a crucial role in controlling pressure propagation and oil flow. This paper proposes a multi-scale coupled flow mathematical model of matrix nanopores, induced fractures, and hydraulic fractures. In this model, the micro-scale effects of shale oil flow in fractal nanopores, fractal induced fracture network, and stress sensitivity of multi-scale media are considered. We solved the model iteratively using Pedrosa transform, semi-analytic Segmented Bessel function, Laplace transform. The results of this model exhibit good agreement with the numerical solution and field production data, confirming the high accuracy of the model. As well, the influence of stress sensitivity on permeability, pressure and production is analyzed. It is shown that the permeability and production decrease significantly when induced fractures are weakly supported. Closed induced fractures can inhibit interporosity flow in the stimulated reservoir volume (SRV). It has been shown in sensitivity analysis that hydraulic fractures are beneficial to early production, and induced fractures in SRV are beneficial to middle production. The model can characterize multi-scale flow characteristics of shale oil, providing theoretical guidance for rapid productivity evaluation.
文摘Multi-scale system remains a classical scientific problem in fluid dynamics,biology,etc.In the present study,a scheme of multi-scale Physics-informed neural networks is proposed to solve the boundary layer flow at high Reynolds numbers without any data.The flow is divided into several regions with different scales based on Prandtl's boundary theory.Different regions are solved with governing equations in different scales.The method of matched asymptotic expansions is used to make the flow field continuously.A flow on a semi infinite flat plate at a high Reynolds number is considered a multi-scale problem because the boundary layer scale is much smaller than the outer flow scale.The results are compared with the reference numerical solutions,which show that the msPINNs can solve the multi-scale problem of the boundary layer in high Reynolds number flows.This scheme can be developed for more multi-scale problems in the future.
基金the National Natural Science Foundation of China(Grant 42177164)the Distinguished Youth Science Foundation of Hunan Province of China(2022JJ10073).
文摘As massive underground projects have become popular in dense urban cities,a problem has arisen:which model predicts the best for Tunnel Boring Machine(TBM)performance in these tunneling projects?However,performance level of TBMs in complex geological conditions is still a great challenge for practitioners and researchers.On the other hand,a reliable and accurate prediction of TBM performance is essential to planning an applicable tunnel construction schedule.The performance of TBM is very difficult to estimate due to various geotechnical and geological factors and machine specifications.The previously-proposed intelligent techniques in this field are mostly based on a single or base model with a low level of accuracy.Hence,this study aims to introduce a hybrid randomforest(RF)technique optimized by global harmony search with generalized oppositionbased learning(GOGHS)for forecasting TBM advance rate(AR).Optimizing the RF hyper-parameters in terms of,e.g.,tree number and maximum tree depth is the main objective of using the GOGHS-RF model.In the modelling of this study,a comprehensive databasewith themost influential parameters onTBMtogetherwithTBM AR were used as input and output variables,respectively.To examine the capability and power of the GOGHSRF model,three more hybrid models of particle swarm optimization-RF,genetic algorithm-RF and artificial bee colony-RF were also constructed to forecast TBM AR.Evaluation of the developed models was performed by calculating several performance indices,including determination coefficient(R2),root-mean-square-error(RMSE),and mean-absolute-percentage-error(MAPE).The results showed that theGOGHS-RF is a more accurate technique for estimatingTBMAR compared to the other applied models.The newly-developedGOGHS-RFmodel enjoyed R2=0.9937 and 0.9844,respectively,for train and test stages,which are higher than a pre-developed RF.Also,the importance of the input parameters was interpreted through the SHapley Additive exPlanations(SHAP)method,and it was found that thrust force per cutter is the most important variable on TBMAR.The GOGHS-RF model can be used in mechanized tunnel projects for predicting and checking performance.
文摘This research presents a novel nature-inspired metaheuristic algorithm called Frilled Lizard Optimization(FLO),which emulates the unique hunting behavior of frilled lizards in their natural habitat.FLO draws its inspiration from the sit-and-wait hunting strategy of these lizards.The algorithm’s core principles are meticulously detailed and mathematically structured into two distinct phases:(i)an exploration phase,which mimics the lizard’s sudden attack on its prey,and(ii)an exploitation phase,which simulates the lizard’s retreat to the treetops after feeding.To assess FLO’s efficacy in addressing optimization problems,its performance is rigorously tested on fifty-two standard benchmark functions.These functions include unimodal,high-dimensional multimodal,and fixed-dimensional multimodal functions,as well as the challenging CEC 2017 test suite.FLO’s performance is benchmarked against twelve established metaheuristic algorithms,providing a comprehensive comparative analysis.The simulation results demonstrate that FLO excels in both exploration and exploitation,effectively balancing these two critical aspects throughout the search process.This balanced approach enables FLO to outperform several competing algorithms in numerous test cases.Additionally,FLO is applied to twenty-two constrained optimization problems from the CEC 2011 test suite and four complex engineering design problems,further validating its robustness and versatility in solving real-world optimization challenges.Overall,the study highlights FLO’s superior performance and its potential as a powerful tool for tackling a wide range of optimization problems.
基金financial support for this research was provided by the Program (Grants 11372060, 91216201) of the National Natural Science Foundation of ChinaProgram (LJQ2015026 ) for Excellent Talents at Colleges and Universities in Liaoning Province+3 种基金the Major National Science and Technology Project (2011ZX02403-002)111 project (B14013)Fundamental Research Funds for the Central Universities (DUT14LK30)the China Scholarship Fund
文摘This paper deals with the concurrent multi-scale optimization design of frame structure composed of glass or carbon fiber reinforced polymer laminates. In the composite frame structure, the fiber winding angle at the micro-material scale and the geometrical parameter of components of the frame in the macro-structural scale are introduced as the independent variables on the two geometrical scales. Considering manufacturing requirements, discrete fiber winding angles are specified for the micro design variable. The improved Heaviside penalization discrete material optimization interpolation scheme has been applied to achieve the discrete optimization design of the fiber winding angle. An optimization model based on the minimum structural compliance and the specified fiber material volume constraint has been established. The sensitivity information about the two geometrical scales design variables are also deduced considering the characteristics of discrete fiber winding angles. The optimization results of the fiber winding angle or the macro structural topology on the two single geometrical scales, together with the concurrent two-scale optimization, is separately studied and compared in the paper. Numerical examples in the paper show that the concurrent multi-scale optimization can further explore the coupling effect between the macro-structure and micro-material of the composite to achieve an ultralight design of the composite frame structure. The novel two geometrical scales optimization model provides a new opportunity for the design of composite structure in aerospace and other industries.
基金Project supported by the National Natural Science Foundation of China(Nos.11932008 and 12272156)the Fundamental Research Funds for the Central Universities(No.lzujbky-2022-kb06)+1 种基金the Gansu Science and Technology ProgramLanzhou City’s Scientific Research Funding Subsidy to Lanzhou University of China。
文摘Second-generation high-temperature superconducting(HTS)conductors,specifically rare earth-barium-copper-oxide(REBCO)coated conductor(CC)tapes,are promising candidates for high-energy and high-field superconducting applications.With respect to epoxy-impregnated REBCO composite magnets that comprise multilayer components,the thermomechanical characteristics of each component differ considerably under extremely low temperatures and strong electromagnetic fields.Traditional numerical models include homogenized orthotropic models,which simplify overall field calculation but miss detailed multi-physics aspects,and full refinement(FR)ones that are thorough but computationally demanding.Herein,we propose an extended multi-scale approach for analyzing the multi-field characteristics of an epoxy-impregnated composite magnet assembled by HTS pancake coils.This approach combines a global homogenization(GH)scheme based on the homogenized electromagnetic T-A model,a method for solving Maxwell's equations for superconducting materials based on the current vector potential T and the magnetic field vector potential A,and a homogenized orthotropic thermoelastic model to assess the electromagnetic and thermoelastic properties at the macroscopic scale.We then identify“dangerous regions”at the macroscopic scale and obtain finer details using a local refinement(LR)scheme to capture the responses of each component material in the HTS composite tapes at the mesoscopic scale.The results of the present GH-LR multi-scale approach agree well with those of the FR scheme and the experimental data in the literature,indicating that the present approach is accurate and efficient.The proposed GH-LR multi-scale approach can serve as a valuable tool for evaluating the risk of failure in large-scale HTS composite magnets.
文摘BACKGROUND A cure for Helicobacter pylori(H.pylori)remains a problem of global concern.The prevalence of antimicrobial resistance is widely rising and becoming a challenging issue worldwide.Optimizing sequential therapy seems to be one of the most attractive strategies in terms of efficacy,tolerability and cost.The most common sequential therapy consists of a dual therapy[proton-pump inhibitors(PPIs)and amoxicillin]for the first period(5 to 7 d),followed by a triple therapy for the second period(PPI,clarithromycin and metronidazole).PPIs play a key role in maintaining a gastric pH at a level that allows an optimal efficacy of antibiotics,hence the idea of using new generation molecules.This open-label prospective study randomized 328 patients with confirmed H.pylori infection into three groups(1:1:1):The first group received quadruple therapy consisting of twice-daily(bid)omeprazole 20 mg,amoxicillin 1 g,clarith-romycin 500 mg and metronidazole 500 mg for 10 d(QT-10),the second group received a 14 d quadruple therapy following the same regimen(QT-14),and the third group received an optimized sequential therapy consisting of bid rabe-prazole 20 mg plus amoxicillin 1 g for 7 d,followed by bid rabeprazole 20 mg,clarithromycin 500 mg and metronidazole 500 mg for the next 7 d(OST-14).AEs were recorded throughout the study,and the H.pylori eradication rate was determined 4 to 6 wk after the end of treatment,using the 13C urea breath test.RESULTS In the intention-to-treat and per-protocol analysis,the eradication rate was higher in the OST-14 group compared to the QT-10 group:(93.5%,85.5%P=0.04)and(96.2%,89.5%P=0.03)respectively.However,there was no statist-ically significant difference in eradication rates between the OST-14 and QT-14 groups:(93.5%,91.8%P=0.34)and(96.2%,94.4%P=0.35),respectively.The overall incidence of AEs was significantly lower in the OST-14 group(P=0.01).Furthermore,OST-14 was the most cost-effective among the three groups.CONCLUSION The optimized 14-d sequential therapy is a safe and effective alternative.Its eradication rate is comparable to that of the 14-d concomitant therapy while causing fewer AEs and allowing a gain in terms of cost.
基金supported by the National Science and Technology Innovation 2030 Next-Generation Artifical Intelligence Major Project(2018AAA0101801)the National Natural Science Foundation of China(72271188)。
文摘With the development of information technology,a large number of product quality data in the entire manufacturing process is accumulated,but it is not explored and used effectively.The traditional product quality prediction models have many disadvantages,such as high complexity and low accuracy.To overcome the above problems,we propose an optimized data equalization method to pre-process dataset and design a simple but effective product quality prediction model:radial basis function model optimized by the firefly algorithm with Levy flight mechanism(RBFFALM).First,the new data equalization method is introduced to pre-process the dataset,which reduces the dimension of the data,removes redundant features,and improves the data distribution.Then the RBFFALFM is used to predict product quality.Comprehensive expe riments conducted on real-world product quality datasets validate that the new model RBFFALFM combining with the new data pre-processing method outperforms other previous me thods on predicting product quality.
Funding: Supported by the National Natural Science Foundation of China under Grant 61602162 and the Hubei Provincial Science and Technology Plan Project under Grant 2023BCB041.
Abstract: Network traffic identification is critical for maintaining network security and for meeting the various demands of network applications. However, network traffic data typically possess high dimensionality and complexity, leading to practical problems in traffic identification data analytics. Since the original Dung Beetle Optimizer (DBO) algorithm, Grey Wolf Optimization (GWO) algorithm, Whale Optimization Algorithm (WOA), and Particle Swarm Optimization (PSO) algorithm suffer from slow convergence and easily fall into local optima, an Improved Dung Beetle Optimizer (IDBO) algorithm is proposed for network traffic identification. Firstly, the Sobol sequence is utilized to initialize the dung beetle population, laying the foundation for finding the global optimal solution. Next, an integration of Levy flight and the golden sine strategy is suggested to give dung beetles a greater probability of exploring unvisited areas, escaping from local optima, and converging more effectively towards the global optimum. Finally, an adaptive weight factor is utilized to enhance the search capability of the original DBO algorithm and accelerate convergence. With the improvements above, the proposed IDBO algorithm is then applied to traffic identification data analytics and feature selection, so as to find the optimal subset for K-Nearest Neighbor (KNN) classification. The simulation experiments use the CICIDS2017 dataset to verify the effectiveness of the proposed IDBO algorithm and compare it with the original DBO, GWO, WOA, and PSO algorithms. The experimental results show that, compared with the other algorithms, accuracy and recall are improved by 1.53% and 0.88% in binary classification, and Distributed Denial of Service (DDoS) class identification is the most effective in multi-classification, with improvements of 5.80% and 0.33% in accuracy and recall, respectively. Therefore, the proposed IDBO algorithm is effective in increasing the efficiency of traffic identification and in overcoming the slow convergence and local-optimum trapping of the original DBO algorithm when dealing with high-dimensional data analytics and feature selection for network traffic identification.
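Two of the three improvements described above, Sobol-sequence initialization and an adaptive weight factor, can be sketched as follows. The weight schedule and the bounds are illustrative assumptions, not the paper's exact formulas.

```python
import numpy as np
from scipy.stats import qmc

def sobol_init(pop_size, dim, lower, upper, seed=0):
    # Sobol low-discrepancy sampling spreads the initial dung beetle population
    # more evenly over the search space than plain uniform random initialization.
    sampler = qmc.Sobol(d=dim, scramble=True, seed=seed)
    unit = sampler.random(pop_size)          # points in [0, 1)^dim
    return qmc.scale(unit, lower, upper)     # rescale to the search bounds

def adaptive_weight(t, t_max, w_max=0.9, w_min=0.4):
    # Illustrative nonlinearly decreasing weight: large early (exploration),
    # small late (exploitation). The paper's exact factor may differ.
    return w_min + (w_max - w_min) * (1 - t / t_max) ** 2

if __name__ == "__main__":
    pop = sobol_init(pop_size=32, dim=10, lower=[0.0] * 10, upper=[1.0] * 10)
    print(pop.shape, adaptive_weight(t=10, t_max=100))
```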
Abstract: While emerging technologies such as the Internet of Things (IoT) have many benefits, they also pose considerable security challenges that require innovative solutions, including those based on artificial intelligence (AI), given that these techniques are increasingly being used by malicious actors to compromise IoT systems. Although an ample body of research focusing on conventional AI methods exists, there is a paucity of studies related to advanced statistical and optimization approaches aimed at enhancing security measures. To contribute to this nascent research stream, a novel AI-driven security system denoted as "AI2AI" is presented in this work. AI2AI employs AI techniques to enhance the performance and optimize security mechanisms within the IoT framework. We also introduce the Genetic Algorithm Anomaly Detection and Prevention Deep Neural Networks (GAADPSDNN) system, which can be implemented to effectively identify, detect, and prevent cyberattacks targeting IoT devices. Notably, this system demonstrates adaptability to both federated and centralized learning environments, accommodating a wide array of IoT devices. Our evaluation of the GAADPSDNN system using the recently compiled WUSTL-IIoT and Edge-IIoT datasets underscores its efficacy. Achieving an impressive overall accuracy of 98.18% on the Edge-IIoT dataset, the GAADPSDNN outperforms the standard deep neural network (DNN) classifier, which achieves 94.11% accuracy. Furthermore, with the proposed enhancements, the accuracy of the unoptimized random forest classifier (80.89%) is improved to 93.51%, while the overall accuracy (98.18%) surpasses the results (93.91%, 94.67%, 94.94%, and 94.96%) achieved when alternative systems based on diverse optimization techniques and the same dataset are employed. The proposed optimization techniques increase the effectiveness of the anomaly detection system by efficiently achieving high accuracy and reducing the computational load on IoT devices through the adaptive selection of active features.
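The adaptive selection of active features mentioned above can be illustrated with a simple genetic-algorithm feature-selection wrapper. The classifier, fitness definition, and GA settings below are assumptions chosen for a self-contained demonstration, not the GAADPSDNN implementation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-in for an IoT traffic dataset.
X, y = make_classification(n_samples=400, n_features=30, n_informative=8, random_state=0)

def fitness(mask):
    # Fitness = cross-validated accuracy using only the selected feature columns.
    if mask.sum() == 0:
        return 0.0
    clf = RandomForestClassifier(n_estimators=30, random_state=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

def ga_feature_selection(n_features, pop_size=20, generations=10, p_mut=0.05):
    pop = rng.integers(0, 2, (pop_size, n_features))      # binary chromosomes
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[: pop_size // 2]]              # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_features)              # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_features) < p_mut          # bit-flip mutation
            child[flip] ^= 1
            children.append(child)
        pop = np.vstack([parents, children])
    scores = np.array([fitness(ind) for ind in pop])
    return pop[scores.argmax()], scores.max()

if __name__ == "__main__":
    best_mask, best_acc = ga_feature_selection(X.shape[1])
    print("selected features:", int(best_mask.sum()), "cv accuracy:", round(best_acc, 3))
```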
Funding: Supported by the Scientific Research Project of Xiang Jiang Lab (22XJ02003), the University Fundamental Research Fund (23-ZZCX-JDZ-28), the National Science Fund for Outstanding Young Scholars (62122093), the National Natural Science Foundation of China (72071205), the Hunan Graduate Research Innovation Project (ZC23112101-10), the Hunan Natural Science Foundation Regional Joint Project (2023JJ50490), the Science and Technology Project for Young and Middle-aged Talents of Hunan (2023TJ-Z03), and the Science and Technology Innovation Program of Hunan Province (2023RC1002).
Abstract: Traditional large-scale multi-objective optimization algorithms (LSMOEAs) encounter difficulties when dealing with sparse large-scale multi-objective optimization problems (SLMOPs), where most decision variables are zero. As a result, many algorithms use a two-layer encoding approach to optimize the binary variable Mask and the real variable Dec separately. Nevertheless, existing optimizers often focus on locating non-zero variable positions to optimize the binary variables in Mask. However, approximating the sparse distribution of real Pareto optimal solutions does not necessarily mean that the objective function is optimized. In data mining, it is common to mine frequent itemsets appearing together in a dataset to reveal the correlation between data. Inspired by this, we propose a novel two-layer encoding learning swarm optimizer based on frequent itemsets (TELSO) to address these SLMOPs. TELSO mines the frequent items of multiple particles with better objective values to find Mask combinations that can obtain better objective values for fast convergence. Experimental results on five real-world problems and eight benchmark sets demonstrate that TELSO outperforms existing state-of-the-art sparse large-scale multi-objective evolutionary algorithms (SLMOEAs) in terms of performance and convergence speed.
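The frequent-itemset idea behind TELSO can be sketched, in a much simplified form, as counting how often each non-zero position appears among the better-performing Mask vectors and building a candidate Mask from the frequent positions. The support threshold and ranking rule below are illustrative assumptions, not the published operator.

```python
import numpy as np

def frequent_position_mask(masks, fitness, top_ratio=0.3, min_support=0.6):
    # masks: (pop, n_vars) binary Mask vectors; fitness: lower is better (minimization).
    order = np.argsort(fitness)
    top = masks[order[: max(1, int(top_ratio * len(masks)))]]
    support = top.mean(axis=0)                    # how often each position is non-zero
    candidate = (support >= min_support).astype(int)
    return candidate, support

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    masks = rng.integers(0, 2, (40, 12))
    fitness = rng.random(40)                      # stand-in objective values
    cand, sup = frequent_position_mask(masks, fitness)
    print(cand)
```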
Funding: Supported in part by the National Natural Science Foundation of China (Grant No. 62062003) and the Natural Science Foundation of Ningxia (Grant No. 2023AAC03293).
Abstract: Computer-aided diagnosis of pneumonia based on deep learning is a research hotspot. However, existing methods do not sufficiently extract features of different sizes and different directions from lung X-ray images. A pneumonia classification model based on multi-scale directional feature enhancement, MSD-Net, is proposed in this paper. The main innovations are as follows. Firstly, the Multi-scale Residual Feature Extraction Module (MRFEM) is designed to extract multi-scale features effectively. The MRFEM uses dilated convolutions with different expansion rates to increase the receptive field and extract multi-scale features. Secondly, the Multi-scale Directional Feature Perception Module (MDFPM) is designed, which uses a three-branch structure with convolutions of different sizes to transmit directional features layer by layer and focuses on the target region to enhance the feature information. Thirdly, the Axial Compression Former Module (ACFM) is designed to perform global calculations to enhance the perception of global features in different directions. To verify the effectiveness of MSD-Net, comparative experiments and ablation experiments are carried out. On the COVID-19 RADIOGRAPHY DATABASE, the Accuracy, Recall, Precision, F1 Score, and Specificity of MSD-Net are 97.76%, 95.57%, 95.52%, 95.52%, and 98.51%, respectively. On the chest X-ray dataset, the Accuracy, Recall, Precision, F1 Score, and Specificity of MSD-Net are 97.78%, 95.22%, 96.49%, 95.58%, and 98.11%, respectively. This model effectively improves the accuracy of lung image recognition and provides an important clinical reference for computer-aided diagnosis of pneumonia.
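A generic version of the dilated multi-scale idea described for the MRFEM can be sketched in PyTorch as parallel 3x3 convolutions with different dilation rates whose outputs are concatenated and fused. The branch count, rates, and channel sizes are assumptions, not the published module.

```python
import torch
import torch.nn as nn

class MultiScaleDilatedBlock(nn.Module):
    """Parallel dilated convolutions enlarge the receptive field at several scales."""
    def __init__(self, in_ch, out_ch, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]   # same spatial size per branch
        return self.fuse(torch.cat(feats, dim=1))

if __name__ == "__main__":
    block = MultiScaleDilatedBlock(in_ch=3, out_ch=16)
    print(block(torch.randn(1, 3, 224, 224)).shape)       # -> (1, 16, 224, 224)
```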
Funding: Supported by the National Natural Science Foundation of China (62072334).
Abstract: The hands and face are the most important parts for expressing sign language morphemes in sign language videos. However, we find that existing Continuous Sign Language Recognition (CSLR) methods either lack the mining of hand and face information in their visual backbones or use expensive and time-consuming external extractors to explore this information. In addition, signs have different lengths, whereas previous CSLR methods typically use a fixed-length window to segment the video to capture sequential features and then perform global temporal modeling, which disturbs the perception of complete signs. In this study, we propose a Multi-Scale Context-Aware network (MSCA-Net) to solve the aforementioned problems. Our MSCA-Net contains two main modules: (1) Multi-Scale Motion Attention (MSMA), which uses the differences among frames to perceive information about the hands and face at multiple spatial scales, replacing heavy feature extractors; and (2) Multi-Scale Temporal Modeling (MSTM), which explores crucial temporal information in the sign language video from different temporal scales. We conduct extensive experiments on three widely used sign language datasets, i.e., RWTH-PHOENIX-Weather-2014, RWTH-PHOENIX-Weather-2014T, and CSL-Daily. The proposed MSCA-Net achieves state-of-the-art performance, demonstrating the effectiveness of our approach.
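The frame-difference attention idea described for the MSMA module can be sketched as computing temporal differences between adjacent frames and turning them into a spatial attention map applied to the frame features. The layer choices below are illustrative assumptions rather than the published design.

```python
import torch
import torch.nn as nn

class MotionAttention(nn.Module):
    """Use inter-frame differences to weight regions with motion (e.g., hands and face)."""
    def __init__(self, channels):
        super().__init__()
        self.to_attn = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, feats):
        # feats: (B, T, C, H, W) frame-level feature maps of a sign language clip.
        diff = feats[:, 1:] - feats[:, :-1]                  # temporal differences
        diff = torch.cat([diff, diff[:, -1:]], dim=1)        # pad back to length T
        b, t, c, h, w = diff.shape
        attn = self.to_attn(diff.reshape(b * t, c, h, w))    # (B*T, 1, H, W) motion map
        attn = attn.reshape(b, t, 1, h, w)
        return feats * attn + feats                          # residual attention

if __name__ == "__main__":
    m = MotionAttention(channels=32)
    print(m(torch.randn(2, 8, 32, 28, 28)).shape)            # -> (2, 8, 32, 28, 28)
```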
Funding: Supported by the Scientific Research Fund of Hunan Provincial Education Department (23A0423).
Abstract: Remote sensing imagery, due to its high acquisition altitude, presents inherent challenges characterized by multiple scales, limited target areas, and intricate backgrounds. These traits often lead to increased miss and false detection rates when applying object recognition algorithms tailored for remote sensing imagery, and they contribute to inaccuracies in target localization and hinder precise target categorization. This paper addresses these challenges by proposing a solution: the YOLO-MFD model (YOLO-MFD: Remote Sensing Image Object Detection with Multi-scale Fusion Dynamic Head). Before presenting our method, we delve into the prevalent issues faced in remote sensing imagery analysis. Specifically, we emphasize the struggles of existing object recognition algorithms in comprehensively capturing critical image features amidst varying scales and complex backgrounds. To resolve these issues, we introduce a novel approach. First, we propose a lightweight multi-scale module called CEF. This module significantly improves the model's ability to comprehensively capture important image features by merging multi-scale feature information, effectively addressing the missed detections and false alarms that are common in remote sensing imagery. Second, an additional layer of small-target detection heads is added, and a residual link is established with the higher-level feature extraction module in the backbone (see the sketch after this abstract). This allows the model to incorporate shallower information, significantly improving the accuracy of target localization in remotely sensed images. Finally, a dynamic head attention mechanism is introduced, which allows the model to recognize shapes and targets of different sizes with greater flexibility and accuracy, significantly improving detection precision. Trial results show that the YOLO-MFD model improves on the original YOLOv8 model by 6.3%, 3.5%, and 2.5% in Precision, mAP@0.5, and mAP@0.5:0.95, respectively. These results illustrate the clear advantages of the method.
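The extra small-target head with a residual link to a shallower backbone level can be sketched as upsampling a deep feature map, fusing it with a higher-resolution shallow map, and attaching a lightweight prediction head. Channel sizes and the fusion rule are assumptions for illustration only, not the YOLO-MFD implementation.

```python
import torch
import torch.nn as nn

class SmallObjectHead(nn.Module):
    """Fuse a shallow high-resolution map with upsampled deep features, then predict."""
    def __init__(self, shallow_ch, deep_ch, num_outputs):
        super().__init__()
        self.reduce = nn.Conv2d(deep_ch, shallow_ch, kernel_size=1)
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.fuse = nn.Conv2d(shallow_ch * 2, shallow_ch, kernel_size=3, padding=1)
        self.head = nn.Conv2d(shallow_ch, num_outputs, kernel_size=1)

    def forward(self, shallow, deep):
        up = self.up(self.reduce(deep))                     # match the shallow resolution
        fused = self.fuse(torch.cat([shallow, up], dim=1))
        fused = fused + shallow                             # residual link to the shallow level
        return self.head(fused)

if __name__ == "__main__":
    head = SmallObjectHead(shallow_ch=64, deep_ch=128, num_outputs=85)
    s = torch.randn(1, 64, 80, 80)     # shallow, higher-resolution feature map
    d = torch.randn(1, 128, 40, 40)    # deeper, lower-resolution feature map
    print(head(s, d).shape)            # -> (1, 85, 80, 80)
```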
Funding: Supported by Western Research Interdisciplinary Initiative R6259A03.
Abstract: Rock fracture mechanisms can be inferred from moment tensors (MT) inverted from microseismic events. However, MT can only be inverted for events whose waveforms are acquired across a network of sensors. This is limiting for underground mines, where the microseismic stations often lack azimuthal coverage. Thus, there is a need for a method to invert fracture mechanisms using waveforms acquired by a sparse microseismic network. Here, we present a novel multi-scale framework to classify whether a rock crack contracts or dilates based on a single waveform. The framework consists of a deep learning model that is initially trained on more than 2,400,000 manually labelled field-scale seismic and microseismic waveforms acquired across 692 stations. Transfer learning is then applied to fine-tune the model on more than 300,000 MT-labelled lab-scale acoustic emission waveforms from 39 individual experiments with different sensor layouts, loading conditions, and rock types used in training. The optimal model achieves over 86% F-score on unseen waveforms at both the lab and field scales. This model outperforms existing empirical methods in the classification of rock fracture mechanisms monitored by a sparse microseismic network, facilitating rapid assessment of, and early warning against, rock engineering hazards such as induced earthquakes and rock bursts.
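The transfer-learning step described above can be sketched as freezing the convolutional feature extractor of a waveform classifier trained at the field scale and re-training only the classification head on lab-scale data. The architecture, checkpoint name, and hyperparameters below are hypothetical placeholders, not the authors' model.

```python
import torch
import torch.nn as nn

class WaveformClassifier(nn.Module):
    """Toy 1D-CNN mapping a single waveform to crack-closure vs crack-opening logits."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):                         # x: (batch, 1, samples)
        return self.head(self.features(x).squeeze(-1))

model = WaveformClassifier()
# Hypothetical checkpoint from field-scale pre-training; the path is a placeholder.
# model.load_state_dict(torch.load("field_scale_pretrained.pt"))

for p in model.features.parameters():             # freeze the pre-trained extractor
    p.requires_grad = False

optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative fine-tuning step on random stand-in lab-scale waveforms.
waveforms = torch.randn(8, 1, 4096)
labels = torch.randint(0, 2, (8,))
loss = criterion(model(waveforms), labels)
loss.backward()
optimizer.step()
print(float(loss))
```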