We have proposed a methodology to assess the robustness of underground tunnels against potential failure. This involves developing vulnerability functions for various qualities of rock mass and static loading intensities. To account for these variations, we utilized a Monte Carlo simulation (MCS) technique coupled with the finite difference code FLAC3D to conduct 2700 numerical simulations of a horseshoe tunnel located within rock masses with different geological strength index (GSI) values and subjected to different states of static loading. To quantify the severity of damage within the rock mass, we selected one stress-based failure criterion, the brittle shear ratio (BSR), and one strain-based criterion, the plastic damage index (PDI). Based on these criteria, we then developed fragility curves. Additionally, we used mathematical approximation techniques to produce vulnerability functions that relate the probabilities of various damage states to loading intensities for different quality classes of blocky rock mass. The results indicated that the fragility curves we obtained could accurately depict the evolution of inner and outer shell damage around the tunnel. We have therefore provided engineers with a tool that can predict levels of damage associated with different failure mechanisms based on variations in rock mass quality and in situ stress state. Our method is a numerically developed, multivariate approach that can aid engineers in making informed decisions about the robustness of underground tunnels.
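As a rough illustration of how fragility curves of this kind can be fitted from Monte Carlo outcomes, the sketch below fits a lognormal fragility function to binary damage-exceedance results by maximum likelihood over a coarse parameter grid. This is a generic sketch, not the paper's actual procedure: the lognormal form, the grid search, and all parameter values are assumptions.

```python
import math, random

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def fragility(im, theta, beta):
    # lognormal fragility: P(damage state reached | intensity measure im)
    return norm_cdf(math.log(im / theta) / beta)

def fit_fragility(ims, exceeded, thetas, betas):
    """Maximum-likelihood fit of (theta, beta) over a coarse grid,
    given binary exceedance outcomes from Monte Carlo runs."""
    best, best_ll = None, -float("inf")
    for th in thetas:
        for b in betas:
            ll = 0.0
            for im, y in zip(ims, exceeded):
                p = min(max(fragility(im, th, b), 1e-9), 1 - 1e-9)
                ll += math.log(p) if y else math.log(1 - p)
            if ll > best_ll:
                best, best_ll = (th, b), ll
    return best

# synthetic Monte Carlo outcomes: damage more likely at higher load intensity
random.seed(0)
ims = [0.5 + 0.25 * i for i in range(40)] * 5          # loading intensities
true_theta, true_beta = 4.0, 0.5
exceeded = [random.random() < fragility(im, true_theta, true_beta) for im in ims]
theta, beta = fit_fragility(ims, exceeded,
                            thetas=[3.0 + 0.2 * k for k in range(11)],
                            betas=[0.3 + 0.1 * k for k in range(6)])
```

In practice each "outcome" would come from a FLAC3D run classified by the BSR or PDI threshold, and a separate curve would be fitted per damage state and rock mass class.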
BACKGROUND: Previous studies have validated the efficacy of both magnetic compression and surgical techniques in creating rabbit tracheoesophageal fistula (TEF) models. Magnetic compression achieves a 100% success rate but requires more time, while surgery, though less frequently successful, offers rapid model establishment and technical maturity in larger animal models.
AIM: To determine the optimal approach for rabbit disease modeling and refine the process.
METHODS: TEF models were created in 12 rabbits using both the modified magnetic compression technique and surgery. Comparisons of the time to model establishment, success rate, food and water intake, weight changes, activity levels, bronchoscopy findings, white blood cell counts, and biopsies were performed. In response to the failures encountered during modified magnetic compression modeling, we increased the sample size to 15 rabbit models and assessed the repeatability and stability of the models, comparing them with the original magnetic compression technique.
RESULTS: The modified magnetic compression technique achieved a 66.7% success rate, whereas the success rate of the surgical technique was 33.3%. Surviving surgical rabbits might not meet subsequent experimental requirements due to TEF-related inflammation. In the modified magnetic compression group, one rabbit died, possibly due to magnet corrosion, and another died from tracheal magnet obstruction. Similar events occurred during the second round of modified magnetic compression modeling, with one rabbit possibly succumbing to an aggravated lung infection. The operation time of the first round of modified magnetic compression was 3.2 ± 0.6 min, which was significantly reduced to 2.1 ± 0.4 min in the second round, compared to both the first round and the original technique.
CONCLUSION: The modified magnetic compression technique exhibits lower stress responses, a simple procedure, a high success rate, and lower modeling costs, making it a more appropriate choice for constructing TEF models in rabbits.
An internal defect meter is an instrument that detects internal inclusion defects in cold-rolled strip steel. The detection accuracy of the equipment can be evaluated based on the similarity of the multiple detection data obtained for the same steel coil. Based on a cosine similarity model and an eigenvalue matrix model, a comprehensive evaluation method that calculates the weighted average of similarity is proposed. Results show that the new method is consistent with, and can even replace, manual evaluation, enabling automatic evaluation of strip defect detection results.
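The cosine-similarity part of such an evaluation can be sketched as follows: pairwise cosine similarities between repeated detection passes over the same coil are combined into a weighted average score. The data layout, the equal default weights, and the pass vectors are illustrative assumptions, not the paper's actual data.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def weighted_similarity_score(passes, weights=None):
    """Weighted average of pairwise cosine similarities between
    repeated detection passes over the same coil."""
    pairs = [(i, j) for i in range(len(passes)) for j in range(i + 1, len(passes))]
    sims = [cosine_similarity(passes[i], passes[j]) for i, j in pairs]
    if weights is None:
        weights = [1.0] * len(sims)
    return sum(w * s for w, s in zip(weights, sims)) / sum(weights)

# three hypothetical detection passes (defect signal per coil segment)
passes = [[0.9, 0.1, 0.0, 0.4],
          [1.0, 0.1, 0.1, 0.5],
          [0.8, 0.0, 0.0, 0.4]]
score = weighted_similarity_score(passes)
```

A score close to 1 indicates that repeated passes agree, i.e. the instrument's detection is repeatable.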
To provide new insights into the development and utilization of Douchi artificial starters, three common strains (Aspergillus oryzae, Mucor racemosus, and Rhizopus oligosporus) were used to study their influence on the fermentation of Douchi. The results showed that the biogenic amine contents of the three types of Douchi were all within the safe range and far lower than those of traditionally fermented Douchi. Aspergillus-type Douchi produced more free amino acids than the other two types, and its umami taste was more prominent in sensory evaluation (P<0.01), while Mucor-type and Rhizopus-type Douchi produced more esters and pyrazines, giving a more abundant aroma, sauce, and Douchi flavor. According to the Pearson and PLS analysis results, sweetness was significantly negatively correlated with phenylalanine, cysteine, and acetic acid (P<0.05); bitterness was significantly negatively correlated with malic acid (P<0.05); sour taste was significantly positively correlated with citric acid and most free amino acids (P<0.05); and astringency was significantly negatively correlated with glucose (P<0.001). Thirteen volatile compounds, such as furfuryl alcohol, phenethyl alcohol, and benzaldehyde, accounted for the flavor differences among the three types of Douchi. This study provides a theoretical basis for the selection of starter strains for commercial Douchi production.
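The Pearson correlations between taste attributes and compound contents reported above can be computed as in this small sketch. The glucose and astringency values are invented for illustration (chosen to show a strong negative correlation, as the abstract reports), not the study's measurements.

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

# hypothetical samples: glucose content (g/100 g) vs. astringency score
glucose     = [5.2, 4.8, 3.9, 3.1, 2.5, 1.8]
astringency = [1.1, 1.3, 1.8, 2.2, 2.6, 3.0]
r = pearson(glucose, astringency)   # strongly negative, as reported
```

Significance (the P-values quoted in the abstract) would additionally require a t-test on r given the sample size.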
In a competitive digital age where data volumes are increasing with time, the ability to extract meaningful knowledge from high-dimensional data using machine learning (ML) and data mining (DM) techniques, and to make decisions based on the extracted knowledge, is becoming increasingly important in all business domains. Nevertheless, high-dimensional data remains a major challenge for classification algorithms due to its high computational cost and storage requirements. The 2016 Demographic and Health Survey of Ethiopia (EDHS 2016), the publicly available data source for this study, contains several features that may not be relevant to the prediction task. In this paper, we developed a hybrid multidimensional metrics framework for predictive modeling, covering both model performance evaluation and feature selection, to overcome the feature selection challenges and select the best model among those available in DM and ML. The proposed hybrid metrics were used to measure the efficiency of the predictive models. Experimental results show that the decision tree algorithm is the most efficient model. The higher score of HMM(m, r) = 0.47 indicates an overall significant model that encompasses almost all of the user's requirements, unlike classical metrics that use a single criterion to select the most appropriate model. On the other hand, the ANNs were found to be the most computationally intensive for our prediction task. Moreover, the type of data and the class size of the dataset (unbalanced data) have a significant impact on the efficiency of the model, especially on its computational cost, and can hamper the interpretability of the model's parameters. The efficiency of the predictive model could be further improved with other feature selection algorithms (especially hybrid metrics) that involve experts of the knowledge domain, as understanding of the business domain has a significant impact.
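The general idea of a hybrid metric, combining benefit criteria (accuracy, F1) with a cost criterion (training time) into one score per model, can be sketched as below. The exact form of the paper's HMM(m, r) is not given here; the min-max inversion of cost, the weights, and all model figures are assumptions for illustration.

```python
def hybrid_metric_scores(models, weights):
    """Combine several conflicting criteria into one score per model:
    benefit criteria (accuracy, f1) are used as-is; the cost criterion
    (training time) is min-max inverted so higher always means better."""
    times = [m["train_seconds"] for m in models.values()]
    t_min, t_max = min(times), max(times)
    scores = {}
    for name, m in models.items():
        cost_benefit = (t_max - m["train_seconds"]) / (t_max - t_min)
        scores[name] = (weights["accuracy"] * m["accuracy"]
                        + weights["f1"] * m["f1"]
                        + weights["cost"] * cost_benefit)
    return scores

# hypothetical results: the ANN is slightly more accurate but far costlier
models = {
    "decision_tree": {"accuracy": 0.86, "f1": 0.84, "train_seconds": 2.0},
    "ann":           {"accuracy": 0.88, "f1": 0.85, "train_seconds": 300.0},
    "naive_bayes":   {"accuracy": 0.79, "f1": 0.75, "train_seconds": 1.0},
}
scores = hybrid_metric_scores(models, {"accuracy": 0.4, "f1": 0.3, "cost": 0.3})
best = max(scores, key=scores.get)
```

Under these (assumed) weights the decision tree wins overall despite the ANN's marginally higher accuracy, mirroring the trade-off the abstract describes.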
Since the high penetration of renewable energy complicates the dynamic characteristics of the AC power electronic system (ACPES), it is essential to establish an accurate dynamic model to obtain its dynamic behavior and ensure the safe and stable operation of the system. However, because internal control details are unavailable or limited, the state-space modeling method cannot be applied, and the ACPES becomes a black-box dynamic system. Dynamic modeling based on a deep neural network can simulate the dynamic behavior using port data without access to internal control details. However, deep neural network modeling methods are rarely evaluated systematically. In practice, constructing a neural network involves selecting from massive data and numerous network structure parameters: different sample distributions make the performance of the trained network vary widely, and different structural hyperparameters imply different convergence times. Owing to the lack of systematic evaluation and targeted suggestions, neural network models with high precision and fast training cannot be obtained quickly and conveniently in practical engineering applications. To fill this gap, this paper systematically evaluates the deep neural network in terms of sample distribution and structural hyperparameter selection. The influence on modeling accuracy is analyzed in detail, and modeling suggestions are presented. Simulation results under multiple operating points verify the effectiveness of the proposed method.
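The black-box idea, fitting a small network purely to recorded port (terminal) data with no internal control details, can be illustrated with a minimal one-hidden-layer regression network trained by stochastic gradient descent. The synthetic sine "port response", the network size, and the learning rate are all assumptions; a real ACPES model would be deeper and trained on measured multi-port time series.

```python
import math, random

random.seed(1)

# synthetic "port measurements": input disturbance -> output response,
# standing in for terminal data recorded from a black-box system
X = [i / 20.0 for i in range(-20, 21)]
Y = [math.sin(1.5 * x) for x in X]

H = 8                                   # hidden units (a structural hyperparameter)
w1 = [random.uniform(-1, 1) for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return sum(w2[j] * h[j] for j in range(H)) + b2, h

def mse():
    return sum((forward(x)[0] - y) ** 2 for x, y in zip(X, Y)) / len(X)

loss_before = mse()
lr = 0.02
for _ in range(800):                    # plain per-sample gradient descent
    for x, y in zip(X, Y):
        out, h = forward(x)
        err = out - y
        for j in range(H):
            grad_h = err * w2[j] * (1 - h[j] ** 2)
            w2[j] -= lr * err * h[j]
            w1[j] -= lr * grad_h * x
            b1[j] -= lr * grad_h
        b2 -= lr * err
loss_after = mse()
```

The paper's point is that choices like the sample distribution of X and the hyperparameter H strongly affect `loss_after` and training time, which is why a systematic evaluation is needed.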
Global images of auroras obtained by cameras on spacecraft are a key tool for studying the near-Earth environment. However, the cameras are sensitive not only to auroral emissions produced by precipitating particles, but also to dayglow emissions produced by photoelectrons induced by sunlight. Nightglow emissions and scattered sunlight can also contribute to the background signal. To fully utilize such images in space science, background contamination must be removed to isolate the auroral signal. Here we outline a data-driven approach to modeling the background intensity in multiple images by formulating linear inverse problems based on B-splines and spherical harmonics. The approach is robust, flexible, and iteratively deselects outliers, such as auroral emissions. The final model is smooth across the terminator and accounts for slow temporal variations and large-scale asymmetries in the dayglow. We demonstrate the model using the three far ultraviolet cameras on the Imager for Magnetopause-to-Aurora Global Exploration (IMAGE) mission. The method can be applied to historical missions and is relevant for upcoming missions, such as the Solar wind Magnetosphere Ionosphere Link Explorer (SMILE) mission.
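The core loop, a linear least-squares background fit that iteratively deselects bright outliers, can be sketched in one dimension. Here a quadratic polynomial stands in for the paper's B-spline/spherical-harmonic basis, and the one-sided sigma-clipping rule, iteration count, and synthetic data are assumptions.

```python
import random, statistics

def design(xs, deg):
    return [[x ** k for k in range(deg + 1)] for x in xs]

def lstsq(A, y):
    """Solve the normal equations (A^T A) c = A^T y by Gaussian elimination."""
    n = len(A[0])
    M = [[sum(A[i][r] * A[i][c] for i in range(len(A))) for c in range(n)]
         + [sum(A[i][r] * y[i] for i in range(len(A)))] for r in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    coef = [0.0] * n
    for r in range(n - 1, -1, -1):
        coef[r] = (M[r][n] - sum(M[r][c] * coef[c] for c in range(r + 1, n))) / M[r][r]
    return coef

def robust_background(xs, ys, deg=2, n_iter=4, clip=2.0):
    """Fit a smooth background, iteratively deselecting positive outliers
    (e.g. auroral emissions) more than `clip` sigma above the model."""
    keep = list(range(len(xs)))
    for _ in range(n_iter):
        coef = lstsq(design([xs[i] for i in keep], deg), [ys[i] for i in keep])
        model = [sum(c * x ** k for k, c in enumerate(coef)) for x in xs]
        sigma = statistics.pstdev([ys[i] - model[i] for i in keep])
        keep = [i for i in range(len(xs)) if ys[i] - model[i] < clip * sigma]
    return coef, keep

random.seed(2)
xs = [i / 10.0 for i in range(30)]
ys = [10.0 + 5.0 * x - 3.0 * x * x + random.gauss(0.0, 0.1) for x in xs]
for i in (5, 15, 25):          # bright "auroral" contamination on top of dayglow
    ys[i] += 50.0
coef, keep = robust_background(xs, ys)
```

After a few iterations the contaminated pixels are excluded and the fit recovers the smooth background, which is then subtracted to isolate the auroral signal.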
Machine learning (ML) provides a new surrogate method for investigating groundwater flow dynamics in unsaturated soils. Traditional purely data-driven methods (e.g. the deep neural network, DNN) can provide rapid predictions, but they require sufficient on-site data for accurate training and lack interpretability with respect to the physical processes underlying the data. In this paper, we present a physics- and equality-constrained artificial neural network (PECANN) to derive unsaturated infiltration solutions from a small amount of initial and boundary data. PECANN takes the physics-informed neural network (PINN) as a foundation, encodes the physical laws of unsaturated infiltration (i.e. the Richards equation, RE) into the loss function, and uses the augmented Lagrangian method to constrain the learning of the RE solutions by imposing a stronger penalty on the initial and boundary conditions. Four unsaturated infiltration cases are designed to test the training performance of PECANN: one-dimensional (1D) steady-state unsaturated infiltration, 1D transient-state infiltration, two-dimensional (2D) transient-state infiltration, and 1D coupled unsaturated infiltration and deformation. The predictions of PECANN are compared with finite difference or analytical solutions. The results indicate that PECANN can accurately capture the variation of pressure head during unsaturated infiltration and offers higher precision and robustness than DNN and PINN. It is also shown that PECANN can achieve the same accuracy as the finite difference method with fewer initial and boundary training data. Additionally, we investigate the effect of PECANN's hyperparameters on solving the RE problem. PECANN provides an effective tool for simulating unsaturated infiltration.
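The augmented Lagrangian mechanism that distinguishes PECANN from a plain PINN can be shown on a toy problem: minimize an objective subject to an equality constraint (standing in for the initial/boundary conditions), alternating inner gradient descent on the augmented loss with outer multiplier updates. The toy objective, penalty parameter, and step sizes are assumptions; PECANN applies the same scheme to network weights and the RE residual.

```python
# toy analogue: minimize f(x, y) = x^2 + y^2  subject to  g(x, y) = x + y - 1 = 0
# (the constrained optimum is x = y = 0.5 with multiplier lam = -1)
def grad_step(x, y, lam, rho, lr=0.01):
    g = x + y - 1.0
    # gradient of the augmented Lagrangian L = f + lam*g + (rho/2)*g^2
    dx = 2 * x + lam + rho * g
    dy = 2 * y + lam + rho * g
    return x - lr * dx, y - lr * dy

x = y = 0.0
lam, rho = 0.0, 10.0
for outer in range(50):
    for _ in range(200):                 # inner loop: minimize the augmented loss
        x, y = grad_step(x, y, lam, rho)
    lam += rho * (x + y - 1.0)           # outer loop: multiplier update
```

Compared with a fixed penalty (as in a standard PINN loss), the multiplier update lets the constraint be enforced tightly without driving `rho` to extreme values, which is the property the paper exploits for initial and boundary conditions.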
Modern medicine is reliant on various medical imaging technologies for non-invasively observing patients' anatomy. However, the interpretation of medical images can be highly subjective and dependent on the expertise of clinicians. Moreover, some potentially useful quantitative information in medical images, especially that which is not visible to the naked eye, is often ignored during clinical practice. In contrast, radiomics performs high-throughput feature extraction from medical images, which enables quantitative analysis of medical images and prediction of various clinical endpoints. Studies have reported that radiomics exhibits promising performance in diagnosis and in predicting treatment responses and prognosis, demonstrating its potential to be a non-invasive auxiliary tool for personalized medicine. However, radiomics remains in a developmental phase, as numerous technical challenges have yet to be solved, especially in feature engineering and statistical modeling. In this review, we introduce the current utility of radiomics by summarizing research on its application in the diagnosis, prognosis, and prediction of treatment responses in patients with cancer. We focus on machine learning approaches: for feature extraction and selection during feature engineering, and for imbalanced datasets and multi-modality fusion during statistical modeling. Furthermore, we discuss the stability, reproducibility, and interpretability of features, and the generalizability and interpretability of models. Finally, we offer possible solutions to current challenges in radiomics research.
In order to further improve the utility of unmanned aerial vehicle (UAV) remote sensing for quickly and accurately monitoring the growth of winter wheat under film mulching, this study examined the treatments of ridge mulching, ridge-furrow full mulching, and flat cropping full mulching in winter wheat. Based on the fuzzy comprehensive evaluation (FCE) method, four agronomic parameters (leaf area index, above-ground biomass, plant height, and leaf chlorophyll content) were used to calculate a comprehensive growth evaluation index (CGEI) for the winter wheat, and 14 visible and near-infrared spectral indices were calculated using spectral purification technology to process the remote-sensing image data of winter wheat obtained by a multispectral UAV. Four machine learning algorithms, partial least squares, support vector machines, random forests, and artificial neural networks (ANN), were used to build the winter wheat growth monitoring model under film mulching, and accuracy evaluation and mapping of the spatial and temporal distribution of winter wheat growth status were carried out. The results showed that the CGEI of winter wheat under film mulching constructed using the FCE method could objectively and comprehensively evaluate the crop growth status. The accuracy of remote-sensing inversion of the CGEI based on the ANN model was higher than for the individual agronomic parameters, with a coefficient of determination of 0.75, a root mean square error of 8.40, and a mean absolute error of 6.53. Spectral purification could eliminate the interference of background effects caused by mulching and soil, effectively improving the accuracy of the remote-sensing inversion of winter wheat under film mulching, with the best inversion effect achieved on the ridge-furrow full mulching area after spectral purification. The results of this study provide a theoretical reference for the use of UAV remote sensing to monitor the growth status of winter wheat under film mulching.
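A composite index built from several agronomic parameters can be sketched as a weighted sum of membership degrees, one simple flavor of fuzzy comprehensive evaluation. The linear membership function, the parameter ranges, and the weights below are illustrative assumptions, not the paper's calibrated values.

```python
def membership(v, lo, hi):
    # simple linear membership degree in [0, 1]
    return max(0.0, min(1.0, (v - lo) / (hi - lo)))

def cgei(sample, ranges, weights):
    """Comprehensive growth evaluation index: weighted sum of each
    agronomic parameter's membership degree."""
    return sum(w * membership(sample[k], *ranges[k]) for k, w in weights.items())

# hypothetical ranges and weights for the four agronomic parameters
ranges = {"lai": (0.5, 6.0), "biomass": (1.0, 15.0),
          "height": (20.0, 90.0), "chlorophyll": (30.0, 65.0)}
weights = {"lai": 0.3, "biomass": 0.3, "height": 0.2, "chlorophyll": 0.2}

good = {"lai": 5.0, "biomass": 12.0, "height": 80.0, "chlorophyll": 60.0}
poor = {"lai": 1.0, "biomass": 3.0, "height": 35.0, "chlorophyll": 38.0}
score_good = cgei(good, ranges, weights)
score_poor = cgei(poor, ranges, weights)
```

The remote-sensing models in the paper then invert this index from spectral indices instead of measuring the four parameters directly.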
Rock fragmentation plays a critical role in rock avalanches, yet conventional approaches such as classical granular flow models or the bonded particle model have limitations in accurately characterizing the progressive disintegration and kinematics of multi-deformable rock blocks during rockslides. The present study proposes a discrete-continuous numerical model, based on a cohesive zone model, to explicitly incorporate the progressive fragmentation and intricate interparticle interactions inherent in rockslides. Breakable rock granular assemblies are released along an inclined plane and flow onto a horizontal plane. The numerical scenarios are established to incorporate variations in slope angle, initial height, friction coefficient, and particle number. The evolutions of fragmentation, kinematic, runout, and depositional characteristics are quantitatively analyzed and compared with experimental and field data. A positive linear relationship between the equivalent friction coefficient and the apparent friction coefficient is identified. In general, the granular mass predominantly exhibits characteristics of a dense granular flow, with the Savage number exhibiting a decreasing trend as the volume of mass increases. The process of particle breakage gradually occurs in a bottom-up manner, leading to a significant increase in the angular velocities of the rock blocks with increasing depth. The simulation results reproduce the field observations of inverse grading and source stratigraphy preservation in the deposit. We propose a disintegration index that incorporates factors such as drop height, rock mass volume, and rock strength. Our findings demonstrate a consistent linear relationship between this index and the fragmentation degree in all tested scenarios.
Natural slopes usually display complicated exposed rock surfaces that are characterized by complex and substantial terrain undulation and ubiquitous undesirable phenomena such as vegetation cover and rockfalls. This study presents a systematic outcrop study of fracture pattern variations in a complicated rock slope, together with a qualitative and quantitative study of the impact of these complex phenomena on three-dimensional (3D) discrete fracture network (DFN) modeling. As studies of outcrop fracture patterns have so far focused on local variations, we put forward a statistical analysis of global variations. The entire outcrop is partitioned into several subzones, and the subzone-scale variability of fracture geometric properties is analyzed (including the orientation, the density, and the trace length). The results reveal significant variations in fracture characteristics (such as the concentrative degree, the average orientation, the density, and the trace length) among different subzones. Moreover, the density of the fracture set that is approximately parallel to the slope surface exhibits a notably higher value than the other fracture sets across all subzones. To improve the accuracy of the DFN modeling, the effects of three common phenomena resulting from vegetation and rockfalls are qualitatively analyzed, and the corresponding quantitative data processing solutions are proposed. Subsequently, the 3D fracture geometric parameters are determined for different areas of the high-steep rock slope in terms of the subzone dimensions. The results show significant variations in the same set of 3D fracture parameters across different regions, with density differing by up to tenfold and mean trace length exhibiting differences of 3 to 4 times. The study results present precise geological structural information, improve modeling accuracy, and provide practical solutions for addressing complex outcrop issues.
As the main link of ground engineering, crude oil gathering and transportation systems consume large amounts of energy and have complex structures, so it is necessary to establish an energy efficiency evaluation system for them and identify the energy efficiency gaps. In this paper, the energy efficiency evaluation system of the crude oil gathering and transportation system of an oilfield in western China is established. Combined with a big data analysis method, a GA-BP neural network is used to establish an energy efficiency index prediction model for crude oil gathering and transportation systems, predicting the comprehensive energy consumption, gas consumption, power consumption, energy utilization rate, heat utilization rate, and power utilization rate. Considering the efficiency and unit consumption indices of the crude oil gathering and transportation system, the energy efficiency evaluation system is established based on a game-theoretic combined weighting method and the TOPSIS evaluation method: the subjective weight is determined by the triangular fuzzy analytic hierarchy process, the objective weight is determined by the entropy weight method, and the game-theoretic combined weight unites subjectivity with objectivity to comprehensively evaluate the energy efficiency of crude oil gathering and transportation systems and their subsystems. Finally, the weak links in energy utilization are identified, and energy conservation and consumption reduction are improved. The above research provides technical support for the green, efficient, and intelligent development of crude oil gathering and transportation systems.
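Two of the building blocks named above, entropy-based objective weighting and TOPSIS ranking, can be sketched as follows. The three "stations" and their indicator values are invented for illustration; the paper additionally combines these objective weights with subjective (triangular fuzzy AHP) weights via game theory, which is omitted here.

```python
import math

def entropy_weights(matrix):
    """Objective weights from the entropy method: criteria whose values
    vary more across alternatives carry more information, hence more weight."""
    m, n = len(matrix), len(matrix[0])
    raw = []
    for j in range(n):
        col = [row[j] for row in matrix]
        s = sum(col)
        p = [v / s for v in col]
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        raw.append(1 - e)
    total = sum(raw)
    return [w / total for w in raw]

def topsis(matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution."""
    m, n = len(matrix), len(matrix[0])
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    V = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    ideal = [max(V[i][j] for i in range(m)) if benefit[j]
             else min(V[i][j] for i in range(m)) for j in range(n)]
    worst = [min(V[i][j] for i in range(m)) if benefit[j]
             else max(V[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        dp = math.sqrt(sum((V[i][j] - ideal[j]) ** 2 for j in range(n)))
        dm = math.sqrt(sum((V[i][j] - worst[j]) ** 2 for j in range(n)))
        scores.append(dm / (dp + dm))
    return scores

# hypothetical stations: [energy utilization %, heat utilization %, unit consumption]
matrix = [[62.0, 78.0, 4.1],
          [55.0, 70.0, 5.2],
          [70.0, 85.0, 3.5]]
w = entropy_weights(matrix)
scores = topsis(matrix, w, benefit=[True, True, False])
```

Stations with low closeness scores are exactly the "weak links" the evaluation is meant to surface.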
Deterministic compartment models (CMs) and stochastic models, including stochastic CMs and agent-based models, are widely utilized in epidemic modeling. However, the relationship between CMs and their corresponding stochastic models is not well understood. The present study aimed to address this gap by conducting a comparative study using the susceptible, exposed, infectious, and recovered (SEIR) model and its extended CMs from the coronavirus disease 2019 modeling literature. We demonstrated the equivalence of the numerical solution of CMs using the Euler scheme and their stochastic counterparts through theoretical analysis and simulations. Based on this equivalence, we proposed an efficient model calibration method that could replicate the exact solution of CMs in the corresponding stochastic models through parameter adjustment. This advancement in calibration techniques enhances the accuracy of stochastic modeling in capturing the dynamics of epidemics. However, it should be noted that discrete-time stochastic models cannot perfectly reproduce the exact solution of continuous-time CMs. Additionally, we proposed a new stochastic compartment and agent mixed model as an alternative to agent-based models for large-scale population simulations with a limited number of agents. This model offers a balance between computational efficiency and accuracy. The results of this research contribute to the comparison and unification of deterministic CMs and stochastic models in epidemic modeling, and have implications for the development of hybrid models that integrate the strengths of both frameworks. Overall, the present study provides valuable epidemic modeling techniques and practical applications for understanding and controlling the spread of infectious diseases.
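The correspondence between an Euler-discretized SEIR model and a stochastic counterpart can be sketched as below: each stochastic transition count is a binomial draw whose expectation equals the deterministic Euler flow, so the ensemble mean tracks the deterministic trajectory. The parameter values and population size are illustrative assumptions, not taken from the paper.

```python
import random

def seir_euler(beta, sigma, gamma, N, days, I0):
    """Deterministic SEIR integrated with the forward Euler scheme (dt = 1 day)."""
    S, E, I, R = float(N - I0), 0.0, float(I0), 0.0
    for _ in range(days):
        new_e = beta * S * I / N     # S -> E flow
        new_i = sigma * E            # E -> I flow
        new_r = gamma * I            # I -> R flow
        S, E, I, R = S - new_e, E + new_e - new_i, I + new_i - new_r, R + new_r
    return S, E, I, R

def binom(n, p, rng):
    return sum(1 for _ in range(n) if rng.random() < p)

def seir_stochastic(beta, sigma, gamma, N, days, I0, rng):
    """Discrete-time stochastic counterpart: each transition count is a binomial
    draw whose expectation equals the corresponding Euler-scheme flow."""
    S, E, I, R = N - I0, 0, I0, 0
    for _ in range(days):
        new_e = binom(S, min(1.0, beta * I / N), rng)
        new_i = binom(E, sigma, rng)
        new_r = binom(I, gamma, rng)
        S, E, I, R = S - new_e, E + new_e - new_i, I + new_i - new_r, R + new_r
    return S, E, I, R

beta, sigma, gamma, N, days, I0 = 0.5, 0.25, 0.2, 500, 60, 10
det = seir_euler(beta, sigma, gamma, N, days, I0)
rng = random.Random(3)
runs = [seir_stochastic(beta, sigma, gamma, N, days, I0, rng) for _ in range(50)]
mean_R = sum(r[3] for r in runs) / len(runs)
```

The ensemble mean of final recoveries lands close to the deterministic value, while individual runs fluctuate, which is the equivalence-in-expectation the study formalizes.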
The proliferation of intelligent, connected Internet of Things (IoT) devices facilitates data collection. However, task workers may be reluctant to participate in data collection due to privacy concerns, and task requesters may be concerned about the validity of the collected data. Hence, it is vital to evaluate the quality of the data collected by task workers while protecting privacy in spatial crowdsourcing (SC) data collection tasks with IoT. To this end, this paper proposes a privacy-preserving data reliability evaluation for SC in IoT, named PARE. First, we design a data uploading format using blockchain and the Paillier homomorphic cryptosystem, providing unchangeable and traceable data while overcoming privacy concerns. Second, based on the uploaded data, we propose a method to determine the approximate correct value region without knowing the exact values. Finally, we offer a data filtering mechanism based on the Paillier cryptosystem using this value region. The evaluation and analysis results show that PARE outperforms the existing solution in terms of performance and privacy protection.
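The property PARE relies on, that Paillier ciphertexts can be aggregated without decrypting individual values, is additive homomorphism: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. The toy implementation below uses deliberately tiny primes for illustration only (a real deployment needs 2048-bit keys and a vetted library), and the key values are assumptions.

```python
import math, random

# minimal textbook Paillier (toy key size -- illustration only, not secure)
def keygen(p, q):
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1
    mu = pow(lam, -1, n)   # valid since L(g^lam mod n^2) = lam when g = n + 1
    return (n, g), (lam, mu)

def encrypt(pub, m, rng=random):
    n, g = pub
    while True:
        r = rng.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    L = (pow(c, lam, n * n) - 1) // n
    return (L * mu) % n

pub, priv = keygen(1009, 1013)
a, b = 123, 456
# homomorphic addition: ciphertext product decrypts to the plaintext sum
c_sum = (encrypt(pub, a) * encrypt(pub, b)) % (pub[0] ** 2)
```

This is what lets an evaluator compute aggregates (sums, and hence region checks) over workers' encrypted submissions without ever seeing the raw values. (Requires Python 3.9+ for `math.lcm` and modular-inverse `pow`.)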
Landslide hazard susceptibility evaluation takes on critical significance in early warning and in disaster prevention and reduction. To solve the problems of poor effectiveness of landslide data and the complex calculation of weights for multiple evaluation factors in existing landslide susceptibility evaluation models, this study proposes a landslide hazard susceptibility evaluation method combining SBAS-InSAR (Small Baseline Subsets-Interferometric Synthetic Aperture Radar) and the SSA-BP (Sparrow Search Algorithm-Back Propagation) neural network algorithm. SBAS-InSAR technology is adopted to identify potential landslide hazards in the study area and update the landslide hazard inventory, and 11 evaluation factors are chosen for constructing the SSA-BP model for training and validation. The Baihetan Reservoir area is selected as a case study for validation. As indicated by the results, the application of SBAS-InSAR technology, combined with both ascending and descending orbit data, effectively addresses the incomplete identification of landslide hazards caused by geometric distortion of single-orbit SAR data (e.g., shadow, layover, and foreshortening) in deep canyon areas, thereby enabling the acquisition of up-to-date landslide hazard data. Moreover, in comparison with the conventional BP (Back Propagation) algorithm, the accuracy of the model constructed by the SSA-BP algorithm exhibits a significant increase, with the mean squared error and mean absolute error reduced by 0.0142 and 0.0607, respectively. Additionally, during susceptibility evaluation, the SSA-BP model effectively circumvents the considerable manual intervention involved in calculating the weights of evaluation factors. The area under the curve of this model reaches 0.909, surpassing BP (0.835), random forest (0.792), and the information value method (0.699). The risk of landslide occurrence in the Baihetan Reservoir area is positively correlated with slope, surface temperature, and deformation rate, while it is negatively correlated with fault distance and normalized difference vegetation index. Geological lithology exerts minimal influence on the occurrence of landslides, with the risk being low in forest land and high in grassland. The method proposed in this study provides a useful reference for disaster prevention and mitigation departments performing landslide hazard susceptibility evaluations in deep canyon areas under complex geological conditions.
First, we propose a cross-domain authentication architecture based on a trust evaluation mechanism, including registration, certificate issuance, and cross-domain authentication processes. A direct trust evaluation mechanism based on a time decay factor is proposed, taking into account the influence of historical interaction records: we apply the time attenuation factor as a weight to each historical interaction record to obtain updated historical record data, and we draw on the beta distribution to enhance the flexibility and adaptability of the direct trust assessment model, so as to better capture time trends in the historical record. Then we propose an autoencoder-based trust clustering algorithm. Feature extraction is performed with autoencoders, and Kullback-Leibler (KL) divergence is used to calculate the reconstruction error. When constructing the convolutional autoencoder, we introduce convolutional neural networks to improve training efficiency and add sparsity constraints to the hidden layer, with the sparse penalty term in the loss function measured through the KL divergence. Trust clustering is then performed with the density-based spatial clustering of applications with noise (DBSCAN) algorithm. During clustering, edge nodes have a variety of trust-related attribute characteristics; we assign different attribute weights according to the relative importance of each attribute, where a larger weight means the attribute contributes more to the distance calculation. Finally, we introduce adaptive weights to calculate a comprehensive trust evaluation. Simulation experiments show that our trust evaluation mechanism has excellent reliability and accuracy.
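The beta-distribution direct trust with time decay can be sketched as follows: satisfactory and unsatisfactory interactions increment the Beta parameters alpha and beta, each weighted by an exponential decay in record age, and trust is the posterior mean alpha/(alpha + beta). The exponential decay form, the decay rate, and the sample histories are assumptions for illustration.

```python
import math

def direct_trust(interactions, now, decay_rate=0.1):
    """Beta-based direct trust with exponential time decay.
    `interactions`: list of (timestamp, outcome), outcome 1 = satisfactory."""
    alpha = beta = 1.0                          # uniform Beta(1, 1) prior
    for t, ok in interactions:
        w = math.exp(-decay_rate * (now - t))   # older records count less
        if ok:
            alpha += w
        else:
            beta += w
    return alpha / (alpha + beta)

# same mix of outcomes, opposite ordering in time
recent_good = [(0, 0), (1, 0), (8, 1), (9, 1), (10, 1)]
old_good    = [(0, 1), (1, 1), (2, 1), (9, 0), (10, 0)]
t_now = 10
trust_recent = direct_trust(recent_good, t_now)
trust_old = direct_trust(old_good, t_now)
```

Because recent behavior dominates, a node that improved over time scores above 0.5 while one that degraded scores below it, which is the time-trend sensitivity the decay factor is meant to provide.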
Ethylene glycol (EG) plays a pivotal role as a primary raw material in the polyester industry, and the syngas-to-EG route has become a significant technical route in production. Carbon monoxide (CO) gas-phase catalytic coupling to synthesize dimethyl oxalate (DMO) is a crucial process in the syngas-to-EG route, whereby the composition at the reactor outlet influences the ultimate quality of the EG product and the energy consumption of the subsequent separation process. However, measuring product quality in real time or establishing accurate dynamic mechanism models is challenging. To effectively model the DMO synthesis process, this study proposes a hybrid modeling strategy that integrates process mechanisms and data-driven approaches. The CO gas-phase catalytic coupling mechanism model is developed based on intrinsic kinetics and material balance, while a long short-term memory (LSTM) neural network is employed to predict the macroscopic reaction rate by leveraging temporal relationships derived from archived measurements. The proposed model is trained in a semi-supervised manner to accommodate limited-label data scenarios, leveraging historical data. By integrating these predictions with the mechanism model, the hybrid modeling approach provides reliable and interpretable forecasts of mass fractions. Empirical investigations validate the superiority of the proposed hybrid modeling approach over conventional data-driven models (DDMs) and other hybrid modeling techniques.
1) Background: Rapid and accurate diagnostic testing for case identification, quarantine, and contact tracing is essential for managing the COVID-19 pandemic. Rapid antigen detection tests are available; however, it is important to evaluate their performance before use. We tested a rapid antigen detection test for SARS-CoV-2 based on immunochromatography (Boson Biotech SARS-CoV-2 Ag Test, Xiamen Boson Biotech Co., Ltd., China) and compared the results with real-time reverse transcriptase-polymerase chain reaction (RT-PCR) results (gold standard); 2) Methods: From November 2021 to December 2021, samples were collected from symptomatic patients and asymptomatic individuals referred for testing in a hospital during the second pandemic wave in Gabon. All participants attended "CTA Angondjé", a field hospital set up as part of the management of COVID-19 in Gabon. Two nasopharyngeal swabs were collected from each patient, one for the Ag test and the other for RT-PCR; 3) Results: A total of 300 samples were collected from 189 symptomatic and 111 asymptomatic individuals. The sensitivity and specificity of the antigen test were 82.5% [95% CI 73.8 - 89.3] and 97.9% [95% CI 92.2 - 98.2], respectively, and the diagnostic accuracy was 84.4% (95% CI: 79.8 - 88.3%). The antigen test was more likely to be positive for samples with RT-PCR Ct values ≤ 32, with a sensitivity of 89.8%; 4) Conclusions: The Boson Biotech SARS-CoV-2 Ag Test has good sensitivity and can detect SARS-CoV-2 infection, especially among symptomatic individuals with high viral loads (low Ct values). This test could be incorporated into efficient testing algorithms as an alternative to PCR to decrease diagnostic delays and curb viral transmission.
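The reported sensitivity, specificity, and accuracy follow directly from a 2x2 confusion table against the RT-PCR gold standard. A minimal illustration of the computation (the counts used below are hypothetical, not the study's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and accuracy from a 2x2 confusion table,
    with RT-PCR as the reference standard."""
    sensitivity = tp / (tp + fn)                # true-positive rate
    specificity = tn / (tn + fp)                # true-negative rate
    accuracy = (tp + tn) / (tp + fp + fn + tn)  # overall agreement
    return sensitivity, specificity, accuracy
```

For example, 90 true positives, 10 false positives, 10 false negatives, and 90 true negatives give 0.9 for all three metrics.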
Metal-ion batteries (MIBs), including alkali metal-ion (Li^(+), Na^(+), and K^(+)), multi-valent metal-ion (Zn^(2+), Mg^(2+), and Al^(3+)), metal-air, and metal-sulfur batteries, play an indispensable role in electrochemical energy storage. However, the performance of MIBs is significantly influenced by numerous variables, resulting in multi-dimensional and long-term challenges in the field of battery research and performance enhancement. Machine learning (ML), with its capability to solve intricate tasks and perform robust data processing, is now catalyzing a revolutionary transformation in the development of MIB materials and devices. In this review, we summarize the utilization of ML algorithms that have expedited research on MIBs over the past five years. We present an extensive overview of existing algorithms, elucidating their details, advantages, and limitations in various applications, which encompass electrode screening, material property prediction, electrolyte formulation design, electrode material characterization, manufacturing parameter optimization, and real-time battery status monitoring. Finally, we propose potential solutions and future directions for the application of ML in advancing MIB development.
Funding: Supported by a grant from the Natural Sciences and Engineering Research Council of Canada (NSERC) (Grant No. CRDPJ 469057e14).
Abstract: We propose a methodology to assess the robustness of underground tunnels against potential failure. This involves developing vulnerability functions for various rock mass qualities and static loading intensities. To account for these variations, we used a Monte Carlo simulation (MCS) technique coupled with the finite difference code FLAC^(3D) to conduct 2700 numerical simulations of a horseshoe tunnel located within rock masses with different geological strength index (GSI) values and subjected to different states of static loading. To quantify the severity of damage within the rock mass, we selected one stress-based failure criterion (the brittle shear ratio, BSR) and one strain-based criterion (the plastic damage index, PDI). Based on these criteria, we then developed fragility curves. Additionally, we used mathematical approximation techniques to produce vulnerability functions that relate the probabilities of various damage states to loading intensities for different quality classes of blocky rock mass. The results indicate that the obtained fragility curves accurately depict the evolution of inner- and outer-shell damage around the tunnel. We thus provide engineers with a tool that predicts levels of damage associated with different failure mechanisms based on variations in rock mass quality and in situ stress state. Our method is a numerically developed, multivariate approach that can aid engineers in making informed decisions about the robustness of underground tunnels.
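Fragility curves of this kind are commonly parameterized as lognormal cumulative distribution functions of the loading intensity. A minimal sketch of that standard form (the lognormal parameterization and parameter names are illustrative assumptions, not the paper's exact fit):

```python
import math

def fragility(im, theta, beta):
    """P(damage state is reached or exceeded | loading intensity im),
    modeled as a lognormal CDF.
    theta: median capacity (intensity at 50% exceedance probability);
    beta:  lognormal standard deviation controlling the curve's steepness."""
    return 0.5 * (1.0 + math.erf(math.log(im / theta) / (beta * math.sqrt(2.0))))
```

By construction the exceedance probability is 0.5 at the median capacity and increases monotonically with the loading intensity.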
Funding: Independent Scientific Research Project for Graduate Students of Beijing University of Chinese Medicine (2023), No. ZJKT2023020.
Abstract: BACKGROUND Previous studies have validated the efficacy of both magnetic compression and surgical techniques in creating rabbit tracheoesophageal fistula (TEF) models. Magnetic compression achieves a 100% success rate but requires more time, while surgery, though less frequently successful, offers rapid model establishment and technical maturity in larger animal models. AIM To determine the optimal approach for rabbit disease modeling and refine the process. METHODS TEF models were created in 12 rabbits using both the modified magnetic compression technique and surgery. The time to model establishment, success rate, food and water intake, weight changes, activity levels, bronchoscopy findings, white blood cell counts, and biopsies were compared. In response to the failures encountered during modified magnetic compression modeling, we increased the sample size to 15 rabbit models and assessed the repeatability and stability of the models, comparing them with the original magnetic compression technique. RESULTS The modified magnetic compression technique achieved a 66.7% success rate, whereas the success rate of the surgical technique was 33.3%. Surviving surgical rabbits might not meet subsequent experimental requirements due to TEF-related inflammation. In the modified magnetic compression group, one rabbit died, possibly due to magnet corrosion, and another died from tracheal magnet obstruction. Similar events occurred during the second round of modified magnetic compression modeling, with one rabbit possibly succumbing to an aggravated lung infection. The operation time of the first round of modified magnetic compression was 3.2 ± 0.6 min, which was significantly reduced to 2.1 ± 0.4 min in the second round, compared to both the first round and the original technique. CONCLUSION The modified magnetic compression technique exhibits lower stress responses, a simple procedure, a high success rate, and lower modeling costs, making it a more appropriate choice for constructing TEF models in rabbits.
Abstract: An internal defect meter is an instrument for detecting internal inclusion defects in cold-rolled strip steel. The detection accuracy of the equipment can be evaluated from the similarity of the multiple detection data obtained for the same steel coil. Based on a cosine similarity model and an eigenvalue matrix model, a comprehensive evaluation method that calculates a weighted average of the similarities is proposed. Results show that the new method is consistent with, and can even replace, manual evaluation, enabling automatic evaluation of strip defect detection results.
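The two building blocks described above can be sketched as follows; the function names and the simple weighted average are illustrative assumptions, not the paper's exact formulation:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two detection-data vectors (1 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def weighted_mean_similarity(similarities, weights):
    """Comprehensive score: weighted average of per-pair similarity values."""
    return sum(s * w for s, w in zip(similarities, weights)) / sum(weights)
```

Identical detection vectors score 1, orthogonal ones score 0, and the weighted mean aggregates several pairwise comparisons into one evaluation number.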
Funding: Supported by the Special Key Project of Technological Innovation and Application Development in Yongchuan District, Chongqing (2021yc-cxfz20002); the special funds of the central government for guiding local science and technology development; and the funds for the platform projects of professional technology innovation (CSTC2018ZYCXPT0006).
Abstract: To provide new insights into the development and utilization of Douchi artificial starters, three common strains (Aspergillus oryzae, Mucor racemosus, and Rhizopus oligosporus) were used to study their influence on the fermentation of Douchi. The results showed that the biogenic amine contents of the three types of Douchi were all within the safe range and far lower than those of traditionally fermented Douchi. Aspergillus-type Douchi produced more free amino acids than the other two types, and its umami taste was more prominent in sensory evaluation (P<0.01), while Mucor-type and Rhizopus-type Douchi produced more esters and pyrazines, making the aroma, sauce, and Douchi flavor more abundant. According to the Pearson and PLS analyses, sweetness was significantly negatively correlated with phenylalanine, cysteine, and acetic acid (P<0.05), bitterness was significantly negatively correlated with malic acid (P<0.05), sour taste was significantly positively correlated with citric acid and most free amino acids (P<0.05), and astringency was significantly negatively correlated with glucose (P<0.001). Thirteen volatile compounds, such as furfuryl alcohol, phenethyl alcohol, and benzaldehyde, accounted for the flavor differences among the three types of Douchi. This study provides a theoretical basis for the selection of starter strains for commercial Douchi production.
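The Pearson correlations cited above use the standard sample formula. A minimal sketch (illustrative only, not the study's statistical pipeline):

```python
import math

def pearson(x, y):
    """Sample Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Perfectly linear increasing data gives +1 and perfectly linear decreasing data gives -1.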
Abstract: In a competitive digital age where data volumes are increasing over time, the ability to extract meaningful knowledge from high-dimensional data using machine learning (ML) and data mining (DM) techniques, and to make decisions based on the extracted knowledge, is becoming increasingly important in all business domains. Nevertheless, high-dimensional data remains a major challenge for classification algorithms due to its high computational cost and storage requirements. The 2016 Demographic and Health Survey of Ethiopia (EDHS 2016), the publicly available data source used for this study, contains several features that may not be relevant to the prediction task. In this paper, we developed a hybrid multidimensional metrics framework for predictive modeling, covering both model performance evaluation and feature selection, to overcome the feature selection challenges and select the best model among the models available in DM and ML. The proposed hybrid metrics were used to measure the efficiency of the predictive models. Experimental results show that the decision tree algorithm is the most efficient model. The high score of HMM (m, r) = 0.47 indicates an overall significant model that encompasses almost all of the user's requirements, unlike classical metrics that use a single criterion to select the most appropriate model. On the other hand, the ANNs were found to be the most computationally intensive for our prediction task. Moreover, the type of data and the class size of the dataset (unbalanced data) have a significant impact on the efficiency of the model, especially on the computational cost, and can hamper the interpretability of the model parameters. The efficiency of the predictive model could be further improved with other feature selection algorithms (especially hybrid metrics) that involve domain experts, as understanding of the business domain has a significant impact.
Funding: Supported in part by the Science Search Foundation of Liaoning Educational Department.
Abstract: Since the high penetration of renewable energy complicates the dynamic characteristics of the AC power electronic system (ACPES), it is essential to establish an accurate dynamic model of its dynamic behavior to ensure the safe and stable operation of the system. However, because internal control details are unavailable or limited, the state-space modeling method cannot be applied, and the ACPES becomes a black-box dynamic system. A dynamic modeling method based on a deep neural network can simulate the dynamic behavior using port data without internal control details. However, deep neural network modeling methods are rarely evaluated systematically. In practice, constructing a neural network involves selecting from massive data and various network structure parameters: different sample distributions make the trained network perform quite differently, and different structural hyperparameters imply different convergence times. Due to the lack of systematic evaluation and targeted suggestions, neural network modeling with high precision and fast training cannot be realized quickly and conveniently in practical engineering applications. To fill this gap, this paper systematically evaluates deep neural network models with respect to sample distribution and structural hyperparameter selection. The influence on modeling accuracy is analyzed in detail, and modeling suggestions are presented. Simulation results under multiple operating points verify the effectiveness of the proposed method.
Funding: Supported by the Research Council of Norway under contracts 223252/F50 and 300844/F50, and by the Trond Mohn Foundation.
Abstract: Global images of auroras obtained by cameras on spacecraft are a key tool for studying the near-Earth environment. However, the cameras are sensitive not only to auroral emissions produced by precipitating particles, but also to dayglow emissions produced by photoelectrons induced by sunlight. Nightglow emissions and scattered sunlight can contribute to the background signal. To fully utilize such images in space science, background contamination must be removed to isolate the auroral signal. Here we outline a data-driven approach to modeling the background intensity in multiple images by formulating linear inverse problems based on B-splines and spherical harmonics. The approach is robust, flexible, and iteratively deselects outliers, such as auroral emissions. The final model is smooth across the terminator and accounts for slow temporal variations and large-scale asymmetries in the dayglow. We demonstrate the model using the three far-ultraviolet cameras on the Imager for Magnetopause-to-Aurora Global Exploration (IMAGE) mission. The method can be applied to historical missions and is relevant for upcoming missions, such as the Solar wind Magnetosphere Ionosphere Link Explorer (SMILE) mission.
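The iterative outlier deselection can be sketched with a simple robust fit: fit a background model, discard points that sit far above the fit (candidate auroral emission adds intensity on top of the background), and refit. The linear model and the 2-sigma threshold below are deliberate simplifications of the paper's B-spline/spherical-harmonic formulation:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a + b*x (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def robust_background(xs, ys, n_iter=3, k=2.0):
    """Fit a background, iteratively deselecting bright outliers
    (residual > k * RMS above the fit), then refit on the kept points."""
    keep = list(range(len(xs)))
    for _ in range(n_iter):
        a, b = fit_line([xs[i] for i in keep], [ys[i] for i in keep])
        resid = [(i, ys[i] - (a + b * xs[i])) for i in keep]
        sigma = (sum(r * r for _, r in resid) / len(resid)) ** 0.5
        new_keep = [i for i, r in resid if r <= k * sigma]
        if new_keep:                       # never discard everything
            keep = new_keep
    return fit_line([xs[i] for i in keep], [ys[i] for i in keep])
```

A single bright contaminated sample is rejected after the first pass, and the refit recovers the underlying background coefficients.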
Funding: Supported by the Science and Technology Innovation Program of Hunan Province (Grant No. 2023RC1017), the Hunan Provincial Postgraduate Research and Innovation Project (Grant No. CX20220109), and the National Natural Science Foundation of China Youth Fund (Grant No. 52208378).
Abstract: Machine learning (ML) provides a new surrogate method for investigating groundwater flow dynamics in unsaturated soils. Traditional purely data-driven methods (e.g. the deep neural network, DNN) can provide rapid predictions, but they require sufficient on-site data for accurate training and lack interpretability of the physical processes within the data. In this paper, we present a physics- and equality-constrained artificial neural network (PECANN) to derive unsaturated infiltration solutions from a small amount of initial and boundary data. PECANN takes the physics-informed neural network (PINN) as a foundation, encodes the physical laws of unsaturated infiltration (i.e. the Richards equation, RE) into the loss function, and uses the augmented Lagrangian method to constrain the learning process of the RE solutions by adding a stronger penalty for the initial and boundary conditions. Four unsaturated infiltration cases are designed to test the training performance of PECANN: one-dimensional (1D) steady-state unsaturated infiltration, 1D transient-state infiltration, two-dimensional (2D) transient-state infiltration, and 1D coupled unsaturated infiltration and deformation. The predicted results of PECANN are compared with finite difference solutions or analytical solutions. The results indicate that PECANN accurately captures the variations of pressure head during unsaturated infiltration and exhibits higher precision and robustness than DNN and PINN. It is also revealed that PECANN can achieve the same accuracy as the finite difference method with fewer initial and boundary training data. Additionally, we investigate the effect of the hyperparameters of PECANN on solving the RE problem. PECANN provides an effective tool for simulating unsaturated infiltration.
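The augmented Lagrangian idea behind PECANN, minimizing a loss while enforcing a constraint by combining a multiplier term with a quadratic penalty and updating the multiplier between optimization rounds, can be shown on a toy scalar problem. This sketch is not the paper's network training loop; it only illustrates the constraint-handling mechanism on minimizing f(x) subject to c(x) = 0:

```python
def augmented_lagrangian(f_grad, c, c_grad, x0, mu=10.0, outer=20, inner=200, lr=0.01):
    """Minimize f(x) subject to c(x) = 0 by inner gradient descent on
    f + lam*c + (mu/2)*c^2, with an outer dual update lam += mu*c(x)."""
    x, lam = x0, 0.0
    for _ in range(outer):
        for _ in range(inner):
            g = f_grad(x) + (lam + mu * c(x)) * c_grad(x)
            x -= lr * g
        lam += mu * c(x)   # strengthen the constraint between rounds
    return x
```

With f(x) = x^2 and the constraint x = 1, the iterates converge to the constrained minimizer x = 1 while the multiplier converges to -2.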
Funding: Supported in part by the National Natural Science Foundation of China (82072019); the Shenzhen Basic Research Program (JCYJ20210324130209023); the Shenzhen-Hong Kong-Macao S&T Program (Category C) (SGDX20201103095002019); the Mainland-Hong Kong Joint Funding Scheme (MHKJFS) (MHP/005/20); the Project of Strategic Importance Fund (P0035421) and the Projects of RISA (P0043001) from the Hong Kong Polytechnic University; the Natural Science Foundation of Jiangsu Province (BK20201441); the Provincial and Ministry Co-constructed Project of Henan Province Medical Science and Technology Research (SBGJ202103038, SBGJ202102056); the Henan Province Key R&D and Promotion Project (Science and Technology Research) (222102310015); the Natural Science Foundation of Henan Province (222300420575); and the Henan Province Science and Technology Research (222102310322).
Abstract: Modern medicine relies on various medical imaging technologies for non-invasively observing patients' anatomy. However, the interpretation of medical images can be highly subjective and dependent on the expertise of clinicians. Moreover, some potentially useful quantitative information in medical images, especially that which is not visible to the naked eye, is often ignored during clinical practice. In contrast, radiomics performs high-throughput feature extraction from medical images, which enables quantitative analysis of medical images and prediction of various clinical endpoints. Studies have reported that radiomics exhibits promising performance in diagnosis and in predicting treatment responses and prognosis, demonstrating its potential as a non-invasive auxiliary tool for personalized medicine. However, radiomics remains in a developmental phase, as numerous technical challenges have yet to be solved, especially in feature engineering and statistical modeling. In this review, we introduce the current utility of radiomics by summarizing research on its application in the diagnosis, prognosis, and prediction of treatment responses in patients with cancer. We focus on machine learning approaches for feature extraction and selection during feature engineering, and for imbalanced datasets and multi-modality fusion during statistical modeling. Furthermore, we discuss the stability, reproducibility, and interpretability of features, and the generalizability and interpretability of models. Finally, we offer possible solutions to current challenges in radiomics research.
Funding: This study was funded by the National Key R&D Program of China (2021YFD1900700), the National Natural Science Foundation of China (51909221), and the China Postdoctoral Science Foundation (2020T130541 and 2019M650277).
Abstract: To further improve the utility of unmanned aerial vehicle (UAV) remote sensing for quickly and accurately monitoring the growth of winter wheat under film mulching, this study examined the treatments of ridge mulching, ridge-furrow full mulching, and flat cropping full mulching in winter wheat. Based on the fuzzy comprehensive evaluation (FCE) method, four agronomic parameters (leaf area index, above-ground biomass, plant height, and leaf chlorophyll content) were used to calculate the comprehensive growth evaluation index (CGEI) of the winter wheat, and 14 visible and near-infrared spectral indices were calculated using spectral purification technology to process the remote-sensing image data of winter wheat obtained by a multispectral UAV. Four machine learning algorithms (partial least squares, support vector machines, random forests, and artificial neural networks (ANN)) were used to build the winter wheat growth monitoring model under film mulching, and accuracy evaluation and mapping of the spatial and temporal distribution of winter wheat growth status were carried out. The results showed that the CGEI of winter wheat under film mulching constructed using the FCE method could objectively and comprehensively evaluate the crop growth status. The accuracy of remote-sensing inversion of the CGEI based on the ANN model was higher than for the individual agronomic parameters, with a coefficient of determination of 0.75, a root mean square error of 8.40, and a mean absolute error of 6.53. Spectral purification could eliminate the interference of background effects caused by mulching and soil, effectively improving the accuracy of the remote-sensing inversion of winter wheat under film mulching, with the best inversion effect achieved in the ridge-furrow full mulching area after spectral purification. The results of this study provide a theoretical reference for the use of UAV remote sensing to monitor the growth status of winter wheat under film mulching.
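A composite growth index of this kind is, at its core, a weighted combination of normalized agronomic parameters. The sketch below uses simple min-max normalization as a stand-in; the paper's FCE method uses fuzzy membership functions, so this is an illustrative simplification only:

```python
def composite_growth_index(values, weights, bounds):
    """Weighted sum of min-max normalized parameters.
    values:  measured agronomic parameters (e.g. LAI, biomass, height, chlorophyll);
    bounds:  (lo, hi) normalization range per parameter;
    weights: relative importance, expected to sum to 1."""
    return sum(w * (v - lo) / (hi - lo)
               for v, w, (lo, hi) in zip(values, weights, bounds))
```

Each parameter is first mapped to [0, 1] so that quantities with different units can be combined into a single dimensionless score.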
基金support from the National Key R&D plan(Grant No.2022YFC3004303)the National Natural Science Foundation of China(Grant No.42107161)+3 种基金the State Key Laboratory of Hydroscience and Hydraulic Engineering(Grant No.2021-KY-04)the Open Research Fund Program of State Key Laboratory of Hydroscience and Engineering(sklhse-2023-C-01)the Open Research Fund Program of Key Laboratory of the Hydrosphere of the Ministry of Water Resources(mklhs-2023-04)the China Three Gorges Corporation(XLD/2117).
Abstract: Rock fragmentation plays a critical role in rock avalanches, yet conventional approaches such as classical granular flow models or the bonded particle model have limitations in accurately characterizing the progressive disintegration and kinematics of multi-deformable rock blocks during rockslides. The present study proposes a discrete-continuous numerical model, based on a cohesive zone model, to explicitly incorporate the progressive fragmentation and intricate interparticle interactions inherent in rockslides. Breakable rock granular assemblies are released along an inclined plane and flow onto a horizontal plane. Numerical scenarios are established to incorporate variations in slope angle, initial height, friction coefficient, and particle number. The evolution of fragmentation, kinematic, runout, and depositional characteristics is quantitatively analyzed and compared with experimental and field data. A positive linear relationship between the equivalent friction coefficient and the apparent friction coefficient is identified. In general, the granular mass predominantly exhibits the characteristics of a dense granular flow, with the Savage number decreasing as the volume of the mass increases. Particle breakage occurs gradually in a bottom-up manner, leading to a significant increase in the angular velocities of the rock blocks with increasing depth. The simulation results reproduce the field observations of inverse grading and source stratigraphy preservation in the deposit. We propose a disintegration index that incorporates drop height, rock mass volume, and rock strength. Our findings demonstrate a consistent linear relationship between this index and the degree of fragmentation in all tested scenarios.
Funding: Supported by the National Key Research and Development Program of China (Grant No. 2022YFC3080200), the National Natural Science Foundation of China (Grant No. 42022053), and the China Postdoctoral Science Foundation (Grant No. 2023M731264).
Abstract: Natural slopes usually display complicated exposed rock surfaces characterized by complex and substantial terrain undulation and ubiquitous undesirable phenomena such as vegetation cover and rockfalls. This study presents a systematic outcrop study of fracture pattern variations in a complicated rock slope, together with a qualitative and quantitative study of the impact of these complex phenomena on three-dimensional (3D) discrete fracture network (DFN) modeling. As studies of outcrop fracture patterns have so far focused on local variations, we put forward a statistical analysis of global variations. The entire outcrop is partitioned into several subzones, and the subzone-scale variability of fracture geometric properties is analyzed (including the orientation, the density, and the trace length). The results reveal significant variations in fracture characteristics (such as the degree of concentration, the average orientation, the density, and the trace length) among different subzones. Moreover, the density of the fracture set that is approximately parallel to the slope surface is notably higher than that of the other fracture sets across all subzones. To improve the accuracy of DFN modeling, the effects of three common phenomena resulting from vegetation and rockfalls are qualitatively analyzed and corresponding quantitative data processing solutions are proposed. Subsequently, the 3D fracture geometric parameters are determined for different areas of the high-steep rock slope in terms of the subzone dimensions. The results show significant variations in the same set of 3D fracture parameters across different regions, with density differing by up to tenfold and mean trace length differing by 3-4 times. The study results provide precise geological structural information, improve modeling accuracy, and offer practical solutions for addressing complex outcrop issues.
Funding: This work was financially supported by the National Natural Science Foundation of China (52074089 and 52104064) and the Natural Science Foundation of Heilongjiang Province of China (LH2019E019).
Abstract: As the main link of ground engineering, crude oil gathering and transportation systems require huge energy consumption and have complex structures. It is necessary to establish an energy efficiency evaluation system for crude oil gathering and transportation systems and to identify energy efficiency gaps. In this paper, the energy efficiency evaluation system of the crude oil gathering and transportation system of an oilfield in western China is established. Combined with a big data analysis method, a GA-BP neural network is used to establish an energy efficiency index prediction model for crude oil gathering and transportation systems. The comprehensive energy consumption, gas consumption, power consumption, energy utilization rate, heat utilization rate, and power utilization rate of crude oil gathering and transportation systems are predicted. Considering the efficiency and unit consumption indices of the crude oil gathering and transportation system, an energy efficiency evaluation system is established based on a game-theory combined weighting method and the TOPSIS evaluation method. The subjective weight is determined by the triangular fuzzy analytic hierarchy process, the objective weight is determined by the entropy weight method, and the game-theory combined weight unites subjectivity with objectivity to comprehensively evaluate the energy efficiency of crude oil gathering and transportation systems and their subsystems. Finally, the weak links in energy utilization are identified, improving energy conservation and consumption reduction. The above research provides technical support for the green, efficient, and intelligent development of crude oil gathering and transportation systems.
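The TOPSIS step ranks alternatives by their relative closeness to an ideal solution. A minimal self-contained sketch of standard TOPSIS (the weighting and criteria below are illustrative, not the paper's combined game-theory weights):

```python
def topsis(matrix, weights, benefit):
    """TOPSIS ranking. matrix[i][j]: alternative i scored on criterion j;
    benefit[j]: True if larger is better (False for cost criteria).
    Returns one closeness score in [0, 1] per alternative (higher = better)."""
    m, n = len(matrix), len(matrix[0])
    norm = [sum(matrix[i][j] ** 2 for i in range(m)) ** 0.5 for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norm[j] for j in range(n)] for i in range(m)]
    best = [max(v[i][j] for i in range(m)) if benefit[j]
            else min(v[i][j] for i in range(m)) for j in range(n)]
    worst = [min(v[i][j] for i in range(m)) if benefit[j]
             else max(v[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        d_best = sum((v[i][j] - best[j]) ** 2 for j in range(n)) ** 0.5
        d_worst = sum((v[i][j] - worst[j]) ** 2 for j in range(n)) ** 0.5
        scores.append(d_worst / (d_best + d_worst))
    return scores
```

An alternative that dominates on every benefit criterion coincides with the ideal point and scores 1, while a fully dominated one scores 0.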
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 82173620 to Yang Zhao and 82041024 to Feng Chen); partially supported by the Bill & Melinda Gates Foundation (Grant No. INV-006371 to Feng Chen) and the Priority Academic Program Development of Jiangsu Higher Education Institutions.
Abstract: Deterministic compartment models (CMs) and stochastic models, including stochastic CMs and agent-based models, are widely utilized in epidemic modeling. However, the relationship between CMs and their corresponding stochastic models is not well understood. The present study aimed to address this gap through a comparative study using the susceptible, exposed, infectious, and recovered (SEIR) model and its extended CMs from the coronavirus disease 2019 modeling literature. We demonstrate the equivalence of the numerical solution of CMs using the Euler scheme and their stochastic counterparts through theoretical analysis and simulations. Based on this equivalence, we propose an efficient model calibration method that can replicate the exact solution of CMs in the corresponding stochastic models through parameter adjustment. This advancement in calibration techniques enhances the accuracy of stochastic modeling in capturing the dynamics of epidemics. However, it should be noted that discrete-time stochastic models cannot perfectly reproduce the exact solution of continuous-time CMs. Additionally, we propose a new stochastic compartment-and-agent mixed model as an alternative to agent-based models for large-scale population simulations with a limited number of agents; this model offers a balance between computational efficiency and accuracy. The results of this research contribute to the comparison and unification of deterministic CMs and stochastic models in epidemic modeling, and they have implications for the development of hybrid models that integrate the strengths of both frameworks. Overall, the present study provides valuable epidemic modeling techniques and practical applications for understanding and controlling the spread of infectious diseases.
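The Euler-scheme solution of a deterministic SEIR model, the baseline against which the stochastic counterparts are compared, can be sketched as follows (parameter values below are illustrative, not the study's calibration):

```python
def seir_euler(beta, sigma, gamma, s0, e0, i0, r0, dt, steps):
    """Forward-Euler integration of the SEIR compartment model on
    population fractions; S + E + I + R is conserved at every step.
    beta: transmission rate, sigma: incubation rate, gamma: recovery rate."""
    s, e, i, r = s0, e0, i0, r0
    for _ in range(steps):
        new_exposed = beta * s * i * dt     # S -> E
        new_infectious = sigma * e * dt     # E -> I
        new_recovered = gamma * i * dt      # I -> R
        s -= new_exposed
        e += new_exposed - new_infectious
        i += new_infectious - new_recovered
        r += new_recovered
    return s, e, i, r
```

Because every term moves mass between compartments, the total stays at 1 up to floating-point rounding, which is a convenient sanity check on any discretization.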
Funding: This work was supported by the National Natural Science Foundation of China under Grant 62233003 and the National Key Research and Development Program of China under Grant 2020YFB1708602.
Abstract: The proliferation of intelligent, connected Internet of Things (IoT) devices facilitates data collection. However, task workers may be reluctant to participate in data collection due to privacy concerns, and task requesters may be concerned about the validity of the collected data. Hence, it is vital to evaluate the quality of the data collected by task workers while protecting privacy in spatial crowdsourcing (SC) data collection tasks with IoT. To this end, this paper proposes a privacy-preserving data reliability evaluation for SC in IoT, named PARE. First, we design a data uploading format using blockchain and the Paillier homomorphic cryptosystem, providing unchangeable and traceable data while overcoming privacy concerns. Second, based on the uploaded data, we propose a method to determine the approximate correct value region without knowing the exact values. Finally, we offer a data filtering mechanism based on the Paillier cryptosystem using this value region. The evaluation and analysis results show that PARE outperforms the existing solution in terms of performance and privacy protection.
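The key Paillier property used in schemes like this is additive homomorphism: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so aggregates can be computed without seeing individual values. A toy textbook implementation with deliberately small primes (real deployments use moduli of 2048 bits or more; this is illustration only, not PARE's protocol):

```python
import math
import random

def paillier_keygen(p=1789, q=2003):
    """Toy key generation with g = n + 1 (small primes for illustration only)."""
    n = p * q
    return (n, n + 1), (n, (p - 1) * (q - 1))  # public (n, g), private (n, lam)

def encrypt(pub, m):
    n, g = pub
    n2 = n * n
    while True:
        r = random.randrange(2, n)             # random blinding factor
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(priv, c):
    n, lam = priv
    n2 = n * n
    l = (pow(c, lam, n2) - 1) // n             # L(u) = (u - 1) / n
    mu = pow(lam, -1, n)                       # requires Python 3.8+
    return (l * mu) % n

def add_encrypted(pub, c1, c2):
    n, _ = pub
    return (c1 * c2) % (n * n)                 # homomorphic plaintext addition
```

Decrypting the product of the encryptions of 123 and 456 returns 579 without either plaintext ever being exposed.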
Funding: Funded by the National Natural Science Foundation of China (Grant No. 41861134008); the Muhammad Asif Khan academician workstation of Yunnan Province (Grant No. 202105AF150076); the General Program of the Yunnan Province Science and Technology Department (Grant No. 202105AF150076); the Key Project of the Natural Science Foundation of Yunnan Province (Grant No. 202101AS070019); the Key R&D Program of Yunnan Province (Grant No. 202003AC100002); the General Program of the Basic Research Plan of Yunnan Province (Grant No. 202001AT070059); the major scientific and technological project of Yunnan Province "Research on Key Technologies of Ecological Environment Monitoring and Intelligent Management of Natural Resources in Yunnan" (No. 202202AD080010); the project "Study on High-Level Hidden Landslide Identification Based on Multi-Source Data" of the Key Laboratory of Early Rapid Identification, Prevention and Control of Geological Diseases in Traffic Corridors of High-Intensity Earthquake Mountainous Areas of Yunnan Province (KLGDTC-2021-02); and the Guizhou Science and Technology Fund (QKHJ-ZK[2023]YB 193).
Abstract: Landslide hazard susceptibility evaluation takes on critical significance in early warning and disaster prevention and reduction. To address the poor effectiveness of landslide data and the complex weight calculation for multiple evaluation factors in existing landslide susceptibility evaluation models, this study proposes a landslide hazard susceptibility evaluation method that combines SBAS-InSAR (Small Baseline Subset Interferometric Synthetic Aperture Radar) with an SSA-BP (Sparrow Search Algorithm-Back Propagation) neural network. SBAS-InSAR technology is adopted to identify potential landslide hazards in the study area and update the landslide hazard inventory, and 11 evaluation factors are chosen to construct the SSA-BP model for training and validation. The Baihetan Reservoir area is selected as a case study for validation. The results indicate that SBAS-InSAR, combining both ascending and descending orbit data, effectively addresses the incomplete identification of landslide hazards caused by the geometric distortions of single-orbit SAR data (e.g., shadow, layover, and foreshortening) in deep canyon areas, thereby enabling the acquisition of up-to-date landslide hazard data. Moreover, compared with the conventional BP (Back Propagation) algorithm, the SSA-BP model shows a significant increase in accuracy, with mean squared error and mean absolute error reduced by 0.0142 and 0.0607, respectively. Additionally, during susceptibility evaluation, the SSA-BP model avoids the considerable manual intervention otherwise required to calculate the weights of evaluation factors. The area under the curve (AUC) of this model reaches 0.909, surpassing BP (0.835), random forest (0.792), and the information value method (0.699). The risk of landslide occurrence in the Baihetan Reservoir area is positively correlated with slope, surface temperature, and deformation rate, and negatively correlated with distance to faults and normalized difference vegetation index. Lithology exerts minimal influence on landslide occurrence; risk is low in forest land and high in grassland. The proposed method provides a useful reference for disaster prevention and mitigation departments performing landslide hazard susceptibility evaluations in deep canyon areas under complex geological conditions.
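The AUC figures quoted above (0.909 vs. 0.835, 0.792, 0.699) come from ranking susceptibility scores against observed landslide labels. A minimal, dependency-free sketch of that computation via the rank-based (Mann-Whitney U) formulation of ROC AUC; the sample labels and scores below are purely illustrative, not the study's data:

```python
def roc_auc(labels, scores):
    """Rank-based ROC AUC (Mann-Whitney U statistic); tied scores get average ranks."""
    pairs = sorted(zip(scores, labels))
    n = len(pairs)
    rank_sum_pos = 0.0
    i = 0
    while i < n:
        j = i
        while j < n and pairs[j][0] == pairs[i][0]:
            j += 1  # group tied scores together
        avg_rank = (i + 1 + j) / 2.0  # average of ranks i+1 .. j
        rank_sum_pos += avg_rank * sum(1 for k in range(i, j) if pairs[k][1] == 1)
        i = j
    n_pos = sum(labels)
    n_neg = n - n_pos
    # U statistic normalized by the number of positive/negative pairs
    return (rank_sum_pos - n_pos * (n_pos + 1) / 2.0) / (n_pos * n_neg)

# Illustrative: 1 = landslide occurred, 0 = stable; scores are model susceptibilities.
auc = roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])  # 0.75
```

An AUC of 0.5 corresponds to random ranking and 1.0 to perfect separation, which is why the gap between 0.909 and 0.699 is a meaningful difference between models.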
Funding: This work is supported by the 2022 National Key Research and Development Plan "Security Protection Technology for Critical Information Infrastructure of Distribution Network" (2022YFB3105100).
Abstract: First, we propose a cross-domain authentication architecture based on a trust evaluation mechanism, covering registration, certificate issuance, and cross-domain authentication processes. A direct trust evaluation mechanism based on a time decay factor is proposed, taking into account the influence of historical interaction records: each historical interaction record is weighted by the time decay factor to produce the updated historical record data. We draw on the beta distribution to enhance the flexibility and adaptability of the direct trust assessment model, so that it better captures temporal trends in the historical record. We then propose an autoencoder-based trust clustering algorithm. Feature extraction is performed with autoencoders, and Kullback-Leibler (KL) divergence is used to calculate the reconstruction error. When constructing the convolutional autoencoder, we introduce convolutional neural networks to improve training efficiency and add sparsity constraints to the hidden layer; the sparse penalty term in the loss function measures the difference through the KL divergence. Trust clustering is then performed with the density-based spatial clustering of applications with noise (DBSCAN) algorithm. During clustering, edge nodes exhibit a variety of trust-related attributes; we assign different attribute weights according to the relative importance of each attribute, with a larger weight meaning the attribute contributes more to the distance calculation. Finally, we introduce adaptive weights to compute a comprehensive trust evaluation. Simulation experiments show that our trust evaluation mechanism achieves excellent reliability and accuracy.
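The direct trust step described above, decaying each interaction record by its age and then drawing on the beta distribution, can be sketched as follows. This is a minimal illustration of the standard beta-reputation pattern, not the paper's exact formulas; the decay rate and record format are assumptions:

```python
import math

def direct_trust(records, now, decay=0.1):
    """Direct trust from a time-decayed interaction history.

    records: list of (timestamp, outcome) pairs, outcome 1 = satisfactory
    interaction, 0 = unsatisfactory. Each record is weighted by the time
    decay factor exp(-decay * age), and the decayed success/failure counts
    parameterize a Beta(a+1, b+1) posterior whose expectation is returned
    as the trust value.
    """
    a = b = 0.0
    for t, ok in records:
        w = math.exp(-decay * (now - t))  # time decay factor: older records count less
        if ok:
            a += w
        else:
            b += w
    return (a + 1.0) / (a + b + 2.0)  # expectation of Beta(a+1, b+1)
```

With no history the trust value is the neutral prior 0.5, and an old failure contributes far less than a recent success, which is the temporal trend the decay factor is meant to capture.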
Funding: Supported in part by the National Key Research and Development Program of China (2022YFB3305300) and the National Natural Science Foundation of China (62173178).
Abstract: Ethylene glycol (EG) plays a pivotal role as a primary raw material in the polyester industry, and the syngas-to-EG route has become a significant production route. The gas-phase catalytic coupling of carbon monoxide (CO) to synthesize dimethyl oxalate (DMO) is a crucial process in this route, and the composition of the reactor outlet influences both the final quality of the EG product and the energy consumption of the subsequent separation process. However, measuring product quality in real time and establishing accurate dynamic mechanism models are both challenging. To model the DMO synthesis process effectively, this study proposes a hybrid modeling strategy that integrates process mechanisms with data-driven approaches. The CO gas-phase catalytic coupling mechanism model is developed from intrinsic kinetics and material balances, while a long short-term memory (LSTM) neural network predicts the macroscopic reaction rate by exploiting temporal relationships in archived measurements. The model is trained semi-supervised on historical data to accommodate limited-label scenarios. By integrating the LSTM predictions with the mechanism model, the hybrid approach provides reliable and interpretable forecasts of mass fractions. Empirical investigations validate the superiority of the proposed hybrid modeling approach over conventional data-driven models (DDMs) and other hybrid modeling techniques.
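The temporal component of the hybrid model is an LSTM. As a reference for how such a cell propagates state between time steps, here is a single LSTM cell step written out with scalar input and state for brevity; the weight names are illustrative and this is not the paper's trained network:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c, W):
    """One LSTM cell step (scalar input/state for clarity).

    W is a dict of scalar weights. In the hybrid scheme described above,
    the LSTM's output would feed the macroscopic reaction rate into the
    mechanism model rather than being used directly.
    """
    i = sigmoid(W['wi'] * x + W['ui'] * h + W['bi'])   # input gate
    f = sigmoid(W['wf'] * x + W['uf'] * h + W['bf'])   # forget gate
    o = sigmoid(W['wo'] * x + W['uo'] * h + W['bo'])   # output gate
    g = math.tanh(W['wg'] * x + W['ug'] * h + W['bg']) # candidate cell update
    c_new = f * c + i * g          # cell state: decayed memory plus gated update
    h_new = o * math.tanh(c_new)   # hidden state exposed to the next layer
    return h_new, c_new
```

The gated cell state is what lets the network carry information across many time steps of archived measurements, which is why LSTMs suit slowly evolving process data.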
Abstract: 1) Background: Rapid and accurate diagnostic testing for case identification, quarantine, and contact tracing is essential for managing the COVID-19 pandemic. Rapid antigen detection tests are available; however, it is important to evaluate their performance before use. We tested a rapid immunochromatographic antigen detection test for SARS-CoV-2 (Boson Biotech SARS-CoV-2 Ag Test, Xiamen Boson Biotech Co., Ltd., China) and compared the results with those of real-time reverse transcriptase polymerase chain reaction (RT-PCR), the gold standard. 2) Methods: From November 2021 to December 2021, samples were collected from symptomatic patients and asymptomatic individuals referred for testing during the second pandemic wave in Gabon. All participants attended "CTA Angondjé", a field hospital set up as part of the management of COVID-19 in Gabon. Two nasopharyngeal swabs were collected from each patient, one for the antigen test and the other for RT-PCR. 3) Results: A total of 300 samples were collected from 189 symptomatic and 111 asymptomatic individuals. The sensitivity and specificity of the antigen test were 82.5% [95% CI 73.8-89.3] and 97.9% [95% CI 92.2-98.2], respectively, and the diagnostic accuracy was 84.4% (95% CI 79.8-88.3%). The antigen test was more likely to be positive for samples with RT-PCR Ct values ≤ 32, with a sensitivity of 89.8%. 4) Conclusions: The Boson Biotech SARS-CoV-2 Ag Test has good sensitivity and can detect SARS-CoV-2 infection, especially among symptomatic individuals with high viral loads. This test could be incorporated into efficient testing algorithms as an alternative to PCR to decrease diagnostic delays and curb viral transmission.
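The sensitivity, specificity, and accuracy figures reported above are all derived from the same 2x2 confusion matrix of antigen results against the RT-PCR reference. A minimal sketch of those definitions; the counts in the example are illustrative, not the study's actual tabulated data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and accuracy from a 2x2 confusion matrix
    (index test result vs. the reference standard, here antigen vs. RT-PCR)."""
    sensitivity = tp / (tp + fn)                 # true positive rate
    specificity = tn / (tn + fp)                 # true negative rate
    accuracy = (tp + tn) / (tp + fp + fn + tn)   # overall agreement
    return sensitivity, specificity, accuracy

# Illustrative counts: 80 true positives, 20 false negatives,
# 95 true negatives, 5 false positives.
sens, spec, acc = diagnostic_metrics(tp=80, fp=5, fn=20, tn=95)
```

Sensitivity rises with viral load because antigen abundance tracks it, which is why the subgroup with Ct ≤ 32 shows the higher figure of 89.8%.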
Funding: Supported by the National Natural Science Foundation of China (52203364, 52188101, 52020105010), the National Key R&D Program of China (2021YFB3800300, 2022YFB3803400), the Strategic Priority Research Program of the Chinese Academy of Sciences (XDA22010602), the China Postdoctoral Science Foundation (2022M713214), and the China National Postdoctoral Program for Innovative Talents (BX2021321).
Abstract: Metal-ion batteries (MIBs), including alkali metal-ion (Li^(+), Na^(+), and K^(+)), multivalent metal-ion (Zn^(2+), Mg^(2+), and Al^(3+)), metal-air, and metal-sulfur batteries, play an indispensable role in electrochemical energy storage. However, the performance of MIBs is influenced by numerous variables, posing multi-dimensional and long-term challenges for battery research and performance enhancement. Machine learning (ML), with its capability to solve intricate tasks and perform robust data processing, is now catalyzing a revolutionary transformation in the development of MIB materials and devices. In this review, we summarize the ML algorithms that have expedited research on MIBs over the past five years. We present an extensive overview of existing algorithms, elucidating their details, advantages, and limitations in applications that encompass electrode screening, material property prediction, electrolyte formulation design, electrode material characterization, manufacturing parameter optimization, and real-time battery status monitoring. Finally, we propose potential solutions and future directions for the application of ML in advancing MIB development.
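Material property prediction, one of the applications listed above, often starts from simple regression baselines mapping a descriptor to a target property. A dependency-free ordinary least-squares sketch; the descriptor and property values are purely illustrative and not drawn from the review:

```python
def fit_linear(xs, ys):
    """Ordinary least-squares fit y ≈ a*x + b for a single scalar descriptor,
    e.g. mapping a hypothetical structural descriptor to a battery property."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)                      # descriptor variance
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))    # covariance with target
    a = sxy / sxx
    b = my - a * mx
    return a, b

# Illustrative training data lying exactly on y = 2x + 1.
slope, intercept = fit_linear([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```

Such baselines are useful as sanity checks before moving to the tree-ensemble and neural models the review surveys, since a complex model that cannot beat them adds little.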