Powered by advanced information technology, more and more complex systems are exhibiting characteristics of cyber-physical-social systems (CPSS). In this context, the computational experiments method has emerged as a novel approach for the design, analysis, management, control, and integration of CPSS, which can realize the causal analysis of complex systems by means of the "algorithmization" of "counterfactuals". However, because CPSS involve human and social factors (e.g., autonomy, initiative, and sociality), it is difficult for traditional design of experiments (DOE) methods to achieve a generative explanation of system emergence. To address this challenge, this paper proposes an integrated approach to the design of computational experiments, incorporating three key modules: 1) Descriptive module: determining the influencing factors and response variables of the system by modeling an artificial society; 2) Interpretative module: selecting a factorial experimental design to identify the relationship between influencing factors and macro phenomena; 3) Predictive module: building a meta-model equivalent to the artificial society to explore its operating laws. Finally, a case study of crowd-sourcing platforms is presented to illustrate the application process and effectiveness of the proposed approach, which can reveal the social impact of algorithmic behavior on the "rider race".
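The interpretative module's factorial design can be illustrated with a minimal sketch. The factor names and levels below are hypothetical placeholders, not those used in the paper; the sketch only shows how a full factorial design enumerates every combination of influencing factors for the artificial-society runs.

```python
from itertools import product

def full_factorial(factors):
    """Enumerate every combination of factor levels (a full factorial design)."""
    names = list(factors)
    return [dict(zip(names, combo)) for combo in product(*factors.values())]

# Hypothetical influencing factors for an artificial crowd-sourcing society.
factors = {
    "dispatch_algorithm": ["greedy", "fair"],
    "rider_density": [0.5, 1.0, 2.0],
    "incentive_level": [1, 2],
}
design = full_factorial(factors)  # 2 * 3 * 2 = 12 experiment runs
```

Each entry of `design` is one parameterization of the artificial society; the response variables of each run would then feed the factorial analysis.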
Photocatalysis, a critical strategy for harvesting sunlight to address energy demand and environmental concerns, is underpinned by the discovery of high-performance photocatalysts; how to design photocatalysts is therefore generating widespread interest as a route to boosting the conversion efficiency of solar energy. In the past decade, computational technologies and theoretical simulations have led to a major leap in the development of high-throughput computational screening strategies for novel high-efficiency photocatalysts. In this viewpoint, we start by introducing the challenges of photocatalysis from the perspective of experimental practice, especially the inefficiency of the traditional "trial and error" method. Subsequently, a cross-sectional comparison between experimental and high-throughput computational screening for photocatalysis is presented and discussed in detail. On the basis of current experimental progress in photocatalysis, we also exemplify the various challenges associated with high-throughput computational screening strategies. Finally, we offer a preferred high-throughput computational screening procedure for photocatalysts from an experimental practice perspective (model construction and screening, standardized experiments, assessment and revision), with the aim of better correlating high-throughput simulations with experimental practices and motivating the search for better descriptors.
For living anionic polymerization (LAP), the solvent has a great influence on both the reaction mechanism and kinetics. In this work, using the classical butyllithium-styrene polymerization as a model system, the effect of solvent on the mechanism and kinetics of LAP was revealed through a strategy combining density functional theory (DFT) calculations and kinetic modeling. In terms of mechanism, a detailed energy decomposition analysis of the electrostatic interactions between initiator and solvent molecules shows that the stronger the solvent polarity, the more electrons transfer from the initiator to the solvent. Furthermore, we also found that the stronger the solvent polarity, the higher the monomer initiation energy barrier and the smaller the initiation rate coefficient. Counterintuitively, initiation is more favorable at lower temperatures based on the calculated ΔG_(TS) results. Finally, the kinetic characteristics in different solvents were further examined by kinetic modeling. It is found that in benzene and n-pentane, the polymerization rate exhibits first-order kinetics. In contrast, slow initiation and fast propagation were observed in tetrahydrofuran (THF) due to the slow free-ion formation rate, leading to a deviation from first-order kinetics.
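The first-order behavior observed in benzene and n-pentane can be sketched numerically. Assuming a hypothetical effective first-order rate coefficient (not a value from the paper), ln([M]0/[M]) grows linearly in time, which is the signature the kinetic modeling checks for:

```python
import math

def monomer_conc(k_eff, t, m0=1.0):
    """First-order consumption of monomer: [M](t) = [M]0 * exp(-k_eff * t)."""
    return m0 * math.exp(-k_eff * t)

k = 0.02   # hypothetical effective rate coefficient, s^-1
m0 = 1.0   # normalized initial monomer concentration
ts = [0.0, 50.0, 100.0]
# ln([M]0/[M]) at each sampling time; equal spacing in t gives equal increments.
logs = [math.log(m0 / monomer_conc(k, t, m0)) for t in ts]
```

A deviation from this straight line (as reported for THF) would show up as unequal increments between equally spaced time points.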
This study developed a numerical model for efficiently treating solid-waste magnesium nitrate hydrate through multi-step chemical reactions. The model simulates two-phase flow, heat, and mass transfer processes in a pyrolysis furnace to improve the decomposition rate of magnesium nitrate. The performance of multi-nozzle and single-nozzle injection methods was evaluated, and the effects of primary-to-secondary nozzle flow ratios, velocity ratios, and secondary nozzle inclination angles on the decomposition rate were investigated. Results indicate that multi-nozzle injection has a higher conversion efficiency and decomposition rate than single-nozzle injection, with a 10.3% higher conversion rate under the design parameters. The decomposition rate depends primarily on the average residence time of the particles, which can be increased by decreasing the flow and velocity ratios and increasing the inclination angle of the secondary nozzles. The optimal parameters are an injection flow ratio of 40%, an injection velocity ratio of 0.6, and a secondary nozzle inclination of 30°, corresponding to a maximum decomposition rate of 99.33%.
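The link between residence time and decomposition rate can be sketched with a first-order conversion model. The rate constant below is a hypothetical placeholder; the point is only that conversion X = 1 - exp(-k·τ) rises monotonically with particle residence time τ, consistent with the reported trend:

```python
import math

def conversion(k, tau):
    """First-order decomposition: X = 1 - exp(-k * tau)."""
    return 1.0 - math.exp(-k * tau)

def residence_time_for(k, target):
    """Residence time needed to reach a target conversion."""
    return -math.log(1.0 - target) / k

k = 0.5  # hypothetical decomposition rate constant, 1/s
tau = residence_time_for(k, 0.9933)  # time to reach the paper's peak conversion
```

Under this caricature, any nozzle setting that lengthens the average residence time (lower flow and velocity ratios, steeper secondary-nozzle inclination) pushes the conversion toward its asymptote.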
Recent industrial explosions globally have intensified the focus in mechanical engineering on designing infrastructure systems and networks capable of withstanding blast loading. Initially centered on high-profile facilities such as embassies and petrochemical plants, this concern now extends to a wider array of infrastructures and facilities. Engineers and scholars increasingly prioritize structural safety against explosions, particularly to prevent disproportionate collapse and damage to nearby structures. Urbanization has further amplified the reliance on oil and gas pipelines, making them vital for urban life and prime targets for terrorist activities. Consequently, there is a growing imperative for computational engineering solutions to tackle blast loading on pipelines and mitigate associated risks to avert disasters. In this study, an empty pipe model was successfully validated under contact blast conditions using Abaqus software, a powerful tool in mechanical engineering for simulating blast effects on buried pipelines. Employing a Eulerian-Lagrangian computational fluid dynamics approach, the investigation extended to above-surface and below-surface blasts at standoff distances of 25 and 50 mm. Material descriptions in the numerical model relied on Abaqus's default mechanical models. Comparative analysis revealed varying pipe performance, with deformation decreasing as the explosion-to-pipe distance increased. The explosion's location relative to the pipe surface notably influenced deformation levels, a key finding highlighted in the study. Moreover, quantitative findings indicated varying ratios of plastic dissipation energy (PDE) for different blast scenarios compared with the contact blast (P0). Specifically, P1 (25 mm subsurface blast) and P2 (50 mm subsurface blast) showed approximately 24.07% and 14.77% of P0's PDE, respectively, while P3 (25 mm above-surface blast) and P4 (50 mm above-surface blast) exhibited lower PDE values, accounting for about 18.08% and 9.67% of P0's PDE, respectively. Utilizing energy-absorbing materials on the pipeline, such as thin coatings of ultra-high-strength concrete, metallic foams, and carbon fiber-reinforced polymer wraps, is recommended to effectively mitigate blast damage. This research contributes to the advancement of mechanical engineering by providing insights and solutions crucial for enhancing the resilience and safety of underground pipelines in the face of blast events.
The utilization of mobile edge computing (MEC) for unmanned aerial vehicle (UAV) communication presents a viable solution for achieving high-reliability, low-latency communication. This study explores the potential of employing intelligent reflecting surfaces (IRS) and UAVs as relay nodes to efficiently offload user computing tasks to the MEC server in the system model. Specifically, the user node accesses the primary-user spectrum while adhering to the constraint of satisfying the primary user's peak interference power. Furthermore, the UAV acquires energy without interrupting the primary user's regular communication by employing two energy harvesting schemes, namely time switching (TS) and power splitting (PS). The optimal UAV is selected by maximizing the instantaneous signal-to-noise ratio. Subsequently, the analytical expression for the outage probability of the system over Rayleigh channels is derived and analyzed. The impact of various system parameters, including the number of UAVs, the peak interference power, and the TS and PS factors, on the system's outage performance is investigated through simulation. The proposed system is also compared with two conventional benchmark schemes: optimal-UAV-link transmission and IRS-link transmission. The simulation results validate the theoretical derivation and demonstrate the superiority of the proposed scheme over the benchmark schemes.
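For i.i.d. Rayleigh links, the received SNR is exponentially distributed, so best-UAV selection admits a closed-form outage probability that a Monte Carlo run can cross-check. This is a bare-bones sketch that ignores the IRS path, energy harvesting, and the interference-power constraint of the actual system model; all parameter values are hypothetical:

```python
import math
import random

def outage_analytic(n_uav, snr_mean, snr_th):
    """Best-UAV selection over i.i.d. Rayleigh links: SNR is exponential,
    so P_out = P(max SNR < th) = (1 - exp(-th/mean))^N."""
    return (1.0 - math.exp(-snr_th / snr_mean)) ** n_uav

def outage_monte_carlo(n_uav, snr_mean, snr_th, trials, seed=1):
    """Empirical outage frequency over random channel realizations."""
    rng = random.Random(seed)
    out = 0
    for _ in range(trials):
        best = max(rng.expovariate(1.0 / snr_mean) for _ in range(n_uav))
        if best < snr_th:
            out += 1
    return out / trials

pa = outage_analytic(4, 10.0, 5.0)
pm = outage_monte_carlo(4, 10.0, 5.0, trials=100_000)
```

The analytic form also makes the paper's parameter study transparent: adding UAVs multiplies the outage probability by another factor below one.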
An extreme ultraviolet solar corona multispectral imager allows direct observation of high-temperature coronal plasma, which is related to solar flares, coronal mass ejections, and other significant coronal activities. This manuscript proposes a novel end-to-end computational design method for an extreme ultraviolet (EUV) solar corona multispectral imager operating at wavelengths near 100 nm, including a stray-light suppression design and computational image recovery. To suppress the strong stray light from the solar disk, an outer opto-mechanical structure is designed to protect the imaging component of the system. Considering the low reflectivity (less than 70%) and strong scattering (roughness) of existing extreme ultraviolet optical elements, the imaging component comprises only a primary mirror and a curved grating. A Lyot aperture is used to further suppress any residual stray light. Finally, a deep-learning computational imaging method is used to recover the individual multi-wavelength images from the original recorded multi-slit data. The design achieves a far-field angular resolution below 7" and a spectral resolution below 0.05 nm. The field of view is ±3 R_(☉) along the multi-slit moving direction, where R_(☉) represents the radius of the solar disk. The ratio of the corona's stray-light intensity to the solar center's irradiation intensity is less than 10^(-6) at the circle of 1.3 R_(☉).
Taking the assessment and evaluation of a computational mechanics course as its background, this paper constructs a diversified, student-centered course evaluation system that integrates both quantitative and qualitative evaluation methods. The system not only pays attention to students' practical skills and mastery of theoretical knowledge but also places special emphasis on cultivating students' innovative abilities. To realize a comprehensive and objective evaluation, an assessment method combining TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) multi-attribute decision analysis with entropy weight theory is adopted, and its validity and practicability are verified through an example analysis. This method can not only comprehensively and objectively evaluate students' learning outcomes but also provide a scientific decision-making basis for curriculum teaching reform. The implementation of this diversified course evaluation system can better reflect students' comprehensive abilities and promote the continuous improvement of teaching quality.
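The entropy-weight TOPSIS procedure can be sketched in a few steps: derive objective criterion weights from the entropy of the score matrix, then rank alternatives by closeness to the ideal solution. The student scores below are hypothetical, and all criteria are treated as benefit-type:

```python
import math

def entropy_weights(X):
    """Objective weights: low-entropy (more discriminating) criteria weigh more."""
    m, n = len(X), len(X[0])
    col_sums = [sum(row[j] for row in X) for j in range(n)]
    e = []
    for j in range(n):
        s = 0.0
        for row in X:
            p = row[j] / col_sums[j]
            if p > 0:
                s += p * math.log(p)
        e.append(-s / math.log(m))
    d = [1.0 - ej for ej in e]
    return [dj / sum(d) for dj in d]

def topsis(X, w):
    """Closeness of each alternative to the ideal (best) solution, in [0, 1]."""
    n = len(X[0])
    norms = [math.sqrt(sum(row[j] ** 2 for row in X)) for j in range(n)]
    V = [[w[j] * row[j] / norms[j] for j in range(n)] for row in X]
    best = [max(col) for col in zip(*V)]
    worst = [min(col) for col in zip(*V)]
    scores = []
    for v in V:
        dp = math.sqrt(sum((vi - bi) ** 2 for vi, bi in zip(v, best)))
        dm = math.sqrt(sum((vi - wi) ** 2 for vi, wi in zip(v, worst)))
        scores.append(dm / (dp + dm))
    return scores

# Hypothetical scores: rows = students, columns = (practice, theory, innovation).
X = [[85, 90, 70], [78, 95, 88], [92, 80, 75]]
w = entropy_weights(X)
scores = topsis(X, w)
ranking = sorted(range(len(scores)), key=lambda i: -scores[i])
```

The entropy step removes the need for subjectively chosen weights, which is the "objective" part of the evaluation the abstract emphasizes.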
Owing to the complex lithology of unconventional reservoirs, field interpreters usually need logging simulation models to provide a basis for interpretation. Among the various detection tools that use nuclear sources, the detector response can reflect many types of information about the medium. The Monte Carlo method is one of the primary methods used to obtain nuclear detection responses in complex environments. However, it requires extensive random sampling, consumes considerable computational resources, and cannot provide real-time responses. Therefore, a novel fast forward computational method (FFCM) for nuclear measurement is proposed that uses volumetric detection constraints to rapidly calculate the detector response in various complex environments. First, the data library required for the FFCM is built by collecting the detection volume, detector counts, and flux sensitivity functions through Monte Carlo simulation. Then, based on perturbation theory and the Rytov approximation, a model for the detector response is derived using the flux sensitivity function method and a one-group diffusion model. The environmental perturbation is constrained to optimize the model according to the tool structure and the impact of the formation and borehole within the effective detection volume. Finally, the method is applied to a neutron porosity tool for verification. In various complex simulation environments, the maximum relative error between the porosity calculated by Monte Carlo simulation and by the FFCM was 6.80%, with a root-mean-square error of 0.62 p.u. In field well applications, the formation porosity model obtained using the FFCM was in good agreement with the model obtained by interpreters, which demonstrates the validity and accuracy of the proposed method.
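The Rytov-style response model can be caricatured in one line: the perturbed detector count equals the reference count scaled by the exponential of the sensitivity-weighted environmental perturbations. The numbers below are placeholders, and the real FFCM derives its sensitivity functions from Monte Carlo flux data within the effective detection volume:

```python
import math

def detector_response(ref_count, sensitivities, perturbations):
    """Rytov-form forward model: count = ref * exp(sum_k S_k * d_k),
    where S_k are flux sensitivities and d_k environmental perturbations."""
    return ref_count * math.exp(sum(s * d for s, d in zip(sensitivities, perturbations)))

ref = 1.0e4                  # counts in the reference (unperturbed) environment
S = [0.8, -0.3, 0.1]         # hypothetical sensitivities (e.g., formation, borehole, tool)
resp = detector_response(ref, S, [0.05, 0.02, -0.01])
```

Because the forward model is a closed-form expression rather than a transport simulation, it can be evaluated in real time, which is the practical point of the FFCM.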
Owing to the constraints on fabricating γ-ray coding plates with many pixels, few studies have been carried out on γ-ray computational ghost imaging, and the development of coding plates with fewer pixels is essential to achieving it. Based on the regional similarity between Hadamard sub-coding plates, this study presents an optimization method to reduce the number of pixels of Hadamard coding plates. First, a moving-distance matrix was obtained to describe the regional similarity quantitatively. Second, based on this matrix, we used two ant colony optimization arrangement algorithms to maximize the reuse of pixels in the regionally similar areas and obtain new compressed coding plates. With full sampling, these two algorithms improved the pixel utilization of the coding plate, achieving compression ratios of 54.2% and 58.9%, respectively. In addition, three undersampling sequences (the Haar, Russian-dolls, and cake-cutting sequences) with different sampling rates were tested and discussed. At all sampling rates, our method reduced the number of pixels for all three sequences, especially for the Russian-dolls and cake-cutting sequences. Therefore, our method can reduce the number of pixels and the manufacturing cost and difficulty of the coding plate, which is beneficial for the implementation and application of γ-ray computational ghost imaging.
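The role of a Hadamard coding plate can be sketched in one dimension: each ±1 Hadamard row acts as one illumination pattern, the bucket detector records a single number per pattern, and orthogonality (H·Hᵀ = N·I) recovers the object exactly at full sampling. This toy uses Sylvester's construction and a hypothetical 8-pixel object; it does not include the pixel-compression step of the paper:

```python
def hadamard(n):
    """Sylvester construction: H(2k) = [[H, H], [H, -H]]; n must be a power of 2."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

def ghost_reconstruct(H, buckets):
    """x_j = (1/N) * sum_i H[i][j] * b_i, valid because H^T H = N * I."""
    N = len(H)
    return [sum(H[i][j] * buckets[i] for i in range(N)) / N for j in range(N)]

obj = [3, 0, 1, 4, 0, 2, 5, 1]          # toy 1-D object
H = hadamard(8)                          # each row = one coding-plate pattern
buckets = [sum(h * x for h, x in zip(row, obj)) for row in H]  # bucket signals
recon = ghost_reconstruct(H, buckets)
```

Undersampling schemes such as the Haar, Russian-dolls, and cake-cutting orderings simply choose which rows of H are measured first.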
Deuterium (D_(2)) is one of the important fuel sources that power nuclear fusion reactors. The existing D_(2)/H_(2) separation technologies for obtaining high-purity D_(2) are cost-intensive. Recent research has shown that metal-organic frameworks (MOFs) have good potential for D_(2)/H_(2) separation. In this work, a high-throughput computational screening of 12020 computation-ready experimental MOFs is carried out to determine the best MOFs for hydrogen isotope separation. Meanwhile, the detailed structure-performance correlation is systematically investigated with the aid of machine learning. The results indicate that the ideal D_(2)/H_(2) adsorption selectivity calculated from the Henry coefficients is strongly correlated with the 1/ΔAD feature descriptor, that is, the inverse of the adsorbability difference of the two adsorbates. Meanwhile, the machine learning (ML) results show that the prediction accuracy of all four ML methods is significantly improved after the addition of this feature descriptor. The ML results based on the extreme gradient boosting model also reveal that the 1/ΔAD descriptor has the highest relative importance compared with other commonly used descriptors. To further explore hydrogen isotope separation in binary mixtures, the 1548 MOFs with an ideal adsorption selectivity greater than 1.5 are simulated at equimolar conditions. The structure-performance relationship shows that MOFs with high adsorption selectivity generally have smaller pore sizes (0.3-0.5 nm) and lower surface areas. Among the top 200 performers, the materials mainly have the sql, pcu, cds, hxl, and ins topologies. Finally, three MOFs with high D_(2)/H_(2) selectivity and good D_(2) uptake, all of which have one-dimensional channel pores, are identified as the best candidates. The findings obtained in this work may be helpful for identifying potentially promising candidates for hydrogen isotope separation.
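The first screening step, ideal selectivity from Henry coefficients, reduces to a ratio. The Henry coefficients below are invented placeholders; the sketch mirrors the workflow of computing S = K(D₂)/K(H₂) per MOF and shortlisting those above 1.5 for the mixture simulations:

```python
# Hypothetical Henry coefficients (mol kg^-1 Pa^-1) for a few candidate MOFs:
# (K_D2, K_H2) per material.
mofs = {
    "MOF-A": (4.2e-6, 2.1e-6),
    "MOF-B": (3.0e-6, 2.5e-6),
    "MOF-C": (9.0e-6, 4.0e-6),
}

# Ideal D2/H2 selectivity in the Henry (infinite-dilution) regime.
selectivity = {name: kd / kh for name, (kd, kh) in mofs.items()}

# Shortlist candidates above the paper's 1.5 cutoff, ranked by selectivity.
shortlist = sorted((n for n, s in selectivity.items() if s > 1.5),
                   key=lambda n: -selectivity[n])
```

In the actual study this filter reduces 12020 structures to 1548 before the more expensive equimolar-mixture simulations.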
Lithium-ion batteries (LIBs) and lithium-sulfur (Li-S) batteries are two types of energy storage systems of significance in both scientific research and commercialization. Nevertheless, the rational design of electrode materials to overcome the bottlenecks of LIBs and Li-S batteries (such as low diffusion rates in LIBs and low sulfur utilization in Li-S batteries) remains the greatest challenge, and two-dimensional (2D) electrode materials provide a solution because of their unique structural and electrochemical properties. In this article, from the perspective of ab-initio simulations, we review the design of 2D electrode materials for LIBs and Li-S batteries. We first propose theoretical design principles for 2D electrodes, including stability, electronic properties, capacity, and ion diffusion descriptors. Next, classified examples of promising 2D electrodes designed by theoretical simulations are given, covering graphene, phosphorene, MXenes, transition metal sulfides, and so on. Finally, common challenges and a future perspective are provided. This review paves the way for the rational design of 2D electrode materials for LIB and Li-S battery applications and may provide a guide for future experiments.
Three recent breakthroughs due to AI in arts and science serve as motivation: an award-winning digital image, protein folding, and fast matrix multiplication. Many recent developments in artificial neural networks, particularly deep learning (DL), applied and relevant to computational mechanics (solids, fluids, finite-element technology) are reviewed in detail. Both hybrid and pure machine learning (ML) methods are discussed. Hybrid methods combine traditional PDE discretizations with ML methods either (1) to help model complex nonlinear constitutive relations, (2) to nonlinearly reduce the model order for efficient simulation (turbulence), or (3) to accelerate the simulation by predicting certain components in traditional integration methods. Here, methods (1) and (2) rely on the long short-term memory (LSTM) architecture, and method (3) on convolutional neural networks. Pure ML methods for solving (nonlinear) PDEs are represented by physics-informed neural network (PINN) methods, which can be combined with attention mechanisms to address discontinuous solutions. Both LSTM and attention architectures are extensively reviewed, together with modern and generalized classic optimizers that include stochasticity for DL networks. Kernel machines, including Gaussian processes, are covered in sufficient depth for more advanced work such as shallow networks with infinite width. Beyond addressing experts, readers are assumed to be familiar with computational mechanics but not with DL, whose concepts and applications are built up from the basics, aiming to bring first-time learners quickly to the forefront of research. The history and limitations of AI are recounted and discussed, with particular attention to pointing out misstatements or misconceptions of the classics, even in well-known references. Positioning and pointing control of a large-deformable beam is given as an example.
Background: Pan-genomics is a recently emerging strategy that can be utilized to provide a more comprehensive characterization of genetic variation. Joint calling is routinely used to combine identified variants across multiple related samples. However, the improvement of variant identification using mutual support information from multiple samples remains quite limited for population-scale genotyping.
Results: In this study, we developed a computational framework for jointly calling genetic variants from 5,061 sheep by incorporating the sequencing error and optimizing mutual support information from multiple samples' data. Variants were accurately identified from multiple samples in four steps: (1) probabilities of variants from two widely used algorithms, GATK and Freebayes, were calculated by a Poisson model incorporating the base sequencing error potential; (2) variants with high mapping quality, or consistently identified from at least two samples by both GATK and Freebayes, were used to construct a raw high-confidence identification (rHID) variant database; (3) high-confidence variants identified in a single sample were ordered by probability value and controlled by the false discovery rate (FDR) using the rHID database; (4) to avoid eliminating potentially true variants absent from the rHID database, variants that failed FDR were reexamined to rescue potentially true variants and ensure highly accurate variant identification. The results indicated that the percentage of concordant SNPs and indels from Freebayes and GATK after our new method improved significantly, by 12%-32%, compared with the raw variants, and the method advantageously found low-frequency variants in individual sheep involving several traits, including nipple number (GPC5), scrapie pathology (PAPSS2), seasonal reproduction and litter size (GRM1), coat color (RAB27A), and lentivirus susceptibility (TMEM154).
Conclusion: The new method uses a computational strategy to reduce the number of false positives and simultaneously improve the identification of genetic variants. This strategy incurs no extra cost from additional samples or sequencing data and advantageously identifies rare variants, which can be important for practical applications in animal breeding.
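Steps (1) and (3) can be sketched together: a Poisson tail probability models the chance that sequencing error alone explains the observed alternate-allele reads, and Benjamini-Hochberg controls the FDR over the resulting p-values. The sites, depths, and the 1% per-base error rate below are hypothetical; the real pipeline additionally uses the rHID database and a rescue step:

```python
import math

def poisson_sf(k, lam):
    """P(X >= k) for X ~ Poisson(lam): chance that sequencing error alone
    produces at least k alternate-allele reads."""
    term, cdf = math.exp(-lam), math.exp(-lam)
    for i in range(1, k):
        term *= lam / i
        cdf += term
    return max(0.0, 1.0 - cdf)

def benjamini_hochberg(pvals, fdr=0.05):
    """Indices of calls kept under the Benjamini-Hochberg FDR procedure."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    keep_upto = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= fdr * rank / m:
            keep_upto = rank
    return sorted(order[:keep_upto])

# Hypothetical sites: (alternate reads, total depth); per-read error rate ~1%.
sites = [(6, 30), (1, 28), (9, 40), (2, 35)]
err = 0.01
pvals = [poisson_sf(alt, depth * err) for alt, depth in sites]
kept = benjamini_hochberg(pvals, fdr=0.05)
```

Sites whose alternate-read counts are easily explained by error (the second and fourth here) fail the FDR threshold; in the paper's framework such calls would then go through the rescue reexamination rather than being discarded outright.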
The liquid-phase exfoliation (LPE) process for graphene production is usually carried out in a stirred tank reactor, and the interactions between the solvent and the graphite particles are important for improving production efficiency. In this paper, these interactions were revealed by the computational fluid dynamics-discrete element method (CFD-DEM). Based on the simulation results, both the liquid-phase flow hydrodynamics and the particle motion behavior were analyzed, giving a general picture of the multiphase flow behavior inside the stirred tank reactor for graphene production. By calculating the threshold at the beginning of the graphite exfoliation process, the shear force arising from the slip velocity was determined to be the active force. These results can support the optimization of the graphene production process.
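The "shear force from slip velocity" criterion can be caricatured with a Stokes drag estimate: the hydrodynamic force on a particle scales with solvent viscosity, particle size, and slip velocity, and exfoliation begins once it exceeds a threshold. All numbers below (viscosity, flake size, threshold force) are invented placeholders, and real flakes are plates rather than Stokes spheres:

```python
import math

def stokes_drag(mu, d, v_slip):
    """Stokes drag on a spherical particle: F = 3*pi*mu*d*v_slip (low Reynolds)."""
    return 3.0 * math.pi * mu * d * v_slip

mu = 1.7e-3        # Pa*s, hypothetical solvent viscosity
d = 10e-6          # m, hypothetical flake size
threshold = 1e-9   # N, assumed exfoliation threshold force

# Slip velocity at which the drag force just reaches the exfoliation threshold.
v_needed = threshold / (3.0 * math.pi * mu * d)
```

A CFD-DEM simulation supplies the slip-velocity field; comparing the resulting drag against such a threshold marks where in the reactor exfoliation can start.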
Computational fluid dynamics (CFD) provides a powerful tool for investigating complicated fluid flows. This paper studies the applicability of CFD to the preliminary design of linear and nonlinear fluid viscous dampers. Two fluid viscous dampers were designed based on CFD models. The first device was a linear viscous damper with straight orifices. The second was a nonlinear viscous damper containing a one-way pressure-responsive valve inside its orifices. Both dampers were detailed based on CFD simulations, and their internal fluid flows were investigated. Full-scale specimens of both dampers were manufactured and tested under dynamic loads. According to the test results, both dampers demonstrate stable cyclic behavior, and, as expected, the nonlinear damper generally tends to dissipate more energy than its linear counterpart. Good agreement was achieved between the experimentally measured damper force-velocity curves and those estimated from the CFD analyses. Using a thermography camera, the rise in temperature of the dampers was measured during the tests. It was found that the output force of the manufactured devices was virtually independent of temperature, even during long-duration loadings. Accordingly, temperature dependence can be ignored in CFD models, because a reliable temperature-compensator mechanism was used (or intended to be used) by the damper manufacturer.
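The force-velocity behavior of such dampers is commonly fitted with F = C·sign(v)·|v|^α (α = 1 for the linear device, α < 1 for the nonlinear one). The sketch below integrates the dissipated energy over one sinusoidal displacement cycle; C, α, amplitude, and frequency are hypothetical, and with these values (peak velocity below 1 m/s) the nonlinear damper dissipates more energy per cycle, matching the trend reported in the tests:

```python
import math

def damper_force(C, alpha, v):
    """Fluid viscous damper law: F = C * sign(v) * |v|^alpha."""
    return math.copysign(C * abs(v) ** alpha, v)

def energy_per_cycle(C, alpha, amp, omega, steps=20000):
    """Midpoint-rule integration of E = integral of F*v dt over one cycle
    of x(t) = amp * sin(omega * t)."""
    T = 2.0 * math.pi / omega
    dt = T / steps
    E = 0.0
    for i in range(steps):
        v = amp * omega * math.cos(omega * (i + 0.5) * dt)
        E += damper_force(C, alpha, v) * v * dt
    return E

E_lin = energy_per_cycle(C=100.0, alpha=1.0, amp=0.05, omega=2.0 * math.pi)
E_nl = energy_per_cycle(C=100.0, alpha=0.4, amp=0.05, omega=2.0 * math.pi)
```

For the linear case the numerical result can be checked against the closed form E = π·C·ω·A², which is a convenient sanity check on the integration.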
This paper presents a time-efficient numerical approach to modelling high explosive (HE) blastwave propagation using computational fluid dynamics (CFD). One of the main issues with conventional CFD modelling of high explosives is accurately defining the initial blastwave properties that arise from the ignition and consequent explosion. Specialised codes often employ the Jones-Wilkins-Lee (JWL) or a similar equation of state (EOS) to simulate blasts. However, most available CFD codes are limited in terms of EOS modelling; they are restricted to the Ideal Gas Law (IGL) for compressible flows, which is generally unsuitable for blast simulations. To this end, this paper presents a numerical approach to simulating blastwave propagation for any generic CFD code using the IGL EOS. A new method, the Input Cavity Method (ICM), is defined, in which the input conditions of the high explosive are given in the form of pressure, velocity, and temperature time-history curves. These time-history curves are input at a certain distance from the centre of the charge. It is shown that the ICM can accurately predict over-pressure and impulse time histories at measured locations for incident, reflective, and complex multiple-reflection scenarios with high numerical accuracy compared with experimental measurements. The ICM is compared with the Pressure Bubble Method (PBM), a common approach to replicating the initial conditions of a high explosive in finite volume modelling. It is shown that the ICM outperforms the PBM on multiple fronts, such as peak values and the overall overpressure curve shape. Finally, the paper also discusses the importance of choosing an appropriate solver between the pressure-based solver (PBS) and the density-based solver (DBS) and provides the advantages and disadvantages of either choice. In general, the PBS can resolve and capture the interactions of blastwaves at a higher resolution than the DBS, but at a much higher computational cost, so the DBS is preferred for quick turnarounds.
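A common idealization for the kind of pressure time-history curve the ICM takes as input is the modified Friedlander waveform. The sketch below generates the positive phase and integrates it to obtain the specific impulse; the peak overpressure, duration, and decay coefficient are hypothetical values, not calibrated to any charge:

```python
import math

def friedlander_overpressure(t, p_peak, t_d, b=1.0):
    """Modified Friedlander wave: p(t) = p_peak * (1 - t/t_d) * exp(-b*t/t_d)."""
    return p_peak * (1.0 - t / t_d) * math.exp(-b * t / t_d)

def positive_phase_impulse(p_peak, t_d, b=1.0, steps=100000):
    """Midpoint-rule integration of overpressure over the positive phase [0, t_d]."""
    dt = t_d / steps
    return sum(friedlander_overpressure((i + 0.5) * dt, p_peak, t_d, b) * dt
               for i in range(steps))

# Hypothetical peak overpressure 500 kPa, positive-phase duration 2 ms.
imp = positive_phase_impulse(p_peak=500e3, t_d=2e-3, b=1.0)
```

With b = 1 the impulse has the closed form p_peak·t_d·e⁻¹, which makes a convenient check on the numerical curve before feeding it to a solver.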
Background: The assessment of renal function is important for the prognosis of patients needing Fontan palliation because of the reconstructed, compromised circulation. Knowing the relationship between kidney perfusion and hemodynamic characteristics during surgical design could reduce the risk of acute kidney injury (AKI) and postoperative complications. However, the issue remains unsolved because current clinical evaluation methods cannot predict the hemodynamic changes in the renal artery (RA). Methods: We reconstructed a three-dimensional (3D) vascular model of a patient requiring Fontan palliation. Computational fluid dynamics (CFD) was utilized to explore the changes in RA hemodynamics under different possible blood flow rates, and the relationship between kidney perfusion and hemodynamic characteristics was investigated. Results: The calculated results indicated a declining tendency of the pressure and pressure drop as the flow rate decreased. When the flow rate decreased to two-thirds of its baseline, the pressures of both the left renal artery (LRA) and the right renal artery (RRA) dipped below 50% of baseline, with the RRA pressure falling faster than that of the LRA. An uneven distribution of wall shear stress (WSS) was observed on the trunk of the RA, with the lowest WSS found at the distal RA. The average WSS in the RA dropped to around 50% when the flow rate reached one-third of its baseline. Conclusions: As a promising approach, CFD can be utilized to quantitatively evaluate the hemodynamic characteristics of the RA and help offset the drawbacks of clinical assessments of renal function, contributing to a better prognosis for patients with Fontan palliation.
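A zeroth-order sanity check on the pressure-flow trend is the Hagen-Poiseuille relation, under which the pressure drop scales linearly with flow rate. Real renal-artery flow is pulsatile and three-dimensional, so this is only a back-of-the-envelope sketch with assumed vessel dimensions and a Newtonian blood viscosity:

```python
import math

def poiseuille_dp(mu, L, D, Q):
    """Hagen-Poiseuille pressure drop for steady laminar flow in a straight tube:
    dP = 128 * mu * L * Q / (pi * D^4)."""
    return 128.0 * mu * L * Q / (math.pi * D ** 4)

mu = 3.5e-3   # Pa*s, blood treated as Newtonian (assumption)
L = 0.04      # m, idealized renal-artery segment length (assumption)
D = 0.004     # m, idealized lumen diameter (assumption)
Q0 = 5e-6     # m^3/s, assumed baseline flow rate

dp_base = poiseuille_dp(mu, L, D, Q0)
dp_low = poiseuille_dp(mu, L, D, 2.0 * Q0 / 3.0)  # flow reduced to two-thirds
```

The linear decline of the pressure drop with reduced flow mirrors the direction of the CFD result, while the patient-specific 3D model captures the asymmetry between the two renal arteries that this formula cannot.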
Imaging through fluctuating scattering media such as fog is challenging because the medium seriously degrades image quality. We investigate how the image quality of computational ghost imaging is reduced by fluctuating fog and how to obtain a high-quality defogged ghost image. We show theoretically and experimentally that the photon-number fluctuations introduced by fluctuating fog are the reason for ghost-image degradation. An algorithm is proposed to process the signals collected by the computational ghost imaging device so as to eliminate the photon-number fluctuations of different measurement events. Thus, a high-quality defogged ghost image is reconstructed even though fog is evenly distributed along the optical path. A nearly 100% defogged ghost image is obtained by further using a cycle generative adversarial network to process the reconstructed defogged image.
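The fluctuation-elimination idea can be sketched with a toy 1-D computational ghost imaging experiment: each bucket signal is multiplied by a random fog factor, and rescaling each measurement event by that factor (here assumed known, e.g. from a monitor detector, which is a simplification of the paper's algorithm) restores the fluctuation-free buckets before the usual correlation reconstruction:

```python
import random

random.seed(7)
obj = [0, 2, 5, 1, 3, 0, 4, 2]  # toy 1-D object
n_meas = 20000
patterns = [[random.randint(0, 1) for _ in range(len(obj))] for _ in range(n_meas)]

# Fluctuating fog multiplies each bucket measurement by a random factor.
fog = [random.uniform(0.2, 1.8) for _ in range(n_meas)]
clean = [sum(p * x for p, x in zip(pat, obj)) for pat in patterns]
foggy = [f * b for f, b in zip(fog, clean)]
defogged = [b / f for b, f in zip(foggy, fog)]  # per-event rescaling

def correlate(patterns, buckets):
    """Conventional GI estimate: G_j = <P_ij * b_i> - <P_ij> * <b_i>."""
    n = len(buckets)
    mb = sum(buckets) / n
    G = []
    for j in range(len(patterns[0])):
        mp = sum(p[j] for p in patterns) / n
        G.append(sum(p[j] * b for p, b in zip(patterns, buckets)) / n - mp * mb)
    return G

G_defog = correlate(patterns, defogged)
```

After rescaling, the correlation reconstruction recovers the object's structure (the brightest pixel reappears at the right position) even though every raw measurement was corrupted by a different fog factor.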
Computational linguistics is an interdisciplinary field concerned with the computational modelling of natural language and with appropriate computational methods for linguistic questions. The number of social media users has been increasing over the last few years, which has drawn researchers' interest in scrutinizing the new kind of creative language used on the Internet to better explore communication and human opinions. Irony and sarcasm detection is a complex task in Natural Language Processing (NLP). Irony detection has applications in advertising, sentiment analysis (SA), and opinion mining. Over the last few years, irony-aware SA has gained significant computational treatment owing to the prevalence of irony in web content. Therefore, this study develops a Computational Linguistics with Optimal Deep Belief Network based Irony Detection and Classification (CLODBN-IRC) model for social media. The presented CLODBN-IRC model mainly focuses on the identification and classification of irony in social media. To attain this, the model performs several stages of pre-processing and TF-IDF feature extraction. For irony detection and classification, the DBN model is exploited. Finally, the hyperparameters of the DBN model are optimally tuned by an improved artificial bee colony optimization (IABC) algorithm. The presented CLODBN-IRC method is validated on a benchmark dataset, and the simulation outcomes highlight its superiority over other approaches.
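As a minimal illustration of the feature-extraction stage described above, the following sketch computes plain TF-IDF weights over toy token lists (the documents and the exact weighting variant are invented for illustration; the CLODBN-IRC pipeline presumably uses a fuller preprocessing stack):

```python
import math

# Toy tokenized posts; contents are invented for illustration only.
docs = [["this", "game", "is", "so", "fun"],
        ["fun", "fun", "totally", "fun"],
        ["the", "game", "crashed", "again"]]

def tf_idf(docs):
    """Plain TF-IDF: tf = count/len(doc), idf = log(N / df)."""
    n = len(docs)
    df = {}
    for d in docs:
        for w in set(d):
            df[w] = df.get(w, 0) + 1
    return [{w: (d.count(w) / len(d)) * math.log(n / df[w]) for w in set(d)}
            for d in docs]

features = tf_idf(docs)
```

Terms appearing in every document get zero weight; rare terms are up-weighted, which is what makes the representation useful to a downstream classifier such as a DBN.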
Funding: the National Key Research and Development Program of China (2021YFF0900800); the National Natural Science Foundation of China (61972276, 62206116, 62032016); the New Liberal Arts Reform and Practice Project of the National Ministry of Education (2021170002); the Open Research Fund of the State Key Laboratory for Management and Control of Complex Systems (20210101); the Tianjin University Talent Innovation Reward Program for Literature and Science Graduate Students (C1-2022-010).
Abstract: Powered by advanced information technology, more and more complex systems are exhibiting characteristics of cyber-physical-social systems (CPSS). In this context, the computational experiments method has emerged as a novel approach for the design, analysis, management, control, and integration of CPSS, which can realize the causal analysis of complex systems by means of the "algorithmization" of "counterfactuals". However, because CPSS involve human and social factors (e.g., autonomy, initiative, and sociality), it is difficult for traditional design of experiments (DOE) methods to achieve a generative explanation of system emergence. To address this challenge, this paper proposes an integrated approach to the design of computational experiments, incorporating three key modules: 1) Descriptive module: determining the influencing factors and response variables of the system by modeling an artificial society; 2) Interpretative module: selecting a factorial experimental design to identify the relationship between influencing factors and macro phenomena; 3) Predictive module: building a meta-model that is equivalent to the artificial society to explore its operating laws. Finally, a case study of crowd-sourcing platforms is presented to illustrate the application process and effectiveness of the proposed approach, which can reveal the social impact of algorithmic behavior on the "rider race".
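The three modules can be sketched as a 2^k full factorial design over the artificial society followed by a first-order meta-model fit. Everything below is a stand-in: the factor names, levels, and the simulate() function are invented, not the paper's crowd-sourcing model.

```python
import itertools
import numpy as np

# Hypothetical influencing factors (low, high) for an artificial-society run.
factors = {"order_density": (0.5, 1.5),
           "rider_count": (100, 300),
           "pay_rate": (3.0, 6.0)}

# Interpretative module: 2^k full factorial design, one run per corner.
design = list(itertools.product(*factors.values()))

def simulate(order_density, rider_count, pay_rate):
    """Stand-in for the artificial-society model; returns a response
    variable (e.g., mean delivery delay). Purely synthetic."""
    return 10.0 / pay_rate + order_density * 200.0 / rider_count

# Predictive module: fit a first-order meta-model y ~ b0 + b·x.
X = np.array(design)
y = np.array([simulate(*row) for row in design])
A = np.hstack([np.ones((len(X), 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
```

The fitted coefficients give a cheap surrogate of the artificial society that can be interrogated in place of re-running the full agent-based model.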
Funding: The authors are grateful for financial support from the National Key Projects for Fundamental Research and Development of China (2021YFA1500803); the National Natural Science Foundation of China (51825205, 52120105002, 22102202, 22088102, U22A20391); the DNL Cooperation Fund, CAS (DNL202016); and the CAS Project for Young Scientists in Basic Research (YSBR-004).
Abstract: Photocatalysis, a critical strategy for harvesting sunlight to address energy demand and environmental concerns, is underpinned by the discovery of high-performance photocatalysts; how to design photocatalysts is therefore generating widespread interest as a route to boosting the conversion efficiency of solar energy. In the past decade, computational technologies and theoretical simulations have led to a major leap in the development of high-throughput computational screening strategies for novel high-efficiency photocatalysts. In this viewpoint, we start by introducing the challenges of photocatalysis from the view of experimental practice, especially the inefficiency of the traditional "trial and error" method. Subsequently, a cross-sectional comparison between experimental and high-throughput computational screening for photocatalysis is presented and discussed in detail. On the basis of the current experimental progress in photocatalysis, we also exemplify the various challenges associated with high-throughput computational screening strategies. Finally, we offer a preferred high-throughput computational screening procedure for photocatalysts from an experimental practice perspective (model construction and screening, standardized experiments, assessment and revision), with the aim of better correlating high-throughput simulations with experimental practices and motivating the search for better descriptors.
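The screening step of such a procedure is, at its core, a descriptor-based filter over a candidate pool. The sketch below uses invented materials and the common water-splitting criteria (a visible-light band gap and band edges straddling the H+/H2 and O2/H2O potentials on a V-vs-NHE scale); it is an assumption-laden toy, not the viewpoint's actual workflow.

```python
# Toy candidate pool; property values (eV / V vs NHE) are invented.
candidates = [
    {"name": "A", "band_gap": 2.4, "cbm": -0.5, "vbm": 2.1},
    {"name": "B", "band_gap": 1.1, "cbm": 0.2, "vbm": 1.3},
    {"name": "C", "band_gap": 2.0, "cbm": -0.3, "vbm": 1.8},
]

def passes_screen(m, gap_range=(1.23, 3.0), h2_level=0.0, o2_level=1.23):
    """Keep materials whose gap can drive water splitting under visible
    light and whose band edges straddle the water redox potentials
    (CBM above H+/H2, VBM below O2/H2O on this sign convention)."""
    lo, hi = gap_range
    return lo <= m["band_gap"] <= hi and m["cbm"] < h2_level and m["vbm"] > o2_level

shortlist = [m["name"] for m in candidates if passes_screen(m)]
```

Candidates surviving the cheap descriptor filter would then move on to costlier simulations and, per the viewpoint's proposal, standardized experiments.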
Funding: financially supported by the National Natural Science Foundation of China (U21A20313, 22222807).
Abstract: For living anionic polymerization (LAP), the solvent has a great influence on both reaction mechanism and kinetics. In this work, using the classical butyl lithium-styrene polymerization as a model system, the effect of solvent on the mechanism and kinetics of LAP was revealed through a strategy combining density functional theory (DFT) calculations and kinetic modeling. In terms of mechanism, detailed energy decomposition analysis of the electrostatic interactions between initiator and solvent molecules shows that the stronger the solvent polarity, the more electrons transfer from initiator to solvent. Furthermore, we also found that the stronger the solvent polarity, the higher the monomer initiation energy barrier and the smaller the initiation rate coefficient. Counterintuitively, initiation is more favorable at lower temperatures based on the calculated values of ΔG_(TS). Finally, the kinetic characteristics in different solvents were further examined by kinetic modeling. In benzene and n-pentane, the polymerization rate exhibits first-order kinetics, while slow initiation and fast propagation were observed in tetrahydrofuran (THF) due to the slow free-ion formation rate, leading to a deviation from first-order kinetics.
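The first-order behavior reported for benzene and n-pentane can be checked with a few lines: under fast initiation, d[M]/dt = -kp[P*][M] with [P*] constant, so ln([M]0/[M]) is linear in time. The rate constants below are illustrative placeholders, not the paper's DFT-derived values.

```python
import numpy as np

kp = 1.0e2       # propagation rate coefficient, L mol^-1 s^-1 (invented)
P_star = 1.0e-3  # living-chain concentration, mol L^-1 (constant after fast initiation)
M0 = 1.0         # initial monomer concentration, mol L^-1

t = np.linspace(0.0, 30.0, 301)
# d[M]/dt = -kp [P*] [M] integrates to an exponential decay:
M = M0 * np.exp(-kp * P_star * t)

# First-order check: ln(M0/M) vs t should be a line of slope kp[P*].
slope = np.polyfit(t, np.log(M0 / M), 1)[0]
```

A THF-like run with slow initiation would make [P*] time-dependent and bend this semilog plot, which is exactly the deviation the kinetic modeling observes.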
Funding: the financial support for this work provided by the National Key R&D Program of China "Technologies and Integrated Application of Magnesite Waste Utilization for High-Valued Chemicals and Materials" (2020YFC1909303).
Abstract: This study developed a numerical model to efficiently treat solid waste magnesium nitrate hydrate through multi-step chemical reactions. The model simulates two-phase flow, heat, and mass transfer processes in a pyrolysis furnace to improve the decomposition rate of magnesium nitrate. The performance of multi-nozzle and single-nozzle injection methods was evaluated, and the effects of primary and secondary nozzle flow ratios, velocity ratios, and secondary nozzle inclination angles on the decomposition rate were investigated. Results indicate that multi-nozzle injection has a higher conversion efficiency and decomposition rate than single-nozzle injection, with a 10.3% higher conversion rate under the design parameters. The decomposition rate depends primarily on the average residence time of the particles, which can be increased by decreasing the flow and velocity ratios and increasing the inclination angle of the secondary nozzles. The optimal parameters are an injection flow ratio of 40%, an injection velocity ratio of 0.6, and a secondary nozzle inclination of 30°, corresponding to a maximum decomposition rate of 99.33%.
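The residence-time dependence has a simple first-order reading: for a rate constant k, conversion follows X = 1 - exp(-k t_res), so anything that lengthens the particles' stay in the furnace raises the decomposition rate. The constants below are invented; this is a back-of-envelope sketch, not the study's CFD model.

```python
import math

def conversion(k, residence_time):
    """First-order conversion X = 1 - exp(-k*t): a longer particle
    residence time gives a higher decomposition rate."""
    return 1.0 - math.exp(-k * residence_time)

k = 0.5  # s^-1, invented effective decomposition rate constant
base = conversion(k, residence_time=4.0)
slower_injection = conversion(k, residence_time=8.0)  # e.g., lower velocity ratio
```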
Abstract: Recent industrial explosions globally have intensified the focus in mechanical engineering on designing infrastructure systems and networks capable of withstanding blast loading. Initially centered on high-profile facilities such as embassies and petrochemical plants, this concern now extends to a wider array of infrastructures and facilities. Engineers and scholars increasingly prioritize structural safety against explosions, particularly to prevent disproportionate collapse and damage to nearby structures. Urbanization has further amplified the reliance on oil and gas pipelines, making them vital for urban life and prime targets for terrorist activities. Consequently, there is a growing imperative for computational engineering solutions to tackle blast loading on pipelines and mitigate associated risks to avert disasters. In this study, an empty pipe model was successfully validated under contact blast conditions using Abaqus software, a powerful tool in mechanical engineering for simulating blast effects on buried pipelines. Employing a Eulerian-Lagrangian computational fluid dynamics approach, the investigation extended to above-surface and below-surface blasts at standoff distances of 25 and 50 mm. Material descriptions in the numerical model relied on Abaqus' default mechanical models. Comparative analysis revealed varying pipe performance, with deformation decreasing as the explosion-to-pipe distance increased. The explosion's location relative to the pipe surface notably influenced deformation levels, a key finding highlighted in the study. Moreover, quantitative findings indicated varying ratios of plastic dissipation energy (PDE) for different blast scenarios compared to the contact blast (P0). Specifically, P1 (25 mm subsurface blast) and P2 (50 mm subsurface blast) showed approximately 24.07% and 14.77% of P0's PDE, respectively, while P3 (25 mm above-surface blast) and P4 (50 mm above-surface blast) exhibited lower PDE values, accounting for about 18.08% and 9.67% of P0's PDE, respectively. Utilising energy-absorbing materials such as thin coatings of ultra-high-strength concrete, metallic foams, carbon fiber-reinforced polymer wraps, and others on the pipeline to effectively mitigate blast damage is recommended. This research contributes to the advancement of mechanical engineering by providing insights and solutions crucial for enhancing the resilience and safety of underground pipelines in the face of blast events.
Funding: the National Natural Science Foundation of China (62271192); Henan Provincial Scientists Studio (GZS2022015); Central Plains Talents Plan (ZYYCYU202012173); National Key R&D Program of China (2020YFB2008400); the Program of CEMEE (2022Z00202B); LAGEO of the Chinese Academy of Sciences (LAGEO-2019-2); Program for Science & Technology Innovation Talents in the University of Henan Province (20HASTIT022); Natural Science Foundation of Henan under Grant 202300410126; Program for Innovative Research Team in University of Henan Province (21IRTSTHN015); Equipment Pre-Research Joint Research Program of Ministry of Education (8091B032129); Training Program for Young Scholar of Henan Province for Colleges and Universities (2020GGJS172); Program for Science & Technology Innovation Talents in Universities of Henan Province under Grant 22HASTIT020; Henan Province Science Fund for Distinguished Young Scholars (222300420006).
Abstract: The utilization of mobile edge computing (MEC) for unmanned aerial vehicle (UAV) communication presents a viable solution for achieving high-reliability, low-latency communication. This study explores the potential of employing intelligent reflective surfaces (IRS) and UAVs as relay nodes to efficiently offload user computing tasks to the MEC server in the system model. Specifically, the user node accesses the primary user spectrum while adhering to the constraint of satisfying the primary user peak interference power. Furthermore, the UAV acquires energy without interrupting the primary user's regular communication by employing two energy harvesting schemes, namely time switching (TS) and power splitting (PS). The optimal UAV is selected by maximizing the instantaneous signal-to-noise ratio. Subsequently, the analytical expression for the outage probability of the system over Rayleigh channels is derived and analyzed. The study investigates, through simulation, the impact of various system parameters, including the number of UAVs, the peak interference power, and the TS and PS factors, on the system's outage performance. The proposed system is also compared to two conventional benchmark schemes: optimal UAV link transmission and IRS link transmission. The simulation results validate the theoretical derivation and demonstrate the superiority of the proposed scheme over the benchmark schemes.
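The best-UAV selection and Rayleigh-channel outage analysis can be mimicked with a stripped-down Monte Carlo sketch: Rayleigh fading gives exponentially distributed instantaneous SNR, the largest of n candidates is selected, and an outage occurs when even the best falls below threshold. This toy ignores the IRS, interference constraint, and energy harvesting; all parameter values are invented.

```python
import random

def outage_probability(n_uav, snr_th, mean_snr, trials=20000, seed=1):
    """Monte Carlo outage estimate with best-of-n UAV relay selection
    under Rayleigh fading (exponential instantaneous SNR)."""
    rng = random.Random(seed)
    outages = 0
    for _ in range(trials):
        best = max(rng.expovariate(1.0 / mean_snr) for _ in range(n_uav))
        if best < snr_th:
            outages += 1
    return outages / trials

p1 = outage_probability(n_uav=1, snr_th=1.0, mean_snr=2.0)
p4 = outage_probability(n_uav=4, snr_th=1.0, mean_snr=2.0)
```

Analytically the outage is (1 - exp(-snr_th/mean_snr))^n, so adding UAVs drives it down geometrically, which is the diversity gain the paper's simulations exhibit.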
Funding: This study is partially supported by the National Natural Science Foundation of China (NSFC) (62005120, 62125504).
Abstract: An extreme ultraviolet solar corona multispectral imager allows direct observation of high-temperature coronal plasma, which is related to solar flares, coronal mass ejections, and other significant coronal activities. This manuscript proposes a novel end-to-end computational design method for an extreme ultraviolet (EUV) solar corona multispectral imager operating at wavelengths near 100 nm, including a stray light suppression design and computational image recovery. To suppress the strong stray light from the solar disk, an outer opto-mechanical structure is designed to protect the imaging component of the system. Considering the low reflectivity (less than 70%) and strong scattering (roughness) of existing extreme ultraviolet optical elements, the imaging component comprises only a primary mirror and a curved grating. A Lyot aperture is used to further suppress any residual stray light. Finally, a deep learning computational imaging method is used to recover the individual multi-wavelength images from the original recorded multi-slit data. The design achieves a far-field angular resolution below 7″ and a spectral resolution below 0.05 nm. The field of view is ±3 R_(☉) along the multi-slit moving direction, where R_(☉) represents the radius of the solar disk. The ratio of the corona's stray light intensity to the solar center's irradiation intensity is less than 10^(-6) at the circle of 1.3 R_(☉).
Funding: 2024 Key Project of Teaching Reform Research and Practice in Higher Education in Henan Province "Exploration and Practice of a Training Model for Outstanding Students in the Basic Mechanics Discipline" (2024SJGLX094); Henan Province "Mechanics+X" Basic Discipline Outstanding Student Training Base; 2024 Research and Practice Project of Higher Education Teaching Reform in Henan University of Science and Technology "Optimization and Practice of an Ability-Oriented Teaching Mode for the Computational Mechanics Course: A New Exploration in Cultivating Practical Simulation Engineers" (2024BK074).
Abstract: Taking the assessment and evaluation of a computational mechanics course as its background, this paper constructs a diversified, student-centered course evaluation system that integrates both quantitative and qualitative evaluation methods. The system not only pays attention to students' practical operation and mastery of theoretical knowledge but also puts special emphasis on cultivating students' innovative abilities. To realize a comprehensive and objective evaluation, an assessment method combining TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) multi-attribute decision analysis with entropy weight theory is adopted, and its validity and practicability are verified through an example analysis. This method can not only comprehensively and objectively evaluate students' learning outcomes but also provide a scientific decision-making basis for curriculum teaching reform. The implementation of this diversified course evaluation system can better reflect the comprehensive ability of students and promote the continuous improvement of teaching quality.
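The entropy-weight TOPSIS combination is mechanical enough to sketch end to end: entropy measures how informative each criterion is (low entropy, high weight), and TOPSIS ranks alternatives by closeness to the ideal solution. The student scores below are toy data, and all criteria are assumed benefit-type; the paper's actual criteria set is not reproduced here.

```python
import numpy as np

def entropy_topsis(X):
    """Entropy-weight TOPSIS for a matrix X of (alternatives x criteria),
    all benefit-type. Returns closeness scores (higher = better)."""
    # Column proportions for the entropy calculation.
    P = X / X.sum(axis=0)
    n = X.shape[0]
    E = -(P * np.log(P)).sum(axis=0) / np.log(n)   # entropy per criterion
    w = (1 - E) / (1 - E).sum()                    # entropy weights
    # Weighted vector normalization, then distances to ideal solutions.
    V = w * X / np.sqrt((X ** 2).sum(axis=0))
    d_best = np.sqrt(((V - V.max(axis=0)) ** 2).sum(axis=1))
    d_worst = np.sqrt(((V - V.min(axis=0)) ** 2).sum(axis=1))
    return d_worst / (d_best + d_worst)

# Three students scored on practice, theory, innovation (toy data).
scores = np.array([[85, 90, 70], [70, 80, 95], [90, 85, 60]], float)
closeness = entropy_topsis(scores)
ranking = np.argsort(-closeness)  # best first
```

Because the weights come from the data rather than expert judgment, the ranking is reproducible, which is the objectivity argument the paper makes for this pairing.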
Funding: This work is supported by the National Natural Science Foundation of China (Nos. U23B20151 and 52171253).
Abstract: Owing to the complex lithology of unconventional reservoirs, field interpreters usually need logging simulation models to provide a basis for interpretation. Among the various detection tools that use nuclear sources, the detector response can reflect various types of information about the medium. The Monte Carlo method is one of the primary methods used to obtain nuclear detection responses in complex environments. However, it requires a computational process with extensive random sampling, consumes considerable resources, and does not provide real-time response results. Therefore, a novel fast forward computational method (FFCM) for nuclear measurement is proposed that uses volumetric detection constraints to rapidly calculate the detector response in various complex environments. First, the data library required for the FFCM is built by collecting the detection volume, detector counts, and flux sensitivity functions through Monte Carlo simulation. Then, based on perturbation theory and the Rytov approximation, a model for the detector response is derived using the flux sensitivity function method and a one-group diffusion model. The environmental perturbation is constrained to optimize the model according to the tool structure and the impact of the formation and borehole within the effective detection volume. Finally, the method is applied to a neutron porosity tool for verification. In various complex simulation environments, the maximum relative error between the porosity results calculated by Monte Carlo and FFCM was 6.80%, with a root-mean-square error of 0.62 p.u. In field well applications, the formation porosity model obtained using FFCM was in good agreement with the model obtained by interpreters, which demonstrates the validity and accuracy of the proposed method.
Funding: supported by the Youth Science Foundation of Sichuan Province (Nos. 22NSFSC3816 and 2022NSFSC1231); the General Project of the National Natural Science Foundation of China (Nos. 12075039 and 41874121); the Key Project of the National Natural Science Foundation of China (No. U19A2086).
Abstract: Owing to the constraints on the fabrication of γ-ray coding plates with many pixels, few studies have been carried out on γ-ray computational ghost imaging. The development of coding plates with fewer pixels is therefore essential to achieving γ-ray computational ghost imaging. Based on the regional similarity between Hadamard sub-coding plates, this study presents an optimization method to reduce the number of pixels of Hadamard coding plates. First, a moving-distance matrix was obtained to describe the regional similarity quantitatively. Second, based on this matrix, we used two ant colony optimization arrangement algorithms to maximize the reuse of pixels in the regional similarity area and obtain new compressed coding plates. With full sampling, these two algorithms improved the pixel utilization of the coding plate, with compression ratios of 54.2% and 58.9%, respectively. In addition, three undersampled sequences (the Haar, Russian dolls, and cake-cutting sequences) with different sampling rates were tested and discussed. At different sampling rates, our method reduced the number of pixels of all three sequences, especially for the Russian dolls and cake-cutting sequences. Therefore, our method can reduce the number of pixels, the manufacturing cost, and the difficulty of fabricating the coding plate, which is beneficial for the implementation and application of γ-ray computational ghost imaging.
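For readers unfamiliar with the starting point being compressed, the Hadamard patterns themselves are easy to generate with the Sylvester construction, and their row orthogonality (H Hᵀ = nI) is what makes full-sampling Hadamard ghost imaging exactly invertible. This sketch shows only the uncompressed baseline; the paper's moving-distance matrix and ant colony compression are not reproduced here.

```python
import numpy as np

def hadamard_sylvester(n):
    """Sylvester construction; n must be a power of two. Each row,
    reshaped, is one +/-1 mask of a Hadamard coding plate."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

H = hadamard_sylvester(8)
# Mutual orthogonality of the masks: H @ H.T == n * I.
gram = H @ H.T
```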
Funding: supported by the National Natural Science Foundation of China (22078004); the Research Development Fund from Xi'an Jiaotong-Liverpool University (RDF-16-02-03 and RDF15-01-23); key program special fund (KSF-E-03).
Abstract: Deuterium (D_(2)) is one of the important fuel sources that power nuclear fusion reactors. The existing D_(2)/H_(2) separation technologies that obtain high-purity D_(2) are cost-intensive. Recent research has shown that metal-organic frameworks (MOFs) have good potential for D_(2)/H_(2) separation applications. In this work, a high-throughput computational screening of 12020 computation-ready experimental MOFs is carried out to determine the best MOFs for hydrogen isotope separation, and the detailed structure-performance correlation is systematically investigated with the aid of machine learning. The results indicate that the ideal D_(2)/H_(2) adsorption selectivity calculated from the Henry coefficients is strongly correlated with the 1/ΔAD feature descriptor, that is, the inverse of the adsorbability difference of the two adsorbates. Meanwhile, the machine learning (ML) results show that the prediction accuracy of all four ML methods is significantly improved after the addition of this feature descriptor. In addition, the ML results based on the extreme gradient boosting model also reveal that the 1/ΔAD descriptor has the highest relative importance compared with other commonly used descriptors. To further explore hydrogen isotope separation in binary mixtures, 1548 MOFs with ideal adsorption selectivity greater than 1.5 are simulated under equimolar conditions. The structure-performance relationship shows that MOFs with high adsorption selectivity generally have smaller pore sizes (0.3-0.5 nm) and lower surface areas. Among the top 200 performers, the materials mainly have the sql, pcu, cds, hxl, and ins topologies. Finally, three MOFs with high D_(2)/H_(2) selectivity and good D_(2) uptake, all of which have one-dimensional channel pores, are identified as the best candidates. The findings obtained in this work may be helpful for the identification of potentially promising candidates for hydrogen isotope separation.
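In the Henry (infinite-dilution) regime the ideal selectivity used for the initial ranking is just a ratio of Henry coefficients, so the screening core reduces to a one-liner per material. The coefficients below are invented placeholders, not values from the 12020-MOF database.

```python
# Henry coefficients (mol kg^-1 Pa^-1) for D2 and H2 in two
# hypothetical MOFs; the numbers are invented for illustration.
mofs = {
    "MOF-A": {"K_D2": 4.0e-6, "K_H2": 2.5e-6},
    "MOF-B": {"K_D2": 1.2e-6, "K_H2": 1.0e-6},
}

def ideal_selectivity(m):
    """Ideal D2/H2 selectivity at infinite dilution: the ratio of the
    two Henry coefficients."""
    return m["K_D2"] / m["K_H2"]

best = max(mofs, key=lambda name: ideal_selectivity(mofs[name]))
```

Materials clearing a selectivity cutoff (1.5 in the paper) would then be promoted to the costlier equimolar mixture simulations.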
Funding: supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (PolyU152178/20 E); the Hong Kong Polytechnic University (1-W19S); Science and Technology Program of Guangdong Province of China (2020A0505090001).
Abstract: Lithium-ion batteries (LIBs) and lithium-sulfur (Li–S) batteries are two types of energy storage systems with significance in both scientific research and commercialization. Nevertheless, the rational design of electrode materials to overcome the bottlenecks of LIBs and Li–S batteries (such as low diffusion rates in LIBs and low sulfur utilization in Li–S batteries) remains the greatest challenge, while two-dimensional (2D) electrode materials provide a solution because of their unique structural and electrochemical properties. In this article, from the perspective of ab-initio simulations, we review the design of 2D electrode materials for LIBs and Li–S batteries. We first propose theoretical design principles for 2D electrodes, including stability, electronic properties, capacity, and ion diffusion descriptors. Next, classified examples of promising 2D electrodes designed by theoretical simulations are given, covering graphene, phosphorene, MXenes, transition metal sulfides, and so on. Finally, common challenges and a future perspective are provided. This review paves the way for the rational design of 2D electrode materials for LIB and Li–S battery applications and may provide a guide for future experiments.
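Of the descriptors listed, capacity is the most directly computable: Faraday's law gives the theoretical gravimetric capacity C = nF/(3.6 M) in mAh g^-1 for n transferred electrons per formula unit of molar mass M. The graphite check below reproduces the textbook LiC6 value of about 372 mAh g^-1.

```python
F = 96485.0  # Faraday constant, C mol^-1

def capacity_mAh_per_g(n_electrons, molar_mass):
    """Theoretical gravimetric capacity from Faraday's law:
    C = n*F / (3.6 * M), with M in g mol^-1."""
    return n_electrons * F / (3.6 * molar_mass)

# Sanity check against graphite (LiC6): 1 Li per 6 carbons of host.
graphite = capacity_mAh_per_g(1, 6 * 12.011)
```

The same formula applied to a candidate 2D host gives the ceiling against which ab-initio intercalation studies are judged.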
Abstract: Three recent breakthroughs due to AI in arts and science serve as motivation: an award-winning digital image, protein folding, and fast matrix multiplication. Many recent developments in artificial neural networks, particularly deep learning (DL), applied and relevant to computational mechanics (solids, fluids, finite-element technology) are reviewed in detail. Both hybrid and pure machine learning (ML) methods are discussed. Hybrid methods combine traditional PDE discretizations with ML methods either (1) to help model complex nonlinear constitutive relations, (2) to nonlinearly reduce the model order for efficient simulation (turbulence), or (3) to accelerate the simulation by predicting certain components in the traditional integration methods. Here, methods (1) and (2) rely on the Long Short-Term Memory (LSTM) architecture, with method (3) relying on convolutional neural networks. Pure ML methods to solve (nonlinear) PDEs are represented by Physics-Informed Neural Network (PINN) methods, which can be combined with attention mechanisms to address discontinuous solutions. Both LSTM and attention architectures are extensively reviewed, together with modern and generalized classic optimizers that include stochasticity for DL networks. Kernel machines, including Gaussian processes, are covered in sufficient depth for more advanced works such as shallow networks with infinite width. The review does not only address experts: readers are assumed to be familiar with computational mechanics, but not with DL, whose concepts and applications are built up from the basics, aiming to bring first-time learners quickly to the forefront of research. The history and limitations of AI are recounted and discussed, with particular attention to pointing out misstatements or misconceptions of the classics, even in well-known references. Positioning and pointing control of a large-deformable beam is given as an example.
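In the same build-from-basics spirit as the kernel-machine coverage mentioned above, a Gaussian-process regression posterior mean fits in a few lines of NumPy. The "displacement field" u(x) = sin(x), the RBF length scale, and the noise level are all invented for illustration.

```python
import numpy as np

def rbf(a, b, ell=1.0):
    """Squared-exponential (RBF) kernel between 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

# Noisy observations of a toy displacement field u(x) = sin(x).
rng = np.random.default_rng(0)
X = np.linspace(0, 2 * np.pi, 10)
y = np.sin(X) + 0.01 * rng.standard_normal(10)

# GP posterior mean at test points: k(X*, X) (K + jitter I)^-1 y.
Xs = np.linspace(0, 2 * np.pi, 50)
K = rbf(X, X) + 1e-4 * np.eye(10)
mean = rbf(Xs, X) @ np.linalg.solve(K, y)
```

Ten noisy samples already pin down the field to within a few percent, which is why GPs are the natural bridge to the infinite-width shallow networks discussed in the review.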
Funding: Superior Farms sheep producers; IBEST for their support; financial support from the Idaho Global Entrepreneurial Mission.
Abstract: Background: Pan-genomics is a recently emerging strategy that can be utilized to provide a more comprehensive characterization of genetic variation. Joint calling is routinely used to combine identified variants across multiple related samples. However, the improvement of variant identification using the mutual support information from multiple samples remains quite limited for population-scale genotyping. Results: In this study, we developed a computational framework for jointly calling genetic variants from 5,061 sheep by incorporating the sequencing error and optimizing mutual support information from multiple samples' data. The variants were accurately identified from multiple samples using four steps: (1) probabilities of variants from two widely used algorithms, GATK and Freebayes, were calculated by a Poisson model incorporating the base sequencing error potential; (2) the variants with high mapping quality or consistently identified from at least two samples by GATK and Freebayes were used to construct the raw high-confidence identification (rHID) variant database; (3) the high-confidence variants identified in a single sample were ordered by probability value and controlled by false discovery rate (FDR) using the rHID database; (4) to avoid the elimination of potentially true variants from the rHID database, the variants that failed FDR were reexamined to rescue potentially true variants and ensure highly accurate variant identification. The results indicated that the percentage of concordant SNPs and indels from Freebayes and GATK after our new method was significantly improved by 12%-32% compared with the raw variants, and the method advantageously found low-frequency variants of individual sheep involved in several traits, including nipple number (GPC5), scrapie pathology (PAPSS2), seasonal reproduction and litter size (GRM1), coat color (RAB27A), and lentivirus susceptibility (TMEM154). Conclusion: The new method uses this computational strategy to reduce the number of false positives and simultaneously improve the identification of genetic variants. The strategy did not incur any extra cost by using additional samples or sequencing data and advantageously identified rare variants, which can be important for practical applications of animal breeding.
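Steps (1) and (3) can be sketched together: a Poisson model asks how likely the observed alternate-allele reads are under sequencing error alone, and Benjamini-Hochberg controls the FDR over the resulting p-values. The error rate, read counts, and FDR threshold below are invented stand-ins, not the framework's fitted parameters.

```python
import math

def p_value_error_only(alt_reads, depth, err=0.01):
    """P(>= alt_reads error reads | Poisson errors, mean depth*err);
    small values support a real variant rather than sequencing error."""
    lam = depth * err
    cdf = sum(math.exp(-lam) * lam ** k / math.factorial(k)
              for k in range(alt_reads))
    return 1.0 - cdf

def benjamini_hochberg(pvals, fdr=0.05):
    """Return indices of calls kept at the given FDR level."""
    order = sorted(range(len(pvals)), key=lambda i: pvals[i])
    m, cutoff = len(pvals), 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= fdr * rank / m:
            cutoff = rank
    return [i for rank, i in enumerate(order, start=1) if rank <= cutoff]

# (alt_reads, depth) for four hypothetical candidate sites.
pvals = [p_value_error_only(a, d) for a, d in [(15, 30), (1, 30), (8, 20), (0, 25)]]
kept = benjamini_hochberg(pvals)
```

Sites with many alternate reads at modest depth survive; a single alternate read (plausibly an error) and a site with none are rejected, mirroring the filtering logic of step (3).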
Funding: National Natural Science Foundation of China (U2004176, 22008055) and the Technology Research Project of Henan Province (232102240034) are gratefully acknowledged.
Abstract: The liquid phase exfoliation (LPE) process for graphene production is usually carried out in a stirred tank reactor, and the interactions between the solvent and the graphite particles are important for improving production efficiency. In this paper, these interactions were revealed by the computational fluid dynamics-discrete element method (CFD-DEM). Based on the simulation results, both the liquid-phase flow hydrodynamics and the particle motion behavior were analyzed, giving a general picture of the multiphase flow behavior inside the stirred tank reactor for graphene production. By calculating the threshold at the beginning of the graphite exfoliation process, the shear force arising from the slip velocity was determined to be the active force. These results can support the optimization of the graphene production process.
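A crude proxy for the shear force from the slip velocity, treating the graphite particle as a Stokes sphere, shows the kind of threshold comparison involved; the viscosity, particle size, slip velocity, and exfoliation threshold below are all invented, and the actual CFD-DEM force model is more detailed.

```python
import math

def stokes_shear_force(mu, radius, slip_velocity):
    """Stokes drag on a sphere-approximated graphite particle from the
    local slip velocity between particle and solvent."""
    return 6.0 * math.pi * mu * radius * slip_velocity

# Illustrative numbers: NMP-like solvent viscosity, 10-um particle.
force = stokes_shear_force(mu=1.7e-3, radius=5e-6, slip_velocity=0.5)
threshold = 5e-8  # invented exfoliation-onset threshold, N
exfoliates = force > threshold
```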
Abstract: Computational fluid dynamics (CFD) provides a powerful tool for investigating complicated fluid flows. This paper studies the applicability of CFD in the preliminary design of linear and nonlinear fluid viscous dampers. Two fluid viscous dampers were designed based on CFD models. The first device was a linear viscous damper with straight orifices. The second was a nonlinear viscous damper containing a one-way pressure-responsive valve inside its orifices. Both dampers were detailed based on CFD simulations, and their internal fluid flows were investigated. Full-scale specimens of both dampers were manufactured and tested under dynamic loads. According to the test results, both dampers demonstrate stable cyclic behavior, and as expected, the nonlinear damper generally tends to dissipate more energy than its linear counterpart. Good agreement was achieved between the experimentally measured damper force-velocity curves and those estimated from the CFD analyses. Using a thermography camera, the rise in temperature of the dampers was measured during the tests. It was found that the output force of the manufactured devices was virtually independent of temperature, even during long-duration loadings. Accordingly, temperature dependence can be ignored in the CFD models, because a reliable temperature compensator mechanism was used (or intended to be used) by the damper manufacturer.
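The linear/nonlinear distinction is usually captured by the design model F = C sign(v)|v|^α, with α = 1 for the linear device and α < 1 for the nonlinear one. Integrating F dx over a sinusoidal cycle compares the dissipated energy; C, α, amplitude, and frequency below are invented, and the paper's actual device parameters are not reproduced.

```python
import math

def damper_force(C, alpha, v):
    """Standard design model F = C * sign(v) * |v|^alpha."""
    return C * math.copysign(abs(v) ** alpha, v)

def cycle_energy(C, alpha, A=0.05, w=2 * math.pi, n=20000):
    """Energy dissipated over one cycle x = A sin(wt), as E = sum(F*v*dt)."""
    E, dt = 0.0, (2 * math.pi / w) / n
    for i in range(n):
        v = A * w * math.cos(w * i * dt)
        E += damper_force(C, alpha, v) * v * dt
    return E

E_lin = cycle_energy(C=100.0, alpha=1.0)   # linear damper
E_nl = cycle_energy(C=100.0, alpha=0.4)    # nonlinear damper
```

At sub-unit peak velocities the |v|^1.4 power product exceeds v^2, so the nonlinear device dissipates more per cycle, consistent with the test results quoted above.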
Abstract: This paper presents a time-efficient numerical approach to modelling high explosive (HE) blastwave propagation using computational fluid dynamics (CFD). One of the main issues with conventional CFD modelling of high explosive simulations is the ability to accurately define the initial blastwave properties that arise from the ignition and consequent explosion. Specialised codes often employ the Jones-Wilkins-Lee (JWL) or a similar equation of state (EOS) to simulate blasts. However, most available CFD codes are limited in terms of EOS modelling: they are restricted to the Ideal Gas Law (IGL) for compressible flows, which is generally unsuitable for blast simulations. To this end, this paper presents a numerical approach to simulate blastwave propagation for any generic CFD code using the IGL EOS. A new method known as the Input Cavity Method (ICM) is defined, where the input conditions of the high explosive are given in the form of pressure, velocity and temperature time-history curves. These time-history curves are input at a certain distance from the centre of the charge. It is shown that the ICM can accurately predict overpressure and impulse time histories at measured locations for incident, reflective and complex multiple-reflection scenarios with high numerical accuracy compared to experimental measurements. The ICM is compared to the Pressure Bubble Method (PBM), a common approach to replicating the initial conditions of a high explosive in finite volume modelling. It is shown that the ICM outperforms the PBM on multiple fronts, such as peak values and overall overpressure curve shape. Finally, the paper also discusses the importance of choosing between the pressure-based solver (PBS) and the density-based solver (DBS) and provides the advantages and disadvantages of either choice. In general, it is shown that the PBS can resolve and capture the interactions of blastwaves at a higher resolution than the DBS, but at a much higher computational cost, so the DBS is much preferred for quick turnarounds.
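An ICM-style pressure time-history input can be illustrated with the classic Friedlander waveform, which is widely used to describe overpressure at a standoff point: zero before arrival, a sharp peak, exponential decay, and a negative phase. The peak pressure, positive-phase duration, and decay coefficient below are invented; the paper's actual input curves are derived from the charge, not assumed here.

```python
import math

def friedlander(t, p_peak, t_pos, b=1.0):
    """Friedlander overpressure history: p(t) = p_peak (1 - t/t_pos)
    exp(-b t / t_pos) for t >= 0, zero before the wave arrives."""
    if t < 0.0:
        return 0.0
    return p_peak * (1.0 - t / t_pos) * math.exp(-b * t / t_pos)

# Sampled curve that could serve as one of the ICM input time histories
# (velocity and temperature histories would be supplied similarly).
samples = [friedlander(t * 1e-4, p_peak=5.0e5, t_pos=2.0e-3) for t in range(100)]
```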
Funding Statement: This study was supported by the National Natural Science Foundation of China (No. 81970439); the Natural Science Foundation of Shanghai (No. 19ZR1432700); the Fund of the Shanghai Committee of Science and Technology (Nos. 19411965400 and 17DZ2253100); the Development Fund of Shanghai Talents (No. 2020114).
Abstract: Background: The assessment of renal function is important to the prognosis of patients needing Fontan palliation because of the compromised reconstructed circulation. Knowing the relationship between kidney perfusion and hemodynamic characteristics during surgical design could reduce the risk of acute kidney injury (AKI) and postoperative complications. However, the issue remains unsolved because current clinical evaluation methods cannot predict the hemodynamic changes in the renal artery (RA). Methods: We reconstructed a three-dimensional (3D) vascular model of a patient requiring Fontan palliation and used computational fluid dynamics (CFD) to explore the changes in RA hemodynamics under different possible blood flow rates. The relationship between kidney perfusion and hemodynamic characteristics was investigated. Results: The calculated results indicated that the pressure and pressure drop declined as the flow rate decreased. When the flow rate decreased to two-thirds of its baseline, the pressures of both the left renal artery (LRA) and the right renal artery (RRA) dipped below 50% of baseline, with the RRA pressure falling more quickly than that of the LRA. Wall shear stress (WSS) was unevenly distributed on the trunk of the RA, and the lowest WSS was found at the distal end of the RA. The average WSS in the RA dropped to around 50% as the flow rate reached one-third of its baseline. Conclusions: As a promising approach, CFD can be used to quantitatively evaluate the hemodynamic characteristics of the RA, helping to offset the drawbacks of clinical assessments of renal function and to achieve a better prognosis for patients undergoing Fontan palliation.
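The paper's pressure figures come from patient-specific 3-D CFD. As a back-of-the-envelope cross-check of how pressure drop scales with flow, a 1-D Hagen-Poiseuille sketch can be written; every dimension and property below is an illustrative assumption, not patient data:

```python
import math

def poiseuille_dp(q, mu, length, radius):
    """Hagen-Poiseuille pressure drop for fully developed laminar pipe flow.

    q: volumetric flow rate [m^3/s]; mu: dynamic viscosity [Pa*s];
    length, radius: vessel dimensions [m]. A 1-D stand-in for 3-D CFD.
    """
    return 8.0 * mu * length * q / (math.pi * radius ** 4)

# Illustrative renal-artery-like values (assumptions, not patient data).
mu = 3.5e-3      # blood viscosity, Pa*s
length = 0.04    # 4 cm segment
radius = 2.5e-3  # 2.5 mm lumen radius
q_base = 8.0e-6  # baseline flow, ~480 mL/min

dp_base = poiseuille_dp(q_base, mu, length, radius)
dp_two_thirds = poiseuille_dp(2.0 / 3.0 * q_base, mu, length, radius)
# In this linear model the pressure drop falls exactly with the flow fraction;
# the patient-specific CFD captures the geometry-dependent departures from this.
ratio = dp_two_thirds / dp_base
```

The contrast is the point: the toy model can only predict proportional scaling, whereas the study's CFD resolves the uneven WSS distribution and the asymmetry between LRA and RRA.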
Funding: Supported by the Natural Science Foundation of Shandong Province, China (Grant No. ZR2022MF249).
Abstract: Imaging through fluctuating scattering media such as fog is challenging because the medium seriously degrades image quality. We investigate how fluctuating fog reduces the image quality of computational ghost imaging and how to obtain a high-quality defogged ghost image. We show theoretically and experimentally that the photon-number fluctuations introduced by fluctuating fog are the cause of ghost-image degradation. An algorithm is proposed to process the signals collected by the computational ghost imaging device so as to eliminate the photon-number fluctuations across different measurement events. Thus, a high-quality defogged ghost image is reconstructed even when fog is evenly distributed along the optical path. A nearly 100% defogged ghost image is obtained by further processing the reconstructed defogged image with a cycle generative adversarial network.
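The abstract does not spell out the algorithm, but the core idea, removing the per-measurement photon-number fluctuation before the correlation reconstruction, can be illustrated in a toy simulation. This is a sketch, not the paper's method: the fog transmittance of each frame is assumed to be measurable (e.g., by a reference detector), and the object, pattern count, and fog statistics are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_meas = 64, 5000

# 1-D binary test object and random illumination patterns (computational GI).
obj = np.zeros(n_pix)
obj[20:36] = 1.0
patterns = rng.random((n_meas, n_pix))

# Fluctuating fog: a multiplicative transmittance per measurement event.
fog = 10.0 ** rng.uniform(-1.5, 0.0, n_meas)
bucket = fog * (patterns @ obj)  # measured bucket signals

def gi_reconstruct(b, pats):
    """Standard correlation reconstruction: <B * I(x)> - <B><I(x)>."""
    return (b[:, None] * pats).mean(axis=0) - b.mean() * pats.mean(axis=0)

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

noisy = gi_reconstruct(bucket, patterns)
# Fluctuation removal: divide each bucket value by that frame's transmittance
# so that the per-event photon-number fluctuation cancels out.
clean = gi_reconstruct(bucket / fog, patterns)
```

In this toy setting `corr(clean, obj)` exceeds `corr(noisy, obj)`, mirroring the paper's observation that equalising photon numbers across measurement events restores the ghost image.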
Funding: The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through the Small Groups Project under Grant Number (120/43), and to Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia, through Researchers Supporting Project number (PNURSP2022R281). The authors would also like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work through Grant Code (22UQU4320484DSR33).
Abstract: Computational linguistics is an interdisciplinary field concerned with the computational modelling of natural language and with studying appropriate computational methods for linguistic questions. The number of social media users has been increasing over the last few years, which has drawn researchers' interest in scrutinizing the new kind of creative language used on the Internet to better explore communication and human opinions. Irony and sarcasm detection is a complex task in Natural Language Processing (NLP). Irony detection has applications in advertising, sentiment analysis (SA), and opinion mining. In recent years, irony-aware SA has received significant computational treatment owing to the prevalence of irony in web content. Therefore, this study develops a Computational Linguistics with Optimal Deep Belief Network based Irony Detection and Classification (CLODBN-IRC) model for social media. The presented CLODBN-IRC model focuses on identifying and classifying irony in social media. To attain this, the model performs several stages of pre-processing and TF-IDF feature extraction. For irony detection and classification, a DBN model is exploited. Finally, the hyperparameters of the DBN model are optimally tuned by an improved artificial bee colony (IABC) optimization algorithm. The presented CLODBN-IRC method is validated experimentally on a benchmark dataset, and the simulation outcomes highlight its superiority over other approaches.
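The feature-extraction stage named in the abstract is standard TF-IDF. A pure-Python sketch of that stage might look like the following; the toy corpus and tokenizer are illustrative assumptions, and the downstream DBN classifier and IABC tuning are not reproduced here:

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Minimal pre-processing: lowercase and keep word-like tokens."""
    return re.findall(r"[a-z']+", text.lower())

def tfidf(docs):
    """TF-IDF vectors (as sparse dicts) for a small corpus.

    tf = count / doc length; idf = ln(N / df). A toy stand-in for the
    feature-extraction stage whose output would feed a classifier.
    """
    tokenized = [tokenize(d) for d in docs]
    n = len(tokenized)
    df = Counter(t for doc in tokenized for t in set(doc))
    vectors = []
    for doc in tokenized:
        counts = Counter(doc)
        vectors.append({t: (c / len(doc)) * math.log(n / df[t])
                        for t, c in counts.items()})
    return vectors

docs = [
    "oh great another monday i just love waiting in traffic",  # ironic tone
    "the concert last night was genuinely great",
    "i love this phone the battery lasts all week",
]
vecs = tfidf(docs)
```

Terms concentrated in one document (e.g., "monday") score higher than terms spread across the corpus (e.g., "love"), which is exactly the weighting that lets a downstream classifier pick up document-specific cues.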