The prediction of intrinsically disordered proteins (IDPs) is an active research area in bioinformatics. Because experimental methods for evaluating disordered regions of protein sequences are costly, it is becoming increasingly important to predict those regions computationally. In this paper, we developed a novel scheme that employs sequence complexity to calculate six features for each residue of a protein sequence: the Shannon entropy, the topological entropy, the sample entropy, and three amino acid preferences (Remark 465, Deleage/Roux, and B-factor (2STD)). In particular, we introduced the sample entropy, a measure of time-series complexity, by mapping the amino acid sequence to a time series of digits 0-9. To our knowledge, the sample entropy has not previously been used for predicting IDPs and is applied here for the first time. In addition, the scheme uses a properly sized sliding window over every protein sequence, which greatly improves prediction performance. Finally, we tested seven machine learning algorithms with 10-fold cross-validation on the dataset R80 collected by Yang et al. and on the dataset DIS1556 from the Database of Protein Disorder (DisProt) (https://www.disprot.org), which contains experimentally determined IDPs. The results showed that k-Nearest Neighbor was the most suitable algorithm, achieving an overall prediction accuracy of 92%. Furthermore, our method uses only six features and hence has lower computational complexity.
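As an illustration of the entropy features, the sketch below computes a per-residue Shannon entropy over a centered sliding window; the window size of 21 and the example sequence are illustrative assumptions, not values from the paper.

```python
from collections import Counter
from math import log2

def shannon_entropy(window: str) -> float:
    """Shannon entropy (bits) of the residue distribution in a window."""
    counts = Counter(window)
    n = len(window)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def per_residue_entropy(seq: str, w: int = 21) -> list[float]:
    """Entropy feature for each residue, using a centered sliding window.

    Windows are truncated at the sequence ends; w = 21 is an illustrative size.
    Low-entropy (low-complexity) windows are a classic disorder signal.
    """
    half = w // 2
    return [shannon_entropy(seq[max(0, i - half): i + half + 1])
            for i in range(len(seq))]

print(per_residue_entropy("MKVLAAGGGSSSSSSSSAAA", w=9)[:5])
```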
The NP-completeness of the topological spatial reasoning problem has been proved. Given the similarity of its uncertainty to that of topological spatial reasoning, the directional spatial reasoning problem should also be NP-complete. The proof of NP-completeness for the directional spatial reasoning problem is based on two important transformations. After these transformations, a spatial configuration is constructed from directional constraints, and the NP-completeness of directional spatial reasoning is proved with the help of the consistency of the constraints in the configuration.
In this paper, we define two versions of the Untrapped set (weak and strong Untrapped sets) over a finite set of alternatives. These versions, considered as choice procedures, extend the notion of the Untrapped set to a more general case (i.e., when alternatives are not necessarily comparable). We show that both coincide with the Top cycle choice procedure for tournaments. In the case of weak tournaments, the strong Untrapped set is equivalent to the Getcha choice procedure, and the weak Untrapped set is exactly the Untrapped set studied in the literature. We also present a polynomial-time algorithm for computing each set.
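For tournaments, the Top cycle is the top strongly connected component of the majority digraph and is computable in polynomial time. A minimal sketch (the `beats` relation and the example profile are illustrative assumptions, not the paper's algorithm) starts from a Copeland winner, which always belongs to the Top cycle, and collects everything that can reach it:

```python
def top_cycle(alternatives, beats):
    """Top cycle of a tournament.

    `beats[a][b]` is True iff a beats b; for a != b exactly one of
    beats[a][b], beats[b][a] holds (tournament assumption).
    """
    # A Copeland winner (maximum number of wins) always lies in the top cycle.
    start = max(alternatives,
                key=lambda a: sum(beats[a][b] for b in alternatives if b != a))
    # Backward reachability: every alternative that can reach the Copeland
    # winner is in the top strongly connected component, and vice versa.
    cycle, frontier = {start}, [start]
    while frontier:
        y = frontier.pop()
        for a in alternatives:
            if a not in cycle and beats[a][y]:
                cycle.add(a)
                frontier.append(a)
    return cycle

# 3-cycle a->b->c->a plus d losing to everyone: top cycle is {a, b, c}.
alts = ["a", "b", "c", "d"]
beats = {x: {y: False for y in alts} for x in alts}
for w, l in [("a","b"), ("b","c"), ("c","a"), ("a","d"), ("b","d"), ("c","d")]:
    beats[w][l] = True
print(top_cycle(alts, beats))  # {'a', 'b', 'c'}
```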
Nowadays, an increasing number of people choose to outsource their computing and storage demands to the Cloud. To ensure the integrity of data in the untrusted Cloud, especially dynamic files that can be updated online, we propose an improved dynamic provable data possession model. We use homomorphic tags to verify the integrity of the file and hash values generated from secret values and tags to prevent replay and forgery attacks. Compared with previous works, our proposal reduces the computational and communication complexity from O(log n) to O(1). We ran experiments to confirm this improvement and extended the model to the file-sharing situation.
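To make the tag idea concrete, the sketch below uses plain HMAC block tags plus a per-challenge nonce to illustrate integrity checking and replay resistance. It is a simplified stand-in, not the paper's homomorphic-tag construction (a real PDP verifier does not re-read the block, which homomorphic tags make possible); all names are hypothetical.

```python
import hmac, hashlib, os

SECRET = os.urandom(32)   # verifier's secret key (illustrative)

def tag_block(index: int, block: bytes) -> bytes:
    """Tag binds a block to its index, so swapped or substituted blocks fail."""
    return hmac.new(SECRET, index.to_bytes(8, "big") + block, hashlib.sha256).digest()

def prove(stored_tag: bytes, nonce: bytes) -> bytes:
    """Server's answer to a challenge: hash the stored tag with a fresh nonce."""
    return hashlib.sha256(stored_tag + nonce).digest()

def verify(index: int, block: bytes, proof: bytes, nonce: bytes) -> bool:
    """A fresh nonce per challenge defeats replayed (stale) proofs."""
    expected = hashlib.sha256(tag_block(index, block) + nonce).digest()
    return hmac.compare_digest(proof, expected)

block, nonce = b"file chunk 0", os.urandom(16)
print(verify(0, block, prove(tag_block(0, block), nonce), nonce))  # True
```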
Powered by advanced information technology, more and more complex systems are exhibiting characteristics of cyber-physical-social systems (CPSS). In this context, the computational experiments method has emerged as a novel approach for the design, analysis, management, control, and integration of CPSS, realizing causal analysis of complex systems by means of the “algorithmization” of “counterfactuals”. However, because CPSS involve human and social factors (e.g., autonomy, initiative, and sociality), it is difficult for traditional design of experiments (DOE) methods to achieve a generative explanation of system emergence. To address this challenge, this paper proposes an integrated approach to the design of computational experiments, incorporating three key modules: 1) Descriptive module: determining the influencing factors and response variables of the system by modeling an artificial society; 2) Interpretative module: selecting a factorial experimental design to identify the relationship between influencing factors and macro phenomena (see the sketch below); 3) Predictive module: building a meta-model equivalent to the artificial society to explore its operating laws. Finally, a case study of crowd-sourcing platforms is presented to illustrate the application process and effectiveness of the proposed approach, which can reveal the social impact of algorithmic behavior on the “rider race”.
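The interpretative module's factorial design can be illustrated by a full-factorial enumeration of runs; the factor names and levels below are hypothetical, chosen only to show the mechanics:

```python
from itertools import product

# Hypothetical influencing factors of an artificial crowd-sourcing society.
factors = {
    "dispatch_algorithm": ["greedy", "auction"],
    "rider_density": ["low", "high"],
    "incentive_level": [0.5, 1.0, 1.5],
}

# Full factorial design: one experimental run per combination of levels;
# each run is executed in the artificial society and its response recorded.
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(len(runs))   # 2 * 2 * 3 = 12 runs
print(runs[0])     # {'dispatch_algorithm': 'greedy', 'rider_density': 'low', 'incentive_level': 0.5}
```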
This work introduces a modification to the Heisenberg Uncertainty Principle (HUP) by incorporating quantum complexity, including potential nonlinear effects. Our theoretical framework extends the traditional HUP to consider the complexity of quantum states, offering a more nuanced understanding of measurement precision. By adding a complexity term to the uncertainty relation, we explore nonlinear modifications such as polynomial, exponential, and logarithmic functions. Rigorous mathematical derivations demonstrate the consistency of the modified principle with classical quantum mechanics and quantum information theory. We investigate the implications of this modified HUP for various aspects of quantum mechanics, including quantum metrology, quantum algorithms, quantum error correction, and quantum chaos. Additionally, we propose experimental protocols to test the validity of the modified HUP, evaluating their feasibility with current and near-term quantum technologies. This work highlights the importance of quantum complexity in quantum mechanics and provides a refined perspective on the interplay between complexity, entanglement, and uncertainty in quantum systems. The modified HUP has the potential to stimulate interdisciplinary research at the intersection of quantum physics, information theory, and complexity theory, with significant implications for the development of quantum technologies and the understanding of the quantum-to-classical transition.
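A complexity-modified uncertainty relation of this kind can be sketched in LaTeX; the specific functional forms below are illustrative of the polynomial, exponential, and logarithmic modifications named in the abstract, not the paper's exact expressions:

```latex
% Standard HUP augmented with a complexity term C(\psi) of the state:
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}\,\bigl(1 + f(C(\psi))\bigr),
\qquad
f(C) =
\begin{cases}
  \alpha\, C^{n}          & \text{(polynomial)}\\[2pt]
  \alpha\,(e^{\beta C}-1) & \text{(exponential)}\\[2pt]
  \alpha\,\ln(1+\beta C)  & \text{(logarithmic)}
\end{cases}
```

Each choice satisfies f(0) = 0, so the standard relation is recovered for zero-complexity states, consistent with the claimed compatibility with classical quantum mechanics.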
Photocatalysis, a critical strategy for harvesting sunlight to address energy demand and environmental concerns, is underpinned by the discovery of high-performance photocatalysts; accordingly, how to design photocatalysts is now generating widespread interest as a way to boost the conversion efficiency of solar energy. In the past decade, computational technologies and theoretical simulations have led to a major leap in the development of high-throughput computational screening strategies for novel high-efficiency photocatalysts. In this viewpoint, we start by introducing the challenges of photocatalysis from the perspective of experimental practice, especially the inefficiency of the traditional “trial and error” method. Subsequently, a cross-sectional comparison between experimental and high-throughput computational screening for photocatalysis is presented and discussed in detail. On the basis of current experimental progress in photocatalysis, we also exemplify the various challenges associated with high-throughput computational screening strategies. Finally, we offer a preferred high-throughput computational screening procedure for photocatalysts from an experimental-practice perspective (model construction and screening, standardized experiments, assessment and revision), with the aim of better correlating high-throughput simulations with experimental practices and motivating the search for better descriptors.
Aptamers are single-stranded oligonucleotides that can bind a specific target. Due to their simple preparation, easy modification, stable structure, and reusability, aptamers have been widely applied as biochemical sensors for medicine, food safety, and environmental monitoring. However, there is little research on aptamer-target binding mechanisms, which limits their application and development. Computational simulation has gained much attention for revealing aptamer-target binding mechanisms at the atomic level. This work summarizes the main simulation methods used in the mechanistic analysis of aptamer-target complexes, the characteristics of binding between aptamers and different targets (metal ions, small organic molecules, biomacromolecules, cells, bacteria, and viruses), the types of aptamer-target interactions, and the factors influencing their strength. It provides a reference for the further use of simulations in understanding aptamer-target binding mechanisms.
Owing to the complex lithology of unconventional reservoirs, field interpreters usually need a basis for interpretation from logging simulation models. Among the various detection tools that use nuclear sources, the detector response can reflect many types of information about the medium. The Monte Carlo method is one of the primary methods used to obtain nuclear detection responses in complex environments. However, it requires a computational process with extensive random sampling, consumes considerable resources, and does not provide real-time results. Therefore, a novel fast forward computational method (FFCM) for nuclear measurement is proposed that uses volumetric detection constraints to rapidly calculate the detector response in various complex environments. First, the data library required for the FFCM is built by collecting the detection volume, detector counts, and flux sensitivity functions through Monte Carlo simulation. Then, based on perturbation theory and the Rytov approximation, a model for the detector response is derived using the flux sensitivity function method and a one-group diffusion model. The environmental perturbation is constrained to optimize the model according to the tool structure and the impact of the formation and borehole within the effective detection volume. Finally, the method is applied to a neutron porosity tool for verification. In various complex simulation environments, the maximum relative error between the porosity computed by Monte Carlo and by the FFCM was 6.80%, with a root-mean-square error of 0.62 p.u. In field well applications, the formation porosity model obtained using the FFCM was in good agreement with the model obtained by interpreters, which demonstrates the validity and accuracy of the proposed method.
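In flux-sensitivity-based forward models of this kind, the perturbed detector count is often written as the base-case count modulated exponentially by the sensitivity-weighted cross-section perturbation (a Rytov-type form). The expression below is a generic sketch of that idea under those assumptions, not the paper's exact derivation:

```latex
% Rytov-type perturbation form: N_0 is the base-case count from the
% Monte Carlo library, S(\mathbf{r}) the flux sensitivity function, and
% \delta\Sigma(\mathbf{r}) the perturbation of the macroscopic cross
% section, integrated over the effective detection volume V.
N \;\approx\; N_0 \exp\!\left(\int_{V} S(\mathbf{r})\,
      \delta\Sigma(\mathbf{r})\,\mathrm{d}^{3}r\right)
```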
This study developed a numerical model to efficiently treat solid waste magnesium nitrate hydrate through multi-step chemical reactions. The model simulates two-phase flow, heat, and mass transfer processes in a pyrolysis furnace to improve the decomposition rate of magnesium nitrate. The performance of multi-nozzle and single-nozzle injection methods was evaluated, and the effects of primary and secondary nozzle flow ratios, velocity ratios, and secondary nozzle inclination angles on the decomposition rate were investigated. Results indicate that multi-nozzle injection has a higher conversion efficiency and decomposition rate than single-nozzle injection, with a 10.3% higher conversion rate under the design parameters. The decomposition rate depends primarily on the average residence time of particles, which can be increased by decreasing the flow and velocity ratios and increasing the inclination angle of the secondary nozzles. The optimal parameters are an injection flow ratio of 40%, an injection velocity ratio of 0.6, and a secondary nozzle inclination of 30°, corresponding to a maximum decomposition rate of 99.33%.
Elementary information theory is used to model cybersecurity complexity, where the model assumes that security risk management is a binomial stochastic process. Complexity is shown to increase exponentially with the number of vulnerabilities in combination with security risk management entropy. However, vulnerabilities can be either local or non-local, where the former is confined to networked elements and the latter results from interactions between elements. Furthermore, interactions involve multiple methods of communication, where each method can contain vulnerabilities specific to that method. Importantly, the number of possible interactions scales quadratically with the number of elements in standard network topologies. Minimizing these interactions can significantly reduce the number of vulnerabilities and the accompanying complexity. Two network configurations that yield sub-quadratic and linear scaling relations are presented.
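A small sketch makes the two scaling claims concrete: the per-vulnerability entropy of a binomial (Bernoulli) risk-management process, complexity growing exponentially as 2^(nH), and pairwise interactions growing quadratically as n(n-1)/2. The probability value is an illustrative assumption.

```python
from math import log2

def bernoulli_entropy(p: float) -> float:
    """Entropy (bits) of one binomial risk-management decision."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

p = 0.3                       # illustrative per-vulnerability failure probability
n_vulnerabilities = 20
H = bernoulli_entropy(p)
complexity = 2 ** (n_vulnerabilities * H)   # exponential in n and H
print(f"H = {H:.3f} bits, complexity ~ 2^(nH) = {complexity:.3e}")

n_elements = 100
interactions = n_elements * (n_elements - 1) // 2   # quadratic scaling
print(f"{n_elements} elements -> {interactions} possible pairwise interactions")
```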
Continuous-flow microchannels are widely employed for synthesizing various materials, including nanoparticles, polymers, and metal-organic frameworks (MOFs), to name a few. Microsystem technology allows precise control over reaction parameters, resulting in purer, more uniform, and structurally stable products due to more effective manipulation of mass transfer. However, continuous-flow synthesis processes may be accompanied by the emergence of spatial convective structures initiating convective flows. On the one hand, convection can accelerate reactions by intensifying mass transfer. On the other hand, it may lead to non-uniformity in the final product or defects, especially in MOF microcrystal synthesis. The ability to distinguish regions of convective and diffusive mass transfer may be the key to performing higher-quality reactions and obtaining purer products. In this study, we investigate, for the first time, the possibility of using an information complexity measure as a criterion for assessing the intensity of mass transfer in microchannels, considering both spatial and temporal non-uniformities of the liquid distributions resulting from convection formation. We calculate the complexity using a shearlet transform based on a local approach. In contrast to existing methods for calculating complexity, the shearlet-transform-based approach provides a more detailed representation of local heterogeneities. Our analysis involves experimental images illustrating the mixing of two non-reactive liquids in a Y-type continuous-flow microchannel under conditions of double-diffusive convection formation. The obtained complexity fields characterize the mixing process and structure formation, revealing variations in mass transfer intensity along the microchannel. We compare the results with cases of liquid mixing via a purely diffusive mechanism. The analysis revealed that the complexity measure is sensitive to variations in the type of mass transfer, establishing its feasibility as an indirect criterion for assessing mass transfer intensity. The method can extend beyond flow analysis, finding application in controlling the microstructure of various materials (porosity, for instance) or surface defects in metals, optical systems, and other materials of significant relevance in materials science and engineering.
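As a rough illustration of a local complexity field, the sketch below uses windowed intensity entropy as a simple stand-in for the shearlet-based measure described above; the synthetic image and the window size are assumptions made only for demonstration.

```python
import numpy as np

def local_entropy_field(img: np.ndarray, w: int = 16, bins: int = 32) -> np.ndarray:
    """Complexity proxy: Shannon entropy of intensities in w x w tiles."""
    h, wd = img.shape
    field = np.zeros((h // w, wd // w))
    for i in range(h // w):
        for j in range(wd // w):
            tile = img[i*w:(i+1)*w, j*w:(j+1)*w]
            hist, _ = np.histogram(tile, bins=bins, range=(0.0, 1.0))
            p = hist / hist.sum()
            p = p[p > 0]
            field[i, j] = -(p * np.log2(p)).sum()
    return field

# Synthetic "mixing" image: smooth diffusive gradient plus a noisy convective band.
rng = np.random.default_rng(0)
img = np.tile(np.linspace(0.0, 1.0, 128), (128, 1))
img[48:80] = rng.random((32, 128))      # convective region: high local entropy
print(local_entropy_field(img)[3])      # row through the noisy band
```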
For living anionic polymerization (LAP), the solvent has a great influence on both reaction mechanism and kinetics. In this work, using the classical butyllithium-styrene polymerization as a model system, the effect of solvent on the mechanism and kinetics of LAP was revealed through a strategy combining density functional theory (DFT) calculations and kinetic modeling. In terms of mechanism, a detailed energy decomposition analysis of the electrostatic interactions between initiator and solvent molecules shows that the stronger the solvent polarity, the more electrons transfer from initiator to solvent. Furthermore, we found that the stronger the solvent polarity, the higher the monomer initiation energy barrier and the smaller the initiation rate coefficient. Counterintuitively, initiation is more favorable at lower temperatures, based on the calculated ΔG_TS values. Finally, the kinetic characteristics in different solvents were examined by kinetic modeling. In benzene and n-pentane, the polymerization rate exhibits first-order kinetics, whereas slow initiation and fast propagation were observed in tetrahydrofuran (THF) due to the slow rate of free-ion formation, leading to a deviation from first-order kinetics.
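A minimal kinetic sketch of the two regimes (rate coefficients and concentrations are illustrative assumptions, not the paper's DFT-derived values) integrates d[I]/dt = -k_i[I][M], d[P*]/dt = k_i[I][M], d[M]/dt = -(k_i[I] + k_p[P*])[M]:

```python
import numpy as np

def simulate(k_i, k_p, I0=0.01, M0=1.0, dt=1e-3, t_end=50.0):
    """Explicit-Euler integration of a simple LAP scheme:
    I + M -> P* (rate k_i), P* + M -> P* (rate k_p)."""
    I, P, M = I0, 0.0, M0
    log_M = [np.log(M)]
    for _ in range(int(t_end / dt)):
        ri, rp = k_i * I * M, k_p * P * M
        I -= ri * dt
        P += ri * dt
        M -= (ri + rp) * dt
        log_M.append(np.log(M))
    return np.array(log_M)

# Large k_i (benzene/n-pentane regime): the initiator is consumed quickly,
# the active-center count is effectively constant, and ln[M] vs t is linear
# (first-order kinetics). Small k_i (slow-initiation regime): [P*] keeps
# growing during the run, so ln[M] vs t curves -- a deviation from first order.
fast = simulate(k_i=10.0, k_p=1.0)
slow = simulate(k_i=0.05, k_p=1.0)
print(fast[-1], slow[-1])
```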
The rapid development of Internet of Things (IoT) technology has led to a significant increase in the computational task load of Terminal Devices (TDs). TDs reduce response latency and energy consumption with the support of task offloading in Multi-access Edge Computing (MEC). However, existing task-offloading optimization methods typically assume that the MEC's computing resources are unlimited, and there is a lack of research on task-offloading optimization when MEC resources are exhausted. In addition, existing solutions decide whether to accept an offloaded task request based only on the decision result of the current time slot, with no support for multiple retries in subsequent time slots, so a TD misses potential future offloading opportunities. To fill this gap, we propose a Two-Stage Offloading Decision-making Framework (TSODF) with request holding and dynamic eviction. Long Short-Term Memory (LSTM)-based task-offloading request prediction and MEC resource release estimation are integrated to infer the probability of a request being accepted in the subsequent time slot. The framework continuously learns optimized decision-making experiences to increase the success rate of task offloading based on deep learning technology. Simulation results show that TSODF reduces TDs' total energy consumption and task execution delay and improves the task offloading rate and system resource utilization compared to the benchmark method.
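The LSTM component can be sketched as a binary acceptance-probability predictor over a history of per-slot load features; the feature layout, dimensions, and class name below are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class AcceptancePredictor(nn.Module):
    """Predicts the probability that an offloading request held to the
    next time slot will be accepted, from recent MEC load history."""
    def __init__(self, n_features: int = 4, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time_slots, n_features) -- e.g. queue length, CPU load,
        # arriving requests, and estimated resource releases per slot.
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # probability for the next slot

model = AcceptancePredictor()
history = torch.randn(8, 10, 4)        # 8 requests, 10 past slots, 4 features
p_accept = model(history)              # hold the request if p_accept is high
print(p_accept.shape)                  # torch.Size([8, 1])
```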
Task-based Language Teaching (TBLT) research has provided ample evidence that cognitive complexity is an important aspect of task design that influences learners' performance in terms of fluency, accuracy, and syntactic complexity. Despite the substantial number of empirical investigations into task complexity in journal articles, storyline complexity, one of its features, has scarcely been investigated. Previous research mainly focused on the impact of storyline complexity on learners' oral performance; its impact on written performance is less investigated. Thus, this study investigates the effects of the narrative complexity of the storyline on senior high school students' written performance, as displayed by its complexity, fluency, and accuracy. The present study has important pedagogical implications: task design and assessment should distinguish between different types of narrative tasks, for example, tasks with a single or a dual storyline. Results on task complexity may help inform the pedagogical choices made by teachers when prioritizing work on a specific linguistic dimension.
According to the World Health Organization (WHO), meningitis is a severe infection of the meninges, the membranes covering the brain and spinal cord. It is a devastating disease and remains a significant public health challenge. This study investigates a bacterial meningitis model in deterministic and stochastic versions. Four-compartment population dynamics explain the concept: the susceptible population, carriers, infected, and recovered. The model predicts the nonnegative equilibrium points and the reproduction number, i.e., the Meningitis-Free Equilibrium (MFE) and the Meningitis-Existing Equilibrium (MEE). For the stochastic version of the existing deterministic model, the two methodologies studied are transition probabilities and non-parametric perturbations. Also, positivity, boundedness, extinction, and disease persistence are studied rigorously with the help of well-known theorems. Standard and nonstandard techniques such as Euler-Maruyama, stochastic Euler, stochastic Runge-Kutta, and stochastic nonstandard finite difference in the sense of delay are presented for the computational analysis of the stochastic model. Unfortunately, the standard methods fail to preserve the biological properties of the model, so the stochastic nonstandard finite difference approximation is offered as an efficient, low-cost method that is independent of the time step size. In addition, the convergence and the local and global stability around the equilibria of the nonstandard computational method are studied by assuming the perturbation effect is zero. Simulations and a comparison of the methods are presented to support the theoretical results and for the best visualization of results.
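A minimal Euler-Maruyama sketch for a stochastically perturbed four-compartment model of this shape (the S-C-I-R structure, rates, and noise intensity below are illustrative assumptions, not the paper's calibrated values):

```python
import numpy as np

def euler_maruyama_scir(beta=0.4, theta=0.3, gamma=0.1, sigma=0.05,
                        x0=(0.9, 0.05, 0.05, 0.0), dt=0.01, t_end=100.0, seed=1):
    """One path of dX = f(X) dt + sigma g(X) dW for an S-C-I-R model:
    S -> C (force of infection), C -> I, I -> R, noise on transmission."""
    rng = np.random.default_rng(seed)
    S, C, I, R = x0
    path = [x0]
    for _ in range(int(t_end / dt)):
        dW = rng.normal(0.0, np.sqrt(dt))
        infection = beta * S * (C + I)
        S += -infection * dt - sigma * S * (C + I) * dW
        C += (infection - theta * C) * dt + sigma * S * (C + I) * dW
        I += (theta * C - gamma * I) * dt
        R += (gamma * I) * dt
        path.append((S, C, I, R))
    return np.array(path)

path = euler_maruyama_scir()
print(path[-1])   # final (S, C, I, R); note Euler-Maruyama can drift outside
                  # [0, 1] -- the positivity failure motivating the NSFD scheme.
```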
Recent industrial explosions globally have intensified the focus in mechanical engineering on designing infrastructure systems and networks capable of withstanding blast loading. Initially centered on high-profile facilities such as embassies and petrochemical plants, this concern now extends to a wider array of infrastructures and facilities. Engineers and scholars increasingly prioritize structural safety against explosions, particularly to prevent disproportionate collapse and damage to nearby structures. Urbanization has further amplified the reliance on oil and gas pipelines, making them vital for urban life and prime targets for terrorist activities. Consequently, there is a growing imperative for computational engineering solutions that tackle blast loading on pipelines and mitigate the associated risks to avert disasters. In this study, an empty pipe model was validated under contact blast conditions using Abaqus software, a powerful tool in mechanical engineering for simulating blast effects on buried pipelines. Employing a Eulerian-Lagrangian computational fluid dynamics approach, the investigation extended to above-surface and below-surface blasts at standoff distances of 25 and 50 mm. Material descriptions in the numerical model relied on Abaqus' default mechanical models. Comparative analysis revealed varying pipe performance, with deformation decreasing as the explosion-to-pipe distance increased; the explosion's location relative to the pipe surface notably influenced deformation levels, a key finding of the study. Moreover, quantitative findings indicated varying ratios of plastic dissipation energy (PDE) for the different blast scenarios relative to the contact blast (P0). Specifically, P1 (25 mm subsurface blast) and P2 (50 mm subsurface blast) showed approximately 24.07% and 14.77% of P0's PDE, respectively, while P3 (25 mm above-surface blast) and P4 (50 mm above-surface blast) exhibited lower PDE values, accounting for about 18.08% and 9.67% of P0's PDE, respectively. Utilising energy-absorbing materials on the pipeline, such as thin coatings of ultra-high-strength concrete, metallic foams, and carbon fiber-reinforced polymer wraps, is recommended to effectively mitigate blast damage. This research contributes to the advancement of mechanical engineering by providing insights and solutions crucial for enhancing the resilience and safety of underground pipelines in the face of blast events.
Living objects have complex internal and external interactions. The complexity is regulated and controlled by homeostasis, which is the balance of multiple opposing influences. Environmental effects ultimately guide the self-organized structure. Living systems are open, dynamic structures performing random, stationary, stochastic, self-organizing processes. The self-organizing procedure is defined by the spatial-temporal fractal structure, which is self-similar both in space and time. The system's complexity appears in its energetics, which strives for the most efficient use of the available energies; to that end, it organizes various well-connected networks. The controller of environmental relations is Darwinian selection on a long time scale. The energetics optimize the healthy processes, tuned to the highest efficacy and minimal loss (minimization of entropy production). The organism is built up by morphogenetic rules and develops various networks from the genetic level up to the whole organism. The networks have intensive crosstalk and form a balance in the Nash equilibrium, which is the homeostatic state in healthy conditions. Homeostasis may be described as a Nash equilibrium that ensures energy distribution in a “democratic” way regarding the functions of the parts in the complete system. Cancer radically changes the network system in the organism: cancer is a network disease. Deviation from healthy networking appears at every level, from the genetic (molecular) level to cells, tissues, organs, and organisms. The strong proliferation of malignant tissue is the origin of most of the life-threatening processes. The weak side of cancer development is the change of complex information networking in the system, which is vulnerable to immune attacks. Cancer cells are masters of adaptation and evade immune surveillance. This hiding process can be broken by nonionizing electromagnetic radiation, for which the malignant structure has no adaptation strategy. Our objective is to review the different sides of living complexity and use this knowledge to fight against cancer.
The security of Federated Learning (FL) and Distributed Machine Learning (DML) is gravely threatened by data poisoning attacks, which destroy the usability of the model by contaminating the training samples; such attacks are therefore called causative availability indiscriminate attacks. Because existing data sanitization methods are hard to apply to real-time applications due to their tedious process and heavy computations, we propose a new supervised batch detection method for poison that can rapidly sanitize the training dataset before local model training. We design a training dataset generation method that helps to enhance accuracy and uses data complexity features to train a detection model, which is then used in an efficient batch hierarchical detection process. Our model stockpiles knowledge about poison and can be expanded by retraining to adapt to new attacks. Being neither attack-specific nor scenario-specific, our method is applicable to FL/DML as well as other online or offline scenarios.
An extreme ultraviolet solar corona multispectral imager allows direct observation of high-temperature coronal plasma, which is related to solar flares, coronal mass ejections, and other significant coronal activities. This manuscript proposes a novel end-to-end computational design method for an extreme ultraviolet (EUV) solar corona multispectral imager operating at wavelengths near 100 nm, including a stray-light suppression design and computational image recovery. To suppress the strong stray light from the solar disk, an outer opto-mechanical structure is designed to protect the imaging component of the system. Considering the low reflectivity (less than 70%) and strong scattering (roughness) of existing extreme ultraviolet optical elements, the imaging component comprises only a primary mirror and a curved grating. A Lyot aperture further suppresses any residual stray light. Finally, a deep learning computational imaging method is used to recover the individual multi-wavelength images from the original recorded multi-slit data. The resulting design achieves a far-field angular resolution below 7″ and a spectral resolution below 0.05 nm. The field of view is ±3 R_☉ along the multi-slit moving direction, where R_☉ represents the radius of the solar disk. The ratio of the corona's stray-light intensity to the solar center's irradiation intensity is less than 10^(-6) at the circle of 1.3 R_☉.