This work introduces a modification to the Heisenberg Uncertainty Principle (HUP) by incorporating quantum complexity, including potential nonlinear effects. Our theoretical framework extends the traditional HUP to consider the complexity of quantum states, offering a more nuanced understanding of measurement precision. By adding a complexity term to the uncertainty relation, we explore nonlinear modifications such as polynomial, exponential, and logarithmic functions. Rigorous mathematical derivations demonstrate the consistency of the modified principle with classical quantum mechanics and quantum information theory. We investigate the implications of this modified HUP for various aspects of quantum mechanics, including quantum metrology, quantum algorithms, quantum error correction, and quantum chaos. Additionally, we propose experimental protocols to test the validity of the modified HUP, evaluating their feasibility with current and near-term quantum technologies. This work highlights the importance of quantum complexity in quantum mechanics and provides a refined perspective on the interplay between complexity, entanglement, and uncertainty in quantum systems. The modified HUP has the potential to stimulate interdisciplinary research at the intersection of quantum physics, information theory, and complexity theory, with significant implications for the development of quantum technologies and the understanding of the quantum-to-classical transition.
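The abstract does not state the modified relation explicitly; one hedged illustration of a complexity-augmented uncertainty relation with the polynomial, exponential, and logarithmic variants mentioned above (the coefficient \(\alpha\) and the specific forms of \(f\) are our assumptions, not the paper's) is:

```latex
\Delta x\,\Delta p \;\ge\; \frac{\hbar}{2}\Bigl[\,1 + \alpha\, f\bigl(\mathcal{C}(\psi)\bigr)\Bigr],
\qquad
f(\mathcal{C}) \in \Bigl\{\,\mathcal{C}^{\,k},\; e^{\lambda\mathcal{C}} - 1,\; \ln(1 + \mathcal{C})\,\Bigr\},
```

where \(\mathcal{C}(\psi)\) is a complexity measure of the state \(\psi\); setting \(\alpha = 0\) recovers the standard HUP.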
The prediction of intrinsically disordered proteins (IDPs) is a hot research area in bioinformatics. Due to the high cost of experimental methods for evaluating disordered regions of protein sequences, it is becoming increasingly important to predict those regions through computational methods. In this paper, we developed a novel scheme that employs sequence complexity to calculate six features for each residue of a protein sequence: the Shannon entropy, the topological entropy, the sample entropy, and three amino acid preferences, namely Remark 465, Deleage/Roux, and B-factor (2STD). In particular, we introduced the sample entropy for calculating time-series complexity by mapping the amino acid sequence to a time series of digits 0-9. To our knowledge, the sample entropy has not previously been used for predicting IDPs and is therefore used for the first time in our study. In addition, the scheme used a properly sized sliding window in every protein sequence, which greatly improved the prediction performance. Finally, we applied seven machine learning algorithms with 10-fold cross-validation to the dataset R80 collected by Yang et al. and the dataset DIS1556 from the Database of Protein Disorder (DisProt) (https://www.disprot.org), which contains experimentally determined IDPs. The results showed that k-Nearest Neighbor was the most appropriate and achieved an overall prediction accuracy of 92%. Furthermore, our method uses only six features and hence requires lower computational complexity.
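As an illustration of two of the six features, the sketch below computes the Shannon entropy and the sample entropy over a sliding window of a digit-mapped sequence. The 10-group amino-acid mapping, the window size, and the tolerance r are our own placeholder choices, not the ones used in the paper.

```python
import math
from collections import Counter

def shannon_entropy(window):
    """Shannon entropy (bits) of the symbol frequencies in a window."""
    counts = Counter(window)
    n = len(window)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def sample_entropy(series, m=2, r=0.5):
    """Sample entropy -ln(A/B): B counts pairs of matching templates of
    length m (Chebyshev distance <= r), A those of length m + 1."""
    n = len(series)

    def count_matches(length):
        total = 0
        for i in range(n - length):
            for j in range(i + 1, n - length + 1):
                if all(abs(series[i + k] - series[j + k]) <= r for k in range(length)):
                    total += 1
        return total

    b, a = count_matches(m), count_matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

# Hypothetical mapping of the 20 amino acids into ten groups (digits 0-9).
GROUPS = {aa: d for d, grp in enumerate(
    ["AG", "VL", "IP", "FM", "WC", "ST", "YN", "QH", "DE", "KR"]) for aa in grp}

seq = "MKVLAAGDESTKRWQHYNIP"
series = [GROUPS[aa] for aa in seq]
window = 9
features = [(shannon_entropy(series[i:i + window]),
             sample_entropy(series[i:i + window]))
            for i in range(len(series) - window + 1)]
```

A per-residue feature vector would then combine such window statistics with the three amino-acid preference scales.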
The NP-completeness of the topological spatial reasoning problem has been proved. Given the similarity of its uncertainty to that of topological spatial reasoning, the directional spatial reasoning problem should also be NP-complete. The proof of NP-completeness for directional spatial reasoning is based on two important transformations. After these transformations, a spatial configuration is constructed from directional constraints, and the NP-completeness of directional spatial reasoning is proved with the help of the consistency of the constraints in the configuration.
In this paper, we define two versions of the Untrapped set (weak and strong Untrapped sets) over a finite set of alternatives. These versions, considered as choice procedures, extend the notion of the Untrapped set to a more general case (i.e., when alternatives are not necessarily comparable). We show that they both coincide with the Top cycle choice procedure for tournaments. In the case of weak tournaments, the strong Untrapped set is equivalent to the Getcha choice procedure, and the weak Untrapped set is exactly the Untrapped set studied in the literature. We also present a polynomial-time algorithm for computing each set.
This paper proposes a low-cost yet high-accuracy direction-of-arrival (DOA) estimation method for automotive frequency-modulated continuous-wave (FMCW) radar. Existing subspace-based DOA estimation algorithms suffer from either high computational cost or low accuracy. We aim to resolve this contradiction between complexity and accuracy by using randomized matrix approximation. Specifically, we apply an easily interpretable randomized low-rank approximation to the covariance matrix (CM) R∈C^(M×M) through sketch matrices in the form R≈QBQ^(H), and then approximately compute its subspaces. Here, the matrix Q∈C^(M×z) contains the orthonormal basis for the range of the sketch matrix C∈C^(M×z), which is extracted from R using randomized uniform column sampling, and B∈C^(z×z) is a weight matrix reducing the approximation error. Relying on this approximation, we are able to accelerate the subspace computation by orders of magnitude without compromising estimation accuracy. Furthermore, we derive a theoretical error bound for the suggested scheme to ensure the accuracy of the approximation. As validated by the simulation results, the DOA estimation accuracy of the proposed algorithm, efficient multiple signal classification (E-MUSIC), is high, closely tracks standard MUSIC, and outperforms well-known algorithms with tremendously reduced time complexity. Thus, the devised method can realize high-resolution real-time target detection in emerging multiple-input multiple-output (MIMO) automotive radar systems.
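A minimal sketch of such a sketch-based subspace approximation R ≈ QBQ^H, assuming uniform column sampling and a least-squares weight matrix; the paper's exact sampling scheme and weighting may differ, and the toy covariance matrix below is our own construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def randomized_subspace(R, z):
    """Approximate R (Hermitian, M x M) as Q B Q^H from z sampled columns.

    Q (M x z) holds an orthonormal basis for the range of the sampled
    columns C; B (z x z) is the least-squares weight matrix minimizing
    ||R - Q B Q^H||_F for that Q."""
    M = R.shape[0]
    cols = rng.choice(M, size=z, replace=False)   # uniform column sampling
    C = R[:, cols]                                # sketch matrix, M x z
    Q, _ = np.linalg.qr(C)                        # orthonormal basis of range(C)
    B = Q.conj().T @ R @ Q                        # weight matrix
    return Q, B

# Toy example: a rank-3 Hermitian "covariance" matrix of size 32.
M, r = 32, 3
A = rng.standard_normal((M, r)) + 1j * rng.standard_normal((M, r))
R = A @ A.conj().T
Q, B = randomized_subspace(R, z=8)
err = np.linalg.norm(R - Q @ B @ Q.conj().T) / np.linalg.norm(R)
```

Because the toy matrix has rank 3, sampling z = 8 columns almost surely captures its full range, so the reconstruction error is negligible; the approximate signal subspace is then read off from Q and B instead of a full eigendecomposition of R.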
Severe water erosion is notorious for its harmful effects on land-water resources as well as local societies. The scale effects of water erosion, however, greatly exacerbate the difficulties of accurate erosion evalua...Severe water erosion is notorious for its harmful effects on land-water resources as well as local societies. The scale effects of water erosion, however, greatly exacerbate the difficulties of accurate erosion evaluation and hazard control in the real world. Analyzing the related scale issues is thus urgent for a better understanding of erosion variations as well as reducing such erosion. In this review article, water erosion dynamics across three spatial scales including plot, watershed, and regional scales were selected and discussed. For the study purposes and objectives, the advantages and disadvantages of these scales all demonstrate clear spatial-scale dependence. Plot scale studies are primarily focused on abundant data collection and mechanism discrimination of erosion generation, while watershed scale studies provide valuable information for watershed management and hazard control as well as the development of quantitatively distributed models. Regional studies concentrate more on large-scale erosion assessment, and serve policymakers and stakeholders in achieving the basis for regulatory policy for comprehensive land uses. The results of this study show that the driving forces and mechanisms of water erosion variations among the scales are quite different. As a result, several major aspects contributing to variations in water erosion across the scales are stressed: differences in the methodologies across various scales, different sink-source roles on water erosion processes, and diverse climatic zones and morphological regions. This variability becomes more complex in the context of accelerated global change. 
The changing climatic factors and earth surface features are considered the fourth key reason responsible for the increased variability of water erosion across spatial scales.
Minimizing the impact of mixed uncertainties (i.e., aleatory and epistemic uncertainty) to improve the quality of a complex product of compliant mechanism (CPCM) is a fascinating research topic. However, most existing works on CPCM robust design optimization neglect the mixed uncertainties, which might result in an unstable or even infeasible design. To solve this issue, a response surface methodology-based hybrid robust design optimization (RSM-based HRDO) approach is proposed to improve the robustness of the quality characteristic of the CPCM by considering the mixed uncertainties in the robust design optimization. A bridge-type amplification mechanism is used to demonstrate the effectiveness of the proposed approach. The comparison results prove that the proposed approach not only retains its superiority in robustness but also provides a robust scheme for optimizing the design parameters.
The title complex is widely used as an efficient key component of Ziegler-Natta catalysts for the stereospecific polymerization of dienes to produce synthetic rubbers. However, the quantitative structure-activity relationship (QSAR) of this kind of complex is still not clear, mainly due to the difficulty of obtaining their geometric molecular structures through laboratory experiments. An alternative solution is quantum chemistry calculation, in which the conformational population must be determined. In this study, ten conformers of the title complex were obtained with the molecular dynamics conformational search function in Gabedit 2.4.8, and their geometry optimization and thermodynamics calculations were performed with a Sparkle/PM7 approach in MOPAC 2012. Their Gibbs free energies at 1 atm and 298.15 K were calculated. The population of the conformers was then calculated according to the Boltzmann distribution, indicating that one of the ten conformers has a dominant population of 77.13%.
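The Boltzmann-distribution step can be sketched as follows; the free-energy values are illustrative placeholders, not the ones computed for the ten conformers in the study.

```python
import math

# Illustrative Gibbs free energies of conformers (kcal/mol, relative values).
G = [0.0, 0.72, 1.10, 1.85, 2.40]
RT = 0.593  # k_B*T in kcal/mol at 298.15 K (R = 1.987e-3 kcal/(mol*K))

# Boltzmann weights relative to the lowest-energy conformer.
weights = [math.exp(-(g - min(G)) / RT) for g in G]
Z = sum(weights)                         # partition function over conformers
populations = [w / Z for w in weights]   # fractional populations, sum to 1
```

The lowest-energy conformer dominates the ensemble, mirroring how a single conformer can carry a 77% population in the study.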
This paper studies the global fixed-time synchronization of complex dynamical networks, including non-identical nodes with disturbances and uncertainties as well as input nonlinearity. First, a novel fixed-time sliding manifold is constructed to achieve fixed-time synchronization of a complex dynamical network with disturbances and uncertainties. Second, a novel sliding mode controller is proposed to realize the global fixed-time reachability of the sliding surfaces. The outstanding feature of the designed control is that the fixed convergence times of both the reaching and sliding modes can be adjusted to desired values in advance by choosing explicit parameters in the controller, independently of the initial conditions and the topology of the network. Finally, the effectiveness and validity of the obtained results are demonstrated by corresponding numerical simulations.
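The abstract does not reproduce the manifold or controller; as a hedged illustration, fixed-time designs of this kind typically build on a reaching law of the standard form (our sketch, not the paper's exact construction):

```latex
\dot{s} = -\alpha\,|s|^{p}\,\operatorname{sign}(s) - \beta\,|s|^{q}\,\operatorname{sign}(s),
\qquad \alpha,\beta > 0,\quad 0 < p < 1,\quad q > 1,
```

for which the settling time obeys \(T \le \frac{1}{\alpha(1-p)} + \frac{1}{\beta(q-1)}\) for every initial condition \(s(0)\), which is what allows the convergence time to be assigned in advance through the controller parameters alone.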
Protein-protein complexes play an important role in the physiology and pathology of cellular functions, and are therefore attractive therapeutic targets. A small subset of residues known as "hot spots" accounts for most of the protein-protein binding free energy. Computational methods play a critical role in identifying hot spots on the protein-protein interface. In this paper, we use a computational alanine scanning method with all-atom force fields to predict hot spots for 313 mutations in 16 protein complexes of known structure. We studied the effect of force fields, solvation models, and conformational sampling on the hot spot predictions. We compared the calculated change in protein-protein interaction energies upon mutation of residues in and near the protein-protein interface to the experimental change in free energies. The AMBER force field (FF) predicted 86% of the hot spots among the three FFs commonly used for proteins, namely the AMBER, Charmm27, and OPLS-2005 FFs. However, the AMBER FF also showed a high rate of false positives, while the Charmm27 FF yielded 74% correct predictions of hot spot residues with few false positives. The van der Waals and hydrogen bonding energies make the largest contributions with a high rate of prediction accuracy, while the desolvation energy was found to contribute little to improving hot spot prediction. Using a conformational ensemble including limited backbone movement instead of one static structure leads to better prediction of hot spots.
Gas release and dispersion is a major concern in the chemical industry. In order to manage and mitigate the risk of gas dispersion and its consequences, it is necessary to predict gas dispersion behavior and the gas concentration at various locations upon emission. Therefore, models and commercial packages such as Phast and ALOHA have been developed. Computational fluid dynamics (CFD) can be a useful tool to simulate gas dispersion in complex areas and conditions. Validating the models requires experimental data from field and wind tunnel experiments. It appears that using experimental data that cover only certain monitor points, rather than the entire domain, to validate a CFD method can lead to unreliable results for the intended areas of concern. In this work, some trials of the Kit Fox field experiment, which provided a wide-ranging database for gas dispersion, were simulated by CFD. Various scenarios were considered with different mesh sizes, physical conditions, and types of release. The results of the simulations were surveyed over the whole domain. The data matching each scenario varied with the dominant displacement force (wind or diffusivity). Furthermore, the statistical parameters suggested for heavy gas dispersion showed a dependency on the lower band of gas concentration and should therefore be used with caution. Finally, the results and computational cost of a simulation can be affected by the chosen scenario, the location of the intended points, and the release type.
The cathode of a biofuel cell reduces molecular oxygen to water using four electrons and contains laccase, an enzyme of the multicopper oxidase family, though electron transfer from the electrode to the enzyme is the rate-determining process. To improve this electron transfer via mediators, we have investigated several metal complexes as mediators between the electrode and laccase, in particular at the hydrophobic pocket on its surface. We discuss DFT computational results and selected experimental data for new Mn(III/II) Schiff base complexes bearing redox-active (anthraquinone) ligands and photochromic (azobenzene) ligands, focusing on the azobenzene moiety at the single-molecule level. Moreover, we carried out computational docking simulations of laccase and the complexes, considering the trans-cis photoisomerization (electronic states) and the Weigert effect (molecular orientation for a better fit) of the azobenzene moiety. Actual experimental data are also presented to indicate the expected merits of the mediators.
Based on the iterative bit-filling procedure, a computationally efficient bit and power allocation algorithm is presented. The algorithm improves the conventional bit-filling algorithms by maintaining only a subset of subcarriers for computation in each iteration, which reduces the complexity without any performance degradation. Moreover, a modified algorithm with even lower complexity is developed, and equal power allocation is introduced as an initial allocation to accelerate its convergence. Simulation results show that the modified algorithm achieves a considerable complexity reduction while causing only a minor drop in performance.
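A minimal sketch of the conventional greedy bit-filling step such algorithms build on: each iteration grants one more bit to the subcarrier with the smallest incremental power cost until the budget is exhausted. The gain values, the power budget, and the normalized (2^b)/g incremental-power formula are illustrative assumptions, not the paper's exact system model.

```python
def bit_fill(gains, total_power, max_bits=8):
    """Greedy bit-filling: repeatedly grant one more bit to the subcarrier
    whose incremental power cost is smallest, until the budget runs out."""
    n = len(gains)
    bits = [0] * n
    used = 0.0

    def delta(i):
        # Normalized extra power to go from bits[i] to bits[i] + 1 bits
        # on subcarrier i with channel gain gains[i].
        return (2 ** bits[i]) / gains[i]

    while True:
        candidates = [i for i in range(n) if bits[i] < max_bits]
        if not candidates:
            break
        best = min(candidates, key=delta)
        if used + delta(best) > total_power:
            break
        used += delta(best)
        bits[best] += 1
    return bits, used

bits, used = bit_fill([1.0, 0.5, 2.0, 0.25], total_power=20.0)
```

The improvement described in the abstract would restrict the candidate list to a maintained subset of subcarriers rather than scanning all of them in every iteration.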
Computational time complexity analyses of evolutionary algorithms (EAs) have been performed since the mid-nineties. The first results concerned very simple algorithms, such as the (1+1)-EA, on toy problems. These efforts produced a deeper understanding of how EAs perform on different kinds of fitness landscapes, and general mathematical tools that may be extended to the analysis of more complicated EAs on more realistic problems. In fact, in recent years it has become possible to analyze the (1+1)-EA on combinatorial optimization problems with practical applications, and more realistic population-based EAs on structured toy problems. This paper presents a survey of the results obtained in the last decade along these two research lines. The most common mathematical techniques are introduced, the basic ideas behind them are discussed, and their applications are highlighted. Problems that have been solved are enumerated, as are those still awaiting a solution. New questions and problems that have arisen in the meantime are also considered.
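The (1+1)-EA on the OneMax toy problem mentioned above, the classic setting of the early runtime analyses with expected optimization time Θ(n log n), can be sketched as follows (the problem size, iteration cap, and seed are our own choices):

```python
import random

def one_plus_one_ea(n, max_iters=100_000, seed=1):
    """(1+1)-EA on OneMax: flip each bit independently with probability 1/n,
    keep the offspring if it is at least as fit; return iterations used."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = sum(x)                       # OneMax fitness: number of ones
    for t in range(1, max_iters + 1):
        y = [b ^ 1 if rng.random() < 1.0 / n else b for b in x]
        fy = sum(y)
        if fy >= fx:                  # accept if not worse
            x, fx = y, fy
        if fx == n:                   # global optimum reached
            return t
    return max_iters

iters = one_plus_one_ea(30)
```

For n = 30, the optimum is typically reached in a few hundred iterations, consistent with the e·n·ln n order of the classical bound.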
For future wireless communication systems, Power Domain Non-Orthogonal Multiple Access (PD-NOMA) using an advanced receiver has been considered a promising radio access technology candidate. Power allocation plays an important role in the PD-NOMA system because it considerably affects the total throughput and Geometric Mean User Throughput (GMUT) performance. However, most existing studies have not fully accounted for the computational complexity of the power allocation process when the User Terminals (UTs) move in a slow fading channel environment. To resolve this problem, a power allocation method is proposed to considerably reduce the search space of a Full Search Power (FSP) allocation algorithm. The initial power reallocation coefficients are set to the former optimal values by the proposed lemma before searching for optimal power reallocation coefficients based on total throughput performance. Step size and correction granularity are adjusted within a much narrower power search range, while invalid power combinations may reasonably be discarded during the search process. The simulation results show that the proposed power reallocation scheme can greatly reduce computational complexity while the total throughput and GMUT performance loss are no greater than 1.5% compared with the FSP algorithm.
In the last century, there was significant development in methods to predict ground movement due to underground extraction. Some remarkable developments in three-dimensional computational methods have supported civil engineering, subsidence engineering, and mining engineering practice. However, the ground movement problem due to a mining extraction sequence is effectively four-dimensional (4D). A rational prediction is becoming more and more important for long-term underground mining planning. Hence, computer-based analytical methods that realistically simulate the spatially distributed, time-dependent ground movement process are needed for reliable long-term underground mining planning to minimize surface environmental damage. In this research, a new computational system is developed to simulate 4D ground movement by combining a stochastic medium theory, the Knothe time-delay model, and geographic information system (GIS) technology. All the calculations are implemented by a computational program, in which GIS components are used to fulfill the spatial-temporal analysis model. In this paper, a tight coupling strategy based on the component object model of GIS technology is used to overcome the problems of a complex three-dimensional extraction model and spatial data integration. Moreover, the implementation of the computational interfaces of the developed tool is described. The GIS-based tool is validated by two case studies.
The developed computational tool and models are realized within the GIS system, so an effective and efficient calculation methodology is obtained, and the simulation problems of 4D ground movement due to an underground mining extraction sequence can be solved by implementing the developed tool in GIS.
Variable block-size motion estimation (ME) and disparity estimation (DE) are adopted in multi-view video coding (MVC) to achieve high coding efficiency. However, much higher computational complexity is also introduced into the coding system, which hinders the practical application of MVC. An efficient fast mode decision method using mode complexity is proposed to reduce the computational complexity. In the proposed method, mode complexity is first computed using the spatial, temporal, and inter-view correlation between the current macroblock (MB) and its neighboring MBs. Based on the observation that the direct mode is highly likely to be the optimal mode, the mode complexity is always checked in advance against a predefined threshold, providing an efficient early termination opportunity. If this early termination condition is not met, the MBs are classified into three mode types according to the value of mode complexity, i.e., simple mode, medium mode, and complex mode, to speed up the encoding process by reducing the number of variable block modes that must be checked. Furthermore, for simple and medium mode regions, the rate distortion (RD) cost of mode 16×16 in the temporal prediction direction is compared with that of the disparity prediction direction, to determine in advance whether the optimal prediction direction is temporal, thereby skipping unnecessary disparity estimation. Experimental results show that the proposed method reduces the computational load by 78.79% and the total bit rate by 0.07% on average, while incurring only a negligible loss of PSNR (about 0.04 dB on average), compared with the full mode decision (FMD) in the MVC reference software.
The implicit Colebrook equation has been the standard for estimating pipe friction factor in a fully developed turbulent regime. Several alternative explicit models to the Colebrook equation have been proposed. To date, most of the accurate explicit models have been those with three logarithmic functions, but they require more computational time than the Colebrook equation. In this study, a new explicit non-linear regression model which has only two logarithmic functions is developed. The new model, when compared with the existing extremely accurate models, gives rise to the least average and maximum relative errors of 0.0025% and 0.0664%, respectively. Moreover, it requires far less computational time than the Colebrook equation. It is therefore concluded that the new explicit model provides a good trade-off between accuracy and relative computational efficiency for pipe friction factor estimation in the fully developed turbulent flow regime.
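For reference, the implicit Colebrook equation itself can be solved with a short fixed-point iteration on x = 1/sqrt(f); the tolerance and initial guess below are our own choices, and the paper's new two-logarithm explicit model is not reproduced here.

```python
import math

def colebrook(Re, eps_rel, tol=1e-12, max_iters=100):
    """Darcy friction factor from the implicit Colebrook equation
        1/sqrt(f) = -2 log10( eps_rel/3.7 + 2.51/(Re*sqrt(f)) )
    solved by fixed-point iteration on x = 1/sqrt(f)."""
    x = 7.0  # initial guess for 1/sqrt(f), a typical turbulent magnitude
    for _ in range(max_iters):
        x_new = -2.0 * math.log10(eps_rel / 3.7 + 2.51 * x / Re)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return 1.0 / x_new ** 2

f = colebrook(Re=1e5, eps_rel=1e-4)  # moderate Reynolds number, low roughness
```

An explicit model replaces this iteration with a single closed-form evaluation, which is where the computational-time savings reported above come from.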
Funding: Under the auspices of the National Natural Science Foundation of China (No. 40925003, 40930528, 40801041)
Abstract: Severe water erosion is notorious for its harmful effects on land and water resources as well as on local societies. The scale effects of water erosion, however, greatly exacerbate the difficulties of accurate erosion evaluation and hazard control in the real world. Analyzing the related scale issues is thus urgent for a better understanding of erosion variations as well as for reducing such erosion. In this review article, water erosion dynamics across three spatial scales, including the plot, watershed, and regional scales, are selected and discussed. For the study purposes and objectives, the advantages and disadvantages of these scales all demonstrate clear spatial-scale dependence. Plot-scale studies focus primarily on abundant data collection and discrimination of the mechanisms of erosion generation, while watershed-scale studies provide valuable information for watershed management and hazard control as well as for the development of quantitative distributed models. Regional studies concentrate more on large-scale erosion assessment and serve policymakers and stakeholders in establishing the basis for regulatory policy for comprehensive land use. The results of this study show that the driving forces and mechanisms of water erosion variation differ considerably among the scales. Accordingly, several major aspects contributing to variations in water erosion across the scales are stressed: differences in methodology across the scales, different sink-source roles in water erosion processes, and diverse climatic zones and morphological regions. This variability becomes more complex in the context of accelerated global change. The changing climatic factors and earth-surface features are considered a fourth key reason for the increased variability of water erosion across spatial scales.
Funding: Supported by the National Natural Science Foundation of China (71702072, 71811540414, 71573115), the Natural Science Foundation for Jiangsu Institutions (BK20170810), and the Ministry of Education Humanities and Social Science Planning Fund (18YJA630008)
Abstract: Minimizing the impact of mixed uncertainties (i.e., aleatory and epistemic uncertainty) on the quality of a complex product of compliant mechanism (CPCM) is a fascinating research topic for enhancing robustness. However, most existing works on CPCM robust design optimization neglect the mixed uncertainties, which might result in an unstable or even infeasible design. To solve this issue, a response surface methodology-based hybrid robust design optimization (RSM-based HRDO) approach is proposed to improve the robustness of the quality characteristic of the CPCM by considering the mixed uncertainties in the robust design optimization. A bridge-type amplification mechanism is used to demonstrate the effectiveness of the proposed approach. The comparison results show that the proposed approach not only retains its superiority in robustness but also provides a robust scheme for optimizing the design parameters.
Funding: Supported by the National Natural Science Foundation of China (No. 21476119)
Abstract: The title complex is widely used as an efficient key component of Ziegler-Natta catalysts for the stereospecific polymerization of dienes to produce synthetic rubbers. However, the quantitative structure-activity relationship (QSAR) of this kind of complex is still not clear, mainly due to the difficulty of obtaining their geometric molecular structures through laboratory experiments. An alternative is quantum chemistry calculation, in which the conformational population must be determined. In this study, ten conformers of the title complex were obtained with the molecular dynamics conformational search function in Gabedit 2.4.8, and their geometry optimization and thermodynamics calculations were performed with the Sparkle/PM7 approach in MOPAC 2012. Their Gibbs free energies at 1 atm and 298.15 K were calculated. The population of the conformers was then calculated according to the Boltzmann distribution, indicating that one of the ten conformers has a dominant population of 77.13%.
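For reference, the Boltzmann population step can be sketched as follows. The gas constant value and the kcal/mol energy units are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def boltzmann_populations(G, T=298.15):
    """Fractional populations from Gibbs free energies G (kcal/mol) at T (K)."""
    R = 1.987204e-3                        # gas constant in kcal/(mol K) [assumed units]
    g = np.asarray(G, dtype=float)
    w = np.exp(-(g - g.min()) / (R * T))   # shift by the minimum for numerical stability
    return w / w.sum()
```

A conformer lying a few kcal/mol above the lowest-energy one receives a negligible population at room temperature, which is how a single conformer can dominate as reported.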
Abstract: This paper studies the global fixed-time synchronization of a complex dynamical network, including non-identical nodes with disturbances and uncertainties as well as input nonlinearity. First, a novel fixed-time sliding manifold is constructed to achieve fixed-time synchronization of the complex dynamical network with disturbances and uncertainties. Second, a novel sliding mode controller is proposed to realize global fixed-time reachability of the sliding surfaces. The outstanding feature of the designed control is that the fixed convergence times of both the reaching and sliding modes can be adjusted to desired values in advance by choosing explicit parameters in the controller, independent of the initial conditions and the topology of the network. Finally, the effectiveness and validity of the obtained results are demonstrated by corresponding numerical simulations.
Abstract: Protein-protein complexes play an important role in the physiology and pathology of cellular functions and are therefore attractive therapeutic targets. A small subset of residues, known as "hot spots", accounts for most of the protein-protein binding free energy. Computational methods play a critical role in identifying hot spots on the protein-protein interface. In this paper, we use a computational alanine scanning method with all-atom force fields to predict hot spots for 313 mutations in 16 protein complexes of known structure. We studied the effect of force fields, solvation models, and conformational sampling on the hot spot predictions. We compared the calculated change in protein-protein interaction energies upon mutation of residues in and near the protein-protein interface with the experimental change in free energies. The AMBER force field (FF) predicted 86% of the hot spots among the three FFs commonly used for proteins, namely the AMBER FF, the Charmm27 FF, and the OPLS-2005 FF. However, the AMBER FF also showed a high rate of false positives, while the Charmm27 FF yielded 74% correct predictions of hot spot residues with few false positives. The van der Waals and hydrogen-bonding energies make the largest contributions with a high rate of prediction accuracy, while the desolvation energy was found to contribute little to improving hot spot prediction. Using a conformational ensemble including limited backbone movement, instead of one static structure, leads to better prediction of hot spots.
Funding: The authors acknowledge the support provided by the Iranian Research Organization for Science and Technology (IROST) in conducting this research.
Abstract: Gas release and dispersion is a major concern in the chemical industries. In order to manage and mitigate the risk of gas dispersion and its consequences, it is necessary to predict gas dispersion behavior and concentrations at various locations upon emission. Therefore, models and commercial packages such as Phast and ALOHA have been developed. Computational fluid dynamics (CFD) can be a useful tool to simulate gas dispersion in complex areas and conditions. Validating the models requires experimental data from field and wind-tunnel experiments. It appears that using experimental data to validate a CFD method at only certain monitor points, rather than over the entire domain, can lead to unreliable results for the intended areas of concern. In this work, some trials of the Kit Fox field experiment, which provided a wide-ranging database for gas dispersion, were simulated by CFD. Various scenarios were considered with different mesh sizes, physical conditions, and types of release. The results of the simulations were surveyed over the whole domain. The data matching each scenario varied with the dominant displacement force (wind or diffusivity). Furthermore, the statistical parameters suggested for heavy gas dispersion showed a dependency on the lower band of gas concentration; therefore, they should be used with caution. Finally, the results and computational cost of a simulation can be affected by the chosen scenario, the location of the intended points, and the release type.
Abstract: The cathode of a biofuel cell reduces molecular oxygen to water using four electrons and contains laccase, an enzyme of the multicopper oxidase family, although electron transfer from the electrode to the enzyme is the rate-determining process. To improve this electron transfer via mediators, we have investigated several mediator metal complexes placed between the electrode and laccase, in particular at the hydrophobic pocket on its surface. We discuss DFT computational results and selected experimental data for new Mn(III/II) Schiff base complexes bearing redox-active (anthraquinone) ligands and photochromic (azobenzene) ligands, focusing on the azobenzene moiety at the single-molecule level. Moreover, we carried out computational docking simulations of laccase and the complexes, considering the trans-cis photoisomerization (electronic states) and the Weigert effect (molecular orientation for a better fit) of the azobenzene moiety. Experimental data are also presented to indicate the expected merits of these complexes as mediators.
Funding: The National High Technology Research and Development Program of China (863 Program) (No. 2006AA01Z263) and the National Natural Science Foundation of China (No. 60496311)
Abstract: Based on the iterative bit-filling procedure, a computationally efficient bit and power allocation algorithm is presented. The algorithm improves on conventional bit-filling algorithms by maintaining only a subset of subcarriers for computation in each iteration, which reduces complexity without any performance degradation. Moreover, a modified algorithm with even lower complexity is developed, and equal power allocation is introduced as an initial allocation to accelerate its convergence. Simulation results show that the modified algorithm achieves a considerable complexity reduction while causing only a minor drop in performance.
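A generic greedy bit-filling iteration can be sketched as follows. The normalized incremental-power model 2^b/g is an illustrative assumption rather than the paper's exact cost function, and the paper's subset-maintenance speedup is approximated here by a heap:

```python
import heapq

def bit_filling(gains, total_bits):
    """Greedy bit-filling: always add the next bit where it costs least power.

    Incremental power for the (b+1)-th bit on a subcarrier with gain g is
    modeled as 2**b / g (normalized; illustrative assumption).
    """
    bits = [0] * len(gains)
    heap = [(1.0 / g, i) for i, g in enumerate(gains)]  # cost of each first bit
    heapq.heapify(heap)
    power = 0.0
    for _ in range(total_bits):
        dp, i = heapq.heappop(heap)      # cheapest incremental bit
        power += dp
        bits[i] += 1
        heapq.heappush(heap, (2.0 ** bits[i] / gains[i], i))
    return bits, power
```

The heap keeps only the current cheapest candidate per subcarrier visible, which mirrors the idea of restricting each iteration to a small working subset of subcarriers.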
Funding: This work was supported by an EPSRC grant (No. EP/C520696/1).
Abstract: Computational time complexity analyses of evolutionary algorithms (EAs) have been performed since the mid-nineties. The first results were related to very simple algorithms, such as the (1+1)-EA, on toy problems. These efforts produced a deeper understanding of how EAs perform on different kinds of fitness landscapes, and general mathematical tools that may be extended to the analysis of more complicated EAs on more realistic problems. In fact, in recent years it has become possible to analyze the (1+1)-EA on combinatorial optimization problems with practical applications, and more realistic population-based EAs on structured toy problems. This paper presents a survey of the results obtained in the last decade along these two research lines. The most common mathematical techniques are introduced, the basic ideas behind them are discussed, and their applications are highlighted. Solved problems that were still open are enumerated, as are those still awaiting a solution. New questions and problems that have arisen in the meantime are also considered.
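A minimal (1+1)-EA of the kind analyzed in these runtime results can be sketched as follows, using standard bit-flip mutation with rate 1/n and elitist acceptance; the OneMax usage below is illustrative:

```python
import random

def one_plus_one_ea(n, fitness, target, max_iters=200_000, seed=0):
    """(1+1)-EA: flip each bit with prob 1/n, keep the offspring if not worse."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = fitness(x)
    for t in range(1, max_iters + 1):
        y = [b ^ (rng.random() < 1.0 / n) for b in x]  # standard bit-flip mutation
        fy = fitness(y)
        if fy >= fx:                                   # elitist acceptance
            x, fx = y, fy
        if fx >= target:
            return x, t
    return x, max_iters
```

On OneMax (fitness = number of ones, target = n), the expected optimization time of this algorithm is the classic Theta(n log n) result from the early runtime analyses.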
基金supported in part by the Science and Technology Research Program of the National Science Foundation of China(61671096)Chongqing Research Program of Basic Science and Frontier Technology(cstc2017jcyjBX0005)+1 种基金Chongqing Municipal Education Commission(KJQN201800642)Doctoral Student Training Program(BYJS2016009).
Abstract: For future wireless communication systems, Power-Domain Non-Orthogonal Multiple Access (PD-NOMA) using an advanced receiver has been considered a promising radio access technology candidate. Power allocation plays an important role in the PD-NOMA system because it considerably affects the total throughput and Geometric Mean User Throughput (GMUT) performance. However, most existing studies have not fully accounted for the computational complexity of the power allocation process when the User Terminals (UTs) move in a slow-fading channel environment. To resolve this problem, a power allocation method is proposed that considerably reduces the search space of a Full Search Power (FSP) allocation algorithm. The initial power reallocation coefficients are set to the former optimal values by a proposed lemma before searching for optimal power reallocation coefficients based on total throughput performance. Step size and correction granularity are adjusted within a much narrower power search range, and invalid power combinations may be reasonably discarded during the search process. Simulation results show that the proposed power reallocation scheme can greatly reduce computational complexity while the total throughput and GMUT performance losses are no greater than 1.5% compared with the FSP algorithm.
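The idea of shrinking the search range around former optimal coefficients can be illustrated with a toy two-user PD-NOMA power split. The throughput model, SNR value, and window parameters below are illustrative assumptions, not the paper's scheme:

```python
import numpy as np

def noma_throughput(a, g1, g2, snr=100.0):
    """Sum throughput of a 2-user downlink PD-NOMA split (fraction a to user 1).

    Strong user (gain g1 > g2) applies SIC; the weak user treats the strong
    user's signal as interference. Illustrative model only.
    """
    r1 = np.log2(1 + a * snr * g1)
    r2 = np.log2(1 + (1 - a) * snr * g2 / (a * snr * g2 + 1))
    return r1 + r2

def narrowed_search(prev_a, g1, g2, width=0.1, step=0.001):
    """Warm-started search over a narrow window around the previous optimum."""
    lo = max(step, prev_a - width)
    hi = min(1.0 - step, prev_a + width)
    grid = np.arange(lo, hi + step / 2, step)
    return grid[np.argmax([noma_throughput(a, g1, g2) for a in grid])]
```

In a slow-fading channel the optimum drifts slowly, so scanning only a window of width 2×width around the previous coefficient evaluates far fewer candidates than a full [0, 1] grid search while finding essentially the same optimum.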
Abstract: In the last century, there was significant development in the evaluation of methods to predict ground movement due to underground extraction. Some remarkable developments in three-dimensional computational methods have been adopted in civil engineering, subsidence engineering, and mining engineering practice. However, the ground movement problem due to a mining extraction sequence is effectively four-dimensional (4D). Rational prediction is becoming more and more important for long-term underground mine planning. Hence, computer-based analytical methods that realistically simulate spatially distributed, time-dependent ground movement processes are needed for reliable long-term underground mine planning that minimizes surface environmental damage. In this research, a new computational system is developed to simulate 4D ground movement by combining a stochastic medium theory, the Knothe time-delay model, and geographic information system (GIS) technology. All the calculations are implemented by a computational program, in which the components of GIS are used to fulfill the spatial-temporal analysis model. In this paper, a tight coupling strategy based on the component object model of GIS technology is used to overcome the problems of a complex three-dimensional extraction model and spatial data integration. Moreover, the computational implementation of the interfaces of the developed tool is described. The GIS-based tool is validated by two case studies. Because the computational tool and models are implemented within the GIS system, an effective and efficient calculation methodology is obtained, and the simulation of 4D ground movement due to an underground mining extraction sequence can be carried out directly in GIS.
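The combination of a stochastic-medium influence profile with the Knothe time function can be sketched for a single surface point as follows. The Gaussian trough shape and all parameter names are illustrative assumptions, not the paper's calibrated model:

```python
import math

def knothe_subsidence(x, t, w_max, r, c):
    """Surface subsidence at horizontal offset x (m) and time t.

    Final trough: stochastic-medium Gaussian influence profile with maximum
    subsidence w_max and radius of major influence r. Time development:
    Knothe factor 1 - exp(-c*t) with time coefficient c. All illustrative.
    """
    w_final = w_max * math.exp(-math.pi * x * x / (r * r))  # final (t -> inf) profile
    return w_final * (1.0 - math.exp(-c * t))               # Knothe time delay
```

Evaluating this over a grid of surface points and extraction times is the kind of spatial-temporal field that the GIS components are used to store and analyze.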
Funding: Project (08Y29-7) supported by the Transportation Science and Research Program of Jiangsu Province, China; Project (201103051) supported by the Major Infrastructure Program of the Health Monitoring System Hardware Platform Based on Sensor Network Nodes, China; Project (61100111) supported by the National Natural Science Foundation of China; Project (BE2011169) supported by the Scientific and Technical Supporting Program of Jiangsu Province, China
Abstract: Variable block-size motion estimation (ME) and disparity estimation (DE) are adopted in multi-view video coding (MVC) to achieve high coding efficiency. However, much higher computational complexity is also introduced into the coding system, which hinders the practical application of MVC. An efficient fast mode decision method using mode complexity is proposed to reduce the computational complexity. In the proposed method, mode complexity is first computed using the spatial, temporal, and inter-view correlation between the current macroblock (MB) and its neighboring MBs. Based on the observation that direct mode is highly likely to be the optimal mode, mode complexity is always checked in advance to see whether it is below a predefined threshold, providing an efficient early-termination opportunity. If this early-termination condition is not met, the MBs are classified into three mode types according to the value of mode complexity, i.e., simple mode, medium mode, and complex mode, to speed up the encoding process by reducing the number of variable block modes required to be checked. Furthermore, for the simple and medium mode regions, the rate-distortion (RD) cost of mode 16×16 in the temporal prediction direction is compared with that of the disparity prediction direction to determine in advance whether the optimal prediction direction is the temporal one, thereby skipping unnecessary disparity estimation. Experimental results show that the proposed method reduces the computational load by 78.79% and the total bit rate by 0.07% on average, while incurring only a negligible loss of PSNR (about 0.04 dB on average), compared with the full mode decision (FMD) in the MVC reference software.
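The three-way classification with early termination described above can be sketched schematically. The thresholds and candidate mode lists below are illustrative placeholders, not the paper's actual values:

```python
def candidate_modes(mode_complexity, t_simple, t_complex):
    """Early-terminating mode shortlist for an MB, by its mode complexity.

    Thresholds t_simple < t_complex and the mode lists are illustrative
    placeholders for the simple/medium/complex regions.
    """
    if mode_complexity < t_simple:           # simple region: early termination
        return ["DIRECT"]
    if mode_complexity < t_complex:          # medium region: large partitions only
        return ["DIRECT", "16x16", "16x8", "8x16"]
    return ["DIRECT", "16x16", "16x8", "8x16", "8x8"]  # complex region: all modes
```

The speedup comes from the fact that most MBs fall into the simple region and exit after a single direct-mode check, so the expensive sub-partition modes are evaluated only where the neighborhood suggests they are needed.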
Abstract: The implicit Colebrook equation has been the standard for estimating the pipe friction factor in the fully developed turbulent regime. Several explicit alternatives to the Colebrook equation have been proposed. To date, most of the accurate explicit models have been those with three logarithmic functions, but they require more computational time than the Colebrook equation. In this study, a new explicit nonlinear regression model with only two logarithmic functions is developed. The new model, when compared with the existing extremely accurate models, yields the smallest average and maximum relative errors of 0.0025% and 0.0664%, respectively. Moreover, it requires far less computational time than the Colebrook equation. It is therefore concluded that the new explicit model provides a good trade-off between accuracy and relative computational efficiency for pipe friction factor estimation in the fully developed turbulent flow regime.
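For comparison with such explicit models, the implicit Colebrook equation itself can be solved by a simple fixed-point iteration on 1/sqrt(f). This is a standard reference sketch, not the paper's new two-logarithm model:

```python
import math

def colebrook(reynolds, rel_roughness, tol=1e-12):
    """Darcy friction factor from the implicit Colebrook equation.

    Fixed-point iteration on x = 1/sqrt(f):
        x = -2 log10( (eps/D)/3.7 + 2.51*x/Re )
    """
    x = 7.0  # rough initial guess for 1/sqrt(f) in turbulent flow
    while True:
        x_new = -2.0 * math.log10(rel_roughness / 3.7 + 2.51 * x / reynolds)
        if abs(x_new - x) < tol:
            return 1.0 / (x_new * x_new)
        x = x_new
```

Each iteration costs one logarithm, and several iterations are typically needed to converge, which is why a sufficiently accurate explicit formula with only two logarithms can be cheaper than solving the implicit equation.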