The prediction of intrinsically disordered proteins (IDPs) is an active research area in bioinformatics. Due to the high cost of experimental methods for evaluating disordered regions of protein sequences, it is becoming increasingly important to predict those regions through computational methods. In this paper, we developed a novel scheme that employs sequence complexity to calculate six features for each residue of a protein sequence: the Shannon entropy, the topological entropy, the sample entropy, and three amino acid preferences, namely Remark 465, Deleage/Roux, and B-factor (2STD). In particular, we introduced the sample entropy for calculating time-series complexity by mapping the amino acid sequence to a time series of digits 0-9. To our knowledge, the sample entropy has not previously been used for predicting IDPs and is thus applied for the first time in our study. In addition, the scheme uses a properly sized sliding window over every protein sequence, which greatly improves prediction performance. Finally, we used seven machine learning algorithms, evaluated with 10-fold cross-validation, on the dataset R80 collected by Yang et al. and on the dataset DIS1556 from the Database of Protein Disorder (DisProt) (https://www.disprot.org), which contains experimentally determined IDPs. The results showed that k-Nearest Neighbor was the most appropriate algorithm, with an overall prediction accuracy of 92%. Furthermore, our method uses only six features and hence has lower computational complexity.
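The sliding-window entropy idea can be illustrated with a minimal Python sketch of the per-residue Shannon entropy feature; the window size of 21 and the end-truncation behavior are assumptions for illustration, not the paper's exact settings:

```python
import math
from collections import Counter

def shannon_entropy(window: str) -> float:
    """Shannon entropy (bits) of the amino-acid composition of a window."""
    counts = Counter(window)
    n = len(window)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def per_residue_entropy(seq: str, w: int = 21) -> list:
    """Entropy feature for each residue, using a window centered on the
    residue (truncated at the sequence ends)."""
    half = w // 2
    return [shannon_entropy(seq[max(0, i - half): i + half + 1])
            for i in range(len(seq))]
```

A homogeneous window (e.g. a poly-A run, typical of low-complexity disordered regions) yields entropy 0, while a maximally diverse window yields the largest value.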
The NP-completeness of the topological spatial reasoning problem has been proved. Given the similarity of its uncertainty to that of topological spatial reasoning, the directional spatial reasoning problem should also be NP-complete. The proof of NP-completeness for the directional spatial reasoning problem is based on two important transformations. After these transformations, a spatial configuration is constructed from directional constraints, and the NP-completeness of directional spatial reasoning is proved with the help of the consistency of the constraints in the configuration.
In this paper, we define two versions of the Untrapped set (weak and strong Untrapped sets) over a finite set of alternatives. These versions, considered as choice procedures, extend the notion of the Untrapped set to a more general case (i.e., when alternatives are not necessarily comparable). We show that they both coincide with the Top cycle choice procedure for tournaments. In the case of weak tournaments, the strong Untrapped set is equivalent to the Getcha choice procedure, and the weak Untrapped set is exactly the Untrapped set studied in the literature. We also present a polynomial-time algorithm for computing each set.
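Since both Untrapped versions coincide with the Top cycle on tournaments, the tournament case admits a simple polynomial-time sketch; this transitive-closure approach is a hypothetical illustration and may differ from the paper's own algorithm:

```python
def top_cycle(n: int, beats) -> set:
    """Top cycle of a tournament on alternatives 0..n-1.
    beats[i][j] is True iff i beats j. The Top cycle is the set of
    alternatives that can reach every alternative via a beating path."""
    reach = [list(row) for row in beats]
    for i in range(n):
        reach[i][i] = True
    # Floyd-Warshall boolean transitive closure, O(n^3)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    return {i for i in range(n) if all(reach[i])}
```

On a 3-cycle (0 beats 1, 1 beats 2, 2 beats 0) every alternative reaches every other, so the Top cycle is the whole set; with a Condorcet winner it collapses to that single alternative.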
The title complex is widely used as an efficient key component of Ziegler-Natta catalysts for the stereospecific polymerization of dienes to produce synthetic rubbers. However, the quantitative structure-activity relationship (QSAR) of this kind of complex is still not clear, mainly due to the difficulty of obtaining their geometric molecular structures through laboratory experiments. An alternative is quantum chemistry calculation, in which the conformational population must be determined. In this study, ten conformers of the title complex were obtained with the molecular dynamics conformational search function in Gabedit 2.4.8, and their geometry optimization and thermodynamics calculations were performed with the Sparkle/PM7 approach in MOPAC 2012. Their Gibbs free energies at 1 atm and 298.15 K were calculated. The population of the conformers was then calculated according to the Boltzmann distribution, indicating that one of the ten conformers has a dominant population of 77.13%.
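The Boltzmann-population step can be reproduced from the Gibbs free energies alone; a minimal sketch (the energy values in the test are made up for illustration, not the paper's computed ones):

```python
import math

R_KJ = 8.314462618e-3  # gas constant, kJ/(mol*K)
T = 298.15             # temperature, K

def boltzmann_populations(g_kj_mol):
    """Relative populations of conformers from their Gibbs free energies
    (kJ/mol) at temperature T, via the Boltzmann distribution.
    Energies are shifted by the minimum to avoid overflow."""
    g_min = min(g_kj_mol)
    weights = [math.exp(-(g - g_min) / (R_KJ * T)) for g in g_kj_mol]
    z = sum(weights)
    return [w / z for w in weights]
```

The lowest-energy conformer always receives the largest population, and the populations sum to 1 by construction.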
The implicit Colebrook equation has been the standard for estimating the pipe friction factor in the fully developed turbulent regime. Several explicit alternatives to the Colebrook equation have been proposed. To date, most of the accurate explicit models have been those with three logarithmic functions, but they require more computational time than the Colebrook equation. In this study, a new explicit non-linear regression model with only two logarithmic functions is developed. Compared with the existing extremely accurate models, the new model yields the smallest average and maximum relative errors, of 0.0025% and 0.0664% respectively. Moreover, it requires far less computational time than the Colebrook equation. It is therefore concluded that the new explicit model provides a good trade-off between accuracy and computational efficiency for pipe friction factor estimation in the fully developed turbulent flow regime.
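For reference, the implicit Colebrook baseline can be solved by fixed-point iteration on x = 1/√f; the abstract does not give the new model's two-logarithm form or coefficients, so only the implicit baseline is sketched here:

```python
import math

def colebrook(re: float, rel_rough: float, tol: float = 1e-12) -> float:
    """Darcy friction factor f from the implicit Colebrook equation
      1/sqrt(f) = -2 log10( rel_rough/3.7 + 2.51/(re*sqrt(f)) )
    solved by fixed-point iteration on x = 1/sqrt(f)."""
    x = 8.0  # initial guess for 1/sqrt(f)
    while True:
        x_new = -2.0 * math.log10(rel_rough / 3.7 + 2.51 * x / re)
        if abs(x_new - x) < tol:
            return 1.0 / x_new ** 2
        x = x_new
```

An explicit model replaces this loop with a single closed-form evaluation, which is where the computational-time savings reported above come from.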
Protein-protein complexes play an important role in the physiology and pathology of cellular functions, and are therefore attractive therapeutic targets. A small subset of residues known as “hot spots” accounts for most of the protein-protein binding free energy. Computational methods play a critical role in identifying hot spots on the protein-protein interface. In this paper, we use a computational alanine scanning method with all-atom force fields to predict hot spots for 313 mutations in 16 protein complexes of known structure. We studied the effect of force fields, solvation models, and conformational sampling on the hot-spot predictions. We compared the calculated change in protein-protein interaction energies upon mutation of residues in and near the protein-protein interface to the experimental change in free energies. The AMBER force field (FF) predicted 86% of the hot spots among the three FFs commonly used for proteins, namely the AMBER FF, Charmm27 FF, and OPLS-2005 FF. However, the AMBER FF also showed a high rate of false positives, while the Charmm27 FF yielded 74% correct predictions of the hot-spot residues with few false positives. The van der Waals and hydrogen-bonding energies make the largest energy contribution with a high rate of prediction accuracy, while the desolvation energy was found to contribute little to improving hot-spot prediction. Using a conformational ensemble that includes limited backbone movement, instead of one static structure, leads to better prediction of hot spots.
Gas release and dispersion is a major concern in chemical industries. In order to manage and mitigate the risk of gas dispersion and its consequences, it is necessary to predict gas dispersion behavior and gas concentration at various locations upon emission. Therefore, models and commercial packages such as Phast and ALOHA have been developed. Computational fluid dynamics (CFD) can be a useful tool to simulate gas dispersion in complex areas and conditions. Validation of the models requires experimental data from field and wind-tunnel experiments. It appears that using experimental data to validate a CFD method at only certain monitor points, rather than over the entire domain, can lead to unreliable results for the intended areas of concern. In this work, some of the trials of the Kit Fox field experiment, which provided a wide-ranging database for gas dispersion, were simulated by CFD. Various scenarios were considered with different mesh sizes, physical conditions, and types of release. The results of the simulations were surveyed over the whole domain. How well the data matched each scenario varied with the dominant displacement force (wind or diffusivity). Furthermore, the statistical parameters suggested for heavy gas dispersion showed a dependency on the lower band of gas concentration; therefore, they should be used with caution. Finally, the results and computational cost of the simulation could be affected by the chosen scenario, the location of the intended points, and the release type.
The cathode of a biofuel cell reduces molecular oxygen to water in a four-electron process catalyzed by laccase, an enzyme of the multicopper oxidase family; however, electron transfer from the electrode to the enzyme is the rate-determining process. To improve this electron transfer via mediators, we have investigated several mediator metal complexes between the electrode and laccase, in particular at the hydrophobic pocket on the enzyme surface. We discuss DFT computational results and selected experimental data for new Mn(III/II) Schiff base complexes bearing redox-active (anthraquinone) ligands and photochromic (azobenzene) ligands, focusing on the azobenzene moiety at the single-molecule level. Moreover, we carried out computational docking simulations of laccase and the complexes, considering the trans-cis photoisomerization (electronic states) and the Weigert effect (molecular orientation for a better fit) of the azobenzene moiety. Actual experimental data are also presented to indicate the expected merits of the mediators.
Based on the iterative bit-filling procedure, a computationally efficient bit and power allocation algorithm is presented. The algorithm improves on conventional bit-filling algorithms by maintaining only a subset of subcarriers for computation in each iteration, which reduces the complexity without any performance degradation. Moreover, a modified algorithm with even lower complexity is developed, and equal power allocation is introduced as an initial allocation to accelerate its convergence. Simulation results show that the modified algorithm achieves a considerable complexity reduction while causing only a minor drop in performance.
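The conventional greedy bit-filling baseline that the algorithm improves on can be sketched as follows; the subset-maintenance refinement itself is not reproduced, and the SNR-gap power model (incremental cost 2^b·Γ/g for the next bit on a subcarrier with gain g) is a standard assumption, not taken from the paper:

```python
import heapq

def bit_fill(gains, total_bits, gamma=1.0):
    """Greedy bit-filling: repeatedly give one more bit to the subcarrier
    whose incremental power cost (2**b * gamma / gain) is smallest.
    Returns per-subcarrier bit loads and allocated powers."""
    bits = [0] * len(gains)
    power = [0.0] * len(gains)
    # min-heap of (incremental power for the next bit, subcarrier index)
    heap = [(gamma / g, n) for n, g in enumerate(gains)]
    heapq.heapify(heap)
    for _ in range(total_bits):
        dp, n = heapq.heappop(heap)
        bits[n] += 1
        power[n] += dp
        heapq.heappush(heap, (gamma * 2 ** bits[n] / gains[n], n))
    return bits, power
```

The heap makes each of the `total_bits` iterations O(log N); the paper's trick of restricting the candidate subcarrier subset shrinks this work further without changing the result.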
Computational time complexity analyses of evolutionary algorithms (EAs) have been performed since the mid-nineties. The first results concerned very simple algorithms, such as the (1+1)-EA, on toy problems. These efforts produced a deeper understanding of how EAs perform on different kinds of fitness landscapes, as well as general mathematical tools that may be extended to the analysis of more complicated EAs on more realistic problems. In fact, in recent years, it has become possible to analyze the (1+1)-EA on combinatorial optimization problems with practical applications, and more realistic population-based EAs on structured toy problems. This paper presents a survey of the results obtained in the last decade along these two research lines. The most common mathematical techniques are introduced, the basic ideas behind them are discussed, and their effective applications are highlighted. Solved problems that were once open are enumerated, as are those still awaiting a solution. New questions and problems that have arisen in the meantime are also considered.
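The simplest object of these analyses, the (1+1)-EA on the OneMax toy problem (maximize the number of ones in a bit string), can be sketched in a few lines; its expected optimization time is known to be O(n log n):

```python
import random

def one_plus_one_ea(n=50, max_iters=100_000, seed=1):
    """(1+1)-EA on OneMax: flip each bit independently with probability
    1/n; accept the offspring if its fitness (count of ones) does not
    decrease. Returns the iteration at which the optimum is found."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fitness = sum(x)
    for t in range(1, max_iters + 1):
        y = [b ^ (rng.random() < 1.0 / n) for b in x]  # standard-bit mutation
        fy = sum(y)
        if fy >= fitness:          # elitist acceptance
            x, fitness = y, fy
        if fitness == n:
            return t
    return max_iters
```

This is exactly the kind of algorithm-problem pair for which the early runtime bounds were proved; the same drift-analysis tools now handle population-based EAs on structured problems.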
For future wireless communication systems, Power Domain Non-Orthogonal Multiple Access (PD-NOMA) using an advanced receiver has been considered a promising radio access technology candidate. Power allocation plays an important role in the PD-NOMA system because it considerably affects the total throughput and the Geometric Mean User Throughput (GMUT) performance. However, most existing studies have not fully accounted for the computational complexity of the power allocation process when the User Terminals (UTs) move in a slow-fading channel environment. To resolve these problems, a power allocation method is proposed that considerably reduces the search space of a Full Search Power (FSP) allocation algorithm. The initial power reallocation coefficients are set, by the proposed lemma, to start from the former optimal values before searching for optimal power reallocation coefficients based on total throughput performance. Step size and correction granularity are adjusted within a much narrower power search range, while invalid power combinations may reasonably be discarded during the search. The simulation results show that the proposed power reallocation scheme can greatly reduce computational complexity while the total throughput and GMUT performance loss is no greater than 1.5% compared with the FSP algorithm.
In the last century, there was significant development in methods to predict ground movement due to underground extraction. Some remarkable developments in three-dimensional computational methods have supported civil engineering, subsidence engineering, and mining engineering practice. However, the ground movement problem due to a mining extraction sequence is effectively four-dimensional (4D). Rational prediction is becoming increasingly important for long-term underground mining planning. Hence, computer-based analytical methods that realistically simulate spatially distributed, time-dependent ground movement processes are needed for reliable long-term underground mining planning that minimizes surface environmental damage. In this research, a new computational system was developed to simulate 4D ground movement by combining a stochastic medium theory, the Knothe time-delay model, and geographic information system (GIS) technology. All the calculations are implemented by a computational program in which GIS components are used to fulfill the spatial-temporal analysis model. In this paper, a tight coupling strategy based on the component object model of GIS technology is used to overcome the problems of a complex three-dimensional extraction model and spatial data integration. Moreover, the implementation of the computational interfaces of the developed tool is described. The GIS-based tool is validated with two case studies.
The developed computational tool and models are realized within the GIS system, yielding an effective and efficient calculation methodology, so that the simulation of 4D ground movement due to an underground mining extraction sequence can be performed with the developed GIS tool.
Since virtualization technology enables the abstraction and sharing of resources in a flexible management way, the overall expense of network deployment can be significantly reduced. Therefore, the technology has been widely applied in the core network. With the tremendous growth in mobile traffic and services, it is natural to extend virtualization technology to cloud computing based radio access networks (CC-RANs) for achieving high spectral efficiency at low cost. In this paper, virtualization technologies in CC-RANs are surveyed, including the system architecture, key enabling techniques, challenges, and open issues. The key enabling technologies for virtualization in CC-RANs, mainly including virtual resource allocation, radio access network (RAN) slicing, mobility management, and social awareness, are comprehensively surveyed with respect to satisfying the isolation, customization, and high-efficiency utilization of radio resources. The challenges and open issues mainly concern virtualization levels for CC-RANs, signaling design for CC-RAN virtualization, performance analysis for CC-RAN virtualization, and network security for virtualized CC-RANs.
This paper proposes a low-cost yet high-accuracy direction of arrival (DOA) estimation method for automotive frequency-modulated continuous-wave (FMCW) radar. The existing subspace-based DOA estimation algorithms suffer from either high computational cost or low accuracy. We aim to resolve this contradiction between complexity and accuracy by using randomized matrix approximation. Specifically, we apply an easily interpretable randomized low-rank approximation to the covariance matrix (CM) R∈C^(M×M) in the form R≈QBQ^(H), and use it to approximately compute the subspaces. Here, the matrix Q∈C^(M×z) contains an orthonormal basis for the range of the sketch matrix C∈C^(M×z), which is extracted from R using randomized uniform column sampling, and B∈C^(z×z) is a weight matrix that reduces the approximation error. Relying on this approximation, we are able to accelerate the subspace computation by orders of magnitude without compromising estimation accuracy. Furthermore, we derive a theoretical error bound for the suggested scheme to ensure the accuracy of the approximation. As validated by the simulation results, the DOA estimation accuracy of the proposed algorithm, efficient multiple signal classification (E-MUSIC), is high, closely tracks standard MUSIC, and outperforms the well-known algorithms with tremendously reduced time complexity. Thus, the devised method can realize high-resolution real-time target detection in emerging multiple-input multiple-output (MIMO) automotive radar systems.
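The sketch step R≈QBQ^H can be illustrated with NumPy; uniform column sampling with B = Q^H R Q is one simple instantiation, and the paper's exact weight matrix and error bound are not reproduced here:

```python
import numpy as np

def randomized_subspace(R: np.ndarray, z: int, seed: int = 0):
    """Approximate an M x M covariance matrix R as Q B Q^H:
    sample z columns of R uniformly at random to form the sketch C,
    orthonormalize C to get Q (M x z), and set B = Q^H R Q (z x z).
    Subspace routines (e.g. for MUSIC) can then eigendecompose the
    small B instead of the full R."""
    rng = np.random.default_rng(seed)
    cols = rng.choice(R.shape[0], size=z, replace=False)
    C = R[:, cols]               # sketch matrix, M x z
    Q, _ = np.linalg.qr(C)       # orthonormal basis for range(C)
    B = Q.conj().T @ R @ Q       # small weight matrix
    return Q, B

# toy check: a rank-2 "covariance" is captured almost exactly with z = 4
rng = np.random.default_rng(1)
A = rng.standard_normal((8, 2))
R = A @ A.T
Q, B = randomized_subspace(R, z=4)
err = np.linalg.norm(R - Q @ B @ Q.conj().T) / np.linalg.norm(R)
```

Eigendecomposing the z×z matrix B costs O(z^3) instead of O(M^3) for R, which is the source of the claimed acceleration when z ≪ M.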
Variable block-size motion estimation (ME) and disparity estimation (DE) are adopted in multi-view video coding (MVC) to achieve high coding efficiency. However, much higher computational complexity is also introduced into the coding system, which hinders the practical application of MVC. An efficient fast mode decision method using mode complexity is proposed to reduce the computational complexity. In the proposed method, mode complexity is first computed using the spatial, temporal, and inter-view correlation between the current macroblock (MB) and its neighboring MBs. Based on the observation that direct mode is highly likely to be the optimal mode, mode complexity is always checked in advance against a predefined threshold to provide an efficient early termination opportunity. If this early termination condition is not met, the MBs are classified into three mode types according to the value of mode complexity, i.e., simple mode, medium mode, and complex mode, to speed up the encoding process by reducing the number of variable block modes that need to be checked. Furthermore, for simple- and medium-mode regions, the rate-distortion (RD) cost of mode 16×16 in the temporal prediction direction is compared with that of the disparity prediction direction, to determine in advance whether the optimal prediction direction is temporal, thus skipping unnecessary disparity estimation. Experimental results show that the proposed method significantly reduces the computational load, by 78.79% on average, and the total bit rate by 0.07% on average, while incurring only a negligible loss of PSNR (about 0.04 dB on average), compared with the full mode decision (FMD) in the MVC reference software.
A nested linear array enables enhanced localization resolution and under-determined direction of arrival (DOA) estimation. In this paper, the traditional two-level nested linear array is improved to achieve more degrees of freedom (DOFs) and better angle estimation performance. Furthermore, a computationally efficient DOA estimation algorithm is proposed. The discrete Fourier transform (DFT) method is utilized to obtain coarse DOA estimates; subsequently, fine DOA estimates are obtained by the spatial smoothing multiple signal classification (SS-MUSIC) algorithm. Compared to the SS-MUSIC algorithm alone, the proposed algorithm has the same estimation accuracy with lower computational complexity, because the coarse DOA estimates shrink the range of the angle spectral search. In addition, the DFT method does not require the number of signals to be estimated in advance. Extensive simulation results verify the effectiveness of the proposed algorithm.
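A minimal sketch of the DFT-based coarse stage for a plain half-wavelength ULA follows; the paper's improved nested geometry and the SS-MUSIC refinement are not reproduced, and the array size, source angle, and noise level below are made up for illustration:

```python
import numpy as np

def coarse_doa_dft(X: np.ndarray, n_fft: int = 4096, d: float = 0.5) -> float:
    """Coarse DOA (degrees) from the zero-padded spatial DFT of ULA data
    X (M sensors x T snapshots). A ULA with spacing d (in wavelengths)
    has steering phase 2*pi*d*sin(theta) per sensor, so the peak spatial
    frequency f maps back to theta = arcsin(f / d)."""
    spec = np.mean(np.abs(np.fft.fft(X, n=n_fft, axis=0)) ** 2, axis=1)
    f = np.fft.fftfreq(n_fft)            # cycles per sensor, in [-0.5, 0.5)
    return float(np.degrees(np.arcsin(f[np.argmax(spec)] / d)))

# toy data: one source at 20 degrees on an 8-element half-wavelength ULA
M, T, theta = 8, 200, np.radians(20.0)
a = np.exp(2j * np.pi * 0.5 * np.arange(M) * np.sin(theta))  # steering vector
rng = np.random.default_rng(0)
s = rng.standard_normal(T) + 1j * rng.standard_normal(T)     # source signal
noise = 0.05 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
X = np.outer(a, s) + noise
coarse = coarse_doa_dft(X)  # close to 20 degrees; MUSIC would refine this
```

Such a coarse estimate lets the subsequent fine search sweep only a narrow angular neighborhood instead of the full field of view, which is the source of the complexity reduction described above.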
Orbital angular momentum (OAM), emerging as an inherently high-dimensional property of photons, has boosted information capacity in optical communications. However, the potential of OAM in optical computing remains almost unexplored. Here, we present a highly efficient optical computing protocol for complex vector convolution using the superposition of high-dimensional OAM eigenmodes. We used two cascaded spatial light modulators to prepare suitable OAM superpositions to encode two complex vectors. Then, a deep-learning strategy is devised to decode the complex OAM spectrum, thus accomplishing the optical convolution task. In our experiment, we succeeded in demonstrating 7-, 9-, and 11-dimensional complex vector convolutions, in which an average proximity better than 95% and a mean relative error of less than 6% are achieved. Our scheme can be extended to incorporate other degrees of freedom for more versatile optical computing in the high-dimensional Hilbert space.
In this article, the inherent computational power of quantum entangled cluster states, examined through measurement-based quantum computation, is studied. By defining a common framework of rules for the measurement of quantum entangled cluster states based on classical computations, the precise and detailed meaning of the computing power of the correlations in the quantum cluster states is established. This study exposes an interesting connection between the violation of local realistic models and the computing power of the quantum entangled cluster states.
While finite volume methodologies (FVM) have predominated in fluid flow computations, many flow problems, including groundwater models, would benefit from the use of boundary methods, such as the Complex Variable Boundary Element Method (CVBEM). However, to date, there has been no reported comparison of computational results between the FVM and the CVBEM in the assessment of flow field characteristics. In this work, the CVBEM is used to develop a flow field vector outcome of ideal fluid flow in a 90-degree bend, which is then compared to the computational results from a finite volume model of the same situation. The focus of the modelling comparison is the flow field trajectory vectors of the fluid flow, with respect to vector magnitude and direction. Such a comparison is necessary to validate the development of flow field vectors from the CVBEM and is of interest to many engineering flow problems, specifically groundwater modelling. Comparison of the CVBEM and FVM flow field trajectory vectors for the target problem of ideal flow in a 90-degree bend shows good agreement between the considered methodologies.
Funding (Ziegler-Natta complex conformer study): supported by the National Natural Science Foundation of China (No. 21476119).
Funding: The authors acknowledge the support provided by the Iranian Research Organization for Science and Technology (IROST) in conducting this research.
Abstract: Gas release and dispersion is a major concern in chemical industries. In order to manage and mitigate the risk of gas dispersion and its consequences, it is necessary to predict gas dispersion behavior and gas concentration at various locations upon emission. Therefore, models and commercial packages such as Phast and ALOHA have been developed. Computational fluid dynamics (CFD) can be a useful tool to simulate gas dispersion in complex areas and conditions. Validation of such models requires experimental data from field and wind-tunnel experiments. However, using experimental data that cover only certain monitor points, rather than the entire domain, to validate a CFD method can lead to unreliable results for the areas of concern. In this work, some of the trials of the Kit Fox field experiment, which provides a wide-ranging database for gas dispersion, were simulated by CFD. Various scenarios were considered with different mesh sizes, physical conditions, and types of release. The results of the simulations were surveyed over the whole domain. The agreement of the data in each scenario varied with the dominant transport mechanism (wind or diffusivity). Furthermore, the statistical parameters suggested for heavy gas dispersion showed a dependency on the lower band of gas concentration and should therefore be used with caution. Finally, the results and the computational cost of the simulation can be affected by the chosen scenario, the location of the intended points, and the release type.
Abstract: The cathode of a biofuel cell contains laccase, an enzyme of the multicopper oxidase family, which reduces molecular oxygen to water using four electrons; electron transfer from the electrode to the enzyme is the rate-determining process. To improve this electron transfer via mediators, we have investigated several metal complexes as mediators between the electrode and laccase, in particular its hydrophobic surface pocket. We discuss DFT computational results and selected experimental data for new Mn(III/II) Schiff base complexes bearing redox-active (anthraquinone) and photochromic (azobenzene) ligands, focusing on the azobenzene moiety at the single-molecule level. Moreover, we carried out computational docking simulations of laccase with the complexes, considering the trans-cis photoisomerization (electronic states) and the Weigert effect (molecular orientation for a better fit) of the azobenzene moiety. Experimental data are also presented to indicate the expected merits of these mediators.
Funding: The National High Technology Research and Development Program of China (863 Program) (No. 2006AA01Z263) and the National Natural Science Foundation of China (No. 60496311).
Abstract: Based on the iterative bit-filling procedure, a computationally efficient bit and power allocation algorithm is presented. The algorithm improves on conventional bit-filling algorithms by maintaining only a subset of subcarriers for computation in each iteration, which reduces the complexity without any performance degradation. Moreover, a modified algorithm with even lower complexity is developed, and equal power allocation is introduced as an initial allocation to accelerate its convergence. Simulation results show that the modified algorithm achieves a considerable complexity reduction while causing only a minor drop in performance.
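A conventional greedy bit-filling baseline, of the kind the algorithm above improves upon, can be sketched as follows (the incremental-power formula dP = 2^b / g assumes a normalized QAM power model; all names are illustrative):

```python
import heapq

def bit_filling(gains, total_bits, max_bits_per_carrier=8):
    """Greedy bit-filling: repeatedly add one bit to the subcarrier
    with the smallest incremental power cost dP = 2**b / g, where b
    is the subcarrier's current bit load and g its channel gain."""
    bits = [0] * len(gains)
    power = [0.0] * len(gains)
    # min-heap of (incremental power for the next bit, subcarrier index)
    heap = [(1.0 / g, i) for i, g in enumerate(gains)]
    heapq.heapify(heap)
    for _ in range(total_bits):
        dp, i = heapq.heappop(heap)
        bits[i] += 1
        power[i] += dp
        if bits[i] < max_bits_per_carrier:
            heapq.heappush(heap, (2.0 ** bits[i] / gains[i], i))
    return bits, power

bits, power = bit_filling([4.0, 2.0, 1.0], total_bits=6)
```

The subset-of-subcarriers idea in the abstract corresponds to never re-examining carriers that cannot win the next allocation, rather than rescanning all carriers each iteration.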
Funding: This work was supported by an EPSRC grant (No. EP/C520696/1).
Abstract: Computational time complexity analyses of evolutionary algorithms (EAs) have been performed since the mid-nineties. The first results concerned very simple algorithms, such as the (1+1)-EA, on toy problems. These efforts produced a deeper understanding of how EAs perform on different kinds of fitness landscapes, together with general mathematical tools that may be extended to the analysis of more complicated EAs on more realistic problems. In fact, in recent years, it has become possible to analyze the (1+1)-EA on combinatorial optimization problems with practical applications, and more realistic population-based EAs on structured toy problems. This paper presents a survey of the results obtained in the last decade along these two research lines. The most common mathematical techniques are introduced, the basic ideas behind them are discussed, and their applications are highlighted. Problems that have since been solved are enumerated, as are those still awaiting a solution. New questions and problems that have arisen in the meantime are also considered.
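As a runnable illustration of the simplest object of these analyses, a sketch of the (1+1)-EA on the OneMax toy problem (whose expected optimisation time is well known to be O(n log n)) might look like:

```python
import random

def one_plus_one_ea(n, rng, max_iters=100000):
    """(1+1)-EA on OneMax: flip each bit independently with
    probability 1/n, accept the offspring if its fitness (the
    number of ones) does not decrease.  Returns the iteration at
    which the all-ones optimum was reached."""
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = sum(x)
    for t in range(1, max_iters + 1):
        y = [b ^ (rng.random() < 1.0 / n) for b in x]  # standard-bit mutation
        fy = sum(y)
        if fy >= fx:          # elitist acceptance
            x, fx = y, fy
        if fx == n:
            return t
    return max_iters

t = one_plus_one_ea(20, random.Random(0))
```

For n = 20 the expected runtime is on the order of e·n·ln n ≈ 160 iterations, far below the safety cap.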
基金supported in part by the Science and Technology Research Program of the National Science Foundation of China(61671096)Chongqing Research Program of Basic Science and Frontier Technology(cstc2017jcyjBX0005)+1 种基金Chongqing Municipal Education Commission(KJQN201800642)Doctoral Student Training Program(BYJS2016009).
Abstract: For future wireless communication systems, Power Domain Non-Orthogonal Multiple Access (PD-NOMA) using an advanced receiver has been considered a promising radio access technology candidate. Power allocation plays an important role in the PD-NOMA system because it considerably affects the total throughput and the Geometric Mean User Throughput (GMUT) performance. However, most existing studies have not fully accounted for the computational complexity of the power allocation process when the User Terminals (UTs) move in a slow-fading channel environment. To resolve this, a power allocation method is proposed that considerably reduces the search space of a Full Search Power (FSP) allocation algorithm. The initial power reallocation coefficients are set to the former optimal values by the proposed lemma before searching for the optimal power reallocation coefficients based on total throughput performance. The step size and correction granularity are adjusted within a much narrower power search range, while invalid power combinations may be reasonably discarded during the search process. The simulation results show that the proposed power reallocation scheme can greatly reduce computational complexity while the total throughput and GMUT performance loss are no greater than 1.5% compared with the FSP algorithm.
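The idea of restarting from the former optimum and scanning only a narrow window can be sketched as follows (a hypothetical one-dimensional illustration of the narrowed search, not the paper's exact multi-coefficient procedure):

```python
def refine_power_search(objective, prev_opt, lo=0.0, hi=1.0,
                        width=0.1, step=0.01):
    """Narrowed grid search: start from the previous optimal power
    coefficient and scan only a window [prev_opt - width,
    prev_opt + width], clipped to the valid range [lo, hi]."""
    a = max(lo, prev_opt - width)
    b = min(hi, prev_opt + width)
    best, best_val = prev_opt, objective(prev_opt)
    x = a
    while x <= b + 1e-12:
        v = objective(x)
        if v > best_val:
            best, best_val = x, v
        x += step
    return best

# toy throughput surrogate peaking at 0.63, with the old optimum at 0.6
best = refine_power_search(lambda p: -(p - 0.63) ** 2, prev_opt=0.6)
```

Because the channel fades slowly, the new optimum stays near the old one, so scanning 2·width/step points instead of the full FSP grid preserves the result at a fraction of the cost.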
Abstract: In the last century, there has been significant development in the evaluation of methods to predict ground movement due to underground extraction. Some remarkable developments in three-dimensional computational methods have supported practice in civil engineering, subsidence engineering and mining engineering. However, the ground movement problem due to a mining extraction sequence is effectively four-dimensional (4D). Rational prediction is becoming more and more important for long-term underground mining planning. Hence, computer-based analytical methods that realistically simulate the spatially distributed, time-dependent ground movement process are needed for reliable long-term underground mining planning that minimizes surface environmental damage. In this research, a new computational system is developed to simulate 4D ground movement by combining a stochastic medium theory, the Knothe time-delay model and geographic information system (GIS) technology. All the calculations are implemented by a computational program, in which the components of GIS are used to fulfill the spatial-temporal analysis model. In this paper, a tight coupling strategy based on the component object model of GIS technology is used to overcome the problems of a complex three-dimensional extraction model and spatial data integration. Moreover, the implementation of the computational interfaces of the developed tool is described. The GIS-based tool is validated by two case studies. Because the computational tool and models are implemented within the GIS system, an effective and efficient calculation methodology is obtained, and the problem of simulating 4D ground movement due to an underground mining extraction sequence can be solved with the developed tool.
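As a hedged sketch of one ingredient named above, the Knothe time function in its common exponential form (parameter names illustrative; the paper's implementation details are not given in the abstract) can be written as:

```python
import math

def knothe_subsidence(w_final, c, t):
    """Knothe time function in its common exponential form:
    subsidence reached at time t approaches the final subsidence
    w_final with time coefficient c (units 1/time),
        w(t) = w_final * (1 - exp(-c * t))."""
    return w_final * (1.0 - math.exp(-c * t))
```

Evaluating this function per extraction step at each grid cell, then summing over the stochastic-medium influence of each extracted panel, is what turns the static 3D subsidence prediction into the 4D (space plus time) simulation described above.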
Abstract: Since virtualization technology enables the abstraction and sharing of resources in a flexible management way, the overall expense of network deployment can be significantly reduced. The technology has therefore been widely applied in the core network. With the tremendous growth in mobile traffic and services, it is natural to extend virtualization technology to cloud computing based radio access networks (CC-RANs) to achieve high spectral efficiency at low cost. In this paper, virtualization technologies in CC-RANs are surveyed, including the system architecture, key enabling techniques, challenges, and open issues. The key enabling technologies for virtualization in CC-RANs, mainly virtual resource allocation, radio access network (RAN) slicing, mobility management, and social awareness, are comprehensively surveyed with respect to the isolation, customization and high-efficiency utilization of radio resources. The challenges and open issues mainly concern virtualization levels for CC-RANs, signaling design for CC-RAN virtualization, performance analysis for CC-RAN virtualization, and network security for virtualized CC-RANs.
Abstract: This paper proposes low-cost yet high-accuracy direction of arrival (DOA) estimation for automotive frequency-modulated continuous-wave (FMCW) radar. Existing subspace-based DOA estimation algorithms suffer from either high computational costs or low accuracy. We aim to resolve this contradiction between complexity and accuracy by using randomized matrix approximation. Specifically, we apply an easily interpretable randomized low-rank approximation to the covariance matrix (CM) R∈C^(M×M) through sketch matrices, in the form R≈QBQ^(H), to approximately compute its subspaces. Here, the matrix Q∈C^(M×z) contains an orthonormal basis for the range of the sketch matrix C∈C^(M×z), which is extracted from R using randomized uniform column sampling, and B∈C^(z×z) is a weight matrix reducing the approximation error. Relying on this approximation, we are able to accelerate the subspace computation by orders of magnitude without compromising estimation accuracy. Furthermore, we derive a theoretical error bound for the suggested scheme to ensure the accuracy of the approximation. As validated by the simulation results, the DOA estimation accuracy of the proposed algorithm, efficient multiple signal classification (E-MUSIC), is high, closely tracks standard MUSIC, and outperforms the well-known algorithms with tremendously reduced time complexity. Thus, the devised method can realize high-resolution real-time target detection in the emerging multiple-input multiple-output (MIMO) automotive radar systems.
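A minimal numerical sketch of the randomized column-sampling approximation described above (the paper's exact construction of the weight matrix B may differ; here B = Q^H R Q, which is exact whenever the sampled columns span the range of R):

```python
import numpy as np

def sketch_covariance(R, z, rng):
    """Randomized low-rank approximation R ≈ Q B Q^H:
    C collects z uniformly sampled columns of R (the sketch),
    Q is an orthonormal basis for range(C), and B = Q^H R Q
    is the weight matrix on that basis."""
    M = R.shape[0]
    idx = rng.choice(M, size=z, replace=False)
    C = R[:, idx]                      # sketch matrix, M x z
    Q, _ = np.linalg.qr(C)             # orthonormal basis of range(C)
    B = Q.conj().T @ R @ Q             # z x z weight matrix
    return Q, B

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))
R = A @ A.T                            # rank-3 stand-in for a signal CM
Q, B = sketch_covariance(R, z=5, rng=rng)
```

Eigendecomposing the small z×z matrix B (instead of the full M×M covariance) is what yields the order-of-magnitude speed-up in the subspace step of MUSIC-type estimators.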
Funding: Project (08Y29-7) supported by the Transportation Science and Research Program of Jiangsu Province, China; Project (201103051) supported by the Major Infrastructure Program of the Health Monitoring System Hardware Platform Based on Sensor Network Node, China; Project (61100111) supported by the National Natural Science Foundation of China; Project (BE2011169) supported by the Scientific and Technical Supporting Program of Jiangsu Province, China.
Abstract: Variable block-size motion estimation (ME) and disparity estimation (DE) are adopted in multi-view video coding (MVC) to achieve high coding efficiency. However, much higher computational complexity is also introduced into the coding system, which hinders the practical application of MVC. An efficient fast mode decision method using mode complexity is proposed to reduce the computational complexity. In the proposed method, mode complexity is first computed by using the spatial, temporal and inter-view correlation between the current macroblock (MB) and its neighboring MBs. Based on the observation that direct mode is highly likely to be the optimal mode, the mode complexity is always checked in advance against a predefined threshold, providing an efficient early-termination opportunity. If this early-termination condition is not met, the MBs are classified into three mode types according to the value of mode complexity, i.e., simple mode, medium mode and complex mode, to speed up the encoding process by reducing the number of variable block modes required to be checked. Furthermore, for the simple and medium mode regions, the rate-distortion (RD) cost of mode 16×16 in the temporal prediction direction is compared with that in the disparity prediction direction, to determine in advance whether the optimal prediction direction is temporal or not, thereby skipping unnecessary disparity estimation. Experimental results show that the proposed method significantly reduces the computational load by 78.79% and the total bit rate by 0.07% on average, while incurring only a negligible loss of PSNR (about 0.04 dB on average), compared with the full mode decision (FMD) in the MVC reference software.
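The three-way classification can be sketched as a simple threshold rule (thresholds and candidate mode lists here are hypothetical, for illustration only; the paper derives its thresholds from neighboring-MB correlation):

```python
def candidate_modes(mode_complexity, t_simple=0.3, t_complex=0.7):
    """Prune the variable block modes to check, based on the
    mode-complexity value: a simple region early-terminates with
    direct mode only, a medium region checks large blocks, and a
    complex region checks the full mode set."""
    if mode_complexity < t_simple:       # early-termination region
        return ["DIRECT"]
    if mode_complexity < t_complex:      # medium region: large blocks only
        return ["DIRECT", "16x16", "16x8", "8x16"]
    return ["DIRECT", "16x16", "16x8", "8x16", "8x8"]  # complex region
```

The encoder then evaluates RD cost only for the returned candidates, which is where the reported 78.79% complexity reduction comes from.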
基金supported by the Postgraduate Research & Practice Innovation Program of Jiangsu Province (No.SJCX18_0103)Key Laboratory of Dynamic Cognitive System of Electromagnetic Spectrum Space (Nanjing University of Aeronautics and Astronautics), Ministry of Industry and Information Technology (No.KF20181915)
Abstract: Nested linear arrays make it possible to enhance localization resolution and achieve under-determined direction of arrival (DOA) estimation. In this paper, the traditional two-level nested linear array is improved to achieve more degrees of freedom (DOFs) and better angle estimation performance. Furthermore, a computationally efficient DOA estimation algorithm is proposed. The discrete Fourier transform (DFT) method is utilized to obtain coarse DOA estimates, and subsequently, fine DOA estimates are obtained by the spatial smoothing multiple signal classification (SS-MUSIC) algorithm. Compared to the SS-MUSIC algorithm alone, the proposed algorithm has the same estimation accuracy with lower computational complexity, because the coarse DOA estimates shrink the range of the angle spectral search. In addition, the DFT method does not require the number of signals to be estimated in advance. Extensive simulation results verify the effectiveness of the proposed algorithm.
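A minimal single-source illustration of the DFT-based coarse stage (a plain zero-padded FFT peak search on a uniform linear array snapshot, not the paper's full nested-array pipeline) might look like:

```python
import numpy as np

def dft_coarse_doa(x, d_over_lambda=0.5, pad=8):
    """Coarse single-source DOA on a uniform linear array: the peak
    of the (zero-padded) DFT gives the spatial frequency f in cycles
    per sensor, and sin(theta) = f / (d/lambda)."""
    N = len(x)
    spectrum = np.abs(np.fft.fft(x, n=pad * N))
    f = np.fft.fftfreq(pad * N)[np.argmax(spectrum)]
    return np.degrees(np.arcsin(f / d_over_lambda))

# noiseless steering vector for a source at 20 degrees, 16 sensors
n = np.arange(16)
x = np.exp(2j * np.pi * 0.5 * np.sin(np.radians(20.0)) * n)
estimate = dft_coarse_doa(x)
```

A fine search (e.g. the MUSIC spectral scan) then only needs to cover a few degrees around each coarse peak, which is the source of the complexity saving claimed above.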
基金supported by the National Natural Science Foundation of China(Grant Nos.12034016,61975169,and 11904303)the Youth Innovation Fund of Xiamen(Grant No.3502Z20206045)+2 种基金the Fundamental Research Funds for the Central Universities at Xiamen University(Grant Nos.20720200074 and 20720220030)the Natural Science Foundation of Fujian Province of China(Grant No.2021J02002)and for Distinguished Young Scientists(Grant No.2015J06002)the Program for New Century Excellent Talents in University of China(Grant No.NCET-13-0495).
Abstract: Orbital angular momentum (OAM), emerging as an inherently high-dimensional property of photons, has boosted information capacity in optical communications. However, the potential of OAM in optical computing remains almost unexplored. Here, we present a highly efficient optical computing protocol for complex vector convolution based on the superposition of high-dimensional OAM eigenmodes. We used two cascaded spatial light modulators to prepare suitable OAM superpositions encoding two complex vectors. Then, a deep-learning strategy was devised to decode the complex OAM spectrum, thus accomplishing the optical convolution task. In our experiment, we succeeded in demonstrating 7-, 9-, and 11-dimensional complex vector convolutions, achieving an average proximity better than 95% and a mean relative error of less than 6%. Our scheme can be extended to incorporate other degrees of freedom for more versatile optical computing in the high-dimensional Hilbert space.
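The underlying computational task, complex vector convolution, has a classical reference implementation via the convolution theorem, which is the digital analogue of reading out the joint OAM spectrum of the two encoded superposition states:

```python
import numpy as np

def conv_via_spectrum(a, b):
    """Linear convolution of two complex vectors through their
    spectra (convolution theorem): zero-pad to the full output
    length, multiply the DFTs, and transform back."""
    n = len(a) + len(b) - 1
    return np.fft.ifft(np.fft.fft(a, n) * np.fft.fft(b, n))

a = np.array([1 + 1j, 2 - 1j, 0.5j])
b = np.array([1j, -1.0, 2 + 0.5j])
result = conv_via_spectrum(a, b)
```

Such a reference result is what the experimental proximity and relative-error figures quoted above would be measured against.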
Abstract: In this article, the inherent computational power of quantum entangled cluster states, as harnessed by measurement-based quantum computation, is studied. By defining a common framework of rules for measurements on quantum entangled cluster states assisted by classical computation, a precise and detailed account of the computing power of the correlations in quantum cluster states is given. This study exposes an interesting connection between the violation of local realistic models and the computing power of quantum entangled cluster states.
Abstract: While finite volume methodologies (FVM) have predominated in fluid flow computations, many flow problems, including groundwater models, would benefit from the use of boundary methods such as the Complex Variable Boundary Element Method (CVBEM). However, to date, there has been no reported comparison of computational results between the FVM and the CVBEM in the assessment of flow field characteristics. In this work, the CVBEM is used to develop a flow field vector outcome of ideal fluid flow in a 90-degree bend, which is then compared to the computational results from a finite volume model of the same situation. The focus of the modelling comparison is the flow field trajectory vectors of the fluid flow, with respect to vector magnitude and direction. Such a comparison is necessary to validate the development of flow field vectors from the CVBEM and is of interest to many engineering flow problems, specifically groundwater modelling. Comparison of the CVBEM and FVM flow field trajectory vectors for the target problem of ideal flow in a 90-degree bend shows good agreement between the considered methodologies.