Abstract: Zernike polynomials have been used in fields such as optics, astronomy, and digital image analysis for many years. To form these polynomials, their Zernike moments must first be determined. One of the main issues in computing the moments is the factorial terms in their defining equation, which cause high time complexity. As a solution, several methods have been proposed in recent years to reduce the time complexity of these polynomials. The purpose of this research is to study several of the most popular recursive methods for fast Zernike computation and to compare them under a global theoretical evaluation criterion: worst-case time complexity. In this study, we analyze the selected algorithms and calculate the worst-case time complexity of each. The results are then presented and explained, and a conclusion is drawn by comparing this criterion across the studied algorithms. We observe that although some algorithms, such as the Wee method and the modified Prata method, achieve smaller time complexities, other approaches do not differ significantly from the classical algorithm.
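For context, the factorial terms at issue come from the direct definition of the Zernike radial polynomial. A minimal sketch of that classical baseline (the standard textbook formula, not code from any of the surveyed papers):

```python
from math import factorial

def zernike_radial(n: int, m: int, rho: float) -> float:
    """Classical factorial-based Zernike radial polynomial R_n^m(rho)."""
    m = abs(m)
    if (n - m) % 2:  # R_n^m vanishes when n - m is odd
        return 0.0
    total = 0.0
    for k in range((n - m) // 2 + 1):
        # Each summand evaluates four factorials; this repeated factorial
        # work is the cost the recursive methods are designed to remove.
        coeff = ((-1) ** k * factorial(n - k)
                 / (factorial(k)
                    * factorial((n + m) // 2 - k)
                    * factorial((n - m) // 2 - k)))
        total += coeff * rho ** (n - 2 * k)
    return total
```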
Funding: Engineering and Physical Sciences Research Council (GR/R52541/01); Wuhan University scientific research and teaching reform project.
Abstract: Most work on the time complexity analysis of evolutionary algorithms has focused on artificial binary problems; the time complexity of these algorithms for combinatorial optimisation is not well understood. This paper considers the time complexity of an evolutionary algorithm for a classical combinatorial optimisation problem: finding a maximum-cardinality matching in a graph. It is shown that the evolutionary algorithm can produce a matching with nearly maximum cardinality in average polynomial time.
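The abstract does not specify the paper's operators; as a hedged illustration of the setting, a generic (1+1) evolutionary algorithm over edge subsets, rejecting offspring that are not matchings, might look like this:

```python
import random

def is_matching(edge_ids, edges):
    """A set of edge indices is a matching if no two chosen edges share a vertex."""
    seen = set()
    for i in edge_ids:
        u, v = edges[i]
        if u in seen or v in seen:
            return False
        seen.update((u, v))
    return True

def one_plus_one_ea(edges, steps=10_000):
    """(1+1) EA: flip each edge bit with probability 1/n, keep feasible
    offspring that are at least as large as the parent."""
    n = len(edges)
    current = set()  # start from the empty matching
    for _ in range(steps):
        child = set(current)
        for i in range(n):
            if random.random() < 1.0 / n:
                child.symmetric_difference_update({i})
        if is_matching(child, edges) and len(child) >= len(current):
            current = child
    return current

edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
print(one_plus_one_ea(edges))  # near-maximum matching on a path graph
```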
Abstract: A natural extension of the Lorentz transformation to its complex version was constructed, together with a parallel extension of the Minkowski M^4 model for special relativity (SR) to complex C^4 space-time. As the [signed] absolute values of the complex coordinates of the underlying motion's characterization in C^4, one obtains a Newtonian-like type of motion, whereas as the real parts of the complex motion's description and of the complex Lorentz transformation, all of SR theory as modeled by the real M^4 space-time can be recovered. This means all of SR theory is preserved in the real subspace M^4 of the space-time C^4 while becoming simpler and clearer in the new complex model's framework. Since velocities in the complex model can be determined geometrically, with no primary use of time, time turns out to be definable within the theory obtained by reducing the complex C^4 model to the C^3 "para-space" model. That procedure allows us to separate time from the (para)space and consider all of SR theory as a theory of C^3 alone. The complex time defined within the C^3 theory is, in turn, interpreted and modeled by a single separate complex plane C^1. The possibility of applying the C^3 model to quantum mechanics is suggested. As such, the C^3 model seems to have unifying abilities for application to different physical theories.
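For reference, the standard real Lorentz boost on M^4 that the paper extends to its complex version (textbook form, not taken from the paper):

```latex
x' = \gamma\,(x - v t), \qquad
t' = \gamma\left(t - \frac{v x}{c^{2}}\right), \qquad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}
```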
Funding: The National Natural Science Foundation of China (No. 29677004).
Abstract: The effects of the colorimetric buffer solutions were investigated while the two colorimetric reactions of the Al-ferron complex and the Fe-ferron complex occurred individually, and the effects of the testing wavelength and the pH of the solutions were also investigated. A timed complexation colorimetric analysis method for Al-Fe-ferron, based on the total concentration of {Al + Fe}, was then established to determine the species distribution of polymeric Al-Fe. The recommended testing wavelength was 362 nm and the testing pH value was 5. By comparing the n(Al)/n(Fe) ratios, standard adsorption curves of the polymeric Al-Fe solutions were derived from the experimental results. Furthermore, the solutions' compositions were varied in both the molar n(Al)/n(Fe) ratio, i.e. 0/0, 5/5, 9/1 and 0/10, and the total {Al + Fe} concentration, which ranged from 10⁻⁵ to 10⁻⁴ mol/L.
Funding: Supported by the National Natural Science Foundation of China under Grant No. 61362024.
Abstract: Complex networks are important paradigms for analyzing complex systems, as they allow understanding of the structural properties of systems composed of different interacting entities. In this work we propose a reliable method for constructing complex networks from chaotic time series. We first estimate the covariance matrices, then introduce a geodesic-based distance between the covariance matrices. Consequently, the network can be constructed on a Riemannian manifold, where the nodes and edges correspond to covariance matrices and geodesic-based distances, respectively. The proposed method provides an intrinsic-geometry viewpoint for understanding time series.
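The abstract does not name the metric; a natural candidate is the affine-invariant Riemannian distance on symmetric positive-definite matrices, so the following sketch is an assumption rather than the paper's exact construction:

```python
import numpy as np
from scipy.linalg import fractional_matrix_power, logm

def spd_geodesic_distance(A: np.ndarray, B: np.ndarray) -> float:
    """Affine-invariant geodesic distance between SPD matrices:
    d(A, B) = || log(A^{-1/2} B A^{-1/2}) ||_F.
    """
    a_inv_sqrt = fractional_matrix_power(A, -0.5)
    middle = a_inv_sqrt @ B @ a_inv_sqrt
    return float(np.linalg.norm(logm(middle), "fro"))

# Covariance matrices of two time-series windows become two nodes;
# their geodesic distance defines the edge weight between them.
rng = np.random.default_rng(0)
x, y = rng.standard_normal((2, 5, 200))
d = spd_geodesic_distance(np.cov(x), np.cov(y))
```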
Abstract: The Internet of Things (IoT) is an emerging technology that moves the world in the direction of smart things, but IoT security is a complex problem due to its centralized architecture and limited capacity. Blockchain technology has therefore attracted great attention when combined with IoT, owing to its decentralized architecture, transparency, immutable records and cryptographic hash functions. Cryptographic hash algorithms are very important in blockchain technology for secure transmission: they convert variable-size inputs to a fixed-size hash output which is unchangeable. Existing cryptographic hash algorithms with digital signatures have the issues of single-node accessibility and key sizes of at most 128 bytes; moreover, if an attacker tries to hack the key, the transaction is cancelled. This paper presents the Modified Elliptic Curve Cryptography Multi Signature Scheme (MECC-MSS), which achieves multiple-node accessibility by finding the nearest path for a secure transaction. In this work, the input key size can be extended up to 512 bytes to enhance security. The performance of the proposed algorithm is compared with other cryptographic hash algorithms, namely the Secure Hashing Algorithms (SHAs) SHA224, SHA256, SHA384, SHA512, SHA3-224, SHA3-256, SHA3-384 and SHA3-512, and Message Digest 5, using a one-way analysis of variance test in terms of accuracy and time complexity. Results show that MECC-MSS achieves 90.85% accuracy and a time complexity of 1.4 nanoseconds with significance less than 0.05. The statistical analysis shows that the proposed algorithm is significantly better than the other cryptographic hash algorithms and also has lower time complexity.
Abstract: This paper provides an algorithm for distribution search and proves its time complexity. The algorithm uses a mathematical formula to search n elements in a sequence of n elements in O(n) expected time, and experimental results show that distribution search is superior to binary search.
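The abstract does not state the formula; the closest standard formula-driven technique is interpolation search, so the sketch below illustrates that related idea (numeric keys assumed), not the paper's exact algorithm:

```python
def interpolation_search(a, key):
    """Formula-driven search in a sorted list of numbers: the probe position
    is estimated by linear interpolation between the endpoint values."""
    lo, hi = 0, len(a) - 1
    while lo <= hi and a[lo] <= key <= a[hi]:
        if a[lo] == a[hi]:  # constant run: direct comparison settles it
            return lo if a[lo] == key else -1
        # Estimated position under a uniform-distribution assumption.
        pos = lo + (hi - lo) * (key - a[lo]) // (a[hi] - a[lo])
        if a[pos] == key:
            return pos
        if a[pos] < key:
            lo = pos + 1
        else:
            hi = pos - 1
    return -1
```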
Funding: The National Natural Science Foundation of China (Nos. 81873646 and 61903071); the Shanghai United Developing Technology Project of Municipal Hospitals (Nos. SHDC12006101 and SHDC12010115); the Shanghai Municipal Education Commission Gaofeng Clinical Medicine grant support (No. 20161430).
Abstract: Most information used to evaluate diabetic status is collected at a single time point, such as a fasting plasma glucose test, and provides only a limited view of an individual's health and disease risk. As a new parameter for continuously evaluating personal clinical status, the newly developed technique of continuous glucose monitoring (CGM) can characterize glucose dynamics. By calculating the complexity of glucose time series index (CGI) with refined composite multi-scale entropy analysis of the CGM data, the study showed for the first time that the complexity of glucose time series in subjects decreased gradually from normal glucose tolerance to impaired glucose regulation and then to type 2 diabetes (P for trend < 0.01). Furthermore, CGI was significantly associated with various parameters such as insulin sensitivity/secretion (all P < 0.01), and multiple linear stepwise regression showed that the disposition index, which reflects β-cell function after adjusting for insulin sensitivity, was the only independent factor correlated with CGI (P < 0.01). Our findings indicate that the CGI derived from CGM data may serve as a novel marker of glucose homeostasis.
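Multi-scale entropy combines coarse-graining with sample entropy; a minimal non-refined sketch of those two ingredients (the parameters m = 2 and r = 0.2·SD are common defaults, assumed here, and the refined-composite averaging over coarse-graining offsets is omitted):

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r): negative log of the conditional probability that
    length-m template matches remain matches at length m + 1."""
    x = np.asarray(x, dtype=float)
    r *= x.std()
    def match_count(mm):
        t = np.lib.stride_tricks.sliding_window_view(x, mm)
        # Chebyshev distance between all template pairs.
        d = np.max(np.abs(t[:, None] - t[None, :]), axis=-1)
        return (np.sum(d <= r) - len(t)) / 2  # drop self-matches, halve pairs
    return -np.log(match_count(m + 1) / match_count(m))

def coarse_grain(x, scale):
    """Means over non-overlapping windows; one series per time scale."""
    n = len(x) // scale
    return np.asarray(x[: n * scale]).reshape(n, scale).mean(axis=1)

rng = np.random.default_rng(1)
cgm = np.cumsum(rng.standard_normal(600))  # stand-in for a CGM trace
mse = [sample_entropy(coarse_grain(cgm, s)) for s in range(1, 5)]
```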
Funding: Supported by the Inner Mongolia Natural Science Foundation (200711020112) and the Innovation Foundation of Inner Mongolia University of Science and Technology (2009NC064).
Abstract: Based on protein-DNA complex crystal structural data in the up-to-date Nucleic Acid Database, the related parameters of DNA kinetic structure were investigated by Monte-Carlo multiple integrals based on a modified DNA-structure statistical mechanical model, and the time complexity and precision of the calculated results were analyzed.
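Monte-Carlo multiple integration itself is standard; a minimal sketch of the technique (the integrand below is a placeholder, not the paper's statistical-mechanical model):

```python
import numpy as np

def mc_integrate(f, lows, highs, n=100_000, seed=0):
    """Estimate a multiple integral as V * mean(f) over uniform samples
    in the box [lows, highs], where V is the box volume."""
    rng = np.random.default_rng(seed)
    lows, highs = np.asarray(lows, float), np.asarray(highs, float)
    samples = rng.uniform(lows, highs, size=(n, len(lows)))
    volume = np.prod(highs - lows)
    values = f(samples)
    # The standard error falls as 1/sqrt(n), independent of dimension,
    # which is why Monte Carlo suits high-dimensional integrals.
    return volume * values.mean(), volume * values.std() / np.sqrt(n)

# Placeholder integrand standing in for a Boltzmann-weighted quantity.
estimate, err = mc_integrate(lambda x: np.exp(-np.sum(x**2, axis=1)),
                             lows=[-1, -1, -1], highs=[1, 1, 1])
```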
基金supported by Chinese National Natural ScienceFoundation (41674016 and 41274016)
Abstract: To further improve the performance of the UKF (Unscented Kalman Filter) algorithm used in BDS/SINS (BeiDou Navigation Satellite System / Strapdown Inertial Navigation System), an improved GM-UKF (Gaussian Mixture Unscented Kalman Filter) that accounts for non-Gaussian distributions is discussed in this paper. The new algorithm uses SVD (Singular Value Decomposition) as an alternative to the covariance square-root calculation in UKF sigma-point production. To curb the rapidly growing number of Gaussian components, PDF (Probability Density Function) re-approximation is conducted. In principle, the efficient algorithm proposed here can achieve higher computational speed than the traditional GM-UKF. Simulation results show that, compared with the UKF and GM-UKF algorithms, the new algorithm implemented in a BDS/SINS tightly integrated navigation system is suitable for handling nonlinear/non-Gaussian integrated navigation position calculation, owing to its lower computational complexity and high accuracy.
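The SVD replacement for the Cholesky square root can be sketched generically as follows (a minimal construction assuming a symmetric positive semi-definite covariance P, not the paper's full filter):

```python
import numpy as np

def svd_sqrt(P: np.ndarray) -> np.ndarray:
    """Matrix square root via SVD: P = U S V^T  =>  sqrt(P) = U sqrt(S) U^T.

    Unlike Cholesky, this tolerates a positive *semi*-definite P, which
    can arise numerically in UKF covariance updates.
    """
    U, s, _ = np.linalg.svd(P)
    return U @ np.diag(np.sqrt(s)) @ U.T

def sigma_points(x, P, lam=1.0):
    """Standard 2n+1 unscented sigma points built from the SVD square root."""
    n = len(x)
    S = svd_sqrt((n + lam) * P)
    return np.vstack([x, x + S.T, x - S.T])  # rows of S.T are columns of S
```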
Funding: The National Natural Science Foundation of China (No. 61702315); the Key R&D Program (international science and technology cooperation project) of Shanxi Province, China (No. 201903D421003); the National Key Research and Development Program of China (No. 2018YFB1800401).
Abstract: With an increasingly urgent demand for fast-recovery routing mechanisms in large-scale networks, minimizing network disruption caused by network failures has become critical. However, a large number of studies have shown that network failures occur on the Internet inevitably and frequently. The routing protocols currently deployed on the Internet adopt a reconvergence mechanism to cope with network failures. During the reconvergence process, packets may be lost because of inconsistent routing information, which greatly reduces the network's availability and seriously affects the Internet service provider's (ISP's) service quality and reputation. Improving network availability has therefore become an urgent problem. As such, the Internet Engineering Task Force suggests the use of the downstream path criterion (DC) to address all single-link failure scenarios. However, existing methods for implementing DC schemes are time consuming, require a large amount of router CPU resources, and may degrade router capability; the computation overhead they introduce is significant, especially in large-scale networks. This study therefore proposes an efficient intra-domain routing protection algorithm (ERPA) for large-scale networks. Theoretical analysis indicates that the time complexity of ERPA is less than that of constructing a shortest-path tree. Experimental results show that ERPA reduces the computation overhead significantly compared with existing algorithms while offering the same network availability as DC.
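The downstream criterion itself has a compact standard statement: a neighbor n of source s is a loop-free next hop toward destination d whenever dist(n, d) < dist(s, d). A sketch under that textbook definition (the adjacency-dict representation and the naive all-sources pass are illustrative, not ERPA):

```python
import heapq

def dijkstra(adj, src):
    """Shortest-path distances from src over a weighted adjacency dict."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def downstream_next_hops(adj, s, d):
    """Neighbors of s satisfying dist(n, d) < dist(s, d): the detour can
    never loop back through s."""
    dist_to_d = {u: dijkstra(adj, u).get(d, float("inf")) for u in adj}
    return [n for n in adj[s] if dist_to_d[n] < dist_to_d[s]]
```

The naive pass above runs a shortest-path computation from every node; overhead of roughly this kind is what ERPA is designed to cut below the cost of a single shortest-path-tree construction.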
Abstract: In the present era, a very large volume of data is stored in online and offline databases. Enterprises, research, medical and healthcare organizations, and academic institutions store data in databases, and subsequent retrievals are performed for further processing. Finding the required data in a given database within the minimum possible time is one of the key factors in achieving the best possible performance of any computer-based application. If the data is already sorted, finding or searching is comparatively faster. In real-life scenarios, the data collected from different sources may not be in sorted order, so sorting algorithms are required to arrange the data in some order in the least possible time. In this paper, I propose an intelligent approach to designing a smart variant of the bubble sort algorithm. I call it Smart Bubble sort; it exhibits a dynamic footprint: the capability of adapting itself from the average-case to the best-case scenario. It is an in-place sorting algorithm and its best-case time complexity is Ω(n), which is linear and better than bubble sort, selection sort, and merge sort. The average-case and worst-case complexity estimates are based on its static footprint analysis: its worst-case complexity is O(n^2) and its average-case complexity is Θ(n^2). Smart Bubble sort is capable of adapting itself to the best-case scenario from the average-case scenario at any subsequent stage due to its dynamic and intelligent nature. Smart Bubble sort outperforms bubble sort, selection sort, and merge sort in the best-case scenario, and outperforms bubble sort in the average-case scenario.
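The paper's exact adaptation mechanism is not reproduced here; the classic early-exit flag already yields the Ω(n) best case and illustrates the "dynamic footprint" idea:

```python
def adaptive_bubble_sort(a):
    """Bubble sort with an early-exit flag: a swap-free pass proves the
    list is sorted, so an already-sorted input costs one O(n) pass."""
    n = len(a)
    for end in range(n - 1, 0, -1):
        swapped = False
        for i in range(end):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        if not swapped:  # best case: detected on the first pass => Omega(n)
            break
    return a
```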
Abstract: This paper presents a new tree sorting algorithm whose average time complexity is much better than that of sorting methods using an AVL-tree or other balanced trees. Experiments show that our algorithm is much faster than sorting methods using an AVL-tree or other balanced trees.
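The abstract does not describe the paper's tree variant; for orientation only, a plain binary-search-tree sort, the baseline family being improved upon, looks like this:

```python
def tree_sort(items):
    """Sort by BST insertion followed by in-order traversal (average
    O(n log n)); the paper's faster tree variant is not reproduced here."""
    root = None
    for x in items:
        root = _insert(root, x)
    out = []
    _inorder(root, out)
    return out

def _insert(node, x):
    if node is None:
        return [x, None, None]  # [value, left, right]
    child = 1 if x < node[0] else 2
    node[child] = _insert(node[child], x)
    return node

def _inorder(node, out):
    if node:
        _inorder(node[1], out)
        out.append(node[0])
        _inorder(node[2], out)
```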
Abstract: Nowadays, increased information capacity and transmission make information security a difficult problem, and most researchers employ encryption and decryption algorithms to strengthen information security domains. As the field progresses, new encryption methods are being adopted. In this paper, a hybrid encryption algorithm is presented that combines the honey encryption algorithm with an advanced DNA encoding scheme in key generation. Deoxyribonucleic acid (DNA) achieves maximal protection and strong security with high capacity and a low modification rate, and it is currently being investigated as a potential carrier for information security. Honey Encryption (HE) is an important encryption method for security systems and can strongly resist brute-force attacks. However, traditional honeyword encryption has a message-space limitation in the message distribution process, so we use an improved honey encryption algorithm in our proposed system. By combining the benefits of a DNA-based encoding algorithm with the improved honey encryption algorithm, a new hybrid method is created. In this paper, five different lookup tables are created for the DNA encoding scheme in key generation, and the improved honey encryption algorithm based on this scheme is discussed in detail. Passwords are generated as keys using the DNA methods based on the five lookup tables, and disease names are the input messages encoded through the honey encryption process. This hybrid method can reduce the storage overhead of the DNA method by applying the five lookup tables and can reduce the time complexity of the existing honey encryption process.
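The paper's five lookup tables are not given in the abstract; the sketch below shows a single hypothetical 2-bit-to-nucleotide table of the kind commonly used in DNA coding schemes, purely to illustrate the encode/decode round trip:

```python
# One illustrative table (hypothetical; the paper rotates among five
# such tables during key generation).
TABLE = {"00": "A", "01": "C", "10": "G", "11": "T"}

def dna_encode(data: bytes) -> str:
    """Map each pair of bits to a nucleotide symbol."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(TABLE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_decode(strand: str) -> bytes:
    """Invert the table and reassemble bytes from the bit string."""
    rev = {v: k for k, v in TABLE.items()}
    bits = "".join(rev[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

assert dna_decode(dna_encode(b"key")) == b"key"
```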
Abstract: A series of poly-aluminum-chloride-sulfates (PACS) with different basicities (γ) and Al³⁺/SO₄²⁻ molar ratios was prepared and dried at 105 °C and 65 °C, respectively. The distribution of aluminum species in PACS was examined, and the effects of the γ value, the Al³⁺/SO₄²⁻ molar ratio and dilution on the distribution of aluminum species were investigated using the Al-ferron timed complex colorimetric method. IR spectroscopy and X-ray diffraction were used to study the effects of the γ value, the Al³⁺/SO₄²⁻ molar ratio and the drying temperature on the structure of PACS. The experimental results show that the Al³⁺/SO₄²⁻ molar ratio has a great effect on the distribution of aluminum species, whereas dilution has little effect. The lower the Al³⁺/SO₄²⁻ molar ratio, the higher the proportions of polymer and colloidal species in PACS. The degree of polymerization of PACS was related to the γ value and the Al³⁺/SO₄²⁻ molar ratio. Drying temperature influences the structure and the solubility of the solid PACS products.
Abstract: The Al-ferron timed complex colorimetric method (AFM) and the ²⁷Al NMR spectroscopy method (ANM) are discussed. For the former, results from different colorimetric-reagent preparation methods indicate some differences between them, and the combined method can be used as a simplified procedure. For the latter, the small-tube method is more accurate. Finally, Al₁₃ (ANM) is compared with Alb (AFM).
基金supported by Project of Plan for Science and Technology Development of Jilin Province (No. 20101504)Project of Research of Science and Technology for the 11th Five-year Plan of Jilin Education Department (No. 2009604)
Abstract: A Bloom filter is a space-efficient data structure used for concisely representing a set and answering membership queries at the expense of introducing false positives. In this paper, we propose the L-priorities Bloom filter (LPBF) as a new member of the Bloom filter (BF) family; it uses a limited multidimensional bit-space matrix to replace the bit vector of standard Bloom filters in order to support different priorities for the elements of a set. We analyze its time and space complexity, and especially the false-positive rate of LPBF. Furthermore, we present a detailed practical evaluation of the false-positive rate achieved by LPBF. The results show that LPBF performs better than standard BFs with respect to false-positive rate.
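For reference, the standard Bloom filter that LPBF generalizes, with the usual false-positive estimate p ≈ (1 − e^(−kn/m))^k after n insertions (a generic sketch, not the LPBF structure):

```python
import hashlib

class BloomFilter:
    """Standard Bloom filter: k hash positions per element in an m-bit
    vector; false-positive rate is roughly (1 - exp(-k*n/m))**k."""
    def __init__(self, m: int, k: int):
        self.m, self.k, self.bits = m, k, bytearray(m)

    def _positions(self, item: str):
        # Derive k positions by salting one cryptographic hash.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: str):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item: str) -> bool:
        # May answer True for absent items (false positive), but never
        # False for items that were added.
        return all(self.bits[p] for p in self._positions(item))

bf = BloomFilter(m=1024, k=4)
bf.add("hello")
assert "hello" in bf
```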
Abstract: The Molopo Farms Complex (MFC) is a 13,000 km² layered, mafic-ultramafic intrusion straddling the southern border of Botswana with South Africa. It does not outcrop due to Cenozoic cover, but is believed to intrude the …
Funding: Supported by the National Key Research and Development Program of China (2020YFB1006104).
Abstract: A fundamental problem in complex time series analysis involves data prediction and repair. However, existing methods are not accurate enough for complex and multidimensional time series data. In this paper, we propose a novel approach: a complex time series prediction model based on the conditional random field (CRF) and a recurrent neural network (RNN). This model can be used as an upper-level predictor in a stacking process or be trained using deep learning methods. As the experimental results show, our approach is more accurate than existing methods in some suitable scenarios.