Zernike polynomials have been used for many years in fields such as optics, astronomy, and digital image analysis. To form these polynomials, the Zernike moments must be determined. One of the main issues in computing the moments is the factorial terms in their defining equation, which cause high time complexity. As a solution, several methods have been proposed in recent years to reduce the time complexity of these polynomials. The purpose of this research is to study several of the most popular recursive methods for fast Zernike computation and to compare them using a global theoretical evaluation criterion: worst-case time complexity. In this study, we analyzed the selected algorithms and calculated the worst-case time complexity of each one. The results are then presented and explained, and a conclusion is drawn by comparing this criterion across the studied algorithms. We observed that although some algorithms, such as the Wee method and the modified Prata method, succeeded in achieving lower time complexities, other approaches did not differ significantly from the classical algorithm.
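The factorial terms the abstract refers to appear in the standard direct formula for the Zernike radial polynomial. A minimal Python sketch (illustrative only; it is the classical direct method, not any of the surveyed recursive algorithms) shows the factorial-heavy definition, with simple caching of factorials as the most basic cost reduction:

```python
from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def fact(k: int) -> int:
    # Caching avoids recomputing the same factorials for every (n, m, k) term.
    return factorial(k)

def zernike_radial(n: int, m: int, rho: float) -> float:
    """Direct (factorial-based) Zernike radial polynomial R_n^m(rho)."""
    m = abs(m)
    if (n - m) % 2 != 0:
        return 0.0  # R_n^m vanishes when n - |m| is odd
    total = 0.0
    for k in range((n - m) // 2 + 1):
        coeff = ((-1) ** k * fact(n - k)
                 / (fact(k) * fact((n + m) // 2 - k) * fact((n - m) // 2 - k)))
        total += coeff * rho ** (n - 2 * k)
    return total
```

For example, R_2^0(ρ) = 2ρ² − 1, so `zernike_radial(2, 0, 0.5)` evaluates to −0.5; the recursive methods compared in the paper avoid the per-term factorial products entirely.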
A natural extension of the Lorentz transformation to its complex version was constructed, together with a parallel extension of the Minkowski M^4 model for special relativity (SR) to the complex C^4 space-time. Taking the [signed] absolute values of the complex coordinates of the underlying motion's characterization in C^4 yields a Newtonian-like type of motion, whereas taking the real parts of the complex motion's description and of the complex Lorentz transformation recovers all of SR theory as modeled by the real M^4 space-time. This means all of SR theory is preserved in the real subspace M^4 of the space-time C^4 while becoming simpler and clearer in the new complex model's framework. Since velocities in the complex model can be determined geometrically, with no primary use of time, time turns out to be definable within the theory equivalent to the reduced complex C^4 model, the C^3 "para-space" model. That procedure allows us to separate time from the (para)space and to treat all of SR theory as a theory of C^3 alone. The complex time defined within the C^3 theory is, in turn, interpreted and modeled by a single separate complex plane C^1. The possibility of applying the C^3 model to quantum mechanics is suggested; as such, the C^3 model seems to have unifying potential for application across different physical theories.
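For reference, the real Lorentz boost (along the x-axis in M^4) that the paper extends to a complex version is the standard one:

```latex
x' = \gamma\,(x - vt), \qquad
t' = \gamma\!\left(t - \frac{v x}{c^{2}}\right), \qquad
y' = y, \quad z' = z, \qquad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}.
```

The complexified transformation itself is the paper's contribution and is not reproduced here.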
Internet of Things (IoT) is an emerging technology that moves the world in the direction of smart things. However, IoT security is a complex problem due to the centralized architecture and limited capacity of IoT devices. Blockchain technology has therefore attracted great attention when combined with IoT, owing to its decentralized architecture, transparency, immutable records, and cryptographic hash functions. Cryptographic hash algorithms are central to blockchain for secure transmission: they convert variable-size inputs to a fixed-size hash output that cannot be altered. Existing cryptographic hash algorithms with digital signatures suffer from single-node accessibility and support key sizes of only up to 128 bytes; moreover, if an attacker tries to hack the key, the transaction is cancelled. This paper presents the Modified Elliptic Curve Cryptography Multi-Signature Scheme (MECC-MSS), which achieves multiple-node accessibility by finding the nearest path for a secure transaction. In this work, the input key size can be extended up to 512 bytes to enhance security. The performance of the proposed algorithm is compared with other cryptographic hash algorithms, namely the Secure Hashing Algorithms (SHAs) SHA224, SHA256, SHA384, SHA512, SHA3-224, SHA3-256, SHA3-384, and SHA3-512, and Message Digest 5 (MD5), using a one-way analysis of variance test in terms of accuracy and time complexity. Results show that MECC-MSS achieves 90.85% accuracy and a time complexity of 1.4 nanoseconds, with significance less than 0.05. The statistical analysis indicates that the proposed algorithm is significantly better than the other cryptographic hash algorithms and also has lower time complexity.
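The fixed-size, one-way property of the hash functions compared in the paper can be seen with Python's standard `hashlib` (SHA-256 shown; the paper's MECC-MSS scheme itself is not available in any standard library):

```python
import hashlib

def digest_hex(message: bytes, algorithm: str = "sha256") -> str:
    # Input of any length is mapped to a fixed-size, effectively irreversible digest.
    h = hashlib.new(algorithm)
    h.update(message)
    return h.hexdigest()

short = digest_hex(b"tx:1")
long_ = digest_hex(b"tx:1" * 10_000)
# Both digests are 64 hex characters (256 bits), regardless of input size.
```

Changing a single input byte produces an unrelated digest, which is what makes recorded blockchain transactions tamper-evident.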
Most information used to evaluate diabetic status is collected at a single time point, such as a fasting plasma glucose test, and provides only a limited view of an individual's health and disease risk. As a new approach to continuously evaluating personal clinical status, the recently developed technique of continuous glucose monitoring (CGM) can characterize glucose dynamics. By calculating the complexity of glucose time series index (CGI) with refined composite multi-scale entropy analysis of the CGM data, this study showed for the first time that the complexity of glucose time series decreased gradually from normal glucose tolerance to impaired glucose regulation and then to type 2 diabetes (P for trend < 0.01). Furthermore, CGI was significantly associated with parameters such as insulin sensitivity/secretion (all P < 0.01), and multiple linear stepwise regression showed that the disposition index, which reflects β-cell function after adjusting for insulin sensitivity, was the only independent factor correlated with CGI (P < 0.01). Our findings indicate that the CGI derived from CGM data may serve as a novel marker for evaluating glucose homeostasis.
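Refined composite multi-scale entropy (RCMSE), used above to compute the CGI, starts by coarse-graining the glucose series at increasing scales. A minimal sketch of that first step (illustrative only; full RCMSE then averages sample entropy over all `scale` shifted coarse-grainings):

```python
def coarse_grain(series, scale, offset=0):
    """Average consecutive non-overlapping windows of length `scale`,
    starting at `offset` (RCMSE uses offsets 0 .. scale-1)."""
    n = (len(series) - offset) // scale
    return [sum(series[offset + i * scale: offset + (i + 1) * scale]) / scale
            for i in range(n)]
```

For example, `coarse_grain([1, 2, 3, 4, 5, 6], 2)` returns `[1.5, 3.5, 5.5]`; entropy computed on such progressively smoothed series quantifies how much structure survives at each time scale.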
To further improve the performance of the Unscented Kalman Filter (UKF) used in BDS/SINS (BeiDou Navigation Satellite System / Strapdown Inertial Navigation System), this paper discusses an improved Gaussian Mixture Unscented Kalman Filter (GM-UKF) that accounts for non-Gaussian distributions. The new algorithm uses Singular Value Decomposition (SVD) as an alternative to the covariance square-root calculation in UKF sigma-point generation, and it bounds the otherwise rapidly growing number of Gaussian components by re-approximating the probability density function (PDF). In principle, the proposed algorithm can achieve higher computational speed than the traditional GM-UKF. Simulation results show that, compared with the UKF and GM-UKF algorithms, the new algorithm implemented in a BDS/SINS tightly integrated navigation system is well suited to nonlinear/non-Gaussian integrated navigation position calculation, owing to its lower computational complexity at comparable accuracy.
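The covariance square root that the paper replaces with SVD appears in standard unscented sigma-point generation. A pure-Python sketch of the conventional Cholesky-based step (a small 2-state example; the paper substitutes an SVD factorization at exactly this point for better numerical robustness with near-singular covariances):

```python
import math

def cholesky(P):
    """Lower-triangular L with L L^T = P, for a small SPD matrix (list of lists)."""
    n = len(P)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(P[i][i] - s)
            else:
                L[i][j] = (P[i][j] - s) / L[j][j]
    return L

def sigma_points(x, P, lam=1.0):
    """2n+1 unscented sigma points around mean x with covariance P."""
    n = len(x)
    # Square root of the scaled covariance -- the step the paper does via SVD.
    L = cholesky([[(n + lam) * P[i][j] for j in range(n)] for i in range(n)])
    pts = [list(x)]
    for j in range(n):
        col = [L[i][j] for i in range(n)]
        pts.append([x[i] + col[i] for i in range(n)])
        pts.append([x[i] - col[i] for i in range(n)])
    return pts
```

With an SVD `P = U S U^T`, the square root is taken as `U sqrt(S)`, which remains well defined even when Cholesky fails on a numerically non-positive-definite covariance.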
In the present era, a huge volume of data is stored in online and offline databases. Enterprises, research, medical, and healthcare organizations, and academic institutions store data in databases, and subsequent retrievals are performed for further processing. Finding the required data in a database within the minimum possible time is one of the key factors in achieving the best possible performance of any computer-based application. If the data is already sorted, searching is comparatively faster; in real-life scenarios, however, data collected from different sources may not be in sorted order, so sorting algorithms are required to arrange the data in the least possible time. In this paper, I propose an intelligent approach to designing a smart variant of the bubble sort algorithm. I call it Smart Bubble sort; it exhibits a dynamic footprint: the capability of adapting itself from the average-case to the best-case scenario. It is an in-place sorting algorithm whose best-case time complexity is Ω(n), which is linear and better than bubble sort, selection sort, and merge sort. In the average-case and worst-case analyses, the complexity estimates are based on its static footprint: its worst-case complexity is O(n²) and its average-case complexity is Θ(n²). Smart Bubble sort is capable of adapting itself to the best-case scenario from the average-case scenario at any subsequent stage due to its dynamic and intelligent nature. It outperforms bubble sort, selection sort, and merge sort in the best-case scenario, and outperforms bubble sort in the average-case scenario.
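The paper's exact Smart Bubble sort listing is not reproduced in the abstract, but the Ω(n) best case it claims comes from the classic early-exit refinement: if a full pass makes no swaps, the array is already sorted and the algorithm stops. A standard adaptive bubble sort illustrating that behaviour:

```python
def adaptive_bubble_sort(a):
    """In-place bubble sort that terminates after a swap-free pass.
    Best case (already sorted): one pass, Omega(n). Worst case: O(n^2)."""
    n = len(a)
    for end in range(n - 1, 0, -1):
        swapped = False
        for i in range(end):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        if not swapped:  # no inversions left: stop early
            break
    return a
```

On already-sorted input this does exactly one comparison pass, which is the "dynamic footprint" idea: the same code degrades gracefully from O(n²) toward Ω(n) as the input approaches sorted order.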
With an increasingly urgent demand for fast-recovery routing mechanisms in large-scale networks, minimizing the disruption caused by network failure has become critical. A large body of work has shown that failures occur on the Internet inevitably and frequently. The routing protocols currently deployed on the Internet cope with failures through a reconvergence mechanism, during which packets may be lost because of inconsistent routing information; this greatly reduces network availability and seriously affects an Internet service provider's (ISP's) service quality and reputation. Improving network availability is therefore an urgent problem. The Internet Engineering Task Force suggests using the downstream path criterion (DC) to address all single-link failure scenarios. However, existing methods for implementing DC schemes are time consuming, require a large amount of router CPU resources, and may degrade router capability; the computation overhead they introduce is significant, especially in large-scale networks. This study therefore proposes an efficient intra-domain routing protection algorithm (ERPA) for large-scale networks. Theoretical analysis indicates that the time complexity of ERPA is less than that of constructing a shortest path tree. Experimental results show that ERPA reduces the computation overhead significantly compared with existing algorithms while offering the same network availability as DC.
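The downstream path criterion itself is simple to state: a neighbor v of node u is a loop-free next hop toward destination d if dist(v, d) < dist(u, d), so forwarding to v strictly decreases the remaining distance and can never loop back through u. A minimal sketch over an undirected weighted graph (illustrative only; ERPA's faster computation is not reproduced here):

```python
import heapq

def dijkstra(graph, src):
    """Shortest-path distances from src; graph: node -> {neighbor: weight}."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def downstream_neighbors(graph, u, dest):
    """Neighbors of u satisfying the downstream criterion toward dest."""
    dist = dijkstra(graph, dest)  # distances to dest (graph is undirected)
    return [v for v in graph[u] if dist[v] < dist[u]]
```

Any neighbor returned by `downstream_neighbors` is a safe backup next hop when the primary link fails, which is exactly the per-destination set a DC scheme must compute, and whose cost ERPA aims to cut.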
Nowadays, increased information capacity and transmission make information security a difficult problem, so most researchers employ encryption and decryption algorithms to strengthen information security domains, and new encryption methods continue to appear. This paper presents a hybrid encryption algorithm that combines the honey encryption algorithm with an advanced DNA encoding scheme for key generation. Deoxyribonucleic acid (DNA) computing offers strong protection with high capacity and a low modification rate, and it is currently being investigated as a potential carrier for information security. Honey Encryption (HE) is an important encryption method for security systems that can strongly resist brute-force attacks; however, traditional honey encryption has a message-space limitation in the message distribution process, so we use an improved honey encryption algorithm in the proposed system. By combining the benefits of DNA-based encoding with the improved honey encryption algorithm, a new hybrid method is created. Five different lookup tables are defined in the DNA encoding scheme for key generation, and the improved honey encryption algorithm based on this scheme is discussed in detail. Passwords are generated as keys using the DNA methods based on the five lookup tables, and disease names are the input messages encoded by the honey encryption process. This hybrid method reduces the storage overhead of the DNA method by applying the five lookup tables and reduces the time complexity of the existing honey encryption process.
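The paper's five lookup tables are not reproduced in the abstract; as an illustration of the kind of byte-to-base mapping such schemes build on, here is a hypothetical single-table DNA encoding using the common 2-bits-per-nucleotide convention (the table contents are an assumption, not the paper's):

```python
# Hypothetical lookup table: 2 bits per nucleotide (one common convention).
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def dna_encode(data: bytes) -> str:
    """Map each byte to 4 DNA bases via its 8-bit binary representation."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_decode(strand: str) -> bytes:
    """Invert dna_encode: 4 bases back to one byte."""
    bits = "".join(BASE_TO_BITS[b] for b in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
```

Using several such tables and switching between them, as the paper does, enlarges the keyspace an attacker must search while keeping the per-byte encoding cost constant.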
A series of poly-aluminum-chloride-sulfate (PACS) coagulants with different basicities (γ) and Al³⁺/SO₄²⁻ molar ratios was prepared and dried at 105 °C and 65 °C, respectively. The distribution of aluminum species in PACS was examined, and the effects of the γ value, the Al³⁺/SO₄²⁻ molar ratio, and dilution on that distribution were investigated using the Al-ferron timed complexation colorimetric method. IR spectroscopy and X-ray diffraction were used to study the effects of the γ value, the Al³⁺/SO₄²⁻ molar ratio, and the drying temperature on the structure of PACS. The experimental results show that the Al³⁺/SO₄²⁻ molar ratio has a great effect on the distribution of aluminum species, whereas dilution has little effect. The lower the Al³⁺/SO₄²⁻ molar ratio, the higher the proportions of polymeric and colloidal species in PACS. The degree of polymerization of PACS was related to the γ value and the Al³⁺/SO₄²⁻ molar ratio, and the drying temperature influenced the structure and solubility of the solid PACS products.
The Molopo Farms Complex (MFC) is a 13,000 km² layered, mafic-ultramafic intrusion straddling the southern border of Botswana with South Africa. It does not outcrop due to Cenozoic cover, but is believed to intrude …
A fundamental problem in complex time series analysis involves data prediction and repair. However, existing methods are not accurate enough for complex and multidimensional time series data. In this paper, we propose a novel complex time series prediction model based on the conditional random field (CRF) and a recurrent neural network (RNN). This model can be used as an upper-level predictor in a stacking process or be trained using deep learning methods. As the experimental results show, our approach is more accurate than existing methods in suitable scenarios.
This paper investigates pinning synchronization of discrete-time complex networks with different time-varying delays. An important lemma is presented and proved, then detailed analysis is given to yield synchronization criteria for this kind of network. The results provide an effective way to synchronize discrete-time complex networks while reducing control cost. Furthermore, these theoretical results are illustrated on a complex network via two kinds of pinning schemes. Numerical simulations verify the feasibility of the proposed methods.
The fading factor plays a significant role in the strong tracking idea. However, the traditional method of introducing the fading factor limits the accuracy and robustness of current strong-tracking-based nonlinear filtering algorithms such as the Cubature Kalman Filter (CKF), since it considers only the first-order Taylor expansion. To this end, a new fading factor scheme is suggested and introduced into the strong tracking CKF. The new scheme expands the number of fading factors from one to two, with reselected introduction positions. The relationship between the two fading factors, as well as a general calculation method, can be derived from the Taylor expansion. The new scheme shows clear superiority under different degrees of nonlinearity of the measurement function, and an equivalent calculation method can be established when it is applied to the CKF. Theoretical analysis shows that the strong tracking CKF can extract third-order term information from the residual and thus achieve second-order accuracy. After optimizing the strong tracking algorithm flow, a Fast Strong Tracking CKF (FSTCKF) is finally established. Two simulation examples show that the novel FSTCKF improves the robustness of the traditional CKF while minimizing algorithm time complexity under various conditions.
This paper deals with a general variant of the reverse undesirable (obnoxious) center location problem on cycle graphs. Given a 'selective' subset of the vertices of the underlying cycle graph as the locations of the existing customers, the task is to modify the edge lengths within a given budget such that the minimum of the distances between a predetermined undesirable facility location and the customer points is maximized under the perturbed edge lengths. We develop a combinatorial O(n log n) algorithm for the problem with continuous modifications. For the uniform-cost model, we solve the problem in linear time by an improved algorithm. Furthermore, exact solution methods are proposed for the problem with integer modifications.
This paper is concerned with the problem of modifying the edge lengths of a weighted extended star network with n vertices by integer amounts at minimum total cost, subject to given modification bounds, so that a set of p prespecified vertices becomes an undesirable p-median location on the perturbed network. We call this problem the integer inverse undesirable p-median location model. Exact combinatorial algorithms with O(p²n log n) and O(p²(n log n + n log n_max)) running times are proposed for solving the problem under the weighted rectilinear and weighted Chebyshev norms, respectively. Furthermore, it is shown that the problem under the weighted sum-type Hamming distance with uniform modification bounds can be solved in O(pn log n) time.
Funding (continuous glucose monitoring study): the National Natural Science Foundation of China (Nos. 81873646 and 61903071); the Shanghai United Developing Technology Project of Municipal Hospitals (Nos. SHDC12006101 and SHDC12010115); and Shanghai Municipal Education Commission Gaofeng Clinical Medicine grant support (No. 20161430).
Funding (BDS/SINS integrated navigation study): supported by the Chinese National Natural Science Foundation (41674016 and 41274016).
基金the National Natural Science Foundation of China(No.61702315)the Key R&D program(international science and technology cooperation project)of Shanxi Province China(No.201903D421003)the National Key Research and Development Program of China(No.2018YFB1800401).
文摘With an increasing urgent demand for fast recovery routing mechanisms in large-scale networks,minimizing network disruption caused by network failure has become critical.However,a large number of relevant studies have shown that network failures occur on the Internet inevitably and frequently.The current routing protocols deployed on the Internet adopt the reconvergence mechanism to cope with network failures.During the reconvergence process,the packets may be lost because of inconsistent routing information,which reduces the network’s availability greatly and affects the Internet service provider’s(ISP’s)service quality and reputation seriously.Therefore,improving network availability has become an urgent problem.As such,the Internet Engineering Task Force suggests the use of downstream path criterion(DC)to address all single-link failure scenarios.However,existing methods for implementing DC schemes are time consuming,require a large amount of router CPU resources,and may deteriorate router capability.Thus,the computation overhead introduced by existing DC schemes is significant,especially in large-scale networks.Therefore,this study proposes an efficient intra-domain routing protection algorithm(ERPA)in large-scale networks.Theoretical analysis indicates that the time complexity of ERPA is less than that of constructing a shortest path tree.Experimental results show that ERPA can reduce the computation overhead significantly compared with the existing algorithms while offering the same network availability as DC.
文摘Nowadays, increased information capacity and transmission processes make information security a difficult problem. As a result, most researchers employ encryption and decryption algorithms to enhance information security domains. As it progresses, new encryption methods are being used for information security. In this paper, a hybrid encryption algorithm that combines the honey encryption algorithm and an advanced DNA encoding scheme in key generation is presented. Deoxyribonucleic Acid (DNA) achieves maximal protection and powerful security with high capacity and low modification rate, it is currently being investigated as a potential carrier for information security. Honey Encryption (HE) is an important encryption method for security systems and can strongly prevent brute force attacks. However, the traditional honeyword encryption has a message space limitation problem in the message distribution process. Therefore, we use an improved honey encryption algorithm in our proposed system. By combining the benefits of the DNA-based encoding algorithm with the improved Honey encryption algorithm, a new hybrid method is created in the proposed system. In this paper, five different lookup tables are created in the DNA encoding scheme in key generation. The improved Honey encryption algorithm based on the DNA encoding scheme in key generation is discussed in detail. The passwords are generated as the keys by using the DNA methods based on five different lookup tables, and the disease names are the input messages that are encoded by using the honey encryption process. This hybrid method can reduce the storage overhead problem in the DNA method by applying the five different lookup tables and can reduce time complexity in the existing honey encryption process.
Abstract: A series of poly-aluminum-chloride-sulfate (PACS) coagulants with different basicities (γ) and Al³⁺/SO₄²⁻ molar ratios was prepared and dried at 105 °C and 65 °C, respectively. The distribution of aluminum species in PACS was examined, and the effects of the γ value, the Al³⁺/SO₄²⁻ molar ratio, and dilution on that distribution were investigated using the Al-ferron timed complex colorimetric method. IR spectroscopy and X-ray diffraction were used to study the effects of the γ value, the Al³⁺/SO₄²⁻ molar ratio, and the drying temperature on the structure of PACS. The experimental results show that the Al³⁺/SO₄²⁻ molar ratio strongly affects the distribution of aluminum species, whereas dilution has little effect. The lower the Al³⁺/SO₄²⁻ molar ratio, the higher the proportions of polymeric and colloidal species in PACS. The polymeric degree of PACS is related to the γ value and the Al³⁺/SO₄²⁻ molar ratio, and the drying temperature influences the structure and solubility of the solid PACS products.
Abstract: The Molopo Farms Complex (MFC) is a 13,000 km² layered, mafic-ultramafic intrusion straddling the southern border of Botswana with South Africa. It does not outcrop due to Cenozoic cover, but is believed to intrude
Funding: Supported by the National Key Research and Development Program of China (2020YFB1006104).
Abstract: A fundamental problem in complex time-series analysis involves data prediction and repair. However, existing methods are not accurate enough for complex, multidimensional time-series data. In this paper, we propose a novel complex time-series prediction model based on the conditional random field (CRF) and recurrent neural network (RNN). The model can be used as an upper-level predictor in a stacking process or be trained with deep learning methods. As the experimental results show, our approach is more accurate than existing methods in suitable scenarios.
Funding: Supported by the National Natural Science Foundation of China under Grant Nos. 61304022, 61573262, and 61573011, and the Excellent Youth Foundation of the Hunan Provincial Department of Education (16B141).
Abstract: This paper investigates pinning synchronization of discrete-time complex networks with different time-varying delays. An important lemma is presented and proved, and then detailed analysis yields synchronization criteria for this kind of network. The results provide an effective way to synchronize discrete-time complex networks while reducing control cost. These theoretical results are illustrated on a complex network via two kinds of pinning schemes, and numerical simulations verify the feasibility of the proposed methods.
Funding: Supported by the National Natural Science Foundation of China (No. 61573283).
Abstract: The fading factor plays a significant role in the strong tracking idea. However, the traditional fading-factor introduction method limits the accuracy and robustness of current strong-tracking-based nonlinear filtering algorithms such as the Cubature Kalman Filter (CKF), since it considers only the first-order Taylor expansion. To this end, a new fading-factor scheme is suggested and introduced into the strong tracking CKF method. The new scheme expands the number of fading factors from one to two, with reselected introduction positions. The relationship between the two fading factors, as well as a general calculation method, can be derived from the Taylor expansion. The new scheme shows clear superiority under different degrees of nonlinearity of the measurement function, and an equivalent calculation method can be established when it is applied to the CKF. Theoretical analysis shows that the strong tracking CKF can extract third-order term information from the residual and thus achieve second-order accuracy. After optimizing the strong tracking algorithm process, a Fast Strong Tracking CKF (FSTCKF) is finally established. Two simulation examples show that the novel FSTCKF improves the robustness of the traditional CKF while minimizing algorithm time complexity under various conditions.
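For orientation, the traditional single fading factor that this abstract improves on inflates the predicted covariance whenever the residual stays large, so the filter keeps tracking a mismatched model. The scalar Kalman-filter sketch below follows the common textbook form of that baseline strong tracking filter; it is not the two-factor method proposed in the paper, and the smoothing constant `rho` is an assumed typical value:

```python
def stkf_step(x, p, z, f, h, q, r, rho=0.95, s_prev=None):
    """One scalar strong-tracking Kalman filter step (single-factor baseline).

    A fading factor lam >= 1 inflates the predicted covariance when the
    innovation is larger than the model predicts, keeping the gain high.
    """
    x_pred = f * x
    gamma = z - h * x_pred  # residual (innovation)
    # Exponentially smoothed estimate of the innovation covariance.
    s = gamma * gamma if s_prev is None else (rho * s_prev + gamma * gamma) / (1 + rho)
    # Classic suboptimal fading factor, reduced to the scalar case:
    # lam ~ (S - H Q H' - R) / (H F P F' H'), clipped below at 1.
    n = s - h * q * h - r
    m = h * f * p * f * h
    lam = max(1.0, n / m) if m > 0 else 1.0
    p_pred = lam * f * p * f + q          # inflated prediction covariance
    k = p_pred * h / (h * p_pred * h + r)  # Kalman gain
    x_new = x_pred + k * gamma
    p_new = (1 - k * h) * p_pred
    return x_new, p_new, s

# A wildly unexpected measurement (z = 10 against a prediction of 0)
# drives lam far above 1, so the update nearly jumps to the measurement.
x1, p1, _ = stkf_step(0.0, 1.0, 10.0, 1.0, 1.0, 0.1, 0.1)
print(x1)  # close to 10, because the fading factor discounts the model
```

The abstract's point is that this first-order construction wastes information in the residual; placing two related fading factors instead recovers second-order accuracy.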
Funding: Supported by the Sahand University of Technology under the Ph.D. program contract (No. 30/15971).
Abstract: This paper deals with a general variant of the reverse undesirable (obnoxious) center location problem on cycle graphs. Given a "selective" subset of the vertices of the underlying cycle graph as the locations of the existing customers, the task is to modify the edge lengths within a given budget such that the minimum distance between a predetermined undesirable facility location and the customer points is maximized under the perturbed edge lengths. We develop a combinatorial O(n log n) algorithm for the problem with continuous modifications. For the uniform-cost model, we solve the problem in linear time by an improved algorithm. Furthermore, exact solution methods are proposed for the problem with integer modifications.
Abstract: This paper is concerned with the problem of modifying the edge lengths of a weighted extended star network with n vertices, by integer amounts at minimum total cost and subject to given modification bounds, so that a set of p prespecified vertices becomes an undesirable p-median location on the perturbed network. We call this problem the integer inverse undesirable p-median location model. Exact combinatorial algorithms with O(p²n log n) and O(p²(n log n + n log n_max)) running times are proposed for solving the problem under the weighted rectilinear and weighted Chebyshev norms, respectively. Furthermore, it is shown that the problem under the weighted sum-type Hamming distance with uniform modification bounds can be solved in O(pn log n) time.