Graphics Processing Units (GPUs) are used to accelerate compute-intensive tasks such as neural networks, data analysis, and high-performance computing. Over the past decade, researchers have studied GPU architecture extensively and proposed a variety of theories and methods for characterizing the microarchitecture of various GPUs. In this study, the GPU serves as a co-processor working alongside the CPU in an embedded real-time system to handle computationally intensive tasks. Building on prior work, the study models the GPU architecture and extends it with a more detailed analysis of the SIMT mechanism and cache-miss behavior. To validate the proposed architecture model, experiments were performed with 10 GPU kernel tasks on an Nvidia GPU device. The results show that the error between the kernel execution time predicted by the model and the measured execution time ranges from a minimum of 3.80% to a maximum of 8.30%.
Zernike polynomials have been used for many years in fields such as optics, astronomy, and digital image analysis. Forming these polynomials requires computing Zernike moments, and one of the main issues in computing the moments is the factorial terms in their defining equation, which cause high time complexity. As a solution, several methods have been proposed in recent years to reduce the time complexity of these computations. The purpose of this research is to study several of the most popular recursive methods for fast Zernike computation and to compare them using a global theoretical measure: worst-case time complexity. We analyze the selected algorithms, calculate the worst-case time complexity of each, and then present and explain the results, concluding with a comparison of this criterion across the studied algorithms. We observe that although some algorithms, such as the Wee method and the modified Prata method, achieve smaller time complexities, other approaches do not differ significantly from the classical algorithm.
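To see where the factorial cost comes from, the classical (non-recursive) definition of the Zernike radial polynomial can be sketched as follows. This is the baseline the recursive methods improve on, not any of the surveyed algorithms themselves:

```python
from math import factorial

def radial_poly(n, m, rho):
    """Classical Zernike radial polynomial R_n^m(rho).

    Each term evaluates several factorials, which is exactly the cost
    that recursive (e.g. Prata-style) methods are designed to avoid.
    """
    m = abs(m)
    if (n - m) % 2:          # R_n^m vanishes when n - m is odd
        return 0.0
    total = 0.0
    for s in range((n - m) // 2 + 1):
        num = (-1) ** s * factorial(n - s)
        den = (factorial(s)
               * factorial((n + m) // 2 - s)
               * factorial((n - m) // 2 - s))
        total += num / den * rho ** (n - 2 * s)
    return total
```

For example, `radial_poly(2, 0, rho)` evaluates 2ρ² − 1, matching the closed form for R₂⁰.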
In this paper, the authors extend [1] and provide more details of how the brain may act like a quantum computer. In particular, positing the difference between voltages on two axons as the environment for ions undergoing spatial superposition, we argue that evolution in the presence of metric perturbations will differ from that in their absence. This differential state evolution then encodes the information being processed by the tract, through the interaction of the quantum state of the ions at the nodes with the "controlling" potential. Upon decoherence, which is equivalent to a measurement, the final spatial state of the ions is decided, and it is reset by the next impulse initiation time. Under synchronization, several tracts undergo such processes in synchrony, completing the picture of a quantum computing circuit. Under this model, based on the number of axons in the corpus callosum alone, we estimate that upwards of 50 million quantum states might be prepared and evolved every second in this white matter tract, far more processing than any present quantum computer can accomplish.
With the advancement of technology and the growth of user demands, gesture recognition plays a pivotal role in human-computer interaction. Among sensing devices, Time-of-Flight (ToF) sensors are widely applied due to their low cost. This paper explores the implementation of a hand posture recognition system using ToF sensors and residual neural networks. It first reviews typical applications of hand recognition, then designs a hand gesture recognition system using the VL53L5 ToF sensor. After data preprocessing, the constructed residual neural network is trained. Analysis of the recognition results shows that gesture recognition based on the residual neural network achieves an accuracy of 98.5% in a 5-class classification scenario. Finally, the paper discusses open issues and future research directions.
Computational time complexity analyses of evolutionary algorithms (EAs) have been performed since the mid-nineties. The first results concerned very simple algorithms, such as the (1+1)-EA, on toy problems. These efforts produced a deeper understanding of how EAs perform on different kinds of fitness landscapes, along with general mathematical tools that may be extended to the analysis of more complicated EAs on more realistic problems. Indeed, in recent years it has become possible to analyze the (1+1)-EA on combinatorial optimization problems with practical applications, and more realistic population-based EAs on structured toy problems. This paper surveys the results obtained in the last decade along these two research lines. The most common mathematical techniques are introduced, the basic ideas behind them are discussed, and their effective applications are highlighted. Previously open problems that have been solved are enumerated, as are those still awaiting a solution. New questions and problems that have arisen in the meantime are also considered.
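As a concrete reference point, the (1+1)-EA on the OneMax toy problem is the classic starting object of these runtime analyses; its expected optimization time is O(n log n). A minimal sketch:

```python
import random

def one_plus_one_ea(n, max_iters=100_000, seed=1):
    """Minimal (1+1)-EA on OneMax (maximize the number of one-bits).

    Each iteration flips every bit independently with probability 1/n
    and keeps the offspring if it is at least as fit (elitism).
    """
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fitness = sum(x)
    for _ in range(max_iters):
        if fitness == n:                       # optimum reached
            break
        y = [b ^ (rng.random() < 1.0 / n) for b in x]   # standard-bit mutation
        fy = sum(y)
        if fy >= fitness:                      # elitist acceptance
            x, fitness = y, fy
    return fitness
```

With n = 20 the optimum is typically found in a few hundred iterations, in line with the O(n log n) bound.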
We study the correlation between detrended fluctuation analysis (DFA) and Lempel-Ziv complexity (LZC) in nonlinear time series analysis. Typical dynamic systems, including a logistic map and a Duffing model, are investigated. Moreover, the influence of Gaussian random noise on both DFA and LZC is analyzed. The results show a high correlation between DFA and LZC, which quantify the non-stationarity and the nonlinearity of the time series, respectively. As the random component strengthens, the exponent α and the normalized complexity index C both show increasing trends. In addition, C is found to be more sensitive than α to fluctuations in the nonlinear time series. Finally, the correlation between DFA and LZC is applied to the extraction of vibration signals from a reciprocating compressor gas valve, and an effective fault diagnosis result is obtained.
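The LZC side of such an analysis is commonly computed with the classic Kaspar-Schuster scan over a symbolized (e.g. binarized) series. A generic sketch, assuming a sequence of at least two symbols (the paper's exact normalization of the index C may differ):

```python
def lz_complexity(s):
    """Lempel-Ziv complexity c(n) of a symbol sequence (len(s) >= 2),
    via the Kaspar-Schuster scan: count the number of distinct phrases
    encountered when parsing s left to right. A common normalized
    index is C = c(n) * log2(n) / n."""
    i, k, l = 0, 1, 1
    c, k_max = 1, 1
    n = len(s)
    while True:
        if s[i + k - 1] == s[l + k - 1]:
            k += 1
            if l + k > n:          # current phrase runs off the end
                c += 1
                break
        else:
            if k > k_max:
                k_max = k
            i += 1
            if i == l:             # no earlier start reproduces the phrase
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c
```

A constant sequence parses into 2 phrases, while a periodic "01" pattern parses into 3, reflecting the low complexity of regular signals.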
The prediction of intrinsically disordered proteins (IDPs) is an active research area in bioinformatics. Because experimental methods for identifying disordered regions of protein sequences are costly, predicting those regions computationally is increasingly important. In this paper, we develop a novel scheme that employs sequence complexity to calculate six features for each residue of a protein sequence: the Shannon entropy, the topological entropy, the sample entropy, and three amino acid preferences (Remark 465, Deleage/Roux, and B-factor (2STD)). In particular, we introduce the sample entropy, a measure of time series complexity, by mapping the amino acid sequence to a time series over 0-9; to our knowledge, sample entropy has not previously been used for predicting IDPs. In addition, the scheme uses a properly sized sliding window over each protein sequence, which greatly improves prediction performance. Finally, we apply seven machine learning algorithms with 10-fold cross-validation to the dataset R80 collected by Yang et al. and to the dataset DIS1556 from the Database of Protein Disorder (DisProt, https://www.disprot.org), which contains experimentally determined IDPs. The results show that k-Nearest Neighbor is the most suitable, with an overall prediction accuracy of 92%. Furthermore, our method uses only six features and hence has low computational complexity.
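The per-residue Shannon entropy feature over a sliding window can be sketched as follows. The window size of 11 is an illustrative assumption here, not the paper's tuned value:

```python
from collections import Counter
from math import log2

def shannon_entropy(window):
    """Shannon entropy (bits) of the symbol distribution in a window."""
    counts = Counter(window)
    total = len(window)
    return -sum((c / total) * log2(c / total) for c in counts.values())

def residue_entropies(seq, w=11):
    """Per-residue entropy over a centered sliding window of width w.

    Windows are truncated at the sequence ends; low entropy flags
    low-complexity regions often associated with disorder.
    """
    half = w // 2
    return [shannon_entropy(seq[max(0, i - half): i + half + 1])
            for i in range(len(seq))]
```

A homopolymeric stretch such as "AAAA" scores 0 bits, while a two-symbol window like "AC" scores 1 bit.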
In this paper, a real-time computation method for control problems in differential-algebraic systems is presented. The errors of the method are estimated, and the relation between the sampling stepsize and the controlled errors is analyzed. A stability analysis is carried out for a model problem, and the stability region is plotted, giving the range of sampling stepsizes for which the stability of the control process is guaranteed.
The NP-completeness of the topological spatial reasoning problem has been proved. Given the similarity of its uncertainty to topological spatial reasoning, the directional spatial reasoning problem should also be NP-complete. The proof of NP-completeness for directional spatial reasoning is based on two important transformations. After these transformations, a spatial configuration is constructed from directional constraints, and the NP-completeness of directional spatial reasoning is proved with the help of the consistency of the constraints in the configuration.
Based on Neumann series and the epsilon-algorithm, an efficient computation of the dynamic responses of systems with arbitrarily time-varying characteristics is investigated. By avoiding the computation of the inverses of the equivalent stiffness matrices in each time step, the proposed method requires less effort than a full Newmark analysis. The validity and applications of the proposed method are illustrated by a 4-DOF spring-mass system with periodically time-varying stiffness and a truss structure with arbitrarily time-varying lumped mass. Good approximate results are obtained by the proposed method compared with the responses from a full Newmark analysis.
A user-programmable computational/control platform that offers real-time hybrid simulation (RTHS) capabilities was developed at the University of Toronto. The platform was previously verified using several linear physical substructures. The study presented in this paper focuses on further validating the RTHS platform using a nonlinear viscoelastic-plastic damper with displacement-, frequency- and temperature-dependent properties. The validation study includes damper component characterization tests, as well as RTHS of a series of single-degree-of-freedom (SDOF) systems equipped with viscoelastic-plastic dampers representing different structural designs. The component characterization tests show that, for a wide range of excitation frequencies and friction slip loads, the tracking errors are comparable to those in RTHS of linear spring systems. The hybrid SDOF results are compared to an independently validated thermal-mechanical viscoelastic model to further validate the platform's ability to test nonlinear systems. After the validation, as an application study, nonlinear SDOF hybrid tests were used to develop performance spectra to predict the response of structures equipped with damping systems that are more challenging to model analytically. The use of the experimental performance spectra is illustrated by comparing the predicted response to the hybrid test response of 2DOF systems equipped with viscoelastic-plastic dampers.
In this paper, we propose Triangular Code (TC), a new class of fountain code with near-zero redundancy and linear encoding and decoding computational complexities of O(Lk log k), where k is the packet batch size and L is the packet data length. Unlike previous works, where the optimal performance of codes has been shown only under asymptotic assumptions, TC enjoys near-zero redundancy even in non-asymptotic settings with small to moderate numbers of packets. These features make TC suitable for practical implementation in battery-constrained devices in IoT, D2D and M2M network paradigms, achieving scalable reliability and minimizing latency thanks to its low decoding delay. TC is a non-linear code that is encoded using simple shift and XOR addition operations and decoded using a simple back-substitution algorithm. Although it is non-linear at the packet level, it remains linear when atomized at the bit level. We use this property to show that the back-substitution decoder of TC is equivalent to the Belief Propagation (BP) decoder of LT code; TC can therefore benefit from the rich literature on LT codes to design efficient codes for various applications. Despite the equivalence between the decoders of TC and LT code, we show that, compared to a state-of-the-art optimized LT code, TC reduces redundancy by 68%-99% for k up to 1024.
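The back-substitution step can be illustrated generically: solving a lower-triangular XOR system recovers source symbols one at a time, each peeled from the already-recovered ones. This is a hedged sketch of the general technique only, not the paper's exact TC construction, which also involves per-packet bit shifts:

```python
def back_substitute_gf2(L, y):
    """Solve L x = y over GF(2), with L lower-triangular and a unit
    diagonal (L[i][i] == 1). Each x[i] is obtained by XOR-ing out the
    contributions of symbols already recovered, mirroring how a
    triangular fountain code is peeled in decoding order."""
    n = len(y)
    x = [0] * n
    for i in range(n):
        acc = y[i]
        for j in range(i):
            if L[i][j]:
                acc ^= x[j]        # remove already-recovered symbols
        x[i] = acc
    return x
```

The loop touches each nonzero entry once, so the work is linear in the number of XOR edges, consistent with the low decoding delay claimed for triangular structures.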
CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is a good example of an authentication mechanism used to determine whether a user is human. It serves as a security measure against attacks by web bots (automated programs) during online transactions, and may be text-based or image-based depending on the project and the programmer. The usability, robustness and level of security each provides vary, calling for the development of improved systems. Hence, this paper studied and improved two different CAPTCHA systems, one text-based and one image-based, both implemented in JavaScript. Response time and solving time are the two metrics used to assess the effectiveness and efficiency of the two systems; including them addresses the usability and robustness shortfalls of existing systems. The developed system was tested with 200 students from the Federal College of Animal Health and Production Technology. Each participant's results for the two CAPTCHAs were extracted from the database and analyzed using SPSS. The results show that the text-based CAPTCHA has the lower average solving time (21.3333 s) with a 47.8% success rate, while the image-based CAPTCHA has the higher average solving time (23.5138 s) with a 52.8% success rate. The average response time for the image-based CAPTCHA (2.1855 s, with a 37.9% success rate) is lower than that of the text-based CAPTCHA (3.5561 s, with a 62.1% success rate). This indicates that the text-based CAPTCHA is more effective in terms of usability, while the image-based CAPTCHA is more efficient in terms of system responsiveness and is recommended for potential users.
Noise and time delay are inevitable in real-world networks. In this article, the framework of the master stability function is generalized to stochastic complex networks with time-delayed coupling. The focus is on the effects of noise, time delay, and their interactions on network synchronization. It is found that when the network has time-delayed coupling and noise diffuses through all state variables of the nodes, appropriately increasing the noise intensity can effectively improve network synchronizability; otherwise, noise can be either beneficial or harmful. For stochastic networks, large time delays lead to desynchronization. These findings provide valuable references for designing optimal complex networks in practical applications.
Two reduced-complexity decoding algorithms for unitary space-time codes based on a tree-structured constellation are presented. The original unitary space-time constellation is divided into several groups, each treated as the leaf-node set of a subtree. Choosing the unitary signals that represent each group as the roots of these subtrees generates a tree-structured constellation. The proposed tree search decoder decides which subtree the received signal belongs to by searching the set of subtree roots; the final decision is then made after a local search in the leaf-node set of the selected subtree. The adjacent-subtree joint decoder performs a joint search in the selected subtree and its "surrounding" subtrees, which improves the Bit Error Rate (BER) performance of the pure tree search method. Both proposed algorithms avoid an exhaustive search over the whole constellation, yielding lower complexity than Maximum Likelihood (ML) decoding. Simulation results are provided to demonstrate the feasibility of these new methods.
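The two-stage idea (search the roots, then only the chosen subtree's leaves) applies to any partitioned constellation. A hedged, generic sketch in which a scalar Euclidean distance stands in for the ML metric over unitary signals, and the grouping is assumed given:

```python
def tree_search_decode(groups, roots, r):
    """Two-stage decoding over a partitioned constellation.

    groups[i] lists the leaf signals under roots[i]. First pick the
    subtree whose root is closest to the received value r, then search
    only that subtree's leaves, visiting len(roots) + len(group) points
    instead of the whole constellation.
    """
    best = min(range(len(roots)), key=lambda i: abs(roots[i] - r))
    return min(groups[best], key=lambda leaf: abs(leaf - r))
```

If the constellation has G groups of size S, the search cost drops from G·S comparisons (exhaustive ML) to roughly G + S, at the price of occasionally committing to the wrong subtree, which is what the adjacent-subtree joint search mitigates.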
Despite the advances mobile devices have made, they remain resource-restricted computing devices, so there is a need for technologies that support them. One emerging technology that supports such resource-constrained devices is fog computing: end devices can offload tasks to nearby fog nodes to improve quality of service and experience. Since computation offloading is a multiobjective problem, many factors must be considered before taking offloading decisions, such as task length, remaining battery power, latency, and communication cost. This study uses the multiobjective grey wolf optimization (MOGWO) technique to optimize offloading decisions; to our knowledge, this is the first time MOGWO has been applied to computation offloading in fog computing. A gravity reference point method is also integrated with MOGWO to propose an enhanced multiobjective grey wolf optimization (E-MOGWO) algorithm. It finds the optimal offloading target by taking into account two parameters, energy consumption and computational time, in a heterogeneous, scalable, multi-fog, multi-user environment. The proposed E-MOGWO is compared with MOGWO, the non-dominated sorting genetic algorithm (NSGA-II) and accelerated particle swarm optimization (APSO). The results show that the proposed algorithm outperforms the existing approaches in energy consumption, computational time and the number of tasks successfully executed.
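At the core of any such multiobjective optimizer, MOGWO-style methods included, is a Pareto dominance test over the objectives, here energy consumption and computational time, both minimized. A minimal generic sketch (not the E-MOGWO algorithm itself):

```python
def dominates(a, b):
    """True iff solution a Pareto-dominates b (all objectives minimized):
    a is no worse everywhere and strictly better somewhere."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(solutions):
    """Non-dominated subset of candidate offloading decisions,
    each given as an (energy, time) tuple."""
    return [s for i, s in enumerate(solutions)
            if not any(dominates(o, s)
                       for j, o in enumerate(solutions) if j != i)]
```

A decision like (2, 6) is discarded when (2, 4) exists, since the latter uses the same energy but finishes sooner; the survivors form the trade-off front the optimizer maintains.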
Objective To evaluate the utility of computed tomography perfusion (CTP), both at admission and during the delayed cerebral ischemia time window (DCITW), in the detection of delayed cerebral ischemia (DCI), and to assess the change in CTP parameters from admission to the DCITW following aneurysmal subarachnoid hemorrhage. Methods Eighty patients underwent CTP at admission and during the DCITW. The mean and extreme values of all CTP parameters at admission and during the DCITW were compared between the DCI and non-DCI groups, and comparisons were also made between admission and the DCITW within each group. Qualitative color-coded perfusion maps were recorded. Finally, the relationship between CTP parameters and DCI was assessed by receiver operating characteristic (ROC) analyses. Results With the exception of cerebral blood volume (P=0.295 at admission; P=0.682 during the DCITW), the mean quantitative CTP parameters differed significantly between DCI and non-DCI patients both at admission and during the DCITW. In the DCI group, the extreme parameters differed significantly between admission and the DCITW, and the qualitative color-coded perfusion maps showed a deteriorating trend. For the detection of DCI, the mean transit time to the center of the impulse response function (Tmax) at admission and the mean time to start (TTS) during the DCITW had the largest areas under the curve (AUC), 0.698 and 0.789, respectively. Conclusion Whole-brain CTP can predict the occurrence of DCI at admission and diagnose DCI during the DCITW. The extreme quantitative parameters and the qualitative color-coded perfusion maps better reflect the perfusion changes of patients with DCI from admission to the DCITW.
In this work, a consistent and physically accurate implementation of the general framework of unified second-order time-accurate integrators, via the well-known GSSSS framework, in the Discrete Element Method is presented. The improved tangential displacement evaluation in the present implementation has been derived and implemented to preserve evaluation at the correct time level during time integration when calculating the algorithmic tangential displacement. Several numerical examples validate the proposed tangential displacement evaluation; this is in contrast to past practice, which attains only first-order time accuracy due to inconsistent time-level implementation, with different algorithms for the normal and tangential directions. Comparisons with the existing implementation demonstrate the superiority of the proposed one in terms of convergence rate, with improved numerical accuracy in time. Moreover, several schemes within the unified second-order GSSSS family of time integrators have been carried out based on the proposed implementation. All numerical results demonstrate that the existing state-of-the-art implementation reduces the time accuracy to first order, while the proposed implementation preserves the correct second-order accuracy.
In this paper, we define two versions of the Untrapped set (weak and strong) over a finite set of alternatives. These versions, considered as choice procedures, extend the notion of the Untrapped set to a more general setting, i.e., when alternatives are not necessarily comparable. We show that both coincide with the Top Cycle choice procedure for tournaments. For weak tournaments, the strong Untrapped set is equivalent to the Getcha choice procedure, and the weak Untrapped set is exactly the Untrapped set studied in the literature. We also present a polynomial-time algorithm for computing each set.
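For tournaments, the Top Cycle is the set of alternatives that reach every other alternative through a chain of pairwise defeats, which already gives a simple polynomial-time computation via transitive closure. A generic sketch (not necessarily the paper's own algorithm):

```python
def top_cycle(beats):
    """Top Cycle of a tournament. beats[i][j] is True iff alternative i
    beats j. Computes reachability with Warshall's algorithm, then keeps
    the alternatives that reach every other one. O(n^3) time."""
    n = len(beats)
    reach = [row[:] for row in beats]
    for k in range(n):                       # Warshall transitive closure
        for i in range(n):
            if reach[i][k]:
                for j in range(n):
                    if reach[k][j]:
                        reach[i][j] = True
    return {i for i in range(n)
            if all(reach[i][j] for j in range(n) if j != i)}
```

With a transitive tournament the Top Cycle collapses to the Condorcet winner, while a 3-cycle yields all three alternatives.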
Funding (time-complexity analysis of EAs): This work was supported by an EPSRC grant (No. EP/C520696/1).
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 51175316) and the Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20103108110006).
Abstract: We study the correlation between detrended fluctuation analysis (DFA) and the Lempel-Ziv complexity (LZC) in nonlinear time series analysis. Typical dynamic systems, including a logistic map and a Duffing model, are investigated. Moreover, the influence of Gaussian random noise on both the DFA and the LZC is analyzed. The results show a high correlation between the DFA and the LZC, which can quantify the non-stationarity and the nonlinearity of the time series, respectively. With the enhancement of the random component, the exponent α and the normalized complexity index C show increasing trends. In addition, C is found to be more sensitive than α to fluctuations in the nonlinear time series. Finally, the correlation between the DFA and the LZC is applied to the extraction of vibration signals from a reciprocating compressor gas valve, and an effective fault diagnosis result is obtained.
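The complexity index C above is derived from a Lempel-Ziv phrase count on a symbolized series. A minimal sketch, assuming a median-style 0/1 binarization of the logistic map and an LZ78-style parsing (the paper's exact binarization and normalization may differ):

```python
def logistic_series(n, r=4.0, x0=0.3):
    """Iterate the logistic map x -> r*x*(1-x)."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

def lz78_phrase_count(s):
    """LZ78-style parsing of a symbol string: each phrase is the
    shortest string not yet in the dictionary. More phrases per
    symbol means a more complex (less regular) series."""
    phrases, phrase, count = set(), "", 0
    for ch in s:
        phrase += ch
        if phrase not in phrases:
            phrases.add(phrase)
            count += 1
            phrase = ""  # start a new phrase
    if phrase:
        count += 1  # count the trailing partial phrase
    return count
```

A chaotic series parses into many more phrases than a constant one of the same length, which is the contrast the normalized index C captures.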
Abstract: The prediction of intrinsically disordered proteins (IDPs) is a hot research area in bioinformatics. Because experimental methods for evaluating disordered regions of protein sequences are costly, it is becoming increasingly important to predict those regions through computational methods. In this paper, we developed a novel scheme that employs sequence complexity to calculate six features for each residue of a protein sequence: the Shannon entropy, the topological entropy, the sample entropy, and three amino acid preferences (Remark 465, Deleage/Roux, and B-factor (2STD)). In particular, we introduced the sample entropy for calculating time series complexity by mapping the amino acid sequence to a time series over the digits 0-9. To our knowledge, the sample entropy has not previously been used for predicting IDPs and hence is applied here for the first time. In addition, the scheme used a properly sized sliding window over every protein sequence, which greatly improved the prediction performance. Finally, we used seven machine learning algorithms, tested with 10-fold cross-validation, on the dataset R80 collected by Yang et al. and on the dataset DIS1556 from the Database of Protein Disorder (DisProt, https://www.disprot.org), which contains experimentally determined IDPs. The results showed that k-Nearest Neighbor was the most appropriate, with an overall prediction accuracy of 92%. Furthermore, our method uses just six features and hence has a lower computational complexity.
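The per-residue Shannon-entropy feature over a sliding window can be sketched as follows; the window width and function names are illustrative assumptions, not values from the paper:

```python
from collections import Counter
from math import log2

def shannon_entropy(window):
    """Shannon entropy (bits per symbol) of one sequence window."""
    counts = Counter(window)
    n = len(window)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def sliding_entropy(seq, w=11):
    """One entropy feature per residue, from a centered window of width w
    (truncated at the sequence ends)."""
    half = w // 2
    feats = []
    for i in range(len(seq)):
        lo, hi = max(0, i - half), min(len(seq), i + half + 1)
        feats.append(shannon_entropy(seq[lo:hi]))
    return feats
```

Low-entropy (low-complexity) windows are a classic signal of disorder, which is why the paper combines this feature with the topological and sample entropies.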
Abstract: In this paper, a real-time computation method for control problems in differential-algebraic systems is presented. The errors of the method are estimated, and the relation between the sampling stepsize and the controlled errors is analyzed. A stability analysis is performed for a model problem, and the stability region is plotted, giving the range of sampling stepsizes for which the stability of the control process is guaranteed.
Abstract: The NP-completeness of the topological spatial reasoning problem has been proved. Given the similarity of its uncertainty to that of topological spatial reasoning, the directional spatial reasoning problem should also be NP-complete. The proof of NP-completeness for directional spatial reasoning is based on two important transformations. After these transformations, a spatial configuration is constructed from directional constraints, and the NP-completeness of directional spatial reasoning is proved with the help of the consistency of the constraints in the configuration.
Funding: Supported by the Foundation of the Science and Technology of Jilin Province (20070541), the 985-Automotive Engineering program of Jilin University, and the Innovation Fund for 985 Engineering of Jilin University (20080104).
Abstract: Based on the Neumann series and the epsilon-algorithm, an efficient computation of dynamic responses of systems with arbitrarily time-varying characteristics is investigated. By avoiding the calculation of the inverses of the equivalent stiffness matrices in each time step, the computational effort of the proposed method is reduced compared with a full analysis by the Newmark method. The validity and applications of the proposed method are illustrated by a 4-DOF spring-mass system with periodically time-varying stiffness properties and a truss structure with arbitrarily time-varying lumped mass. The results show that good approximations are obtained by the proposed method compared with the responses obtained by a full Newmark analysis.
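The core idea of avoiding a fresh inverse each time step can be sketched with a truncated Neumann series: (K + ΔK)^{-1} = Σ_{i≥0} (−K^{-1}ΔK)^i K^{-1}, valid when the spectral radius of K^{-1}ΔK is below 1. This sketch omits the epsilon-algorithm acceleration used in the paper, and all names are illustrative:

```python
def matmul(A, B):
    """Dense matrix product of nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matadd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def neumann_inverse(K_inv, dK, terms=8):
    """Approximate (K + dK)^{-1} from an already-known K^{-1} by summing
    the Neumann series, so no new matrix inversion is needed."""
    n = len(K_inv)
    # M = -K^{-1} dK, the series ratio matrix
    M = [[-sum(K_inv[i][k] * dK[k][j] for k in range(n))
          for j in range(n)] for i in range(n)]
    term = [row[:] for row in K_inv]  # i = 0 term: K^{-1}
    acc = [row[:] for row in K_inv]
    for _ in range(terms):
        term = matmul(M, term)        # next term: M^i K^{-1}
        acc = matadd(acc, term)
    return acc
```

For a small perturbation the series converges geometrically, which is what makes reusing K^{-1} across time steps cheaper than repeated factorization.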
Funding: NSERC Discovery under Grant 371627-2009 and NSERC RTI under Grant 374707-2009 EQPEQ programs.
Abstract: A user-programmable computational/control platform that offers real-time hybrid simulation (RTHS) capabilities was developed at the University of Toronto. The platform was previously verified using several linear physical substructures. The study presented in this paper focuses on further validating the RTHS platform using a nonlinear viscoelastic-plastic damper that has displacement-, frequency- and temperature-dependent properties. The validation study includes damper component characterization tests, as well as RTHS of a series of single-degree-of-freedom (SDOF) systems equipped with viscoelastic-plastic dampers that represent different structural designs. From the component characterization tests, it was found that, for a wide range of excitation frequencies and friction slip loads, the tracking errors are comparable to the errors in RTHS of linear spring systems. The hybrid SDOF results are compared to an independently validated thermal-mechanical viscoelastic model to further validate the platform's ability to test nonlinear systems. After the validation, as an application study, nonlinear SDOF hybrid tests were used to develop performance spectra to predict the response of structures equipped with damping systems that are more challenging to model analytically. The use of the experimental performance spectra is illustrated by comparing the predicted response to the hybrid test response of 2DOF systems equipped with viscoelastic-plastic dampers.
Abstract: In this paper, we propose Triangular Code (TC), a new class of fountain code with near-zero redundancy and linear encoding and decoding computational complexities of O(Lk log k), where k is the packet batch size and L is the packet data length. Different from previous works, where the optimal performance of codes has been shown under asymptotic assumptions, TC enjoys near-zero redundancy even under non-asymptotic settings for small-to-moderate numbers of packets. These features make TC suitable for practical implementation in battery-constrained devices in IoT, D2D and M2M network paradigms, to achieve scalable reliability and minimize latency thanks to its low decoding delay. TC is a non-linear code, which is encoded using simple shift and XOR addition operations, and decoded using the simple back-substitution algorithm. Although it is a non-linear code at the packet level, it remains a linear code when atomized at the bit level. We use this property to show that the back-substitution decoder of TC is equivalent to the Belief Propagation (BP) decoder of LT code. Therefore, TC can benefit from the rich, prolific literature published on LT code to design efficient codes for various applications. Despite the equivalency between the decoders of TC and LT code, we show that, compared to state-of-the-art optimized LT code, TC reduces the redundancy of LT code by 68%-99% for k reaching 1024.
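As a loose illustration of back-substitution decoding of XOR-coded packets, here is a toy lower-triangular XOR code. It omits TC's shift operations and its actual code construction entirely, so it only sketches the decoding principle (solving a triangular system packet by packet); all names are illustrative and equal-length packets are assumed:

```python
def xor_bytes(a, b):
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode_triangular(packets):
    """Toy lower-triangular XOR code: coded packet i is the XOR of
    source packets 0..i (illustrative only -- real TC also shifts)."""
    coded, acc = [], bytes(len(packets[0]))
    for p in packets:
        acc = xor_bytes(acc, p)
        coded.append(acc)
    return coded

def decode_triangular(coded):
    """Back-substitution on the triangular system:
    source i = coded[i] XOR coded[i-1]."""
    out, prev = [], bytes(len(coded[0]))
    for c in coded:
        out.append(xor_bytes(c, prev))
        prev = c
    return out
```

Because the system is triangular, each unknown packet is recovered with a single XOR once the previous one is known, which is the low-delay property the abstract highlights.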
Abstract: CAPTCHA is an acronym that stands for Completely Automated Public Turing test to tell Computers and Humans Apart. It is a good example of an authentication system that can be used to determine the true identity of a user, serving as a security measure to prevent attacks by web bots (automatic programs) during online transactions. A CAPTCHA can be text-based or image-based, depending on the project and the programmer. The usability, robustness and level of security provided by each vary, calling for the development of improved systems. Hence, this paper studied and improved two different CAPTCHA systems (text-based and image-based), both designed using JavaScript. Response time and solving time are the two metrics used to determine the effectiveness and efficiency of the two systems; their inclusion addresses the shortfalls in usability and robustness of the existing systems. The developed system was tested on 200 students from the Federal College of Animal Health and Production Technology. The results of each participant, for the two CAPTCHAs, were extracted from the database and analyzed using SPSS. The results show that the text-based CAPTCHA has the lower average solving time (21.3333 s) with a 47.8% success rate, while the image-based CAPTCHA has the higher average solving time (23.5138 s) with a 52.8% success rate. The average response time for the image-based CAPTCHA was 2.1855 s with a 37.9% success rate, lower than the text-based CAPTCHA's response time (3.5561 s) with a 62.1% success rate. This indicates that the text-based CAPTCHA is more effective in terms of usability, while the image-based CAPTCHA is more efficient in terms of system responsiveness and is recommended for potential users.
Funding: Project supported in part by the National Natural Science Foundation of China (Grant No. 61973064), the Natural Science Foundation of Hebei Province of China (Grant Nos. F2019501126 and F2022501024), the Natural Science Foundation of Liaoning Province, China (Grant No. 2020-KF11-03), and the Fund from the Hong Kong Research Grants Council (Grant No. CityU11206320).
Abstract: Noise and time delay are inevitable in real-world networks. In this article, the framework of the master stability function is generalized to stochastic complex networks with time-delayed coupling. The focus is on the effects of noise, time delay, and their interactions on network synchronization. It is found that when there is time-delayed coupling in the network and noise diffuses through all state variables of the nodes, appropriately increasing the noise intensity can effectively improve the network synchronizability; otherwise, noise can be either beneficial or harmful. For stochastic networks, large time delays lead to desynchronization. These findings provide valuable references for designing optimal complex networks in practical applications.
Funding: Supported by the National Natural Science Foundation of China (No. 60572148).
Abstract: Two reduced-complexity decoding algorithms for unitary space-time codes based on a tree-structured constellation are presented. In this letter, the original unitary space-time constellation is divided into several groups, each treated as the leaf-node set of a subtree. Choosing the unitary signals that represent each group as the roots of these subtrees generates a tree-structured constellation. The proposed tree search decoder decides which subtree the received signal belongs to by searching the set of subtree roots; the final decision is made after a local search in the leaf-node set of the selected subtree. The adjacent-subtree joint decoder performs a joint search in the selected subtree and its "surrounding" subtrees, which improves the Bit Error Rate (BER) performance over the pure tree search method. Exhaustive search over the whole constellation is avoided in the proposed decoding algorithms, yielding a lower complexity than Maximum Likelihood (ML) decoding. Simulation results are also provided to demonstrate the feasibility of these new methods.
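The two-stage search can be sketched as follows, using Euclidean distance on a toy complex constellation as a stand-in for the actual unitary space-time decoding metric; the grouping and all names are illustrative assumptions:

```python
def nearest(point, candidates):
    """Index of the candidate closest to point (Euclidean, complex plane)."""
    return min(range(len(candidates)), key=lambda i: abs(candidates[i] - point))

def tree_search_decode(received, roots, groups):
    """Two-stage tree search: first pick the subtree whose root
    representative is nearest to the received signal, then search
    only that subtree's leaf nodes."""
    g = nearest(received, roots)          # coarse search over subtree roots
    leaf = nearest(received, groups[g])   # local search inside the subtree
    return groups[g][leaf]
```

With G groups of size N/G, the search cost drops from N comparisons to roughly G + N/G, which is the complexity saving over exhaustive ML search the abstract describes.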
Abstract: Despite the advances mobile devices have made, they remain resource-restricted computing devices, so there is a need for a technology that supports them. An emerging technology that supports such resource-constrained devices is fog computing. End devices can offload tasks to nearby fog nodes to improve the quality of service and experience. Since computation offloading is a multiobjective problem, many factors must be considered before taking offloading decisions, such as task length, remaining battery power, latency, and communication cost. This study uses the multiobjective grey wolf optimization (MOGWO) technique to optimize offloading decisions; this is the first time MOGWO has been applied to computation offloading in fog computing. A gravity reference point method is also integrated with MOGWO to propose an enhanced multiobjective grey wolf optimization (E-MOGWO) algorithm. It finds the optimal offloading target by taking into account two parameters, i.e., energy consumption and computational time, in a heterogeneous, scalable, multi-fog, multi-user environment. The proposed E-MOGWO is compared with MOGWO, the non-dominated sorting genetic algorithm (NSGA-II) and accelerated particle swarm optimization (APSO). The results show that the proposed algorithm achieves better results than existing approaches regarding energy consumption, computational time and the number of tasks successfully executed.
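At the core of any multiobjective optimizer such as E-MOGWO is Pareto dominance over the objectives, here (energy consumption, computational time). A minimal sketch of dominance filtering for the archive of candidate offloading decisions (names illustrative, not from the paper):

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (both objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Non-dominated subset of (energy, time) tuples: the candidates
    a multiobjective optimizer would keep in its archive."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o != s)]
```

MOGWO-style algorithms maintain exactly such an archive and steer the wolf population toward it; the gravity reference point in E-MOGWO is one way of picking a single compromise solution from the front.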
Funding: Supported by the National Natural Science Foundation of China, Research on Brain Magnetic Resonance Image Segmentation Based on Particle Computation (No. 61672386).
Abstract: Objective To evaluate the utility of computed tomography perfusion (CTP), both at admission and during the delayed cerebral ischemia time window (DCITW), in the detection of delayed cerebral ischemia (DCI), and the change in CTP parameters from admission to the DCITW following aneurysmal subarachnoid hemorrhage. Methods Eighty patients underwent CTP at admission and during the DCITW. The mean and extreme values of all CTP parameters at admission and during the DCITW were compared between the DCI group and the non-DCI group, and comparisons were also made between admission and the DCITW within each group. Qualitative color-coded perfusion maps were recorded. Finally, the relationship between CTP parameters and DCI was assessed by receiver operating characteristic (ROC) analyses. Results With the exception of cerebral blood volume (P=0.295, admission; P=0.682, DCITW), there were significant differences in the mean quantitative CTP parameters between DCI and non-DCI patients, both at admission and during the DCITW. In the DCI group, the extreme parameters were significantly different between admission and the DCITW. The DCI group also showed a deteriorative trend in the qualitative color-coded perfusion maps. For the detection of DCI, the mean time to the center of the impulse response function (Tmax) at admission and the mean time to start (TTS) during the DCITW had the largest areas under the curve (AUC): 0.698 and 0.789, respectively. Conclusion Whole-brain CTP can predict the occurrence of DCI at admission and diagnose DCI during the DCITW. The extreme quantitative parameters and qualitative color-coded perfusion maps better reflect the perfusion changes of patients with DCI from admission to the DCITW.
Abstract: In this work, a consistent and physically accurate implementation of the general framework of unified second-order time accurate integrators via the well-known GSSSS framework in the Discrete Element Method is presented. The improved tangential displacement evaluation in the present implementation has been derived and implemented to preserve the consistency of the correct time-level evaluation during the time integration process when calculating the algorithmic tangential displacement. Several numerical examples have been used to validate the proposed tangential displacement evaluation; this is in contrast to past practices, which only attain first-order time accuracy due to inconsistent time-level implementation with different algorithms for the normal and tangential directions. Comparisons with the existing implementation demonstrate the superiority of the proposed one in terms of convergence rate, with improved numerical accuracy in time. Moreover, several schemes via the unified second-order time integrators within the GSSSS family have been carried out based on the proposed implementation. All the numerical results demonstrate that the existing state-of-the-art implementation reduces the time accuracy to first order, while the proposed implementation preserves the correct second-order accuracy.
Abstract: In this paper, we define two versions of the Untrapped set (weak and strong) over a finite set of alternatives. These versions, considered as choice procedures, extend the notion of the Untrapped set to a more general case (i.e., when alternatives are not necessarily comparable). We show that both coincide with the Top cycle choice procedure for tournaments. For weak tournaments, the strong Untrapped set is equivalent to the Getcha choice procedure, and the weak Untrapped set is exactly the Untrapped set studied in the literature. We also present a polynomial-time algorithm for computing each set.
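On tournaments, where both Untrapped variants coincide with the Top cycle, that set can be computed in polynomial time via transitive closure: it is the set of alternatives that reach every other alternative along "beats" paths. A sketch under that characterization (the paper's own algorithm for the general case may differ; names are illustrative):

```python
def top_cycle(alts, beats):
    """Top cycle of a tournament: the alternatives that reach every
    other alternative via beating paths. Uses a Floyd-Warshall-style
    transitive closure, O(n^3) in the number of alternatives."""
    reach = {a: {b: beats(a, b) for b in alts} for a in alts}
    for k in alts:          # closure: i reaches j if i reaches k and k reaches j
        for i in alts:
            for j in alts:
                if reach[i][k] and reach[k][j]:
                    reach[i][j] = True
    return {a for a in alts if all(reach[a][b] for b in alts if b != a)}
```

For example, with a 3-cycle a→b→c→a on top and an alternative d beaten by all three, the top cycle is {a, b, c}.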