A two-level Bregmanized method with graph regularized sparse coding (TBGSC) is presented for image interpolation. The outer-level Bregman iterative procedure enforces the observation data constraints, while the inner-level Bregmanized method is devoted to dictionary updating and sparse representation of small overlapping image patches. The introduced graph regularized sparse coding constraint captures local image features effectively, and consequently enables accurate reconstruction from highly undersampled partial data. Furthermore, the modified sparse coding and simple dictionary updating applied in the inner minimization make the proposed algorithm converge within a relatively small number of iterations. Experimental results demonstrate that the proposed algorithm can effectively reconstruct images and outperforms current state-of-the-art approaches in terms of visual comparisons and quantitative measures.
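Inner-level sparse coding in Bregmanized schemes of this kind typically reduces to iterated soft-thresholding. The sketch below is a generic ISTA-style inner step, not the authors' exact TBGSC update; the dictionary `D`, signal `y` and regularization weight `lam` are hypothetical inputs.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the l1 norm, the workhorse of Bregmanized sparse coding."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sparse_code(D, y, lam, n_iter=200):
    """ISTA-style sparse coding of signal y over dictionary D.

    Illustrative stand-in for an inner-level sparse-representation update;
    minimizes 0.5*||D a - y||^2 + lam*||a||_1 by gradient steps plus shrinkage.
    """
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - y)           # gradient of the data-fit term
        a = soft_threshold(a - grad / L, lam / L)
    return a
```

With an orthonormal dictionary the iteration converges in one step to the soft-thresholded signal, which is a convenient sanity check.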
[Objective] To investigate the effects of the major mapping methods for DNA sequences on the accuracy of protein coding region prediction, and to identify the most effective mapping methods. [Method] Taking Approximate Correlation (AC) as the overall measure of prediction accuracy at the nucleotide level, a prediction algorithm based on a windowed narrow pass-band filter (WNPBF) was applied to study the effects of different mapping methods on prediction accuracy. [Result] On the DNA data sets ALLSEQ and HMR195, the Voss and Z-Curve methods proved to be more effective mapping methods than the paired numeric (PN), Electron-Ion Interaction Potential (EIIP) and complex number methods. [Conclusion] This study lays the foundation for verifying the effectiveness of new mapping methods by the predicted AC value, and is meaningful for revealing DNA structure using bioinformatics methods.
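The Voss mapping named above turns a DNA string into four binary indicator sequences, one per nucleotide, so that signal-processing filters such as the WNPBF can operate on them. A minimal sketch:

```python
def voss_mapping(seq):
    """Voss mapping: one binary indicator sequence per nucleotide A, C, G, T.

    At each position exactly one of the four indicator sequences is 1,
    which is what makes the representation suitable for band-pass filtering.
    """
    seq = seq.upper()
    return {base: [1 if ch == base else 0 for ch in seq] for base in "ACGT"}
```

For example, `voss_mapping("ACGT")["A"]` is `[1, 0, 0, 0]`.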
After a code table has been established by means of node association information from the signal flow graph, the totally coded method (TCM) is applied purely in the domain of code operations, without any figure-searching algorithm. The code series (CS) have a holo-information nature, so that both the content and the sign of each gain term can be determined via the coded method. The principle of this method is simple, and it is well suited to computer programming. The capability of computer-aided analysis for switched current networks (SIN) can thus be enhanced.
Imaging speed has been a bottleneck of magnetic resonance imaging (MRI) since its advent. To alleviate this difficulty, a novel graph regularized sparse coding method for highly undersampled MRI reconstruction (GSCMRI) was proposed. Graph regularized sparse coding has shown potential for maintaining the geometrical information of the data. In this study, it was incorporated into a two-level Bregman iterative procedure that updates the data term in the outer level and learns the dictionary in the inner level. Moreover, the graph regularized sparse coding and simple dictionary updating stages derived from the inner minimization make the proposed algorithm converge in a few iterations while achieving superior reconstruction performance. Extensive experimental results demonstrate that GSCMRI can consistently recover both real-valued MR images and complex-valued MR data efficiently, and outperforms current state-of-the-art approaches in terms of higher PSNR and lower HFEN values.
This paper analyses the common problems of material code standardization in the implementation of MRP Ⅱ, and puts forward basic ideas and methods for solving these problems, which have some reference value for the popularization of application o
A pseudo-random coding side-lobe suppression method based on the CLEAN algorithm is introduced. The CLEAN algorithm processes the pulse compression result of a pseudo-random code and estimates a target's distance by interpolation, so that an ideal pulse compression result for the target can be obtained; the adjusted ideal side-lobes are then subtracted from the actual pulse compression result. This achieves remarkable side-lobe suppression for large targets and lets adjacent small targets appear. Computer simulations in MATLAB analyze the side-lobe suppression effect in ideal and noisy environments. It is shown that this method can effectively solve the problem of excessively high side-lobes of pseudo-random codes, and can enhance a radar's multi-target detection ability.
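The CLEAN idea described above can be sketched generically: locate the strongest peak of the pulse-compression output, subtract a shifted and scaled copy of the ideal point-target response, and repeat. This is a minimal illustration under simplifying assumptions (integer target positions, a peak-normalized response), not the paper's interpolation-refined implementation:

```python
import numpy as np

def clean(y, psf, n_targets, center=None):
    """Minimal CLEAN sketch for side-lobe suppression.

    y:   pulse-compression output (1-D array)
    psf: ideal point-target response, assumed peak-normalized with its
         maximum at index `center` (defaults to the argmax of |psf|)
    Returns the detected (position, amplitude) pairs and the residual.
    """
    y = y.astype(float).copy()
    if center is None:
        center = int(np.argmax(np.abs(psf)))
    found = []
    for _ in range(n_targets):
        k = int(np.argmax(np.abs(y)))      # strongest remaining peak
        amp = y[k]
        found.append((k, amp))
        # subtract the ideal response aligned to the detected peak
        for i in range(len(y)):
            j = center + (i - k)
            if 0 <= j < len(psf):
                y[i] -= amp * psf[j]
    return found, y
```

After the large target's response (main lobe and side-lobes) is removed, a small neighboring target that was buried under the side-lobes becomes the next peak.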
In this paper, a two-level Bregman method with graph regularized sparse coding is presented for highly undersampled magnetic resonance image reconstruction. The graph regularized sparse coding is incorporated into a two-level Bregman iterative procedure that enforces the sampled data constraints in the outer level and updates the dictionary and sparse representation in the inner level. Graph regularized sparse coding and simple dictionary updating applied in the inner minimization make the proposed algorithm converge within a relatively small number of iterations. Experimental results demonstrate that the proposed algorithm can consistently reconstruct both simulated MR images and real MR data efficiently, and outperforms current state-of-the-art approaches in terms of visual comparisons and quantitative measures.
Code defects can lead to software vulnerabilities and even produce vulnerability risks. Existing research shows that code detection technology based on text analysis can, to some extent, judge whether object-oriented code files are defective. However, such detection techniques rely mainly on text features and have weak cross-program detection capability. In contrast to the uncertainty of code text caused by developer personalization, a programming language has a strict logical specification that reflects both the rules and requirements of the language itself and the developer's underlying way of thinking. This article replaces text analysis with programming logic modeling, breaking through the limitation of code text analysis that relies solely on the probability of sentence/word occurrence in the code. It proposes an object-oriented programming logic construction method based on method constraint relationships, selects features using hypothesis testing, and constructs a support vector machine classifier to detect defective class files, reducing the impact of personalized programming on the detection method. In the experiments, representative Android applications were selected to test and compare the proposed method. In terms of code defect detection accuracy, under cross-validation both the proposed method and the existing leading methods reach an average of more than 90%. For cross-program detection, the proposed method is superior to the other two leading methods in accuracy, recall and F1 value.
Based on a mathematical model describing the curing process of composites constructed from continuous fiber-reinforced, thermosetting resin matrix prepreg materials, together with the consolidation of the composites, a solution method for the model is developed and a computer code is written. For flat-plate composites cured by a specified cure cycle, the code provides the variation of the temperature distribution, the cure reaction process in the resin, the resin flow and fiber stress inside the composite, the void variation and the residual stress distribution.
In this paper, we present a theoretical codebook design method for a VQ-based fast face recognition algorithm to improve recognition accuracy. Based on a systematic analysis and classification of code patterns, we first theoretically create a systematically organized codebook. Combined with another codebook created by Kohonen's Self-Organizing Maps (SOM) method, an optimized codebook consisting of 2×2 codevectors for facial images is generated. Experimental results show that face recognition using such a codebook is more efficient than with the codebook of 4×4 codevectors used in the conventional algorithm. The highest average recognition rate of 98.6% is obtained on 40 persons' 400 images from the publicly available face database of AT&T Laboratories Cambridge, which contains variations in lighting, pose and expression. A table look-up (TLU) method is also proposed to speed up recognition. By applying this method in the quantization step, the total recognition processing time reaches only 28 ms, enabling real-time face recognition.
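The quantization step central to such VQ-based recognition maps each image block to its nearest codevector; the histogram of the resulting indices then serves as the face feature. A minimal sketch with hypothetical block and codebook shapes (flattened blocks as rows):

```python
import numpy as np

def quantize(blocks, codebook):
    """Assign each flattened image block to its nearest codevector.

    blocks:   (n_blocks, d) array of flattened patches
    codebook: (n_codes, d) array of codevectors
    Returns the index of the nearest codevector (Euclidean distance) per block.
    """
    # pairwise squared distances via broadcasting: (n_blocks, n_codes)
    d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return np.argmin(d, axis=1)
```

A table look-up speedup, as in the paper, would precompute this mapping for every possible quantized block value instead of computing distances at recognition time.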
The evaluation of the minimum distance of linear block codes remains an open problem in coding theory, and it is not easy to determine its true value by classical methods; for this reason the problem has been attacked in the literature with heuristic techniques such as genetic algorithms and local search algorithms. In this paper we propose two approaches to attack the hardness of this problem. The first approach is based on genetic algorithms and yields good results compared with another work also based on genetic algorithms. The second approach is based on a new randomized algorithm, which we call the 'Multiple Impulse Method (MIM)', whose principle is to search for codewords locally around the all-zero codeword perturbed by a minimum level of noise, anticipating that the resulting nearest nonzero codewords will most likely contain the minimum Hamming-weight codeword, whose Hamming weight equals the minimum distance of the linear code.
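For intuition, the quantity these heuristics approximate can be computed exactly for small codes by enumerating all 2^k − 1 nonzero messages; the exponential cost of this baseline is precisely what motivates genetic algorithms and MIM. A sketch (not an algorithm from the paper):

```python
import itertools
import numpy as np

def min_distance(G):
    """Exact minimum distance of a binary linear code given its k x n generator
    matrix G, by exhaustive enumeration of all nonzero messages.

    For linear codes the minimum distance equals the minimum Hamming weight
    over nonzero codewords. Feasible only for small k (cost ~ 2^k).
    """
    k, n = G.shape
    best = n
    for m in itertools.product([0, 1], repeat=k):
        if any(m):
            w = int((np.array(m) @ G % 2).sum())   # Hamming weight of the codeword
            best = min(best, w)
    return best
```

For the [7,4] Hamming code this returns 3, its well-known minimum distance.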
Symbolic analysis has many applications in the design of analog circuits. Existing approaches rely on two forms of symbolic-expression representation: expanded sum-of-product form and arbitrarily nested form. Expanded form suffers from the problem that the number of product terms grows exponentially with the size of a circuit. Nested form is neither canonical nor amenable to symbolic manipulation. In this paper, we present a new approach to exact and canonical symbolic analysis by exploiting the sparsity and sharing of product terms. This algorithm, called the totally coded method (TCM), consists of representing the symbolic determinant of a circuit matrix by code series and performing symbolic analysis by code manipulation. We describe an efficient code-ordering heuristic and prove that it is optimal for ladder-structured circuits. For practical analog circuits, TCM not only retains all the advantages of the determinant decision diagram (DDD) algorithm but is simpler and more efficient than the DDD method.
A Gray code based gradient-free optimization (GCO) algorithm is proposed to update the parameters of parameterized quantum circuits (PQCs) in this work. Each parameter of a PQC is encoded as a binary string, called a gene, and a genetic method is adopted to select the offspring. The individuals in the offspring are decoded as Gray codes, so as to maintain Hamming distance, and are then evaluated to obtain the one with the lowest cost value in each iteration. The algorithm iterates over all parameters one by one until the cost value satisfies the stopping condition or the maximum number of iterations is reached. The GCO algorithm is demonstrated on classification tasks for the Iris and MNIST datasets, and its performance is compared with that of the Bayesian optimization algorithm and a binary code based optimization algorithm. The simulation results show that the GCO algorithm steadily reaches high accuracies on quantum classification tasks. Importantly, the GCO algorithm performs robustly in noisy environments.
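The binary-reflected Gray code used to decode offspring guarantees that adjacent integers differ in exactly one bit, so small genetic mutations correspond to small parameter moves. A minimal sketch of the conversion in both directions:

```python
def to_gray(n):
    """Binary-reflected Gray code of a non-negative integer n."""
    return n ^ (n >> 1)

def from_gray(g):
    """Invert the Gray code by cumulative XOR of progressively shifted bits."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```

For example, 5 (binary 101) encodes to 7 (binary 111), and consecutive integers always map to codes at Hamming distance 1.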
A new method of constructing regular low-density parity-check (LDPC) codes is proposed, and the novel class of LDPC codes is applied in a coded orthogonal frequency division multiplexing (OFDM) system. The method extends the class of LDPC codes that can be constructed from shifted identity matrices: short cycles in the Tanner graph are avoided by a simple inequality in the construction of the shifted identity matrices, which makes the girth of the Tanner graph 8. Because of the quasi-cyclic structure and the inherent block configuration of the parity-check matrices, the encoders and decoders are practically feasible; they are linear-time encodable and decodable. The proposed LDPC codes have various code rates, ranging from low to high. They perform excellently under iterative decoding and demonstrate better performance than other regular LDPC codes in OFDM systems.
This paper introduces the two-dimensional PIC (particle-in-cell) method, which is commonly used to compute space-charge effects in particle dynamics simulations. Two kinds of dynamics simulation codes are compared: one taking time as the independent variable (t-code) and one taking longitudinal position as the independent variable (z-code). For the structural parameters of the radio-frequency quadrupole (RFQ) accelerator in the clean nuclear energy project of the National Key Basic Research Development Program, the transmission efficiencies simulated by t-code and z-code are given for two cases: single-beam acceleration, and simultaneous acceleration of positive and negative ion beams. The results show that when the bunch phase width or the energy spread is large, z-code introduces relatively large errors in computing the space-charge effect; t-code should therefore be used for dynamics simulation to obtain more accurate results.
For at least the past five decades, structural synthesis has been a main means of finding better mechanisms with some predefined function. In structural synthesis, isomorphism identification remains a poorly solved problem, and solving it is very significant for the design of new mechanisms. According to the given degree of freedom (DOF) and link connection property of planar closed-chain mechanisms, the vertex assortment is obtained. To solve the isomorphism problem, a method of adding sub-chains is proposed, with detailed steps and algorithms for the synthesizing process. Employing this method, the identification code and formation code of every topological structure are obtained, so that many isomorphic structures can be eliminated in time during structural synthesis by comparing those codes among different topological graphs, improving synthesizing efficiency and accuracy; an approach for eliminating rigid sub-chains during and after the synthesizing process is also presented. Examples are given of how to add sub-chains, how to detect simple rigid sub-chains, and how to obtain identification codes and formation codes. Using the adding sub-chain method, the relevant information for some common topological graphs is given in tabular form. The comparison results agree with many publications, confirming the correctness of the adding sub-chain method. This method greatly improves synthesizing efficiency and accuracy, and has good potential for application.
This study presents a calibration process for three-dimensional particle flow code (PFC3D) simulation of intact and fissured granite samples. First, the laboratory stress-strain response from triaxial testing of intact and fissured granite samples is recalled. Then, PFC3D is introduced, with focus on bonded particle models (BPM). After that, we review previous studies in which intact rock is simulated by means of flat-joint approaches, and how improved accuracy was gained with the help of parametric studies. Models of the pre-fissured rock specimens were then generated, including modeled fissures in the form of "smooth joint" type contacts. Finally, triaxial testing simulations of 1 t 2 and 2 t 3 jointed rock specimens were performed. Results show that both the elastic behavior and the peak strength levels are closely matched, without any additional fine-tuning of micro-mechanical parameters. Concerning the post-failure behavior, the models reproduce the trends of decreasing dilation with increasing confinement and plasticity. However, the simulated dilation values are larger than those observed in practice. This is attributed to the difficulty of modeling some phenomena of fissured rock behavior, such as corner crushing of rock pieces with dust production, and interactions between newly formed shear bands or axial splitting cracks and pre-existing joints.
Many researchers have developed new calculation methods for seismic slope stability problems, but the conventional pseudo-static method is still widely used in engineering design due to its simplicity. Based on the Technical Code for Building Slope Engineering (GB 50330-2013) of China and the Guidelines for Evaluating and Mitigating Seismic Hazards in California (SP117), a comparative study of the pseudo-static method was performed. The results indicate that the largest difference between these two design codes lies in the determination of the seismic equivalence reduction factor. The GB 50330-2013 code specifies a single value of 0.25 for this factor, whereas SP117 considers numerous factors, such as magnitude and distance, in determining it. Two case studies show that the slope stability statuses evaluated by SP117 agree with those evaluated by seismic time-history stability analysis and Newmark displacement analysis. The factors of safety evaluated by SP117 can be used in practice for safe design; however, the factors of safety evaluated by GB 50330-2013 are risky for seismic slope design.
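For reference, the pseudo-static method replaces the earthquake action with a static horizontal force kh·W applied to the sliding mass. The following is a textbook planar-slide factor-of-safety sketch, purely illustrative; it reproduces neither GB 50330-2013 nor SP117 provisions, and all inputs are hypothetical:

```python
import math

def pseudo_static_fs(W, alpha_deg, c, L, phi_deg, kh):
    """Pseudo-static factor of safety for a planar slide (textbook form).

    W:         weight of the sliding mass
    alpha_deg: inclination of the slip plane (degrees)
    c, L:      cohesion and length of the slip plane
    phi_deg:   friction angle (degrees)
    kh:        horizontal seismic coefficient (the reduced design value)

    The horizontal force kh*W reduces the normal force on the plane
    and adds to the driving force along it.
    """
    a = math.radians(alpha_deg)
    phi = math.radians(phi_deg)
    resisting = c * L + (W * math.cos(a) - kh * W * math.sin(a)) * math.tan(phi)
    driving = W * math.sin(a) + kh * W * math.cos(a)
    return resisting / driving
```

With kh = 0 and c = 0 this reduces to the static frictional case FS = tan(phi)/tan(alpha), and any positive kh lowers the factor of safety, which is the behavior the reduction factor studies above are concerned with.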
A new method for constructing Quasi-Cyclic (QC) Low-Density Parity-Check (LDPC) codes based on Euclidean Geometry (EG) is presented. The proposed method results in a class of QC-LDPC codes with girth of at least 6, and the designed codes perform very close to the Shannon limit under iterative decoding. Simulations show that the designed QC-LDPC codes have almost the same performance as existing EG-LDPC codes.
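QC-LDPC parity-check matrices of this family are assembled from circulant permutation matrices, i.e. identity matrices with cyclically shifted columns. The sketch below builds such a matrix from an arbitrary grid of shift values; the shift grid is hypothetical and does not reproduce the paper's EG-based shift design:

```python
import numpy as np

def circulant(p, shift):
    """p x p circulant permutation matrix: identity with columns cyclically shifted."""
    return np.roll(np.eye(p, dtype=int), shift, axis=1)

def qc_ldpc_H(shifts, p):
    """Assemble a QC-LDPC parity-check matrix from a grid of circulant shifts.

    shifts: list of lists of shift values; each entry becomes a p x p
            circulant permutation block, so H has dimensions
            (len(shifts)*p) x (len(shifts[0])*p).
    """
    return np.block([[circulant(p, s) for s in row] for row in shifts])
```

Each block contributes exactly one 1 per row and column, so a J x K grid of shifts yields a regular (J, K) parity-check matrix; girth conditions are then enforced by constraining the shift values.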
Funding for the TBGSC image interpolation paper: National Natural Science Foundation of China (No. 61362001, 61102043, 61262084, 20132BAB211030, 20122BAB211015); Basic Research Program of Shenzhen (No. JC201104220219A).
Funding for the DNA coding region prediction paper: Ningxia Natural Science Foundation (NZ1024); Scientific Research Project of Ningxia Universities (201027).
Funding for the GSCMRI paper: National Natural Science Foundation of China (No. 61362001, 61102043, 61262084); Technology Foundations of the Department of Education of Jiangxi Province (No. GJJ12006, GJJ14196); Natural Science Foundations of Jiangxi Province (No. 20132BAB211030, 20122BAB211015).
Funding for the two-level Bregman MRI reconstruction paper: National Natural Science Foundation of China (No. 61261010, 61362001, 61365013, 61262084, 51165033); Technology Foundation of the Department of Education of Jiangxi Province (GJJ13061, GJJ14196); Young Scientists Training Plan of Jiangxi Province (No. 20133ACB21007, 20142BCB23001); National Post-Doctoral Research Fund (No. 2014M551867); Jiangxi Advanced Project for Post-Doctoral Research Fund (No. 2014KY02).
Funding for the code defect detection paper: National Key R&D Program of China (Grant 2017YFB0802901).
Funding: This work was supported by the National Natural Science Foundation of China (Grant Nos. 61871234 and 62375140) and the Postgraduate Research & Practice Innovation Program of Jiangsu Province (Grant No. KYCX190900).
Abstract: A Gray code based gradient-free optimization (GCO) algorithm is proposed in this work to update the parameters of parameterized quantum circuits (PQCs). Each parameter of a PQC is encoded as a binary string, called a gene, and a genetic-based method is adopted to select the offspring. The individuals in the offspring are decoded in the Gray code way to control the Hamming distance, and are then evaluated to obtain the best one with the lowest cost value in each iteration. The algorithm is performed iteratively over all parameters, one by one, until the cost value satisfies the stopping condition or the maximum number of iterations is reached. The GCO algorithm is demonstrated on classification tasks for the Iris and MNIST datasets, and its performance is compared with that of the Bayesian optimization algorithm and a binary code based optimization algorithm. Simulation results show that the GCO algorithm steadily reaches high accuracies on quantum classification tasks. Importantly, the GCO algorithm performs robustly in noisy environments.
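The property the GCO algorithm relies on is that consecutive integers differ in exactly one bit under Gray encoding, so a small change in the encoded gene decodes to a small step in the parameter value. The standard conversions are:

```python
# Binary <-> Gray code conversion. Consecutive integers have Gray codes at
# Hamming distance 1, which is the property exploited by Gray-coded
# genetic/gradient-free parameter updates.

def binary_to_gray(n: int) -> int:
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Check the distance-1 property for a range of values:
for i in range(15):
    diff = binary_to_gray(i) ^ binary_to_gray(i + 1)
    assert bin(diff).count("1") == 1

print([binary_to_gray(i) for i in range(8)])  # [0, 1, 3, 2, 6, 7, 5, 4]
```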
Abstract: A new method of constructing regular low-density parity-check (LDPC) codes is proposed, and the novel class of LDPC codes is applied in a coded orthogonal frequency division multiplexing (OFDM) system. The method extends the class of LDPC codes that can be constructed from shifted identity matrices. It avoids short cycles in the Tanner graph through simple inequalities on the shifts of the identity matrices, which makes the girth of the Tanner graph 8. Because of the quasi-cyclic structure and the inherent block configuration of the parity-check matrices, the encoders and decoders are practically feasible; they are linear-time encodable and decodable. The proposed LDPC codes offer various code rates, ranging from low to high. They perform excellently with iterative decoding and demonstrate better performance than other regular LDPC codes in OFDM systems.
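How a parity-check matrix is assembled from shifted identity matrices can be sketched as follows; the shift table below is illustrative and is not chosen by the paper's girth-8 inequalities.

```python
# Minimal sketch of a quasi-cyclic parity-check matrix built from circulant
# shifted identity blocks. Each block is the p x p identity cyclically
# shifted by an offset s; the construction in the abstract constrains these
# offsets to avoid short cycles in the Tanner graph.

def shifted_identity(p, s):
    """p x p identity matrix with each row's 1 shifted right by s (mod p)."""
    return [[1 if (i + s) % p == j else 0 for j in range(p)] for i in range(p)]

def qc_parity_check(shifts, p):
    """Tile circulant blocks according to a 2-D table of shift offsets."""
    H = []
    for shift_row in shifts:
        blocks = [shifted_identity(p, s) for s in shift_row]
        for r in range(p):
            H.append([x for b in blocks for x in b[r]])
    return H

# Illustrative 2 x 3 shift table with p = 5 (offsets NOT optimized for girth):
H = qc_parity_check([[0, 1, 2], [0, 2, 4]], p=5)
print(len(H), len(H[0]))  # 10 rows x 15 columns
# Regular structure: every row has weight 3, every column has weight 2.
```

Because each block is a permutation matrix, row and column weights are fixed by the block layout alone, which is what makes such codes regular and linear-time encodable.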
Abstract: This paper introduces the two-dimensional PIC (particle-in-cell) method, which is commonly used in particle dynamics simulations to compute space-charge effects. Two kinds of dynamics simulation codes are compared: those using time as the independent variable (t-code) and those using longitudinal position as the independent variable (z-code). For the structural parameters of the radio-frequency quadrupole (RFQ) accelerator in the clean nuclear energy project of the National Key Basic Research Development Program, the transmission efficiencies obtained from t-code and z-code simulations are given for two cases: acceleration of a single beam, and simultaneous acceleration of positive and negative ion beams. The results show that when the bunch phase width or the energy spread is large, the z-code introduces a relatively large error in computing the space-charge force; a t-code should therefore be used for dynamics simulation to obtain more accurate results.
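The core PIC step referred to above is depositing particle charge onto a grid before solving for the space-charge field. A minimal sketch with linear (cloud-in-cell) weighting, reduced to 1-D for brevity (the paper treats the 2-D case); grid size and particle data are illustrative:

```python
# Cloud-in-cell (linear-weighting) charge deposition onto a uniform 1-D grid:
# each point charge is shared between the two grid nodes bracketing it, in
# proportion to its distance from each node.

def deposit_charge(positions, charges, n_cells, length):
    """Deposit point charges onto the n_cells+1 nodes of a uniform grid."""
    dx = length / n_cells
    rho = [0.0] * (n_cells + 1)   # node-centred charge array
    for x, q in zip(positions, charges):
        cell = int(x / dx)
        frac = x / dx - cell          # fractional position inside the cell
        rho[cell] += q * (1.0 - frac) # share between the two
        rho[cell + 1] += q * frac     # neighbouring nodes
    return rho

rho = deposit_charge([0.25, 1.6], [1.0, 2.0], n_cells=4, length=4.0)
print(rho)       # each charge split linearly between neighbouring nodes
print(sum(rho))  # total deposited charge is conserved (about 3.0)
```

In a full t-code, this deposition is followed by a field solve on the grid and an interpolation of the field back to the particles at every time step.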
Funding: Supported by the National Natural Science Foundation of China (Grant No. 51075079) and the National Hi-tech Research and Development Program of China (863 Program, Grant No. 2008AA04Z202).
Abstract: For at least the past five decades, structural synthesis has been a principal means of finding better mechanisms with a predefined function. In structural synthesis, isomorphism identification remains an unsolved problem, and solving it is very significant for the design of new mechanisms. According to the given degree of freedom (DOF) and link connection properties of planar closed-chain mechanisms, the vertex assortment is obtained. To solve the isomorphism problem, a method of adding sub-chains is proposed, with detailed steps and algorithms for the synthesis process. With this method, the identification code and formation code of every topological structure are obtained, so many isomorphic structures can be eliminated in time during structural synthesis by comparing those codes among different topological graphs, improving synthesis efficiency and accuracy. An approach for eliminating rigid sub-chains during and after synthesis is also presented. Examples are given, including how to add sub-chains, how to detect simple rigid sub-chains, and how to obtain identification codes and formation codes. Using the adding sub-chain method, the relevant information for some common topological graphs is given in tabular form. The comparison results agree with many published studies, confirming the correctness of the adding sub-chain method. This method greatly improves synthesis efficiency and accuracy and has good potential for application.
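The general idea behind identification codes, namely assigning each topological graph a code so that isomorphs can be rejected by code comparison, can be illustrated with a toy canonical code (brute force over relabelings, so only for small graphs; this is not the paper's coding scheme):

```python
# Toy canonical code for small undirected graphs: the lexicographically
# largest upper-triangular adjacency bit-string over all vertex relabelings.
# Two graphs are isomorphic iff their canonical codes are equal.

from itertools import permutations

def canonical_code(edges, n):
    """Max adjacency bit-string over all n! relabelings of an n-vertex graph."""
    best = ""
    for perm in permutations(range(n)):
        adj = [[0] * n for _ in range(n)]
        for u, v in edges:
            adj[perm[u]][perm[v]] = adj[perm[v]][perm[u]] = 1
        code = "".join(str(adj[i][j]) for i in range(n) for j in range(i + 1, n))
        best = max(best, code)
    return best

# A 4-vertex cycle, the same cycle with vertices relabeled, and a 4-vertex path:
g1 = [(0, 1), (1, 2), (2, 3), (3, 0)]
g2 = [(2, 0), (0, 3), (3, 1), (1, 2)]
path = [(0, 1), (1, 2), (2, 3)]

print(canonical_code(g1, 4) == canonical_code(g2, 4))    # True: isomorphic
print(canonical_code(g1, 4) == canonical_code(path, 4))  # False
```

The adding sub-chain method achieves the same duplicate rejection far more cheaply by building codes incrementally during synthesis rather than searching over all relabelings.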
Funding: The University of Vigo is acknowledged for financing part of the first author's PhD studies, and the Spanish Ministry of Economy and Competitiveness for funding the project 'Deepening on the behaviour of rock masses: Scale effects on the stress-strain response of fissured rock samples with particular emphasis on post-failure', awarded under Contract Reference No. RTI2018-093563-B-I00 and partially financed by European Regional Development Funds from the European Union (EU).
Abstract: This study presents a calibration process for three-dimensional particle flow code (PFC3D) simulation of intact and fissured granite samples. First, the laboratory stress-strain response from triaxial testing of intact and fissured granite samples is recalled. Then, PFC3D is introduced, with focus on bonded particle models (BPM). After that, we review previous studies in which intact rock was simulated by means of flat-joint approaches and improved accuracy was gained with the help of parametric studies. Models of the pre-fissured rock specimens were then generated, with the fissures modeled as "smooth joint" type contacts. Finally, triaxial testing simulations of 1 t 2 and 2 t 3 jointed rock specimens were performed. Results show that both the elastic behavior and the peak strength levels are closely matched, without any additional fine-tuning of micro-mechanical parameters. Concerning the post-failure behavior, the models reproduce the trends of decreasing dilation with increasing confinement and plasticity. However, the simulated dilation values are larger than those observed in practice. This is attributed to the difficulty of modeling some phenomena of fissured rock behavior, such as crushing of rock piece corners with dust production and interactions between newly formed shear bands or axial splitting cracks and pre-existing joints.
Funding: Supported by the National Key R&D Program of China (Grant No. 2017YFC0404804) and the National Natural Science Foundation of China (Grant No. 51509019).
Abstract: Many researchers have developed new calculation methods for analyzing seismic slope stability problems, but the conventional pseudo-static method is still widely used in engineering design because of its simplicity. Based on the Technical Code for Building Slope Engineering (GB 50330-2013) of China and the Guidelines for Evaluating and Mitigating Seismic Hazards in California (SP117), a comparative study of the pseudo-static method was performed. The results indicate that the largest difference between these two design codes lies in the determination of the seismic equivalence reduction factor f. The GB 50330-2013 code specifies a single value of f = 0.25, whereas in SP117 numerous factors, such as magnitude and distance, are considered in determining f. Two case studies show that the slope stability statuses evaluated by SP117 agree with those obtained from seismic time-history stability analysis and Newmark displacement analysis. The factors of safety evaluated by SP117 can be used in practice for safe design; however, the factors of safety evaluated by GB 50330-2013 are risky for seismic slope design.
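The pseudo-static approach that both codes build on can be sketched for a rigid block on a planar slip surface: a horizontal force k_h·W is added to the driving terms, with k_h proportional to the reduction factor discussed above. All symbols and numeric values below are illustrative, not taken from either design code.

```python
# Generic pseudo-static factor of safety for a rigid block on a plane dipping
# at angle alpha, with cohesion c acting over the slip area and a horizontal
# seismic coefficient k_h = f * (PGA / g). Illustrative values only.

import math

def pseudo_static_fs(weight, alpha_deg, c, phi_deg, area, k_h):
    """Factor of safety = resisting forces / driving forces along the plane."""
    a = math.radians(alpha_deg)
    phi = math.radians(phi_deg)
    driving = weight * math.sin(a) + k_h * weight * math.cos(a)
    normal = weight * math.cos(a) - k_h * weight * math.sin(a)
    resisting = c * area + normal * math.tan(phi)
    return resisting / driving

# Static case (k_h = 0) vs pseudo-static case with f = 0.25 and PGA = 0.4 g:
fs_static = pseudo_static_fs(1000.0, 30.0, 5.0, 35.0, 20.0, k_h=0.0)
fs_seismic = pseudo_static_fs(1000.0, 30.0, 5.0, 35.0, 20.0, k_h=0.25 * 0.4)
print(round(fs_static, 2), round(fs_seismic, 2))  # seismic FS is lower
```

The abstract's point is precisely the choice of f in k_h: a fixed f = 0.25 (GB 50330-2013) versus a magnitude- and distance-dependent f (SP117) can lead to different stability verdicts for the same slope.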
Funding: Supported by the National Key Basic Research Program (973) Project (No. 2010CB328300) and the 111 Project (No. B08038).
Abstract: A new method for constructing Quasi-Cyclic (QC) Low-Density Parity-Check (LDPC) codes based on Euclidean Geometry (EG) is presented. The proposed method yields a class of QC-LDPC codes with girth of at least 6, and the designed codes perform very close to the Shannon limit under iterative decoding. Simulations show that the designed QC-LDPC codes have almost the same performance as existing EG-LDPC codes.