A fuzzy extractor can extract an almost uniform random string from a noisy source with enough entropy, such as biometric data. To reproduce an identical key from repeated readings of biometric data, the fuzzy extractor generates helper data and a random string from the biometric data, and uses the helper data to reproduce the random string from a second reading. In 2013, Fuller et al. proposed a computational fuzzy extractor based on the learning with errors problem. Their construction, however, can tolerate only a sub-linear fraction of errors and has an inefficient decoding algorithm, which causes the reproduction time to increase significantly. In 2016, Canetti et al. proposed a fuzzy extractor for inputs from low-entropy distributions based on a strong primitive called a digital locker. However, their construction requires an excessive amount of storage space for the helper data, which is stored on the authentication server. Based on these observations, we propose a new efficient computational fuzzy extractor with a small helper data size. Our scheme supports reusability and robustness, security notions that must be satisfied in order to use a fuzzy extractor as a secure authentication method in real life. It also leaks no information about the biometric data and, thanks to a new decoding algorithm, can tolerate linear errors. Based on the non-uniform learning with errors problem, we present a formal security proof for the proposed fuzzy extractor. Furthermore, we analyze the performance of our fuzzy extractor scheme and provide parameter sets that meet the security requirements. Our implementation and analysis show that our scheme outperforms previous fuzzy extractor schemes in terms of the efficiency of the generation and reproduction algorithms, as well as the size of the helper data.
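As a generic illustration of the gen/rep workflow described above, the classic code-offset construction (in the style of Dodis et al., not the scheme proposed in this paper) can be sketched with a simple repetition code; the function names and the 5-bit repetition code are choices made for this example only:

```python
# Sketch of a code-offset fuzzy extractor (illustrative only; this follows
# the classic Dodis et al. construction, NOT the paper's scheme). A 5-bit
# repetition code corrects up to 2 bit flips per block.
import random

R = 5  # repetition factor (assumed for the example)

def gen(w):
    """From reading w (a list of bits), derive a key and public helper data."""
    assert len(w) % R == 0
    key = [random.randrange(2) for _ in range(len(w) // R)]
    codeword = [b for b in key for _ in range(R)]          # repetition encoding
    helper = [wi ^ ci for wi, ci in zip(w, codeword)]      # code offset
    return key, helper

def rep(w_noisy, helper):
    """Reproduce the key from a noisy reading plus the helper data."""
    shifted = [wi ^ hi for wi, hi in zip(w_noisy, helper)]
    # majority-vote decoding of each R-bit block recovers each key bit
    return [1 if sum(shifted[i:i + R]) > R // 2 else 0
            for i in range(0, len(shifted), R)]
```

A second reading that differs from the enrolled one by at most 2 bits per 5-bit block still reproduces the same key.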
The main purpose of this paper is to introduce the LWE public key cryptosystem and its security. In the first section, we introduce Regev's LWE public key cryptosystem, together with its applications and some previous research results. We then prove the security of Regev's LWE public key cryptosystem in detail. For not only independent identically distributed Gaussian disturbances but also arbitrary independent identically distributed disturbances, we give a more accurate estimate of the decryption error probability of the general LWE cryptosystem. This guarantees the high security and widespread applicability of the LWE public key cryptosystem.
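The decryption-error behavior discussed above can be made concrete with a toy sketch of Regev-style LWE encryption. The parameters below are far too small to be secure and are chosen only so the bounded error term never causes a decryption failure; all function names are our own:

```python
# Toy sketch of Regev's LWE public key encryption (illustrative parameters
# only -- far too small to be secure). With m subset-sum terms and errors in
# {-1, 0, 1}, the accumulated error is at most m < q/4, so decryption by
# rounding toward 0 or q/2 always succeeds.
import random

n, m, q = 8, 32, 257          # dimension, samples, modulus (toy values)

def keygen():
    s = [random.randrange(q) for _ in range(n)]            # secret key
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
    e = [random.choice([-1, 0, 1]) for _ in range(m)]      # small error terms
    b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]
    return s, (A, b)

def encrypt(pk, bit):
    A, b = pk
    S = [i for i in range(m) if random.random() < 0.5]     # random subset
    c1 = [sum(A[i][j] for i in S) % q for j in range(n)]
    c2 = (sum(b[i] for i in S) + bit * (q // 2)) % q
    return c1, c2

def decrypt(s, ct):
    c1, c2 = ct
    d = (c2 - sum(c1[j] * s[j] for j in range(n))) % q
    return 1 if q // 4 < d < 3 * q // 4 else 0             # closer to q/2 -> 1
```

Since the total error is at most 32 while q/4 is 64, the rounding step in `decrypt` always lands on the correct bit in this toy setting.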
In existing landslide susceptibility prediction (LSP) models, the influence of random errors in landslide conditioning factors on LSP is not considered; instead, the original conditioning factors are directly taken as model inputs, which brings uncertainties to the LSP results. This study aims to reveal how different proportions of random error in the conditioning factors affect LSP uncertainty, and further explores a method that can effectively reduce these random errors. The original conditioning factors are first used to construct original factors-based LSP models, and random errors of 5%, 10%, 15% and 20% are then added to these original factors to construct the corresponding errors-based LSP models. Secondly, low-pass filter-based LSP models are constructed by eliminating the random errors with a low-pass filter. Thirdly, Ruijin County, China, with 370 landslides and 16 conditioning factors, is used as the study case. Three typical machine learning models, i.e. multilayer perceptron (MLP), support vector machine (SVM) and random forest (RF), are selected as LSP models. Finally, the LSP uncertainties are discussed, and the results show that: (1) The low-pass filter can effectively reduce the random errors in the conditioning factors and thereby decrease the LSP uncertainties. (2) As the proportion of random errors increases from 5% to 20%, the LSP uncertainty increases continuously. (3) The original factors-based models are feasible for LSP in the absence of more accurate conditioning factors. (4) The influence of the two uncertainty sources, machine learning models and different proportions of random errors, on LSP modeling is large and roughly equal. (5) Shapley values effectively explain the internal mechanism by which the machine learning models predict landslide susceptibility. In conclusion, a greater proportion of random errors in the conditioning factors results in higher LSP uncertainty, and a low-pass filter can effectively reduce these random errors.
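The error-reduction idea above can be sketched minimally: a moving-average low-pass filter applied to a noise-corrupted factor shrinks its random error. The sine-shaped "factor", the noise level and the window size below are invented for illustration; the paper's actual filter design is not reproduced here:

```python
# Minimal illustration of low-pass filtering for random-error reduction:
# a centered moving average applied to a smoothly varying "conditioning
# factor" corrupted by additive random error (toy data, assumed parameters).
import math, random

def lowpass(xs, half=2):
    """Centered moving average with a (2*half+1)-point window, truncated at edges."""
    out = []
    for i in range(len(xs)):
        lo, hi = max(0, i - half), min(len(xs), i + half + 1)
        out.append(sum(xs[lo:hi]) / (hi - lo))
    return out

def rmse(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

random.seed(42)
true_factor = [math.sin(2 * math.pi * i / 50) for i in range(200)]  # smooth trend
noisy = [t + random.uniform(-0.2, 0.2) for t in true_factor]        # ~20% random error
filtered = lowpass(noisy)
```

Averaging over the window suppresses the independent noise terms faster than it distorts the slowly varying signal, so the filtered series sits closer to the true factor than the noisy one.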
The accuracy of landslide susceptibility prediction (LSP) mainly depends on the precision of the landslide spatial positions. However, spatial position errors in landslide surveys are inevitable, resulting in considerable uncertainties in LSP modeling. To overcome this drawback, this study explores the influence of landslide spatial position errors on LSP uncertainties, and then innovatively proposes a semi-supervised machine learning model to reduce the landslide spatial position error. Taking Shangyou County, China, as an example, 16 environmental factors and 337 landslides with accurate spatial positions were collected. The 30–110 m error-based multilayer perceptron (MLP) and random forest (RF) models for LSP are established by randomly offsetting the original landslides by 30, 50, 70, 90 and 110 m. The LSP uncertainties are analyzed through the LSP accuracies and distribution characteristics. Finally, a semi-supervised model is proposed to relieve the LSP uncertainties. The results show that: (1) The LSP accuracies of the error-based RF/MLP models decrease as the landslide position errors increase, and are lower than those of the original data-based models; (2) The 70 m error-based models can still reflect the overall distribution characteristics of the landslide susceptibility indices, so original landslides with certain position errors are acceptable for LSP; (3) The semi-supervised machine learning model can efficiently reduce the landslide position errors and thus improve the LSP accuracies.
Complementary-label learning (CLL) aims at finding a classifier from samples with complementary labels. Such data is considered to contain less information than ordinary-label samples. The transition matrix between the true label and the complementary label, together with specially designed loss functions, has been developed to handle this problem. In this paper, we show that CLL can be transformed into ordinary classification under some mild conditions, which indicates that complementary labels can supply enough information in most cases. As an example, an extensive misclassification error analysis is performed for the kernel ridge regression (KRR) method applied to multiple complementary-label learning (MCLL), which demonstrates its superior performance compared to existing approaches.
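The claim that complementary labels carry enough information can be made concrete with a toy sketch (our own, not the paper's KRR-based method): each complementary label rules out exactly one class, so across repeated complementary labels for the same instance the true class is the only one never ruled out:

```python
# Toy illustration of the information content of complementary labels.
# Each complementary label names a class the instance does NOT belong to;
# given enough of them, the true class is the one that never appears.
# (Example is ours; the paper's method is KRR-based, not this counting rule.)
import random

K = 4  # number of classes (assumed for the example)

def complementary_label(true_y):
    """Sample a label uniformly from the K-1 classes that are not true_y."""
    return random.choice([k for k in range(K) if k != true_y])

def infer_from_complementary(labels):
    """Return the unique class never observed as a complementary label, if any."""
    seen = set(labels)
    candidates = [k for k in range(K) if k not in seen]
    return candidates[0] if len(candidates) == 1 else None
```

With enough samples, every wrong class appears at least once among the complementary labels, leaving the true class as the single unobserved one.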
With the development of communication systems, modulation methods are becoming more and more diverse. Among them, quadrature spatial modulation (QSM) is considered a method of low complexity and high efficiency. In QSM, the traditional signal detection methods are sometimes unable to meet the system's requirement of low complexity. Therefore, this paper proposes a deep learning-based signal detection scheme for QSM systems to address the complexity problem. Simulation results show that the bit error rate performance of the proposed deep learning-based detector is better than that of the zero-forcing (ZF) and minimum mean square error (MMSE) detectors, and similar to that of the maximum likelihood (ML) detector. Moreover, the proposed method requires less processing time than ZF, MMSE, and ML.
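The baseline detectors compared above can be sketched in a deliberately simplified 2x2 real-valued model (our own toy setup; QSM's actual transmit structure is richer than this):

```python
# Toy comparison of zero-forcing (ZF) and maximum-likelihood (ML) detection
# for y = Hx + n with BPSK symbols on a 2x2 real channel. This is a generic
# textbook simplification, not the QSM system model from the paper.
import itertools

def zf_detect(H, y):
    """Invert the 2x2 channel, then slice each entry to the nearest symbol."""
    (a, b), (c, d) = H
    det = a * d - b * c
    x0 = (d * y[0] - b * y[1]) / det
    x1 = (-c * y[0] + a * y[1]) / det
    return [1 if v >= 0 else -1 for v in (x0, x1)]

def ml_detect(H, y):
    """Exhaustively search all symbol vectors for the smallest residual."""
    def dist(s):
        r0 = y[0] - (H[0][0] * s[0] + H[0][1] * s[1])
        r1 = y[1] - (H[1][0] * s[0] + H[1][1] * s[1])
        return r0 * r0 + r1 * r1
    return list(min(itertools.product([-1, 1], repeat=2), key=dist))
```

ML's exhaustive search over all candidate symbol vectors is what makes it accurate but expensive, which is the complexity gap a learned detector aims to close.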
Quantum error correction, a technique that relies on the principle of redundancy to encode logical information into additional qubits to better protect the system from noise, is necessary to design a viable quantum computer. The XYZ^(2) code, a new topological stabilizer code defined on a cellular lattice, is implemented on a hexagonal lattice of qubits and encodes the logical qubits with the help of stabilizer measurements of weight six and weight two. However, topological stabilizer codes on cellular lattices suffer from the detrimental effects of noise due to interaction with the environment. Several decoding approaches have been proposed to address this problem. Here, we propose a state-attention based reinforcement learning decoder for XYZ^(2) codes, which enables the decoder to focus more accurately on the information related to the current decoding position. Under optimized conditions, the error correction accuracy of our reinforcement learning decoder reaches 83.27% under the depolarizing noise model, and we measured thresholds of 0.18856 and 0.19043 for XYZ^(2) codes at code distances of 3–7 and 7–11, respectively. Our study provides directions and ideas for applying decoding schemes that combine reinforcement learning with attention mechanisms to other topological quantum error-correcting codes.
This research focuses on improving predictive accuracy in the financial sector by exploring machine learning algorithms for stock price prediction. The research follows an organized process combining Agile Scrum and the Obtain, Scrub, Explore, Model, and iNterpret (OSEMN) methodology. Six machine learning models, namely Linear Forecast, Naive Forecast, Simple Moving Average with a weekly window (SMA 5), Simple Moving Average with a monthly window (SMA 20), Autoregressive Integrated Moving Average (ARIMA), and Long Short-Term Memory (LSTM), are compared and evaluated using Mean Absolute Error (MAE), with the LSTM model performing the best, showcasing its potential for practical financial applications. A Django web application, "Predict It", is developed to deploy the LSTM model. Ethical concerns related to predictive modeling in finance are addressed. Data quality, algorithm choice, feature engineering, and preprocessing techniques are emphasized for better model performance. The research acknowledges its limitations and suggests future research directions, aiming to equip investors and financial professionals with reliable predictive models for dynamic markets.
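The evaluation step above reduces to scoring each baseline with MAE. Two of the simplest baselines named in the abstract can be sketched on an invented price series (the series and window length are illustrative assumptions, not the study's data):

```python
# Minimal sketch of two baselines from the comparison above: the naive
# forecast and a k-step simple moving average, both scored with Mean
# Absolute Error. The toy price series is invented for illustration.

def naive_forecast(prices):
    """Predict each day's price as the previous day's price."""
    return prices[:-1]                       # forecasts for days 1..n-1

def sma_forecast(prices, k):
    """Predict each day as the mean of the previous k days."""
    return [sum(prices[i - k:i]) / k for i in range(k, len(prices))]

def mae(pred, actual):
    return sum(abs(p - a) for p, a in zip(pred, actual)) / len(pred)

prices = [10.0, 10.5, 10.2, 10.8, 11.0, 10.7, 11.2, 11.5]   # hypothetical closes
naive_mae = mae(naive_forecast(prices), prices[1:])
sma3_mae = mae(sma_forecast(prices, 3), prices[3:])
```

Comparing such per-model MAE values on a held-out span is exactly the selection criterion the study applies across its six models.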
Quantum error correction is an important method for eliminating errors during the operation of a quantum computer. To address the influence of errors on physical qubits, we propose an approximate error correction scheme that performs dimension mapping operations on surface codes. This scheme uses the topological properties of error-correcting codes to map the surface code to three dimensions. Compared to previous error correction schemes, the present three-dimensional surface code exhibits good scalability due to its higher redundancy and more efficient error correction capabilities. By reducing the number of ancilla qubits required for error correction, this approach saves measurement space and reduces resource consumption costs. To improve the decoding efficiency and address the correlation between the surface code stabilizers and the 3D space after dimension mapping, we employ a reinforcement learning (RL) decoder based on deep Q-learning, which identifies the optimal syndrome faster and achieves better thresholds through conditional optimization. Compared to minimum weight perfect matching decoding, the threshold of the RL-trained model reaches 0.78%, which is 56% higher, enabling large-scale fault-tolerant quantum computation.
Language teaching is not a one-way process. It interacts with language learning in an extremely intricate way. To improve language teaching, we need to take the process of language learning into account. This paper tries to explore and understand what strategies second language learners consciously or subconsciously adopt during their language learning process through analyses of the linguistic errors they commit, so as to provide some insights into language teaching practice.
Based on the questionnaire, this study found that: 1) Elementary learners were inclined to commit more global errors than local errors, whilst advanced learners made more local errors; 2) Interlingual factors were more influential than intralingual factors in elementary learners' error making, but for advanced learners, intralingual factors played a much more important role; 3) Elementary learners preferred explicit correction, whilst advanced learners favoured implicit correction in question-asking.
To research brain problems using MRI, PET, and CT neuroimaging, a correct understanding of brain function is required. This has previously been approached with traditional algorithms, and deep learning has also been widely applied to processing such data. In this research, brain disorders including Alzheimer's disease, schizophrenia and Parkinson's disease are analyzed, motivated by the misdetection of disorders when neuroimaging data are examined by traditional methods. Moreover, a deep learning approach based on Deep Belief Networks (DBN) is incorporated for classifying the brain disorders. Images are stored securely using a DNA sequence-based JPEG Zig-Zag encryption algorithm; the combined approach is denoted DBNJZZ. The suggested approach is executed and tested using performance metrics such as accuracy, root mean square error, mean absolute error and mean absolute percentage error. The proposed DBNJZZ gives better performance than previously available methods.
Machine learning models were used to improve the accuracy of the China Meteorological Administration Multisource Precipitation Analysis System (CMPAS) in complex terrain areas by combining rain gauge precipitation with topographic factors such as altitude, slope, slope direction, slope variability and surface roughness, and meteorological factors such as temperature and wind speed. The correction results demonstrate that the ensemble learning methods have a considerable corrective effect, and the three methods adopted in the study (Random Forest, AdaBoost, and Bagging) yield similar results. The mean bias between CMPAS and 85% of the automatic weather stations dropped by more than 30%. The plateau region displays the largest accuracy increase, the winter season shows the greatest error reduction, and the correction outcome improves as precipitation decreases. Additionally, the precision for heavy precipitation processes has improved to some degree. For individual stations, the error fluctuation range of the revised CMPAS is significantly reduced.
An encryption scheme based on the learning with errors (LWE) problem that allows secure searching over ciphertext is proposed. Both the generation of the ciphertext and the trapdoor of the query are based on the LWE problem. By performing an operation over the trapdoor and the ciphertext, it is possible to tell whether the ciphertext is the encryption of a given plaintext. The secure searchable encryption scheme is both ciphertext- and trapdoor-indistinguishable. The probabilities of a missed match and a false match occurring during search are both exponentially small.
Automatically correcting students' code errors using deep learning is an effective way to reduce the burden on teachers and to enhance students' learning. However, code errors vary greatly, and the suitability of fixing techniques may differ across error types. How to choose the appropriate method to fix each type of error is still an unsolved problem. To this end, this paper first classifies the code errors of Java novice programmers based on a Delphi analysis, and compares the effectiveness of different deep learning models (CuBERT, GraphCodeBERT and GGNN) in fixing different types of errors. The results indicate that the three models differ significantly in their correction accuracy on different error types, while the BERT-based error correction model shows better code correction potential for beginners' code.
Despite the advances of the last decades in the field of smart grids, energy consumption forecasting using meteorological features is still challenging. This paper proposes a genetic algorithm-based adaptive error curve learning ensemble (GA-ECLE) model. The proposed technique copes with stochastic variations in energy consumption using a machine learning-based ensemble approach. A modified ensemble model that uses the model error as a feature is used to improve the forecast accuracy. This approach combines three models, namely CatBoost (CB), Gradient Boost (GB), and Multilayer Perceptron (MLP). The inner mechanism of the ensembled CB-GB-MLP model consists of generating meta-data from the Gradient Boosting and CatBoost models and computing the final predictions with the Multilayer Perceptron network. A genetic algorithm is used to obtain the optimal features for the model. To prove the proposed model's effectiveness, we used a four-phase technique on real energy consumption data from Jeju Island. In the first phase, we obtained results by applying the CB-GB-MLP model. In the second phase, we used a GA-ensembled model with optimal features. The third phase compares the energy forecasting results with the proposed ECL-based model. In the fourth and final phase, we applied the GA-ECLE model, obtaining a mean absolute error of 3.05 and a root mean square error of 5.05. Extensive experimental results are provided, demonstrating the superiority of the proposed GA-ECLE model over traditional ensemble models.
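The stacking idea described above, where base-model predictions become meta-features for a second-level learner, can be sketched generically. The tiny least-squares meta-learner and the synthetic biased "base models" below are our substitutes for the paper's CatBoost/GradientBoost/MLP stack:

```python
# Generic two-level stacking sketch: the predictions of two biased base
# models become features for a least-squares meta-learner. The base models
# and target are synthetic placeholders for the paper's CB/GB/MLP ensemble.

def fit_meta_weights(preds_a, preds_b, y):
    """Solve the 2x2 normal equations for weights minimizing ||w1*a + w2*b - y||."""
    saa = sum(a * a for a in preds_a)
    sbb = sum(b * b for b in preds_b)
    sab = sum(a * b for a, b in zip(preds_a, preds_b))
    say = sum(a * t for a, t in zip(preds_a, y))
    sby = sum(b * t for b, t in zip(preds_b, y))
    det = saa * sbb - sab * sab
    return ((sbb * say - sab * sby) / det, (saa * sby - sab * say) / det)

y       = [3.0, 5.0, 7.0, 9.0, 11.0]          # synthetic target
model_a = [0.8 * t + 0.5 for t in y]          # base model A: scaled down, shifted up
model_b = [1.3 * t - 1.0 for t in y]          # base model B: scaled up, shifted down
w1, w2 = fit_meta_weights(model_a, model_b, y)
stacked = [w1 * a + w2 * b for a, b in zip(model_a, model_b)]
```

Because the target lies in the span of the two (non-proportional) base predictions, the meta-learner here recovers it exactly; on real data the combined prediction merely outperforms each base model.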
With the continuous advancement of China's "peak carbon dioxide emissions and carbon neutrality" process, the proportion of wind power is increasing. To address the problem that forecasting models become outdated as wind power data are continuously updated, a short-term wind power forecasting algorithm based on an Incremental Learning-Bagging Deep Hybrid Kernel Extreme Learning Machine (IL-Bagging-DHKELM) with error affinity propagation cluster analysis is proposed. The algorithm effectively combines a deep hybrid kernel extreme learning machine (DHKELM) with incremental learning (IL). Firstly, an initial wind power prediction model is trained using the Bagging-DHKELM model. Secondly, an affinity propagation (AP) clustering algorithm based on Euclidean morphological distance is used to cluster and analyze the wind power prediction errors of the initial model. Finally, the correlation between the wind power prediction errors and Numerical Weather Prediction (NWP) data is introduced as an incremental update to the initial wind power prediction model. During the incremental learning process, multiple error performance indicators are used to measure the overall model performance, enabling incremental updates of the wind power models. Practical examples show that the proposed method reduces the root mean square error of the initial model by 1.9 percentage points, indicating that it is better adapted to the continuing increase in wind power penetration. The accuracy and precision of wind power generation prediction are effectively improved.
Industrial robot systems are dynamic systems with strong nonlinear coupling and high position precision. Many control methods, such as nonlinear feedback decomposition motion control and adaptive control, have been used for this kind of system, but these methods have deficiencies: some need accurate models and some need complicated operations. In recent years, driven by the need to control industrial robots so that they completely track an ideal input in repetitive tasks, a new research area, iterative learning control (ILC), has been developed in control technology and theory. The iterative learning control method can make the controlled plant operate as desired over a definite time span, merely by making use of the prior control experience of the system and searching for the desired control signal according to the actual and desired output signals. The searching process is a form of learning, during which we only need to measure the output signal to amend the control signal, unlike adaptive control strategies, which assess the complex parameters of the system on line. Besides, since iterative learning control relies little on prior knowledge of the plant, it has been used successfully in many areas, especially in dynamic systems with strong nonlinear coupling and high repetitive position precision, and in control systems for batch production. Since robot manipulators have the above-mentioned character, ILC is very well suited to them.

In ILC, since each operation always begins from a certain initial state, an initial condition has been required in almost all convergence proofs. Therefore, in designing the controller, the initial state has to be restricted by some condition to guarantee the convergence of the algorithm. The settlement of the initial condition problem has long been pursued in ILC. There are commonly two kinds of initial condition problems: the zero initial error problem and the non-zero initial error problem. In practice, repetitive operation will invariably produce excursions of the iterative initial state from the desired initial state. As a result, research on the second initial condition problem has more practical meaning.

In this paper, for the non-zero initial error problem, a novel robust ILC algorithm, combining a PD-type iterative learning control algorithm with a robust feedback control algorithm, is presented. This novel robust ILC algorithm contains two parts: a feedforward ILC algorithm and a robust feedback algorithm, which together can restrain disturbances from parameter variations, mechanical nonlinearities and unmodeled dynamics, and achieve good performance. The feedforward ILC algorithm improves the tracking error and the performance of the system by iteratively learning from previous operations, thus performing the tracking task very fast. The robust feedback algorithm mainly keeps the real output of the system from deviating too far from the desired tracking trajectory, and guarantees the system's robustness in the presence of exterior noises and variations of the system parameters.

To analyze the convergence of the algorithm, Lyapunov stability theory is used through a properly selected Lyapunov function. The result of the verification shows the feasibility of the novel robust iterative learning control in theory. Finally, for a two-degree-of-freedom robot, simulations are carried out with the MATLAB software. Furthermore, two groups of parameters are selected to validate the robustness of the algorithm.
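The learn-from-previous-trials idea above can be illustrated with a minimal P-type ILC loop on a scalar first-order plant (a generic textbook sketch with assumed plant and gain values, not the paper's robust PD-type scheme):

```python
# Minimal P-type iterative learning control on the scalar plant
# x(t+1) = 0.5*x(t) + u(t), y = x, tracking a step reference. Each trial
# reuses the previous trial's input corrected by the previous tracking
# error. Generic illustration only; the paper combines a PD-type ILC with
# a robust feedback term, which is not reproduced here.

T = 10                              # time steps per trial (assumed)
Y_DES = [1.0] * (T + 1)             # desired output trajectory (step reference)
L_GAIN = 0.8                        # learning gain; |1 - L_GAIN| < 1 gives convergence

def run_trial(u):
    """Simulate one trial from x(0) = 0; return the output sequence y(0..T)."""
    x, ys = 0.0, [0.0]
    for t in range(T):
        x = 0.5 * x + u[t]
        ys.append(x)
    return ys

def ilc(iterations):
    """Run ILC trials and return each trial's maximum tracking error."""
    u = [0.0] * T
    max_errors = []
    for _ in range(iterations):
        y = run_trial(u)
        e = [yd - yo for yd, yo in zip(Y_DES, y)]
        max_errors.append(max(abs(v) for v in e[1:]))
        u = [u[t] + L_GAIN * e[t + 1] for t in range(T)]   # learn from this trial
    return max_errors
```

Only the measured output of each trial is needed to amend the input for the next one, which is exactly the property that distinguishes ILC from on-line parameter-estimating adaptive control.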
With the widespread use of Chinese globally, the number of Chinese learners has been increasing, leading to various grammatical errors among beginners. Additionally, as domestic efforts to develop industrial information systems grow, electronic documents have also proliferated. When dealing with large numbers of electronic documents and texts written by Chinese beginners, manually written texts often contain hidden grammatical errors, posing a significant challenge to traditional manual proofreading. Correcting these grammatical errors is crucial to ensure fluency and readability. Certain special types of grammatical or logical errors can have a large impact, and manually proofreading a large number of texts individually is clearly impractical. Consequently, research on text error correction techniques has garnered significant attention in recent years. The advent and advancement of deep learning have paved the way for sequence-to-sequence learning methods to be extensively applied to the task of text error correction. This paper presents a comprehensive analysis of Chinese text grammar error correction technology, elaborates on its current research status, discusses existing problems, proposes preliminary solutions, and conducts experiments using judicial documents as an example. The aim is to provide a feasible research approach for Chinese text error correction technology.
Funding: supported by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2022-0-00518, Blockchain privacy preserving techniques based on data encryption).
Funding: this work is funded by the National Natural Science Foundation of China (Grant Nos. 42377164 and 52079062) and the National Science Fund for Distinguished Young Scholars of China (Grant No. 52222905).
Abstract: In existing landslide susceptibility prediction (LSP) models, the influence of random errors in landslide conditioning factors on LSP is not considered; instead, the original conditioning factors are taken directly as model inputs, which brings uncertainties to LSP results. This study aims to reveal how different proportions of random error in conditioning factors influence LSP uncertainties, and further explores a method that can effectively reduce the random errors in conditioning factors. The original conditioning factors are first used to construct original factors-based LSP models, and then random errors of 5%, 10%, 15% and 20% are added to these original factors to construct the corresponding errors-based LSP models. Second, low-pass filter-based LSP models are constructed by eliminating the random errors with a low-pass filter. Third, Ruijin County, China, with 370 landslides and 16 conditioning factors, is used as the study case. Three typical machine learning models, i.e. multilayer perceptron (MLP), support vector machine (SVM) and random forest (RF), are selected as LSP models. Finally, the LSP uncertainties are discussed, and the results show that: (1) The low-pass filter can effectively reduce the random errors in conditioning factors and thereby decrease the LSP uncertainties. (2) As the proportion of random errors increases from 5% to 20%, the LSP uncertainty increases continuously. (3) The original factors-based models are feasible for LSP in the absence of more accurate conditioning factors. (4) The influence degrees of the two uncertainty issues, machine learning models and different proportions of random errors, on LSP modeling are large and basically the same. (5) Shapley values effectively explain the internal mechanism by which the machine learning models predict landslide susceptibility. In conclusion, a greater proportion of random errors in conditioning factors results in higher LSP uncertainty, and a low-pass filter can effectively reduce these random errors.
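As a rough illustration of the idea (not the study's actual pipeline), the sketch below adds a proportional random error to a hypothetical one-dimensional conditioning-factor profile and removes much of it with a simple moving-average low-pass filter. The profile shape, the 15% error proportion, and the window width are all illustrative assumptions; the abstract does not specify the filter design used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D profile of a conditioning factor (e.g. elevation along a transect).
factor = np.sin(np.linspace(0, 4 * np.pi, 500)) * 100 + 500

def add_proportional_error(x, proportion, rng):
    """Perturb each value by uniform noise scaled to `proportion` of its magnitude."""
    return x + x * proportion * rng.uniform(-1, 1, size=x.shape)

def low_pass_moving_average(x, window=11):
    """A simple moving-average low-pass filter (edge-padded to keep the length)."""
    pad = window // 2
    padded = np.pad(x, pad, mode="edge")
    kernel = np.ones(window) / window
    return np.convolve(padded, kernel, mode="valid")

noisy = add_proportional_error(factor, 0.15, rng)   # 15% proportional random error
filtered = low_pass_moving_average(noisy)

rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
print(rmse(noisy, factor), rmse(filtered, factor))  # filtering reduces the error
```

Averaging over a window attenuates the high-frequency random error while preserving the slowly varying factor signal, which is the mechanism the study relies on.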
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 42377164 and 52079062) and the Interdisciplinary Innovation Fund of Natural Science, Nanchang University (Grant No. 9167-28220007-YB2107).
Abstract: The accuracy of landslide susceptibility prediction (LSP) mainly depends on the precision of landslide spatial positions. However, spatial position errors in landslide surveys are inevitable, resulting in considerable uncertainties in LSP modeling. To overcome this drawback, this study explores the influence of positional errors in landslide spatial positions on LSP uncertainties, and then innovatively proposes a semi-supervised machine learning model to reduce the landslide spatial position error. Taking Shangyou County, China as an example, 16 environmental factors and 337 landslides with accurate spatial positions were collected. The 30–110 m error-based multilayer perceptron (MLP) and random forest (RF) models for LSP are established by randomly offsetting the original landslides by 30, 50, 70, 90 and 110 m. The LSP uncertainties are analyzed through the LSP accuracy and distribution characteristics. Finally, a semi-supervised model is proposed to relieve the LSP uncertainties. Results show that: (1) The LSP accuracies of error-based RF/MLP models decrease with increasing landslide position errors, and are lower than those of the original data-based models; (2) The 70 m error-based models can still reflect the overall distribution characteristics of landslide susceptibility indices, so original landslides with certain position errors are acceptable for LSP; (3) The semi-supervised machine learning model can efficiently reduce landslide position errors and thus improve LSP accuracies.
Funding: Supported by the Indigenous Innovation's Capability Development Program of Huizhou University (HZU202003, HZU202020), the Natural Science Foundation of Guangdong Province (2022A1515011463), the Project of the Educational Commission of Guangdong Province (2023ZDZX1025), the National Natural Science Foundation of China (12271473), and Guangdong Province's 2023 Education Science Planning Project (Higher Education Special Project) (2023GXJK505).
Abstract: Complementary-label learning (CLL) aims at finding a classifier from samples with complementary labels. Such data are considered to contain less information than ordinary-label samples. The transition matrix between the true label and the complementary label, together with suitable loss functions, has been developed to handle this problem. In this paper, we show that CLL can be transformed into ordinary classification under some mild conditions, which indicates that complementary labels can supply enough information in most cases. As an example, an extensive misclassification error analysis is performed for the Kernel Ridge Regression (KRR) method applied to multiple complementary-label learning (MCLL), demonstrating its superior performance compared to existing approaches.
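The claim that complementary labels can supply full information can be illustrated with the uniform transition matrix: when the matrix Q relating true and complementary labels is invertible, the ordinary class posterior is recoverable from the complementary-label distribution. The class count and posterior values below are made up for illustration only.

```python
import numpy as np

K = 4  # number of classes (illustrative)

# Uniform complementary-label transition matrix: P(comp = j | true = i) = 1/(K-1), j != i.
Q = (np.ones((K, K)) - np.eye(K)) / (K - 1)

# Hypothetical true class-posterior vector for some instance.
p_true = np.array([0.7, 0.1, 0.1, 0.1])

# What complementary labels let us estimate is Q^T p.
p_comp = Q.T @ p_true

# Under the mild condition that Q is invertible, the ordinary posterior is recovered.
p_recovered = np.linalg.solve(Q.T, p_comp)
print(p_recovered)
```

For this uniform Q the eigenvalues are 1 and -1/(K-1), so it is always invertible for K ≥ 2, which is the kind of mild condition the abstract alludes to.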
Funding: Supported in part by The Science and Technology Development Fund, Macao SAR, China (0108/2020/A3), in part by The Science and Technology Development Fund, Macao SAR, China (0005/2021/ITP), and by the Deanship of Scientific Research at Taif University.
Abstract: With the development of communication systems, modulation methods are becoming more and more diverse. Among them, quadrature spatial modulation (QSM) is considered a highly efficient method. In QSM, traditional signal detection methods sometimes cannot meet the system's practical requirement of low complexity. Therefore, this paper proposes a signal detection scheme for QSM systems using deep learning to solve the complexity problem. Simulation results show that the bit error rate performance of the proposed deep learning-based detector is better than that of the zero-forcing (ZF) and minimum mean square error (MMSE) detectors, and similar to that of the maximum likelihood (ML) detector. Moreover, the proposed method requires less processing time than ZF, MMSE, and ML.
Funding: Supported by the Natural Science Foundation of Shandong Province, China (Grant No. ZR2021MF049) and the Joint Fund of the Natural Science Foundation of Shandong Province (Grant Nos. ZR2022LLZ012 and ZR2021LLZ001).
Abstract: Quantum error correction, a technique that relies on the principle of redundancy to encode logical information into additional qubits to better protect the system from noise, is necessary to design a viable quantum computer. The XYZ^(2) code, a new topological stabilizer code defined on a cellular lattice, is implemented on a hexagonal lattice of qubits and encodes logical qubits with the help of stabilizer measurements of weight six and weight two. However, topological stabilizer codes in cellular-lattice quantum systems suffer from the detrimental effects of noise due to interaction with the environment. Several decoding approaches have been proposed to address this problem. Here, we propose a state-attention-based reinforcement learning decoder for XYZ^(2) codes, which enables the decoder to focus more accurately on the information related to the current decoding position. The error correction accuracy of our reinforcement learning decoder model under the optimization conditions reaches 83.27% under the depolarizing noise model, and we measured thresholds of 0.18856 and 0.19043 for XYZ^(2) codes at code distances of 3–7 and 7–11, respectively. Our study provides directions and ideas for applying decoding schemes that combine reinforcement learning attention mechanisms to other topological quantum error-correcting codes.
Abstract: This research focuses on improving predictive accuracy in the financial sector through the exploration of machine learning algorithms for stock price prediction. The research follows an organized process combining Agile Scrum and the Obtain, Scrub, Explore, Model, and iNterpret (OSEMN) methodology. Six forecasting models, namely Linear Forecast, Naive Forecast, Simple Moving Average with weekly window (SMA 5), Simple Moving Average with monthly window (SMA 20), Autoregressive Integrated Moving Average (ARIMA), and Long Short-Term Memory (LSTM), are compared and evaluated through Mean Absolute Error (MAE), with the LSTM model performing best, showcasing its potential for practical financial applications. A Django web application, "Predict It", is developed to implement the LSTM model. Ethical concerns related to predictive modeling in finance are addressed. Data quality, algorithm choice, feature engineering, and preprocessing techniques are emphasized for better model performance. The research acknowledges its limitations and suggests future research directions, aiming to equip investors and financial professionals with reliable predictive models for dynamic markets.
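Two of the simpler baselines named above (Naive Forecast and SMA 5) and the MAE metric can be sketched in a few lines. The random-walk series stands in for real stock data, and the window sizes follow the abstract's conventions; the LSTM and ARIMA models are beyond the scope of a short sketch.

```python
import numpy as np

rng = np.random.default_rng(42)
prices = 100 + np.cumsum(rng.normal(0, 1, 300))  # synthetic random-walk "prices"

def mae(pred, actual):
    """Mean Absolute Error, the evaluation metric used in the study."""
    return np.mean(np.abs(pred - actual))

# Naive forecast: tomorrow's price = today's price.
naive_pred = prices[:-1]
actual = prices[1:]

# SMA 5 forecast: tomorrow's price = mean of the previous 5 days.
sma5_pred = np.convolve(prices, np.ones(5) / 5, mode="valid")[:-1]
sma5_actual = prices[5:]

print("naive MAE:", mae(naive_pred, actual))
print("SMA 5 MAE:", mae(sma5_pred, sma5_actual))
```

On a pure random walk the naive forecast is hard to beat, which is why such baselines are a useful sanity check before fitting heavier models like LSTM.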
Funding: Project supported by the Natural Science Foundation of Shandong Province, China (Grant Nos. ZR2021MF049, ZR2022LLZ012, and ZR2021LLZ001).
Abstract: Quantum error correction technology is an important method for eliminating errors during the operation of quantum computers. To address the influence of errors on physical qubits, we propose an approximate error correction scheme that performs dimension mapping operations on surface codes. This scheme utilizes the topological properties of error correction codes to map the surface code to three dimensions. Compared to previous error correction schemes, the present three-dimensional surface code exhibits good scalability due to its higher redundancy and more efficient error correction capabilities. By reducing the number of ancilla qubits required for error correction, this approach saves measurement space and reduces resource consumption costs. To improve decoding efficiency and address the correlation between the surface code stabilizers and the 3D space after dimension mapping, we employ a reinforcement learning (RL) decoder based on deep Q-learning, which enables faster identification of the optimal syndrome and achieves better thresholds through conditional optimization. Compared to minimum weight perfect matching decoding, the threshold of the RL-trained model reaches 0.78%, which is 56% higher, enabling large-scale fault-tolerant quantum computation.
Abstract: Language teaching is not a one-way process; it interacts with language learning in an extremely intricate way. To improve language teaching, we need to take the process of language learning into account. This paper tries to explore and understand what strategies second language learners consciously or subconsciously adopt during their language learning process through analyses of the linguistic errors they commit, so as to provide some insights into language teaching practice.
Abstract: Based on a questionnaire, this study found that: 1) elementary learners were inclined to commit more global errors than local errors, whilst advanced learners made more local errors; 2) interlingual factors were more influential than intralingual factors in elementary learners' error making, but for advanced learners, intralingual factors played a relatively much more important role; 3) elementary learners preferred explicit correction, whilst advanced learners favoured implicit correction in question-asking.
Abstract: To research brain problems using MRI, PET, and CT neuroimaging, a correct understanding of brain function is required. This was previously pursued with the support of traditional algorithms, and deep learning has also been widely considered for processing such data. In this research, brain disorders including Alzheimer's disease, schizophrenia, and Parkinson's disease are analyzed, owing to the misdetection of disorders when neuroimaging data are examined by traditional methods. Moreover, a deep learning approach is incorporated for brain-disorder classification with the aid of Deep Belief Networks (DBN). Images are stored in a secured manner using a DNA-sequence-based JPEG Zig Zag encryption algorithm (the combined approach is denoted DBNJZZ). The suggested approach is executed and tested using performance metrics such as accuracy, root mean square error, mean absolute error, and mean absolute percentage error. The proposed DBNJZZ gives better performance than previously available methods.
Funding: Program of the Science and Technology Department of Sichuan Province (2022YFS0541-02), Program of the Heavy Rain and Drought-Flood Disasters in Plateau and Basin Key Laboratory of Sichuan Province (SCQXKJQN202121), and Innovative Development Program of the China Meteorological Administration (CXFZ2021Z007).
Abstract: Machine learning models were used to improve the accuracy of the China Meteorological Administration Multisource Precipitation Analysis System (CMPAS) in complex terrain areas by combining rain gauge precipitation with topographic factors such as altitude, slope, slope direction, slope variability, and surface roughness, and meteorological factors such as temperature and wind speed. The correction results demonstrate that the ensemble learning methods have a considerable corrective effect, and the three methods adopted in the study (Random Forest, AdaBoost, and Bagging) produced similar results. The mean bias between CMPAS and 85% of the automatic weather stations dropped by more than 30%. The plateau region displays the largest accuracy increase, the winter season shows the greatest error reduction, and decreasing precipitation improves the correction outcome. Additionally, the precision for heavy precipitation processes improved to some degree. For individual stations, the error fluctuation range of the revised CMPAS is significantly reduced.
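As a toy illustration of the correction idea, one can fit the gauge–CMPAS bias on terrain and meteorological features and subtract the fitted bias. Ordinary least squares stands in here for the Random Forest/AdaBoost/Bagging regressors of the study, and the station data are entirely synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Hypothetical station features: altitude (km) and wind speed (m/s).
altitude = rng.uniform(0, 4, n)
wind = rng.uniform(0, 10, n)

# Synthetic "gauge truth" and a CMPAS-like estimate whose bias grows with altitude.
gauge = 5 + rng.normal(0, 0.5, n)
cmpas = gauge + 0.8 * altitude + 0.1 * wind + rng.normal(0, 0.2, n)

# Fit a linear bias model: bias ~ b0 + b1*altitude + b2*wind.
X = np.column_stack([np.ones(n), altitude, wind])
coef, *_ = np.linalg.lstsq(X, cmpas - gauge, rcond=None)

corrected = cmpas - X @ coef
print(np.mean(np.abs(cmpas - gauge)), np.mean(np.abs(corrected - gauge)))
```

The second mean absolute error is far smaller than the first because the systematic, terrain-dependent part of the bias has been modeled and removed, which is the mechanism behind the reported 30% bias reduction.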
Funding: Supported by the Self-directed Research Program of Tsinghua University (2011Z01033).
Abstract: An encryption scheme based on the learning with errors problem that allows secure searching over the ciphertext is proposed. Both the generation of the ciphertext and the trapdoor of the query are based on the learning with errors problem. By performing an operation over the trapdoor and the ciphertext, one can tell whether the ciphertext is the encryption of a given plaintext. The secure searchable encryption scheme is both ciphertext and trapdoor indistinguishable. The probabilities of missed and false matches in searching are both exponentially small.
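The near/far test underlying LWE-style matching can be sketched as a toy. This is a plain decryption-style check, not the paper's actual searchable construction: the "trapdoor" here is simply the secret key, and the parameters are far too small for any security.

```python
import numpy as np

rng = np.random.default_rng(7)
q, n_dim, m_dim = 257, 8, 16          # toy parameters, not secure

s = rng.integers(0, q, n_dim)         # secret key
A = rng.integers(0, q, (m_dim, n_dim))  # public random matrix
e = rng.integers(-2, 3, m_dim)        # small LWE error

def encrypt_bit(bit):
    """Encode a bit in the high-order position: b = A*s + e + bit*(q//2) mod q."""
    return (A @ s + e + bit * (q // 2)) % q

def matches(b, bit):
    """Strip A*s and the candidate bit; match iff the residue is small."""
    r = (b - A @ s - bit * (q // 2)) % q
    centered = np.where(r > q // 2, r - q, r)  # map to (-q/2, q/2]
    return bool(np.all(np.abs(centered) <= 10))

c = encrypt_bit(1)
print(matches(c, 1), matches(c, 0))   # True False
```

A correct guess leaves only the small error e, while a wrong guess leaves a residue near q/2, so the threshold test distinguishes them; the exponentially small failure probabilities in the abstract come from bounding the chance that accumulated error crosses such a threshold.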
Funding: Supported in part by the Education Department of Sichuan Province (Grant No. [2022]114).
Abstract: Automatically correcting students' code errors using deep learning is an effective way to reduce the burden on teachers and to enhance students' learning. However, code errors vary greatly, and the adaptability of fixing techniques may vary for different types of code errors. How to choose appropriate methods to fix different types of errors is still an unsolved problem. To this end, this paper first classifies the code errors of novice Java programmers based on Delphi analysis, and compares the effectiveness of different deep learning models (CuBERT, GraphCodeBERT and GGNN) at fixing different types of errors. The results indicate that the three models differ significantly in their accuracy on different types of error code, while the error correction model based on the BERT architecture shows better code correction potential for beginners' code.
Funding: This research was financially supported by the Ministry of Small and Medium-sized Enterprises (SMEs) and Startups (MSS), Korea, under the "Regional Specialized Industry Development Program (R&D, S2855401)" supervised by the Korea Institute for Advancement of Technology (KIAT).
Abstract: Despite the advancements of the last decades in the field of smart grids, energy consumption forecasting utilizing meteorological features is still challenging. This paper proposes a genetic algorithm-based adaptive error curve learning ensemble (GA-ECLE) model. The proposed technique copes with stochastic variations to improve energy consumption forecasting using a machine learning-based ensemble approach. A modified ensemble model that utilizes model error as a feature is used to improve forecast accuracy. This approach combines three models, namely CatBoost (CB), Gradient Boost (GB), and Multilayer Perceptron (MLP). The inner mechanism of the ensembled CB-GB-MLP model consists of generating meta-data from the Gradient Boost and CatBoost models to compute the final predictions using the Multilayer Perceptron network. A genetic algorithm is used to obtain the optimal features for the model. To prove the proposed model's effectiveness, we use a four-phase technique on Jeju Island's real energy consumption data. In the first phase, we obtain results by applying the CB-GB-MLP model. In the second phase, we utilize a GA-ensembled model with optimal features. The third phase compares the energy forecasting results with the proposed ECL-based model. In the fourth and final phase, we apply the GA-ECLE model. We obtained a mean absolute error of 3.05 and a root mean square error of 5.05. Extensive experimental results are provided, demonstrating the superiority of the proposed GA-ECLE model over traditional ensemble models.
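The meta-data stacking step can be sketched in miniature, with two simple least-squares stand-ins for the CatBoost/Gradient Boost base learners and a linear combiner standing in for the MLP meta-learner; the data are synthetic and the genetic-algorithm feature selection is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
x = rng.uniform(0, 10, n)
y = 2.0 * x + 5 + rng.normal(0, 1, n)        # synthetic "energy consumption"

def fit_linear(x, y):
    """Least-squares line fit; stands in for a strong base learner."""
    A = np.column_stack([x, np.ones_like(x)])
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda t: c[0] * t + c[1]

base1 = fit_linear(x, y)                      # stand-in for CatBoost
base2 = lambda t: np.full_like(t, y.mean())   # weak stand-in for Gradient Boost

# Meta-data: base predictions become features for the meta-learner
# (a least-squares combiner stands in for the MLP network).
meta_X = np.column_stack([base1(x), base2(x), np.ones(n)])
w, *_ = np.linalg.lstsq(meta_X, y, rcond=None)

ensemble = lambda t: np.column_stack([base1(t), base2(t), np.ones_like(t)]) @ w
rmse = lambda p: np.sqrt(np.mean((p - y) ** 2))
print(rmse(ensemble(x)), rmse(base1(x)), rmse(base2(x)))
```

Because the meta-learner is fit on the base predictions, on the training data it can do no worse than the best base learner; the gain in practice comes from combining learners with complementary errors.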
Funding: Funded by the Liaoning Provincial Department of Science and Technology (2023JH2/101600058).
Abstract: With the continuous advancement of China's "peak carbon dioxide emissions and carbon neutrality" process, the proportion of wind power is increasing. To address the problem that forecasting models become outdated due to the continuous updating of wind power data, a short-term wind power forecasting algorithm based on Incremental Learning-Bagging Deep Hybrid Kernel Extreme Learning Machine (IL-Bagging-DHKELM) with error affinity propagation cluster analysis is proposed. The algorithm effectively combines the deep hybrid kernel extreme learning machine (DHKELM) with incremental learning (IL). First, an initial wind power prediction model is trained using the Bagging-DHKELM model. Second, an affinity propagation (AP) clustering algorithm based on Euclidean morphological distance is used to cluster and analyze the prediction errors of the initial model. Finally, the correlation between wind power prediction errors and Numerical Weather Prediction (NWP) data is introduced as incremental updates to the initial wind power prediction model. During the incremental learning process, multiple error performance indicators are used to measure overall model performance, thereby enabling incremental updates of the wind power models. Practical examples show that the proposed method reduces the root mean square error of the initial model by 1.9 percentage points, indicating that it is better adapted to the continuing increase in wind power penetration. The accuracy and precision of wind power generation prediction are effectively improved.
Abstract: An industrial robot system is a dynamic system with strong nonlinear coupling and high position precision. Many control approaches, such as nonlinear feedback decomposition motion control and adaptive control, have been used for this kind of system, but each has deficiencies: some need accurate models and some need complicated operations. In recent years, driven by the need to control industrial robots so that they completely track an ideal input for a controlled subject with repetitive character, a new research area, iterative learning control (ILC), has developed within control technology and theory. The iterative learning control method can make the controlled subject operate as desired over a definite time span, merely making use of the prior control experience of the system and searching for the desired control signal according to the practical and desired output signals. The searching process amounts to learning, during which only the output signal needs to be measured to amend the control signal, unlike adaptive control strategies, which assess the complex parameters of the system online. Moreover, since iterative learning control relies little on prior knowledge of the subject, it has been applied in many areas, especially dynamic systems with strong nonlinear coupling and high repetitive position precision, and control systems with batch production. Since robot manipulators have these characteristics, ILC is well suited to them. In ILC, since operation always begins from a certain initial state, an initial condition is required in almost all convergence proofs. Therefore, in designing the controller, the initial state has to be restricted by some condition to guarantee convergence of the algorithm. Settling the initial condition problem has long been pursued in ILC.
There are commonly two kinds of initial condition problems: the zero initial error problem and the non-zero initial error problem. In practice, repetitive operation invariably produces excursion of the iterative initial state from the desired initial state; as a result, research on the second problem has more practical meaning. In this paper, for the non-zero initial error problem, a novel robust ILC algorithm, combining a PD-type iterative learning control algorithm with a robust feedback control algorithm, is presented. This novel robust ILC algorithm contains two parts: a feedforward ILC algorithm and a robust feedback algorithm, which together restrain disturbances from parameter variation, mechanical nonlinearities, and unmodeled dynamics while achieving good performance. The feedforward ILC algorithm improves the tracking error and system performance by iteratively learning from previous operations, thus performing the tracking task quickly. The robust feedback algorithm mainly keeps the real output of the system from deviating too far from the desired tracking trajectory, and guarantees the system's robustness in the presence of exterior noise and variations of the system parameters. To analyze the convergence of the algorithm, Lyapunov stability theory is applied through proper selection of the Lyapunov function; the result of the verification shows the feasibility of the novel robust iterative learning control in theory. Finally, for a two-degree-of-freedom robot, simulations are performed with MATLAB, and two groups of parameters are selected to validate the robustness of the algorithm.
Abstract: With the widespread use of Chinese globally, the number of Chinese learners has been increasing, leading to various grammatical errors among beginners. Additionally, as domestic efforts toward industrial informatization grow, electronic documents have proliferated. Among numerous electronic documents, manually written texts often contain hidden grammatical errors, posing a significant challenge to traditional manual proofreading. Correcting these grammatical errors is crucial to ensure fluency and readability; certain special types of grammatical or logical errors can have a large impact, and manually proofreading a large number of texts individually is clearly impractical. Consequently, research on text error correction techniques has garnered significant attention in recent years. The advent and advancement of deep learning have paved the way for sequence-to-sequence learning methods to be extensively applied to the task of text error correction. This paper presents a comprehensive analysis of Chinese text grammar error correction technology, elaborates on its current research status, discusses existing problems, proposes preliminary solutions, and conducts experiments using judicial documents as an example. The aim is to provide a feasible research approach for Chinese text error correction technology.