Wireless network security management is difficult because of the ever-increasing number of wireless network malfunctions, vulnerabilities, and attacks. Complex security systems, such as Intrusion Detection Systems (IDS), are essential due to the limitations of simpler security measures, such as cryptography and firewalls. Due to their compact nature and low energy reserves, wireless networks present a significant challenge for security procedures. The features of small cells can expose the network to threats, and Network Coding (NC) enabled small cells are vulnerable to various types of attacks. Avoiding attacks while performing secure peer-to-peer data transmission is a challenging task in small cells. Due to its low power and memory requirements, the proposed model is well suited to constrained small cells. An attacker cannot change the contents of data and generate a new Hashed Homomorphic Message Authentication Code (HHMAC) hash between transmissions, since the HMAC function is generated using the shared secret. In this research, a secure peer-to-peer data transmission model based on low-overhead chaotic sequence mapping with a 1D Improved Logistic Map and a lightweight HHMAC (1D-LM-P2P-LHHMAC) is proposed, together with accurate intrusion detection. The proposed model is evaluated against traditional models using various metrics, including vector set generation accuracy, key pair generation time, chaotic map accuracy, and intrusion detection accuracy. The results show that the proposed model achieves a chaotic map accuracy of 98% and an intrusion detection accuracy of 98.2%, and that its secure data transmission levels are high compared with traditional models.
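As a minimal sketch of the keyed-hash idea behind such a scheme (the paper's 1D Improved Logistic Map and HHMAC construction differ in detail; the map parameter and key-derivation step below are assumptions), two peers sharing an initial condition x0 can derive matching MAC keys from the chaotic orbit:

```python
import hashlib
import hmac

def logistic_map_sequence(x0: float, r: float, n: int) -> list:
    """Iterate the 1D logistic map x_{k+1} = r * x_k * (1 - x_k)."""
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        seq.append(x)
    return seq

def derive_key(x0: float, r: float = 3.99, length: int = 32) -> bytes:
    """Quantize the chaotic orbit into key bytes (one byte per iterate)."""
    return bytes(int(x * 256) % 256 for x in logistic_map_sequence(x0, r, length))

def tag(message: bytes, shared_secret_x0: float) -> bytes:
    """HMAC-SHA256 keyed by the chaotic sequence; an illustrative stand-in
    for the paper's HHMAC construction."""
    return hmac.new(derive_key(shared_secret_x0), message, hashlib.sha256).digest()

# Both peers share x0; any tampering with the payload invalidates the tag.
t = tag(b"sensor reading 42", shared_secret_x0=0.61803)
assert hmac.compare_digest(t, tag(b"sensor reading 42", 0.61803))
```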
Coding sequences (CDS) are commonly used for transient gene expression, in yeast two-hybrid screening, to verify protein interactions, and in prokaryotic gene expression studies. CDS are most commonly obtained using complementary DNA (cDNA) derived by reverse transcription from messenger RNA (mRNA) extracted from plant tissues. However, some CDS are difficult to acquire through this process because they are expressed at extremely low levels or have specific spatial and/or temporal expression patterns in vivo. These challenges call for the development of alternative CDS cloning technologies. In this study, we found that genomic intron-containing gene coding sequences (gDNA) from Arabidopsis thaliana, Oryza sativa, Brassica napus, and Glycine max can be correctly transcribed and spliced into mRNA in Nicotiana benthamiana. In contrast, gDNAs from Triticum aestivum and Sorghum bicolor did not function correctly. In transient expression experiments, the target DNA sequence is driven by a constitutive promoter. In principle, a sufficient amount of mRNA can therefore be extracted from N. benthamiana leaves, facilitating the cloning of target-gene CDS. Our data demonstrate that N. benthamiana can be used as an effective host for cloning the CDS of plant genes.
Lung cancer is among the most frequent cancers in the world, with over one million deaths per year. Classification is required for lung cancer diagnosis and therapy to be effective, accurate, and reliable. Gene expression microarrays have made it possible to find genetic biomarkers for cancer diagnosis and prediction in a high-throughput manner. Machine Learning (ML) has been widely used to diagnose and classify lung cancer, with the performance of ML methods evaluated to identify the most appropriate technique. Identifying and selecting informative gene expression patterns can aid lung cancer diagnosis and classification. Microarrays typically include a large number of genes, which may cause confusion or false predictions. Therefore, the Arithmetic Optimization Algorithm (AOA) is used to identify the optimal gene subset and reduce the number of selected genes, which allows the classifiers to yield the best performance for lung cancer classification. In addition, we propose a modified version of AOA that works effectively on high-dimensional datasets. In the modified AOA, the features are ranked by their weights, and the ranking is used to initialize the AOA population. The exploitation process of AOA is then enhanced by a local search algorithm based on two neighborhood strategies. Finally, the efficiency of the proposed methods was evaluated on gene expression datasets related to lung cancer using stratified 4-fold cross-validation. The method's efficacy in selecting the optimal gene subset is underscored by its ability to keep the proportion of selected features between 10% and 25%. Moreover, the approach significantly enhances lung cancer prediction accuracy: Lung_Harvard1 achieved an accuracy of 97.5%, Lung_Harvard2 and Lung_Michigan both achieved 100%, Lung_Adenocarcinoma obtained 88.2%, and Lung_Ontario achieved 87.5%. In conclusion, the results indicate the promise of the proposed modified AOA approach for classifying microarray cancer data.
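A minimal sketch of the weighted-ranking initialization step (the rank-to-probability mapping below is an assumption; the paper's weighting scheme may differ):

```python
import numpy as np

def ranked_init(weights, pop_size, rng=None):
    """Build a binary AOA population biased toward high-weight genes,
    mirroring the idea of seeding the search with ranked features."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(weights)
    order = np.argsort(weights)[::-1]           # best genes first
    probs = np.empty(n)
    probs[order] = np.linspace(0.9, 0.1, n)     # rank -> selection probability
    pop = (rng.random((pop_size, n)) < probs).astype(int)
    pop[pop.sum(axis=1) == 0, order[0]] = 1     # keep at least one gene
    return pop

pop = ranked_init(weights=np.random.default_rng(1).random(200), pop_size=30)
print(pop.shape, pop.sum(axis=1)[:5])           # selected-gene counts
```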
In some schemes, quantum blind signatures require the use of difficult-to-prepare multiparticle entangled states. Considering the communication overhead, quantum operation complexity, verification efficiency, and other practical factors, this article proposes a non-entangled quantum blind signature scheme based on dense encoding. The information owner uses dense encoding and hash functions to blind the information while reducing the use of quantum resources. After receiving the particles, the signer encrypts the message using a one-way function and performs a Hadamard gate operation on the selected single photon to generate the signature. The verifier then performs the inverse Hadamard gate operation on the signature and combines it with the encoding rules to restore the message and complete the verification. Compared with some typical quantum blind signature protocols, this protocol offers strong blindness for privacy protection and higher flexibility in scalability and application. The signer can adjust the signature operation according to the actual situation, which greatly simplifies the signature process. By simultaneously exploiting the secondary distribution and rearrangement of non-entangled quantum states, a non-entangled quantum state representation of three bits of classical information is achieved, reducing the use of quantum resources and lowering implementation costs. This improves both signature verification efficiency and communication efficiency while meeting the requirements of unforgeability, non-repudiation, and prevention of information leakage.
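To illustrate just the Hadamard step (the full protocol's encoding and encryption are not shown), note that H is its own inverse, so the verifier can undo the signer's gate; a minimal sketch:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate; H is its own inverse

ket0 = np.array([1.0, 0.0])
signed = H @ ket0              # signer applies H to the selected photon
recovered = H @ signed         # verifier applies the inverse (H again)
assert np.allclose(recovered, ket0)
```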
During faults in a distribution network, the output power of a distributed generation (DG) unit may be uncertain. Moreover, the output currents of distributed power sources are affected by the output power, resulting in uncertainties in the calculation of the short-circuit current at the time of a fault. These uncertainties grow as more distributed power sources are connected. Thus, it is very important to develop a method for calculating the short-circuit current that accounts for the uncertainties in a distribution network. In this study, an affine arithmetic algorithm for calculating short-circuit current intervals in distribution networks with distributed power sources, considering power fluctuations, is presented. The proposed algorithm includes two stages. In the first stage, normal operation is considered to establish a conservative interval affine optimization model of the injection currents of distributed power sources. Constrained by the fluctuation range of DG power at the moment of fault occurrence, the model is used to solve for the fluctuation range of the injected current amplitudes. The second stage is applied after a fault occurs. In this stage, an affine optimization model is established that characterizes the short-circuit current interval of a transmission line, constrained by the fluctuation range of the DG injected current amplitude during normal operation. Finally, the range of the short-circuit current amplitudes of distribution network lines after a short-circuit fault is predicted. The proposed algorithm obtains an interval guaranteed to contain the accurate results through interval operations. Compared with traditional point-value calculation methods, interval calculation methods provide more reliable analysis and calculation results. The range of short-circuit current amplitudes obtained by this algorithm is slightly larger than those obtained using the Monte Carlo and Latin hypercube sampling algorithms. The proposed algorithm has good suitability, requires no iterative calculations, and therefore achieves a significant improvement in computational speed over the Monte Carlo and Latin hypercube sampling algorithms, providing more reliable results and improving the safety and stability of power systems.
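A toy affine-arithmetic form shows why correlated uncertainties yield tighter intervals than plain interval arithmetic (the class and its two-operation subset are illustrative, not the paper's implementation):

```python
class Affine:
    """Toy affine form x0 + sum_i xi*eps_i with eps_i in [-1, 1]; affine
    arithmetic keeps correlations that plain intervals lose."""
    def __init__(self, center, terms=None):
        self.c, self.t = center, dict(terms or {})
    def __add__(self, o):
        t = dict(self.t)
        for k, v in o.t.items():
            t[k] = t.get(k, 0.0) + v
        return Affine(self.c + o.c, t)
    def __sub__(self, o):
        t = dict(self.t)
        for k, v in o.t.items():
            t[k] = t.get(k, 0.0) - v
        return Affine(self.c - o.c, t)
    def interval(self):
        r = sum(abs(v) for v in self.t.values())
        return (self.c - r, self.c + r)

# Two DG injection currents driven by the same fluctuation source eps_1:
i1 = Affine(10.0, {1: 2.0})
i2 = Affine(10.0, {1: 2.0})
print((i1 - i2).interval())   # (0.0, 0.0): the shared correlation cancels exactly
```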
Quantum error correction, a technique that relies on the principle of redundancy to encode logical information into additional qubits to better protect the system from noise, is necessary to design a viable quantum computer. The XYZ^(2) code, a new topological stabilizer code defined on a cellular lattice, is implemented on a hexagonal lattice of qubits and encodes logical qubits with the help of stabilizer measurements of weight six and weight two. However, topological stabilizer codes on cellular lattices suffer from the detrimental effects of noise due to interaction with the environment. Several decoding approaches have been proposed to address this problem. Here, we propose a state-attention based reinforcement learning decoder for XYZ^(2) codes, which enables the decoder to focus more accurately on the information related to the current decoding position. The error correction accuracy of our reinforcement learning decoder under optimized conditions reaches 83.27% under the depolarizing noise model, and we measured thresholds of 0.18856 and 0.19043 for XYZ^(2) codes at code distances of 3–7 and 7–11, respectively. Our study provides directions and ideas for applying decoding schemes that combine reinforcement learning attention mechanisms to other topological quantum error-correcting codes.
Quantum error correction is a crucial technology for realizing quantum computers. These computers achieve fault-tolerant quantum computing by detecting and correcting errors using decoding algorithms. Quantum error correction using neural network-based machine learning methods is a promising approach that adapts to physical systems without the need to build noise models. In this paper, we use a distributed decoding strategy, which effectively alleviates the exponential growth of the training set required by neural networks as the code distance of quantum error-correcting codes increases. Our decoding algorithm is based on renormalization group decoding and a recurrent neural network decoder. The recurrent neural network is trained through the ResNet architecture to improve its decoding accuracy. We then test the decoding performance of our distributed strategy decoder, the recurrent neural network decoder, and the classic minimum weight perfect matching (MWPM) decoder for rotated surface codes with different code distances under the circuit noise model; the thresholds of these three decoders are about 0.0052, 0.0051, and 0.0049, respectively. Our results demonstrate that the distributed strategy decoder outperforms the other two decoders, achieving approximately a 5% improvement in decoding efficiency compared to the MWPM decoder and approximately a 2% improvement compared to the recurrent neural network decoder.
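A minimal recurrent-decoder skeleton in PyTorch (the layer sizes and four-class logical-error head are assumptions; the paper's renormalization-group splitting and ResNet training are not shown):

```python
import torch
import torch.nn as nn

class SyndromeRNNDecoder(nn.Module):
    """Reads a measured syndrome as a bit sequence and predicts a logical
    error class (I, X, Y, Z); a sketch of the recurrent-decoder idea."""
    def __init__(self, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 4)
    def forward(self, syndrome):            # syndrome: (batch, n_checks)
        x = syndrome.unsqueeze(-1).float()  # one syndrome bit per time step
        _, h = self.rnn(x)
        return self.head(h[-1])             # logits over {I, X, Y, Z}

logits = SyndromeRNNDecoder()(torch.randint(0, 2, (8, 24)))
print(logits.shape)                         # torch.Size([8, 4])
```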
This study explores the application of single photon detection (SPD) technology in underwater wireless optical communication (UWOC) and analyzes the influence of different modulation modes and error correction coding types on communication performance. The study investigates the impact of on-off keying (OOK) and 2-pulse-position modulation (2-PPM) on the bit error rate (BER) in single-channel intensity and polarization multiplexing. Furthermore, it compares the error correction performance of low-density parity check (LDPC) and Reed-Solomon (RS) codes. The effects of the unscattered photon ratio and depolarization ratio on BER are also verified. Finally, a UWOC system based on SPD is constructed, achieving 14.58 Mbps with polarization OOK multiplexing modulation and 4.37 Mbps with polarization 2-PPM multiplexing modulation using LDPC error correction.
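As a sketch of the 2-PPM idea (each bit chooses which of two time slots carries the pulse; the ideal-channel demo and photon count are illustrative):

```python
import numpy as np

def ppm2_modulate(bits):
    """2-PPM: the bit value selects which of two time slots holds the pulse."""
    slots = np.zeros((len(bits), 2), dtype=int)
    slots[np.arange(len(bits)), bits] = 1
    return slots.ravel()

def ppm2_demodulate(photon_counts):
    """Decide per symbol by comparing photon counts in the two slots."""
    pairs = np.asarray(photon_counts).reshape(-1, 2)
    return (pairs[:, 1] > pairs[:, 0]).astype(int)

bits = np.array([0, 1, 1, 0])
tx = ppm2_modulate(bits)
rx = ppm2_demodulate(tx * 5)   # ideal channel, 5 photons per pulse
assert np.array_equal(rx, bits)
```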
Belief propagation list (BPL) decoding for polar codes has attracted attention due to its inherent parallel nature. However, a large performance gap still exists relative to CRC-aided SCL (CA-SCL) decoding. In this work, an improved segmented belief propagation list decoding based on bit flipping (SBPL-BF) is proposed. On the one hand, the proposed algorithm exploits the cooperative characteristic of BPL decoding, in which the codeword is decoded in different BP decoders. Based on this characteristic, the unreliable bits to flip can be split into multiple subblocks and flipped in different decoders simultaneously. On the other hand, a more flexible and effective processing strategy for the a priori information of the unfrozen bits that do not need to be flipped is designed to improve decoding convergence. In addition, this is the first proposal in BPL decoding that jointly optimizes the bit flipping of the information bits and the code bits. In particular, for bit flipping of the code bits, an H-matrix aided bit-flipping algorithm is designed to enhance the accuracy of identifying erroneous code bits. The simulation results show that the proposed algorithm significantly improves the error-correction performance of BPL decoding for medium and long codes. It is more than 0.25 dB better than the state-of-the-art BPL decoding at a block error rate (BLER) of 10^(-5), and outperforms CA-SCL decoding in the low signal-to-noise ratio (SNR) region for (1024, 0.5) polar codes.
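A minimal sketch of splitting the least-reliable bit positions across parallel decoders (using |LLR| as the reliability metric and a round-robin split; both are assumptions about the flavor of the scheme, not the paper's exact rule):

```python
import numpy as np

def split_flip_candidates(llrs, n_flip, n_decoders):
    """Pick the n_flip least-reliable bits (smallest |LLR|) and distribute
    them round-robin across parallel BP decoders, so each decoder retries
    a different sub-block of flip candidates."""
    candidates = np.argsort(np.abs(llrs))[:n_flip]
    return [candidates[i::n_decoders] for i in range(n_decoders)]

llrs = np.array([4.2, -0.3, 1.1, -0.1, 2.5, 0.4, -3.8, 0.2])
print(split_flip_candidates(llrs, n_flip=4, n_decoders=2))
```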
High-dimensional datasets present significant challenges for classification tasks. Dimensionality reduction, a crucial aspect of data preprocessing, has gained substantial attention due to its ability to improve classification performance. However, identifying the optimal features within high-dimensional datasets remains computationally demanding, necessitating efficient algorithms. This paper introduces the Arithmetic Optimization Algorithm (AOA), a novel approach for finding the optimal feature subset. AOA is specifically modified to address feature selection problems using a transfer function. Additionally, two enhancements are incorporated into the AOA algorithm to overcome limitations such as limited precision, slow convergence, and susceptibility to local optima. The first enhancement proposes a new method for selecting the solutions to be improved during the search process, which effectively improves the original algorithm's accuracy and convergence speed. The second enhancement introduces a local search with neighborhood strategies (AOA_NBH) during the AOA exploitation phase. AOA_NBH explores the vast search space, helping the algorithm escape local optima. Our results demonstrate that incorporating neighborhood methods enhances the output and achieves significant improvement over state-of-the-art methods.
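A common way to adapt a continuous optimizer to binary feature selection is an S-shaped transfer function; a minimal sketch (the sigmoid form is a standard choice, not necessarily the paper's exact function):

```python
import numpy as np

def sigmoid_transfer(x):
    """S-shaped transfer function mapping a continuous AOA position to a
    selection probability; thresholding yields the binary feature mask."""
    return 1.0 / (1.0 + np.exp(-x))

def binarize(position, rng):
    return (rng.random(position.shape) < sigmoid_transfer(position)).astype(int)

rng = np.random.default_rng(1)
mask = binarize(rng.normal(size=20), rng)   # 1 = feature kept
print(mask)
```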
This article addresses the issues of falling into local optima and insufficient exploration capability in the Arithmetic Optimization Algorithm (AOA), proposing an improved Arithmetic Optimization Algorithm with a multi-strategy mechanism (BSFAOA). This algorithm introduces three strategies within the standard AOA framework: an adaptive balance factor SMOA based on sine functions, a search strategy combining spiral search and Brownian motion, and a hybrid perturbation strategy based on the whale fall mechanism and polynomial differential learning. The BSFAOA algorithm is analyzed in depth on the 23 well-known benchmark functions, the CEC2019 test functions, and four real optimization problems. The experimental results demonstrate that BSFAOA better balances exploration and exploitation, significantly enhancing the stability, convergence behavior, and search efficiency of the AOA algorithm.
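For comparison with the standard linear accelerated factor, one plausible sine-shaped form of the adaptive balance factor (the exact formula is an assumption; the paper only states that SMOA is sine-based):

```python
import math

def smoa(t, t_max, lo=0.2, hi=1.0):
    """Assumed sine-shaped balance factor: stays low early in the run
    (favoring exploration) and rises smoothly toward hi (exploitation)."""
    return lo + (hi - lo) * math.sin((math.pi / 2) * t / t_max)

def moa(t, t_max, lo=0.2, hi=1.0):
    """Standard linear accelerated factor of the original AOA, for contrast."""
    return lo + t * (hi - lo) / t_max

print([round(smoa(t, 100), 3) for t in (0, 25, 50, 75, 100)])
print([round(moa(t, 100), 3) for t in (0, 25, 50, 75, 100)])
```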
BACKGROUND: With the widespread application of computer network systems in the medical field, the plan-do-check-action (PDCA) cycle and the International Classification of Diseases, tenth edition (ICD-10) coding system have achieved favorable results in clinical medical record management. However, research on their combined application is relatively lacking. AIM: To study the adoption of computer networks and PDCA in ICD-10 coding. METHODS: A retrospective collection of 768 discharged medical records from the Medical Record Management Department of Meishan People's Hospital was conducted. The records were divided into a control group (n=232) and an observation group (n=536) based on whether the PDCA management mode was implemented. Coding accuracy, coding time, medical record completion rate, satisfaction, and other indicators were compared between the two groups. RESULTS: At 3, 6, 12, 18, and 24 months of the PDCA cycle management mode, coding accuracy and medical record completion rate were higher, and coding time was lower, in the observation group than in the control group (P<0.05). The satisfaction of coders (80.22% vs 53.45%) and patients (84.89% vs 51.72%) was markedly higher in the observation group than in the control group (P<0.05). CONCLUSION: The combination of computer networks and PDCA can improve the accuracy, efficiency, completion rate, and satisfaction of ICD-10 coding.
This paper proposes an adaptive hybrid forward error correction (AH-FEC) coding scheme for coping with dynamic packet loss events in video and audio transmission. Specifically, the proposed scheme consists of a hybrid Reed-Solomon and low-density parity-check (RS-LDPC) coding system combined with a Kalman filter-based adaptive algorithm. The hybrid RS-LDPC coding accommodates a wide range of code length requirements, employing RS coding for short codes and LDPC coding for medium-to-long codes. We delimit the short and medium-length codes by coding performance so that both codes remain in their optimal regions. Additionally, a Kalman filter-based adaptive algorithm has been developed to handle dynamic changes in the packet loss rate. The Kalman filter estimates the packet loss rate using observation data and system models, and a redundancy decision module is established through receiver feedback. As a result, lost packets can be perfectly recovered by the receiver from the redundant packets. Experimental results show that the proposed method enhances decoding performance significantly under the same redundancy and channel packet loss.
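A minimal one-dimensional Kalman filter tracking the loss rate from receiver feedback (the noise variances and the redundancy margin below are assumed values for illustration, not the paper's tuning):

```python
class LossRateKalman:
    """Scalar Kalman filter tracking the channel packet-loss rate;
    a sketch of the adaptive redundancy step."""
    def __init__(self, q=1e-5, r=1e-3):
        self.x, self.p, self.q, self.r = 0.05, 1.0, q, r
    def update(self, observed_loss):
        self.p += self.q                        # predict: state drifts slowly
        k = self.p / (self.p + self.r)          # Kalman gain
        self.x += k * (observed_loss - self.x)  # correct with the observation
        self.p *= (1.0 - k)
        return self.x

kf = LossRateKalman()
for obs in (0.02, 0.03, 0.10, 0.09, 0.08):      # loss reports from receiver
    est = kf.update(obs)
redundancy = min(1.0, 1.5 * est)                # e.g. 50% safety margin
print(round(est, 4), round(redundancy, 4))
```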
To improve the performance of video compression for machine vision analysis tasks, a video coding for machines (VCM) standard working group was established to promote standardization procedures. In this paper, recent advances in VCM standardization are presented, with comprehensive introductions to the use cases, requirements, evaluation frameworks, and corresponding metrics of the VCM standard. Existing methods are then presented, introducing the proposals by category along with the research progress of the latest VCM conference. Finally, we draw conclusions.
By analyzing and comparing the current application status, advantages, and disadvantages of domestic and foreign classification coding systems for artificial material and mechanical equipment, and by comparing the existing coding standards across different regions of the country, a coding data model suitable for big data research is proposed based on the current national standard for artificial material and mechanical equipment classification coding. Through forward automatic coding calculation and reverse automatic decoding, this model achieves a horizontal connection of characteristics and a vertical penetration of attribute values for construction materials and machinery. The coding scheme and calculation model can also be used to establish a database of codes and unit prices for construction materials and machinery, forming a complete big data model for construction material coding and unit prices. This provides foundational support for calculating and analyzing big data related to construction material unit prices, real-time information prices, market prices, and various comprehensive prices, thus contributing to the formation of cost-related big data.
In order to reveal the complex network characteristics and evolution principles of the China aviation network, the probability distribution and evolution trace of the arithmetic average of the edge vertices' nearest-neighbor average degree were studied based on statistical data for the China civil aviation network in 1988, 1994, 2001, 2008, and 2015. Following the theory and methods of complex networks, the network was constructed with the city where an airport is located as a node and the route between cities as an edge. From the statistical data, the arithmetic averages of the edge vertices' nearest-neighbor average degree for the China aviation network in those five years were calculated. Probability statistical analysis showed that this quantity follows a normal distribution, and that the location and scale parameters of the distribution evolve linearly over time.
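A minimal sketch of the metric as we read it, namely the mean over all edges of the endpoints' nearest-neighbor average degrees (this interpretation, and the stand-in graph, are assumptions):

```python
import networkx as nx

def edge_avg_nn_degree(g):
    """Arithmetic average, over all edges, of the two endpoints'
    nearest-neighbor average degree values."""
    knn = nx.average_neighbor_degree(g)
    return sum((knn[u] + knn[v]) / 2 for u, v in g.edges()) / g.number_of_edges()

g = nx.karate_club_graph()      # stand-in for a city/route aviation network
print(round(edge_avg_nn_degree(g), 3))
```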
In this paper, we consider the problem of equality between weighted Bajraktarević means and weighted quasi-arithmetic means. Using the method of substituting for functions, we first transform the equality problem into solving an equivalent functional equation. We then obtain necessary and sufficient conditions for the equality to hold.
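For reference, the standard forms of the two means involved (notation assumed: λ ∈ (0, 1) is the weight, f and g are continuous and strictly monotone, and p is a positive weight function):

```latex
\[
  A_{f}(x,y;\lambda) = f^{-1}\!\bigl(\lambda f(x) + (1-\lambda) f(y)\bigr),
\]
\[
  B_{g,p}(x,y;\lambda)
    = g^{-1}\!\left(
        \frac{\lambda\, p(x)\, g(x) + (1-\lambda)\, p(y)\, g(y)}
             {\lambda\, p(x) + (1-\lambda)\, p(y)}
      \right).
\]
```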
For protecting the copyright of a text and recovering its original content harmlessly, this paper proposes a novel reversible natural language watermarking method that combines arithmetic coding with synonym substitution operations. By analyzing the relative frequencies of synonymous words, the synonyms employed for carrying the payload are quantized into an unbalanced, redundant binary sequence. This quantized binary sequence is losslessly compressed by adaptive binary arithmetic coding to create spare capacity for accommodating additional data. The compressed data, appended with the watermark, are then embedded into the cover text via synonym substitutions in an invertible manner. On the receiver side, the watermark and compressed data can be extracted by decoding the values of the synonyms in the watermarked text, after which the original text can be perfectly recovered by decompressing the extracted data and substituting the replaced synonyms with their originals. Experimental results demonstrate that the proposed method extracts the watermark successfully and achieves lossless recovery of the original text, while also achieving a high embedding capacity.
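A toy version of the synonym-quantization step (the two-word groups and their frequency ordering are invented examples; the paper derives them from corpus statistics, then compresses the resulting bit sequence with adaptive arithmetic coding):

```python
def quantize_synonyms(tokens, syn_groups):
    """Map each synonym occurrence to a bit by its rank within its group
    (0 = the more frequent variant, 1 = the less frequent), yielding the
    unbalanced binary sequence to be compressed."""
    bits, positions = [], []
    for i, tok in enumerate(tokens):
        for group in syn_groups:
            if tok in group:
                bits.append(group.index(tok))   # 0 or 1 for two-word groups
                positions.append(i)
    return bits, positions

groups = [["big", "sizable"], ["fast", "speedy"]]
print(quantize_synonyms("a big and speedy network".split(), groups))
# -> ([0, 1], [1, 3])
```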
An approximately optimal adaptive arithmetic coding (AC) system using a forbidden symbol (FS) over noisy channels was proposed, which allows one to jointly and adaptively design source decoding and channel error correction in a single process, with superior performance compared with traditional separated techniques. The concept of adaptiveness is applied not only to the source model but also to the amount of coding redundancy. In addition, an improved branch metric computation algorithm and a faster sequential search algorithm were proposed, compared with the system proposed by Grangetto. The proposed system was tested on image transmission over the AWGN channel and compared with a traditional separated system in terms of packet error rate and complexity. Both hard and soft decoding were taken into account.
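The forbidden symbol works by reserving a slice of the coding interval that a valid encoder never enters, so channel errors that push the decoder into that slice are detected. A float-precision sketch of the encoder side (ε and p0 are illustrative; practical coders use integer renormalization and a matching detector at the decoder):

```python
def encode_with_forbidden_gap(bits, p0=0.6, eps=0.05):
    """Binary arithmetic encoder reserving probability mass eps for a
    forbidden symbol that the source never emits; eps controls the amount
    of coding redundancy available for error detection."""
    lo, hi = 0.0, 1.0
    scale = 1.0 - eps                 # usable probability mass per step
    for b in bits:
        width = hi - lo
        split = lo + width * p0 * scale
        if b == 0:
            hi = split                # symbol 0 takes the bottom p0 share
        else:
            lo, hi = split, lo + width * scale  # top eps-slice stays forbidden
    return (lo + hi) / 2              # any value in [lo, hi) identifies the message

print(encode_with_forbidden_gap([0, 1, 1, 0, 1]))
```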