Wireless network security management is difficult because of the ever-increasing number of wireless network malfunctions, vulnerabilities, and attacks. Complex security systems, such as Intrusion Detection Systems (IDS), are essential due to the limitations of simpler security measures, such as cryptography and firewalls. Because of their compact nodes and low energy reserves, wireless networks present a significant challenge for security procedures. The features of small cells can expose the network to threats, and Network Coding (NC)-enabled small cells are vulnerable to various types of attacks. Avoiding attacks while performing secure peer-to-peer data transmission is therefore a challenging task in small cells. Because of its low power and memory requirements, the proposed model is well suited to constrained small cells. An attacker cannot change the contents of the data and generate a new Hashed Homomorphic Message Authentication Code (HHMAC) hash between transmissions, since the HMAC function is generated from the shared secret. In this research, a secure peer-to-peer data transmission model based on a low-overhead chaotic 1D Improved Logistic Map with a lightweight homomorphic HMAC (1D-LM-P2P-LHHMAC) is proposed, together with accurate intrusion detection. The proposed model is evaluated against traditional models on metrics such as vector set generation accuracy, key pair generation time, chaotic map accuracy, and intrusion detection accuracy; the results show that the proposed model reaches 98% chaotic map accuracy and 98.2% intrusion detection accuracy, and that its secure data transmission levels are high.
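As a rough illustration of the idea (not the paper's exact 1D-LM-P2P-LHHMAC construction), the sketch below seeds a standard HMAC with key material derived from the classic 1D logistic map; the parameter values and the key-derivation step are hypothetical:

```python
import hmac
import hashlib

def logistic_sequence(x0, r=3.99, n=16):
    """Iterate the classic 1D logistic map x_{k+1} = r*x_k*(1-x_k)."""
    seq = []
    x = x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        seq.append(x)
    return seq

def chaotic_keyed_mac(shared_secret: bytes, message: bytes, x0: float) -> str:
    # Derive extra key material from the chaotic sequence (illustrative only;
    # the paper's improved map and HHMAC construction differ).
    chaos = logistic_sequence(x0)
    chaos_bytes = b"".join(int(v * 255).to_bytes(1, "big") for v in chaos)
    return hmac.new(shared_secret + chaos_bytes, message, hashlib.sha256).hexdigest()

# Both peers derive the same tag from the shared secret and seed;
# an attacker without them cannot forge a matching tag.
tag_a = chaotic_keyed_mac(b"secret", b"payload", 0.42)
tag_b = chaotic_keyed_mac(b"secret", b"payload", 0.42)
assert tag_a == tag_b
assert chaotic_keyed_mac(b"other", b"payload", 0.42) != tag_a
```

The chaotic seed plays the role of cheap, synchronized key-stretching material, which is why such schemes suit low-power nodes.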
Coding sequences (CDS) are commonly used for transient gene expression, in yeast two-hybrid screening, to verify protein interactions, and in prokaryotic gene expression studies. CDS are most commonly obtained from complementary DNA (cDNA) generated by reverse transcription of messenger RNA (mRNA) extracted from plant tissues. However, some CDS are difficult to acquire through this process because they are expressed at extremely low levels or have specific spatial and/or temporal expression patterns in vivo. These challenges call for alternative CDS cloning technologies. In this study, we found that genomic intron-containing gene coding sequences (gDNA) from Arabidopsis thaliana, Oryza sativa, Brassica napus, and Glycine max can be correctly transcribed and spliced into mRNA in Nicotiana benthamiana, whereas gDNAs from Triticum aestivum and Sorghum bicolor did not function correctly. In transient expression experiments, the target DNA sequence is driven by a constitutive promoter, so in principle a sufficient amount of mRNA can be extracted from N. benthamiana leaves for cloning the CDS of target genes. Our data demonstrate that N. benthamiana can be used as an effective host for cloning plant gene CDS.
In some schemes, quantum blind signatures require difficult-to-prepare multiparticle entangled states. Considering the communication overhead, quantum operation complexity, verification efficiency, and other practical factors, this article proposes a non-entangled quantum blind signature scheme based on dense encoding. The information owner uses dense encoding and hash functions to blind the information while reducing the use of quantum resources. After receiving the particles, the signer encrypts the message using a one-way function and performs a Hadamard gate operation on the selected single photon to generate the signature. The verifier then applies the inverse Hadamard gate operation to the signature and combines it with the encoding rules to restore the message and complete the verification. Compared with some typical quantum blind signature protocols, this protocol offers strong blindness for privacy protection and higher flexibility in scalability and application. The signer can adjust the signature operation according to the actual situation, which greatly simplifies the signature. By simultaneously exploiting the secondary distribution and rearrangement of non-entangled quantum states, three bits of classical information are represented with non-entangled quantum states, reducing the consumption of quantum resources and lowering implementation costs. This improves both signature verification efficiency and communication efficiency while meeting the requirements of unforgeability, non-repudiation, and prevention of information leakage.
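The verification step relies on the Hadamard gate being its own inverse: applying H twice restores the original amplitudes. A minimal numerical sketch of that property (the protocol's dense encoding and hashing are not reproduced here):

```python
import math

# Hadamard gate as a 2x2 real matrix: H = (1/sqrt(2)) * [[1, 1], [1, -1]]
H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def apply(gate, state):
    """Apply a 2x2 gate to a single-qubit amplitude vector."""
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

# Signer applies H to a single-photon state; the verifier applies H again
# (H is its own inverse), recovering the original amplitudes.
state = [0.6, 0.8]            # arbitrary normalized qubit amplitudes
signed = apply(H, state)
recovered = apply(H, signed)
assert all(abs(a - b) < 1e-12 for a, b in zip(recovered, state))
```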
Quantum error correction, a technique that relies on redundancy to encode logical information into additional qubits so as to better protect the system from noise, is necessary to design a viable quantum computer. The XYZ^(2) code is a new topological stabilizer code defined on a hexagonal lattice of qubits; it encodes logical qubits with the help of stabilizer measurements of weight six and weight two. However, topological stabilizer codes on such lattices suffer from the detrimental effects of noise due to interaction with the environment, and several decoding approaches have been proposed to address this problem. Here, we propose a state-attention based reinforcement learning decoder for XYZ^(2) codes, which enables the decoder to focus more accurately on the information relevant to the current decoding position. Under optimized conditions, the error correction accuracy of our reinforcement learning decoder reaches 83.27% under the depolarizing noise model, and we measure thresholds of 0.18856 and 0.19043 for XYZ^(2) codes at code distances of 3-7 and 7-11, respectively. Our study provides directions and ideas for applying decoding schemes that combine reinforcement learning with attention mechanisms to other topological quantum error-correcting codes.
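A state-attention mechanism ultimately reduces to softmax weights over candidate positions. A minimal, hypothetical sketch of that scoring step (not the paper's trained decoder; the scores are invented):

```python
import math

def attention_weights(scores):
    """Numerically stable softmax over per-position relevance scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical relevance scores for syndrome positions around the
# current decoding site; a higher score draws more attention.
scores = [0.1, 2.0, 0.3, -1.0]
w = attention_weights(scores)
assert abs(sum(w) - 1.0) < 1e-12
assert max(w) == w[1]   # the decoder focuses on the most relevant position
```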
Quantum error correction is a crucial technology for realizing quantum computers. These computers achieve fault-tolerant quantum computing by detecting and correcting errors using decoding algorithms. Quantum error correction using neural network-based machine learning methods is a promising approach, since it adapts to physical systems without the need to build noise models. In this paper, we use a distributed decoding strategy, which effectively alleviates the exponential growth of the training set required by neural networks as the code distance of quantum error-correcting codes increases. Our decoding algorithm is based on renormalization group decoding and a recurrent neural network decoder. The recurrent neural network is trained through the ResNet architecture to improve its decoding accuracy. We then test the decoding performance of our distributed strategy decoder, the recurrent neural network decoder, and the classic minimum weight perfect matching (MWPM) decoder for rotated surface codes with different code distances under the circuit noise model; the thresholds of these three decoders are about 0.0052, 0.0051, and 0.0049, respectively. Our results demonstrate that the distributed strategy decoder outperforms the other two, achieving approximately a 5% improvement in decoding efficiency over the MWPM decoder and approximately a 2% improvement over the recurrent neural network decoder.
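The redundancy principle all such codes build on can be illustrated classically: a repetition code survives a minority of bit flips by majority vote. A toy sketch of that idea only (not a surface-code decoder):

```python
from collections import Counter

def encode(bit, n=5):
    """Repetition code: copy one logical bit into n physical bits."""
    return [bit] * n

def majority_decode(received):
    """Recover the logical bit by majority vote."""
    return Counter(received).most_common(1)[0][0]

# Redundancy lets the decoder survive a minority of bit flips -- the
# classical analogue of what quantum decoders exploit at larger distances.
word = encode(1)
word[0] ^= 1          # one physical error
word[3] ^= 1          # a second error, still a minority of the five bits
assert majority_decode(word) == 1
```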
This study explores the application of single photon detection (SPD) technology in underwater wireless optical communication (UWOC) and analyzes the influence of different modulation modes and error correction coding types on communication performance. The study investigates the impact of on-off keying (OOK) and 2-pulse-position modulation (2-PPM) on the bit error rate (BER) in single-channel intensity and polarization multiplexing, and compares the error correction performance of low-density parity-check (LDPC) and Reed-Solomon (RS) codes. The effects of the unscattered photon ratio and the depolarization ratio on BER are also verified. Finally, a UWOC system based on SPD is constructed, achieving 14.58 Mbps with polarization-multiplexed OOK modulation and 4.37 Mbps with polarization-multiplexed 2-PPM modulation using LDPC error correction.
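BER figures of this kind are typically obtained by Monte-Carlo simulation. A minimal sketch in which a hypothetical symmetric flip probability stands in for the real SPD channel (photon loss, dark counts, scattering are all folded into one number):

```python
import random

def simulate_ook_ber(n_bits=20000, p_flip=0.02, seed=7):
    """Monte-Carlo BER for on-off keying over a symmetric flip channel.

    p_flip is a hypothetical per-bit error probability, not a measured
    property of any actual underwater link."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(n_bits):
        tx = rng.randint(0, 1)                      # transmitted OOK symbol
        rx = tx ^ (1 if rng.random() < p_flip else 0)  # channel flip
        errors += tx != rx
    return errors / n_bits

ber = simulate_ook_ber()
assert 0.01 < ber < 0.03   # close to the injected flip probability
```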
Belief propagation list (BPL) decoding for polar codes has attracted attention due to its inherently parallel nature; however, a large performance gap still exists with CRC-aided SCL (CA-SCL) decoding. In this work, an improved segmented belief propagation list decoding based on bit flipping (SBPL-BF) is proposed. On the one hand, the proposed algorithm exploits the cooperative characteristic of BPL decoding, in which the codeword is decoded in different BP decoders. Based on this characteristic, the unreliable bits for flipping can be split into multiple subblocks and flipped in different decoders simultaneously. On the other hand, a more flexible and effective strategy for processing the a priori information of the unfrozen bits that do not need to be flipped is designed to improve decoding convergence. In addition, this is the first BPL decoding proposal that jointly optimizes the bit flipping of the information bits and the code bits. In particular, for bit flipping of the code bits, an H-matrix aided bit-flipping algorithm is designed to enhance the accuracy of identifying erroneous code bits. Simulation results show that the proposed algorithm significantly improves the error-correction performance of BPL decoding for medium and long codes. It is more than 0.25 dB better than the state-of-the-art BPL decoding at a block error rate (BLER) of 10^(-5), and outperforms CA-SCL decoding in the low signal-to-noise ratio (SNR) region for (1024, 0.5) polar codes.
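The flip-candidate selection common to bit-flipping decoders can be sketched generically: the least reliable bits are those with the smallest absolute log-likelihood ratios. A toy illustration with invented soft values (not the SBPL-BF algorithm itself):

```python
def least_reliable_positions(llrs, k):
    """Indices of the k bits with smallest |LLR| -- the flip candidates."""
    return sorted(range(len(llrs)), key=lambda i: abs(llrs[i]))[:k]

def flip(bits, positions):
    """Return a copy of bits with the selected positions inverted."""
    out = list(bits)
    for i in positions:
        out[i] ^= 1
    return out

llrs = [4.2, -0.3, 1.1, -5.0, 0.2]      # hypothetical soft values
cand = least_reliable_positions(llrs, 2)
assert set(cand) == {1, 4}              # |-0.3| and |0.2| are least reliable
assert flip([0, 1, 1, 0, 1], cand) == [0, 0, 1, 0, 0]
```

In a list scheme, subsets of these candidates would be flipped in parallel decoders, which is the cooperative characteristic the abstract refers to.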
BACKGROUND: With the widespread application of computer network systems in the medical field, the plan-do-check-action (PDCA) cycle and the International Classification of Diseases, Tenth Edition (ICD-10) coding system have achieved favorable results in clinical medical record management. However, research on their combined application is relatively lacking. AIM: To study the adoption of computer networks and PDCA in ICD-10 coding. METHODS: A retrospective collection of 768 discharged medical records from the Medical Record Management Department of Meishan People's Hospital was conducted. The records were divided into a control group (n=232) and an observation group (n=536) based on whether the PDCA management mode was implemented. Coding accuracy, time spent, case completion rate, satisfaction, and other indicators were compared between the two groups. RESULTS: At 3, 6, 12, 18, and 24 months of the PDCA cycle management mode, coding accuracy and medical record completion rate were higher, and coding time was lower, in the observation group than in the controls (P<0.05). The satisfaction of coders (80.22% vs 53.45%) and patients (84.89% vs 51.72%) in the observation group was markedly higher than in the controls (P<0.05). CONCLUSION: The combination of computer networks and PDCA can improve the accuracy, efficiency, completion rate, and satisfaction of ICD-10 coding.
This paper proposes an adaptive hybrid forward error correction (AH-FEC) coding scheme for coping with dynamic packet loss in video and audio transmission. Specifically, the proposed scheme consists of a hybrid Reed-Solomon and low-density parity-check (RS-LDPC) coding system combined with a Kalman filter-based adaptive algorithm. The hybrid RS-LDPC coding accommodates a wide range of code length requirements, employing RS coding for short codes and LDPC coding for medium-to-long codes; the boundary between short and medium-length codes is set by coding performance so that both codes remain in their optimal regions. Additionally, a Kalman filter-based adaptive algorithm handles dynamic changes in the packet loss rate: the Kalman filter estimates the packet loss rate from observation data and system models, and a redundancy decision module is driven by receiver feedback. As a result, lost packets can be fully recovered by the receiver from the redundant packets. Experimental results show that the proposed method significantly enhances decoding performance under the same redundancy and channel packet loss.
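The adaptive core is a scalar Kalman filter tracking a slowly varying loss rate. A minimal sketch with hypothetical noise parameters (the paper's system model and redundancy decision module are not reproduced):

```python
import random

def kalman_loss_estimate(observations, q=1e-4, r=0.01):
    """Scalar Kalman filter tracking a slowly varying packet-loss rate.

    q: assumed process-noise variance, r: assumed observation-noise
    variance (both hypothetical). observations: per-window measured
    loss fractions."""
    x, p = observations[0], 1.0
    for z in observations[1:]:
        p += q                      # predict: uncertainty grows
        k = p / (p + r)             # Kalman gain
        x += k * (z - x)            # update with the new measurement
        p *= (1 - k)                # uncertainty shrinks after the update
    return x

rng = random.Random(0)
true_rate = 0.05
obs = [min(max(true_rate + rng.gauss(0, 0.02), 0.0), 1.0) for _ in range(200)]
est = kalman_loss_estimate(obs)
assert abs(est - true_rate) < 0.03   # noisy windows, smooth estimate
```

The smoothed estimate would then drive how many redundant packets the sender adds per block.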
To improve the performance of video compression for machine vision analysis tasks, a video coding for machines (VCM) standard working group was established to promote standardization. In this paper, recent advances in the VCM standard are presented, with comprehensive introductions to its use cases, requirements, evaluation frameworks, and corresponding metrics. The existing methods are then presented, introducing the proposals by category along with the research progress of the latest VCM meetings. Finally, conclusions are given.
By analyzing and comparing the current application status, advantages, and disadvantages of domestic and foreign classification coding systems for construction materials and machinery, and by comparing the existing coding standards across different regions of the country, a coding data model suited to big data research is proposed on the basis of the current national standard for classifying and coding construction materials and machinery. The model achieves a horizontal connection of characteristics and a vertical penetration of attribute values through forward automatic coding calculation and reverse automatic decoding. The coding scheme and calculation model can also establish a database of codes and unit prices for construction materials and machinery, forming a complete big data model of construction material unit prices. This provides foundational support for calculating and analyzing big data on construction material unit prices, real-time information prices, market prices, and various comprehensive prices, thus contributing to the formation of cost-related big data.
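Forward coding and reverse decoding of a fixed-width hierarchical code can be sketched as segment concatenation and splitting. The layout below is entirely hypothetical, not the national standard's actual segment widths:

```python
# Hypothetical fixed-width segment layout: 2-digit category,
# 3-digit subclass, 3-digit attribute value.
LAYOUT = [("category", 2), ("subclass", 3), ("attribute", 3)]

def encode_code(fields):
    """Forward coding: concatenate zero-padded segments into one code."""
    return "".join(str(fields[name]).zfill(w) for name, w in LAYOUT)

def decode_code(code):
    """Reverse decoding: split the code back into named segments."""
    out, pos = {}, 0
    for name, w in LAYOUT:
        out[name] = int(code[pos:pos + w])
        pos += w
    return out

code = encode_code({"category": 5, "subclass": 42, "attribute": 7})
assert code == "05042007"
assert decode_code(code) == {"category": 5, "subclass": 42, "attribute": 7}
```

The round trip is what makes such codes usable as database keys: any attribute level can be recovered from the flat code string.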
A new scheme combining a scalable transcoder with space-time block codes (STBC) in an orthogonal frequency division multiplexing (OFDM) system is proposed for robust video transmission over dispersive fading channels. The target application of the scalable transcoder is to provide mobile wireless terminals with access to pre-encoded high-quality MPEG-2 video. Besides outputting an MPEG-4 fine granular scalability (FGS) bitstream, the transcoder reduces both the video frame size and the bit rate. An array processing algorithm for layer interference suppression is used at the receiver, so the system structure provides different levels of protection to different layers. Furthermore, by considering the importance of each scalable bitstream, different bitstreams can be given different levels of protection by the system structure and channel coding. With the proposed system, the large diversity gain of STBC and the alleviation of frequency-selective fading by OFDM are achieved concurrently. Simulation results show that the proposed schemes integrating scalable transcoding provide a basic quality of video transmission and outperform conventional single-layer transcoding under random and bursty error channel conditions.
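STBC is commonly realized with the well-known Alamouti two-antenna scheme. A noise-free sketch of its encode/combine round trip (the channel gains and symbols are invented, and receiver noise is omitted):

```python
def alamouti_encode(s1, s2):
    """Alamouti space-time block code: rows are time slots,
    columns are transmit antennas."""
    return [[s1, s2],
            [-s2.conjugate(), s1.conjugate()]]

def alamouti_combine(r1, r2, h1, h2):
    """Linear combining at the receiver recovers both symbols with
    full diversity gain (noise-free sketch)."""
    s1 = h1.conjugate() * r1 + h2 * r2.conjugate()
    s2 = h2.conjugate() * r1 - h1 * r2.conjugate()
    g = abs(h1) ** 2 + abs(h2) ** 2
    return s1 / g, s2 / g

h1, h2 = 0.8 + 0.3j, -0.2 + 0.9j       # hypothetical flat-fading gains
s1, s2 = 1 + 1j, -1 + 1j
x = alamouti_encode(s1, s2)
r1 = h1 * x[0][0] + h2 * x[0][1]       # received in slot 1
r2 = h1 * x[1][0] + h2 * x[1][1]       # received in slot 2
d1, d2 = alamouti_combine(r1, r2, h1, h2)
assert abs(d1 - s1) < 1e-12 and abs(d2 - s2) < 1e-12
```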
The performance analysis and simulation of coding schemes based on a model of the Ka-band fixed satellite channel are presented. The results indicate that concatenated codes with large inner interleaving depth have good performance and high spectrum efficiency. The studies also show that simple block interleaving is very effective in combating the slow, frequency-nonselective fading of the Ka band.
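Block interleaving combats slow fading by spreading a burst of corrupted symbols across many codewords: write row-wise into a block, read column-wise. A minimal sketch:

```python
def interleave(symbols, rows, cols):
    """Write row-wise into a rows x cols block, read column-wise."""
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows, cols):
    # Swapping the roles of rows and cols inverts the permutation.
    return interleave(symbols, cols, rows)

data = list(range(12))
tx = interleave(data, 3, 4)
assert tx == [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]
assert deinterleave(tx, 3, 4) == data
```

After deinterleaving, a fade that wiped out several consecutive transmitted symbols appears as isolated errors in each codeword, which the inner code can correct.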
To decrease both computational complexity and coding time, an improved algorithm for the early detection of all-zero blocks (AZBs) in H.264/AVC is proposed. Previous AZB detection algorithms are reviewed, and three types of transformed frequency-domain coefficients that are quantized to zero are analyzed. Based on the three types of frequency-domain scaling factors, the corresponding spatial coefficients are derived, and the Schwarz inequality is applied to derive three thresholds based on the spatial coefficients. A fourth threshold is set on the basis of the probability distribution of zero coefficients in a block. As a result, an adaptive AZB detection algorithm is proposed based on the minimum of the first three thresholds and the zero-block distribution threshold. Simulation results show that, compared with existing AZB detection algorithms, the proposed algorithm achieves a 5% higher AZB detection ratio and 4% to 10% computation saving with only 0.1 dB video quality degradation.
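The early-exit idea reduces to comparing a cheap spatial-domain statistic against a quantization-derived threshold before running the transform. A toy sketch with invented values (a real encoder would derive the threshold from the quantization parameter and scaling factors, as the paper does):

```python
def is_all_zero_block(residuals, threshold):
    """Early AZB test: if the sum of absolute residuals is below a
    quantization-derived threshold, the transform and quantization
    stages can be skipped entirely."""
    return sum(abs(v) for v in residuals) < threshold

# Hypothetical flattened 4x4 residual block and threshold.
flat_block = [1, 0, -1, 0, 0, 1, 0, 0, -1, 0, 0, 0, 0, 1, 0, 0]
assert is_all_zero_block(flat_block, threshold=8) is True
assert is_all_zero_block([9] * 16, threshold=8) is False
```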
Test data compression and test resource partitioning (TRP) are essential to reduce the amount of test data in system-on-chip testing. A novel variable-to-variable-length compression code, advanced frequency-directed run-length (AFDR) coding, is designed. Unlike frequency-directed run-length (FDR) codes, AFDR encodes both 0-runs and 1-runs and assigns the same codewords to runs of equal length. It also modifies the codes for 00 and 11 to improve compression performance. Experimental results for the ISCAS 89 benchmark circuits show that AFDR codes achieve a higher compression ratio than FDR and other compression codes.
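The run extraction AFDR builds on can be sketched directly; unlike FDR, runs of both symbols are kept (the codeword assignment itself is omitted here):

```python
from itertools import groupby

def runs(bits):
    """Split a test vector into (symbol, run-length) pairs. AFDR-style
    schemes encode both 0-runs and 1-runs, unlike FDR's 0-runs only."""
    return [(sym, len(list(grp))) for sym, grp in groupby(bits)]

assert runs("0001100000111") == [("0", 3), ("1", 2), ("0", 5), ("1", 3)]
```

Each (symbol, length) pair would then be mapped to a variable-length codeword, with shorter codewords for the most frequent run lengths.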
A two-level Bregmanized method with graph regularized sparse coding (TBGSC) is presented for image interpolation. The outer-level Bregman iterative procedure enforces the observation data constraints, while the inner-level Bregmanized method handles dictionary updating and sparse representation of small overlapping image patches. The introduced graph regularized sparse coding constraint captures local image features effectively, and consequently enables accurate reconstruction from highly undersampled partial data. Furthermore, the modified sparse coding and simple dictionary updating applied in the inner minimization make the proposed algorithm converge within a relatively small number of iterations. Experimental results demonstrate that the proposed algorithm can effectively reconstruct images and outperforms current state-of-the-art approaches in terms of visual comparisons and quantitative measures.
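The sparse-representation step inside such Bregman/ISTA-style iterations is driven by the l1 proximal operator, i.e. soft thresholding of the coefficients. A minimal sketch of that one step (not the full TBGSC algorithm):

```python
def soft_threshold(x, t):
    """Proximal operator of the l1 norm: shrink x toward zero by t,
    zeroing everything within [-t, t]. This is the core coefficient
    update inside Bregman/ISTA-style sparse coding iterations."""
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

coeffs = [3.0, -0.2, 0.5, -1.5]
sparse = [soft_threshold(c, 0.5) for c in coeffs]
assert sparse == [2.5, 0.0, 0.0, -1.0]   # small coefficients are zeroed
```

Shrinkage is what produces the sparsity that makes reconstruction from undersampled data well-posed.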
Funding (non-entangled quantum blind signature scheme): Project supported by the National Natural Science Foundation of China (Grant No. 61762039).
Funding (XYZ^(2) code reinforcement learning decoder): supported by the Natural Science Foundation of Shandong Province, China (Grant No. ZR2021MF049) and the Joint Fund of the Natural Science Foundation of Shandong Province (Grant Nos. ZR2022LLZ012 and ZR2021LLZ001).
Funding (distributed decoding strategy): Project supported by the Natural Science Foundation of Shandong Province, China (Grant Nos. ZR2021MF049, ZR2022LLZ012, and ZR2021LLZ001).
Funding (SPD-based UWOC study): supported in part by the National Natural Science Foundation of China (Nos. 62071441 and 61701464) and in part by the Fundamental Research Funds for the Central Universities (No. 202151006).
Funding (SBPL-BF polar decoding): funded by the Key Project of the NSFC-Guangdong Province Joint Program (Grant No. U2001204), the National Natural Science Foundation of China (Grant Nos. 61873290 and 61972431), the Science and Technology Program of Guangzhou, China (Grant No. 202002030470), and the Funding Project of Featured Major of Guangzhou Xinhua University (2021TZ002).
文摘Belief propagation list(BPL) decoding for polar codes has attracted more attention due to its inherent parallel nature. However, a large gap still exists with CRC-aided SCL(CA-SCL) decoding.In this work, an improved segmented belief propagation list decoding based on bit flipping(SBPL-BF) is proposed. On the one hand, the proposed algorithm makes use of the cooperative characteristic in BPL decoding such that the codeword is decoded in different BP decoders. Based on this characteristic, the unreliable bits for flipping could be split into multiple subblocks and could be flipped in different decoders simultaneously. On the other hand, a more flexible and effective processing strategy for the priori information of the unfrozen bits that do not need to be flipped is designed to improve the decoding convergence. In addition, this is the first proposal in BPL decoding which jointly optimizes the bit flipping of the information bits and the code bits. In particular, for bit flipping of the code bits, a H-matrix aided bit-flipping algorithm is designed to enhance the accuracy in identifying erroneous code bits. The simulation results show that the proposed algorithm significantly improves the errorcorrection performance of BPL decoding for medium and long codes. It is more than 0.25 d B better than the state-of-the-art BPL decoding at a block error rate(BLER) of 10^(-5), and outperforms CA-SCL decoding in the low signal-to-noise(SNR) region for(1024, 0.5)polar codes.
Abstract: BACKGROUND With the widespread application of computer network systems in the medical field, the plan-do-check-act (PDCA) cycle and the International Classification of Diseases, Tenth Revision (ICD-10) coding system have achieved favorable results in clinical medical record management. However, research on their combined application is relatively lacking. AIM To study the adoption of computer networks and the PDCA management mode in ICD-10 coding. METHODS A retrospective collection of 768 discharged medical records from the Medical Record Management Department of Meishan People's Hospital was conducted. The records were divided into a control group (n=232) and an observation group (n=536) based on whether the PDCA management mode was implemented. Coding accuracy, coding time, medical record completion rate, satisfaction, and other indicators were compared between the two groups. RESULTS At 3, 6, 12, 18, and 24 months of the PDCA cycle management mode, coding accuracy and the medical record completion rate were higher, and coding time was lower, in the observation group than in the controls (P<0.05). The satisfaction of coders (80.22% vs 53.45%) and patients (84.89% vs 51.72%) in the observation group was markedly higher than in the controls (P<0.05). CONCLUSION The combination of computer networks and the PDCA cycle can improve the accuracy, efficiency, completion rate, and satisfaction of ICD-10 coding.
Abstract: This paper proposes an adaptive hybrid forward error correction (AH-FEC) coding scheme for coping with dynamic packet-loss events in video and audio transmission. Specifically, the proposed scheme consists of a hybrid Reed-Solomon and low-density parity-check (RS-LDPC) coding system combined with a Kalman filter-based adaptive algorithm. The hybrid RS-LDPC coding accommodates a wide range of code-length requirements, employing RS coding for short codes and LDPC coding for medium-to-long codes. We delimit the short and medium-length codes by coding performance so that both codes remain in their optimal regions. Additionally, a Kalman filter-based adaptive algorithm has been developed to handle dynamic changes in the packet loss rate. The Kalman filter estimates the packet loss rate from observation data and system models, and a redundancy-decision module is established through receiver feedback. As a result, lost packets can be perfectly recovered by the receiver based on the redundant packets. Experimental results show that the proposed method significantly enhances decoding performance under the same redundancy and channel packet loss.
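The packet-loss-rate tracking step can be sketched as a one-dimensional Kalman filter with a random-walk state model. The noise variances and initial values below are illustrative assumptions, not the paper's tuned parameters:

```python
class PacketLossKalman:
    """Minimal 1-D Kalman filter tracking a slowly varying packet loss rate.

    The state is the loss probability p; each measurement is the observed
    loss fraction reported over a feedback interval. q is the process
    noise variance (how fast p may drift), r the measurement noise
    variance (how noisy each observed fraction is)."""

    def __init__(self, p0=0.05, var0=1.0, q=1e-4, r=1e-2):
        self.p = p0      # estimated loss rate
        self.var = var0  # variance of the estimate
        self.q = q
        self.r = r

    def update(self, observed_loss_fraction):
        # Predict: random-walk model, uncertainty grows by q
        self.var += self.q
        # Correct: blend prediction with the new observation
        k = self.var / (self.var + self.r)            # Kalman gain
        self.p += k * (observed_loss_fraction - self.p)
        self.var *= (1.0 - k)
        # Clamp to a valid probability
        self.p = min(max(self.p, 0.0), 1.0)
        return self.p
```

A redundancy-decision module would then map the filtered estimate to an FEC redundancy ratio, e.g. provisioning slightly above the estimated loss rate to absorb estimation error.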
Funding: Supported by ZTE Industry-University-Institute Cooperation Funds.
Abstract: To improve the performance of video compression for machine vision analysis tasks, a video coding for machines (VCM) standard working group was established to promote standardization procedures. In this paper, recent advances in video coding for machines standards are presented, and comprehensive introductions to the use cases, requirements, evaluation frameworks, and corresponding metrics of the VCM standard are given. Existing methods are then presented, introducing the existing proposals by category along with the research progress of the latest VCM meeting. Finally, conclusions are given.
Funding: Research project of the Construction Department of Hubei Province (Project No. 2023-64).
Abstract: By analyzing and comparing the current application status, advantages, and disadvantages of domestic and foreign classification coding systems for artificial materials and mechanical equipment, and by comparing the existing coding system standards in different regions of the country, a coding data model suited to big-data research needs is proposed based on the current national standard for artificial material and mechanical equipment classification coding. Through forward automatic coding calculation and reverse automatic decoding, the model achieves a horizontal connection of characteristics and a vertical penetration of attribute values for construction materials and machinery. This coding scheme and calculation model can also establish a database of construction material and machinery codes and unit prices, forming a complete big-data model for construction material coding unit prices. This provides foundational support for calculating and analyzing big data on construction material unit prices, real-time information prices, market prices, and various comprehensive prices, thus contributing to the formation of cost-related big data.
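The forward-coding/reverse-decoding pairing can be sketched with fixed-width numeric fields. The field names, widths, and sample values below are hypothetical illustrations, not the layout of the national standard itself:

```python
# Hypothetical 3-level code: category / subclass / attribute, each a
# fixed-width numeric field concatenated into one code string.
FIELDS = (("category", 2), ("subclass", 3), ("attribute", 4))

def encode(values):
    """Forward coding: pack field values into one fixed-width code string."""
    parts = []
    for (name, width), value in zip(FIELDS, values):
        if value >= 10 ** width:
            raise ValueError(f"{name} value {value} exceeds width {width}")
        parts.append(str(value).zfill(width))
    return "".join(parts)

def decode(code):
    """Reverse decoding: split the code string back into named fields."""
    result, pos = {}, 0
    for name, width in FIELDS:
        result[name] = int(code[pos:pos + width])
        pos += width
    return result
```

Because every code decodes back to named attribute values, codes can serve both as database keys for unit-price records (horizontal connection) and as queryable attributes (vertical penetration).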
Abstract: A new scheme combining a scalable transcoder with space-time block codes (STBC) for an orthogonal frequency-division multiplexing (OFDM) system is proposed for robust video transmission over dispersive fading channels. The target application of the scalable transcoder is to provide access to pre-encoded high-quality MPEG-2 video from mobile wireless terminals. Besides outputting an MPEG-4 fine granular scalability (FGS) bitstream, the scalable transcoder reduces both the video frame size and the bit rate. An array-processing algorithm for layer interference suppression is used at the receiver, so the system structure provides different levels of protection to different layers. Furthermore, by considering the importance level of the scalable bitstream, different bitstreams can be given different levels of protection by the system structure and channel coding. The proposed system simultaneously achieves the large diversity gain of STBC and the alleviation of frequency-selective fading provided by OFDM. Simulation results show that the proposed scheme, integrating scalable transcoding, provides a basic quality of video transmission and outperforms conventional single-layer transcoding under random and bursty error channel conditions.
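As one concrete instance of an STBC, the two-antenna Alamouti code can be sketched as follows. The abstract does not specify which STBC the scheme uses, so this is an illustrative choice rather than the paper's design:

```python
def alamouti_encode(s1, s2):
    """Alamouti space-time block code for two transmit antennas.

    Two complex symbols are sent over two time slots:
      slot 1 transmits (s1, s2) on antennas (1, 2);
      slot 2 transmits (-conj(s2), conj(s1)).
    The orthogonal structure is what yields full transmit diversity
    with a simple linear receiver."""
    slot1 = (s1, s2)
    slot2 = (-s2.conjugate(), s1.conjugate())
    return slot1, slot2
```

Per-subcarrier application of such an encoder inside an OFDM frame is the usual way the STBC diversity gain and the OFDM handling of frequency-selective fading are combined.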
Abstract: The performance analysis and simulation of coding schemes based on a model of the Ka-band fixed satellite channel are presented. The results indicate that concatenated codes with a large inner interleaving depth have good performance and high spectral efficiency. The studies also show that simple block interleaving is very effective in combating the slow, frequency-nonselective fading of the Ka band.
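The block interleaving mentioned above can be sketched as a write-rows/read-columns permutation; its effect is to spread a burst of consecutive channel errors across `depth` codewords, which is why it helps against slow fading. This is a generic sketch, not the paper's specific interleaver dimensions:

```python
def block_interleave(symbols, depth):
    """Write symbols row-wise into a depth-row table, read column-wise.
    Assumes len(symbols) is a multiple of depth, for clarity."""
    assert len(symbols) % depth == 0
    width = len(symbols) // depth
    rows = [symbols[i * width:(i + 1) * width] for i in range(depth)]
    return [rows[r][c] for c in range(width) for r in range(depth)]

def block_deinterleave(symbols, depth):
    """Inverse operation: write column-wise, read row-wise."""
    width = len(symbols) // depth
    rows = [[None] * width for _ in range(depth)]
    idx = 0
    for c in range(width):
        for r in range(depth):
            rows[r][c] = symbols[idx]
            idx += 1
    return [s for row in rows for s in row]
```

After deinterleaving at the receiver, a burst of b consecutive channel errors appears as at most ceil(b/depth) errors per codeword, which an inner code can then correct.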
Funding: The EU Seventh Framework Programme FP7-PEOPLE-IRSES (No. 247083).
Abstract: In order to decrease both computational complexity and coding time, an improved algorithm for the early detection of all-zero blocks (AZBs) in H.264/AVC is proposed. Previous AZB detection algorithms are reviewed. Three types of transformed frequency-domain coefficients that are quantized to zero are analyzed. Based on the three types of frequency-domain scaling factors, the corresponding spatial coefficients are derived. The Schwarz inequality is then applied to derive three thresholds based on the spatial coefficients. A fourth threshold is set on the basis of the probability distribution of zero coefficients in a block. As a result, an adaptive AZB detection algorithm is proposed based on the minimum of the former three thresholds and the threshold of the zero-block distribution. Simulation results show that, compared with existing AZB detection algorithms, the proposed algorithm achieves a 5% higher AZB detection ratio and 4% to 10% computation savings with only 0.1 dB video quality degradation.
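The basic shape of such an early test can be sketched as a single comparison on the residual block; the paper's contribution lies in how the threshold is derived (via the Schwarz inequality and the zero-coefficient distribution), which is not reproduced here and enters only as a parameter:

```python
def is_all_zero_block(residuals, qp_threshold):
    """Early all-zero-block test for a residual block.

    If the sum of absolute residuals falls below a quantization-derived
    threshold, every transformed-and-quantized coefficient is guaranteed
    to be zero, so the transform and quantization steps can be skipped
    entirely. 'qp_threshold' is assumed precomputed from the quantization
    parameter; its derivation is the subject of the paper."""
    return sum(abs(r) for r in residuals) < qp_threshold
```

The computation saving comes from skipping the 4x4 transform, quantization, and entropy-coding work for every block that passes the test.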
Funding: Supported by the National Natural Science Foundation of China (61076019, 61106018), the Aeronautical Science Foundation of China (20115552031), the China Postdoctoral Science Foundation (20100481134), the Jiangsu Province Key Technology R&D Program (BE2010003), the Nanjing University of Aeronautics and Astronautics Research Funding (NS2010115), and the Nanjing University of Aeronautics and Astronautics Initial Funding for Talented Faculty (1004-YAH10027).
Abstract: Test data compression and test resource partitioning (TRP) are essential to reduce the amount of test data in system-on-chip testing. A novel variable-to-variable-length compression code, advanced frequency-directed run-length (AFDR) coding, is designed. Different from frequency-directed run-length (FDR) codes, AFDR encodes both 0-runs and 1-runs and assigns the same codewords to runs of equal length. It also modifies the codes for 00 and 11 to improve compression performance. Experimental results on the ISCAS'89 benchmark circuits show that AFDR codes achieve a higher compression ratio than FDR and other compression codes.
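The first step of any such run-length scheme is extracting the runs; AFDR then maps equal run lengths to the same codeword regardless of bit value. Only the run extraction is sketched here; the AFDR codeword table itself is not reproduced:

```python
def runs(bits):
    """Split a bit string into (bit, run_length) pairs.

    Unlike plain FDR, which encodes only 0-runs, an AFDR-style encoder
    consumes both 0-runs and 1-runs; a codeword table (not shown) would
    then assign each (length) the same code for either bit value."""
    out = []
    i = 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1
        out.append((bits[i], j - i))
        i = j
    return out
```

For test-data streams dominated by long runs of identical bits, the run list is far shorter than the original bit string, which is where the compression comes from.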
Funding: The National Natural Science Foundation of China (Nos. 61362001, 61102043, 61262084, 20132BAB211030, 20122BAB211015) and the Basic Research Program of Shenzhen (No. JC201104220219A).
Abstract: A two-level Bregmanized method with graph-regularized sparse coding (TBGSC) is presented for image interpolation. The outer-level Bregman iterative procedure enforces the observation-data constraints, while the inner-level Bregmanized method is devoted to dictionary updating and sparse representation of small overlapping image patches. The introduced graph-regularized sparse-coding constraint captures local image features effectively and consequently enables accurate reconstruction from highly undersampled partial data. Furthermore, the modified sparse coding and simple dictionary updating applied in the inner minimization make the proposed algorithm converge within a relatively small number of iterations. Experimental results demonstrate that the proposed algorithm effectively reconstructs images and outperforms current state-of-the-art approaches in terms of visual comparisons and quantitative measures.
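The workhorse update inside sparse-coding inner minimizations of this kind is soft-thresholding, the proximal operator of the l1 norm. This is a generic building block, not the paper's full TBGSC update (which additionally carries the graph-regularization term):

```python
def soft_threshold(x, t):
    """Element-wise soft-thresholding: shrink each value toward zero by t,
    zeroing anything with magnitude below t. This is the closed-form
    solution of min_z 0.5*(z - v)^2 + t*|z| for each element v."""
    return [max(abs(v) - t, 0.0) * (1 if v > 0 else -1) for v in x]
```

In an iterative scheme, alternating this shrinkage with a data-fidelity gradient step drives small coefficients exactly to zero, which is what produces the sparse patch representations the abstract refers to.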