Spatially coupled low-density parity-check (SC-LDPC) codes are prominent candidates for future communication standards due to their 'threshold saturation' property. However, when facing burst erasures, the decoding process stalls and the decoding performance degrades dramatically. To improve burst erasure correction, this paper proposes two-dimensional SC-LDPC (2D-SC-LDPC) codes constructed by connecting two asymmetric SC-LDPC coupled chains in parallel for resistance to burst erasures. A density evolution algorithm is presented to evaluate the asymptotic performance against burst erasures, from which the maximum correctable burst erasure length can be computed. The analysis shows that the maximum correctable burst erasure lengths of the proposed 2D-SC-LDPC codes are much larger than those of SC-LDPC codes and asymmetric SC-LDPC codes. Finite-length simulation results for the 2D-SC-LDPC codes over the burst erasure channel confirm the excellent asymptotic performance.
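As a point of reference for the density evolution analysis mentioned above, the sketch below iterates the standard erasure-channel recursion for an uncoupled (dv, dc)-regular LDPC ensemble and locates its belief-propagation threshold by bisection. It only illustrates the type of fixed-point computation involved, under textbook assumptions; it is not the paper's density evolution for 2D-SC-LDPC chains or burst erasures.

```python
# Minimal sketch: density evolution for a (dv, dc)-regular LDPC ensemble over a
# memoryless binary erasure channel (not the paper's 2D-SC-LDPC analysis).

def de_converges(eps, dv=3, dc=6, iters=10000, tol=1e-10):
    """Return True if erasure probability eps is below the BP decoding threshold."""
    x = eps  # probability that a variable-to-check message is still an erasure
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
        if x < tol:
            return True
    return False

# Bisection estimate of the BP threshold; for the (3,6)-regular ensemble it is
# known to be roughly 0.429.
lo, hi = 0.0, 1.0
for _ in range(30):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if de_converges(mid) else (lo, mid)
print(f"estimated BP threshold: {lo:.4f}")
```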
This paper analyzes Philip Larkin's poem "High Window" from a deconstructive perspective. It shows that in the poem the key words/signifiers are always under erasure, and thus the chain of signification is endless, since the poem is self-deconstructing. The paper then argues that the linguistic features of the poem are paradoxically meaningful in that they reflect the poet's skepticism and anxiety.
Frame erasure concealment is studied to solve the problem of rapid speech quality degradation caused by the loss of speech parameters during transmission. A large hidden Markov model is applied to model the immittance spectral frequency (ISF) parameters in the AMR-WB codec and to optimally estimate the lost ISFs under the minimum mean square error (MMSE) rule. The estimated ISFs are weighted with those of their previous neighbors to smooth the speech, yielding the actual concealed ISF vectors, which are used in place of the lost ISFs in speech synthesis at the receiver. The speech concealed by this algorithm is compared with that produced by Annex I of the G.722.2 specification; simulations show that the proposed concealment algorithm performs better than the baseline in terms of frequency-weighted spectral distortion and signal-to-noise ratio, with an increase of 2.41 dB in signal-to-noise ratio (SNR) and a reduction of 0.885 dB in frequency-weighted spectral distortion.
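The smoothing step described above, weighting the MMSE estimate of a lost ISF vector with the previous frame's ISFs, can be pictured with the minimal sketch below. The weight value, vector dimension, and function names are illustrative assumptions, not the algorithm or parameters from the paper.

```python
# Minimal sketch (hypothetical weight and dimensions, not the AMR-WB/G.722.2 code):
# blend the MMSE estimate of a lost ISF vector with the previous good frame's ISFs.
import numpy as np

def conceal_isf(isf_mmse_estimate, isf_previous_frame, alpha=0.6):
    """Weighted combination of the estimated ISFs and the previous frame's ISFs.

    alpha is an illustrative smoothing weight; the weighting used by the
    proposed algorithm is not specified here.
    """
    est = np.asarray(isf_mmse_estimate, dtype=float)
    prev = np.asarray(isf_previous_frame, dtype=float)
    return alpha * est + (1.0 - alpha) * prev

# Example with a 16-dimensional ISF vector (illustrative values in Hz).
prev = np.linspace(200.0, 6400.0, 16)       # last correctly received ISFs
est = prev + np.random.normal(0, 30, 16)    # stand-in for the MMSE estimate
concealed = conceal_isf(est, prev)
```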
To reduce the time required to complete the regeneration process of erasure codes, we propose a Tree-structured Parallel Regeneration (TPR) scheme for multiple data losses in distributed storage systems. Under this scheme, two algorithms are proposed for constructing multiple regeneration trees: an edge-disjoint algorithm and an edge-sharing algorithm. The edge-disjoint algorithm constructs multiple independent trees; it is simple and suited to environments where newcomers and their providers are distributed over a large area and have few intersections. The edge-sharing algorithm constructs multiple trees that compete for the bandwidth and makes better use of it, although it must measure the available bandwidth and handle bandwidth changes, which makes it difficult to implement in practical systems. The parallel regeneration of multiple data losses in TPR includes two main optimizations: first, transferring data through bandwidth-optimized paths in a pipelined manner; second, executing data regeneration over multiple trees in parallel. To evaluate the proposal, we implement an event-based simulator and make a detailed comparison with popular regeneration methods. The quantitative results show that TPR, with either the edge-disjoint or the edge-sharing algorithm, reduces the regeneration time significantly.
Fault tolerance is increasingly significant for large-scale storage systems in which Byzantine failures of storage nodes may happen. Traditional Byzantine quorum systems that tolerate Byzantine failures by using replication have two main limitations: low space efficiency and static quorum variables. We propose an erasure-code Byzantine fault-tolerant quorum that provides high reliability with far lower storage overhead than replication by adopting erasure codes as the redundancy scheme. Through the read/write operations of clients and the diagnosis operation of a supervisor, our quorum system can detect Byzantine nodes and dynamically adjust the system size and fault threshold. Simulation results show that our method improves the performance of the quorum system while using relatively small quorums.
Erasure codes over binary fields, which need only AND and XOR operations during encoding and decoding and therefore have high computational efficiency, are widely used in many areas of information technology. A matrix decoding method is proposed in this paper. The method is a universal data reconstruction scheme for erasure codes over binary fields. Besides pre-judging whether the errors can be recovered, the method can rebuild the sectors of lost data on a fault-tolerant storage system built from erasure codes when disk errors occur. The data reconstruction process of the method has simple and clear steps, which makes it easy to implement in software. Moreover, it can easily be extended to non-binary fields, so the method is expected to find wide application in the future.
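As a minimal illustration of the XOR-only arithmetic that erasure codes over binary fields rely on (a single-parity toy case, not the paper's general matrix decoding method), the sketch below encodes one parity block and rebuilds a lost data block from the survivors.

```python
# Single-parity toy example of binary-field erasure coding: the parity block is
# the XOR of all data blocks, so any one lost block is the XOR of the survivors
# with the parity. Not the paper's matrix decoding method.
from functools import reduce

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data = [b"blockA..", b"blockB..", b"blockC.."]   # equal-sized data blocks
parity = xor_blocks(data)                        # encoding: one parity block

# Suppose data[1] is lost; rebuild it from the surviving blocks plus the parity.
recovered = xor_blocks([data[0], data[2], parity])
assert recovered == data[1]
```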
In the big data era, data unavailability, either temporary or permanent, has become a normal, daily occurrence. Unlike permanent data failures, which are fixed by a background job, temporarily unavailable data is recovered on the fly to serve the ongoing read request. However, the newly revived data is discarded after serving the request, on the assumption that data experiencing a temporary failure may come back alive later. Such disposal of failure data prevents the sharing of failure information among clients and leads to many unnecessary data recovery processes (e.g., caused by recurring unavailability of the same data or by multiple data failures in one stripe), thereby straining system performance. To this end, this paper proposes GFCache, which caches corrupted data for the dual purposes of sharing failure information and eliminating unnecessary data recovery processes. GFCache employs a greedy, opportunistic caching approach that promotes not only the failed data but also the sequential, failure-likely data in the same stripe. Additionally, GFCache includes FARC (Failure ARC), a cache replacement algorithm that balances failure recency and frequency to accommodate data corruption with a good hit ratio. Data stored in GFCache also supports fast reads on the normal data access path. Furthermore, since GFCache is a generic failure cache, it can be used wherever erasure coding is deployed, with any specific coding scheme and parameters. Evaluations show that GFCache achieves a good hit ratio with the proposed caching algorithm and significantly boosts system performance by reducing unnecessary data recoveries for vulnerable data held in the cache.
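A toy failure cache in the spirit described above might look like the sketch below: recovered blocks of temporarily failed data are kept keyed by (stripe, block) so that later reads of the same unavailable block skip another degraded recovery. The class name and the plain LRU eviction are assumptions for illustration; they are not the FARC algorithm or the greedy promotion policy of GFCache.

```python
# Minimal failure-cache sketch (illustrative names and a plain LRU policy,
# not GFCache's FARC): keep recovered blocks so repeated reads of the same
# temporarily failed block do not trigger another degraded recovery.
from collections import OrderedDict

class FailureCache:
    def __init__(self, capacity=1024):
        self.capacity = capacity
        self._cache = OrderedDict()          # (stripe_id, block_id) -> recovered bytes

    def get(self, stripe_id, block_id):
        key = (stripe_id, block_id)
        if key in self._cache:
            self._cache.move_to_end(key)     # refresh recency on a hit
            return self._cache[key]
        return None                          # miss: caller must run a degraded read

    def put(self, stripe_id, block_id, data):
        key = (stripe_id, block_id)
        self._cache[key] = data
        self._cache.move_to_end(key)
        if len(self._cache) > self.capacity: # evict the least recently used entry
            self._cache.popitem(last=False)
```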
Multiple channels can be used to provide higher transmission capacity for bandwidth-intensive and delay-sensitive real-time streams. However, traditional channel capacity theories and coding schemes are seldom designed for real-time streams with strict delay constraints, especially in a multi-channel context. This paper considers a real-time streaming system in which real-time messages of different importance are transmitted through several packet erasure channels and must be decoded by the receiver within a fixed delay. For window erasure channels and i.i.d. (independently and identically distributed) erasure channels, we derive Multi-channel Real-time Stream Transmission (MRST) capacity models for Symmetric Real-time (SR) streams and Asymmetric Real-time (AR) streams, respectively. Moreover, for window erasures, a Maximum Equilibrium Intra-session Code (MEIC) is presented for SR and AR streams and is shown to asymptotically achieve the theoretical MRST capacity. For i.i.d. erasures, we propose an Adaptive Maximum Equilibrium Intra-session Code (AMEIC) and prove that AMEIC closely approaches the MRST transmission capacity. Finally, the performance of the proposed codes is verified by simulations.
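For orientation, the classical memoryless result says an i.i.d. packet erasure channel with erasure probability eps carries at most 1 - eps packets per channel use, and independent channels add up. The snippet below computes this simple aggregate bound; the delay-constrained MRST capacity derived in the paper is a different, stricter quantity.

```python
# Classical reference point, not the paper's delay-constrained MRST capacity:
# an i.i.d. packet erasure channel with erasure probability eps carries at most
# 1 - eps packets per use, and independent channels sum.

def iid_erasure_capacity(erasure_probs):
    """Aggregate capacity, in packets per use, of independent i.i.d. erasure channels."""
    return sum(1.0 - eps for eps in erasure_probs)

# Three channels with different loss rates: upper bound on the deliverable stream rate.
print(iid_erasure_capacity([0.05, 0.10, 0.20]))   # -> 2.65
```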
Classical unequal erasure protection schemes split the data to be protected into classes that are encoded independently. The unequal protection scheme presented in this paper is based on an erasure code that encodes all the data together according to the existing dependencies. A simple algorithm dynamically generates the generator matrix of the erasure code from the packet stream structure, i.e., the dependencies between packets, and the code rate. The proposed erasure code was applied to a packetized MPEG-4 stream transmitted over a packet erasure channel and compared with classical protection schemes in terms of PSNR and MOS. It is shown that the proposed code maintains a high video quality level over a wider packet loss rate range than the other protection schemes.
Building a new decentralized domain name system based on blockchain technology helps to solve problems such as load imbalance and over-dependence on the trust of the central node. However, in existing blockchain storage systems, the storage overhead is very high because of the full-replication data storage mechanism: the total storage consumption for each block is up to O(n) with n nodes. Erasure codes applied to blockchains can significantly reduce the storage overhead, but they also greatly lower read performance. In this study, we propose a novel coding scheme for blockchain storage, Combination Locality based Erasure Code for permissioned blockchain storage (CLEC). CLEC uses erasure coding, parity locality, and topology locality in blockchain storage, greatly reducing read latency and repair time. In CLEC, the storage consumption per block can be reduced to O(1), and the repair penalty can also be lowered to O(1). Experiments in the open-source permissioned blockchain Tendermint show that, compared with existing work, CLEC achieves up to 6 times the repair speed and nearly 1.7 times the read speed with only 1.17 times the storage overhead, a great improvement in read and repair performance at a slightly increased storage cost.
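The storage argument above can be made concrete with a back-of-the-envelope comparison: full replication stores a copy of every block on every node, so per-block consumption grows as O(n), whereas a (k, m) erasure code spreads a stripe over k + m nodes with a constant (k + m)/k blow-up. The numbers below are generic, not CLEC's actual layout.

```python
# Generic storage-overhead comparison (illustrative numbers, not CLEC's layout).

def replication_overhead(n_nodes):
    return float(n_nodes)            # n full copies of each block across n nodes

def erasure_code_overhead(k, m):
    return (k + m) / k               # k data fragments plus m parity fragments

print(replication_overhead(100))     # 100.0: per-block consumption grows with n
print(erasure_code_overhead(10, 4))  # 1.4: constant blow-up, independent of n
```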
In this paper, we propose an opportunistic decoding method to enhance the reconstruction of an unequal-erasure-protected scalable coded source. The method exploits non-channel-decodable symbols in the received packets by opportunistically using the structural information of the joint source-channel coding scheme. As a result, a longer prefix of the source coded stream may be obtained, enhancing the reconstruction of the scalable coded source. Experimental results indicate that opportunistic decoding at the receiver can improve the quality of the reconstructed source when channel decoding fails to recover all the source symbols.
A modified type of hybrid ARQ system with erasure correction and parity-bit retransmission is considered. The performance of the system is analyzed under the assumption that the forward channel suffers from Nakagami common fading and additive white Gaussian noise. Good agreement between the theoretical results and simulation is achieved. The proposed ARQ protocol is compared with other known hybrid ARQ algorithms and demonstrates significantly higher throughput efficiency over a range of SNR.
Detecting the presence of a valid signal is an important task of a telecommunication receiver. When the receiver is unable to detect the presence of a valid signal, due to noise and fading, the event is referred to as an erasure. This work deals with computing the probability of erasure for orthogonal frequency division multiplexed (OFDM) signals used by multiple-input multiple-output (MIMO) systems. The theoretical results are validated by computer simulations. OFDM is widely used in present-day wireless communication systems due to its ability to mitigate intersymbol interference (ISI) caused by frequency-selective fading channels, and MIMO systems offer the advantage of spatial multiplexing, resulting in increased bit rate, which is a main requirement of recent wireless standards such as 5G and beyond.
In distributed storage systems, replication and erasure codes (EC) are common methods of data redundancy. Compared with replication, EC has better storage efficiency but suffers higher update overhead. Moreover, the consistency and reliability problems caused by concurrent updates bring new challenges to applications of EC. Many works focus on optimizing the EC solution, including algorithm optimization and novel data update methods, but lack solutions to the consistency and reliability problems. In this paper, we introduce a storage system that decouples data updating and EC encoding, namely decoupled data updating and coding (DDUC), and propose a data placement policy that combines replication and parity blocks. For an (N, M) EC system, the data are placed as N groups of M+1 replicas, and redundant data blocks of the same stripe are placed on the parity nodes, so that the parity nodes can autonomously perform local EC encoding. Based on this policy, a two-phase data update method is implemented: data are updated in replica mode in phase 1, and the EC encoding is done independently by the parity nodes in phase 2. This solves the problem of data reliability degradation caused by concurrent updates while ensuring high concurrency performance. The system also uses persistent memory (PMem) hardware features, namely byte addressability and eight-byte atomic writes, to implement a lightweight logging mechanism that improves performance while ensuring data consistency. Experimental results show that the concurrent access performance of the proposed storage system is 1.70–3.73 times that of the state-of-the-art storage system Ceph, while its latency is only 3.4%–5.9% that of Ceph.
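The two-phase update idea can be sketched for a simplified single-parity stripe as below: phase 1 acknowledges the update after a cheap replicated write and logs it at the parity node, and phase 2 lets the parity node fold the logged deltas into its parity block on its own. The names and the in-memory log are illustrative assumptions, not the DDUC or PMem implementation.

```python
# Minimal two-phase update sketch for a single-parity stripe (illustrative only;
# not the DDUC/PMem implementation): phase 1 logs the replicated write, phase 2
# lets the parity node apply the XOR deltas locally and asynchronously.

class ParityNode:
    def __init__(self, block_size):
        self.parity = bytes(block_size)                # parity block, initially zeros
        self.log = []                                  # phase-1 updates awaiting encoding

    def append_replica(self, old_block, new_block):    # phase 1: cheap replicated write
        self.log.append((old_block, new_block))

    def encode(self):                                  # phase 2: local EC encoding
        for old, new in self.log:
            delta = bytes(a ^ b for a, b in zip(old, new))
            self.parity = bytes(a ^ b for a, b in zip(self.parity, delta))
        self.log.clear()

node = ParityNode(block_size=4)
node.append_replica(b"\x00\x00\x00\x00", b"\x01\x02\x03\x04")
node.encode()                                          # parity now reflects the update
```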