A 2.5 Gb/s clock and data recovery (CDR) circuit is designed and realized in TSMC's standard 0.18 μm CMOS process. The clock recovery is based on a PLL. For phase noise optimization, a dynamic phase and frequency detector (PFD) is used in the PLL. The RMS jitter of the recovered 2.5 GHz clock is 2.4 ps and the SSB phase noise is -111 dBc/Hz at 10 kHz offset. The RMS jitter of the recovered 2.5 Gb/s data is 3.3 ps. The power consumption is 120 mW.
The design of a 2.488 Gbit/s clock and data recovery (CDR) IC for a synchronous digital hierarchy (SDH) STM-16 receiver is described. Based on injected phase-locked loop (IPLL) and D-flip-flop architectures, the CDR IC was implemented in a standard 0.35 μm complementary metal-oxide-semiconductor (CMOS) technology. With a 2^31 - 1 pseudorandom bit sequence (PRBS) input, the sensitivity of the data recovery circuit is better than 20 mV at a 10^-12 bit error rate (BER). The recovered clock shows a root-mean-square (RMS) jitter of 2.8 ps and a phase noise of -110 dBc/Hz at 100 kHz offset. The capture range of the circuit is larger than 40 MHz. With a 5 V supply, the circuit consumes 680 mW and the chip area is 1.49 mm × 1 mm.
In this paper, a detailed analysis of a phase interpolator for clock recovery is presented. A mathematical model is set up for the phase interpolator, and a precise analysis is performed using this model. The result shows that the output amplitude and linearity of the phase interpolator are primarily related to the difference between the two input phases. A new encoding pattern is given to solve this problem. Analysis in the circuit domain was also undertaken. The simulation results show that the relation between the RC time constant and the time difference of the input clocks affects the linearity of the phase interpolator. To alleviate this undesired effect, two adjustable-RC buffers are added at the input of the PI. Finally, a 90 nm CMOS phase interpolator, which can work at frequencies from 1 GHz to 5 GHz, is proposed. The power dissipation of the phase interpolator is 1 mW with a 1.2 V power supply. Experimental results show that the phase interpolator has a monotone output phase and good linearity.
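A minimal numerical sketch (not the authors' circuit) of why the interpolator's output depends on the phase spacing of its two input clocks: mixing two unit sinusoids with weights a and 1 - a yields a sinusoid whose amplitude droops as the input phase difference grows, while the output phase moves monotonically with the weight. Function and variable names are illustrative.

```python
import math

def interpolate(a, delta):
    """Mix two unit-amplitude clocks cos(wt) and cos(wt - delta) with
    weights a and (1 - a); return (amplitude, phase) of the resulting
    sinusoid A*cos(wt - psi)."""
    x = a + (1.0 - a) * math.cos(delta)   # in-phase component
    y = (1.0 - a) * math.sin(delta)       # quadrature component
    return math.hypot(x, y), math.atan2(y, x)

# Amplitude droop grows with the spacing of the input phases:
amp_45 = interpolate(0.5, math.pi / 4)[0]   # inputs 45 degrees apart
amp_90 = interpolate(0.5, math.pi / 2)[0]   # inputs 90 degrees apart
```

With equal weights, the 90-degree spacing gives a noticeably smaller amplitude than the 45-degree spacing, which is the effect the abstract attributes to the input phase difference.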
A 2.5 Gb/s/ch data recovery (DR) circuit is designed for an SFI-5 interface. To achieve parallel data bit synchronization and reduce the bit error rate (BER), a delay-locked loop (DLL) is used to place the center of the data eye exactly at the rising edge of the data-sampling clock. A single-channel DR circuit was fabricated in TSMC's standard 0.18 μm CMOS process. The chip area is 0.46 mm^2. With a 2^32 - 1 pseudorandom bit sequence (PRBS) input, the RMS jitter of the recovered 2.5 Gb/s data is 3.3 ps. The sensitivity of the single-channel DR is better than 20 mV at a 10^-12 BER.
To solve the problem of data recovery from free disk sectors, an approach based on intelligent pattern matching is proposed in this paper. Unlike methods based on the file directory, this approach exploits the consistency among the data on the disk. A feature pattern library is established for different file types according to the internal structure of their content. Data on sectors are classified automatically by clustering and evaluation. When a conflict arises in classification, it is resolved by adopting context patterns. Based on this approach, a data recovery system targeting pattern matching of txt, Word, and PDF files was implemented. Raw and formatted recovery tests proved that the system works well.
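A toy sketch of the feature-pattern idea for the three file types the paper targets: classify a raw sector by well-known file signatures (magic bytes) plus a printable-text heuristic. The %PDF and OLE-container signatures are standard; the threshold and labels are illustrative, not the paper's actual pattern library.

```python
# Legacy .doc files are OLE compound documents with this 4-byte prefix.
OLE_MAGIC = bytes([0xD0, 0xCF, 0x11, 0xE0])

def classify_sector(sector: bytes) -> str:
    """Guess the file type a free sector belongs to from its content."""
    if sector.startswith(b"%PDF"):
        return "pdf"
    if sector.startswith(OLE_MAGIC):
        return "word"
    # Text heuristic: almost every byte is printable ASCII or whitespace.
    printable = sum(32 <= b < 127 or b in (9, 10, 13) for b in sector)
    if sector and printable / len(sector) > 0.95:
        return "txt"
    return "unknown"
```

A real system would, as the abstract describes, back such signatures with clustering and context patterns to resolve ambiguous sectors.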
Recovering accurate data is important for both earthquake and exploration seismology studies when data are sparsely sampled or partially missing. We present a method that allows for precise and accurate recovery of seismic data using a localized fractal recovery method. This method requires that the data be self-similar on local and global spatial scales. We present examples showing that the intrinsic structure associated with seismic data can be easily and accurately recovered using this approach. This result, in turn, indicates that seismic data are indeed self-similar on local and global scales. The method is applicable not only to seismic studies but also to any field study that requires accurate recovery of data from sparsely sampled datasets with partially missing data. Our ability to recover the missing data with high fidelity and accuracy will qualitatively improve the images of seismic tomography. (Supported by the Spark Program of Earthquake Sciences, Grant No. XH13002.)
During state perception of a power system, fragments of harmonic data are inevitably lost owing to the loss of synchronization signals, transmission delays, instrument failures, or other factors. A harmonic data recovery method based on a multivariate norm matrix is proposed in this paper. The proposed method involves dynamic time warping for correlation analysis of harmonic data, normalized cuts for correlation clustering of power-quality monitoring devices, and an adaptive alternating direction method of multipliers for multivariate norm joint optimization. Compared with existing data recovery methods, the proposed method maintains excellent recovery accuracy without requiring prior information or optimization of the power-quality monitoring devices. Simulation results on the IEEE 39-bus and IEEE 118-bus test systems demonstrate the low computational complexity of the proposed method and its robustness against noise. In addition, applying the method to field data from a real-world system yields results consistent with those obtained from simulations. (Supported in part by the Science and Technology Project of China Southern Power Grid, No. 090000KK52190169/SZKJXM2019669, and in part by the Open Fund of the State Key Laboratory of Power System and Generation Equipment, Tsinghua University, No. SKLD21KM04.)
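The correlation-analysis step uses dynamic time warping (DTW). A minimal pure-Python DTW distance, of the textbook form such methods build on (not the paper's implementation), can be sketched as:

```python
def dtw(a, b):
    """Classic dynamic time warping distance between two sequences,
    using absolute difference as the local cost."""
    INF = float("inf")
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping moves.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

Because the warping path can stretch one series against the other, two harmonic records that differ only by a time shift or local stretch score as highly correlated, which is what makes DTW a better similarity measure here than a pointwise distance.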
The occurrence of earthquakes is closely related to crustal geotectonic movement and the migration of mass, which consequently cause changes in gravity. Gravity Recovery And Climate Experiment (GRACE) satellite data can be used to detect gravity changes associated with large earthquakes. However, previous GRACE satellite-based seismic gravity-change studies have focused more on coseismic gravity changes than on preseismic gravity changes. Moreover, the north-south stripe noise in GRACE data is difficult to eliminate, resulting in the loss of some gravity information related to tectonic activities. To explore preseismic gravity anomalies in a more refined way, we first propose a method of characterizing gravity variation based on the maximum shear strain of gravity, inspired by the concept of crustal strain. The offset index method is then adopted to describe the gravity anomalies, and the spatial and temporal characteristics of gravity anomalies before earthquakes are analyzed at the scales of the fault zone and the plate, respectively. In this work, experiments are carried out on the Tibetan Plateau and its surrounding areas, and the following findings are obtained. First, at the observation scale of the fault zone, we detect large-area gravity anomalies near the epicenter, often about half a year before an earthquake, and these anomalies were distributed along the fault zone. Second, at the observation scale of the plate, we find that when an earthquake occurred on the Tibetan Plateau, a large number of gravity anomalies also occurred at the boundary of the Tibetan Plateau and the Indian Plate. Moreover, these experiments confirm that the proposed method can successfully capture the preseismic gravity anomalies of large earthquakes with a magnitude of less than 8, which suggests a new idea for the application of gravity satellite data to earthquake research. (Supported by the National Key Research and Development Program of China, Grant No. 2019YFC1509202, and the National Natural Science Foundation of China, Grant Nos. 41772350, 61371189, and 41701513.)
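One plausible reading of the "maximum shear strain of gravity" analogy (the paper's exact definition may differ): treat the horizontal gradient of the gridded gravity-change field as a pseudo-displacement field and apply the classical maximum-shear-strain formula to it, via finite differences. Grid layout and spacing below are illustrative.

```python
def max_shear(g, i, j, h=1.0):
    """Maximum shear strain of a gravity-change grid g (list of rows) at
    interior point (i, j), grid spacing h. The gradient components gx, gy
    play the role of the displacement components (u, v) in crustal
    strain analysis; needs a 2-cell margin around (i, j)."""
    gx = lambda i, j: (g[i][j + 1] - g[i][j - 1]) / (2 * h)  # d(g)/dx
    gy = lambda i, j: (g[i + 1][j] - g[i - 1][j]) / (2 * h)  # d(g)/dy
    exx = (gx(i, j + 1) - gx(i, j - 1)) / (2 * h)            # d(gx)/dx
    eyy = (gy(i + 1, j) - gy(i - 1, j)) / (2 * h)            # d(gy)/dy
    exy = ((gx(i + 1, j) - gx(i - 1, j)) / (2 * h)
           + (gy(i, j + 1) - gy(i, j - 1)) / (2 * h)) / 2    # shear term
    return (((exx - eyy) / 2) ** 2 + exy ** 2) ** 0.5
```

A uniform field yields zero shear everywhere, while a saddle-shaped anomaly produces a nonzero maximum shear, which is the kind of localized signature the offset index method would then flag.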
To ensure the reliability and availability of data, redundancy strategies are always required for distributed storage systems. Erasure coding, one of the representative redundancy strategies, has the advantage of low storage overhead, which facilitates its employment in distributed storage systems. Among the various erasure coding schemes, XOR-based erasure codes are becoming popular due to their high computing speed. When a single-node failure occurs in such coding schemes, a process called data recovery takes place to retrieve the failed node's lost data from surviving nodes. However, data transmission during the data recovery process usually requires a considerable amount of time. Current research has focused mainly on reducing the amount of data needed for data recovery to reduce the time required for data transmission, but it has encountered problems such as significant complexity and local optima. In this paper, we propose a random search recovery algorithm, named SA-RSR, to speed up single-node failure recovery of XOR-based erasure codes. SA-RSR uses a simulated annealing technique to search for an optimal recovery solution that reads and transmits a minimum amount of data. In addition, this search process can be done in polynomial time. We evaluate SA-RSR with a variety of XOR-based erasure codes in simulations and in a real storage system, Ceph. Experimental results in Ceph show that SA-RSR reduces the amount of data required for recovery by up to 30.0% and improves the performance of data recovery by up to 20.36% compared to the conventional recovery method. (Supported by the National Natural Science Foundation of China, No. 62172327.)
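A generic simulated-annealing skeleton of the kind SA-RSR builds on. The recovery-specific cost (the amount of data read from surviving nodes for a candidate set of decoding equations) is abstracted into a toy objective here; the schedule, neighbor move, and cost are all illustrative, not the paper's.

```python
import math, random

def anneal(init, neighbor, cost, t0=1.0, alpha=0.95, steps=500, seed=0):
    """Minimize cost(state) by simulated annealing with geometric cooling."""
    rng = random.Random(seed)
    cur, cur_c = init, cost(init)
    best, best_c = cur, cur_c
    t = t0
    for _ in range(steps):
        cand = neighbor(cur, rng)
        c = cost(cand)
        # Always accept improvements; accept worse moves with
        # Boltzmann probability exp(-(c - cur_c) / t).
        if c <= cur_c or rng.random() < math.exp((cur_c - c) / t):
            cur, cur_c = cand, c
            if c < best_c:
                best, best_c = cand, c
        t *= alpha
    return best, best_c

def toy_cost(x):
    # Toy stand-in for "symbols read": count of selected reads, with a
    # penalty when the selection cannot decode (here, odd parity).
    return sum(x) + 3 * (sum(x) % 2)

def flip(x, rng):
    i = rng.randrange(len(x))
    return x[:i] + (1 - x[i],) + x[i + 1:]
```

The penalty term gives the landscape the kind of local optima the abstract mentions: greedy descent from an all-ones state stalls, while the annealed acceptance rule can cross the penalty barriers.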
We introduce a gated oscillator based on XNOR/XOR cells and illustrate its working process. A half-rate burst-mode CDR (BM-CDR) circuit based on the proposed oscillator is designed and implemented in SMIC 0.13 μm CMOS technology, occupying an area of 675 × 25 μm^2. The measured results show that this circuit can recover the clock and data from each 10 Gbit/s burst-mode data packet within 5 bits, and the recovered data pass the eye-mask test defined in the IEEE 802.3av standard. (Supported by the Key Technology Research and Development Program of Jiangsu Province, Industry Part, China, No. BE2008128.)
Varieties of trusted computing products usually follow the mechanism of a linear-style chain of trust, according to the TCG specifications. The distinct advantage is good compatibility with existing computing platforms, but the shortcomings are also obvious. A new star-style trust model with the ability of data recovery is proposed in this paper. The model can enhance the hardware-based root of trust in platform measurement, reduce the loss of trust during the transfer process, extend the border of trust flexibly, and provide data backup and recovery. The security and reliability of the system are thereby much improved. Using formal methods, we prove that the star-style trust model is much better than the linear-style trust model in trust transfer, boundary extension, and related respects. We also describe the design and implementation of a trusted PDA based on the star-style trust model.
A semi-digital clock and data recovery (CDR) circuit is presented. In order to lower the CDR tracking jitter and decrease the loop latency, an average-based phase detection algorithm is adopted and realized with a novel circuit. Implemented in a 0.13 μm standard 1PSM CMOS process, the CDR is integrated into a high-speed serializer/deserializer (SERDES) chip. Measurement results show that the CDR tracks the phase of the input data well; the RMS jitter of the recovered clock at the observation pin is 122 ps at a 75 MHz clock frequency, while the bit error rate of the recovered data is less than 10 × 10^-12.
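A behavioral sketch of the averaging idea (not the paper's circuit): instead of bumping the phase code on every early/late decision, a window of bang-bang decisions is accumulated and only the majority drives one phase step, which reduces dithering jitter at the cost of loop latency per update. Window size and step are illustrative.

```python
def track_phase(decisions, step=1, window=8):
    """decisions: iterable of +1 (clock late) / -1 (clock early)
    bang-bang phase-detector outputs. Accumulate `window` decisions,
    then move the phase code one `step` in the majority direction.
    Returns the phase code after each update."""
    phase, acc, count, trace = 0, 0, 0, []
    for d in decisions:
        acc += d
        count += 1
        if count == window:
            if acc > 0:
                phase += step
            elif acc < 0:
                phase -= step
            acc = count = 0
            trace.append(phase)
    return trace
```

On alternating early/late decisions (pure dither, no real phase error) the averaged loop holds the phase code still, whereas a per-decision bang-bang loop would toggle on every sample.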
A wide-range tracking technique for a clock and data recovery (CDR) circuit is presented. Compared to the traditional technique, a digital CDR controller with calibration is adopted to extend the tracking range. Because digital circuits are used in the design, the CDR is not sensitive to process and power supply variations. To verify the technique, the whole CDR circuit is implemented in 65-nm CMOS technology. Measurements show that the tracking range of the CDR is greater than ±6×10^-3 at 5 Gb/s. The receiver has good jitter tolerance and achieves a bit error rate of less than 10^-12. The re-timed and re-multiplexed serial data has a root-mean-square jitter of 6.7 ps.
Non-Volatile Memory (NVM) offers byte-addressability and persistency. Because NVM can be plugged into the memory bus and provides low latency, it offers a new opportunity to build database systems with a single-layer storage design. A single-layer NVM-native database (N2DB) provides zero copy and log freedom: all data are stored in NVM, and there is no extra data duplication or logging during execution. N2DB thus avoids the complex data synchronization and logging overhead of the two-layer storage design used by disk-oriented databases and in-memory databases. Garbage collection (GC) is critical in such an NVM-based database because memory leaks on NVM are durable. Moreover, data recovery is equally essential to guarantee the atomicity, consistency, isolation, and durability properties. Without logging, it is a great challenge for N2DB to restore data to a consistent state after crashes and recoveries. This paper presents the GC and data recovery mechanisms for N2DB. Evaluations show that the overall performance of N2DB is up to 3.6× higher than that of InnoDB. Enabling GC reduces performance by up to 10% but saves up to 67% of storage space. Moreover, our data recovery requires only 0.2% of the time and half of the storage space of InnoDB.
According to relevant statistics, nearly 70% of users suffer disk data loss each year because of misuse, virus damage, physical damage, or hardware failure, bringing irreparable losses to enterprises, institutions, and individuals. Data recovery technology has therefore attracted wide attention, and using it to recover lost data and minimize losses has become an urgent need. This paper analyzes hard disk data loss from the two aspects of software and hardware failure, reviews the theory of data storage structures, and, drawing on practical experience, elaborates the types of data damage and the corresponding recovery methods.
Compaction correction is a key part of paleogeomorphic recovery methods, yet the influence of lithology on porosity evolution is not usually taken into account. Present methods merely classify the lithologies as sandstone and mudstone and undertake separate porosity-depth compaction modeling. However, using just two lithologies is an oversimplification that cannot represent the compaction history, and in such schemes the precision of the compaction recovery is inadequate. To improve this precision, a depth compaction model is proposed that involves both porosity and clay content. A clastic lithological compaction unit classification method, based on clay content, has been designed to identify lithological boundaries and establish sets of compaction units. On the basis of this classification, two compaction recovery methods that integrate well and seismic data are employed to extrapolate well-based compaction information outward along seismic lines and recover the paleo-topography of the clastic strata in the region. The examples presented here show that a better understanding of paleo-geomorphology can be gained by applying the proposed compaction recovery technology.
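A decompaction sketch using the standard exponential porosity-depth (Athy-type) law phi(z) = phi0 * exp(-c z) that porosity-depth compaction modeling commonly starts from. The coefficients below are illustrative, not calibrated compaction-unit parameters, and the paper's clay-content-dependent model would replace the single (phi0, c) pair per lithology.

```python
import math

def avg_porosity(z_top, z_bot, phi0, c):
    """Mean porosity of a layer between depths z_top and z_bot (m),
    from the depth-integrated exponential porosity law."""
    return phi0 / (c * (z_bot - z_top)) * (
        math.exp(-c * z_top) - math.exp(-c * z_bot))

def decompact(z_top, z_bot, phi0, c, new_top=0.0, tol=1e-6):
    """Restore a layer's thickness when its top is moved to new_top,
    conserving the solid (grain) volume per unit area."""
    solid = (z_bot - z_top) * (1.0 - avg_porosity(z_top, z_bot, phi0, c))
    thickness = z_bot - z_top
    while True:  # fixed-point iteration on the restored thickness
        phi = avg_porosity(new_top, new_top + thickness, phi0, c)
        new_thickness = solid / (1.0 - phi)
        if abs(new_thickness - thickness) < tol:
            return new_thickness
        thickness = new_thickness
```

Moving a deeply buried layer back toward the surface increases its porosity under the exponential law, so the restored thickness comes out larger than the present-day thickness, which is the correction paleogeomorphic recovery needs.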
The trusted sharing of Electronic Health Records (EHRs) can realize the efficient use of medical data resources. Generally speaking, EHRs are widely used in blockchain-based medical data platforms. EHRs are valuable private assets of patients, and their ownership belongs to the patients. While recent research has shown that patients can freely and effectively delete the EHRs stored in hospitals, it does not address the challenge of record sharing when patients revisit doctors. To solve this problem, this paper proposes a deletion and recovery scheme for EHRs based on a Medical Certificate Blockchain. Cross-chain technology connects the Medical Certificate Blockchain and the Hospital Blockchain to realize the recovery of deleted EHRs. At the same time, the Medical Certificate Blockchain and the InterPlanetary File System (IPFS) are used to store the Personal Health Records generated by patients visiting different medical institutions. In addition, digital watermarking technology is combined with the above to ensure the authenticity of the restored electronic medical records. Under the combined effect of blockchain technology and digital watermarking, the proposal is not affected by any other party throughout the process. System analysis and security analysis illustrate the completeness and feasibility of the scheme.
A 28/56 Gb/s NRZ/PAM-4 dual-mode transceiver (TRx) designed in a 28-nm complementary metal-oxide-semiconductor (CMOS) process is presented in this article. A voltage-mode (VM) driver featuring a 4-tap reconfigurable feed-forward equalizer (FFE) is employed in the quarter-rate transmitter (TX). The half-rate receiver (RX) incorporates a continuous-time linear equalizer (CTLE), a 3-stage high-speed slicer with multi-clock-phase sampling, and a clock and data recovery (CDR) circuit. The experimental results show that the TRx operates at a maximum speed of 56 Gb/s with chip-on-board (COB) assembly. The 28 Gb/s NRZ eye diagram shows a far-end vertical eye opening of 210 mV with a single-ended output amplitude of 351 mV, and the 56 Gb/s PAM-4 eye diagram exhibits far-end eye openings of 33 mV (upper eye), 31 mV (mid eye), and 28 mV (lower eye) with a single-ended output amplitude of 353 mV. The recovered 14 GHz clock from the RX exhibits random jitter (RJ) of 469 fs and deterministic jitter (DJ) of 8.76 ps. The 875 Mb/s de-multiplexed data features a 593 ps horizontal eye opening with 32.02 ps RJ at a bit error rate (BER) of 10^-5 (0.53 UI). The power dissipation of the TX and RX is 125 and 181.4 mW, respectively, from a 0.9-V supply.
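A behavioral sketch of what a 4-tap FFE does: the driver output is a FIR combination of the current symbol with one pre-cursor and two post-cursor neighbors, pre-distorting the waveform to cancel channel intersymbol interference. The tap weights below are illustrative, not the chip's programmed values.

```python
def ffe(symbols, taps=(-0.05, 0.7, -0.2, -0.05)):
    """Apply a 4-tap FFE to a symbol sequence.
    symbols: +1/-1 for NRZ (a 4-level sequence would model PAM-4).
    taps: (pre-cursor, main cursor, post-cursor 1, post-cursor 2)."""
    out = []
    n = len(symbols)
    for k in range(n):
        acc = 0.0
        for i, c in enumerate(taps):
            idx = k + 1 - i  # tap 0 looks one symbol ahead (pre-cursor)
            if 0 <= idx < n:
                acc += c * symbols[idx]
        out.append(acc)
    return out
```

On a long run of identical symbols the output settles to the tap sum (a de-emphasized level), while the first symbol after a transition is driven harder; this edge emphasis is what opens the far-end eye.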
With the development of the smart grid and energy internet, the amount of data transmitted in real time has increased significantly. Because the communication networks were not designed to carry high-speed, real-time data, data losses and data quality degradation happen constantly. To address this problem, and exploiting the strong spatial and temporal correlation of electricity data, which is generated by human actions and habits, we build a low-rank electricity data matrix in which the rows are time and the columns are users. Inspired by matrix decomposition, we factor the low-rank electricity data matrix into the product of two small matrices and use the known data to approximate the matrix and recover the missing electrical data. Based on real electricity data, we analyze the low-rankness of the electricity data matrix and apply the matrix decomposition-based method to the real data. The experimental results verify the effectiveness and efficiency of the proposed scheme.
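A minimal sketch of the decomposition-based recovery: approximate the (time × user) matrix X as the product U V using only the observed entries, then read the missing values off U V. Sizes, rank, learning rate, and epoch count are toy choices for illustration, not the paper's settings.

```python
import random

def factorize(X, mask, rank=2, lr=0.05, epochs=2000, seed=0):
    """Fit X ~ U @ V on entries where mask is True, by stochastic
    gradient descent on the squared error; return the dense U @ V,
    whose unobserved entries serve as the recovered data."""
    rng = random.Random(seed)
    n, m = len(X), len(X[0])
    U = [[rng.uniform(-0.1, 0.1) for _ in range(rank)] for _ in range(n)]
    V = [[rng.uniform(-0.1, 0.1) for _ in range(m)] for _ in range(rank)]
    for _ in range(epochs):
        for i in range(n):
            for j in range(m):
                if not mask[i][j]:
                    continue  # only observed entries drive the fit
                err = sum(U[i][r] * V[r][j] for r in range(rank)) - X[i][j]
                for r in range(rank):
                    u = U[i][r]
                    U[i][r] -= lr * err * V[r][j]
                    V[r][j] -= lr * err * u
    return [[sum(U[i][r] * V[r][j] for r in range(rank)) for j in range(m)]
            for i in range(n)]
```

Because a low-rank matrix has far fewer degrees of freedom than entries, the observed entries over-determine the factors, and the product at the masked positions recovers the missing readings.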
With the intelligentization of the Internet of Vehicles (IoVs), Artificial Intelligence (AI) technology, especially deep learning, is becoming more and more essential. Federated Deep Learning (FDL) is a novel distributed machine learning technology that can address challenges such as data security, privacy risks, and the huge communication overhead of big raw data sets. However, FDL can only guarantee data security and privacy among multiple clients during training. If the data sets stored locally on clients are corrupted, whether tampered with or lost, the training results of FDL in intelligent IoVs will inevitably be negatively affected. In this paper, we are the first to design a secure data auditing protocol to guarantee the integrity and availability of data sets in FDL-empowered IoVs. Specifically, a cuckoo filter and Reed-Solomon codes are utilized to guarantee error tolerance, including efficient corrupted-data locating and recovery. In addition, a novel data structure, the Skip Hash Table (SHT), is designed to optimize data dynamics. Finally, we establish the security of the scheme under the Computational Diffie-Hellman (CDH) assumption on bilinear groups. Extensive theoretical analyses and performance evaluations demonstrate the security and efficiency of our scheme for data sets in FDL-empowered IoVs.
文摘A 2.5Gb/s clock and data recovery (CDR) circuit is designed and realized in TSMC's standard 0.18/μm CMOS process. The clock recovery is based on a PLL. For phase noise optimization,a dynamic phase and frequency detector (PFD) is used in the PLL. The rms jitter of the recovered 2.5GHz clock is 2.4ps and the SSB phase noise is - 111dBc/Hz at 10kHz offset. The rms jitter of the recovered 2.5Gb/s data is 3.3ps. The power consumption is 120mW.
文摘The design of a 2. 488 Gbit/s clock and data recovery (CDR) If for synchronous digital hierarchy (SDH) STM-16 receiver is described. Based on the injected phase-locked loop (IPLL) and D-flip flop architectures, the CDR IC was implemented in a standard 0. 35 μan complementary metal-oxide-semiconductor (CMOS) technology. With 2^31 -1 pseudorandom bit sequences (PRBS) input, the sensitivity of data recovery circuit is less than 20 mV with 10^-12 bit error rate (BER). The recovered clock shows a root mean square (rms) jitter of 2. 8 ps and a phase noise of - 110 dBc/Hz at 100 kHz offset. The capture range of the circuit is larger than 40 MHz. With a 5 V supply, the circuit consumes 680 mW and the chip area is 1.49 mm × 1 mm.
文摘In this paper,a detailed analysis of a phase interpolator for clock recovery is presented. A mathematical model is setup for the phase interpolator and we perform a precise analysis using this model. The result shows that the output amplitude and linearity of phase interpolator is primarily related to the difference between the two input phases. A new encoding pattern is given to solve this problem. Analysis in the circuit domain was also undertaken. The simulation results show that the relation between RC time-constant and time difference of input clocks affects the linearity of the phase interpolator. To alleviate this undesired effect, two adjustable-RC buffers are added at the input of the PI. Finally,a 90nm CMOS phase interpolator,which can work in the frequency from 1GHz to 5GHz,is proposed. The power dissipation of the phase interpolator is lmW with a 1.2V power supply. Experiment results show that the phase interpolator has a monotone output phase and good linearity.
文摘A 2.5Gb/s/ch data recovery (DR) circuit is designed for an SFI-5 interface. To make the parallel data bit-synchronization and reduce the bit error rate (BER) ,a delay locked loop (DLL) is used to place the center of the data eye exactly at the rising edge of the data-sampling clock. A single channel DR circuit was fabricated in TSMC's standard 0. 18μm CMOS process. The chip area is 0. 46mm^2. With a 2^32 - 1 pseudorandom bit sequence (PRBS) input,the RMS jitter of the recovered 2.5Gb/s data is 3.3ps. The sensitivity of the single channel DR is less than 20mV with 10-12 BER.
文摘To solve the problem of data recovery on free disk sectors, an approach of data recovering based on intelligent pattern matching is proposed in this paper. Different from the methods based on the file directory, this approach utilizes the consistency among the data on the disk. A feature pattern library is established based on different types of fries according to the internal constructions of text. Data on sectors will be classified automatically by data clustering and evaluating. When the conflict happens on data classification, the digestion will be initiated by adopting context pattern. Based on this approach, the paper achieved the data recovery system aiming at pattern matching of txt, word and PDF fries. Raw and formatting recovery tests proved that the system works well.
基金supported by the Spark Program of Earthquake Sciences (Grant No. XH13002)
文摘Recovering accurate data is important for both earthquake and exploration seismology studies when data are sparsely sampled or partially missing. We present a method that allows for precise and accurate recovery of seismic data using a localized fractal recovery method. This method requires that the data are self- similar on local and global spatial scales. We present examples that show that the intrinsic structure associated with seismic data can be easily and accurately recovered by using this approach. This result, in turn, indicates that seismic data are indeed self-similar on local and global scales. This method is applicable not only for seismic studies, but also for any field studies that require accurate recovery of data from sparsely sampled datasets with partially missing data. Our ability to recover the missing data with high fidelity and accuracy will qualitatively improve the images of seismic tomography.
基金supported in part by the Science and Technology Project of China Southern Power Grid (No. 090000KK52190169/SZKJXM2019669)in part by the Open Fund of State Key Laboratory of Power System and Generation Equipment,Tsinghua University (No. SKLD21KM04)。
文摘During state perception of a power system, fragments of harmonic data are inevitably lost owing to the loss of synchronization signals, transmission delays, instrument failures, or other factors. A harmonic data recovery method is proposed based on multivariate norm matrix in this paper. The proposed method involves dynamic time warping for correlation analysis of harmonic data, normalized cuts for correlation clustering of power-quality monitoring devices, and adaptive alternating direction method of multipliers for multivariable norm joint optimization. Compared with existing data recovery methods, our proposed method maintains excellent recovery accuracy without requiring prior information or optimization of the power-quality monitoring device. Simulation results on the IEEE 39-bus and IEEE 118-bus test systems demonstrate the low computational complexity of the proposed method and its robustness against noise. In addition, the application of the proposed method to field data from a real-world system provides consistent results with those obtained from simulations.
基金supported by the National Key Research and Development Program of China(Grant No.2019YFC1509202)the National Natural Science Foundation of China(Grant Nos.41772350,61371189,and 41701513).
文摘The occurrence of earthquakes is closely related to the crustal geotectonic movement and the migration of mass,which consequently cause changes in gravity.The Gravity Recovery And Climate Experiment(GRACE)satellite data can be used to detect gravity changes associated with large earthquakes.However,previous GRACE satellite-based seismic gravity-change studies have focused more on coseismic gravity changes than on preseismic gravity changes.Moreover,the noise of the north–south stripe in GRACE data is difficult to eliminate,thereby resulting in the loss of some gravity information related to tectonic activities.To explore the preseismic gravity anomalies in a more refined way,we first propose a method of characterizing gravity variation based on the maximum shear strain of gravity,inspired by the concept of crustal strain.The offset index method is then adopted to describe the gravity anomalies,and the spatial and temporal characteristics of gravity anomalies before earthquakes are analyzed at the scales of the fault zone and plate,respectively.In this work,experiments are carried out on the Tibetan Plateau and its surrounding areas,and the following findings are obtained:First,from the observation scale of the fault zone,we detect the occurrence of large-area gravity anomalies near the epicenter,oftentimes about half a year before an earthquake,and these anomalies were distributed along the fault zone.Second,from the observation scale of the plate,we find that when an earthquake occurred on the Tibetan Plateau,a large number of gravity anomalies also occurred at the boundary of the Tibetan Plateau and the Indian Plate.Moreover,the aforementioned experiments confirm that the proposed method can successfully capture the preseismic gravity anomalies of large earthquakes with a magnitude of less than 8,which suggests a new idea for the application of gravity satellite data to earthquake research.
Funding: The National Natural Science Foundation of China (No. 62172327).
Abstract: To ensure the reliability and availability of data, redundancy strategies are required in distributed storage systems. Erasure coding, one representative redundancy strategy, has the advantage of low storage overhead, which facilitates its employment in distributed storage systems. Among the various erasure coding schemes, XOR-based erasure codes are becoming popular due to their high computing speed. When a single-node failure occurs in such coding schemes, a process called data recovery takes place to retrieve the failed node's lost data from surviving nodes. However, data transmission during the recovery process usually requires a considerable amount of time. Current research has focused mainly on reducing the amount of data needed for recovery so as to shorten transmission time, but it has encountered problems such as significant complexity and local optima. In this paper, we propose a random search recovery algorithm, named SA-RSR, to speed up single-node failure recovery of XOR-based erasure codes. SA-RSR uses a simulated annealing technique to search for an optimal recovery solution that reads and transmits a minimum amount of data, and this search completes in polynomial time. We evaluate SA-RSR with a variety of XOR-based erasure codes in simulations and in a real storage system, Ceph. Experimental results in Ceph show that SA-RSR reduces the amount of data required for recovery by up to 30.0% and improves the performance of data recovery by up to 20.36% compared to the conventional recovery method.
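The paper's SA-RSR algorithm is not given in detail here, so the following is a generic simulated-annealing sketch of the core idea: each lost symbol of an XOR-based code can be repaired from any one of several candidate read-sets, and the search minimizes the total number of distinct symbols read. Function names, the cooling schedule, and the toy encoding of read-sets are all illustrative.

```python
import math
import random

def sa_min_read_recovery(options, iters=2000, t0=2.0, cooling=0.995, seed=1):
    """Pick one candidate read-set per lost symbol so that the union of
    symbols read from surviving nodes is as small as possible.

    options[i] is a list of candidate read-sets (sets of symbol ids)
    that can each repair lost symbol i in an XOR-based code.
    """
    rng = random.Random(seed)

    def cost(s):
        # Total distinct symbols that must be read and transmitted.
        return len(set().union(*(options[i][c] for i, c in enumerate(s))))

    state = [0] * len(options)
    cur_cost = cost(state)
    best, best_cost = list(state), cur_cost
    t = t0
    for _ in range(iters):
        i = rng.randrange(len(options))          # perturb one symbol's choice
        cand = list(state)
        cand[i] = rng.randrange(len(options[i]))
        c = cost(cand)
        # Accept improvements always, worse moves with Boltzmann probability.
        if c <= cur_cost or rng.random() < math.exp(-(c - cur_cost) / t):
            state, cur_cost = cand, c
            if c < best_cost:
                best, best_cost = list(state), c
        t *= cooling                              # geometric cooling schedule
    return best, best_cost
```

On a toy instance where three lost symbols can all be repaired from the same pair of surviving symbols, the search settles on that shared read-set, reading two symbols instead of up to six.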
基金supported by the Key Technology Research and Development Program of Jiangsu Province,Industry Part,China(No.BE2008128)
Abstract: We introduce a gated oscillator based on XNOR/XOR cells and illustrate its working process. A half-rate BM-CDR circuit based on the proposed oscillator is designed and implemented in SMIC 0.13 μm CMOS technology, occupying an area of 675 × 25 μm². The measured results show that the circuit can recover the clock and data from each 10 Gbit/s burst-mode data packet within 5 bits, and the recovered data passes the eye-mask test defined in the IEEE 802.3av standard.
基金Supported by the National Natural Science Foundation of China(61303024)the Natural Science Foundation of Hubei Province(2013CFB441)+1 种基金the Foundation of Science and Technology on Information Assurance Laboratory(KJ-13-106)the Natural Science Foundation of Jiangsu Province(BK20130372)
Abstract: Trusted computing products usually follow the linear-style chain-of-trust mechanism specified by the TCG. Its distinct advantage is good compatibility with existing computing platforms, but its shortcomings are equally obvious. A new star-style trust model with data recovery capability is proposed in this paper. The model enhances the hardware-based root of trust in platform measurement, reduces the loss of trust during the transfer process, extends the border of trust flexibly, and supports data backup and recovery, so the security and reliability of the system are much improved. Using formal methods, we prove that the star-style trust model outperforms the linear-style model in trust transfer, boundary extension, and other respects. We also describe the design and implementation of a trusted PDA based on the star-style trust model.
Abstract: A semi-digital clock and data recovery (CDR) circuit is presented. To lower CDR tracking jitter and decrease loop latency, an average-based phase detection algorithm is adopted and realized with a novel circuit. Implemented in a 0.13 μm standard 1PSM CMOS process, the CDR is integrated into a high-speed serializer/deserializer (SERDES) chip. Measurement results of the chip show that the CDR tracks the phase of the input data well; the RMS jitter of the recovered clock at the observation pin is 122 ps at a 75 MHz clock frequency, while the bit error rate of the recovered data is less than 10 × 10^-12.
Funding: Project supported by the National High-Tech R&D Program (863) of China (No. 2011AA010403) and the National Natural Science Foundation of China (No. 61474134).
Abstract: A wide-range tracking technique for a clock and data recovery (CDR) circuit is presented. Compared with the traditional technique, a digital CDR controller with calibration is adopted to extend the tracking range. Because digital circuits are used in the design, the CDR is insensitive to process and power-supply variations. To verify the technique, the whole CDR circuit was implemented in 65-nm CMOS technology. Measurements show that the tracking range of the CDR is greater than ±6 × 10^-3 at 5 Gb/s. The receiver has good jitter tolerance and achieves a bit error rate below 10^-12. The re-timed and re-multiplexed serial data has a root-mean-square jitter of 6.7 ps.
Funding: Supported by the National Key Research & Development Program of China (No. 2016YFB1000504) and the National Natural Science Foundation of China (Nos. 61877035, 61433008, 61373145, and 61572280).
Abstract: Non-Volatile Memory (NVM) offers byte-addressability and persistency. Because NVM can be plugged into memory slots and provides low latency, it offers a new opportunity to build database systems with a single-layer storage design. A single-layer NVM-Native DataBase (N2DB) provides zero copy and log freedom: all data are stored in NVM, and there is no extra data duplication or logging during execution. N2DB thus avoids the complex data synchronization and logging overhead of the two-layer storage design used by disk-oriented and in-memory databases. Garbage Collection (GC) is critical in such an NVM-based database because memory leaks on NVM are durable. Moreover, data recovery is equally essential to guarantee the atomicity, consistency, isolation, and durability properties. Without logging, it is a great challenge for N2DB to restore data to a consistent state after crashes. This paper presents the GC and data recovery mechanisms of N2DB. Evaluations show that the overall performance of N2DB is up to 3.6 times higher than that of InnoDB. Enabling GC reduces performance by up to 10% but saves storage space by up to 67%. Moreover, our data recovery requires only 0.2% of the time and half of the storage space of InnoDB.
Abstract: According to statistics, nearly 70% of users suffer disk data loss each year because of misoperation, virus damage, physical damage, or hardware failure, causing irreparable losses to enterprises, institutions, and individuals. Data recovery technology has therefore attracted wide attention, and how to use it to recover lost data and minimize losses has become an urgent need. Starting from software and hardware failures, this paper analyzes the two main causes of hard disk data loss and the theory of data storage structures and, combined with practical experience, elaborates on the types of data damage and the corresponding recovery methods.
Abstract: Compaction correction is a key part of paleo-geomorphology recovery methods, yet the influence of lithology on porosity evolution is not usually taken into account. Present methods merely classify lithologies as sandstone or mudstone and model porosity-depth compaction for each separately. However, using just two lithologies is an oversimplification that cannot represent the compaction history, and the precision of such compaction recovery is inadequate. To improve the precision of compaction recovery, a depth compaction model is proposed that involves both porosity and clay content. A clastic lithological compaction unit classification method, based on clay content, is designed to identify lithological boundaries and establish sets of compaction units. On the basis of this classification, two compaction recovery methods that integrate well and seismic data are employed to extrapolate well-based compaction information outward along seismic lines and recover the paleo-topography of the clastic strata in the region. The examples presented here show that a better understanding of paleo-geomorphology can be gained by applying the proposed compaction recovery technology.
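The abstract does not give the model's calibrated parameters, so the sketch below assumes a standard Athy-type porosity-depth law, φ(z) = φ₀·e^(-cz), with the endmember constants linearly mixed by clay content and then used to decompact a layer by conserving solid (grain) thickness. All numeric constants are illustrative, not the paper's values.

```python
import math

# Illustrative endmember Athy parameters: (surface porosity, compaction
# coefficient per metre). The paper's calibrated values are not given here.
SAND = (0.45, 0.27e-3)
MUD = (0.60, 0.51e-3)

def athy_params(clay):
    """Linear mix of endmember parameters by clay content (0..1)."""
    phi0 = (1 - clay) * SAND[0] + clay * MUD[0]
    c = (1 - clay) * SAND[1] + clay * MUD[1]
    return phi0, c

def solid_thickness(z1, z2, clay):
    """Thickness of solid grains between depths z1 and z2 (metres):
    layer thickness minus the integral of phi0 * exp(-c z)."""
    phi0, c = athy_params(clay)
    return (z2 - z1) - (phi0 / c) * (math.exp(-c * z1) - math.exp(-c * z2))

def decompacted_thickness(z1, z2, clay, tol=1e-6):
    """Original (at-surface) thickness of the layer now buried at z1..z2,
    found by bisection so that solid thickness is conserved."""
    hs = solid_thickness(z1, z2, clay)
    lo, hi = hs, 5.0 * (z2 - z1)   # bracket: solids-only to 5x current
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if solid_thickness(0.0, mid, clay) < hs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With these toy constants, a 100 m layer buried at 2 km and a 50% clay fraction restores to roughly 1.5 times its present thickness, since near-surface porosity is much higher than porosity at depth.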
Funding: Supported by the National Natural Science Foundation of China under Grants 61972207, U1836208, U1836110, and 61672290; the Major Program of the National Social Science Fund of China under Grant No. 17ZDA092; the National Key R&D Program of China under Grant 2018YFB1003205; the Collaborative Innovation Center of Atmospheric Environment and Equipment Technology (CICAEET) fund; and the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD) fund.
Abstract: Trusted sharing of Electronic Health Records (EHRs) can realize efficient use of medical data resources. EHRs are widely used on blockchain-based medical data platforms; they are valuable private assets whose ownership belongs to the patients. While recent research has shown that patients can freely and effectively delete EHRs stored in hospitals, it does not address the challenge of record sharing when patients revisit doctors. To solve this problem, this paper proposes a deletion and recovery scheme for EHRs based on a Medical Certificate Blockchain. Cross-chain technology is used to connect the Medical Certificate Blockchain and the Hospital Blockchain to realize the recovery of deleted EHRs. At the same time, the Medical Certificate Blockchain and the InterPlanetary File System (IPFS) are used to store Personal Health Records generated when patients visit different medical institutions. In addition, digital watermarking is combined to ensure the authenticity of the restored electronic medical records. Under the combined effect of blockchain technology and digital watermarking, the proposal is not subject to interference from any other party throughout the process. System analysis and security analysis illustrate the completeness and feasibility of the scheme.
基金supported by National Natural Science Foundation of China under Grant 62174132the Fundamental Research Funds for Central Universities under Grant xzy022022060.
Abstract: A 28/56 Gb/s NRZ/PAM-4 dual-mode transceiver (TRx) designed in a 28-nm complementary metal-oxide-semiconductor (CMOS) process is presented. A voltage-mode (VM) driver featuring a 4-tap reconfigurable feed-forward equalizer (FFE) is employed in the quarter-rate transmitter (TX). The half-rate receiver (RX) incorporates a continuous-time linear equalizer (CTLE), a 3-stage high-speed slicer with multi-clock-phase sampling, and a clock and data recovery (CDR) circuit. Experimental results show that the TRx operates at a maximum speed of 56 Gb/s with chip-on-board (COB) assembly. The 28 Gb/s NRZ eye diagram shows a far-end vertical eye opening of 210 mV with a single-ended output amplitude of 351 mV, and the 56 Gb/s PAM-4 eye diagram exhibits far-end eye openings of 33 mV (upper eye), 31 mV (mid eye), and 28 mV (lower eye) with a single-ended output amplitude of 353 mV. The recovered 14 GHz clock from the RX exhibits random jitter (RJ) of 469 fs and deterministic jitter (DJ) of 8.76 ps. The 875 Mb/s de-multiplexed data features a 593 ps horizontal eye opening with 32.02 ps RJ at a bit-error rate (BER) of 10^-5 (0.53 UI). The power dissipation of the TX and RX is 125 and 181.4 mW, respectively, from a 0.9-V supply.
Abstract: With the development of the smart grid and the energy internet, the amount of data transmitted in real time increases significantly. Because communication networks were not designed to carry high-speed, real-time data, data losses and data quality degradation happen constantly. To address this problem, and exploiting the strong spatial and temporal correlation of electricity data generated by human activities, we build a low-rank electricity data matrix whose rows are time and whose columns are users. Inspired by matrix decomposition, we factor the low-rank electricity data matrix into the product of two small matrices and use the known data to approximate the matrix and recover the missing electrical data. Based on real electricity data, we analyze the low-rankness of the electricity data matrix and apply the matrix decomposition-based method to the real data. The experimental results verify the effectiveness and efficiency of the proposed scheme.
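The factor-the-matrix-and-fill-the-gaps idea can be sketched with plain gradient descent on the observed entries; the rank, learning rate, and synthetic data below are illustrative choices, not the paper's settings.

```python
import numpy as np

def mf_recover(M, mask, rank=2, lr=0.01, reg=0.001, epochs=5000, seed=0):
    """Approximate a partially observed matrix M (time x user) as U @ V.T,
    fitting only the observed entries (mask == 1), then fill the gaps.

    M must contain zeros (or any placeholder) at unobserved positions;
    the mask ensures those positions never contribute to the fit.
    """
    rng = np.random.default_rng(seed)
    n, m = M.shape
    U = 0.1 * rng.standard_normal((n, rank))
    V = 0.1 * rng.standard_normal((m, rank))
    for _ in range(epochs):
        E = mask * (U @ V.T - M)          # error on observed entries only
        U -= lr * (E @ V + reg * U)       # gradient step with L2 penalty
        V -= lr * (E.T @ U + reg * V)
    R = U @ V.T
    return np.where(mask == 1, M, R)      # keep known data, fill missing
```

On synthetic rank-1 "load" data with a couple of readings masked out, the factorization reproduces the hidden entries from the row/column structure while leaving every observed reading untouched.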
基金supported by the National Natural Science Foundation of China under Grants No.U1836115,No.61922045,No.61877034,No.61772280the Natural Science Foundation of Jiangsu Province under Grant No.BK20181408+2 种基金the Peng Cheng Laboratory Project of Guangdong Province PCL2018KP004the CICAEET fundthe PAPD fund.
Abstract: With the intelligentization of the Internet of Vehicles (IoVs), Artificial Intelligence (AI) technology, especially deep learning, is becoming more and more essential. Federated Deep Learning (FDL) is a novel distributed machine learning technology able to address challenges such as data security, privacy risks, and the huge communication overhead of big raw data sets. However, FDL can only guarantee data security and privacy among multiple clients during training. If the data sets stored locally in clients are corrupted, whether tampered with or lost, the training results of FDL in intelligent IoVs are inevitably affected. In this paper, we are the first to design a secure data auditing protocol to guarantee the integrity and availability of data sets in FDL-empowered IoVs. Specifically, the cuckoo filter and Reed-Solomon codes are utilized to guarantee error tolerance, including efficient corrupted-data locating and recovery. In addition, a novel data structure, the Skip Hash Table (SHT), is designed to optimize data dynamics. Finally, we illustrate the security of the scheme with the Computational Diffie-Hellman (CDH) assumption on bilinear groups. Sufficient theoretical analyses and performance evaluations demonstrate the security and efficiency of our scheme for data sets in FDL-empowered IoVs.