Funding: The National Key R&D Program of China under contract No. 2021YFC3101603.
Abstract: Ocean temperature is an important physical variable in marine ecosystems, and ocean temperature prediction is an important research objective in ocean-related fields. Currently, one of the commonly used approaches to ocean temperature prediction is data-driven, but research on this approach is mostly limited to the sea surface, with few studies on the prediction of internal ocean temperature. Existing graph neural network-based methods usually use predefined graphs or learned static graphs, which cannot capture the dynamic associations among data. In this study, we propose a novel dynamic spatiotemporal graph neural network (DSTGN) to predict three-dimensional ocean temperature (3D-OT), which combines static graph learning and dynamic graph learning to automatically mine two unknown dependencies between sequences from the original 3D-OT data without prior knowledge. Temporal and spatial dependencies in the time series are then captured using temporal and graph convolutions. We also integrated dynamic graph learning, static graph learning, graph convolution, and temporal convolution into an end-to-end framework for 3D-OT prediction using time-series grid data. We conducted prediction experiments using high-resolution 3D-OT from the Copernicus global ocean physical reanalysis, with data covering the vertical variation of temperature from the sea surface to 1000 m below the sea surface. We compared the method with five mainstream models that are commonly used for ocean temperature prediction, and the results showed that it achieved the best prediction results at all prediction scales.
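The abstract above describes the graph learning only at a high level. As a rough illustration of how an adaptive (learned static) adjacency and a single graph-convolution step are often parameterized in this family of spatiotemporal GNNs, the sketch below builds an adjacency matrix from two node-embedding matrices and propagates node features over it. The embedding sizes and the ReLU/softmax form are assumptions borrowed from the wider literature, not the paper's exact formulation.

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
num_nodes, emb_dim, feat_dim = 50, 8, 4          # illustrative sizes, not from the paper

# Learned "static" graph: adjacency parameterized by two node-embedding matrices.
E1 = rng.standard_normal((num_nodes, emb_dim))
E2 = rng.standard_normal((num_nodes, emb_dim))
A = softmax(np.maximum(E1 @ E2.T, 0.0), axis=1)  # row-normalized learned adjacency

# One graph-convolution step: aggregate neighbor features over the learned graph.
H = rng.standard_normal((num_nodes, feat_dim))   # node features (e.g. temperatures at grid points)
W = rng.standard_normal((feat_dim, feat_dim))    # layer weights
H_next = np.maximum(A @ H @ W, 0.0)              # propagate, transform, ReLU
print(H_next.shape)
```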
Abstract: Data compression plays a key role in optimizing the use of memory storage space and also in reducing latency in data transmission. In this paper, we are interested in lossless compression techniques because their performance is exploited together with lossy compression techniques for images and videos, generally using a mixed approach. To achieve our intended objective, which is to study the performance of lossless compression methods, we first carried out a literature review, a summary of which enabled us to select the most relevant techniques, namely: arithmetic coding, LZW, Tunstall's algorithm, RLE, BWT, Huffman coding, and Shannon-Fano. Secondly, we designed a purposive text dataset with a repeating pattern in order to test the behavior and effectiveness of the selected compression techniques. Thirdly, we designed the compression algorithms and developed the programs (scripts) in Matlab in order to test their performance. Finally, following the tests conducted on relevant data constructed according to a deliberate model, the results show that these methods, listed in order of performance, are very satisfactory: LZW, arithmetic coding, Tunstall algorithm, and BWT + RLE. Likewise, it appears that, on the one hand, the performance of certain techniques relative to others is strongly linked to the sequencing and/or recurrence of the symbols that make up the message, and, on the other hand, to the cumulative time of encoding and decoding.
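As an illustration of the dictionary-based coding that ranked best above, the sketch below is a minimal LZW encoder in Python (the study's own scripts were written in Matlab; this translation only shows the principle, not the study's implementation).

```python
def lzw_encode(text: str) -> list[int]:
    """Minimal LZW encoder: returns a list of dictionary codes."""
    # Start with single-character entries (codes 0-255; ASCII/Latin-1 input assumed).
    dictionary = {chr(i): i for i in range(256)}
    next_code = 256
    w = ""
    codes = []
    for c in text:
        wc = w + c
        if wc in dictionary:
            w = wc                       # extend the current match
        else:
            codes.append(dictionary[w])  # emit the code for the longest known prefix
            dictionary[wc] = next_code   # learn the new phrase
            next_code += 1
            w = c
    if w:
        codes.append(dictionary[w])
    return codes

# A repeating pattern (as in the purposive dataset) compresses well:
print(lzw_encode("ABABABABABABABAB"))
```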
Funding: Supported by the Key Research and Development Plan of Shaanxi Province, No. 2021SF-298.
Abstract: BACKGROUND: Neurovascular compression (NVC) is the main cause of primary trigeminal neuralgia (TN) and hemifacial spasm (HFS). Microvascular decompression (MVD) is an effective surgical method for the treatment of TN and HFS caused by NVC. The judgment of NVC is a critical step in the preoperative evaluation of MVD and is related to the effect of MVD treatment. Magnetic resonance imaging (MRI) has been used to detect NVC prior to MVD for several years. Among the many MRI sequences, three-dimensional time-of-flight magnetic resonance angiography (3D TOF MRA) is the most widely used. However, 3D TOF MRA has some shortcomings in detecting NVC. Therefore, 3D TOF MRA combined with high-resolution T2-weighted imaging (HR T2WI) is considered a more effective method to detect NVC. AIM: To determine the value of 3D TOF MRA combined with HR T2WI in the judgment of NVC, and thus to assess its value in the preoperative evaluation of MVD. METHODS: Related studies published from inception to September 2022 in PubMed, Embase, Web of Science, and the Cochrane Library were retrieved. Studies that investigated 3D TOF MRA combined with HR T2WI to judge NVC in patients with TN or HFS were included according to the inclusion criteria. Studies without complete data or not relevant to the research topics were excluded. The Quality Assessment of Diagnostic Accuracy Studies checklist was used to assess the quality of the included studies. The publication bias of the included literature was examined by Deeks' test. An exact binomial rendition of the bivariate mixed-effects regression model was used to synthesize data. Data analysis was performed using the MIDAS module of the statistical software Stata 16.0. Two independent investigators extracted patient and study characteristics, and discrepancies were resolved by consensus. Individual and pooled sensitivities and specificities were calculated. The I² statistic and Q test were used to test heterogeneity. The study was registered on the PROSPERO website (registration No. CRD42022357158). RESULTS: Our search identified 595 articles, of which 12 (including 855 patients) fulfilled the inclusion criteria. Bivariate analysis showed that the pooled sensitivity and specificity of 3D TOF MRA combined with HR T2WI for detecting NVC were 0.96 [95% confidence interval (CI): 0.92-0.98] and 0.92 (95%CI: 0.74-0.98), respectively. The pooled positive likelihood ratio was 12.4 (95%CI: 3.2-47.8), the pooled negative likelihood ratio was 0.04 (95%CI: 0.02-0.09), and the pooled diagnostic odds ratio was 283 (95%CI: 50-1620). The area under the receiver operating characteristic curve was 0.98 (95%CI: 0.97-0.99). The studies showed no substantial heterogeneity (I² = 0, Q = 0.001, P = 0.50). CONCLUSION: Our results suggest that 3D TOF MRA combined with HR T2WI has excellent sensitivity and specificity for judging NVC in patients with TN or HFS. This method can be used as an effective tool for the preoperative evaluation of MVD.
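For orientation, the likelihood ratios and diagnostic odds ratio follow approximately from the pooled sensitivity and specificity; a naive plug-in of the point estimates gives values close to those reported above, with the small differences arising because the paper pools them through the bivariate mixed-effects model rather than this direct calculation:

```latex
\mathrm{LR}^{+}=\frac{Se}{1-Sp}=\frac{0.96}{0.08}=12,\qquad
\mathrm{LR}^{-}=\frac{1-Se}{Sp}=\frac{0.04}{0.92}\approx 0.043,\qquad
\mathrm{DOR}=\frac{\mathrm{LR}^{+}}{\mathrm{LR}^{-}}\approx 276.
```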
Funding: Supported in part by the General Program of the Natural Science Foundation of Hubei Province, China (Grant No. 2020CFB548) and a 2021 project of the Science and Technology Support Plan of Guizhou Province, China (Grant No. 202158413293820389).
Abstract: Objective: To evaluate the clinical efficacy of preoperative digital design combined with three-dimensional (3D) printing models to assist percutaneous kyphoplasty (PKP) treatment for thoracolumbar compression fractures. Methods: From January 2018 to August 2020, we obtained data on 99 patients diagnosed with thoracolumbar compression fractures. These patients were divided into a control group (n=50), which underwent traditional PKP surgery, and an observation group (n=49), which underwent preoperative digital design combined with 3D printing model-assisted PKP treatment. The clinical efficacy was evaluated with five parameters: operation time, number of intraoperative radiographs, visual analogue scale (VAS) score, Cobb angle change, and height compression rate of the injured vertebrae. Results: There were statistically significant differences in operation time and number of intraoperative radiographs between the two groups (P<0.05). The VAS score, Cobb angle change, and vertebral height compression rate all improved significantly after surgical treatment in both groups (P<0.05); however, there were no significant differences between the control group and the observation group for these three parameters either before or after surgery (P>0.05). Conclusions: Through the design of a preoperative surgical guide plate and the application of a 3D printing model to guide the operation, precise preoperative design of the surgical puncture site and puncture angle of the injured vertebra was realized, the number of intraoperative radiographs was reduced, the operation time was shortened, and the operation efficiency was improved.
Funding: Fundamental Research Funds for the Central Universities, China (Nos. 2232022D-11 and 22D128102/007); Jiangsu Transformation and Upgrading Funding Program for Industrial and Information Industry, China; Shanghai Natural Science Foundation of the Shanghai Municipal Science and Technology Commission, China (No. 20ZR1401600).
Abstract: With the wide use of three-dimensional woven spacer composites (3DWSCs), the market expects greater mechanical properties from this material. By changing the weft fastening method of the traditional I-shape pile yarns, we designed three-dimensional woven spacer fabrics (3DWSFs) and 3DWSCs with a weft V-shape to improve the compression performance of traditional 3DWSFs. The effects of weft binding structures, V-pile densities, and V-shaped angle were investigated in this paper. It is found that the compression resistance of 3DWSFs with the weft V-shape is improved compared with that of the weft I-shape; the fabric height recovery rate is as high as 95.7%, and the average elastic recovery rate is 59.39%. When the interlayer pile yarn density is the same, the weft V-shaped and weft I-shaped 3DWSCs have similar flatwise and edgewise compression performance. The compression properties of the composite improve as the density of the V-pile yarns increases. The flatwise compression load decreases as the V-shaped angle decreases. When the V-shaped angle is 28° or 42°, the weft V-shaped 3DWSCs perform exceptionally well in terms of anti-compression cushioning. The V-shaped weft binding method offers a novel approach to the structural design of 3DWSCs.
Funding: Supported by the National Natural Science Foundation of China (61076019, 61106018), the Aeronautical Science Foundation of China (20115552031), the China Postdoctoral Science Foundation (20100481134), the Jiangsu Province Key Technology R&D Program (BE2010003), the Nanjing University of Aeronautics and Astronautics Research Funding (NS2010115), and the Nanjing University of Aeronautics and Astronautics Initial Funding for Talented Faculty (1004-YAH10027).
Abstract: Test data compression and test resource partitioning (TRP) are essential to reduce the amount of test data in system-on-chip testing. A novel variable-to-variable-length compression code, called advanced frequency-directed run-length (AFDR) coding, is designed. Different from frequency-directed run-length (FDR) codes, AFDR encodes both 0-runs and 1-runs and assigns the same codewords to runs of equal length. It also modifies the codewords for 00 and 11 to improve the compression performance. Experimental results for the ISCAS 89 benchmark circuits show that AFDR codes achieve a higher compression ratio than FDR and other compression codes.
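The sketch below shows only the run-extraction front end that FDR-style schemes build on: a test vector is turned into a sequence of (bit value, run length) pairs, which a code such as AFDR would then map to variable-length codewords. The AFDR codeword table itself is not reproduced here.

```python
def runs(bits: str):
    """Split a test vector into (symbol, run_length) pairs, e.g. '0001100' -> [('0',3),('1',2),('0',2)]."""
    out = []
    i = 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1
        out.append((bits[i], j - i))
        i = j
    return out

# An FDR/AFDR-style code would now replace each run with a short codeword,
# giving shorter output when long runs dominate the test set.
print(runs("000000110000011111"))
```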
Abstract: This paper presents a new test data compression/decompression method for SoC testing, called hybrid run-length codes. The method makes a full analysis of the factors that influence the test parameters: compression ratio, test application time, and area overhead. To improve the compression ratio, the new method is based on variable-to-variable run-length codes, and a novel algorithm is proposed to reorder the test vectors and fill the unspecified bits in the pre-processing step. With a novel on-chip decoder, low test application time and low area overhead are obtained by the hybrid run-length codes. Finally, an experimental comparison on the ISCAS 89 benchmark circuits validates the proposed method.
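The paper proposes its own reordering and filling algorithm, which is not reproduced here; the sketch below only illustrates the general idea behind filling unspecified ("don't care") bits so that existing runs are extended rather than broken.

```python
def fill_dont_cares(vector: str) -> str:
    """Replace 'X' (unspecified) bits with the value of the nearest specified bit
    to the left (falling back to '0'), so runs are extended rather than broken."""
    filled = []
    last = "0"
    for b in vector:
        if b == "X":
            filled.append(last)
        else:
            filled.append(b)
            last = b
    return "".join(filled)

print(fill_dont_cares("0XX01XXX10XX"))  # -> '000011111000'
```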
Funding: The National Basic Research Program of China under contract No. 2013CB430304, the National High-Tech R&D Program of China under contract No. 2013AA09A505, and the National Natural Science Foundation of China under contract Nos 41030854, 40906015, 40906016, 41106005 and 41176003.
Abstract: In order to solve the so-called "bull-eye" problem caused by using a simple bilinear interpolation as the observational mapping operator in the cost function of the multigrid three-dimensional variational (3DVAR) data assimilation scheme, a smoothing term, equivalent to a penalty term, is introduced into the cost function as a means of troubleshooting. A theoretical analysis is first performed to identify what actually causes the "bull-eye" issue; the meaning of the smoothing term is then elucidated, and the uniqueness of the solution of the multigrid 3DVAR with the smoothing term added is discussed through a theoretical deduction for the one-dimensional (1D) case and two idealized data assimilation experiments (one- and two-dimensional (2D) cases). By exploring the relationship between the smoothing term and the recursive filter both theoretically and practically, it is revealed why satisfactory analysis results can be achieved by the proposed solution to this issue in the multigrid 3DVAR.
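For orientation, a generic 3DVAR cost function with an added smoothing (penalty) term has the schematic form below, with background, observation, and smoothing terms in that order; the last term penalizes small-scale roughness in the analysis. The precise differential operator and weight used in the multigrid scheme are defined in the paper itself, so this is only an assumed illustrative form.

```latex
J(\mathbf{x})=\tfrac{1}{2}(\mathbf{x}-\mathbf{x}_b)^{\mathrm{T}}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b)
+\tfrac{1}{2}(H\mathbf{x}-\mathbf{y})^{\mathrm{T}}\mathbf{R}^{-1}(H\mathbf{x}-\mathbf{y})
+\tfrac{\lambda}{2}\,\lVert \nabla^{2}\mathbf{x}\rVert^{2}
```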
Funding: This project was supported by the National Natural Science Foundation of China (60532060), the Hainan Education Bureau Research Project (Hjkj200602), and the Hainan Natural Science Foundation (80551).
Abstract: A nonlinear data analysis algorithm, namely empirical data decomposition (EDD), is proposed, which can perform adaptive analysis of observed data. The analysis filter, which is not a linear constant-coefficient filter, is determined automatically by the observed data and can implement multi-resolution analysis like the wavelet transform. The algorithm is suitable for analyzing non-stationary data and can effectively remove the correlation in the observed data. Then, by discussing the application of EDD to image compression, the paper presents a two-dimensional data decomposition framework and makes some modifications to the contexts used by Embedded Block Coding with Optimized Truncation (EBCOT). Simulation results show that EDD is more suitable for non-stationary image data compression.
基金This project is supported by Provincial Key Project of Science and Technology of Zhejiang(No.2003C21031).
Abstract: NC code or an STL file can be generated directly from measuring data in a fast reverse-engineering mode. Compressing the massive data from the laser scanner is the key to the new mode. An adaptive compression method based on a triangulated-surface model is put forward. Normal-vector angles between triangles are computed to find prime vertices for removal. A ring data structure is adopted to store the massive data effectively; it allows efficient retrieval of all neighboring vertices and triangles of a given vertex. To avoid long and thin triangles, a new re-triangulation approach based on the normalized minimum vertex distance is proposed, in which the vertex distance and the interior angle of the triangle are considered. Results indicate that the compression method has high efficiency and reliable precision. The method can be applied in fast reverse engineering to acquire an optimal subset of the original massive data.
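A minimal sketch of the flatness test implied above: compute the unit normals of two adjacent triangles and the angle between them; vertices whose surrounding triangles are nearly coplanar (small normal-vector angles) are candidates for removal. The removal threshold and the ring-data-structure bookkeeping are the paper's own and are not reproduced here.

```python
import numpy as np

def triangle_normal(p0, p1, p2):
    """Unit normal of a triangle given its three vertices."""
    n = np.cross(p1 - p0, p2 - p0)
    return n / np.linalg.norm(n)

def normal_angle(tri_a, tri_b):
    """Angle (radians) between the normals of two triangles."""
    na, nb = triangle_normal(*tri_a), triangle_normal(*tri_b)
    cosang = np.clip(np.dot(na, nb), -1.0, 1.0)
    return np.arccos(cosang)

# Two triangles sharing an edge; a small angle means the surface is locally flat,
# so the shared vertices carry little shape information.
a = [np.array([0., 0., 0.]), np.array([1., 0., 0.]), np.array([0., 1., 0.])]
b = [np.array([1., 0., 0.]), np.array([1., 1., 0.02]), np.array([0., 1., 0.])]
print(normal_angle(a, b))
```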
Abstract: Vector quantization (VQ) is an important data compression method. The key step in VQ encoding is to find the closest vector among N vectors for a given feature vector. Many classical linear search algorithms take O(N) steps of distance computation between two vectors. A quantum VQ iteration and a corresponding quantum VQ encoding algorithm that take O(√N) steps are presented in this paper. The unitary operation of distance computation can be performed on a number of vectors simultaneously because a quantum state can exist in a superposition of states. The quantum VQ iteration comprises three oracles; by contrast, many quantum algorithms, such as Shor's factorization algorithm and Grover's algorithm, have only one oracle. An entangled state is generated and used, whereas the state in Grover's algorithm is not entangled. The quantum VQ iteration is a rotation over a subspace, whereas the Grover iteration is a rotation over the global space. The quantum VQ iteration thus extends the Grover iteration to more complex searches that require more oracles. The method of the quantum VQ iteration is universal.
Funding: The authors would like to acknowledge the support from Project "973" of the State Key Fundamental Research under grant G1998030415.
Abstract: Process data compression and trending are essential for improving control system performance. The Swing Door Trending (SDT) algorithm is well designed to adapt to the process trend while retaining the merit of simplicity, but it cannot handle outliers or adapt to fluctuations in the actual data. An Improved SDT (ISDT) algorithm is proposed in this paper. The effectiveness and applicability of the ISDT algorithm are demonstrated by computations on both synthetic and real process data. By applying an adaptive recording limit as well as outlier-detecting rules, a higher compression ratio is achieved and outliers are identified and eliminated. The fidelity of the algorithm is also improved. It can be used in both online and batch mode, and can be integrated into existing software packages without change.
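A minimal sketch of the classical SDT core (the fixed-deviation "swinging door" test) is shown below; the ISDT extensions described above, namely the adaptive recording limit and the outlier rules, are not included.

```python
def sdt_compress(times, values, dev):
    """Basic Swing Door Trending: keep a point only when no straight line from the
    last archived point can stay within +/-dev of all points seen since then."""
    archived = [(times[0], values[0])]
    t0, y0 = times[0], values[0]
    s_up, s_low = float("-inf"), float("inf")            # slopes of the upper/lower doors
    for i in range(1, len(times)):
        t, y = times[i], values[i]
        s_up = max(s_up, (y - (y0 + dev)) / (t - t0))    # upper door pivots at y0+dev
        s_low = min(s_low, (y - (y0 - dev)) / (t - t0))  # lower door pivots at y0-dev
        if s_up > s_low:                 # doors opened past parallel: archive the previous point
            t0, y0 = times[i - 1], values[i - 1]
            archived.append((t0, y0))
            s_up = (y - (y0 + dev)) / (t - t0)           # restart the doors against the current point
            s_low = (y - (y0 - dev)) / (t - t0)
    archived.append((times[-1], values[-1]))             # always keep the last sample
    return archived

data_t = list(range(10))
data_y = [0, 0.1, 0.2, 0.3, 2.0, 2.1, 2.2, 2.1, 2.0, 2.0]
print(sdt_compress(data_t, data_y, dev=0.25))
```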
Abstract: A real-time data compression wireless sensor network based on the Lempel-Ziv-Welch (LZW) encoding algorithm is designed to cope with the increasing data volume of terminal nodes when ZigBee is used for long-distance wireless communication. The system consists of a terminal node, a router, a coordinator, and an upper computer. The terminal node is responsible for storing and sending the collected data after it has been compressed with the LZW algorithm; the router is responsible for relaying data in the wireless network; the coordinator is responsible for sending the received data to the upper computer. In terms of network functions, the development and configuration of the CC2530 chips on the terminal, router, and coordinator nodes are completed using the Z-stack protocol stack, and the network is successfully organized. Simulation analysis and test verification show that the system achieves wireless acquisition and storage of remote data and reduces the network occupancy rate through data compression, which gives it practical value and application prospects.
Abstract: The prediction of solar radiation is important for several applications in renewable energy research. A number of geographical variables affect solar radiation prediction, so identifying these variables is very important for accurate prediction. This paper presents a hybrid method for the compression of solar radiation data using predictive analysis. Minute-wise solar radiation is predicted using different artificial neural network (ANN) models, namely the multi-layer perceptron neural network (MLPNN), cascade feed-forward back-propagation (CFNN), and Elman back-propagation (ELMNN) networks. The root mean square error (RMSE) is used to evaluate the prediction accuracy of the three ANN models. The information and knowledge gained from the present study could improve the accuracy of analyses concerning climate studies and help in congestion control.
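As a minimal illustration of the evaluation pipeline described above (not the paper's own models, features, or data), the sketch below fits a small scikit-learn MLP regressor on synthetic stand-in features and scores it with RMSE.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Hypothetical feature matrix (e.g. time of day, temperature, humidity) and a
# synthetic minute-wise radiation target; the paper uses its own measured data.
rng = np.random.default_rng(0)
X = rng.random((1000, 3))
y = 800 * X[:, 0] + 50 * rng.standard_normal(1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)

rmse = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))  # evaluation metric used in the paper
print(f"RMSE: {rmse:.2f}")
```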
Funding: The National Natural Science Foundation of China (No. 11803036) and the Climbing Program of Changchun University (No. ZKP202114).
Abstract: Multispectral image compression and encryption algorithms commonly suffer from issues such as low compression efficiency, lack of synchronization between the compression and encryption processes, and degradation of the intrinsic image structure. A novel approach is proposed to address these issues. Firstly, a chaotic sequence is generated using the Lorenz three-dimensional chaotic map to initiate the encryption process; it is XORed with each spectral band of the multispectral image to complete the initial encryption of the image. Then, a two-dimensional lifting 9/7 wavelet transform is applied to the processed image. Next, a key-sensitive Arnold scrambling technique is employed on the resulting low-frequency image, which effectively eliminates spatial redundancy in the multispectral image while enhancing the encryption process. To further optimize the compression and encryption processes, fast Tucker decomposition is applied to the wavelet sub-band tensor, which effectively removes both spectral redundancy and residual spatial redundancy in the multispectral image. Finally, the core tensor and factor matrices obtained from the decomposition are subjected to entropy encoding, and real-time chaotic encryption is implemented during the encoding process, effectively integrating compression and encryption. The results show that the proposed algorithm is suitable for applications with demanding compression and encryption requirements, and it provides valuable insights for the development of compression and encryption in the multispectral field.
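A minimal sketch of the first stage described above (Lorenz-based chaotic XOR of a spectral band) is shown below; the initial conditions and quantization rule are stand-in assumptions, and the later stages (wavelet transform, Arnold scrambling, Tucker decomposition, entropy coding) are omitted.

```python
import numpy as np

def lorenz_keystream(n, x0=0.1, y0=0.2, z0=0.3, dt=0.01,
                     sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Generate n key bytes from the Lorenz system via simple Euler integration."""
    x, y, z = x0, y0, z0
    key = np.empty(n, dtype=np.uint8)
    for i in range(n):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        key[i] = int(abs(x) * 1e6) % 256        # one simple quantization rule (assumed)
    return key

band = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)   # one spectral band
key = lorenz_keystream(band.size).reshape(band.shape)
encrypted = band ^ key          # XOR encryption of the band
recovered = encrypted ^ key     # XOR with the same keystream decrypts
assert np.array_equal(recovered, band)
```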
Abstract: A method of data compression using an orthogonal transform is introduced so as to ensure minimal distortion when the signal is restored. Thanks to the transform, the data can be compressed according to the required precision, and the compression ratio is closely related to that precision. The results show the method to be favorable for different kinds of data compression.
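The abstract does not name the specific orthogonal transform, so the sketch below uses the DCT purely as an illustrative choice: transform the signal, keep only the largest coefficients (which sets the precision/ratio trade-off), and invert to restore an approximation.

```python
import numpy as np
from scipy.fft import dct, idct

signal = np.cos(np.linspace(0, 4 * np.pi, 256)) + 0.05 * np.random.randn(256)

coeffs = dct(signal, norm="ortho")           # orthogonal transform
keep = 16                                    # number of retained coefficients
idx = np.argsort(np.abs(coeffs))[:-keep]     # indices of the smallest coefficients
compressed = coeffs.copy()
compressed[idx] = 0.0                        # drop them

restored = idct(compressed, norm="ortho")    # inverse transform restores an approximation
print("compression ratio:", signal.size / keep)
print("RMS distortion:", np.sqrt(np.mean((signal - restored) ** 2)))
```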
Abstract: Due to the large scale and complexity of civil infrastructures, structural health monitoring typically requires a substantial number of sensors, which consequently generate huge volumes of sensor data. Innovative sensor data compression techniques are highly desirable to facilitate efficient data storage and remote retrieval of sensor data. This paper presents a vibration sensor data compression algorithm based on the Differential Pulse Code Modulation (DPCM) method and considers the effects of signal distortion due to lossy data compression on structural system identification. The DPCM system concerned consists of two primary components: a linear predictor and a quantizer. For the DPCM system considered in this study, the least-squares method is used to derive the linear predictor coefficients, and a Jayant quantizer is used for scalar quantization. A 5-DOF model structure is used as the prototype structure in the numerical study. Numerical simulation was carried out to study the performance of the proposed DPCM-based data compression algorithm as well as its effect on the accuracy of structural identification, including modal parameters and second-order structural parameters such as stiffness and damping coefficients. It is found that the DPCM-based sensor data compression method is capable of reducing the raw sensor data size to a significant extent while having only a minor effect on the modal parameters and second-order structural parameters identified from the reconstructed sensor data.
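A minimal sketch of the DPCM chain described above: the predictor coefficients are fitted by least squares on the signal, and the prediction residual is quantized. One labeled simplification: a fixed-step uniform quantizer stands in for the adaptive Jayant quantizer used in the paper.

```python
import numpy as np

def dpcm_encode(x, order=2, step=0.05):
    """DPCM sketch: least-squares linear predictor + uniform (fixed-step) quantizer.
    The paper uses an adaptive Jayant quantizer; a fixed step is used here for brevity."""
    # Fit predictor coefficients a so that x[n] ~= sum_k a[k] * x[n-1-k] (least squares).
    A = np.column_stack([x[order - k - 1 : len(x) - k - 1] for k in range(order)])
    a, *_ = np.linalg.lstsq(A, x[order:], rcond=None)

    codes, recon = [], list(x[:order])          # the first samples are sent uncompressed
    for n in range(order, len(x)):
        pred = sum(a[k] * recon[n - 1 - k] for k in range(order))
        q = int(round((x[n] - pred) / step))    # quantized prediction error (the transmitted code)
        codes.append(q)
        recon.append(pred + q * step)           # decoder-side reconstruction
    return a, codes, np.array(recon)

t = np.linspace(0, 1, 500)
signal = np.sin(2 * np.pi * 5 * t) + 0.01 * np.random.randn(t.size)
a, codes, recon = dpcm_encode(signal)
print("max reconstruction error:", np.max(np.abs(recon - signal)))
```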
Abstract: This paper presents a simple but effective algorithm to speed up the codebook search in a vector quantization scheme for SAR raw data when a minimum square error (MSE) criterion is used. A considerable reduction in the number of operations is achieved.
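The abstract does not spell out the specific speed-up; the sketch below shows one common classical way to reduce the number of operations in an MSE codebook search, partial-distance elimination, purely as an illustrative baseline rather than the paper's own algorithm.

```python
import numpy as np

def nearest_codeword_pde(x, codebook):
    """Full-search VQ with partial-distance elimination under the MSE criterion."""
    best_idx, best_dist = 0, np.inf
    for i, c in enumerate(codebook):
        d = 0.0
        for xj, cj in zip(x, c):
            d += (xj - cj) ** 2
            if d >= best_dist:          # early exit: this codeword cannot win
                break
        else:
            best_idx, best_dist = i, d  # loop finished: new best codeword
    return best_idx, best_dist

codebook = np.random.randn(256, 16)     # 256 codewords of dimension 16
x = np.random.randn(16)
print(nearest_codeword_pde(x, codebook))
```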
Abstract: Multistage Vector Quantization (MSVQ) can achieve very low encoding and storage complexity in comparison with unstructured vector quantization. However, conventional MSVQ is suboptimal with respect to the overall performance measure. This paper proposes a new technique for designing the decoder codebook, which differs from the encoder codebook, in order to optimise the overall performance. The performance improvement is achieved with no effect on encoding complexity, in terms of either storage or time, but with a modest increase in the storage complexity of the decoder.
Abstract: A sixteen-tree method for data compression of bilevel images is described. This method has high efficiency, incurs no information loss during compression, and is easy to realize.