Fund: Supported by the National Natural Science Foundation of China (60672089, 60772075).
Abstract: The Consultative Committee for Space Data Systems (CCSDS) File Delivery Protocol (CFDP) recommendation for reliable transmission gives no detailed transmission procedure or delay calculation for the prompted negative acknowledgement and asynchronous negative acknowledgement models. CFDP is designed to provide data and storage management, store and forward, custody transfer, and reliable end-to-end delivery over deep-space links characterized by huge latency, intermittent connectivity, asymmetric bandwidth, and high bit error rate (BER). Four reliable transmission models are analyzed, and the expected file-delivery time is calculated for different transmission rates, numbers and sizes of packet data units, BERs, frequencies of external events, etc. By comparing the four CFDP models, the BER requirement for typical deep-space missions is obtained and rules for choosing CFDP models under different uplink state information are given, which provides a reference for protocol model selection, utilization, and modification.
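A hedged sketch of the kind of expected file-delivery time calculation referred to above, under simplifying assumptions (independent bit errors, fixed-size PDUs, a fixed number of NAK-driven retransmission rounds) rather than the paper's four models:

```python
import math

# Simplified, illustrative delay model; not the paper's derivation.
def expected_delivery_time(file_bits, pdu_bits, rate_bps, ber, rtt_s, nak_rounds=3):
    n_pdus = math.ceil(file_bits / pdu_bits)
    p_loss = 1.0 - (1.0 - ber) ** pdu_bits        # probability a PDU is corrupted
    expected_copies = 1.0 / (1.0 - p_loss)        # mean transmissions per PDU
    transmit_time = n_pdus * pdu_bits * expected_copies / rate_bps
    return transmit_time + nak_rounds * rtt_s     # plus NAK/retransmission round trips

# Example: 1 MB file, 1 KB PDUs, 1 Mbit/s link, BER 1e-6, 40-minute round-trip time
print(expected_delivery_time(8e6, 8e3, 1e6, 1e-6, 2400.0))
```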
Fund: Supported by ZTE Industry-Academia-Research Cooperation Funds.
Abstract: Data layout in a file system is the organization of data stored on external storage devices, and it has a huge impact on the performance of storage systems. We survey three main kinds of data layout in traditional file systems: the in-place-update file system, the log-structured file system, and the copy-on-write file system. Each has its own strengths and weaknesses under different circumstances. We also include a recent use of persistent layout in a file system that combines both flash memory and byte-addressable non-volatile memory. With this survey, we conclude that persistent data layout in file systems may evolve dramatically in the era of emerging non-volatile memory.
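A toy contrast of the three surveyed update strategies, on a made-up byte-array device rather than any real on-disk format:

```python
# Toy byte-addressable "device"; offsets and sizes are invented for illustration.
device = bytearray(64)
log_head = 16                      # next free offset in the append-only log region

def in_place_update(offset, data):
    """In-place update: overwrite the old bytes where they already live."""
    device[offset:offset + len(data)] = data
    return offset                  # the block keeps its address

def log_structured_update(data):
    """Log-structured: always append at the log head; the old copy becomes garbage."""
    global log_head
    offset = log_head
    device[offset:offset + len(data)] = data
    log_head += len(data)          # a cleaner must later reclaim the stale copy
    return offset

def copy_on_write_update(data, free_offset):
    """Copy-on-write: write the new version to free space, then repoint the metadata."""
    device[free_offset:free_offset + len(data)] = data
    return free_offset             # parent metadata is updated to this new address

print(in_place_update(0, b"AAAA"), log_structured_update(b"BBBB"),
      copy_on_write_update(b"CCCC", 48))
```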
Abstract: In this paper, we analyze the complexity and entropy of different data compression algorithms: LZW, Huffman, fixed-length code (FLC), and Huffman after using fixed-length code (HFLC). We test these algorithms on files of different sizes and conclude that LZW is the best at every compression scale we tested, especially on large files, followed by Huffman, HFLC, and FLC, respectively. Data compression is still an important research topic with many applications. We therefore suggest continuing work in this field, trying to combine two techniques to reach a better one, or using another source mapping (Hamming), such as embedding a linear array into a hypercube, together with proven techniques such as Huffman coding, to reach good results.
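A hedged sketch of the per-file measurements such a comparison rests on: zero-order entropy, fixed-length code cost, and Huffman code cost (LZW and HFLC are omitted for brevity, and the sample data is invented):

```python
import heapq, math
from collections import Counter
from itertools import count

def shannon_entropy(freq, n):
    """Zero-order entropy in bits per symbol."""
    return -sum(c / n * math.log2(c / n) for c in freq.values())

def huffman_code_lengths(freq):
    """Per-symbol code lengths (bits) from a standard bottom-up Huffman tree."""
    ties = count()                              # tie-breaker so dicts are never compared
    heap = [(f, next(ties), {s: 0}) for s, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, a = heapq.heappop(heap)
        f2, _, b = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(ties),
                              {s: d + 1 for s, d in {**a, **b}.items()}))
    return heap[0][2]

data = b"this is a small illustrative sample of data to be compressed " * 64
freq, n = Counter(data), len(data)
flc_bits = n * math.ceil(math.log2(len(freq)))  # fixed-length code cost
lengths = huffman_code_lengths(freq)
huffman_bits = sum(freq[s] * l for s, l in lengths.items())
print(f"entropy  : {shannon_entropy(freq, n):.3f} bits/symbol")
print(f"FLC cost : {flc_bits} bits   Huffman cost: {huffman_bits} bits")
```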
Fund: Supported by the National Natural Science Foundation of China (No. 50475117) and the Tianjin Natural Science Foundation (No. 06YFJMJC03700).
Abstract: Integrating heterogeneous data sources is a precondition for enterprises to share data. Highly efficient data updating can both save system expense and offer real-time data, and rapidly modifying data in the pre-processing area of the data warehouse is one of the hot issues. An extract-transform-load (ETL) design is proposed based on a new data algorithm called Diff-Match, which is developed by utilizing mode matching and data-filtering technology. It can accelerate data renewal, filter heterogeneous data, and seek out differing sets of data. Its efficiency has been proved by its successful application in an electric apparatus group enterprise.
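A minimal sketch of key-based change detection in the spirit of the description above; it is a generic diff over keyed records, not the paper's Diff-Match algorithm, and the field names are made up:

```python
# Generic diff of a source extract against the warehouse staging area.
def diff_match(source_rows, warehouse_rows, key="id"):
    """Return rows to insert, update, and delete so the warehouse matches the source."""
    src = {r[key]: r for r in source_rows}
    dwh = {r[key]: r for r in warehouse_rows}
    inserts = [r for k, r in src.items() if k not in dwh]
    deletes = [r for k, r in dwh.items() if k not in src]
    updates = [r for k, r in src.items() if k in dwh and r != dwh[k]]
    return inserts, updates, deletes

source = [{"id": 1, "qty": 5}, {"id": 2, "qty": 7}]
warehouse = [{"id": 1, "qty": 4}, {"id": 3, "qty": 9}]
ins, upd, rm = diff_match(source, warehouse)
print(ins, upd, rm)   # only the changed rows are loaded, saving system expense
```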
Fund: This research was supported by Universiti Sains Malaysia (USM) and the Ministry of Higher Education Malaysia through the Fundamental Research Grant Scheme (FRGS Grant No. FRGS/1/2020/TK0/USM/02/1).
Abstract: In the Big Data era, numerous sources and environments generate massive amounts of data. This enormous amount of data necessitates specialized, advanced tools and procedures that effectively evaluate the information and anticipate decisions for future changes. Hadoop is used to process this kind of data, but it is known to handle vast volumes of data far more efficiently than large numbers of tiny files, which cause inefficiency in the framework. This study proposes a novel solution to the problem by applying the Enhanced Best Fit Merging algorithm (EBFM), which merges files depending on predefined parameters (type and size). Implementing this algorithm ensures that the generated file sizes and the maximum block size stay in the same range. Its primary goal is to dynamically merge files according to the stated criteria, based on file type, to guarantee the efficacy and efficiency of the established system. This procedure takes place before the files are made available to the Hadoop framework. Additionally, the files generated by the system are named with specific keywords to ensure there is no data loss (file overwrite). The proposed approach guarantees the generation of the fewest possible large files, which reduces the input/output memory burden and corresponds to the Hadoop framework's effectiveness. The findings show that the proposed technique enhances the framework's performance by approximately 64% when all other potential performance-impairing variables are taken into account. The proposed approach is implementable in any environment that uses the Hadoop framework, not limited to smart cities, real-time data analysis, etc.
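A minimal sketch of the merging idea, grouping files by type and best-fit packing them toward the block size; the 128 MB block size, the naming scheme, and the data are assumptions, not the exact EBFM procedure:

```python
BLOCK_SIZE = 128 * 1024 * 1024   # assumed HDFS block size in bytes

def best_fit_merge(files, block_size=BLOCK_SIZE):
    """files: list of (name, size_bytes, type). Returns merged bins keyed by output name."""
    bins = []                                          # each bin: {"type", "size", "files"}
    for name, size, ftype in sorted(files, key=lambda f: -f[1]):
        # best fit: the same-type bin whose remaining space is smallest yet sufficient
        candidates = [b for b in bins
                      if b["type"] == ftype and b["size"] + size <= block_size]
        if candidates:
            target = min(candidates, key=lambda b: block_size - b["size"])
        else:
            target = {"type": ftype, "size": 0, "files": []}
            bins.append(target)
        target["files"].append(name)
        target["size"] += size
    # name each merged file with a keyword so no output overwrites another
    return {f"merged_{b['type']}_{i}": b["files"] for i, b in enumerate(bins)}

small_files = [("a.log", 30 << 20, "log"), ("b.log", 90 << 20, "log"),
               ("c.csv", 60 << 20, "csv"), ("d.log", 20 << 20, "log")]
print(best_fit_merge(small_files))
```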
Fund: Supported by the National Natural Science Foundation of China (62072325), the Shanxi Scholarship Council of China (HGKY2019081), the Fundamental Research Program of Shanxi Province (202103021224272), and TYUST SRIF (20212039).
Abstract: Data hiding (DH) is an important technology for securely transmitting secret data in networks, and has increasingly become a research hotspot throughout the world. However, for Joint Photographic Experts Group (JPEG) images, existing data hiding schemes find it difficult to balance the contradiction among embedding capacity, visual quality, and file size increment. Thus, to deal with this problem, a high-imperceptibility data hiding scheme for JPEG images is proposed based on direction modification. First, the proposed scheme sorts all quantized discrete cosine transform (DCT) blocks in ascending order according to the number of non-consecutive-zero alternating current (AC) coefficients. Then it selects non-consecutive-zero AC coefficients with absolute values less than or equal to 1 at the same frequency position in two adjacent blocks for pairing. Finally, 2 bits of secret data can be embedded into each coefficient pair by using the filled reference matrix and the designed direction modification rules. Experiments were conducted on 5 standard test images and on 1000 images from the BOSSbase dataset. The results show that the visual quality of the proposed scheme is improved by 1-4 dB compared with the comparison schemes, and the file size increment is reduced to at most 15% of that of the comparison schemes.
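A hedged sketch of the block-sorting and pair-selection steps only, on made-up quantized DCT data; the reference matrix and the direction-modification embedding rules are not reproduced here:

```python
import numpy as np

def select_coefficient_pairs(blocks):
    """blocks: (N, 64) int array of quantized DCT blocks in zig-zag order (index 0 = DC)."""
    def nonzero_ac_count(block):
        return np.count_nonzero(block[1:])           # proxy for non-consecutive-zero ACs

    order = np.argsort([nonzero_ac_count(b) for b in blocks])   # ascending order
    pairs = []
    for i in range(0, len(order) - 1, 2):             # adjacent blocks after sorting
        a, b = blocks[order[i]], blocks[order[i + 1]]
        for pos in range(1, 64):                      # same frequency position, AC only
            if a[pos] != 0 and b[pos] != 0 and abs(a[pos]) <= 1 and abs(b[pos]) <= 1:
                pairs.append((order[i], order[i + 1], pos))   # each pair can carry 2 bits
    return pairs

rng = np.random.default_rng(0)
toy_blocks = rng.integers(-2, 3, size=(8, 64))        # invented quantized DCT blocks
print(len(select_coefficient_pairs(toy_blocks)), "candidate coefficient pairs")
```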
Abstract: As a new generation of ship communication system, the VHF Data Exchange System (VDES) has broad application prospects. Because the satellite moves at high speed relative to the ship, the uplink Application-Specific Message (ASM) link in VDES suffers a large Doppler frequency shift, and channel parameters such as the frequency offset estimated at the receiver from the known training sequence alone cannot meet the performance requirements for correct demodulation. A demodulation method based on decision feedback is therefore proposed: the data are demodulated segment by segment, which shortens the data length of each demodulation pass and improves its tolerance to frequency offset, and the demodulation result of each segment is used as the pilot for the next, as-yet-undemodulated segment to estimate the channel parameters of the current data. Simulation results show that the proposed algorithm greatly outperforms coherent demodulation without feedback. On this basis, correct demodulation of the pilot-free ASM uplink was implemented on a programmable logic device.
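A minimal sketch of the decision-feedback principle on a toy QPSK link with a Doppler-like frequency offset; the actual ASM waveform, training sequence, and FPGA implementation are not modeled:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sym, seg_len = 1024, 64
sym = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, n_sym)))  # QPSK symbols
f_off = 0.001                                        # Doppler-like offset, cycles/symbol
rx = sym * np.exp(2j * np.pi * f_off * np.arange(n_sym))
rx += 0.05 * (rng.standard_normal(n_sym) + 1j * rng.standard_normal(n_sym))

phase0, freq = 0.0, 0.0                              # carrier model: theta(n) = phase0 + freq*n
decisions = np.empty(n_sym, dtype=complex)
for start in range(0, n_sym, seg_len):
    idx = np.arange(start, start + seg_len)
    corrected = rx[idx] * np.exp(-1j * (phase0 + freq * idx))
    dec = (np.sign(corrected.real) + 1j * np.sign(corrected.imag)) / np.sqrt(2)
    decisions[idx] = dec
    # decision feedback: the just-demodulated segment acts as pilots for re-estimating
    # the residual phase ramp folded into the model used on the next segment
    residual = np.unwrap(np.angle(corrected * np.conj(dec)))
    slope, intercept = np.polyfit(idx, residual, 1)
    freq += slope
    phase0 += intercept

errors = np.sum((np.sign(decisions.real) != np.sign(sym.real)) |
                (np.sign(decisions.imag) != np.sign(sym.imag)))
print(f"symbol errors: {errors} / {n_sym}")
```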