This paper was motivated by the existing problems of cloud data storage at Imo State University, Nigeria, where outsourced data has led to data loss and to the misuse of customer information by unauthorized users or hackers, leaving customer/client data visible and unprotected. Clients/customers were also exposed to enormous risk from defective equipment, bugs, faulty servers, and suspicious actions. The aim of this paper is therefore to analyze a secure model that uses the Unicode Transformation Format (UTF) Base64 algorithm to store data securely in the cloud. The Object-Oriented Hypermedia Analysis and Design Methodology (OOHADM) was adopted. Python was used to develop the security model; role-based access control (RBAC) and multi-factor authentication (MFA) were integrated, to strengthen security, into an information system developed with HTML5, JavaScript, Cascading Style Sheets (CSS) version 3, and PHP 7. The paper also discusses concepts such as the development of cloud computing, characteristics of cloud computing, cloud deployment models, and cloud service models. The results showed that the proposed enhanced security model for information systems on a corporate platform handles multiple authorization and authentication threats: a single login page directs all login requests from the different modules to one Single Sign-On Server (SSOS), which in turn redirects authenticated users to their requested resources/modules, leveraging Geo-location integration for physical-location validation. The newly developed system addresses the shortcomings of the existing systems and reduces the time and resources incurred in using them.
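A minimal sketch of the Base64 step described above, assuming the model encodes payloads client-side before upload; the function names and sample record are illustrative and not taken from the paper's implementation. Base64 is an encoding rather than an encryption scheme, so in the paper's design it is layered with the RBAC and MFA controls.

```python
import base64

def encode_for_cloud(plaintext: bytes) -> str:
    """Base64-encode a payload so it travels and is stored as plain ASCII text."""
    return base64.b64encode(plaintext).decode("ascii")

def decode_from_cloud(stored: str) -> bytes:
    """Reverse the Base64 transformation after an authenticated download."""
    return base64.b64decode(stored.encode("ascii"))

# Illustrative round trip for a hypothetical student record fragment.
record = "matric_no=IMSU/2021/001;grade=A".encode("utf-8")
stored_form = encode_for_cloud(record)
assert decode_from_cloud(stored_form) == record
print(stored_form)
```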
DNA molecules are green materials with great potential for high-density and long-term data storage. However, the current data-writing process of DNA data storage via DNA synthesis suffers from high costs and the production of hazards, limiting its practical applications. Here, we developed a DNA movable-type storage system that can utilize DNA fragments pre-produced by cell factories for data writing. In this system, these pre-generated DNA fragments, referred to herein as "DNA movable types," are used as basic writing units in a repetitive way. The process of data writing is achieved by the rapid assembly of these DNA movable types, thereby avoiding the costly and environmentally hazardous process of de novo DNA synthesis. With this system, we successfully encoded 24 bytes of digital information in DNA and read it back accurately by means of high-throughput sequencing and decoding, thereby demonstrating the feasibility of this system. Through its repetitive usage and biological assembly of DNA movable-type fragments, this system exhibits excellent potential for writing cost reduction, opening up a novel route toward an economical and sustainable digital data-storage technology.
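The mapping below is a hypothetical toy, not the paper's encoding: it treats a tiny fixed library of pre-made fragments as "movable types" and writes data by concatenating them, which is the reuse-and-assemble idea the system relies on.

```python
# Hypothetical fragment library: each 2-bit value maps to one pre-synthesized fragment.
MOVABLE_TYPES = {0b00: "ATCG", 0b01: "GCTA", 0b10: "TTAC", 0b11: "CAGT"}
REVERSE = {seq: bits for bits, seq in MOVABLE_TYPES.items()}

def write_dna(data: bytes) -> str:
    """'Assemble' a record by concatenating pre-made fragments, two bits per fragment."""
    fragments = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            fragments.append(MOVABLE_TYPES[(byte >> shift) & 0b11])
    return "".join(fragments)

def read_dna(sequence: str) -> bytes:
    """Decode by cutting the assembled sequence back into fragment-sized chunks."""
    out = bytearray()
    for i in range(0, len(sequence), 16):        # 4 fragments of length 4 = 1 byte
        byte = 0
        for j in range(i, i + 16, 4):
            byte = (byte << 2) | REVERSE[sequence[j:j + 4]]
        out.append(byte)
    return bytes(out)

payload = bytes(range(24))                        # 24 bytes, as in the demonstration above
assert read_dna(write_dna(payload)) == payload
```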
Long-term optical data storage (ODS) technology is essential to break the bottleneck of high energy consumption for information storage in the current era of big data. Here, ODS with an ultralong lifetime of 2×10^(7) years is attained through single ultrafast laser pulse induced reduction of Eu^(3+) ions and tailoring of optical properties inside Eu-doped aluminosilicate glasses. We demonstrate that the induced local modifications in the glass can withstand temperatures of up to 970 K and strong ultraviolet light irradiation with a power density of 100 kW/cm^(2). Furthermore, the active Eu^(2+) ions exhibit strong and broadband emission with a full width at half maximum reaching 190 nm, and the photoluminescence (PL) is flexibly tunable across the whole visible region by regulating the alkaline earth metal ions in the glasses. The developed technology and materials will be of great significance in photonic applications such as long-term ODS.
To increase the storage capacity in holographic data storage (HDS), the information to be stored is encoded into a complex amplitude. Fast and accurate retrieval of amplitude and phase from the reconstructed beam is necessary during data readout in HDS. In this study, we proposed a complex amplitude demodulation method based on deep learning from a single-shot diffraction intensity image and verified it by a non-interferometric lensless experiment demodulating four-level amplitude and four-level phase. By analyzing the correlation between the diffraction intensity features and the amplitude and phase encoding data pages, the inverse problem was decomposed into two backward operators, represented by two convolutional neural networks (CNNs), to demodulate amplitude and phase respectively. The experimental system is simple, stable, and robust, and it needs only a single diffraction image to realize the direct demodulation of both amplitude and phase. To our knowledge, this is the first time in HDS that multilevel complex amplitude demodulation has been achieved experimentally from one diffraction intensity image without iterations.
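A minimal sketch of the two-network idea, assuming small illustrative CNN architectures in PyTorch; the layer sizes, image size, and four-level class maps are placeholders, not the authors' trained models.

```python
import torch
import torch.nn as nn

class DemodCNN(nn.Module):
    """Small convolutional decoder: diffraction intensity in, per-pixel 4-level logits out."""
    def __init__(self, levels: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, levels, kernel_size=1),   # logits over the 4 encoding levels
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

amplitude_net, phase_net = DemodCNN(), DemodCNN()   # one backward operator per quantity
intensity = torch.rand(1, 1, 64, 64)                # a single-shot diffraction intensity image
amplitude_page = amplitude_net(intensity).argmax(dim=1)
phase_page = phase_net(intensity).argmax(dim=1)
print(amplitude_page.shape, phase_page.shape)       # torch.Size([1, 64, 64]) each
```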
The exponential growth of data necessitates an effective data storage scheme that helps to manage the large quantity of data. To accomplish this, the Deoxyribonucleic Acid (DNA) digital data storage process can be employed, which encodes and decodes binary data to and from synthesized strands of DNA. Vector quantization (VQ) is a commonly employed scheme for image compression, and optimal codebook generation is an effective way to reach maximum compression efficiency. This article introduces a new DNA Computing with Water Strider Algorithm based Vector Quantization (DNAC-WSAVQ) technique for data storage systems. The proposed DNAC-WSAVQ technique encodes data using DNA computing and then compresses it for effective data storage. The DNAC-WSAVQ model initially performs DNA encoding on the input images to generate a binary encoded form. In addition, a Water Strider Algorithm with Linde-Buzo-Gray (WSA-LBG) model is applied for the compression process, thereby considerably minimizing the required storage space. In order to generate an optimal codebook for LBG, the WSA is applied to it. The performance validation of the DNAC-WSAVQ model is carried out and the results are inspected under several measures. The comparative study highlighted the improved outcomes of the DNAC-WSAVQ model over existing methods.
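A compact sketch of the LBG codebook step that WSA-LBG builds on; this is plain splitting-based LBG in NumPy with the Water Strider search omitted, and the block size and codebook size are arbitrary.

```python
import numpy as np

def lbg_codebook(vectors: np.ndarray, size: int, iters: int = 20) -> np.ndarray:
    """Splitting-based Linde-Buzo-Gray training; the paper's WSA searches for a
    better codebook than this baseline produces."""
    codebook = vectors.mean(axis=0, keepdims=True)
    while codebook.shape[0] < size:
        codebook = np.vstack([codebook * 1.01, codebook * 0.99])      # split every codeword
        for _ in range(iters):                                        # Lloyd refinement
            dists = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
            nearest = dists.argmin(axis=1)
            for k in range(codebook.shape[0]):
                members = vectors[nearest == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
    return codebook

blocks = np.random.default_rng(0).random((1000, 16))   # e.g. flattened 4x4 image blocks
codebook = lbg_codebook(blocks, size=8)
indices = np.linalg.norm(blocks[:, None, :] - codebook[None, :, :], axis=2).argmin(axis=1)
print(codebook.shape, indices[:10])                     # each block stored as a 3-bit index
```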
The wide application of intelligent terminals in microgrids has fueled a surge in data volume in recent years. In real-world scenarios, microgrids must store large amounts of data efficiently while also being able to withstand malicious cyberattacks. To address the high hardware resource requirements, vulnerability to network attacks, and poor reliability of traditional centralized data storage schemes, this paper proposes a secure storage management method for microgrid data that considers node trust and a directed acyclic graph (DAG) consensus mechanism. Firstly, the microgrid data storage model is designed based on edge computing technology. The blockchain, deployed on the edge computing server and combined with cloud storage, ensures reliable data storage in the microgrid. Secondly, a blockchain consensus algorithm based on a directed acyclic graph data structure is proposed to effectively improve data storage timeliness and avoid the disadvantages of traditional blockchain topologies, such as long chain construction time and low consensus efficiency. Finally, considering the differences in tolerance to network attacks among the candidate chain-building nodes, a hash value update mechanism for the blockchain header with node trust identification is proposed to ensure data storage security. Experimental results from the microgrid data storage platform show that the proposed method can achieve a private key update time of less than 5 milliseconds. When the number of blockchain nodes is less than 25, the blockchain construction takes no more than 80 minutes, and the data throughput is close to 300 kbps. Compared with traditional chain-topology-based consensus methods that do not consider node trust, the proposed method has higher data storage efficiency and better resistance to network attacks.
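A toy sketch of a DAG-structured block whose header folds in a node-trust score, assuming illustrative field names and SHA-256 hashing; it mirrors the idea of multiple parent references and trust-aware headers rather than the paper's exact mechanism.

```python
import hashlib, json, time

def block_hash(header: dict) -> str:
    return hashlib.sha256(json.dumps(header, sort_keys=True).encode()).hexdigest()

def make_block(parents: list, payload: dict, node_id: str, trust: float) -> dict:
    """Build a DAG block whose header carries the chain-building node's trust score."""
    header = {
        "parents": sorted(parents),       # a DAG block may confirm several earlier blocks
        "payload_digest": hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest(),
        "node": node_id,
        "trust": round(trust, 3),         # trust identification folded into the header hash
        "timestamp": int(time.time()),
    }
    return {"hash": block_hash(header), "header": header, "payload": payload}

genesis = make_block([], {"msg": "genesis"}, "edge-server-0", 1.00)
b1 = make_block([genesis["hash"]], {"meter_kw": 42.1}, "edge-server-1", 0.92)
b2 = make_block([genesis["hash"], b1["hash"]], {"meter_kw": 39.7}, "edge-server-2", 0.87)
print(b2["hash"][:16], b2["header"]["parents"])
```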
Encoding information in light polarization is of great importance in facilitating optical data storage (ODS) for information security and data storage capacity escalation. However, despite recent advances in nanophotonic techniques vastly enhancing the feasibility of applying polarization channels, the data fidelity of reconstructed bits has been constrained by severe crosstalk between different polarization angles during data recording and reading, which has gravely hindered the practical utilization of this technique. In this paper, we demonstrate an ultra-low-crosstalk polarization-encoding multilayer ODS technique for high-fidelity data recording and retrieving, utilizing a nanofibre-based nanocomposite film containing highly aligned gold nanorods (GNRs). By parallelizing the gold nanorods in the recording medium, this information carrier configuration minimizes miswriting and misreading possibilities for information input and output, respectively, compared with its randomly self-assembled counterparts. The enhanced data accuracy has significantly improved the bit recall fidelity, which is quantified by a correlation coefficient higher than 0.99. It is anticipated that the demonstrated technique can facilitate the development of multiplexing ODS for a greener future.
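One plausible way to interpret a correlation-coefficient fidelity figure is sketched below, assuming a synthetic bit pattern and an arbitrary 0.2% crosstalk-induced flip rate; the paper's own metric may be computed over recorded images rather than raw bits.

```python
import numpy as np

rng = np.random.default_rng(0)
written = rng.integers(0, 2, size=10_000)            # bits written into polarization channels
read_back = written.copy()
flips = rng.random(written.size) < 0.002             # assumed small crosstalk-induced error rate
read_back[flips] ^= 1

fidelity = np.corrcoef(written, read_back)[0, 1]
print(f"bit recall correlation: {fidelity:.4f}")      # stays above 0.99 at this error rate
```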
The yearly growing quantities of dataflow create a pressing demand for advanced data storage methods. Luminescent materials, which possess adjustable parameters such as intensity, emission center, lifetime, and polarization, can be used to enable multi-dimensional optical data storage (ODS) with higher capacity, longer lifetime, and lower energy consumption. Multiplexed storage based on luminescent materials can be easily manipulated by lasers and has been considered a feasible option to break through the limits of ODS density. Substantial progress in laser-modified luminescence based ODS has been made during the past decade. In this review, we recapitulate recent advancements in laser-modified luminescence based ODS, focusing on defect-related regulation, nucleation, dissociation, photoreduction, ablation, etc. We conclude by discussing the current challenges in laser-modified luminescence based ODS and proposing perspectives for future development.
In this paper, we study mass structured data storage and sorting algorithms and methodology for SQL databases in the big data environment. As the data storage market develops, the storage model is shifting from a server-centric one to a data-centric one. Traditionally, storage merely keeps a series of data: the management system and storage devices rarely consider the intrinsic value of the stored data. The prosperity of the Internet, together with the emergence of many new applications, has changed data storage worldwide. Theoretically, the proposed algorithm is able to deal with massive data; numerically, it can enhance processing accuracy and speed, which is meaningful in practice.
Objectives: The aim of this study was to investigate and develop a data storage and exchange format for the process of automatic systematic reviews (ASR) of traditional Chinese medicine (TCM). Methods: A lightweight and commonly used data format, namely JavaScript Object Notation (JSON), was introduced in this study. We designed a fully described data structure to collect TCM clinical trial information based on the JSON syntax. Results: A smart and powerful data format, JSON-ASR, was developed. JSON-ASR uses a plain-text data format in the form of key/value pairs and consists of six sections and more than 80 preset pairs. JSON-ASR adopts extensible structured arrays to support multi-group and multi-outcome situations. Conclusion: JSON-ASR has the characteristics of light weight, flexibility, and good scalability, making it suitable for the complex data of clinical evidence.
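The snippet below sketches what a key/value trial record in this spirit might look like; the field names and section layout are invented for illustration and do not reproduce the actual JSON-ASR schema or its 80+ preset pairs.

```python
import json

# Illustrative only: the fields and sections below are invented placeholders,
# not the real JSON-ASR schema.
trial_record = {
    "study": {"id": "TCM-2020-001", "design": "RCT", "condition": "chronic gastritis"},
    "interventions": [
        {"group": "treatment", "regimen": "Banxia Xiexin decoction"},
        {"group": "control", "regimen": "conventional therapy"},
    ],
    "outcomes": [
        {"name": "total effective rate", "type": "dichotomous",
         "events": [41, 30], "totals": [48, 47]},   # arrays extend naturally to multi-group data
    ],
}
print(json.dumps(trial_record, ensure_ascii=False, indent=2))
```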
With the development of Industry 4.0 and big data technology, the Industrial Internet of Things (IIoT) is hampered by inherent issues such as privacy, security, and fault tolerance, which pose certain challenges to its rapid development. Blockchain technology offers immutability, decentralization, and autonomy, which can greatly mitigate the inherent defects of the IIoT. In the traditional blockchain, data is stored in a Merkle tree. As data continues to grow, the scale of the proofs used to validate it grows as well, threatening the efficiency, security, and reliability of blockchain-based IIoT. Accordingly, this paper first analyzes the inefficiency of the traditional blockchain structure in verifying the integrity and correctness of data. To solve this problem, a new Vector Commitment (VC) structure, Partition Vector Commitment (PVC), is proposed by improving the traditional VC structure. Secondly, this paper uses PVC instead of the Merkle tree to store the big data generated by IIoT. PVC improves the efficiency of traditional VC in the commitment and opening processes. Finally, this paper uses PVC to build a blockchain-based IIoT data security storage mechanism and carries out a comparative experimental analysis. This mechanism can greatly reduce communication loss and maximize the rational use of storage space, which is of great significance for maintaining the security and stability of blockchain-based IIoT.
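For context, the sketch below shows the Merkle-tree baseline that PVC replaces, where each inclusion proof carries a logarithmic number of sibling hashes; it is not the PVC construction itself, and the record contents are made up.

```python
import hashlib

def sha(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves: list) -> bytes:
    """Baseline commitment: each leaf's inclusion proof needs O(log n) sibling hashes."""
    level = [sha(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                      # duplicate the last node on odd levels
            level.append(level[-1])
        level = [sha(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

records = [f"iiot-record-{i}".encode() for i in range(8)]
print(merkle_root(records).hex()[:16])
```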
Internet of Things (IoT) based sensor networks are widely utilized in various fields for transmitting huge amounts of data owing to their ease and low cost of installation. During this process, there is a high possibility of data corruption in the middle of transmission. On the other hand, network performance is also affected by various attacks. To address these issues, an efficient algorithm that jointly offers improved data storage and reliable routing is proposed. Initially, after the deployment of sensor nodes, the election of the storage node is performed using a fuzzy expert system. Improved Random Linear Network Coding (IRLNC) is used to create an encoded packet. This encoded packet from the source and neighboring nodes is transmitted to the storage node. Finally, to transmit the encoded packet from the storage node to the destination, the shortest path is found using the Destination Sequenced Distance Vector (DSDV) algorithm. Experimental analysis of the proposed work is carried out by evaluating several statistical metrics. The average residual energy, packet delivery ratio, compression ratio, and storage time achieved for the proposed work are 8.8%, 0.92%, 0.82%, and 69 s, respectively. Based on this analysis, it is revealed that a better data storage system and improved system reliability are attained using the proposed work.
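A toy NumPy sketch of the random-linear-network-coding idea behind IRLNC over GF(2): coded packets are random XOR combinations of source packets, and the storage node can decode once it holds enough linearly independent combinations. The packet sizes and counts are arbitrary, and the fuzzy election and DSDV routing steps are not shown.

```python
import numpy as np

def gf2_rank(m: np.ndarray) -> int:
    """Rank over GF(2) via Gaussian elimination: the storage node can decode
    once it has as many independent combinations as source packets."""
    m = m.copy() % 2
    rank = 0
    for col in range(m.shape[1]):
        pivot = next((r for r in range(rank, m.shape[0]) if m[r, col]), None)
        if pivot is None:
            continue
        m[[rank, pivot]] = m[[pivot, rank]]
        for r in range(m.shape[0]):
            if r != rank and m[r, col]:
                m[r] ^= m[rank]
        rank += 1
    return rank

rng = np.random.default_rng(1)
source = rng.integers(0, 2, size=(4, 16), dtype=np.uint8)    # 4 source packets of 16 bits
coeffs = rng.integers(0, 2, size=(6, 4), dtype=np.uint8)     # 6 random coding vectors (redundancy)
coded = (coeffs @ source) % 2                                 # each coded packet is an XOR mix
print("decodable at storage node:", gf2_rank(coeffs) >= source.shape[0])
```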
The ongoing quest for higher data storage density has led to a plethora of innovations in the field of optical data storage. This review paper provides a comprehensive overview of recent advancements in next-generation optical data storage, offering insights into various technological roadmaps. We pay particular attention to multidimensional and superresolution approaches, each of which uniquely addresses the challenge of dense storage. The multidimensional approach exploits multiple parameters of light, allowing multiple bits of information to be stored within a single voxel while still adhering to the diffraction limit. Alternatively, superresolution approaches leverage the photoexcitation and photoinhibition properties of materials to create diffraction-unlimited data voxels. We conclude by summarizing the immense opportunities these approaches present, while also outlining the formidable challenges they face in the transition to industrial applications.
Polypeptides consisting of amino acid (AA) sequences are suitable for high-density information storage. However, the lack of suitable encoding systems that accommodate the characteristics of polypeptide synthesis, storage, and sequencing impedes the application of polypeptides to large-scale digital data storage. To address this, two reliable and highly efficient encoding systems, i.e., the RaptorQ-Arithmetic-Base64-Shuffle-RS (RABSR) and RaptorQ-Arithmetic-Huffman-Rotary-Shuffle-RS (RAHRSR) systems, are developed for polypeptide data storage. The two encoding systems offer data compression, correction of AA chain loss errors, correction of errors within AA chains, elimination of homopolymers, and pseudo-randomized encryption. Without arithmetic compression and error correction, the coding efficiency for audio, pictures, and text with the RABSR system was 3.20, 3.12, and 3.53 Bits/AA, respectively, while that of the RAHRSR system reached 4.89, 4.80, and 6.84 Bits/AA, respectively. When implemented with redundancy for error correction and with arithmetic compression to reduce redundancy, the coding efficiency for audio, pictures, and text with the RABSR system was 4.43, 4.36, and 5.22 Bits/AA, respectively, and further increased to 7.24, 7.11, and 9.82 Bits/AA with the RAHRSR system. Therefore, the developed hexadecimal polypeptide-based systems may provide a new scenario for highly reliable and highly efficient data storage.
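The toy encoder below shows where a "Bits/AA" figure comes from, assuming a hypothetical 16-residue alphabet with one residue per 4-bit nibble; the actual residue set and the compression and error-correction layers of RABSR/RAHRSR are not reproduced here.

```python
import math

# Hypothetical 16-letter amino-acid alphabet (one residue per 4-bit nibble);
# the residue set actually used by RABSR/RAHRSR is defined in the paper.
AA16 = "ACDEFGHIKLMNPQRS"
NIBBLE = {aa: i for i, aa in enumerate(AA16)}

def to_peptide(data: bytes) -> str:
    return "".join(AA16[b >> 4] + AA16[b & 0x0F] for b in data)

def from_peptide(chain: str) -> bytes:
    return bytes((NIBBLE[chain[i]] << 4) | NIBBLE[chain[i + 1]] for i in range(0, len(chain), 2))

message = b"polypeptide storage"
chain = to_peptide(message)
assert from_peptide(chain) == message
# Raw density is log2(16) = 4 bits per residue; compression pushes the reported
# Bits/AA above this figure, while error-correction redundancy pulls it below.
print(len(message) * 8 / len(chain), math.log2(16))
```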
A kind of optical data storage medium based on electron-trapping materials, Y_(3)Al_(5)O_(12):Ce^(3+) fluorescent ceramic, was developed by vacuum sintering technology. The medium shows sufficiently deep traps (1.67 and 0.77 eV). The properties of the trap levels were investigated by thermoluminescence curves, and an optical storage mechanism based on Ce^(3+) ion doping is proposed. More importantly, the data can be written in by 254 nm UV light and read out by heating (300 °C). This work expands the application fields of fluorescent ceramics and is expected to promote the development of electron-trapping materials.
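The link between trap depth and data retention can be made explicit with the standard first-order thermoluminescence (Arrhenius) estimate shown below; the frequency factor s is an assumed typical value, not a figure reported in the abstract.

```latex
\tau(T) \;\approx\; s^{-1}\exp\!\left(\frac{E}{k_{\mathrm{B}}T}\right)
```

With an assumed s of about 10^12 s^-1, the 1.67 eV trap gives a lifetime on the order of 10^16 s (hundreds of millions of years) at room temperature but only minutes near 573 K, which is consistent with long-term retention together with readout by heating at 300 °C.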
With the rapid development of information technology, blockchain technology has also been deeply affected. When performing block verification in a blockchain network, if all transactions are verified on the chain, data accumulates on the chain, resulting in data storage problems. At the same time, data security is challenged, which puts enormous pressure on the block and results in extremely low communication efficiency. The traditional blockchain system uses the Merkle tree method to store data. When verifying the integrity and correctness of the data, the proofs are large and the data cannot be verified in batches. A large amount of proof data greatly impacts verification efficiency, causing end-to-end communication delays and seriously affecting the blockchain system's stability, efficiency, and security. To solve this problem, this paper proposes replacing the Merkle tree with polynomial commitments, which take advantage of the properties of polynomials to reduce proof size and communication consumption. Through the ingenious use of aggregated proofs and smart contracts, block verification efficiency is improved and node communication pressure is reduced.
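The NumPy toy below is not a secure polynomial commitment, but it illustrates the algebraic property such schemes exploit: a whole batch of values is bound to one interpolating polynomial, and consistency can be spot-checked at a single point instead of supplying one Merkle path per value. All names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
xs = np.arange(8.0)
values = rng.integers(0, 256, size=8).astype(float)     # a block's transaction digests (toy)

poly = np.polyfit(xs, values, deg=7)                     # one polynomial binds the whole batch

claimed = values.copy()                                  # prover's claimed batch (honest)
tampered = values.copy(); tampered[3] += 1.0             # prover's claimed batch (corrupted)

r = 3.7                                                  # a random challenge point in a real scheme
def accepts(batch):
    return np.isclose(np.polyval(np.polyfit(xs, batch, deg=7), r), np.polyval(poly, r))

print("honest batch accepted:", accepts(claimed))        # True
print("tampered batch accepted:", accepts(tampered))     # False
```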
This article discusses the current status and development strategies of computer science and technology in the context of big data. Firstly, it explains the relationship between big data and computer science and technology, focusing on analyzing the current application status of computer science and technology in big data, including data storage, data processing, and data analysis. Then, it proposes development strategies for big data processing. Computer science and technology play a vital role in big data processing by providing strong technical support.
The advance of nanophotonics has provided a variety of avenues for light–matter interaction at the nanometer scale through the enriched mechanisms for physical and chemical reactions induced by nanometer-confined optical probes in nanocomposite materials. These emerging nanophotonic devices and materials have enabled researchers to develop disruptive methods of tremendously increasing the storage capacity of current optical memory. In this paper, we present a review of the recent advancements in nanophotonics-enabled optical storage techniques. In particular, we offer our perspective on using them as optical storage arrays for next-generation exabyte data centers.
There is a great thrust in industry toward the development of more feasible and viable tools for storing the fast-growing volume, velocity, and diversity of data, termed "big data". The structural shift of the storage mechanism from traditional data management systems to NoSQL technology stems from the intention of fulfilling big data storage requirements. However, the available big data storage technologies are inefficient in providing consistent, scalable, and available solutions for continuously growing heterogeneous data. Storage is the preliminary process of big data analytics for real-world applications such as scientific experiments, healthcare, social networks, and e-business. So far, Amazon, Google, and Apache are some of the industry standards in providing big data storage solutions, yet the literature does not report an in-depth survey of the storage technologies available for big data that investigates their performance and magnitude gains. The primary objective of this paper is to conduct a comprehensive investigation of state-of-the-art storage technologies available for big data. A well-defined taxonomy of big data storage technologies is presented to assist data analysts and researchers in understanding and selecting a storage mechanism that better fits their needs. To evaluate the performance of different storage architectures, we compare and analyze the existing approaches using Brewer's CAP theorem. The significance and applications of storage technologies and their support for other categories are discussed. Several future research challenges are highlighted with the intention to expedite the deployment of a reliable and scalable storage system.
Aiming at the storage and management problems of massive remote sensing data, this paper gives a comprehensive analysis of the characteristics and advantages of thirteen data storage centers or systems at home and abroad. They mainly include NASA EOS, World Wind, Google Earth, Google Maps, Bing Maps, Microsoft TerraServer, ESA, Earth Simulator, GeoEye, Map World, the China Centre for Resources Satellite Data and Application, the National Satellite Meteorological Centre, and the National Satellite Ocean Application Service. Summing up the practical data storage and management technologies in terms of remote sensing data storage organization and storage architecture will be helpful in seeking more suitable techniques and methods for massive remote sensing data storage and management.