Journal Articles
66 articles found
1. DNA Computing with Water Strider Based Vector Quantization for Data Storage Systems
Authors: A.Arokiaraj Jovith, S.Rama Sree, Gudikandhula Narasimha Rao, K.Vijaya Kumar, Woong Cho, Gyanendra Prasad Joshi, Sung Won Kim. Computers, Materials & Continua (SCIE, EI), 2023, No. 3, pp. 6429-6444 (16 pages)
The exponential growth of data necessitates an effective data storage scheme, which helps to effectively manage the large quantity of data. To accomplish this, the Deoxyribonucleic Acid (DNA) digital data storage process can be employed, which encodes and decodes binary data to and from synthesized strands of DNA. Vector quantization (VQ) is a commonly employed scheme for image compression, and optimal codebook generation is an effective process for reaching maximum compression efficiency. This article introduces a new DNA Computing with Water Strider Algorithm based Vector Quantization (DNAC-WSAVQ) technique for data storage systems. The proposed DNAC-WSAVQ technique enables encoding data using DNA computing and then compresses it for effective data storage. The DNAC-WSAVQ model initially performs DNA encoding on the input images to generate a binary encoded form. In addition, a Water Strider algorithm with Linde-Buzo-Gray (WSA-LBG) model is applied for the compression process, and thereby the storage area can be considerably minimized. The WSA is applied to LBG in order to generate an optimal codebook. The performance validation of the DNAC-WSAVQ model is carried out and the results are inspected under several measures. The comparative study highlighted the improved outcomes of the DNAC-WSAVQ model over the existing methods.
Keywords: DNA computing; data storage; image compression; vector quantization; WS algorithm; space saving
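The WSA-LBG step in this abstract rests on the classic Linde-Buzo-Gray iteration for codebook generation. A minimal sketch of plain LBG (the paper's Water Strider optimisation of the codebook is not reproduced here, and the 2-D training data is invented):

```python
import random

def lbg_codebook(vectors, size, iters=20):
    """Plain Linde-Buzo-Gray: alternate nearest-codeword assignment and
    centroid update until the codebook settles (no WSA optimisation)."""
    codebook = random.sample(vectors, size)
    for _ in range(iters):
        clusters = [[] for _ in range(size)]
        for v in vectors:
            # assign each training vector to its nearest codeword
            nearest = min(range(size), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(v, codebook[c])))
            clusters[nearest].append(v)
        for c, members in enumerate(clusters):
            if members:  # empty clusters keep their previous codeword
                codebook[c] = tuple(sum(dim) / len(members) for dim in zip(*members))
    return codebook

random.seed(0)
training = [(random.random(), random.random()) for _ in range(200)]
codebook = lbg_codebook(training, size=4)
print(len(codebook))  # 4 codewords
```

A metaheuristic such as the WSA would replace the random initialisation and refinement with a search for a lower-distortion codebook.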
2. Data Secure Storage Mechanism for IIoT Based on Blockchain (Cited: 1)
Authors: Jin Wang, Guoshu Huang, R.Simon Sherratt, Ding Huang, Jia Ni. Computers, Materials & Continua (SCIE, EI), 2024, No. 3, pp. 4029-4048 (20 pages)
With the development of Industry 4.0 and big data technology, the Industrial Internet of Things (IIoT) is hampered by inherent issues such as privacy, security, and fault tolerance, which pose certain challenges to its rapid development. Blockchain technology has immutability, decentralization, and autonomy, which can greatly mitigate the inherent defects of the IIoT. In the traditional blockchain, data is stored in a Merkle tree. As data continues to grow, the scale of the proofs used to validate it grows, threatening the efficiency, security, and reliability of blockchain-based IIoT. Accordingly, this paper first analyzes the inefficiency of the traditional blockchain structure in verifying the integrity and correctness of data. To solve this problem, a new Vector Commitment (VC) structure, Partition Vector Commitment (PVC), is proposed by improving the traditional VC structure. Secondly, this paper uses PVC instead of the Merkle tree to store big data generated by IIoT. PVC can improve the efficiency of traditional VC in the process of commitment and opening. Finally, this paper uses PVC to build a blockchain-based IIoT data security storage mechanism and carries out a comparative experimental analysis. This mechanism can greatly reduce communication loss and maximize the rational use of storage space, which is of great significance for maintaining the security and stability of blockchain-based IIoT.
Keywords: blockchain; IIoT; data storage; cryptographic commitment
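The storage pressure described above comes from Merkle proofs growing with the data set. A small illustration (textbook Merkle tree, not the paper's PVC scheme) showing that a membership proof carries one sibling hash per tree level, i.e. O(log n):

```python
import hashlib

def merkle_root_and_proof(leaves, index):
    """Build a Merkle tree over `leaves` and collect the sibling hashes
    needed to prove membership of leaves[index]: one hash per level."""
    level = [hashlib.sha256(x).digest() for x in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd levels
            level.append(level[-1])
        proof.append(level[index ^ 1])     # the sibling of the tracked node
        index //= 2
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0], proof

root, proof = merkle_root_and_proof([bytes([i]) for i in range(16)], index=5)
print(len(proof))  # 4: a 16-leaf tree needs log2(16) sibling hashes per proof
```

A vector commitment replaces this per-level proof chain with constant-size or aggregatable openings, which is the efficiency gap the paper's PVC targets.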
3. Analysis of Secured Cloud Data Storage Model for Information
Authors: Emmanuel Nwabueze Ekwonwune, Udo Chukwuebuka Chigozie, Duroha Austin Ekekwe, Georgina Chekwube Nwankwo. Journal of Software Engineering and Applications, 2024, No. 5, pp. 297-320 (24 pages)
This paper was motivated by the existing problems of cloud data storage at Imo State University, Nigeria, where outsourced data caused data loss and the misuse of customer information by unauthorized users or hackers, leaving customer/client data visible and unprotected. This also exposed clients/customers to enormous risk from defective equipment, bugs, faulty servers, and malicious actions. The aim of this paper, therefore, is to analyze a secure model using Unicode Transformation Format (UTF) Base64 algorithms for storing data in the cloud securely. The Object-Oriented Hypermedia Analysis and Design Methodology (OOHADM) was adopted. Python was used to develop the security model; role-based access control (RBAC) and multi-factor authentication (MFA) were integrated to enhance security in the information system developed with HTML5, JavaScript, Cascading Style Sheets (CSS) version 3, and PHP 7. This paper also discusses concepts such as the development of cloud computing, characteristics of cloud computing, cloud deployment models, and cloud service models. The results showed that the proposed enhanced security model for corporate-platform information systems handles multiple authorization and authentication threats: a single login page directs all login requests from the different modules to one Single Sign-On Server (SSOS), which in turn redirects authenticated users to their requested resources/modules, leveraging geo-location integration for physical location validation. The newly developed system solves the shortcomings of the existing systems and reduces the time and resources incurred while using them.
Keywords: cloud data; information model; data storage; cloud computing; security system; data encryption
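The UTF/Base64 storage step can be sketched with the standard library. Base64 is a reversible encoding, not encryption, which is why the model pairs it with RBAC and MFA for actual access control (the record content below is made up):

```python
import base64

record = "client-0042:balance=1500".encode("utf-8")   # raw UTF-8 record
stored = base64.b64encode(record)                     # ASCII-safe form written to storage
restored = base64.b64decode(stored).decode("utf-8")   # lossless round trip on read-back
print(restored)
```

Note that Base64 expands data by roughly a third and offers no confidentiality on its own.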
4. Engineering DNA Materials for Sustainable Data Storage Using a DNA Movable-Type System
Authors: Zi-Yi Gong, Li-Fu Song, Guang-Sheng Pei, Yu-Fei Dong, Bing-Zhi Li, Ying-Jin Yuan. Engineering (SCIE, EI, CAS, CSCD), 2023, No. 10, pp. 130-136 (7 pages)
DNA molecules are green materials with great potential for high-density and long-term data storage. However, the current data-writing process of DNA data storage via DNA synthesis suffers from high costs and the production of hazards, limiting its practical applications. Here, we developed a DNA movable-type storage system that can utilize DNA fragments pre-produced by cell factories for data writing. In this system, these pre-generated DNA fragments, referred to herein as "DNA movable types," are used as basic writing units in a repetitive way. Data writing is achieved by the rapid assembly of these DNA movable types, thereby avoiding the costly and environmentally hazardous process of de novo DNA synthesis. With this system, we successfully encoded 24 bytes of digital information in DNA and read it back accurately by means of high-throughput sequencing and decoding, thereby demonstrating the feasibility of the approach. Through the repetitive use and biological assembly of DNA movable-type fragments, this system exhibits excellent potential for reducing writing costs, opening up a novel route toward an economical and sustainable digital data-storage technology.
Keywords: synthetic biology; DNA data storage; DNA movable types; economical DNA data storage
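The binary-to-DNA encoding underlying such systems can be illustrated with the common 2-bits-per-nucleotide mapping; the movable-type assembly itself and biochemical constraints such as homopolymer avoidance are beyond this sketch:

```python
# Map each 2-bit pair to a nucleotide (a standard illustrative mapping,
# not the paper's movable-type codec).
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {b: k for k, b in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    bits = "".join(BASE_TO_BITS[b] for b in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"Hi")
print(strand)          # CAGACGGC: 4 nucleotides per byte
print(decode(strand))  # b'Hi'
```

At 2 bits per base, the paper's 24-byte payload corresponds to 96 nucleotides before any addressing or error-correction overhead.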
5. High density collinear holographic data storage system (Cited: 9)
Authors: Xiaodi TAN, Xiao LIN, An'an WU, Jingliang ZANG. Frontiers of Optoelectronics (CSCD), 2014, No. 4, pp. 443-449 (7 pages)
The holographic data storage system (HDSS) has been a good candidate for volumetric recording technology due to its large storage capacity and high transfer rate, and has been researched for decades since the principle of holography was first proposed. However, these systems, based on conventional 2-axis holography, still have essential issues blocking commercialization. Collinear HDSS, in which the information and reference beams are modulated co-axially by the same spatial light modulator (SLM), is a very promising new read/write method for HDSS. With this unique configuration, the optical pickup can be designed as small as that of a DVD and can be placed on one side of the recording medium (disc). In the disc structure, a preformatted reflective layer is used for the focus/tracking servo and for reading address information, and a dichroic mirror layer is used for detecting holographically recorded information without interference from the preformatted information. A 2-dimensional digital page data format is used, and the shift-multiplexing method is employed to increase recording density. As servo technologies are introduced to keep the objective lens precisely positioned relative to the disc during recording and reconstruction, a vibration isolator is no longer necessary. Collinear holography can produce a small, practical HDSS more easily than conventional 2-axis holography. In this paper, we introduce the principle of collinear holography and its disc media structure. Results of experimental and theoretical studies suggest that it is a very effective method. We also discuss some methods to increase the recording density and data transfer rate of collinear holography.
Keywords: holographic data storage system (HDSS); holography; optical memory; volumetric recording; optical disc; high density recording
6. Research on data load balancing technology of massive storage systems for wearable devices (Cited: 1)
Authors: Shujun Liang, Jing Cheng, Jianwei Zhang. Digital Communications and Networks (SCIE, CSCD), 2022, No. 2, pp. 143-149 (7 pages)
Because of the limited memory of current wearable devices and the increasing amount of information they hold, the processing capacity of the servers in the storage system cannot keep up with the speed of information growth, resulting in low load balancing, long load-balancing time, and data-processing delay. Therefore, this paper applies a data load balancing technology to the massive storage systems of wearable devices. We first analyze the object-oriented load balancing method and formally describe the dynamic load balancing issues, treating load balancing as a mapping problem. Then, the task of assigning each data node, matched to the corresponding data node's actual processing capacity, is completed. Different data is allocated to the corresponding data storage node to compute each node's comprehensive weight. According to the load information of each data storage node collected by the scheduler in the storage system, the load weight of the current data storage node is calculated and the load distributed accordingly. Data load balancing for the massive storage systems of wearable devices is thus realized. The experimental results show that the average load-balancing time of this method is 1.75 h, much lower than that of traditional methods, and that the technology offers short data load-balancing time, high load balancing, strong data-processing capability, and short processing time.
Keywords: wearable device; massive data; data storage system; load balancing; weight
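The weight-proportional distribution described above can be sketched as follows; the node names and capacity weights are invented for illustration, and the paper's comprehensive-weight formula is not reproduced:

```python
def assign_requests(requests, capacities):
    """Split a batch of requests across storage nodes in proportion to each
    node's capacity weight (a stand-in for the paper's comprehensive weight)."""
    total = sum(capacities.values())
    shares = {node: int(requests * cap / total) for node, cap in capacities.items()}
    remainder = requests - sum(shares.values())    # rounding leftovers
    shares[max(capacities, key=capacities.get)] += remainder
    return shares

shares = assign_requests(1000, {"node-a": 4, "node-b": 2, "node-c": 2})
print(shares)  # {'node-a': 500, 'node-b': 250, 'node-c': 250}
```

In the paper's setting the weights would be refreshed from the scheduler's collected load information rather than fixed.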
7. Ostensibly perpetual optical data storage in glass with ultra-high stability and tailored photoluminescence (Cited: 4)
Authors: Zhuo Wang, Bo Zhang, Dezhi Tan, Jianrong Qiu. Opto-Electronic Advances (SCIE, EI, CAS, CSCD), 2023, No. 1, pp. 1-8 (8 pages)
Long-term optical data storage (ODS) technology is essential to break the bottleneck of high energy consumption for information storage in the current era of big data. Here, ODS with an ultralong lifetime of 2×10^7 years is attained with single ultrafast laser pulse induced reduction of Eu^3+ ions and tailoring of optical properties inside Eu-doped aluminosilicate glasses. We demonstrate that the induced local modifications in the glass can withstand temperatures of up to 970 K and strong ultraviolet light irradiation with a power density of 100 kW/cm^2. Furthermore, the active Eu^2+ ions exhibit strong and broadband emission with a full width at half maximum reaching 190 nm, and the photoluminescence (PL) is flexibly tunable across the whole visible region by regulating the alkaline earth metal ions in the glasses. The developed technology and materials will be of great significance in photonic applications such as long-term ODS.
Keywords: ultrafast laser; photoluminescence tailoring; ultralong lifetime; optical data storage
8. Lensless complex amplitude demodulation based on deep learning in holographic data storage (Cited: 2)
Authors: Jianying Hao, Xiao Lin, Yongkun Lin, Mingyong Chen, Ruixian Chen, Guohai Situ, Hideyoshi Horimai, Xiaodi Tan. Opto-Electronic Advances (SCIE, EI, CAS, CSCD), 2023, No. 3, pp. 42-56 (15 pages)
To increase the storage capacity in holographic data storage (HDS), the information to be stored is encoded into a complex amplitude. Fast and accurate retrieval of amplitude and phase from the reconstructed beam is necessary during data readout in HDS. In this study, we proposed a complex amplitude demodulation method based on deep learning from a single-shot diffraction intensity image and verified it with a non-interferometric lensless experiment demodulating four-level amplitude and four-level phase. By analyzing the correlation between the diffraction intensity features and the amplitude and phase encoding data pages, the inverse problem was decomposed into two backward operators denoted by two convolutional neural networks (CNNs) to demodulate amplitude and phase respectively. The experimental system is simple, stable, and robust, and it needs only a single diffraction image to realize the direct demodulation of both amplitude and phase. To our knowledge, this is the first time in HDS that multilevel complex amplitude demodulation has been achieved experimentally from one diffraction intensity image without iterations.
Keywords: holographic data storage; complex amplitude demodulation; deep learning; computational imaging
9. Data Security Storage Mechanism Based on Blockchain Network
Authors: Jin Wang, Wei Ou, Wenhai Wang, R.Simon Sherratt, Yongjun Ren, Xiaofeng Yu. Computers, Materials & Continua (SCIE, EI), 2023, No. 3, pp. 4933-4950 (18 pages)
With the rapid development of information technology, blockchain technology has also been deeply impacted. When performing block verification in a blockchain network, verifying all transactions on the chain causes data to accumulate on the chain, resulting in data storage problems. At the same time, data security is challenged, which puts enormous pressure on the block and results in extremely low communication efficiency. The traditional blockchain system uses the Merkle Tree method to store data. While it verifies the integrity and correctness of the data, the amount of proof is large and data cannot be verified in batches. A large amount of data proof greatly impacts verification efficiency, causing end-to-end communication delays and seriously affecting the blockchain system's stability, efficiency, and security. To solve this problem, this paper proposes replacing the Merkle tree with polynomial commitments, which take advantage of the properties of polynomials to reduce the proof size and communication consumption. Through the ingenious use of aggregated proofs and smart contracts, the verification efficiency of blocks is improved and the pressure of node communication is reduced.
Keywords: blockchain; cryptographic commitment; smart contract; data storage
10. A Secure Microgrid Data Storage Strategy with Directed Acyclic Graph Consensus Mechanism
Authors: Jian Shang, Runmin Guan, Wei Wang. Intelligent Automation & Soft Computing (SCIE), 2023, No. 9, pp. 2609-2626 (18 pages)
The wide application of intelligent terminals in microgrids has fueled a surge in data volume in recent years. In real-world scenarios, microgrids must store large amounts of data efficiently while also being able to withstand malicious cyberattacks. To meet the high hardware resource requirements and address the vulnerability to network attacks and poor reliability of traditional centralized data storage schemes, this paper proposes a secure storage management method for microgrid data that considers node trust and a directed acyclic graph (DAG) consensus mechanism. Firstly, the microgrid data storage model is designed based on edge computing technology. The blockchain, deployed on the edge computing server and combined with cloud storage, ensures reliable data storage in the microgrid. Secondly, a blockchain consensus algorithm based on a directed acyclic graph data structure is proposed to effectively improve data storage timeliness and avoid the disadvantages of traditional blockchain topology, such as long chain construction time and low consensus efficiency. Finally, considering the differing tolerance of the candidate chain-building nodes to network attacks, a hash value update mechanism for the blockchain header with node trust identification is proposed to ensure data storage security. Experimental results from the microgrid data storage platform show that the proposed method can achieve a private key update time of less than 5 milliseconds. When the number of blockchain nodes is less than 25, the blockchain construction takes no more than 80 minutes, and the data throughput is close to 300 kbps. Compared with traditional chain-topology-based consensus methods that do not consider node trust, the proposed method has higher data storage efficiency and better resistance to network attacks.
Keywords: microgrid; data security storage; node trust degree; directed acyclic graph data structure; consensus mechanism; secure multi-party computing; blockchain
11. A Review of the Status and Development Strategies of Computer Science and Technology Under the Background of Big Data
Author: Junlin Zhang. Journal of Electronic Research and Application, 2024, No. 2, pp. 49-53 (5 pages)
This article discusses the current status and development strategies of computer science and technology in the context of big data. Firstly, it explains the relationship between big data and computer science and technology, focusing on analyzing the current application status of computer science and technology in big data, including data storage, data processing, and data analysis. Then, it proposes development strategies for big data processing. Computer science and technology play a vital role in big data processing by providing strong technical support.
Keywords: big data; computer science and technology; data storage; data processing; data visualization
12. JSON-ASR: A lightweight data storage and exchange format for automatic systematic reviews of TCM
Authors: Ji Xu, Hongyong Deng. TMR Modern Herbal Medicine, 2021, No. 2, pp. 37-43 (7 pages)
Objectives: The aim of this study was to investigate and develop a data storage and exchange format for the process of automatic systematic reviews (ASR) of traditional Chinese medicine (TCM). Methods: A lightweight and commonly used data format, JavaScript Object Notation (JSON), was introduced in this study. We designed a fully described data structure to collect TCM clinical trial information based on JSON syntax. Results: A smart and powerful data format, JSON-ASR, was developed. JSON-ASR uses a plain-text data format in the form of key/value pairs and consists of six sections and more than 80 preset pairs. JSON-ASR adopts extensible structured arrays to support multi-group and multi-outcome situations. Conclusion: JSON-ASR has the characteristics of light weight, flexibility, and good scalability, making it suitable for the complex data of clinical evidence.
Keywords: data storage and exchange; automatic systematic reviews; traditional Chinese medicine; JavaScript Object Notation
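The paper defines JSON-ASR's six sections and 80+ preset pairs; the fragment below only illustrates the general shape the abstract describes (key/value pairs plus extensible arrays for multi-group, multi-outcome trials). All section and field names here are assumptions, not the published schema:

```python
import json

# Hypothetical JSON-ASR-style record; names are illustrative only.
trial = {
    "identification": {"id": "TCM-2020-001", "title": "Example trial"},
    "interventions": [
        {"group": "treatment", "formula": "Formula A", "n": 60},
        {"group": "control", "formula": "placebo", "n": 58},
    ],
    "outcomes": [
        {"name": "effective rate", "treatment": 0.85, "control": 0.62},
    ],
}
text = json.dumps(trial)           # the plain-text storage/exchange form
restored = json.loads(text)        # lossless round trip
print(restored == trial)           # True
```

Adding a second outcome or a third arm only appends to the arrays, which is the extensibility the format relies on.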
13. Fuzzy and IRLNC-based routing approach to improve data storage and system reliability in IoT
Authors: U.Indumathi, A.R.Arunachalam. Intelligent and Converged Networks (EI), 2024, No. 1, pp. 68-80 (13 pages)
Internet of Things (IoT) based sensor networks are widely used in various fields for transmitting huge amounts of data due to their ease and low cost of installation. Throughout this process, there is a high possibility of data corruption in mid-transmission. On the other hand, network performance is also affected by various attacks. To address these issues, an efficient algorithm that jointly offers improved data storage and reliable routing is proposed. Initially, after the deployment of sensor nodes, a storage node is elected based on a fuzzy expert system. Improved Random Linear Network Coding (IRLNC) is used to create an encoded packet. This encoded packet from the source and neighboring nodes is transmitted to the storage node. Finally, to transmit the encoded packet from the storage node to the destination, the shortest path is found using the Destination-Sequenced Distance Vector (DSDV) algorithm. Experimental analysis of the proposed work is carried out by evaluating several statistical metrics: the average residual energy, packet delivery ratio, compression ratio, and storage time achieved are 8.8%, 0.92%, 0.82%, and 69 s, respectively. This analysis reveals that better data storage and system reliability are attained with the proposed work.
Keywords: Internet of Things (IoT); data storage management; fuzzy system; improved random linear network coding; energy utilization; system reliability
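Random linear network coding, the basis of IRLNC, can be shown over GF(2), where coding coefficients are single bits and combining packets is XOR. This is textbook RLNC under assumed packet contents, not the paper's improved variant:

```python
import random

def rlnc_encode(packets, n_coded, seed=1):
    """RLNC over GF(2): each coded packet is the XOR of a random subset of
    source packets; the coefficient vector travels with the payload."""
    rng = random.Random(seed)
    coded = []
    for _ in range(n_coded):
        coeffs = [rng.randint(0, 1) for _ in packets]
        if not any(coeffs):
            coeffs[0] = 1          # never emit the useless all-zero combination
        payload = bytes(len(packets[0]))
        for c, p in zip(coeffs, packets):
            if c:
                payload = bytes(a ^ b for a, b in zip(payload, p))
        coded.append((coeffs, payload))
    return coded

def rlnc_decode(coded, n_source):
    """Gaussian elimination over GF(2) recovers the sources from any
    n_source linearly independent coded packets."""
    rows = [(list(c), bytes(p)) for c, p in coded]
    for col in range(n_source):
        pivot = next((r for r in range(col, len(rows)) if rows[r][0][col]), None)
        if pivot is None:
            raise ValueError("not enough independent packets")
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:
                rows[r] = ([a ^ b for a, b in zip(rows[r][0], rows[col][0])],
                           bytes(x ^ y for x, y in zip(rows[r][1], rows[col][1])))
    return [rows[i][1] for i in range(n_source)]

src = [b"AAA1", b"BBB2", b"CCC3"]
coded = rlnc_encode(src, n_coded=4)            # what intermediate nodes forward
# A receiver holding any three independent combinations can decode:
x12 = bytes(a ^ b for a, b in zip(src[0], src[1]))
x23 = bytes(a ^ b for a, b in zip(src[1], src[2]))
received = [([1, 1, 0], x12), ([0, 1, 1], x23), ([0, 0, 1], src[2])]
print(rlnc_decode(received, 3) == src)  # True
```

Practical schemes usually work over larger fields such as GF(2^8) to make random combinations independent with high probability.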
14. Near-perfect fidelity polarization-encoded multilayer optical data storage based on aligned gold nanorods (Cited: 9)
Authors: Linwei Zhu, Yaoyu Cao, Qiuqun Chen, Xu Ouyang, Yi Xu, Zhongliang Hu, Jianrong Qiu, Xiangping Li. Opto-Electronic Advances (SCIE), 2021, No. 11, pp. 55-63 (9 pages)
Encoding information in light polarization is of great importance in facilitating optical data storage (ODS) for information security and data storage capacity escalation. However, despite recent advances in nanophotonic techniques vastly enhancing the feasibility of applying polarization channels, the data fidelity of reconstructed bits has been constrained by severe crosstalk occurring between varied polarization angles during the data recording and reading process, which has gravely hindered the practical utilization of this technique. In this paper, we demonstrate an ultra-low crosstalk polarization-encoding multilayer ODS technique for high-fidelity data recording and retrieval by utilizing a nanofibre-based nanocomposite film involving highly aligned gold nanorods (GNRs). By parallelizing the gold nanorods in the recording medium, the information carrier configuration minimizes miswriting and misreading possibilities for information input and output, respectively, compared with its randomly self-assembled counterparts. The enhanced data accuracy has significantly improved the bit recall fidelity, quantified by a correlation coefficient higher than 0.99. It is anticipated that the demonstrated technique can facilitate the development of multiplexed ODS for a greener future.
Keywords: optical data storage; aligned gold nanorods; fidelity; nanocomposite film
15. Multi-authority proxy re-encryption based on CPABE for cloud storage systems (Cited: 7)
Authors: Xiaolong Xu, Jinglan Zhou, Xinheng Wang, Yun Zhang. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2016, No. 1, pp. 211-223 (13 pages)
The dissociation between data management and data ownership makes it difficult to protect data security and privacy in cloud storage systems. Traditional encryption technologies are not suitable for data protection in cloud storage systems. A novel multi-authority proxy re-encryption mechanism based on ciphertext-policy attribute-based encryption (MPRE-CPABE) is proposed for cloud storage systems. MPRE-CPABE requires the data owner to split each file into two blocks, one big block and one small block. The small block is used as the private key to encrypt the big one, and then the encrypted big block is uploaded to the cloud storage system. Even if the uploaded big block of a file is stolen, illegal users cannot easily obtain the complete information of the file. Ciphertext-policy attribute-based encryption (CPABE) is often criticized for its heavy overhead and security issues when distributing keys or revoking a user's access right. MPRE-CPABE applies CPABE to the multi-authority cloud storage system and solves these issues. A weighted access structure (WAS) is proposed to support a variety of fine-grained threshold access control policies in multi-authority environments and to reduce the computational cost of key distribution. Meanwhile, MPRE-CPABE uses proxy re-encryption to reduce the computational cost of access revocation. Experiments are implemented on the Ubuntu and CloudSim platforms. Experimental results show that MPRE-CPABE can greatly reduce the computational cost of generating key components and of revoking a user's access right. MPRE-CPABE is also proved secure under the decisional bilinear Diffie-Hellman (DBDH) security model.
Keywords: cloud storage; data partition; multi-authority security; proxy re-encryption; attribute-based encryption (ABE)
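The file-splitting idea, keeping the small block as key material so the uploaded big block alone is unreadable, can be illustrated with a toy XOR stream derived from SHA-256. This only illustrates the split; it is not the MPRE-CPABE construction and not production cryptography:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Stretch the small block into n bytes with counter-mode hashing
    (illustrative only; not secure, not the paper's scheme)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

data = b"example file contents for the cloud storage demo"
small, big = data[:8], data[8:]     # the small block stays with the owner
cipher = bytes(a ^ b for a, b in zip(big, keystream(small, len(big))))
plain = bytes(a ^ b for a, b in zip(cipher, keystream(small, len(cipher))))
print(plain == big)  # True: the stored block is opaque without the small block
```

In the actual mechanism the small block is itself protected under the CPABE policy, so only users whose attributes satisfy the policy can recover the file.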
16. Laser-modified luminescence for optical data storage (Cited: 1)
Authors: 魏鑫, 赵伟玮, 郑婷, 吕俊鹏, 袁学勇, 倪振华. Chinese Physics B (SCIE, EI, CAS, CSCD), 2022, No. 11, pp. 89-99 (11 pages)
The yearly growing quantity of dataflow creates a pressing requirement for advanced data storage methods. Luminescent materials, which possess adjustable parameters such as intensity, emission center, lifetime, polarization, etc., can be used to enable multi-dimensional optical data storage (ODS) with higher capacity, longer lifetime, and lower energy consumption. Multiplexed storage based on luminescent materials can be easily manipulated by lasers and has been considered a feasible option for breaking through the limits of ODS density. Substantial progress in laser-modified luminescence based ODS has been made during the past decade. In this review, we recapitulate recent advances in laser-modified luminescence based ODS, focusing on defect-related regulation, nucleation, dissociation, photoreduction, ablation, etc. We conclude by discussing the current challenges in laser-modified luminescence based ODS and proposing perspectives for future development.
Keywords: laser; luminescence; data storage
17. Fault Management Cyber-Physical Systems in Virtual Storage Model
Authors: Kailash Kumar, Ahmad Abdullah Aljabr. Computers, Materials & Continua (SCIE, EI), 2022, No. 2, pp. 3781-3801 (21 pages)
On average, the amount of data existing globally doubles every two years. Software development will be affected and improved by Cyber-Physical Systems (CPS). Although these developments have helped Information Technology experts extract better value from their storage investments, a number of problems remain. Because of poor interoperability between different vendors and devices, countless Storage Area Networks were created. The network setup used for data storage includes a complex and rigid arrangement of routers, switches, hosts/servers, and storage arrays. We have evaluated the performance of the routing protocols Transmission Control Protocol (TCP) and Fibre Channel Protocol (FCP) under different network scenarios with Network Simulator (NS)-3. We simulated node failure and network congestion with DoS attacks, and their counter-effect on the packet delivery ratio and end-to-end delay efficiency metrics, for different numbers of nodes and node mobility speeds. The study is performed for the Simple Network Management Protocol (SNMP) on FCP routing. The results prove that the proposed method isolates malicious and congested nodes and improves the network's performance.
Keywords: CPS; data storage; virtualization; TCP; FCP
18. Intelligent Identification over Power Big Data: Opportunities, Solutions, and Challenges
Authors: Liang Luo, Xingmei Li, Kaijiang Yang, Mengyang Wei, Jiong Chen, Junqian Yang, Liang Yao. Computer Modeling in Engineering & Sciences (SCIE, EI), 2023, No. 3, pp. 1565-1595 (31 pages)
The emergence of power dispatching automation systems has greatly improved the efficiency of power industry operations and promoted the rapid development of the power industry. However, with the convergence and increase of power data flow, the data dispatching network and the main-station dispatching automation system have come under substantial pressure. Therefore, methods of online data resolution and rapid problem identification for dispatching automation systems have been widely investigated. In this paper, we perform a comprehensive review of automated dispatching of massive dispatching data from the perspective of intelligent identification, discuss unresolved research issues, and outline future directions in this area. In particular, we divide intelligent identification over power big data into data acquisition and storage processes, anomaly detection and fault discrimination processes, and fault tracing for dispatching operations during communication. A detailed survey of the solutions to the challenges in intelligent identification over power big data is then presented. Moreover, opportunities and future directions are outlined.
Keywords: data acquisition; data storage; anomaly detection; service fault-tolerant scheduling
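The anomaly-detection stage surveyed above can, at its simplest, be a threshold on deviations in a telemetry stream. A minimal sketch using a generic z-score rule; the 2-sigma threshold and the feeder-load data are illustrative stand-ins, not methods or figures from the paper:

```python
import statistics

# Generic z-score detector over a window of measurements; the 2-sigma
# threshold is an illustrative choice, not taken from the paper.
def zscore_anomalies(values, threshold=2.0):
    """Return indices whose deviation from the mean exceeds `threshold` sigmas."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > threshold]

# Hypothetical feeder-load telemetry (MW) with one spike at index 5.
load = [50.1, 49.8, 50.3, 50.0, 49.9, 95.0, 50.2]
print(zscore_anomalies(load))  # [5]
```

Production dispatching systems layer far more context (topology, fault discrimination, tracing) on top, but the flagged indices here correspond to the candidate faults such pipelines pass downstream.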
Highly reliable and efficient encoding systems for hexadecimal polypeptide-based data storage (Cited by 1)
19
Authors: Yubin Ren, Yi Zhang, Yawei Liu, Qinglin Wu, Hong-Gang Hu, Jingjing Li, Chunhai Fan, Dong Chen, Kai Liu, Hongjie Zhang 《Fundamental Research》 CAS CSCD, 2023, Issue 2, pp. 298-304 (7 pages)
Polypeptides consisting of amino acid (AA) sequences are suitable for high-density information storage. However, the lack of suitable encoding systems that accommodate the characteristics of polypeptide synthesis, storage, and sequencing impedes the application of polypeptides to large-scale digital data storage. To address this, two reliable and highly efficient encoding systems, i.e., the RaptorQ-Arithmetic-Base64-Shuffle-RS (RABSR) and RaptorQ-Arithmetic-Huffman-Rotary-Shuffle-RS (RAHRSR) systems, are developed for polypeptide data storage. The two encoding systems offer the advantages of compressing data, correcting errors from AA-chain loss, correcting errors within AA chains, eliminating homopolymers, and pseudo-randomized encryption. Without arithmetic compression and error correction, the coding efficiency for audio, pictures, and text with the RABSR system was 3.20, 3.12, and 3.53 Bits/AA, respectively, while the RAHRSR system reached 4.89, 4.80, and 6.84 Bits/AA. When implemented with redundancy for error correction and with arithmetic compression to reduce redundancy, the coding efficiency for audio, pictures, and text with the RABSR system was 4.43, 4.36, and 5.22 Bits/AA, rising to 7.24, 7.11, and 9.82 Bits/AA with the RAHRSR system. Therefore, the developed hexadecimal polypeptide-based systems may provide a new scenario for highly reliable and highly efficient data storage.
Keywords: biomaterial; polypeptide; data storage; hexadecimal; encoding system
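The "Bits/AA" figures above measure how many source bits each synthesized amino acid carries. A toy sketch of why compression raises this number for a hexadecimal (16-letter) code; here zlib stands in for the paper's arithmetic/Huffman stages, and the 16-letter alphabet is an illustrative choice, not the paper's actual residue set:

```python
import zlib

# A hexadecimal code maps each 4-bit nibble to one of 16 amino acids,
# so the raw rate is exactly 4 Bits/AA; the alphabet below is illustrative.
AA16 = "ACDEFGHIKLMNPQRS"

def encode_nibbles(data: bytes) -> str:
    """Map each byte to two amino-acid letters (high nibble, low nibble)."""
    return "".join(AA16[b >> 4] + AA16[b & 0x0F] for b in data)

payload = b"data storage " * 200                  # highly redundant source text
raw = encode_nibbles(payload)
compressed = encode_nibbles(zlib.compress(payload, 9))

print(len(payload) * 8 / len(raw))                # 4.0 Bits/AA, the raw rate
print(len(payload) * 8 / len(compressed) > 4.0)   # True: compression lifts Bits/AA
```

The same accounting explains the paper's numbers: error-correction redundancy spends residues (pushing Bits/AA down), while arithmetic/Huffman compression packs more source bits per residue (pushing it up).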
Control System in Missiles for Whole-Trajectory-Controlled Trajectory Correction Projectile Based on DSP (Cited by 1)
20
Authors: 张志安, 陈俊, 雷晓云 《Transactions of Nanjing University of Aeronautics and Astronautics》 EI, 2014, Issue 3, pp. 325-330 (6 pages)
A control system for correction mechanisms over the whole trajectory is proposed based on the principle of a one-dimensional trajectory correction projectile. A digital signal processor (DSP) is utilized as the core controller, and the Global Positioning System (GPS) is used to measure trajectory parameters to meet the requirements of ballistic calculation and system functions. First, the hardware, mainly comprising the communication module, ballistic calculation module, boosting and detonating module, and data storage module, is designed. Second, the supporting software is developed based on the GPS communication protocols and the workflow of the control system. Finally, the feasibility and reliability of the control system are verified through dynamic tests in a car and live-firing experiments. The system lays a foundation for research on whole-trajectory correction projectiles.
Keywords: trajectory correction projectile; digital signal processing (DSP); GPS; entire-course control; data storage