In order to address the problems of single encryption algorithms, such as low encryption efficiency and unreliable metadata, for static data storage on big data platforms in the cloud computing environment, we propose a Hadoop-based big data secure storage scheme. Firstly, in order to disperse the NameNode service from a single server to multiple servers, we combine the HDFS federation and HDFS high-availability mechanisms, and use the ZooKeeper distributed coordination mechanism to coordinate each node to achieve dual-channel storage. Then, we improve the ECC encryption algorithm for the encryption of ordinary data, and adopt a homomorphic encryption algorithm to encrypt data that needs to be calculated. To accelerate encryption, we adopt a dual-thread encryption mode. Finally, the HDFS control module is designed to combine the encryption algorithms with the storage model. Experimental results show that the proposed solution solves the problem of a single point of failure for metadata, performs well in terms of metadata reliability, and can realize server fault tolerance. The improved encryption algorithm integrates the dual-channel storage mode, and encryption storage efficiency improves by 27.6% on average.
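A minimal sketch of the dual-thread dispatch this abstract describes: ordinary blocks go to one cipher, computable blocks to a homomorphic one, with the two paths running on separate threads. The two cipher functions below are toy placeholders (assumptions), since the paper's improved ECC and homomorphic algorithms are not given here.

```python
# Sketch: dual-thread encryption dispatch. The cipher bodies are toy
# placeholders standing in for the paper's improved ECC and homomorphic
# encryption algorithms.
from concurrent.futures import ThreadPoolExecutor

def ecc_encrypt(block: bytes) -> bytes:          # placeholder for improved ECC
    return bytes(b ^ 0x5A for b in block)        # toy transformation only

def homomorphic_encrypt(block: bytes) -> bytes:  # placeholder for HE scheme
    return bytes((b + 7) % 256 for b in block)   # toy transformation only

def encrypt_batch(blocks):
    """Route computable blocks to HE and ordinary blocks to ECC, two threads."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = [
            pool.submit(homomorphic_encrypt if needs_compute else ecc_encrypt, data)
            for data, needs_compute in blocks
        ]
        return [f.result() for f in futures]

ciphertexts = encrypt_batch([(b"ordinary record", False), (b"numeric field", True)])
```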
Every day, an NDT (non-destructive testing) report will govern key decisions and inform inspection strategies that could affect the flow of millions of dollars, which ultimately affects local environments and potential risk to life. There is a direct correlation between report quality and equipment capability. The more capable the equipment is, in terms of efficient data gathering, signal-to-noise ratio, positioning, and coverage, the more actionable the report is. This results in optimal maintenance and repair strategies, provided the report is clear and well presented. Furthermore, when considering tank floor storage inspection, it is essential that asset owners have total confidence in inspection findings and the ensuing reports. Tank floor inspection equipment must not only be efficient and highly capable, but data sets should be traceable and their integrity maintained throughout. Corrosion mapping of large surface areas such as storage tank bottoms is an inherently arduous and time-consuming process. MFL (magnetic flux leakage) based tank bottom scanners present a well-established and highly rated method for inspection. There are many benefits of using modern MFL technology to generate actionable reports. Chief among these is efficiency of coverage while gaining valuable information regarding defect location, severity, surface origin and the extent of coverage. More recent advancements in modern MFL tank bottom scanners afford the ability to scan and record data sets at areas of the tank bottom which were previously classed as dead zones, that is, areas not scanned due to physical restraints. An example of this includes scanning the CZ (critical zone), which is the area close to the annular-to-shell junction weld. Inclusion of these additional dead zones increases overall inspection coverage, quality and traceability. Inspection of the CZ areas allows engineers to quickly determine the integrity of arguably the most important area of the tank bottom. Herein we discuss notable developments in CZ coverage, inspection efficiency and data integrity that combine to deliver an actionable report. The asset owner can interrogate this report to develop pertinent and accurate maintenance and repair strategies.
With the development of Industry 4.0 and big data technology, the Industrial Internet of Things (IIoT) is hampered by inherent issues such as privacy, security, and fault tolerance, which pose certain challenges to its rapid development. Blockchain technology offers immutability, decentralization, and autonomy, which can greatly mitigate the inherent defects of the IIoT. In the traditional blockchain, data is stored in a Merkle tree. As data continues to grow, the scale of the proofs used to validate it grows as well, threatening the efficiency, security, and reliability of blockchain-based IIoT. Accordingly, this paper first analyzes the inefficiency of the traditional blockchain structure in verifying the integrity and correctness of data. To solve this problem, a new Vector Commitment (VC) structure, Partition Vector Commitment (PVC), is proposed by improving the traditional VC structure. Secondly, this paper uses PVC instead of the Merkle tree to store big data generated by the IIoT. PVC improves the efficiency of traditional VC in the processes of commitment and opening. Finally, this paper uses PVC to build a blockchain-based IIoT data security storage mechanism and carries out a comparative experimental analysis. This mechanism can greatly reduce communication loss and maximize the rational use of storage space, which is of great significance for maintaining the security and stability of blockchain-based IIoT.
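To make the partitioning idea concrete, here is a toy hash-based illustration, assuming a simple two-level scheme: the vector is committed per partition, so opening one element needs only its partition's raw elements plus the partition digests. This is an assumption-laden sketch of the general idea; the paper's actual PVC construction is not reproduced here.

```python
# Toy two-level commitment over a partitioned vector (illustrative only).
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def commit(vector: list, p: int):
    """Hash p-element partitions, then hash the digests into one root."""
    parts = [vector[i:i + p] for i in range(0, len(vector), p)]
    digests = [h(b"".join(part)) for part in parts]
    return h(b"".join(digests)), digests

def open_element(vector, p, idx, digests):
    """Proof for vector[idx]: its partition's elements + all partition digests."""
    part_no = idx // p
    return vector[part_no * p:(part_no + 1) * p], digests

def verify(root, value, idx, p, proof):
    part, digests = proof
    part_no, offset = divmod(idx, p)
    return (part[offset] == value
            and h(b"".join(part)) == digests[part_no]
            and h(b"".join(digests)) == root)

vec = [bytes([i]) * 4 for i in range(16)]
root, ds = commit(vec, p=4)
assert verify(root, vec[5], 5, 4, open_element(vec, 4, 5, ds))
```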
Cloud computing has emerged as a viable alternative to traditional computing infrastructures, offering various benefits. However, the adoption of cloud storage poses significant risks to data secrecy and integrity. This article presents an effective mechanism to preserve the secrecy and integrity of data stored on the public cloud by leveraging blockchain technology, smart contracts, and cryptographic primitives. The proposed approach utilizes a Solidity-based smart contract as an auditor for maintaining and verifying the integrity of outsourced data. To preserve data secrecy, symmetric encryption systems are employed to encrypt user data before outsourcing it. An extensive performance analysis is conducted to illustrate the efficiency of the proposed mechanism. Additionally, a rigorous assessment is conducted to ensure that the developed smart contract is free from vulnerabilities and to measure its associated running costs. The security analysis of the proposed system confirms that our approach can securely maintain the confidentiality and integrity of cloud storage, even in the presence of malicious entities. The proposed mechanism contributes to enhancing data security in cloud computing environments and can be used as a foundation for developing more secure cloud storage systems.
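A minimal sketch of the client-side flow implied here: symmetrically encrypt before outsourcing, then register a ciphertext digest that an on-chain auditor could later verify. It uses the `cryptography` package's Fernet cipher as a stand-in symmetric scheme; the contract and upload calls are illustrative stubs, not the paper's actual interfaces.

```python
# Client-side flow: encrypt, digest, register with the auditor, upload.
import hashlib
from cryptography.fernet import Fernet  # pip install cryptography

def outsource(plaintext: bytes):
    key = Fernet.generate_key()          # symmetric key stays with the user
    ciphertext = Fernet(key).encrypt(plaintext)
    digest = hashlib.sha256(ciphertext).hexdigest()
    register_on_chain(digest)            # stub for the Solidity auditor call
    upload_to_cloud(ciphertext)          # stub for the storage provider API
    return key, digest

def register_on_chain(digest: str):      # illustrative stub
    print(f"auditor.storeDigest({digest})")

def upload_to_cloud(ciphertext: bytes):  # illustrative stub
    print(f"uploaded {len(ciphertext)} bytes")

user_key, proof = outsource(b"outsourced user record")
```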
This paper was motivated by the existing problems of cloud data storage at Imo State University, Nigeria, such as outsourced data causing the loss of data and the misuse of customer information by unauthorized users or hackers, thereby making customer/client data visible and unprotected. This also exposed clients/customers to enormous risk due to defective equipment, bugs, faulty servers, and specious actions. The aim of this paper, therefore, is to analyze a secure model using Unicode Transformation Format (UTF) base64 algorithms for storing data in the cloud securely. The Object-Oriented Hypermedia Analysis and Design Methodology (OOHADM) was adopted. Python was used to develop the security model; role-based access control (RBAC) and multi-factor authentication (MFA) algorithms were integrated to enhance security in the information system developed with HTML5, JavaScript, Cascading Style Sheets (CSS) version 3 and PHP 7. This paper also discusses related concepts, including the development of cloud computing, characteristics of cloud computing, cloud deployment models, and cloud service models. The results showed that the proposed enhanced security model for the information systems of the corporate platform handled multiple authorization and authentication threats, with a single login page directing all login requests from the different modules to one Single Sign-On Server (SSOS), which in turn redirects authenticated users to their requested resources/modules, leveraging geo-location integration for physical location validation. The newly developed system solves the shortcomings of the existing systems and reduces the time and resources incurred while using them.
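Since the model stores data as UTF-8 text run through base64, a minimal round-trip looks like the sketch below. Note that base64 is a reversible encoding, not encryption, so in the described system the RBAC/MFA layers are what actually gate access; the sample record is invented.

```python
import base64

record = "client: Adaeze, balance: 120000 NGN"       # UTF-8 text record
encoded = base64.b64encode(record.encode("utf-8"))   # what gets stored
decoded = base64.b64decode(encoded).decode("utf-8")  # on authorized retrieval
assert decoded == record
# base64 only obscures the bytes; the paper's RBAC and MFA controls are the
# mechanisms that restrict who may perform this decode step.
```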
China's marine data includes marine hydrology, marine meteorology, marine biology, marine chemistry, marine substrate, marine geophysics, seabed topography and other categories of data. The total amount of data has reached the petabyte scale and is still increasing. The safe management of this marine data storage is the basis of building a Smart Ocean. This paper discusses the current situation of the security management of marine data storage in China, analyzes the problems of security management in domestic marine data storage, and puts forward suggestions.
In recent years, optically controlled phase-change memory has drawn intensive attention owing to advanced applications including integrated all-optical nonvolatile memory, in-memory computing, and neuromorphic computing. The light-induced phase transition is the key to this technology. The traditional understanding of the role of light is the heating effect. Generally, the RESET operation of phase-change memory is believed to be a melt-quenching-amorphization process. However, some recent experimental and theoretical investigations have revealed that ultrafast lasers can manipulate the structures of phase-change materials through non-thermal effects, inducing unconventional phase transitions including solid-to-solid amorphization and order-to-order phase transitions. Compared with conventional thermal amorphization, these transitions have potential advantages such as faster speed, better endurance, and lower power consumption. This article summarizes recent progress in experimental observations and theoretical analyses of these unconventional phase transitions. The discussion mainly focuses on the physical mechanisms at the atomic scale to provide guidance for controlling the phase transitions for optical storage. An outlook on possible applications of non-thermal phase transitions in new types of devices is also presented.
Encoding information in light polarization is of great importance in facilitating optical data storage (ODS) for information security and data storage capacity escalation. However, despite recent advances in nanophotonic techniques vastly enhancing the feasibility of applying polarization channels, the data fidelity in reconstructed bits has been constrained by severe crosstalk occurring between varied polarization angles during the data recording and reading processes, which has gravely hindered the practical utilization of this technique. In this paper, we demonstrate an ultra-low-crosstalk polarization-encoding multilayer ODS technique for high-fidelity data recording and retrieving by utilizing a nanofibre-based nanocomposite film involving highly aligned gold nanorods (GNRs). By parallelizing the gold nanorods in the recording medium, the information carrier configuration minimizes miswriting and misreading possibilities for information input and output, respectively, compared with its randomly self-assembled counterparts. The enhanced data accuracy has significantly improved the bit recall fidelity, which is quantified by a correlation coefficient higher than 0.99. It is anticipated that the demonstrated technique can facilitate the development of multiplexing ODS for a greener future.
Long-term optical data storage (ODS) technology is essential to break the bottleneck of high energy consumption for information storage in the current era of big data. Here, ODS with an ultralong lifetime of 2×10^7 years is attained with single ultrafast laser pulse induced reduction of Eu^3+ ions and tailoring of optical properties inside Eu-doped aluminosilicate glasses. We demonstrate that the induced local modifications in the glass can withstand temperatures of up to 970 K and strong ultraviolet light irradiation with a power density of 100 kW/cm^2. Furthermore, the active Eu^2+ ions exhibit strong and broadband emission with a full width at half maximum reaching 190 nm, and the photoluminescence (PL) is flexibly tunable across the whole visible region by regulating the alkaline earth metal ions in the glasses. The developed technology and materials will be of great significance in photonic applications such as long-term ODS.
DNA molecules are green materials with great potential for high-density and long-term data storage. However, the current data-writing process of DNA data storage via DNA synthesis suffers from high costs and the production of hazards, limiting its practical applications. Here, we developed a DNA movable-type storage system that can utilize DNA fragments pre-produced by cell factories for data writing. In this system, these pre-generated DNA fragments, referred to herein as "DNA movable types," are used as basic writing units in a repetitive way. The process of data writing is achieved by the rapid assembly of these DNA movable types, thereby avoiding the costly and environmentally hazardous process of de novo DNA synthesis. With this system, we successfully encoded 24 bytes of digital information in DNA and read it back accurately by means of high-throughput sequencing and decoding, thereby demonstrating the feasibility of this system. Through its repetitive usage and biological assembly of DNA movable-type fragments, this system exhibits excellent potential for writing cost reduction, opening up a novel route toward an economical and sustainable digital data-storage technology.
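A toy illustration of movable-type-style writing: bytes are mapped to bases, and "writing" becomes lookup and assembly of short pre-made fragments rather than de novo synthesis. The 2-bit-per-base code and 4-base fragment size are illustrative assumptions, not the paper's actual encoding scheme.

```python
# Toy movable-type encoding: 2 bits per base, 4-base reusable fragments.
BASES = "ACGT"

def bytes_to_dna(data: bytes) -> str:
    return "".join(BASES[(b >> s) & 0b11] for b in data for s in (6, 4, 2, 0))

def dna_to_bytes(dna: str) -> bytes:
    vals = [BASES.index(c) for c in dna]
    return bytes((vals[i] << 6) | (vals[i+1] << 4) | (vals[i+2] << 2) | vals[i+3]
                 for i in range(0, len(vals), 4))

message = b"DNA movable type"
strand = bytes_to_dna(message)
fragments = [strand[i:i+4] for i in range(0, len(strand), 4)]  # "movable types"
print(sorted(set(fragments)))     # the reusable fragment library for this message
assert dna_to_bytes("".join(fragments)) == message
```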
The yearly growing quantities of dataflow create a pressing requirement for advanced data storage methods. Luminescent materials, which possess adjustable parameters such as intensity, emission center, lifetime, and polarization, can be used to enable multi-dimensional optical data storage (ODS) with higher capacity, longer lifetime and lower energy consumption. Multiplexed storage based on luminescent materials can be easily manipulated by lasers, and has been considered a feasible option to break through the limits of ODS density. Substantial progress in laser-modified luminescence based ODS has been made during the past decade. In this review, we recapitulate recent advancements in laser-modified luminescence based ODS, focusing on defect-related regulation, nucleation, dissociation, photoreduction, ablation, etc. We conclude by discussing the current challenges in laser-modified luminescence based ODS and proposing perspectives for future development.
With the development of cloud computing, mutual understandability among distributed data access controls has become an important issue in the security field of cloud computing. To ensure the security, confidentiality and fine-grained data access control of a Cloud Data Storage (CDS) environment, we propose a Multi-Agent System (MAS) architecture. This architecture consists of two agents: the Cloud Service Provider Agent (CSPA) and the Cloud Data Confidentiality Agent (CDConA). CSPA provides a graphical interface to the cloud user that facilitates access to the services offered by the system. CDConA provides each cloud user with the definition and enforcement of an expressive and flexible access structure as a logic formula over cloud data file attributes. This new access control is named Formula-Based Cloud Data Access Control (FCDAC). Our proposed MAS-based FCDAC consists of four layers: an interface layer, an existing access control layer, the proposed FCDAC layer and a CDS layer, as well as four types of entities: the Cloud Service Provider (CSP), cloud users, a knowledge base and confidentiality policy roles. FCDAC is an access policy determined by our MAS architecture, not by the CSPs. A prototype of our proposed FCDAC scheme is implemented using the Java Agent Development Framework Security (JADE-S). Our results in the practical scenario defined formally in this paper show the Round Trip Time (RTT) for an agent to travel in our system, measured by the time required for an agent to travel around different numbers of cloud users before and after implementing FCDAC.
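Since the access structure is a logic formula over file attributes, a minimal evaluator for such formulas might look like the sketch below. The attribute names and the example policy are invented for illustration; the paper's agent plumbing is not shown.

```python
# Minimal formula-based access check: a boolean formula over attributes,
# evaluated against a user's attribute set. Names are illustrative.
def satisfies(user_attrs: set, policy) -> bool:
    op, *terms = policy
    if op == "ATTR":
        return terms[0] in user_attrs
    results = [satisfies(user_attrs, t) for t in terms]
    return all(results) if op == "AND" else any(results)

# policy: (researcher AND project-x) OR auditor
policy = ("OR",
          ("AND", ("ATTR", "role:researcher"), ("ATTR", "project:x")),
          ("ATTR", "role:auditor"))

print(satisfies({"role:researcher", "project:x"}, policy))  # True
print(satisfies({"role:researcher"}, policy))               # False
```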
This paper introduces an agent-based methodology to build a distributed autonomic storage system infrastructure, and an effective agent-based negotiation mechanism is applied for data location. We present the Availability-based Data Allocation (ADA) algorithm as a data placement strategy to achieve highly efficient utilization of storage resources by employing multiple distributed storage resources. We use a Bloom filter in each storage device to track the location of data. We present a data lookup strategy in which small read requests are handled directly, and large read requests are handled through cooperation among storage devices. The performance evaluation shows that the data location mechanism is highly available and works well for heterogeneous autonomic storage systems.
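A minimal Bloom filter of the kind each storage device could keep to answer "might this block live here?" with no false negatives. The bit-array size and hash count below are illustrative parameters, not values from the paper.

```python
import hashlib

class BloomFilter:
    """Compact membership sketch: no false negatives, tunable false positives."""
    def __init__(self, m_bits: int = 1024, k_hashes: int = 3):
        self.m, self.k = m_bits, k_hashes
        self.bits = 0                                   # big int as bit array

    def _positions(self, key: str):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, key: str):
        for pos in self._positions(key):
            self.bits |= 1 << pos

    def might_contain(self, key: str) -> bool:
        return all(self.bits >> pos & 1 for pos in self._positions(key))

device_index = BloomFilter()
device_index.add("block-7f3a")
assert device_index.might_contain("block-7f3a")   # definitely indexed
print(device_index.might_contain("block-0000"))   # almost surely False
```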
To achieve high availability of health data in erasure-coded cloud storage systems, the data update performance of erasure coding should be continuously optimized. However, data update performance is often bottlenecked by the constrained cross-rack bandwidth. Various techniques have been proposed in the literature to improve network bandwidth efficiency, including delta transmission, relay, and batch update. These techniques were largely proposed individually, and in this work we seek to use them jointly. To mitigate the cross-rack update traffic, we propose DXR-DU, which builds on four valuable techniques: (i) delta transmission, (ii) XOR-based data update, (iii) relay, and (iv) batch update. Meanwhile, we offer two selective update approaches: (1) data-delta-based update and (2) parity-delta-based update. The proposed DXR-DU is evaluated via trace-driven local testbed experiments. Comprehensive experiments show that DXR-DU can significantly improve data update throughput while mitigating the cross-rack update traffic.
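The XOR-based delta update works because, for XOR parity, the new parity equals the old parity XORed with the data delta, so only the delta ever needs to cross racks. A toy single-parity sketch (DXR-DU's relay and batching layers are not shown):

```python
# Toy single-parity delta update: patch the parity with just the delta.
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

blocks = [b"\x11" * 4, b"\x22" * 4, b"\x33" * 4]
parity = xor(xor(blocks[0], blocks[1]), blocks[2])

new_block = b"\x99" * 4
delta = xor(blocks[1], new_block)   # computed in the rack holding the data
parity = xor(parity, delta)         # applied in the rack holding the parity
blocks[1] = new_block

assert parity == xor(xor(blocks[0], blocks[1]), blocks[2])  # still consistent
```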
Data redundancy exists in both traditional databases and existing temporal databases, and the volume of temporal data is increasing rapidly. We put forward a compressed storage tactic for temporal data, combining existing compression technology, in order to settle data redundancy in the course of temporal data storage; temporal data in slowly changing domains and in momentarily changing domains are accessed by the independent clock method and the mutual clock method, respectively. We also bring forward a grid storage strategy to resolve the problem of rapidly growing temporal data.
Objectives: The aim of this study was to investigate and develop a data storage and exchange format for the process of automatic systematic reviews (ASR) of traditional Chinese medicine (TCM). Methods: A lightweight and commonly used data format, JavaScript Object Notation (JSON), was introduced in this study. We designed a fully described data structure to collect TCM clinical trial information based on the JSON syntax. Results: A smart and powerful data format, JSON-ASR, was developed. JSON-ASR uses a plain-text data format in the form of key/value pairs and consists of six sections and more than 80 preset pairs. JSON-ASR adopts extensible structured arrays to support multi-group and multi-outcome situations. Conclusion: JSON-ASR has the characteristics of lightness, flexibility, and good scalability, which is suitable for the complex data of clinical evidence.
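A hypothetical sketch of what a JSON-ASR-style record could look like: key/value pairs grouped into sections, with arrays for multi-group and multi-outcome trials. The section and field names below are guesses for illustration; the abstract does not enumerate the format's actual six sections or 80+ preset pairs.

```python
import json

trial = {
    "identification": {"title": "Trial of formula X", "registry_id": "example-id"},
    "design": {"type": "RCT", "blinding": "double"},
    "participants": {"sample_size": 120},
    "interventions": [                    # one entry per study arm (multi-group)
        {"group": "treatment", "intervention": "formula X"},
        {"group": "control", "intervention": "placebo"},
    ],
    "outcomes": [                         # extensible multi-outcome array
        {"name": "symptom score", "measure": "mean difference", "value": -1.8},
    ],
    "metadata": {"extracted_by": "reviewer-1"},
}
print(json.dumps(trial, ensure_ascii=False, indent=2))
```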
The benefits of cloud storage come along with challenges and open issues concerning the availability of services, vendor lock-in, data security, and so on. One solution to mitigate these problems is multi-cloud storage, where the selection of service providers is a key point. In this paper, an IDA-based algorithm that can select the optimal provider subset for data placement among a set of providers in a multi-cloud storage architecture is proposed, designed to achieve a good tradeoff among storage cost, algorithm cost, vendor lock-in, transmission performance and data availability. Experiments demonstrate that it is efficient and accurate in finding optimal solutions in a reasonable amount of time, using parameters taken from real cloud providers.
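For small provider sets, the flavor of the selection problem can be shown with a brute-force search: disperse data with an (n, k) information dispersal code across a chosen subset and pick the cheapest subset meeting an availability target. The provider figures are invented and the exhaustive search is a stand-in, not the paper's actual algorithm.

```python
# Brute-force provider-subset selection under an (n, K) dispersal code.
from itertools import combinations

providers = {"A": (0.02, 0.99), "B": (0.03, 0.995),
             "C": (0.01, 0.98), "D": (0.025, 0.99)}  # name: ($/GB, availability)
K = 2  # any K of the chosen providers suffice to rebuild the data

def availability(avails, k):
    """P(at least k of the chosen providers are up), assuming independence."""
    n, total = len(avails), 0.0
    for state in range(1 << n):                 # enumerate up/down states
        up = [i for i in range(n) if state >> i & 1]
        if len(up) >= k:
            p = 1.0
            for i in range(n):
                p *= avails[i] if i in up else 1 - avails[i]
            total += p
    return total

candidates = [s for r in range(K, len(providers) + 1)
              for s in combinations(providers, r)
              if availability([providers[p][1] for p in s], K) >= 0.9999]
# With (n, K) dispersal each provider stores 1/K of the data, so cost ~ sum/K.
best = min(candidates, key=lambda s: sum(providers[p][0] for p in s) / K)
print(best)
```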
Remote sensing data is a cheap form of surficial geoscientific data, and in terms of veracity, velocity and volume, can sometimes be considered big data. Its spatial and spectral resolution continues to improve over time, and some modern satellites, such as the Copernicus Programme's Sentinel-2 remote sensing satellites, offer a spatial resolution of 10 m across many of their spectral bands. The abundance and quality of remote sensing data, combined with accumulated primary geochemical data, has provided an unprecedented opportunity to inferentially invert remote sensing data into geochemical data. The ability to derive geochemical data from remote sensing data would provide a form of secondary big geochemical data, which can be used for numerous downstream activities, particularly where data timeliness, volume and velocity are important. Major beneficiaries of secondary geochemical data would be environmental monitoring and applications of artificial intelligence and machine learning in geochemistry, which currently rely entirely on manually derived data that is primarily guided by scientific reduction. Furthermore, it permits the transfer of well-established data analysis techniques from geochemistry to remote sensing, allowing useable insights to be extracted beyond those typically associated with strictly remote sensing data analysis. Currently, no generally applicable and systematic method to derive chemical elemental concentrations from large-scale remote sensing data has been documented in the geosciences. In this paper, we demonstrate that fusing geostatistically-augmented geochemical and remote sensing data produces an abundance of data that enables a more generalized machine learning-based geochemical data generation. We use gold grade data from a South African tailings storage facility (TSF) and data from both the Landsat-8 and Sentinel remote sensing satellites. We show that various machine learning algorithms can be used given the abundance of training data. Consequently, we are able to produce a high-resolution (10 m grid size) gold concentration map of the TSF, which demonstrates the potential of our method to guide extraction planning, online resource exploration, environmental monitoring and resource estimation.
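The band-reflectance-to-grade regression described here can be sketched schematically on synthetic data. The six-band feature layout, the synthetic grade relation, and the random-forest choice are all placeholders (assumptions) standing in for the paper's actual feature set and model comparison.

```python
# Schematic regression from spectral bands to element grade, synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
bands = rng.uniform(0.0, 1.0, size=(n, 6))          # e.g. six spectral bands
gold_gpt = 0.5 * bands[:, 2] - 0.3 * bands[:, 4] + rng.normal(0, 0.05, n) + 1.0

X_train, X_test, y_train, y_test = train_test_split(bands, gold_gpt, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(f"R^2 on held-out pixels: {model.score(X_test, y_test):.3f}")
# A fitted model of this kind can then be applied per 10 m pixel of rasterized
# satellite bands to produce a concentration map.
```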
The cloud storage service cannot be completely trusted because of the separation of data management and ownership, leading to difficulty in data privacy protection. In order to protect the privacy of data on untrusted servers of cloud storage, a novel multi-authority access control scheme without a trustworthy central authority is proposed based on CP-ABE for cloud storage systems, called non-centered multi-authority proxy re-encryption based on ciphertext-policy attribute-based encryption (NC-MACPABE). NC-MACPABE optimizes the weighted access structure (WAS), allowing different levels of operation on the same file in the cloud storage system. The concept of identity dyeing is introduced to further improve users' information privacy. The re-encryption algorithm is improved in the scheme so that the data owner can revoke a user's access right in a more flexible way. The scheme is proved to be secure. The experimental results also show that removing the central authority resolves the performance bottleneck of multi-authority architectures with a central authority, which significantly improves user experience when a large number of users apply for access to the cloud storage system at the same time.
In order to provide a practicable solution to data confidentiality in cloud storage service, a data assured deletion scheme, which achieves fine-grained access control, hopping and sniffing attack resistance, data dynamics and deduplication, is proposed. In our scheme, data blocks are encrypted by a two-level encryption approach, in which the control keys are generated from a key derivation tree, encrypted by an All-Or-Nothing algorithm and then distributed into a DHT network after being partitioned by secret sharing. This guarantees that only authorized users can recover the control keys and then decrypt the outsourced data within an owner-specified data lifetime. Besides confidentiality, data dynamics and deduplication are also achieved, respectively, by adjustment of the key derivation tree and by convergent encryption. The analysis and experimental results show that our scheme satisfies its security goals and performs assured deletion at low cost.
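A minimal hash-based key derivation tree conveys the deletion idea: every block key is re-derivable from the root, so once the root's shares expire from the DHT, all descendant keys become unrecoverable at once. This is a sketch under that assumption only; the paper's All-Or-Nothing transform and secret-sharing layers are not reproduced.

```python
# Minimal key derivation tree: child keys derive from the parent by hashing.
import hashlib

def derive(parent_key: bytes, child_index: int) -> bytes:
    return hashlib.sha256(parent_key + child_index.to_bytes(4, "big")).digest()

def block_key(root_key: bytes, path: list) -> bytes:
    """Walk from the root to a leaf; each leaf keys one data block."""
    key = root_key
    for index in path:
        key = derive(key, index)
    return key

root = hashlib.sha256(b"owner secret").digest()  # would be split into DHT shares
k_left = block_key(root, [0, 1])                 # key for block at path 0 -> 1
k_right = block_key(root, [1, 0])
assert k_left != k_right
# Once the root's shares vanish from the DHT, neither key can be re-derived,
# which is what makes the deletion "assured".
```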