With the rise of remote collaboration, the demand for advanced storage and collaboration tools has rapidly increased. However, traditional collaboration tools primarily rely on access control, leaving data stored on cloud servers vulnerable due to insufficient encryption. This paper introduces a novel mechanism that encrypts data in "bundle" units, designed to meet the dual requirements of efficiency and security for frequently updated collaborative data. Each bundle includes update information, allowing only the updated portions to be re-encrypted when changes occur. The encryption method proposed in this paper addresses the inefficiencies of traditional encryption modes, such as Cipher Block Chaining (CBC) and Counter (CTR), which require decrypting and re-encrypting the entire dataset whenever updates occur. The proposed method leverages update-specific information embedded within data bundles, together with metadata that maps the relationship between these bundles and the plaintext data. Using this information, the method accurately identifies the modified portions and selectively re-encrypts only those sections. This approach significantly improves the efficiency of data updates while maintaining high performance, particularly in large-scale data environments. To validate the approach, we conducted experiments measuring execution time as both the size of the modified data and the total dataset size varied. Results show that the proposed method significantly outperforms CBC and CTR modes in execution speed, with greater performance gains as data size increases. Additionally, our security evaluation confirms that the method provides robust protection against both passive and active attacks.
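As a rough illustration of the bundle idea, the sketch below encrypts each fixed-size bundle independently with AES-GCM so that an edit re-encrypts a single bundle rather than the whole file. The bundle size, metadata layout, and use of Python's cryptography package are our assumptions, not the paper's exact construction.

```python
# Hypothetical sketch: per-bundle encryption so an edit touches one bundle, not the file.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

BUNDLE = 4096  # bytes of plaintext per bundle (assumed granularity)

def encrypt_all(key: bytes, data: bytes):
    aead = AESGCM(key)
    bundles = []
    for off in range(0, len(data), BUNDLE):
        nonce = os.urandom(12)                       # fresh nonce per bundle
        ct = aead.encrypt(nonce, data[off:off + BUNDLE], None)
        bundles.append({"off": off, "nonce": nonce, "ct": ct})  # metadata: offset -> bundle
    return bundles

def update(key: bytes, bundles, off: int, new_plain: bytes):
    """Re-encrypt only the bundle covering plaintext offset `off`."""
    aead = AESGCM(key)
    b = bundles[off // BUNDLE]
    plain = bytearray(aead.decrypt(b["nonce"], b["ct"], None))
    rel = off - b["off"]
    plain[rel:rel + len(new_plain)] = new_plain
    b["nonce"] = os.urandom(12)                      # never reuse a nonce
    b["ct"] = aead.encrypt(b["nonce"], bytes(plain), None)
```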
The scale and complexity of big data are growing continuously, posing severe challenges to traditional data processing methods, especially in the field of clustering analysis. To address this issue, this paper introduces a new method named Big Data Tensor Multi-Cluster Distributed Incremental Update (BDTMCDIncreUpdate), which combines distributed computing, storage technology, and incremental update techniques to provide an efficient and effective means for clustering analysis. First, the original dataset is divided into multiple sub-blocks, and distributed computing resources are used to process the sub-blocks in parallel, enhancing efficiency. Then, initial clustering is performed on each sub-block using tensor-based multi-clustering techniques to obtain preliminary results. When new data arrive, incremental update techniques are employed to update the core tensor and factor matrices, ensuring that the clustering model can adapt to changes in the data. Finally, by combining the updated core tensor and factor matrices with historical computational results, refined clustering results are obtained, achieving real-time adaptation to dynamic data. In experimental simulations on the Aminer dataset, the BDTMCDIncreUpdate method demonstrated outstanding performance on the accuracy (ACC) and normalized mutual information (NMI) metrics, achieving an accuracy of 90% and an NMI score of 0.85, outperforming existing methods such as TClusInitUpdate and TKLClusUpdate in most scenarios. The BDTMCDIncreUpdate method therefore offers an innovative solution for big data analysis, integrating distributed computing, incremental updates, and tensor-based multi-clustering. It not only improves efficiency and scalability when processing large-scale, high-dimensional datasets but has also been validated for effectiveness and accuracy through experiments. The method shows great potential in real-world applications where dynamic data growth is common and is of significant importance for advancing data analysis technology.
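The core-tensor and factor-matrix updates are beyond an abstract-level sketch, but the incremental-update principle the method relies on — folding new data into existing results instead of re-clustering history — can be shown with a running-mean centroid update. Everything below (class name, update rule) is a simplified stand-in, not the BDTMCDIncreUpdate algorithm itself.

```python
# Toy illustration of incremental updating: new batches are absorbed in O(batch) work,
# without revisiting historical data. Cluster centroids stand in for the core tensor.
import numpy as np

class IncrementalClusters:
    def __init__(self, centroids: np.ndarray):
        self.c = centroids.astype(float)          # k x d, from initial sub-block clustering
        self.n = np.ones(len(centroids))          # points absorbed per cluster

    def update(self, batch: np.ndarray):
        """Fold a new batch in without re-clustering history."""
        for x in batch:
            j = np.argmin(((self.c - x) ** 2).sum(axis=1))   # nearest cluster
            self.n[j] += 1
            self.c[j] += (x - self.c[j]) / self.n[j]          # running-mean update

rng = np.random.default_rng(0)
model = IncrementalClusters(rng.normal(size=(3, 4)))
model.update(rng.normal(size=(100, 4)))           # new data arrives; model adapts in place
```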
Incremental updating, which can better keep a navigational map current, is the development direction for navigational road network updating. The data center of a vehicle navigation system is in charge of storing incremental data, and the spatio-temporal data model used to store that data affects how efficiently the data center can respond to requests for incremental data from vehicle terminals. Based on an analysis of the shortcomings of several typical spatio-temporal data models used in data centers, and building on the base map with overlay model, the reverse map with overlay model (RMOM) is put forward to enable the data center to respond rapidly to incremental data requests. RMOM allows the data center to store not only the current complete road network data but also overlays of incremental data covering the period from each road network change to the current moment. Moreover, the storage mechanism and index structure of the incremental data were designed, and the implementation algorithm of RMOM was developed. Taking the navigational road network in Guangzhou City as an example, a simulation test was conducted to validate the efficiency of RMOM. Results show that with RMOM the navigation database in the data center can answer an incremental data request with only one query and in less time. Compared with the base map with overlay model, the data center does not need to overlay incremental data on the fly under RMOM, so response time is significantly reduced. RMOM greatly improves response efficiency and provides strong support for keeping the navigational road network current.
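A toy, assumed-shape sketch of the RMOM idea: the data center keeps the current network plus, for each past version, a precomputed overlay of all changes since that version, so a terminal's request is served by one lookup. The names and structures below are illustrative, not the paper's schema.

```python
# Reverse-overlay sketch: overlay[v] holds all changes from version v up to now,
# maintained eagerly on each change so requests need no on-the-fly merging.
current_network = {"road_42": {"speed_limit": 50}}   # toy road network

overlay = {3: {}}                                    # version 3 is current at start

def apply_change(new_version: int, delta: dict):
    for v in overlay:                                # fold the delta into every overlay
        overlay[v].update(delta)
    overlay[new_version] = {}                        # the new version needs nothing yet
    current_network.update(delta)

def incremental_request(terminal_version: int) -> dict:
    return overlay[terminal_version]                 # one query answers the terminal

apply_change(4, {"road_42": {"speed_limit": 60}})
print(incremental_request(3))                        # everything a v3 terminal is missing
```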
To address the problem of workflow data consistency in a distributed environment, an invalidation strategy based on a timely-updated record list is put forward. The strategy improves the classical invalidation strategy by combining an updated record list with a recovery mechanism for update messages. When the request cycle of a replica is too long, the strategy uses the record list to pause the sending of update messages to it; when the long-cycle replica is requested again, the recovery mechanism resumes the update messages. This strategy not only ensures the consistency of workflow data but also reduces unnecessary network traffic. Theoretical comparison with common strategies shows that the unnecessary network traffic of this strategy is lower and more stable. Simulation results validate this conclusion.
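A hedged sketch of the pause-and-recover mechanism: the record list tracks each replica's last request time, update messages to long-idle replicas are withheld and recorded, and the withheld updates are replayed when the replica requests again. The threshold and data structures are our choices, not the paper's protocol.

```python
# Pause pushes to idle replicas, record what they missed, replay on their next request.
import time

THRESHOLD = 300.0                     # seconds of idleness before pushes are paused

last_request = {}                     # replica -> timestamp of its last request
pending = {}                          # replica -> update messages withheld while paused

def send(replica, update):
    print(f"send {update!r} to {replica}")

def push_update(replicas, update):
    now = time.time()
    for r in replicas:
        if now - last_request.get(r, now) > THRESHOLD:
            pending.setdefault(r, []).append(update)   # pause: record instead of send
        else:
            send(r, update)

def on_request(replica):
    last_request[replica] = time.time()
    for u in pending.pop(replica, []): # recovery: replay every withheld update
        send(replica, u)
```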
Fingerprint-based Bluetooth positioning is a popular indoor positioning technology. However, changes in the indoor environment and in Bluetooth anchor locations have a significant impact on signal distribution, which causes positioning accuracy to decline. The widespread adoption of Bluetooth positioning is limited by the manual effort needed to collect fingerprints with position labels for fingerprint database construction and updating. To address this problem, this paper presents an adaptive fingerprint database updating approach. First, crowdsourced data, including Bluetooth Received Signal Strength (RSS) sequences and the speed and heading of the pedestrian, are recorded. Second, the recorded crowdsourced data are fused by Kalman Filtering (KF) and then fed into a trajectory validity analysis model, which assigns position labels to the unlabeled RSS data to generate candidate fingerprints. Third, once enough candidate fingerprints have been obtained at each Reference Point (RP), Density-based Spatial Clustering of Applications with Noise (DBSCAN) is applied to both the original and the candidate fingerprints to filter out fingerprints identified as noise, and the mean of the fingerprints in the cluster with the largest data volume is selected as the updated fingerprint for the corresponding RP. Finally, extensive experimental results show that as the number of candidate fingerprints and update iterations increases, fingerprint-based Bluetooth positioning accuracy is effectively improved.
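The per-RP filtering step maps naturally onto scikit-learn's DBSCAN, as in the sketch below; the eps and min_samples values are placeholders to be tuned, not values from the paper.

```python
# One RP's update: cluster original + candidate fingerprints, drop DBSCAN noise,
# and return the mean of the largest cluster as the new fingerprint.
import numpy as np
from sklearn.cluster import DBSCAN

def updated_fingerprint(original: np.ndarray, candidates: np.ndarray) -> np.ndarray:
    rss = np.vstack([original, candidates])          # rows: RSS vectors at this RP
    labels = DBSCAN(eps=3.0, min_samples=5).fit_predict(rss)
    kept = labels[labels != -1]                      # -1 marks DBSCAN noise
    biggest = np.bincount(kept).argmax()             # cluster with the most fingerprints
    return rss[labels == biggest].mean(axis=0)       # updated fingerprint for the RP
```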
The four-dimensional variational (4D-Var) data assimilation systems used in most operational and research centers use initial condition increments as control variables and adjust the initial increments to find optimal analysis solutions. This approach may sometimes create discontinuities in analysis fields and produce undesirable spin-ups and spin-downs. This study explores using incremental analysis updates (IAU) in 4D-Var to reduce these analysis discontinuities. IAU-based 4D-Var has almost the same mathematical formulation as conventional 4D-Var if the initial condition increments are replaced with time-integrated increments as control variables. The IAU technique was implemented in the NASA/GSFC 4D-Var prototype and compared against a control run without IAU. The results show that the initial precipitation spikes were removed and that other discontinuities were also reduced, especially in the analysis of surface temperature.
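In schematic form (our notation, not necessarily the authors'), IAU spreads the analysis increment over the assimilation window as a constant forcing rather than adding it all at the initial time:

```latex
% Schematic IAU forcing: the analysis increment \delta x_a is applied gradually
% over a window of length T instead of as a jump at t_0.
\[
  \mathbf{x}(t_{k+1}) \;=\; \mathcal{M}_{t_k \to t_{k+1}}\!\bigl(\mathbf{x}(t_k)\bigr)
  \;+\; \frac{\Delta t}{T}\,\delta \mathbf{x}_a ,
  \qquad t_k \in [t_0,\, t_0 + T],
\]
```

Because the model state never jumps, the spurious gravity-wave response (the spin-up/spin-down the abstract mentions) is damped.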
To achieve high availability of health data in erasure-coded cloud storage systems, the data update performance of erasure coding should be continuously optimized. However, data update performance is often bottlenecked by constrained cross-rack bandwidth. Various techniques have been proposed in the literature to improve network bandwidth efficiency, including delta transmission, relay, and batch update. These techniques were largely proposed individually; in this work, we seek to use them jointly. To mitigate cross-rack update traffic, we propose DXR-DU, which builds on four valuable techniques: (i) delta transmission, (ii) XOR-based data update, (iii) relay, and (iv) batch update. Meanwhile, we offer two selective update approaches: (1) data-delta-based update and (2) parity-delta-based update. The proposed DXR-DU is evaluated via trace-driven local testbed experiments. Comprehensive experiments show that DXR-DU significantly improves data update throughput while mitigating cross-rack update traffic.
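The delta-transmission plus XOR-based update combination can be seen in miniature with single XOR parity; real erasure codes weight the delta with Galois-field coefficients, which this sketch omits.

```python
# Only the XOR delta between old and new data crosses the rack; the parity node
# patches its block in place, without reading any other data blocks.
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def data_delta_update(old_data: bytes, new_data: bytes, parity: bytes) -> bytes:
    delta = xor_bytes(old_data, new_data)   # shipped across the rack boundary
    return xor_bytes(parity, delta)         # parity' = parity XOR delta

old, new, parity = b"\x01\x02", b"\x11\x02", b"\xaa\xbb"
new_parity = data_delta_update(old, new, parity)
```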
The Internet of Things (IoT) has emerged as one of the new use cases in 5th Generation (5G) wireless networks. However, the transient nature of the data generated in IoT networks brings great challenges for content caching. In this paper, we study a joint content caching and updating strategy for IoT networks, taking both the energy consumption of the sensors and the freshness loss of the contents into account. In particular, we decide whether or not to cache the transient data and, if so, how often the servers should update their contents. We formulate this content caching and updating problem as a mixed 0-1 integer non-convex optimization program and devise a Harmony Search based content Caching and Updating (HSCU) algorithm, which is self-learning and derivative-free and hence imposes no requirement on the relationship between the objective and the variables. Finally, extensive simulation results verify the effectiveness of our proposed algorithm in terms of the achieved satisfaction ratio for content delivery, normalized energy consumption, and overall network utility, by comparing it with several baseline algorithms.
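For reference, a generic harmony search loop — derivative-free, as the abstract stresses — is sketched below against a toy objective; HSCU's actual encoding of caching and update-frequency decisions is not reproduced, and the parameter values are conventional defaults, not the paper's.

```python
# Generic harmony search: keep a memory of candidate solutions, improvise a new
# harmony from memory (with pitch adjustment) or at random, replace the worst.
import random

def harmony_search(f, dim, lo, hi, hms=10, hmcr=0.9, par=0.3, iters=2000):
    memory = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    memory.sort(key=f)
    for _ in range(iters):
        new = []
        for d in range(dim):
            if random.random() < hmcr:                   # draw from harmony memory
                v = random.choice(memory)[d]
                if random.random() < par:                # small pitch adjustment
                    v += random.uniform(-0.1, 0.1)
            else:                                        # random exploration
                v = random.uniform(lo, hi)
            new.append(min(max(v, lo), hi))
        if f(new) < f(memory[-1]):                       # replace the worst harmony
            memory[-1] = new
            memory.sort(key=f)
    return memory[0]

best = harmony_search(lambda x: sum(v * v for v in x), dim=4, lo=-5, hi=5)
```

Note that the loop only ever evaluates f, never its gradient, which is why no structural assumption links the objective to the decision variables.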
To address the observation uncertainty caused by the limited accuracy of sensors, as well as the uncertainty of the observation source in clutter, a novel probabilistic data association algorithm based on an ensemble Kalman filter with iterated observation updates is proposed through the dynamic combination of the ensemble Kalman filter (EnKF) and probabilistic data association (PDA). First, drawing on the strength of data assimilation in handling observation uncertainty, an iterated observation update strategy is used to structurally optimize the EnKF, with the aim of further improving the state estimation precision of nonlinear systems. Second, the improved filter is introduced into the PDA framework to increase the reliability and stability of candidate echo acknowledgement. In addition, to reduce the computational complexity of combining the improved EnKF with PDA, a maximum observation iterated update mechanism is applied to the PDA iteration. Finally, simulation results on a typical target tracking scenario in clutter verify the feasibility and effectiveness of the proposed algorithm.
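A bare-bones stochastic EnKF analysis step (perturbed observations) is sketched below; the proposed algorithm iterates this observation update and embeds it in PDA, both of which are omitted here, and all names are ours.

```python
# One EnKF analysis step: sample covariance from the ensemble, Kalman gain,
# perturbed observations, and ensemble update.
import numpy as np

def enkf_update(X, y, H, R, rng):
    """X: n x N ensemble; y: m observations; H: m x n; R: m x m obs covariance."""
    N = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)              # ensemble anomalies
    Pf = A @ A.T / (N - 1)                             # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)     # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, N).T  # perturbed obs
    return X + K @ (Y - H @ X)                         # analysis ensemble

rng = np.random.default_rng(1)
X = rng.normal(size=(2, 50))                           # 2-state, 50-member ensemble
Xa = enkf_update(X, np.array([0.5]), np.array([[1.0, 0.0]]), np.eye(1) * 0.1, rng)
```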
Heterogeneous equipment, systems, and databases give rise to the "information island" problem, while equipment data must be updatable in real time on the business nodes. This paper proposes a data synchronization platform based on J2EE (JMS) and XML, and gives a detailed analysis and description of the system's workflow, framework, and key technologies. Practice shows that this scheme has the advantages of convenience and real-time operation.
Due to the development of 5G communication, many aspects of information technology (IT) services are changing. With the development of communication technologies such as 5G, it has become possible to provide IT services that were difficult to provide in the past. One of the services made possible through this change is cloud-based collaboration. To support secure collaboration over the cloud, encryption technology that can securely manage dynamic data is essential. However, since existing encryption technology is not suitable for encrypting dynamic data, a new technology that can encrypt dynamic data is required for secure cloud-based collaboration. In this paper, we propose a new encryption technology to support secure collaboration on dynamic data in the cloud. Specifically, we propose an encryption mode of operation that supports data updates such as modification, addition, and deletion of encrypted data while it remains encrypted. To support dynamic updates of encrypted data, we invent a new mode of operation named linked-block cipher (LBC). The basic idea of our work is to use an updatable random value, a so-called link, to link two encrypted blocks. Thanks to the updatable random link values, we can modify, insert, and delete encrypted data without decrypting it.
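The following is our speculative reading of the linked-block idea, not the paper's specification: blocks are encrypted independently and bound to random link values (here via AES-GCM associated data), so inserting a block mints one fresh link and seals one new ciphertext while neighbouring ciphertexts stay untouched.

```python
# Speculative LBC-style sketch: random "link" values chain independently encrypted
# blocks, so structural edits rewrite links, not neighbouring ciphertexts.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def seal(aead, plain, link):
    nonce = os.urandom(12)
    # Bind the block to its link via associated data so links cannot be reshuffled.
    return {"nonce": nonce, "link": link, "ct": aead.encrypt(nonce, plain, link)}

key = AESGCM.generate_key(bit_length=128)
aead = AESGCM(key)
blocks = [seal(aead, p, os.urandom(16)) for p in (b"aa", b"bb", b"cc")]

# Insert between positions 0 and 1: one fresh link, one new ciphertext;
# the existing encrypted blocks are never decrypted or rewritten.
blocks.insert(1, seal(aead, b"xx", os.urandom(16)))
```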
Funding (bundle-unit encryption): supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (RS-2024-00399401, Development of Quantum-Safe Infrastructure Migration and Quantum Security Verification Technologies).
Funding (BDTMCDIncreUpdate): sponsored by the National Natural Science Foundation of China (Nos. 61972208, 62102194, and 62102196); the National Natural Science Foundation of China (Youth Project) (No. 62302237); the Six Talent Peaks Project of Jiangsu Province (No. RJFW-111); the China Postdoctoral Science Foundation (No. 2018M640509); the Postgraduate Research and Practice Innovation Program of Jiangsu Province (Nos. KYCX22_1019, KYCX23_1087, KYCX22_1027, SJCX24_0339, and SJCX24_0346); and the Innovative Training Program for College Students of Nanjing University of Posts and Telecommunications (Nos. XZD2019116, XYB2019331).
Funding (RMOM): under the auspices of the National High Technology Research and Development Program of China (No. 2007AA12Z242).
Funding (workflow invalidation strategy): National Basic Research Program of China (973 Program) (2005CD312904).
Funding (fingerprint database updating): sponsored by the National Natural Science Foundation of China (Grant Nos. 61771083, 61704015); the Program for Changjiang Scholars and Innovative Research Team in University (Grant No. IRT1299); the Special Fund of Chongqing Key Laboratory (CSTC); the Fundamental Science and Frontier Technology Research Project of Chongqing (Grant Nos. cstc2017jcyjAX0380, cstc2015jcyjBX0065); the Scientific and Technological Research Foundation of Chongqing Municipal Education Commission (Grant No. KJ1704083); and the University Outstanding Achievement Transformation Project of Chongqing (Grant No. KJZH17117).
Funding (IAU-based 4D-Var): supported by NOAA's Hurricane Forecast Improvement Project.
Funding (DXR-DU): supported by the Major Special Project of the Sichuan Science and Technology Department (2020YFG0460) and the Central University Project of China (ZYGX2020ZB020, ZYGX2020ZB019).
Funding (HSCU): National Natural Science Foundation of China (61701372); Talents Special Foundation of Northwest A&F University (Z111021801).
Funding (EnKF-PDA): supported by the National Natural Science Foundation of China (No. 61300214); the Science and Technology Innovation Team Support Plan of the Education Department of Henan Province (No. 13IRTSTHN021); the Natural Science Foundation of Henan Province (No. 132300410148); the Science and Technology Research Key Project of the Education Department of Henan Province (No. 13A413066); the China Postdoctoral Science Foundation (No. 2014M551999); the Funding Scheme for Young Key Teachers of Henan Province Universities (No. 2013GGJS-026); the Postdoctoral Fund of Henan Province (No. 2013029); and the Outstanding Young Cultivation Foundation of Henan University (No. 0000A40366).
Funding (LBC): this work was partly supported by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2021-0-00779, Development of high-speed encryption data processing technology that guarantees privacy based on hardware, 50%) and by the National R&D Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (NRF-2021R1F1A1056115, 50%).