A DMVOCC-MVDA (distributed multiversion optimistic concurrency control with multiversion dynamic adjustment) protocol was presented to process mobile distributed real-time transactions in mobile broadcast environments. At the mobile hosts, all transactions perform local pre-validation. The local pre-validation process is carried out against the transactions committed at the server in the last broadcast cycle. Transactions that survive local pre-validation must be submitted to the server for final validation. The new protocol eliminates conflicts between mobile read-only and mobile update transactions, and resolves data conflicts flexibly by using multiversion dynamic adjustment of the serialization order to avoid unnecessary transaction restarts. Mobile read-only transactions can be committed without blocking, and the response time of mobile read-only transactions is greatly shortened. The tolerance of mobile transactions to disconnections from the broadcast channel is increased. In global validation, mobile distributed transactions are checked to ensure distributed serializability across all participants. The simulation results show that the proposed concurrency control protocol offers better performance than other protocols in terms of miss rate, restart rate, and commit rate. Under a high workload (think time of 1 s), the miss rate of DMVOCC-MVDA is only 14.6%, significantly lower than that of other protocols. The restart rate of DMVOCC-MVDA is only 32.3%, showing that DMVOCC-MVDA can effectively reduce the restart rate of mobile transactions. The commit rate of DMVOCC-MVDA reaches 61.2%, which is clearly higher than that of other protocols.
Funding: Project (20030533011) supported by the National Research Foundation for the Doctoral Program of Higher Education of China.
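As a rough illustration of the local pre-validation step described above, the sketch below checks a mobile transaction's read set against the versions committed in the last broadcast cycle; the function name, data shapes, and version-comparison rule are assumptions made for the example and do not reproduce the exact DMVOCC-MVDA validation rules.

```python
def prevalidate(read_versions, committed_writes):
    """read_versions: {item: version read by the mobile transaction}
    committed_writes: {item: latest version committed during the last broadcast cycle}
    Returns True if the transaction may be submitted to the server for final validation."""
    for item, version in read_versions.items():
        latest = committed_writes.get(item)
        if latest is not None and latest > version:
            return False  # stale read detected: restart locally instead of contacting the server
    return True

# The transaction read x at version 3, but version 4 was broadcast as committed.
print(prevalidate({"x": 3, "y": 7}, {"x": 4}))  # False -> local restart
print(prevalidate({"y": 7}, {"x": 4}))          # True  -> submit for final validation
```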
In the evolving landscape of software engineering, Microservice Architecture (MSA) has emerged as a transformative approach, facilitating enhanced scalability, agility, and independent service deployment. This systematic literature review (SLR) explores the current state of distributed transaction management within MSA, focusing on the unique challenges, strategies, and technologies utilized in this domain. By synthesizing findings from 16 primary studies selected based on rigorous criteria, the review identifies key trends and best practices for maintaining data consistency and integrity across microservices. This SLR provides a comprehensive understanding of the complexities associated with distributed transactions in MSA, offering actionable insights and potential research directions for software architects, developers, and researchers.
A new scheduling algorithm called deferrable scheduling with time slice exchange (DS-EXC) was proposed to maintain the temporal validity of real-time data. In DS-EXC, the time slice exchange method was designed to further defer the release time of transaction instances derived by the deferrable scheduling algorithm (DS-FP). In this way, more CPU time is left for lower-priority transactions and other transactions. In order to minimize the scheduling overhead, an off-line scheme was designed: the schedule for a transaction set is generated off-line until a repeating pattern is found, and then the pattern is used to construct the schedule on-line. The performance of DS-EXC was evaluated by sets of experiments. The results show that DS-EXC outperforms DS-FP in terms of schedulable ratio. It also provides better performance under mixed workloads.
Funding: Project (60873030) supported by the National Natural Science Foundation of China.
Most of the proposed concurrency control protocols for real-time database systems are based on the serializability theorem. Owing to the unique characteristics of real-time database applications and the importance of satisfying the timing constraints of transactions, serializability is too strong as a correctness criterion and not suitable for real-time databases in most cases. On the other hand, relaxed serializability, including epsilon serializability and similarity serializability, can allow more real-time transactions to satisfy their timing constraints, but database consistency may be sacrificed to some extent. We thus propose the use of weak serializability (WSR), which is more relaxed than conflict serializability while database consistency is maintained. In this paper, we first formally define the new notion of correctness called weak serializability. After the necessary and sufficient conditions for weak serializability are shown, the corresponding concurrency control protocol WDHP (weak serializable distributed high priority protocol) is outlined for distributed real-time databases, where a new lock mode called the mask lock mode is proposed for simplifying the condition of global consistency. Finally, through a series of simulation studies, it is shown that using the new concurrency control protocol the performance of distributed real-time databases can be greatly improved.
Glacier disasters occur frequently in alpine regions around the world, but the current conventional geological disaster measurement technology cannot be directly used for glacier disaster measurement. Hence, in this study, a distributed multi-sensor measurement system for glacier deformation was established by integrating piezoelectric sensing, coded sensing, attitude sensing technology and wireless communication technology. The traditional Modbus protocol was optimized to solve the problem of data identification confusion among different acquisition nodes. Through indoor wireless transmission, adaptive performance analysis, error measurement experiments and a landslide simulation experiment, the performance of the measurement system was analyzed and evaluated. Using unmanned aerial vehicle technology, the reliability and effectiveness of the measurement system were verified on the site of the Galongla glacier in southeastern Tibet, China. The results show that the mean absolute percentage errors were only 1.13% and 2.09% for the displacement and temperature, respectively. The distributed glacier deformation real-time measurement system provides a new means for assessing the development process of glacier disasters and for disaster prevention and mitigation.
Funding: National Key R&D Program of China (Nos. 2022YFC3003403 and 2018YFC1505203); Key Research and Development Program of Tibet Autonomous Region (XZ202301ZY0039G); Natural Science Foundation of Hebei Province (No. F2021201031); Geological Survey Project of China Geological Survey (No. DD20221747).
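The reported accuracy figures are mean absolute percentage errors (MAPE); the snippet below shows the standard MAPE calculation with invented readings, purely to make the metric concrete (it is not the study's measurement data).

```python
def mape(measured, reference):
    """Mean absolute percentage error, in percent."""
    assert len(measured) == len(reference) and len(reference) > 0
    return 100.0 * sum(abs(m - r) / abs(r) for m, r in zip(measured, reference)) / len(reference)

# Hypothetical displacement readings (mm) from a sensor node vs. a reference instrument.
sensor = [10.1, 20.3, 29.6, 40.5]
truth  = [10.0, 20.0, 30.0, 40.0]
print(f"MAPE = {mape(sensor, truth):.2f}%")
```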
Recovery performance in the event of failures is very important for distributed real-time database systems. This paper presents a time-cognizant logging-based crash recovery scheme (TCLCRS) that aims at distributed real-time databases and adopts a main-memory database as its ground support. In our scheme, each site maintains a real-time log for local transactions and the subtransactions that execute at the site, and executes local checkpointing independently. Log records are stored in non-volatile high-speed storage, which is divided into four different partitions based on transaction classes. During restart recovery after a site crash, a partitioned crash recovery strategy is adopted to ensure that the site can be brought up before the entire local secondary database is reloaded into main memory. The partitioned crash recovery strategy not only guarantees that internal consistency is recovered, but also guarantees temporal consistency and the recovery of the states of the physical world influenced by uncommitted transactions. Combined with the two-phase commit protocol, TCLCRS can guarantee failure atomicity of distributed real-time transactions.
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 60203017) and the Defense Pre-research Project of the "Tenth Five-Year Plan" of China (Grant No. 413150403).
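To make the class-partitioned log layout concrete, here is a minimal sketch of a log store with one partition per transaction class, where each partition can be replayed independently at restart; the four class names and the record format are assumptions for the example, not the partitioning actually used by TCLCRS.

```python
PARTITIONS = ("hard_deadline", "firm_deadline", "soft_deadline", "non_real_time")

class PartitionedLog:
    def __init__(self):
        self.partitions = {p: [] for p in PARTITIONS}

    def append(self, txn_class, record):
        if txn_class not in self.partitions:
            raise ValueError(f"unknown transaction class: {txn_class}")
        self.partitions[txn_class].append(record)

    def recover(self, txn_class):
        # During restart, one partition can be replayed on its own, so the site
        # can come up before the whole database is reloaded into memory.
        return list(self.partitions[txn_class])

log = PartitionedLog()
log.append("firm_deadline", {"txn": 42, "op": "write", "item": "x", "value": 7})
print(log.recover("firm_deadline"))
```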
Distributed speech recognition (DSR) applications have certain QoS (quality of service) requirements in terms of latency, packet loss rate, etc. To deliver quality-guaranteed DSR applications over wireline or wireless links, some QoS mechanisms should be provided. We put forward an RTP/RSVP transmission scheme with a DSR-specific payload and QoS parameters by modifying the present WAP protocol stack. The simulation results show that this scheme provides adequate network bandwidth to keep the real-time transport of DSR data over either wireline or wireless channels.
This paper formally defines and analyses the new notion of correctness called quasi serializability, and then outlines the corresponding concurrency control protocol QDHP for distributed real-time databases. Finally, through a series of simulation studies, it shows that using the new concurrency control protocol the performance of distributed real-time databases can be much improved.
Funding: Supported by the National Natural Science Foundation of China and the Commission of Science, Technology and Industry for National Defense.
A Real-Time Transaction Processing System (RTTPS) is a type of e-government system that processes documents using electronic communication technology. In this time of pandemic, the study responds to the need to perform more processing online and less face-to-face. For information retrieval, a comparison between Porter's stemming algorithm and the approach of this study was performed. The study aims to design a database that will serve as a repository of information for retrieval, and to examine the efficacy of the real-time process in securing government requirements using the Technology Acceptance Model. The respondents of this study perceived the system as easy to use and useful when securing the community tax certificate.
Harvesting energy for execution from the environment (e.g., solar or wind energy) has recently emerged as a feasible solution for low-cost and low-power distributed systems. When the real-time responsiveness of a given application has to be guaranteed, the recharge rate of obtaining energy inevitably affects task scheduling. This paper extends our previous works in [1] [2] to explore the real-time task assignment problem on an energy-harvesting distributed system. A solution using Ant Colony Optimization (ACO) and several significant improvements are presented. Simulations compare the performance of the approaches and demonstrate the solutions' effectiveness and efficiency.
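For readers unfamiliar with ACO, the toy sketch below assigns a handful of tasks to nodes using pheromone-guided roulette selection and a simple makespan objective; the task costs, parameters, and objective are invented for illustration and omit the energy-harvesting constraints the paper actually models.

```python
import random

random.seed(1)
task_cost = [4, 2, 7, 3, 5, 1]      # hypothetical execution costs per task
num_nodes = 3
ants, iterations, evaporation = 10, 30, 0.5
pheromone = [[1.0] * num_nodes for _ in task_cost]

def build_assignment():
    """One ant builds a task-to-node assignment by pheromone-weighted roulette selection."""
    assign = []
    for t in range(len(task_cost)):
        weights = pheromone[t]
        total = sum(weights)
        r, acc = random.uniform(0, total), 0.0
        for n, w in enumerate(weights):
            acc += w
            if r <= acc:
                assign.append(n)
                break
    return assign

def makespan(assign):
    load = [0] * num_nodes
    for t, n in enumerate(assign):
        load[n] += task_cost[t]
    return max(load)

best, best_cost = None, float("inf")
for _ in range(iterations):
    for s in (build_assignment() for _ in range(ants)):
        c = makespan(s)
        if c < best_cost:
            best, best_cost = s, c
    # evaporate, then reinforce the best-so-far assignment
    for t in range(len(task_cost)):
        for n in range(num_nodes):
            pheromone[t][n] *= (1.0 - evaporation)
    for t, n in enumerate(best):
        pheromone[t][n] += 1.0 / best_cost

print("best assignment:", best, "makespan:", best_cost)
```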
Blockchain can realize the reliable storage of a large amount of data that is chronologically related and verifiable within the system. This technology has been widely used and has developed rapidly in big data systems across various fields. An increasing number of users are participating in application systems that use blockchain as their underlying architecture. As the number of transactions and the capital involved in blockchain grow, ensuring information security becomes imperative. Addressing the verification of transactional information security and privacy has emerged as a critical challenge. Blockchain-based verification methods can effectively eliminate the need for centralized third-party organizations. However, the efficiency of nodes in storing and verifying blockchain data faces unprecedented challenges. To address this issue, this paper introduces an efficient verification scheme for transaction security. Initially, it presents a node evaluation module to estimate the activity level of user nodes participating in transactions, accompanied by a probabilistic analysis for all transactions. Subsequently, this paper optimizes the conventional transaction organization form, introduces a heterogeneous Merkle tree storage structure, and designs algorithms for constructing these heterogeneous trees. Theoretical analyses and simulation experiments conclusively demonstrate the superior performance of this scheme. When verifying the same number of transactions, the heterogeneous Merkle tree transmits less data and is more efficient than traditional methods. The findings indicate that the heterogeneous Merkle tree structure is suitable for various blockchain applications, including the Internet of Things. This scheme can markedly enhance the efficiency of information verification and bolster the security of distributed systems.
Funding: National Natural Science Foundation of China (62072056, 62172058); Researchers Supporting Project Number (RSP2023R102), King Saud University, Riyadh, Saudi Arabia; Hunan Provincial Key Research and Development Program (2022SK2107, 2022GK2019); Natural Science Foundation of Hunan Province (2023JJ30054); Foundation of State Key Laboratory of Public Big Data (PBD2021-15); Young Doctor Innovation Program of Zhejiang Shuren University (2019QC30); Postgraduate Scientific Research Innovation Project of Hunan Province (CX20220940, CX20220941).
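For context, the sketch below builds a standard Merkle root over a transaction list (Bitcoin-style duplication of the last leaf on odd levels); the paper's heterogeneous Merkle tree additionally reorganizes the tree according to estimated node activity, which is not reproduced here.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(transactions):
    """Hash the leaves, then pairwise-hash each level until one root remains."""
    level = [h(tx.encode()) for tx in transactions]
    if not level:
        return h(b"")
    while len(level) > 1:
        if len(level) % 2 == 1:          # duplicate the last hash on odd-sized levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

txs = ["A pays B 5", "B pays C 2", "C pays D 1"]
print(merkle_root(txs).hex())
```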
The phase behavior of gas condensate in reservoir formations differs from that in pressure-volume-temperature (PVT) cells because it is influenced by the porous media in the reservoir formations. Sandstone was used as a sample to investigate the influence of porous media on the phase behavior of the gas condensate. The pore structure was first analyzed using computed tomography (CT) scanning, digital core technology, and a pore network model. The sandstone core sample was then saturated with gas condensate for the pressure depletion experiment. After each pressure-depletion state was stable, real-time CT scanning was performed on the sample. The scanning results of the sample were reconstructed into three-dimensional grayscale images, and the gas condensate and condensate liquid were segmented based on gray value discrepancy to dynamically characterize the phase behavior of the gas condensate in porous media. Pore network models of the condensate liquid ganglia under different pressures were built to calculate the characteristic parameters, including the average radius, coordination number, and tortuosity, and to analyze the changing mechanism caused by the phase behavior change of the gas condensate. Four types of condensate liquid (clustered, branched, membranous, and droplet ganglia) were then classified by shape factor and Euler number to investigate their morphological changes dynamically and elaborately. The results show that the dew point pressure of the gas condensate in porous media is 12.7 MPa, which is 0.7 MPa higher than the 12.0 MPa measured in PVT cells. The average radius, volume, and coordination number of the condensate liquid ganglia increased when the system pressure was between the dew point pressure (12.7 MPa) and the pressure for the maximum liquid dropout, Pmax (10.0 MPa), and decreased when it was below Pmax. The volume proportion of clustered ganglia was the highest, followed by branched, membranous, and droplet ganglia. This study provides crucial experimental evidence for the phase behavior changing process of gas condensate in porous media during the depletion production of gas condensate reservoirs.
Funding: National Natural Science Foundation of China (Nos. 52122402, 12172334, 52034010, 52174051); Shandong Provincial Natural Science Foundation (Nos. ZR2021ME029, ZR2022JQ23); Fundamental Research Funds for the Central Universities (No. 22CX01001A-4).
A blockchain-based power transaction method is proposed for the Active Distribution Network (ADN), considering the poor security and high cost of a centralized power trading system. Firstly, the decentralized blockchain structure of the ADN power transaction is built and the transaction information is kept in blocks. Secondly, considering the transaction needs between users and power suppliers in the ADN, an energy request mechanism is proposed, and the optimization objective function is designed by integrating cost-aware requests and storage-aware requests. Finally, the particle swarm optimization algorithm is used for a multi-objective optimal search to find the power trading scheme with the minimum power purchase cost for users and the maximum power sold by power suppliers. The experimental demonstration of the proposed method on the experimental platform shows that when the number of participants is no more than 10, the transaction delay time is 0.2 s and the transaction cost fluctuates around 200,000 yuan, which is better than other comparison methods.
Funding: Supported by the Postdoctoral Research Funding Program of Jiangsu Province under Grant 2021K622C.
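The sketch below is a bare-bones particle swarm optimization run on a toy single-objective stand-in for the trading problem (hypothetical supplier prices, a demand-matching penalty, and standard PSO parameters); the paper's actual multi-objective formulation with cost-aware and storage-aware requests is not reproduced.

```python
import random

random.seed(0)
prices = [0.42, 0.38, 0.45]   # yuan/kWh offered by three suppliers (made up)
demand = 100.0                # kWh the user must procure

def cost(x):
    # purchase cost plus a penalty for not matching the demand exactly
    return sum(p * q for p, q in zip(prices, x)) + 10.0 * abs(sum(x) - demand)

dim, swarm_size, iters = len(prices), 20, 100
w, c1, c2 = 0.7, 1.5, 1.5
pos = [[random.uniform(0, demand) for _ in range(dim)] for _ in range(swarm_size)]
vel = [[0.0] * dim for _ in range(swarm_size)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=cost)

for _ in range(iters):
    for i in range(swarm_size):
        for d in range(dim):
            vel[i][d] = (w * vel[i][d]
                         + c1 * random.random() * (pbest[i][d] - pos[i][d])
                         + c2 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] = min(max(pos[i][d] + vel[i][d], 0.0), demand)  # keep within bounds
        if cost(pos[i]) < cost(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=cost)

print("allocation:", [round(q, 1) for q in gbest], "cost:", round(cost(gbest), 2))
```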
The new reality of smart distribution systems with the use of generation sources of small and medium sizes brings new challenges for the operation of these systems. The complexity and the large number of nodes require the use of methods which can reduce the processing time of algorithms such as power flow, allowing their use in real time. This paper presents a known methodology for calculating the power flow in three phases using the backward/forward sweep method, also considering other network elements such as voltage regulators, shunt capacitors and sources of dispersed generation of types PV (active power and voltage) and PQ (active and reactive power). After that, new elements are introduced that allow the parallelization of this algorithm and an adequate distribution of work between the available processors. The algorithm was implemented using a multi-tiered architecture; the processing times were measured in many network configurations and compared with the same algorithm in the serial version.
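As a point of reference for the sweep structure, here is a minimal single-phase backward/forward sweep on a tiny radial feeder with invented per-unit values; the paper's implementation is three-phase, models regulators, capacitors and PV/PQ sources, and is parallelized, none of which appears in this sketch.

```python
import cmath

parent = {1: 0, 2: 1, 3: 1}                                  # radial topology rooted at slack bus 0
z = {1: 0.01 + 0.03j, 2: 0.02 + 0.04j, 3: 0.015 + 0.035j}    # branch impedances (pu)
s_load = {1: 0.0, 2: 0.8 + 0.3j, 3: 0.5 + 0.2j}              # complex constant-power loads (pu)
v = {0: 1.0 + 0j, 1: 1.0 + 0j, 2: 1.0 + 0j, 3: 1.0 + 0j}     # flat start

for _ in range(20):  # fixed number of sweeps; a convergence tolerance is used in practice
    # backward sweep: branch current into node k = its load current + currents of its children
    i_branch = {}
    for k in sorted(parent, reverse=True):                    # children are visited before parents
        i_branch[k] = (s_load[k] / v[k]).conjugate()          # I = (S / V)*
        for child, p in parent.items():
            if p == k:
                i_branch[k] += i_branch[child]
    # forward sweep: propagate voltage drops from the slack bus outward
    for k in sorted(parent):
        v[k] = v[parent[k]] - z[k] * i_branch[k]

for k in sorted(v):
    print(f"bus {k}: |V| = {abs(v[k]):.4f} pu, angle = {cmath.phase(v[k]):.4f} rad")
```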
As typical peer-to-peer distributed networks, blockchain systems require each node to copy a complete transaction database, so as to ensure new transactions can be verified independently. In a blockchain system (e.g., the Bitcoin system), a node does not rely on any central organization, and every node keeps an entire copy of the transaction database. However, this feature determines that the size of the blockchain transaction database grows rapidly. Therefore, with continuous system operation, the node memory also needs to be expanded to support the system running. Especially in the big data era, the increasing network traffic will lead to a faster transaction growth rate. This paper analyzes blockchain transaction databases and proposes a storage optimization scheme. The proposed scheme divides the blockchain transaction database into a cold zone and a hot zone using an expiration recognition method based on the Least Recently Used (LRU) algorithm. It achieves storage optimization by moving unspent transaction outputs outside the in-memory transaction databases. We present a theoretical analysis of the optimization method to validate its effectiveness. Extensive experiments show our proposed method outperforms the current mechanism for blockchain transaction databases.
Funding: Researchers Supporting Project (No. RSP-2020/102), King Saud University, Riyadh, Saudi Arabia; National Natural Science Foundation of China (Nos. 61802031, 61772454, 61811530332, 61811540410); Natural Science Foundation of Hunan Province, China (No. 2019JGYB177); Research Foundation of Education Bureau of Hunan Province, China (No. 18C0216); "Practical Innovation and Entrepreneurial Ability Improvement Plan" for Professional Degree Graduate Students of Changsha University of Science and Technology (No. SJCX201971); Hunan Graduate Scientific Research Innovation Project, China (No. CX2019694); Programs of Transformation and Upgrading of Industries and Information Technologies of Jiangsu Province (No. JITC-1900AX2038/01).
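A minimal sketch of the hot/cold split idea follows: unspent transaction outputs live in an in-memory LRU-ordered hot zone and are demoted to a cold zone when not referenced recently. The class name, capacity threshold, and promotion-on-access policy are assumptions for illustration, not the paper's expiration recognition method.

```python
from collections import OrderedDict

class HotColdUTXOStore:
    def __init__(self, hot_capacity=3):
        self.hot = OrderedDict()   # most recently referenced UTXOs stay in memory
        self.cold = {}             # stands in for slower, out-of-memory storage
        self.hot_capacity = hot_capacity

    def put(self, utxo_id, value):
        self.hot[utxo_id] = value
        self.hot.move_to_end(utxo_id)
        while len(self.hot) > self.hot_capacity:
            old_id, old_value = self.hot.popitem(last=False)   # evict least recently used
            self.cold[old_id] = old_value

    def get(self, utxo_id):
        if utxo_id in self.hot:
            self.hot.move_to_end(utxo_id)
            return self.hot[utxo_id]
        value = self.cold.pop(utxo_id)        # cold hit: promote back into the hot zone
        self.put(utxo_id, value)
        return value

store = HotColdUTXOStore(hot_capacity=2)
for i in range(4):
    store.put(f"tx{i}:0", 50 + i)
print(sorted(store.hot), sorted(store.cold))   # the two hottest stay in memory, the rest go cold
```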
The coordinated optimization problem of the electricity-gas-heat integrated energy system (IES) has the characteristics of strong coupling, non-convexity, and nonlinearity. The centralized optimization method has a high cost of communication and complex modeling. Meanwhile, the traditional numerical iterative solution cannot deal with uncertainty and solution efficiency, which makes it difficult to apply online. For the coordinated optimization problem of the electricity-gas-heat IES in this study, we constructed a model for the distributed IES with a dynamic distribution factor and transformed the centralized optimization problem into a distributed optimization problem in a multi-agent reinforcement learning environment using multi-agent deep deterministic policy gradient. Introducing the dynamic distribution factor allows the system to consider the impact of changes in real-time supply and demand on system optimization, dynamically coordinating different energy sources for complementary utilization and effectively improving the system economy. Compared with centralized optimization, the distributed model with multiple decision centers can achieve similar results while easing the pressure on system communication. The proposed method considers the dual uncertainty of renewable energy and load in the training. Compared with the traditional iterative solution method, it can better cope with uncertainty and realize real-time decision making of the system, which is conducive to online application. Finally, we verify the effectiveness of the proposed method using an example of an IES coupled with three energy hub agents.
Funding: Supported by the National Key R&D Program of China (2020YFB0905900): Research on artificial intelligence application of power internet of things.
In distributed storage systems, file access efficiency has an important impact on the real-time nature of information forensics. As a popular approach to improving file access efficiency, a prefetching model fetches data before it is needed according to the file access pattern, which can reduce I/O waiting time and increase system concurrency. However, a prefetching model needs to mine the degree of association between files to ensure the accuracy of prefetching. In the massive-small-file situation, the sheer volume of files poses a challenge to the efficiency and accuracy of relevance mining. In this paper, we propose a massive-files prefetching model based on an LSTM neural network with a cache transaction strategy to improve file access efficiency. Firstly, we propose a file clustering algorithm based on temporal locality and spatial locality to reduce the computational complexity. Secondly, we propose a definition of cache transaction according to file occurrence in the cache, instead of time-offset-distance-based methods, to extract file block features accurately. Lastly, we propose a file access prediction algorithm based on an LSTM neural network, which predicts the files that have a high possibility of being accessed. Experiments show that, compared with the traditional LRU and plain grouping methods, the proposed model notably increases the cache hit rate and effectively reduces I/O wait time.
Funding: Supported by the Fundamental Research Funds for the Central Universities (Grant No. HIT.NSRIF.201714), the Weihai Science and Technology Development Program (2016DXGJMS15), and the Key Research and Development Program of Shandong Province (2017GGX90103).
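To make the locality-based clustering step concrete, the sketch below groups file accesses that fall in the same time window (temporal locality) and the same directory (spatial locality); the trace, window size, and grouping key are invented for illustration and do not reproduce the paper's clustering algorithm or its LSTM predictor.

```python
from collections import defaultdict
import os

trace = [  # (timestamp in seconds, path) -- hypothetical access trace
    (0.0, "/data/a/1.dat"), (0.2, "/data/a/2.dat"), (0.3, "/data/b/7.dat"),
    (5.1, "/data/a/3.dat"), (5.2, "/data/a/4.dat"),
]

def cluster(trace, window=1.0):
    groups = defaultdict(list)
    for ts, path in trace:
        bucket = int(ts // window)                              # temporal locality: same time window
        groups[(bucket, os.path.dirname(path))].append(path)    # spatial locality: same directory
    return groups

for key, files in cluster(trace).items():
    print(key, files)
```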
The ineffective utilization of power resources has attracted much attention in recent years. This paper proposes a real-time distributed load scheduling algorithm considering constraints of power supply. Firstly, an objective function is designed based on the constraint, and a base load forecasting model is established when aggregating renewable generation and non-deferrable load into a power system, which aims to transform the problem of deferrable load scheduling into a distributed optimal control problem. Then, to optimize the objective function, a real-time scheduling algorithm is presented to solve the proposed control problem. At every time step, the purpose is to minimize the variance of the differences between power supply and aggregate load, which ensures the effective utilization of power resources. Finally, simulation examples are provided to illustrate the effectiveness of the proposed algorithm.
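The variance-minimizing goal can be illustrated with a simple greedy "valley-filling" allocation that spreads a deferrable load into the slots with the most supply headroom; the profile numbers are made up, and this offline sketch does not reproduce the paper's distributed real-time algorithm.

```python
supply    = [8.0, 8.0, 8.0, 8.0]      # available supply per slot (kW)
base_load = [6.0, 3.0, 5.0, 2.0]      # non-deferrable load per slot (kW)
deferrable_energy = 6.0               # total deferrable demand to place (kWh over unit slots)

alloc = [0.0] * len(supply)
step = 0.1
remaining = deferrable_energy
while remaining > 1e-9:
    # put the next increment into the slot with the most remaining headroom
    headroom = [s - b - a for s, b, a in zip(supply, base_load, alloc)]
    i = max(range(len(headroom)), key=lambda k: headroom[k])
    alloc[i] += step
    remaining -= step

diff = [s - b - a for s, b, a in zip(supply, base_load, alloc)]
mean = sum(diff) / len(diff)
variance = sum((d - mean) ** 2 for d in diff) / len(diff)
print("allocation per slot:", [round(a, 1) for a in alloc])
print("variance of supply-load gap:", round(variance, 4))
```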