Funding: This work was funded by the National Natural Science Foundation of China (62072056, 62172058); the Researchers Supporting Project Number (RSP2023R102), King Saud University, Riyadh, Saudi Arabia; the Hunan Provincial Key Research and Development Program (2022SK2107, 2022GK2019); the Natural Science Foundation of Hunan Province (2023JJ30054); the Foundation of State Key Laboratory of Public Big Data (PBD2021-15); the Young Doctor Innovation Program of Zhejiang Shuren University (2019QC30); and the Postgraduate Scientific Research Innovation Project of Hunan Province (CX20220940, CX20220941).
Abstract: Blockchain can realize the reliable storage of a large amount of data that is chronologically related and verifiable within the system. This technology has been widely used and has developed rapidly in big data systems across various fields. An increasing number of users are participating in application systems that use blockchain as their underlying architecture. As the number of transactions and the capital involved in blockchain grow, ensuring information security becomes imperative. Addressing the verification of transactional information security and privacy has emerged as a critical challenge. Blockchain-based verification methods can effectively eliminate the need for centralized third-party organizations. However, the efficiency of nodes in storing and verifying blockchain data faces unprecedented challenges. To address this issue, this paper introduces an efficient verification scheme for transaction security. Initially, it presents a node evaluation module to estimate the activity level of user nodes participating in transactions, accompanied by a probabilistic analysis for all transactions. Subsequently, this paper optimizes the conventional transaction organization form, introduces a heterogeneous Merkle tree storage structure, and designs algorithms for constructing these heterogeneous trees. Theoretical analyses and simulation experiments conclusively demonstrate the superior performance of this scheme. When verifying the same number of transactions, the heterogeneous Merkle tree transmits less data and is more efficient than traditional methods. The findings indicate that the heterogeneous Merkle tree structure is suitable for various blockchain applications, including the Internet of Things. This scheme can markedly enhance the efficiency of information verification and bolster the security of distributed systems.
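For reference, the traditional baseline this paper improves on can be sketched as a standard (homogeneous) Merkle inclusion proof; the heterogeneous tree reshapes this structure around node activity, which is not reproduced here. A minimal sketch, assuming SHA-256 and a power-of-two leaf count:

import hashlib

def h(b):  # SHA-256 of raw bytes
    return hashlib.sha256(b).digest()

def merkle_root_and_proof(leaves, index):
    # Returns the root and the sibling path proving leaves[index].
    level = [h(x) for x in leaves]
    proof = []
    while len(level) > 1:
        proof.append(level[index ^ 1])  # the paired node at this level
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return level[0], proof

def verify(root, leaf, index, proof):
    node = h(leaf)
    for sibling in proof:
        # Order the pair by the bit of the index at this level.
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

txs = [f"tx{i}".encode() for i in range(8)]
root, proof = merkle_root_and_proof(txs, 5)
print(verify(root, txs[5], 5, proof))  # True; proof size is log2(n) hashes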
Funding: The National Natural Science Foundation of China (61273198).
Abstract: Recently, the ontological metamodel has played an increasingly important role in specifying systems in two forms: ontology and metamodel. An ontology is a descriptive model representing reality by a set of concepts, their interrelations, and constraints. A metamodel, on the other hand, is a more classical but more powerful model in which concepts and relationships are represented in a prescriptive way. This study first clarifies the difference between the two approaches, then explains their advantages and limitations, and attempts to explore a general ontological metamodeling framework that integrates the characteristics of each, in order to implement semantic simulation model engineering. As a proof of concept, this paper takes combat effectiveness simulation systems as a motivating case, uses the proposed framework to define a set of ontological composable modeling frameworks, and presents an underwater target search scenario for running simulations and analyzing results. Finally, this paper expects that the framework will also prove useful in other fields.
Abstract: A beamforming algorithm is introduced based on a general objective function that approximates the bit error rate for wireless systems with binary phase shift keying and quadrature phase shift keying modulation schemes. The proposed minimum approximate bit error rate (ABER) beamforming approach does not rely on the Gaussian assumption of the channel noise, so it remains applicable when the channel noise is non-Gaussian. The simulation results show that the proposed minimum ABER solution improves on the standard minimum mean square error beamforming solution in terms of a smaller achievable system bit error rate.
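To make the objective concrete, the sketch below minimizes a smoothed BER surrogate by gradient descent for a real-valued BPSK toy model. It is an illustrative simplification under assumed data (steering vector, Student-t noise, kernel width rho), not the paper's exact derivation:

import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(0)
K, M = 500, 4                      # training symbols, array elements
b = rng.choice([-1.0, 1.0], K)     # known BPSK bits
A = rng.normal(size=M)             # assumed steering vector of the desired user
X = np.outer(b, A) + 0.5 * rng.standard_t(3, size=(K, M))  # non-Gaussian noise
rho = 0.5                          # kernel width of the smoothed BER estimate

w = A / np.linalg.norm(A)          # start from the matched filter
for _ in range(200):
    u = b * (X @ w) / rho
    # ABER surrogate: mean Q(u); its gradient uses Q'(u) = -phi(u)
    grad = -(np.exp(-u**2 / 2) / np.sqrt(2 * np.pi) * b / rho) @ X / K
    w -= 0.1 * grad
aber = 0.5 * erfc((b * (X @ w) / rho) / np.sqrt(2)).mean()
print("approximate BER:", aber)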
Abstract: Optical methods such as Twyman-Green interferometry, moiré interferometry, holographic interferometry and speckle interferometry are useful for measuring displacement and strain over the full field of structures. Recently, phase analysis of the fringe patterns obtained by these optical methods has become popular, and it provides accurate quantitative results over the full field. In this paper, real-time and high-speed nanometer displacement measurement methods developed by the authors for measuring displacement and strain are introduced: (1) out-of-plane displacement analysis by Twyman-Green interferometry with an integrated phase-shifting method based on the Fourier transform phase-shifting method, (2) simultaneous two-dimensional in-plane displacement analysis by moiré interferometry, and (3) out-of-plane displacement analysis by phase-shifting digital holographic interferometry. The theories and applications are shown.
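Although the authors' integrated Fourier-transform phase-shifting method is more elaborate, the core phase extraction can be sketched with the classical four-step algorithm. A minimal illustration, assuming four pi/2-shifted intensity frames from a Twyman-Green setup and a He-Ne wavelength:

import numpy as np

lam = 632.8e-9                      # He-Ne wavelength in metres (assumed)
x = np.linspace(0, 1, 256)
d_true = 50e-9 * np.sin(2 * np.pi * x)   # synthetic out-of-plane displacement

# Twyman-Green: displacement d adds phase 4*pi*d/lambda (double pass)
phi = 4 * np.pi * d_true / lam
I = [1 + np.cos(phi + k * np.pi / 2) for k in range(4)]  # four shifted frames

phi_w = np.arctan2(I[3] - I[1], I[0] - I[2])  # wrapped phase from 4 frames
phi_u = np.unwrap(phi_w)
d_est = phi_u * lam / (4 * np.pi)
print("max error (nm):", np.max(np.abs(d_est - d_true)) * 1e9)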
Abstract: To achieve CO2 emissions reductions, the UK Building Regulations require developers of new residential buildings to calculate expected CO2 emissions arising from their energy consumption using a methodology such as the Standard Assessment Procedure (SAP 2005) or, more recently, SAP 2009. SAP encompasses all domestic heat consumption and a limited proportion of the electricity consumption. However, these calculations are rarely verified against real energy consumption and the related CO2 emissions. This work presents the results of an analysis based on weekly heat demand data for more than 200 individual flats. The data were collected from a recently built residential development connected to a district heating network. A method for separating out the domestic hot water (DHW) use and space heating (SH) demand has been developed, and these values are compared to the demand calculated using the SAP 2005 and SAP 2009 methodologies. The analysis also shows the variation in DHW and SH consumption with the size of flats and with tenure (privately owned or social housing). Evaluation of the space heating consumption also includes an estimate of the heating degree day (HDD) base temperature for each block of flats and compares this to the average base temperature calculated using the SAP 2005 methodology.
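The base-temperature estimate can be illustrated by scanning candidate base temperatures, regressing weekly heat demand on heating degree days, and keeping the best fit; the regression intercept then approximates the weather-independent DHW baseline. A hedged sketch on synthetic data (the study itself used metered weekly data):

import numpy as np

rng = np.random.default_rng(1)
T_out = rng.uniform(-2, 22, 52)              # weekly mean outdoor temperatures
base_true, dhw, slope = 15.5, 300.0, 12.0    # assumed ground truth (kWh, kWh/HDD)
hdd = lambda base: np.maximum(base - T_out, 0.0) * 7  # weekly degree days
demand = dhw + slope * hdd(base_true) + rng.normal(0, 40, 52)

best = None
for base in np.arange(10.0, 20.0, 0.1):      # scan candidate base temperatures
    X = np.c_[np.ones(52), hdd(base)]
    coef, res, *_ = np.linalg.lstsq(X, demand, rcond=None)
    sse = res[0] if res.size else np.sum((demand - X @ coef) ** 2)
    if best is None or sse < best[0]:
        best = (sse, base, coef)
print("base temp ~ %.1f C, DHW baseline ~ %.0f kWh/week" % (best[1], best[2][0]))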
Funding: Supported by Generalitat Valenciana with HAAS (CIAICO/2021/039) and by the Spanish Ministry of Science and Innovation under the Project AVANTIA PID2020-114480RB-I00.
Abstract: The development of artificial intelligence (AI) and smart home technologies has driven the need for speech recognition-based solutions. This demand stems from the quest for more intuitive and natural interaction between users and smart devices in their homes. Speech recognition allows users to control devices and perform everyday actions through spoken commands, eliminating the need for physical interfaces or touch screens and enabling specific tasks such as turning on or off the light, heating, or lowering the blinds. The purpose of this study is to develop a speech-based classification model for recognizing human actions in the smart home. It seeks to demonstrate the effectiveness and feasibility of using machine learning techniques in predicting categories, subcategories, and actions from sentences. A dataset labeled with relevant information about categories, subcategories, and actions related to human actions in the smart home is used. The methodology uses machine learning techniques implemented in Python, extracting features using CountVectorizer to convert sentences into numerical representations. The results show that the classification model is able to accurately predict categories, subcategories, and actions based on sentences, with 82.99% accuracy for category, 76.19% accuracy for subcategory, and 90.28% accuracy for action. The study concludes that using machine learning techniques is effective for recognizing and classifying human actions in the smart home, supporting its feasibility in various scenarios and opening new possibilities for advanced natural language processing systems in the field of AI and smart homes.
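The pipeline described can be reproduced in outline with scikit-learn; a minimal sketch in which the sentences and labels are invented placeholders, not the study's corpus:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical (sentence, category) pairs standing in for the labeled dataset.
data = [
    ("turn on the living room light", "lighting"),
    ("switch off the lamp", "lighting"),
    ("raise the blinds", "blinds"),
    ("lower the blinds halfway", "blinds"),
    ("set the heating to 21 degrees", "heating"),
    ("turn the heating off", "heating"),
]
sentences, categories = zip(*data)

# CountVectorizer turns sentences into bag-of-words counts, as in the study;
# one such classifier would be trained per level (category/subcategory/action).
model = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
model.fit(sentences, categories)
print(model.predict(["please turn off the kitchen light"]))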
Abstract: The analysis of the Auditory Brainstem Response (ABR) is of fundamental importance to the investigation of auditory system behavior, though its interpretation has a subjective nature because of the manual process employed in its study and the clinical experience required for its analysis. When analyzing the ABR, clinicians are often interested in the identification of ABR signal components referred to as Jewett waves. In particular, the detection and study of the time when these waves occur (i.e., the wave latency) is a practical tool for the diagnosis of disorders affecting the auditory system. In this context, the aim of this research is to compare the ABR manual/visual analysis provided by different examiners. Methods: The ABR data were collected from 10 normal-hearing subjects (5 men and 5 women, from 20 to 52 years). A total of 160 data samples were analyzed, and a pairwise comparison between four distinct examiners was executed. We carried out a statistical study aiming to identify significant differences between the assessments provided by the examiners, using linear regression in conjunction with the bootstrap as a method for evaluating the relation between the responses given by the examiners. Results: The analysis suggests agreement among examiners; however, it reveals differences between assessments of the variability of the waves. We quantified the magnitude of the obtained wave latency differences: 18% of the investigated waves presented substantial differences (large and moderate), and of these, 3.79% were considered not acceptable for clinical practice. Conclusions: Our results characterize the variability of the manual analysis of ABR data and the necessity of establishing unified standards and protocols for the analysis of these data. These results may also contribute to the validation and development of automatic systems employed in the early diagnosis of hearing loss.
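The statistical comparison can be sketched as follows: regress one examiner's latencies on another's and bootstrap the slope to see whether its confidence interval covers 1 (perfect agreement). A minimal sketch on synthetic latencies, not the study's data:

import numpy as np

rng = np.random.default_rng(2)
lat_a = rng.normal(5.6, 0.25, 160)            # examiner A wave-V latencies (ms)
lat_b = lat_a + rng.normal(0.0, 0.1, 160)     # examiner B, small disagreement

slopes = []
for _ in range(2000):                          # bootstrap resampling of pairs
    idx = rng.integers(0, 160, 160)
    slope, _ = np.polyfit(lat_a[idx], lat_b[idx], 1)
    slopes.append(slope)
lo, hi = np.percentile(slopes, [2.5, 97.5])
print("95%% CI for slope: [%.3f, %.3f]" % (lo, hi))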
Funding: Supported by China's National Natural Science Foundation (Nos. 62072249, 62072056). This work is also funded by the National Science Foundation of Hunan Province (2020JJ2029).
Abstract: With the development of Industry 4.0 and big data technology, the Industrial Internet of Things (IIoT) is hampered by inherent issues such as privacy, security, and fault tolerance, which pose certain challenges to its rapid development. Blockchain technology offers immutability, decentralization, and autonomy, which can greatly remedy the inherent defects of the IIoT. In the traditional blockchain, data is stored in a Merkle tree. As data continues to grow, the scale of the proofs used to validate it grows as well, threatening the efficiency, security, and reliability of blockchain-based IIoT. Accordingly, this paper first analyzes the inefficiency of the traditional blockchain structure in verifying the integrity and correctness of data. To solve this problem, a new Vector Commitment (VC) structure, the Partition Vector Commitment (PVC), is proposed by improving the traditional VC structure. Secondly, this paper uses PVC instead of the Merkle tree to store big data generated by IIoT. PVC can improve the efficiency of traditional VC in the processes of commitment and opening. Finally, this paper uses PVC to build a blockchain-based IIoT data security storage mechanism and carries out a comparative experimental analysis. This mechanism can greatly reduce communication loss and maximize the rational use of storage space, which is of great significance for maintaining the security and stability of blockchain-based IIoT.
Funding: Supported by the National Natural Science Foundation of China (61273198).
Abstract: To reduce complexity, the combat effectiveness simulation system (CESS) is often decomposed into static structure, physical behavior, and cognitive behavior, and model abstraction is layered onto domain invariant knowledge (DIK) and application variant knowledge (AVK) levels. This study concentrates on the specification of CESS's physical behaviors at the DIK level of abstraction, and proposes a model-driven framework for efficiently developing simulation models within model-driven engineering (MDE). Technically, this framework integrates the four-layer metamodeling architecture and a set of model transformation techniques with the objective of reducing model heterogeneity and enhancing model continuity. As a proof of concept, a torpedo example is illustrated to explain how physical models are developed following the proposed framework. Finally, a combat scenario is constructed to demonstrate the availability of the framework, and a further verification is shown by a reasonable agreement between simulation results and field observations.
Funding: Supported by the Researchers Supporting Project (No. RSP-2020/102), King Saud University, Riyadh, Saudi Arabia; the National Natural Science Foundation of China (Nos. 61802031, 61772454, 61811530332, 61811540410); the Natural Science Foundation of Hunan Province, China (No. 2019JGYB177); the Research Foundation of Education Bureau of Hunan Province, China (No. 18C0216); the "Practical Innovation and Entrepreneurial Ability Improvement Plan" for Professional Degree Graduate students of Changsha University of Science and Technology (No. SJCX201971); and the Hunan Graduate Scientific Research Innovation Project, China (No. CX2019694). This work is also supported by the Programs of Transformation and Upgrading of Industries and Information Technologies of Jiangsu Province (No. JITC-1900AX2038/01).
Abstract: As typical peer-to-peer distributed networks, blockchain systems require each node to copy a complete transaction database so that new transactions can be verified independently. In a blockchain system (e.g., the Bitcoin system), a node does not rely on any central organization, and every node keeps an entire copy of the transaction database. However, this feature means that the size of the blockchain transaction database grows rapidly. Therefore, as the system continues to operate, node memory must also be expanded to support system operation. Especially in the big data era, increasing network traffic leads to a faster transaction growth rate. This paper analyzes blockchain transaction databases and proposes a storage optimization scheme. The proposed scheme divides the blockchain transaction database into a cold zone and a hot zone using an expiration recognition method based on the Least Recently Used (LRU) algorithm. It achieves storage optimization by moving unspent transaction outputs outside the in-memory transaction databases. We present a theoretical analysis of the optimization method to validate its effectiveness. Extensive experiments show our proposed method outperforms the current mechanism for blockchain transaction databases.
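The expiration-recognition idea can be sketched with an ordered dictionary acting as the in-memory hot zone: outputs untouched for too long are evicted to a cold store. A minimal sketch of the mechanism (the capacity and the dict standing in for on-disk storage are assumptions):

from collections import OrderedDict

class UTXOStore:
    """Hot zone in memory (LRU order); cold zone modeled as a plain dict."""
    def __init__(self, hot_capacity=4):
        self.hot = OrderedDict()     # most recently used keys at the end
        self.cold = {}               # stand-in for on-disk storage
        self.capacity = hot_capacity

    def get(self, txid):
        if txid in self.hot:
            self.hot.move_to_end(txid)       # refresh recency
            return self.hot[txid]
        value = self.cold.pop(txid)          # cold hit: promote to hot
        self.put(txid, value)
        return value

    def put(self, txid, output):
        self.hot[txid] = output
        self.hot.move_to_end(txid)
        while len(self.hot) > self.capacity: # expire least recently used
            old_key, old_val = self.hot.popitem(last=False)
            self.cold[old_key] = old_val

store = UTXOStore()
for i in range(6):
    store.put(f"tx{i}", {"value": i})
print(sorted(store.hot), sorted(store.cold))  # tx2..tx5 hot, tx0..tx1 cold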
Funding: Supported in part by the National Natural Science Foundation of China under Grant 62272062; the Scientific Research Fund of Hunan Provincial Transportation Department (No. 202143); and the Open Fund of the Key Laboratory of Safety Control of Bridge Engineering, Ministry of Education (Changsha University of Science and Technology), under Grant 21KB07.
Abstract: Log anomaly detection is an important paradigm for system troubleshooting. Existing log anomaly detection based on Long Short-Term Memory (LSTM) networks is time-consuming when handling long sequences. The Transformer model has been introduced to improve efficiency. However, most existing Transformer-based log anomaly detection methods convert unstructured log messages into structured templates by log parsing, which introduces parsing errors. They extract only simple semantic features, ignoring other features, and are generally supervised, relying on large amounts of labeled data. To overcome the limitations of existing methods, this paper proposes a novel unsupervised log anomaly detection method based on multiple features (UMFLog). UMFLog includes two sub-models that consider two kinds of features: semantic features and statistical features, respectively. UMFLog applies the original log content with detailed parameters, instead of templates or template IDs, to avoid log parsing errors. In the first sub-model, UMFLog uses Bidirectional Encoder Representations from Transformers (BERT) instead of random initialization to extract effective semantic features, and an unsupervised hypersphere-based Transformer model to learn compact log sequence representations and obtain anomaly candidates. In the second sub-model, UMFLog exploits a statistical-feature-based Variational Autoencoder (VAE) over word occurrence counts to identify the final anomalies from the anomaly candidates. Extensive experiments and evaluations are conducted on three real public log datasets. The results show that UMFLog significantly improves F1-scores compared to state-of-the-art (SOTA) methods because of the multiple features.
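The hypersphere idea in the first sub-model can be illustrated independently of the Transformer: map sequences to embeddings, fit a center on normal data, and score by distance from that center (Deep SVDD style). A minimal numpy sketch with random stand-in embeddings, not the paper's learned representations:

import numpy as np

rng = np.random.default_rng(3)
normal_emb = rng.normal(0, 1, (1000, 64))        # embeddings of normal logs
test_emb = np.vstack([rng.normal(0, 1, (5, 64)),
                      rng.normal(4, 1, (5, 64))]) # last 5 are anomalous

center = normal_emb.mean(axis=0)                  # hypersphere center c
radius = np.quantile(np.linalg.norm(normal_emb - center, axis=1), 0.99)

scores = np.linalg.norm(test_emb - center, axis=1)
print("anomaly candidates:", np.nonzero(scores > radius)[0])  # indices 5..9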
Funding: Supported by the National Natural Science Foundation of China (61903025) and the Fundamental Research Funds for the Central Universities (FRF-IDRY-20-013).
Abstract: The distributed hybrid processing optimization problem for non-cooperative targets is an important research direction for future networked air-defense and anti-missile firepower systems. In this paper, the air-defense anti-missile target defense problem is abstracted as a nonconvex constrained combinatorial optimization problem whose objective is to maximize the degree of contribution of the processing scheme to non-cooperative targets, with constraints mainly covering geographical conditions and anti-missile equipment resources. The grid discretization concept is used to partition the defense area into network nodes, and the overall defense strategy is described as a nonlinear programming problem that solves for the minimum defense cost within the maximum defense capability of the defense system network. In solving the minimum defense cost problem, the processing scheme, equipment coverage capability, constraints, and node cost requirements are characterized; a nonlinear mathematical model of the non-cooperative target distributed hybrid processing optimization problem is then established; a local optimal solution is constructed based on the sequential quadratic programming algorithm; and the optimal firepower processing scheme is obtained using the sequential quadratic programming method with non-convex quadratic equality and inequality constraints. Finally, the effectiveness of the proposed method is verified by simulation examples.
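The sequential quadratic programming step can be sketched with SciPy's SLSQP solver on a small stand-in problem: minimize a defense cost subject to a quadratic coverage constraint whose feasible region is nonconvex. The objective, constraint matrix, and bounds below are illustrative assumptions, not the paper's model:

import numpy as np
from scipy.optimize import minimize

cost = np.array([3.0, 2.0, 4.0])        # assumed per-node resource costs

def objective(x):                        # total defense cost to minimize
    return cost @ x

def coverage(x):                         # quadratic coverage requirement >= 0
    Q = np.array([[2.0, -1.0, 0.0],
                  [-1.0, 1.5, -0.5],
                  [0.0, -0.5, 1.0]])
    return x @ Q @ x + x.sum() - 1.0

res = minimize(objective, x0=np.ones(3),
               method="SLSQP",
               bounds=[(0.0, 1.0)] * 3,  # node allocations in [0, 1]
               constraints=[{"type": "ineq", "fun": coverage}])
print(res.x, res.fun)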
Funding: This work is supported by the National Natural Science Foundation of China (62102046, 62072056); the Natural Science Foundation of Hunan Province (2022JJ30618, 2020JJ2029); and the Scientific Research Fund of Hunan Provincial Education Department (22B0300).
Abstract: Data security and user privacy have become crucial elements in multi-tenant data centers. The various traffic types in a multi-tenant data center in the cloud environment have their own characteristics and requirements. In the data center network (DCN), short and long flows are sensitive to low latency and high throughput, respectively. Traditional security processing approaches, however, neglect these characteristics and requirements. This paper proposes a fine-grained security enhancement mechanism (SEM) to solve the problem of heterogeneous traffic and to reduce the flow completion time (FCT) of short flows while ensuring the security of multi-tenant traffic transmission. Specifically, for short flows in the DCN, the lightweight GIFT encryption method is utilized. For intra-DCN long flows and inter-DCN traffic, the asymmetric elliptic curve cryptography (ECC) algorithm is utilized. NS-3 simulation results demonstrate that SEM dramatically reduces the FCT of short flows by 70% compared to several conventional encryption techniques, effectively enhancing the security and attack resistance of traffic transmission between DCNs in cloud computing environments. Additionally, SEM performs better than other encryption methods under high load and in large-scale cloud environments.
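The traffic-aware selection can be sketched as a dispatcher that classifies each flow by size and routes it to the appropriate cipher. In the sketch below, ChaCha20-Poly1305 stands in for GIFT (which has no standard Python implementation) and an ephemeral-ECDH-derived AES-GCM key stands in for the ECC path; the threshold is an assumption, so this only illustrates the decision logic:

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

SHORT_FLOW_BYTES = 100 * 1024          # assumed short/long threshold (100 KB)

def encrypt_flow(payload, flow_bytes, sym_key, peer_pub):
    nonce = os.urandom(12)
    if flow_bytes <= SHORT_FLOW_BYTES:
        # Short flow: lightweight symmetric cipher (stand-in for GIFT).
        return nonce + ChaCha20Poly1305(sym_key).encrypt(nonce, payload, None)
    # Long/inter-DCN flow: ECC path via ephemeral ECDH + AES-GCM.
    # (A real scheme would also transmit eph.public_key() to the receiver.)
    eph = ec.generate_private_key(ec.SECP256R1())
    shared = eph.exchange(ec.ECDH(), peer_pub)
    key = HKDF(hashes.SHA256(), 32, None, b"sem-demo").derive(shared)
    return nonce + AESGCM(key).encrypt(nonce, payload, None)

receiver = ec.generate_private_key(ec.SECP256R1())
ct = encrypt_flow(b"hello", 512, os.urandom(32), receiver.public_key())
print(len(ct))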
Funding: Supported by the National Natural Science Foundation of China (Nos. 62102046, 62072249, 62072056); Jin Wang, Yongjun Ren, and Jinbin Hu receive the grant, and the URL to the sponsor's website is https://www.nsfc.gov.cn/. This work is also funded by the National Science Foundation of Hunan Province (Nos. 2022JJ30618, 2020JJ2029).
Abstract: In Ethernet lossless Data Center Networks (DCNs) deployed with Priority-based Flow Control (PFC), the head-of-line blocking problem is still difficult to prevent, because PFC is triggered under burst traffic scenarios even with existing congestion control solutions. To address the head-of-line blocking problem of PFC, we propose a new congestion control mechanism. The key idea of Congestion Control Using In-Network Telemetry for Lossless Datacenters (ICC) is to use In-Network Telemetry (INT) technology to obtain comprehensive congestion information, which is then fed back to the sender to adjust the sending rate in a timely and accurate manner. With ICC, it is possible to control congestion in time, converge to the target rate quickly, and maintain a near-zero queue length at the switch. We conducted Network Simulator-3 (NS-3) simulation experiments to test ICC's performance. When compared to Congestion Control for Large-Scale RDMA Deployments (DCQCN), TIMELY: RTT-based Congestion Control for the Datacenter (TIMELY), and Re-architecting Congestion Management in Lossless Ethernet (PCN), ICC effectively reduces PFC pause messages by 47%, 56%, and 34%, and Flow Completion Time (FCT) by 15.3x, 14.8x, and 11.2x, respectively.
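The feedback loop can be sketched abstractly: the sender receives INT-reported queue depth and link utilization, cuts its rate when a queue is building, and probes additively otherwise. The constants and update rule below are illustrative assumptions, not the ICC specification:

LINK_CAPACITY = 100e9          # 100 Gbps link (assumed)
TARGET_QUEUE = 10 * 1024       # near-zero queue target in bytes (assumed)

def adjust_rate(rate, int_queue_bytes, int_link_utilization):
    """One control step from In-Network Telemetry feedback."""
    if int_queue_bytes > TARGET_QUEUE:
        # Queue building: back off in proportion to the excess.
        excess = (int_queue_bytes - TARGET_QUEUE) / int_queue_bytes
        return max(rate * (1.0 - 0.5 * excess), 0.01 * LINK_CAPACITY)
    if int_link_utilization < 0.95:
        # Spare capacity and near-empty queue: additive increase.
        return min(rate + 0.01 * LINK_CAPACITY, LINK_CAPACITY)
    return rate

rate = 0.5 * LINK_CAPACITY
for queue in [0, 0, 40 * 1024, 80 * 1024, 5 * 1024, 0]:
    rate = adjust_rate(rate, queue, 0.9)
    print(f"rate = {rate / 1e9:.1f} Gbps")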
Funding: This work is supported by the Fundamental Research Funds for the Central Universities (Zhejiang University NGICS Platform); Xiaofeng Yu receives the grant, and the URL to the sponsor's website is https://www.zju.edu.cn/. The work is also supported by China's National Natural Science Foundation (Nos. 62072249, 62072056); Jin Wang and Yongjun Ren receive the grant, and the URL to the sponsor's website is https://www.nsfc.gov.cn/. This work is also funded by the National Science Foundation of Hunan Province (2020JJ2029); Jin Wang receives the grant, and the URL to the sponsor's website is http://kjt.hunan.gov.cn/.
Abstract: With the rapid development of information technology, the development of blockchain technology has also been deeply affected. When performing block verification in a blockchain network, verifying all transactions on the chain causes data to accumulate on the chain, resulting in data storage problems. At the same time, the security of the data is also challenged, which puts enormous pressure on the block and results in extremely low communication efficiency. The traditional blockchain system uses the Merkle tree to store data. While it can verify the integrity and correctness of the data, the amount of proof is large, and it is impossible to verify the data in batches. A large amount of data proof greatly impacts verification efficiency, causing end-to-end communication delays and seriously affecting the blockchain system's stability, efficiency, and security. To solve this problem, this paper proposes replacing the Merkle tree with polynomial commitments, which take advantage of the properties of polynomials to reduce the proof size and communication consumption. Through the ingenious use of aggregated proofs and smart contracts, the verification efficiency of blocks is improved and the pressure of node communication is reduced.
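The motivation is easy to quantify: a Merkle inclusion proof grows with log2 of the transaction count, while a polynomial commitment opening (e.g., a KZG proof) stays constant-size, and openings can be aggregated. A back-of-the-envelope comparison, assuming 32-byte hashes and a 48-byte KZG proof on BLS12-381:

import math

HASH = 32        # bytes per Merkle sibling hash
KZG = 48         # bytes per KZG opening proof (one compressed G1 element)

for n in [2**10, 2**14, 2**20]:
    merkle = HASH * math.ceil(math.log2(n))
    print(f"n = {n:>8}: Merkle proof {merkle:>4} B, polynomial commitment {KZG} B")
# For a batch of k transactions: k separate Merkle paths vs one aggregated opening.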
Funding: This work was jointly supported by the National Natural Science Foundation of China (No. 52177068); the Hunan Provincial Natural Science Foundation of China (No. 2023J30028); and the Graduate Research Innovation Project of Changsha University of Science and Technology (No. CXCLY2022076).
Abstract: The modern power system has evolved into a cyber-physical system with deep coupling of the physical and information domains, which brings new security risks. Aiming at the problem that "information-physical" cross-domain attacks using key nodes as springboards seriously threaten the safe and stable operation of power grids, a risk propagation model considering key nodes of power-communication coupling networks is proposed to study the risk propagation characteristics of malicious attacks on key nodes and their impact on the system. First, combining complex network theory, a topological model of the power-communication coupling network is established, and the key nodes of the coupling network are screened out by the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) method under a comprehensive evaluation index based on topological and physical characteristics. Second, a risk propagation model is established for malicious attacks on key nodes to study their propagation characteristics and analyze the state changes of each node in the coupled network. Then, two loss-causing factors, the minimum load loss ratio and the transmission delay factor, are constructed to quantify the impact of risk propagation on the coupled network. Finally, simulation analysis based on the IEEE 39-node system shows that the probability of a node being breached (α) and the security tolerance of the system (β) are the key factors affecting the risk propagation characteristics of the coupled network, and that the criticality of a node is positively correlated with the loss-causing factors. The proposed model provides an effective exploration of the diffusion of security risks in control systems at the macro level.
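The key-node screening step uses TOPSIS, which ranks alternatives by relative closeness to an ideal solution; a minimal sketch with assumed per-node indices (rows are nodes, columns are benefit-type evaluation indices; the matrix and weights are placeholders):

import numpy as np

# Assumed decision matrix: rows are nodes, columns are evaluation indices
# (e.g., degree, betweenness, load), all treated as benefit criteria here.
X = np.array([[0.8, 0.6, 120.0],
              [0.5, 0.9, 200.0],
              [0.9, 0.4, 150.0],
              [0.3, 0.2, 80.0]])
w = np.array([0.4, 0.3, 0.3])               # assumed index weights

R = X / np.linalg.norm(X, axis=0)           # vector normalization
V = R * w                                   # weighted normalized matrix
ideal, anti = V.max(axis=0), V.min(axis=0)  # ideal and anti-ideal solutions
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)         # relative closeness in [0, 1]
print("node ranking (most critical first):", np.argsort(-closeness))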
Abstract: This research aimed to evaluate the efficiency of eucalyptus (E) and bamboo (B) residual biomass biochars as filter materials for drinking water treatment. The efficiencies of these two biochars in the rapid filtration process were evaluated using water (raw, flocculated and settled) at a rate of 120 m^(3)/m^(2)/d. Bamboo biochar manufactured under slow pyrolysis process "b" (Bb) was found to have the best performance. Bb was then evaluated with three different granulometries, and the effective size with the best performance was the finest (0.6-1.18 mm). This biochar was subsequently compared with conventional filter materials such as gravel, sand and anthracite, using different types of water (raw, flocculated and settled) and different filtration rates (120 and 240 m^(3)/m^(2)/d); the filter material with the best performance was precisely biochar, with average removal efficiencies of 64.37% turbidity and 45.08% colour for raw water; 93.9% turbidity and 90.75% colour for flocculated water; and 80.79% turbidity and 69.03% colour for settled water. The efficiency using simple beds of sand, biochar, anthracite and gravel at a rate of 180 m^(3)/m^(2)/d was 75.9% copper, 90.72% aluminium, 95.7% iron, 10.9% nitrates, 94.3% total coliforms and 88.9% fecal coliforms. The efficiencies achieved by biochar were higher than those of conventional filter materials. It was also found that biochar contributes to improving the performance of sand and anthracite in mixed beds. Additionally, it was possible to demonstrate that the volume of washing water required for the biochar is lower than for the other filter beds. Finally, it is recommended to carry out more tests on the purification of water with biochars in rural areas affected by mining and oil exploitation, as well as the purification of seawater with biochars in coastal areas using residues from dry forests and organic residues from municipalities.
Abstract: This paper discusses the data-driven design of linear quadratic regulators, i.e., obtaining the regulators directly from experimental data without using models of the plants. In particular, we aim to improve an existing design method by reducing the amount of experimental data required. Reducing the data amount reduces the cost of experiments and of computation for the data-driven design. We present a simplified version of the existing method, where the parameters yielding the gain of the regulator are estimated from only part of the data required by the existing method. We then show that the data amount required by the presented method is less than half of that in the existing method under certain conditions. In addition, assuming the presence of measurement noise, we analyze the relations between the expectations and variances of the estimated parameters and the noise. As a result, it is shown that using a larger amount of experimental data might mitigate the effects of the noise on the estimated parameters. These results are verified by numerical examples.
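For orientation, the classical indirect route the direct methods compete with can be sketched: identify (A, B) from input/state data by least squares, then solve the discrete Riccati equation for the gain. The sketch below implements only that indirect baseline on an assumed toy plant, not the paper's direct, data-reduced method:

import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(4)
A = np.array([[1.0, 0.1], [0.0, 0.9]])    # true plant, unknown to the designer
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.eye(1)

# Collect an input/state trajectory (the "experimental data").
T = 50
x = np.zeros((2, T + 1))
u = rng.normal(size=(1, T))
for t in range(T):
    x[:, t + 1] = A @ x[:, t] + B @ u[:, t] + rng.normal(0, 1e-3, 2)

# Least-squares estimate of [A B] from x(t+1) = A x(t) + B u(t).
Z = np.vstack([x[:, :-1], u])              # regressors, shape (3, T)
AB = x[:, 1:] @ np.linalg.pinv(Z)
A_hat, B_hat = AB[:, :2], AB[:, 2:]

P = solve_discrete_are(A_hat, B_hat, Q, R)
K = np.linalg.solve(R + B_hat.T @ P @ B_hat, B_hat.T @ P @ A_hat)
print("LQR gain from data:", K)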
Funding: This work was partially supported by the National Natural Science Foundation of China under Grant Nos. 71101010 and 71471016.
Abstract: The rapid evolution of the Internet has created demand for effective recommender systems that can pinpoint useful information among online resources. Although historical rating data has been widely used as the most important information in recommendation methods, recent advancements have demonstrated improved recommendation performance from incorporating tag information. Furthermore, the availability of tag annotations has been well addressed by fruitful online social tagging applications such as CiteULike, MovieLens and BibSonomy, which allow users to express their preferences, upload resources and assign their own tags. Nevertheless, most existing tag-aware recommendation approaches model relationships among users, objects and tags using a tripartite graph, and hence overlook relationships within the same types of nodes. To overcome this limitation, we propose a novel approach, Trinity, that integrates historical data and tag information for personalised recommendation. Trinity constructs a three-layered object-user-tag network that considers not only interconnections between different types of nodes but also relationships within the same types of nodes. Based on this heterogeneous network, Trinity adopts a random walk with restart model to assign association strengths to candidate objects, thereby providing a means of prioritizing the objects for a query user. We validate our approach via a series of large-scale 10-fold cross-validation experiments and evaluate its performance using three comprehensive criteria. Results show that our method outperforms several existing methods, including supervised random walk with restart, simulation of resource allocating processes, and traditional collaborative filtering.
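The core ranking step is a random walk with restart on the heterogeneous network: at each step the walker follows the column-normalized adjacency with probability 1 - c and teleports back to the query user with probability c. A minimal sketch on an assumed small adjacency matrix standing in for the object-user-tag network:

import numpy as np

# Assumed symmetric adjacency over a few {user, object, tag} nodes (0/1 links).
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 0, 1, 1],
              [1, 0, 0, 1, 0],
              [0, 1, 1, 0, 1],
              [0, 1, 0, 1, 0]], dtype=float)
W = A / A.sum(axis=0)            # column-normalized transition matrix
c = 0.15                         # restart probability

q = np.zeros(5)
q[0] = 1.0                       # restart vector: the query user is node 0
p = q.copy()
for _ in range(100):             # power iteration to the stationary vector
    p_next = (1 - c) * W @ p + c * q
    if np.abs(p_next - p).sum() < 1e-10:
        break
    p = p_next
print("association strengths:", np.round(p, 4))  # rank candidates by p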
Funding: Supported by the National Natural Science Foundation of China under Grant No. 60872097.
Abstract: The level set method is commonly used to address image noise removal. Existing studies concentrate mainly on determining the speed function of the evolution equation. Based on the idea of a Canny operator, this letter introduces a new method of controlling the level set evolution, in which the edge strength is taken into account in choosing curvature flows for the speed function, and the normal-to-edge direction is used to orient the diffusion of the moving interface. The addition of an energy term to penalize irregularity allows for better preservation of local edge information. In contrast with previous Canny-based level set methods that usually adopt a two-stage framework, the proposed algorithm can execute all the above operations in one process during noise removal.
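The evolution described can be sketched as mean-curvature motion of the image's level sets, modulated by an edge-stopping function computed from the gradient magnitude (the Canny-style edge strength). A minimal numpy sketch of this idea on a synthetic image, not the letter's full one-pass algorithm (the time step and stopping constant are assumptions):

import numpy as np

rng = np.random.default_rng(5)
I = np.zeros((64, 64))
I[16:48, 16:48] = 1.0                            # synthetic square image
I += rng.normal(0, 0.1, I.shape)                 # additive noise

dt, eps = 0.1, 1e-8
for _ in range(50):
    Iy, Ix = np.gradient(I)
    g = 1.0 / (1.0 + (Ix**2 + Iy**2) / 0.05)     # edge-stopping: small on edges
    Iyy, Iyx = np.gradient(Iy)
    Ixy, Ixx = np.gradient(Ix)
    # kappa * |grad I|: curvature motion applied to every level set of I
    num = Ixx * Iy**2 - 2 * Ix * Iy * Ixy + Iyy * Ix**2
    I += dt * g * num / (Ix**2 + Iy**2 + eps)
print("residual noise std in flat region:", I[:8, :8].std())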