Many search-based algorithms have been successfully applied in several software engineering activities. Genetic algorithms (GAs) are the most widely used by scholars to solve software testing problems. They imitate the theory of natural selection and evolution. The harmony search algorithm (HSA) is one of the most recent search algorithms. It imitates the behavior of a musician finding the best harmony. Scholars have estimated the similarities and differences between genetic algorithms and the harmony search algorithm in diverse research domains. The test data generation process represents a critical task in software validation. Unfortunately, there is no work comparing the performance of genetic algorithms and the harmony search algorithm in the test data generation process. This paper studies the similarities and differences between genetic algorithms and the harmony search algorithm based on the ability and speed of finding the required test data. The current research performs an empirical comparison of the HSA and the GAs, and then the significance of the results is estimated using the t-test. The study investigates the efficiency of the harmony search algorithm and the genetic algorithms according to (1) the time performance, (2) the significance of the generated test data, and (3) the adequacy of the generated test data to satisfy a given testing criterion. The results showed that the harmony search algorithm is significantly faster than the genetic algorithms, because the t-test showed that the p-value of the time values is 0.026 < α (α is the significance level = 0.05 at a 95% confidence level). In contrast, there is no significant difference between the two algorithms in generating adequate test data, because the t-test showed that the p-value of the fitness values is 0.25 > α.
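The entry above compares run-time samples with a two-sample t-test at α = 0.05. As a rough illustration only (the timing samples below are invented, not the paper's data, and the paper does not publish its code), a hand-rolled Welch's t-test might look like this:

```python
import math

def welch_t_test(a, b):
    """Two-sample Welch's t-test; returns (t statistic, approximate degrees of freedom)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2 = va / na + vb / nb
    t = (ma - mb) / math.sqrt(se2)
    # Welch-Satterthwaite approximation of the degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical generation times (seconds) for HSA and GA on the same test problems
hsa_times = [1.2, 0.9, 1.1, 1.0, 0.8]
ga_times = [1.9, 2.4, 2.1, 1.8, 2.2]
t, df = welch_t_test(hsa_times, ga_times)
print(t, df)
```

The p-value would then be read from a t-distribution with `df` degrees of freedom; in practice `scipy.stats.ttest_ind(a, b, equal_var=False)` does all of this in one call.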
For rechargeable wireless sensor networks, limited energy storage capacity, dynamic energy supply, and low and dynamic duty cycles make it impractical to maintain a fixed routing path for packet delivery from a source to a destination permanently in a distributed scenario. Therefore, before data delivery, a sensor has to update its waking schedule continuously and share it with its neighbors, which leads to high energy expenditure for frequently re-establishing path links and low efficiency of energy utilization for collecting packets. In this work, we propose a maximum data generation rate routing protocol based on data flow control technology. A sensor neither shares its waking schedule with its neighbors nor caches the waking schedules of other sensors. Hence, the energy consumed for time synchronization, location information and waking schedule sharing is reduced significantly, and the saved energy can be used to improve the data collection rate. Simulation shows our scheme is efficient in improving the packet generation rate in rechargeable wireless sensor networks.
Data augmentation is an important task of using existing data to expand data sets. Using generative adversarial network technology to realize data augmentation has the advantages of high-quality generated samples, simple training, and fewer restrictions on the number of generated samples. However, in the field of transmission line insulator images, freely synthesized samples are prone to fuzzy backgrounds and disordered main insulator features. To solve these problems, this paper uses the cycle generative adversarial network (Cycle-GAN), used for domain conversion, as the initial framework and uses self-attention and channel attention mechanisms to assist the conversion, realizing the mutual conversion of different insulator samples. An attention module with prior knowledge is used to build the adversarial network, and a generative adversarial network (GAN) model with locally controllable generation is built to realize the directional generation of insulator belt defect samples. The experimental results show that the samples obtained by this method improve on a number of quality indicators and are of excellent quality, which has reference value for the data expansion of insulator images.
This paper explores the data theory of value along the line of reasoning “epochal characteristics of data, theoretical innovation, paradigmatic transformation” and, through a comparison of hard and soft factors and observation of data's peculiar features, draws the conclusion that data have the epochal characteristics of non-competitiveness and non-exclusivity, decreasing marginal cost and increasing marginal return, non-physical and intangible form, and non-finiteness and non-scarcity. It is these epochal characteristics of data that undermine the traditional theory of value and innovate the “production-exchange” theory, including data value generation, data value realization, data value rights determination and data value pricing. From the perspective of data value generation, the levels of data quality, processing, use and connectivity, data application scenarios and data openness influence data value. From the perspective of data value realization, data, as independent factors of production, show a value creation effect, create a value multiplier effect by empowering other factors of production, and substitute other factors of production to create a zero-price effect. From the perspective of data value rights determination, based on the theory of property, the tragedy of the private outweighs the comedy of the private with respect to data, and based on the theory of the sharing economy, the comedy of the commons outweighs the tragedy of the commons with respect to data. From the perspective of data pricing, standardized data products can be priced according to physical product attributes, and non-standardized data products can be priced according to virtual product attributes. Based on the epochal characteristics of data and theoretical innovation, the “production-exchange” paradigm has undergone a transformation from “using tangible factors to produce tangible products and exchanging tangible products for tangible products” to “using intangible factors to produce tangible products and exchanging intangible products for tangible products” and ultimately to “using intangible factors to produce intangible products and exchanging intangible products for intangible products”.
The automatic generation of test data is a key step in realizing automated testing. Most automated testing tools for unit testing only provide test case execution drivers and cannot generate test data that meets coverage requirements. This paper presents an improved whale genetic algorithm for generating the test data required for unit testing with MC/DC coverage. The proposed algorithm introduces an elite retention strategy to prevent the genetic algorithm from falling into iterative degradation. At the same time, the mutation threshold of the whale algorithm is introduced to balance the global exploration and local search capabilities of the genetic algorithm. The threshold is dynamically adjusted according to the diversity and evolution stage of the current population, which positively guides the evolution of the population. Finally, an improved crossover strategy is proposed to accelerate the convergence of the algorithm. The improved whale genetic algorithm is compared with the genetic algorithm, the whale algorithm and the particle swarm algorithm on two benchmark programs. The results show that the proposed algorithm generates test data faster than the comparison methods, provides better coverage with fewer evaluations, and has great advantages in generating test data.
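The whale-genetic hybrid above is not reproduced here, but the two ideas it names, elite retention and a diversity-driven mutation threshold, can be sketched in a plain genetic algorithm. Everything below (the OneMax objective, the population size, the diversity cutoff and mutation rates) is an illustrative assumption, not the authors' implementation:

```python
import random

def evolve(fitness, n_bits=16, pop_size=30, generations=60, elite=2, seed=1):
    """Elitist GA over bit strings; the mutation rate rises when diversity drops."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        best = pop[0]
        # diversity: mean per-bit disagreement with the current best individual
        diversity = sum(sum(b != g for b, g in zip(ind, best))
                        for ind in pop) / (pop_size * n_bits)
        mut = 0.02 if diversity > 0.1 else 0.1      # raise mutation as population converges
        next_pop = [ind[:] for ind in pop[:elite]]  # elite retention: best survive unchanged
        while len(next_pop) < pop_size:
            p1, p2 = rng.sample(pop[:pop_size // 2], 2)   # truncation selection
            cut = rng.randrange(1, n_bits)
            child = p1[:cut] + p2[cut:]                   # one-point crossover
            child = [b ^ (rng.random() < mut) for b in child]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

# Toy objective: maximize the number of set bits ("OneMax")
best = evolve(fitness=sum)
print(sum(best))
```

In a real test data generator the fitness would instead measure how close an input comes to satisfying an uncovered MC/DC condition.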
Testing is an integral part of software development. Current fast-paced system development has rendered traditional testing techniques obsolete; therefore, automated testing techniques are needed to keep up with such development speed. Model-based testing (MBT) is a technique that uses system models to generate and execute test cases automatically. It was identified that test data generation (TDG) in many existing model-based test case generation (MB-TCG) approaches was still manual. An automatic and effective TDG can further reduce testing cost while detecting more faults. This study proposes an automated TDG approach for MB-TCG using the extended finite state machine (EFSM) model. The proposed approach integrates MBT with combinatorial testing. The information available in an EFSM model and the boundary value analysis strategy are used to automate the domain input classifications which were done manually in the existing approach. The results showed that the proposed approach was able to detect 6.62 percent more faults than conventional MB-TCG but at the same time generated 43 more tests. The proposed approach effectively detects faults, but further treatment of the generated tests, such as test case prioritization, should be done to increase the effectiveness and efficiency of testing.
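The boundary value analysis strategy mentioned above has a standard five-point textbook form: the minimum, just above the minimum, a nominal value, just below the maximum, and the maximum. A minimal sketch (the guard range [1, 100] is a made-up example, not taken from the paper):

```python
def boundary_values(lo, hi):
    """Classic five-point boundary value analysis for a closed integer range [lo, hi]."""
    nominal = (lo + hi) // 2
    # a set removes duplicates when the range is very narrow
    return sorted({lo, lo + 1, nominal, hi - 1, hi})

# e.g. a guard condition "x in [1, 100]" on an EFSM transition
print(boundary_values(1, 100))
```

Each transition guard in the EFSM contributes one such set of candidate inputs, which combinatorial testing then combines across parameters.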
Along with the development of big data, various natural language generation (NLG) systems have recently been developed by different companies. The aim of this paper is to propose a better understanding of how these systems are designed and used. We study one of them in detail: the NLG system developed by the company Nomao. First, we show that the development of this NLG system involves strong economic stakes, since the business model of Nomao partly depends on it. Then, through an eye movement analysis conducted with 28 participants, we show that the texts generated by Nomao’s NLG system contain syntactic and semantic structures that are easy to read but lack the socio-semantic coherence that would improve their understanding. From a scientific perspective, our results highlight the importance of socio-semantic coherence in text-based communication produced by NLG systems.
ZTE Corporation (ZTE) announced on February 16, 2009 that their complete line of mobile broadband data cards would support Windows 7 and be compliant with the Windows Network Driver Interface Specification 6.20 (NDIS 6.20).
Based on the updating of new-generation weather radar software, a compilation system for new-generation weather radar case data can automatically back up data and compile radar cases. Using the C language and VC++ 6.0 development technology, the software realizes the automatic sorting and saving of radar base data, radar products and radar status information on different machines every day, and automatically creates the various folders and files required for compiling data. By inputting the number of days, the date, and the start and end times, the renaming and compression of base data, product data and status information can be completed automatically, realizing the automation, batching, proceduralization and standardization of case data compilation. Since being put into radar operations, the system has run stably and reliably; the working efficiency of business personnel has been improved, and a large amount of manpower has been saved. It can be transplanted and popularized at other new-generation weather radar stations.
Grey sequence generation can draw out and develop the implied rules of the original data. Different kinds of generation methods are summarized and classified into two types: partial generation and whole generation. The average generation and stepwise ratio generation are discussed, and the preference generation is regarded as a special case of proportional division. Based on geometric analysis, an idea of using the concave or convex status of discrete data to determine the generation coefficient is proposed. Based on the stepwise and smooth ratio generation, a tendency average generation is proposed and compared with the other generations using the data provided in the referenced papers. The comparison proves that the new generation is better than the other two and that errors are obviously reduced.
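The basic generation operators this entry builds on have compact textbook definitions: accumulated generation (running sums), adjacent-mean (average) generation, and stepwise ratio generation. A sketch with an invented sequence (the paper's own tendency average generation is not reproduced here):

```python
def ago(x):
    """Accumulated generating operation (1-AGO): running sums of the raw sequence."""
    out, s = [], 0.0
    for v in x:
        s += v
        out.append(s)
    return out

def mean_generation(x):
    """Adjacent-mean (average) generation: z(k) = 0.5*x(k) + 0.5*x(k-1)."""
    return [0.5 * x[k] + 0.5 * x[k - 1] for k in range(1, len(x))]

def stepwise_ratios(x):
    """Stepwise ratio generation: sigma(k) = x(k-1) / x(k)."""
    return [x[k - 1] / x[k] for k in range(1, len(x))]

raw = [2.0, 3.0, 4.5, 6.75]   # illustrative data, growth ratio 1.5
print(ago(raw))
print(mean_generation(raw))
print(stepwise_ratios(raw))
```

For this geometric sequence the stepwise ratios are constant, which is exactly the smoothness condition grey modelling checks before fitting.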
Software testing has been attracting a lot of attention for effective software development. In the model-driven approach, the Unified Modelling Language (UML) is a conceptual modelling approach for obligations and other features of the system. Specialized tools interpret these models into other software artifacts such as code, test data and documentation. The generation of test cases permits the appropriate test data to be determined that have the aptitude to ascertain the requirements. This paper focuses on optimizing the test data obtained from UML activity and state chart diagrams by using a Basic Genetic Algorithm (BGA). For generating the test cases, both diagrams were converted into their corresponding intermediate graphical forms, namely the Activity Diagram Graph (ADG) and the State Chart Diagram Graph (SCDG). Both graphs were then joined to create a single graph known as the Activity State Chart Diagram Graph (ASCDG). Next, the ASCDG was optimized using the BGA to generate the test data. A case study involving a withdrawal from an automated teller machine (ATM) of a bank was employed to demonstrate the approach. The approach successfully identified defects in various ATM functions such as messaging and operation.
The key generation algorithm of AES is introduced, and the weaknesses of the AES key generation design are investigated. According to the key requirements, a new design idea is put forward and developed into a strategy that can be used to improve the key generation algorithm of AES. An analysis shows that such improvement can enhance the safety of the original algorithm without reducing its efficiency.
For the implementation of the power market in China, medium- and long-term security checks are essential for bilateral transactions, for which the electricity quantity that constitutes the generation feasible region (GFR) is the target. However, uncertainties from load forecasting errors and transmission contingencies are threats to medium- and long-term electricity trading in terms of their influences on the GFR. In this paper, we present a graphic distortion pattern in a typical three-generator system using the Monte Carlo method and projection theory based on security-constrained economic dispatch. The underlying potential risk to the GFR from uncertainties is clearly visualized, and its impact characteristics are discussed. A case study on detailed GFR distortion is included to demonstrate the effectiveness of this visualization model. The result implies that a small uncertainty can distort the GFR to a remarkable extent and that different line contingencies precipitate disparate GFR distortion patterns, thereby placing great emphasis on load forecasting and line reliability in electricity transactions.
Because photovoltaic power generation is affected by radiation intensity, wind speed, cloud cover and environmental temperature, the generation at each moment fluctuates. The operational characteristics of grid-connected PV systems coincide with the application conditions of grey theory. A grey theory model has been applied to the short-term forecasting of a grid-connected photovoltaic system. A verification model based on the probability of small error helps to check the accuracy of the grey forecast results. The calculated result shows that the model accuracy is greatly enhanced.
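The abstract does not give the model equations, but the standard GM(1,1) grey forecast it most likely refers to can be sketched as follows. The "PV output" series below is synthetic (steady ~10% growth), not the paper's data:

```python
import math

def gm11(x0, steps=1):
    """Fit a GM(1,1) grey model to sequence x0 and forecast `steps` values ahead."""
    n = len(x0)
    x1, s = [], 0.0
    for v in x0:
        s += v
        x1.append(s)                                        # 1-AGO sequence
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]    # background values
    y = x0[1:]
    m = n - 1
    # Grey equation x0(k) + a*z(k) = b, solved by least squares (normal equations)
    sz, sy = sum(z), sum(y)
    szz = sum(v * v for v in z)
    szy = sum(v * w for v, w in zip(z, y))
    a = -(m * szy - sz * sy) / (m * szz - sz * sz)
    b = (sy + a * sz) / m

    def x1_hat(k):                                          # fitted AGO response
        return (x0[0] - b / a) * math.exp(-a * k) + b / a

    # restore forecasts by first-order differencing of the fitted AGO curve
    return [x1_hat(n + i) - x1_hat(n + i - 1) for i in range(steps)]

series = [100.0, 110.0, 121.0, 133.1]   # synthetic output, not measured PV data
forecast = gm11(series, steps=1)
print(forecast[0])
```

The true continuation of this geometric series is 146.41; the GM(1,1) forecast lands close to it, which is the behavior grey forecasting relies on for short, near-exponential series.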
Many Internet of Things application scenarios have the characteristics of limited hardware resources and limited energy supply, which are not suitable for traditional security technology. Security technology based on physical mechanisms has therefore attracted extensive attention, and improving the key generation rate has always been one of its urgent problems. In this paper, superlattice technology is introduced to the security field of the Internet of Things, and a high-speed symmetric key generation scheme based on superlattices for the Internet of Things is proposed. To ensure the efficiency and privacy of data transmission, we also combine the superlattice symmetric key with compressive sensing technology to build a lightweight data transmission scheme that supports data compression and data encryption at the same time. Theoretical analysis and experimental evaluation results show that the proposed scheme is superior to the most closely related work.
Due to the development of technology in medicine, millions of health-related data such as scanned images are generated. It is a great challenge to store and handle such a massive volume of data. Healthcare data are stored in cloud-fog storage environments. This cloud-fog based health model allows users to get health-related data from different sources, and duplicated information is also present in the background. This requires additional storage area, increases data acquisition time, and leads to insecure data replication in the environment. This paper proposes to eliminate duplicate data using a window size chunking algorithm with a biased sampling-based Bloom filter and to provide health data security using the Advanced Signature-Based Encryption (ASE) algorithm in the fog-cloud environment (WCA-BF+ASE). WCA-BF+ASE eliminates duplicate copies of the data and minimizes storage space and maintenance cost. The data are also stored efficiently and in a highly secured manner. In terms of security level in the cloud storage environment, the Window Size Chunking Algorithm (WSCA) scored 86.5%, two thresholds two divisors (TTTD) 80%, Ordinal in Python (ORD) 84.4%, and Bloom Filter (BF) 82%, while the proposed work achieved better security storage of 97%. Moreover, after applying the de-duplication process, the proposed WCA-BF+ASE method required only a small amount of storage space for various file sizes: 10 KB for 200 MB, 22 KB for 400 MB, 35 KB for 600 MB, 38 KB for 800 MB, and 40 KB for 1000 MB.
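The WCA-BF+ASE pipeline above is not reproduced here, but its de-duplication core, chunking plus a Bloom filter membership test, can be sketched generically. The chunk size, filter size and hash construction below are arbitrary assumptions, and a real system would confirm suspected duplicates with a full hash, since Bloom filters admit false positives:

```python
import hashlib

class BloomFilter:
    """Tiny Bloom filter: k hash positions derived from salted SHA-256 digests."""
    def __init__(self, n_bits=1 << 16, k=4):
        self.n_bits, self.k = n_bits, k
        self.bits = bytearray(n_bits // 8)

    def _positions(self, chunk):
        for salt in range(self.k):
            h = hashlib.sha256(bytes([salt]) + chunk).digest()
            yield int.from_bytes(h[:4], "big") % self.n_bits

    def add(self, chunk):
        for p in self._positions(chunk):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, chunk):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(chunk))

def dedup(data, window=8):
    """Fixed-window chunking; store a chunk only if the filter has not seen it."""
    bf, stored = BloomFilter(), []
    for i in range(0, len(data), window):
        chunk = data[i:i + window]
        if chunk not in bf:     # may (rarely) skip a unique chunk: false positive
            bf.add(chunk)
            stored.append(chunk)
    return stored

record = b"ABCDEFGH" * 3 + b"12345678"   # three duplicate chunks plus one unique
print(dedup(record))
```

Only two chunks survive: the repeated chunk is stored once, which is where the storage savings in the reported figures come from.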
Discovering floating wastes, especially bottles on water, is a crucial research problem in environmental hygiene. Nevertheless, real-world applications often face challenges such as interference from irrelevant objects and the high cost associated with data collection. Consequently, devising algorithms capable of accurately localizing specific objects within a scene where annotated data is limited remains a formidable challenge. To solve this problem, this paper proposes an object discovery by request problem setting and a corresponding algorithmic framework. The proposed problem setting aims to identify specified objects in scenes, and the associated algorithmic framework comprises pseudo-data generation and an object discovery by request network. Pseudo-data generation produces images resembling natural scenes through various data augmentation rules, using a small number of object samples and scene images. The network structure of object discovery by request utilizes the pre-trained Vision Transformer (ViT) model as the backbone, employs object-centric methods to learn the latent representations of foreground objects, and applies patch-level reconstruction constraints to the model. During the validation phase, we use the generated pseudo datasets as training sets and evaluate the performance of our model on the original test sets. Experiments show that our method achieves state-of-the-art performance on the Unmanned Aerial Vehicles-Bottle Detection (UAV-BD) dataset and the self-constructed Bottle dataset, especially in multi-object scenarios.
The difficulty of collecting bumblebee data and the laborious nature of bumblebee data annotation sometimes result in a lack of training data, which impairs the effectiveness of deep learning based counting methods. Given that it is challenging to produce detailed background information in generated bumblebee images with current data augmentation methods, this paper proposes a joint multi-scale convolutional neural network and multi-channel attention based generative adversarial network (MMGAN). MMGAN generates a bumblebee image in accordance with a corresponding density map marking the bumblebee positions. Specifically, the multi-scale convolutional neural network (CNN) module utilizes multiple convolution kernels to fully extract features of different scales from the input bumblebee image and density map. To generate various targets in the generated image, the multi-channel attention module builds numerous intermediate generation layers and attention maps. These targets are then stacked to produce a bumblebee image with a specific number of bumblebees. The proposed model achieves the best performance in bumblebee image generation tasks, and the generated bumblebee images considerably improve the efficiency of deep learning based counting methods in bumblebee counting applications.
Funding: This work was supported by the National Natural Science Foundation of China (Grant No. 31670554), the Natural Science Foundation of Jiangsu Province of China (Grant No. BK20161527), the China Postdoctoral Science Foundation (Grant Nos. 2018T110505, 2017M611828), and the Priority Academic Program Development (PAPD) of Jiangsu Higher Education Institutions. The authors wish to express their appreciation to the reviewers for their helpful suggestions, which greatly improved the presentation of this paper.
Funding: Supported in part by the National Natural Science Foundation of China under Grant No. 61973055, the Fundamental Research Funds for the Central Universities under Grant No. ZYGX2020J011, and the Regional Innovation Cooperation Funds of Sichuan under Grant No. 2024YFHZ0089.
Funding: Funded by the “Management Model Innovation of Chinese Enterprises” Research Project, Institute of Industrial Economics, CASS (Grant No. 2019-gjs-06), and a project under the Graduate Student Scientific and Research Innovation Support Program, University of Chinese Academy of Social Sciences (Graduate School) (Grant No. 2022-KY-118).
Abstract: The automatic generation of test data is a key step in realizing automated testing. Most automated unit-testing tools only provide test case execution drivers and cannot generate test data that meets coverage requirements. This paper presents an improved whale genetic algorithm for generating the test data required for MC/DC coverage in unit testing. The proposed algorithm introduces an elite retention strategy to prevent the genetic algorithm from degrading across iterations. At the same time, the mutation threshold of the whale algorithm is introduced to balance the global exploration and local search capabilities of the genetic algorithm; the threshold is adjusted dynamically according to the diversity and evolution stage of the current population, which positively guides the evolution of the population. Finally, an improved crossover strategy is proposed to accelerate convergence. The improved whale genetic algorithm is compared with the genetic algorithm, the whale algorithm and the particle swarm algorithm on two benchmark programs. The results show that the proposed algorithm generates test data faster than the comparison methods and achieves better coverage with fewer evaluations, giving it a clear advantage in test data generation.
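The elite-retention and diversity-driven mutation mechanisms named above can be sketched as follows. This is a minimal genetic-algorithm illustration with an invented branch-distance-style fitness function and invented parameters, not the authors' whale genetic algorithm:

```python
import random

def diversity(pop):
    """Fraction of distinct genomes - a crude population-diversity measure."""
    return len({tuple(ind) for ind in pop}) / len(pop)

def evolve(fitness, genome_len=8, pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 100) for _ in range(genome_len)] for _ in range(pop_size)]
    for gen in range(generations):
        pop.sort(key=fitness)                    # lower fitness = better
        elite = [ind[:] for ind in pop[:2]]      # elite retention: best survive unchanged
        # adaptive mutation: mutate more when diversity has collapsed,
        # and less in late generations (exploitation phase)
        mut_rate = 0.05 + 0.4 * (1 - diversity(pop)) * (1 - gen / generations)
        children = elite[:]
        while len(children) < pop_size:
            a, b = rng.sample(pop[:pop_size // 2], 2)  # parents from the better half
            cut = rng.randrange(1, genome_len)
            child = a[:cut] + b[cut:]                  # one-point crossover
            child = [rng.randint(0, 100) if rng.random() < mut_rate else g
                     for g in child]
            children.append(child)
        pop = children
        if fitness(pop[0]) == 0:                 # adequate test datum found
            break
    return min(pop, key=fitness)

# toy branch-distance fitness: seek test data with x[0] == 42 and x[1] > 90
target = lambda x: abs(x[0] - 42) + max(0, 91 - x[1])
best = evolve(target)
```

Because the two elites re-enter the population unmodified, the best fitness can never worsen between generations, which is exactly the iterative-degradation problem the abstract describes.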
Fund: This research was funded by Universiti Teknologi Malaysia (UTM) and the Malaysian Ministry of Higher Education (MOHE) under the Industry-International Incentive Grant Scheme (IIIGS) (Vote Numbers Q.J130000.3651.02M67 and Q.J130000.3051.01M86) and the Academic Fellowship Scheme (SLAM).
Abstract: Testing is an integral part of software development. Current fast-paced system development has rendered traditional testing techniques obsolete; automated testing techniques are therefore needed to keep up with the speed of system development. Model-based testing (MBT) is a technique that uses system models to generate and execute test cases automatically. It was identified that test data generation (TDG) in many existing model-based test case generation (MB-TCG) approaches is still manual. An automatic and effective TDG can further reduce testing cost while detecting more faults. This study proposes an automated TDG approach for MB-TCG using the extended finite state machine (EFSM) model. The proposed approach integrates MBT with combinatorial testing. The information available in an EFSM model and the boundary value analysis strategy are used to automate the domain input classifications that were done manually in the existing approach. The results showed that the proposed approach detected 6.62 percent more faults than conventional MB-TCG, but at the same time generated 43 more tests. The proposed approach detects faults effectively, but further treatment of the generated tests, such as test case prioritization, should be applied to increase the effectiveness and efficiency of testing.
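The boundary value analysis strategy used to automate domain input classification can be sketched as follows; the age-guard predicate is a hypothetical example, not from the paper:

```python
def boundary_values(lo, hi):
    """Classic boundary value analysis for an integer domain [lo, hi]:
    the two boundaries, their neighbours just inside, and one nominal value."""
    assert lo < hi
    return sorted({lo, lo + 1, (lo + hi) // 2, hi - 1, hi})

# suppose an EFSM transition is guarded by 18 <= age <= 65 (hypothetical guard);
# the valid partition yields these BVA-derived domain classes:
valid = boundary_values(18, 65)      # boundaries, near-boundaries, nominal
# the invalid partitions get boundary probes just outside the range
invalid = [17, 66]
tests = valid + invalid
```

Combining such per-variable classes across several guard variables is where the combinatorial-testing step of the proposed approach would take over.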
Abstract: Along with the development of big data, various natural language generation systems (NLGs) have recently been developed by different companies. The aim of this paper is to propose a better understanding of how these systems are designed and used. We study in detail one of them, the NLG system developed by the company Nomao. First, we show that the development of this system involves strong economic stakes, since the business model of Nomao partly depends on it. Then, through an eye-movement analysis conducted with 28 participants, we show that the texts generated by Nomao's NLG contain syntactic and semantic structures that are easy to read but lack the socio-semantic coherence that would improve their understanding. From a scientific perspective, our results highlight the importance of socio-semantic coherence in text-based communication produced by NLGs.
Abstract: ZTE Corporation (ZTE) announced on February 16, 2009 that its complete line of mobile broadband data cards would support Windows 7 and be compliant with the Windows Network Driver Interface Specification 6.20 (NDIS 6.20).
Fund: Supported by the Scientific Research and Technology Development Project of Wuzhou Meteorological Bureau (WUQIKE2020001).
Abstract: Following the upgrade of the new-generation weather radar software, a compilation system for new-generation weather radar case data can automatically back up data and compile radar cases. Built with the C language and VC++ 6.0, the software automatically sorts and saves radar base data, radar products and radar status information on different machines every day, and automatically creates the folders and files required for compiling the data. By entering the number of days, the date, and the start and end times, the renaming and compression of base data, product data and status information are completed automatically, making case data compilation automated, batched, process-driven and standardized. Since being put into operational use, the system has run stably and reliably; it has improved the working efficiency of operational personnel and saved a large amount of manpower. It can be transplanted to and popularized at other new-generation weather radar stations.
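The batch collection-and-compression step described above can be sketched as follows, assuming a hypothetical file-naming convention with an embedded YYYYMMDD_HHMM timestamp. The original tool is written in C/VC++; Python is used here only for brevity:

```python
import shutil
from pathlib import Path
from datetime import datetime

def compile_case(src_dir, dst_dir, start, end):
    """Copy radar files whose names end in a YYYYMMDD_HHMM timestamp
    (a hypothetical convention) into a case folder, then zip the folder."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for f in sorted(Path(src_dir).glob("*")):
        try:
            stamp = datetime.strptime(f.stem.split("-")[-1], "%Y%m%d_%H%M")
        except ValueError:
            continue                       # skip files without a parseable timestamp
        if start <= stamp <= end:
            shutil.copy2(f, dst / f.name)  # keep original names; rename here if needed
    # archive lands next to the case folder as <dst>.zip
    return shutil.make_archive(str(dst), "zip", root_dir=dst)
```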
Abstract: Grey sequence generation can draw out and develop the implied rules of the original data. Different generation methods are summarized and classified into two types: partial generation and whole generation. The average generation and the stepwise ratio generation are discussed; the preference generation is regarded as a special case of proportional division based on analytic geometry, and the idea is proposed of using the concave or convex status of discrete data to determine the generation coefficient. Based on the stepwise and smooth ratio generations, a tendency average generation is proposed and compared against them using the data provided in the referenced papers. The comparison shows that the new generation is better than the other two and that errors are clearly reduced.
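The basic generation operators discussed here follow standard grey-systems formulas and can be written down directly; the paper's own contribution, the tendency average generation, is not reproduced:

```python
def average_generation(x):
    """Adjacent-mean generation used throughout grey modelling:
    z(k) = 0.5 * (x(k) + x(k-1))."""
    return [0.5 * (x[k] + x[k - 1]) for k in range(1, len(x))]

def stepwise_ratio(x):
    """Stepwise ratios sigma(k) = x(k) / x(k-1); grey models require these
    to stay within a band such as (e^(-2/(n+1)), e^(2/(n+1)))."""
    return [x[k] / x[k - 1] for k in range(1, len(x))]

def accumulated_generation(x):
    """First-order accumulated generating operation (1-AGO), which smooths
    the raw series before ratio or mean generation is applied."""
    out, s = [], 0.0
    for v in x:
        s += v
        out.append(s)
    return out
```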
基金support from the Deanship of Scientific Research,University of Hail,Saudi Arabia through the project Ref.(RG-191315).
Abstract: Software testing has been attracting a lot of attention as a means to effective software development. In a model-driven methodology, the Unified Modelling Language (UML) is a conceptual modelling notation for the obligations and other features of a system. Specialized tools interpret these models into other software artifacts such as code, test data and documentation. Test case generation permits the determination of appropriate test data that have the ability to ascertain the requirements. This paper focuses on optimizing the test data obtained from UML activity and state chart diagrams using a Basic Genetic Algorithm (BGA). For generating the test cases, both diagrams are converted into their corresponding intermediate graphical forms, namely the Activity Diagram Graph (ADG) and the State Chart Diagram Graph (SCDG). The two graphs are then joined to form a single graph known as the Activity State Chart Diagram Graph (ASCDG). Next, the ASCDG is optimized using the BGA to generate the test data. A case study involving a withdrawal from a bank's automated teller machine (ATM) demonstrates the approach, which successfully identified defects in various ATM functions such as messaging and operation.
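The graph-joining step can be illustrated with adjacency dictionaries; the node names and the bridge edge below are invented for illustration, not taken from the paper:

```python
def merge_graphs(adg, scdg, bridges=()):
    """Union two adjacency-dict graphs into one combined graph (ASCDG-style),
    optionally adding bridge edges connecting activity nodes to state nodes."""
    ascdg = {}
    for g in (adg, scdg):
        for node, succs in g.items():
            ascdg.setdefault(node, set()).update(succs)
    for u, v in bridges:
        ascdg.setdefault(u, set()).add(v)
        ascdg.setdefault(v, set())
    return ascdg

def paths(g, start, goal, seen=()):
    """Enumerate simple paths - each one is a candidate abstract test case."""
    if start == goal:
        return [[goal]]
    out = []
    for nxt in g.get(start, ()):
        if nxt not in seen:
            out += [[start] + p for p in paths(g, nxt, goal, seen + (start,))]
    return out

adg = {"a1": {"a2"}, "a2": set()}        # toy activity diagram graph
scdg = {"s1": {"s2"}, "s2": set()}       # toy state chart diagram graph
ascdg = merge_graphs(adg, scdg, bridges=[("a2", "s1")])
```

The enumerated paths through the combined graph are what a GA-based optimizer would then prioritize when selecting test data.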
Abstract: The key generation algorithm of AES is introduced, and the weaknesses of its key schedule design are investigated. Based on the requirements for keys, a new design idea is put forward and developed into a strategy that can be used to improve the AES key generation algorithm. Analysis shows that this improvement can enhance the safety of the original algorithm without reducing its efficiency.
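To make the key-schedule recurrence concrete, here is a plain-Python sketch of the standard FIPS-197 AES-128 key expansion (the algorithm whose weaknesses the paper analyses); the S-box is derived algebraically rather than hard-coded:

```python
def gf_mul(a, b):
    """Multiplication in GF(2^8) with the AES polynomial x^8+x^4+x^3+x+1."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B
        b >>= 1
    return p

def build_sbox():
    """AES S-box: multiplicative inverse in GF(2^8) followed by the affine map."""
    inv = {0: 0}
    for x in range(1, 256):
        for y in range(1, 256):
            if gf_mul(x, y) == 1:
                inv[x] = y
                break
    def affine(b):
        r = 0x63
        for i in range(5):                       # b ^ rotl(b,1..4) ^ 0x63
            r ^= ((b << i) | (b >> (8 - i))) & 0xFF
        return r
    return [affine(inv[x]) for x in range(256)]

SBOX = build_sbox()
RCON = [0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1B, 0x36]

def expand_key(key16):
    """FIPS-197 key expansion for AES-128: the 4 key words grow into 44
    round-key words via w[i] = w[i-4] XOR f(w[i-1])."""
    w = [list(key16[4 * i:4 * i + 4]) for i in range(4)]
    for i in range(4, 44):
        t = list(w[i - 1])
        if i % 4 == 0:
            t = t[1:] + t[:1]                    # RotWord
            t = [SBOX[b] for b in t]             # SubWord
            t[0] ^= RCON[i // 4 - 1]             # round constant
        w.append([a ^ b for a, b in zip(w[i - 4], t)])
    return w
```

Each round key is derived entirely from the one before it, a structural regularity that key-schedule redesigns of the kind the paper proposes typically target.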
Fund: Supported by the National Key R&D Program of China under Grant No. 2020YFB0905900, and in part by the State Grid Corporation of China project “Research on inter-provincial price coupling mechanism of national unified electricity spot market”.
Abstract: For the implementation of the power market in China, medium- and long-term security checks are essential for bilateral transactions, whose target is the electricity quantity that constitutes the generation feasible region (GFR). However, uncertainties from load forecasting errors and transmission contingencies threaten medium- and long-term electricity trading through their influence on the GFR. In this paper, we present a graphic distortion pattern in a typical three-generator system using the Monte Carlo method and projection theory, based on security-constrained economic dispatch. The underlying risk to the GFR from uncertainties is clearly visualized, and its impact characteristics are discussed. A case study of detailed GFR distortion demonstrates the effectiveness of this visualization model. The results imply that a small uncertainty can distort the GFR to a remarkable extent and that different line contingencies precipitate disparate GFR distortion patterns, placing great emphasis on load forecasting and line reliability in electricity transactions.
Abstract: Because photovoltaic power generation is affected by radiation intensity, wind speed, cloud cover and ambient temperature, its output fluctuates from moment to moment. The operational characteristics of grid-connected PV systems fit the application conditions of grey theory. A grey-theory model is applied to the short-term forecast of a grid-connected photovoltaic system, and a small-error-probability verification model is used to check the accuracy of the grey forecast results. The calculated results show that the model accuracy is greatly enhanced.
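The grey model conventionally used for such short-term forecasts is GM(1,1). A self-contained sketch, illustrative rather than the paper's exact implementation, is:

```python
import math

def gm11(x0, steps=1):
    """GM(1,1) grey forecast: fit dx1/dt + a*x1 = b on the accumulated series,
    then restore predictions by differencing. Plain least squares, no libraries."""
    n = len(x0)
    x1, s = [], 0.0
    for v in x0:                                 # 1-AGO accumulation
        s += v
        x1.append(s)
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]   # adjacent means
    y = x0[1:]
    # least squares for x0(k) = -a*z(k) + b, 2x2 normal equations solved by hand
    m = len(z)
    szz = sum(v * v for v in z); sz = sum(z)
    sy = sum(y); szy = sum(u * v for u, v in zip(z, y))
    det = szz * m - sz * sz
    a = -(szy * m - sz * sy) / det               # development coefficient
    b = (szz * sy - sz * szy) / det              # grey input
    def x1_hat(k):                               # time-response function, k = 1..n+steps
        return (x0[0] - b / a) * math.exp(-a * (k - 1)) + b / a
    fitted = [x0[0]] + [x1_hat(k) - x1_hat(k - 1) for k in range(2, n + steps + 1)]
    return fitted[n:], a, b                      # forecasts beyond the sample

# demo on a smooth near-exponential series (synthetic, not PV measurements)
forecast, a, b = gm11([100.0 * 1.05 ** k for k in range(6)], steps=2)
```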
Fund: This work was supported by the Humanities and Social Science Youth Fund of the Ministry of Education of China (19YJCZH254), the Innovation-Driven Plan Project of Hunan University of Technology and Business in 2020, and the Scientific Research Fund of Hunan Provincial Education Department (19B315); it was also funded by the Researchers Supporting Project No. (RSP-2021/102), King Saud University, Riyadh, Saudi Arabia.
Abstract: Many Internet of Things application scenarios are characterized by limited hardware resources and a limited energy supply, which makes them unsuitable for traditional security technology. Security technology based on physical mechanisms has therefore attracted extensive attention, and improving the key generation rate has always been one of its urgent problems. In this paper, superlattice technology is introduced into the security field of the Internet of Things, and a high-speed symmetric key generation scheme based on superlattices is proposed. To ensure the efficiency and privacy of data transmission, we also combine the superlattice symmetric key with compressive sensing technology to build a lightweight data transmission scheme that supports data compression and data encryption at the same time. Theoretical analysis and experimental evaluation show that the proposed scheme is superior to the most closely related work.
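One way to combine a shared physical-layer key with compressive sensing can be sketched as follows, under the assumption that both ends derive the same bit string from the superlattice devices; the key value, dimensions, and signal below are all invented for illustration:

```python
import random

def measurement_matrix(key, m, n):
    """Derive an m-by-n Bernoulli +/-1 sensing matrix from a shared key.
    Both ends seed the same PRNG with the shared (superlattice-style) key,
    so the matrix itself never travels over the channel."""
    rng = random.Random(key)
    return [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(m)]

def sense(phi, x):
    """y = Phi @ x: compression (m < n) and keyed scrambling in one step."""
    return [sum(r * v for r, v in zip(row, x)) for row in phi]

shared_key = b"superlattice-derived-secret"   # placeholder for the physical key
x = [0.0] * 32
x[3], x[17] = 5.0, -2.0                       # sparse sensor reading
phi = measurement_matrix(shared_key, 12, 32)
y = sense(phi, x)                             # 12 numbers sent instead of 32
```

Recovering x from y requires both the key (to rebuild Phi) and a sparse-recovery solver, which is what lets one operation serve as compression and encryption simultaneously.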
Abstract: Due to the development of technology in medicine, millions of health-related data items, such as scanned images, are generated. Storing and handling this massive volume of data is a great challenge. Healthcare data are stored in cloud-fog storage environments. This cloud-fog based health model allows users to obtain health-related data from different sources, and duplicated information is also present in the background; it therefore requires additional storage area, increases data acquisition time, and makes data replication in the environment insecure. This paper proposes eliminating duplicate data using a window-size chunking algorithm with a biased-sampling-based Bloom filter, and securing the health data using the Advanced Signature-Based Encryption (ASE) algorithm in the fog-cloud environment (WCA-BF+ASE). WCA-BF+ASE eliminates duplicate copies of the data and minimizes storage space and maintenance cost, while the data are stored efficiently and in a highly secure manner. In terms of security level in the cloud storage environment, the Window Size Chunking Algorithm (WSCA) achieved 86.5%, Two Thresholds Two Divisors (TTTD) 80%, Ordinal in Python (ORD) 84.4%, and the Bloom Filter (BF) 82%, while the proposed work achieved better security storage of 97%. Moreover, after the de-duplication process, the proposed WCA-BF+ASE method required less storage space across various file sizes: only 10 KB for 200 MB, 22 KB for 400 MB, 35 KB for 600 MB, 38 KB for 800 MB, and 40 KB for 1000 MB.
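A minimal sketch of chunk-level de-duplication screened by a Bloom filter follows; it is illustrative only, since the paper's WCA-BF uses window-size chunking with biased sampling, which is not reproduced here:

```python
import hashlib

class BloomFilter:
    """Tiny Bloom filter: k hash probes into a bit array. False positives are
    possible, false negatives are not - fine as a fast 'seen before?' pre-check."""
    def __init__(self, bits=8192, k=3):
        self.bits, self.k, self.array = bits, k, bytearray(bits // 8)
    def _probes(self, data):
        for i in range(self.k):
            h = hashlib.sha256(i.to_bytes(1, "big") + data).digest()
            yield int.from_bytes(h[:4], "big") % self.bits
    def add(self, data):
        for p in self._probes(data):
            self.array[p // 8] |= 1 << (p % 8)
    def __contains__(self, data):
        return all(self.array[p // 8] & (1 << (p % 8)) for p in self._probes(data))

def dedup(blob, window=64):
    """Fixed-window chunking plus Bloom-filter screening: only chunks whose
    fingerprint is genuinely new are stored."""
    bf, store = BloomFilter(), {}
    for i in range(0, len(blob), window):
        chunk = blob[i:i + window]
        fp = hashlib.sha256(chunk).digest()
        if fp in bf and fp in store:   # Bloom hit, confirmed against the real index
            continue
        bf.add(fp)
        store[fp] = chunk
    return store
```

The cheap Bloom-filter pre-check is what keeps lookup cost low as the chunk index grows, which is the point of pairing it with chunking in a storage-constrained fog node.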
Abstract: Discovering floating waste, especially bottles on water, is a crucial research problem in environmental hygiene. Nevertheless, real-world applications often face challenges such as interference from irrelevant objects and the high cost of data collection. Consequently, devising algorithms capable of accurately localizing specific objects in a scene when annotated data is limited remains a formidable challenge. To solve this problem, this paper proposes an object-discovery-by-request problem setting and a corresponding algorithmic framework. The problem setting aims to identify specified objects in scenes, and the framework comprises pseudo-data generation and an object-discovery-by-request network. Pseudo-data generation produces images resembling natural scenes through various data augmentation rules, using a small number of object samples and scene images. The network uses a pre-trained Vision Transformer (ViT) model as the backbone, employs object-centric methods to learn latent representations of foreground objects, and applies patch-level reconstruction constraints to the model. During the validation phase, we use the generated pseudo datasets as training sets and evaluate the model's performance on the original test sets. Experiments show that our method achieves state-of-the-art performance on the Unmanned Aerial Vehicles-Bottle Detection (UAV-BD) dataset and the self-constructed Bottle dataset, especially in multi-object scenarios.
Abstract: The difficulty of collecting bumblebee data and the laborious nature of annotating it often result in a lack of training data, which impairs the effectiveness of deep-learning-based counting methods. Given that current data augmentation methods struggle to produce detailed background information in generated bumblebee images, this paper proposes a joint multi-scale convolutional neural network and multi-channel attention based generative adversarial network (MMGAN). MMGAN generates a bumblebee image in accordance with a corresponding density map marking the bumblebee positions. Specifically, the multi-scale convolutional neural network (CNN) module uses multiple convolution kernels to fully extract features of different scales from the input bumblebee image and density map. To generate the various targets in the output image, the multi-channel attention module builds numerous intermediate generation layers and attention maps; these targets are then stacked to produce a bumblebee image with a specific number of bumblebees. The proposed model achieves the best performance on bumblebee image generation tasks, and the generated images considerably improve the effectiveness of deep-learning-based counting methods in bumblebee counting applications.