Smart farming has become a strategic approach to sustainable agriculture management and monitoring, with infrastructure that exploits modern technologies including big data, the cloud, and the Internet of Things (IoT). Many researchers have tried to integrate IoT-based smart farming with cloud platforms effectively. They define various frameworks for smart farming and monitoring systems but still lack effective data management schemes. Since IoT-cloud systems involve massive volumes of structured and unstructured data, data optimization comes into the picture. Hence, this research designs an Information-Centric IoT-based Smart Farming with Dynamic Data Optimization (ICISF-DDO) scheme, which enhances the performance of the smart farming infrastructure with minimal energy consumption and improved lifetime. A conceptual framework of the proposed scheme and a statistical design model are well defined. Information storage and management with DDO is expanded individually to show the effective use of membership parameters in data optimization. The simulation outcomes state that the proposed ICISF-DDO surpasses existing smart farming systems with a data optimization ratio of 97.71%, a reliability ratio of 98.63%, a coverage ratio of 99.67%, a least sensor error rate of 8.96%, and an energy consumption ratio of 4.84%.
This article presents a new scheme for dynamic data optimization in IoT (Internet of Things)-assisted sensor networks. The various components of an IoT-assisted cloud platform are discussed. In addition, a new architecture for IoT-assisted sensor networks is presented, and a model for data optimization in such networks is proposed. A novel Membership-inducing Dynamic Data Optimization (MIDDO) algorithm for IoT-assisted sensor networks is proposed in this research. The algorithm considers the data of every node and utilizes a membership function for optimized data allocation. The proposed framework is compared with two-stage optimization, dynamic stochastic optimization, and sparsity-inducing optimization, and is evaluated in terms of reliability ratio, coverage ratio, and sensing error. Data optimization is performed based on the availability of cloud resources, sensor energy, data flow volume, and the centroid of each state. It was inferred that the proposed MIDDO algorithm achieves an average performance ratio of 76.55%, a reliability ratio of 94.74%, a coverage ratio of 85.75%, and a sensing error of 0.154.
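The abstract describes membership-driven data allocation only at a high level. As a rough illustration of the idea, the sketch below scores each node reading with a Gaussian-style membership function against a state centroid and keeps only high-membership data. The function shape, the 0.5 threshold, and all parameter names are assumptions for illustration, not details from the paper.

```python
import math

def membership(value, centroid, spread):
    # Gaussian-style membership: readings near the state centroid get a
    # weight close to 1, distant readings decay toward 0.
    return math.exp(-((value - centroid) ** 2) / (2 * spread ** 2))

def allocate(readings, centroid, spread=10.0):
    # Keep only readings whose membership clears an (assumed) 0.5 cutoff,
    # mimicking membership-driven selection of data worth uploading.
    return [v for v in readings if membership(v, centroid, spread) >= 0.5]

# Sensor readings; 40.0 is far from the centroid 22.0 and is dropped.
print(allocate([18.0, 21.0, 25.0, 40.0], centroid=22.0))  # -> [18.0, 21.0, 25.0]
```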
Over the past decade, Graphics Processing Units (GPUs) have revolutionized high-performance computing, playing pivotal roles in advancing fields like IoT, autonomous vehicles, and exascale computing. Despite these adv...Over the past decade, Graphics Processing Units (GPUs) have revolutionized high-performance computing, playing pivotal roles in advancing fields like IoT, autonomous vehicles, and exascale computing. Despite these advancements, efficiently programming GPUs remains a daunting challenge, often relying on trial-and-error optimization methods. This paper introduces an optimization technique for CUDA programs through a novel Data Layout strategy, aimed at restructuring memory data arrangement to significantly enhance data access locality. Focusing on the dynamic programming algorithm for chained matrix multiplication—a critical operation across various domains including artificial intelligence (AI), high-performance computing (HPC), and the Internet of Things (IoT)—this technique facilitates more localized access. We specifically illustrate the importance of efficient matrix multiplication in these areas, underscoring the technique’s broader applicability and its potential to address some of the most pressing computational challenges in GPU-accelerated applications. Our findings reveal a remarkable reduction in memory consumption and a substantial 50% decrease in execution time for CUDA programs utilizing this technique, thereby setting a new benchmark for optimization in GPU computing.展开更多
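For readers unfamiliar with the workload being optimized: the chained matrix multiplication problem is solved by the classic O(n^3) dynamic program below, whose triangular cost table is exactly the data structure whose memory layout the paper restructures for locality on the GPU. This Python version only shows the baseline algorithm, not the paper's CUDA layout.

```python
def matrix_chain_cost(dims):
    # Minimum scalar multiplications to evaluate a matrix chain, where
    # matrix i has shape dims[i-1] x dims[i].  The DP fills the table
    # diagonal by diagonal (by chain length), which is why data layout
    # matters so much for cache/memory locality on GPUs.
    n = len(dims) - 1
    cost = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):              # current chain length
        for i in range(n - length + 1):
            j = i + length - 1
            cost[i][j] = min(
                cost[i][k] + cost[k + 1][j] + dims[i] * dims[k + 1] * dims[j + 1]
                for k in range(i, j)
            )
    return cost[0][n - 1]

# (AB)C on shapes 10x30, 30x5, 5x60 costs 1500 + 3000 = 4500 multiplications.
print(matrix_chain_cost([10, 30, 5, 60]))  # -> 4500
```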
The main aim of this work is to improve the security of data hiding for secret image sharing. The privacy and security of digital information have become a primary concern nowadays due to the enormous usage of digital technology. The security and privacy of users' images are ensured through reversible data hiding techniques. Existing data hiding techniques did not provide optimum performance with multiple end nodes. These issues are solved by using the Separable Data Hiding and Adaptive Particle Swarm Optimization (SDHAPSO) algorithm to attain optimal performance. Image encryption, data embedding, and data extraction/image recovery are the main phases of the proposed approach. The DFT is used to extract the transform coefficient matrix from the original image. DFT coefficients are in float format, which assists in transforming the image to integral format using the round function. After the data-hider obtains the encrypted image, additional data embedding is formulated into the high-frequency coefficients. The proposed SDHAPSO is mainly utilized for performance improvement through optimal pixel location selection within the image for secret bit concealment. In addition, secret data embedding capacity enhancement is focused on maintaining image visual quality. It is observed from the simulation results that the proposed SDHAPSO technique offers high-level security outcomes with respect to higher PSNR, security level, lower MSE, and higher correlation than existing techniques. Hence, enhanced protection of sensitive information is attained, which improves the overall system performance.
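The "DFT coefficients in float format, rounded to integral format" step can be seen in miniature with NumPy: rounding the forward-transform coefficients introduces only a small, bounded reconstruction error after the inverse transform. This is a generic illustration of that float-to-integer step, not the paper's embedding pipeline.

```python
import numpy as np

# Toy 4x4 "image" block.
block = np.arange(16, dtype=float).reshape(4, 4)

# Forward 2-D DFT yields complex/float coefficients...
coeff = np.fft.fft2(block)

# ...which the abstract describes rounding to an integral form.
coeff_int = np.round(coeff)

# Inverting the rounded coefficients recovers the block up to a small
# round-off error (each coefficient moved by at most 0.5 per component).
recovered = np.fft.ifft2(coeff_int).real
print(np.max(np.abs(recovered - block)) < 1.0)  # -> True
```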
In this paper a critical assessment and optimization of the phase diagrams and thermodynamic properties of the PrCl_3-MCl (M=Li, Na) and PrCl_3-MCl_2 (M=Mg, Ca, Sr, Ba) binary systems have been performed. The assessed and optimized binary phase diagrams and self-consistent thermodynamic data provide a better basis for constructing multicomponent phase diagrams.
Big data are regarded as a tremendous technology for processing a huge variety of data in a short time and with a large storage capacity. Users' access over the internet creates massive data processing workloads. Big data require an intelligent feature selection model that can address huge varieties of data, whereas traditional feature selection techniques are only applicable to simple data mining; intelligent techniques are needed in big data processing and machine learning for efficient classification. Major feature selection algorithms read the input features as they are, then preprocess and classify them without considering relatedness between features, so a less optimal solution is achieved. In our proposed research, we focus on feature selection using a supervised learning technique called grey wolf optimization (GWO) with decomposed random differential grouping (DrnDG-GWO). First, features are decomposed into subsets based on the relatedness of variables, with random differential grouping performed using the fitness value of two variables. Every subset is then regarded as a population in the GWO technique. The combination of supervised machine learning with swarm intelligence techniques produces the best feature optimization results in this research. Once the features are optimized, we classify them using an advanced kNN process for accurate data classification. The results of DrnDG-GWO are compared with those of the standard GWO and GWO with PSO for feature selection to assess the efficiency of the proposed algorithm. The accuracy and time complexity of the proposed algorithm are 98% and 5 s, which are better than those of the existing techniques.
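The "random differential grouping using a fitness value of two variables" step rests on the standard differential-grouping interaction test: two variables belong in the same subset if perturbing one changes the effect of perturbing the other. The sketch below shows that test in its simplest form; the perturbation size, threshold, and example function are assumptions for illustration, not values from the paper.

```python
def interacts(f, x, i, j, delta=1.0, eps=1e-9):
    # Differential-grouping test: variables i and j interact if the change
    # in f caused by perturbing x[i] depends on the value of x[j].
    def perturb(vec, k, d):
        out = list(vec)
        out[k] += d
        return out

    d1 = f(perturb(x, i, delta)) - f(x)          # effect of moving x[i]
    x2 = perturb(x, j, delta)                    # ...after also moving x[j]
    d2 = f(perturb(x2, i, delta)) - f(x2)
    return abs(d1 - d2) > eps

# x0 is separable from x1; x1 and x2 interact through the product term.
f = lambda v: v[0] ** 2 + v[1] * v[2]
print(interacts(f, [1.0, 1.0, 1.0], 0, 1))  # -> False
print(interacts(f, [1.0, 1.0, 1.0], 1, 2))  # -> True
```

Variables flagged as interacting end up in the same subset, and each subset is then optimized as its own GWO population.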
Cloud computing has become increasingly popular due to its capacity to perform computations without relying on physical infrastructure, thereby revolutionizing computer processes. However, the rising energy consumption in cloud centers poses a significant challenge, especially with escalating energy costs. This paper tackles this issue by introducing efficient solutions for data placement and node management, with a clear emphasis on the crucial role of the Internet of Things (IoT) throughout the research process. The IoT assumes a pivotal role in this study by actively collecting real-time data from various sensors strategically positioned in and around data centers. These sensors continuously monitor vital parameters such as energy usage and temperature, thereby providing a comprehensive dataset for analysis. The data generated by the IoT are seamlessly integrated into the hybrid TCN-GRU-NBeat (NGT) model, enabling a dynamic and accurate representation of the current state of the data center environment. Through the incorporation of the Seagull Optimization Algorithm (SOA), the NGT model optimizes storage migration strategies based on the latest information provided by IoT sensors. The model is trained using 80% of the available dataset and subsequently tested on the remaining 20%. The results demonstrate the effectiveness of the proposed approach, with a Mean Squared Error (MSE) of 5.33% and a Mean Absolute Error (MAE) of 2.83%, accurately estimating power prices and leading to an average reduction of 23.88% in power costs. Furthermore, the integration of IoT data significantly enhances the accuracy of the NGT model, outperforming benchmark algorithms such as DenseNet, Support Vector Machine (SVM), Decision Trees, and AlexNet. The NGT model achieves an impressive accuracy rate of 97.9%, surpassing the rates of 87%, 83%, 80%, and 79%, respectively, for the benchmark algorithms. These findings underscore the effectiveness of the proposed method in optimizing energy efficiency and enhancing the predictive capabilities of cloud computing systems. The IoT plays a critical role in driving these advancements by providing real-time insights into the operational aspects of data centers.
A decision model of knowledge transfer is presented on the basis of the characteristics of knowledge transfer in a big data environment. This model can determine the weight of knowledge transferred from another enterprise or from a big data provider. Numerous simulation experiments are implemented to test the efficiency of the optimization model. Simulation results show that when the weight of knowledge from the big data provider increases, the total discounted expectation of profits increases and the transfer cost is reduced. The calculated results are in accordance with the actual economic situation. The optimization model can provide useful decision support for enterprises in a big data environment.
Quality traceability plays an essential role in assembling and welding offshore platform blocks. Improving the welding quality traceability system is conducive to improving the durability of offshore platforms and the process level of the offshore industry. Currently, quality management remains in the era of primary information, and there is a lack of effective tracking and recording of welding quality data. When welding defects are encountered, it is difficult to rapidly and accurately determine the root cause of the problem from complex and scattered quality data. In this paper, a composite welding quality traceability model for the offshore platform block construction process is proposed; it contains a quality early-warning method based on long short-term memory and a quality data backtracking query optimization algorithm. By training the early-warning model and implementing the query optimization algorithm, the quality traceability model can assist enterprises in rapidly identifying and locating quality problems. Furthermore, the model and the quality traceability algorithm are checked by cases under actual working conditions. Verification analyses suggest that the proposed early-warning model for welding quality and the algorithm for optimizing backtracking requests are effective and can be applied to the actual construction process.
Software testing has been attracting a lot of attention for effective software development. In the model-driven approach, the Unified Modelling Language (UML) is a conceptual modelling approach for obligations and other features of the system. Specialized tools interpret these models into other software artifacts such as code, test data, and documentation. The generation of test cases permits appropriate test data to be determined that have the aptitude to ascertain the requirements. This paper focuses on optimizing the test data obtained from UML activity and state chart diagrams by using a Basic Genetic Algorithm (BGA). For generating the test cases, both diagrams were converted into their corresponding intermediate graphical forms, namely the Activity Diagram Graph (ADG) and the State Chart Diagram Graph (SCDG). Both graphs were then joined to create a single graph known as the Activity State Chart Diagram Graph (ASCDG). Next, the ASCDG was optimized using the BGA to generate the test data. A case study involving a withdrawal from the automated teller machine (ATM) of a bank was employed to demonstrate the approach. The approach successfully identified defects in various ATM functions such as messaging and operation.
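A basic genetic algorithm for test data, in skeleton form, repeatedly selects fit candidates, recombines them, and mutates a few. The sketch below evolves an integer input toward a hypothetical branch boundary (x == 42); the fitness shape, crossover, mutation rate, and population sizes are all illustrative assumptions, not the paper's configuration.

```python
import random

random.seed(0)  # deterministic run for the example

def fitness(x):
    # Hypothetical coverage-oriented objective: test data closer to the
    # (assumed) branch boundary x == 42 scores higher.
    return -abs(x - 42)

def basic_ga(pop_size=20, generations=60, lo=0, hi=100):
    pop = [random.randint(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) // 2                  # crossover: midpoint
            if random.random() < 0.2:             # mutation
                child = min(hi, max(lo, child + random.randint(-5, 5)))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = basic_ga()
print(best)  # converges to (or very near) 42
```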
The cooling process of iron ore pellets in a circular cooler has great impacts on pellet quality and systematic energy exploitation. However, the multiple variables and non-visualization of this gray system are unfavorable to efficient production. Thus, the cooling process of iron ore pellets was optimized using a mathematical model and data mining techniques. The mathematical model was established and validated by steady-state production data, and the results show that the calculated values coincide very well with the measured values. Based on the proposed model, the effects of important process parameters on gas-pellet temperature profiles within the circular cooler were analyzed to better understand the entire cooling process. Two data mining techniques, association rule induction and clustering, were also applied to the steady-state production data to obtain expert operating rules and optimized targets. Finally, an optimized control strategy for the circular cooler was proposed and an operation guidance system was developed. The system can realize the visualization of the thermal process at steady state and provide operation guidance to optimize the circular cooler.
An effective solution method for fractional ordinary and partial differential equations is proposed in the present paper. The standard Adomian Decomposition Method (ADM) is modified by introducing a functional term involving both a variable and a parameter. A residual approach is then adopted to identify the optimal value of the embedded parameter within the frame of the L^(2) norm. Numerical experiments on sample problems from the open literature prove that the presented algorithm is quite accurate, more advantageous than the traditional ADM, and straightforward to implement for the fractional ordinary and partial differential equations that are the recent focus of mathematical models. Better performance of the method is further evidenced against some commonly used numerical techniques.
In order to simultaneously reduce the weight of vehicles and the injury to occupants in a crash event, it is necessary to perform a multi-objective optimal design of automotive energy-absorbing components. A modified non-dominated sorting genetic algorithm II (NSGA-II) was used for multi-objective optimization of an automotive S-rail, considering absorbed energy (E), peak crushing force (Fmax), and mass of the structure (W) as three conflicting objective functions. In the multi-objective optimization problem (MOP), E and Fmax are defined by polynomial models extracted using the software GEvoM, based on train and test data obtained from numerical simulation of quasi-static crushing of the S-rail using ABAQUS. Finally, the nearest-to-ideal-point (NIP) method and the technique for ordering preferences by similarity to ideal solution (TOPSIS) are used to find trade-off optimum design points from all non-dominated optimum design points represented by the Pareto fronts. Results show that the optimum design point obtained from the TOPSIS method exhibits a better trade-off than that obtained from the NIP method.
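The TOPSIS step used to pick a trade-off point from the Pareto front can be sketched compactly: normalize the decision matrix, weight it, and rank alternatives by closeness to the ideal and distance from the worst point. The example data and equal weights below are illustrative assumptions, not the paper's S-rail results.

```python
import math

def topsis(matrix, weights, benefit):
    # matrix: alternatives x criteria; benefit[j] is True if larger is better.
    m, n = len(matrix), len(matrix[0])
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]

    def dist(row, ref):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(row, ref)))

    # Relative closeness: nearer the ideal and farther from the worst is better.
    scores = [dist(v[i], worst) / (dist(v[i], ideal) + dist(v[i], worst))
              for i in range(m)]
    return scores.index(max(scores))

# Three hypothetical designs scored on (E: benefit, Fmax: cost, W: cost).
designs = [[30.0, 80.0, 5.0], [28.0, 60.0, 4.5], [35.0, 95.0, 6.0]]
print(topsis(designs, [1 / 3] * 3, [True, False, False]))  # -> 1
```

Design 1 wins here because its much lower peak force and mass outweigh its slightly lower absorbed energy, which is exactly the kind of trade-off the method is meant to surface.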
This paper aims to increase the diagnostic accuracy of fault classification in power transformers by introducing a new off-line hybrid model based on a combination subset method (C-set), a modified fuzzy C-means algorithm (MFCM), and an optimizable multiclass SVM (MCSVM). The innovation in this paper lies in solving the problems of outliers, boundary proportion, and unequal data that exist in both traditional and intelligent models. Taking into consideration the closeness of dissolved gas analysis (DGA) data, the C-set method is implemented to partition the DGA data samples into unrepeated subsets based on their fault type. Then, the MFCM is used to remove outliers from the DGA samples by combining highly similar data for every subset within the same cluster, obtaining the optimized training data (OTD) set. It is also used to reduce the dimensionality of DGA samples and the uncertainty of transformer condition monitoring. After that, the optimized MCSVM is trained using the OTD. The proposed model's diagnostic accuracy is 93.3%. The obtained results indicate that our model significantly improves fault identification accuracy in power transformers when compared with other conventional and intelligent models.
Seismic illumination plays an important role in subsurface imaging. A better image can be expected either through optimizing acquisition geometry or by introducing more advanced seismic migration and/or tomographic inversion methods involving illumination compensation. The vertical cable survey is a potential replacement for the traditional marine seismic survey owing to its flexibility and data quality. Conventional vertical cable data processing requires separation of primaries and multiples before migration. We propose to use multi-scale full waveform inversion (FWI) to improve the illumination coverage of vertical cable surveys. A deep-water velocity model is built to test the capability of multi-scale FWI in detecting low-velocity anomalies below the seabed. Synthetic results show that multi-scale FWI is an effective model building tool in deep-water exploration. Geometry optimization through target-oriented illumination analysis and multi-scale FWI may help to mitigate the risks of vertical cable surveys. The combination of multi-scale FWI, low-frequency data, and a multi-vertical-cable acquisition system may provide both high-resolution and high-fidelity subsurface models.
Wireless big data describes the wide range of massive data that are generated, collected, and stored in wireless networks by wireless devices and users. While these data share some common properties with traditional big data, they have their own unique characteristics and provide numerous advantages for academic research and practical applications. This article reviews recent advances and trends in the field of wireless big data. Due to space constraints, this survey is not intended to cover all aspects of the field, but focuses on data-aided transmission, data-driven network optimization, and novel applications. It is expected that the survey will help readers to better understand this exciting and emerging research field. Moreover, open issues and promising future directions are also identified.
The accurate prediction of vehicle speed plays an important role in a vehicle's real-time energy management and online optimization control. However, current forecast methods mostly predict speed from traffic conditions alone, ignoring the impact of the driver-vehicle-road system on the actual speed profile. In this paper, the correlation of velocity and its influencing factors under various driving conditions was first analyzed based on driver-vehicle-road-traffic data records to build a more accurate prediction model. With modeling time and prediction time considered separately, the effectiveness and accuracy of several typical artificial-intelligence speed prediction algorithms were analyzed. The results show that combining the niche immune genetic algorithm-support vector machine (NIGA-SVM) prediction algorithm on city roads with the genetic algorithm-support vector machine (GA-SVM) prediction algorithm on suburban roads and on the freeway can sharply improve the accuracy and timeliness of vehicle speed forecasting. Afterwards, the optimized GA-SVM vehicle speed prediction model was established in accordance with the optimized GA-SVM prediction algorithm at different times, and the test results verified the validity and rationality of the prediction algorithm.
Structure Data Layout Optimization (SDLO) is a prevailing compiler optimization technique for improving cache efficiency, and structure transformation is a critical step in SDLO. The diversity of transformation methods and the existence of complex data types are major challenges for structure transformation. We have designed and implemented STrans, a well-defined system which provides controllable and comprehensive functionality for structure transformation. Compared to known systems, it places fewer limitations on the data types that can be transformed. In this paper we give a formal definition of the approach STrans uses to transform data types. We have also designed the Transformation Specification Language, a mini language for configuring how structures are transformed, which can be either manually tuned or generated by the compiler. STrans supports three kinds of transformation methods, i.e., splitting, peeling, and pool-splitting, and works well on different combinations of compound data types. STrans is the transformation system used in ASLOP and is well tested on all benchmarks for ASLOP.
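The cache motivation behind structure splitting can be shown with a NumPy analogy (STrans itself operates on compiled-language structs; this is only an illustration of the layout idea). In an array-of-structures layout, a hot field is interleaved with cold fields, so scanning it drags the cold bytes through the cache; after splitting, each field lives in its own dense array. The field names and sizes below are made up for the example.

```python
import numpy as np

N = 1000

# Array-of-structures: hot field `pos` (8 bytes) interleaved with a cold
# 56-byte `meta` field, so a scan over `pos` touches 64 bytes per element.
aos = np.zeros(N, dtype=[("pos", "f8"), ("meta", "f8", (7,))])
aos["pos"] = np.arange(N)

# After splitting (the structure-of-arrays layout a splitting transform
# produces), each field is a contiguous standalone array; the hot-field
# scan now touches only 8 bytes per element.
pos = np.ascontiguousarray(aos["pos"])
meta = np.ascontiguousarray(aos["meta"])

# Same values, different layout.
print(pos.sum() == aos["pos"].sum())  # -> True
```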
Funding: The authors extend their appreciation to Prince Sattam bin Abdulaziz University for funding this research work through Project Number (PSAU/2023/01/27268).
Abstract: Cloud computing has become increasingly popular due to its capacity to perform computations without relying on local physical infrastructure, revolutionizing computing practice. However, rising energy consumption in cloud centers poses a significant challenge, especially with escalating energy costs. This paper tackles this issue by introducing efficient solutions for data placement and node management, with a clear emphasis on the crucial role of the Internet of Things (IoT) throughout the research. The IoT assumes a pivotal role in this study by collecting real-time data from sensors strategically positioned in and around data centers. These sensors continuously monitor vital parameters such as energy usage and temperature, providing a comprehensive dataset for analysis. The IoT-generated data are seamlessly integrated into the hybrid TCN-GRU-NBeat (NGT) model, enabling a dynamic and accurate representation of the current state of the data center environment. Through the incorporation of the Seagull Optimization Algorithm (SOA), the NGT model optimizes storage migration strategies based on the latest information provided by the IoT sensors. The model is trained on 80% of the available dataset and tested on the remaining 20%. The results demonstrate the effectiveness of the proposed approach, with a Mean Squared Error (MSE) of 5.33% and a Mean Absolute Error (MAE) of 2.83% in estimating power prices, leading to an average reduction of 23.88% in power costs. Furthermore, the integration of IoT data significantly enhances the accuracy of the NGT model, which achieves an accuracy rate of 97.9%, outperforming benchmark algorithms DenseNet (87%), Support Vector Machine (SVM) (83%), Decision Trees (80%), and AlexNet (79%). These findings underscore the effectiveness of the proposed method in optimizing energy efficiency and enhancing the predictive capabilities of cloud computing systems, with the IoT driving these advancements by providing real-time insight into the operational aspects of data centers.
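The evaluation protocol described (an 80/20 train/test split scored by MSE and MAE) is standard and easy to make concrete. The sketch below is a generic, dependency-free version of that protocol, not the paper's pipeline; the function names are illustrative.

```python
def train_test_split(data, train_frac=0.8):
    """Split a sequence into a leading training part and a trailing test part."""
    cut = int(len(data) * train_frac)
    return data[:cut], data[cut:]

def mse(y_true, y_pred):
    """Mean squared error between two equal-length sequences."""
    return sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true)

def mae(y_true, y_pred):
    """Mean absolute error between two equal-length sequences."""
    return sum(abs(a - b) for a, b in zip(y_true, y_pred)) / len(y_true)

train, test = train_test_split(list(range(10)))
```

A forecasting model would be fit on `train` and its predictions on `test` scored with `mse` and `mae`, mirroring the 5.33% / 2.83% figures reported above.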
Funding: Supported by NSFC (Grant No. 71373032), the Natural Science Foundation of Hunan Province (Grant No. 12JJ4073), the Scientific Research Fund of Hunan Provincial Education Department (Grant No. 11C0029), the Educational Economy and Financial Research Base of Hunan Province (Grant No. 13JCJA2), and the Project of China Scholarship Council for Overseas Studies (201208430233, 201508430121).
Abstract: A decision model of knowledge transfer is presented on the basis of the characteristics of knowledge transfer in a big data environment. The model determines the weight of knowledge transferred from another enterprise or from a big data provider. Numerous simulation experiments are implemented to test the efficiency of the optimization model. The results show that increasing the weight of knowledge from the big data provider raises the total discounted expected profit and reduces the transfer cost. The calculated results are in accordance with the actual economic situation, and the optimization model can provide useful decision support for enterprises in a big data environment.
Funding: Funded by the Ministry of Industry and Information Technology of the People's Republic of China [Grant No. 2018473].
Abstract: Quality traceability plays an essential role in assembling and welding offshore platform blocks. Improving the welding quality traceability system is conducive to improving the durability of offshore platforms and the process level of the offshore industry. Currently, quality management remains in the era of primary information, and there is a lack of effective tracking and recording of welding quality data; when welding defects are encountered, it is difficult to rapidly and accurately determine the root cause of the problem from complex and scattered quality data. In this paper, a composite welding quality traceability model for the offshore platform block construction process is proposed. It contains a quality early-warning method based on long short-term memory (LSTM) and a quality data backtracking query optimization algorithm. By training the early-warning model and implementing the query optimization algorithm, the quality traceability model can assist enterprises in rapidly identifying and localizing quality problems. Furthermore, the model and the quality traceability algorithm are checked against cases from actual working conditions. Verification analyses suggest that the proposed early-warning model for welding quality and the algorithm for optimizing backtracking requests are effective and can be applied to the actual construction process.
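The LSTM at the core of the early-warning method boils down to one gated recurrence. The sketch below is a single-unit, scalar LSTM step written from the standard cell equations; it is not the paper's trained model, and the weight layout (`W` mapping gate name to `(w_x, w_h, b)`) is an illustrative simplification.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    """One step of a single-unit LSTM cell.

    W maps gate name -> (input weight, recurrent weight, bias)."""
    def gate(name, squash):
        w_x, w_h, b = W[name]
        return squash(w_x * x + w_h * h_prev + b)
    f = gate("forget", sigmoid)   # how much old memory to keep
    i = gate("input", sigmoid)    # how much of the new candidate to write
    g = gate("cand", math.tanh)   # candidate memory content
    o = gate("output", sigmoid)   # how much memory to expose as output
    c = f * c_prev + i * g        # updated cell state
    h = o * math.tanh(c)          # updated hidden state
    return h, c

# With all-zero weights every sigmoid gate opens halfway and the candidate is 0,
# so the cell state simply halves each step.
W0 = {k: (0.0, 0.0, 0.0) for k in ("forget", "input", "cand", "output")}
h, c = lstm_step(0.0, 0.0, 2.0, W0)
```

In an early-warning setting, `h` would feed a classifier or threshold that flags a welding-quality anomaly.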
Funding: The authors acknowledge support from the Deanship of Scientific Research, University of Hail, Saudi Arabia, through the project Ref. (RG-191315).
Abstract: Software testing has been attracting a lot of attention for effective software development. In the model-driven approach, the Unified Modelling Language (UML) is a conceptual modelling approach for obligations and other features of the system; specialized tools interpret these models into other software artifacts such as code, test data, and documentation. The generation of test cases permits the appropriate test data to be determined that have the aptitude to ascertain the requirements. This paper focuses on optimizing the test data obtained from UML activity and state chart diagrams by using a Basic Genetic Algorithm (BGA). For generating the test cases, both diagrams are converted into their corresponding intermediate graphical forms, namely the Activity Diagram Graph (ADG) and the State Chart Diagram Graph (SCDG). Both graphs are then combined to form a single graph called the Activity State Chart Diagram Graph (ASCDG). Next, the ASCDG is optimized using the BGA to generate the test data. A case study involving a withdrawal from the automated teller machine (ATM) of a bank is employed to demonstrate the approach, which successfully identified defects in various ATM functions such as messaging and operation.
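A "basic" genetic algorithm of the kind named above has a well-known shape: selection, crossover, mutation. The sketch below implements that loop over bitstrings, with the count of set bits standing in for a coverage-based fitness over the ASCDG; it is not the paper's encoding, and all parameter values are illustrative.

```python
import random

def bga(fitness, n_bits, pop_size=20, gens=60, p_mut=0.02, seed=1):
    """Basic genetic algorithm sketch: binary tournament selection,
    one-point crossover, and bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        def pick():
            a, b = rng.sample(pop, 2)          # binary tournament
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)     # one-point crossover
            child = p1[:cut] + p2[cut:]
            for j in range(n_bits):            # bit-flip mutation
                if rng.random() < p_mut:
                    child[j] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Stand-in fitness: number of satisfied coverage targets (here simply the ones).
best = bga(sum, n_bits=16)
```

In the paper's setting, each bit would encode a test-data choice and the fitness would score how much of the combined graph the resulting test case exercises.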
Funding: Item sponsored by the National Natural Science Foundation of China (51174253).
Abstract: The cooling process of iron ore pellets in a circular cooler has great impact on pellet quality and systematic energy exploitation. However, the multiple variables and lack of visualization of this gray system are unfavorable to efficient production. Thus, the cooling process of iron ore pellets was optimized using a mathematical model and data mining techniques. A mathematical model was established and validated by steady-state production data; the calculated values coincide very well with the measured values. Based on the proposed model, the effects of important process parameters on gas-pellet temperature profiles within the circular cooler were analyzed to better understand the entire cooling process. Two data mining techniques, association rule induction and clustering, were also applied to the steady-state production data to obtain expert operating rules and optimized targets. Finally, an optimized control strategy for the circular cooler was proposed and an operation guidance system was developed. The system realizes the visualization of the thermal process at steady state and provides operation guidance to optimize the circular cooler.
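Clustering steady-state operating records, as described above, is commonly done with k-means. The sketch below is a plain, dependency-free k-means, not the paper's specific clustering setup; the 2-D points stand in for multi-variable operating records.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means sketch: alternate nearest-centroid assignment and
    centroid recomputation until the loop budget runs out."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)          # initialize from the data
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        for j, cl in enumerate(clusters):
            if cl:                             # keep old centroid if cluster empties
                centroids[j] = tuple(sum(col) / len(cl) for col in zip(*cl))
    return centroids, clusters

# Two well-separated operating regimes (illustrative data).
pts = [(1.0, 1.0), (1.2, 0.9), (8.0, 8.0), (7.8, 8.2)]
centroids, clusters = kmeans(pts, k=2)
```

Each resulting cluster would correspond to an operating regime from which expert rules and target set-points can be extracted.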
Abstract: An effective solution method for fractional ordinary and partial differential equations is proposed in the present paper. The standard Adomian Decomposition Method (ADM) is modified by introducing a functional term involving both a variable and a parameter. A residual approach is then adopted to identify the optimal value of the embedded parameter within the frame of the L^(2) norm. Numerical experiments on sample problems from the open literature prove that the presented algorithm is quite accurate, more advantageous than the traditional ADM, and straightforward to implement for the fractional ordinary and partial differential equations that are the focus of recent mathematical models. The better performance of the method is further evidenced against some commonly used numerical techniques.
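In broad strokes, a parameter-embedded ADM of the kind described can be written as follows. This is a generic sketch from the standard ADM recursion, not the paper's exact formulation; the symbols $\Phi$, $c$, and $J^{\alpha}$ are illustrative.

```latex
% Fractional ODE: D^{\alpha} u = N[u] + g, ADM series u = \sum_{n \ge 0} u_n,
% nonlinearity expanded in Adomian polynomials A_n.
\begin{aligned}
u_0(t)     &= u(0) + J^{\alpha}\!\left[g\right](t) + \Phi(t;\,c)
             && \text{(zeroth term modified by the parameterized functional term)}\\
u_{n+1}(t) &= J^{\alpha}\!\left[A_n\right](t), \qquad n \ge 0,\\
c^{*}      &= \arg\min_{c}\,
             \bigl\| D^{\alpha}\tilde u_M - N[\tilde u_M] - g \bigr\|_{L^2},
             \qquad \tilde u_M = \sum_{n=0}^{M} u_n,
\end{aligned}
```

where $J^{\alpha}$ denotes the fractional integral operator and $c^{*}$ is the residual-minimizing value of the embedded parameter over the truncated series $\tilde u_M$.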
Abstract: In order to simultaneously reduce the weight of vehicles and the injury to occupants in a crash event, it is necessary to perform a multi-objective optimal design of automotive energy-absorbing components. A modified non-dominated sorting genetic algorithm II (NSGA-II) was used for multi-objective optimization of an automotive S-rail, considering absorbed energy (E), peak crushing force (Fmax), and mass of the structure (W) as three conflicting objective functions. In the multi-objective optimization problem (MOP), E and Fmax are defined by polynomial models extracted using the software GEvo M, based on train and test data obtained from numerical simulation of quasi-static crushing of the S-rail using ABAQUS. Finally, the nearest-to-ideal-point (NIP) method and the technique for ordering preferences by similarity to ideal solution (TOPSIS) are used to find trade-off optimum design points among all non-dominated optimum design points represented by the Pareto fronts. The results show that the optimum design point obtained from the TOPSIS method exhibits a better trade-off than that obtained from the NIP method.
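The non-dominated sorting at the heart of NSGA-II rests on the Pareto dominance test. The sketch below extracts a Pareto front by that test, with all objectives cast as minimization (absorbed energy is negated so that "more energy" becomes "smaller value"); the design points are illustrative, not from the paper.

```python
def dominates(a, b):
    """a dominates b when a is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Keep every point not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Illustrative designs as (W mass, Fmax peak force, -E absorbed energy).
designs = [(1.0, 5.0, -9.0), (1.2, 4.0, -9.0), (1.5, 5.5, -8.0)]
front = pareto_front(designs)
```

NSGA-II layers this test into ranked fronts with crowding-distance tie-breaking; NIP and TOPSIS then pick single trade-off points from the resulting front.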
基金supported by the National Natural Science Foundation of China under grant Ui966209Natural Science Foundation of Shandong Province under grant ZR2020ME196.
Abstract: This paper aims to increase the diagnostic accuracy of fault classification in power transformers by introducing a new off-line hybrid model based on a combination subset method (C-set), a modified fuzzy C-means algorithm (MFCM), and an optimizable multiclass SVM (MCSVM). The innovation of this paper lies in solving the problems of outliers, boundary proportion, and unequal data that exist in both traditional and intelligent models. Taking into consideration the closeness of dissolved gas analysis (DGA) data, the C-set method is implemented to subset the DGA data samples based on their fault type within non-repeated subsets. Then, the MFCM is used to remove outliers from the DGA samples by combining highly similar data for every subset within the same cluster, yielding the optimized training data (OTD) set; it also reduces the dimensionality of the DGA samples and the uncertainty of transformer condition monitoring. After that, the optimized MCSVM is trained using the OTD set. The diagnostic accuracy of the proposed model is 93.3%. The obtained results indicate that our model significantly improves fault identification accuracy in power transformers compared with other conventional and intelligent models.
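The fuzzy C-means family used above is distinguished from hard clustering by its membership update, which assigns each sample a degree of belonging to every cluster. The sketch below implements the standard FCM membership formula only, not the paper's modified variant (the modification handling outliers is omitted); the fuzzifier `m=2` is the usual default.

```python
def fcm_memberships(points, centroids, m=2.0):
    """Standard fuzzy c-means membership update:
    u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1)), rows summing to 1."""
    def dist(p, c):
        return sum((a - b) ** 2 for a, b in zip(p, c)) ** 0.5
    U = []
    for p in points:
        ds = [max(dist(p, c), 1e-12) for c in centroids]   # guard zero distance
        row = [1.0 / sum((d_j / d_k) ** (2.0 / (m - 1)) for d_k in ds)
               for d_j in ds]
        U.append(row)
    return U

# A point equidistant from two centroids belongs half to each.
U = fcm_memberships([(0.0, 0.0)], [(1.0, 0.0), (-1.0, 0.0)])
```

Low maximum membership across all clusters is one common signal that a DGA sample is an outlier or a boundary case.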
Funding: The authors acknowledge financial support from the National Natural Science Foundation of China (Nos. 41304109 and 41230318) and the Fundamental Research Funds for the Central Universities, China University of Geosciences (Wuhan) (Nos. CUG130103 and CUG110803).
Abstract: Seismic illumination plays an important role in subsurface imaging. A better image can be expected either through optimizing the acquisition geometry or by introducing more advanced seismic migration and/or tomographic inversion methods involving illumination compensation. The vertical cable survey is a potential replacement for the traditional marine seismic survey owing to its flexibility and data quality. Conventional vertical cable data processing requires separation of primaries and multiples before migration. We propose using multi-scale full waveform inversion (FWI) to improve the illumination coverage of vertical cable surveys. A deep-water velocity model is built to test the capability of multi-scale FWI in detecting low-velocity anomalies below the seabed. Synthetic results show that multi-scale FWI is an effective model-building tool in deep-water exploration. Geometry optimization through target-oriented illumination analysis and multi-scale FWI may help to mitigate the risks of a vertical cable survey. The combination of multi-scale FWI, low-frequency data, and a multi-vertical-cable acquisition system may provide both high-resolution and high-fidelity subsurface models.
Funding: This research work is supported in part by the U.S. OASD (R&E) (Office of the Assistant Secretary of Defense for Research and Engineering) (No. FA8750-15-2-0119) and by the U.S. Army Research Office (No. W911NF-16-1-0496). The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Office of the Assistant Secretary of Defense for Research and Engineering (OASD (R&E)), the Army Research Office, or the U.S. Government. Sihai Zhang received support from the Key Program of the National Natural Science Foundation of China (No. 61631018), the Fundamental Research Funds for the Central Universities, and Huawei Technology Innovative Research on Wireless Big Data.
Abstract: Wireless big data describes the wide range of massive data that is generated, collected, and stored in wireless networks by wireless devices and users. While these data share some common properties with traditional big data, they have their own unique characteristics and provide numerous advantages for academic research and practical applications. This article reviews recent advances and trends in the field of wireless big data. Due to space constraints, the survey is not intended to cover all aspects of this field, but focuses on data-aided transmission, data-driven network optimization, and novel applications. It is expected that the survey will help readers understand this exciting and emerging research field better. Moreover, open issues and promising future directions are also identified.
Funding: Supported by the Nanjing University of Aeronautics and Astronautics Research Funding (Grant No. NS2015028).
Abstract: Accurate prediction of vehicle speed plays an important role in a vehicle's real-time energy management and online optimization control. However, current forecasting methods mostly predict speed from traffic conditions alone, ignoring the impact of the driver-vehicle-road system on the actual speed profile. In this paper, the correlation between velocity and its influencing factors under various driving conditions was first analyzed based on driver-vehicle-road-traffic data records to obtain a more accurate prediction model. With the modeling time and prediction time considered separately, the effectiveness and accuracy of several typical artificial-intelligence speed prediction algorithms were analyzed. The results show that combining the niche immune genetic algorithm-support vector machine (NIGA-SVM) prediction algorithm on city roads with the genetic algorithm-support vector machine (GA-SVM) prediction algorithm on suburban roads and on the freeway can sharply improve the accuracy and timeliness of vehicle speed forecasting. The optimized GA-SVM vehicle speed prediction model was then established in accordance with the optimized GA-SVM prediction algorithm at different times, and the test results verified the validity and rationality of the prediction algorithm.
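The prediction task itself, forecasting the next speed sample from recent history, can be made concrete with a much simpler regressor than the paper's SVM. The sketch below fits a one-lag least-squares predictor as a dependency-free stand-in for the SVM regression; the data and function names are illustrative.

```python
def fit_ar1(speeds):
    """Fit v[t+1] ~ a * v[t] + b by ordinary least squares.

    A deliberately simple stand-in for the GA-SVM regressor: same
    input/output shape (recent speed in, next speed out), simpler model."""
    x = speeds[:-1]          # current speeds
    y = speeds[1:]           # next-step speeds
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    a = cov / var
    b = my - a * mx
    return a, b

# Illustrative profile: steady acceleration of 2 km/h per sample.
speeds = [30.0, 32.0, 34.0, 36.0, 38.0]
a, b = fit_ar1(speeds)
next_speed = a * speeds[-1] + b
```

A kernel regressor such as an SVM replaces the linear map with a nonlinear one, which is what lets the paper's models capture driver-vehicle-road effects beyond a simple trend.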
Funding: Supported by the National Natural Science Foundation of China (No. 61133006) and the National High-Tech Research and Development (863) Program of China (No. 2012AA010901).
Abstract: Structure Data Layout Optimization (SDLO) is a prevailing compiler optimization technique for improving cache efficiency, and structure transformation is a critical step for SDLO. The diversity of transformation methods and the existence of complex data types are major challenges for structure transformation. We have designed and implemented STrans, a well-defined system which provides controllable and comprehensive functionality for structure transformation; compared with known systems, it places fewer limitations on the data types that can be transformed. In this paper we give a formal definition of the approach STrans uses to transform data types. We have also designed the Transformation Specification Language, a mini language for configuring how structures are transformed, which can be either manually tuned or generated by the compiler. STrans supports three transformation methods, i.e., splitting, peeling, and pool-splitting, and works well on different combinations of compound data types. STrans is the transformation system used in ASLOP and is well tested on all benchmarks for ASLOP.
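The splitting idea can be illustrated at the data level: separating frequently accessed ("hot") fields from rarely accessed ("cold") ones so hot data packs densely into cache lines. The sketch below shows the transformation on plain records; it is a conceptual illustration, not STrans (which performs the equivalent rewriting on C-level struct layouts at compile time), and the field names are illustrative.

```python
def split_structure(records, hot_fields, cold_fields):
    """Structure-splitting sketch: turn an array-of-structs into two
    struct-of-arrays layouts, one for hot fields and one for cold fields."""
    hot = {f: [r[f] for r in records] for f in hot_fields}
    cold = {f: [r[f] for r in records] for f in cold_fields}
    return hot, cold

# Illustrative records: x/y are touched in every loop iteration, meta rarely.
recs = [{"x": 1, "y": 2, "meta": "a"}, {"x": 3, "y": 4, "meta": "b"}]
hot, cold = split_structure(recs, ["x", "y"], ["meta"])
```

After the split, a loop that only reads `x` and `y` streams through the hot arrays without dragging `meta` bytes through the cache, which is the locality win that makes splitting (and its peeling and pool-splitting cousins) profitable.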