The new measurement scheme of IP performance metrics is for the mobile network in a heterogeneous wireless network environment. In the proposed scheme, when Mobile Nodes (MNs) inside the mobile network need to understand the condition of multiple communication paths outside the mobile network, they can get IP performance metrics, such as delay, jitter, bandwidth, and packet loss, irrespective of the presence or absence of measurement functionality. At the same time, the proposed scheme does not require the MN to be involved in measuring IP performance metrics. The Multihomed Mobile Router (MMR) with heterogeneous wireless interfaces measures IP performance metrics on behalf of the MNs inside the mobile network. Then, MNs can get the measured IP performance metrics from the MMR using L3 messages. The proposed scheme can reduce the burden and power consumption of MNs with limited resources and battery power, since MNs do not measure IP performance metrics directly. In addition, it can considerably reduce traffic overhead over wireless links on multiple measurement paths, since signaling messages and injected testing traffic are reduced.
Cross entropy is a measure in machine learning and deep learning that assesses the difference between predicted and actual probability distributions. In this study, we propose cross entropy as a performance evaluation metric for image classifier models and apply it to the CT image classification of lung cancer. A convolutional neural network is employed as the deep neural network (DNN) image classifier, with the residual network (ResNet) 50 chosen as the DNN architecture. The image data used comprise a lung CT image set. Two classification models are built from datasets with varying amounts of data, and lung cancer is categorized into four classes using 10-fold cross-validation. Furthermore, we employ t-distributed stochastic neighbor embedding to visually explain the data distribution after classification. Experimental results demonstrate that cross entropy is a highly useful metric for evaluating the reliability of image classifier models. For a more comprehensive evaluation of model performance, however, combining it with other evaluation metrics is considered essential.
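The cross-entropy measure itself can be sketched directly. This is a minimal illustration of the metric, not the study's ResNet-50 pipeline; the four-class probabilities below are invented for the example:

```python
import math

def cross_entropy(p_true, p_pred, eps=1e-12):
    """Categorical cross entropy between a true and a predicted distribution."""
    return -sum(t * math.log(max(q, eps)) for t, q in zip(p_true, p_pred))

# One-hot truth for class 2 of four classes: a confident correct prediction...
confident = cross_entropy([0, 0, 1, 0], [0.05, 0.05, 0.85, 0.05])
# ...and an uncertain one; the higher loss flags a less reliable classifier.
uncertain = cross_entropy([0, 0, 1, 0], [0.25, 0.25, 0.30, 0.20])
```

Lower cross entropy indicates predictions that concentrate probability on the correct class, which is why it can serve as a reliability metric beyond plain accuracy.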
Zinc–bromine rechargeable batteries (ZBRBs) are one of the most powerful candidates for next-generation energy storage due to their potentially lower material cost, deep discharge capability, non-flammable electrolytes, relatively long lifetime and good reversibility. However, many opportunities remain to improve the efficiency and stability of these batteries for long-life operation. Here, we discuss the device configurations, working mechanisms and performance evaluation of ZBRBs. Both non-flow (static) and flow-type cells are highlighted in detail in this review. The fundamental electrochemical aspects, including the key challenges and promising solutions, are discussed, with particular attention paid to zinc and bromine half-cells, as their performance plays a critical role in determining the electrochemical performance of the battery system. The following sections examine the key performance metrics of ZBRBs and assessment methods using various ex situ and in situ/operando techniques. The review concludes with insights into future developments and prospects for high-performance ZBRBs.
The purpose of this article is to provide environmental assessments of the combustion of different types of coal fuel depending on the preparation (unscreened, size-graded, briquetted and heat-treated) in automated boilers and boilers with manual loading. The assessments were made on the basis of data obtained from experimental methods of coal preparation and calculated methods of determining the amount of pollutant and greenhouse gas emissions, as well as the mass of ash and slag waste. The main pollutants from coal combustion are calculated: particulate matter, benz(a)pyrene, nitrogen oxides, sulfur dioxide and carbon monoxide. Of the greenhouse gases, carbon dioxide is calculated. The research shows that the simplest preliminary preparation (size grading) of coal significantly improves combustion efficiency and environmental performance: emissions are reduced by 13% for hard coal and by up to 20% for brown coal. The introduction of automated boilers with heat-treated coal in small boiler facilities makes it possible to reduce emissions and ash and slag waste by 2-3 times. The best environmental indicators correspond to heat-treated lignite, which is characterized by the absence of sulfur dioxide emissions.
Performance metrics and models are prerequisites for scientific understanding and optimization. This paper introduces a new footprint-based theory and reviews the research of the past four decades leading to the new theory. The review groups the past work into metrics and their models, in particular those of reuse distance, metrics conversion, models of shared cache, performance and optimization, and other related techniques.
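Reuse distance, one of the metrics the review covers, counts the distinct data elements touched between two consecutive accesses to the same datum. A naive sketch follows (production tools use tree structures for efficiency; the access trace here is invented):

```python
def reuse_distances(trace):
    """Reuse distance of each access: number of distinct elements touched
    since the previous access to the same datum; None marks a first access
    (conventionally an infinite distance)."""
    last_seen = {}
    dists = []
    for i, x in enumerate(trace):
        if x in last_seen:
            # Distinct elements strictly between the two accesses to x.
            dists.append(len(set(trace[last_seen[x] + 1:i])))
        else:
            dists.append(None)
        last_seen[x] = i
    return dists

# 'a' at index 3 saw {b, c} in between; 'b' at index 4 saw {c, a}.
dists = reuse_distances(list("abcab"))
```

Under LRU, an access hits in a cache of size C exactly when its reuse distance is less than C, which is what makes the metric central to locality theory.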
Video streaming applications have grown considerably in recent years. As a result, video streaming has become one of the most significant contributors to global internet traffic. According to recent studies, the telecommunications industry loses millions of dollars due to poor video Quality of Experience (QoE) for users. Among the standard proposals for standardizing the quality of video streaming over internet service providers (ISPs) is the Mean Opinion Score (MOS). However, determining QoE accurately by MOS is subjective and laborious, and it varies from user to user. A fully automated data analytics framework is required to reduce the inter-operator variability characteristic of QoE assessment. This work addresses this concern by suggesting a novel hybrid XGBStackQoE analytical model using a two-level layering technique. Level one combines multiple Machine Learning (ML) models via a level-one hybrid XGBStackQoE model; the individual ML models at level one are trained using the entire training data set. The level-two hybrid XGBStackQoE model is fitted using the outputs (meta-features) of the level-one ML models. The proposed model outperformed the conventional models, with an accuracy improvement of 4 to 5 percent. The proposed framework could significantly improve video QoE accuracy.
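The two-level layering idea (level-one models emit meta-features that a level-two model combines) can be sketched generically. The toy base and meta models below are stand-ins, not the paper's actual XGBStackQoE components:

```python
def level_one_predict(models, x):
    """Meta-features: each level-one base model's prediction for input x."""
    return [m(x) for m in models]

def stack_predict(models, meta_model, x):
    """Two-level stacking: the level-two model combines the meta-features
    produced by the level-one models into a final prediction."""
    return meta_model(level_one_predict(models, x))

# Toy base models standing in for the trained level-one ML models.
base = [lambda x: 2 * x, lambda x: x + 3, lambda x: x * x]

# Toy level-two model: a fixed weighted sum of the meta-features
# (in practice these weights would themselves be fitted on held-out data).
meta = lambda feats: 0.5 * feats[0] + 0.3 * feats[1] + 0.2 * feats[2]

pred = stack_predict(base, meta, 4.0)
```

The key design point of stacking is that the level-two model is fitted on base-model outputs, letting it learn which base model to trust in which regime.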
Forecasting river flow is crucial for the optimal planning, management, and sustainability of freshwater resources. Many machine learning (ML) approaches have been developed to improve streamflow prediction. Hybrid techniques have been viewed as a viable method for enhancing the accuracy of univariate streamflow estimation compared to standalone approaches. Recent researchers have also emphasised using hybrid models to improve forecast accuracy. Accordingly, this paper conducts an updated literature review of applications of hybrid models in estimating streamflow over the last five years, summarising data preprocessing, univariate machine learning modelling strategies, advantages and disadvantages of standalone ML techniques, hybrid models, and performance metrics. This study focuses on two types of hybrid models: parameter-optimisation-based hybrid models (OBH) and the hybridisation of parameter-optimisation-based and preprocessing-based hybrid models (HOPH). Overall, this research supports the idea that meta-heuristic approaches precisely improve ML techniques. It is also one of the first efforts to comprehensively examine the efficiency of various meta-heuristic approaches (classified into four primary classes) hybridised with ML techniques. This study revealed that previous research applied swarm, evolutionary, physics, and hybrid metaheuristics in 77%, 61%, 12%, and 12% of cases, respectively. Finally, there is still room to improve OBH and HOPH models by examining different data preprocessing techniques and metaheuristic algorithms.
We apply reweighted inversion focusing to full tensor gravity gradiometry data using message-passing interface (MPI) and compute unified device architecture (CUDA) parallel computing algorithms, and then combine MPI with CUDA to formulate a hybrid algorithm. Parallel computing performance metrics are introduced to analyze and compare the performance of the algorithms, and we summarize rules for the performance evaluation of parallel algorithms. We use model data and real data from the Vinton salt dome to test the algorithms. We find a good match between model and real density data, and verify the high efficiency and feasibility of the parallel computing algorithms in the inversion of full tensor gravity gradiometry data.
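Standard parallel performance metrics of the kind used to compare such MPI and CUDA runs include speedup and parallel efficiency. A minimal sketch with illustrative timings (not figures from the paper):

```python
def speedup(t_serial, t_parallel):
    """Speedup: serial wall time divided by parallel wall time."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_procs):
    """Parallel efficiency: speedup normalised by the processor count;
    1.0 would mean perfect linear scaling."""
    return speedup(t_serial, t_parallel) / n_procs

# An inversion that takes 120 s serially and 20 s on 8 processes.
s = speedup(120.0, 20.0)
e = efficiency(120.0, 20.0, 8)
```

Reporting efficiency alongside speedup exposes diminishing returns: adding processors can raise speedup while efficiency falls due to communication overhead.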
In order to attain better communications performance rather than just expanding coverage and saving system cost, criteria related to communications quality and capacity are extracted and revised to build an integrated performance metric system that aims to effectively guide satellite communications constellation design. These performance metrics, together with the system cost, serve as the multiple objectives, whilst the coverage requirement is regarded as the basic constraint in the optimization of the constellation configuration design, applying a revised NSGA-II algorithm. The Pareto hypervolumes lead to the best configuration schemes, which achieve better integrated system performance compared with conventional design results based merely on coverage and cost.
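For intuition, the Pareto hypervolume used to rank configurations can be computed exactly in two dimensions. This sketch assumes a minimisation problem with a mutually non-dominated front; the points and reference are invented:

```python
def hypervolume_2d(points, ref):
    """Area dominated by a 2-D Pareto front (minimisation) with respect to
    a reference point; assumes the points are mutually non-dominated."""
    area = 0.0
    prev_y = ref[1]
    for x, y in sorted(points):  # ascending x implies descending y on a front
        area += (ref[0] - x) * (prev_y - y)
        prev_y = y
    return area

front = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)]
hv = hypervolume_2d(front, ref=(4.0, 4.0))
```

A larger hypervolume means the front pushes further toward the ideal point in all objectives at once, which is why it is a common basis for choosing among candidate configurations.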
Fog computing brings computational services near the network edge to meet the latency constraints of cyber-physical system (CPS) applications. Edge devices have limited computational capacity and energy availability, which hampers end-user performance. We designed a novel performance measurement index to gauge a device's resource capacity. This examination addresses offloading mechanism issues, where the end user (EU) offloads a part of its workload to a nearby edge server (ES). Sometimes, the ES further offloads the workload to another ES or a cloud server to achieve reliable performance because of limited resources (such as storage and computation). The manuscript aims to reduce the service offloading rate by selecting a suitable device or server, so as to accomplish a low average latency and service completion time and to meet the deadline constraints of sub-divided services. In this regard, an adaptive online status predictive model is significant for predicting the resource requirements of arriving services in order to make offloading decisions. Consequently, the development of a reinforcement learning-based flexible x-scheduling (RFXS) approach resolves the service offloading issues, where x = service/resource, producing low latency and high network performance. The theoretical bound and computational complexity of our approach are derived by formulating the system efficiency. A quadratic restraint mechanism is employed to formulate the service optimization issue according to a set of measurements, as well as the behavioural association rate and adulation factor. Our system managed an average service offloading rate of 0.89%, with a delay of 39 ms over complex scenarios (using three servers with a 50% service arrival rate). The simulation outcomes confirm that the proposed scheme attained low offloading uncertainty and is suitable for simulating heterogeneous CPS frameworks.
This paper proposes a cryptographic technique for images based on the Sudoku solution. Sudoku is a number puzzle that requires applying defined protocols and filling the empty boxes with numbers. Given a small set of numbers as input, solving the Sudoku puzzle yields an expanded, much larger set of numbers, which can be used as a key for the encryption/decryption of images. In this way, the given small set of numbers can be stored as the prime key, which means the key is compact. A prime key clue in a Sudoku puzzle always leads to only one solution, which means the key is always stable. This feature is the basis of the paper, where the Sudoku puzzle output is innovatively introduced into image cryptography. The Sudoku solution is expanded to an image of any size using a sequence of expansion techniques that involve filling of the number matrix, linear X-Y rotational shifting, and reverse shifting based on a standard zig-zag pattern. The crypto key for an image dictates the positions to which the image pixels are shuffled. Shuffling is performed at two levels, namely the pixel and sub-pixel (RGB) levels, with the latter giving more effective encryption. The technique falls under the image scrambling method with partial diffusion. Performance metrics are impressive: a histogram deviation of 0.997, a correlation coefficient of 10^-2, and an NPCR of 99.98%. Hence, it is evident that image cryptography with Sudoku in place is more resistant to plaintext and differential attacks.
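The NPCR figure quoted above measures the share of pixel positions that change between two cipher images. A minimal sketch, using toy pixel lists rather than real cipher output:

```python
def npcr(img_a, img_b):
    """Number of Pixel Change Rate: percentage of positions whose pixel
    values differ between two equally sized cipher images."""
    assert len(img_a) == len(img_b), "images must have the same size"
    changed = sum(1 for a, b in zip(img_a, img_b) if a != b)
    return 100.0 * changed / len(img_a)

# Two toy 8-pixel "cipher images" that differ at 6 of 8 positions.
c1 = [12, 200, 34, 90, 7, 255, 0, 81]
c2 = [12, 100, 35, 91, 8, 255, 1, 80]
rate = npcr(c1, c2)
```

An NPCR near 100%, as reported in the paper, indicates that a one-pixel change in the plaintext flips almost every cipher pixel, the hallmark of resistance to differential attacks.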
Cryptocurrency price prediction has garnered significant attention due to the growing importance of digital assets in the financial landscape. This paper presents a comprehensive study on predicting future cryptocurrency prices using machine learning algorithms. Open-source historical data from various cryptocurrency exchanges is utilized. Interpolation techniques are employed to handle missing data, ensuring the completeness and reliability of the dataset. Four technical indicators are selected as features for prediction. The study explores the application of five machine learning algorithms to capture the complex patterns in the highly volatile cryptocurrency market. The findings demonstrate the strengths and limitations of the different approaches, highlighting the significance of feature engineering and algorithm selection in achieving accurate cryptocurrency price predictions. The research contributes valuable insights into the dynamic and rapidly evolving field of cryptocurrency price prediction, assisting investors and traders in making informed decisions amidst the challenges posed by the cryptocurrency market.
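The abstract does not name its four technical indicators, so as a purely illustrative stand-in, here is how one common indicator, the simple moving average, would be derived from closing prices as a model feature:

```python
def sma(prices, window):
    """Simple moving average over a fixed window: one common technical
    indicator (only an illustrative choice; the paper's four indicators
    are not specified in the abstract)."""
    return [sum(prices[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(prices))]

# Toy closing prices; each SMA value smooths the last three closes.
closes = [100.0, 102.0, 101.0, 105.0, 110.0]
feat = sma(closes, window=3)
```

Indicator features like this transform a raw price series into smoothed, lag-aware inputs that ML models can learn from more readily than noisy raw prices.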
Existing systems use key performance indicators (KPIs) as metrics for physical layer (PHY) optimization, which suffers from the problem of over-optimization, because some unnecessary PHY enhancements are imperceptible to terminal users and thus induce additional cost and energy waste. Therefore, it is necessary to use the quality of experience (QoE) of the user directly as an optimization metric, which can achieve the global optimum of QoE under cost and energy constraints. However, QoE is still an application-layer metric that cannot easily be used to design and optimize the PHY. To address this problem, in this paper we propose a novel end-to-end QoE (E2E-QoE) based optimization architecture at the user side for the first time. Specifically, a cross-layer parameterized model is proposed to establish the relationship between the PHY and E2E-QoE. Based on this, an E2E-QoE oriented PHY anomaly diagnosis method is further designed to locate the time and root cause of anomalies. Finally, we investigate optimizing the PHY algorithm directly based on the E2E-QoE. The proposed frameworks and algorithms are all validated using data from a real fifth-generation (5G) mobile system, which shows that using E2E-QoE as the metric for PHY optimization is feasible and can outperform existing schemes.
This paper presents a novel general method for computing optimal motions of an industrial robot manipulator (AdeptOne XL robot) in the presence of fixed and oscillating obstacles. The optimization model considers the nonlinear manipulator dynamics, actuator constraints, joint limits, and obstacle avoidance. The problem has 6 objective functions, 88 variables, and 21 constraints. Two evolutionary algorithms, namely the elitist non-dominated sorting genetic algorithm (NSGA-II) and multi-objective differential evolution (MODE), have been used for the optimization. Two methods (normalized weighting of objective functions and average fitness factor) are used to select the best solution trade-offs. Two multi-objective performance measures, namely the solution spread measure and the ratio of non-dominated individuals, are used to evaluate the Pareto optimal fronts. Two further multi-objective performance measures, namely optimizer overhead and algorithm effort, are used to find the computational effort of the optimization algorithm. The trajectories are defined by B-spline functions. The results obtained from NSGA-II and MODE are compared and analyzed.
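One of the Pareto-front quality measures mentioned, the ratio of non-dominated individuals, can be sketched for a minimisation problem (the two-objective population values below are invented):

```python
def dominates(a, b):
    """a dominates b (minimisation): no worse in every objective and
    strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def nondominated_ratio(population):
    """Fraction of individuals that no other individual dominates:
    one simple quality measure for an evolved population."""
    nd = [p for p in population
          if not any(dominates(q, p) for q in population if q is not p)]
    return len(nd) / len(population)

# (3.0, 3.0) is dominated by (2.0, 2.0); the other three are non-dominated.
pop = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0)]
ratio = nondominated_ratio(pop)
```

A ratio near 1.0 indicates the population has converged onto an approximate Pareto front, which is why the measure is paired with a spread measure to also check diversity.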
Greenroads (www.greenroads.us) is a performance metric for sustainable practices associated with the design and construction of roads. It assigns points for approved sustainable choices/practices and can be used to assess roadway project sustainability based on total points. Such a metric can (1) provide a quantitative means of sustainability assessment, (2) allow informed sustainability decisions, (3) provide baseline sustainability standards, and (4) stimulate improvement and innovation in integrated roadway sustainability. This paper describes Greenroads version 1.0, which consists of 11 requirements and 37 voluntary practices that can be used as a project-level sustainability performance metric. Development efforts and a Washington State Department of Transportation (WSDOT) case study suggest that (1) existing project data can serve as the data source for performance assessment, (2) some requirements and voluntary actions need refinement, (3) projects need to treat sustainability in a holistic manner to meet a reasonable sustainability performance standard, (4) the financial impact of Greenroads use must be studied, and (5) several pilot projects are needed. The Greenroads sustainability performance metric can be a viable means of project-level sustainability performance assessment and decision support.
The rapid growth of interconnected high-performance workstations has produced a new computing paradigm called cluster-of-workstations computing. In these systems the load balance problem is a serious impediment to achieving good performance. The main concern of this paper is the implementation of a dynamic load balancing algorithm, asynchronous Round Robin (ARR), for balancing the workload of a parallel tree-computation depth-first-search algorithm on a Cluster of Heterogeneous Workstations (COW). Many algorithms in artificial intelligence and other areas of computer science are based on depth-first search in implicitly defined trees. For these algorithms a load-balancing scheme is required that can evenly distribute parts of an irregularly shaped tree over the workstations with minimal interprocessor communication and without prior knowledge of the tree's shape. The ARR algorithm needs only minimal interprocessor communication, when necessary, and it runs under MPI (Message Passing Interface), which allows parallel execution on a heterogeneous SUN cluster-of-workstations platform. The program code is written in the C language and executed under the UNIX operating system (Solaris).
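The static core of round-robin distribution can be sketched as below; note this deliberately omits the asynchronous, demand-driven rebalancing that distinguishes the paper's ARR scheme, and the worker names are invented:

```python
from itertools import cycle

def round_robin_assign(tasks, workers):
    """Deal tasks cyclically to workers: the static flavour of round-robin.
    ARR additionally rebalances at run time when a workstation runs dry."""
    load = {w: [] for w in workers}
    for task, w in zip(tasks, cycle(workers)):
        load[w].append(task)
    return load

# Seven search subtrees dealt across three (hypothetical) workstations.
assignment = round_robin_assign(list(range(7)), ["ws1", "ws2", "ws3"])
```

Static dealing is communication-free but blind to heterogeneity; the asynchronous variant trades a little communication for keeping faster workstations busy.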
Monitoring a computing cluster requires collecting and understanding log data generated at the core, computer, and cluster levels at run time. Visualizing the log data of a computing cluster is a challenging problem due to the complexity of the underlying dataset: it is streaming, hierarchical, heterogeneous, and multi-sourced. This paper presents an integrated visualization system that employs a two-stage streaming process model. Prior to the visual display of the multi-sourced information, the data generated from the clusters is gathered, cleaned, and modeled within a data processor. The visualization, supported by a visual computing processor, consists of a set of multivariate and time-variant visualization techniques, including time sequence charts, treemaps, and parallel coordinates. Novel techniques to illustrate time tendencies and abnormal status are also introduced. We demonstrate the effectiveness and scalability of the proposed system framework on a commodity cloud-computing platform.
Identifying codes have been widely used in man-machine verification to maintain network security. The challenge in man-machine verification is the correct classification of man and machine tracks. In this study, we propose a random forest (RF) model for man-machine verification based on a mouse movement trajectory dataset. We also compare the RF model with baseline models (logistic regression and support vector machine) using performance metrics such as precision, recall, false positive rate, false negative rate, F-measure, and weighted accuracy. The performance metrics of the RF model exceed those of the baseline models.
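The metrics listed above all derive from the 2x2 confusion matrix of the verification task. A minimal sketch with invented counts (not the study's results):

```python
def classification_metrics(tp, fp, fn, tn):
    """Precision, recall, false-positive rate and F-measure from a 2x2
    confusion matrix (e.g. human vs. machine mouse tracks)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # equivalently, 1 - false-negative rate
    fpr = fp / (fp + tn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, fpr, f1

# Hypothetical counts: 90 humans correctly accepted, 10 bots let through,
# 10 humans rejected, 90 bots correctly blocked.
p, r, fpr, f1 = classification_metrics(tp=90, fp=10, fn=10, tn=90)
```

Reporting the false positive and false negative rates separately matters here because the two errors have different costs: blocking a human versus admitting a bot.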
Purpose - A neural network (NN)-based deep learning (DL) approach is considered for sentiment analysis (SA) by incorporating a convolutional neural network (CNN), bi-directional long short-term memory (Bi-LSTM) and attention methods. Unlike conventional supervised machine learning natural language processing algorithms, the authors have used unsupervised deep learning algorithms. Design/methodology/approach - The method presented for sentiment analysis is designed using CNN, Bi-LSTM and the attention mechanism. Word2vec word embedding is used for natural language processing (NLP). The discussed approach is designed for sentence-level SA and consists of one embedding layer, two convolutional layers with max-pooling, one LSTM layer and two fully connected (FC) layers. The overall system training time is 30 min. Findings - The method's performance is analyzed using metrics like precision, recall, F1 score, and accuracy. The CNN helps reduce complexity and the Bi-LSTM helps process long input text sequences. Originality/value - The attention mechanism is adopted to decide the significance of every hidden state and give a weighted sum of all the features fed as input.
Purpose - Diabetic retinopathy (DR) is a central cause of blindness all over the world. DR is difficult to diagnose in its early stages, and the detection procedure can be time-consuming even for qualified experts. Nowadays, intelligent disease detection techniques are widely accepted for progress analysis and recognition of various diseases. Therefore, a computer-aided diagnosis scheme based on intelligent learning approaches is proposed for diagnosing DR effectively using a benchmark dataset. Design/methodology/approach - The proposed DR diagnostic procedure involves four main steps: (1) image pre-processing, (2) blood vessel segmentation, (3) feature extraction, and (4) classification. Initially, the retinal fundus image is pre-processed with the help of Contrast Limited Adaptive Histogram Equalization (CLAHE) and an average filter. In the next step, blood vessel segmentation is carried out using a segmentation process with optimized gray-level thresholding. Once the blood vessels are extracted, feature extraction is performed using the Local Binary Pattern (LBP), Texture Energy Measurement (TEM, based on Laws' texture energy), and two entropy computations, Shannon's entropy and Kapur's entropy. These collected features are fed to a classifier, a Neural Network (NN) with an optimized training algorithm. Both the gray-level thresholding and the NN are enhanced by the Modified Levy Updated-Dragonfly Algorithm (MLU-DA), which operates to maximize the segmentation accuracy and to reduce the error difference between the predicted and actual outcomes of the NN. Finally, this classification error demonstrates the efficiency of the proposed DR detection model. Findings - The overall accuracy of the proposed MLU-DA was 16.6% superior to conventional classifiers, and the precision of the developed MLU-DA was 22% better than LM-NN and 16.6% better than PSO-NN, GWO-NN, and DA-NN. Finally, it is concluded that the implemented MLU-DA outperformed state-of-the-art algorithms in detecting DR. Originality/value - This paper adopts the latest optimization algorithm, MLU-DA, with a Neural Network and optimal gray-level thresholding for detecting diabetic retinopathy. This is the first work to utilize an MLU-DA-based Neural Network for computer-aided diabetic retinopathy diagnosis.
文摘The new measurement scheme of IP performance metrics is for the mobile network in heterogeneous wireless network environment. In the proposed scheme, when Mobile Nodes (MNs) inside the mobile network needs to understand the condition of multiple comrmunicatinn paths outside the mobile netwtrk, they can get IP performance metrics, such as delay, jitter, bandwidth, packet loss, etc., irrespective of the preserre or absence of measurement functionality. At the same time, the proposed scheme dees not require the MN to he involved in measuring IP performance metrice. The Multihomed Mobile Router (MMR) with heterogeneons wireless interfaces measures IP performance metrics on behalf of the MNs inside the mobile network. Then, MNs can get measured IP perfonmnce metries from the MMR using L3 messages. The proposed scheme can reduce burden and power consumption of MNs with limited resource and batty power since MNs don' t measure IP performance metrics directly. In addition, it can reduce considerably traffic overhead over wireless links on multiple measurement paths since signaling messages and injeeted testing traffic are reduced.
文摘Cross entropy is a measure in machine learning and deep learning that assesses the difference between predicted and actual probability distributions. In this study, we propose cross entropy as a performance evaluation metric for image classifier models and apply it to the CT image classification of lung cancer. A convolutional neural network is employed as the deep neural network (DNN) image classifier, with the residual network (ResNet) 50 chosen as the DNN archi-tecture. The image data used comprise a lung CT image set. Two classification models are built from datasets with varying amounts of data, and lung cancer is categorized into four classes using 10-fold cross-validation. Furthermore, we employ t-distributed stochastic neighbor embedding to visually explain the data distribution after classification. Experimental results demonstrate that cross en-tropy is a highly useful metric for evaluating the reliability of image classifier models. It is noted that for a more comprehensive evaluation of model perfor-mance, combining with other evaluation metrics is considered essential. .
基金flnancial support from Australian Research Council through its Discovery,Future Fellowship ProgramsImam Mohammad Ibn Saud Islamic University (IMSIU) in Riyadh,Saudi Arabia,for flnancial support of this work.
文摘Zinc–bromine rechargeable batteries(ZBRBs)are one of the most powerful candidates for next-generation energy storage due to their potentially lower material cost,deep discharge capability,non-flammable electrolytes,relatively long lifetime and good reversibility.However,many opportunities remain to improve the efficiency and stability of these batteries for long-life operation.Here,we discuss the device configurations,working mechanisms and performance evaluation of ZBRBs.Both non-flow(static)and flow-type cells are highlighted in detail in this review.The fundamental electrochemical aspects,including the key challenges and promising solutions,are discussed,with particular attention paid to zinc and bromine half-cells,as their performance plays a critical role in determining the electrochemical performance of the battery system.The following sections examine the key performance metrics of ZBRBs and assessment methods using various ex situ and in situ/operando techniques.The review concludes with insights into future developments and prospects for high-performance ZBRBs.
基金The research was carried out under State Assignment Projects(FWEU-2021-0004,FWEU-2021-0005)of the Fundamental Research Program of Russian Federation 2021-2030.
Abstract: The purpose of this article is to provide environmental assessments of the combustion of different types of coal fuel depending on the preparation (unscreened, size-graded, briquetted and heat-treated) in automated boilers and boilers with manual loading. The assessments were made on the basis of data obtained from experimental methods of coal preparation and calculated methods of determining the amount of pollutant and greenhouse gas emissions, as well as the mass of ash and slag waste. The main pollutants from coal combustion are calculated: particulate matter, benz(a)pyrene, nitrogen oxides, sulfur dioxide and carbon monoxide. Of the greenhouse gases, carbon dioxide is calculated. The conducted research shows that the simplest preliminary preparation (size-grading) of coal significantly improves combustion efficiency and environmental performance: emissions are reduced by 13% for hard coal and up to 20% for brown coal. The introduction of automated boilers with heat-treated coal in small boiler facilities allows emissions and ash and slag waste to be reduced by 2-3 times. The best environmental indicators correspond to heat-treated lignite, which is characterized by the absence of sulfur dioxide emissions.
Funding: Partially supported by the National Natural Science Foundation of China (NSFC) under Grant No. 61232008; the NSFC Joint Research Fund for Overseas Chinese Scholars and Scholars in Hong Kong and Macao under Grant No. 61328201; the National Science Foundation of USA under Contract Nos. CNS-1319617, CCF-1116104 and CCF-0963759; an IBM CAS Faculty Fellowship; and a research grant from Huawei.
Abstract: Performance metrics and models are prerequisites for scientific understanding and optimization. This paper introduces a new footprint-based theory and reviews the research in the past four decades leading to the new theory. The review groups the past work into metrics and their models, in particular those of the reuse distance, metrics conversion, models of shared cache, performance and optimization, and other related techniques.
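For orientation, the reuse distance mentioned in this abstract is, under one common convention, the number of distinct elements referenced between two consecutive accesses to the same element. A minimal quadratic-time Python sketch (names and the `None` convention for first accesses are my choices, not the paper's):

```python
def reuse_distances(trace):
    """LRU reuse distance for each access in a reference trace.

    The reuse distance of an access is the number of distinct elements
    referenced since the previous access to the same element; a first
    access has no finite distance and is reported as None here.
    """
    last_pos = {}  # element -> index of its most recent access
    dists = []
    for i, x in enumerate(trace):
        if x in last_pos:
            # distinct elements touched strictly between the two accesses
            dists.append(len(set(trace[last_pos[x] + 1 : i])))
        else:
            dists.append(None)
        last_pos[x] = i
    return dists
```

For the trace a, b, c, a the second access to a has reuse distance 2 (b and c were touched in between); production tools compute this with tree-based structures rather than the O(n^2) scan used here.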
Abstract: Video streaming applications have grown considerably in recent years. As a result, video streaming has become one of the most significant contributors to global internet traffic. According to recent studies, the telecommunications industry loses millions of dollars due to poor video Quality of Experience (QoE) for users. Among the standard proposals for standardizing the quality of video streaming over internet service providers (ISPs) is the Mean Opinion Score (MOS). However, the accurate finding of QoE by MOS is subjective and laborious, and it varies depending on the user. A fully automated data analytics framework is required to reduce the inter-operator variability characteristic in QoE assessment. This work addresses this concern by suggesting a novel hybrid XGBStackQoE analytical model using a two-level layering technique. Level one combines multiple Machine Learning (ML) models via a level one Hybrid XGBStackQoE-Model. Individual ML models at level one are trained using the entire training data set. The level two Hybrid XGBStackQoE-Model is fitted using the outputs (meta-features) of the level one ML models. The proposed model outperformed the conventional models, with an accuracy improvement of 4 to 5 percent over the current traditional models. The proposed framework could significantly improve video QoE accuracy.
Funding: The authors thank the anonymous reviewers and journal editors for their assistance in enhancing the paper's logical organisation and content quality.
Abstract: Forecasting river flow is crucial for optimal planning, management, and sustainability using freshwater resources. Many machine learning (ML) approaches have been enhanced to improve streamflow prediction. Hybrid techniques have been viewed as a viable method for enhancing the accuracy of univariate streamflow estimation when compared to standalone approaches. Current researchers have also emphasised using hybrid models to improve forecast accuracy. Accordingly, this paper conducts an updated literature review of applications of hybrid models in estimating streamflow over the last five years, summarising data preprocessing, univariate machine learning modelling strategy, advantages and disadvantages of standalone ML techniques, hybrid models, and performance metrics. This study focuses on two types of hybrid models: parameter optimisation-based hybrid models (OBH) and hybridisation of parameter optimisation-based and preprocessing-based hybrid models (HOPH). Overall, this research supports the idea that meta-heuristic approaches precisely improve ML techniques. It is also one of the first efforts to comprehensively examine the efficiency of various meta-heuristic approaches (classified into four primary classes) hybridised with ML techniques. This study revealed that previous research applied swarm, evolutionary, physics, and hybrid metaheuristics with 77%, 61%, 12%, and 12%, respectively. Finally, there is still room for improving OBH and HOPH models by examining different data pre-processing techniques and metaheuristic algorithms.
基金supported by the Sino-Probe09(No.201011078)National High-tech R&D Program(No.863 and2014AA06A613)
Abstract: We apply reweighted inversion focusing to full tensor gravity gradiometry data using message-passing interface (MPI) and compute unified device architecture (CUDA) parallel computing algorithms, and then combine MPI with CUDA to formulate a hybrid algorithm. Parallel computing performance metrics are introduced to analyze and compare the performance of the algorithms. We summarize the rules for the performance evaluation of parallel algorithms. We use model and real data from the Vinton salt dome to test the algorithms. We find a good match between model and real density data, and verify the high efficiency and feasibility of the parallel computing algorithms in the inversion of full tensor gravity gradiometry data.
Abstract: In order to attain better communications performance rather than just expand coverage and save system cost, criteria related to communications quality and capacity are extracted and revised to build an integrated performance metric system that aims to effectively guide satellite communications constellation design. These performance metrics, together with the system cost, serve as the multiple objectives, whilst the coverage requirement is regarded as the basic constraint in the optimization of the constellation configuration design applying a revised NSGA-II algorithm. The Pareto hyper-volumes lead to the best configuration schemes, which achieve better integrated system performance compared with the conventional design results based merely on coverage and cost.
Funding: Zulqar and Kim's research was supported in part by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2021R1A6A1A03039493), and in part by the NRF grant funded by the Korea government (MSIT) (NRF-2022R1A2C1004401). Mekala's research was supported in part by the Basic Science Research Program of the Ministry of Education (NRF-2018R1A2B6005105), and in part by the National Research Foundation of Korea (NRF) grant funded by the Korea Government (MSIT) (No. 2019R1A5A8080290).
Abstract: Fog computing brings computational services near the network edge to meet the latency constraints of cyber-physical system (CPS) applications. Edge devices have limited computational capacity and energy availability, which hampers end-user performance. We designed a novel performance measurement index to gauge a device's resource capacity. This examination addresses the offloading mechanism issues, where the end user (EU) offloads a part of its workload to a nearby edge server (ES). Sometimes, the ES further offloads the workload to another ES or cloud server to achieve reliable performance because of limited resources (such as storage and computation). The manuscript aims to reduce the service offloading rate by selecting a potential device or server to accomplish a low average latency and service completion time to meet the deadline constraints of sub-divided services. In this regard, an adaptive online status predictive model design is significant for prognosticating the asset requirement of arrived services to make float decisions. Consequently, the development of a reinforcement learning-based flexible x-scheduling (RFXS) approach resolves the service offloading issues, where x = service/resource, for producing low latency and high performance in the network. Our approach to the theoretical bound and computational complexity is derived by formulating the system efficiency. A quadratic restraint mechanism is employed to formulate the service optimization issue according to a set of measurements, as well as the behavioural association rate and adulation factor. Our system managed an average service offloading rate of 0.89%, with 39 ms of delay over complex scenarios (using three servers with a 50% service arrival rate). The simulation outcomes confirm that the proposed scheme attained a low offloading uncertainty and is suitable for simulating heterogeneous CPS frameworks.
Funding: Supported by the government of the Basque Country through the ELKARTEK21/10 KK-2021/00014 and ELKARTEK22/85 Research Programs, respectively.
Abstract: This paper proposes a cryptographic technique on images based on the Sudoku solution. Sudoku is a number puzzle, which needs applying defined protocols and filling the empty boxes with numbers. Given a small size of numbers as input, solving the Sudoku puzzle yields an expanded big size of numbers, which can be used as a key for the encryption/decryption of images. In this way, the given small size of numbers can be stored as the prime key, which means the key is compact. A prime key clue in the Sudoku puzzle always leads to only one solution, which means the key is always stable. This feature is the background for the paper, where the Sudoku puzzle output can be innovatively introduced in image cryptography. The Sudoku solution is expanded to any size of image using a sequence of expansion techniques that involve filling of the number matrix, linear X-Y rotational shifting, and reverse shifting based on a standard zig-zag pattern. The crypto key for an image dictates the details of the positions where the image pixels have to be shuffled. Shuffling is made at two levels, namely the pixel and sub-pixel (RGB) levels of an image, with the latter giving more effective encryption. The brought-out technique falls under the image scrambling method with partial diffusion. Performance metrics are impressive and are given by a histogram deviation of 0.997, a correlation coefficient of 10^-2 and an NPCR of 99.98%. Hence, it is evident that image cryptography with the Sudoku kept in place is more efficient against plaintext and differential attacks.
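Of the metrics reported in this abstract, NPCR (number of pixel change rate) has a simple standard definition: the percentage of pixel positions at which two cipher images differ. A minimal sketch of that definition (the plain nested-list image representation and the function name are my assumptions, not the paper's code):

```python
def npcr(img1, img2):
    """Number of Pixel Change Rate between two equal-sized images.

    img1, img2: 2-D lists (rows of pixel values). NPCR is the percentage
    of positions where the two images differ; values close to 100%
    indicate strong sensitivity of the cipher to plaintext changes.
    """
    total = diff = 0
    for row1, row2 in zip(img1, img2):
        for p1, p2 in zip(row1, row2):
            total += 1
            diff += (p1 != p2)
    return 100.0 * diff / total
```

For a pair of 2x2 images differing in three of four positions, this returns 75.0; the 99.98% reported in the abstract means almost every pixel changes between the two cipher images.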
Abstract: Cryptocurrency price prediction has garnered significant attention due to the growing importance of digital assets in the financial landscape. This paper presents a comprehensive study on predicting future cryptocurrency prices using machine learning algorithms. Open-source historical data from various cryptocurrency exchanges is utilized. Interpolation techniques are employed to handle missing data, ensuring the completeness and reliability of the dataset. Four technical indicators are selected as features for prediction. The study explores the application of five machine learning algorithms to capture the complex patterns in the highly volatile cryptocurrency market. The findings demonstrate the strengths and limitations of the different approaches, highlighting the significance of feature engineering and algorithm selection in achieving accurate cryptocurrency price predictions. The research contributes valuable insights into the dynamic and rapidly evolving field of cryptocurrency price prediction, assisting investors and traders in making informed decisions amidst the challenges posed by the cryptocurrency market.
Abstract: Existing systems use key performance indicators (KPIs) as metrics for physical layer (PHY) optimization, which suffers from the problem of over-optimization, because some unnecessary PHY enhancements are imperceptible to terminal users and thus induce additional cost and energy waste. Therefore, it is necessary to directly utilize the quality of experience (QoE) of the user as an optimization metric, which can achieve the global optimum of QoE under cost and energy constraints. However, QoE is still an application-layer metric that cannot be easily used to design and optimize the PHY. To address this problem, in this paper we propose a novel end-to-end QoE (E2E-QoE) based optimization architecture at the user side for the first time. Specifically, a cross-layer parameterized model is proposed to establish the relationship between the PHY and E2E-QoE. Based on this, an E2E-QoE oriented PHY anomaly diagnosis method is further designed to locate the time and root cause of anomalies. Finally, we investigate optimizing the PHY algorithm directly based on the E2E-QoE. The proposed frameworks and algorithms are all validated using data from a real fifth-generation (5G) mobile system, which shows that using E2E-QoE as the metric of PHY optimization is feasible and can outperform existing schemes.
Abstract: This paper presents a novel general method for computing optimal motions of an industrial robot manipulator (AdeptOne XL robot) in the presence of fixed and oscillating obstacles. The optimization model considers the nonlinear manipulator dynamics, actuator constraints, joint limits, and obstacle avoidance. The problem has 6 objective functions, 88 variables, and 21 constraints. Two evolutionary algorithms, namely, the elitist non-dominated sorting genetic algorithm (NSGA-II) and multi-objective differential evolution (MODE), have been used for the optimization. Two methods (normalized weighting objective functions and average fitness factor) are used to select the best solution tradeoffs. Two multi-objective performance measures, namely the solution spread measure and the ratio of non-dominated individuals, are used to evaluate the Pareto optimal fronts. Two further multi-objective performance measures, namely optimizer overhead and algorithm effort, are used to find the computational effort of the optimization algorithm. The trajectories are defined by B-spline functions. The results obtained from NSGA-II and MODE are compared and analyzed.
Abstract: Greenroads (www.greenroads.us) is a performance metric for sustainable practices associated with the design and construction of roads. It assigns points for approved sustainable choices/practices and can be used to assess roadway project sustainability measures based on total points. Such a metric can (1) provide a quantitative means of sustainability assessment, (2) allow informed sustainability decisions, (3) provide baseline sustainability standards, and (4) stimulate improvement and innovation in integrated roadway sustainability. This paper describes Greenroads version 1.0, which consists of 11 requirements and 37 voluntary practices that can be used as a project-level sustainability performance metric. Development efforts and a Washington State Department of Transportation (WSDOT) case study suggest (1) existing project data can serve as the data source for performance assessment, (2) some requirements and voluntary actions need refinement, (3) projects need to treat sustainability in a holistic manner to meet a reasonable sustainability performance standard, (4) the financial impact of Greenroads use must be studied, and (5) several pilot projects are needed. The Greenroads sustainability performance metric can be a viable means of project-level sustainability performance assessment and decision support.
Abstract: The rapid growth of interconnected high-performance workstations has produced a new computing paradigm called cluster-of-workstations computing. In these systems, the load balance problem is a serious impediment to achieving good performance. The main concern of this paper is the implementation of a dynamic load balancing algorithm, asynchronous Round Robin (ARR), for balancing the workload of a parallel tree-computation depth-first-search algorithm on a Cluster of Heterogeneous Workstations (COW). Many algorithms in artificial intelligence and other areas of computer science are based on depth-first search in implicitly defined trees. For these algorithms a load-balancing scheme is required which is able to evenly distribute parts of an irregularly shaped tree over the workstations with minimal interprocessor communication and without prior knowledge of the tree's shape. For the ARR algorithm, interprocessor communication is needed only when necessary, and it runs under MPI (Message Passing Interface), which allows parallel execution on a heterogeneous SUN cluster-of-workstations platform. The program code is written in the C language and executed under the UNIX operating system (Solaris version).
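For orientation, plain synchronous round-robin distribution can be sketched in a few lines. This is only an illustrative baseline of my own, not the paper's method: the ARR variant described above instead issues work requests asynchronously, to successive peers, only when a workstation runs out of work.

```python
from itertools import cycle

def round_robin_assign(tasks, workers):
    """Distribute tasks over workers in strict round-robin order.

    A minimal, synchronous sketch: task i goes to worker i mod len(workers).
    """
    assignment = {w: [] for w in workers}
    for task, w in zip(tasks, cycle(workers)):
        assignment[w].append(task)
    return assignment

# Five tasks over two workers: w0 gets tasks 1, 3, 5 and w1 gets 2, 4.
split = round_robin_assign([1, 2, 3, 4, 5], ["w0", "w1"])
```

The weakness this static scheme shares with any a-priori partitioning, and which motivates the dynamic ARR approach, is that it cannot adapt when subtrees of an irregular search tree turn out to have very different sizes.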
基金supported by the National Natural Science Foundation of China (Nos. 61232012 and 61202279)the National High-Tech Research and Development (863) Program of China (No. 2012AA120903)the Doctoral Fund of Ministry of Education of China (No. 20120101110134)
Abstract: Monitoring a computing cluster requires collecting and understanding log data generated at the core, computer, and cluster levels at run time. Visualizing the log data of a computing cluster is a challenging problem due to the complexity of the underlying dataset: it is streaming, hierarchical, heterogeneous, and multi-sourced. This paper presents an integrated visualization system that employs a two-stage streaming process mode. Prior to the visual display of the multi-sourced information, the data generated from the clusters is gathered, cleaned, and modeled within a data processor. The visualization supported by a visual computing processor consists of a set of multivariate and time-variant visualization techniques, including a time sequence chart, treemap, and parallel coordinates. Novel techniques to illustrate the time tendency and abnormal status are also introduced. We demonstrate the effectiveness and scalability of the proposed system framework on a commodity cloud-computing platform.
Funding: Project supported by the National Natural Science Foundation of China (Nos. 61673361 and 61422307).
Abstract: Identifying codes have been widely used in man-machine verification to maintain network security. The challenge in engaging man-machine verification involves the correct classification of man and machine tracks. In this study, we propose a random forest (RF) model for man-machine verification based on a mouse movement trajectory dataset. We also compare the RF model with baseline models (logistic regression and support vector machine) based on performance metrics such as precision, recall, false positive rate, false negative rate, F-measure, and weighted accuracy. The performance metrics of the RF model exceed those of the baseline models.
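The comparison metrics listed in this abstract all derive from the binary confusion matrix; the following sketch collects the standard formulas (the function name and dict layout are mine, not from the paper):

```python
def binary_metrics(tp, fp, fn, tn):
    """Precision, recall, FPR, FNR and F-measure from 2x2 confusion counts.

    tp/fp/fn/tn: true positive, false positive, false negative,
    true negative counts from a binary classifier's predictions.
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)           # true positive rate
    fpr = fp / (fp + tn)              # false positive rate
    fnr = fn / (fn + tp)              # false negative rate
    f1 = 2 * precision * recall / (precision + recall)
    return {"precision": precision, "recall": recall,
            "fpr": fpr, "fnr": fnr, "f1": f1}
```

With 8 true positives, 2 false positives, 2 false negatives and 8 true negatives, for example, precision, recall and F-measure all equal 0.8; saying the RF model "exceeds" the baselines means it scores higher on precision, recall, F-measure and weighted accuracy while keeping the two error rates lower.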
Abstract: Purpose - A neural network (NN)-based deep learning (DL) approach is considered for sentiment analysis (SA) by incorporating a convolutional neural network (CNN), bi-directional long short-term memory (Bi-LSTM) and attention methods. Unlike conventional supervised machine learning natural language processing algorithms, the authors have used unsupervised deep learning algorithms. Design/methodology/approach - The method presented for sentiment analysis is designed using CNN, Bi-LSTM and the attention mechanism. Word2vec word embedding is used for natural language processing (NLP). The discussed approach is designed for sentence-level SA and consists of one embedding layer, two convolutional layers with max-pooling, one LSTM layer and two fully connected (FC) layers. Overall, the system training time is 30 min. Findings - The method performance is analyzed using metrics like precision, recall, F1 score, and accuracy. CNN helped to reduce the complexity and Bi-LSTM helped to process the long-sequence input text. Originality/value - The attention mechanism is adopted to decide the significance of every hidden state and give a weighted sum of all the features fed as input.
Abstract: Purpose - Diabetic retinopathy (DR) is a central root of blindness all over the world. DR is tough to diagnose in its starting stages, and the detection procedure might be time-consuming even for qualified experts. Nowadays, intelligent disease detection techniques are extremely acceptable for progress analysis and recognition of various diseases. Therefore, a computer-aided diagnosis scheme based on intelligent learning approaches is proposed for diagnosing DR effectively using a benchmark dataset. Design/methodology/approach - The proposed DR diagnostic procedure involves four main steps: (1) image pre-processing, (2) blood vessel segmentation, (3) feature extraction, and (4) classification. Initially, the retinal fundus image is taken for pre-processing with the help of Contrast Limited Adaptive Histogram Equalization (CLAHE) and an average filter. In the next step, blood vessel segmentation is carried out using a segmentation process with optimized gray-level thresholding. Once the blood vessels are extracted, feature extraction is done using the Local Binary Pattern (LBP), Texture Energy Measurement (TEM, based on Laws' texture energy), and two entropy computations: Shannon's entropy and Kapur's entropy. These collected features are subjected to a classifier called a Neural Network (NN) with an optimized training algorithm. Both the gray-level thresholding and the NN are enhanced by the Modified Levy Updated-Dragonfly Algorithm (MLU-DA), which operates to maximize the segmentation accuracy and to reduce the error difference between the predicted and actual outcomes of the NN. Finally, this classification error can correctly prove the efficiency of the proposed DR detection model. Findings - The overall accuracy of the proposed MLU-DA was 16.6% superior to conventional classifiers, and the precision of the developed MLU-DA was 22% better than LM-NN, and 16.6% better than PSO-NN, GWO-NN, and DA-NN. Finally, it is concluded that the implemented MLU-DA outperformed state-of-the-art algorithms in detecting DR. Originality/value - This paper adopts the latest optimization algorithm, called MLU-DA, with a Neural Network and optimal gray-level thresholding for detecting diabetic retinopathy. This is the first work that utilizes an MLU-DA-based Neural Network for computer-aided diabetic retinopathy diagnosis.