Funding: funded by Science Foundation Ireland (SFI) under the research project "High-end Computational Modelling for Wave Energy Systems" (Grant SFI/10/IN.1/12996), in collaboration with Marine Renewable Energy Ireland (MaREI), the SFI Centre for Marine Renewable Energy Research (SFI/12/RC/2302); supported by EPSRC through Project Grant EP/M021394/1; by the Sustainable Energy Authority of Ireland (SEAI) through the Renewable Energy Research Development & Demonstration Programme (Grant RE/OE/13/20132074); and by the European Space Agency (ESA). The numerical simulations were performed on the Stokes and Fionn clusters at the Irish Centre for High-end Computing (ICHEC) and at the Swiss National Computing Centre under the PRACE-2IP project (Grant FP7 RI-283493).
Abstract: The development of new wave energy converters has shed light on a number of unanswered questions in fluid mechanics, but has also identified a number of new issues of importance for their future deployment. The main concerns relevant to the practical use of wave energy converters are sustainability, survivability, and maintainability. Of course, it is also necessary to maximize the capture per unit area of the structure as well as to minimize the cost. In this review, we consider some of the questions related to the topics of sustainability, survivability, and maintenance access, with respect to sea conditions, for generic wave energy converters with an emphasis on the oscillating wave surge converter. New analytical models that have been developed are a topic of particular discussion. It is also shown how existing numerical models have been pushed to their limits to provide answers to open questions relating to the operation and characteristics of wave energy converters.
Funding: supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (Grant Number 2020R1A6A1A03040583).
Abstract: Workflow management technologies have dramatically improved their deployment architectures and systems along with the evolution and proliferation of cloud distributed computing environments. In particular, such cloud computing environments ought to provide a suitable distributed computing paradigm for deploying very large-scale workflow processes and applications with scalable on-demand services. In this paper, we focus on the distribution paradigm and its deployment formalism for very large-scale workflow applications deployed and enacted across multiple, heterogeneous cloud computing environments. We propose a formal approach to fragment very large-scale workflow processes and their applications both vertically and horizontally, and to deploy the resulting workflow process and application fragments over three types of cloud deployment models and architectures. To concretize the formal approach, we first devise a series of operational situations for fragmenting workflow processes into cloud workflow process and application components and deploying them onto three different types of cloud deployment models and architectures. These concrete approaches are called the deployment-driven fragmentation mechanism, to be applied to such very large-scale workflow processes and applications as an implementation component of cloud workflow management systems. Finally, we believe that our approach and its fragmentation formalisms provide a theoretical basis for designing and implementing very large-scale, maximally distributed workflow processes and applications deployed on cloud deployment models and architectural computing environments.
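As a rough illustration of the two fragmentation directions named in this abstract, the sketch below splits a workflow either vertically (into consecutive phases) or horizontally (into parallel branches). The function names and data layout are assumptions for illustration, not the paper's formalism.

```python
# Illustrative sketch only: vertical vs. horizontal fragmentation of a
# workflow process so that the fragments can be deployed on different
# cloud targets. All names here are hypothetical.

def vertical_fragments(tasks, cut_points):
    """Split a sequential task list into consecutive phases at the given indices."""
    fragments, start = [], 0
    for cut in sorted(cut_points):
        fragments.append(tasks[start:cut])
        start = cut
    fragments.append(tasks[start:])
    return fragments

def horizontal_fragments(tasks, branch_of):
    """Group tasks by the parallel branch each one belongs to."""
    groups = {}
    for task in tasks:
        groups.setdefault(branch_of[task], []).append(task)
    return groups

process = ["ingest", "validate", "transform", "aggregate", "report"]
print(vertical_fragments(process, [2]))
# [['ingest', 'validate'], ['transform', 'aggregate', 'report']]
# e.g. two phases deployable on a private and a public cloud respectively
```

Each fragment could then be mapped to one of the three cloud deployment models the abstract mentions; that mapping step is not shown here.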
Funding: supported by the National Natural Science Foundation of China (U21A20519).
Abstract: As a large amount of data is increasingly generated from edge devices, such as smart homes, mobile phones, and wearable devices, it becomes crucial for many applications to deploy machine learning models across edge devices. The execution speed of the deployed model is a key element in ensuring service quality. Considering highly heterogeneous edge deployment scenarios, deep learning compiling is a novel approach that aims to solve this problem: it defines models using certain domain-specific languages (DSLs) and generates efficient code implementations on different hardware devices. However, two aspects have not yet been thoroughly investigated. The first is the optimization of memory-intensive operations, and the second is the heterogeneity of the deployment targets. To that end, in this work we propose a system solution that optimizes memory-intensive operations, optimizes the subgraph distribution, and enables the compiling and deployment of DNN models on multiple targets. The evaluation results demonstrate the performance of our proposed system.
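A toy example (assumed, not the paper's system) of why optimizing memory-intensive operations matters: elementwise operations are bound by memory traffic, so fusing two of them into one pass removes an intermediate buffer.

```python
# Toy illustration of operator fusion for memory-intensive (elementwise)
# operations. The unfused version materializes an intermediate list; the
# fused version computes the same result in a single pass.

def relu_then_scale_unfused(xs, s):
    tmp = [max(x, 0.0) for x in xs]       # intermediate buffer written out
    return [t * s for t in tmp]           # read back and written again

def relu_then_scale_fused(xs, s):
    return [max(x, 0.0) * s for x in xs]  # one read, one write, no intermediate

data = [-1.0, 2.0, -3.0, 4.0]
assert relu_then_scale_unfused(data, 0.5) == relu_then_scale_fused(data, 0.5)
```

In a real deep learning compiler the same transformation is applied to tensor loop nests rather than Python lists, but the memory-traffic argument is identical.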
Abstract: Energy supply is one of the most critical challenges of wireless sensor networks (WSNs) and industrial wireless sensor networks (IWSNs). While research on the coverage optimization problem (COP) centers on the network's monitoring coverage, this research focuses on the power banks' energy supply coverage. The study of 2-D and 3-D spaces is typical in IWSNs, but the realistic environment is more complex, with obstacles (i.e., machines). In this work, a 3-D surface is the field of interest (FOI), and a hybrid power bank deployment model is established for optimizing the energy supply COP of an IWSN. The hybrid power bank deployment model is highly adaptive and flexible for new plants or existing plants already using an IWSN system, and it improves the power supply to a considerable extent with the fewest power bank deployments. The main innovation of this work is the use of a more practical surface model with obstacles while improving the convergence speed and solution quality of the heuristic algorithm. An overall probabilistic coverage rate analysis of every point on the FOI is provided, not limiting the scope to target points or areas. Bresenham's algorithm is extended from 2-D to the 3-D surface to enhance the probabilistic covering model for coverage measurement. A dynamic search strategy (DSS) is proposed to modify the artificial bee colony (ABC) algorithm and balance its exploration and exploitation abilities for better convergence on this NP-hard deployment problem. Further, cellular automata (CA) are utilized to enhance the convergence speed. A case study based on two typical FOIs in IWSNs shows that the CA scheme effectively speeds up the optimization process. Comparative experiments on four benchmark functions validate the effectiveness of the proposed method: the proposed algorithm outperforms the ABC and gbest-guided ABC (GABC) algorithms, and the energy coverage optimization method based on the hybrid power bank deployment model generates more accurate results than similar algorithms (i.e., ABC, GABC). The proposed model is, therefore, effective and efficient for optimization in IWSNs.
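The abstract extends Bresenham's algorithm to 3-D for coverage measurement. The sketch below shows the generic 3-D Bresenham voxel traversal and a simple line-of-sight check against a terrain height function; this is the textbook form of the algorithm, not the paper's specific extension, and `height_at` is a hypothetical terrain model.

```python
# Generic integer 3-D Bresenham traversal (textbook form), plus a toy
# line-of-sight check of the kind used for obstacle-aware coverage models.

def bresenham_3d(p0, p1):
    """Grid cells visited on the segment from p0 to p1 (inclusive)."""
    x, y, z = p0
    x1, y1, z1 = p1
    dx, dy, dz = abs(x1 - x), abs(y1 - y), abs(z1 - z)
    sx, sy, sz = (1 if x1 > x else -1), (1 if y1 > y else -1), (1 if z1 > z else -1)
    cells = [(x, y, z)]
    if dx >= dy and dx >= dz:                 # x is the driving axis
        e1, e2 = 2 * dy - dx, 2 * dz - dx
        while x != x1:
            x += sx
            if e1 > 0: y += sy; e1 -= 2 * dx
            if e2 > 0: z += sz; e2 -= 2 * dx
            e1 += 2 * dy; e2 += 2 * dz
            cells.append((x, y, z))
    elif dy >= dx and dy >= dz:               # y drives
        e1, e2 = 2 * dx - dy, 2 * dz - dy
        while y != y1:
            y += sy
            if e1 > 0: x += sx; e1 -= 2 * dy
            if e2 > 0: z += sz; e2 -= 2 * dy
            e1 += 2 * dx; e2 += 2 * dz
            cells.append((x, y, z))
    else:                                     # z drives
        e1, e2 = 2 * dy - dz, 2 * dx - dz
        while z != z1:
            z += sz
            if e1 > 0: y += sy; e1 -= 2 * dz
            if e2 > 0: x += sx; e2 -= 2 * dz
            e1 += 2 * dy; e2 += 2 * dx
            cells.append((x, y, z))
    return cells

def line_of_sight(p0, p1, height_at):
    """True if no terrain cell along the segment rises above the segment."""
    return all(z >= height_at(x, y) for x, y, z in bresenham_3d(p0, p1))
```

A coverage model would call `line_of_sight` between each candidate power bank position and each FOI point before applying the probabilistic coverage function.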
Abstract: Quantization algorithms compress the original network by reducing the numerical bit width of the model, which improves the computation speed. Because different layers have different redundancy and different sensitivity to data bit width, reducing the bit width results in a loss of accuracy, and it is therefore difficult to determine the optimal bit width for different parts of the network while guaranteeing accuracy. Mixed-precision quantization can effectively reduce the amount of computation while keeping the model accuracy essentially unchanged. In this paper, a hardware-aware mixed-precision quantization strategy assignment algorithm adapted to low bit widths is proposed, in which reinforcement learning is used to automatically predict the mixed precision that meets the constraints of the hardware resources. In the state-space design, the standard deviation of the weights is used to measure the distribution difference of the data, the execution speed feedback of a simulated neural network accelerator's inference is used as the environment to limit the agent's action space, and the accuracy of the quantized model after retraining is used as the reward function to guide the agent through deep reinforcement learning training. Experimental results show that the proposed method obtains a suitable layer-by-layer quantization strategy under the given computational resource constraints, and the model accuracy is effectively improved. The proposed method is highly automated, has a degree of universality, and has strong application potential in mixed-precision quantization and embedded neural network model deployment.
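The core operation the abstract builds on is quantizing weights to a chosen bit width. The minimal sketch below shows symmetric uniform quantization and how the reconstruction error grows as the bit width shrinks; the RL-based per-layer bit-width assignment itself is not shown, and this is a generic textbook scheme, not the paper's exact procedure.

```python
# Minimal sketch of symmetric uniform quantization to a given bit width.

def quantize(weights, bits):
    """Map floats to signed integers of the given bit width and back."""
    qmax = 2 ** (bits - 1) - 1                 # e.g. 127 for 8 bits
    scale = max(abs(w) for w in weights) / qmax or 1.0
    q = [max(-qmax, min(qmax, round(w / scale))) for w in weights]
    dequant = [v * scale for v in q]           # reconstruction for error checks
    return q, dequant

w = [0.5, -1.0, 0.25]
_, d8 = quantize(w, 8)
_, d2 = quantize(w, 2)   # fewer bits -> larger reconstruction error
```

A mixed-precision strategy amounts to choosing `bits` per layer; sensitive layers (e.g. those with a wide weight distribution) keep more bits, redundant layers get fewer.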
Funding: supported by the National Key Research and Development Program of China (2020YFB1807500); the National Natural Science Foundation of China (62072360, 62001357, 62172438, 61901367); the Key Research and Development Plan of Shaanxi Province (2021ZDLGY02-09, 2020JQ-844); the Natural Science Foundation of Guangdong Province of China (2022A1515010988); the Key Project on Artificial Intelligence of the Xi'an Science and Technology Plan (2022JH-RGZN-0003); the Xi'an Science and Technology Plan (20RGZN0005); and the Xi'an Key Laboratory of Mobile Edge Computing and Security (201805052-ZD3CG36).
Abstract: Academic and industrial communities have been paying significant attention to 6th Generation (6G) wireless communication systems since the commercial deployment of 5G cellular communications. Among the emerging technologies, Vehicular Edge Computing (VEC) can provide essential assurance for the robustness of the Artificial Intelligence (AI) algorithms to be used in 6G systems. Therefore, in this paper, a strategy for enhancing the robustness of AI model deployment using 6G-VEC is proposed, taking the object detection task as an example. This strategy comprises two stages: model stabilization and model adaptation. In the former, state-of-the-art methods are appended to the model to improve its robustness. In the latter, two targeted compression methods are implemented, namely model parameter pruning and knowledge distillation, which result in a trade-off between model performance and runtime resources. Numerical results indicate that the proposed strategy can be smoothly deployed on onboard edge terminals, where the introduced trade-off outperforms the other available strategies.
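The two compression methods named here can be sketched in their generic textbook form (not the paper's exact procedure): magnitude pruning zeroes the smallest weights, and knowledge distillation trains a small model against temperature-softened teacher probabilities.

```python
# Generic sketches of the two compression ideas: magnitude pruning and
# the softened targets used in knowledge distillation. Illustrative only.
import math

def prune_by_magnitude(weights, sparsity):
    """Zero out (approximately) the smallest-magnitude fraction of weights."""
    k = int(len(weights) * sparsity)
    threshold = sorted(abs(w) for w in weights)[k - 1] if k else -1.0
    return [0.0 if abs(w) <= threshold else w for w in weights]

def soft_targets(logits, temperature):
    """Teacher probabilities softened by a distillation temperature (softmax)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```

Higher temperatures flatten the teacher's distribution, exposing the relative similarity between classes that the student then learns from; pruning and distillation together trade accuracy for the runtime resources available on an onboard edge terminal.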
Abstract: Depression is a crippling affliction that affects millions of individuals around the world. In general, physicians screen patients for mental health disorders on a regular basis and treat them in collaboration with psychologists and other mental health experts, which results in lower costs and improved patient outcomes. However, this strategy can require buy-in from a large number of people, as well as additional training and logistical considerations. Thus, machine learning algorithms were used to analyze and predict depression in patients based on information generally present in a medical file. The methodology of this study is divided into six parts: Proposed Research Architecture (PRA), Data Pre-processing Approach (DPA), Research Hypothesis Testing (RHT), Concentrated Algorithm Pipeline (CAP), Loss Optimization Stratagem (LOS), and Model Deployment Architecture (MDA). Null and alternative hypotheses are applied in the RHT. In addition, an Ensemble Learning Approach (ELA) and Frequent Model Retraining (FMR) have been utilized to optimize the loss function, and feature-importance interpretation is also delineated in this research. These forecasts could help individuals connect with expert mental health specialists more quickly and easily. According to the findings, 71% of people with depression and 80% of those without depression can be appropriately diagnosed. This study obtained 91% and 92% accuracy with the Random Forest (RF) and Extra Tree classifiers, respectively; after applying the Receiver Operating Characteristic (ROC) curve, 79% was found for RF, 81% for Extra Tree, and 82% for the eXtreme Gradient Boosting (XGBoost) algorithm. Several predictive factors for depression are also identified through statistical data analysis. Though additional effort is needed to develop a more accurate model, this model can be adapted in the healthcare sector for diagnosing depression.
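The headline figures quoted in this abstract correspond to what are usually called sensitivity (the fraction of depressed patients correctly identified) and specificity (the fraction of non-depressed patients correctly ruled out). The counts below are purely illustrative, chosen to match the reported rates; they are not the study's data.

```python
# Sensitivity/specificity from confusion-matrix counts. The counts are
# hypothetical, picked only so the rates match the abstract's 71% / 80%.

def sensitivity(tp, fn):
    """Fraction of actual positives correctly identified."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of actual negatives correctly ruled out."""
    return tn / (tn + fp)

tp, fn = 71, 29    # depressed: 71 caught, 29 missed
tn, fp = 80, 20    # not depressed: 80 cleared, 20 flagged
assert sensitivity(tp, fn) == 0.71
assert specificity(tn, fp) == 0.80
```

Note that overall accuracy (e.g. the 91–92% figures) mixes both classes and therefore depends on the class balance, which is why sensitivity and specificity are reported separately.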
Funding: supported by the National Basic Research Program of China (973 Program) under Grant 2015CB954102; the National Natural Science Foundation of China under Grants 41471317, 41301414, and 41371424; and the Priority Academic Program Development of the Jiangsu Higher Education Institutions under Grant 164320H116.
Abstract: Geo-analysis models can be shared and reused via model-services to support more effective responses to risks and help to build a sustainable world. The deployment of model-services typically requires significant effort, primarily because of the complexity and disciplinary specifics of geo-analysis models. Various participants engage in the collaborative modelling process: geo-analysis model resources are provided by model providers, computational resources are provided by computational resource providers, and the published model-services are accessed by model users. This paper focuses primarily on model-service deployment, with the basic goal of providing a collaboration-oriented method for modelling participants to work together conveniently and make full use of modelling and computational resources across an open web environment. For model resource providers, a model-deployment description method is studied to help build model-deployment packages; for computational resource providers, a computational resource description method is studied to help build model-service containers and connectors. An experimental system for sharing and reusing geo-analysis models is built to verify the capability and feasibility of the proposed methods. Through this strategy, modellers from dispersed regions can work together more easily, thus providing dynamic and reliable geospatial information for Future Earth studies.
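A model-deployment description package of the kind this abstract studies would record what a model needs to run. The descriptor below is a hypothetical example: every field name, the model name, and the formats are assumptions for illustration, not the paper's schema.

```python
# Hypothetical model-deployment package descriptor, serialized to JSON so a
# computational resource provider could build a matching service container.
# All field names and values are illustrative assumptions.
import json

descriptor = {
    "model": {"name": "runoff-simulation", "version": "1.2",
              "entrypoint": "run_model.sh"},
    "inputs": [{"name": "dem", "format": "GeoTIFF"}],
    "outputs": [{"name": "runoff", "format": "NetCDF"}],
    "environment": {"os": "linux", "dependencies": ["gdal"]},
    "resources": {"cpu_cores": 4, "memory_gb": 8},
}
package_manifest = json.dumps(descriptor, indent=2)
```

A container builder on the computational resource provider's side could parse this manifest to select a base image, install the listed dependencies, and wire the declared inputs and outputs to the service connectors.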