The hydrothermal wave was investigated numerically for a large-Prandtl-number fluid (Pr = 105.6) in a shallow cavity with differentially heated sidewalls. Under zero gravity, a traveling wave appears and propagates in the direction opposite to the surface flow (upstream) when the applied temperature difference grows beyond the critical value. The phase relationships of the disturbed velocity, temperature, and pressure demonstrate that the traveling wave is driven by the disturbed temperature, which is why it is named a hydrothermal wave. The hydrothermal wave is so weak that the oscillatory flow field and temperature distribution can hardly be observed in the liquid layer. The excitation mechanism of the hydrothermal wave is analyzed and discussed in the present paper.
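For context, the dimensionless groups governing this thermocapillary instability can be written in their standard form (these definitions are not given in the abstract; here \(\nu\) is the kinematic viscosity, \(\alpha\) the thermal diffusivity, \(\sigma\) the surface tension, \(\mu\) the dynamic viscosity, \(\Delta T\) the applied temperature difference, and \(d\) the layer depth):

\[
\mathrm{Pr} = \frac{\nu}{\alpha}, \qquad
\mathrm{Ma} = \frac{\left|\partial\sigma/\partial T\right|\,\Delta T\, d}{\mu\,\alpha}
\]

The "critical value" of the temperature difference mentioned above corresponds to the Marangoni number Ma exceeding a Pr-dependent critical threshold at which the hydrothermal wave sets in.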
With the ongoing advancements in sensor networks and data acquisition technologies across various systems like manufacturing, aviation, and healthcare, data-driven vibration control (DDVC) has attracted broad interest from both the industrial and academic communities. Input shaping (IS), as a simple and effective feedforward method, is in great demand in DDVC. It convolves the desired input command with an impulse sequence, without requiring parametric dynamics or the closed-loop system structure, thereby suppressing residual vibration. Based on a thorough investigation into the state-of-the-art DDVC methods, this survey makes the following efforts: 1) introducing the IS theory and typical input shapers; 2) categorizing recent progress of DDVC methods; 3) summarizing commonly adopted metrics for DDVC; and 4) discussing the engineering applications and future trends of DDVC. By doing so, this study provides a systematic and comprehensive overview of existing DDVC methods from designing to optimizing perspectives, aiming to promote future research on this emerging and vital issue.
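The impulse-sequence convolution that IS performs can be made concrete with the classic two-impulse zero-vibration (ZV) shaper. The sketch below is a minimal illustration, not taken from any surveyed method; the mode frequency and damping ratio are hypothetical placeholders.

```python
import math
import numpy as np

def zv_shaper(wn, zeta, dt):
    """Two-impulse zero-vibration (ZV) shaper for a vibratory mode with
    natural frequency wn [rad/s] and damping ratio zeta."""
    wd = wn * math.sqrt(1.0 - zeta**2)                  # damped frequency
    K = math.exp(-zeta * math.pi / math.sqrt(1.0 - zeta**2))
    A = np.array([1.0 / (1.0 + K), K / (1.0 + K)])      # impulse amplitudes
    t = np.array([0.0, math.pi / wd])                   # impulse times
    # Discretize the impulse train onto the sampling grid.
    shaper = np.zeros(int(round(t[-1] / dt)) + 1)
    for ai, ti in zip(A, t):
        shaper[int(round(ti / dt))] += ai
    return shaper

# Shape a unit step command (hypothetical mode: wn = 10 rad/s, zeta = 0.05).
dt = 0.001
shaper = zv_shaper(10.0, 0.05, dt)
step = np.ones(1000)
shaped = np.convolve(step, shaper)       # shaped command fed to the plant
print(abs(shaper.sum() - 1.0) < 1e-9)    # → True: amplitudes sum to one
```

Because the impulse amplitudes sum to one, the shaped command reaches the same steady-state setpoint as the original; only the transient is modified so that the residual vibration of the targeted mode cancels.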
According to the Annex Technical Regulations for Integrated Curriculum Development (Trial) in Document No. 30 of the General Office of the Ministry of Human Resources and Social Security (2012), this paper studies the formulation of the curriculum standards for the integration of Chinese medicinal materials production. We focus on the formulation ideas and the formulation process of these curriculum standards, including the description of typical work tasks, the determination of curriculum objectives, the analysis of study content, the description of referential study tasks, teaching implementation suggestions, and assessment and evaluation suggestions, which can provide a reference for the development and research of other related integrated courses.
More devices in the Intelligent Internet of Things (AIoT) result in an increased number of tasks that require low latency and real-time responsiveness, leading to an increased demand for computational resources. Cloud computing's low-latency performance issues in AIoT scenarios have led researchers to explore fog computing as a complementary extension. However, the effective allocation of resources for task execution within fog environments, characterized by limitations and heterogeneity in computational resources, remains a formidable challenge. To tackle this challenge, in this study, we integrate fog computing and cloud computing. We begin by establishing a fog-cloud environment framework, followed by the formulation of a mathematical model for task scheduling. Lastly, we introduce an enhanced hybrid Equilibrium Optimizer (EHEO) tailored for AIoT task scheduling. The overarching objective is to decrease both the makespan and energy consumption of the fog-cloud system while accounting for task deadlines. The proposed EHEO method undergoes a thorough evaluation against multiple benchmark algorithms, encompassing metrics like makespan, total energy consumption, success rate, and average waiting time. Comprehensive experimental results demonstrate the superior performance of EHEO across all assessed metrics. Notably, in the most favorable conditions, EHEO diminishes both the makespan and energy consumption by approximately 50% and 35.5%, respectively, compared to the second-best performing approach, which affirms its efficacy in advancing the efficiency of AIoT task scheduling within fog-cloud networks.
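The scheduling objective described above (makespan plus energy under task deadlines) can be sketched as a simple evaluation function over candidate task-to-node assignments. All task sizes, node speeds, power figures, and deadlines below are hypothetical; a metaheuristic such as the paper's EHEO would search the assignment space that this function scores.

```python
import numpy as np

# Hypothetical fog-cloud model: task sizes (million instructions), node
# speeds (MIPS), per-node power while busy (W), and task deadlines (s).
task_size  = np.array([400.0, 250.0, 800.0, 120.0])
node_speed = np.array([500.0, 1500.0])    # one fog node, one cloud node
node_power = np.array([20.0, 90.0])
deadline   = np.array([2.0, 1.0, 2.5, 0.5])

def evaluate(assignment):
    """Makespan, energy, and deadline feasibility of a task->node assignment."""
    busy = np.zeros(len(node_speed))      # accumulated busy time per node
    finish = np.zeros(len(task_size))
    for t in np.argsort(-task_size):      # schedule longest tasks first
        n = assignment[t]
        busy[n] += task_size[t] / node_speed[n]
        finish[t] = busy[n]
    makespan = float(busy.max())
    energy = float((busy * node_power).sum())
    feasible = bool(np.all(finish <= deadline))
    return makespan, energy, feasible

print(evaluate(np.array([0, 0, 0, 0])))   # all-fog: misses deadlines
print(evaluate(np.array([1, 1, 1, 0])))   # mixed fog-cloud: feasible
```

The two printed candidates illustrate the trade-off the optimizer navigates: concentrating load on the cheap fog node saves energy but violates deadlines, while mixing fog and cloud meets them at a higher power cost.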
The battlefield environment simulation process is an important part of battlefield environment information support, which needs to be built around the task process. At present, the interoperability between battlefield environment simulation systems and command and control systems is still imperfect, and the traditional simulation data model cannot meet warfighters' need for an efficient and accurate understanding and analysis of battlefield environment information. Therefore, a task-oriented battlefield environment simulation process model needs to be constructed to effectively analyze the key information demands of the command and control system. The structured characteristics of tasks and the simulation process are analyzed, and the simulation process concept model is constructed with an object-oriented method. The data model and formal syntax of GeoBML are analyzed, and the logical model of the simulation process is constructed with a formal language. The object data structure of the simulation process is defined, and the object model of the simulation process that maps tasks is constructed. In the end, the battlefield environment simulation platform modules are designed and applied based on this model, verifying that the model can effectively express the real-time dynamic correlation between battlefield environment simulation data and operational tasks.
Cross-training is a phenomenon related to motor learning, where motor performance of the untrained limb shows improvement in strength and skill execution following unilateral training of the homologous contralateral limb. We used functional MRI to investigate whether motor performance of the untrained limb could be improved using a serial reaction time task after motor sequence learning with the trained limb, and whether these skill acquisitions led to changes in brain activation patterns. We recruited 20 right-handed healthy subjects, who were randomly allocated into training and control groups. The training group was trained in performance of a serial reaction time task using their non-dominant left hand, 40 minutes per day, for 10 days, over a period of 2 weeks. The control group did not receive training. Measurements of response time and percentile of response accuracy were performed twice, pre- and post-training, while brain functional MRI was scanned during performance of the serial reaction time task using the untrained right hand. In the training group, prominent changes in response time and percentile of response accuracy were observed in both the untrained right hand and the trained left hand between pre- and post-training. The control group showed no significant changes in the untrained hand between pre- and post-training.
In the training group, the activated volume of the cortical areas related to motor function (i.e., primary motor cortex, premotor area, posterior parietal cortex) showed a gradual decrease, and enhanced cerebellar activation of the vermis and the newly activated ipsilateral dentate nucleus were observed during performance of the serial reaction time task using the untrained right hand, accompanied by the cross-motor learning effect. However, no significant changes were observed in the control group. Our findings indicate that motor skills learned over the 2-week training using the trained limb were transferred to the opposite homologous limb, and motor skill acquisition of the untrained limb led to changes in brain activation patterns in the cerebral cortex and cerebellum.
Brain tissue is one of the softest parts of the human body, composed of white matter and grey matter. The mechanical behavior of brain tissue plays an essential role in regulating brain morphology and brain function. Besides, traumatic brain injury (TBI) and various brain diseases are also greatly influenced by the brain's mechanical properties. Whether white matter or grey matter, brain tissue contains multiscale structures composed of neurons, glial cells, fibers, blood vessels, etc., each with different mechanical properties. As such, brain tissue exhibits complex mechanical behavior, usually with strong nonlinearity, heterogeneity, and directional dependence. Building a constitutive law for multiscale brain tissue using traditional function-based approaches can be very challenging. Instead, this paper proposes a data-driven approach to establish the desired mechanical model of brain tissue. We focus on blood vessels with internal pressure embedded in a white or grey matter matrix material to demonstrate our approach. The matrix is described by an isotropic or anisotropic nonlinear elastic model. A representative unit cell (RUC) with blood vessels is built, which is used to generate stress-strain data under different internal blood pressures and various proportional displacement loading paths. The generated stress-strain data are then used to train a mechanical law using artificial neural networks to predict the macroscopic mechanical response of brain tissue under different internal pressures. Finally, the trained material model is implemented into finite element software to predict the mechanical behavior of a whole brain under intracranial pressure and distributed body forces. Compared with a direct numerical simulation that employs a reference material model, our
proposed approach greatly reduces the computational cost and improves modeling efficiency. The predictions made by our trained model demonstrate sufficient accuracy. Specifically, we find that the level of internal blood pressure can greatly influence the stress distribution and determine the possible related damage behaviors.
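The surrogate-training step above — fitting a mechanical law to (strain, internal pressure) → stress data generated from the RUC — can be illustrated with a drastically simplified stand-in. The paper trains an artificial neural network on nonlinear data; the sketch below fits a linear model to synthetic data instead, purely to show the workflow, and every constant in the generating law is a hypothetical placeholder, not brain-tissue data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "RUC" samples: uniaxial strain and internal blood pressure -> stress.
# The generating law below is an illustrative placeholder.
strain   = rng.uniform(0.0, 0.2, 200)
pressure = rng.uniform(0.0, 2.0, 200)                      # e.g. kPa
stress   = 3.0 * strain + 0.4 * pressure + 0.01 * rng.normal(size=200)

# Fit a linear surrogate s = a*strain + b*pressure + c (the paper uses an
# ANN; least squares keeps this sketch dependency-free).
X = np.column_stack([strain, pressure, np.ones_like(strain)])
coef, *_ = np.linalg.lstsq(X, stress, rcond=None)

def surrogate(eps, p):
    """Predict stress for strain eps under internal pressure p."""
    return coef[0] * eps + coef[1] * p + coef[2]

print(coef.round(2))   # should land near [3.0, 0.4, 0.0]
```

In the paper's setting, the trained model is then queried inside a finite element solver in place of a costly nested RUC simulation, which is where the reported efficiency gain comes from.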
In current research on task offloading and resource scheduling in vehicular networks, vehicles are commonly assumed to maintain constant speed or relatively stationary states, and the impact of speed variations on task offloading is often overlooked. It is frequently assumed that vehicles can be accurately modeled during actual motion processes. However, in dynamic vehicular environments, both the tasks generated by the vehicles and the vehicles' surroundings are constantly changing, making it difficult to achieve real-time modeling for actual dynamic vehicular network scenarios. Taking into account actual dynamic vehicular scenarios, this paper considers the real-time non-uniform movement of vehicles and proposes a vehicular task dynamic offloading and scheduling algorithm for single-task multi-vehicle vehicular network scenarios, attempting to solve the dynamic decision-making problem in the task offloading process. The optimization objective is to minimize the average task completion time, which is formulated as a multi-constrained non-linear programming problem. Due to the mobility of vehicles, a constraint model is applied in the decision-making process to dynamically determine whether the communication range is sufficient for task offloading and transmission. Finally, the proposed vehicular task dynamic offloading and scheduling algorithm based on multi-agent deep deterministic policy gradient (MADDPG) is applied to solve the optimization problem. Simulation results show that the algorithm proposed in this paper is able to achieve lower-latency task computation offloading. Meanwhile, the average task completion time of the proposed algorithm can be improved by 7.6% compared to the performance of the MADDPG scheme and
51.1% compared to the performance of deep deterministic policy gradient (DDPG).
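The communication-range constraint described above — deciding whether a moving vehicle stays within coverage long enough to finish transmitting — reduces to a simple circle-exit computation. The sketch below is an illustrative geometric check, not the paper's full constraint model; all parameter values in the example are hypothetical.

```python
import math

def can_offload(x0, y0, vx, vy, rsu_x, rsu_y, radius, data_bits, rate_bps):
    """Check whether a vehicle starting at (x0, y0) with constant velocity
    (vx, vy) stays inside a roadside unit's coverage circle of the given
    radius long enough to transmit data_bits at rate_bps."""
    tx_time = data_bits / rate_bps
    # Time at which the vehicle exits the circle: solve |p(t) - rsu| = radius.
    dx, dy = x0 - rsu_x, y0 - rsu_y
    a = vx**2 + vy**2
    b = 2 * (dx * vx + dy * vy)
    c = dx**2 + dy**2 - radius**2
    if c > 0:
        return False            # already outside coverage
    if a == 0:
        return True             # stationary inside coverage
    exit_t = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
    return exit_t >= tx_time

# Vehicle 100 m before the RSU at 20 m/s, 150 m coverage, 8 Mb at 2 Mb/s:
print(can_offload(-100, 0, 20, 0, 0, 0, 150, 8e6, 2e6))   # → True
```

In a dynamic setting with non-uniform speed, this check would be re-evaluated each decision slot, which is exactly why the paper treats offloading as a sequential decision problem rather than a one-shot assignment.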
To solve the problem of distributed task-platform scheduling in a holonic command and control (C2) organization, the basic elements of the organization are first analyzed, and a formal description of the organizational elements and structure is provided. To improve task execution quality, a single-task resource scheduling model is established, and a solving method based on the m-best algorithm is proposed. To address the problem that the tactical decision-holon cannot effectively handle low-priority tasks, a distributed resource scheduling collaboration mechanism based on platform pricing and a platform exchange mechanism based on resource capacities are designed. Finally, a series of experiments is designed to prove the effectiveness of these methods. The results show that the proposed distributed scheduling methods can achieve an effective balance of platform resources.
Crowdsourcing technology is widely recognized for its effectiveness in task scheduling and resource allocation. While traditional methods for task allocation can help reduce costs and improve efficiency, they may encounter challenges when dealing with abnormal data flow nodes, leading to decreased allocation accuracy and efficiency. To address these issues, this study proposes a novel two-part invalid-detection task allocation framework. In the first step, an anomaly detection model is developed using a dynamic self-attentive GAN to identify anomalous data. Compared to the baseline method, the model achieves an approximately 4% increase in the F1 value on the public dataset. In the second step of the framework, task allocation modeling is performed using a two-part graph matching method. This phase introduces a P-queue KM algorithm that implements a more efficient optimization strategy. The allocation efficiency is improved by approximately 23.83% compared to the baseline method. Empirical results confirm the effectiveness of the proposed framework in detecting abnormal data nodes, enhancing allocation precision, and achieving efficient allocation.
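The matching step above pairs workers with tasks so as to maximize total suitability. The KM (Kuhn-Munkres) family of algorithms, including the paper's P-queue variant, computes this optimum in polynomial time; the sketch below instead uses exhaustive search over a tiny hypothetical score matrix, which finds the same optimum and keeps the illustration short.

```python
from itertools import permutations

# Hypothetical worker-task suitability scores (rows: workers, cols: tasks).
score = [
    [4, 1, 3],
    [2, 0, 5],
    [3, 2, 2],
]

def best_assignment(score):
    """Exhaustive max-weight bipartite matching: worker i -> task perm[i].
    KM solves the same problem in O(n^3); brute force is for illustration."""
    n = len(score)
    best, best_perm = float("-inf"), None
    for perm in permutations(range(n)):
        total = sum(score[i][perm[i]] for i in range(n))
        if total > best:
            best, best_perm = total, perm
    return best, best_perm

total, perm = best_assignment(score)
print(total, perm)   # → 11 (0, 2, 1): workers 0, 1, 2 take tasks 0, 2, 1
```

The paper's efficiency gain comes from speeding up exactly this optimization, plus filtering anomalous nodes out of the score matrix before matching.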
The rapid development of Internet of Things (IoT) technology has led to a significant increase in the computational task load of Terminal Devices (TDs). TDs reduce response latency and energy consumption with the support of task offloading in Multi-access Edge Computing (MEC). However, existing task-offloading optimization methods typically assume that MEC's computing resources are unlimited, and there is a lack of research on the optimization of task offloading when MEC resources are exhausted. In addition, existing solutions decide whether to accept an offloaded task request based only on the single decision result of the current time slot, and lack support for multiple retries in subsequent time slots, resulting in TDs missing potential offloading opportunities in the future. To fill this gap, we propose a Two-Stage Offloading Decision-making Framework (TSODF) with request holding and dynamic eviction. Long Short-Term Memory (LSTM)-based task-offloading request prediction and MEC resource release estimation are integrated to infer the probability of a request being accepted in the subsequent time slot. The framework continuously learns optimized decision-making experiences to increase the success rate of task offloading based on deep learning technology. Simulation results show that TSODF reduces the total TD energy consumption and delay for task execution and improves the task offloading rate and system resource utilization compared to the benchmark method.
The Internet of Medical Things (IoMT) is regarded as a critical technology for intelligent healthcare in the foreseeable 6G era. Nevertheless, due to the limited computing power of edge devices and task-related coupling relationships, IoMT faces unprecedented challenges. Considering the associative connections among tasks, this paper proposes a computing offloading policy for multiple user devices (UDs) that considers device-to-device (D2D) communication and a multi-access edge computing (MEC) technique under the IoMT scenario. Specifically, to minimize the total delay and energy consumption concerning the requirements of IoMT, we first analyze and model the detailed local execution, MEC execution, D2D execution, and associated-task offloading exchange models. Consequently, the associated-task offloading scheme for multiple UDs is formulated as a mixed-integer nonconvex optimization problem. Considering the advantages of deep reinforcement learning (DRL) in processing tasks with coupling relationships, a Double-DQN-based associative tasks computing offloading (DDATO) algorithm is then proposed to obtain the optimal solution, which can make the best offloading decision under the condition that the tasks of UDs are associative. Furthermore, to reduce the complexity of the DDATO algorithm, a cache-aided procedure is intentionally introduced before the data training process. This avoids redundant offloading and computing procedures for tasks that have previously been cached by other UDs. In addition, we use a dynamic ε-greedy strategy in the action selection section of the algorithm, thus preventing the algorithm from falling into a locally optimal solution.
Simulation results demonstrate that, compared with other existing methods for associative task models concerning different structures in the IoMT network, the proposed algorithm can lower the total cost more effectively and efficiently while also providing a tradeoff between delay and energy consumption tolerance.
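The dynamic ε-greedy strategy mentioned above can be sketched as a selector whose exploration probability decays toward a floor on every call, which is one common way to avoid premature lock-in on a locally optimal offloading policy. The abstract does not specify the schedule, so every constant below is illustrative.

```python
import random

def make_epsilon_greedy(eps_start=1.0, eps_min=0.05, decay=0.995):
    """Dynamic epsilon-greedy action selector: exploration probability
    decays multiplicatively per call, down to a floor eps_min."""
    eps = eps_start

    def select(q_values):
        nonlocal eps
        if random.random() < eps:
            action = random.randrange(len(q_values))                      # explore
        else:
            action = max(range(len(q_values)), key=q_values.__getitem__)  # exploit
        eps = max(eps_min, eps * decay)
        return action

    return select

random.seed(0)
select = make_epsilon_greedy()
actions = [select([0.1, 0.9, 0.3]) for _ in range(2000)]
# Early calls explore broadly; late calls mostly pick the greedy action (index 1).
```

In the DDATO setting, `q_values` would be the Double-DQN action values for the candidate offloading decisions (local, MEC, D2D) of an associative task.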
With the advancement of technology and the continuous innovation of applications, low-latency applications such as drones, online games, and virtual reality are gradually becoming popular demands in modern society. However, these applications pose a great challenge to the traditional centralized mobile cloud computing paradigm, and it is obvious that the traditional cloud computing model is already struggling to meet such demands. To address the shortcomings of cloud computing, mobile edge computing has emerged. Mobile edge computing provides users with computing and storage resources by offloading computing tasks to servers at the edge of the network. However, most existing work considers only single-objective performance optimization in terms of latency or energy consumption, rather than balanced optimization of both. To reduce task latency and device energy consumption, the problem of joint optimization of computation offloading and resource allocation in multi-cell, multi-user, multi-server MEC environments is investigated. In this paper, a dynamic computation offloading algorithm based on Multi-Agent Deep Deterministic Policy Gradient (MADDPG) is proposed to obtain the optimal policy. The experimental results show that the algorithm proposed in this paper reduces the delay by 5 ms compared to PPO, 1.5 ms compared to DDPG, and 10.7 ms compared to DQN, and reduces the energy consumption by 300 compared to PPO, 760 compared to DDPG, and 380 compared to DQN. This fully demonstrates that the algorithm proposed in this paper has excellent performance.
Reliability, QoS, and energy consumption are three important concerns of cloud service providers. Most of the current research on reliable task deployment in cloud computing focuses on only one or two of these three concerns. However, these three factors have intrinsic trade-off relationships. Existing studies show that load concentration can reduce the number of servers and hence save energy. In this paper, we deal with the problem of reliable task deployment in data centers, with the goal of minimizing the number of servers used in cloud data centers under the constraint that the job execution deadline can be met upon single-server failure. We propose a QoS-constrained, reliable and energy-efficient task replica deployment (QSRE) algorithm for the problem by combining task replication and re-execution. For each task in a job that cannot finish executing by re-execution within the deadline, we initiate two replicas for the task: a main task and a task replica. Each main task runs on an individual server. The associated task replica is deployed on a backup server and completes part of the whole task load before the main task fails. Different from the main tasks, multiple task replicas can be allocated to the same backup server to reduce the energy consumption of cloud data centers by minimizing the number of servers required for running the task replicas. Specifically, QSRE assigns the task replicas with the longest and the shortest execution times to the backup servers in turn, such that the task replicas can meet the QoS-specified job execution deadline under main task failure. We conduct experiments through simulations. The experimental results show that QSRE can effectively reduce the number of servers used, while ensuring the reliability and QoS of job execution.
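The longest/shortest pairing rule above can be sketched as a packing heuristic: repeatedly place the longest remaining replica on a backup server, then fill it with the shortest replicas while the deadline permits. This is a simplified reading of QSRE's rule for illustration only, not the paper's exact algorithm; execution times and deadline are hypothetical.

```python
def pack_replicas(exec_times, deadline):
    """Assign task replicas to backup servers by pairing the longest and
    shortest remaining replicas onto the same server, opening a new server
    whenever adding a replica would exceed the job deadline."""
    remaining = sorted(exec_times)          # ascending execution times
    servers = []
    while remaining:
        longest = remaining.pop()           # take the longest replica
        load = [longest]
        # Greedily add the shortest replicas while the deadline permits.
        while remaining and sum(load) + remaining[0] <= deadline:
            load.append(remaining.pop(0))
        servers.append(load)
    return servers

servers = pack_replicas([5, 3, 8, 2, 7, 1], deadline=10)
print(len(servers), servers)   # → 3 [[8, 1], [7, 2], [5, 3]]
```

Minimizing the number of opened backup servers is exactly the energy lever the paper exploits: fewer servers hosting the consolidated replicas means lower idle-plus-busy power in the data center.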
The IEC 61131-3 standard defines a model and a set of programming languages for the development of industrial automation software. It is widely accepted by industry, and most commercial tool vendors advertise compliance with it. On the other side, Model Driven Development (MDD) has proved to be a quite successful paradigm in general-purpose computing. This was the motivation for exploiting the benefits of MDD in the industrial automation domain. With the emerging IEC 61131 specification that defines an object-oriented (OO) extension to the function block model, there will be a push for industry to better exploit the benefits of MDD in automation systems development. This work discusses possible alternatives to integrate the current as well as the emerging specification of IEC 61131 into the model-driven development process of automation systems. IEC 61499, UML, and SysML are considered as possible alternatives to allow the developer to work at higher layers of abstraction than the one supported by IEC 61131 and to move more effectively from requirement specifications to the implementation model of the system.
In the case of lid-driven deep cavity flow, the effects of different resolutions of local grid refinement have been studied in the frame of the multiple-relaxation-time (MRT) lattice Boltzmann method (LBM). In all the cases, the aspect ratio and Reynolds number are set as 1.5 and 3200, respectively. First, the applied method is validated by comparing it with two reported works, with which agreement is reached. Then, six separate degrees of local grid refinement at the upper left corner, i.e. a purely coarse grid plus 1/64, 1/32, 1/16, 1/8, and 1/4 refinements of the lattice number in the width direction, have been studied in detail. The results give the following indications: ① refinement degrees lower than 1/8 produce similar results; ② for single-corner refinement, 1/4 refinement is adequate for clearing the noise in the singularity zone to a large extent; ③ new noise around the interface between the coarse and fine zones is introduced by local grid refinement. Finally, refinement of the entire subzone neighboring the lid is examined to avoid introducing new noise, and it has been found effective.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 10432060).
Funding: Supported by the National Natural Science Foundation of China (62272078).
Funding: Supported by the Scientific Research Fund Project of the Yunnan Provincial Department of Education (2023J2034).
Funding: Supported in part by the Hubei Natural Science and Research Project under Grant 2020418, in part by the 2021 Light of Taihu Science and Technology Project, and in part by the 2022 Wuxi Science and Technology Innovation and Entrepreneurship Program.
Funding: The National Natural Science Foundation of China (41271393).
Funding: Supported by the Yeungnam College of Science & Technology Research Grants in 2012.
Abstract: Cross-training is a phenomenon related to motor learning, where motor performance of the untrained limb shows improvement in strength and skill execution following unilateral training of the homologous contralateral limb. We used functional MRI to investigate whether motor performance of the untrained limb could be improved using a serial reaction time task following motor sequence learning by the trained limb, and whether these skill acquisitions led to changes in brain activation patterns. We recruited 20 right-handed healthy subjects, who were randomly allocated into training and control groups. The training group was trained in performance of a serial reaction time task using their non-dominant left hand, 40 minutes per day, for 10 days, over a period of 2 weeks. The control group did not receive training. Measurements of response time and percentile of response accuracy were performed twice, pre- and post-training, while brain functional MRI was acquired during performance of the serial reaction time task using the untrained right hand. In the training group, prominent changes in response time and percentile of response accuracy were observed in both the untrained right hand and the trained left hand between pre- and post-training. The control group showed no significant changes in the untrained hand between pre- and post-training. In the training group, the activated volume of the cortical areas related to motor function (i.e., primary motor cortex, premotor area, posterior parietal cortex) showed a gradual decrease, and enhanced cerebellar activation of the vermis and a newly activated ipsilateral dentate nucleus were observed during performance of the serial reaction time task using the untrained right hand, accompanied by the cross-motor learning effect. However, no significant changes were observed in the control group.
Our findings indicate that motor skills learned over the 2-week training using the trained limb were transferred to the opposite homologous limb, and motor skill acquisition of the untrained limb led to changes in brain activation patterns in the cerebral cortex and cerebellum.
Abstract: Brain tissue is one of the softest parts of the human body, composed of white matter and grey matter. The mechanical behavior of brain tissue plays an essential role in regulating brain morphology and brain function. Besides, traumatic brain injury (TBI) and various brain diseases are also greatly influenced by the brain's mechanical properties. Whether white matter or grey matter, brain tissue contains multiscale structures composed of neurons, glial cells, fibers, blood vessels, etc., each with different mechanical properties. As such, brain tissue exhibits complex mechanical behavior, usually with strong nonlinearity, heterogeneity, and directional dependence. Building a constitutive law for multiscale brain tissue using traditional function-based approaches can be very challenging. Instead, this paper proposes a data-driven approach to establish the desired mechanical model of brain tissue. We focus on blood vessels with internal pressure embedded in a white or grey matter matrix material to demonstrate our approach. The matrix is described by an isotropic or anisotropic nonlinear elastic model. A representative unit cell (RUC) with blood vessels is built, which is used to generate stress-strain data under different internal blood pressures and various proportional displacement loading paths. The generated stress-strain data are then used to train a mechanical law using artificial neural networks to predict the macroscopic mechanical response of brain tissue under different internal pressures. Finally, the trained material model is implemented in finite element software to predict the mechanical behavior of a whole brain under intracranial pressure and distributed body forces. Compared with a direct numerical simulation that employs a reference material model, our proposed approach greatly reduces the computational cost and improves modeling efficiency. The predictions made by our trained model demonstrate sufficient accuracy. Specifically, we find that the level of internal blood pressure can greatly influence the stress distribution and determine the possible related damage behaviors.
Abstract: In current research on task offloading and resource scheduling in vehicular networks, vehicles are commonly assumed to maintain constant speed or relatively stationary states, and the impact of speed variations on task offloading is often overlooked. It is frequently assumed that vehicles can be accurately modeled during actual motion processes. However, in vehicular dynamic environments, both the tasks generated by the vehicles and the vehicles' surroundings are constantly changing, making it difficult to achieve real-time modeling for actual dynamic vehicular network scenarios. Taking into account actual dynamic vehicular scenarios, this paper considers the real-time non-uniform movement of vehicles and proposes a vehicular task dynamic offloading and scheduling algorithm for single-task multi-vehicle vehicular network scenarios, attempting to solve the dynamic decision-making problem in the task offloading process. The optimization objective is to minimize the average task completion time, which is formulated as a multi-constrained non-linear programming problem. Due to the mobility of vehicles, a constraint model is applied in the decision-making process to dynamically determine whether the communication range is sufficient for task offloading and transmission. Finally, the proposed vehicular task dynamic offloading and scheduling algorithm based on multi-agent deep deterministic policy gradient (MADDPG) is applied to solve the optimal solution of the optimization problem. Simulation results show that the algorithm proposed in this paper is able to achieve lower-latency task computation offloading. Meanwhile, the average task completion time of the proposed algorithm can be improved by 7.6% compared to the performance of the MADDPG scheme and 51.1% compared to the performance of deep deterministic policy gradient (DDPG).
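The communication-range constraint described above can be illustrated with a minimal sketch: before offloading, check that a moving vehicle stays inside a roadside unit's coverage for the whole transmission window. The 1-D road, constant-speed-over-the-window assumption, and all names are illustrative, not the paper's actual constraint model.

```python
# Hedged sketch: is the communication range sufficient for the whole
# offloading transmission, given the vehicle keeps moving?
def can_offload(vehicle_pos, vehicle_speed, rsu_pos, rsu_radius, tx_time):
    """1-D road; speed assumed constant over the short tx window.
    Returns True if both the start and end positions lie within the
    RSU's coverage radius."""
    end_pos = vehicle_pos + vehicle_speed * tx_time
    return (abs(vehicle_pos - rsu_pos) <= rsu_radius
            and abs(end_pos - rsu_pos) <= rsu_radius)
```

A scheduler would evaluate this predicate at decision time and fall back to local execution (or another server) when it fails.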
Funding: Supported by the National Natural Science Foundation of China (61573017, 61703425), the Aeronautical Science Fund (20175796014), and the Shaanxi Province Natural Science Foundation (2016JQ6062, 2017JM6062).
Abstract: To solve the problem of distributed task-platform scheduling in a holonic command and control (C2) organization, the basic elements of the organization are analyzed first and a formal description of the organizational elements and structure is provided. Based on the improvement of task execution quality, a single-task resource scheduling model is established and a solving method based on the m-best algorithm is proposed. To address the problem that the tactical decision-holon cannot handle low-priority tasks effectively, a distributed resource scheduling collaboration mechanism based on platform pricing and a platform exchange mechanism based on resource capacities are designed. Finally, a series of experiments is designed to prove the effectiveness of these methods. The results show that the proposed distributed scheduling methods can realize an effective balance of platform resources.
Funding: National Natural Science Foundation of China (62072392).
Abstract: Crowdsourcing technology is widely recognized for its effectiveness in task scheduling and resource allocation. While traditional methods for task allocation can help reduce costs and improve efficiency, they may encounter challenges when dealing with abnormal data flow nodes, leading to decreased allocation accuracy and efficiency. To address these issues, this study proposes a novel two-part invalid-detection task allocation framework. In the first step, an anomaly detection model is developed using a dynamic self-attentive GAN to identify anomalous data. Compared to the baseline method, the model achieves an approximately 4% increase in the F1 value on the public dataset. In the second step of the framework, task allocation modeling is performed using a bipartite graph matching method. This phase introduces a P-queue KM algorithm that implements a more efficient optimization strategy. The allocation efficiency is improved by approximately 23.83% compared to the baseline method. Empirical results confirm the effectiveness of the proposed framework in detecting abnormal data nodes, enhancing allocation precision, and achieving efficient allocation.
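To illustrate the bipartite-matching step, here is a minimal sketch of unweighted task-worker matching via Kuhn's augmenting-path algorithm. The paper's P-queue KM algorithm is a weighted (Kuhn-Munkres) variant with a priority-queue optimization; this simpler unweighted version only conveys the matching idea, and all names are assumptions.

```python
# Hedged sketch: maximum bipartite matching of tasks to workers
# (Kuhn's augmenting-path algorithm, unweighted).
def max_bipartite_matching(adj):
    """adj[t] = list of workers eligible for task t.
    Returns {task: worker} of maximum cardinality."""
    match_w = {}  # worker -> task currently holding that worker

    def try_task(t, seen):
        for w in adj[t]:
            if w in seen:
                continue
            seen.add(w)
            # Take w if free, or if its current task can be re-routed.
            if w not in match_w or try_task(match_w[w], seen):
                match_w[w] = t
                return True
        return False

    for t in range(len(adj)):
        try_task(t, set())
    return {t: w for w, t in match_w.items()}
```

In the weighted KM setting, eligibility would additionally depend on dual potentials over task/worker scores rather than a plain adjacency list.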
Abstract: The rapid development of Internet of Things (IoT) technology has led to a significant increase in the computational task load of Terminal Devices (TDs). TDs reduce response latency and energy consumption with the support of task offloading in Multi-access Edge Computing (MEC). However, existing task-offloading optimization methods typically assume that MEC's computing resources are unlimited, and there is a lack of research on the optimization of task offloading when MEC resources are exhausted. In addition, existing solutions only decide whether to accept an offloaded task request based on the single decision result of the current time slot, and lack support for multiple retries in subsequent time slots. As a result, TDs miss potential offloading opportunities in the future. To fill this gap, we propose a Two-Stage Offloading Decision-making Framework (TSODF) with request holding and dynamic eviction. Long Short-Term Memory (LSTM)-based task-offloading request prediction and MEC resource release estimation are integrated to infer the probability of a request being accepted in a subsequent time slot. The framework continuously learns optimized decision-making experiences to increase the success rate of task offloading based on deep learning technology. Simulation results show that TSODF reduces the total TD energy consumption and delay for task execution and improves the task offloading rate and system resource utilization compared to the benchmark method.
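The request-holding and dynamic-eviction idea can be sketched with a simple slot-by-slot simulation: a request rejected in its arrival slot is held and retried in later slots, and evicted once it has waited too long. The FIFO policy, per-slot capacity, and all names are illustrative assumptions; TSODF itself drives these decisions with LSTM-based predictions rather than a fixed rule.

```python
from collections import deque

# Hedged sketch: per-slot admission with request holding and eviction.
def run_slots(arrivals, capacity, max_hold):
    """arrivals[slot] -> list of task ids; at most `capacity` tasks are
    accepted per slot. Unaccepted tasks are held (FIFO) and retried in
    later slots, then evicted after waiting more than `max_hold` slots."""
    held = deque()              # (task_id, arrival_slot)
    accepted, dropped = [], []
    for slot in range(len(arrivals) + max_hold + 1):
        # Dynamic eviction: drop requests held too long.
        while held and slot - held[0][1] > max_hold:
            dropped.append(held.popleft()[0])
        for tid in (arrivals[slot] if slot < len(arrivals) else []):
            held.append((tid, slot))
        budget = capacity
        while held and budget > 0:  # serve oldest requests first
            accepted.append(held.popleft()[0])
            budget -= 1
    return accepted, dropped
```

With capacity 1 and a hold limit of 1 slot, three simultaneous arrivals yield two acceptances across consecutive slots and one eviction, matching the intuition that holding recovers offloading opportunities the single-slot decision would lose.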
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 62071377, 62101442, 62201456), the Natural Science Foundation of Shaanxi Province (Grant Nos. 2023-YBGY-036, 2022JQ-687), and the Graduate Student Innovation Foundation Project of Xi'an University of Posts and Telecommunications under Grant CXJJDL2022003.
Abstract: The Internet of Medical Things (IoMT) is regarded as a critical technology for intelligent healthcare in the foreseeable 6G era. Nevertheless, due to the limited computing capability of edge devices and task-related coupling relationships, IoMT faces unprecedented challenges. Considering the associative connections among tasks, this paper proposes a computing offloading policy for multiple user devices (UDs) that considers device-to-device (D2D) communication and a multi-access edge computing (MEC) technique under the IoMT scenario. Specifically, to minimize the total delay and energy consumption concerning the requirements of IoMT, we first analyze and model the detailed local execution, MEC execution, D2D execution, and associated task-offloading exchange models. Consequently, the associated-task offloading scheme of multi-UDs is formulated as a mixed-integer non-convex optimization problem. Considering the advantages of deep reinforcement learning (DRL) in processing tasks with coupling relationships, a Double-DQN-based associative tasks computing offloading (DDATO) algorithm is then proposed to obtain the optimal solution, which can make the best offloading decision under the condition that the tasks of UDs are associative. Furthermore, to reduce the complexity of the DDATO algorithm, a cache-aided procedure is intentionally introduced before the data training process. This avoids redundant offloading and computing procedures for tasks that have previously been cached by other UDs. In addition, we use a dynamic ε-greedy strategy in the action selection section of the algorithm, thus preventing the algorithm from falling into a locally optimal solution. Simulation results demonstrate that, compared with other existing methods for associative task models concerning different structures in the IoMT network, the proposed algorithm can lower the total cost more effectively and efficiently while also providing a tradeoff between delay and energy consumption tolerance.
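A dynamic ε-greedy action selector of the kind mentioned above can be sketched in a few lines: ε starts high (mostly random exploration) and decays toward a floor, after which the agent mostly exploits the highest Q-value. The decay schedule and default parameters here are illustrative assumptions, not the DDATO paper's settings.

```python
import random

# Hedged sketch: ε-greedy selection with a decaying exploration rate.
def make_epsilon_greedy(eps_start=1.0, eps_min=0.05, decay=0.995):
    """Returns a selector over Q-value lists; ε is multiplied by
    `decay` on every call, never dropping below `eps_min`."""
    state = {"eps": eps_start}

    def select(q_values):
        eps = state["eps"]
        state["eps"] = max(eps_min, eps * decay)
        if random.random() < eps:
            return random.randrange(len(q_values))  # explore
        # exploit: index of the largest Q-value
        return max(range(len(q_values)), key=q_values.__getitem__)

    return select
```

Keeping ε above a floor (rather than letting it reach zero) is what prevents the policy from locking into a locally optimal action set.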
Abstract: With the advancement of technology and the continuous innovation of applications, low-latency applications such as drones, online games and virtual reality are gradually becoming popular demands in modern society. However, these applications pose a great challenge to the traditional centralized mobile cloud computing paradigm, and it is obvious that the traditional cloud computing model is already struggling to meet such demands. To address the shortcomings of cloud computing, mobile edge computing has emerged. Mobile edge computing provides users with computing and storage resources by offloading computing tasks to servers at the edge of the network. However, most existing work considers only single-objective performance optimization in terms of latency or energy consumption, rather than balanced optimization of both. To reduce task latency and device energy consumption, the problem of joint optimization of computation offloading and resource allocation in multi-cell, multi-user, multi-server MEC environments is investigated. In this paper, a dynamic computation offloading algorithm based on Multi-Agent Deep Deterministic Policy Gradient (MADDPG) is proposed to obtain the optimal policy. The experimental results show that the algorithm proposed in this paper reduces the delay by 5 ms compared to PPO, 1.5 ms compared to DDPG and 10.7 ms compared to DQN, and reduces the energy consumption by 300 compared to PPO, 760 compared to DDPG and 380 compared to DQN. This fully demonstrates that the proposed algorithm has excellent performance.
Abstract: Reliability, QoS and energy consumption are three important concerns of cloud service providers. Most current research on reliable task deployment in cloud computing focuses on only one or two of these three concerns. However, these three factors have intrinsic trade-off relationships. Existing studies show that load concentration can reduce the number of servers and hence save energy. In this paper, we deal with the problem of reliable task deployment in data centers, with the goal of minimizing the number of servers used in cloud data centers under the constraint that the job execution deadline can be met upon a single server failure. We propose a QoS-constrained, Reliable and Energy-efficient task replica deployment (QSRE) algorithm for the problem by combining task replication and re-execution. For each task in a job that cannot finish executing by re-execution within the deadline, we initiate two replicas for the task: a main task and a task replica. Each main task runs on an individual server. The associated task replica is deployed on a backup server and completes part of the whole task load before the main task fails. Different from the main tasks, multiple task replicas can be allocated to the same backup server to reduce the energy consumption of cloud data centers by minimizing the number of servers required for running the task replicas. Specifically, QSRE assigns the task replicas with the longest and the shortest execution times to the backup servers in turn, such that the task replicas can meet the QoS-specified job execution deadline under main task failure. We conduct experiments through simulations. The experimental results show that QSRE can effectively reduce the number of servers used, while ensuring the reliability and QoS of job execution.
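One reading of the longest/shortest-in-turn replica placement is sketched below: alternately take the longest-running and shortest-running remaining replica and deal them round-robin onto the backup servers, so long and short replicas are balanced across servers. This interpretation and all names are assumptions for illustration; the QSRE paper additionally checks the QoS deadline when placing each replica, which this sketch omits.

```python
# Hedged sketch: deal replicas longest/shortest in turn, round-robin
# over backup servers (one possible reading of QSRE's placement rule).
def pair_replicas(exec_times, n_backups):
    """exec_times: per-replica execution times.
    Returns a list of replica-index lists, one per backup server."""
    order = sorted(range(len(exec_times)), key=exec_times.__getitem__)
    lo, hi = 0, len(order) - 1
    servers = [[] for _ in range(n_backups)]
    s, take_long = 0, True
    while lo <= hi:
        idx = order[hi] if take_long else order[lo]
        if take_long:
            hi -= 1
        else:
            lo += 1
        servers[s].append(idx)
        take_long = not take_long
        s = (s + 1) % n_backups
    return servers
```

For example, replicas with times [5, 1, 3, 2] on two backup servers are dealt as (longest, shortest, next-longest, next-shortest), landing the long and short replicas on alternating servers.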
Abstract: The IEC 61131-3 standard defines a model and a set of programming languages for the development of industrial automation software. It is widely accepted by industry, and most commercial tool vendors advertise compliance with it. On the other hand, Model Driven Development (MDD) has proved to be a quite successful paradigm in general-purpose computing. This was the motivation for exploiting the benefits of MDD in the industrial automation domain. With the emerging IEC 61131 specification that defines an object-oriented (OO) extension to the function block model, there will be a push for industry to better exploit the benefits of MDD in automation systems development. This work discusses possible alternatives for integrating the current as well as the emerging specification of IEC 61131 into the model-driven development process of automation systems. IEC 61499, UML and SysML are considered as possible alternatives that allow the developer to work at higher layers of abstraction than the one supported by IEC 61131 and to move more effectively from requirement specifications to the implementation model of the system.
Funding: Supported by the Science and Technology Development Planning of Shandong Province, P.R. China (2016GGX104018).
Abstract: For the case of lid-driven deep cavity flow, the effects of different resolutions of local grid refinement have been studied in the framework of the multiple-relaxation-time (MRT) lattice Boltzmann method (LBM). In all cases, the aspect ratio and Reynolds number are set as 1.5 and 3200, respectively. First, the applied method is validated by comparison with two reported works, with which good agreement is reached. Then, six separate degrees of local grid refinement at the upper left corner, i.e. a purely coarse grid and 1/64, 1/32, 1/16, 1/8 and 1/4 refinements of the lattice number in the width direction, have been studied in detail. The results give the following indications: ① refinement degrees lower than 1/8 produce similar results; ② for single-corner refinement, 1/4 refinement is adequate for clearing the noise in the singularity zone to a large extent; ③ new noise around the interface between the coarse and fine zones is introduced by local grid refinement. Finally, refinement of the entire subzone neighboring the lid is examined to avoid introducing new noise, and it has been found effective.