The Internet of Things (IoT) connects objects to the Internet through sensor devices, radio frequency identification devices, and other information collection and processing devices to realize information interaction. IoT is widely used in many fields, including intelligent transportation, intelligent healthcare, intelligent home, and industry. In these fields, IoT devices are connected via high-speed Internet for efficient and reliable communications and faster response times.
For the reliability and power consumption issues of Ethernet data transmission based on the field programmable gate array (FPGA), a low-power design method suitable for FPGA implementation is proposed. To reduce the dynamic power consumption of integrated circuit (IC) design, the proposed method dynamically controls the clock frequency. For most of the time, when the port is in the idle state or a lower-rate state, users can reduce or even turn off the reading clock and lower the clock toggle frequency in order to reduce dynamic power consumption. When the receiving rate is high, the reading clock frequency is raised in time to ensure that no data are lost. As simulated and verified in ModelSim, the proposed method can dynamically control the clock frequency, including dynamic switching between high-speed and low-speed clock toggle rates and stopping the clock toggling entirely.
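The rate-adaptive clock policy described above can be modeled in software. The sketch below is a hypothetical illustration: the function name, clock frequencies, and rate threshold are assumptions for exposition, not values from the paper's FPGA design.

```python
# Hypothetical software model of the rate-adaptive read-clock policy.
# Frequencies and the threshold below are illustrative assumptions.

def select_read_clock(rx_rate_mbps, high_clk_mhz=125.0, low_clk_mhz=25.0,
                      high_threshold_mbps=100.0):
    """Return the read-clock frequency (MHz) for the current receive rate.

    - Idle port: stop the read clock entirely (0 MHz) to save dynamic power.
    - Low-rate traffic: use the slow clock to reduce toggling.
    - High-rate traffic: switch to the fast clock so no data are lost.
    """
    if rx_rate_mbps <= 0.0:
        return 0.0                 # gate the clock off when idle
    if rx_rate_mbps < high_threshold_mbps:
        return low_clk_mhz         # fewer clock toggles, lower dynamic power
    return high_clk_mhz            # full speed to drain the receive buffer
```

The key design point mirrored here is that the read clock only needs to run as fast as data actually arrives; in the RTL design this corresponds to switching or gating the reading clock domain.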
In classical smoothed particle hydrodynamics (SPH) fluid simulation approaches, the smoothing length of Lagrangian particles is typically constant. One major disadvantage is the lack of adaptiveness, which may compromise accuracy in fluid regions such as splashes and surfaces. Attempts to address this problem have used variable smoothing lengths. Yet the existing methods are computationally complex and inefficient, because the smoothing length is typically calculated using iterative optimization. Here, we propose an efficient non-iterative SPH fluid simulation method with variable smoothing length (VSLSPH). VSLSPH correlates the smoothing length to the density change and adaptively adjusts the smoothing length of particles with high accuracy and low computational cost, enabling large time steps. Our experimental results demonstrate the advantages of the VSLSPH approach in terms of simulation accuracy and efficiency.
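As a rough sketch of the density-to-smoothing-length coupling, the standard SPH mass-conservation relation ties each particle's support radius to its local density; VSLSPH's actual non-iterative update differs, so the function below is only an assumed textbook illustration.

```python
# Textbook SPH relation h_i = h0 * (rho0 / rho_i)^(1/d): sparse regions
# (splashes, free surfaces) get a larger support radius. This is an
# illustrative stand-in, not the VSLSPH update itself.

def smoothing_length(density, rest_density=1000.0, h0=0.1, dim=3):
    """Per-particle smoothing length from local density (non-iterative)."""
    return h0 * (rest_density / density) ** (1.0 / dim)
```

Because the relation is a closed-form function of density, it can be evaluated once per particle per step, which is the efficiency property the non-iterative approach aims for.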
Dear Editor, This letter presents a multi-automated guided vehicle (AGV) routing planning method based on deep reinforcement learning (DRL) and a recurrent neural network (RNN), specifically utilizing proximal policy optimization (PPO) and long short-term memory (LSTM).
Traditional Fuzzy C-Means (FCM) and Possibilistic C-Means (PCM) clustering algorithms are data-driven, and their objective function minimization process is based on the available numeric data. Recently, knowledge hints have been introduced to form knowledge-driven clustering algorithms, which reveal a data structure that considers not only the relationships between data but also the compatibility with knowledge hints. However, these algorithms cannot produce the optimal number of clusters by themselves; they require the assistance of evaluation indices. Moreover, knowledge hints are usually used as part of the data structure (directly replacing some clustering centers), which severely limits the flexibility of the algorithm and can lead to knowledge misguidance. To solve this problem, this study designs a new knowledge-driven clustering algorithm called PCM clustering with High-density Points (HP-PCM), in which domain knowledge is represented in the form of so-called high-density points. First, a new data density calculation function is proposed, and the Density Knowledge Points Extraction (DKPE) method is established to filter out high-density points from the dataset to form knowledge hints. Then, these hints are incorporated into the PCM objective function so that the clustering algorithm is guided by high-density points to discover the natural data structure. Finally, the initial number of clusters is set to be greater than the true one based on the number of knowledge hints, and the HP-PCM algorithm automatically determines the final number of clusters during the clustering process through its cluster elimination mechanism. Experimental studies, including comparative analyses, highlight the effectiveness of the proposed algorithm, such as an increased success rate in clustering, the ability to determine the optimal cluster number, and faster convergence.
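A minimal sketch of DKPE-style high-density point extraction follows. The neighbour-count density and the radius are illustrative assumptions; the paper's density function and filtering criteria differ.

```python
# Hypothetical sketch: extract the k highest-density points as knowledge
# hints. The neighbour-count density below is an assumption, not the
# paper's data density calculation function.

def point_density(points, i, radius=1.0):
    """Density of point i: how many other points lie within `radius`."""
    xi, yi = points[i]
    return sum(
        1 for (x, y) in points
        if (x - xi) ** 2 + (y - yi) ** 2 <= radius ** 2
    ) - 1  # exclude the point itself

def extract_knowledge_points(points, k=2, radius=1.0):
    """DKPE-style filtering: keep the k highest-density points."""
    order = sorted(range(len(points)),
                   key=lambda i: point_density(points, i, radius),
                   reverse=True)
    return [points[i] for i in order[:k]]
```

Points selected this way sit inside dense regions, which is why they can safely steer the PCM objective toward the natural cluster structure rather than replacing cluster centers outright.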
Monitoring various internal parameters plays a core role in ensuring the safety of lithium-ion batteries in power supply applications. It also influences the sustainability effect and online state of charge prediction. An improved multiple-feature electrochemical-thermal coupling modeling method is proposed that considers low-temperature performance degradation for the complete characteristic expression of multi-dimensional information, in order to obtain the parameter influence mechanism with a multi-variable coupling relationship. An optimized decoupled deviation strategy is constructed for accurate state of charge prediction with real-time correction of time-varying current and temperature effects. The innovative decoupling method is combined with the functional relationships of state of charge and open-circuit voltage to capture energy management effectively. Then, an adaptive equivalent-prediction model is constructed using the state-space equation and iterative feedback correction, making the proposed model adaptive to fractional calculation. The maximum state of charge estimation errors of the proposed method are 4.57% and 0.223% under the Beijing bus dynamic stress test and the dynamic stress test conditions, respectively. The improved multiple-feature electrochemical-thermal coupling modeling realizes effective correction of current and temperature variations with a noise-influencing coefficient, and provides an efficient state of charge prediction method adaptive to complex conditions.
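The state-space prediction step with feedback correction can be illustrated with a generic Coulomb-counting form. This is a textbook sketch under stated assumptions, not the paper's fractional-order electrochemical-thermal model; the function name and arguments are hypothetical.

```python
# Generic one-step SOC predictor: Coulomb counting plus an additive
# feedback correction term (e.g. from a voltage-based observer).
# Illustrative only; the paper's model is far more elaborate.

def soc_coulomb_step(soc, current_a, dt_s, capacity_ah, correction=0.0):
    """Predict the next state of charge (0..1).

    soc         current state of charge
    current_a   discharge current in amperes (positive = discharging)
    dt_s        time step in seconds
    capacity_ah nominal cell capacity in ampere-hours
    correction  feedback term that pulls the prediction toward measurement
    """
    soc_pred = soc - current_a * dt_s / (capacity_ah * 3600.0)
    return soc_pred + correction
```

The correction argument is where iterative feedback from the measured terminal voltage would enter in a full observer loop.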
This paper adopts the 3-3-2 information processing method for the capture of moving objects as its premise and proposes a basic principle of three-dimensional (3D) imaging using the biological compound eye. Traditional bionic vision is limited by the available hardware. Therefore, in this paper, the new-generation technology of the microlens-array light-field camera is proposed as a potential method for extracting depth information from a single image. A significant characteristic of light-field imaging is that it records both the intensity and the direction of the light entering the camera. Herein, a refocusing method using light-field images is proposed. By calculating the focusing cost at different depths, the imaging plane of the object is determined, and a depth map is constructed based on the position of the object's imaging plane. Compared with traditional light-field depth estimation, the depth map calculated by this method significantly improves resolution and does not depend on the number of light-field microlenses. In addition, considering that software algorithms rely on the hardware structure, this study develops imaging hardware only 7 cm long based on the second-generation microlens camera's structure, further validating its important refocusing characteristics. It thereby provides a technical foundation for 3D imaging with a single camera.
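The depth-from-focus step can be sketched as follows: refocus at candidate depths, score each refocused slice with a focus measure, and pick the sharpest. The gradient-based focus measure below is an assumption for illustration; the paper's focusing cost may be defined differently.

```python
# Illustrative depth-from-focus selection over a pre-computed refocused
# stack (depth -> 1-D image slice). The squared-gradient sharpness
# measure is an assumption, not the paper's focusing cost.

def sharpness(image_row):
    """Focus measure: sum of squared neighbour differences (higher = sharper)."""
    return sum((b - a) ** 2 for a, b in zip(image_row, image_row[1:]))

def best_focus_depth(refocused_stack):
    """Return the candidate depth whose refocused slice is sharpest."""
    return max(refocused_stack, key=lambda d: sharpness(refocused_stack[d]))
```

Evaluating the focus measure per pixel neighbourhood rather than per microlens is what lets the resulting depth map exceed the microlens-count resolution limit mentioned above.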
Most existing domain adaptation (DA) methods aim to achieve favorable performance under complicated environments by sampling. However, three unsolved problems limit their efficiency: i) they adopt global sampling but neglect to exploit global and local sampling simultaneously; ii) they transfer knowledge from either a global or a local perspective, overlooking the transmission of confident knowledge from both perspectives; and iii) they apply repeated sampling during iteration, which takes a lot of time. To address these problems, knowledge transfer learning via dual density sampling (KTL-DDS) is proposed in this study, which consists of three parts: i) dual density sampling (DDS), which jointly leverages two sampling methods associated with different views, i.e., global density sampling that extracts representative samples with the most common features, and local density sampling that selects representative samples with critical boundary information; ii) consistent maximum mean discrepancy (CMMD), which reduces intra- and cross-domain risks and guarantees high consistency of knowledge by shortening the distances between every two of the four subsets collected by DDS; and iii) knowledge dissemination (KD), which transmits confident and consistent knowledge from the representative target samples with global and local properties to the whole target domain by preserving the neighboring relationships of the target domain. Mathematical analyses show that DDS avoids repeated sampling during iteration. With the above three actions, confident knowledge with both global and local properties is transferred, and the memory and running time are greatly reduced. In addition, a general framework named dual density sampling approximation (DDSA) is derived, which can be easily applied to other DA algorithms. Extensive experiments on five datasets in clean, label corruption (LC), feature missing (FM), and LC&FM environments demonstrate the encouraging performance of KTL-DDS.
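The CMMD component builds on the empirical maximum mean discrepancy (MMD) between sample subsets. A minimal linear-kernel version is sketched below as an illustration of the basic distance being shortened; CMMD's exact formulation over the four DDS subsets is more involved.

```python
# Empirical MMD with a linear kernel: the squared Euclidean distance
# between the feature means of two sample sets. Illustrative sketch,
# not the paper's exact CMMD objective.

def mmd(source, target):
    """Linear-kernel MMD between two lists of feature vectors."""
    def mean(samples):
        return [sum(col) / len(samples) for col in zip(*samples)]
    ms, mt = mean(source), mean(target)
    return sum((a - b) ** 2 for a, b in zip(ms, mt))
```

Driving this quantity toward zero for every pair of subsets is what "shortening the distances of every two subsets" refers to above.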
Workflow scheduling is a key issue and remains a challenging problem in cloud computing. Faced with the large number of virtual machine (VM) types offered by cloud providers, cloud users need to choose the most appropriate VM type for each task. Multiple task scheduling sequences exist in a workflow application, and different sequences have a significant impact on scheduling performance. It is not easy to determine the most appropriate set of VM types for the tasks and the best task scheduling sequence. Besides, the idle time slots on VM instances should be fully used to increase resource utilization and reduce the execution cost of a workflow. This paper considers these three aspects simultaneously and proposes a cloud workflow scheduling approach that combines particle swarm optimization (PSO) with idle time-slot-aware rules to minimize the execution cost of a workflow application under a deadline constraint. A new particle encoding is devised to represent the VM type required by each task and the scheduling sequence of tasks. An idle time-slot-aware decoding procedure is proposed to decode a particle into a scheduling solution. To handle invalid task priorities caused by the randomness of PSO, a repair method is used to produce valid task scheduling sequences. The proposed approach is compared with state-of-the-art cloud workflow scheduling algorithms. Experiments show that it outperforms the comparative algorithms in terms of both execution cost and the success rate in meeting the deadline.
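One plausible reading of the two-part particle encoding is sketched below. The split into a VM-type part and a priority part, and all names, are assumptions mirroring the description above, not the paper's exact scheme, and the sketch omits the idle time-slot-aware rules and the priority repair step.

```python
# Hypothetical decode of a PSO particle into a scheduling solution:
#   - first n_tasks dimensions -> VM type index per task
#   - last n_tasks dimensions  -> task priorities; sorting them yields
#     the task scheduling sequence.
# Assumed encoding for illustration only.

def decode_particle(position, n_tasks, n_vm_types):
    """Return (vm_type_per_task, task_sequence) from a particle position."""
    vm_part, prio_part = position[:n_tasks], position[n_tasks:]
    vm_types = [min(int(v), n_vm_types - 1) for v in vm_part]  # clamp to range
    sequence = sorted(range(n_tasks), key=lambda t: prio_part[t])
    return vm_types, sequence
```

In a full implementation the decoded sequence would then be checked against the workflow's precedence constraints, which is where the repair method comes in.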
Burden distribution is one of the most important operations, and also an important upper regulation, in the blast furnace (BF) iron-making process. Burden distribution output behaviors (BDOB) at the throat of the BF form a 3-dimensional spatial distribution produced by the burden distribution matrix (BDM), including the burden surface output shape (BSOS) and the material layer initial thickness distribution (MLITD). Due to the lack of an effective model describing the complex input-output relations, BDM optimization and adjustment are carried out by experienced foremen. Focusing on this practical challenge, this work studies the complex burden distribution input-output relations and gives a description of the expected MLITD under a specific integral constraint on the basis of engineering practice. Furthermore, according to the decision variables in different number fields, this work studies the optimization of the BDM with the expected MLITD and proposes a multi-mode particle swarm optimization (PSO) procedure for optimizing the decision variables. Finally, experiments using industrial data show that the proposed model is effective, and the optimized BDM calculated by this multi-mode PSO method can be used for expected distribution tracking.
This paper proposes low-complexity algorithms for active user detection (AUD), channel estimation (CE), and multi-user detection (MUD) in uplink non-orthogonal multiple access (NOMA) systems, covering both single-carrier and multi-carrier cases. In particular, we first propose a novel algorithm to estimate the active users and the channels in the single-carrier case based on the complex alternating direction method of multipliers (ADMM), where the fast-decaying feature of the non-zero components in the sparse signal is considered. More importantly, the reliable estimated information is used for AUD, and the unreliable information is further handled based on the estimated symbol energy and the exact or approximate total number of active users. The proposed AUD algorithm for the single-carrier model is then extended to the multi-carrier case by exploiting the block sparse structure. Besides, we propose a low-complexity MUD algorithm based on alternating minimization to estimate the active users' data, which avoids the Hessian matrix inverse. Finally, the convergence and complexity of the proposed algorithms are analyzed and discussed. Simulation results show that the proposed algorithms achieve better performance in terms of AUD, CE, and MUD. Moreover, active users can be detected perfectly in the multi-carrier NOMA system.
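ADMM-type sparse recovery typically relies on a proximal shrinkage step that drives small signal components to exactly zero, which is how inactive users are separated from active ones. The soft-thresholding operator below is the standard textbook form of that step, shown as a hedged illustration rather than the paper's exact algorithm.

```python
# Soft-thresholding: the proximal operator of the L1 norm, the usual
# per-component shrinkage step inside ADMM-based sparse recovery.
# Generic textbook form, not the paper's exact update.

def soft_threshold(x, tau):
    """Shrink x toward zero by tau; components within [-tau, tau] become 0."""
    if x > tau:
        return x - tau
    if x < -tau:
        return x + tau
    return 0.0
```

Components zeroed by this step correspond to users declared inactive; the surviving components feed the subsequent channel and data estimation.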
With the emergence of large-scale knowledge bases, how to use triple information to generate natural questions is a key technology in question answering systems. Traditional ways of generating questions require a lot of manual intervention and produce lots of noise. To solve these problems, we propose a joint model based on a semi-automated model and an end-to-end neural network to automatically generate questions. The semi-automated model can generate question templates and real questions by combining the knowledge base and the center graph. The end-to-end neural network directly feeds the knowledge base and real questions into a BiLSTM network. Meanwhile, an attention mechanism is utilized in the decoding layer, which makes the triples and generated questions more relevant. Finally, experimental results on SimpleQuestions demonstrate the effectiveness of the proposed approach.
Because different single kernel functions yield classification models of different quality in protein-protein interaction (PPI) extraction, and a single kernel function can hardly train the optimal classification model, this paper presents a strategy to find the optimal kernel function from a kernel function set. Starting from a set of different single kernel functions, the strategy repeatedly takes the two worst-performing kernel functions for PPI extraction and replaces them with their optimal convex combination, until only one kernel function remains; this final kernel function is the optimal one, and PPI is then extracted using the classification model trained with it. A PPI extraction experiment was conducted on the AIMed corpus. The experimental results show that the optimal convex combination kernel function presented in this paper effectively improves extraction performance over single kernel functions and achieves the best precision, 65.0, among similar PPI extraction systems.
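The building block of the strategy above is the convex combination of two kernels: for alpha in [0, 1], alpha*k1 + (1-alpha)*k2 is itself a valid kernel. A minimal sketch with illustrative linear and quadratic kernels (the kernels actually used for PPI extraction would operate on richer features):

```python
# Convex combination of two kernel functions. The linear and quadratic
# kernels below are generic illustrations, not the paper's PPI kernels.

def convex_combination(k1, k2, alpha):
    """Return the kernel alpha*k1 + (1-alpha)*k2, valid for 0 <= alpha <= 1."""
    assert 0.0 <= alpha <= 1.0
    return lambda x, y: alpha * k1(x, y) + (1.0 - alpha) * k2(x, y)

def linear(x, y):
    return sum(a * b for a, b in zip(x, y))

def quadratic(x, y):
    return (1.0 + linear(x, y)) ** 2

combined = convex_combination(linear, quadratic, 0.5)
```

In the full strategy, alpha would be tuned on held-out data each time two kernels are merged, so the set shrinks by one kernel per round until a single combined kernel remains.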
With the rapid development of marine activities, there has been increasing use of Internet-of-Things (IoT) devices for maritime applications. This leads to a growing demand for high-speed and ultra-reliable maritime communications. Current maritime communication networks (MCNs) mainly rely on satellites and on-shore base stations (BSs). The former generally provide limited transmission rates, while the latter lack wide-area coverage capability. As a result, the development of current MCNs lags far behind that of the terrestrial fifth-generation (5G) network.
This special issue is dedicated to security problems in wireless and quantum communications. Papers for this issue were invited, and after peer review, eight were selected for publication. The first part of this issue comprises four papers on recent advances in physical layer security for wireless networks. The second part comprises another four papers on quantum communications.
Particle swarm optimization (PSO) is a type of swarm intelligence algorithm that is frequently used to solve specific global optimization problems due to its rapid convergence and ease of operation. However, PSO still has certain deficiencies, such as a poor trade-off between exploration and exploitation and premature convergence. Hence, this paper proposes a dual-stage hybrid learning particle swarm optimization (DHLPSO). In the algorithm, the iterative process is partitioned into two stages, whose learning strategies emphasize exploration and exploitation, respectively. In the first stage, to increase population variety, a Manhattan-distance-based learning strategy is proposed, in which each particle learns from the particle at the furthest Manhattan distance and from a better particle. In the second stage, an excellent-example learning strategy is adopted to perform local optimization on the population, in which each particle learns from the global optimal particle and a better particle. Utilizing a Gaussian mutation strategy, the algorithm's searchability on particular multimodal functions is significantly enhanced. On benchmark functions from CEC 2013, DHLPSO is evaluated alongside existing PSO variants. The comparison results clearly demonstrate that, compared with other cutting-edge PSO variants, DHLPSO delivers highly competitive performance in handling global optimization problems.
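The first-stage exemplar selection can be sketched directly: each particle finds the population member at maximum Manhattan (L1) distance from itself, which pushes learning toward unexplored regions. Function and variable names here are illustrative.

```python
# First-stage exemplar choice in the Manhattan-distance-based learning
# strategy: pick the particle farthest from particle i in L1 distance.
# Names are illustrative; the full DHLPSO update also uses a better
# particle and velocity terms not shown here.

def furthest_manhattan(particles, i):
    """Index of the particle at maximum Manhattan distance from particle i."""
    def l1(p, q):
        return sum(abs(a - b) for a, b in zip(p, q))
    candidates = [j for j in range(len(particles)) if j != i]
    return max(candidates, key=lambda j: l1(particles[i], particles[j]))
```

Choosing a maximally distant exemplar in the exploration stage is what maintains population variety before the second, exploitation-focused stage takes over.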
Funding (Ethernet low-power FPGA design): supported by the Natural Science Foundation of China under Grants No.61376024 and No.61306024, the Natural Science Foundation of Guangdong Province under Grant No.S2013040014366, and the Basic Research Programme of Shenzhen under Grants No.JCYJ20140417113430642 and No.JCYJ20140901003939020.
Funding (VSLSPH fluid simulation): supported by the Key Program of the National Natural Science Foundation of China, No.62237001; the National Natural Science Foundation for Excellent Young Scholars, No.6212200101; the National Natural Science Foundation General Program, Nos.62176066 and 61976052; the Guangdong Provincial Science and Technology Innovation Strategy Fund, No.2019B121203012; and the Guangzhou Science and Technology Plan, No.202007040005.
Funding (multi-AGV routing planning): supported by the National Natural Science Foundation of China (62202352, 61902039, 61972300), the Basic and Applied Basic Research Program of Guangdong Province (2021A1515110518), and the Key Research and Development Program of Shaanxi Province (2020ZDLGY09-04).
Funding (HP-PCM clustering): supported by the National Key Research and Development Program of China (No.2022YFB3304400), the National Natural Science Foundation of China (Nos.6230311, 62303111, 62076060, 61932007, and 62176083), and the Key Research and Development Program of Jiangsu Province of China (No.BE2022157).
Funding (lithium-ion battery state of charge prediction): supported by the National Natural Science Foundation of China (No.62173281), the Natural Science Foundation of Sichuan Province (No.23ZDYF0734 and No.2023NSFSC1436), and the Fund of the Robot Technology Used for Special Environment Key Laboratory of Sichuan Province (No.18kftk03).
Funding (light-field 3D imaging): supported by the National Major Project Research and Development Project (2017YFB0503003) and the National Natural Science Foundation of China (61101157, 60602042).
Funding (KTL-DDS domain adaptation): supported in part by the Key-Area Research and Development Program of Guangdong Province (2020B010166006), the National Natural Science Foundation of China (61972102), the Guangzhou Science and Technology Plan Project (023A04J1729), and the Science and Technology Development Fund (FDCT), Macao SAR (015/2020/AMJ).
Author affiliation: School of Computing Science, Beijing University of Posts and Telecommunications, Beijing 100876, and the Key Laboratory of Trustworthy Distributed Computing and Service (BUPT), Ministry of Education, Beijing 100876, China (e-mail: zuoxq@bupt.edu.cn). Funding: supported in part by the National Natural Science Foundation of China (61874204, 61663028, 61703199) and the Science and Technology Plan Project of Jiangxi Provincial Education Department (GJJ190959).
Abstract: Workflow scheduling is a key issue and remains a challenging problem in cloud computing. Faced with the large number of virtual machine (VM) types offered by cloud providers, cloud users need to choose the most appropriate VM type for each task. Multiple task scheduling sequences exist in a workflow application, and different sequences have a significant impact on scheduling performance. It is not easy to determine the most appropriate set of VM types for the tasks and the best task scheduling sequence. In addition, the idle time slots on VM instances should be fully used to increase resource utilization and reduce the execution cost of a workflow. This paper considers these three aspects simultaneously and proposes a cloud workflow scheduling approach that combines particle swarm optimization (PSO) with idle time slot-aware rules to minimize the execution cost of a workflow application under a deadline constraint. A new particle encoding is devised to represent the VM type required by each task and the scheduling sequence of tasks. An idle time slot-aware decoding procedure is proposed to decode a particle into a scheduling solution. To handle invalid task priorities caused by the randomness of PSO, a repair method is used to produce valid task scheduling sequences. The proposed approach is compared with state-of-the-art cloud workflow scheduling algorithms. Experiments show that it outperforms the comparative algorithms in terms of both execution cost and the success rate in meeting the deadline.
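The decoding of a priority vector into a valid task order can be illustrated with a simple list-scheduling rule: repeatedly pick, among tasks whose predecessors have all been scheduled, the one with the highest priority. This is only a sketch under assumed data structures (a predecessor dictionary and one real-valued priority per task), not the paper's exact decoding or repair procedure:

```python
def decode_sequence(priority, preds):
    """Turn a (possibly invalid) priority vector into a valid task order.

    priority: list of floats, one per task (higher = scheduled earlier).
    preds: dict mapping a task index to the list of its predecessor tasks.
    """
    n = len(priority)
    scheduled, sequence = set(), []
    while len(sequence) < n:
        # Only tasks whose predecessors are all scheduled are candidates.
        ready = [t for t in range(n)
                 if t not in scheduled
                 and all(p in scheduled for p in preds.get(t, []))]
        best = max(ready, key=lambda t: priority[t])
        sequence.append(best)
        scheduled.add(best)
    return sequence
```

For a diamond-shaped workflow 0 → {1, 2} → 3, any priority vector decodes into a sequence that respects precedence, which is exactly what a repair step must guarantee.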
Funding: supported by the National Natural Science Foundation of China (61763038, 61763039, 61621004, 61790572, 61890934, 61973137) and the Fundamental Research Funds for the Central Universities (N180802003).
Abstract: Burden distribution is one of the most important operations, and an important upper regulation, in the blast furnace (BF) iron-making process. The burden distribution output behavior (BDOB) at the throat of the BF is a 3-dimensional spatial distribution produced by the burden distribution matrix (BDM), including the burden surface output shape (BSOS) and the material layer initial thickness distribution (MLITD). Because no effective model describes these complex input-output relations, BDM optimization and adjustment are carried out by experienced foremen. Focusing on this practical challenge, this work studies the complex burden distribution input-output relations and gives a description of the expected MLITD under a specific integral constraint, on the basis of engineering practice. Furthermore, according to the decision variables in different number fields, this work studies the optimization of BDM with the expected MLITD and proposes a multi-mode particle swarm optimization (PSO) procedure for optimizing the decision variables. Finally, experiments using industrial data show that the proposed model is effective, and the BDM calculated by this multi-mode PSO method can be used for expected distribution tracking.
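The integral constraint on the expected MLITD can be pictured as a volume normalization: the radial thickness profile over the throat must integrate (over the annular cross-section) to the charged batch volume. A hedged one-function sketch, where the uniform radial grid and the target volume are illustrative assumptions rather than plant values:

```python
import math

def scale_to_volume(radii, thickness, target_volume):
    """Scale a radial thickness profile h(r) so that the annular integral
    V = sum(2 * pi * r * h(r) * dr) matches the charged batch volume."""
    dr = radii[1] - radii[0]  # assumes a uniform radial grid
    volume = sum(2.0 * math.pi * r * h * dr for r, h in zip(radii, thickness))
    factor = target_volume / volume
    return [h * factor for h in thickness]
```

Any candidate MLITD produced during optimization can be projected back onto the constraint this way before its fitness is evaluated.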
Funding: supported by the National Natural Science Foundation of China (NSFC) under Grant No. 62001190. The work of J. Wen was supported by NSFC (Nos. 11871248, 61932010, 61932011), the Guangdong Province Universities and Colleges Pearl River Scholar Funded Scheme (2019), the Guangdong Major Project of Basic and Applied Basic Research (2019B030302008), and the Fundamental Research Funds for the Central Universities (No. 21618329). The work of P. Fan was supported by the National Key R&D Project (No. 2018YFB1801104) and NSFC Project (No. 6202010600).
Abstract: This paper proposes low-complexity algorithms for active user detection (AUD), channel estimation (CE), and multi-user detection (MUD) in uplink non-orthogonal multiple access (NOMA) systems, covering both single-carrier and multi-carrier cases. In particular, we first propose a novel algorithm to estimate the active users and the channels in the single-carrier case based on the complex alternating direction method of multipliers (ADMM), where the fast-decaying feature of the non-zero components in the sparse signal is exploited. More importantly, the reliable estimated information is used directly for AUD, while the unreliable information is further handled based on the estimated symbol energy and the exact or approximate number of active users. The proposed single-carrier AUD algorithm is then extended to the multi-carrier case by exploiting the block sparse structure. In addition, we propose a low-complexity MUD algorithm based on alternating minimization to estimate the active users' data, which avoids inverting the Hessian matrix. Finally, the convergence and complexity of the proposed algorithms are analyzed. Simulation results show that the proposed algorithms perform well in terms of AUD, CE, and MUD; moreover, active users can be detected perfectly in the multi-carrier NOMA system.
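A core ingredient of ADMM-style sparse recovery is the soft-thresholding (proximal) step, which produces the exact zeros used to declare users inactive. As an illustrative stand-in (not the paper's algorithm), the following sketch recovers a sparse activity vector with plain iterative soft-thresholding (ISTA) on a toy real-valued system; the matrix, step size, and regularization weight are assumptions, and the actual algorithms operate on complex-valued NOMA measurements:

```python
def soft(v, t):
    # Soft-thresholding operator: shrink v toward zero by t.
    return max(v - t, 0.0) if v > 0 else min(v + t, 0.0)

def ista(A, y, lam=0.1, step=0.5, iters=300):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by iterative soft-thresholding."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # Residual r = Ax - y, then gradient A^T r of the smooth part.
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        grad = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [soft(x[j] - step * grad[j], step * lam) for j in range(n)]
    return x
```

Active users are then read off as the indices where |x_j| exceeds a small threshold.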
Funding: supported by the National Natural Science Foundation of China (Nos. 61501529, 61331013), the National Language Committee Project of China (No. ZDI125-36), and the Young Teachers' Scientific Research Project of Minzu University of China.
Abstract: With the emergence of large-scale knowledge bases, generating natural questions from triples has become a key technology in question answering systems. Traditional ways of generating questions require substantial manual intervention and produce considerable noise. To solve these problems, we propose a joint model, combining a semi-automated model with an end-to-end neural network, to generate questions automatically. The semi-automated model generates question templates and real questions by combining the knowledge base with a center graph. The end-to-end neural network feeds the knowledge base and real questions directly into a BiLSTM network. Meanwhile, an attention mechanism is applied in the decoding layer, which makes the triples and the generated questions more relevant. Finally, experimental results on SimpleQuestions demonstrate the effectiveness of the proposed approach.
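The attention step in the decoding layer can be sketched in a few lines: score each encoder state against the current decoder state, normalize with softmax, and form a weighted context vector. This is a generic dot-product attention sketch under assumed vector representations, not the paper's exact network:

```python
import math

def attention(decoder_state, encoder_states):
    """Dot-product attention: returns (weights, context vector)."""
    scores = [sum(d * e for d, e in zip(decoder_state, s)) for s in encoder_states]
    mx = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - mx) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(encoder_states[0])
    context = [sum(w * s[i] for w, s in zip(weights, encoder_states))
               for i in range(dim)]
    return weights, context
```

Encoder states aligned with the triple's entities receive higher weights, which is what ties the generated question back to the triple.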
Abstract: Because classification models built from different single kernel functions perform differently in protein-protein interaction (PPI) extraction, and a single kernel function can hardly train the optimal classification model, this paper presents a strategy to find the optimal kernel function from a kernel function set. Starting from a set of different single kernel functions, the strategy repeatedly takes the two worst-performing kernel functions for PPI extraction and replaces them with their optimal convex combination, until only one kernel function remains; this is the final optimal kernel function. PPI is then extracted using the classification model built with this kernel function. PPI extraction experiments on the AIMed corpus show that the proposed optimal convex combination kernel function effectively improves extraction performance over any single kernel function, achieving the best precision, 65.0, among similar PPI extraction systems.
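The convex combination at the heart of the strategy replaces two kernels k1, k2 with k = w*k1 + (1-w)*k2 for some w in [0, 1] chosen by validation performance. A hedged sketch with a linear and a Gaussian kernel (the particular kernels, the weight, and the bandwidth are illustrative assumptions):

```python
import math

def linear_kernel(x, y):
    return sum(a * b for a, b in zip(x, y))

def rbf_kernel(x, y, gamma=0.5):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def convex_combination(k1, k2, w):
    """Return k(x, y) = w*k1(x, y) + (1-w)*k2(x, y) for 0 <= w <= 1.

    A convex combination of positive semi-definite kernels is itself a
    valid (positive semi-definite) kernel, so it can be fed to an SVM
    trainer in place of either original kernel.
    """
    return lambda x, y: w * k1(x, y) + (1.0 - w) * k2(x, y)
```

Pointwise, the combined kernel always lies between the two original kernel values, which is why tuning w interpolates between the two models' behaviors.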
Abstract: With the rapid development of marine activities, there has been increasing use of Internet-of-Things (IoT) devices for maritime applications, leading to a growing demand for high-speed and ultra-reliable maritime communications. Current maritime communication networks (MCNs) mainly rely on satellites and on-shore base stations (BSs). The former generally provides a limited transmission rate, while the latter lacks wide-area coverage capability. As a result, the development of current MCNs lags far behind the terrestrial fifth-generation (5G) network.
Abstract: This special issue is dedicated to security problems in wireless and quantum communications. Papers for this issue were invited, and after peer review, eight were selected for publication. The first part of this issue comprises four papers on recent advances in physical layer security for wireless networks. The second part comprises another four papers on quantum communications.
Funding: the National Natural Science Foundation of China (Nos. 62066019 and 61903089), the Natural Science Foundation of Jiangxi Province (Nos. 20202BABL202020 and 20202BAB202014), and the Graduate Innovation Foundation of Jiangxi University of Science and Technology (Nos. XY2021-S092 and YC2022-S641).
Abstract: Particle swarm optimization (PSO) is a type of swarm intelligence algorithm that is frequently used to solve global optimization problems, owing to its rapid convergence and ease of operation. However, PSO still has certain deficiencies, such as a poor trade-off between exploration and exploitation and premature convergence. Hence, this paper proposes a dual-stage hybrid learning particle swarm optimization (DHLPSO). In the algorithm, the iterative process is partitioned into two stages, whose learning strategies emphasize exploration and exploitation, respectively. In the first stage, to increase population variety, a Manhattan-distance-based learning strategy is proposed, in which each particle learns from the particle farthest away in Manhattan distance and from a better particle. In the second stage, an excellent-example learning strategy is adopted to perform local optimization on the population, in which each particle learns from the global optimal particle and a better particle. Utilizing a Gaussian mutation strategy, the algorithm's search ability on certain multimodal functions is significantly enhanced. DHLPSO is evaluated on benchmark functions from CEC 2013 alongside existing PSO variants. The comparison results clearly demonstrate that, compared to other cutting-edge PSO variants, DHLPSO achieves highly competitive performance in handling global optimization problems.
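The first-stage exemplar selection can be made concrete: for each particle, find the swarm member at the greatest Manhattan (L1) distance. A minimal sketch with positions as plain lists (the pairing with the "better particle" exemplar and the velocity update are omitted):

```python
def manhattan(a, b):
    # L1 distance between two position vectors.
    return sum(abs(x - y) for x, y in zip(a, b))

def farthest_particle(idx, positions):
    """Index of the swarm member farthest from particle idx in L1 distance."""
    others = [j for j in range(len(positions)) if j != idx]
    return max(others, key=lambda j: manhattan(positions[idx], positions[j]))
```

Learning from the most distant particle pulls each member toward unexplored regions, which is how the first stage maintains population variety.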