In the context of edge computing environments in general and the metaverse in particular, federated learning (FL) has emerged as a distributed machine learning paradigm that allows multiple users to collaborate on training a shared machine learning model locally, eliminating the need to upload raw data to a central server. It is perhaps the only training paradigm that preserves the privacy of user data, which is essential for computing environments as personal as the metaverse. However, the originally proposed FL architecture is not scalable to a large number of user devices in the metaverse community. To mitigate this problem, hierarchical federated learning (HFL) has been introduced as a general distributed learning paradigm, inspiring a number of research works. In this paper, we present several types of HFL architectures, with a special focus on the three-layer client-edge-cloud HFL architecture, which is most pertinent to the metaverse due to its delay-sensitive nature. We also examine works that take advantage of the natural layered organization of three-layer client-edge-cloud HFL to tackle some of the most challenging problems in FL within the metaverse. Finally, we outline some future research directions of HFL in the metaverse.
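The three-layer aggregation described above can be sketched as nested weighted averaging (FedAvg at each level): every edge server averages the models of its own clients, and the cloud averages the edge models. This is a minimal illustration with hypothetical names, not code from the survey:

```python
import numpy as np

def fedavg(weights, sizes):
    """FedAvg: dataset-size-weighted average of model parameter vectors."""
    return np.average(np.stack(weights), axis=0, weights=np.asarray(sizes, float))

def hierarchical_round(client_models, client_sizes, edge_groups):
    """One client-edge-cloud round: each edge server averages its own
    clients, then the cloud averages the edge models."""
    edge_models, edge_sizes = [], []
    for group in edge_groups:  # indices of the clients attached to one edge
        edge_models.append(fedavg([client_models[i] for i in group],
                                  [client_sizes[i] for i in group]))
        edge_sizes.append(sum(client_sizes[i] for i in group))
    return fedavg(edge_models, edge_sizes)  # global model at the cloud

# toy check: three clients behind two edge servers
models = [np.array([1.0, 0.0]), np.array([3.0, 0.0]), np.array([0.0, 4.0])]
sizes = [10, 10, 20]
global_model = hierarchical_round(models, sizes, edge_groups=[[0, 1], [2]])
print(global_model)  # [1. 2.]
```

In a real deployment the edge aggregation runs several times between cloud rounds, which is what reduces the long-haul communication cost.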
Federated learning (FL) is a distributed machine learning (ML) framework in which several clients cooperatively train an ML model by exchanging model parameters without directly sharing their local data. In FL, the limited number of participants for model aggregation and communication latency are two major bottlenecks. Hierarchical federated learning (HFL), with a cloud-edge-client hierarchy, can leverage the large coverage of cloud servers and the low transmission latency of edge servers. There is growing research interest in implementing FL in vehicular networks due to the requirement of timely ML training for intelligent vehicles. However, the limited number of participants in vehicular networks and vehicle mobility degrade the performance of FL training. In this context, HFL, which stands out for lower latency, wider coverage and more participants, is promising in vehicular networks. In this paper, we begin with the background and motivation of HFL and the feasibility of implementing HFL in vehicular networks. Then, the architecture of HFL is illustrated. Next, we clarify new issues in HFL and review several existing solutions. Furthermore, we introduce some typical use cases in vehicular networks as well as our initial efforts on implementing HFL in vehicular networks. Finally, we conclude with future research directions.
In reinforcement learning, an agent may explore ineffectively when dealing with sparse-reward tasks where finding a reward point is difficult. To solve this problem, we propose an algorithm called hierarchical deep reinforcement learning with automatic sub-goal identification via computer vision (HADS), which takes advantage of hierarchical reinforcement learning to alleviate the sparse-reward problem and improve the efficiency of exploration by utilizing a sub-goal mechanism. HADS uses a computer vision method to identify sub-goals automatically for hierarchical deep reinforcement learning. Because not all sub-goal points are reachable, a mechanism is proposed to remove unreachable sub-goal points so as to further improve the performance of the algorithm. HADS involves contour recognition to identify sub-goals from the state image, where some salient states in the state image may be recognized as sub-goals, while those that are not will be removed based on prior knowledge. Our experiments verified the effectiveness of the algorithm.
Developing an efficient algorithm for large-scale classification problems is a challenging topic in many applications of machine learning. In this paper, a hierarchical clustering and fixed-layer local learning (HCFLL) based support vector machine (SVM) algorithm is proposed to deal with this problem. Firstly, HCFLL hierarchically clusters a given dataset into a modified clustering feature tree based on the ideas of unsupervised and supervised clustering. Then it locally trains SVMs on each labeled subtree at a fixed layer of the tree. The experimental results show that, compared with popular existing algorithms such as the core vector machine and the decision-tree support vector machine, HCFLL can significantly improve training and testing speeds with comparable testing accuracy.
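The divide-and-conquer idea behind HCFLL (cluster the data, train a local model per cluster, route each query to the model of its nearest cluster) can be sketched as follows. A nearest-class-mean classifier stands in for the local SVMs, the flat clustering stands in for the clustering feature tree, and all names are hypothetical:

```python
import numpy as np

def fit_centroid_clf(X, y):
    """Nearest-class-mean classifier standing in for a local SVM."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_centroid_clf(model, x):
    return min(model, key=lambda c: np.linalg.norm(model[c] - x))

def fit_local_models(X, y, labels):
    """One local classifier per cluster: the divide-and-conquer step."""
    ks = np.unique(labels)
    centers = {j: X[labels == j].mean(axis=0) for j in ks}
    models = {j: fit_centroid_clf(X[labels == j], y[labels == j]) for j in ks}
    return centers, models

def predict(x, centers, models):
    """Route the query to its nearest cluster's local model."""
    j = min(centers, key=lambda j: np.linalg.norm(centers[j] - x))
    return predict_centroid_clf(models[j], x)

# toy data: cluster 0 near the origin, cluster 1 near (10, 10)
X = np.array([[0., 0.], [1., 0.], [0., 1.], [10., 10.], [11., 10.], [10., 11.]])
y = np.array([0, 1, 1, 0, 1, 1])
labels = np.array([0, 0, 0, 1, 1, 1])
centers, models = fit_local_models(X, y, labels)
print(predict(np.array([0.9, 0.1]), centers, models))  # 1
```

The speedup in the paper comes from the same structure: each local SVM only ever sees its own subtree's points instead of the full dataset.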
The guidance strategy is a critical factor in determining the striking effect of a missile operation. A novel guidance law is presented by exploiting deep reinforcement learning (DRL) with a hierarchical deep deterministic policy gradient (DDPG) algorithm. The reward functions are constructed to minimize the line-of-sight (LOS) angle rate and avoid the threat caused by opposing obstacles. To attenuate the chattering of the acceleration, a hierarchical reinforcement learning structure and an improved reward function with an action penalty are put forward. The simulation results validate that a missile under the proposed method can hit the target successfully and keep away from the threatened areas effectively.
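The reward shaping described above (a LOS-rate term, an action penalty against chattering, and a threat term) can be sketched as a single function. The functional forms and all coefficients below are illustrative assumptions, not the paper's:

```python
def guidance_reward(los_rate, accel, prev_accel, dist_to_obstacle,
                    k_los=1.0, k_act=0.1, k_obs=5.0, safe_dist=1.0):
    """Illustrative shaped reward: penalize the LOS angle rate, penalize
    action changes (to attenuate acceleration chattering), and penalize
    obstacle proximity. Coefficients are hypothetical, not the paper's."""
    r = -k_los * los_rate ** 2                # drive the LOS rate toward zero
    r -= k_act * (accel - prev_accel) ** 2    # action penalty against chattering
    if dist_to_obstacle < safe_dist:          # threat-avoidance term
        r -= k_obs * (safe_dist - dist_to_obstacle)
    return r

print(round(guidance_reward(0.2, 1.0, 0.8, 2.0), 6))  # -0.044
```

The quadratic action-difference term is what discourages the policy from flipping the commanded acceleration between extremes on consecutive steps.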
Deep learning (DL) has shown unprecedented performance for many image analysis and image enhancement tasks. Yet, solving large-scale inverse problems like tomographic reconstruction remains challenging for DL. These problems involve non-local and space-variant integral transforms between the input and output domains, for which no efficient neural network models are readily available. A prior attempt to solve tomographic reconstruction problems with supervised learning relied on a brute-force fully connected network and only allowed reconstruction with a 128^(4) system matrix size. This cannot practically scale to realistic data sizes such as 512^(4) and 512^(6) for three-dimensional datasets. Here we present a novel framework to solve such problems with DL by casting the original problem as a continuum of intermediate representations between the input and output domains. The original problem is broken down into a sequence of simpler transformations that can be well mapped onto an efficient hierarchical network architecture, with exponentially fewer parameters than a fully connected network would need. We applied the approach to computed tomography (CT) image reconstruction for a 512^(4) system matrix size. This work introduces a new kind of data-driven DL solver for full-size CT reconstruction without relying on the structure of direct (analytical) or iterative (numerical) inversion techniques. It presents a feasibility demonstration of full-scale learnt reconstruction; further developments will be needed to demonstrate superiority relative to traditional reconstruction approaches. The proposed approach is also extendable to other imaging problems such as emission and magnetic resonance reconstruction. More broadly, hierarchical DL opens the door to a new class of solvers for general inverse problems, which could potentially lead to improved signal-to-noise ratio, spatial resolution and computational efficiency in various areas.
The rapid growth of modern mobile devices leads to large amounts of distributed data, which are extremely valuable for learning models. Unfortunately, training a model by collecting all these original data on a centralized cloud server is not feasible due to data privacy and communication cost concerns, hindering artificial intelligence from empowering mobile devices. Moreover, these data are not identically and independently distributed (non-IID) because of their different contexts, which degrades the performance of the model. To address these issues, we propose a novel distributed learning algorithm based on hierarchical clustering and adaptive dataset condensation, named ADC-DL, which learns a shared model by collecting the synthetic samples generated on each device. To tackle the heterogeneity of data distributions, we propose an entropy-TOPSIS comprehensive tiering model for hierarchical clustering, which distinguishes clients in terms of their data characteristics. Subsequently, synthetic dummy samples are generated based on the hierarchical structure using adaptive dataset condensation. The procedure of dataset condensation can be adjusted adaptively according to the tier of the client. Extensive experiments demonstrate that ADC-DL outperforms existing algorithms in prediction accuracy and communication costs.
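The entropy-TOPSIS tiering step can be sketched with crisp entropy weighting followed by standard TOPSIS closeness scores: criteria with more dispersion receive larger weights, and each client is ranked by its closeness to the ideal best profile. The client features and all values below are hypothetical:

```python
import numpy as np

def entropy_weights(M):
    """Entropy weighting: criteria (columns) with more dispersion get more weight."""
    P = M / M.sum(axis=0)                        # column-normalize to proportions
    E = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(len(M))
    d = 1 - E                                    # degree of divergence per criterion
    return d / d.sum()

def topsis_scores(M, w):
    """TOPSIS: relative closeness to the ideal best vs. worst (benefit criteria)."""
    V = w * M / np.sqrt((M ** 2).sum(axis=0))    # weighted normalized matrix
    best, worst = V.max(axis=0), V.min(axis=0)
    d_best = np.sqrt(((V - best) ** 2).sum(axis=1))
    d_worst = np.sqrt(((V - worst) ** 2).sum(axis=1))
    return d_worst / (d_best + d_worst)

# toy: 3 clients scored on (data size, label diversity); values invented
M = np.array([[100., 2.], [500., 8.], [300., 5.]])
w = entropy_weights(M)
scores = topsis_scores(M, w)
tiers = np.argsort(-scores)  # clients ordered into tiers, best first
print(tiers)  # [1 2 0]
```

In ADC-DL the resulting tier would then control how aggressively each client's dataset condensation is run.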
Artificial intelligence, which has recently emerged with the rapid development of information technology, is drawing attention as a tool for solving various problems demanded by society and industry. In particular, convolutional neural networks (CNNs), a type of deep learning technology, are highlighted in computer vision fields such as image classification, recognition and object tracking. Training these CNN models requires a large amount of data, and a lack of data can lead to performance degradation due to overfitting. As CNN architecture development and optimization studies become active, ensemble techniques have emerged that perform image classification by combining features extracted from multiple CNN models. In this study, data augmentation and contour image extraction were performed to overcome the data shortage problem. In addition, we propose a hierarchical ensemble technique to achieve high image classification accuracy even when trained on a small amount of data. First, we trained the UCMerced land use dataset and the contour images for each image on pretrained VGGNet, GoogLeNet, ResNet, DenseNet, and EfficientNet. We then applied a hierarchical ensemble technique to the number of cases in which each model can be deployed. These experiments were performed with training dataset proportions of 30%, 50%, and 70%, resulting in a performance improvement of up to 4.68% compared to the average accuracy of the individual models.
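The basic combining step of such an ensemble, averaging the class-probability outputs of several backbones and taking the argmax, can be sketched as below. The paper's hierarchical scheme applies this over many model subsets; the toy logits here are invented:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_predict(logits_per_model):
    """Average the per-model class probabilities, then take the argmax:
    a minimal stand-in for combining several CNN backbones."""
    probs = np.mean([softmax(l) for l in logits_per_model], axis=0)
    return probs.argmax(axis=-1)

# toy: two "models", one test image, three classes
m1 = np.array([[2.0, 1.0, 0.1]])   # model 1 mildly favors class 0
m2 = np.array([[0.0, 3.0, 0.1]])   # model 2 strongly favors class 1
print(ensemble_predict([m1, m2]))  # [1]
```

Averaging probabilities (rather than hard votes) lets a confident model outvote an uncertain one, which is one reason ensembles help when each model is trained on little data.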
Most modern face recognition and classification systems rely mainly on hand-crafted image feature descriptors. In this paper, we propose a novel deep learning algorithm combining unsupervised and supervised learning, named deep belief network embedded with Softmax regression (DBNESR), as a natural source of additional, complementary hierarchical representations, which helps relieve us from the complicated hand-crafted feature-design step. DBNESR first learns hierarchical representations of features by greedy layer-wise unsupervised learning in a feed-forward (bottom-up) and back-forward (top-down) manner, and then performs more efficient recognition with Softmax regression by supervised learning. As a comparison with algorithms based only on supervised learning, we also design several kinds of classifiers: BP, HBPNNs, RBF, HRBFNNs, SVM and a multiple classification decision fusion classifier (MCDFC), a hybrid HBPNNs-HRBFNNs-SVM classifier. The conducted experiments validate the following: first, the proposed DBNESR is optimal for face recognition, with the highest and most stable recognition rates; second, the algorithm combining unsupervised and supervised learning performs better than all purely supervised algorithms; third, hybrid neural networks perform better than single-model neural networks; fourth, ordered from largest to smallest, the average recognition rates are DBNESR, MCDFC, SVM, HRBFNNs, RBF, HBPNNs, BP, and the variances are BP, RBF, HBPNNs, HRBFNNs, SVM, MCDFC, DBNESR; finally, this reflects the hierarchical feature representations learned by DBNESR in terms of its capability of modeling hard artificial intelligence tasks.
This paper proposes a reinforcement learning scheme based on a special hierarchical fuzzy neural network (HFNN) for solving complicated learning tasks in a continuous multi-variable environment. The output of the previous layer in the HFNN is no longer used in the if-part of the next layer, but only in the then-part. Thus it can deal with the difficulty that arises when the output of the previous layer is meaningless or its meaning is uncertain. The proposed HFNN has a minimal number of fuzzy rules, can successfully solve the problem of rule-combination explosion, and decreases the computation and memory requirements. In the learning process, two HFNNs with the same structure perform fuzzy action composition and evaluation-function approximation simultaneously, with the neural network parameters tuned and updated online by a gradient descent algorithm. The reinforcement learning method is shown to be correct and feasible by simulation of a double inverted pendulum system.
Data fusion combines multiple sources to generate information that is more consistent, accurate, and useful than any individual source, and more reliable and consistent than the raw original data, which are often imperfect, inconsistent, complex, and uncertain. Traditional data fusion methods, such as probabilistic fusion, set-based fusion, and evidential belief reasoning, are computationally complex and require accurate classification and proper handling of raw data. Data fusion is the process of integrating multiple data sources; data filtering means examining a dataset to exclude, rearrange, or apportion data according to given criteria. The advancement in hardware acceleration and the abundance of data from various sensors have led to the development of machine learning (ML) algorithms that are expected to address the limitations of traditional methods. However, many open issues still exist when machine learning algorithms are used for data fusion. From the literature, nine issues have been identified, irrespective of application. Decision-makers should pay attention to these issues as data fusion becomes more applicable and successful. A fuzzy analytic hierarchy process (FAHP) enables us to handle these issues: it yields a weight for each issue and ranks the issues by these calculated weights. The most significant issue identified is the lack of deep learning models used for data fusion that improve accuracy and learning quality, weighted 0.141. The least significant is cross-domain multimodal data fusion, weighted 0.076, because the whole semantic knowledge of multimodal data cannot be captured.
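The AHP weighting that underlies FAHP can be sketched in its crisp form: the issue weights are the principal eigenvector of a pairwise-comparison matrix, which power iteration recovers. FAHP replaces the crisp judgments with fuzzy (e.g. triangular) numbers; the matrix below is a hypothetical example, not the paper's:

```python
import numpy as np

def ahp_weights(A, iters=100):
    """Criterion weights as the principal eigenvector of a pairwise
    comparison matrix, found by power iteration (crisp AHP)."""
    w = np.ones(len(A)) / len(A)
    for _ in range(iters):
        w = A @ w          # power-iteration step
        w /= w.sum()       # keep weights normalized to 1
    return w

# hypothetical judgments over 3 issues: issue 0 is 3x issue 1 and 5x issue 2
A = np.array([[1.0, 3.0, 5.0],
              [1 / 3, 1.0, 2.0],
              [1 / 5, 1 / 2, 1.0]])
w = ahp_weights(A)
print(w.round(3))  # issue 0 dominates with weight ~0.65
```

Ranking the issues then amounts to sorting them by these weights, which is how values like 0.141 and 0.076 order the nine issues in the survey.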
The products of archival culture in colleges and universities are the final result of the development of archival cultural resources, and developing archival cultural products should be an important part of improving the artistic level of libraries. The existing RippleNet model does not consider the influence of key nodes on recommendation results, and its recommendation accuracy is not high. Therefore, based on the RippleNet model, this paper introduces the influence of complex-network nodes into the model and puts forward the Cn-RippleNet model. The performance of the model is verified by experiments, which provide a theoretical basis for the promotion and recommendation of the cultural products of university archives, solve the problem that RippleNet does not consider the influence of key nodes on recommendation results, and improve recommendation accuracy. This paper also reviews the development course of archival cultural products in detail. Finally, based on the Cn-RippleNet model, the cultural products of university archives are recommended and popularized.
Wireless sensor networks (WSNs) are widely used in many situations, but disordered and random deployment wastes many sensor resources. This paper proposes a multi-topology hierarchical collaborative particle swarm optimization (MHCHPSO) algorithm to optimize sensor deployment locations and improve WSN coverage. MHCHPSO divides the population among three topology types: a diversity topology for global exploration, a fast-convergence topology for local exploitation, and a collaboration topology that balances exploration and exploitation. All topologies are optimized in parallel to overcome the premature convergence of PSO. The algorithm is compared with various heuristic algorithms on the CEC 2013, CEC 2015, and CEC 2017 benchmarks, and the experimental results show that MHCHPSO outperforms the comparison algorithms. In addition, MHCHPSO is applied to WSN localization optimization, and the experimental results confirm its optimization ability in practical engineering problems.
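The canonical PSO update that MHCHPSO runs in parallel over its three topologies can be sketched with a single global-best swarm; the parameters and the toy objective below are illustrative, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(1)

def pso(f, dim=2, n=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Canonical global-best PSO; MHCHPSO runs updates like this in
    parallel over several neighborhood topologies and shares the results."""
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pval                      # update personal bests
        pbest[improved], pval[improved] = x[improved], fx[improved]
        gbest = pbest[pval.argmin()].copy()       # update global best
    return gbest, pval.min()

best, val = pso(lambda p: (p ** 2).sum())  # sphere function as a toy objective
print(val < 1e-2)  # True: the swarm converges near the origin
```

Restricting which particles see which best positions (the "topology") is the knob MHCHPSO turns: a sparse topology explores, a dense one converges fast, and running both in parallel hedges against premature convergence.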
Based on the option-critic algorithm, a new adversarial algorithm named deterministic policy network with option architecture is proposed to improve an agent's performance against an opponent with a fixed offensive algorithm. An option network is introduced in the upper-level design, which generates an activation signal for defensive and offensive strategies according to the current situation. The lower-level executive layer then determines the interactive action under the guidance of the activation signal, and the value of both the activation signal and the interactive action is evaluated jointly by a critic structure. This method effectively relaxes the requirement of a semi-Markov decision process and simplifies the network structure by eliminating the termination-probability layer. The experimental results show that the new algorithm switches neatly between offensive and defensive strategy styles and acquires more reward from the environment than the classical deep deterministic policy gradient algorithm.
Healthcare and medical records contain a great deal of information. However, it is challenging for humans to turn data into information and spot hidden patterns in today's digitally based culture. Effective decision support technologies can help medical professionals find critical information concealed in voluminous data and support their clinical judgments and different healthcare management activities. This paper presents an extensive literature survey of healthcare systems using machine learning based on multi-criteria decision-making. Various existing studies are reviewed, and a critical analysis is conducted, which can help researchers explore related research areas to cater to the needs of the field.
Objective: To study the nursing effect of Triangle hierarchical management combined with the LEARNS model in patients with diabetes. Methods: Forty diabetic patients admitted to Zhongda Hospital, Southeast University from October 2020 to October 2023 were selected and randomly divided by lottery into group A (20 cases, Triangle hierarchical management combined with LEARNS-model nursing) and group B (20 cases, routine nursing intervention). Blood glucose levels, Summary of Diabetes Self-Care Activities (SDSCA) scores, and compliance before and after nursing were compared between the two groups. Results: After the intervention, blood glucose levels in group A were significantly better than those in group B (P<0.05). Before the intervention, there was no statistically significant difference in SDSCA scores between the two groups (P>0.05); after the intervention, SDSCA scores in group A were significantly better than those in group B (P<0.05). Compliance in group A was significantly better than in group B (P<0.05). Conclusion: Compared with routine nursing, Triangle hierarchical management combined with the LEARNS model can greatly improve blood glucose control and enhance the self-management ability and compliance of diabetic patients.
Learning is widely used in intelligent planning to shorten the planning process or improve plan quality. This paper introduces learning and fatigue into the classical hierarchical task network (HTN) planning process so as to quickly create high-quality plans. The HTN planning process is mapped to a depth-first search process in a problem-solving agent, and learning in HTN planning is modeled similarly to learning depth-first search (LDFS). Based on these models, a learning method integrating HTN planning and LDFS is presented, and a fatigue mechanism is introduced to balance exploration and exploitation in learning. Finally, experiments in two classical domains validate the effectiveness of the proposed learning- and fatigue-inspired method.
This research proposes a method called enhanced collaborative and geometric multi-kernel learning (E-CGMKL) that improves the CGMKL algorithm, which deals with multi-class classification problems over non-linear data distributions. CGMKL combines multiple kernel learning with the softmax function using the framework of multi empirical kernel learning (MEKL), in which empirical kernel mapping (EKM) provides explicit feature construction in the high-dimensional kernel space. CGMKL ensures consistent outputs of samples across kernel spaces and minimizes the within-class distance to highlight the geometric features of multiple classes. However, the kernels constructed by CGMKL have no explicit relationship among themselves and construct their high-dimensional feature representations independently of each other. This can be disadvantageous for learning on datasets with complex hidden structures. To overcome this limitation, E-CGMKL constructs kernel spaces from the hidden layers of trained deep neural networks (DNNs). Owing to the DNN architecture, these kernel spaces not only provide multiple feature representations but also inherit the compositional hierarchy of the hidden layers, which can enhance the predictive performance of the CGMKL algorithm on complex data with natural hierarchical structures, for example, image data. Furthermore, the proposed scheme handles image data by constructing kernel spaces from a convolutional neural network (CNN). Considering the effectiveness of the CNN architecture on image data, these kernel spaces provide a major advantage over the CGMKL algorithm, which does not exploit the CNN architecture when constructing kernel spaces from image data. Additionally, the outputs of the hidden layers directly provide features for the kernel spaces and, unlike CGMKL, do not require an approximate MEKL framework. E-CGMKL combines the consistency and geometry-preserving aspects of CGMKL with the compositional hierarchy of kernel spaces extracted from DNN hidden layers to significantly enhance the predictive performance of CGMKL. Experimental results on various datasets demonstrate the superior performance of the E-CGMKL algorithm compared with other competing methods, including the benchmark CGMKL.
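The idea of inducing one kernel space per hidden layer can be sketched as follows: each layer's activations define a feature representation, and a linear kernel matrix is computed from each, so later layers' kernels inherit the compositional hierarchy. The weights here are untrained toy values, not a trained DNN as the paper uses:

```python
import numpy as np

def hidden_features(X, Ws):
    """Forward pass through hypothetical hidden layers, collecting each
    layer's activations as a separate feature representation."""
    feats, h = [], X
    for W in Ws:
        h = np.tanh(h @ W)   # one hidden layer
        feats.append(h)
    return feats

def linear_kernel(F):
    """Kernel matrix induced by one layer's features: K[i, j] = <f_i, f_j>."""
    return F @ F.T

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))                               # 5 samples, 4 features
Ws = [rng.normal(size=(4, 8)), rng.normal(size=(8, 6))]   # untrained toy weights
kernels = [linear_kernel(F) for F in hidden_features(X, Ws)]
print([K.shape for K in kernels])  # [(5, 5), (5, 5)]
```

Each resulting n-by-n matrix would then play the role of one empirical kernel space in the multi-kernel objective, with no separate EKM approximation needed.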
Funding (HFL in vehicular networks paper): sponsored in part by the National Key R&D Program of China under Grant No. 2020YFB1806605, the National Natural Science Foundation of China under Grant Nos. 62022049, 62111530197, and 61871254, OPPO, and the Fundamental Research Funds for the Central Universities under Grant No. 2022JBXT001.
Funding (HADS paper): supported by the National Natural Science Foundation of China (61303108), the Suzhou Key Industries Technological Innovation-Prospective Applied Research Project (SYG201804), a project funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD), and the Fundamental Research Funds for the Central Universities, JLU (93K172020K25).
Funding (HCFLL-SVM paper): National Natural Science Foundation of China (No. 61070033) and the Fundamental Research Funds for the Central Universities, China (No. 2012ZM0061).
Funding (DRL guidance-law paper): supported by the National Natural Science Foundation of China (62003021, 91212304).
Funding (hierarchical CT-reconstruction paper): research reported in this publication was partially supported by NIH Grant Nos. R01EB031102, R01HL151561, and R01CA233888. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.
Fund: the General Program of the National Natural Science Foundation of China (62072049).
Abstract: The rapid growth of modern mobile devices leads to a large amount of distributed data, which is extremely valuable for learning models. Unfortunately, training a model by collecting all these original data on a centralized cloud server is not applicable, due to concerns about data privacy and communication costs, hindering artificial intelligence from empowering mobile devices. Moreover, these data are not identically and independently distributed (Non-IID) because of their different contexts, which deteriorates model performance. To address these issues, we propose a novel distributed learning algorithm based on hierarchical clustering and adaptive dataset condensation, named ADC-DL, which learns a shared model by collecting the synthetic samples generated on each device. To tackle the heterogeneity of data distributions, we propose an entropy-TOPSIS comprehensive tiering model for hierarchical clustering, which distinguishes clients in terms of their data characteristics. Subsequently, synthetic dummy samples are generated based on the hierarchical structure using adaptive dataset condensation; the condensation procedure is adjusted adaptively according to the tier of the client. Extensive experiments demonstrate that ADC-DL outperforms existing algorithms in prediction accuracy and communication cost.
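The entropy-TOPSIS tiering step described above can be illustrated with a small sketch (Python/NumPy; the client-characteristics matrix and its criteria are hypothetical, since the abstract does not specify them): entropy weights are derived from the dispersion of each criterion, and TOPSIS closeness scores then rank clients into tiers.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weights: criteria whose values vary more across clients get more weight."""
    P = X / X.sum(axis=0)                                     # column-normalize each criterion
    eps = 1e-12
    E = -(P * np.log(P + eps)).sum(axis=0) / np.log(len(X))   # entropy per criterion
    d = 1.0 - E                                               # degree of divergence
    return d / d.sum()

def topsis_scores(X, w):
    """TOPSIS closeness-to-ideal score for each client (row of X)."""
    V = w * (X / np.linalg.norm(X, axis=0))                   # weighted normalized matrix
    best, worst = V.max(axis=0), V.min(axis=0)                # ideal / anti-ideal points
    d_best = np.linalg.norm(V - best, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    return d_worst / (d_best + d_worst)                       # in [0, 1], higher is better

# hypothetical characteristics for 3 clients: (label balance, sample count, class count)
X = np.array([[0.9, 120.0, 5.0],
              [0.4,  80.0, 2.0],
              [0.7, 200.0, 8.0]])
w = entropy_weights(X)
s = topsis_scores(X, w)
tiers = np.argsort(-s)   # client indices ranked best-first, used to assign tiers
```

The ranked indices would then drive how aggressively each client's dataset is condensed.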
Abstract: Artificial intelligence, which has recently emerged with the rapid development of information technology, is drawing attention as a tool for solving various problems demanded by society and industry. In particular, convolutional neural networks (CNNs), a type of deep learning technology, are highlighted in computer vision fields such as image classification, recognition, and object tracking. Training these CNN models requires a large amount of data, and a lack of data can lead to performance degradation due to overfitting. As CNN architecture development and optimization studies become active, ensemble techniques have emerged that perform image classification by combining features extracted from multiple CNN models. In this study, data augmentation and contour-image extraction were performed to overcome the data-shortage problem. In addition, we propose a hierarchical ensemble technique to achieve high image classification accuracy even when trained on a small amount of data. First, we trained pretrained VGGNet, GoogLeNet, ResNet, DenseNet, and EfficientNet models on the UCMerced land-use dataset and on the contour images extracted for each image. We then applied the hierarchical ensemble technique to the possible combinations in which these models can be deployed. These experiments were performed with training-set proportions of 30%, 50%, and 70%, resulting in a performance improvement of up to 4.68% compared with the average accuracy of the individual models.
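Soft-voting ensembling of per-model class probabilities, the basic operation underlying hierarchical ensemble schemes like the one above, can be sketched as follows (Python; the three model outputs are hypothetical stand-ins for the CNN predictions):

```python
import numpy as np

def ensemble_predict(prob_list, weights=None):
    """Average the per-model class-probability outputs (soft voting)."""
    probs = np.stack(prob_list)                 # (n_models, n_samples, n_classes)
    if weights is None:
        weights = np.ones(len(prob_list)) / len(prob_list)
    avg = np.tensordot(weights, probs, axes=1)  # weighted mean over models
    return avg.argmax(axis=1), avg

# three hypothetical CNN softmax outputs for 2 samples, 3 classes
m1 = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]])
m2 = np.array([[0.6, 0.3, 0.1], [0.2, 0.6, 0.2]])
m3 = np.array([[0.5, 0.4, 0.1], [0.3, 0.3, 0.4]])
labels, avg = ensemble_predict([m1, m2, m3])
# → labels [0, 1]: each sample gets the class with highest averaged probability
```

A hierarchical variant would apply this at each level of a model grouping rather than once over all models.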
Abstract: Most modern face recognition and classification systems mainly rely on hand-crafted image feature descriptors. In this paper, we propose a novel deep learning algorithm combining unsupervised and supervised learning, named deep belief network embedded with Softmax regression (DBNESR), as a natural source of additional, complementary hierarchical representations, which helps relieve us from the complicated hand-crafted feature-design step. DBNESR first learns hierarchical feature representations by greedy layer-wise unsupervised learning in a feed-forward (bottom-up) and feed-back (top-down) manner, and then performs more efficient recognition with Softmax regression by supervised learning. For comparison with algorithms based only on supervised learning, we also propose and design several kinds of classifiers: BP, HBPNNs, RBF, HRBFNNs, SVM, and a multiple classification decision fusion classifier (MCDFC), i.e., a hybrid HBPNNs-HRBFNNs-SVM classifier. The conducted experiments validate the following: first, the proposed DBNESR is optimal for face recognition, with the highest and most stable recognition rates; second, the algorithm combining unsupervised and supervised learning performs better than all purely supervised algorithms; third, hybrid neural networks perform better than single-model neural networks; fourth, ordered from largest to smallest, the average recognition rates are DBNESR, MCDFC, SVM, HRBFNNs, RBF, HBPNNs, BP, and the variances are BP, RBF, HBPNNs, HRBFNNs, SVM, MCDFC, DBNESR; finally, the hierarchical feature representations learned by DBNESR are reflected in its capability of modeling hard artificial intelligence tasks.
Abstract: This paper proposes a reinforcement learning scheme based on a special Hierarchical Fuzzy Neural Network (HFNN) for solving complicated learning tasks in a continuous multi-variable environment. The output of the previous layer in the HFNN is no longer used as the if-part of the next layer, but only in the then-part. Thus it can deal with the difficulty that arises when the output of the previous layer is meaningless or its meaning is uncertain. The proposed HFNN has a minimal number of fuzzy rules, can successfully solve the problem of rule-combination explosion, and decreases the computation and memory requirements. In the learning process, two HFNNs with the same structure perform fuzzy action composition and evaluation-function approximation simultaneously, with the neural-network parameters tuned and updated online by a gradient-descent algorithm. The reinforcement learning method is shown to be correct and feasible by simulation of a double inverted pendulum system.
Fund: supported in part by the Higher Education Sprout Project from the Ministry of Education (MOE) and the National Science and Technology Council, Taiwan (109-2628-E-224-001-MY3, 112-2622-E-224-003), and in part by Isuzu Optics Corporation. Dr. Shih-Yu Chen is the corresponding author.
Abstract: Data fusion combines multiple sources into information that is more consistent, accurate, and useful than any individual source, and more reliable than the raw original data, which are often imperfect, inconsistent, complex, and uncertain. Traditional data fusion methods such as probabilistic fusion, set-based fusion, and evidential belief reasoning are computationally complex and require accurate classification and careful handling of raw data. Data filtering means examining a dataset to exclude, rearrange, or apportion data according to given criteria. The advancement of hardware acceleration and the abundance of data from various sensors have led to the development of machine learning (ML) algorithms that are expected to address the limitations of traditional methods. However, many open issues remain when ML algorithms are used for data fusion. From the literature, nine issues have been identified, irrespective of application; decision-makers should pay attention to these issues as data fusion becomes more applicable and successful. A fuzzy analytic hierarchy process (FAHP) enables us to handle these issues: it yields a weight for each issue and ranks the issues by those weights. The most significant issue identified is the lack of deep learning models for data fusion that improve accuracy and learning quality (weight 0.141). The least significant is cross-domain multimodal data fusion (weight 0.076), because the whole semantic knowledge of multimodal data cannot be captured.
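The FAHP weighting idea can be approximated with a minimal crisp-AHP sketch (Python; the pairwise-comparison matrix below is hypothetical, and a full FAHP would use triangular fuzzy numbers rather than crisp ratios): the geometric-mean method turns a reciprocal comparison matrix into a priority vector whose entries rank the issues.

```python
import numpy as np

def ahp_weights(A):
    """Priority vector of a reciprocal pairwise-comparison matrix
    via the geometric-mean (logarithmic least squares) method."""
    g = np.prod(A, axis=1) ** (1.0 / len(A))   # geometric mean of each row
    return g / g.sum()                         # normalize to sum to 1

# hypothetical 3x3 reciprocal matrix comparing three data-fusion issues:
# A[i, j] > 1 means issue i is judged more important than issue j
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w = ahp_weights(A)
ranking = np.argsort(-w)   # issue indices, most significant first
```

With nine issues, the same computation over a 9x9 matrix would produce a weight per issue, analogous to the 0.141 and 0.076 values reported above.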
Abstract: The products of archival culture in colleges and universities are the final result of developing archival cultural resources, and developing such products should be an important part of improving the artistic level of libraries. The existing RippleNet model does not consider the influence of key nodes on recommendation results, so its recommendation accuracy is not high. Therefore, based on the RippleNet model, this paper introduces the influence of complex-network nodes into the model and proposes the Cn-RippleNet model. The performance of the model is verified by experiments, which provide a theoretical basis for promoting and recommending the cultural products of university archives, solve the problem that RippleNet does not consider the influence of key nodes on recommendation results, and improve recommendation accuracy. This paper also reviews the development course of archival cultural products in detail. Finally, based on the Cn-RippleNet model, the cultural products of university archives are recommended and popularized.
Fund: supported by the National Key Research and Development Program Projects of China (No. 2018YFC1504705), the National Natural Science Foundation of China (No. 61731015), the Major Instrument Special Project of the National Natural Science Foundation of China (No. 42027806), and the Key Research and Development Program of Shaanxi (No. 2022GY-331).
Abstract: Wireless sensor networks (WSN) are widely used in many situations, but disordered and random deployment wastes many sensor resources. This paper proposes a multi-topology hierarchical collaborative particle swarm optimization (MHCHPSO) algorithm to optimize sensor deployment locations and improve WSN coverage. MHCHPSO divides the population among three topology types: a diversity topology for global exploration, a fast-convergence topology for local exploitation, and a collaboration topology linking exploration and exploitation. All topologies are optimized in parallel to overcome the premature convergence of PSO. MHCHPSO is compared with various heuristic algorithms on the CEC 2013, CEC 2015, and CEC 2017 benchmark suites, and the experimental results show that it outperforms the comparison algorithms. In addition, MHCHPSO is applied to WSN localization optimization, and the experimental results confirm its optimization ability in practical engineering problems.
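A minimal global-best PSO sketch (Python; MHCHPSO additionally partitions the swarm into the three topology types described above and optimizes them in parallel, which is not reproduced here) shows the baseline velocity/position update on a 2-D sphere function:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_minimize(f, dim=2, n=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best PSO; a multi-topology variant would restrict
    the 'g' attractor to per-neighborhood bests instead of the whole swarm."""
    x = rng.uniform(-5, 5, (n, dim))           # particle positions
    v = np.zeros((n, dim))                     # particle velocities
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)        # personal-best values
    g = pbest[pval.argmin()].copy()            # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pval                   # update personal bests
        pbest[improved], pval[improved] = x[improved], fx[improved]
        g = pbest[pval.argmin()].copy()        # update global best
    return g, pval.min()

sphere = lambda p: float(np.sum(p ** 2))
best, val = pso_minimize(sphere)               # converges near the origin
```

For the WSN use case, `f` would instead score a candidate sensor layout by its coverage.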
Fund: the National Natural Science Foundation of China (No. 61673265), the National Key Research and Development Program (No. 2020YFC1512203), and the Shanghai Commercial Aircraft System Engineering Joint Research Fund (No. CASEF-2022-Z05).
Abstract: Based on the option-critic algorithm, a new adversarial algorithm named deterministic policy network with option architecture is proposed to improve an agent's performance against an opponent using a fixed offensive algorithm. An option network is introduced at the upper level, which generates an activation signal for defensive or offensive strategies according to the current situation. The lower-level executive layer then determines the interactive action under the guidance of the activation signal, and the values of both the activation signal and the interactive action are evaluated jointly by a critic structure. This method effectively relaxes the requirement of a semi-Markov decision process and simplifies the network structure by eliminating the termination-probability layer. Experimental results show that the new algorithm switches neatly between offensive and defensive strategy styles and acquires more reward from the environment than the classical deep deterministic policy gradient algorithm does.
Abstract: There is a lot of information in healthcare and medical records. However, it is challenging for humans to turn data into information and spot hidden patterns in today's digitally based culture. Effective decision-support technologies can help medical professionals find critical information concealed in voluminous data and support their clinical judgments and various healthcare-management activities. This paper presents an extensive literature survey of healthcare systems using machine learning based on multi-criteria decision-making. Various existing studies are reviewed, and a critical analysis is conducted to help researchers explore other research areas that cater to the needs of the field.
Abstract: Objective: To study the nursing effect of Triangle hierarchical management combined with the LEARNS model in patients with diabetes. Methods: Forty diabetic patients admitted to Zhongda Hospital, Affiliated to Southeast University, from October 2020 to October 2023 were selected and randomly assigned by lottery to group A (20 cases, Triangle hierarchical management combined with LEARNS-model nursing) or group B (20 cases, routine nursing). Blood glucose levels, Summary of Diabetes Self-Care Activities (SDSCA) scores, and compliance were compared between the two groups before and after the intervention. Results: After the intervention, the blood glucose levels of group A were significantly better than those of group B (P<0.05). Before the intervention, there was no significant difference in SDSCA scores between the two groups (P>0.05); after the intervention, the SDSCA scores of group A were significantly better than those of group B (P<0.05). Compliance in group A was significantly better than in group B (P<0.05). Conclusion: Compared with routine nursing, Triangle hierarchical management combined with the LEARNS model can considerably improve blood glucose control, self-management ability, and compliance in diabetic patients.
Abstract: Learning is widely used in intelligent planning to shorten the planning process or improve plan quality. This paper introduces learning and fatigue into the classical hierarchical task network (HTN) planning process so as to create high-quality plans quickly. The HTN planning process is mapped onto a depth-first search in a problem-solving agent, and learning in HTN planning is modeled similarly to learning depth-first search (LDFS). Based on these models, a learning method integrating HTN planning and LDFS is presented, and a fatigue mechanism is introduced to balance exploration and exploitation in learning. Finally, experiments in two classical domains are carried out to validate the effectiveness of the proposed learning- and fatigue-inspired method.
Abstract: This research proposes a method called enhanced collaborative and geometric multi-kernel learning (E-CGMKL) that can enhance the CGMKL algorithm, which deals with multi-class classification problems with non-linear data distributions. CGMKL combines multiple kernel learning with the softmax function using the framework of multi empirical kernel learning (MEKL), in which empirical kernel mapping (EKM) provides explicit feature construction in the high-dimensional kernel space. CGMKL ensures the consistent output of samples across kernel spaces and minimizes the within-class distance to highlight geometric features of multiple classes. However, the kernels constructed by CGMKL do not have any explicit relationship among them and try to construct high-dimensional feature representations independently from each other. This could be disadvantageous for learning on datasets with complex hidden structures. To overcome this limitation, E-CGMKL constructs kernel spaces from the hidden layers of trained deep neural networks (DNN). Due to the nature of the DNN architecture, these kernel spaces not only provide multiple feature representations but also inherit the compositional hierarchy of the hidden layers, which might be beneficial for enhancing the predictive performance of the CGMKL algorithm on complex data with natural hierarchical structures, for example, image data. Furthermore, our proposed scheme handles image data by constructing kernel spaces from a convolutional neural network (CNN). Considering the effectiveness of the CNN architecture on image data, these kernel spaces provide a major advantage over the CGMKL algorithm, which does not exploit the CNN architecture for constructing kernel spaces from image data. Additionally, outputs of hidden layers directly provide features for the kernel spaces and, unlike CGMKL, do not require an approximate MEKL framework.
E-CGMKL combines the consistency- and geometry-preserving aspects of CGMKL with the compositional hierarchy of kernel spaces extracted from DNN hidden layers to enhance the predictive performance of CGMKL significantly. The experimental results on various datasets demonstrate the superior performance of the E-CGMKL algorithm compared to other competing methods, including the benchmark CGMKL.
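The core idea of building kernel spaces from hidden-layer activations can be sketched as follows (Python/NumPy; the tiny fixed-weight MLP is a hypothetical stand-in for a trained DNN): each hidden layer's activations yield one Gram matrix, giving a hierarchy of kernels.

```python
import numpy as np

rng = np.random.default_rng(1)

def hidden_features(X, weights):
    """Forward pass through fixed MLP layers; each hidden layer's
    activation matrix defines one empirical feature space."""
    feats, h = [], X
    for W in weights:
        h = np.tanh(h @ W)    # hidden-layer activations
        feats.append(h)
    return feats

def linear_kernel(F):
    """Gram matrix over hidden-layer features (one kernel per layer)."""
    return F @ F.T

X = rng.normal(size=(5, 4))                                  # 5 samples, 4 raw features
weights = [rng.normal(size=(4, 8)), rng.normal(size=(8, 3))] # stand-in trained weights
kernels = [linear_kernel(F) for F in hidden_features(X, weights)]
# kernels[0] is the shallow-layer kernel, kernels[1] the deeper, more abstract one
```

A multi-kernel learner would then combine these layer-wise Gram matrices instead of independently constructed empirical kernels.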