Funding: Financially supported by the Technology Development Fund of the China Academy of Machinery Science and Technology (No. 170221ZY01).
Abstract: Additive manufacturing technology is highly regarded for its advantages, such as high precision and the ability to address complex geometric challenges. However, the development of additive manufacturing processes is constrained by issues such as unclear fundamental principles, complex experimental cycles, and high costs. Machine learning, as a novel artificial intelligence technology, has the potential to engage deeply in additive manufacturing process development, assisting engineers in learning and developing new techniques. This paper provides a comprehensive overview of the research and applications of machine learning in additive manufacturing, particularly in model design and process development. It first introduces the background and significance of machine learning-assisted design in additive manufacturing processes, then delves into the application of machine learning in additive manufacturing, focusing on model design and process guidance, and finally summarizes and forecasts the development trends of machine learning technology in the field.
Abstract: The Internet of Things (IoT) is a growing technology that allows data to be shared with other devices across wireless networks. IoT systems are particularly vulnerable to cyberattacks because of their openness. The proposed work implements a new security framework for detecting the most specific and harmful intrusions in IoT networks. In this framework, a Covariance Linear Learning Embedding Selection (CL2ES) methodology is first used to extract the features most highly associated with IoT intrusions. Then, the Kernel Distributed Bayes Classifier (KDBC) is created to precisely forecast attacks based on the probability distribution value. In addition, a unique Mongolian Gazellas Optimization (MGO) algorithm is used to optimize the weight values for the learning of the classifier. The effectiveness of the proposed CL2ES-KDBC framework has been assessed on several IoT cyber-attack datasets, and the obtained results are compared with current classification methods regarding accuracy (97%), precision (96.5%), and other factors. A computational analysis of the CL2ES-KDBC system on IoT intrusion datasets is also performed, providing valuable insight into its performance, efficiency, and suitability for securing IoT networks.
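The exact CL2ES procedure is not specified in the abstract; as a rough illustration only, a covariance-based feature ranking (an assumed stand-in, not the authors' method) might select the traffic features most associated with the intrusion label like this:

```python
def covariance(xs, ys):
    """Sample covariance between two equal-length feature columns."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)

def rank_features_by_label_covariance(samples, labels, top_k):
    """Rank feature columns by |covariance| with the intrusion label.

    samples: list of feature vectors; labels: 0/1 intrusion flags.
    Returns indices of the top_k most label-associated features.
    """
    n_features = len(samples[0])
    columns = [[s[i] for s in samples] for i in range(n_features)]
    scores = [abs(covariance(col, labels)) for col in columns]
    return sorted(range(n_features), key=lambda i: scores[i], reverse=True)[:top_k]

# Toy traffic records: feature 0 tracks the label, feature 1 is noise.
X = [[5.0, 1.0], [6.0, 0.9], [1.0, 1.1], [0.5, 1.0]]
y = [1, 1, 0, 0]
print(rank_features_by_label_covariance(X, y, top_k=1))  # -> [0]
```

A real pipeline would then feed only the selected columns into the downstream classifier.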
基金supported in part by the Young Elite Scientists Sponsorship Program by CAST(2022QNRC001)the National Natural Science Foundation of China(61621003,62101136)+2 种基金Natural Science Foundation of Shanghai(21ZR1403600)Shanghai Municipal Science and Technology Major Project(2018SHZDZX01)ZJLab,and Shanghai Municipal of Science and Technology Project(20JC1419500)。
Abstract: Deep metric learning (DML) has achieved great results on visual understanding tasks by seamlessly integrating conventional metric learning with deep neural networks. Existing deep metric learning methods focus on designing pair-based distance losses that decrease intra-class distance while increasing inter-class distance. However, these methods fail to preserve the geometric structure of data in the embedding space, which leads to a spatial structure shift across mini-batches and may slow down the convergence of embedding learning. To alleviate these issues, by assuming that the input data are embedded in a lower-dimensional sub-manifold, we propose a novel deep Riemannian metric learning (DRML) framework that exploits non-Euclidean geometric structural information. Since the curvature of the data measures how much the Riemannian (non-Euclidean) metric deviates from the Euclidean metric, we leverage geometry flow, a geometric evolution equation, to characterize the relation between the Riemannian metric and its curvature. Our DRML not only regularizes the local neighborhood connections of the embeddings at the hidden layer but also adapts the embeddings to preserve the geometric structure of the data. On several benchmark datasets, the proposed DRML outperforms all existing methods, demonstrating its effectiveness.
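The pair-based distance loss the abstract contrasts DRML against can be sketched with the classic triplet loss (a standard formulation, not the paper's geometry-flow regularizer):

```python
import math

def euclidean(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Pair-based distance loss: pull the same-class (positive) embedding
    closer to the anchor than the other-class (negative) one by `margin`."""
    return max(0.0, euclidean(anchor, positive) - euclidean(anchor, negative) + margin)

a, p, n = [0.0, 0.0], [0.1, 0.0], [2.0, 0.0]
print(triplet_loss(a, p, n))        # margin already satisfied -> 0.0
print(triplet_loss(a, [3.0, 0.0], n))  # positive too far -> positive loss
```

DRML's contribution is precisely what this loss lacks: a term that preserves the manifold structure of the data across mini-batches.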
基金supported in part by the National Natural Science Foundation of China(62176139,62106128,62176141)the Major Basic Research Project of Shandong Natural Science Foundation(ZR2021ZD15)+4 种基金the Natural Science Foundation of Shandong Province(ZR2021QF001)the Young Elite Scientists Sponsorship Program by CAST(2021QNRC001)the Open Project of Key Laboratory of Artificial Intelligence,Ministry of Educationthe Shandong Provincial Natural Science Foundation for Distinguished Young Scholars(ZR2021JQ26)the Taishan Scholar Project of Shandong Province(tsqn202103088)。
Abstract: We introduce a novel method using a new generative model that automatically learns effective representations of target and background appearance to detect, segment, and track each instance in a video sequence. Unlike current discriminative tracking-by-detection solutions, our proposed hierarchical structural embedding learning can predict higher-quality masks with accurate boundary details over spatio-temporal space via normalizing flows. We formulate the instance inference procedure as hierarchical spatio-temporal embedding learning across time and space. Given a video clip, our method first coarsely locates pixels belonging to a particular instance with a Gaussian distribution and then builds a novel mixing distribution to refine the instance boundary by fusing hierarchical appearance embedding information in a coarse-to-fine manner. For the mixing distribution, we estimate the distribution parameters with a factorized conditional normalizing flow to improve segmentation performance. Comprehensive qualitative, quantitative, and ablation experiments on three representative video instance segmentation benchmarks (YouTube-VIS19, YouTube-VIS21, and OVIS) demonstrate the effectiveness of the proposed method. More impressively, the superior performance of our model on an unsupervised video object segmentation dataset (DAVIS19) proves its generalizability. Our implementation is publicly available at https://github.com/zyqin19/HEVis.
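The coarse localization step (assigning pixels to an instance under a Gaussian distribution over embeddings) can be sketched as follows; the thresholds and isotropic Gaussian are illustrative assumptions, and the mixing distribution and normalizing-flow refinement are not reproduced:

```python
def gaussian_score(pixel_embed, mean, var):
    """Isotropic Gaussian log-likelihood of a pixel embedding (up to a constant)."""
    return -sum((p - m) ** 2 for p, m in zip(pixel_embed, mean)) / (2 * var)

def coarse_mask(pixels, mean, var, threshold):
    """Coarsely mark pixels whose embeddings are likely under the instance Gaussian."""
    return [1 if gaussian_score(p, mean, var) > threshold else 0 for p in pixels]

# Two embeddings near the instance center, one far away (background).
pixels = [[0.1, 0.1], [0.0, 0.2], [3.0, 3.0]]
print(coarse_mask(pixels, mean=[0.0, 0.0], var=1.0, threshold=-1.0))  # -> [1, 1, 0]
```

In the paper this coarse mask is only the first stage; the boundary is then sharpened in a coarse-to-fine manner.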
Funding: Project (2018YFF0214706) supported by the National Key Research and Development Program of China; Project (cstc2020jcyj-msxmX0690) supported by the Natural Science Foundation of Chongqing, China; Project (2020CDJ-LHZZ-039) supported by the Fundamental Research Funds for the Central Universities of Chongqing, China; Project (cstc2019jscx-fxydX0012) supported by the Key Research Program of Chongqing Technology Innovation and Application Development, China.
Abstract: Destination prediction has attracted widespread attention because it can help vehicle-aid systems recommend related services in advance, improving the user driving experience. However, most relevant research predicts destinations from vehicle driving trajectories, which makes early destination prediction difficult. To this end, we propose an early destination prediction model, DP-BPR, which predicts destinations from users' travel time and locations. Three challenges must be addressed: 1) the extremely sparse historical data make it hard to predict destinations directly from raw records; 2) destinations are related not only to departure points but also to departure time, so both should be taken into consideration; and 3) destination preferences must be learned from historical data. To deal with these challenges, we map sparse high-dimensional data to a dense low-dimensional space through embedding learning with deep neural networks. We learn embeddings not only for users but also for locations and time under the supervision of historical data, and then use Bayesian personalized ranking (BPR) to learn to rank destinations. Experimental results on the Zebra dataset show the effectiveness of DP-BPR.
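The BPR objective used to rank destinations can be sketched with dot-product scoring (a standard BPR formulation; the actual DP-BPR context combines user, location, and time embeddings):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def bpr_loss(context_emb, pos_emb, neg_emb):
    """Bayesian personalized ranking loss: the visited destination should
    score higher than an unvisited one for the same (user, origin, time) context."""
    x = dot(context_emb, pos_emb) - dot(context_emb, neg_emb)
    return -math.log(1.0 / (1.0 + math.exp(-x)))

u = [1.0, 0.0]
loss_good = bpr_loss(u, [2.0, 0.0], [-2.0, 0.0])  # correct ranking: small loss
loss_bad = bpr_loss(u, [-2.0, 0.0], [2.0, 0.0])   # inverted ranking: large loss
print(loss_good < loss_bad)  # -> True
```

Minimizing this loss over sampled (visited, unvisited) destination pairs is what lets the model rank likely destinations before the trip unfolds.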
Abstract: The increasing share of renewable energy in the electricity grid and ongoing changes in power consumption have led to fluctuating, weather-dependent power flows. To ensure grid stability, grid operators rely on power forecasts, which are crucial for grid calculations and planning. In this paper, a Multi-Task Learning approach is combined with a Graph Neural Network (GNN) to predict vertical power flows at transformers connecting high and extra-high voltage levels. The proposed method accounts for local differences in power flow characteristics through an Embedding Multi-Task Learning approach. A Bayesian embedding captures the latent node characteristics, allowing the weights to be shared across all transformers in the subsequent node-invariant GNN while still distinguishing the individual behavioral patterns of the transformers. At the same time, the GNN architecture considers dependencies between transformers: it can learn relationships between different transformers and thus account for the fact that power flows in an electricity network are not independent of each other. The effectiveness of the proposed method is demonstrated on two real-world data sets provided by two of the four German Transmission System Operators, comprising large portions of the operated German transmission grid. The results show that the proposed Multi-Task Graph Neural Network is a suitable representation learner for electricity networks, with a clear advantage provided by the preceding embedding layer. It captures interconnections between correlated transformers and improves power flow prediction compared to standard Neural Networks. A sign test shows that the proposed model significantly reduces the test RMSE on both data sets compared to the benchmark models.
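The key design idea, shared node-invariant weights plus a per-node embedding, can be sketched with a toy scalar message-passing step; the shapes, aggregation, and learned parameters here are illustrative assumptions, not the paper's architecture:

```python
def gnn_layer(node_feats, node_embeds, edges, w_self, w_nbr):
    """One node-invariant message-passing step: the weights (w_self, w_nbr)
    are shared across all transformers, while the per-node embedding lets
    individual transformer behaviour differ."""
    out = []
    for i, feat in enumerate(node_feats):
        nbrs = [node_feats[j] for a, j in edges if a == i] + \
               [node_feats[a] for a, j in edges if j == i]
        agg = sum(nbrs) / len(nbrs) if nbrs else 0.0
        out.append(w_self * feat + w_nbr * agg + node_embeds[i])
    return out

# Three transformers in a line graph 0 - 1 - 2 (scalar features for brevity).
feats = [1.0, 2.0, 3.0]
embeds = [0.1, -0.1, 0.0]  # latent per-node characteristics
edges = [(0, 1), (1, 2)]
print(gnn_layer(feats, embeds, edges, w_self=0.5, w_nbr=0.5))  # approx [1.6, 1.9, 2.5]
```

Because `w_self` and `w_nbr` are shared, the layer generalizes across transformers; only the small embedding vector is node-specific.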
Funding: Supported by the "Science and Technology Innovation Action Plan" of the Shanghai Science and Technology Commission, Social Development Project (21DZ1204900).
Abstract: With the development of information fusion, knowledge graph completion tasks have received a lot of attention. Some studies investigate the broader underlying problems of linguistics, while embedding learning has a narrower focus; this heterogeneity of coarse-graining patterns poses significant challenges. To address them, a completion framework named Triple Encoder-Scoring Module (TEsm) is designed. The model employs an alternating two-branch structure that fuses local features into the interaction pattern of the triple itself by combining distance and structure models, mapping them into a uniform shared space. For completion, an ensemble inference method is then proposed to query multiple predictions from different graphs using a weight classifier. The completion task is evaluated on DBpedia, which contains five different linguistic subsets. Extensive experimental results demonstrate that TEsm can efficiently solve the completion task, validating the performance of the proposed model.
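The distance-model branch that TEsm combines with a structure model can be illustrated with the classic TransE scoring function (a standard distance model, used here as an assumed example rather than TEsm's exact encoder):

```python
import math

def transe_score(head, relation, tail):
    """Distance-model score for a triple (h, r, t): ||h + r - t||.
    A lower score means the triple is more plausible."""
    return math.sqrt(sum((h + r - t) ** 2 for h, r, t in zip(head, relation, tail)))

# Toy embeddings: "Paris" + "capital_of" should land near "France", not "Spain".
paris, capital_of = [1.0, 0.0], [0.0, 1.0]
france, spain = [1.0, 1.0], [3.0, 2.0]
print(transe_score(paris, capital_of, france) < transe_score(paris, capital_of, spain))  # True
```

Completion then amounts to ranking candidate tails by this score and proposing the best-scoring missing links.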
Abstract: When a person's neuromuscular system is affected by an injury or disease, Activities for Daily Living (ADL), such as gripping, turning, and walking, are impaired. Electroencephalography (EEG) and electromyography (EMG) are physiological signals generated by the body during neuromuscular activities that embed the intentions of the subject, and they are used in Brain-Computer Interface (BCI) and robotic rehabilitation systems. However, existing BCI and robotic rehabilitation systems have signal classification limitations: (1) they miss the temporal correlation of the EEG and EMG signals across the entire window, and (2) they overlook the interrelationship between the different sensors in the system. Furthermore, typical existing systems are designed to operate based on the presence of dominant physiological signals associated with certain actions, so (3) their effectiveness is greatly reduced if subjects cannot generate the dominant signals. A novel classification model named BIOFIS is proposed, which fuses signals from different sensors to generate inter-channel and intra-channel relationships and explores the temporal correlation of the signals within a timeframe via a Long Short-Term Memory (LSTM) block. The proposed architecture can classify the various subsets of a full-range arm movement, such as forward, grip and raise, lower and release, and reverse. The system achieves 98.6% accuracy for a 4-way action using EEG data and 97.18% accuracy using EMG data. Moreover, even without the dominant signal, the accuracy scores were 90.1% for the EEG data and 85.2% for the EMG data. The proposed mechanism shows promise for EEG/EMG-based devices in the medical device and rehabilitation industries.
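The two relationship types the abstract names can be made concrete with simple correlation statistics; this is only an illustrative sketch of what "inter-channel" and "intra-channel" mean, not BIOFIS's learned LSTM fusion:

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length signal windows."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def channel_features(eeg, emg):
    """Inter-channel relationship (EEG vs. EMG correlation) and
    intra-channel temporal correlation (lag-1 autocorrelation of EEG)."""
    inter = pearson(eeg, emg)
    intra_eeg = pearson(eeg[:-1], eeg[1:])
    return inter, intra_eeg

eeg = [0.0, 1.0, 2.0, 3.0, 4.0]
emg = [0.1, 1.1, 2.1, 3.1, 4.1]  # EMG tracks EEG in this toy window
inter, intra = channel_features(eeg, emg)
print(round(inter, 3), round(intra, 3))  # -> 1.0 1.0
```

BIOFIS learns such dependencies end to end, which is what lets it keep useful accuracy even when the dominant channel is missing.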
Funding: This work was financially supported by the National Key Research and Development Program (Grant No. 2019YFE0125500) and the Chinese University Scientific Fund (Grant No. 2021TC111).
Abstract: Agricultural robots can flexibly collect ambient information across large areas of farmland, but they face two major challenges: data compression and noise filtering. To address these challenges, an encoder for ambient data compression, named Tiny-Encoder, is presented to compress and filter raw ambient information on agricultural robots. Tiny-Encoder is based on convolution and pooling operations and has a small number of layers and filters. To evaluate its performance, three different types of ambient information (temperature, humidity, and light) were used in raw-data compression and noise-filtering tasks. In compressing raw data, Tiny-Encoder obtained higher accuracy (error below the sensors' maximum error of ±0.5°C or ±3.5% RH) and a more appropriate model size (at most 205 KB) than the other two convolution-based auto-encoders with different numbers of compressed features (20, 60, and 200). In filtering noise, Tiny-Encoder performed comparably with three conventional filtering approaches (median, Gaussian, and Savitzky-Golay filtering). With a large kernel size (5), Tiny-Encoder performed best among these four approaches: the coefficients of variation with the large kernel were 8.6189% (temperature), 10.2684% (humidity), and 57.3576% (light), respectively. Overall, Tiny-Encoder can be used for ambient information compression on microcontrollers in agricultural information acquisition robots.
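One of the conventional baselines the abstract compares against, median filtering, is simple to sketch for a 1-D ambient signal (edge handling via a shrunken window is an implementation assumption):

```python
def median_filter(signal, kernel_size=5):
    """Sliding-window median filter for a 1-D ambient signal; windows at
    the edges shrink rather than pad."""
    half = kernel_size // 2
    out = []
    for i in range(len(signal)):
        window = sorted(signal[max(0, i - half): i + half + 1])
        out.append(window[len(window) // 2])
    return out

# Temperature trace with one noise spike at index 2; the filter removes it.
temps = [20.0, 20.1, 35.0, 20.2, 20.1, 20.0]
print(median_filter(temps, kernel_size=5))
```

Tiny-Encoder's claim is that a small learned auto-encoder can match such filters while also compressing the data for a microcontroller.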
Funding: Supported by the National High-Tech Research and Development (863) Program (No. 2015AA015401), the National Natural Science Foundation of China (Nos. 61533018 and 61402220), the State Scholarship Fund of CSC (No. 201608430240), the Philosophy and Social Science Foundation of Hunan Province (No. 16YBA323), and the Scientific Research Fund of Hunan Provincial Education Department (Nos. 16C1378 and 14B153).
Abstract: Word embedding has drawn a lot of attention due to its usefulness in many NLP tasks. So far, a handful of neural-network-based word embedding algorithms have been proposed without considering the effects of pronouns in the training corpus. In this paper, we propose using co-reference resolution to improve word embeddings by extracting better contexts. We evaluate four word embeddings with co-reference resolution and compare the quality of the resulting embeddings on word analogy and word similarity tasks on multiple data sets. Experiments show that with co-reference resolution, word embedding performance on the word analogy task can be improved by around 1.88%. We find that words that are names of countries are affected the most, which is as expected.
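The core idea, substituting resolved antecedents for pronouns before extracting training contexts, can be sketched as follows; the `coref_map` is a toy stand-in for a real co-reference resolver:

```python
def resolve_context(tokens, coref_map):
    """Replace pronouns with their resolved antecedents before building
    word-embedding training contexts. coref_map maps a token position to
    its antecedent word (a hypothetical resolver output)."""
    return [coref_map.get(i, tok) for i, tok in enumerate(tokens)]

def context_pairs(tokens, window=2):
    """Skip-gram style (center, context) pairs from the resolved sentence."""
    pairs = []
    for i, center in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

sent = ["France", "is", "large", ";", "it", "borders", "Spain"]
resolved = resolve_context(sent, {4: "France"})  # "it" -> "France"
print(("France", "borders") in context_pairs(resolved))  # -> True
```

Without resolution, the context pair would link "it" to "borders"; after resolution the country name receives the context, which matches the paper's finding that country names benefit most.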