Funding: the National Basic Research Program (973) of China (No. 2004CB719401) and the National Research Foundation for the Doctoral Program of Higher Education of China (No. 20060003060).
Abstract: In this paper, we present a novel Support Vector Machine (SVM) active learning algorithm for effective 3D model retrieval using the concept of relevance feedback. The proposed method learns from the most informative objects, which are marked by the user, and then creates a boundary separating the relevant models from the irrelevant ones. It requires only a small number of 3D models labelled by the user and can capture the user's semantic knowledge rapidly and accurately. Experimental results showed that the proposed algorithm significantly improves retrieval effectiveness. Compared with four state-of-the-art query refinement schemes for 3D model retrieval, it provides superior retrieval performance after no more than two rounds of relevance feedback.
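To make the feedback loop concrete, below is a minimal sketch of SVM-based active learning over precomputed 3D shape descriptors. The feature representation, kernel choice, and query size are our assumptions, not details taken from the paper.

```python
# Minimal sketch of SVM active learning with relevance feedback.
# Assumes each 3D model is already represented by a feature vector;
# descriptor extraction is outside the scope of this sketch.
import numpy as np
from sklearn.svm import SVC

def feedback_round(features, labeled_idx, labels, n_query=5):
    """Fit a boundary on user-labelled models, then pick the most
    informative unlabelled models (those closest to the hyperplane)."""
    clf = SVC(kernel="rbf", gamma="scale")
    clf.fit(features[labeled_idx], labels)
    unlabeled = [i for i in range(len(features)) if i not in set(labeled_idx)]
    # |decision_function| approximates distance to the boundary:
    # small margin = most informative for the next user query.
    margin = np.abs(clf.decision_function(features[unlabeled]))
    query = [unlabeled[i] for i in np.argsort(margin)[:n_query]]
    # Rank all models by signed distance to produce the retrieval list.
    ranking = np.argsort(-clf.decision_function(features))
    return clf, query, ranking
```

Each round, the user labels the queried models, their indices are appended to labeled_idx, and the classifier is refit; the abstract's results suggest the ranking stabilises within about two such rounds.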
Funding: The work was supported by the National Natural Science Foundation of China (No. 61772386) and the Guangdong provincial science and technology project (No. 2015B010131007).
Abstract: The mobile ad hoc network (MANET) is a self-organizing and self-configuring wireless network consisting of a set of mobile nodes. The design of efficient routing protocols for MANET has always been an active area of research. Existing routing algorithms, however, do not scale well enough to ensure route stability when the mobility and distribution of nodes vary with time. In addition, each node in a MANET has only limited initial energy, so energy conservation and balance must be taken into account. An efficient routing algorithm should be not only stable but also energy-saving and energy-balanced within the dynamic network environment. To address these problems, we propose a stable and energy-efficient routing algorithm based on learning automata (LA) theory for MANET. First, we construct a new node stability measurement model and define an effective energy ratio function. On that basis, we give each node a weighted value, which is used as the iteration parameter for the LA. Next, we construct an LA-based feedback mechanism for the MANET environment to optimize the selection of available routes, and we prove the convergence of our algorithm. The experiments show that our proposed LA-based routing algorithm for MANET achieved the best performance in route survival time, energy consumption, and energy balance, and acceptable performance in end-to-end delay and packet delivery ratio.
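As a rough illustration of LA-driven route selection, the sketch below implements a standard linear reward-inaction (L_RI) automaton over a node's candidate routes. The reward signal here is a placeholder; the paper derives it from the node stability model and the effective energy ratio, whose exact forms are not reproduced here.

```python
# Illustrative linear reward-inaction (L_RI) learning automaton for
# selecting among candidate routes. The reward computation is a
# stand-in, not the paper's stability/energy formulation.
import random

class RouteAutomaton:
    def __init__(self, n_routes, lr=0.1):
        self.p = [1.0 / n_routes] * n_routes  # route selection probabilities
        self.lr = lr

    def choose(self):
        return random.choices(range(len(self.p)), weights=self.p, k=1)[0]

    def reward(self, k):
        """Reinforce route k when feedback reports a successful delivery;
        on failure, L_RI leaves the probabilities unchanged (inaction)."""
        for i in range(len(self.p)):
            if i == k:
                self.p[i] += self.lr * (1.0 - self.p[i])
            else:
                self.p[i] *= 1.0 - self.lr
```

Over repeated feedback rounds, probability mass concentrates on the route that most often earns a reward, which mirrors the convergence property the paper proves.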
Funding: Supported in part by the National Natural Science Foundation of China (Grant Nos. 51975347 and 51907117) and in part by the Shanghai Science and Technology Program (Grant No. 22010501600).
Abstract: Regular fastener detection is necessary to ensure the safety of railways. However, the number of abnormal fasteners is significantly lower than the number of normal fasteners on real railways, and existing supervised inspection methods have insufficient detection ability in cases of imbalanced samples. To solve this problem, we propose an approach based on deep convolutional neural networks (DCNNs), which consists of three stages: fastener localization, abnormal fastener sample generation based on saliency detection, and fastener state inspection. First, a lightweight YOLOv5s is designed to achieve fast and precise localization of fastener regions. Then, the foreground clip region of a fastener image is extracted by the designed fastener saliency detection network (F-SDNet) and combined with data augmentation to generate a large number of abnormal fastener samples, balancing the numbers of abnormal and normal samples. Finally, a fastener inspection model called Fastener ResNet-8 is constructed and trained with the augmented fastener dataset. Results show the effectiveness of the proposed method in solving the problem of sample imbalance in fastener detection. Qualitative and quantitative comparisons show that the proposed F-SDNet outperforms other state-of-the-art methods in clip region extraction, reaching an MAE of 0.0215 and a max F-measure of 0.9635. In addition, the fastener state inspection model reached 86.2 FPS, with an average accuracy of 98.7% on a test set of 614 augmented fastener images and 99.9% on 7505 real fastener images.
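The abnormal-sample generation stage can be pictured as follows: once a saliency network has produced a clip-region mask, abnormal (e.g., missing-clip) samples can be synthesized by erasing that region. The sketch below uses OpenCV inpainting as one plausible erase-and-fill choice; the paper's exact augmentation operations may differ.

```python
# Hedged sketch of saliency-guided abnormal-sample synthesis: given a
# fastener crop and a binary clip-region mask (as a saliency network
# such as F-SDNet would produce), erase the clip foreground to imitate
# a missing/broken fastener. The inpainting choice is ours.
import cv2
import numpy as np

def synthesize_abnormal(crop: np.ndarray, clip_mask: np.ndarray) -> np.ndarray:
    """crop: HxWx3 uint8 image; clip_mask: HxW uint8 (255 = clip pixels)."""
    mask = cv2.dilate(clip_mask, np.ones((5, 5), np.uint8))  # cover clip edges
    # Fill the erased clip region from the surrounding background texture.
    return cv2.inpaint(crop, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)
```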
Funding: Funded by the Deanship of Scientific Research, Princess Nourah bint Abdulrahman University, through the Program of Research Project Funding after Publication, Grant No. PRFA-P-42-16.
Abstract: Renewable energy has become a solution to the world's energy concerns in recent years. Photovoltaic (PV) technology is the fastest technique for converting solar radiation into electricity, and solar-powered buses, metros, and cars all rely on it. Such technologies are constantly evolving. The parameters that need to be analysed and examined include PV capabilities, vehicle power requirements, utility patterns, acceleration and deceleration rates, and storage module type and capacity, among others. PV power generation (PVPG) is intermittent and weather-dependent, so accurate forecasting and modelling of PV system output power are key to managing storage, delivery, and smart grids. With unparalleled data granularity, a data-driven system can better anticipate solar generation, and deep learning (DL) models have gained popularity due to their capacity to handle complex datasets and increased computing power. This article introduces the Galactic Swarm Optimization with Deep Belief Network (GSODBN-PPGF) model for predicting PV power production. The GSODBN-PPGF model normalises data using data scaling, a DBN is used to forecast the PV power output, and the GSO algorithm boosts the DBN model's predictive performance. The GSODBN-PPGF model projected 0.002 after 40 h against an observed value of 0.063. The model's validation is compared with existing approaches, and simulations showed that the GSODBN-PPGF model outperformed recent techniques. This indicates that the proposed model forecasts better than other models and can be used to predict the PV power output for the next day.
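A highly simplified view of the scale-then-tune pipeline is sketched below: features are min-max scaled, and a swarm search tunes a model hyperparameter against validation error. Both the stand-in regressor and the plain random-perturbation swarm are our simplifications of the DBN and of Galactic Swarm Optimization, respectively.

```python
# Simplified sketch: min-max scaling plus a toy swarm search over one
# hyperparameter. The real pipeline tunes a Deep Belief Network with
# Galactic Swarm Optimization; both are simplified away here.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

def minmax_scale(x):
    return (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0) + 1e-12)

def swarm_tune_alpha(X_tr, y_tr, X_val, y_val, n_particles=8, iters=10):
    """Search the L2 penalty `alpha` that minimises validation MSE."""
    def fitness(alpha):
        m = MLPRegressor(hidden_layer_sizes=(32,), alpha=alpha,
                         max_iter=500, random_state=0).fit(X_tr, y_tr)
        return mean_squared_error(y_val, m.predict(X_val))
    swarm = 10.0 ** np.random.uniform(-5, 0, n_particles)  # log-uniform init
    best = min(swarm, key=fitness)
    for _ in range(iters):
        # Perturb around the current best and keep any improvement.
        swarm = np.clip(best * 10.0 ** np.random.normal(0, 0.3, n_particles),
                        1e-6, 1.0)
        cand = min(swarm, key=fitness)
        if fitness(cand) < fitness(best):
            best = cand
    return best
```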
Abstract: Artificial Intelligence (AI) is transforming organizational dynamics and revolutionizing corporate leadership practices. This research paper delves into how AI influences corporate leadership, examining both its advantages and disadvantages. Positive impacts of AI are evident in communication, feedback systems, tracking mechanisms, and decision-making processes within organizations. AI-powered communication tools, as exemplified by Slack, facilitate seamless collaboration, transcending geographical barriers. Feedback systems, like Adobe's Performance Management System, employ AI algorithms to provide personalized development opportunities, enhancing employee growth. AI-based tracking systems optimize resource allocation, as exemplified by studies like "AI-Based Tracking Systems: Enhancing Efficiency and Accountability." Additionally, AI-powered decision support, demonstrated during the COVID-19 pandemic, showcases the capability to navigate complex challenges and maintain resilience. However, AI adoption poses challenges in human resources, potentially leading to job displacement and necessitating upskilling efforts. Managing AI errors becomes crucial, as illustrated by instances like Amazon's biased recruiting tool. Data privacy concerns also arise, emphasizing the need for robust security measures. The proposed solution suggests leveraging Local Machine Learning Models (LLMs) to address data privacy issues; approaches such as federated learning, on-device learning, differential privacy, and homomorphic encryption offer promising strategies. By exploring the evolving dynamics of AI and leadership, this research advocates for responsible AI adoption and proposes LLMs as a potential solution, fostering a balanced integration of AI benefits while mitigating the associated risks in corporate settings.
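Of the privacy-preserving strategies the paper lists, federated learning is the easiest to illustrate: training happens locally, and only model parameters, never raw employee data, leave the device. The sketch below shows the core FedAvg aggregation step under our own simplifying assumption that each client's model is a flat parameter vector.

```python
# Minimal federated-averaging (FedAvg) aggregation step: each client
# trains locally and shares only its parameter vector, never raw data.
import numpy as np

def fed_avg(client_params, client_sizes):
    """Size-weighted average of per-client parameter vectors."""
    total = sum(client_sizes)
    return sum(p * (n / total) for p, n in zip(client_params, client_sizes))

# Example: three clients holding different amounts of local data.
clients = [np.array([0.1, 0.2]), np.array([0.3, 0.1]), np.array([0.2, 0.4])]
global_params = fed_avg(clients, client_sizes=[100, 50, 25])
```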
Funding: Supported by the National Outstanding Youth Science Fund Project of the National Natural Science Foundation of China [Grant No. 52222708] and the Natural Science Foundation of Beijing Municipality [Grant No. 3212033].
Abstract: Battery pack capacity estimation under real-world operating conditions is important for battery performance optimization and health management, contributing to the reliability and longevity of battery-powered systems. However, complex operating conditions, coupled cell-to-cell inconsistency, and limited labeled data pose great challenges to accurate and robust battery pack capacity estimation. To address these issues, this paper proposes a hierarchical data-driven framework aimed at enhancing the training of machine learning models with fewer labeled data. Unlike traditional data-driven methods that lack interpretability, the hierarchical framework unveils the "mechanism" of the black box by splitting the final estimation target into cell-level and pack-level intermediate targets. A generalized feature matrix is devised that does not require all cell voltages, significantly reducing the computational cost and memory resources. The generated intermediate target labels and the corresponding features are hierarchically employed to enhance the training of two machine learning models, effectively alleviating the difficulty of learning the relationship from all features with fewer labeled data and addressing the dilemma of requiring extensive labeled data for accurate estimation. Using only 10% of the degradation data, the proposed framework outperforms state-of-the-art battery pack capacity estimation methods, achieving mean absolute percentage errors of 0.608%, 0.601%, and 1.128% for three battery packs whose degradation load profiles represent real-world operating conditions. Its high accuracy, adaptability, and robustness indicate its potential in different application scenarios, promising to reduce laborious and expensive pack-level aging experiments and to facilitate the development of battery technology.
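The hierarchical idea (train a cell-level model, then feed its aggregated outputs into a pack-level model) can be sketched as below. The grouping convention, aggregation statistics, and gradient-boosting regressors are our assumptions; the paper's feature matrix and model choices are not reproduced here.

```python
# Hedged sketch of a hierarchical two-stage estimator: a cell-level
# model produces intermediate targets whose aggregates feed a
# pack-level capacity model. All modelling choices are illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def fit_hierarchical(cell_X, cell_y, pack_X, pack_y, cells_per_pack):
    """Assumes cell_X rows are grouped so that each consecutive run of
    `cells_per_pack` rows belongs to one pack sample (our convention)."""
    cell_model = GradientBoostingRegressor().fit(cell_X, cell_y)
    inter = cell_model.predict(cell_X).reshape(-1, cells_per_pack)
    # Aggregate cell-level estimates (mean and weakest cell) as
    # pack-level features alongside the original pack features.
    agg = np.column_stack([inter.mean(axis=1), inter.min(axis=1)])
    pack_model = GradientBoostingRegressor().fit(
        np.hstack([pack_X, agg]), pack_y)
    return cell_model, pack_model
```

Splitting the target this way is also what gives the framework its interpretability: each stage's intermediate prediction can be inspected on its own.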
Funding: Funded by the National Natural Science Foundation of China (Nos. 62275216 and 61775181), the Natural Science Basic Research Programme of Shaanxi Province-Major Basic Research Special Project (Nos. S2018-ZC-TD-0061 and TZ0393), and the Special Project for the Development of National Key Scientific Instruments and Equipment (No. 51927804).
Abstract: Deep learning is capable of greatly promoting the progress of super-resolution imaging technology in terms of imaging and reconstruction speed, imaging resolution, and imaging flux. This paper proposes a deep neural network based on a generative adversarial network (GAN). The generator employs a U-Net-based network that integrates a DenseNet into the downsampling component. The proposed method has several desirable properties: the network model is trained with several different datasets of biological structures; the trained model can improve the imaging resolution of different microscopy imaging modalities, such as confocal imaging and wide-field imaging; and the model demonstrates a generalized ability to improve the resolution of different biological structures even outside the training datasets. In addition, experimental results showed that the method improved the resolution of caveolin-coated pits (CCPs) structures from 264 nm to 138 nm, a 1.91-fold increase, and nearly doubled the resolution of DNA molecules imaged while being transported through microfluidic channels.
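For intuition, a dense block of the kind that could sit in the generator's downsampling path looks roughly like this in PyTorch; the channel counts and depth are illustrative, not the paper's architecture.

```python
# Toy PyTorch sketch of a dense block for a U-Net generator's
# downsampling path; layer sizes are illustrative only.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_ch, growth=16, layers=3):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(in_ch + i * growth, growth, kernel_size=3, padding=1)
            for i in range(layers))

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            # Dense connectivity: each layer sees the concatenation of
            # the input and all previous layers' outputs.
            feats.append(torch.relu(conv(torch.cat(feats, dim=1))))
        return torch.cat(feats, dim=1)
```

The dense connectivity reuses low-level features at every depth, which is one plausible reason such a generator transfers across imaging modalities and structures.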
Funding: Supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. 2019M3F2A1073179).
Abstract: Photovoltaic (PV) systems are environmentally friendly, generate green energy, and receive support from policies and organizations. However, despite the economic benefits, weather fluctuations make large-scale PV power integration and management challenging. Existing PV forecasting techniques, such as sequential and convolutional neural networks (CNNs), are sensitive to environmental conditions, reducing energy distribution system performance. To handle these issues, this article proposes an efficient, weather-resilient convolutional-transformer-based network (CT-NET) for accurate and efficient PV power forecasting. The network consists of three main modules. First, the acquired PV generation data are forwarded to the pre-processing module for data refinement. Next, to carry out data encoding, a CNN-based multi-head attention (MHA) module is developed, in which a single MHA is used to decode the encoded data. The encoder module is mainly composed of 1D convolutional and MHA layers, which extract local as well as contextual features, while the decoder part includes MHA and feedforward layers to generate the final prediction. Finally, the performance of the proposed network is evaluated using standard error metrics, including the mean squared error (MSE), root mean squared error (RMSE), and mean absolute percentage error (MAPE). An ablation study and a comparative analysis with several competitive state-of-the-art approaches revealed a lower error rate in terms of MSE (0.0471), RMSE (0.2167), and MAPE (0.6135) on publicly available benchmark data. In addition, the proposed model is less complex, with the lowest number of parameters (0.0135 M), size (0.106 MB), and inference time (2 ms/step), suggesting that it is easy to integrate into the smart grid.
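The encoder's pairing of local convolution with attention can be sketched as a single PyTorch block, shown below; the layer sizes, single-block depth, and last-step readout are our simplifications of CT-NET, not its published configuration.

```python
# Hedged sketch of a convolutional-transformer forecasting block in the
# spirit of CT-NET: a 1D conv for local features followed by multi-head
# attention for context. Dimensions are illustrative.
import torch
import torch.nn as nn

class ConvAttnBlock(nn.Module):
    def __init__(self, channels=32, heads=4):
        super().__init__()
        self.conv = nn.Conv1d(1, channels, kernel_size=3, padding=1)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.head = nn.Linear(channels, 1)  # next-step PV power

    def forward(self, x):                    # x: (batch, seq_len) PV history
        h = self.conv(x.unsqueeze(1))        # (B, C, T): local features
        h = h.transpose(1, 2)                # (B, T, C) for attention
        h, _ = self.attn(h, h, h)            # contextual features
        return self.head(h[:, -1])           # predict from the last step
```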
Funding: Supported by the National Key R&D Program of China (No. 2018YFB1801101), the National Science Fund for Distinguished Young Scholars, China (No. 61925102), the Key Project of the State Key Lab of Networking and Switching Technology, China (No. NST20180105), Huawei, and ZTE Corporation.
Abstract: With the commercialization of fifth-generation (5G) networks worldwide, research into sixth-generation (6G) networks has been launched to meet the demands for high data rates and low latency of future services. A wireless propagation channel is the transmission medium that transfers information between the transmitter and the receiver, and channel properties determine the ultimate performance limit of wireless communication systems. Thus, conducting channel research is a prerequisite for designing 6G wireless communication systems. In this paper, we first introduce several emerging technologies and applications for 6G, such as terahertz communication, the industrial Internet of Things, space-air-ground integrated networks, and machine learning, and point out the developing trends of 6G channel models. Then, we review channel measurements and models for these technologies and applications. Finally, the outlook for 6G channel measurements and models is discussed.
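As a small numerical illustration of why channel modelling is central at 6G candidate bands, the snippet below evaluates the textbook free-space path loss formula, FSPL(dB) = 20 log10(4*pi*d*f/c), at sub-6 GHz versus terahertz carriers; this is standard link-budget material, not a model from the surveyed papers.

```python
# Free-space path loss (Friis): FSPL_dB = 20*log10(4*pi*d*f/c).
# Shows how moving from sub-6 GHz to THz carriers raises the baseline
# loss, before molecular-absorption effects are even considered.
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    c = 299_792_458.0  # speed of light, m/s
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / c)

for f in (3.5e9, 28e9, 300e9):  # 5G mid-band, mmWave, THz candidate
    print(f"{f/1e9:7.1f} GHz at 100 m: {fspl_db(100.0, f):6.1f} dB")
```

At 100 m this gives roughly 83 dB at 3.5 GHz but about 122 dB at 300 GHz, a gap that channel measurements and models for 6G must characterise before system design can proceed.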