DURING our discussion at workshops for writing“What Does ChatGPT Say:The DAO from Algorithmic Intelligence to Linguistic Intelligence”[1],we had expected the next milestone for Artificial Intelligence(AI)would be in...DURING our discussion at workshops for writing“What Does ChatGPT Say:The DAO from Algorithmic Intelligence to Linguistic Intelligence”[1],we had expected the next milestone for Artificial Intelligence(AI)would be in the direction of Imaginative Intelligence(II),i.e.,something similar to automatic wordsto-videos generation or intelligent digital movies/theater technology that could be used for conducting new“Artificiofactual Experiments”[2]to replace conventional“Counterfactual Experiments”in scientific research and technical development for both natural and social studies[2]-[6].Now we have OpenAI’s Sora,so soon,but this is not the final,actually far away,and it is just the beginning.展开更多
The real-time detection and instance segmentation of strawberries constitute fundamental components in the development of strawberry harvesting robots.Real-time identification of strawberries in an unstructured envi-r...The real-time detection and instance segmentation of strawberries constitute fundamental components in the development of strawberry harvesting robots.Real-time identification of strawberries in an unstructured envi-ronment is a challenging task.Current instance segmentation algorithms for strawberries suffer from issues such as poor real-time performance and low accuracy.To this end,the present study proposes an Efficient YOLACT(E-YOLACT)algorithm for strawberry detection and segmentation based on the YOLACT framework.The key enhancements of the E-YOLACT encompass the development of a lightweight attention mechanism,pyramid squeeze shuffle attention(PSSA),for efficient feature extraction.Additionally,an attention-guided context-feature pyramid network(AC-FPN)is employed instead of FPN to optimize the architecture’s performance.Furthermore,a feature-enhanced model(FEM)is introduced to enhance the prediction head’s capabilities,while efficient fast non-maximum suppression(EF-NMS)is devised to improve non-maximum suppression.The experimental results demonstrate that the E-YOLACT achieves a Box-mAP and Mask-mAP of 77.9 and 76.6,respectively,on the custom dataset.Moreover,it exhibits an impressive category accuracy of 93.5%.Notably,the E-YOLACT also demonstrates a remarkable real-time detection capability with a speed of 34.8 FPS.The method proposed in this article presents an efficient approach for the vision system of a strawberry-picking robot.展开更多
The mining sector historically drove the global economy but at the expense of severe environmental and health repercussions,posing sustainability challenges[1]-[3].Recent advancements on artificial intelligence(AI)are...The mining sector historically drove the global economy but at the expense of severe environmental and health repercussions,posing sustainability challenges[1]-[3].Recent advancements on artificial intelligence(AI)are revolutionizing mining through robotic and data-driven innovations[4]-[7].While AI offers mining industry advantages,it is crucial to acknowledge the potential risks associated with its widespread use.Over-reliance on AI may lead to a loss of human control over mining operations in the future,resulting in unpredictable consequences.展开更多
In this paper, the initial boundary value problem of a class of nonlinear generalized Kolmogorov-Petrovlkii-Piskunov equations is studied. The existence and uniqueness of the solution and the bounded absorption set ar...In this paper, the initial boundary value problem of a class of nonlinear generalized Kolmogorov-Petrovlkii-Piskunov equations is studied. The existence and uniqueness of the solution and the bounded absorption set are proved by the prior estimation and the Galerkin finite element method, thus the existence of the global attractor is proved and the upper bound estimate of the global attractor is obtained.展开更多
ChatG PT,an artificial intelligence generated content (AIGC) model developed by OpenAI,has attracted worldwide attention for its capability of dealing with challenging language understanding and generation tasks in th...ChatG PT,an artificial intelligence generated content (AIGC) model developed by OpenAI,has attracted worldwide attention for its capability of dealing with challenging language understanding and generation tasks in the form of conversations.This paper briefly provides an overview on the history,status quo and potential future development of ChatGPT,helping to provide an entry point to think about ChatGPT.Specifically,from the limited open-accessed resources,we conclude the core techniques of ChatGPT,mainly including large-scale language models,in-context learning,reinforcement learning from human feedback and the key technical steps for developing ChatGPT.We further analyze the pros and cons of ChatGPT and we rethink the duality of ChatGPT in various fields.Although it has been widely acknowledged that ChatGPT brings plenty of opportunities for various fields,mankind should still treat and use ChatG PT properly to avoid the potential threat,e.g.,academic integrity and safety challenge.Finally,we discuss several open problems as the potential development of ChatGPT.展开更多
THE current ChatGPT phenomenon has signaled a new era of Artificial Intelligence moving from Algorithmic Intelligence to Linguistic Intelligence where interactive activities between actual and artificial,real and virt...THE current ChatGPT phenomenon has signaled a new era of Artificial Intelligence moving from Algorithmic Intelligence to Linguistic Intelligence where interactive activities between actual and artificial,real and virtual,human and machine play an active and important role online and in real-time.At IEEE/CAA JAS,we are interested in investigating the impact and significance of this new era on industrial development,especially control and automation for manufacturing and production.展开更多
Artificial intelligence(AI)continues to transform data analysis in many domains.Progress in each domain is driven by a growing body of annotated data,increased computational resources,and technological innovations.In ...Artificial intelligence(AI)continues to transform data analysis in many domains.Progress in each domain is driven by a growing body of annotated data,increased computational resources,and technological innovations.In medicine,the sensitivity of the data,the complexity of the tasks,the potentially high stakes,and a requirement of accountability give rise to a particular set of challenges.In this review,we focus on three key methodological approaches that address some of the particular challenges in AI-driven medical decision making.1)Explainable AI aims to produce a human-interpretable justification for each output.Such models increase confidence if the results appear plausible and match the clinicians expectations.However,the absence of a plausible explanation does not imply an inaccurate model.Especially in highly non-linear,complex models that are tuned to maximize accuracy,such interpretable representations only reflect a small portion of the justification.2)Domain adaptation and transfer learning enable AI models to be trained and applied across multiple domains.For example,a classification task based on images acquired on different acquisition hardware.3)Federated learning enables learning large-scale models without exposing sensitive personal health information.Unlike centralized AI learning,where the centralized learning machine has access to the entire training data,the federated learning process iteratively updates models across multiple sites by exchanging only parameter updates,not personal health data.This narrative review covers the basic concepts,highlights relevant corner-stone and stateof-the-art research in the field,and discusses perspectives.展开更多
In order to make the peak and offset of the signal meet the requirements of artificial equipment,dynamical analysis and geometric control of the laser system have become indispensable.In this paper,a locally active me...In order to make the peak and offset of the signal meet the requirements of artificial equipment,dynamical analysis and geometric control of the laser system have become indispensable.In this paper,a locally active memristor with non-volatile memory is introduced into a complex-valued Lorenz laser system.By using numerical measures,complex dynamical behaviors of the memristive laser system are uncovered.It appears the alternating appearance of quasi-periodic and chaotic oscillations.The mechanism of transformation from a quasi-periodic pattern to a chaotic one is revealed from the perspective of Hamilton energy.Interestingly,initial-values-oriented extreme multi-stability patterns are found,where the coexisting attractors have the same Lyapunov exponents.In addition,the introduction of a memristor greatly improves the complexity of the laser system.Moreover,to control the amplitude and offset of the chaotic signal,two kinds of geometric control methods including amplitude control and rotation control are designed.The results show that these two geometric control methods have revised the size and position of the chaotic signal without changing the chaotic dynamics.Finally,a digital hardware device is developed and the experiment outputs agree fairly well with those of the numerical simulations.展开更多
Multi-source information can be obtained through the fusion of infrared images and visible light images,which have the characteristics of complementary information.However,the existing acquisition methods of fusion im...Multi-source information can be obtained through the fusion of infrared images and visible light images,which have the characteristics of complementary information.However,the existing acquisition methods of fusion images have disadvantages such as blurred edges,low contrast,and loss of details.Based on convolution sparse representation and improved pulse-coupled neural network this paper proposes an image fusion algorithm that decompose the source images into high-frequency and low-frequency subbands by non-subsampled Shearlet Transform(NSST).Furthermore,the low-frequency subbands were fused by convolutional sparse representation(CSR),and the high-frequency subbands were fused by an improved pulse coupled neural network(IPCNN)algorithm,which can effectively solve the problem of difficulty in setting parameters of the traditional PCNN algorithm,improving the performance of sparse representation with details injection.The result reveals that the proposed method in this paper has more advantages than the existing mainstream fusion algorithms in terms of visual effects and objective indicators.展开更多
Background Image denoising is an important topic in the digital image processing field.This study theoretically investigates the validity of the classical nonlocal mean filter(NLM)for removing Gaussian noise from a no...Background Image denoising is an important topic in the digital image processing field.This study theoretically investigates the validity of the classical nonlocal mean filter(NLM)for removing Gaussian noise from a novel statistical perspective.Method By considering the restored image as an estimator of the clear image from a statistical perspective,we gradually analyze the unbiasedness and effectiveness of the restored value obtained by the NLM filter.Subsequently,we propose an improved NLM algorithm called the clustering-based NLM filter that is derived from the conditions obtained through the theoretical analysis.The proposed filter attempts to restore an ideal value using the approximately constant intensities obtained by the image clustering process.In this study,we adopt a mixed probability model on a prefiltered image to generate an estimator of the ideal clustered components.Result The experiment yields improved peak signal-to-noise ratio values and visual results upon the removal of Gaussian noise.Conclusion However,the considerable practical performance of our filter demonstrates that our method is theoretically acceptable as it can effectively estimate ideal images.展开更多
In developing countries like South Africa,users experienced more than 1030 hours of load shedding outages in just the first half of 2023 due to inadequate power supply from the national grid.Residential homes that can...In developing countries like South Africa,users experienced more than 1030 hours of load shedding outages in just the first half of 2023 due to inadequate power supply from the national grid.Residential homes that cannot afford to take actions to mitigate the challenges of load shedding are severely inconvenienced as they have to reschedule their demand involuntarily.This study presents optimal strategies to guide households in determining suitable scheduling and sizing solutions for solar home systems to mitigate the inconvenience experienced by residents due to load shedding.To start with,we predict the load shedding stages that are used as input for the optimal strategies by using the K-Nearest Neighbour(KNN)algorithm.Based on an accurate forecast of the future load shedding patterns,we formulate the residents’inconvenience and the loss of power supply probability during load shedding as the objective function.When solving the multi-objective optimisation problem,four different strategies to fight against load shedding are identified,namely(1)optimal home appliance scheduling(HAS)under load shedding;(2)optimal HAS supported by solar panels;(3)optimal HAS supported by batteries,and(4)optimal HAS supported by the solar home system with both solar panels and batteries.Among these strategies,appliance scheduling with an optimally sized 9.6 kWh battery and a 2.74 kWp panel array of five 550 Wp panels,eliminates the loss of power supply probability and reduces the inconvenience by 92%when tested under the South African load shedding cases in 2023.展开更多
Due to ever-growing soccer data collection approaches and progressing artificial intelligence(AI) methods, soccer analysis, evaluation, and decision-making have received increasing interest from not only the professio...Due to ever-growing soccer data collection approaches and progressing artificial intelligence(AI) methods, soccer analysis, evaluation, and decision-making have received increasing interest from not only the professional sports analytics realm but also the academic AI research community. AI brings gamechanging approaches for soccer analytics where soccer has been a typical benchmark for AI research. The combination has been an emerging topic. In this paper, soccer match analytics are taken as a complete observation-orientation-decision-action(OODA) loop.In addition, as in AI frameworks such as that for reinforcement learning, interacting with a virtual environment enables an evolving model. Therefore, both soccer analytics in the real world and virtual domains are discussed. With the intersection of the OODA loop and the real-virtual domains, available soccer data, including event and tracking data, and diverse orientation and decisionmaking models for both real-world and virtual soccer matches are comprehensively reviewed. Finally, some promising directions in this interdisciplinary area are pointed out. It is claimed that paradigms for both professional sports analytics and AI research could be combined. Moreover, it is quite promising to bridge the gap between the real and virtual domains for soccer match analysis and decision-making.展开更多
This paper provides a comprehensive review of the current status, advancements, and future prospects of humanoid robots, highlighting their significance in driving the evolution of next-generation industries. By analy...This paper provides a comprehensive review of the current status, advancements, and future prospects of humanoid robots, highlighting their significance in driving the evolution of next-generation industries. By analyzing various research endeavors and key technologies, encompassing ontology structure,control and decision-making, and perception and interaction, a holistic overview of the current state of humanoid robot research is presented. Furthermore, emerging challenges in the field are identified, emphasizing the necessity for a deeper understanding of biological motion mechanisms, improved structural design,enhanced material applications, advanced drive and control methods, and efficient energy utilization. The integration of bionics, brain-inspired intelligence, mechanics, and control is underscored as a promising direction for the development of advanced humanoid robotic systems. This paper serves as an invaluable resource, offering insightful guidance to researchers in the field,while contributing to the ongoing evolution and potential of humanoid robots across diverse domains.展开更多
By automatically learning the priors embedded in images with powerful modelling ca-pabilities,deep learning-based algorithms have recently made considerable progress in reconstructing the high-resolution hyperspectral...By automatically learning the priors embedded in images with powerful modelling ca-pabilities,deep learning-based algorithms have recently made considerable progress in reconstructing the high-resolution hyperspectral(HR-HS)image.With previously collected large-amount of external data,these methods are intuitively realised under the full supervision of the ground-truth data.Thus,the database construction in merging the low-resolution(LR)HS(LR-HS)and HR multispectral(MS)or RGB image research paradigm,commonly named as HSI SR,requires collecting corresponding training triplets:HR-MS(RGB),LR-HS and HR-HS image simultaneously,and often faces dif-ficulties in reality.The learned models with the training datasets collected simultaneously under controlled conditions may significantly degrade the HSI super-resolved perfor-mance to the real images captured under diverse environments.To handle the above-mentioned limitations,the authors propose to leverage the deep internal and self-supervised learning to solve the HSI SR problem.The authors advocate that it is possible to train a specific CNN model at test time,called as deep internal learning(DIL),by on-line preparing the training triplet samples from the observed LR-HS/HR-MS(or RGB)images and the down-sampled LR-HS version.However,the number of the training triplets extracted solely from the transformed data of the observation itself is extremely few particularly for the HSI SR tasks with large spatial upscale factors,which would result in limited reconstruction performance.To solve this problem,the authors further exploit deep self-supervised learning(DSL)by considering the observations as the unlabelled training samples.Specifically,the degradation modules inside the network were elaborated to realise 
the spatial and spectral down-sampling procedures for transforming the generated HR-HS estimation to the high-resolution RGB/LR-HS approximation,and then the reconstruction errors of the observations were formulated for measuring the network modelling performance.By consolidating the DIL and DSL into a unified deep framework,the authors construct a more robust HSI SR method without any prior training and have great potential of flexible adaptation to different settings per obser-vation.To verify the effectiveness of the proposed approach,extensive experiments have been conducted on two benchmark HS datasets,including the CAVE and Harvard datasets,and demonstrate the great performance gain of the proposed method over the state-of-the-art methods.展开更多
In this paper,fixed-time consensus tracking for mul-tiagent systems(MASs)with dynamics in the form of strict feed-back affine nonlinearity is addressed.A fixed-time antidistur-bance consensus tracking protocol is prop...In this paper,fixed-time consensus tracking for mul-tiagent systems(MASs)with dynamics in the form of strict feed-back affine nonlinearity is addressed.A fixed-time antidistur-bance consensus tracking protocol is proposed,which consists of a distributed fixed-time observer,a fixed-time disturbance observer,a nonsmooth antidisturbance backstepping controller,and the fixed-time stability analysis is conducted by using the Lyapunov theory correspondingly.This paper includes three main improvements.First,a distributed fixed-time observer is developed for each follower to obtain an estimate of the leader’s output by utilizing the topology of the communication network.Second,a fixed-time disturbance observer is given to estimate the lumped disturbances for feedforward compensation.Finally,a nonsmooth antidisturbance backstepping tracking controller with feedforward compensation for lumped disturbances is designed.In order to mitigate the“explosion of complexity”in the tradi-tional backstepping approach,we have implemented a modified nonsmooth command filter to enhance the performance of the closed-loop system.The simulation results show that the pro-posed method is effective.展开更多
This paper investigates the tracking control problem for unmanned underwater vehicles(UUVs)systems with sensor faults,input saturation,and external disturbance caused by waves and ocean currents.An active sensor fault...This paper investigates the tracking control problem for unmanned underwater vehicles(UUVs)systems with sensor faults,input saturation,and external disturbance caused by waves and ocean currents.An active sensor fault-tolerant control scheme is proposed.First,the developed method only requires the inertia matrix of the UUV,without other dynamic information,and can handle both additive and multiplicative sensor faults.Subsequently,an adaptive fault-tolerant controller is designed to achieve asymptotic tracking control of the UUV by employing robust integral of the sign of error feedback method.It is shown that the effect of sensor faults is online estimated and compensated by an adaptive estimator.With the proposed controller,the tracking error and estimation error can asymptotically converge to zero.Finally,simulation results are performed to demonstrate the effectiveness of the proposed method.展开更多
The pursuit-evasion game models the strategic interaction among players, attracting attention in many realistic scenarios, such as missile guidance, unmanned aerial vehicles, and target defense. Existing studies mainl...The pursuit-evasion game models the strategic interaction among players, attracting attention in many realistic scenarios, such as missile guidance, unmanned aerial vehicles, and target defense. Existing studies mainly concentrate on the cooperative pursuit of multiple players in two-dimensional pursuit-evasion games. However, these approaches can hardly be applied to practical situations where players usually move in three-dimensional space with a three-degree-of-freedom control. In this paper,we make the first attempt to investigate the equilibrium strategy of the realistic pursuit-evasion game, in which the pursuer follows a three-degree-of-freedom control, and the evader moves freely. First, we describe the pursuer's three-degree-of-freedom control and the evader's relative coordinate. We then rigorously derive the equilibrium strategy by solving the retrogressive path equation according to the Hamilton-Jacobi-Bellman-Isaacs(HJBI) method, which divides the pursuit-evasion process into the navigation and acceleration phases. Besides, we analyze the maximum allowable speed for the pursuer to capture the evader successfully and provide the strategy with which the evader can escape when the pursuer's speed exceeds the threshold. We further conduct comparison tests with various unilateral deviations to verify that the proposed strategy forms a Nash equilibrium.展开更多
Given the challenge of estimating or calculating quantities of waste electrical and electronic equipment(WEEE)in developing countries,this article focuses on predicting the WEEE generated by Cameroonian small and medi...Given the challenge of estimating or calculating quantities of waste electrical and electronic equipment(WEEE)in developing countries,this article focuses on predicting the WEEE generated by Cameroonian small and medium enterprises(SMEs)that are engaged in ISO 14001:2015 initiatives and consume electrical and electronic equipment(EEE)to enhance their performance and profitability.The methodology employed an exploratory approach involving the application of general equilibrium theory(GET)to contextualize the study and generate relevant parameters for deploying the random forest regression learning algorithm for predictions.Machine learning was applied to 80%of the samples for training,while simulation was conducted on the remaining 20%of samples based on quantities of EEE utilized over a specific period,utilization rates,repair rates,and average lifespans.The results demonstrate that the model’s predicted values are significantly close to the actual quantities of generated WEEE,and the model’s performance was evaluated using the mean squared error(MSE)and yielding satisfactory results.Based on this model,both companies and stakeholders can set realistic objectives for managing companies’WEEE,fostering sustainable socio-environmental practices.展开更多
Galaxy morphology classifications based on machine learning are a typical technique to handle enormous amounts of astronomical observation data,but the key challenge is how to provide enough training data for the mach...Galaxy morphology classifications based on machine learning are a typical technique to handle enormous amounts of astronomical observation data,but the key challenge is how to provide enough training data for the machine learning models.Therefore this article proposes an image data augmentation method that combines few-shot learning and generative adversarial networks.The Galaxy10 DECaLs data set is selected for the experiments with consistency,variance,and augmentation effects being evaluated.Three popular networks,including AlexNet,VGG,and ResNet,are used as examples to study the effectiveness of different augmentation methods on galaxy morphology classifications.Experiment results show that the proposed method can generate galaxy images and can be used for expanding the classification model’s training set.According to comparative studies,the best enhancement effect on model performance is obtained by generating a data set that is 0.5–1 time larger than the original data set.Meanwhile,different augmentation strategies have considerably varied effects on different types of galaxies.FSL-GAN achieved the best classification performance on the ResNet network for In-between Round Smooth Galaxies and Unbarred Loose Spiral Galaxies,with F1 Scores of 89.54%and 63.18%,respectively.Experimental comparison reveals that various data augmentation techniques have varied effects on different categories of galaxy morphology and machine learning models.Finally,the best augmentation strategies for each galaxy category are suggested.展开更多
Most of the neural network architectures are based on human experience,which requires a long and tedious trial-and-error process.Neural architecture search(NAS)attempts to detect effective architectures without human ...Most of the neural network architectures are based on human experience,which requires a long and tedious trial-and-error process.Neural architecture search(NAS)attempts to detect effective architectures without human intervention.Evolutionary algorithms(EAs)for NAS can find better solutions than human-designed architectures by exploring a large search space for possible architectures.Using multiobjective EAs for NAS,optimal neural architectures that meet various performance criteria can be explored and discovered efficiently.Furthermore,hardware-accelerated NAS methods can improve the efficiency of the NAS.While existing reviews have mainly focused on different strategies to complete NAS,a few studies have explored the use of EAs for NAS.In this paper,we summarize and explore the use of EAs for NAS,as well as large-scale multiobjective optimization strategies and hardware-accelerated NAS methods.NAS performs well in healthcare applications,such as medical image analysis,classification of disease diagnosis,and health monitoring.EAs for NAS can automate the search process and optimize multiple objectives simultaneously in a given healthcare task.Deep neural network has been successfully used in healthcare,but it lacks interpretability.Medical data is highly sensitive,and privacy leaks are frequently reported in the healthcare industry.To solve these problems,in healthcare,we propose an interpretable neuroevolution framework based on federated learning to address search efficiency and privacy protection.Moreover,we also point out future research directions for evolutionary NAS.Overall,for researchers who want to use EAs to optimize NNs in healthcare,we analyze the advantages and disadvantages of doing so to provide detailed guidance,and propose an 
interpretable privacy-preserving framework for healthcare applications.展开更多
基金the National Natural Science Foundation of China(62271485,61903363,U1811463,62103411,62203250)the Science and Technology Development Fund of Macao SAR(0093/2023/RIA2,0050/2020/A1)。
文摘DURING our discussion at workshops for writing“What Does ChatGPT Say:The DAO from Algorithmic Intelligence to Linguistic Intelligence”[1],we had expected the next milestone for Artificial Intelligence(AI)would be in the direction of Imaginative Intelligence(II),i.e.,something similar to automatic wordsto-videos generation or intelligent digital movies/theater technology that could be used for conducting new“Artificiofactual Experiments”[2]to replace conventional“Counterfactual Experiments”in scientific research and technical development for both natural and social studies[2]-[6].Now we have OpenAI’s Sora,so soon,but this is not the final,actually far away,and it is just the beginning.
基金funded by Anhui Provincial Natural Science Foundation(No.2208085ME128)the Anhui University-Level Special Project of Anhui University of Science and Technology(No.XCZX2021-01)+1 种基金the Research and the Development Fund of the Institute of Environmental Friendly Materials and Occupational Health,Anhui University of Science and Technology(No.ALW2022YF06)Anhui Province New Era Education Quality Project(Graduate Education)(No.2022xscx073).
文摘The real-time detection and instance segmentation of strawberries constitute fundamental components in the development of strawberry harvesting robots.Real-time identification of strawberries in an unstructured envi-ronment is a challenging task.Current instance segmentation algorithms for strawberries suffer from issues such as poor real-time performance and low accuracy.To this end,the present study proposes an Efficient YOLACT(E-YOLACT)algorithm for strawberry detection and segmentation based on the YOLACT framework.The key enhancements of the E-YOLACT encompass the development of a lightweight attention mechanism,pyramid squeeze shuffle attention(PSSA),for efficient feature extraction.Additionally,an attention-guided context-feature pyramid network(AC-FPN)is employed instead of FPN to optimize the architecture’s performance.Furthermore,a feature-enhanced model(FEM)is introduced to enhance the prediction head’s capabilities,while efficient fast non-maximum suppression(EF-NMS)is devised to improve non-maximum suppression.The experimental results demonstrate that the E-YOLACT achieves a Box-mAP and Mask-mAP of 77.9 and 76.6,respectively,on the custom dataset.Moreover,it exhibits an impressive category accuracy of 93.5%.Notably,the E-YOLACT also demonstrates a remarkable real-time detection capability with a speed of 34.8 FPS.The method proposed in this article presents an efficient approach for the vision system of a strawberry-picking robot.
文摘The mining sector historically drove the global economy but at the expense of severe environmental and health repercussions,posing sustainability challenges[1]-[3].Recent advancements on artificial intelligence(AI)are revolutionizing mining through robotic and data-driven innovations[4]-[7].While AI offers mining industry advantages,it is crucial to acknowledge the potential risks associated with its widespread use.Over-reliance on AI may lead to a loss of human control over mining operations in the future,resulting in unpredictable consequences.
Abstract: In this paper, the initial boundary value problem for a class of nonlinear generalized Kolmogorov-Petrovskii-Piskunov equations is studied. The existence and uniqueness of the solution and of a bounded absorbing set are proved by a priori estimates and the Galerkin finite element method; thus the existence of the global attractor is proved and an upper bound estimate for the global attractor is obtained.
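The abstract does not state the generalized equation explicitly; for orientation, the classical Kolmogorov-Petrovskii-Piskunov (Fisher-KPP) prototype from which this class of equations derives is

```latex
\frac{\partial u}{\partial t} = D\,\frac{\partial^2 u}{\partial x^2} + r\,u(1-u),
```

with diffusion coefficient $D>0$ and reaction rate $r>0$; the generalized equations studied in the paper replace this reaction term with more general nonlinearities whose exact form is not given in the abstract.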
Funding: Supported by the National Key Research and Development Program of China (2021YFB1714300), the National Natural Science Foundation of China (62293502, 61831022, 61976211), and the Youth Innovation Promotion Association CAS.
Abstract: ChatGPT, an artificial intelligence generated content (AIGC) model developed by OpenAI, has attracted worldwide attention for its capability of dealing with challenging language understanding and generation tasks in the form of conversations. This paper briefly provides an overview of the history, status quo, and potential future development of ChatGPT, offering an entry point for thinking about ChatGPT. Specifically, from the limited openly accessible resources, we summarize the core techniques of ChatGPT, mainly including large-scale language models, in-context learning, reinforcement learning from human feedback, and the key technical steps for developing ChatGPT. We further analyze the pros and cons of ChatGPT and rethink its duality in various fields. Although it has been widely acknowledged that ChatGPT brings plenty of opportunities for various fields, mankind should still treat and use ChatGPT properly to avoid potential threats, e.g., academic integrity and safety challenges. Finally, we discuss several open problems regarding the potential development of ChatGPT.
Funding: Partially supported by the Science and Technology Development Fund of Macao SAR (0050/2020/A1) and the National Natural Science Foundation of China (62103411).
Abstract: The current ChatGPT phenomenon has signaled a new era of Artificial Intelligence, moving from Algorithmic Intelligence to Linguistic Intelligence, in which interactive activities between actual and artificial, real and virtual, human and machine play an active and important role online and in real time. At IEEE/CAA JAS, we are interested in investigating the impact and significance of this new era on industrial development, especially control and automation for manufacturing and production.
Funding: This work was supported in part by the National Natural Science Foundation of China (82260360) and the Foreign Young Talent Program (QN2021033002L).
Abstract: Artificial intelligence (AI) continues to transform data analysis in many domains. Progress in each domain is driven by a growing body of annotated data, increased computational resources, and technological innovations. In medicine, the sensitivity of the data, the complexity of the tasks, the potentially high stakes, and a requirement of accountability give rise to a particular set of challenges. In this review, we focus on three key methodological approaches that address some of the particular challenges in AI-driven medical decision making. 1) Explainable AI aims to produce a human-interpretable justification for each output. Such models increase confidence if the results appear plausible and match the clinician's expectations. However, the absence of a plausible explanation does not imply an inaccurate model. Especially in highly non-linear, complex models that are tuned to maximize accuracy, such interpretable representations only reflect a small portion of the justification. 2) Domain adaptation and transfer learning enable AI models to be trained and applied across multiple domains, for example, a classification task based on images acquired with different acquisition hardware. 3) Federated learning enables learning large-scale models without exposing sensitive personal health information. Unlike centralized AI learning, where the centralized learning machine has access to the entire training data, the federated learning process iteratively updates models across multiple sites by exchanging only parameter updates, not personal health data. This narrative review covers the basic concepts, highlights relevant cornerstone and state-of-the-art research in the field, and discusses perspectives.
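To make the federated learning idea above concrete (sites exchange only parameter updates, never patient records), here is a minimal FedAvg-style sketch for a toy one-parameter linear model; the two site datasets and the learning rate are invented for illustration:

```python
def local_update(w, data, lr=0.1):
    # One pass of gradient descent for a toy 1-D linear model y = w * x
    # under squared error; only the updated parameter leaves the site.
    for x, y in data:
        w -= lr * 2 * (w * x - y) * x
    return w

def federated_average(global_w, site_datasets, rounds=20):
    # FedAvg-style loop: each site trains locally on its private data,
    # and the server averages the returned parameters.
    for _ in range(rounds):
        updates = [local_update(global_w, d) for d in site_datasets]
        global_w = sum(updates) / len(updates)
    return global_w

# Two hypothetical sites whose private data follow y = 2 * x;
# raw records never leave the sites, only the parameter does.
site_a = [(1.0, 2.0), (2.0, 4.0)]
site_b = [(3.0, 6.0), (0.5, 1.0)]
w = federated_average(0.0, [site_a, site_b])  # converges toward w = 2
```

Production systems add secure aggregation and differential privacy on top of this exchange, but the communication pattern is the same.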
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 61773010) and the Taishan Scholar Foundation of Shandong Province of China (Grant No. ts20190938).
Abstract: In order to make the peak and offset of a signal meet the requirements of artificial equipment, dynamical analysis and geometric control of the laser system have become indispensable. In this paper, a locally active memristor with non-volatile memory is introduced into a complex-valued Lorenz laser system. Using numerical measures, complex dynamical behaviors of the memristive laser system are uncovered, with quasi-periodic and chaotic oscillations appearing alternately. The mechanism of transformation from a quasi-periodic pattern to a chaotic one is revealed from the perspective of Hamilton energy. Interestingly, initial-value-dependent extreme multistability patterns are found, in which the coexisting attractors have the same Lyapunov exponents. In addition, the introduction of the memristor greatly improves the complexity of the laser system. Moreover, to control the amplitude and offset of the chaotic signal, two kinds of geometric control methods, amplitude control and rotation control, are designed. The results show that these two geometric control methods revise the size and position of the chaotic signal without changing the chaotic dynamics. Finally, a digital hardware device is developed, and the experimental outputs agree fairly well with those of the numerical simulations.
Funding: Supported in part by the National Natural Science Foundation of China under Grant 41505017.
Abstract: Multi-source information can be obtained through the fusion of infrared images and visible-light images, which have complementary characteristics. However, existing methods for producing fused images suffer from disadvantages such as blurred edges, low contrast, and loss of detail. Based on convolutional sparse representation and an improved pulse-coupled neural network, this paper proposes an image fusion algorithm that decomposes the source images into high-frequency and low-frequency subbands by the non-subsampled shearlet transform (NSST). The low-frequency subbands are then fused by convolutional sparse representation (CSR), and the high-frequency subbands are fused by an improved pulse-coupled neural network (IPCNN) algorithm, which effectively resolves the difficulty of setting the parameters of the traditional PCNN algorithm and improves the performance of sparse representation with detail injection. The results reveal that the proposed method has advantages over existing mainstream fusion algorithms in terms of both visual effects and objective indicators.
Abstract: Background Image denoising is an important topic in the digital image processing field. This study theoretically investigates the validity of the classical nonlocal means filter (NLM) for removing Gaussian noise from a novel statistical perspective. Method By considering the restored image as an estimator of the clean image from a statistical perspective, we progressively analyze the unbiasedness and effectiveness of the restored value obtained by the NLM filter. Subsequently, we propose an improved NLM algorithm, the clustering-based NLM filter, derived from the conditions obtained through the theoretical analysis. The proposed filter attempts to restore an ideal value using the approximately constant intensities obtained by an image clustering process. In this study, we adopt a mixture probability model on a prefiltered image to generate an estimator of the ideal clustered components. Result The experiments yield improved peak signal-to-noise ratio values and visual results upon the removal of Gaussian noise. Conclusion The considerable practical performance of our filter demonstrates that the method is theoretically acceptable, as it can effectively estimate ideal images.
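The classical NLM restoration that the paper analyzes, a weighted average of pixels whose surrounding patches look alike, can be sketched as follows; the patch radius, search radius, and smoothing parameter h are illustrative choices, and the clustering refinement proposed in the paper is not included:

```python
import math

def nlm_denoise(img, patch=1, search=2, h=10.0):
    # Nonlocal means: each pixel is replaced by a weighted average of
    # nearby pixels, where weights decay with the squared distance
    # between the patches surrounding the two pixels.
    H, W = len(img), len(img[0])
    def px(y, x):  # clamp-to-edge access for out-of-bounds coordinates
        return img[min(max(y, 0), H - 1)][min(max(x, 0), W - 1)]
    def patch_dist(y0, x0, y1, x1):
        return sum((px(y0 + dy, x0 + dx) - px(y1 + dy, x1 + dx)) ** 2
                   for dy in range(-patch, patch + 1)
                   for dx in range(-patch, patch + 1))
    out = [[0.0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            num = den = 0.0
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    w = math.exp(-patch_dist(y, x, y + dy, x + dx) / (h * h))
                    num += w * px(y + dy, x + dx)
                    den += w
            out[y][x] = num / den
    return out
```

On a perfectly flat region every weight is equal, so the filter is unbiased there; the paper's analysis concerns exactly when this weighted average remains a good estimator elsewhere.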
Funding: Supported by the National Key R&D Program of China (Grant No. 2021YFE0199000), the National Natural Science Foundation of China (Grant No. 62133015), the National Research Foundation China/South Africa Research Cooperation Programme (Grant No. 148762), and the Royal Academy of Engineering Transforming Systems through Partnership grant scheme (reference No. TSP2021\100016).
Abstract: In developing countries like South Africa, users experienced more than 1030 hours of load-shedding outages in just the first half of 2023 due to inadequate power supply from the national grid. Residential homes that cannot afford measures to mitigate the challenges of load shedding are severely inconvenienced, as they have to reschedule their demand involuntarily. This study presents optimal strategies to guide households in determining suitable scheduling and sizing solutions for solar home systems to mitigate the inconvenience caused by load shedding. To start with, we predict the load-shedding stages that are used as input for the optimal strategies by using the K-Nearest Neighbours (KNN) algorithm. Based on an accurate forecast of future load-shedding patterns, we formulate the residents' inconvenience and the loss-of-power-supply probability during load shedding as the objective function. In solving the multi-objective optimisation problem, four different strategies against load shedding are identified, namely (1) optimal home appliance scheduling (HAS) under load shedding; (2) optimal HAS supported by solar panels; (3) optimal HAS supported by batteries; and (4) optimal HAS supported by a solar home system with both solar panels and batteries. Among these strategies, appliance scheduling with an optimally sized 9.6 kWh battery and a 2.74 kWp array of five 550 Wp panels eliminates the loss-of-power-supply probability and reduces the inconvenience by 92% when tested on the South African load-shedding cases of 2023.
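The KNN step used to forecast load-shedding stages can be sketched as follows; the feature choice (hour of day, grid demand) and the historical records are hypothetical, not the study's data:

```python
from collections import Counter

def knn_predict(train, query, k=3):
    # train: list of (feature_vector, label) pairs; predict by majority
    # vote among the k nearest neighbours under squared Euclidean distance.
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    nearest = sorted(train, key=lambda item: dist(item[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Hypothetical features (hour of day, national demand in GW) -> stage label.
history = [((18, 30.1), 4), ((19, 29.8), 4), ((3, 21.0), 0),
           ((2, 20.5), 0), ((17, 28.9), 3)]
stage = knn_predict(history, (18, 29.5))  # predicted load-shedding stage
```

In practice the features would be normalised and k chosen by cross-validation; the sketch only shows the voting mechanism.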
Funding: Supported by the National Key Research and Development Program of China (2020AAA0103404), the Beijing Nova Program (20220484077), and the National Natural Science Foundation of China (62073323).
Abstract: Due to ever-growing soccer data collection approaches and progressing artificial intelligence (AI) methods, soccer analysis, evaluation, and decision-making have received increasing interest, not only from the professional sports analytics realm but also from the academic AI research community. AI brings game-changing approaches for soccer analytics, where soccer has been a typical benchmark for AI research, and the combination has become an emerging topic. In this paper, soccer match analytics is treated as a complete observation-orientation-decision-action (OODA) loop. In addition, as in AI frameworks such as that for reinforcement learning, interacting with a virtual environment enables an evolving model. Therefore, soccer analytics in both the real world and virtual domains is discussed. At the intersection of the OODA loop and the real-virtual domains, available soccer data, including event and tracking data, and diverse orientation and decision-making models for both real-world and virtual soccer matches are comprehensively reviewed. Finally, some promising directions in this interdisciplinary area are pointed out. It is argued that paradigms for professional sports analytics and AI research could be combined, and that it is quite promising to bridge the gap between the real and virtual domains for soccer match analysis and decision-making.
Funding: Supported by the National Natural Science Foundation of China (62303457, U21A20482), a project funded by the China Postdoctoral Science Foundation (2023M733737), and the National Key R&D Program of China (2022YFB3303800).
Abstract: This paper provides a comprehensive review of the current status, advancements, and future prospects of humanoid robots, highlighting their significance in driving the evolution of next-generation industries. By analyzing various research endeavors and key technologies, encompassing ontology structure, control and decision-making, and perception and interaction, a holistic overview of the current state of humanoid robot research is presented. Furthermore, emerging challenges in the field are identified, emphasizing the necessity for a deeper understanding of biological motion mechanisms, improved structural design, enhanced material applications, advanced drive and control methods, and efficient energy utilization. The integration of bionics, brain-inspired intelligence, mechanics, and control is underscored as a promising direction for the development of advanced humanoid robotic systems. This paper serves as a valuable resource, offering insightful guidance to researchers in the field while contributing to the ongoing evolution and potential of humanoid robots across diverse domains.
Funding: Ministry of Education, Culture, Sports, Science and Technology, Grant/Award Number: 20K11867.
Abstract: By automatically learning the priors embedded in images with powerful modelling capabilities, deep learning-based algorithms have recently made considerable progress in reconstructing high-resolution hyperspectral (HR-HS) images. With previously collected large amounts of external data, these methods are intuitively realised under full supervision of the ground-truth data. Thus, database construction in the research paradigm of merging low-resolution hyperspectral (LR-HS) and HR multispectral (MS) or RGB images, commonly named HSI SR, requires collecting corresponding training triplets, i.e., HR-MS (RGB), LR-HS, and HR-HS images captured simultaneously, and often faces difficulties in reality. Models learned from training datasets collected under controlled conditions may significantly degrade super-resolution performance on real images captured in diverse environments. To handle the above-mentioned limitations, the authors propose to leverage deep internal and self-supervised learning to solve the HSI SR problem. The authors advocate that it is possible to train a specific CNN model at test time, called deep internal learning (DIL), by online preparing the training triplet samples from the observed LR-HS/HR-MS (or RGB) images and the down-sampled LR-HS version. However, the number of training triplets extracted solely from the transformed data of the observation itself is extremely small, particularly for HSI SR tasks with large spatial upscale factors, which would result in limited reconstruction performance. To solve this problem, the authors further exploit deep self-supervised learning (DSL) by considering the observations as unlabelled training samples. Specifically, the degradation modules inside the network were elaborated to realise the spatial and spectral down-sampling procedures for transforming the generated HR-HS estimation into the high-resolution RGB/LR-HS approximation, and the reconstruction errors of the observations were then formulated for measuring the network modelling performance. By consolidating the DIL and DSL into a unified deep framework, the authors construct a more robust HSI SR method that requires no prior training and has great potential for flexible adaptation to different settings per observation. To verify the effectiveness of the proposed approach, extensive experiments were conducted on two benchmark HS datasets, the CAVE and Harvard datasets, and demonstrate a great performance gain of the proposed method over state-of-the-art methods.
Funding: Supported by the National Defense Basic Scientific Research Project (JCKY2020130C025) and the National Science and Technology Major Project (J2019-III-0020-0064, J2019-V-0014-0109).
Abstract: In this paper, fixed-time consensus tracking for multiagent systems (MASs) with dynamics in the form of strict-feedback affine nonlinearity is addressed. A fixed-time antidisturbance consensus tracking protocol is proposed, which consists of a distributed fixed-time observer, a fixed-time disturbance observer, and a nonsmooth antidisturbance backstepping controller; the corresponding fixed-time stability analysis is conducted using Lyapunov theory. This paper includes three main improvements. First, a distributed fixed-time observer is developed for each follower to obtain an estimate of the leader's output by utilizing the topology of the communication network. Second, a fixed-time disturbance observer is given to estimate the lumped disturbances for feedforward compensation. Finally, a nonsmooth antidisturbance backstepping tracking controller with feedforward compensation for lumped disturbances is designed. In order to mitigate the "explosion of complexity" in the traditional backstepping approach, a modified nonsmooth command filter is implemented to enhance the performance of the closed-loop system. The simulation results show that the proposed method is effective.
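The fixed-time protocol itself is too involved for a short sketch, but the underlying leader-following consensus mechanism, relative-state feedback from neighbours plus a pinning term for followers that sense the leader, can be illustrated with a simple discrete-time simulation (asymptotic rather than fixed-time; the graph, gain, and leader value are invented):

```python
def leader_following_consensus(x, leader, adj, pin, gain=0.2, steps=400):
    # Synchronous discrete-time update: each follower steers toward its
    # neighbours' states (relative feedback) plus a pinning term toward
    # the static leader for the followers that can sense it.
    n = len(x)
    for _ in range(steps):
        x = [x[i] + gain * (sum(adj[i][j] * (x[j] - x[i]) for j in range(n))
                            + pin[i] * (leader - x[i]))
             for i in range(n)]
    return x

# Three followers on a line graph; only follower 0 senses the leader.
adj = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
states = leader_following_consensus([0.0, 5.0, -3.0], leader=2.0,
                                    adj=adj, pin=[1, 0, 0])
```

All followers converge to the leader's value even though only one is pinned, because information diffuses over the connected graph; the fixed-time design in the paper adds nonsmooth terms so this convergence time is bounded independently of the initial states.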
Funding: Supported by the National Natural Science Foundation of China (62303012, 62236002, 61911004, 62303008).
Abstract: This paper investigates the tracking control problem for unmanned underwater vehicle (UUV) systems with sensor faults, input saturation, and external disturbances caused by waves and ocean currents. An active sensor fault-tolerant control scheme is proposed. First, the developed method only requires the inertia matrix of the UUV, without other dynamic information, and can handle both additive and multiplicative sensor faults. Subsequently, an adaptive fault-tolerant controller is designed to achieve asymptotic tracking control of the UUV by employing the robust integral of the sign of the error feedback method. It is shown that the effect of sensor faults is estimated and compensated online by an adaptive estimator. With the proposed controller, the tracking error and estimation error asymptotically converge to zero. Finally, simulation results demonstrate the effectiveness of the proposed method.
Funding: Supported in part by the Strategic Priority Research Program of the Chinese Academy of Sciences (XDA27030100) and the National Natural Science Foundation of China (72293575, 11832001).
Abstract: The pursuit-evasion game models strategic interaction among players and attracts attention in many realistic scenarios, such as missile guidance, unmanned aerial vehicles, and target defense. Existing studies mainly concentrate on the cooperative pursuit of multiple players in two-dimensional pursuit-evasion games. However, these approaches can hardly be applied to practical situations where players usually move in three-dimensional space under three-degree-of-freedom control. In this paper, we make the first attempt to investigate the equilibrium strategy of the realistic pursuit-evasion game, in which the pursuer follows three-degree-of-freedom control and the evader moves freely. First, we describe the pursuer's three-degree-of-freedom control and the evader's relative coordinates. We then rigorously derive the equilibrium strategy by solving the retrogressive path equation according to the Hamilton-Jacobi-Bellman-Isaacs (HJBI) method, which divides the pursuit-evasion process into navigation and acceleration phases. Besides, we analyze the maximum allowable speed for the pursuer to capture the evader successfully and provide the strategy with which the evader can escape when the pursuer's speed exceeds this threshold. We further conduct comparison tests with various unilateral deviations to verify that the proposed strategy forms a Nash equilibrium.
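The HJBI equilibrium strategy is far richer than any toy, but the basic premise that capture hinges on the pursuer's speed exceeding the evader's can be illustrated with a simple planar pure-pursuit simulation; the kinematics and all parameters below are simplifications, not the paper's three-degree-of-freedom model:

```python
import math

def pure_pursuit(p, e, vp, ve, e_heading, dt=0.01, capture_r=0.1, t_max=60.0):
    # Sketch: the pursuer always heads straight at the evader, while the
    # evader flees on a fixed heading. Returns the capture time, or None
    # if no capture occurs within t_max (e.g. when vp <= ve).
    t = 0.0
    while t < t_max:
        dx, dy = e[0] - p[0], e[1] - p[1]
        d = math.hypot(dx, dy)
        if d <= capture_r:
            return t
        p = (p[0] + vp * dt * dx / d, p[1] + vp * dt * dy / d)
        e = (e[0] + ve * dt * math.cos(e_heading),
             e[1] + ve * dt * math.sin(e_heading))
        t += dt
    return None

# Faster pursuer closes a 5-unit gap at relative speed 1: capture near 4.9 s.
t_capture = pure_pursuit((0.0, 0.0), (5.0, 0.0), vp=2.0, ve=1.0, e_heading=0.0)
```

The paper's contribution is precisely what this toy omits: the equilibrium (worst-case optimal) behaviour of both players and the exact speed threshold under realistic control constraints.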
Abstract: Given the challenge of estimating or calculating quantities of waste electrical and electronic equipment (WEEE) in developing countries, this article focuses on predicting the WEEE generated by Cameroonian small and medium enterprises (SMEs) that are engaged in ISO 14001:2015 initiatives and consume electrical and electronic equipment (EEE) to enhance their performance and profitability. The methodology employed an exploratory approach involving the application of general equilibrium theory (GET) to contextualize the study and generate relevant parameters for deploying the random forest regression learning algorithm for predictions. Machine learning was applied to 80% of the samples for training, while simulation was conducted on the remaining 20% of samples, based on quantities of EEE utilized over a specific period, utilization rates, repair rates, and average lifespans. The results demonstrate that the model's predicted values are significantly close to the actual quantities of generated WEEE; the model's performance was evaluated using the mean squared error (MSE), yielding satisfactory results. Based on this model, both companies and stakeholders can set realistic objectives for managing companies' WEEE, fostering sustainable socio-environmental practices.
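The paper's random-forest model is trained on survey data not reproduced in the abstract, but the role of its input parameters (quantities of EEE used, utilization rate, repair rate, average lifespan) can be shown with a crude mass-balance estimate; all figures below are invented:

```python
def estimate_weee(eee_in_use, utilization_rate, repair_rate, lifespan_years):
    # Crude mass-balance sketch (not the paper's random-forest model):
    # of the equipment actually utilized, the share not kept in service
    # by repairs is assumed to be discarded evenly over its lifespan.
    active = eee_in_use * utilization_rate
    discarded_share = 1.0 - repair_rate
    return active * discarded_share / lifespan_years

# Invented figures: 1000 units, 90% utilization, 20% repair rate, 5-year life.
annual_weee_units = estimate_weee(1000, 0.9, 0.2, 5.0)  # 144 units per year
```

A learned regressor improves on such a closed-form estimate by capturing nonlinear interactions among these same parameters across many enterprises.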
Funding: Supported by the China Manned Space Program through its Space Application System, the National Natural Science Foundation of China (NSFC, Grant Nos. 11973022 and U1811464), and the Natural Science Foundation of Guangdong Province (No. 2020A1515010710).
Abstract: Galaxy morphology classification based on machine learning is a typical technique for handling enormous amounts of astronomical observation data, but the key challenge is how to provide enough training data for the machine learning models. This article therefore proposes an image data augmentation method that combines few-shot learning and generative adversarial networks. The Galaxy10 DECaLS dataset is selected for the experiments, with consistency, variance, and augmentation effects being evaluated. Three popular networks, including AlexNet, VGG, and ResNet, are used as examples to study the effectiveness of different augmentation methods on galaxy morphology classification. Experimental results show that the proposed method can generate galaxy images and can be used to expand the classification model's training set. According to comparative studies, the best enhancement of model performance is obtained by generating a dataset 0.5-1 times larger than the original dataset. Meanwhile, different augmentation strategies have considerably varied effects on different types of galaxies. FSL-GAN achieved the best classification performance on the ResNet network for In-between Round Smooth Galaxies and Unbarred Loose Spiral Galaxies, with F1 scores of 89.54% and 63.18%, respectively. Experimental comparison reveals that various data augmentation techniques have varied effects on different categories of galaxy morphology and machine learning models. Finally, the best augmentation strategies for each galaxy category are suggested.
Funding: Supported in part by the National Natural Science Foundation of China (NSFC) under Grant No. 61976242, in part by the Natural Science Fund of Hebei Province for Distinguished Young Scholars under Grant No. F2021202010, in part by the Fundamental Scientific Research Funds for the Interdisciplinary Team of Hebei University of Technology under Grant No. JBKYTD2002, by the Science and Technology Project of the Hebei Education Department under Grant No. JZX2023007, and by the 2022 Interdisciplinary Postgraduate Training Program of Hebei University of Technology under Grant No. HEBUT-YXKJC-2022122.
Abstract: Most neural network architectures are based on human experience, which requires a long and tedious trial-and-error process. Neural architecture search (NAS) attempts to detect effective architectures without human intervention. Evolutionary algorithms (EAs) for NAS can find better solutions than human-designed architectures by exploring a large search space of possible architectures. Using multiobjective EAs for NAS, optimal neural architectures that meet various performance criteria can be explored and discovered efficiently. Furthermore, hardware-accelerated NAS methods can improve the efficiency of NAS. While existing reviews have mainly focused on different strategies for completing NAS, few studies have explored the use of EAs for NAS. In this paper, we summarize and explore the use of EAs for NAS, as well as large-scale multiobjective optimization strategies and hardware-accelerated NAS methods. NAS performs well in healthcare applications, such as medical image analysis, disease diagnosis classification, and health monitoring. EAs for NAS can automate the search process and optimize multiple objectives simultaneously in a given healthcare task. Deep neural networks have been used successfully in healthcare, but they lack interpretability. Medical data are highly sensitive, and privacy leaks are frequently reported in the healthcare industry. To solve these problems in healthcare, we propose an interpretable neuroevolution framework based on federated learning to address search efficiency and privacy protection. Moreover, we also point out future research directions for evolutionary NAS. Overall, for researchers who want to use EAs to optimize NNs in healthcare, we analyze the advantages and disadvantages of doing so to provide detailed guidance, and propose an interpretable privacy-preserving framework for healthcare applications.
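The evolutionary-search loop at the heart of EA-based NAS (encode an architecture, mutate, select by fitness) can be illustrated with a toy elitist sketch; the binary genome and the surrogate fitness below stand in for a real search space and validation accuracy:

```python
import random

def evolve(fitness, genome_len=6, pop_size=12, generations=40, seed=1):
    # Minimal elitist evolutionary search over binary "architecture genes"
    # (e.g. layer on/off flags); fitness is a user-supplied surrogate
    # standing in for validation accuracy minus a resource penalty.
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)        # rank by surrogate fitness
        parents = pop[:pop_size // 2]              # keep the fittest half
        children = []
        for p in parents:
            child = p[:]
            child[rng.randrange(genome_len)] ^= 1  # flip one gene
            children.append(child)
        pop = parents + children                   # elitist replacement
    return max(pop, key=fitness)

# Toy surrogate: reward enabled genes, penalize the "expensive" gene 0,
# mimicking a multiobjective accuracy-versus-cost trade-off as a scalar.
best = evolve(lambda g: sum(g) - 2 * g[0])
```

Real EA-based NAS replaces the surrogate with actual training-and-validation runs (the dominant cost, which motivates the hardware acceleration discussed above) and the scalar fitness with Pareto-based multiobjective selection.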