This empirical study examines ChatGPT as an educational and learning tool. It investigates the opportunities and challenges that ChatGPT presents to students and instructors of communication, business writing, and composition courses, and it offers recommendations. After 30 theory-based and application-based ChatGPT tests, the study finds that ChatGPT has the potential to replace search engines, as it provides accurate and reliable input to students. Regarding opportunities, the study found that ChatGPT gives students a platform to seek answers to theory-based questions and to generate ideas for application-based questions. It also gives instructors a platform to integrate technology into classrooms and to conduct workshops that discuss and evaluate generated responses. Regarding challenges, the study found that ChatGPT, if used unethically by students, may lead to unlearning and an erosion of students' own thinking. It may also hamper instructors' ability to differentiate between meticulous and automation-dependent students, on the one hand, and to measure the achievement of learning outcomes, on the other. Based on the analysis, the study recommends that communication, business writing, and composition instructors (1) refrain from setting theory-based questions as take-home assessments; (2) give communication and business writing students detailed case-based and scenario-based assessment tasks that call for personalized answers requiring critical, creative, and imaginative thinking grounded in lectures and textbook material; (3) require that all take-home assessments be submitted through plagiarism detection software, especially in composition courses; and (4) integrate ChatGPT-generated responses into classes as examples to be discussed in workshops. Remarkably, the study found that ChatGPT skillfully paraphrases regenerated responses in a way that similarity detection software does not detect. To remain effective, similarity detection software providers need to upgrade their products so that such cases do not slip through unnoticed.
Because of their strong problem-solving ability, evolutionary multitask optimization (EMTO) algorithms have been widely studied in recent years. Evolutionary algorithms search quickly for optimal solutions, but they easily fall into local optima and are difficult to generalize. Combining evolutionary multitask algorithms with evolutionary optimization algorithms can be an effective way to address these problems. Through the implicit parallelism of the tasks themselves and the knowledge transfer between tasks, more promising individuals can be generated during evolution, allowing the search to escape local optima. How best to combine the two has attracted increasing study. This paper surveys existing evolutionary multitasking theory and improvement schemes in detail, then summarizes applications of EMTO in different scenarios. Finally, based on existing research, future research trends and potential exploration directions are outlined.
The inquiry process of traditional medical equipment maintenance management is complex, which reduces the efficiency and accuracy of maintenance management and wastes significant time and resources. To predict medical equipment failure properly, a method for failure life-cycle prediction of medical equipment was developed. The system is divided into four modules. The whole-life-cycle management module constructs the life-cycle data set of medical devices from three management phases: early, middle, and late. The status detection module monitors the main operating data of medical device components against the normal values of the relevant sensitive data held in the whole-life-cycle management module. The fault diagnosis module uses an inference engine, based on those same normal values, to diagnose the equipment's operating data. The fault prediction module builds a fine-grained prediction system based on the least-squares support vector machine (LSSVM) algorithm and uses the AFS-ABC algorithm to optimize the model, obtaining an optimal model with tuned regularization and width parameters; the optimal model is then used to predict medical equipment failures. Comparative experiments were designed to determine whether the designed system is effective. The results demonstrate that the proposed system accurately predicts the breakdown of ECG diagnostic equipment and incubators, with a high level of support and dependability. The designed system has the smallest prediction error and the fastest program execution time compared with the comparison systems. Hence, the designed system can accurately predict the numerous causes and types of medical device failure.
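The least-squares SVM at the heart of the fault prediction module reduces training to solving a single linear system rather than a quadratic program. The sketch below is a minimal NumPy illustration of LSSVM regression with an RBF kernel; the AFS-ABC tuning of the regularization parameter `gamma` and kernel width `sigma` described in the abstract is out of scope here, so both are fixed by hand as assumed values, and the toy data merely stands in for equipment sensor readings.

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    # K[i, j] = exp(-||a_i - b_j||^2 / (2 * sigma^2))
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=100.0, sigma=1.0):
    # LSSVM training: solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
    n = len(X)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]          # bias b, dual weights alpha

def lssvm_predict(X_train, alpha, b, X_new, sigma=1.0):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

# toy smooth "degradation" signal standing in for sensor data
X = np.linspace(0, 3, 40).reshape(-1, 1)
y = np.sin(X).ravel()
b, alpha = lssvm_fit(X, y)
pred = lssvm_predict(X, alpha, b, X)
err = np.abs(pred - y).max()
```

In a real pipeline, `gamma` and `sigma` would be the decision variables that the bee-colony-style optimizer searches over, scoring each candidate pair by cross-validated prediction error.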
In the modern world, women have achieved tremendous success in every field. They can play, learn, and earn as much as men. But what about safety? Do they have the same secure environment that men and boys do? The answer is no. Women and girls have been subjected to numerous incidents, including acid attacks, rape, kidnapping, and harassment, and it is common to read such news in the papers every day. These incidents make women feel unsafe in society. Freedom came a long time ago, but women still lack complete security. Not all women can fight or call for help when danger strikes, and what can physically challenged people and children do? To help women feel safe, we designed an IoT-based "Wrist Band" for women's safety. As the sensors read information from the body, the device continuously reports readings such as pulse, temperature, and vibration to well-wishers through the Blynk app.
The rise of the Internet of Things and autonomous systems has made connecting vehicles more critical. Connected autonomous vehicles can create diverse communication networks that can improve the environment and offer contemporary applications. With the advent of Fifth-Generation (5G) networks, vehicle-to-everything (V2X) networks are expected to be highly intelligent and to reside on superfast, reliable, low-latency connections. Network slicing, machine learning (ML), and deep learning (DL) are related to network automation and optimization in V2X communication. ML/DL with network slicing aims to optimize the performance, reliability, personalized services, costs, and scalability of V2X networks, thereby enhancing the overall driving experience. These advantages can ultimately lead to a safer and more efficient transportation system. However, existing long-term evolution systems and enabling 5G technologies cannot meet such dynamic requirements without added complexity; ML algorithms mitigate these complexity levels and can be highly instrumental in such vehicular communication systems. This study reviews V2X slicing based on a proposed taxonomy that describes the enablers of slicing, different slicing configurations, slicing requirements, and the ML algorithms used to control and manage slices. It also reviews research on network slicing through ML algorithms for V2X communication use cases, focusing on efficient control and management of V2X network slices. The enabling technologies are considered in light of the network requirements and particular configurations, together with the underlying methods and algorithms, and some critical challenges and possible solutions are reviewed. The paper concludes with a future roadmap discussing open research issues and future directions.
Ear recognition is a new kind of biometric identification technology. Feature extraction is a key step in pattern recognition, and it determines the accuracy of classification results. Single-feature extraction can achieve a high recognition rate under certain conditions, but double-feature extraction can overcome its limitations. To improve classification accuracy, this paper proposes a new method, complementary double-feature extraction based on Principal Component Analysis (PCA) and Fisherface, and applies it to human ear image recognition. Experiments were carried out on the ear image library provided by the University of Science and Technology Beijing. The results show that the ear recognition rate of the proposed method is significantly higher than that of single-feature extraction using PCA, Fisherface, or Independent Component Analysis (ICA) alone.
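The abstract does not spell out the fusion rule, but the classic Fisherface pipeline that such a complementary double-feature scheme builds on is: project onto the leading principal components, then apply a Fisher discriminant in the reduced space. A pure-NumPy sketch for the two-class case, on assumed toy feature vectors standing in for ear images:

```python
import numpy as np

rng = np.random.default_rng(0)
# toy "ear features": two subjects, 30 samples each, 8-dimensional
X1 = rng.normal(0.0, 0.3, (30, 8))
X2 = rng.normal(1.0, 0.3, (30, 8))
X = np.vstack([X1, X2])

# --- PCA step: project onto top-k principal components ---
mu = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
k = 4
P = (X - mu) @ Vt[:k].T            # PCA scores, shape (60, 4)

# --- Fisher step: two-class LDA in the PCA subspace ---
P1, P2 = P[:30], P[30:]
m1, m2 = P1.mean(axis=0), P2.mean(axis=0)
Sw = np.cov(P1, rowvar=False) + np.cov(P2, rowvar=False)  # within-class scatter
w = np.linalg.solve(Sw, m1 - m2)   # Fisher discriminant direction
thresh = w @ (m1 + m2) / 2

pred = np.where(P @ w > thresh, 0, 1)   # 0 -> subject 1, 1 -> subject 2
labels = np.array([0] * 30 + [1] * 30)
accuracy = (pred == labels).mean()
```

The PCA step controls dimensionality (and keeps `Sw` well-conditioned); the Fisher step then maximizes between-class over within-class scatter, which is why the combination tends to separate subjects better than either feature alone.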
Document images often contain various page components and complex logical structures, which make document layout analysis challenging. Most deep-learning-based document layout analysis methods adopt convolutional neural networks (CNNs) as the feature extraction network. In this paper, a hybrid spatial-channel attention network (HSCA-Net) is proposed to improve feature extraction capability by introducing an attention mechanism to explore more salient properties within document pages. The HSCA-Net consists of a spatial attention module (SAM), a channel attention module (CAM), and a designed lateral attention connection. The CAM adaptively adjusts channel feature responses by emphasizing selective information, depending on the contribution of each channel's features. The SAM guides the CNN to focus on informative content and to capture global context information among page objects. The lateral attention connection incorporates the SAM and CAM into a multiscale feature pyramid network while retaining the original feature information. The effectiveness and adaptability of HSCA-Net are evaluated through multiple experiments on publicly available datasets such as PubLayNet, ICDAR-POD, and Article Regions. Experimental results demonstrate that HSCA-Net achieves state-of-the-art performance on the document layout analysis task.
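The channel-attention idea (reweighting feature-map channels by their estimated contribution) can be illustrated with a squeeze-and-excitation-style computation. This is a generic pattern, not HSCA-Net's exact CAM: the abstract does not give layer sizes or gating details, so the two small bottleneck weight matrices below are hypothetical stand-ins.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, W1, W2):
    """Reweight the channels of a (C, H, W) feature map.

    Squeeze: global average pooling per channel -> (C,)
    Excite : tiny bottleneck MLP + sigmoid      -> weights in (0, 1)
    Scale  : multiply each channel by its weight.
    """
    squeezed = feat.mean(axis=(1, 2))                       # (C,)
    weights = sigmoid(W2 @ np.maximum(W1 @ squeezed, 0.0))  # (C,)
    return feat * weights[:, None, None], weights

rng = np.random.default_rng(1)
C, H, W = 8, 5, 5
feat = rng.normal(size=(C, H, W))
W1 = rng.normal(size=(C // 2, C)) * 0.5   # hypothetical bottleneck weights
W2 = rng.normal(size=(C, C // 2)) * 0.5
out, weights = channel_attention(feat, W1, W2)
```

A spatial attention module is the transposed idea: pool across channels to get an (H, W) map of per-location weights, so the two modules emphasize "which channels" and "which locations" respectively.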
To address information asymmetry between the two parties and moral hazard among service providers in service outsourcing, this paper builds a Stackelberg game model based on the principal-agent framework, examines the dynamic game before the contract is signed, and develops four information models. The analysis reveals a Pareto improvement in the game's Nash equilibrium when the four models are compared from the standpoint of the supply chain. Under complete information, the service provider's service level, the customer company's incentive effectiveness, and the supply chain system's overall profit are all maximized. Furthermore, a coordination mechanism for disposable profit is constructed. In response to the inadequacy of principal-agent theory in addressing the information asymmetry and moral hazard problems, the paper then proposes a blockchain-based architecture for supervising the service outsourcing process, together with a distributed incentive mechanism under the coordination mechanism. Experimental results demonstrate that both parties can benefit from the coordination mechanism and that blockchain technology can resolve these issues and effectively incentivize service providers.
With the continuous development of medical informatics and digital diagnosis, deep-learning-based classification of tuberculosis (TB) cases from computed tomography (CT) images of the lung is an important aid in clinical diagnosis and treatment. Owing to its potential application in medical image classification, this task has received extensive research attention. Existing neural network techniques still struggle to extract global contextual information from images and to keep network complexity manageable. To address these issues, this paper proposes a lightweight medical image classification network that combines a Transformer and a convolutional neural network (CNN) for classifying TB cases from lung CT. The method fuses a CNN module and a Transformer module, exploiting the advantages of both to accomplish a more accurate classification task. On the one hand, the CNN branch supplements the Transformer branch with basic local feature information at the low levels; on the other hand, at the middle and high levels of the model, the CNN branch provides the Transformer architecture with varied local and global feature information, enhancing the model's ability to obtain feature information and improving classification accuracy. A shortcut is used in each module of the network to counter gradient divergence and to optimize the effectiveness of TB classification. The proposed lightweight model addresses the long training times typical of TB classification from lung CT and improves classification speed. The method was validated on a CT image data set provided by the First Hospital of Lanzhou University. The experimental results show that the proposed lightweight classification network fully extracts the feature information of the input images and achieves high-accuracy classification.
Background: To better solve cluster analysis problems, we propose a new method based on the chaotic particle swarm optimization (CPSO) algorithm. Methods: To enhance clustering performance, we propose a novel CPSO-based method. We evaluate clustering performance using the variance ratio criterion (VRC) as the metric and compare the effectiveness of CPSO with that of the traditional particle swarm optimization (PSO) algorithm. CPSO aims to improve the VRC value while avoiding local optima. The simulated dataset is set at three levels of overlap: non-overlapping, partially overlapping, and severely overlapping. Finally, we compare CPSO with two other methods. Results: CPSO performs outstandingly in the comparisons. Under the non-overlapping, partially overlapping, and severely overlapping conditions, our method achieves best VRC values of 1683.2, 620.5, and 275.6, respectively; the mean VRC values in the three cases are 1683.2, 617.8, and 222.6. Conclusion: CPSO performed better than the other methods for cluster analysis problems and is effective for cluster analysis.
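The variance ratio criterion used as the fitness measure here is the Calinski-Harabasz index: between-cluster dispersion divided by within-cluster dispersion, scaled by (n - k)/(k - 1), so higher values mean better-separated clusters. A pure-NumPy sketch on assumed toy data (the optimizer itself is omitted; this only shows the objective a swarm would maximize):

```python
import numpy as np

def variance_ratio_criterion(X, labels):
    """Calinski-Harabasz index: VRC = (SSB / (k - 1)) / (SSW / (n - k))."""
    n = len(X)
    clusters = np.unique(labels)
    k = len(clusters)
    overall = X.mean(axis=0)
    ssb = ssw = 0.0
    for c in clusters:
        pts = X[labels == c]
        centroid = pts.mean(axis=0)
        ssb += len(pts) * ((centroid - overall) ** 2).sum()  # between-cluster
        ssw += ((pts - centroid) ** 2).sum()                 # within-cluster
    return (ssb / (k - 1)) / (ssw / (n - k))

rng = np.random.default_rng(2)
# two well-separated toy clusters, labelled correctly vs. at random
X = np.vstack([rng.normal(0, 0.2, (50, 2)), rng.normal(3, 0.2, (50, 2))])
good = np.array([0] * 50 + [1] * 50)
bad = rng.integers(0, 2, 100)
vrc_good = variance_ratio_criterion(X, good)
vrc_bad = variance_ratio_criterion(X, bad)
```

In the paper's setting, each particle encodes a candidate set of cluster centroids (or labelling), and the swarm, with chaotic perturbations, searches for the assignment that maximizes this ratio.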
Healthcare and medical records contain a great deal of information. However, in today's digitally based culture it is challenging for humans to turn data into information and to spot hidden patterns. Effective decision-support technologies can help medical professionals find critical information concealed in voluminous data and can support their clinical judgments and various healthcare management activities. This paper presents an extensive literature survey of healthcare systems using machine learning based on multi-criteria decision-making. Various existing studies are reviewed, and a critical analysis is provided that can help researchers explore further research areas to meet the needs of the field.
Aiming at intelligent decision-making of unmanned aerial vehicles (UAVs) based on situational information in air combat, a novel maneuvering decision method based on deep reinforcement learning is proposed in this paper. The autonomous maneuvering model of the UAV is established as a Markov Decision Process. The Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm and the Deep Deterministic Policy Gradient (DDPG) algorithm are used to train the model, and the experimental results of the two algorithms are analyzed and compared. The simulation results show that, compared with DDPG, the TD3 algorithm has stronger decision-making performance and faster convergence and is more suitable for solving combat problems. The proposed algorithm enables UAVs to autonomously make maneuvering decisions based on situational information such as position, speed, and relative azimuth, adjust their actions to approach, and successfully strike the enemy, providing a new method for intelligent maneuvering decisions in air combat.
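TD3's core differences from DDPG are the clipped double-Q target (take the smaller of two target critics to curb value overestimation) and target-policy smoothing (add clipped noise to the target action). A minimal NumPy sketch of that target computation; the linear critics and tanh policy below are stand-in functions with made-up weights, not the paper's trained networks, and the 4-dimensional state loosely stands for situational inputs like position, speed, and azimuth:

```python
import numpy as np

rng = np.random.default_rng(3)

# stand-in target networks: two linear critics and a tanh policy
w1, w2 = rng.normal(size=5), rng.normal(size=5)
def q1_target(s, a): return np.concatenate([s, a]) @ w1
def q2_target(s, a): return np.concatenate([s, a]) @ w2
def pi_target(s):    return np.tanh(s.sum(keepdims=True))

def td3_target(s_next, reward, gamma=0.99, noise_std=0.2, noise_clip=0.5):
    # target policy smoothing: clipped noise on the target action
    noise = np.clip(rng.normal(0.0, noise_std, 1), -noise_clip, noise_clip)
    a_next = np.clip(pi_target(s_next) + noise, -1.0, 1.0)
    # clipped double-Q: use the minimum of the two target critics
    q_min = min(q1_target(s_next, a_next), q2_target(s_next, a_next))
    return reward + gamma * q_min

s_next = rng.normal(size=4)
y = td3_target(s_next, reward=1.0)
```

TD3 additionally delays policy and target-network updates relative to critic updates; together these tricks are what give it the faster, more stable convergence the comparison reports.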
As a chemical component, nicotine has an important influence on the quality of tobacco leaves, and rapid, nondestructive quantitative analysis of nicotine is an important task in the tobacco industry. Near-infrared (NIR) spectroscopy is an effective and widely used chemical composition analysis technique. In this paper, we propose a one-dimensional fully convolutional network (1D-FCN) model to quantitatively analyze the nicotine content of tobacco leaves from NIR spectroscopy data in a cloud environment. The 1D-FCN model uses one-dimensional convolution layers to extract complex features directly from sequential spectroscopy data. It consists of five convolutional layers and two fully connected layers, with the max-pooling layer replaced by a convolutional layer to avoid information loss. Cloud computing techniques are used to meet the increasing demand for large-scale data analysis and to implement data sharing and access. Experimental results show that the proposed 1D-FCN model can effectively extract the complex characteristics inside the spectrum and predict the nicotine content of tobacco leaves more accurately than other approaches. This research provides a deep learning foundation for quantitative analysis of NIR spectral data in the tobacco industry.
With the improvement of living standards, the demand for health monitoring and exercise detection is increasing, so it is of great significance to study human activity recognition (HAR) methods that differ from traditional feature extraction methods. This article uses convolutional neural network (CNN) algorithms in deep learning to automatically extract features of activities related to daily life, optimizing the CNN's parameters with a stochastic gradient descent algorithm. The trained network model is compressed with STM32CubeMX-AI. Finally, the article demonstrates the use of neural networks on embedded devices to recognize six daily human activities: sitting, standing, walking, jogging, going upstairs, and going downstairs. An acceleration sensor captures activity-related information, from which the relevant characteristics of each activity are obtained, thereby solving the HAR problem. By plotting the accuracy curve, loss curve, and confusion matrix of the training model, the recognition performance of the CNN can be seen more intuitively. The best model is then selected by comparing the average accuracy of each set of experiments and of the corresponding test sets.
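Stochastic gradient descent, the optimizer mentioned above, updates parameters against the gradient of the loss on one mini-batch at a time. A minimal sketch on a linear model with squared loss; the toy data merely stands in for accelerometer features, and the real CNN's parameters and gradients are of course far larger:

```python
import numpy as np

rng = np.random.default_rng(4)
# toy accelerometer-like features with a known linear target
X = rng.normal(size=(200, 3))
true_w = np.array([0.5, -1.0, 2.0])
y = X @ true_w

w = np.zeros(3)
lr, batch = 0.1, 16
for step in range(300):
    idx = rng.integers(0, len(X), batch)       # sample one mini-batch
    xb, yb = X[idx], y[idx]
    grad = 2 * xb.T @ (xb @ w - yb) / batch    # gradient of mean squared error
    w -= lr * grad                             # SGD update
final_err = np.abs(w - true_w).max()
```

The same update rule, applied layer by layer via backpropagation, is what trains the CNN before it is compressed for the STM32 target.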
Artificial intelligence (AI) and robotics have gone through three generations of development, from the Turing test and the Logic Theory Machine to expert systems and self-driving cars. In today's third generation, AI and robotics are used collaboratively in many areas of society, including industry, business, manufacturing, research, and education. There are many challenging problems in developing AI and robotics applications. We launch this new Journal of Artificial Intelligence and Technology to facilitate the exchange of the latest research and practice in AI and related technologies. In this inaugural issue, we first introduce a few key technologies and platforms supporting third-generation AI and robotics application development, built on stacks of technologies and platforms, and present examples of such development environments created by both industry and academia. We have also selected eight papers in related areas to celebrate the foundation of this journal.
Haze degrades the visual quality of images and makes advanced vision tasks difficult to carry out, so dehazing is an important preprocessing step before such tasks are executed. Traditional dehazing algorithms work by improving image brightness and contrast or by constructing artificial priors such as the color attenuation prior and the dark channel prior, but their effect is unstable in complex scenes. Among convolutional-neural-network methods, encoder-decoder dehazing networks do not consider the difference between the image before and after dehazing, and spatial information is lost in the encoding stage. To overcome these problems, this paper proposes a novel end-to-end two-stream convolutional neural network for single-image dehazing. The network model is composed of a spatial-information feature stream and a high-level semantic feature stream: the spatial-information stream retains the detailed information of the dehazed image, while the semantic stream extracts its multi-scale structural features. A spatial-information auxiliary module placed between the feature streams uses an attention mechanism to construct a unified representation of the different types of information, allowing semantic information to assist spatial information in gradually restoring a clear image. A parallel residual twicing module is also proposed, which performs dehazing on the difference information of features at different stages to improve the model's ability to discriminate haze images. Peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) are used to quantitatively evaluate the similarity between each algorithm's dehazing result and the original image. The proposed method reaches an SSIM of 0.852 and a PSNR of 17.557 dB on the HazeRD dataset, higher than the existing comparison algorithms; on the SOTS dataset, the indicators are 0.955 and 27.348 dB, which are sub-optimal results. In experiments with real haze images, the method also achieves excellent visual restoration. The experimental results show that the proposed model restores the desired haze-free visual effect and generalizes well to real haze scenes.
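PSNR, one of the two quantitative measures reported above, follows directly from the mean squared error between the restored image and the haze-free reference. A short sketch, assuming images scaled to [0, 1] so that MAX = 1, with synthetic noise standing in for residual haze:

```python
import numpy as np

def psnr(reference, restored, max_val=1.0):
    # PSNR = 10 * log10(MAX^2 / MSE); higher means closer to the reference
    mse = ((reference - restored) ** 2).mean()
    return 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(5)
clean = rng.uniform(0.0, 1.0, (64, 64))
light_residue = np.clip(clean + rng.normal(0, 0.01, clean.shape), 0, 1)
heavy_residue = np.clip(clean + rng.normal(0, 0.10, clean.shape), 0, 1)
psnr_light = psnr(clean, light_residue)
psnr_heavy = psnr(clean, heavy_residue)
```

SSIM, the companion metric, instead compares local luminance, contrast, and structure statistics in sliding windows, which is why the two numbers are always reported together.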
Deep convolutional neural networks (CNNs) with strong learning abilities have been widely used for image denoising. However, some CNNs depend on a single deep network to train an image denoising model, which performs poorly on complex scenes. To address this problem, we propose a hybrid denoising CNN (HDCNN) composed of a dilated block (DB), a RepVGG block (RVB), a feature refinement block (FB), and a single convolution. The DB combines dilated convolution, batch normalization (BN), common convolutions, and ReLU activations to obtain more context information. The RVB uses a parallel combination of convolution, BN, and ReLU to extract complementary width features. The FB refines the features obtained from the RVB into more accurate information. A single convolution with a residual learning operation then constructs the clean image. These key components give the HDCNN good performance in image denoising, and experiments show that the proposed HDCNN achieves good denoising results on public data sets.
In this paper, we investigate key radar signal sorting and recognition in electronic intelligence (ELINT). Our major contribution is a combined approach based on clustering and the pulse repetition interval (PRI) transform algorithm, addressing the fact that traditional methods based on pulse description words (PDWs) were not exclusively targeted at tiny particular signals and were less time-efficient. We achieve this in three steps. First, PDW presorting is carried out with the DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithm; then PRI estimates for each cluster are obtained with the PRI transform algorithm; finally, by matching the various PRI estimates against key targets, the method determines whether the current signal contains key target signals. Simulation results show that the proposed method improves the time efficiency of key signal recognition and copes with complex signal environments involving noise interference and overlapping signals.
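After DBSCAN groups pulses by their PDW features, each cluster's PRI must be estimated from its pulse times of arrival (TOAs). The full PRI transform handles interleaved, jittered trains and suppresses subharmonics; the sketch below takes the simpler first-difference route on an assumed single, clean train, which conveys the basic idea of recovering the repetition interval:

```python
import numpy as np

def estimate_pri(toas, bin_width=1e-6):
    """Estimate PRI from the times of arrival of one pulse train.

    Histogram the first differences of the sorted TOAs and take the
    densest bin (a simplified stand-in for the full PRI transform).
    """
    diffs = np.diff(np.sort(toas))
    bins = np.arange(diffs.min(), diffs.max() + 2 * bin_width, bin_width)
    counts, edges = np.histogram(diffs, bins=bins)
    i = counts.argmax()
    return (edges[i] + edges[i + 1]) / 2

rng = np.random.default_rng(6)
true_pri = 100e-6                                # 100 microseconds
toas = np.arange(200) * true_pri + rng.normal(0, 0.2e-6, 200)  # jittered train
pri_hat = estimate_pri(toas)
```

The key-target matching step then reduces to comparing `pri_hat` against a library of known PRIs within a tolerance.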
Class attendance is important, but it is often recorded by roll call or by signing attendance registers, which is time-consuming, easy to cheat, and hard to draw any information from. Other, expensive alternatives automate attendance recording with varying accuracy. This study experimented with a smartphone camera and different combinations of face detection and recognition algorithms to determine whether they can record attendance successfully while keeping the solution cost-effective; the effect of different class sizes was also investigated. The research was done within a pragmatist philosophy, using a prototype in a field experiment. The algorithms used were Viola-Jones (Haar features), a deep neural network, and histograms of oriented gradients for detection, and eigenfaces, fisherfaces, and local binary pattern histograms for recognition. The best combination was Viola-Jones with fisherfaces, with a mean accuracy of 54% for a class of 10 students and 34.5% for a class of 22 students; the best overall performance on a single class photo was 70% (class size 10). As is, this prototype is not accurate enough to use, but with a few adjustments it may become a cheap, easy-to-implement solution to the attendance recording problem.
Teaching students the concepts behind computational thinking is a difficult task, often gated by the inherent difficulty of programming languages. In the classroom, teaching assistants may be required to interact with students to help them learn the material, and time spent grading and offering feedback on assignments takes away from time spent helping students directly. We therefore offer a framework for developing an explainable artificial intelligence that performs automated analysis of student code while offering feedback and partial credit. The system depends on three core components: a knowledge base, a set of conditions to be analyzed, and a formal set of inference rules. In this paper, we develop such a system for our own language by employing π-calculus and Hoare logic. The system can also learn rules on its own: given solution files, it extracts the important aspects of the program and develops feedback that explicitly details the errors students make when they deviate from those aspects. The level of detail and expected precision can easily be modified through parameter tuning and variety in the sample solutions.
Funding: Natural Science Basic Research Plan in Shaanxi Province of China under Grant 2022JM-327, and in part by the CAAI-Huawei MindSpore Academic Open Fund.
Abstract: Because of their strong problem-solving ability, evolutionary multitask optimization (EMTO) algorithms have been widely studied recently. Evolutionary algorithms have the advantage of searching quickly for the optimal solution, but they easily fall into local optima and generalize poorly. Combining evolutionary multitask algorithms with evolutionary optimization algorithms can be an effective way to solve these problems. Through the implicit parallelism of the tasks themselves and knowledge transfer between tasks, more promising individuals can be generated during evolution, allowing the search to jump out of local optima. How best to combine the two has accordingly attracted growing research attention. This paper explores existing evolutionary multitasking theory and improvement schemes in detail, then summarizes applications of EMTO in different scenarios. Finally, based on existing research, future research trends and potential directions for exploration are identified.
Abstract: The inquiry process of traditional medical equipment maintenance management is complex, which negatively affects the efficiency and accuracy of maintenance management and wastes significant time and resources. To properly predict the failure of medical equipment, a method for failure life cycle prediction of medical equipment was developed. The system is divided into four modules. The whole life cycle management module constructs the life cycle data set of medical devices across the three stages of management: early, middle, and late. The status detection module monitors the main operating data of medical device components against the normal values of the relevant sensitive data held in the whole life cycle management module. The fault diagnosis module uses an inference engine to diagnose the equipment's operating data based on those normal values. The fault prediction module builds a fine-grained prediction system based on the least squares support vector machine algorithm and uses the AFS-ABC algorithm to optimize the model, obtaining the optimal regularization and width parameters; the optimal model is then used to predict medical equipment failure. Comparative experiments were designed to determine whether the designed system is effective. The results demonstrate that the suggested system accurately predicts the breakdown of ECG diagnostic equipment and incubators and has a high level of support and dependability. Compared with the comparison systems, the designed system has the smallest prediction error and the fastest program execution time. Hence, the designed system can accurately predict the numerous causes and types of medical device failure.
Abstract: In the modern world, women achieve tremendous success in every field. They can play, learn, and earn as much as men. But what about safety? Do they have the same secure environment that men and boys do? The answer is "NO". Women and girls have been subjected to numerous incidents, including acid throwing, rape, kidnapping, and harassment. It is common to read such news in newspapers every day. These incidents make women feel unsafe in society. Our freedom came a long time ago, but women still lack complete security. Not all women can fight or shout whenever danger threatens them, and what can physically challenged people and children do? To help women feel safe, we designed a "Wrist Band" for women's safety using IoT. As the sensors read information from the body, the band continuously updates data such as pulse, temperature, and vibration to well-wishers through the Blynk app.
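The band's core sensing-to-alert loop can be sketched as a threshold check over the monitored vitals. The ranges below are illustrative assumptions, not values from the paper, and the Blynk notification step is omitted:

```python
# Sketch of the band's sensing-to-alert logic (hypothetical thresholds;
# the paper's actual limits and Blynk integration are not shown).

NORMAL_RANGES = {
    "pulse_bpm": (60, 100),         # assumed resting heart-rate range
    "temperature_c": (36.1, 37.5),  # assumed body-temperature range
}

def check_readings(readings):
    """Return a list of alert messages for out-of-range sensor values."""
    alerts = []
    for sensor, value in readings.items():
        low, high = NORMAL_RANGES[sensor]
        if not (low <= value <= high):
            alerts.append(f"{sensor}={value} outside normal range [{low}, {high}]")
    return alerts

print(check_readings({"pulse_bpm": 128, "temperature_c": 36.8}))
```

In the described device, any non-empty alert list would trigger an update to the well-wishers' app.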
Funding: This work was supported in part by the National Key Research and Development Program of China under Grant 2020YFB1807900 and the National Natural Science Foundation of China under Grant 62101306. The work was also supported by Datang Linktester Technology Co., Ltd.
Abstract: The rise of the Internet of Things and autonomous systems has made connecting vehicles more critical. Connected autonomous vehicles can create diverse communication networks that can improve the environment and offer contemporary applications. With the advent of fifth-generation (5G) networks, vehicle-to-everything (V2X) networks are expected to be highly intelligent and to reside on superfast, reliable, low-latency connections. Network slicing, machine learning (ML), and deep learning (DL) are related to network automation and optimization in V2X communication. ML/DL with network slicing aims to optimize the performance and reliability of V2X networks, personalized services, costs, and scalability, and thus enhances the overall driving experience. These advantages can ultimately lead to a safer and more efficient transportation system. However, existing long-term-evolution systems and enabling 5G technologies cannot meet such dynamic requirements without adding higher levels of complexity. ML algorithms mitigate these complexity levels, which can be highly instrumental in such vehicular communication systems. This study reviews V2X slicing based on a proposed taxonomy that describes the enablers of slicing, different slicing configurations, the requirements of slicing, and the ML algorithms used to control and manage slices. It also reviews various research works that establish network slicing through ML algorithms to enable V2X communication use cases, focusing on V2X network slicing with efficient control and management. The enabler technologies are considered in light of the network requirements and particular configurations, together with the underlying methods and algorithms, and some critical challenges and possible solutions are reviewed. The paper concludes with a future roadmap by discussing open research issues and future directions.
Funding: National Key R&D Program of China (No. 2019YFD0901605).
Abstract: Ear recognition is a new form of biometric identification technology. Feature extraction is a key step in pattern recognition that determines the accuracy of classification results. Single feature extraction can achieve a high recognition rate under certain conditions, but double feature extraction can overcome its limitations. To improve the accuracy of classification results, this paper proposes a new method, complementary double feature extraction based on principal component analysis (PCA) and Fisherface, and applies it to human ear image recognition. Experiments were carried out on the ear image library provided by the University of Science and Technology Beijing. The results show that the ear recognition rate of the proposed method is significantly higher than that of single feature extraction using PCA, Fisherface, or independent component analysis (ICA) alone.
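The fusion step behind "complementary double features" can be sketched as concatenating two feature vectors per sample and classifying by nearest neighbour. The vectors below are toy stand-ins for the PCA and Fisherface projections, which are not implemented here:

```python
import math

# Toy sketch of double-feature fusion: concatenate two per-sample feature
# vectors (stand-ins for PCA and Fisherface outputs) and classify with
# nearest neighbour over the combined space.

def combine(feat_a, feat_b):
    """Fuse two feature vectors by concatenation."""
    return feat_a + feat_b

def nearest_label(query, gallery):
    """gallery: list of (label, feature) pairs; return label of the closest."""
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return min(gallery, key=lambda item: dist(query, item[1]))[0]

gallery = [
    ("ear_A", combine([0.9, 0.1], [0.8, 0.2])),
    ("ear_B", combine([0.1, 0.9], [0.2, 0.7])),
]
query = combine([0.85, 0.15], [0.75, 0.25])
print(nearest_label(query, gallery))  # → ear_A
```

The point of the concatenation is that a sample ambiguous in one feature space may still be separable in the combined one.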
Abstract: Document images often contain various page components and complex logical structures, which make document layout analysis a challenging task. Most deep-learning-based document layout analysis methods adopt convolutional neural networks (CNNs) as the feature extraction networks. In this paper, a hybrid spatial-channel attention network (HSCA-Net) is proposed to improve feature extraction capability by introducing an attention mechanism to explore more salient properties within document pages. The HSCA-Net consists of a spatial attention module (SAM), a channel attention module (CAM), and a designed lateral attention connection. CAM adaptively adjusts channel feature responses by emphasizing selective information, which depends on the contribution of the features of each channel. SAM guides CNNs to focus on informative content and capture global context information among page objects. The lateral attention connection incorporates SAM and CAM into a multiscale feature pyramid network and thus retains original feature information. The effectiveness and adaptability of HSCA-Net are evaluated through multiple experiments on publicly available datasets such as PubLayNet, ICDAR-POD, and Article Regions. Experimental results demonstrate that HSCA-Net achieves state-of-the-art performance on the document layout analysis task.
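The channel-attention idea behind a CAM can be sketched as squeezing each channel to a scalar (global average pooling) and gating the channel by a sigmoid of that scalar. This toy version omits the learned weights a real CAM would apply between the squeeze and the gate:

```python
import math

# Minimal sketch of channel attention: squeeze each channel by global
# average pooling, turn the summary into a [0,1] gate with a sigmoid, and
# rescale the channel. (HSCA-Net's actual CAM learns this mapping.)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention(feature_maps):
    """feature_maps: list of channels, each a flat list of activations.
    Returns each channel rescaled by a gate derived from its mean."""
    gates = [sigmoid(sum(ch) / len(ch)) for ch in feature_maps]
    return [[g * v for v in ch] for g, ch in zip(gates, feature_maps)]

maps = [[4.0, 4.0], [0.0, 0.0]]
print(channel_attention(maps))
```

The strongly responding channel passes almost unchanged while the flat channel is damped, which is the "emphasize selective information" behaviour the abstract describes.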
Funding: Province Key Research and Development Program of Shandong (Soft Science Projects) [No. 2021RKY01007] and Major Scientific and Technological Innovation Projects in Shandong Province [No. 2018CXGC0703].
Abstract: To address the issues of information asymmetry between the two parties and moral hazard among service providers in the service outsourcing process, this paper builds a Stackelberg game model based on the principal-agent framework, examines the dynamic game situation before the contract is signed, and develops four information models. The analysis reveals a Pareto improvement in the game's Nash equilibrium when comparing the four models from the standpoint of the supply chain. In the complete-information scenario, the service provider's service level, the customer company's incentive effectiveness, and the supply chain system's final profit are all maximized. Furthermore, a coordination mechanism for disposable profit is built in this study. In response to the inadequacy of principal-agent theory in addressing the information asymmetry and moral hazard problems, the paper then proposes a blockchain-based architecture for supervising the service outsourcing process and a distributed incentive mechanism under the coordination mechanism. The experimental results demonstrate that both parties can benefit from the coordination mechanism and that the application of blockchain technology can resolve these issues and effectively incentivize service providers.
Abstract: With the continuous development of medical informatics and digital diagnosis, the classification of tuberculosis (TB) cases from computed tomography (CT) images of the lung based on deep learning is an important aid in clinical diagnosis and treatment. Because of its potential application in medical image classification, this task has received extensive research attention. Existing neural network techniques remain challenged in extracting the global contextual information of images and in the network complexity required for image classification. To address these issues, this paper proposes a lightweight medical image classification network based on a combination of a Transformer and a convolutional neural network (CNN) for classifying TB cases from lung CT. The method mainly consists of a fusion of the CNN module and the Transformer module, exploiting the advantages of both to accomplish a more accurate classification task. On the one hand, the CNN branch supplements the Transformer branch with basic local feature information at the low levels; on the other hand, at the middle and high levels of the model, the CNN branch can also provide the Transformer architecture with different local and global feature information, enhancing the model's ability to obtain feature information and improving the accuracy of image classification. A shortcut is used in each module of the network to counter gradient divergence and optimize the effectiveness of TB classification. The proposed lightweight model addresses the long training times of TB classification from lung CT and improves classification speed. The method was validated on a CT image data set provided by the First Hospital of Lanzhou University. The experimental results show that the proposed lightweight classification network fully extracts the feature information of the input images and obtains high-accuracy classification results.
Abstract: Background: To solve cluster analysis better, we propose a new method based on the chaotic particle swarm optimization (CPSO) algorithm. Methods: To enhance clustering performance, we propose a novel method based on CPSO. We first evaluate the clustering performance of this model using the variance ratio criterion (VRC) as the evaluation metric. The effectiveness of the CPSO algorithm is compared with that of the traditional particle swarm optimization (PSO) algorithm. CPSO aims to improve the VRC value while avoiding local optimal solutions. The simulated dataset is set at three levels of overlap: non-overlapping, partially overlapping, and severely overlapping. Finally, we compare CPSO with two other methods. Results: The comparative results show that our proposed CPSO method performs outstandingly. Under non-overlapping, partial overlapping, and severe overlapping conditions, our method achieves the best VRC values of 1683.2, 620.5, and 275.6, respectively; the mean VRC values in these three cases are 1683.2, 617.8, and 222.6. Conclusion: CPSO performed better than the other methods for cluster analysis problems and is effective for cluster analysis.
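The variance ratio criterion the study optimizes is the Calinski-Harabasz index: between-cluster dispersion over within-cluster dispersion, scaled by degrees of freedom. A minimal sketch on 1-D toy data (the points are invented, not from the paper):

```python
# Variance ratio criterion (Calinski-Harabasz index) on 1-D data:
# VRC = (SSB / (k - 1)) / (SSW / (n - k)), higher is better.

def vrc(points, labels):
    n = len(points)
    ks = sorted(set(labels))
    k = len(ks)
    mean = sum(points) / n
    clusters = {c: [p for p, l in zip(points, labels) if l == c] for c in ks}
    # between-cluster dispersion: cluster sizes times squared centroid offsets
    ssb = sum(len(cl) * (sum(cl) / len(cl) - mean) ** 2 for cl in clusters.values())
    # within-cluster dispersion: squared deviations from each centroid
    ssw = sum(sum((p - sum(cl) / len(cl)) ** 2 for p in cl) for cl in clusters.values())
    return (ssb / (k - 1)) / (ssw / (n - k))

pts = [1.0, 1.2, 0.8, 10.0, 10.2, 9.8]
print(vrc(pts, [0, 0, 0, 1, 1, 1]))  # well-separated clusters → large VRC
```

A clustering that mixes the two groups would score far lower, which is exactly the signal CPSO's particles are driven to maximize.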
Abstract: Healthcare and medical records contain a great deal of information. However, it is challenging for humans to turn data into information and spot hidden patterns in today's digitally based culture. Effective decision support technologies can help medical professionals find critical information concealed in voluminous data and support their clinical judgments as well as various healthcare management activities. This paper presents an extensive literature survey of healthcare systems using machine learning based on multi-criteria decision-making. Various existing studies are reviewed, and a critical analysis is conducted that can help researchers explore other research areas to cater to the needs of the field.
Funding: The authors acknowledge the National Natural Science Foundation of China (Grant Nos. 61573285 and 62003267), the Open Fund of the Key Laboratory of Data Link Technology of China Electronics Technology Group Corporation (Grant No. CLDL-20182101), and the Natural Science Foundation of Shaanxi Province (Grant No. 2020JQ220) for funding these experiments.
Abstract: Aiming at intelligent decision-making for unmanned aerial vehicles (UAVs) based on situation information in air combat, a novel maneuvering decision method based on deep reinforcement learning is proposed in this paper. The autonomous maneuvering model of the UAV is established as a Markov decision process. The twin delayed deep deterministic policy gradient (TD3) algorithm and the deep deterministic policy gradient (DDPG) algorithm in deep reinforcement learning are used to train the model, and the experimental results of the two algorithms are analyzed and compared. The simulation results show that, compared with the DDPG algorithm, the TD3 algorithm has stronger decision-making performance and faster convergence, and is more suitable for solving combat problems. The proposed algorithm enables UAVs to autonomously make maneuvering decisions based on situation information such as position, speed, and relative azimuth, adjust their actions to approach the enemy, and strike successfully, providing a new method for intelligent maneuvering decisions in air combat.
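TD3's key difference from DDPG is its clipped double-Q target: noise added to the target action is clipped, and the smaller of two critic estimates is bootstrapped. A scalar sketch with toy linear critics (real critics are neural networks, and the numbers are illustrative):

```python
# Sketch of TD3's target computation: y = r + gamma * min(Q1(a~), Q2(a~)),
# where a~ is the target-policy action plus clipped smoothing noise.

def td3_target(reward, gamma, next_action, q1, q2, noise, clip=0.5):
    noise = max(-clip, min(clip, noise))  # clip the smoothing noise
    a = next_action + noise               # smoothed target action
    q_min = min(q1(a), q2(a))             # twin-critic minimum curbs overestimation
    return reward + gamma * q_min

q1 = lambda a: 2.0 * a       # toy critic 1
q2 = lambda a: 1.5 * a + 1.0 # toy critic 2
print(td3_target(reward=1.0, gamma=0.99, next_action=2.0, q1=q1, q2=q2, noise=0.9))
```

Taking the minimum of the two critics is what damps the value overestimation that makes plain DDPG unstable, consistent with the faster, steadier convergence the paper reports for TD3.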
Abstract: As a chemical component, nicotine content has an important influence on the quality of tobacco leaves. Rapid and nondestructive quantitative analysis of nicotine is an important task in the tobacco industry. Near-infrared (NIR) spectroscopy is an effective chemical composition analysis technique that has been widely used. In this paper, we propose a one-dimensional fully convolutional network (1D-FCN) model to quantitatively analyze the nicotine content of tobacco leaves from NIR spectroscopy data in a cloud environment. The 1D-FCN model uses one-dimensional convolution layers to extract complex features directly from sequential spectroscopy data. It consists of five convolutional layers and two fully connected layers, with the max-pooling layer replaced by a convolutional layer to avoid information loss. Cloud computing techniques are used to meet the increasing demands of large-scale data analysis and to implement data sharing and access. Experimental results show that the proposed 1D-FCN model can effectively extract the complex characteristics inside the spectrum and predict nicotine content in tobacco leaves more accurately than other approaches. This research provides a deep learning foundation for the quantitative analysis of NIR spectral data in the tobacco industry.
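The building block the 1D-FCN stacks over a spectrum is a one-dimensional convolution followed by ReLU. A minimal sketch with a hand-picked kernel (the paper's filters are learned, and the spectrum values here are invented):

```python
# Minimal 1-D convolution with ReLU over a sequential signal, the basic
# operation a 1D-FCN applies to NIR spectra (toy kernel, not learned).

def conv1d_relu(signal, kernel):
    out = []
    for i in range(len(signal) - len(kernel) + 1):
        s = sum(signal[i + j] * kernel[j] for j in range(len(kernel)))
        out.append(max(0.0, s))  # ReLU activation
    return out

spectrum = [0.1, 0.4, 0.9, 0.4, 0.1]
edge_kernel = [-1.0, 0.0, 1.0]  # responds to rising absorbance
print(conv1d_relu(spectrum, edge_kernel))
```

Stacking several such layers lets the network build progressively more complex spectral features, and replacing pooling with a strided convolution (as the paper does) keeps this same form while downsampling.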
Abstract: With the improvement of living standards, the demand for health monitoring and exercise detection is increasing. It is of great significance to study human activity recognition (HAR) methods that differ from traditional feature extraction methods. This article uses convolutional neural network (CNN) algorithms in deep learning to automatically extract features of activities related to human life, and a stochastic gradient descent algorithm to optimize the parameters of the CNN. The trained network model is compressed with STM32CubeMX-AI. Finally, this article describes the use of neural networks on embedded devices to recognize six activities of daily life: sitting, standing, walking, jogging, going upstairs, and going downstairs. An acceleration sensor capturing human activity information is used to obtain the relevant characteristics of each activity, thereby solving the HAR problem. By plotting the accuracy curve, loss function curve, and confusion matrix of the training model, the recognition performance of the CNN can be seen more intuitively. The best model is selected after comparing the average accuracy of each set of experiments and the corresponding test-set results.
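The optimizer the article uses, stochastic gradient descent, is just the element-wise update w ← w − lr·∇L. A self-contained sketch on a toy one-weight squared-error loss (not the CNN's actual loss):

```python
# One step of stochastic gradient descent, shown converging on a toy
# squared-error loss L = (w - 3)^2 with gradient dL/dw = 2*(w - 3).

def sgd_step(weights, grads, lr=0.1):
    """w <- w - lr * dL/dw, applied element-wise."""
    return [w - lr * g for w, g in zip(weights, grads)]

w = [0.0]
for _ in range(50):
    grad = [2.0 * (w[0] - 3.0)]
    w = sgd_step(w, grad)
print(round(w[0], 4))  # converges toward the minimizer w = 3
```

The CNN training simply applies this same rule to every filter weight, with gradients supplied by backpropagation.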
Abstract: Artificial intelligence (AI) and robotics have gone through three generations of development, from the Turing test and the Logic Theory Machine to expert systems and self-driving cars. In today's third generation, AI and robotics have been used collaboratively in many areas of our society, including industry, business, manufacturing, research, and education. There are many challenging problems in developing AI and robotics applications. We launch this new Journal of Artificial Intelligence and Technology to facilitate the exchange of the latest research and practice in AI and related technologies. In this inaugural issue, we first introduce key technologies and platforms supporting third-generation AI and robotics application development, based on stacks of technologies and platforms. We present examples of such development environments created by both industry and academia. We also selected eight papers in related areas to celebrate the founding of this journal.
Funding: Supported by the National Natural Science Foundation of China under Grant Nos. 61803061 and 61906026; the Innovation Research Group of Universities in Chongqing; the Chongqing Natural Science Foundation under Grants cstc2020jcyj-msxmX0577 and cstc2020jcyj-msxmX0634; the "Chengdu-Chongqing Economic Circle" innovation funding of the Chongqing Municipal Education Commission, KJCXZD2020028; the Science and Technology Research Program of the Chongqing Municipal Education Commission, Grant KJQN202000602; the Ministry of Education China Mobile Research Fund (MCM 20180404); and the special key project of Chongqing technology innovation and application development, cstc2019jscxzdztzx0068.
Abstract: Hazy weather degrades the visual quality of images and makes advanced vision tasks difficult to carry out, so dehazing a hazy image is an important step before executing such tasks. Traditional dehazing algorithms work by improving image brightness and contrast or by constructing artificial priors, such as color attenuation priors and dark channel priors, but their performance is unstable in complex scenes. Among convolutional-neural-network methods, encoder-decoder dehazing networks do not consider the difference between the image before and after dehazing, and spatial image information is lost in the encoding stage. To overcome these problems, this paper proposes a novel end-to-end two-stream convolutional neural network for single-image dehazing. The network model is composed of a spatial-information feature stream and a high-level semantic feature stream. The spatial-information feature stream retains the detailed information of the dehazed image, and the high-level semantic feature stream extracts its multi-scale structural features. A spatial-information auxiliary module is designed and placed between the feature streams. This module uses the attention mechanism to construct a unified expression of different types of information and realizes the gradual restoration of the clear image, with the semantic information assisting the spatial information in the dehazing network. A parallel residual twicing module is proposed, which performs dehazing on the difference information of features at different stages to improve the model's ability to discriminate haze images. The peak signal-to-noise ratio (PSNR) and structural similarity are used to quantitatively evaluate the similarity between each algorithm's dehazing results and the original image. The structural similarity and PSNR of the proposed method reach 0.852 and 17.557 dB on the HazeRD dataset, higher than existing comparison algorithms. On the SOTS dataset, the indicators are 0.955 and 27.348 dB, which are sub-optimal results. In experiments with real haze images, the method also achieves excellent visual restoration. The experimental results show that the proposed model can restore the desired haze-free visual effects and has good generalization performance in real haze scenes.
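The PSNR figures quoted above come from the standard definition 10·log10(MAX²/MSE) between a restored image and its reference. A minimal sketch on tiny flattened grayscale arrays (the pixel values are invented):

```python
import math

# Peak signal-to-noise ratio between two images, as used in the paper's
# quantitative evaluation; images are flat lists of 8-bit grey levels.

def psnr(img_a, img_b, max_val=255.0):
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

reference = [52, 55, 61, 59]
restored = [50, 57, 60, 58]
print(round(psnr(restored, reference), 2))
```

Higher PSNR means the dehazed output is closer to the ground-truth clear image, which is why it pairs naturally with structural similarity in the paper's evaluation.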
Funding: Supported in part by the Guangdong Basic and Applied Basic Research Foundation under Grant 2021A1515110079, in part by the Fundamental Research Funds for the Central Universities under Grant D5000210966, and in part by the Basic Research Plan in Taicang under Grant TC2021JC23.
Abstract: Deep convolutional neural networks (CNNs) with strong learning ability have been used for image denoising. However, some CNNs depend on a single deep network to train an image denoising model, which performs poorly in complex scenes. To address this problem, we propose a hybrid denoising CNN (HDCNN). HDCNN is composed of a dilated block (DB), a RepVGG block (RVB), a feature refinement block (FB), and a single convolution. The DB combines a dilated convolution, batch normalization (BN), common convolutions, and the ReLU activation function to obtain more context information. The RVB uses a parallel combination of convolution, BN, and ReLU to extract complementary width features. The FB obtains more accurate information by refining the features from the RVB. A single convolution collaborates with a residual learning operation to construct a clean image. These key components give HDCNN good denoising performance. Experiments show that the proposed HDCNN achieves good denoising results on public data sets.
Abstract: In this paper, we investigate the problem of key radar signal sorting and recognition in electronic intelligence (ELINT). Our major contribution is a combined approach based on clustering and the pulse repetition interval (PRI) transform algorithm, addressing the problem that traditional methods based on pulse description words (PDWs) did not specifically target particular key signals and were less time-efficient. We achieve this in three steps: first, PDW presorting is carried out with the DBSCAN (density-based spatial clustering of applications with noise) algorithm; then, PRI estimates for each cluster are obtained by the PRI transform algorithm; finally, by judging the match between the PRI estimates and key targets, it is determined whether the current signal contains key target signals. Simulation results show that the proposed method improves the time efficiency of key signal recognition and copes with complex signal environments featuring noise interference and overlapping signals.
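Once DBSCAN has presorted pulses into per-emitter clusters, the PRI of a cluster can be estimated from its times of arrival (TOAs). The sketch below uses the most common successive-TOA difference as a simple stand-in for the full PRI transform, which additionally suppresses harmonics and tolerates jitter:

```python
from collections import Counter

# Toy PRI estimate for one presorted cluster: take the mode of the
# successive TOA differences. (A stand-in for the PRI transform, which
# handles jitter and subharmonics more robustly.)

def estimate_pri(toas):
    """Return the most common successive TOA difference in the cluster."""
    toas = sorted(toas)
    diffs = Counter(b - a for a, b in zip(toas, toas[1:]))
    return diffs.most_common(1)[0][0]

# one emitter's cluster with a missing pulse around t = 300 microseconds
cluster = [0, 100, 200, 400, 500]
print(estimate_pri(cluster))  # → 100
```

The final matching step of the paper then compares such per-cluster estimates against the PRIs of known key targets.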
Abstract: Class attendance is important. Attendance is often recorded by roll call or by signing attendance registers; these methods are time-consuming, easy to cheat, and hard to draw information from. Other, more expensive alternatives automate attendance recording with varying accuracy. This study experimented with a smartphone camera and different combinations of face detection and recognition algorithms to determine whether they can record attendance successfully while keeping the solution cost-effective. The effect of different class sizes was also investigated. The research was done within a pragmatist philosophy, using a prototype in a field experiment. The algorithms used were Viola-Jones (Haar features), a deep neural network, and histogram of oriented gradients for detection, and eigenfaces, Fisherfaces, and local binary pattern histograms for recognition. The best combination was Viola-Jones with Fisherfaces, with a mean accuracy of 54% for a class of 10 students and 34.5% for a class of 22 students. The best overall performance on a single class photo was 70% (class size 10). As is, this prototype is not accurate enough to use, but with a few adjustments it may become a cheap, easy-to-implement solution to the attendance recording problem.
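The local binary pattern histogram recognizer tested in the study is built on per-pixel LBP codes: each of the 8 neighbours is thresholded against the centre pixel and the results are read as one byte. A minimal sketch for a single 3×3 patch (the grey levels are invented):

```python
# Local binary pattern code for one pixel: threshold the 8 neighbours
# against the centre and pack the bits, clockwise from the top-left.
# Histograms of these codes form the LBPH face descriptor.

def lbp_code(patch):
    """patch: 3x3 list of grey levels; returns the 8-bit LBP code."""
    center = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, value in enumerate(neighbours):
        if value >= center:
            code |= 1 << bit
    return code

patch = [[90, 200, 80],
         [70, 100, 120],
         [60, 100, 40]]
print(lbp_code(patch))  # → 42
```

Because the code depends only on orderings relative to the centre, it is insensitive to uniform lighting changes, which is one reason LBPH is popular for low-cost face recognition.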
Funding: Supported by general funding at the IoT and Robotics Education Lab and the FURI program at Arizona State University.
Abstract: Teaching students the concepts behind computational thinking is a difficult task, often gated by the inherent difficulty of programming languages. In the classroom, teaching assistants may be required to interact with students to help them learn the material. Time spent grading and offering feedback on assignments takes away from the time available to help students directly. We therefore offer a framework for developing an explainable artificial intelligence that performs automated analysis of student code while offering feedback and partial credit. The system depends on three core components: a knowledge base, a set of conditions to be analyzed, and a formal set of inference rules. In this paper, we develop such a system for our own language by employing π-calculus and Hoare logic. Our system can also perform self-learning of rules. Given solution files, the system extracts the important aspects of a program and develops feedback that explicitly details the errors students make when they veer away from these aspects. The level of detail and expected precision can easily be modified through parameter tuning and variety in the sample solutions.
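The partial-credit idea can be sketched as a knowledge base of expected conditions, each carrying a weight and a feedback message; credit is deducted per unmet condition. The rules below are invented for illustration; the paper's system derives such conditions formally from solution files via π-calculus and Hoare logic:

```python
# Toy rule-based partial-credit grader: a knowledge base of weighted
# conditions with feedback messages (rules invented for illustration).

RULES = [
    ("initializes accumulator", 0.3, "The sum variable is never initialised."),
    ("loops over all inputs",   0.4, "The loop does not visit every element."),
    ("returns the result",      0.3, "The computed value is never returned."),
]

def grade(satisfied):
    """satisfied: set of rule names the student's code meets.
    Returns (credit, feedback messages for the unmet rules)."""
    credit, feedback = 1.0, []
    for name, weight, message in RULES:
        if name not in satisfied:
            credit -= weight
            feedback.append(message)
    return round(credit, 2), feedback

score, notes = grade({"initializes accumulator", "loops over all inputs"})
print(score, notes)
```

Coupling each deduction to an explicit message is what makes the grading explainable: the student sees not just a score but the specific aspect of the reference solution their program deviated from.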