Organizations are adopting the Bring Your Own Device (BYOD) concept to enhance productivity and reduce expenses. However, this trend introduces security challenges, such as unauthorized access. Traditional access control systems, such as Attribute-Based Access Control (ABAC) and Role-Based Access Control (RBAC), are limited in their ability to enforce access decisions due to the variability and dynamism of attributes related to users and resources. This paper proposes a method for enforcing access decisions that is adaptable and dynamic, based on multilayer hybrid deep learning techniques, particularly the Tabular Deep Neural Network (Tabular DNN) method. This technique transforms all input attributes in an access request into a binary classification (allow or deny) using multiple layers, ensuring accurate and efficient access decision-making. The proposed solution was evaluated using the Kaggle Amazon access control policy dataset and demonstrated its effectiveness by achieving a 94% accuracy rate. Additionally, the proposed solution enhances the implementation of access decisions based on a variety of resource and user attributes while ensuring privacy through indirect communication with the Policy Administration Point (PAP). This solution significantly improves the flexibility of access control systems, making them more dynamic and adaptable to the evolving needs of modern organizations. Furthermore, it offers a scalable approach to manage the complexities associated with the BYOD environment, providing a robust framework for secure and efficient access management.
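The core idea, mapping a tuple of request attributes to an allow/deny decision with a multilayer network, can be sketched as follows. This is not the paper's Tabular DNN; it is a generic stand-in using scikit-learn's `MLPClassifier` on synthetic attributes, and the attribute names and toy policy rule are hypothetical.

```python
# Minimal sketch: multilayer tabular classifier mapping access-request
# attributes (user_role, resource_type, device_trust) to allow (1) / deny (0).
# Attributes, policy rule, and data are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import OrdinalEncoder

rng = np.random.default_rng(0)

# 400 synthetic requests, each with three categorical attribute ids in 0..4
requests = rng.integers(0, 5, size=(400, 3)).astype(str)
# Toy policy: allow iff role id < 3 and device trust > 1 (hypothetical rule)
labels = ((requests[:, 0].astype(int) < 3)
          & (requests[:, 2].astype(int) > 1)).astype(int)

X = OrdinalEncoder().fit_transform(requests)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
clf.fit(X, labels)

acc = clf.score(X, labels)  # training accuracy on the toy policy
```

A real deployment would evaluate on held-out requests (as the paper does with the Kaggle dataset) rather than on training data.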
When a customer uses software, defects may surface that can then be removed in updated versions of the software. Hence, in the present work, a robust examination of cross-project software defect prediction is elaborated through an innovative hybrid machine learning framework. The proposed technique combines an advanced deep neural network architecture with ensemble models such as Support Vector Machine (SVM), Random Forest (RF), and XGBoost. The study evaluates performance on multiple software projects, namely CM1, JM1, KC1, and PC1, using datasets from the PROMISE Software Engineering Repository. The three hybrid models compared are Hybrid Model-1 (SVM, RandomForest, XGBoost, Neural Network), Hybrid Model-2 (GradientBoosting, DecisionTree, LogisticRegression, Neural Network), and Hybrid Model-3 (KNeighbors, GaussianNB, Support Vector Classification (SVC), Neural Network); Hybrid Model-3 surpasses the others in terms of recall, F1-score, accuracy, ROC AUC, and precision. The presented work offers valuable insights into the effectiveness of hybrid techniques for cross-project defect prediction, providing a comparative perspective on early defect identification and mitigation strategies.
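One common way to combine base learners with a neural network, in the spirit of Hybrid Model-1, is stacking. The sketch below is a hedged approximation: XGBoost is replaced by scikit-learn's `GradientBoostingClassifier` to keep it dependency-free, and synthetic data stands in for the PROMISE datasets.

```python
# Hedged sketch of a stacked "hybrid" defect predictor: SVM, random forest,
# and gradient boosting as base learners, with a small neural network as the
# final estimator. Data below is synthetic, not a PROMISE project.
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Imbalanced toy data, mimicking the rarity of defective modules
X, y = make_classification(n_samples=300, n_features=10,
                           weights=[0.8, 0.2], random_state=1)

stack = StackingClassifier(
    estimators=[
        ("svm", SVC(probability=True, random_state=1)),
        ("rf", RandomForestClassifier(n_estimators=50, random_state=1)),
        ("gb", GradientBoostingClassifier(random_state=1)),
    ],
    final_estimator=MLPClassifier(hidden_layer_sizes=(16,),
                                  max_iter=1000, random_state=1),
)
stack.fit(X, y)
acc = stack.score(X, y)
```

For cross-project prediction, training and scoring would use different projects (e.g. fit on CM1, score on JM1) rather than a single dataset as here.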
This study delves into the applications, challenges, and future directions of deep learning techniques in the field of image recognition. Deep learning, particularly Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Generative Adversarial Networks (GANs), has become key to enhancing the precision and efficiency of image recognition. These models are capable of processing complex visual data, facilitating efficient feature extraction and image classification. However, acquiring and annotating high-quality, diverse datasets, addressing imbalances in datasets, and model training and optimization remain significant challenges in this domain. The paper proposes strategies for improving data augmentation, optimizing model architectures, and employing automated model optimization tools to address these challenges, while also emphasizing the importance of considering ethical issues in technological advancements. As technology continues to evolve, the application of deep learning in image recognition will further demonstrate its potent capability to solve complex problems, driving society towards more inclusive and diverse development.
In modern transportation, pavement is one of the most important civil infrastructures for the movement of vehicles and pedestrians. Pavement service quality and service life are of great importance for civil engineers, as they directly affect the regular service for users. Therefore, monitoring the health status of pavement before irreversible damage occurs is essential for timely maintenance, which in turn ensures public transportation safety. Many pavement damages can be detected and analyzed by monitoring structural dynamic responses and evaluating road surface conditions. Advanced technologies can be employed for the collection and analysis of such data, including various intrusive sensing techniques, image processing techniques, and machine learning methods. This review summarizes the state-of-the-art of these three technologies in pavement engineering in recent years and suggests possible developments for future pavement monitoring and analysis based on these approaches.
Real-time prediction of the rock mass class in front of the tunnel face is essential for the adaptive adjustment of tunnel boring machines (TBMs). During the TBM tunnelling process, a large number of operation data are generated, reflecting the interaction between the TBM system and the surrounding rock, and these data can be used to evaluate the rock mass quality. This study proposed a stacking ensemble classifier for the real-time prediction of the rock mass classification using TBM operation data. Based on the Songhua River water conveyance project, a total of 7538 TBM tunnelling cycles and the corresponding rock mass classes were obtained after data preprocessing. Then, through the tree-based feature selection method, 10 key TBM operation parameters were selected, and the mean values of the 10 selected features in the stable phase after removing outliers were calculated as the inputs of the classifiers. The preprocessed data were randomly divided into a training set (90%) and a test set (10%) using simple random sampling. Besides the stacking ensemble classifier, seven individual classifiers were established for comparison: support vector machine (SVM), k-nearest neighbors (KNN), random forest (RF), gradient boosting decision tree (GBDT), decision tree (DT), logistic regression (LR) and multilayer perceptron (MLP), where the hyper-parameters of each classifier were optimised using the grid search method. The prediction results show that the stacking ensemble classifier performs better than the individual classifiers and shows a more powerful learning and generalisation ability for small and imbalanced samples. Additionally, a relatively balanced training set was obtained by the synthetic minority oversampling technique (SMOTE), and the influence of sample imbalance on the prediction performance is discussed.
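The tree-based feature selection step described above can be sketched with scikit-learn: rank features by random-forest importance and keep the top 10. The data here is synthetic, standing in for the TBM operation parameters, and the exact selection procedure in the paper may differ.

```python
# Sketch of tree-based feature selection: keep the 10 parameters with the
# highest random-forest importance. Synthetic data replaces TBM records.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

X, y = make_classification(n_samples=500, n_features=20,
                           n_informative=10, random_state=2)

selector = SelectFromModel(
    RandomForestClassifier(n_estimators=100, random_state=2),
    max_features=10,     # keep the 10 highest-importance parameters
    threshold=-np.inf,   # rank purely by importance, ignore the mean cutoff
)
X_sel = selector.fit_transform(X, y)  # reduced feature matrix, 10 columns
```

`selector.get_support()` then reveals which of the original columns survived, which in the study's setting would identify the key TBM operation parameters.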
Urban traffic congestion has been a severe and widely studied problem over the past decade because of its negative impacts. However, in recent years some approaches have emerged as suitable solutions. The carpooling initiative is one of the most representative efforts to promote responsible use of private vehicles. Thus, this paper introduces a carpooling model that considers users' preferences to reach an appropriate match between drivers and passengers. In particular, the paper conducts a study of six of the most widely used classification techniques in machine learning to create a model for the selection of travel companions. The experimental results show the models' precision and assess the best cases using Friedman's test. Finally, the conclusions emphasize the relevance of the proposed study and suggest that it is necessary to extend the proposal with more drivers' and passengers' data.
Deficiencies of the performance-based iterative learning control (ILC) for non-regular systems are investigated in detail; then a faster control input updating and lifting technique is introduced in the design of performance-index-based ILCs for partial non-regular systems. Two kinds of optimal ILCs based on different performance indices are considered. Finally, simulation examples are given to illustrate the feasibility of the proposed learning controls.
The field of biomedical imaging has been revolutionized by deep learning techniques. This special issue is focused on the theme of "AI-based Image Analysis". Because there are so many conferences and journals in this field, our special issue can only be a small snapshot of a much bigger and highly dynamic picture. In this special issue, we present six papers that highlight the power of deep learning in solving challenging biomedical imaging and image analysis problems.
Energy is essential to practically all activities and is imperative for improving quality of life. Valuable energy has therefore been in great demand for many years, especially in smart homes and buildings, as individuals rapidly improve their lifestyles with current innovations. However, there is a shortage of energy, as the energy required is higher than that produced. Many new plans are being designed to meet consumers' energy requirements. In many regions, energy utilization in the housing sector is 30%–40%. The growth of smart homes has raised the requirement for intelligence in applications such as asset management, energy-efficient automation, security, and healthcare monitoring to learn about residents' actions and forecast their future demands. To overcome the challenges of energy consumption optimization, in this study we apply an energy management technique. Data fusion, in which numerous types of information are processed jointly, has recently attracted much attention for energy efficiency in buildings. The proposed research developed a data fusion model to predict energy consumption, evaluated in terms of accuracy and miss rate. The results of the proposed approach are compared with those of previously published techniques; the prediction accuracy of the proposed method is 92%, which is higher than that of the previously published approaches.
The biggest problem facing the world in the digital era is information security. Information protection and integrity are perennially hot topics, so many techniques have been introduced to transmit and store data securely. The increase in computing power is raising the number of security breaches and attacks at a higher rate than before on average. Thus, a number of existing security systems are at risk of hacking. This paper proposes an encryption technique called the Partial Deep-Learning Encryption Technique (PD-LET) to achieve data security. PD-LET includes several stages for encoding and decoding digital data. Data preprocessing, the convolution layer of a standard deep learning algorithm, zigzag transformation, image partitioning, and the encryption key are the main stages of PD-LET. Initially, the proposed technique converts digital data into the corresponding matrix and then applies the encryption stages to it. The implementation of the encryption stages is frequently changed. This collaboration between deep learning and zigzag transformation techniques provides the best output and transfers the original data into a completely undefined image, which makes the proposed technique efficient and secure via data encryption. Moreover, its implementation phases are continuously changed during the encryption phase, which makes the data encryption technique more immune to some future attacks, because breaking this technique requires knowing all the information about the encryption technique. The security analysis of the obtained results shows that it is computationally impractical to break the proposed technique due to the large size and diversity of keys, and that PD-LET has achieved a reliable security system.
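The zigzag transformation stage can be illustrated with the classic zigzag scan that linearizes a 2-D block along anti-diagonals, alternating direction (the same traversal used in JPEG coefficient ordering). This is a generic sketch; PD-LET's exact traversal is not specified in the abstract.

```python
# Classic zigzag scan of an n x m matrix: walk the anti-diagonals, reversing
# direction on every other diagonal, producing a 1-D sequence. A generic
# illustration of the "zigzag transformation" stage, not PD-LET's exact scheme.
def zigzag(matrix):
    n, m = len(matrix), len(matrix[0])
    out = []
    for s in range(n + m - 1):                      # one pass per anti-diagonal
        diag = [matrix[i][s - i] for i in range(n) if 0 <= s - i < m]
        # even diagonals run bottom-left -> top-right, odd ones the reverse
        out.extend(reversed(diag) if s % 2 == 0 else diag)
    return out

# 3x3 example: [[1,2,3],[4,5,6],[7,8,9]] -> [1, 2, 4, 7, 5, 3, 6, 8, 9]
scanned = zigzag([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
```

Because the scan is a fixed permutation of positions, it is invertible, which is what allows the decoding stages to recover the original matrix.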
The immediate international spread of severe acute respiratory syndrome revealed the potential threat of infectious diseases in a closely integrated and interdependent world. When an outbreak occurs, each country must have a well-coordinated and preventative plan to address the situation. Information and Communication Technologies have provided innovative approaches to dealing with numerous facets of daily living. Although intelligent devices and applications have become a vital part of our everyday lives, smart gadgets have also led to several physical and psychological health problems in modern society. Here, we used an artificial intelligence (AI)-based system for disease prediction using an Artificial Neural Network (ANN). The ANN improved the regularization of the classification model, hence increasing its accuracy. The unconstrained optimization model reduced the classifier's cost function to obtain the lowest possible cost. To verify the performance of the intelligent system, we compared the outcomes of the suggested scheme with the results of previously proposed models. The proposed intelligent system achieved an accuracy of 0.89 and a miss rate of 0.11, outperforming previously proposed models.
The integration of distributed generations (DGs) into distribution systems (DSs) is increasingly becoming a solution for compensating for isolated local energy systems (ILESs). Additionally, distributed generations are used for self-consumption, with excess energy injected into centralized grids (CGs). However, the improper sizing of renewable energy systems (RESs) exposes the entire system to power losses. This work presents an optimization of a system consisting of distributed generations. Firstly, PSO algorithms evaluate the size of the entire system on the IEEE 14-bus test standard. Secondly, the size of the system is allocated using improved Particle Swarm Optimization (IPSO). The convergence speed of the objective function enables a conjecture to be made about the robustness of the proposed system. The power and voltage profiles on the IEEE 14-bus standard display a decrease in power losses and an appropriate response to energy demands (EDs), validating the proposed method.
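A bare-bones particle swarm optimizer conveys the mechanism behind both the PSO and IPSO steps: particles move under inertia plus attraction to their personal best and the global best. The sphere function below is a stand-in objective; the paper's actual objective (power losses on the IEEE 14-bus system) and its IPSO improvements are beyond this sketch.

```python
# Minimal PSO: velocity = inertia + cognitive pull (personal best) +
# social pull (global best). Shown minimizing a 2-D sphere function as a
# placeholder for the paper's loss-minimization objective.
import numpy as np

def pso(objective, dim=2, n_particles=30, iters=200, seed=3):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))      # positions
    v = np.zeros_like(x)                            # velocities
    pbest = x.copy()                                # personal bests
    pbest_val = np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()        # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        vals = np.apply_along_axis(objective, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, float(pbest_val.min())

best, best_val = pso(lambda p: float(np.sum(p ** 2)))
```

IPSO variants typically modify the inertia weight or the update rule; in a sizing problem, each particle would instead encode candidate DG capacities and placements.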
The Indian Himalayan region is frequently experiencing climate change-induced landslides. Thus, landslide susceptibility assessment assumes greater significance for lessening the impact of a landslide hazard. This paper makes an attempt to assess landslide susceptibility in the Shimla district of the northwest Indian Himalayan region. It examined the effectiveness of random forest (RF), multilayer perceptron (MLP), sequential minimal optimization regression (SMOreg) and bagging ensemble (B-RF, B-SMOreg, B-MLP) models. A landslide inventory map comprising 1052 locations of past landslide occurrences was classified into training (70%) and testing (30%) datasets. The site-specific influencing factors were selected by employing a multicollinearity test. The relationship between past landslide occurrences and influencing factors was established using the frequency ratio method. The effectiveness of the machine learning models was verified through performance assessors. The landslide susceptibility maps were validated by the area under the receiver operating characteristic curve (ROC-AUC), accuracy, precision, recall and F1-score. The key performance metrics and map validation demonstrated that the B-RF model (correlation coefficient: 0.988, mean absolute error: 0.010, root mean square error: 0.058, relative absolute error: 2.964, ROC-AUC: 0.947, accuracy: 0.778, precision: 0.819, recall: 0.917 and F1-score: 0.865) outperformed the single classifiers and other bagging ensemble models for landslide susceptibility. The results show that the largest area was found under the very high susceptibility zone (33.87%), followed by the low (27.30%), high (20.68%) and moderate (18.16%) susceptibility zones. The factors, namely average annual rainfall, slope, lithology, soil texture and earthquake magnitude, have been identified as the influencing factors for very high landslide susceptibility. Soil texture, lineament density and elevation have been attributed to high and moderate susceptibility. Thus, the study calls for devising suitable landslide mitigation measures in the study area. Structural measures, an immediate response system, community participation and coordination among stakeholders may help lessen the detrimental impact of landslides. The findings from this study could aid decision-makers in mitigating future catastrophes and devising suitable strategies in other geographical regions with similar geological characteristics.
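The frequency ratio method mentioned above computes, for each class of an influencing factor, the share of landslide cells falling in that class divided by the class's share of all cells; FR > 1 marks classes positively associated with landslides. The slope-class tallies below are hypothetical, purely to show the arithmetic.

```python
# Frequency ratio (FR) for one factor class:
#   FR = (landslide cells in class / all landslide cells)
#        / (cells in class / all cells)
def frequency_ratio(landslide_in_class, landslides_total,
                    cells_in_class, cells_total):
    return ((landslide_in_class / landslides_total)
            / (cells_in_class / cells_total))

# Hypothetical slope-class tallies: (landslide cells, total cells) per class
classes = {"0-15 deg": (10, 500), "15-30 deg": (60, 300), ">30 deg": (30, 200)}
landslides_total = sum(l for l, _ in classes.values())   # 100
cells_total = sum(c for _, c in classes.values())        # 1000

fr = {name: frequency_ratio(l, landslides_total, c, cells_total)
      for name, (l, c) in classes.items()}
# e.g. the 15-30 deg class here gets FR = (60/100) / (300/1000) = 2.0
```

Summing each factor's class FRs per map cell then yields the raw susceptibility index that the classifiers refine.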
The rapid evolution of wireless communication technologies has underscored the critical role of antennas in ensuring seamless connectivity. Antenna defects, ranging from manufacturing imperfections to environmental wear, pose significant challenges to the reliability and performance of communication systems. This review paper navigates the landscape of antenna defect detection, emphasizing the need for a nuanced understanding of various defect types and the associated challenges in visual detection. It serves as a valuable resource for researchers, engineers, and practitioners engaged in the design and maintenance of communication systems, and the insights presented here pave the way for enhanced reliability in antenna systems through targeted defect detection measures. In this study, a comprehensive literature analysis of computer vision algorithms employed in end-of-line visual inspection of antenna parts is presented. The PRISMA principles were followed throughout the review, and its goals are to provide a summary of recent research, identify relevant computer vision techniques, and evaluate how effective these techniques are at discovering defects during inspections. The review covers articles from scholarly journals as well as papers presented at conferences up until June 2023. Relevant search phrases were used, and papers were chosen based on whether they met certain inclusion and exclusion criteria. Several different computer vision approaches, such as feature extraction and defect classification, are broken down and analyzed, and their applicability and performance are discussed. The review highlights the significance of utilizing a wide variety of datasets and measurement criteria. The findings of this study add to the existing body of knowledge and point researchers in the direction of promising new areas of investigation, such as real-time inspection systems and multispectral imaging. On the whole, this review offers a complete study of computer vision approaches for quality control in antenna parts, providing helpful insights and drawing attention to areas that require additional exploration.
As the volume of healthcare and medical data increases from diverse sources, real-world scenarios involving data sharing and collaboration face certain challenges, including the risk of privacy leakage, difficulty in data fusion, low reliability of data storage, and low effectiveness of data sharing. To guarantee the service quality of data collaboration, this paper presents a privacy-preserving Healthcare and Medical Data Collaboration Service System combining Blockchain with Federated Learning, termed FL-HMChain. This system is composed of three layers: data extraction and storage, data management, and data application. Focusing on healthcare and medical data, a healthcare and medical blockchain is constructed to realize data storage, transfer, processing, and access with security, real-time capability, reliability, and integrity. An improved master node selection consensus mechanism is presented to detect and prevent dishonest behavior, ensuring the overall reliability and trustworthiness of the collaborative model training process. Furthermore, healthcare and medical data collaboration services in real-world scenarios are discussed and developed. To further validate the performance of FL-HMChain, a Convolutional Neural Network-based Federated Learning (FL-CNN-HMChain) model is investigated for medical image identification. This model achieves better performance than the baseline Convolutional Neural Network (CNN), with average improvements of 4.7% in Area Under Curve (AUC) and 7% in Accuracy (ACC). Furthermore, the probability of privacy leakage can be effectively reduced by the blockchain-based parameter transfer mechanism in federated learning between local and global models.
The flying foxes optimization (FFO) algorithm, a newly introduced metaheuristic algorithm, is inspired by the survival tactics of flying foxes in heat wave environments. FFO preferentially selects the best-performing individuals. This tendency causes newly generated solutions to remain closely tied to the candidate optimum in the search area. To address this issue, this paper introduces an opposition-based learning search mechanism for the FFO algorithm (IFFO). Firstly, niching techniques are introduced to improve the survival list method, which not only focuses on the adaptability of individuals but also considers the population's crowding degree to enhance the global search capability. Secondly, an opposition-based learning initialization strategy is used to perturb the initial population and elevate its quality. Finally, to verify the superiority of the improved search mechanism, IFFO, FFO and cutting-edge metaheuristic algorithms are compared and analyzed using a set of test functions. The results show that, compared with the other algorithms, IFFO is characterized by rapid convergence, precise results and robust stability.
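The opposition-based learning initialization mentioned above can be sketched as follows: for each random candidate x in [lo, hi], form its opposite lo + hi - x, then keep the best half of the combined pool. This is generic OBL, not the paper's exact IFFO procedure, and the sphere objective is a placeholder.

```python
# Opposition-based learning (OBL) initialization: generate n random candidates,
# mirror each one through the center of the search box, and retain the n
# fittest of the 2n points. Generic sketch; IFFO's details may differ.
import numpy as np

def obl_init(objective, n, dim, lo, hi, seed=4):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lo, hi, (n, dim))
    opposite = lo + hi - pop                       # opposition points
    pool = np.vstack([pop, opposite])              # 2n candidates
    fitness = np.apply_along_axis(objective, 1, pool)
    return pool[np.argsort(fitness)[:n]]           # best n survive

init = obl_init(lambda p: float(np.sum(p ** 2)),
                n=20, dim=3, lo=-10.0, hi=10.0)
```

Because each point and its opposite bracket the box center, the retained population tends to start closer to the optimum than a purely random one, which is the quality boost the abstract describes.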
In this study, an integrated approach for diagenetic facies classification, reservoir quality analysis and quantitative wireline log prediction of tight gas sandstones (TGSs) is introduced, utilizing a combination of fit-for-purpose complementary testing and machine learning techniques. The integrated approach is specialized for the middle Permian Shihezi Formation TGSs in the northeastern Ordos Basin, where operators often face significant drilling uncertainty and increased exploration risks due to low porosities and micro-Darcy range permeabilities. In this study, detrital compositions and diagenetic minerals and their pore type assemblages were analyzed using optical light microscopy, cathodoluminescence, standard scanning electron microscopy, and X-ray diffraction. Different types of diagenetic facies were delineated on this basis to capture the characteristic rock properties of the TGSs in the target formation. A combination of He porosity and permeability measurements, mercury intrusion capillary pressure and nuclear magnetic resonance data was used to analyze the mechanism of heterogeneous TGS reservoirs. We found that the type, size and proportion of pores varied considerably between diagenetic facies due to differences in the initial depositional attributes and subsequent diagenetic alterations; these differences affected the size, distribution and connectivity of the pore network and varied the reservoir quality. Five types of diagenetic facies were classified: (i) grain-coating facies, which have minimal ductile grains, chlorite coatings that inhibit quartz overgrowths, large intergranular pores that dominate the pore network, the best pore structure and the greatest reservoir quality; (ii) quartz-cemented facies, which exhibit strong quartz overgrowths and a decrease in intergranular porosity and pore size, resulting in the deterioration of the pore structure and reservoir quality; (iii) mixed-cemented facies, in which the cementation of various authigenic minerals increases the micropores, resulting in a poor pore structure and reservoir quality; (iv) carbonate-cemented facies and (v) tightly compacted facies, in which the intergranular pores are filled with carbonate cement and ductile grains; thus, the pore network mainly consists of micropores with small pore throat sizes, and the pore structure and reservoir quality are the worst. The grain-coating facies with the best reservoir properties are more likely to have high gas productivity and are the primary targets for exploration and development. The diagenetic facies were then translated into wireline log expressions (conventional and NMR logging). Finally, a wireline log quantitative prediction model of TGSs using convolutional neural network machine learning algorithms was established to successfully classify the different diagenetic facies.
Lung cancer continues to be a leading cause of cancer-related deaths worldwide, emphasizing the critical need for improved diagnostic techniques. Early detection of lung tumors significantly increases the chances of successful treatment and survival. However, current diagnostic methods often fail to detect tumors at an early stage or to accurately pinpoint their location within the lung tissue. Single-model deep learning technologies for lung cancer detection, while beneficial, cannot capture the full range of features present in medical imaging data, leading to incomplete or inaccurate detection. Furthermore, a single model may not be robust enough to handle the wide variability in medical images due to different imaging conditions, patient anatomy, and tumor characteristics. To overcome these disadvantages, dual-model or multi-model approaches can be employed. This research focuses on enhancing the detection of lung cancer by utilizing a combination of two learning models: a Convolutional Neural Network (CNN) for categorization and the You Only Look Once (YOLOv8) architecture for real-time identification and pinpointing of tumors. CNNs automatically learn to extract hierarchical features from raw image data, capturing patterns such as edges, textures, and complex structures that are crucial for identifying lung cancer. YOLOv8 incorporates multiscale feature extraction, enabling the detection of tumors of varying sizes and scales within a single image. This is particularly beneficial for identifying small or irregularly shaped tumors that may be challenging to detect. Furthermore, through the utilization of cutting-edge data augmentation methods, such as Deep Convolutional Generative Adversarial Networks (DCGAN), the suggested approach can handle the issue of limited data and boost the models' ability to learn from diverse and comprehensive datasets. The combined method not only improved accuracy and localization but also ensured efficient real-time processing, which is crucial for practical clinical applications. The CNN achieved an accuracy of 97.67% in classifying lung tissues into healthy and cancerous categories. The YOLOv8 model achieved an Intersection over Union (IoU) score of 0.85 for tumor localization, reflecting high precision in detecting and marking tumor boundaries within the images. Finally, the incorporation of synthetic images generated by DCGAN led to a 10% improvement in both the CNN classification accuracy and YOLOv8 detection performance.
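The IoU score reported above measures the overlap between a predicted and a ground-truth bounding box: intersection area divided by union area. The following is a standard axis-aligned implementation; box coordinates below are illustrative, not taken from the study.

```python
# Intersection over Union for axis-aligned boxes given as (x1, y1, x2, y2),
# with (x1, y1) the top-left and (x2, y2) the bottom-right corner.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])    # intersection top-left
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])    # intersection bottom-right
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # 0 if boxes do not overlap
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```

An IoU of 0.85, as reported for the YOLOv8 model, means the predicted tumor box and the annotated box share 85% of their combined area, well above the 0.5 threshold commonly used to count a detection as correct.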
Funding: This work was partly supported by the University of Malaya Impact Oriented Interdisciplinary Research Grant under Grant IIRG008(A,B,C)-19IISS.
文摘Organizations are adopting the Bring Your Own Device(BYOD)concept to enhance productivity and reduce expenses.However,this trend introduces security challenges,such as unauthorized access.Traditional access control systems,such as Attribute-Based Access Control(ABAC)and Role-Based Access Control(RBAC),are limited in their ability to enforce access decisions due to the variability and dynamism of attributes related to users and resources.This paper proposes a method for enforcing access decisions that is adaptable and dynamic,based on multilayer hybrid deep learning techniques,particularly the Tabular Deep Neural Network Tabular DNN method.This technique transforms all input attributes in an access request into a binary classification(allow or deny)using multiple layers,ensuring accurate and efficient access decision-making.The proposed solution was evaluated using the Kaggle Amazon access control policy dataset and demonstrated its effectiveness by achieving a 94%accuracy rate.Additionally,the proposed solution enhances the implementation of access decisions based on a variety of resource and user attributes while ensuring privacy through indirect communication with the Policy Administration Point(PAP).This solution significantly improves the flexibility of access control systems,making themmore dynamic and adaptable to the evolving needs ofmodern organizations.Furthermore,it offers a scalable approach to manage the complexities associated with the BYOD environment,providing a robust framework for secure and efficient access management.
Abstract: When customers use software, defects may occur that can be removed in updated versions. Hence, the present work elaborates a robust examination of cross-project software defect prediction through an innovative hybrid machine learning framework. The proposed technique combines an advanced deep neural network architecture with ensemble models such as Support Vector Machine (SVM), Random Forest (RF), and XGBoost. The study evaluates performance on multiple software projects, namely CM1, JM1, KC1, and PC1, using datasets from the PROMISE Software Engineering Repository. Three hybrid models are compared: Hybrid Model-1 (SVM, RandomForest, XGBoost, Neural Network), Hybrid Model-2 (GradientBoosting, DecisionTree, LogisticRegression, Neural Network), and Hybrid Model-3 (KNeighbors, GaussianNB, Support Vector Classification (SVC), Neural Network); Hybrid Model-3 surpasses the others in terms of recall, F1-score, accuracy, ROC AUC, and precision. The presented work offers valuable insights into the effectiveness of hybrid techniques for cross-project defect prediction, providing a comparative perspective on early defect identification and mitigation strategies.
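A hybrid ensemble of this kind is often combined by soft voting over the base models' predicted probabilities. The sketch below is a generic illustration under that assumption; the per-model probabilities and the equal weighting are hypothetical, not results from the paper.

```python
def soft_vote(prob_lists, weights=None):
    """Average per-model defect probabilities; label 1 (defective) if mean >= 0.5."""
    n_models = len(prob_lists)
    weights = weights or [1.0 / n_models] * n_models
    n_samples = len(prob_lists[0])
    fused = [sum(w * probs[i] for w, probs in zip(weights, prob_lists))
             for i in range(n_samples)]
    labels = [1 if p >= 0.5 else 0 for p in fused]
    return fused, labels

# Hypothetical defect probabilities from three base models for 4 modules.
svm_p = [0.90, 0.40, 0.20, 0.55]
rf_p  = [0.80, 0.35, 0.30, 0.60]
xgb_p = [0.85, 0.45, 0.10, 0.50]

fused, labels = soft_vote([svm_p, rf_p, xgb_p])
```

Unequal weights (e.g. favoring the strongest base model on a validation set) are a common refinement of this scheme.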
Abstract: This study delves into the applications, challenges, and future directions of deep learning techniques in the field of image recognition. Deep learning, particularly Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Generative Adversarial Networks (GANs), has become key to enhancing the precision and efficiency of image recognition. These models are capable of processing complex visual data, facilitating efficient feature extraction and image classification. However, acquiring and annotating high-quality, diverse datasets, addressing imbalances in datasets, and model training and optimization remain significant challenges in this domain. The paper proposes strategies for improving data augmentation, optimizing model architectures, and employing automated model optimization tools to address these challenges, while also emphasizing the importance of considering ethical issues in technological advancements. As technology continues to evolve, the application of deep learning in image recognition will further demonstrate its potent capability to solve complex problems, driving society towards more inclusive and diverse development.
Funding: Supported by the National Key R&D Program of China (2017YFF0205600), the International Research Cooperation Seed Fund of Beijing University of Technology (2018A08), the Science and Technology Project of Beijing Municipal Commission of Transport (2018-kjc-01-213), and the Construction of Service Capability of Scientific and Technological Innovation-Municipal Level of Fundamental Research Funds (Scientific Research Categories) of Beijing City (PXM2019_014204_500032).
Abstract: In modern transportation, pavement is one of the most important civil infrastructures for the movement of vehicles and pedestrians. Pavement service quality and service life are of great importance for civil engineers, as they directly affect the regular service for the users. Therefore, monitoring the health status of pavement before irreversible damage occurs is essential for timely maintenance, which in turn ensures public transportation safety. Many pavement damages can be detected and analyzed by monitoring the structure's dynamic responses and evaluating road surface conditions. Advanced technologies can be employed for the collection and analysis of such data, including various intrusive sensing techniques, image processing techniques, and machine learning methods. This review summarizes the state-of-the-art of these three technologies in pavement engineering in recent years and suggests possible developments for future pavement monitoring and analysis based on these approaches.
Funding: Funded by the National Natural Science Foundation of China (Grant No. 41941019) and the State Key Laboratory of Hydroscience and Engineering (Grant No. 2019-KY-03).
Abstract: Real-time prediction of the rock mass class in front of the tunnel face is essential for the adaptive adjustment of tunnel boring machines (TBMs). During the TBM tunnelling process, a large number of operation data are generated, reflecting the interaction between the TBM system and the surrounding rock, and these data can be used to evaluate the rock mass quality. This study proposed a stacking ensemble classifier for the real-time prediction of the rock mass classification using TBM operation data. Based on the Songhua River water conveyance project, a total of 7538 TBM tunnelling cycles and the corresponding rock mass classes were obtained after data preprocessing. Then, through the tree-based feature selection method, 10 key TBM operation parameters were selected, and the mean values of the 10 selected features in the stable phase, after removing outliers, were calculated as the inputs of the classifiers. The preprocessed data were randomly divided into a training set (90%) and a test set (10%) using simple random sampling. Besides the stacking ensemble classifier, seven individual classifiers were established for comparison: support vector machine (SVM), k-nearest neighbors (KNN), random forest (RF), gradient boosting decision tree (GBDT), decision tree (DT), logistic regression (LR) and multilayer perceptron (MLP), where the hyper-parameters of each classifier were optimised using the grid search method. The prediction results show that the stacking ensemble classifier performs better than the individual classifiers, showing a more powerful learning and generalisation ability for small and imbalanced samples. Additionally, a relatively balanced training set was obtained by the synthetic minority oversampling technique (SMOTE), and the influence of sample imbalance on the prediction performance is discussed.
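The SMOTE step mentioned above generates synthetic minority samples by interpolating between a minority point and one of its nearest neighbours. A minimal pure-Python sketch; the four 2-D minority points are invented stand-ins for the 10 TBM feature vectors.

```python
import random

def smote_like(minority, n_new, k=2, seed=42):
    """SMOTE-style oversampling: x_new = x + u * (x_nn - x), u ~ U(0, 1),
    where x_nn is a random one of x's k nearest minority neighbours."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest neighbours of x by squared Euclidean distance (excluding x).
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p))
        )[:k]
        nn = rng.choice(neighbours)
        u = rng.random()
        synthetic.append(tuple(a + u * (b - a) for a, b in zip(x, nn)))
    return synthetic

minority = [(1.0, 1.0), (1.2, 0.9), (0.8, 1.1), (1.1, 1.3)]
new_samples = smote_like(minority, n_new=3)
```

Because every synthetic point lies on a segment between two real minority points, the new samples stay inside the minority region rather than duplicating existing rows.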
Abstract: Urban traffic congestion has been a severe and widely studied problem over the past decade because of its negative impacts. However, in recent years some approaches have emerged as suitable solutions. The carpooling initiative is one of the most representative efforts to promote responsible use of private vehicles. Thus, the paper introduces a carpooling model that considers users' preferences to reach an appropriate match between drivers and passengers. In particular, the paper conducts a study of six of the most widely used classification techniques in machine learning to create a model for the selection of travel companions. The experimental results show the models' precision and assess the best cases using Friedman's test. Finally, the conclusions emphasize the relevance of the proposed study and suggest that it is necessary to extend the proposal with more drivers' and passengers' data.
Funding: Supported by the National Natural Science Foundation of China (No. 60774023) and the Hunan Provincial Natural Science Foundation (No. 06JJ50141).
Abstract: Deficiencies of performance-based iterative learning control (ILC) for non-regular systems are investigated in detail, and then a faster control input updating and lifting technique is introduced in the design of performance-index-based ILCs for partially non-regular systems. Two kinds of optimal ILCs based on different performance indices are considered. Finally, simulation examples are given to illustrate the feasibility of the proposed learning controls.
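For readers unfamiliar with ILC, the core update u_{k+1} = u_k + L·e_k can be demonstrated on a trivial static scalar plant y = g·u. This is a generic P-type ILC sketch, not the performance-index-based designs the paper studies; the gains and reference are illustrative.

```python
def p_type_ilc(plant_gain, y_ref, u0, learning_gain, iterations):
    """P-type ILC on a static scalar plant y = g*u:
    u_{k+1}(t) = u_k(t) + L * e_k(t), with e_k = y_ref - y_k.
    Converges when |1 - L*g| < 1; each trial shrinks the error by that factor."""
    u = list(u0)
    errors = []
    for _ in range(iterations):
        y = [plant_gain * ui for ui in u]          # run one trial
        e = [r - yi for r, yi in zip(y_ref, y)]    # tracking error
        errors.append(max(abs(ei) for ei in e))
        u = [ui + learning_gain * ei for ui, ei in zip(u, e)]  # learn for next trial
    return u, errors

# Track y_ref over 3 time steps; |1 - 0.4*2.0| = 0.2, so errors decay geometrically.
u_final, errs = p_type_ilc(plant_gain=2.0, y_ref=[1.0, 2.0, 3.0],
                           u0=[0.0, 0.0, 0.0], learning_gain=0.4, iterations=10)
```

After ten trials the learned input is essentially y_ref / g, without ever inverting the plant explicitly, which is the appeal of ILC for repetitive tasks.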
Abstract: The field of biomedical imaging has been revolutionized by deep learning techniques. This special issue is focused on the theme of “AI-based Image Analysis”. Because there are so many conferences and journals in this field, our special issue can only be a small snapshot of a much bigger and highly dynamic picture. In this special issue, we present six papers that highlight the power of deep learning in solving challenging biomedical imaging and image analysis problems.
Abstract: Energy is essential to practically all activities and is imperative for the improvement of quality of life. Valuable energy has therefore been in great demand for many years, especially for use in smart homes and buildings, as individuals rapidly improve their way of life using current innovations. However, there is a shortage of energy, as the energy required is higher than that produced. Many new plans are being designed to meet consumers' energy requirements. In many regions, energy utilization in the housing sector is 30%–40%. The growth of smart homes has raised the requirement for intelligence in applications such as asset management, energy-efficient automation, security, and healthcare monitoring to learn about residents' actions and forecast their future demands. To overcome the challenges of energy consumption optimization, in this study we apply an energy management technique. Data fusion has recently attracted much attention for energy efficiency in buildings, where numerous types of information are processed. The proposed research developed a data fusion model to predict energy consumption, evaluated in terms of accuracy and miss rate. The results of the proposed approach are compared with those of previously published techniques; the prediction accuracy of the proposed method is 92%, which is higher than that of the previously published approaches.
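The paper evaluates its model by accuracy and miss rate. Assuming the common convention that miss rate is the complement of accuracy (the abstract reports the two as a pair), the computation reduces to confusion-matrix arithmetic; the counts below are invented for illustration.

```python
def accuracy_and_miss_rate(tp, tn, fp, fn):
    """Accuracy = correct / total; miss rate taken as 1 - accuracy,
    an assumption made here since the abstract does not define it."""
    total = tp + tn + fp + fn
    acc = (tp + tn) / total
    return acc, 1.0 - acc

acc, miss = accuracy_and_miss_rate(tp=46, tn=46, fp=4, fn=4)
```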
Abstract: The biggest problem facing the world in the digital era is information security. Information protection and integrity are perennial hot topics, so many techniques have been introduced to transmit and store data securely. The increase in computing power is increasing the number of security breaches and attacks at a higher average rate than before. Thus, a number of existing security systems are at risk of hacking. This paper proposes an encryption technique called the Partial Deep-Learning Encryption Technique (PD-LET) to achieve data security. PD-LET includes several stages for encoding and decoding digital data. Data preprocessing, the convolution layer of a standard deep learning algorithm, zigzag transformation, image partitioning, and an encryption key are the main stages of PD-LET. Initially, the proposed technique converts digital data into the corresponding matrix and then applies the encryption stages to it. The order in which the encryption stages are applied is frequently changed. This collaboration between deep learning and zigzag transformation techniques provides the best output and transforms the original data into a completely undefined image, which makes the proposed technique efficient and secure for data encryption. Moreover, its implementation phases are continuously changed during the encryption phase, which makes the technique more immune to future attacks, because breaking it requires knowing all the information about the encryption technique. The security analysis of the obtained results shows that it is computationally impractical to break the proposed technique, due to the large size and diversity of keys, and that PD-LET achieves a reliable security system.
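The zigzag transformation named as one of PD-LET's stages is, in its classic form, the anti-diagonal scan used in JPEG. A sketch of that standard scan follows; PD-LET's exact traversal may differ, and the 3×3 matrix is just an example.

```python
def zigzag(matrix):
    """JPEG-style zigzag scan of an n x n matrix: walk the anti-diagonals
    i + j = s, reversing direction on even diagonals so the path snakes."""
    n = len(matrix)
    order = []
    for s in range(2 * n - 1):
        cells = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        if s % 2 == 0:
            cells.reverse()  # travel bottom-left -> top-right on even diagonals
        order.extend(matrix[i][j] for i, j in cells)
    return order

scan = zigzag([[1, 2, 3],
               [4, 5, 6],
               [7, 8, 9]])
# scan == [1, 2, 4, 7, 5, 3, 6, 8, 9]
```

As a scrambling stage the scan is trivially invertible, which is why PD-LET combines it with keyed stages rather than relying on it alone.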
Abstract: The immediate international spread of severe acute respiratory syndrome revealed the potential threat of infectious diseases in a closely integrated and interdependent world. When an outbreak occurs, each country must have a well-coordinated and preventative plan to address the situation. Information and Communication Technologies have provided innovative approaches to dealing with numerous facets of daily living. Although intelligent devices and applications have become a vital part of our everyday lives, smart gadgets have also led to several physical and psychological health problems in modern society. Here, we used an artificial intelligence (AI)-based system for disease prediction using an Artificial Neural Network (ANN). The ANN improved the regularization of the classification model, hence increasing its accuracy. The unconstrained optimization model reduced the classifier's cost function to obtain the lowest possible cost. To verify the performance of the intelligent system, we compared the outcomes of the suggested scheme with the results of previously proposed models. The proposed intelligent system achieved an accuracy of 0.89, which was higher than that of previously proposed models, with a miss rate of 0.11.
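Regularizing a classifier typically means adding a weight penalty to the cost function being minimized. The sketch below assumes the common L2 form; the paper's exact cost function is not specified in the abstract, and the labels, probabilities, and weights are invented.

```python
import math

def regularized_cost(y_true, y_prob, weights, lam):
    """Binary cross-entropy plus an L2 weight penalty (lam/2) * sum(w^2),
    the usual way regularization enters a classifier's cost function."""
    n = len(y_true)
    ce = -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
              for y, p in zip(y_true, y_prob)) / n
    return ce + 0.5 * lam * sum(w * w for w in weights)

base = regularized_cost([1, 0, 1], [0.9, 0.2, 0.8], weights=[0.5, -0.3], lam=0.0)
reg  = regularized_cost([1, 0, 1], [0.9, 0.2, 0.8], weights=[0.5, -0.3], lam=0.1)
```

The penalty term discourages large weights, which is what improves generalization (and hence test accuracy) at the price of a slightly higher training cost.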
Abstract: The integration of distributed generations (DGs) into distribution systems (DSs) is increasingly becoming a solution for compensating isolated local energy systems (ILESs). Additionally, distributed generations are used for self-consumption, with excess energy injected into centralized grids (CGs). However, the improper sizing of renewable energy systems (RESs) exposes the entire system to power losses. This work presents an optimization of a system consisting of distributed generations. Firstly, PSO algorithms evaluate the size of the entire system on the IEEE 14-bus test standard. Secondly, the size of the system is allocated using improved Particle Swarm Optimization (IPSO). The convergence speed of the objective function enables a conjecture to be made about the robustness of the proposed system. The power and voltage profile on the IEEE 14-bus standard displays a decrease in power losses and an appropriate response to energy demands (EDs), validating the proposed method.
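A bare-bones PSO loop of the kind used for sizing can be sketched in a few lines. The quadratic objective and the hypothetical 3.2 MW optimum below are placeholders for the real power-loss objective evaluated on the IEEE 14-bus system.

```python
import random

def pso_minimize(f, bounds, n_particles=10, iters=50, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Bare-bones particle swarm optimization for a 1-D objective:
    v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x); x <- clip(x + v)."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = list(xs)
    gbest = min(xs, key=f)
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vs[i] = (w * vs[i] + c1 * r1 * (pbest[i] - xs[i])
                     + c2 * r2 * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))  # keep inside bounds
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
            if f(xs[i]) < f(gbest):
                gbest = xs[i]
    return gbest

# Stand-in objective: squared deviation of a DG size from a hypothetical 3.2 MW optimum.
best = pso_minimize(lambda x: (x - 3.2) ** 2, bounds=(0.0, 10.0))
```

Variants such as the paper's IPSO typically modify the inertia weight schedule or the neighbourhood topology while keeping this same velocity update at the core.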
Abstract: The Indian Himalayan region is frequently experiencing climate change-induced landslides. Thus, landslide susceptibility assessment assumes greater significance for lessening the impact of a landslide hazard. This paper makes an attempt to assess landslide susceptibility in the Shimla district of the northwest Indian Himalayan region. It examined the effectiveness of random forest (RF), multilayer perceptron (MLP), sequential minimal optimization regression (SMOreg) and bagging ensemble (B-RF, B-SMOreg, B-MLP) models. A landslide inventory map comprising 1052 locations of past landslide occurrences was classified into training (70%) and testing (30%) datasets. The site-specific influencing factors were selected by employing a multicollinearity test. The relationship between past landslide occurrences and influencing factors was established using the frequency ratio method. The effectiveness of the machine learning models was verified through performance assessors. The landslide susceptibility maps were validated by the area under the receiver operating characteristic curve (ROC-AUC), accuracy, precision, recall and F1-score. The key performance metrics and map validation demonstrated that the B-RF model (correlation coefficient: 0.988, mean absolute error: 0.010, root mean square error: 0.058, relative absolute error: 2.964, ROC-AUC: 0.947, accuracy: 0.778, precision: 0.819, recall: 0.917 and F1-score: 0.865) outperformed the single classifiers and other bagging ensemble models for landslide susceptibility. The results show that the largest area was found under the very high susceptibility zone (33.87%), followed by the low (27.30%), high (20.68%) and moderate (18.16%) susceptibility zones. The factors, namely average annual rainfall, slope, lithology, soil texture and earthquake magnitude, have been identified as the influencing factors for very high landslide susceptibility. Soil texture, lineament density and elevation have been attributed to high and moderate susceptibility. Thus, the study calls for devising suitable landslide mitigation measures in the study area. Structural measures, an immediate response system, community participation and coordination among stakeholders may help lessen the detrimental impact of landslides. The findings from this study could aid decision-makers in mitigating future catastrophes and devising suitable strategies in other geographical regions with similar geological characteristics.
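The precision, recall and F1-score used to validate the susceptibility maps reduce to simple confusion-matrix arithmetic. A sketch with invented labels (1 = landslide cell, 0 = stable cell); the values are not the paper's.

```python
def classification_metrics(y_true, y_pred):
    """Precision, recall and F1-score for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical ground truth vs. model predictions over six map cells.
p, r, f1 = classification_metrics([1, 1, 1, 0, 0, 1], [1, 1, 0, 0, 1, 1])
```

F1 is the harmonic mean of precision and recall, so a model cannot score well by inflating only one of the two, which is why it is reported alongside accuracy for imbalanced susceptibility data.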
Abstract: The rapid evolution of wireless communication technologies has underscored the critical role of antennas in ensuring seamless connectivity. Antenna defects, ranging from manufacturing imperfections to environmental wear, pose significant challenges to the reliability and performance of communication systems. This review paper navigates the landscape of antenna defect detection, emphasizing the need for a nuanced understanding of various defect types and the associated challenges in visual detection. It serves as a valuable resource for researchers, engineers, and practitioners engaged in the design and maintenance of communication systems, and the insights presented here pave the way for enhanced reliability in antenna systems through targeted defect detection measures. In this study, a comprehensive literature analysis of computer vision algorithms employed in end-of-line visual inspection of antenna parts is presented. The PRISMA principles are followed throughout the review, and its goals are to provide a summary of recent research, identify relevant computer vision techniques, and evaluate how effective these techniques are at discovering defects during inspections. It covers articles from scholarly journals as well as papers presented at conferences up until June 2023. This research utilized relevant search phrases, and papers were chosen based on whether or not they met certain inclusion and exclusion criteria. In this study, several different computer vision approaches, such as feature extraction and defect classification, are broken down and analyzed, and their applicability and performance are discussed. The review highlights the significance of utilizing a wide variety of datasets and measurement criteria. The findings of this study add to the existing body of knowledge and point researchers in the direction of promising new areas of investigation, such as real-time inspection systems and multispectral imaging. This review, on the whole, offers a complete study of computer vision approaches for quality control in antenna parts, providing helpful insights and drawing attention to areas that require additional exploration.
Funding: We are thankful for the funding support from the Science and Technology Projects of the National Archives Administration of China (Grant Number 2022-R-031) and the Fundamental Research Funds for the Central Universities, Central China Normal University (Grant Number CCNU24CG014).
Abstract: As the volume of healthcare and medical data increases from diverse sources, real-world scenarios involving data sharing and collaboration face certain challenges, including the risk of privacy leakage, difficulty in data fusion, low reliability of data storage, and low effectiveness of data sharing. To guarantee the service quality of data collaboration, this paper presents a privacy-preserving Healthcare and Medical Data Collaboration Service System combining blockchain with federated learning, termed FL-HMChain. This system is composed of three layers: data extraction and storage, data management, and data application. Focusing on healthcare and medical data, a healthcare and medical blockchain is constructed to realize data storage, transfer, processing, and access with security, real-time capability, reliability, and integrity. An improved master-node selection consensus mechanism is presented to detect and prevent dishonest behavior, ensuring the overall reliability and trustworthiness of the collaborative model training process. Furthermore, healthcare and medical data collaboration services in real-world scenarios have been discussed and developed. To further validate the performance of FL-HMChain, a Convolutional Neural Network-based Federated Learning (FL-CNN-HMChain) model is investigated for medical image identification. This model achieves better performance than the baseline Convolutional Neural Network (CNN), with average improvements of 4.7% in Area Under the Curve (AUC) and 7% in Accuracy (ACC), respectively. Furthermore, the probability of privacy leakage can be effectively reduced by the blockchain-based parameter transfer mechanism in federated learning between local and global models.
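Federated learning systems of this kind typically aggregate client models with FedAvg, exchanging parameters rather than raw records. The sketch below shows that aggregation rule under the assumption that FL-HMChain uses FedAvg-style averaging; the client weights and sample counts are hypothetical.

```python
def fed_avg(client_weights, client_sizes):
    """FedAvg: global weight_j = sum_k (n_k / n) * w_kj, i.e. local model
    parameters averaged in proportion to each client's sample count."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [sum(n * w[j] for n, w in zip(client_sizes, client_weights)) / total
            for j in range(n_params)]

# Two hypothetical hospitals share only model parameters, never patient records.
global_w = fed_avg(client_weights=[[0.2, 1.0], [0.6, 3.0]],
                   client_sizes=[100, 300])
```

Only these aggregated parameters would be written to the blockchain layer, which is how the parameter-transfer mechanism limits privacy leakage.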
Funding: Supported by the Ningxia Natural Science Foundation Project (2023AAC03361).
Abstract: The flying foxes optimization (FFO) algorithm, a newly introduced metaheuristic, is inspired by the survival tactics of flying foxes in heat-wave environments. FFO preferentially selects the best-performing individuals. This tendency causes newly generated solutions to remain closely tied to the current candidate optimum in the search area. To address this issue, this paper introduces an opposition-based-learning search mechanism for the FFO algorithm (IFFO). Firstly, niching techniques are introduced to improve the survival-list method, which not only focuses on the adaptability of individuals but also considers the population's crowding degree to enhance the global search capability. Secondly, an opposition-based learning initialization strategy is used to perturb the initial population and elevate its quality. Finally, to verify the superiority of the improved search mechanism, IFFO, FFO and cutting-edge metaheuristic algorithms are compared and analyzed using a set of test functions. The results prove that, compared with other algorithms, IFFO is characterized by rapid convergence, precise results and robust stability.
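The opposition-based learning initialization mentioned above pairs each random individual x with its opposite point lo + hi − x and keeps the better of the two. A sketch on a 1-D test function; the objective is illustrative, not one of the paper's benchmarks.

```python
import random

def obl_init(f, n, lo, hi, seed=7):
    """Opposition-based initialization: for each random point x, also evaluate
    its opposite x_opp = lo + hi - x and keep whichever has the lower cost."""
    rng = random.Random(seed)
    population = []
    for _ in range(n):
        x = rng.uniform(lo, hi)
        x_opp = lo + hi - x
        population.append(min(x, x_opp, key=f))
    return population

# Minimization target with optimum at x = 4 on [0, 10].
pop = obl_init(lambda x: (x - 4.0) ** 2, n=5, lo=0.0, hi=10.0)
```

Because each kept individual beats its own opposite, the initial population starts from the better half of each sampled pair, which is the quality boost the strategy provides.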
Funding: Financially supported by the National Natural Science Foundation of China (No. 42272156) and the research project on efficient exploration and development technology for tight sandstone gas of China United Coalbed Methane Corporation (No. ZZGSECCYWG 2021-322).
Abstract: In this study, an integrated approach for diagenetic facies classification, reservoir quality analysis and quantitative wireline log prediction of tight gas sandstones (TGSs) is introduced, utilizing a combination of fit-for-purpose complementary testing and machine learning techniques. The integrated approach is specialized for the middle Permian Shihezi Formation TGSs in the northeastern Ordos Basin, where operators often face significant drilling uncertainty and increased exploration risks due to low porosities and micro-Darcy-range permeabilities. In this study, detrital compositions, diagenetic minerals and their pore type assemblages were analyzed using optical light microscopy, cathodoluminescence, standard scanning electron microscopy, and X-ray diffraction. Different types of diagenetic facies were delineated on this basis to capture the characteristic rock properties of the TGSs in the target formation. A combination of He porosity and permeability measurements, mercury intrusion capillary pressure and nuclear magnetic resonance data was used to analyze the mechanism of heterogeneous TGS reservoirs. We found that the type, size and proportion of pores varied considerably between diagenetic facies due to differences in the initial depositional attributes and subsequent diagenetic alterations; these differences affected the size, distribution and connectivity of the pore network and varied the reservoir quality. Five types of diagenetic facies were classified: (i) grain-coating facies, which have minimal ductile grains, chlorite coatings that inhibit quartz overgrowths, large intergranular pores that dominate the pore network, the best pore structure and the greatest reservoir quality; (ii) quartz-cemented facies, which exhibit strong quartz overgrowths and a decrease in intergranular porosity and pore size, resulting in the deterioration of the pore structure and reservoir quality; (iii) mixed-cemented facies, in which the cementation of various authigenic minerals increases the micropores, resulting in a poor pore structure and reservoir quality; (iv) carbonate-cemented facies and (v) tightly compacted facies, in which the intergranular pores are filled with carbonate cement and ductile grains; thus, the pore network mainly consists of micropores with small pore throat sizes, and the pore structure and reservoir quality are the worst. The grain-coating facies, with the best reservoir properties, are more likely to have high gas productivity and are the primary targets for exploration and development. The diagenetic facies were then translated into wireline log expressions (conventional and NMR logging). Finally, a wireline log quantitative prediction model of TGSs using convolutional neural network machine learning algorithms was established to successfully classify the different diagenetic facies.
Abstract: Lung cancer continues to be a leading cause of cancer-related deaths worldwide, emphasizing the critical need for improved diagnostic techniques. Early detection of lung tumors significantly increases the chances of successful treatment and survival. However, current diagnostic methods often fail to detect tumors at an early stage or to accurately pinpoint their location within the lung tissue. Single-model deep learning technologies for lung cancer detection, while beneficial, cannot capture the full range of features present in medical imaging data, leading to incomplete or inaccurate detection. Furthermore, they may not be robust enough to handle the wide variability in medical images due to different imaging conditions, patient anatomy, and tumor characteristics. To overcome these disadvantages, dual-model or multi-model approaches can be employed. This research focuses on enhancing the detection of lung cancer by utilizing a combination of two learning models: a Convolutional Neural Network (CNN) for categorization and the You Only Look Once (YOLOv8) architecture for real-time identification and pinpointing of tumors. CNNs automatically learn to extract hierarchical features from raw image data, capturing patterns such as edges, textures, and complex structures that are crucial for identifying lung cancer. YOLOv8 incorporates multiscale feature extraction, enabling the detection of tumors of varying sizes and scales within a single image. This is particularly beneficial for identifying small or irregularly shaped tumors that may be challenging to detect. Furthermore, through the utilization of cutting-edge data augmentation methods, such as Deep Convolutional Generative Adversarial Networks (DCGANs), the suggested approach can handle the issue of limited data and boost the models' ability to learn from diverse and comprehensive datasets. The combined method not only improved accuracy and localization but also ensured efficient real-time processing, which is crucial for practical clinical applications. The CNN achieved an accuracy of 97.67% in classifying lung tissues into healthy and cancerous categories. The YOLOv8 model achieved an Intersection over Union (IoU) score of 0.85 for tumor localization, reflecting high precision in detecting and marking tumor boundaries within the images. Finally, the incorporation of synthetic images generated by DCGAN led to a 10% improvement in both the CNN classification accuracy and YOLOv8 detection performance.
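The IoU score reported for YOLOv8 localization is the intersection area over the union area of the predicted and ground-truth boxes. A minimal sketch for axis-aligned boxes; the two boxes below are arbitrary examples.

```python
def iou(box_a, box_b):
    """Intersection over Union for axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # zero if boxes don't overlap
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

score = iou((0, 0, 2, 2), (1, 1, 3, 3))
# score == 1/7, about 0.143
```

An average IoU of 0.85, as reported, means predicted tumor boxes overlap ground truth far more tightly than this toy pair.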