Funding: This project was funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, under Grant No. (IFPIP-1127-611-1443); the authors therefore acknowledge with thanks DSR technical and financial support.
Abstract: In the rapidly evolving landscape of today's digital economy, Financial Technology (Fintech) emerges as a transformative force, propelled by the dynamic synergy between Artificial Intelligence (AI) and Algorithmic Trading. Our in-depth investigation delves into the intricacies of merging Multi-Agent Reinforcement Learning (MARL) and Explainable AI (XAI) within Fintech, aiming to refine Algorithmic Trading strategies. Through meticulous examination, we uncover the nuanced interactions of AI-driven agents as they collaborate and compete within the financial realm, employing sophisticated deep learning techniques to enhance the clarity and adaptability of trading decisions. These AI-infused Fintech platforms harness collective intelligence to unearth trends, mitigate risks, and provide tailored financial guidance, benefiting individuals and enterprises navigating the digital landscape. Our research holds the potential to revolutionize finance, opening doors to fresh avenues for investment and asset management in the digital age. Additionally, our statistical evaluation yields encouraging results, with metrics such as Accuracy = 0.85, Precision = 0.88, and F1 Score = 0.86, reaffirming the efficacy of our approach within Fintech and emphasizing its reliability and innovative prowess.
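A brief sketch of how the reported metrics fit together: since F1 is the harmonic mean of precision and recall, the precision and F1 values quoted above imply a recall value that is not stated in the abstract. The derivation below is ours, not the authors'.

```python
# Minimal sketch: how the reported Precision and F1 Score constrain Recall.
# The 0.88 / 0.86 figures come from the abstract; the implied recall is derived, not reported.
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

def recall_from_f1(precision: float, f1_score: float) -> float:
    """Invert F1 = 2PR/(P+R) for R given P and F1."""
    return f1_score * precision / (2 * precision - f1_score)

precision, f1_reported = 0.88, 0.86
recall = recall_from_f1(precision, f1_reported)
print(f"implied recall ~ {recall:.3f}")                  # ~ 0.841
print(f"check F1       ~ {f1(precision, recall):.3f}")   # ~ 0.860
```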
Funding: Supported by the National Natural Science Foundation of China (Grant No. 62073330); this work constituted a segment of a project associated with the School of Computer Science and Information Engineering at Harbin Normal University.
Abstract: This research focuses on improving the Harris Hawks Optimization algorithm (HHO) by tackling several of its shortcomings, including insufficient population diversity, an imbalance between exploration and exploitation, and a lack of thorough exploitation depth. To address these shortcomings, it proposes enhancements from three distinct perspectives: a population initialization technique grounded in opposition-based learning, a strategy for updating the escape energy factor to improve the equilibrium between exploitation and exploration, and a comprehensive exploitation approach that utilizes variable neighborhood search along with mutation operators. The effectiveness of the Improved Harris Hawks Optimization algorithm (IHHO) is assessed by comparing it to five leading algorithms across 23 benchmark test functions. Experimental findings indicate that the IHHO surpasses several contemporary algorithms in its problem-solving capabilities. Additionally, this paper introduces a feature selection method leveraging the IHHO algorithm (IHHO-FS) to address challenges such as low efficiency in feature selection and high computational costs (the time to find the optimal feature combination and the model response time) associated with high-dimensional datasets. Comparative analyses between IHHO-FS and six other advanced feature selection methods are conducted across eight datasets. The results demonstrate that IHHO-FS significantly reduces the computational costs of classification models by lowering data dimensionality, while also enhancing the efficiency of feature selection. Furthermore, IHHO-FS shows strong competitiveness relative to numerous algorithms.
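As a quick illustration of the first enhancement, the sketch below shows generic opposition-based population initialization for a metaheuristic such as HHO; the fitness function, bounds, and population size are placeholder assumptions, not the paper's setup.

```python
import numpy as np

# A minimal sketch (not the authors' code) of opposition-based population
# initialization: generate a random population, form its opposite
# (x_opp = lb + ub - x), and keep the fitter half of the union.
def obl_init(fitness, pop_size, dim, lb, ub, rng=np.random.default_rng(0)):
    pop = lb + rng.random((pop_size, dim)) * (ub - lb)   # random candidates
    opp = lb + ub - pop                                   # opposition-based candidates
    union = np.vstack([pop, opp])
    scores = np.apply_along_axis(fitness, 1, union)
    best = np.argsort(scores)[:pop_size]                  # keep the best half
    return union[best]

# Example: sphere function on [-10, 10]^5
init_pop = obl_init(lambda x: float(np.sum(x**2)), pop_size=20, dim=5,
                    lb=np.full(5, -10.0), ub=np.full(5, 10.0))
print(init_pop.shape)  # (20, 5)
```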
Funding: Project supported by the National Natural Science Foundation of China (Grant Nos. 62105004 and 52174141), the College Student Innovation and Entrepreneurship Fund Project (Grant No. 202210361053), the Anhui Mining Machinery and Electrical Equipment Coordination Innovation Center, Anhui University of Science & Technology (Grant No. KSJD202304), and the Anhui Province Digital Agricultural Engineering Technology Research Center Open Project (Grant No. AHSZNYGC-ZXKF021).
Abstract: A novel color image encryption scheme is developed to enhance the security of encryption without increasing the complexity. Firstly, the plain color image is decomposed into three grayscale plain images, which are converted into frequency domain coefficient matrices (FDCM) with a discrete cosine transform (DCT) operation. After that, a two-dimensional (2D) coupled chaotic system is developed and used to generate one group of embedded matrices and another group of encryption matrices, respectively. The embedded matrices are integrated with the FDCM to fulfill the frequency domain encryption, and then inverse DCT processing is implemented to recover the spatial domain signal. Finally, by applying the encryption matrices and the proposed diagonal scrambling algorithm, the final color ciphertext is obtained. The experimental results show that the proposed method can not only ensure efficient encryption but also handle images of various sizes. Besides, it performs better than other similar techniques in statistical feature analysis, such as key space, key sensitivity, resistance to differential attack, information entropy, and noise attack.
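To make the frequency-domain step concrete, here is a minimal sketch of DCT-domain embedding on one grayscale channel. The 2D coupled chaotic system is the paper's contribution and is stubbed here with a plain logistic map; the scaling factor and all values are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Minimal sketch of frequency-domain embedding (not the authors' full scheme):
# transform one channel with a 2D DCT, perturb the coefficients with a
# chaos-derived "embedded matrix", and invert the DCT back to the spatial domain.
def logistic_matrix(shape, x0=0.37, r=3.99):
    seq = np.empty(int(np.prod(shape)))
    x = x0
    for i in range(seq.size):
        x = r * x * (1.0 - x)          # logistic map iteration (stand-in chaos source)
        seq[i] = x
    return seq.reshape(shape)

channel = np.random.default_rng(1).integers(0, 256, (8, 8)).astype(float)
coeffs = dctn(channel, norm="ortho")               # to the frequency domain
embedded = logistic_matrix(channel.shape)          # chaos-derived embedded matrix
encrypted_coeffs = coeffs + 50.0 * embedded        # frequency-domain embedding
spatial = idctn(encrypted_coeffs, norm="ortho")    # back to the spatial domain
print(np.allclose(dctn(spatial, norm="ortho") - 50.0 * embedded, coeffs))  # True
```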
Funding: Funded by the National Science Council, Taiwan, under Grant No. NSC 111-2410-H-167-005-MY2.
Abstract: Reversible data hiding is a confidential communication technique that takes advantage of image file characteristics, allowing us to hide sensitive data in image files. In this paper, we propose a novel high-fidelity reversible data hiding scheme. Building on the advantage of the multi-predictor mechanism, we combine two effective prediction schemes to improve prediction accuracy. In addition, the multi-histogram technique is utilized to further improve the image quality of the stego image. Moreover, a model of the grouped knapsack problem is used to speed up the search for the suitable embedding bin in each sub-histogram. Experimental results show that the stego-image quality of our scheme outperforms that of state-of-the-art schemes in most cases.
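The sketch below illustrates the general prediction-error histogram-shifting idea that such schemes build on; it uses a deliberately simple left-neighbor predictor and a single embedding bin, whereas the paper combines two stronger predictors and multiple sub-histograms. Everything here is an assumed simplification.

```python
import numpy as np

# Minimal sketch (assumed, simplified) of prediction-error embedding: predict each
# pixel from its left neighbour, embed one bit into errors equal to the chosen bin,
# and shift larger errors so the process stays invertible when decoded left-to-right.
def embed_row(row, bits, bin_value=0):
    out, used = row.copy().astype(int), 0
    for i in range(1, len(row)):
        err = int(row[i]) - int(row[i - 1])        # simple left-neighbour predictor
        if err == bin_value and used < len(bits):
            out[i] += bits[used]                   # embed one bit in the chosen bin
            used += 1
        elif err > bin_value:
            out[i] += 1                            # shift to preserve reversibility
    return out, used

stego, n = embed_row(np.array([100, 100, 101, 101, 103]), bits=[1, 0])
print(stego, n)   # [100 101 102 101 104] with 2 bits embedded
```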
Funding: Princess Nourah bint Abdulrahman University Researchers Supporting Project No. PNURSP2024R343, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia; the Deanship of Scientific Research at Northern Border University, Arar, Kingdom of Saudi Arabia, funded this research work through Project No. NBU-FFR-2024-1092-02.
Abstract: Phishing attacks present a persistent and evolving threat in the cybersecurity landscape, necessitating the development of more sophisticated detection methods. Traditional machine learning approaches to phishing detection have relied heavily on feature engineering and have often fallen short in adapting to the dynamically changing patterns of phishing Uniform Resource Locators (URLs). To address these challenges, we introduce a framework that integrates the sequential data processing strengths of a Recurrent Neural Network (RNN) with the hyperparameter optimization prowess of the Whale Optimization Algorithm (WOA). Our model capitalizes on an extensive Kaggle dataset featuring over 11,000 URLs, each delineated by 30 attributes. The WOA's hyperparameter optimization enhances the RNN's performance, as evidenced by a meticulous validation process. The results, encapsulated in precision, recall, and F1-score metrics, surpass baseline models, achieving an overall accuracy of 92%. This study not only demonstrates the RNN's proficiency in learning complex patterns but also underscores the WOA's effectiveness in refining machine learning models for the critical task of phishing detection.
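For orientation, here is a compact sketch of the standard WOA update rules driving a two-dimensional hyperparameter search. The objective is a stand-in for a real "train the RNN, return validation error" routine, and the bounds, population size, and iteration count are assumed values rather than the paper's settings.

```python
import numpy as np

# Minimal Whale Optimization Algorithm sketch for tuning two RNN hyperparameters
# (log10 learning rate, hidden units). val_error is a hypothetical surrogate objective.
def val_error(x):
    lr_exp, hidden = x
    return (lr_exp + 3.0) ** 2 + 0.001 * (hidden - 64.0) ** 2

def woa(obj, lb, ub, n_whales=10, iters=50, b=1.0, seed=0):
    rng = np.random.default_rng(seed)
    X = lb + rng.random((n_whales, len(lb))) * (ub - lb)
    best = min(X, key=obj).copy()
    for t in range(iters):
        a = 2.0 - 2.0 * t / iters                          # linearly decreasing a
        for i in range(n_whales):
            r, p, l = rng.random(), rng.random(), rng.uniform(-1, 1)
            A, C = 2 * a * r - a, 2 * rng.random()
            if p < 0.5:
                ref = best if abs(A) < 1 else X[rng.integers(n_whales)]
                X[i] = ref - A * np.abs(C * ref - X[i])    # encircling / random search
            else:                                          # spiral update around the best
                X[i] = np.abs(best - X[i]) * np.exp(b * l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], lb, ub)
            if obj(X[i]) < obj(best):
                best = X[i].copy()
    return best

lb, ub = np.array([-5.0, 16.0]), np.array([-1.0, 256.0])
print(woa(val_error, lb, ub))   # should approach [-3.0, 64.0]
```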
Funding: Supported by the MOE (Ministry of Education of China) Project of Humanities and Social Sciences (23YJAZH169), the Hubei Provincial Department of Education Outstanding Youth Scientific Innovation Team Support Foundation (T2020017), and the Henan Foreign Experts Project No. HNGD2023027.
Abstract: Image classification and unsupervised image segmentation can be achieved using the Gaussian mixture model. Although the Gaussian mixture model enhances the flexibility of image segmentation, it does not reflect spatial information and is sensitive to the segmentation parameter. In this study, we first present an efficient algorithm that incorporates spatial information into the Gaussian mixture model (GMM) without parameter estimation. The proposed model highlights the residual region with considerable information and constructs color saliency. Second, we incorporate the content-based color saliency as spatial information in the Gaussian mixture model. The segmentation is performed by clustering each pixel into an appropriate component according to the expectation maximization and maximum criteria. Finally, a random color histogram assigns a unique color to each cluster and creates an attractive color by default for segmentation. The random color histogram serves as an effective tool for data visualization and is instrumental in the creation of generative art, facilitating both analytical and aesthetic objectives. For experiments, we used the Berkeley segmentation dataset BSDS-500 and the Microsoft Research Cambridge (MSRC) dataset. The proposed model shows notable advancements in unsupervised image segmentation, with probabilistic rand index (PRI) values reaching 0.80, BDE scores as low as 12.25 and 12.02, compactness variations at 0.59 and 0.7, and variation of information (VI) reduced to 2.0 and 1.49 for the BSDS-500 and MSRC datasets, respectively, outperforming current leading-edge methods and yielding more precise segmentations.
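The sketch below shows the baseline pipeline this work extends: per-pixel features clustered with a GMM via EM and maximum-posterior assignment, plus a random color palette per cluster. The color-saliency spatial term is the paper's contribution and is only stubbed as an extra feature; the image, component count, and saliency cue are placeholder assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Minimal sketch of GMM-based segmentation in the spirit described above.
rng = np.random.default_rng(0)
h, w = 64, 64
image = rng.random((h, w, 3))                      # stand-in RGB image in [0, 1]
saliency = image.mean(axis=2, keepdims=True)       # hypothetical spatial/saliency cue

features = np.concatenate([image, saliency], axis=2).reshape(-1, 4)
gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
labels = gmm.fit_predict(features).reshape(h, w)   # EM fit + maximum-posterior labels

# Random color histogram: assign a distinct random color to each cluster.
palette = rng.integers(0, 256, size=(4, 3))
segmentation = palette[labels]                     # (h, w, 3) colored segmentation
print(segmentation.shape)
```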
Abstract: The visions of Industry 4.0 and 5.0 have reinforced the industrial environment. They have also made artificial intelligence a major facilitator. Diagnosing machine faults has become a solid foundation for automatically recognizing machine failure, so that timely maintenance can ensure safe operations. Transfer learning is a promising solution that can enhance a machine fault diagnosis model by borrowing pre-trained knowledge from a source model and applying it to a target model, which typically involves two datasets. In response to the availability of multiple datasets, this paper proposes selective and adaptive incremental transfer learning (SA-ITL), which fuses three algorithms, namely, the hybrid selective algorithm, the transferability enhancement algorithm, and the incremental transfer learning algorithm. It is a selective algorithm that enables selecting and ordering appropriate datasets for transfer learning and selecting useful knowledge to avoid negative transfer. The algorithm also adaptively adjusts the portion of training data to balance the learning rate and training time. The proposed algorithm is evaluated and analyzed using ten benchmark datasets. Compared with other algorithms from existing works, SA-ITL improves the accuracy on all datasets. Ablation studies present the accuracy enhancements of SA-ITL, including the hybrid selective algorithm (1.22%-3.82%), the transferability enhancement algorithm (1.91%-4.15%), and the incremental transfer learning algorithm (0.605%-2.68%). These also show the benefits of enhancing the target model with heterogeneous image datasets that widen the range of domain selection between source and target domains.
Abstract: The management of early-stage hepatocellular carcinoma (HCC) presents significant challenges. While radiofrequency ablation (RFA) has shown safety and effectiveness in treating HCC, with lower mortality rates and shorter hospital stays, its high recurrence rate remains a significant impediment. Consequently, achieving improved survival solely through RFA is challenging, particularly in retrospective studies with inherent biases. Ultrasound is commonly used for guiding percutaneous RFA, but its low contrast can lead to missed tumors and the risk of HCC recurrence. To enhance the efficiency of ultrasound-guided percutaneous RFA, various techniques such as artificial ascites and contrast-enhanced ultrasound have been developed to facilitate complete tumor ablation. Minimally invasive surgery (MIS) offers advantages over open surgery and has gained traction in various surgical fields. Recent studies suggest that laparoscopic intraoperative RFA (IORFA) may be more effective than percutaneous RFA in terms of survival for HCC patients unsuitable for surgery, highlighting its significance. Therefore, combining MIS-IORFA with these enhanced percutaneous RFA techniques may hold greater significance for HCC treatment. This article reviews liver resection and RFA in HCC treatment, comparing their merits and proposing a trajectory involving their combination in future therapy.
Funding: Support from the National Science and Technology Council of Taiwan (Contract Nos. 111-2221 E-011081 and 111-2622-E-011019) and from the Intelligent Manufacturing Innovation Center (IMIC), National Taiwan University of Science and Technology (NTUST), Taipei, Taiwan, which is a Featured Areas Research Center in the Higher Education Sprout Project of the Ministry of Education (MOE), Taiwan (since 2023), is appreciated. We also thank the Wang Jhan Yang Charitable Trust Fund (Contract No. WJY 2020-HR-01) for its financial support.
Abstract: This study proposes a new real-time manufacturing process monitoring method to monitor and detect process shifts in manufacturing operations, since real-time production process monitoring is critical in today's smart manufacturing. The more robust the monitoring model, the more reliably a process can be kept under control. In the past, many researchers have developed real-time monitoring methods to detect process shifts early. However, these methods have limitations in detecting process shifts as quickly as possible and in handling various data volumes and varieties. In this paper, a robust monitoring model combining a Gated Recurrent Unit (GRU) and Random Forest (RF) with Real-Time Contrast (RTC), called GRU-RF-RTC, is proposed to detect process shifts rapidly. The effectiveness of the proposed GRU-RF-RTC model is first evaluated using multivariate normal and non-normal distribution datasets. Then, to prove the applicability of the proposed model in a real manufacturing setting, the model is evaluated using real-world normal and non-normal problems. The results demonstrate that the proposed GRU-RF-RTC outperforms other methods in detecting process shifts quickly, with the lowest average out-of-control run length (ARL1) in all synthetic and real-world problems under normal and non-normal cases. The experimental results on real-world problems highlight the significance of the proposed GRU-RF-RTC model in modern manufacturing process monitoring applications. The results reveal that the proposed method improves the shift detection capability by 42.14% in normal and 43.64% in gamma distribution problems.
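The following is a minimal sketch of the real-time contrast idea underlying the RF part of GRU-RF-RTC: treat monitoring as a classification problem between a reference window and the current window, and alarm when the classifier can separate them. The GRU feature extractor is omitted, and the window sizes, shift size, and 0.75 threshold are assumed values.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Minimal real-time contrast (RTC) sketch: label the in-control reference window 0
# and the most recent window 1, train a random forest, and signal a shift when the
# two windows become too separable (high out-of-bag accuracy).
rng = np.random.default_rng(0)

def rtc_statistic(reference, current):
    X = np.vstack([reference, current])
    y = np.r_[np.zeros(len(reference)), np.ones(len(current))]
    clf = RandomForestClassifier(n_estimators=100, oob_score=True, random_state=0)
    clf.fit(X, y)
    return clf.oob_score_                          # separability of the two windows

reference = rng.normal(0.0, 1.0, size=(50, 5))     # in-control reference window
in_control = rng.normal(0.0, 1.0, size=(50, 5))    # new window, no shift
shifted = rng.normal(1.5, 1.0, size=(50, 5))       # new window with a mean shift
print(rtc_statistic(reference, in_control))        # near 0.5: windows indistinguishable
print(rtc_statistic(reference, shifted), "> 0.75 -> signal a process shift")
```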
Abstract: Reliability, QoS, and energy consumption are three important concerns of cloud service providers. Most of the current research on reliable task deployment in cloud computing focuses on only one or two of the three concerns. However, these three factors have intrinsic trade-off relationships. Existing studies show that load concentration can reduce the number of servers and hence save energy. In this paper, we deal with the problem of reliable task deployment in data centers, with the goal of minimizing the number of servers used in cloud data centers under the constraint that the job execution deadline can be met upon a single server failure. We propose a QoS-constrained, Reliable and Energy-efficient task replica deployment (QSRE) algorithm for the problem by combining task replication and re-execution. For each task in a job that cannot finish executing by re-execution within the deadline, we initiate two replicas for the task: a main task and a task replica. Each main task runs on an individual server. The associated task replica is deployed on a backup server and completes part of the whole task load before the main task fails. Different from the main tasks, multiple task replicas can be allocated to the same backup server to reduce the energy consumption of cloud data centers by minimizing the number of servers required for running the task replicas. Specifically, QSRE assigns the task replicas with the longest and the shortest execution times to the backup servers in turn, such that the task replicas can meet the QoS-specified job execution deadline under main task failure. We conduct experiments through simulations. The experimental results show that QSRE can effectively reduce the number of servers used, while ensuring the reliability and QoS of job execution.
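The sketch below illustrates the longest-with-shortest pairing heuristic mentioned above in its simplest form (at most two replicas per backup server, a single deadline, no failure modeling); it is an assumed simplification of QSRE, not the algorithm itself.

```python
# Minimal sketch (assumed, simplified) of the pairing heuristic: sort task replicas
# by execution time and pack the longest with the shortest onto the same backup
# server, so no server carries two long replicas and the job deadline stays feasible.
def pair_replicas(exec_times, deadline):
    replicas = sorted(exec_times, reverse=True)     # longest first
    servers, lo, hi = [], len(replicas) - 1, 0
    while hi <= lo:
        load = [replicas[hi]]                       # take the longest remaining
        if hi < lo and replicas[hi] + replicas[lo] <= deadline:
            load.append(replicas[lo])               # pair with the shortest remaining
            lo -= 1
        hi += 1
        servers.append(load)
    return servers

print(pair_replicas([8, 3, 6, 2, 5, 1], deadline=10))
# [[8, 1], [6, 2], [5, 3]] -> 3 backup servers instead of 6
```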
Funding: Support of the National Key Research and Development Program of China (Nos. 2017YFE0300501, 2017YFE0300500), the Institute of Energy, Hefei Comprehensive National Science Center (Nos. 21KZS202, 19KZS205), the University Synergy Innovation Program of Anhui Province (Nos. GXXT-2021-014, GXXT-2021-029), the National Natural Science Foundation of China (No. 11905143), and the Fundamental Research Funds for the Central Universities of China (No. JZ2022HGTB0302); supported in part by the Users with Excellence Program of Hefei Science Center CAS (No. 2020HSC-UE008).
Abstract: Compact torus (CT) injection is a highly promising technique for the central fueling of future reactor-grade fusion devices, since it features extremely high injection velocity and relatively high plasma mass. Recently, a CT injector for the EAST tokamak, EAST-CTI, was developed and platform-tested. In the first round of experiments, conducted with low parameter settings, the maximum velocity and mass of the CT plasma were 150 km·s^(-1) and 90 μg, respectively. However, the parameters obtained by EAST-CTI are still very low and far from the requirements of a device such as EAST, which has a strong magnetic field. In the future, we plan to solve the spark problem that EAST-CTI currently encounters (which mainly hinders the further development of experiments) through engineering methods, and to use greater power to obtain a more stable and suitable CT plasma for EAST.
Funding: This work was supported in part by the Natural Science Foundation of China under Grants 62203461 and 62203365, in part by the Postdoctoral Science Foundation of China under Grant No. 2020M683736, in part by the Teaching Reform Project of Higher Education in Heilongjiang Province under Grant Nos. SJGY20210456 and SJGY20210457, in part by the Natural Science Foundation of Heilongjiang Province of China under Grant No. LH2021F038, in part by the Graduate Academic Innovation Project of Harbin Normal University under Grant Nos. HSDSSCX2022-17, HSDSSCX2022-18 and HSDSSCX2022-19, and in part by the Foreign Expert Project of Heilongjiang Province under Grant No. GZ20220131.
Abstract: Expert knowledge is the key to modeling milling fault detection systems based on the belief rule base (BRB). The construction of an initial expert knowledge base seriously affects the accuracy and interpretability of the milling fault detection model. However, due to the complexity of the milling system structure and the uncertainty of the milling failure index, it is often impossible to construct model expert knowledge effectively. Therefore, a milling system fault detection method based on fault tree analysis and a hierarchical BRB (FTBRB) is proposed. Firstly, the proposed method uses a fault tree and hierarchical BRB modeling. Through fault tree analysis (FTA), the logical correspondence between FTA and BRB is sorted out, which effectively embeds the FTA mechanism into the BRB expert knowledge base. The hierarchical BRB model is used to solve the problem of excessive indexes and avoid combinatorial explosion. Secondly, evidence reasoning (ER) is used to ensure the transparency of the model reasoning process. Thirdly, the projection covariance matrix adaptation evolution strategy (P-CMA-ES) is used to optimize the model. Finally, this paper verifies the validity of the model and the feasibility of the method on milling datasets.
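For readers unfamiliar with belief rule bases, the sketch below shows, in highly simplified form, how an input activates rules and produces a combined belief distribution over fault grades. It uses a plain weighted average rather than the full ER analytic algorithm, and all referential values, beliefs, and weights are made-up illustrations, not the paper's knowledge base.

```python
import numpy as np

# Minimal, simplified sketch of belief-rule activation in a BRB (not the paper's
# full ER analytic algorithm): an input activates rules via matching degrees, and
# the activated belief distributions are combined by a weighted average.
referential = np.array([0.0, 0.5, 1.0])            # referential values of one attribute
rule_beliefs = np.array([[0.9, 0.1, 0.0],          # belief over {normal, warning, fault}
                         [0.2, 0.6, 0.2],
                         [0.0, 0.2, 0.8]])
rule_weights = np.array([1.0, 1.0, 0.8])           # assumed rule weights

def matching_degrees(x, refs):
    """Triangular matching of input x to adjacent referential values."""
    m = np.zeros(len(refs))
    for k in range(len(refs) - 1):
        if refs[k] <= x <= refs[k + 1]:
            m[k] = (refs[k + 1] - x) / (refs[k + 1] - refs[k])
            m[k + 1] = 1.0 - m[k]
    return m

x = 0.7                                             # e.g. a normalized vibration index
w = rule_weights * matching_degrees(x, referential)
w /= w.sum()                                        # activation weights
print(w @ rule_beliefs)                             # combined belief distribution
```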
Funding: Supported and granted by the Ministry of Science and Technology, Taiwan (MOST110-2622-E-390-001 and MOST109-2622-E-390-002-CC3).
Abstract: Big data analytics in business intelligence does not provide effective data retrieval methods or job scheduling, which causes execution inefficiency and low system throughput. This paper aims to enhance the capability of data retrieval and job scheduling to speed up the operation of big data analytics and overcome inefficiency and low-throughput problems. First, integrating a stacked sparse autoencoder and Elasticsearch indexing enables fast data searching and distributed indexing, which reduces the search scope of the database and dramatically speeds up data searching. Next, exploiting a deep neural network to predict the approximate execution time of a job enables prioritized job scheduling based on shortest job first, which reduces the average waiting time of job execution. As a result, the proposed data retrieval approach outperforms the previous method using a deep autoencoder and Solr indexing, significantly improving the speed of data retrieval by up to 53% and increasing system throughput by 53%. On the other hand, the proposed job scheduling algorithm defeats both first-in-first-out and memory-sensitive heterogeneous early finish time scheduling algorithms, effectively shortening the average waiting time by up to 5% and the average weighted turnaround time by 19%, respectively.
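A quick sketch of why shortest-job-first scheduling cuts average waiting time: ordering jobs by predicted runtime front-loads the short jobs. The predicted times would come from the paper's neural-network estimator; the numbers below are hypothetical.

```python
# Minimal sketch: compare average waiting time under FIFO and shortest job first
# (SJF) for the same set of hypothetical predicted runtimes.
def average_waiting_time(exec_times):
    waiting, elapsed = 0.0, 0.0
    for t in exec_times:
        waiting += elapsed          # each job waits for everything scheduled before it
        elapsed += t
    return waiting / len(exec_times)

predicted = [12.0, 3.0, 7.0, 1.0]                            # hypothetical predicted runtimes
print("FIFO:", average_waiting_time(predicted))              # 12.25
print("SJF :", average_waiting_time(sorted(predicted)))      # 4.0
```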
Funding: This work is supported in part by the Postdoctoral Science Foundation of China under Grant No. 2020M683736, in part by the Teaching Reform Project of Higher Education in Heilongjiang Province under Grant No. SJGY20210456, and in part by the Natural Science Foundation of Heilongjiang Province of China under Grant No. LH2021F038.
Abstract: The prediction of processor performance has important reference significance for future processors. Both the accuracy and the rationality of the prediction results are required. The hierarchical belief rule base (HBRB) can initially provide a solution to low prediction accuracy. However, the interpretability of the model and the traceability of the results still warrant further investigation. Therefore, a processor performance prediction method based on an interpretable hierarchical belief rule base (HBRB-I) and global sensitivity analysis (GSA) is proposed. The method can yield more reliable prediction results. Evidence reasoning (ER) is first used to evaluate the historical data of the processor, and a performance prediction model with interpretability constraints is then constructed based on HBRB-I. Next, the whale optimization algorithm (WOA) is used to optimize the parameters. Furthermore, to test the interpretability of the performance prediction process, GSA is used to analyze the relationship between the input and the predicted output indicators. Finally, based on the processor dataset from the UCI database, the effectiveness and superiority of the method are verified. According to our experiments, our prediction method generates more reliable and accurate estimations than traditional models.
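To illustrate what a variance-based global sensitivity check looks like, the sketch below estimates first-order indices, Var_x(E[y|x]) / Var(y), by binning Monte Carlo samples. The "model" is a toy stand-in for a trained HBRB-I predictor, and the estimator is a rough approximation rather than the paper's GSA procedure.

```python
import numpy as np

# Minimal variance-based sensitivity sketch: for each input, estimate how much of
# the output variance is explained by the conditional mean of y given that input.
def first_order_indices(model, n=20000, dim=3, bins=20, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.random((n, dim))
    y = model(X)
    total_var = y.var()
    indices = []
    for j in range(dim):
        edges = np.quantile(X[:, j], np.linspace(0, 1, bins + 1))
        which = np.clip(np.digitize(X[:, j], edges) - 1, 0, bins - 1)
        cond_means = np.array([y[which == b].mean() for b in range(bins)])
        indices.append(cond_means.var() / total_var)   # approx. Var(E[y|x_j]) / Var(y)
    return indices

model = lambda X: 4.0 * X[:, 0] + 1.0 * X[:, 1] + 0.1 * X[:, 2]  # toy predictor
print(np.round(first_order_indices(model), 2))  # input 0 dominates the output variance
```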
Abstract: Voice classification is important in creating more intelligent systems that help with student exams, identifying criminals, and security systems. The main aim of this research is to develop a system able to predict and classify gender, age, and accent. A new system called Classifying Voice Gender, Age, and Accent (CVGAA) is therefore proposed. Backpropagation and bagging algorithms are designed to improve voice recognition systems that incorporate sensory voice features, such as rhythm-based features, used to train the system to distinguish between the two gender categories. It has high precision compared to other algorithms used for this problem: the adaptive backpropagation algorithm had an accuracy of 98% and the bagging algorithm an accuracy of 98.10% on the gender identification data. Bagging has the best accuracy among all algorithms, with 55.39% accuracy for age classification on the Common Voice dataset and 78.94% accent accuracy on the speech accent dataset.
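The sketch below shows the bagging idea applied to a backpropagation-style base learner (an MLP): several networks trained on bootstrap samples vote on the label. The synthetic features stand in for the rhythm-based voice features; the dataset, estimator count, and network size are assumptions, not CVGAA's actual configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Minimal sketch: bagging over MLP (backpropagation) base learners.
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

bagged_mlp = BaggingClassifier(
    estimator=MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
    n_estimators=10,            # ten bootstrap-trained networks, majority vote
    random_state=0,
)
bagged_mlp.fit(X_tr, y_tr)
print(f"bagged accuracy: {bagged_mlp.score(X_te, y_te):.3f}")
```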
Abstract: A significant obstacle in intelligent transportation systems (ITS) is the capacity to predict traffic flow. Recent advancements in deep neural networks have enabled the development of models to represent traffic flow accurately. However, accurately predicting traffic flow at the individual road level is extremely difficult due to the complex interplay of spatial and temporal factors. This paper proposes a technique for predicting short-term traffic flow data using an architecture that combines a convolutional bidirectional long short-term memory network (Conv-BiLSTM) with attention mechanisms. Prior studies neglected to include data on factors such as holidays, weather conditions, and vehicle types, which are interconnected and significantly impact the accuracy of forecast outcomes. In addition, this research incorporates recurring monthly periodic pattern data, which significantly enhances the accuracy of forecast outcomes. The experimental findings demonstrate a performance improvement of 21.68% when incorporating the vehicle type feature.
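For orientation, here is a compact sketch of what a Conv-BiLSTM-with-attention forecaster can look like: a 1D convolution over the time axis, a bidirectional LSTM, and a learned attention pooling before the regression head. The layer sizes, feature count, and head design are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

# Minimal sketch (assumed architecture) of a Conv-BiLSTM with additive attention
# for short-term traffic flow forecasting. Input: (batch, time steps, features),
# where features could include flow, weather, holiday and vehicle-type indicators.
class ConvBiLSTMAttention(nn.Module):
    def __init__(self, n_features, conv_channels=32, hidden=64):
        super().__init__()
        self.conv = nn.Conv1d(n_features, conv_channels, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(conv_channels, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)      # score each time step
        self.head = nn.Linear(2 * hidden, 1)      # predict next-step flow

    def forward(self, x):                         # x: (batch, T, n_features)
        h = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)  # (batch, T, C)
        out, _ = self.lstm(h)                     # (batch, T, 2*hidden)
        weights = torch.softmax(self.attn(out), dim=1)                # (batch, T, 1)
        context = (weights * out).sum(dim=1)      # attention-weighted summary
        return self.head(context).squeeze(-1)     # (batch,)

model = ConvBiLSTMAttention(n_features=8)
dummy = torch.randn(16, 12, 8)                    # 16 samples, 12 time steps, 8 features
print(model(dummy).shape)                         # torch.Size([16])
```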
Abstract: To enhance the efficiency and accuracy of environmental perception for autonomous vehicles, we propose GDMNet, a unified multi-task perception network for autonomous driving capable of performing drivable area segmentation, lane detection, and traffic object detection. Firstly, in the encoding stage, features are extracted, and the Generalized Efficient Layer Aggregation Network (GELAN) is utilized to enhance feature extraction and gradient flow. Secondly, in the decoding stage, specialized detection heads are designed: the drivable area segmentation head employs DySample to expand feature maps, and the lane detection head merges early-stage features and processes the output through the Focal Modulation Network (FMN). Lastly, the Minimum Point Distance IoU (MPDIoU) loss function is employed to compute the matching degree between traffic object detection boxes and predicted boxes, facilitating model training adjustments. Experimental results on the BDD100K dataset demonstrate that the proposed network achieves a drivable area segmentation mean intersection over union (mIoU) of 92.2%, lane detection accuracy and intersection over union (IoU) of 75.3% and 26.4%, respectively, and traffic object detection recall and mAP of 89.7% and 78.2%, respectively. The detection performance surpasses that of other single-task or multi-task algorithm models.
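The sketch below gives one commonly cited formulation of MPDIoU: standard IoU penalized by the normalized squared distances between the top-left and bottom-right corners of the predicted and ground-truth boxes. Treat it as an assumed reference formulation rather than GDMNet's exact implementation; the boxes and image size are illustrative.

```python
import torch

# Minimal MPDIoU loss sketch for boxes given as (x1, y1, x2, y2).
def mpdiou_loss(pred, target, img_w, img_h, eps=1e-7):
    inter_x1 = torch.max(pred[:, 0], target[:, 0])
    inter_y1 = torch.max(pred[:, 1], target[:, 1])
    inter_x2 = torch.min(pred[:, 2], target[:, 2])
    inter_y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (inter_x2 - inter_x1).clamp(min=0) * (inter_y2 - inter_y1).clamp(min=0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)
    diag2 = img_w ** 2 + img_h ** 2                     # squared image diagonal
    d1 = (pred[:, 0] - target[:, 0]) ** 2 + (pred[:, 1] - target[:, 1]) ** 2
    d2 = (pred[:, 2] - target[:, 2]) ** 2 + (pred[:, 3] - target[:, 3]) ** 2
    mpdiou = iou - d1 / diag2 - d2 / diag2
    return (1.0 - mpdiou).mean()

pred = torch.tensor([[10.0, 10.0, 50.0, 60.0]])
gt = torch.tensor([[12.0, 8.0, 48.0, 62.0]])
print(mpdiou_loss(pred, gt, img_w=1280, img_h=720))
```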
Funding: Supported by the National Natural Science Foundation of China (61972136), the Hubei Provincial Department of Education Outstanding Youth Scientific Innovation Team Support Foundation (T201410, T2020017), the Natural Science Foundation of Xiaogan City (XGKJ2022010095, XGKJ2022010094), and the Science and Technology Research Project of the Education Department of Hubei Province (No. Q20222704).
Abstract: Effective data communication is a crucial aspect of the Social Internet of Things (SIoT) and continues to be a significant research focus. This paper proposes a data forwarding algorithm based on Multidimensional Social Relations (MSRR) in the SIoT to solve this problem. The proposed algorithm separates message forwarding into intra- and cross-community forwarding by analyzing interest traits and social connections among nodes. Three new metrics are defined: the intensity of node social relationships, node activity, and community connectivity. Within a community, messages are sent by determining which node is most similar to the sender, weighing the strength of social connections and node activity. When a node performs cross-community forwarding, the message is forwarded to the most reasonable relay community by measuring node activity and the connection between communities. The proposed algorithm was compared to three existing routing algorithms in simulation experiments. The results indicate that the proposed algorithm substantially improves message delivery efficiency while lessening network overhead and enhancing connectivity and coordination in the SIoT context.
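As a toy illustration of the intra-community relay choice, the sketch below scores candidate neighbors by a weighted mix of social-relationship intensity and node activity and forwards to the highest-scoring one. The 0.6/0.4 weights and the metric values are hypothetical; the paper defines its own metric formulas.

```python
# Minimal sketch: weighted relay selection inside a community.
def choose_relay(candidates, alpha=0.6):
    def score(node):
        return alpha * node["social_strength"] + (1.0 - alpha) * node["activity"]
    return max(candidates, key=score)

neighbours = [
    {"id": "n1", "social_strength": 0.82, "activity": 0.30},
    {"id": "n2", "social_strength": 0.55, "activity": 0.90},
    {"id": "n3", "social_strength": 0.40, "activity": 0.35},
]
print(choose_relay(neighbours)["id"])   # "n2" with these weights
```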
Funding: Supported by the National Science and Technology Council, Taiwan, under NSTC 112-2221-E-024-004.
Abstract: Layout synthesis in quantum computing is crucial due to the physical constraints of quantum devices, where quantum bits (qubits) can only interact effectively with their nearest neighbors. This constraint severely impacts the design and efficiency of quantum algorithms, as arranging qubits optimally can significantly reduce circuit depth and improve computational performance. To tackle the layout synthesis challenge, we propose an algorithm based on integer linear programming (ILP). ILP is well suited to this problem, as it can formulate the optimization objective of minimizing circuit depth while adhering to the nearest-neighbor interaction constraint. The algorithm aims to generate layouts that maximize qubit connectivity within the given physical constraints of the quantum device. For experimental validation, we outline a clear and feasible setup using real quantum devices. This includes specifying the type and configuration of the quantum hardware used, such as the number of qubits, connectivity constraints, and any technological limitations. The proposed algorithm is implemented on these devices to demonstrate its effectiveness in producing depth-optimal quantum circuit layouts. By integrating these elements, our research aims to provide practical solutions that enhance the efficiency and scalability of quantum computing systems, paving the way for advancements in quantum algorithm design and implementation.
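To ground the ILP framing, here is a much-simplified sketch: map each logical qubit to one physical qubit so that the summed coupling-graph distance over two-qubit gates is minimized, with variable products linearized in the standard way. The device topology, gate list, and objective are toy assumptions; a real layout synthesizer, like the one described above, also models SWAP insertion and circuit depth.

```python
import itertools
import pulp

# Minimal ILP sketch (not the paper's full formulation) for initial qubit placement.
logical = [0, 1, 2, 3]
physical = [0, 1, 2, 3]
gates = [(0, 1), (1, 2), (2, 3), (0, 3)]                          # two-qubit gates
dist = {(p, q): abs(p - q) for p in physical for q in physical}   # line-topology distances

prob = pulp.LpProblem("qubit_placement", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (logical, physical), cat="Binary")
y = pulp.LpVariable.dicts("y", (range(len(gates)), physical, physical), cat="Binary")

for i in logical:                                    # each logical qubit placed exactly once
    prob += pulp.lpSum(x[i][p] for p in physical) == 1
for p in physical:                                   # each physical qubit used at most once
    prob += pulp.lpSum(x[i][p] for i in logical) <= 1

objective = []
for g, (i, j) in enumerate(gates):
    for p, q in itertools.product(physical, physical):
        prob += y[g][p][q] <= x[i][p]                # y = x[i][p] AND x[j][q] (linearized)
        prob += y[g][p][q] <= x[j][q]
        prob += y[g][p][q] >= x[i][p] + x[j][q] - 1
        objective.append(dist[(p, q)] * y[g][p][q])
prob += pulp.lpSum(objective)

prob.solve(pulp.PULP_CBC_CMD(msg=0))
placement = {i: p for i in logical for p in physical if x[i][p].value() == 1}
print(placement, "total distance:", pulp.value(prob.objective))
```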