To address the difficulty of mining entity-relation semantics and the bias in predicted relations in relation extraction (RE) tasks, an RE method based on Mask prompt and Gated Memory Network Calibration (MGMNC) is proposed. First, the masks in the prompt are used to learn the latent semantics between entities in the semantic space of a pre-trained language model (PLM), and the discrete mask semantic spaces are linked to one another by constructing a mask attention weight matrix. Second, a gated calibration network fuses the mask representations, which carry entity and relation semantics, into the global semantics of the sentence. These are then used as relation prompts to calibrate the relation information, after which the final sentence representation is mapped to the corresponding relation category. Finally, by making better use of the masks in the prompt and combining the strength of traditional fine-tuning in learning global sentence semantics, the potential of the PLM is fully exploited. Experimental results show that the proposed method achieves an F1 score of 91.4% on the SemEval (SemEval-2010 Task 8) dataset, 1.0 percentage point higher than the generative method RELA (Relation Extraction with Label Augmentation), and F1 scores of 91.0% and 82.8% on the SciERC (Entities, Relations, and Coreference for Scientific knowledge graph construction) and CLTC (Chinese Literature Text Corpus) datasets, respectively. The proposed method clearly outperforms the compared methods on all three datasets, verifying its effectiveness and demonstrating better extraction performance than generative approaches.
The use of generative adversarial network (GAN)-based models for the conditional generation of image semantic segmentation has shown promising results in recent years. However, there are still some limitations, including limited diversity of image style, distortion of detailed texture, unbalanced color tone, and lengthy training time. To address these issues, we propose an asymmetric pre-training and fine-tuning (APF)-GAN model.
Credit card fraud is a major issue for financial organizations and individuals. As fraudulent actions become more complex, the demand for better fraud detection systems is rising. Deep learning approaches have shown promise in several fields, including credit card fraud detection. However, the efficacy of these models depends heavily on the careful selection of appropriate hyperparameters. This paper introduces models that integrate deep learning with hyperparameter tuning techniques to learn the patterns and relationships within credit card transaction data, thereby improving fraud detection. Three deep learning models, AutoEncoder (AE), Convolutional Neural Network (CNN), and Long Short-Term Memory (LSTM), are proposed to investigate how hyperparameter adjustment affects the efficacy of deep learning models used to identify credit card fraud. Experiments conducted on a European credit card fraud dataset using different hyperparameters and the three deep learning models demonstrate that the proposed models achieve a trade-off between detection rate and precision, making them effective at accurately predicting credit card fraud. The results show that LSTM significantly outperformed AE and CNN in terms of accuracy (99.2%), detection rate (93.3%), and area under the curve (96.3%). These proposed models surpass those of existing studies and are expected to make a significant contribution to the field of credit card fraud detection.
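One common way autoencoder-based detectors of this kind flag fraud is to score each transaction by reconstruction error and flag those above a mean-plus-k-standard-deviations threshold. This is a hedged sketch of that thresholding idea only; the abstract does not state which decision rule the paper actually uses.

```python
import math

def fraud_threshold(errors, k=3.0):
    """Set an anomaly threshold at mean + k*std of autoencoder reconstruction
    errors measured on (mostly legitimate) training transactions; transactions
    with errors above the threshold would be flagged as potential fraud."""
    n = len(errors)
    mu = sum(errors) / n
    var = sum((e - mu) ** 2 for e in errors) / n   # population variance
    return mu + k * math.sqrt(var)

threshold = fraud_threshold([0.02, 0.03, 0.02, 0.04, 0.03])
```

The multiplier k trades detection rate against precision, the same trade-off the abstract reports for its tuned models.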
Abstract: High-rise buildings are usually considered flexible structures with low inherent damping and are therefore susceptible to wind-induced vibration. A Tuned Mass Damper (TMD) can be used as an effective device to mitigate excessive vibrations. In this study, artificial neural networks are used to find optimal mechanical properties of a TMD for high-rise buildings subjected to wind load. The patterns obtained from structural analysis of different multi-degree-of-freedom (MDOF) systems are used for training the neural networks. To obtain these patterns, structural models of systems with 10 to 80 degrees of freedom are built in MATLAB/Simulink. The optimal properties of the TMD are then determined with the objective of maximizing the reduction in displacement response. An auto-regressive model is used to simulate the wind load, so that the uncertainties related to wind loading are reflected in the neural network's outputs. After training, the network takes the structural frequency and TMD mass ratio as inputs and returns the optimal TMD frequency and damping ratio as outputs. As a case study, a benchmark 76-story office building is considered, and the presented procedure is used to obtain the optimal characteristics of a TMD for the building.
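For context, the classical closed-form Den Hartog tuning rules for an undamped primary structure under harmonic excitation give the baseline that optimized TMD parameters are commonly compared against. This sketch is that textbook formula, not the paper's trained network:

```python
import math

def den_hartog_tmd(mu):
    """Classical Den Hartog TMD tuning for an undamped primary structure.
    mu is the mass ratio (TMD mass / modal mass of the structure).
    Returns (optimal frequency ratio, optimal TMD damping ratio)."""
    f_opt = 1.0 / (1.0 + mu)                                   # TMD freq / structural freq
    zeta_opt = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))   # optimal damping ratio
    return f_opt, zeta_opt

f, z = den_hartog_tmd(0.01)   # e.g. a 1% mass ratio
```

A data-driven approach like the one above can improve on these closed-form values because real wind loading is stochastic rather than harmonic.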
Funding: Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2022R281), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code 22UQU4210118DSR33. The authors are thankful to the Deanship of Scientific Research at Najran University for funding this work under the Research Groups Funding Program, Grant Code NU/RG/SERC/11/7.
Abstract: This study presents an Abstractive Arabic Text Summarization using Hyperparameter Tuned Denoising Deep Neural Network (AATS-HTDDNN) technique. The AATS-HTDDNN technique aims to generate summaries of Arabic text, utilizing the DDNN model to produce the summary. The study exploits the Chameleon Swarm Optimization (CSO) algorithm to fine-tune the hyperparameters of the DDNN model, since they considerably affect summarization efficiency; this phase constitutes the novelty of the study. To validate the enhanced summarization performance of the proposed AATS-HTDDNN model, a comprehensive experimental analysis was conducted. The comparison outcomes confirmed the better performance of the AATS-HTDDNN model over other approaches.
Funding: Supported in part by the National Natural Science Foundation of China under Grant 62171203; in part by the Jiangsu Province "333 Project" High-Level Talent Cultivation Subsidized Project; in part by the Suzhou Key Supporting Subjects for Health Informatics under Grant SZFCXK202147; in part by the Changshu Science and Technology Program under Grants CS202015 and CS202246; and in part by the Changshu Key Laboratory of Medical Artificial Intelligence and Big Data under Grants CYZ202301 and CS202314.
Abstract: In this paper, we introduce a novel Multi-scale and Auto-tuned Semi-supervised Deep Subspace Clustering (MAS-DSC) algorithm, aimed at addressing the challenges of deep subspace clustering in high-dimensional real-world data, particularly in the field of medical imaging. Traditional deep subspace clustering algorithms, which are mostly unsupervised, are limited in their ability to effectively utilize the inherent prior knowledge in medical images. Our MAS-DSC algorithm incorporates a semi-supervised learning framework that uses a small amount of labeled data to guide the clustering process, thereby enhancing the discriminative power of the feature representations. Additionally, a multi-scale feature extraction mechanism is designed to adapt to the complexity of medical imaging data, resulting in more accurate clustering performance. To address the difficulty of hyperparameter selection in deep subspace clustering, this paper employs a Bayesian optimization algorithm for adaptive tuning of the hyperparameters related to subspace clustering, prior knowledge constraints, and model loss weights. Extensive experiments on standard clustering datasets, including ORL, Coil20, and Coil100, validate the effectiveness of the MAS-DSC algorithm. The results show that, with its multi-scale network structure and Bayesian hyperparameter optimization, MAS-DSC achieves excellent clustering results on these datasets. Furthermore, tests on a brain tumor dataset demonstrate the robustness of the algorithm and its ability to leverage prior knowledge for efficient feature extraction and enhanced clustering performance within a semi-supervised learning framework.
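The hyperparameter loop above can be pictured with a plain random-search stand-in. A real Bayesian optimiser would fit a surrogate model (e.g. a Gaussian process) and pick the next point by an acquisition function such as expected improvement; this sketch shows only the shared evaluate-and-keep-best skeleton, with illustrative names throughout.

```python
import random

def random_search(objective, space, n_trials=50, seed=0):
    """Random-search stand-in for Bayesian hyperparameter optimisation.
    space maps each hyperparameter name to a (low, high) range; objective
    takes a config dict and returns a loss to minimise."""
    rng = random.Random(seed)
    best_cfg, best_val = None, float("inf")
    for _ in range(n_trials):
        cfg = {k: rng.uniform(lo, hi) for k, (lo, hi) in space.items()}
        val = objective(cfg)
        if val < best_val:           # keep the best configuration seen so far
            best_cfg, best_val = cfg, val
    return best_cfg, best_val

# Toy objective standing in for clustering loss as a function of a loss weight:
cfg, val = random_search(lambda c: (c["loss_weight"] - 0.5) ** 2,
                         {"loss_weight": (0.0, 1.0)}, n_trials=100)
```

The advantage of the Bayesian version is sample efficiency: each trial here is independent, whereas a surrogate model reuses all past evaluations to choose the next one.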
Funding: Supported by the China Postdoctoral Science Foundation (200904501035, 201003548) and the National Natural Science Foundation of China (60835001, 90716028, 91016004, 60804017).
Abstract: An adaptive integral dynamic surface control approach based on a fully tuned radial basis function neural network (FTRBFNN) is presented for a general class of strict-feedback nonlinear systems, which may possess a wide class of uncertainties that are not linearly parameterized and for which no prior knowledge of the bounding functions is available. The FTRBFNN is employed to approximate the uncertainty online, and a systematic framework for adaptive controller design is given by dynamic surface control. The control algorithm has two outstanding features: the neural network regulates the weights, widths, and centers of the Gaussian functions simultaneously, which gives the control system a strong ability to suppress different unknown uncertainties; and the integral term of the tracking error introduced in the control law effectively eliminates the static error of the closed-loop system. As a result, high control precision can be achieved. All signals in the closed-loop system are guaranteed to be bounded via a Lyapunov approach. Finally, simulation results demonstrate the validity of the control approach.
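The Gaussian RBF network at the core of the FTRBFNN computes a weighted sum of Gaussian basis functions. In the fully tuned variant described above, the weights, widths, and centers would all be adapted online; this is only a static evaluation sketch, not the adaptive law.

```python
import math

def rbf_output(x, centers, widths, weights):
    """Output of a single-output Gaussian RBF network:
    y = sum_i w_i * exp(-||x - c_i||^2 / (2 * sigma_i^2)).
    x is an input vector; centers is a list of vectors; widths and
    weights are per-neuron scalars."""
    y = 0.0
    for c, s, w in zip(centers, widths, weights):
        d2 = sum((xj - cj) ** 2 for xj, cj in zip(x, c))  # squared distance to center
        y += w * math.exp(-d2 / (2.0 * s * s))
    return y

y = rbf_output([0.5, 0.5], [[0.0, 0.0], [1.0, 1.0]], [1.0, 1.0], [0.3, 0.7])
```

In the controller, an adaptation law derived from the Lyapunov analysis would update `weights`, `widths`, and `centers` at each step so the network tracks the unknown uncertainty.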
Funding: Supported by the National Key R&D Program of China (Grant No. 2017YFA0304100), the Innovation Program for Quantum Science and Technology (Grant No. 2021ZD0301200), and the National Natural Science Foundation of China (Grant Nos. 12222411 and 11821404). This work was partially carried out at the USTC Center for Micro and Nanoscale Research and Fabrication, with support from the Youth Innovation Promotion Association CAS.
Abstract: The implementation of scalable quantum networks requires photons at the telecom band and long-lived spin coherence. The single Er^(3+) ion in solid-state hosts is an important candidate that fulfills these critical requirements simultaneously. However, to entangle distant Er^(3+) ions through photonic connections, the emission frequency of individual Er^(3+) ions in the solid-state matrix must be the same, which is challenging because the emission frequency of Er^(3+) depends on its local environment. Herein, we propose and experimentally demonstrate Stark tuning of the emission frequency of a single Er^(3+) ion in a Y_(2)SiO_(5) crystal by employing electrodes interfaced with a silicon photonic crystal cavity. We obtain a Stark shift of 182.9±0.8 MHz, approximately 27 times the optical emission linewidth, demonstrating promising applications in tuning the emission frequencies of independent Er^(3+) ions into the same spectral channels. Our results provide a useful solution for the construction of scalable quantum networks based on single Er^(3+) ions and a universal tool for tuning the emission of individual rare-earth ions.
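A quick consistency check on the numbers above: a 182.9 MHz Stark shift that spans about 27 linewidths implies an optical emission linewidth of roughly 6.8 MHz.

```python
shift_mhz = 182.9                    # measured Stark shift
ratio = 27.0                         # shift expressed in linewidths
linewidth_mhz = shift_mhz / ratio    # implied emission linewidth, ~6.8 MHz
```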
Abstract: Diabetes is one of the fastest-growing human diseases worldwide and poses a significant threat to longer lives in the population. Early prediction of diabetes is crucial to taking precautionary steps to avoid or delay its onset. In this study, we proposed a Deep Dense Layer Neural Network (DDLNN) for diabetes prediction using a dataset with 768 instances and nine variables. We also applied a combination of classical machine learning (ML) algorithms and ensemble learning algorithms for effective prediction of the disease. The classical ML algorithms used were Support Vector Machine (SVM), Logistic Regression (LR), Decision Tree (DT), K-Nearest Neighbor (KNN), and Naïve Bayes (NB). We also constructed ensemble models such as bagging (Random Forest) and boosting, namely AdaBoost and Extreme Gradient Boosting (XGBoost), to evaluate the performance of prediction models. The proposed DDLNN model and the ensemble learning models were trained and tested using hyperparameter tuning and K-fold cross-validation to determine the best parameters for predicting the disease. The combined ML models used majority voting to select the best outcomes among the models. The efficacy of the proposed and other models was evaluated for effective diabetes prediction. The investigation concluded that the proposed model, after hyperparameter tuning, outperformed the other learning models with an accuracy of 84.42%, a precision of 85.12%, a recall rate of 65.40%, and a specificity of 94.11%.
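The majority-voting combination of the classical ML models can be sketched as follows; the label values are illustrative:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model class predictions for one sample by majority vote,
    as the combined ML models above do. Ties are resolved in favour of the
    label encountered first in the list."""
    counts = Counter(predictions)
    return counts.most_common(1)[0][0]

# e.g. SVM, LR, DT, KNN, NB each predict a class for the same patient:
final = majority_vote([1, 0, 1, 1, 0])   # three of five models say class 1
```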
Funding: Partially supported by the Medical Research Council Confidence in Concept Award, UK (MC_PC_17171); Royal Society International Exchanges Cost Share Award, UK (RP202G0230); British Heart Foundation Accelerator Award, UK (AA\18\3\34220); Hope Foundation for Cancer Research, UK (RM60G0680); Global Challenges Research Fund (GCRF), UK (P202PF11); Sino-UK Industrial Fund, UK (RP202G0289); LIAS Pioneering Partnerships Award, UK (P202ED10); Data Science Enhancement Fund, UK (P202RE237); and Guangxi Key Laboratory of Trusted Software, CN (kx201901).
Abstract: Since 2019, the coronavirus disease-19 (COVID-19) has been spreading rapidly worldwide, posing an unignorable threat to the global economy and human health. It is a disease caused by severe acute respiratory syndrome coronavirus 2, a single-stranded RNA virus of the genus Betacoronavirus. This virus is highly infectious and relies on its angiotensin-converting enzyme 2 receptor to enter cells. With the increase in the number of confirmed COVID-19 diagnoses, the difficulty of diagnosis due to the lack of global healthcare resources becomes increasingly apparent. Deep learning-based computer-aided diagnosis models with high generalisability can effectively alleviate this pressure. Hyperparameter tuning is essential in training such models and significantly impacts their final performance and training speed. However, traditional hyperparameter tuning methods are usually time-consuming and unstable. To solve this issue, we introduce Particle Swarm Optimisation to build a PSO-guided Self-Tuning Convolution Neural Network (PSTCNN), allowing the model to tune its hyperparameters automatically and thereby reducing human involvement. Moreover, the optimisation algorithm selects the combination of hyperparameters in a targeted manner, stably achieving a solution closer to the global optimum. Experimentally, the PSTCNN obtains excellent results, with a sensitivity of 93.65%±1.86%, a specificity of 94.32%±2.07%, a precision of 94.30%±2.04%, an accuracy of 93.99%±1.78%, an F1-score of 93.97%±1.78%, a Matthews Correlation Coefficient of 87.99%±3.56%, and a Fowlkes-Mallows Index of 93.97%±1.78%. Our experiments demonstrate that, compared to traditional methods, hyperparameter tuning of the model using an optimisation algorithm is faster and more effective.
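The PSO velocity and position updates that guide the PSTCNN's hyperparameter search can be shown on a toy one-dimensional problem. This is a minimal sketch of standard PSO, not the paper's implementation; PSTCNN would apply the same update to a vector of CNN hyperparameters rather than a scalar.

```python
import random

def pso_minimize(f, lo, hi, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal 1-D particle swarm optimisation. Velocity update:
    v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x), positions clamped to [lo, hi]."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest, pval = xs[:], [f(x) for x in xs]           # per-particle best
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g], pval[g]                    # global best
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vs[i] = w * vs[i] + c1 * r1 * (pbest[i] - xs[i]) + c2 * r2 * (gbest - xs[i])
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))
            v = f(xs[i])
            if v < pval[i]:
                pbest[i], pval[i] = xs[i], v
                if v < gval:
                    gbest, gval = xs[i], v
    return gbest, gval

best_x, best_val = pso_minimize(lambda x: (x - 3.0) ** 2, 0.0, 10.0)
```

In the paper's setting, `f` would be the (expensive) validation loss of a CNN trained with the candidate hyperparameters, which is why stability and sample efficiency of the search matter.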
Abstract: Drug-target interaction prediction (DTIP) remains an important requirement in the field of drug discovery and human medicine. The identification of interactions between drug compounds and target proteins plays an essential role in the drug discovery process, but predicting drug-target interactions (DTIs) using experimental approaches is a lengthy and complex process. To resolve these issues, computational-intelligence-based DTIP techniques were developed to offer efficient predictive models at low cost. Recently developed deep learning (DL) models can be employed for the design of effective predictive approaches for DTIP. With this motivation, this paper presents a new drug-target interaction prediction using optimal recurrent neural network (DTIP-ORNN) technique. The goal of the DTIP-ORNN technique is to predict DTIs in a semi-supervised way, i.e., with the inclusion of both labelled and unlabelled instances. Initially, the DTIP-ORNN technique performs a data preparation process that includes class labelling, where the target interactions from the database are used to determine the final labels of the unlabelled instances. Besides, drug-to-drug (D-D) and target-to-target (T-T) interactions are used for the weight initiation of the RNN-based bidirectional long short-term memory (BiLSTM) model, which is then utilized for the prediction of DTIs. Since hyperparameters significantly affect the prediction performance of the BiLSTM technique, the Adam optimizer is used, which mainly helps to improve the DTI prediction outcomes. To validate the enhanced predictive outcomes of the DTIP-ORNN technique, a series of simulations are implemented on four benchmark datasets. The comparative result analysis shows the promising performance of the DTIP-ORNN method over recent approaches.
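The Adam optimizer mentioned above combines bias-corrected first- and second-moment estimates of the gradient. A single-parameter sketch of one standard Adam step (default hyperparameters as commonly published):

```python
import math

def adam_step(theta, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter theta at step t (t >= 1).
    m and v are the running first and second moment estimates."""
    m = b1 * m + (1 - b1) * grad            # first moment (mean of gradients)
    v = b2 * v + (1 - b2) * grad * grad     # second moment (uncentered variance)
    m_hat = m / (1 - b1 ** t)               # bias correction
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

theta, m, v = adam_step(0.0, 1.0, 0.0, 0.0, t=1)   # first step moves by ~lr
```

In a real BiLSTM, the same update is applied element-wise to every weight; frameworks keep m and v per parameter.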
Funding: This research was partly supported by the Technology Development Program of MSS [No. S3033853] and by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (No. 2020R1I1A3069700).
Abstract: Precision agriculture involves the optimal and adequate use of resources depending on the several variables that govern crop yield, and offers a systematic solution to current agricultural problems such as balancing production and environmental concerns. Weed control has become one of the significant problems in the agricultural sector. In traditional weed control, the entire field is treated uniformly, spraying the soil, weeds, and crops alike with a single herbicide dose. For more precise farming, robots could apply targeted weed treatment if they could locate the dispensable plant and identify the weed type. This could substantially reduce the use of agrochemicals on agricultural fields and favour sustainable agriculture. This study presents a Harris Hawks Optimizer with Graph Convolutional Network based Weed Detection (HHOGCN-WD) technique for precision agriculture. The HHOGCN-WD technique mainly focuses on identifying and classifying weeds. For image pre-processing, the HHOGCN-WD model utilizes a bilateral normal filter (BNF) for noise removal. In addition, a coupled convolutional neural network (CCNet) model is utilized to derive a set of feature vectors. To detect and classify weeds, the GCN model is utilized with the HHO algorithm as a hyperparameter optimizer to improve the detection performance. The experimental results of the HHOGCN-WD technique are investigated on a benchmark dataset. The results indicate the promising performance of the presented HHOGCN-WD model over other recent approaches, with an increased accuracy of 99.13%.
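To give a flavour of how a Harris Hawks-style optimizer searches a hyperparameter space, the sketch below implements only a drastically simplified "soft besiege" move, where each hawk jumps toward the current best solution with a decaying random step. It is a toy on a sphere function, not the paper's HHOGCN-WD pipeline, and all names and constants here are assumptions:

```python
import random

def simplified_hho(f, dim=2, hawks=20, iters=200, lo=-5.0, hi=5.0, seed=0):
    """Toy population search in the spirit of HHO's soft-besiege phase:
    hawks move toward the best solution (the rabbit) with random jumps
    whose amplitude (the escaping energy) decays to zero over time."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hawks)]
    best = min(pop, key=f)
    for t in range(iters):
        escape = 2 * (1 - t / iters)      # escaping energy decays to 0
        for i, x in enumerate(pop):
            jump = 2 * (1 - rng.random())
            cand = [b - escape * abs(jump * b - xi) * rng.uniform(-1, 1)
                    for b, xi in zip(best, x)]
            cand = [min(hi, max(lo, c)) for c in cand]  # clip to bounds
            if f(cand) < f(x):            # greedy acceptance
                pop[i] = cand
        best = min(pop + [best], key=f)
    return best

sphere = lambda v: sum(c * c for c in v)
best = simplified_hho(sphere)
```

In the paper's setting, `f` would instead score a GCN configuration (e.g. by validation accuracy), which is far more expensive to evaluate than this toy objective.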
Abstract: Advances in MIMO systems and networking technologies have revolutionized wireless and wired multi-cast (multi-point-to-multi-point) transmission. In this work, distributed versions of a self-tuning proportional integral plus derivative (SPID) controller and a self-tuning proportional plus integral (SPI) controller are described. An explicit rate feedback mechanism is used to design a controller that regulates the source rates in wireless and wired multi-cast networks. The control parameters of the SPID and SPI controllers are determined to ensure the stability of the control loop. Simulations carried out with wireless and wired multi-cast models show that the SPID scheme yields better performance than the SPI scheme; however, it requires more computing time and central processing unit (CPU) resources.
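The rate-regulation idea behind the SPI controller can be illustrated with a plain fixed-gain discrete PI loop; the self-tuning part (online adaptation of the gains) is omitted, and the gains and target rate below are illustrative assumptions rather than values from the paper:

```python
def pi_rate_control(kp=0.4, ki=0.2, target=100.0, steps=50):
    """Discrete PI controller driving a source rate toward a target,
    using explicit feedback of the rate error each step."""
    rate, integral = 0.0, 0.0
    for _ in range(steps):
        error = target - rate        # explicit rate feedback
        integral += error            # accumulated error (I term)
        rate += kp * error + ki * integral
    return rate

final_rate = pi_rate_control()
```

With these gains the closed loop is stable (the error dynamics have eigenvalues inside the unit circle), so the rate settles at the target; a self-tuning scheme would instead adjust `kp` and `ki` online to keep the loop stable as network conditions change.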
Abstract: To address the difficulty of mining entity-relation semantics and the bias in predicted relations in relation extraction (RE) tasks, an RE method based on mask prompts and gated memory network calibration (MGMNC) is proposed. First, the masks in the prompts are used to learn the latent semantics between entities in the semantic space of a pre-trained language model (PLM), and a mask attention weight matrix is constructed to associate the discrete mask semantic spaces with each other. Second, a gated calibration network fuses the mask representations containing entity and relation semantics into the global semantics of the sentence. These are then used as relation prompts to calibrate the relation information, after which the final sentence representation is mapped to the corresponding relation class. Finally, by making better use of the masks in the prompts and combining the advantage of traditional fine-tuning in learning global sentence semantics, the potential of the PLM is fully exploited. Experimental results show that the proposed method achieves an F1 score of 91.4% on the SemEval (SemEval-2010 Task 8) dataset, 1.0 percentage point higher than the generative RELA (Relation Extraction with Label Augmentation) method, and F1 scores of 91.0% and 82.8% on the SciERC (Entities, Relations, and Coreference for Scientific knowledge graph construction) and CLTC (Chinese Literature Text Corpus) datasets, respectively. The proposed method clearly outperforms the compared methods on all three datasets, verifying its effectiveness and achieving better extraction performance than generative methods.
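The mask attention weight matrix described above relies on the standard masked-softmax operation: positions excluded by the mask receive zero attention weight. The following is a generic illustration of that operation, not the MGMNC implementation; the scores and mask values are made up:

```python
import math

def masked_softmax(scores, mask):
    """Softmax over attention scores where positions with mask == 0
    are excluded and receive exactly zero weight."""
    exps = [math.exp(s) if m else 0.0 for s, m in zip(scores, mask)]
    total = sum(exps)
    return [e / total for e in exps]

# Third position is masked out, so all weight goes to the first two.
w = masked_softmax([2.0, 1.0, 3.0], [1, 1, 0])
```

In the paper's setting, a full matrix of such rows would relate each mask token to the others, linking the otherwise discrete mask semantic spaces.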
Funding: This work was supported by the Fundamental Research Funds for the Central Universities (No. 07063233084), the Young Scientists Fund of the National Natural Science Foundation of China (Grant No. 62206134), and the Tianjin Key Laboratory of Visual Computing and Intelligent Perception (VCIP). Computation is supported by the Supercomputing Center of Nankai University (NKSC).
Abstract: The use of generative adversarial network (GAN)-based models for the conditional generation of image semantic segmentation has shown promising results in recent years. However, there are still some limitations, including limited diversity of image styles, distortion of detailed textures, unbalanced colour tone, and lengthy training time. To address these issues, we propose an asymmetric pre-training and fine-tuning (APF)-GAN model.
Abstract: Credit card fraud is a major issue for financial organizations and individuals. As fraudulent actions become more complex, the demand for better fraud detection systems is rising. Deep learning approaches have shown promise in several fields, including credit card fraud detection; however, the efficacy of these models depends heavily on the careful selection of appropriate hyperparameters. This paper introduces models that integrate deep learning with hyperparameter tuning techniques to learn the patterns and relationships within credit card transaction data, thereby improving fraud detection. Three deep learning models, AutoEncoder (AE), Convolutional Neural Network (CNN), and Long Short-Term Memory (LSTM), are proposed to investigate how hyperparameter adjustment affects their efficacy in identifying credit card fraud. Experiments conducted on a European credit card fraud dataset with different hyperparameters demonstrate that the proposed models achieve a tradeoff between detection rate and precision, making them effective at accurately predicting credit card fraud. LSTM significantly outperformed AE and CNN in terms of accuracy (99.2%), detection rate (93.3%), and area under the curve (96.3%). These models surpass those of existing studies and are expected to make a significant contribution to the field of credit card fraud detection.
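The tradeoff between detection rate and precision reported above comes directly from the confusion-matrix definitions of those metrics. The sketch below computes them from hypothetical counts for an imbalanced fraud dataset (the numbers are invented for illustration, not taken from the paper):

```python
def fraud_metrics(tp, fp, tn, fn):
    """Accuracy, detection rate (recall) and precision from confusion counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    detection_rate = tp / (tp + fn)   # share of actual frauds that are caught
    precision = tp / (tp + fp)        # share of fraud alerts that are correct
    return accuracy, detection_rate, precision

# Hypothetical counts: 100 frauds among 10,000 transactions.
acc, dr, prec = fraud_metrics(tp=84, fp=10, tn=9890, fn=16)
```

Because frauds are rare, accuracy alone is misleadingly high here; lowering the alert threshold would raise the detection rate but admit more false positives and lower precision, which is exactly the tradeoff the models above balance.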