Side lobe level (SLL) reduction of antenna arrays significantly enhances the signal-to-interference ratio and improves the quality of service (QoS) in current and future wireless communication systems, from 5G up to 7G. Furthermore, it improves the array gain and directivity, increasing the detection range and angular resolution of radar systems. This study proposes two highly efficient SLL reduction techniques. These techniques are based on hybridizing either the single-convolution or the double-convolution algorithm with the genetic algorithm (GA) to develop the Conv/GA and DConv/GA, respectively. The convolution process determines the element excitations, while the GA optimizes the element spacing. For an M-element linear antenna array (LAA), the convolution of the excitation-coefficient vector with itself provides a new vector of excitations of length N = 2M − 1. This new vector is divided into three different sets of excitations: the odd excitations, even excitations, and middle excitations, of lengths M, M − 1, and M, respectively. When the same element spacing as the original LAA is used, the odd and even excitations provide a much lower SLL than that of the LAA but with a much wider half-power beamwidth (HPBW), while the middle excitations give the same HPBW as the original LAA with a relatively higher SLL. To mitigate the increased HPBW of the odd and even excitations, the element spacing is optimized using the GA. Thereby, the synthesized arrays have the same HPBW as the original LAA with a two-fold reduction in the SLL. Furthermore, for extreme SLL reduction, the DConv/GA is introduced, in which the same procedure of the aforementioned Conv/GA technique is performed on the resultant even and odd excitation vectors. It provides a relatively wider HPBW than the original LAA with about a four-fold reduction in the SLL.
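The self-convolution step can be sketched in a few lines of NumPy. The uniform excitation vector below is a hypothetical stand-in; the paper's actual excitations and GA-optimized spacings are not reproduced here:

```python
import numpy as np

# Hypothetical uniform excitation for an M = 5 element LAA.
M = 5
w = np.ones(M)

# Self-convolution gives N = 2M - 1 = 9 coefficients (a triangular taper here):
# c == [1, 2, 3, 4, 5, 4, 3, 2, 1]
c = np.convolve(w, w)

# Split the new vector into the three excitation sets named in the abstract.
odd = c[0::2]                                          # length M
even = c[1::2]                                         # length M - 1
middle = c[(len(c) - M) // 2 : (len(c) - M) // 2 + M]  # central M entries
```

Applying each of the three sets as a taper on the original element grid reproduces the trade-off described above between SLL and HPBW.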
To address the problems of scarce optical fibre line fault samples and the inefficiency of manual communication optical fibre fault diagnosis, this paper proposes a communication optical fibre fault diagnosis model based on variational modal decomposition (VMD), fuzzy entropy (FE) and fuzzy clustering (FC). Firstly, based on OTDR curve data collected in the field, VMD is used to extract the different intrinsic mode function (IMF) components of the original signal, and the fuzzy entropy values of the different components are calculated to characterize the subtle differences between them. The fuzzy entropy of each curve is used as the feature vector, which in turn constructs the communication optical fibre feature vector matrix, and the fuzzy clustering algorithm is used to achieve fault diagnosis of faulty optical fibre. The VMD-FE combination can extract subtle feature differences, and the fuzzy clustering algorithm requires no sample training. The experimental results show that the model has high accuracy and is relevant to the maintenance of communication optical fibre when compared with existing feature extraction models and traditional machine learning models.
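The fuzzy entropy feature can be illustrated with a compact NumPy implementation. This follows one common FuzzyEn variant (exponential membership over Chebyshev distances); the embedding dimension m and tolerance r used by the paper are assumptions:

```python
import numpy as np

def _phi(x, m, r):
    # Build all length-m templates with their mean removed (standard FuzzyEn step).
    n = len(x) - m
    templates = np.array([x[i:i + m] for i in range(n + 1)])
    templates = templates - templates.mean(axis=1, keepdims=True)
    # Chebyshev distance between every pair of templates.
    d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
    # Fuzzy (exponential) membership instead of a hard threshold.
    sim = np.exp(-(d ** 2) / r)
    # Average similarity, excluding self-matches on the diagonal.
    k = len(sim)
    return (sim.sum() - k) / (k * (k - 1))

def fuzzy_entropy(x, m=2, r=0.2):
    """FuzzyEn(m, r): higher for irregular signals, lower for regular ones."""
    x = np.asarray(x, dtype=float)
    rr = r * x.std()  # tolerance scaled by the signal's standard deviation
    return np.log(_phi(x, m, rr)) - np.log(_phi(x, m + 1, rr))
```

A regular waveform (e.g. a clean backscatter slope) should score lower than a noisy fault signature, which is the discriminative property the feature matrix relies on.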
Cooperative utilization of multidimensional resources, including cache, power and spectrum, in satellite-terrestrial integrated networks (STINs) can provide a feasible approach for massive streaming media content delivery over the seamless global coverage area. However, the on-board supportable resources of a single satellite are extremely limited and lack interaction with those of other satellites. In this paper, we design a network model with two-layered cache deployment, i.e., a satellite layer and a ground base station layer, and two types of sharing links, i.e., terrestrial-satellite sharing (TSS) links and inter-satellite sharing (ISS) links, to enhance the capability of cooperative delivery over STINs. We use rateless codes for divided-packet content transmission, and derive the total energy efficiency (EE) of the whole transmission procedure, defined as the ratio of traffic offloading to energy consumption. We formulate two optimization problems for maximizing EE in different sharing scenarios (TSS-only and TSS-ISS), and propose two optimized algorithms to obtain the optimal content placement matrices, respectively. Simulation results demonstrate that enabling sharing links with optimized cache placement yields more than a 2-fold improvement in EE performance over other traditional placement schemes. In particular, TSS-ISS schemes achieve higher EE performance than TSS-only schemes given a sufficient number of satellites and smaller inter-satellite distances.
The results presented here show for the first time the experimental demonstration of the fabrication of lossy mode resonance (LMR) devices based on perovskite coatings deposited on planar waveguides. Perovskite thin films were obtained by means of the spin coating technique, and their presence was confirmed by ellipsometry, scanning electron microscopy, and X-ray diffraction testing. The LMRs can be generated in a wide wavelength range, and the experimental results agree with the theoretical simulations. Overall, this study highlights the potential of perovskite thin films for the development of novel LMR-based devices that can be used for environmental monitoring, industrial sensing, and gas detection, among other applications.
Facial beauty analysis is an important topic in human society. It may be used as guidance for face beautification applications such as cosmetic surgery. Deep neural networks (DNNs) have recently been adopted for facial beauty analysis and have achieved remarkable performance. However, most existing DNN-based models regard facial beauty analysis as a standard classification task. They ignore important prior knowledge from traditional machine learning models, which illustrates the significant contribution of geometric features in facial beauty analysis. Specifically, landmarks of the whole face and facial organs are introduced to extract geometric features for the decision. Inspired by this, we introduce a novel dual-branch network for facial beauty analysis: one branch takes the Swin Transformer as the backbone to model the full face and global patterns, and the other branch focuses on the masked facial organs with a residual network to model the local patterns of certain facial parts. Additionally, the designed multi-scale feature fusion module can further help our network learn complementary semantic information between the two branches. In model optimisation, we propose a hybrid loss function in which a geometric regularisation term is introduced by regressing the facial landmarks; it forces the extracted features to convey facial geometric information. Experiments performed on the SCUT-FBP5500 and SCUT-FBP datasets demonstrate that our model outperforms state-of-the-art convolutional neural network models, which proves the effectiveness of the proposed geometric regularisation and dual-branch structure with the hybrid network. To the best of our knowledge, this is the first study to introduce a Vision Transformer into the facial beauty analysis task.
Cooperative non-orthogonal multiple access (NOMA) is heavily studied in the literature as a solution for 5G and beyond-5G applications. Cooperative NOMA transmits a superimposed version of all users' messages simultaneously with the aid of a relay, after which each user decodes its own message. Accordingly, NOMA is deemed a spectrally efficient technique. Another emerging technique exploits orbital angular momentum (OAM), an attractive property of electromagnetic waves. OAM has gathered a great deal of attention in recent years (as has NOMA) due to its ability to enhance electromagnetic spectrum exploitation, hence increasing the achieved transmission throughput. However, OAM-based transmission suffers from wave divergence, especially at high OAM orders, which reduces the transmission distance. The distance can be extended via cooperative relays (part of cooperative NOMA): a relay helps the source transmit packets to the destination by providing an additional connection that shortens the distance between source and destination. In this paper, we propose employing OAM transmission in the cooperative NOMA network. Simulation experiments show that OAM transmission helps cooperative NOMA achieve higher throughput than conventional cooperative NOMA. Concurrently, the cooperation part of cooperative NOMA eases the divergence problem of OAM. In addition, the proposed system outperforms the standalone cooperative OAM-based solution.
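The power-domain superposition and successive interference cancellation (SIC) at the heart of NOMA can be sketched in a noiseless toy example. The two-user BPSK setup and the 0.8/0.2 power split are illustrative assumptions; OAM propagation itself is not modelled:

```python
import numpy as np

# Hypothetical power allocation: the far user gets most of the power.
p_far, p_near = 0.8, 0.2

far_bits = np.array([1, 0, 1, 1])
near_bits = np.array([0, 0, 1, 0])
bpsk = lambda b: 2 * b.astype(float) - 1          # map 0 -> -1, 1 -> +1

# Superimposed signal transmitted by the source (or relay).
x = np.sqrt(p_far) * bpsk(far_bits) + np.sqrt(p_near) * bpsk(near_bits)

# Far user: decodes its own (power-dominant) message directly.
far_hat = (x > 0).astype(int)

# Near user: SIC -- decode the far user's message, subtract its
# reconstructed contribution, then decode its own message.
far_est = np.sqrt(p_far) * bpsk((x > 0).astype(int))
near_hat = ((x - far_est) > 0).astype(int)
```

In the noiseless case both users recover their bits exactly; with noise, the power split controls the trade-off between the two users' error rates.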
A 4-port multiple-input multiple-output (MIMO) antenna exhibiting low mutual coupling and UWB performance is developed. The four octagonal-shaped antenna elements are connected with 50 Ω microstrip feed lines and arranged rotationally to achieve orthogonal polarization, improving the MIMO system performance. The antenna has a wideband impedance bandwidth of 7.5 GHz with S11 < −10 dB from 3.5–11 GHz (103.44%) and inter-element isolation higher than 20 dB. Antenna validation is carried out by verifying the simulated and measured results after fabricating the antenna. The results, in the form of omnidirectional radiation patterns, peak gain (≥4 dBi), and Envelope Correlation Coefficient (ECC) (≤0.01), are extracted to validate the suggested antenna performance. In addition, a time-domain analysis was carried out to demonstrate the operation of the suggested antenna in wideband applications. Finally, the simulated and experimental outcomes show almost similar tendencies, making the antenna suitable for use in UWB MIMO applications.
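The envelope correlation coefficient reported above can be estimated from S-parameters with the standard lossless-antenna formula; the sample port values below are illustrative, not the paper's measurements:

```python
import numpy as np

def ecc_from_s(s11, s21, s12, s22):
    """ECC of a two-port antenna pair from complex S-parameters
    (Blanch-style formula, assuming a lossless antenna)."""
    num = abs(np.conj(s11) * s12 + np.conj(s21) * s22) ** 2
    den = ((1 - abs(s11) ** 2 - abs(s21) ** 2)
           * (1 - abs(s12) ** 2 - abs(s22) ** 2))
    return num / den

# A well-matched (-20 dB reflection) and well-isolated (-26 dB coupling)
# port pair stays comfortably below the 0.01 threshold cited above.
ecc = ecc_from_s(0.1, 0.05, 0.05, 0.1)
```

Far-field-based ECC is more accurate for lossy antennas, but the S-parameter form is the quick check usually quoted alongside isolation figures.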
The query optimizer uses cost-based optimization to create an execution plan with the least cost, which also consumes the least amount of resources. Query optimization for relational database systems is a combinatorial optimization problem, which renders exhaustive search impossible as query sizes rise. Increases in CPU performance have surpassed main memory and disk access speeds in recent decades, allowing data compression to be used as a strategy for improving database system performance. Compression and query optimization are the two most important factors for performance enhancement: compression reduces the volume of data, whereas query optimization minimizes execution time. Compressing the database reduces the memory requirement, data takes less time to load into memory, fewer buffer misses occur, and the intermediate results are smaller. This paper performs query optimization on a graph database in a cloud dew environment, considering which database requires less time to execute a query. The factors of compression and query optimization improve the performance of the databases. This research compares the performance of MySQL and Neo4j databases in terms of memory usage and execution time running on cloud dew servers.
Nowadays, there is tremendous growth in biometric authentication and cybersecurity applications. Thus, an efficient way of storing and securing personal biometric patterns is mandatory in most governmental and private sectors. Therefore, designing and implementing robust security algorithms for users' biometrics is still a hot research area to be investigated. This work presents a powerful biometric security system (BSS) to protect different biometric modalities such as faces, irises, and fingerprints. The proposed BSS model is based on hybridizing an auto-encoder (AE) network and a chaos-based ciphering algorithm to cipher the details of the stored biometric patterns and ensure their secrecy. The employed AE network is an unsupervised deep learning (DL) structure used in the proposed BSS model to extract the main biometric features. These obtained features are utilized to generate two random chaos matrices. The first random chaos matrix is used to permute the pixels of the biometric images. The second random matrix is used to further cipher and confuse the resulting permuted biometric pixels using a two-dimensional (2D) chaotic logistic map (CLM) algorithm. To assess the efficiency of the proposed BSS, (1) different standardized color and grayscale images of the examined fingerprint, face, and iris biometrics were used, and (2) comprehensive security and recognition evaluation metrics were measured. The assessment results have proven the authentication and robustness superiority of the proposed BSS model compared to other existing BSS models. For example, the proposed BSS succeeds in achieving a high area under the receiver operating characteristic (AROC) value of 99.97% and low rates of 0.00137, 0.00148, and 0.00157 for the equal error rate (EER), false reject rate (FRR), and false accept rate (FAR), respectively.
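The permute-then-confuse pattern described above can be sketched with a chaotic map. A classic 1-D logistic map is used here as an illustrative stand-in for the paper's 2-D CLM, and the seed values are arbitrary:

```python
import numpy as np

def logistic_sequence(x0, r, n):
    """x_{k+1} = r * x_k * (1 - x_k): the classic 1-D logistic map.
    The paper uses a 2-D coupled CLM; this 1-D map is only a stand-in."""
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1 - x)
        seq[i] = x
    return seq

def chaos_encrypt(img, x0=0.3456, r=3.99):
    """Permute pixel positions with a chaotic ordering, then XOR-confuse
    them with a chaotic keystream; (x0, r) act as the secret key."""
    flat = img.flatten()
    seq = logistic_sequence(x0, r, flat.size)
    perm = np.argsort(seq)                 # chaotic permutation of positions
    key = (seq * 256).astype(np.uint8)     # chaotic keystream for diffusion
    return flat[perm] ^ key

def chaos_decrypt(ct, shape, x0=0.3456, r=3.99):
    seq = logistic_sequence(x0, r, ct.size)
    perm = np.argsort(seq)
    key = (seq * 256).astype(np.uint8)
    flat = np.empty_like(ct)
    flat[perm] = ct ^ key                  # undo diffusion, then permutation
    return flat.reshape(shape)
```

Because the map is deterministic, the receiver regenerates the same permutation and keystream from the shared (x0, r) pair; in the paper these parameters are derived from AE-extracted biometric features rather than fixed constants.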
In the cloud environment, the transfer of data from one cloud server to another is called migration. Data can be delivered in various ways, from one data centre to another. This research aims to increase the migration performance of virtual machines (VMs) in the cloud environment. VMs allow cloud customers to store essential data and resources. However, server usage has grown dramatically due to the virtualization of computer systems, resulting in higher data centre power consumption, storage needs, and operating expenses. Multiple VMs in one data centre share resources like central processing unit (CPU) cache, network bandwidth, memory, and application bandwidth. In multi-cloud, VM migration addresses the performance degradation due to cloud server configuration, unbalanced traffic load, resource load management, and fault situations during data transfer. VM migration speed is influenced by the size of the VM, the dirty rate of the running application, and the latency of the migration iterations. As a result, evaluating VM migration performance while considering all of these factors becomes a difficult task. The main effort of this research is to assess the impact of these migration factors on performance. The simulation results in Matlab show that as the VM size grows, the migration time and downtime of VMs can be impacted by three orders of magnitude. As the dirty page rate increases, the migration time and downtime grow, while the latency decreases as network bandwidth increases, both during migration and in the post-migration overhead calculation once the VM transfer is completed. All the simulated cases of VM migration were performed in a fuzzy inference system with performance graphs.
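The interplay of VM size, dirty rate, and bandwidth can be sketched with a standard pre-copy live-migration model: each round retransmits the pages dirtied during the previous round until the remainder fits a stop-and-copy threshold. All parameter values below are illustrative, not the paper's:

```python
def precopy_migration(vm_size_mb, dirty_rate_mbps, bw_mbps,
                      stop_mb=50.0, max_rounds=30):
    """Return (total_migration_time_s, downtime_s) under a simple
    dirty-page model; convergence assumes dirty_rate < bandwidth."""
    remaining = vm_size_mb
    total = 0.0
    for _ in range(max_rounds):
        t = remaining / bw_mbps           # time to copy the current dirty set
        total += t
        remaining = dirty_rate_mbps * t   # pages dirtied during that round
        if remaining <= stop_mb:
            break
    downtime = remaining / bw_mbps        # final stop-and-copy round
    return total + downtime, downtime
```

Even this toy model reproduces the qualitative trends in the abstract: larger VMs and higher dirty rates lengthen migration, while more bandwidth shortens it.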
Black fungus is a rare and dangerous mycosis that usually affects the brain and lungs and can be life-threatening in diabetic cases. Recently, some COVID-19 survivors, especially those with co-morbid diseases, have been susceptible to black fungus. Therefore, recovered COVID-19 patients should seek medical support when they notice mucormycosis symptoms. This paper proposes a novel ensemble deep-learning model that includes three pre-trained models: ResNet-50, VGG-19, and Inception. Our approach is medically intuitive and efficient compared to traditional deep learning models. An image dataset was aggregated from various resources and divided into two classes: a black fungus class and a skin infection class. To the best of our knowledge, our study is the first concerned with building black fungus detection models based on deep learning algorithms. The proposed approach can significantly improve the performance of the classification task and increase the generalization ability of such a binary classification task. According to the reported results, it empirically achieved a sensitivity of 0.9907, a specificity of 0.9938, a precision of 0.9938, and a negative predictive value of 0.9907.
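A generic soft-voting combiner for the three backbone heads can be sketched as follows. The abstract does not specify the fusion rule, so averaging predicted class probabilities is an assumption, and the per-model probabilities are made up for illustration:

```python
import numpy as np

def ensemble_predict(prob_list):
    """Average the class-probability outputs of several base models and
    pick the argmax class (0 = black fungus, 1 = skin infection here)."""
    avg = np.mean(prob_list, axis=0)   # shape (n_samples, n_classes)
    return avg.argmax(axis=1)

# Hypothetical per-model probabilities for two test images.
p_resnet = np.array([[0.9, 0.1], [0.2, 0.8]])
p_vgg    = np.array([[0.8, 0.2], [0.4, 0.6]])
p_incep  = np.array([[0.3, 0.7], [0.1, 0.9]])
pred = ensemble_predict([p_resnet, p_vgg, p_incep])
```

Averaging probabilities (rather than hard votes) lets a confident model outweigh two lukewarm dissenters, which is one reason soft voting often generalizes better on binary medical tasks.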
In recent years, wireless networks have been widely used in different domains. This phenomenon has increased the number of Internet of Things (IoT) devices and their applications. Though IoT has numerous advantages, commonly-used IoT devices are exposed to cyber-attacks periodically. This scenario necessitates real-time automated detection and mitigation of different types of attacks in high-traffic networks. The Software-Defined Networking (SDN) technique and Machine Learning (ML)-based intrusion detection techniques are effective tools that can quickly respond to different types of attacks in IoT networks. Intrusion Detection System (IDS) models can be employed to secure the SDN-enabled IoT environment in this scenario. The current study devises a Harmony Search Algorithm-based Feature Selection with Optimal Convolutional Autoencoder (HSAFS-OCAE) for intrusion detection in the SDN-enabled IoT environment. The presented HSAFS-OCAE method follows a three-stage process in which the Harmony Search Algorithm-based FS (HSAFS) technique is exploited first for feature selection. Next, the CAE method is leveraged to recognize and classify intrusions in the SDN-enabled IoT environment. Finally, the Artificial Fish Swarm Algorithm (AFSA) is used to fine-tune the hyperparameters. This process improves the outcomes of the intrusion detection process executed by the CAE algorithm and shows the work's novelty. The proposed HSAFS-OCAE technique was experimentally validated under different aspects, and the comparative analysis results established the supremacy of the proposed model.
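A binary harmony search for feature selection can be sketched in plain Python: each harmony is a 0/1 mask over features, and new harmonies are improvised from memory with occasional pitch-adjustment flips. The HMCR/PAR values and the toy fitness used in testing are illustrative assumptions, not the paper's settings:

```python
import random

def harmony_search_fs(n_features, fitness, hms=10, hmcr=0.9, par=0.3,
                      iters=200, seed=0):
    """Binary harmony search: `fitness` scores a 0/1 feature mask
    (higher is better). Returns (best_mask, best_score)."""
    rng = random.Random(seed)
    memory = [[rng.randint(0, 1) for _ in range(n_features)]
              for _ in range(hms)]
    scores = [fitness(h) for h in memory]
    for _ in range(iters):
        new = []
        for j in range(n_features):
            if rng.random() < hmcr:                 # draw from harmony memory
                bit = memory[rng.randrange(hms)][j]
                if rng.random() < par:              # pitch adjustment: flip bit
                    bit = 1 - bit
            else:                                   # random consideration
                bit = rng.randint(0, 1)
            new.append(bit)
        s = fitness(new)
        worst = scores.index(min(scores))
        if s > scores[worst]:                       # replace the worst harmony
            memory[worst], scores[worst] = new, s
    best = scores.index(max(scores))
    return memory[best], scores[best]
```

In the paper's pipeline the fitness would wrap the downstream CAE classifier's validation score; here any mask-scoring function works.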
Classification of electroencephalogram (EEG) signals for humans can be achieved via artificial intelligence (AI) techniques. In particular, the EEG signals associated with seizure epilepsy can be detected to distinguish between epileptic and non-epileptic regions. From this perspective, an automated AI technique with a digital processing method can be used to improve these signals. This paper proposes two classifiers, long short-term memory (LSTM) and support vector machine (SVM), for the classification of seizure and non-seizure EEG signals. These classifiers are applied to a public dataset, namely the University of Bonn dataset, which consists of two classes: seizure and non-seizure. In addition, a fast Walsh-Hadamard Transform (FWHT) technique is implemented to analyze the EEG signals within the recurrence space of the brain; thus, Hadamard coefficients of the EEG signals are obtained via the FWHT. Moreover, the FWHT helps to efficiently distinguish seizure EEG recordings from non-seizure EEG recordings. Also, a k-fold cross-validation technique is applied to validate the performance of the proposed classifiers. The LSTM classifier provides the best performance, with a testing accuracy of 99.00%. The training and testing loss rates for the LSTM are 0.0029 and 0.0602, respectively, while the weighted average precision, recall, and F1-score for the LSTM are all 99.00%. The accuracy, sensitivity, and specificity of the SVM classifier reached 91%, 93.52%, and 91.3%, respectively. The computational time consumed for training the LSTM and SVM is 2000 and 2500 s, respectively. The results show that the LSTM classifier provides better performance than the SVM in the classification of EEG signals. Eventually, the proposed classifiers provide high classification accuracy compared to previously published classifiers.
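The FWHT itself is a short butterfly recursion; a minimal natural-order implementation (a generic textbook version, not the paper's specific windowing of the Bonn recordings) looks like this:

```python
import numpy as np

def fwht(signal):
    """Fast Walsh-Hadamard transform (natural/Hadamard order).
    The input length must be a power of two, as with windowed EEG segments."""
    a = np.array(signal, dtype=float)
    n = len(a)
    assert n and (n & (n - 1)) == 0, "length must be a power of two"
    h = 1
    while h < n:
        # Butterfly stage: combine elements h apart with +/- sums.
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a
```

Since the Hadamard matrix satisfies H·H = n·I, applying `fwht` twice returns the input scaled by its length, a handy self-check; the resulting coefficients are the features fed to the LSTM/SVM classifiers.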
Emerging technologies such as edge computing, Internet of Things (IoT), 5G networks, big data, Artificial Intelligence (AI), and Unmanned Aerial Vehicles (UAVs) empower Industry 4.0 with a progressive production methodology that pays attention to the interaction between machines and human beings. In the literature, various authors have focused on resolving security problems in UAV communication to provide safety for vital applications. The current research article presents a Circle Search Optimization with Deep Learning Enabled Secure UAV Classification (CSODL-SUAVC) model for the Industry 4.0 environment. The suggested CSODL-SUAVC methodology is aimed at accomplishing two core objectives: secure communication via image steganography and image classification. Primarily, the proposed CSODL-SUAVC method involves Multi-Level Discrete Wavelet Transformation (ML-DWT), CSO-related Optimal Pixel Selection (CSO-OPS), and signcryption-based encryption. The proposed model deploys the CSO-OPS technique to select the optimal pixel points in cover images. The secret images, encrypted by the signcryption technique, are embedded into the cover images. Besides, the image classification process includes three components: Super-Resolution using Convolutional Neural Network (SRCNN), the Adam optimizer, and a softmax classifier. The integration of the CSO-OPS algorithm and Adam optimizer helps achieve maximum performance in UAV communication. The proposed CSODL-SUAVC model was experimentally validated using benchmark datasets, and the outcomes were evaluated under distinct aspects. The simulation outcomes established the superior performance of the CSODL-SUAVC model over recent approaches.
University timetabling problems are a yearly challenging task faced repeatedly each semester. The problems are considered non-polynomial time (NP) and combinatorial optimization problems (COP), which means that they can be solved through optimization algorithms to produce the aspired optimal timetable. Several techniques have been used to solve university timetabling problems, and most of them use optimization techniques. This paper provides a comprehensive review of the most recent studies dealing with concepts, methodologies, optimization, benchmarks, and open issues of university timetabling problems. The comprehensive review starts by presenting the essence of university timetabling as an NP-COP; defining and clarifying the two classes of university timetabling, University Course Timetabling and University Examination Timetabling; illustrating the adopted algorithms for solving such problems; elaborating the university timetabling constraints to be considered in achieving the optimal timetable; and explaining how to analyze and measure the performance of the optimization algorithms by demonstrating the commonly used benchmark datasets for evaluation. It is noted that meta-heuristic methodologies are widely used in the literature. Additionally, multi-objective optimization has recently been increasingly used to solve such problems and can identify robust university timetabling solutions. Finally, trends and future directions in university timetabling problems are provided. This paper provides good information for students, researchers, and specialists interested in this area of research, and the challenges and possibilities for future research prospects are also explored.
COVID-19 has significantly impacted pandemic growth prediction, and it is critical to determine how to battle and track the disease progression. In this case, COVID-19 data is a time-series dataset that can be projected using different methodologies. Thus, this work aims to gauge the spread of the outbreak severity over time. Furthermore, data analytics and Machine Learning (ML) techniques are employed to gain a broader understanding of virus infections. We have simulated, adjusted, and fitted several statistical time-series forecasting models, linear ML models, and nonlinear ML models. Examples of these models are Logistic Regression, Lasso, Ridge, ElasticNet, Huber Regressor, Lasso Lars, Passive Aggressive Regressor, K-Neighbors Regressor, Decision Tree Regressor, Extra Trees Regressor, Support Vector Regression (SVR), AdaBoost Regressor, Random Forest Regressor, Bagging Regressor, Auto-Regression, Moving Average, Gradient Boosting Regressor, Autoregressive Moving Average (ARMA), Auto-Regressive Integrated Moving Average (ARIMA), Simple Exponential Smoothing, Exponential Smoothing, Holt-Winters, Simple Moving Average, Weighted Moving Average, Croston, and naive Bayes. Furthermore, our suggested methodology includes the development and evaluation of ensemble models built on top of the best-performing statistical and ML-based prediction methods. A third stage in the proposed system examines three different implementations to determine which model delivers the best performance. This best method is then used for future forecasts, and consequently, we can collect the most accurate and dependable predictions.
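Two of the simplest forecasters in the list above, the simple and weighted moving averages, can be sketched directly; the window length and weights are illustrative choices:

```python
def sma_forecast(series, window=3):
    """One-step-ahead simple moving average forecast."""
    return sum(series[-window:]) / window

def wma_forecast(series, weights=(1, 2, 3)):
    """Weighted moving average: later observations weigh more,
    tracking an accelerating case curve faster than the plain SMA."""
    tail = series[-len(weights):]
    return sum(w * x for w, x in zip(weights, tail)) / sum(weights)
```

These naive baselines are worth keeping in any comparison such as the one above: a complex model that cannot beat them on held-out case counts is not adding value.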
Violence detection is very important for public safety. However, it is not an easy task, because recognizing violence in surveillance video requires not only spatial information but also sufficient temporal information. To highlight the temporal information, we propose an efficient deep learning architecture for violence detection based on a temporal attention mechanism, which utilizes a pre-trained MobileNetV3, a convolutional LSTM, and a Temporal Adaptive (TA) attention block. The TA block focuses on further refining the temporal information extracted from the spatial features of the backbone. Experimental results show that the proposed model is validated on three public datasets: the Hockey Fight, Movies, and RWF-2000 datasets.
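The core of a temporal-attention pooling step can be sketched in NumPy. The scorer vector `w` here is a hand-set assumption; the TA block in the paper is a learned module, not this hand-rolled version:

```python
import numpy as np

def temporal_attention(frame_feats, w):
    """Toy temporal-attention pooling: score each frame, softmax over time,
    and return the attention-weighted feature.
    Shapes: frame_feats (T, D), scorer w (D,) -> pooled feature (D,)."""
    scores = frame_feats @ w               # one relevance score per frame
    a = np.exp(scores - scores.max())      # numerically stable softmax
    a /= a.sum()
    return a @ frame_feats                 # weighted sum over the T frames
```

The point of the mechanism is visible even in this sketch: frames whose features score highly (e.g. the instants of a fight) dominate the pooled clip representation instead of being averaged away.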
Shadow extraction and elimination is essential for intelligent transportation systems (ITS) in vehicle tracking applications. Shadows are a source of error for vehicle detection, causing misclassification of vehicles and a high false alarm rate in research on vehicle counting, detection, tracking, and classification. Most existing research addresses shadow extraction of moving vehicles under high intensity and on standard datasets, but extracting shadows from moving vehicles in the low light of real scenes is difficult. To address this problem, a real-scene vehicle dataset was generated by the authors on the Vadodara–Mumbai highway during periods of poor illumination for shadow extraction of moving vehicles. This paper offers robust shadow extraction of moving vehicles and its elimination for vehicle tracking. The method is divided into two phases: in the first phase, we extract foreground regions using a Gaussian mixture model; then, in the second phase, with the help of gamma correction, the intensity ratio, the negative transformation, and a combination of Gaussian filters, we locate and remove the shadow region from the foreground areas. Compared with an existing method, the suggested method achieves an average true negative rate above 90%, and a shadow detection rate SDR (η%) and a shadow discrimination rate SDR (ξ%) of 80%. Hence, the suggested method is more appropriate for moving shadow detection in real scenes.
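The gamma-correction and negative-transformation steps of the second phase can be sketched as follows, operating on intensities normalised to [0, 1]; the gamma value is an illustrative choice, not the paper's tuned setting:

```python
import numpy as np

def gamma_correction(img, gamma=0.5):
    """Power-law intensity mapping; gamma < 1 brightens dark,
    poorly illuminated frames before shadow analysis."""
    return np.clip(img, 0.0, 1.0) ** gamma

def negative_transform(img):
    """Negative transformation: dark shadow pixels become bright,
    which eases thresholding of candidate shadow regions."""
    return 1.0 - np.clip(img, 0.0, 1.0)
```

In a pipeline like the one above, these point operations would be applied to the foreground mask's pixels before the intensity-ratio test and Gaussian filtering isolate the shadow region.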
Nowadays, the security of images and information is very important. This paper introduces a hybrid watermarking and encryption technique for increasing medical image security. First, the secret medical image is encrypted using the Advanced Encryption Standard (AES) algorithm. Then, the secret report of the patient is embedded into the encrypted secret medical image with the Least Significant Bit (LSB) watermarking algorithm. After that, the encrypted secret medical image with the secret report is concealed in a cover medical image using Kekre's Median Codebook Generation (KMCG) algorithm. Afterwards, the obtained stego-image is split into 16 parts and sent to the receiver. We adopt this strategy to send the secret medical image and report over a network securely. The proposed technique is assessed with different encryption quality metrics, including Peak Signal-to-Noise Ratio (PSNR), Correlation Coefficient (Cr), Feature Similarity Index Metric (FSIM), and Structural Similarity Index Metric (SSIM). Histogram estimation is used to confirm the matching between the secret medical image before and after transmission. Simulation results demonstrate that the proposed technique achieves good performance, with high quality of the received medical image and clear image details, in a very short processing time.
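The LSB embedding step can be sketched in NumPy. The AES encryption applied beforehand in the paper is omitted here, and the payload is a raw bit list standing in for the patient report:

```python
import numpy as np

def lsb_embed(cover, bits):
    """Embed a bit sequence into the least significant bits of a
    uint8 image; each carrier pixel changes by at most 1 grey level."""
    flat = cover.flatten()                # flatten() copies, cover is untouched
    assert len(bits) <= flat.size
    flat[: len(bits)] = (flat[: len(bits)] & 0xFE) | np.asarray(bits,
                                                                dtype=np.uint8)
    return flat.reshape(cover.shape)

def lsb_extract(stego, n_bits):
    """Recover the first n_bits payload bits from a stego image."""
    return (stego.flatten()[:n_bits] & 1).tolist()
```

Because the per-pixel distortion is bounded by one grey level, the stego-image keeps the high PSNR/SSIM figures that the evaluation above relies on.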
Arrhythmia has been classified using a variety of methods. Because of the dynamic nature of electrocardiogram (ECG) data, traditional handcrafted approaches are difficult to execute, making machine learning (ML) solutions more appealing. Patients with cardiac arrhythmias can benefit from competent monitoring to save their lives. Cardiac arrhythmia classification and prediction have greatly improved in recent years. Arrhythmias are a category of conditions in which the heart's electrical activity is abnormally rapid or sluggish. Every year, it is one of the main causes of mortality for both men and women worldwide. For the classification of arrhythmias, this work proposes a novel technique based on optimized feature selection and an optimized K-nearest neighbors (KNN) classifier. The proposed method makes use of the UCI repository, which has a 279-attribute high-dimensional cardiac arrhythmia dataset. The proposed approach divides cardiac arrhythmia patients into 16 groups based on the electrocardiography dataset's features. The purpose is to design an efficient intelligent system employing the dipper throated optimization method to categorize cardiac arrhythmia patients. This method of comprehensive arrhythmia classification outperforms earlier methods presented in the literature. The achieved classification accuracy using the proposed approach is 99.8%.
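The KNN core of such a system fits in a few lines; the paper additionally tunes k and the selected feature subset with the dipper throated optimizer, which is not shown in this plain sketch:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Plain k-nearest-neighbours majority vote using Euclidean distance.
    X_train: (n, d) feature matrix, y_train: (n,) class labels, x: (d,) query."""
    d = np.linalg.norm(X_train - x, axis=1)   # distance to every training point
    nearest = np.argsort(d)[:k]               # indices of the k closest samples
    return Counter(y_train[i] for i in nearest).most_common(1)[0][0]
```

On a 279-attribute dataset like the UCI arrhythmia data, distance concentration makes feature selection essential, which is exactly why the abstract pairs KNN with an optimized feature subset.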
Funding: Research Supporting Project Number (RSPD2023R585), King Saud University, Riyadh, Saudi Arabia.
Abstract: Side lobe level (SLL) reduction of antenna arrays significantly enhances the signal-to-interference ratio and improves the quality of service (QoS) in recent and future wireless communication systems, from 5G up to 7G. Furthermore, it improves the array gain and directivity, increasing the detection range and angular resolution of radar systems. This study proposes two highly efficient SLL reduction techniques. These techniques hybridize either the single-convolution or the double-convolution algorithm with the genetic algorithm (GA) to develop the Conv/GA and DConv/GA, respectively. The convolution process determines the element excitations, while the GA optimizes the element spacing. For an M-element linear antenna array (LAA), the convolution of the excitation coefficient vector with itself provides a new excitation vector of length N = 2M − 1. This new vector is divided into three different sets of excitations: the odd excitations, even excitations, and middle excitations, of lengths M, M − 1, and M, respectively. When the same element spacing as the original LAA is used, the odd and even excitations provide a much lower SLL than that of the LAA, but with a much wider half-power beamwidth (HPBW), while the middle excitations give the same HPBW as the original LAA with a relatively higher SLL. To mitigate the increased HPBW of the odd and even excitations, the element spacing is optimized using the GA. Thereby, the synthesized arrays have the same HPBW as the original LAA with a two-fold reduction in the SLL. Furthermore, for extreme SLL reduction, the DConv/GA is introduced. In this technique, the same procedure as the aforementioned Conv/GA technique is performed on the resultant even and odd excitation vectors. It provides a relatively wider HPBW than the original LAA with about a four-fold reduction in the SLL.
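The self-convolution step can be illustrated numerically. For a uniform M-element excitation vector, convolving the vector with itself gives the length-N = 2M − 1 triangular taper, and slicing recovers the three sets described above. The exact index convention for "odd", "even", and "middle" is my reading of the text, not taken from the paper's equations.

```python
import numpy as np

M = 5
a = np.ones(M)                 # uniform excitations of the original LAA
c = np.convolve(a, a)          # self-convolution: length N = 2M - 1

odd = c[0::2]                  # odd-numbered coefficients, length M
even = c[1::2]                 # even-numbered coefficients, length M - 1
start = (len(c) - M) // 2
middle = c[start:start + M]    # the M central coefficients
```

For M = 5 this yields c = [1, 2, 3, 4, 5, 4, 3, 2, 1], odd = [1, 3, 5, 3, 1], even = [2, 4, 4, 2], and middle = [3, 4, 5, 4, 3], so the odd/even sets taper toward the array edges, which is what drives the SLL reduction.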
Funding: This paper is supported by the State Grid Gansu Electric Power Company Science and Technology Project (20220515003).
Abstract: To address the scarcity of optical fibre line fault samples and the inefficiency of manual communication optical fibre fault diagnosis, this paper proposes a communication optical fibre fault diagnosis model based on variational mode decomposition (VMD), fuzzy entropy (FE), and fuzzy clustering (FC). Firstly, based on OTDR curve data collected in the field, VMD is used to extract the intrinsic mode functions (IMFs) of the original signal, and the fuzzy entropy (FE) values of the different components are calculated to characterize the subtle differences between them. The fuzzy entropy of each curve is used as the feature vector, which in turn constructs the communication optical fibre feature vector matrix, and the fuzzy clustering algorithm is used to diagnose faulty optical fibre. The VMD-FE combination can extract subtle feature differences, and the fuzzy clustering algorithm does not require sample training. The experimental results show that, compared with existing feature extraction models and traditional machine learning models, the model in this paper has high accuracy and is relevant to the maintenance of communication optical fibre.
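As a rough sketch of the feature-extraction idea, the fuzzy entropy of a 1-D component can be computed as below. This is a generic FuzzyEn definition with an exponential membership function; the parameter choices m = 2, r = 0.2·std, and n = 2 are common defaults, not values from the paper.

```python
import numpy as np

def _phi(x, m, r, n):
    """Average pairwise similarity of baseline-removed m-length templates."""
    Nm = len(x) - m
    X = np.array([x[i:i + m] - np.mean(x[i:i + m]) for i in range(Nm)])
    d = np.max(np.abs(X[:, None, :] - X[None, :, :]), axis=2)  # Chebyshev distances
    sim = np.exp(-(d ** n) / r)        # fuzzy (exponential) membership
    np.fill_diagonal(sim, 0.0)         # exclude self-matches
    return sim.sum() / (Nm * (Nm - 1))

def fuzzy_entropy(x, m=2, r=None, n=2):
    """FuzzyEn = ln(phi_m) - ln(phi_{m+1}); higher means more irregular."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)
    return np.log(_phi(x, m, r, n)) - np.log(_phi(x, m + 1, r, n))

rng = np.random.default_rng(0)
fe_noise = fuzzy_entropy(rng.standard_normal(300))                 # irregular trace
fe_tone = fuzzy_entropy(np.sin(np.linspace(0, 20 * np.pi, 300)))   # regular trace
```

An irregular trace yields a higher FE than a regular one, which is what lets the FE values of different IMFs separate the OTDR curve shapes.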
Funding: Supported by the National Natural Science Foundation of China (Nos. 62271165, 62027802, 61831008), the Guangdong Basic and Applied Basic Research Foundation (Nos. 2023A1515030297, 2021A1515011572), the Shenzhen Science and Technology Program (ZDSYS20210623091808025), and the Stable Support Plan Program (GXWD20231129102638002).
Abstract: Cooperative utilization of multidimensional resources, including cache, power, and spectrum, in satellite-terrestrial integrated networks (STINs) can provide a feasible approach for massive streaming media content delivery over the seamless global coverage area. However, the on-board resources of a single satellite are extremely limited and lack interaction with other satellites. In this paper, we design a network model with two-layered cache deployment, i.e., a satellite layer and a ground base station layer, and two types of sharing links, i.e., terrestrial-satellite sharing (TSS) links and inter-satellite sharing (ISS) links, to enhance the capability of cooperative delivery over STINs. We use rateless codes for divided-packet content transmission and derive the total energy efficiency (EE) of the whole transmission procedure, defined as the ratio of traffic offloading to energy consumption. We formulate two optimization problems that maximize EE in different sharing scenarios (TSS only and TSS-ISS), and propose two optimized algorithms to obtain the optimal content placement matrices, respectively. Simulation results demonstrate that enabling sharing links with optimized cache placement yields more than a two-fold improvement in EE performance over other traditional placement schemes. In particular, TSS-ISS schemes achieve higher EE performance than TSS-only schemes given a sufficient number of satellites and smaller inter-satellite distances.
Funding: The authors acknowledge the partial support of the Agencia Estatal de Investigación research project PID2019-106231RB-I00, the Universidad Rey Juan Carlos research project "Células fotovoltaicas de tercera generación basadas en semiconductores orgánicos avanzados perovskitas híbridas en estructuras multiunión" (reference M2607), and a pre-doctoral research grant of the Public University of Navarra.
Abstract: The results presented here show for the first time the experimental demonstration of the fabrication of lossy mode resonance (LMR) devices based on perovskite coatings deposited on planar waveguides. Perovskite thin films have been obtained by means of the spin coating technique, and their presence was confirmed by ellipsometry, scanning electron microscopy, and X-ray diffraction testing. The LMRs can be generated in a wide wavelength range, and the experimental results agree with the theoretical simulations. Overall, this study highlights the potential of perovskite thin films for the development of novel LMR-based devices that can be used for environmental monitoring, industrial sensing, and gas detection, among other applications.
Funding: Shenzhen Science and Technology Program, Grant/Award Number: ZDSYS20211021111415025; Shenzhen Institute of Artificial Intelligence and Robotics for Society; Youth Science and Technology Talents Development Project of Guizhou Education Department, Grant/Award Number: QianJiaoheKYZi[2018]459.
Abstract: Facial beauty analysis is an important topic in human society. It may be used as guidance for face beautification applications such as cosmetic surgery. Deep neural networks (DNNs) have recently been adopted for facial beauty analysis and have achieved remarkable performance. However, most existing DNN-based models regard facial beauty analysis as a normal classification task. They ignore important prior knowledge from traditional machine learning models, which illustrates the significant contribution of geometric features to facial beauty analysis. To be specific, landmarks of the whole face and facial organs are introduced to extract geometric features for the decision. Inspired by this, we introduce a novel dual-branch network for facial beauty analysis: one branch takes the Swin Transformer as the backbone to model the full face and global patterns, and another branch focuses on the masked facial organs with a residual network to model the local patterns of certain facial parts. Additionally, the designed multi-scale feature fusion module can further help our network learn complementary semantic information between the two branches. For model optimisation, we propose a hybrid loss function in which a geometric regularisation term is introduced by regressing the facial landmarks, forcing the extracted features to convey facial geometric information. Experiments performed on the SCUT-FBP5500 and SCUT-FBP datasets demonstrate that our model outperforms state-of-the-art convolutional neural network models, which proves the effectiveness of the proposed geometric regularisation and dual-branch structure with the hybrid network. To the best of our knowledge, this is the first study to introduce a Vision Transformer into the facial beauty analysis task.
Abstract: Cooperative non-orthogonal multiple access (NOMA) is heavily studied in the literature as a solution for 5G and beyond-5G applications. Cooperative NOMA transmits a superimposed version of all users' messages simultaneously with the aid of a relay, after which each user decodes its own message. Accordingly, NOMA is deemed a spectrally efficient technique. Another emerging technique exploits orbital angular momentum (OAM), an attractive characteristic of electromagnetic waves. OAM has gathered a great deal of attention in recent years (similar to NOMA) due to its ability to enhance electromagnetic spectrum exploitation, hence increasing the achieved transmission throughput. However, OAM-based transmission suffers from wave divergence, especially at high OAM orders. This limitation reduces the transmission distance. The distance can be extended via cooperative relays (part of cooperative NOMA). A relay helps the source transmit packets to the destination by providing an additional connection that handles the transmission and shortens the distance between source and destination. In this paper, we propose employing OAM transmission in the cooperative NOMA network. Simulation experiments show that OAM transmission helps cooperative NOMA achieve higher throughput than conventional cooperative NOMA. Concurrently, the cooperation part of cooperative NOMA eases the divergence problem of OAM. In addition, the proposed system outperforms the standalone cooperative OAM-based solution.
Abstract: A 4-port multiple-input multiple-output (MIMO) antenna exhibiting low mutual coupling and UWB performance is developed. The four octagonal-shaped antenna elements are connected with a 50 Ω microstrip feed line arranged rotationally to achieve orthogonal polarization for improving the MIMO system performance. The antenna has a wide impedance bandwidth of 7.5 GHz with S11 < −10 dB from 3.5–11 GHz (103.44%) and inter-element isolation higher than 20 dB. Antenna validation is carried out by verifying the simulated and measured results after fabricating the antenna. The results in the form of omnidirectional radiation patterns, peak gain (≥4 dBi), and Envelope Correlation Coefficient (ECC) (≤0.01) are extracted to validate the suggested antenna's performance. As well, time-domain analysis was investigated to demonstrate the operation of the suggested antenna in wideband applications. Finally, the simulated and experimental outcomes have almost similar tendencies, making the antenna suitable for use in UWB MIMO applications.
Abstract: The query optimizer uses cost-based optimization to create an execution plan with the least cost, which also consumes the least amount of resources. Query optimization for relational database systems is a combinatorial optimization problem, which renders exhaustive search impossible as query sizes rise. Increases in CPU performance have surpassed main memory and disk access speeds in recent decades, allowing data compression to be used as a strategy for improving database system performance. Compression and query optimization are the two main factors for performance enhancement: compression reduces the volume of data, whereas query optimization minimizes execution time. Compressing the database reduces the memory requirement, data take less time to load into memory, fewer buffer misses occur, and the size of intermediate results is smaller. This paper performs query optimization on a graph database in a cloud dew environment, considering which database requires less time to execute a query. Both compression and query optimization improve the performance of the databases. This research compares the performance of MySQL and Neo4j databases in terms of memory usage and execution time running on cloud dew servers.
Abstract: Nowadays, there is tremendous growth in biometric authentication and cybersecurity applications. Thus, an efficient way of storing and securing personal biometric patterns is mandatory in most governmental and private sectors. Therefore, designing and implementing robust security algorithms for users' biometrics is still a hot research area to be investigated. This work presents a powerful biometric security system (BSS) to protect different biometric modalities such as faces, iris, and fingerprints. The proposed BSS model is based on hybridizing an auto-encoder (AE) network and a chaos-based ciphering algorithm to cipher the details of the stored biometric patterns and ensure their secrecy. The employed AE network is an unsupervised deep learning (DL) structure used in the proposed BSS model to extract the main biometric features. These obtained features are utilized to generate two random chaos matrices. The first random chaos matrix is used to permute the pixels of biometric images. In contrast, the second random matrix is used to further cipher and confuse the resulting permuted biometric pixels using a two-dimensional (2D) chaotic logistic map (CLM) algorithm. To assess the efficiency of the proposed BSS, (1) different standardized color and grayscale images of the examined fingerprint, face, and iris biometrics were used, and (2) comprehensive security and recognition evaluation metrics were measured. The assessment results have proven the authentication and robustness superiority of the proposed BSS model compared to other existing BSS models. For example, the proposed BSS succeeds in achieving a high area under the receiver operating characteristic curve (AROC) value of 99.97% and low rates of 0.00137, 0.00148, and 0.00157 for the equal error rate (EER), false reject rate (FRR), and false accept rate (FAR), respectively.
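The permute-then-confuse pipeline can be sketched with a 1-D logistic map. The paper uses a 2D CLM and chaos matrices seeded from AE features; this simplified 1-D variant only illustrates the idea, and the key values below are arbitrary.

```python
import numpy as np

def logistic_stream(x0, mu, length):
    """Iterate the chaotic logistic map x <- mu*x*(1-x) and return the trajectory."""
    xs = np.empty(length)
    x = x0
    for i in range(length):
        x = mu * x * (1.0 - x)
        xs[i] = x
    return xs

def chaos_encrypt(img, keys=(0.3141, 0.6535), mu=3.99):
    flat = img.flatten()
    perm = np.argsort(logistic_stream(keys[0], mu, flat.size))  # chaotic permutation
    mask = (logistic_stream(keys[1], mu, flat.size) * 256).astype(np.uint8)
    return (flat[perm] ^ mask), perm, mask                      # permute, then confuse

def chaos_decrypt(cipher, perm, mask, shape):
    flat = np.empty_like(cipher)
    flat[perm] = cipher ^ mask      # undo confusion, then undo the permutation
    return flat.reshape(shape)

img = np.arange(64, dtype=np.uint8).reshape(8, 8)   # toy "biometric" image
cipher, perm, mask = chaos_encrypt(img)
restored = chaos_decrypt(cipher, perm, mask, img.shape)
```

The receiver regenerates `perm` and `mask` from the shared keys, so only `cipher` needs to travel; decryption reverses the XOR confusion first and the permutation second.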
Abstract: In the cloud environment, the transfer of data from one cloud server to another is called migration. Data can be delivered in various ways, from one data centre to another. This research aims to increase the migration performance of the virtual machine (VM) in the cloud environment. VMs allow cloud customers to store essential data and resources. However, server usage has grown dramatically due to the virtualization of computer systems, resulting in higher data centre power consumption, storage needs, and operating expenses. Multiple VMs in one data centre share resources like central processing unit (CPU) cache, network bandwidth, memory, and application bandwidth. In a multi-cloud setting, VM migration addresses the performance degradation due to cloud server configuration, unbalanced traffic load, resource load management, and fault situations during data transfer. VM migration speed is influenced by the size of the VM, the dirty rate of the running application, and the latency of migration iterations. As a result, evaluating VM migration performance while considering all of these factors becomes a difficult task. The main effort of this research is to assess the effect of these migration problems on performance. The simulation results in MATLAB show that if the VM size grows, the migration time and the downtime can be impacted by three orders of magnitude. As the dirty page rate decreases, the migration time and the downtime grow, and the latency decreases as network bandwidth increases during the migration time and the post-migration overhead calculation when the VM transfer is completed. All the simulated VM migration cases were performed in a fuzzy inference system with performance graphs.
Funding: Supported by the MSIT (Ministry of Science and ICT), Korea, under the ICAN (ICT Challenge and Advanced Network of HRD) Program (IITP-2023-2020-0-01832), supervised by the IITP (Institute of Information & Communications Technology Planning & Evaluation), and by the Soonchunhyang University Research Fund.
Abstract: Black fungus is a rare and dangerous mycosis that usually affects the brain and lungs and can be life-threatening in diabetic cases. Recently, some COVID-19 survivors, especially those with co-morbid diseases, have been susceptible to black fungus. Therefore, recovered COVID-19 patients should seek medical support when they notice mucormycosis symptoms. This paper proposes a novel ensemble deep-learning model that includes three pre-trained models: ResNet-50, VGG-19, and Inception. Our approach is medically intuitive and efficient compared to traditional deep learning models. An image dataset was aggregated from various resources and divided into two classes: a black fungus class and a skin infection class. To the best of our knowledge, our study is the first concerned with building black fungus detection models based on deep learning algorithms. The proposed approach can significantly improve the performance of the classification task and increase the generalization ability of such a binary classification task. According to the reported results, it empirically achieved a sensitivity of 0.9907, a specificity of 0.9938, a precision of 0.9938, and a negative predictive value of 0.9907.
Funding: The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through the Small Groups Project under Grant Number (168/43), and to Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2022R237), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would also like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work through Grant Code (22UQU4320484DSR01).
Abstract: In recent years, wireless networks have been widely used in different domains. This phenomenon has increased the number of Internet of Things (IoT) devices and their applications. Though IoT has numerous advantages, commonly-used IoT devices are exposed to cyber-attacks periodically. This scenario necessitates real-time automated detection and mitigation of different types of attacks in high-traffic networks. The Software-Defined Networking (SDN) technique and Machine Learning (ML)-based intrusion detection techniques are effective tools that can quickly respond to different types of attacks in IoT networks. Intrusion Detection System (IDS) models can be employed to secure the SDN-enabled IoT environment in this scenario. The current study devises a Harmony Search algorithm-based Feature Selection with Optimal Convolutional Autoencoder (HSAFS-OCAE) for intrusion detection in the SDN-enabled IoT environment. The presented HSAFS-OCAE method follows a three-stage process, in which the Harmony Search Algorithm-based FS (HSAFS) technique is exploited first for feature selection. Next, the CAE method is leveraged to recognize and classify intrusions in the SDN-enabled IoT environment. Finally, the Artificial Fish Swarm Algorithm (AFSA) is used to fine-tune the hyperparameters. This process improves the outcomes of the intrusion detection process executed by the CAE algorithm and shows the work's novelty. The proposed HSAFS-OCAE technique was experimentally validated under different aspects, and the comparative analysis results established the supremacy of the proposed model.
Funding: The authors would like to thank the Taif University Researchers Supporting Project TURSP 2020/34, Taif University, Taif, Saudi Arabia, for supporting this work.
Abstract: Classification of electroencephalogram (EEG) signals for humans can be achieved via artificial intelligence (AI) techniques. In particular, the EEG signals associated with seizure epilepsy can be detected to distinguish between epileptic and non-epileptic regions. From this perspective, an automated AI technique with a digital processing method can be used to improve these signals. This paper proposes two classifiers, long short-term memory (LSTM) and support vector machine (SVM), for the classification of seizure and non-seizure EEG signals. These classifiers are applied to a public dataset, namely the University of Bonn dataset, which consists of two classes: seizure and non-seizure. In addition, a fast Walsh-Hadamard Transform (FWHT) technique is implemented to analyze the EEG signals within the recurrence space of the brain. Thus, Hadamard coefficients of the EEG signals are obtained via the FWHT. Moreover, the FWHT contributes to an efficient derivation of seizure EEG recordings from non-seizure EEG recordings. Also, a k-fold cross-validation technique is applied to validate the performance of the proposed classifiers. The LSTM classifier provides the best performance, with a testing accuracy of 99.00%. The training and testing loss rates for the LSTM are 0.0029 and 0.0602, respectively, while the weighted average precision, recall, and F1-score for the LSTM are all 99.00%. The results of the SVM classifier in terms of accuracy, sensitivity, and specificity reached 91%, 93.52%, and 91.3%, respectively. The computational time consumed for training the LSTM and SVM is 2000 and 2500 s, respectively. The results show that the LSTM classifier provides better performance than the SVM in the classification of EEG signals. Overall, the proposed classifiers provide high classification accuracy compared to previously published classifiers.
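The FWHT itself is a simple butterfly recursion. A minimal NumPy version (natural Hadamard ordering, which is not necessarily the ordering used in the paper) looks like:

```python
import numpy as np

def fwht(a):
    """Fast Walsh-Hadamard transform; input length must be a power of two."""
    a = np.asarray(a, dtype=float).copy()
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            x, y = a[i:i + h].copy(), a[i + h:i + 2 * h].copy()
            a[i:i + h] = x + y          # butterfly: sum
            a[i + h:i + 2 * h] = x - y  # butterfly: difference
        h *= 2
    return a

coeffs = fwht([1, 0, 1, 0, 0, 1, 1, 0])   # toy 8-sample "EEG" window
```

Applying the transform twice recovers the input scaled by its length, which is a quick self-check for any FWHT implementation; these Hadamard coefficients are what the abstract feeds to the LSTM and SVM classifiers.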
Funding: The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through the Small Groups Project under grant number (168/43), and to Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R151), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would also like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work through Grant Code (22UQU4310373DSR59).
Abstract: Emerging technologies such as edge computing, the Internet of Things (IoT), 5G networks, big data, Artificial Intelligence (AI), and Unmanned Aerial Vehicles (UAVs) empower Industry 4.0 with a progressive production methodology that attends to the interaction between machines and human beings. In the literature, various authors have focused on resolving security problems in UAV communication to provide safety for vital applications. The current research article presents a Circle Search Optimization with Deep Learning Enabled Secure UAV Classification (CSODL-SUAVC) model for the Industry 4.0 environment. The suggested CSODL-SUAVC methodology is aimed at accomplishing two core objectives: secure communication via image steganography and image classification. Primarily, the proposed CSODL-SUAVC method involves the following methods: Multi-Level Discrete Wavelet Transformation (ML-DWT), CSO-related Optimal Pixel Selection (CSO-OPS), and signcryption-based encryption. The proposed model deploys the CSO-OPS technique to select the optimal pixel points in cover images. The secret images, encrypted by the signcryption technique, are embedded into the cover images. Besides, the image classification process includes three components: Super-Resolution using Convolutional Neural Network (SRCNN), the Adam optimizer, and a softmax classifier. The integration of the CSO-OPS algorithm and the Adam optimizer helps achieve the maximum performance in UAV communication. The proposed CSODL-SUAVC model was experimentally validated using benchmark datasets, and the outcomes were evaluated under distinct aspects. The simulation outcomes established the superior performance of the CSODL-SUAVC model over recent approaches.
Funding: This research work was supported by Universiti Malaysia Sabah, Malaysia.
Abstract: University timetabling problems are a yearly challenging task faced repeatedly each semester. These problems are non-deterministic polynomial-time (NP) combinatorial optimization problems (COPs), which means they can be addressed with optimization algorithms to produce the desired optimal timetable. Several techniques have been used to solve university timetabling problems, and most of them use optimization techniques. This paper provides a comprehensive review of the most recent studies dealing with the concepts, methodologies, optimization, benchmarks, and open issues of university timetabling problems. The comprehensive review starts by presenting the essence of university timetabling as an NP-COP; defining and clarifying the two classes of university timetabling, University Course Timetabling and University Examination Timetabling; illustrating the algorithms adopted for solving such problems; elaborating the university timetabling constraints to be considered in achieving the optimal timetable; and explaining how to analyze and measure the performance of the optimization algorithms by demonstrating the commonly used benchmark datasets for evaluation. It is noted that meta-heuristic methodologies are widely used in the literature. Additionally, multi-objective optimization has recently been increasingly used to solve this problem and can identify robust university timetabling solutions. Finally, trends and future directions in university timetabling problems are provided. This paper provides good information for students, researchers, and specialists interested in this area of research. The challenges and possibilities for future research prospects are also explored.
Funding: The authors extend their appreciation to the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia, for funding this research work through project number RI-44-0525.
Abstract: COVID-19 has significantly impacted pandemic growth prediction, which is critical in determining how to battle and track the disease's progression. In this case, COVID-19 data is a time-series dataset that can be projected using different methodologies. Thus, this work aims to gauge the spread of the outbreak's severity over time. Furthermore, data analytics and Machine Learning (ML) techniques are employed to gain a broader understanding of virus infections. We have simulated, adjusted, and fitted several statistical time-series forecasting models, linear ML models, and nonlinear ML models. Examples of these models are Logistic Regression, Lasso, Ridge, ElasticNet, Huber Regressor, Lasso Lars, Passive Aggressive Regressor, K-Neighbors Regressor, Decision Tree Regressor, Extra Trees Regressor, Support Vector Regression (SVR), AdaBoost Regressor, Random Forest Regressor, Bagging Regressor, AutoRegression, Moving Average, Gradient Boosting Regressor, Autoregressive Moving Average (ARMA), Auto-Regressive Integrated Moving Average (ARIMA), Simple Exponential Smoothing, Exponential Smoothing, Holt-Winters, Simple Moving Average, Weighted Moving Average, Croston, and naive Bayes. Furthermore, our suggested methodology includes the development and evaluation of ensemble models built on top of the best-performing statistical and ML-based prediction methods. A third stage in the proposed system examines three different implementations to determine which model delivers the best performance. This best method is then used for future forecasts, and consequently, we can collect the most accurate and dependable predictions.
Abstract: Violence detection is very important for public safety. However, violence detection is not an easy task, because recognizing violence in surveillance video requires not only spatial information but also sufficient temporal information. To highlight the temporal information, we propose an efficient deep learning architecture for violence detection based on a temporal attention mechanism, which utilizes a pre-trained MobileNetV3, a convolutional LSTM, and a Temporal Adaptive (TA) attention block. The TA block focuses on further refining the temporal information drawn from the spatial features extracted by the backbone. Experimental results show the proposed model is validated on three public datasets: the Hockey Fight, Movies, and RWF-2000 datasets.
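The core temporal-attention idea, scoring each frame, normalizing the scores with a softmax over time, and pooling, can be sketched independent of the backbone. The scoring here is a plain dot product with a learned vector; the paper's TA block is more elaborate, so this is only an illustration of the mechanism.

```python
import numpy as np

def temporal_attention_pool(features, w):
    """Weight per-frame features by softmax attention and sum over time.

    features: (T, D) per-frame feature vectors; w: (D,) learned scoring vector.
    Returns the (D,) pooled clip feature and the (T,) attention weights.
    """
    scores = features @ w                            # one scalar score per frame
    scores = scores - scores.max()                   # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()    # softmax over the time axis
    return alpha @ features, alpha

rng = np.random.default_rng(0)
feats = rng.standard_normal((16, 8))    # toy sizes: 16 frames, 8-dim features
pooled, alpha = temporal_attention_pool(feats, rng.standard_normal(8))
```

Frames with high scores dominate the pooled clip descriptor, which is how the attention block emphasizes the few frames that actually contain violent motion.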
Funding: Funded by Researchers Supporting Project Number (RSP2023R503), King Saud University, Riyadh, Saudi Arabia.
Abstract: Shadow extraction and elimination is essential for intelligent transportation systems (ITS) in vehicle tracking applications. Shadows are a source of error for vehicle detection, causing misclassification of vehicles and a high false alarm rate in research on vehicle counting, vehicle detection, vehicle tracking, and classification. Most existing research addresses shadow extraction of moving vehicles under high illumination and on standard datasets, but extracting shadows from moving vehicles in the low light of real scenes is difficult. To address this problem, we generated a real-scene vehicle dataset ourselves on the Vadodara–Mumbai highway during periods of poor illumination for shadow extraction of moving vehicles. This paper offers robust shadow extraction of moving vehicles and its elimination for vehicle tracking. The method is divided into two phases: in the first phase, we extract foreground regions using a mixture-of-Gaussians model; then, in the second phase, with the help of gamma correction, the intensity ratio, the negative transformation, and a combination of Gaussian filters, we locate and remove the shadow region from the foreground areas. Compared with the outcomes of an existing method, the suggested method achieves an average true negative rate above 90%, and a shadow detection rate SDR (η%) and a shadow discrimination rate SDR (ξ%) of 80%. Hence, the suggested method is more appropriate for moving-shadow detection in real scenes.
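The gamma-correction step used in the second phase follows the standard power-law mapping. A minimal sketch is below; the paper's chosen gamma value is not stated here, so the 0.5 used in the example is illustrative.

```python
import numpy as np

def gamma_correct(img, gamma):
    """Power-law (gamma) correction for an 8-bit image; gamma < 1 brightens shadows."""
    norm = img.astype(float) / 255.0          # map pixel values to [0, 1]
    return np.clip(255.0 * norm ** gamma, 0, 255).astype(np.uint8)

dark = np.array([[10, 40], [90, 200]], dtype=np.uint8)  # toy low-light patch
bright = gamma_correct(dark, 0.5)
```

With gamma below one, dark shadow pixels are lifted much more than already-bright pixels, which makes the shadow region easier to separate from the vehicle in low-light foreground masks.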