In recent years, there has been extensive research on object detection methods applied to optical remote sensing images utilizing convolutional neural networks. Despite these efforts, the detection of small objects in remote sensing remains a formidable challenge. Deep network structures cause the loss of object features, to the point that some subtle features associated with small objects are nearly eliminated in deep layers. Additionally, the features of small objects are susceptible to interference from background features contained within the image, leading to a decline in detection accuracy. Moreover, the sensitivity of small objects to bounding box perturbation further increases the detection difficulty. In this paper, we introduce a novel approach, Cross-Layer Fusion and Weighted Receptive Field-based YOLO (CAW-YOLO), specifically designed for small object detection in remote sensing. To address feature loss in deep layers, we devise a cross-layer attention fusion module. Background noise is effectively filtered through the incorporation of Bi-Level Routing Attention (BRA). To enhance the model's capacity to perceive multi-scale objects, particularly small-scale objects, we introduce a weighted multi-receptive field atrous spatial pyramid pooling module. Furthermore, we mitigate the sensitivity arising from bounding box perturbation by incorporating the joint Normalized Wasserstein Distance (NWD) and Efficient Intersection over Union (EIoU) losses. The efficacy of the proposed model in detecting small objects in remote sensing has been validated through experiments conducted on three publicly available datasets. The experimental results demonstrate the model's pronounced advantages in small object detection for remote sensing, surpassing the performance of current mainstream models.
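The NWD loss mentioned above has a simple closed form when each bounding box is modeled as a 2D Gaussian. Below is a minimal sketch; the box format `(cx, cy, w, h)` and the normalizing constant `c` (dataset-dependent in the original NWD formulation) are assumptions, not taken from this paper:

```python
import math

def nwd(box_a, box_b, c=12.8):
    # Each box (cx, cy, w, h) is modeled as a 2D Gaussian
    # N((cx, cy), diag((w/2)^2, (h/2)^2)). For such axis-aligned Gaussians
    # the squared 2nd-order Wasserstein distance has this closed form.
    cxa, cya, wa, ha = box_a
    cxb, cyb, wb, hb = box_b
    w2_sq = ((cxa - cxb) ** 2 + (cya - cyb) ** 2
             + (wa / 2 - wb / 2) ** 2 + (ha / 2 - hb / 2) ** 2)
    # Exponentiate to normalize the distance into (0, 1], like IoU.
    return math.exp(-math.sqrt(w2_sq) / c)
```

Unlike IoU, this similarity stays smooth for tiny boxes with slight offsets, which is why it suits small-object regression.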
In differentiable architecture search methods, a more efficient search space design can significantly improve the performance of the searched architecture, which requires carefully defining search spaces of different complexity according to the various operations. Meanwhile, rationalizing the search strategy used to explore a well-defined search space further improves the speed and efficiency of architecture search. With this in mind, we propose a faster and more efficient differentiable architecture search method, AllegroNAS. Firstly, we introduce a more efficient search space enriched by two redefined convolution modules. Secondly, we utilize a more efficient architectural parameter regularization method, mitigating overfitting during the search process and reducing the error brought about by gradient approximation. Meanwhile, we introduce a natural exponential cosine annealing method to make the learning rate of the neural network training process more suitable for the search procedure. Moreover, group convolution and data augmentation are employed to reduce the computational cost. Finally, through extensive experiments on several public datasets, we demonstrate that our method can more swiftly search for better-performing neural network architectures in a more efficient search space, validating the effectiveness of our approach.
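The abstract does not give the exact schedule, so the following is only a plausible sketch of a "natural exponential cosine annealing" learning rate: an ordinary cosine annealing curve damped by an exp(-t/T) factor. The functional form and the default rates are assumptions for illustration:

```python
import math

def nat_exp_cosine_lr(t, t_max, lr_max=0.025, lr_min=0.001):
    # Cosine annealing from lr_max down to lr_min over t_max steps,
    # with the annealed amplitude additionally damped by a
    # natural-exponential factor (hypothetical form, not the paper's).
    cos_term = 0.5 * (1 + math.cos(math.pi * t / t_max))
    return lr_min + (lr_max - lr_min) * math.exp(-t / t_max) * cos_term
```

Compared with plain cosine annealing, the exponential factor front-loads the decay, shrinking the learning rate faster early in the search.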
Broadcasting gateway equipment generally uses a method of simply switching to a spare input stream when a failure occurs in the main input stream. However, when the transmission environment is unstable, problems such as reduced equipment lifespan due to frequent switching, and interruption, delay, and stoppage of services may occur. Therefore, it is necessary to apply a machine learning (ML) method that can automatically judge and classify network-related service anomalies and switch multi-input signals without dropping or altering them, by predicting or quickly determining the time of error occurrence so that streams can be switched smoothly when problems such as transmission errors arise. In this paper, we propose an intelligent packet switching method based on classification, one of the supervised learning methods, that presents the risk level of abnormal multi-streams occurring in broadcasting gateway equipment based on data. Furthermore, we subdivide the risk levels obtained from the classification techniques into probabilities, and then derive vectorized representative values for each attribute of the collected input data and continuously update them. The obtained reference vector is used for the switching judgment through its cosine similarity with the input data obtained when a dangerous situation occurs. In broadcasting gateway equipment to which the proposed method is applied, it is possible to perform more stable and smarter switching than before, solving problems of equipment reliability and broadcasting accidents, and stable video streaming can be maintained as well.
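The switching judgment described above can be sketched as a cosine-similarity comparison against the learned reference vector; the feature vectors and the 0.9 threshold here are hypothetical illustrations, not values from the paper:

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def should_switch(input_vec, risk_reference_vec, threshold=0.9):
    # Switch to the spare stream when the incoming attribute vector is
    # close to the continuously updated "dangerous situation" reference.
    return cosine_similarity(input_vec, risk_reference_vec) >= threshold
```

In practice the reference vector would be re-derived as new labeled risk data arrives, so the decision boundary tracks the current transmission environment.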
Incomplete fault signal characteristics and ease of noise contamination are issues with current rolling bearing early fault diagnosis methods, making it challenging to ensure fault diagnosis accuracy and reliability. A novel approach is proposed in this paper that integrates enhanced symplectic geometry mode decomposition with cosine difference limitation and a calculus operator (ESGMD-CC) with an artificial fish swarm algorithm (AFSA)-optimized extreme learning machine (ELM), to enhance the extraction of fault features and thus improve the accuracy of fault diagnosis. Firstly, SGMD decomposes the raw vibration signal into multiple symplectic geometry components (SGCs). Secondly, the iterations are reset by the cosine difference limitation to effectively separate the redundant components from the representative ones. Additionally, the calculus operator is applied to strengthen weak fault features and make them easier to extract, and singular value decomposition (SVD) weighted by power spectrum entropy (PSE) is utilized as the sample feature representation. Finally, an ELM iteratively optimized by AFSA is adopted as the classifier for fault identification. The superior performance of the proposed method has been validated by various experiments.
This paper covers the concept of Fourier series and its application to a periodic signal. A periodic signal is a signal that repeats its pattern over time at regular intervals. The inspiring idea is to approximate a regular periodic signal, under the Dirichlet conditions, via a linear superposition of trigonometric functions; thus Fourier polynomials are constructed. The Dirichlet conditions are a set of mathematical conditions providing a foundational framework for the validity of the Fourier series representation. By understanding and applying these conditions, we can accurately represent and process periodic signals, leading to advancements in various areas of signal processing. The resulting Fourier approximation allows complex periodic signals to be expressed as a sum of simpler sinusoidal functions, making it easier to analyze and manipulate such signals.
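As a concrete instance of such an approximation, the Fourier partial sums of an odd square wave (which satisfies the Dirichlet conditions) can be computed directly from the classical expansion f(t) = (4/π) Σ sin(kt)/k over odd k:

```python
import math

def square_wave_partial_sum(t, n_terms):
    # Fourier polynomial of an odd square wave of period 2*pi that
    # takes the value +1 on (0, pi) and -1 on (-pi, 0):
    #   f(t) = (4/pi) * sum over odd k of sin(k*t)/k
    return (4 / math.pi) * sum(
        math.sin(k * t) / k for k in range(1, 2 * n_terms, 2)
    )
```

Adding more terms sharpens the approximation near the jumps, illustrating both the convergence guaranteed by the Dirichlet conditions and the Gibbs overshoot at the discontinuities.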
This paper presents a new dimension reduction strategy for medium and large-scale linear programming problems. The proposed method uses a subset of the original constraints and combines two algorithms: the weighted average and the cosine simplex algorithm. The first approach identifies binding constraints by using the weighted average of each constraint, whereas the second algorithm is based on the cosine similarity between the vector of the objective function and the constraints. These two approaches are complementary, and when used together, they locate the essential subset of initial constraints required for solving medium and large-scale linear programming problems. After reducing the dimension of the linear programming problem using the subset of the essential constraints, the solution method can be chosen from any suitable method for linear programming. The proposed approach was applied to a set of well-known benchmarks as well as more than 2000 random medium and large-scale linear programming problems. The results are promising, indicating that the new approach contributes to the reduction of both the size of the problems and the total number of iterations required. A tree-based classification model also confirmed the need for combining the two approaches. A detailed numerical example, the general numerical results, and the statistical analysis for the decision tree procedure are presented.
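The cosine criterion described above can be sketched as ranking the constraint rows of A by their cosine similarity to the objective vector c: for max cᵀx subject to Ax ≤ b, constraints whose normals point most nearly along the objective gradient are the most likely to be binding at the optimum. This is an illustrative sketch of that ranking step only, not the paper's full reduction algorithm:

```python
import math

def cosine(u, v):
    # Cosine of the angle between vectors u and v.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def rank_constraints_by_cosine(c, A):
    # Return constraint row indices sorted from most to least aligned
    # with the objective vector c; the top of the list is the candidate
    # essential (binding) subset.
    sims = [(cosine(c, row), i) for i, row in enumerate(A)]
    return [i for _, i in sorted(sims, reverse=True)]
```

A reduced problem would then be solved using only the top-ranked rows, with any violated dropped constraint reinstated afterwards.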
A naïve discussion of Fermat's last theorem conundrum is described. The present theorem's proof is grounded on the well-known properties of sums of powers of the sine and cosine functions, the Minkowski norm definition, and some vector-specific structures.
Applied linguistics is an interdisciplinary domain which identifies, investigates, and offers solutions to language-related real-life problems. The new coronavirus disease, known as COVID-19, has severely affected the everyday life of people all over the world. Specifically, since there was insufficient access to vaccines and no straightforward or reliable treatment for coronavirus infection, governments initiated appropriate preventive measures (like lockdowns, physical separation, and masking) to combat this extremely transmittable disease. As a result, individuals spent more time on online social media platforms (i.e., Twitter, Facebook, Instagram, LinkedIn, and Reddit) and expressed their thoughts and feelings about coronavirus infection. Twitter has become one of the most popular social media platforms and allows anyone to post tweets. This study proposes a sine cosine optimization with bidirectional gated recurrent unit-based sentiment analysis (SCOBGRU-SA) on COVID-19 tweets. The SCOBGRU-SA technique aims to detect and classify the various sentiments in Twitter data during the COVID-19 pandemic. To accomplish this, the SCOBGRU-SA technique follows data pre-processing and the FastText word embedding process. Moreover, the BGRU model is utilized to recognise and classify sentiments present in the tweets. Furthermore, the SCO algorithm is exploited for tuning the BGRU method's hyperparameters, which helps attain improved classification performance. The experimental validation of the SCOBGRU-SA technique takes place using a benchmark dataset, and the results signify its promising performance compared to other DL models.
Background: Genomic selection (GS) has revolutionized animal and plant breeding since its first implementation via early selection before measuring phenotypes. Besides the genome, transcriptome and metabolome information are increasingly considered new sources for GS. Difficulties in building models with multi-omics data for GS and limits on specimen availability have both delayed progress in investigating multi-omics. Results: We utilized the Cosine kernel to map genomic and transcriptomic data as n×n symmetric matrices (the G matrix and T matrix), combined with best linear unbiased prediction (BLUP) for GS. Here, we defined five kernel-based prediction models: genomic BLUP (GBLUP), transcriptome BLUP (TBLUP), multi-omics BLUP (MBLUP, M = ratio×G + (1-ratio)×T), multi-omics single-step BLUP (mssBLUP), and weighted multi-omics single-step BLUP (wmssBLUP) to integrate transcribed individuals and the genotyped resource population. Predictive accuracy evaluations on four traits of the Chinese Simmental beef cattle population showed that (1) MBLUP was far preferable to GBLUP (ratio = 1.0), (2) the prediction accuracies of wmssBLUP and mssBLUP had 4.18% and 3.37% average improvement over GBLUP, and (3) the accuracy of wmssBLUP increased with a growing proportion of transcribed cattle in the whole resource population. Conclusions: We conclude that the inclusion of transcriptome data in GS has the potential to improve accuracy. Moreover, wmssBLUP is a promising alternative for the present situation, in which plenty of individuals are genotyped while fewer are transcribed.
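The kernel construction behind these models can be sketched directly: map an n×p omics matrix to an n×n Cosine similarity matrix, then blend the genomic and transcriptomic kernels as M = ratio×G + (1-ratio)×T. This sketch covers only the kernel step, not the BLUP mixed-model solver:

```python
import numpy as np

def cosine_kernel(X):
    # Rows of X are samples; entry (i, j) of the result is the cosine
    # of the angle between samples i and j, giving an n x n symmetric
    # relationship matrix.
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    return Xn @ Xn.T

def mblup_kernel(G, T, ratio):
    # MBLUP relationship matrix: M = ratio*G + (1 - ratio)*T, where G
    # and T come from genomic and transcriptomic data respectively.
    # ratio = 1.0 recovers plain GBLUP.
    return ratio * G + (1 - ratio) * T
```

The blended matrix M then replaces the usual genomic relationship matrix in the BLUP equations.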
Labeled data is widely used in various classification tasks. However, a huge challenge is that labels are often added artificially: wrong labels added by malicious users affect the training of the model, and the unreliability of labeled data has hindered research. To solve these problems, we propose a framework for Label Noise Filtering and Missing Label Supplement (LNFS), and we take location labels in Location-Based Social Networks (LBSN) as an example to implement our framework. For label noise filtering, we first use FastText to transform a restaurant's labels into vectors; then, based on the assumption that the label most similar to all the other labels of the location is the most representative, we use cosine similarity to judge and select that label. For missing labels, we use simple common-word similarity to judge the similarity of users' comments, and then use the labels of similar restaurants to supplement the missing ones. To optimize the performance of the model, we introduce game theory to simulate the game between malicious users and the model, improving the model's reliability. Finally, a case study is given to illustrate the effectiveness and reliability of LNFS.
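The "most representative label" rule above can be sketched as choosing the label vector with the highest total cosine similarity to all the others; the FastText embedding step is omitted and the vectors are assumed already given:

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def most_representative(label_vecs):
    # Return the index of the label whose vector is most similar, in
    # total, to all the other label vectors of the same location.
    best_i, best_score = 0, float("-inf")
    for i, v in enumerate(label_vecs):
        score = sum(cosine_similarity(v, u)
                    for j, u in enumerate(label_vecs) if j != i)
        if score > best_score:
            best_i, best_score = i, score
    return best_i
```

Labels whose similarity to the representative one falls below some cutoff would then be treated as noise and filtered out.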
Due to the complexity of the marine environment, underwater acoustic signals are affected by complex background noise during transmission, and underwater acoustic signal denoising has always been a difficult problem in underwater acoustic signal processing. To obtain a better denoising effect, a new denoising method for underwater acoustic signals, named BVMD-OFDE-CSST-BVMD-FDE, is proposed. It combines variational mode decomposition optimized by the black widow optimization algorithm (BVMD), a fluctuation-based dispersion entropy threshold improved by the Otsu method (OFDE), a cosine similarity stationary threshold (CSST), a second BVMD stage, and fluctuation-based dispersion entropy (FDE). In the first place, the original signal is decomposed into a series of intrinsic mode functions (IMFs) by BVMD. Afterwards, pure IMFs, mixed IMFs, and noise IMFs are distinguished by OFDE and CSST, and the pure and mixed IMFs are reconstructed to obtain a primary denoised signal. In the end, the primary denoised signal is decomposed into IMFs by BVMD again, the FDE value is used to distinguish noise IMFs from pure IMFs, and the pure IMFs are reconstructed to obtain the final denoised signal. The proposed method has three advantages: (i) BVMD can adaptively select the decomposition layer and penalty factor of VMD. (ii) FDE and cosine similarity are used as double criteria to distinguish noise IMFs from useful IMFs, and the Otsu and CSST algorithms effectively avoid the error caused by manually selected thresholds. (iii) The secondary decomposition can make up for the deficiency of the primary decomposition and further remove a small amount of noise. A chaotic signal and a real ship signal are denoised. The experimental results show that the proposed method denoises effectively, improves on the result of the primary decomposition alone, and has good practical value.
Shape and size optimization with frequency constraints is a highly nonlinear problem with mixed design variables, a non-convex search space, and multiple local optima. Therefore, a hybrid sine cosine firefly algorithm (HSCFA) is proposed to acquire more accurate solutions with fewer finite element analyses. The full attraction model of the firefly algorithm (FA) is analyzed, and the factors that affect its computational efficiency and accuracy are revealed. A modified FA with a simplified attraction model and the adaptive parameter of the sine cosine algorithm (SCA) is proposed to reduce computational complexity and enhance the convergence rate. Then, the population is classified, and the different sub-populations are updated by the modified FA and SCA respectively. Besides, a random search strategy based on Lévy flight is adopted to update stagnant or infeasible solutions and enhance population diversity. An elitist selection technique is applied to save promising solutions and further improve the convergence rate. Moreover, an adaptive penalty function is employed to deal with the constraints. Finally, the performance of HSCFA is demonstrated through numerical examples with nonstructural masses and frequency constraints. The results show that HSCFA is an efficient and competitive tool for shape and size optimization problems with frequency constraints.
The occurrence of crimes has been on a constant rise despite the emerging discoveries and advancements in the technological field over the past decade. One of the most tedious tasks is to track a suspect once a crime is committed. As most crimes are committed by individuals who have a history of felonies, it is essential to have a monitoring system that not only detects the face of the person who has committed the crime but also establishes their identity. Hence, a smart criminal detection and identification system is proposed that makes use of the OpenCV Deep Neural Network (DNN) model, which employs a Single Shot Multibox Detector for face detection, together with an auto-encoder model whose encoder part is used for matching the captured facial images with those of criminals. After detection and extraction of the face in the image by face cropping, the captured face is compared with the images in the criminal database. The comparison is performed by calculating the similarity value between each pair of images using the cosine similarity metric. After plotting the values in a graph to find the threshold value, we conclude that the confidence rate of the encoder model is 0.75 and above.
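The matching step, comparing a captured face embedding against the database by cosine similarity with the reported 0.75 confidence threshold, might look like the sketch below. The encoder that produces the vectors is assumed and not shown, and the database format is hypothetical:

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def identify(face_vec, database, threshold=0.75):
    # Return the best-matching identity if its similarity clears the
    # 0.75 threshold reported in the abstract, otherwise None
    # (i.e., the face matches no known criminal).
    best_id, best_sim = None, -1.0
    for person_id, ref_vec in database.items():
        sim = cosine_similarity(face_vec, ref_vec)
        if sim > best_sim:
            best_id, best_sim = person_id, sim
    return best_id if best_sim >= threshold else None
```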
Many complex optimization problems in the real world easily fall into local optima and fail to find the optimal solution, so new techniques and methods are needed to solve such challenges. Metaheuristic algorithms have received a lot of attention in recent years because of their efficient performance and simple structure. The Sine Cosine Algorithm (SCA) is a recent metaheuristic algorithm based on the two trigonometric functions sine and cosine. However, like all other metaheuristic algorithms, SCA has slow convergence and may stall in sub-optimal regions. In this study, an enhanced version of SCA named RDSCA is suggested that depends on two techniques: random spare/replacement and a double adaptive weight. The first technique is employed in SCA to speed up convergence, whereas the second is used to enhance exploratory search capabilities. To evaluate RDSCA, 30 functions from CEC 2017 and 4 real-world engineering problems are used. Moreover, a nonparametric Wilcoxon signed-rank test is carried out at the 5% level to evaluate the significance of the results obtained between RDSCA and 5 other variants of SCA. The results show that RDSCA is competitive with other metaheuristic algorithms.
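The core SCA position update that RDSCA builds on is the canonical sine/cosine rule: each dimension of a candidate solution moves toward or around the best solution found so far along a sine or cosine path, with the control parameter r1 decaying so the search shifts from exploration to exploitation. A minimal per-agent sketch (the constant a = 2 is the common default in SCA, an assumption here):

```python
import math
import random

def sca_step(x, best, t, t_max, a=2.0):
    # One SCA update of position x toward the best-known solution.
    # r1 decays linearly from a to 0 over the run; r2, r3, r4 are drawn
    # fresh for each dimension.
    r1 = a - t * (a / t_max)
    new_x = []
    for xi, bi in zip(x, best):
        r2 = 2 * math.pi * random.random()
        r3 = 2 * random.random()
        r4 = random.random()
        if r4 < 0.5:
            new_x.append(xi + r1 * math.sin(r2) * abs(r3 * bi - xi))
        else:
            new_x.append(xi + r1 * math.cos(r2) * abs(r3 * bi - xi))
    return new_x
```

At t = t_max the factor r1 reaches 0, so updates vanish and the population settles, which is exactly the slow late-stage convergence RDSCA's adaptive weights aim to counteract.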
This study investigates the scheduling problem of multiple agile optical satellites with large-scale tasks. This problem is difficult to solve owing to the time-dependent characteristics of agile optical satellites, complex constraints, and a considerable solution space. To solve the problem, we propose a scheduling method based on an improved sine and cosine algorithm and a task merging approach. We first establish a scheduling model with task merging constraints and observation action constraints to describe the problem. Then, an improved sine and cosine algorithm is proposed to search for the optimal solution with the maximum profit ratio; an adaptive cosine factor and an adaptive greedy factor are adopted to improve the algorithm. Besides, a task merging method with a task reallocation mechanism is developed to improve scheduling efficiency. Experimental results demonstrate the superiority of the proposed algorithm over the comparison algorithms.
In the traditional incremental analysis update (IAU) process, all analysis increments are treated as constant forcing in a model's prognostic equations over a certain time window. This approach effectively reduces high-frequency oscillations introduced by data assimilation. However, as different scales of increments have unique evolutionary speeds and life histories in a numerical model, the traditional IAU scheme cannot fully meet the requirements of short-term forecasting for the damping of high-frequency noise and may even cause systematic drifts. Therefore, a multi-scale IAU scheme is proposed in this paper. Analysis increments are divided into different scale parts using a spatial filtering technique. For each scale of increment, the optimal relaxation time in the IAU scheme is determined by the skill of the forecasting results. Finally, the different scales of analysis increments are added to the model integration during their optimal relaxation times. The multi-scale IAU scheme can effectively reduce noise and further improve the balance between large-scale and small-scale increments in the model initialization stage. To evaluate its performance, several numerical experiments were conducted to simulate the path and intensity of Typhoon Mangkhut (2018), and showed that: (1) the multi-scale IAU scheme had an obvious effect on noise control at the initial stage of data assimilation; (2) the optimal relaxation times for large-scale and small-scale increments were estimated as 6 h and 3 h, respectively; (3) the forecast performance of the multi-scale IAU scheme in the prediction of Typhoon Mangkhut (2018) was better than that of the traditional IAU scheme. The results demonstrate the superiority of the multi-scale IAU scheme.
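A toy sketch of the multi-scale idea: each scale's increment is spread as constant forcing over its own relaxation window (6 h for the large-scale part, 3 h for the small-scale part, per the abstract). Everything else here, including reducing the increments to scalars, is a simplifying assumption for illustration:

```python
def multiscale_iau_forcing(large_inc, small_inc, dt_hours,
                           tau_large=6.0, tau_small=3.0):
    # Return the per-step forcing added to the model integration.
    # Each scale's total increment is divided evenly over its own
    # relaxation window, so the full increment is recovered once the
    # longer window has elapsed.
    steps_large = int(tau_large / dt_hours)
    steps_small = int(tau_small / dt_hours)
    forcing = []
    for step in range(max(steps_large, steps_small)):
        f = 0.0
        if step < steps_large:
            f += large_inc / steps_large
        if step < steps_small:
            f += small_inc / steps_small
        forcing.append(f)
    return forcing
```

The traditional IAU is the special case where both windows are equal; separating them lets the fast-evolving small-scale part be inserted over a shorter time.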
Nowadays, the Internet of Things (IoT) has penetrated all facets of human life, while on the other hand IoT devices are heavily prone to cyberattacks. It has become important to develop an accurate system that can detect malicious attacks in IoT environments in order to mitigate security risks. Botnets are among the dreadful malicious entities that have affected many users over the past few decades, and it is challenging to recognize a botnet since it has excellent carrying and hiding capacities. Various approaches have been employed to identify the source of a botnet at earlier stages, and Machine Learning (ML) and Deep Learning (DL) techniques have been developed with heavy influence from botnet detection methodology. In spite of this, it is still a challenging task to detect botnets at early stages due to the low number of features accessible from botnet datasets. The current study devises an IoT with Cloud Assisted Botnet Detection and Classification utilizing Rat Swarm Optimizer with Deep Learning (BDC-RSODL) model. The presented BDC-RSODL model includes a series of processes: pre-processing, feature subset selection, classification, and parameter tuning. Initially, the network data is pre-processed to make it compatible for further processing. Besides, the RSO algorithm is exploited for effective selection of a subset of features. Additionally, the Long Short Term Memory (LSTM) algorithm is utilized for both identification and classification of botnets. Finally, the Sine Cosine Algorithm (SCA) is executed for fine-tuning the hyperparameters of the LSTM model. In order to validate the promising performance of the BDC-RSODL system, a comprehensive comparison analysis was conducted. The obtained results confirmed the supremacy of the BDC-RSODL model over recent approaches.
Securing medical data during transmission on the network is required because it is sensitive, life-dependent data. Many methods are used for protection, such as steganography, digital signatures, cryptography, and watermarking. This paper introduces a novel robust algorithm that combines discrete wavelet transform (DWT), discrete cosine transform (DCT), and singular value decomposition (SVD) digital image-watermarking algorithms. In the embedding process, the host image is decomposed using a two-dimensional DWT (2D-DWT) to approximate low-frequency sub-bands. Then the low-high (LH) sub-band is decomposed using 2D-DWT into four new sub-bands, and the resulting low-high (LH1) sub-band is decomposed again using 2D-DWT into four new sub-bands. Two frequency bands, high-high (HH_(2)) and high-low (HL_(2)), are transformed by DCT, and then SVD is applied to the DCT coefficients. The strongest modified singular values (SVs) vary very little under most attacks, which is an important property of SVD watermarking. The two watermark images are encrypted using two layers of encryption, circular and chaotic encryption techniques, to increase security. The first encrypted watermark is embedded in the S component of the DCT coefficients of the HL_(2) sub-band, and the second encrypted watermark is embedded in the S component of the DCT coefficients of the HH_(2) sub-band. The suggested technique has been tested against various attacks and proven to provide excellent stability and imperceptibility results.
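The SVD step, in which the watermark perturbs the singular values of a transformed sub-band block before reconstruction, can be sketched as follows. The embedding strength alpha and the block shapes are hypothetical, and the DWT/DCT stages that would precede this step are omitted:

```python
import numpy as np

def embed_in_singular_values(block, watermark, alpha=0.05):
    # Perturb the singular values S of a (DCT-transformed) sub-band
    # block by a scaled watermark vector, then reconstruct the block.
    # Because the largest SVs are stable under common attacks, the
    # embedded information tends to survive them.
    U, S, Vt = np.linalg.svd(block, full_matrices=False)
    S_marked = S + alpha * np.asarray(watermark, dtype=float)
    return U @ np.diag(S_marked) @ Vt
```

Extraction would invert the process: take the SVD of the (possibly attacked) block and recover the watermark from the difference between marked and original singular values.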
Funding: Supported in part by the National Natural Science Foundation of China under Grant 62006071, and in part by the Science and Technology Research Project of Henan Province under Grant 232103810086.
Funding: This work was supported in part by the National Natural Science Foundation of China under Grant 61305001, and by the Natural Science Foundation of Heilongjiang Province of China under Grant F201222.
Funding: This work was supported by a research grant from Seoul Women's University (2023-0183).
Abstract: Broadcasting gateway equipment generally uses a method of simply switching to a spare input stream when a failure occurs in the main input stream. However, when the transmission environment is unstable, problems such as reduced equipment lifespan due to frequent switching, as well as interruption, delay, and stoppage of services, may occur. Therefore, a machine learning (ML) method is required that can automatically judge and classify network-related service anomalies and switch multi-input signals without dropping or altering them, by predicting or quickly determining when transmission errors occur, so that stream switching remains smooth. In this paper, we propose an intelligent packet switching method based on classification, one of the supervised ML approaches, which presents the risk level of abnormal multi-streams occurring in broadcasting gateway equipment based on data. Furthermore, we subdivide the risk levels obtained from the classification technique into probabilities, derive vectorized representative values for each attribute of the collected input data, and continuously update them. The resulting reference vector is used for switching judgment via its cosine similarity with the input data obtained when a dangerous situation occurs. Broadcasting gateway equipment to which the proposed method is applied can perform more stable and smarter switching than before, solving equipment reliability problems and preventing broadcasting accidents, and can maintain stable video streaming as well.
Funding: Supported by the National Key Research and Development Project (2020YFE0204900), the National Natural Science Foundation of China (Grant Numbers 62073193 and 61873333), and the Key Research and Development Plan of Shandong Province (Grant Numbers 2019TSLH0301 and 2021CXGC010204).
Abstract: Current early fault diagnosis methods for rolling bearings suffer from incomplete fault signal characteristics and susceptibility to noise contamination, making it challenging to ensure diagnostic accuracy and reliability. A novel approach integrating enhanced symplectic geometry mode decomposition with cosine difference limitation and a calculus operator (ESGMD-CC) with an artificial fish swarm algorithm (AFSA)-optimized extreme learning machine (ELM) is proposed in this paper to enhance the extraction capability of fault features and thus improve the accuracy of fault diagnosis. Firstly, SGMD decomposes the raw vibration signal into multiple symplectic geometry components (SGCs). Secondly, the iterations are reset by the cosine difference limitation to effectively separate the redundant components from the representative components. Additionally, the calculus operator is applied to strengthen weak fault features and make them easier to extract, and singular value decomposition (SVD) weighted by power spectrum entropy (PSE) is utilized as the sample feature representation. Finally, an ELM iteratively optimized by AFSA is adopted as the classifier for fault identification. The superior performance of the proposed method has been validated by various experiments.
Abstract: This paper covers the concept of the Fourier series and its application to periodic signals. A periodic signal is a signal that repeats its pattern over time at regular intervals. The underlying idea is to approximate a periodic signal, under the Dirichlet conditions, via a linear superposition of trigonometric functions, from which Fourier polynomials are constructed. The Dirichlet conditions are a set of mathematical conditions providing a foundational framework for the validity of the Fourier series representation. By understanding and applying these conditions, we can accurately represent and process periodic signals, leading to advancements in various areas of signal processing. The resulting Fourier approximation allows complex periodic signals to be expressed as a sum of simpler sinusoidal functions, making such signals easier to analyze and manipulate.
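The approximation described above can be made concrete with the classic example of a unit square wave, whose Fourier series keeps only the odd sine harmonics. A minimal sketch (function name ours):

```python
import math

def square_wave_partial_sum(x, n_terms=50):
    """Fourier partial sum for a unit square wave with period 2*pi:
    f(x) = (4/pi) * sum over odd k of sin(k*x)/k.
    More terms give a sharper approximation of the +/-1 plateaus."""
    total = 0.0
    for i in range(n_terms):
        k = 2 * i + 1
        total += math.sin(k * x) / k
    return 4.0 / math.pi * total
```

At a jump discontinuity (x = 0) the partial sum lands on the midpoint of the jump, exactly as the Dirichlet conditions predict.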
Abstract: This paper presents a new dimension reduction strategy for medium and large-scale linear programming problems. The proposed method uses a subset of the original constraints and combines two algorithms: the weighted average and the cosine simplex algorithm. The first approach identifies binding constraints by using the weighted average of each constraint, whereas the second algorithm is based on the cosine similarity between the vector of the objective function and the constraints. These two approaches are complementary, and when used together, they locate the essential subset of initial constraints required for solving medium and large-scale linear programming problems. After reducing the dimension of the linear programming problem using the subset of essential constraints, the solution method can be chosen from any suitable method for linear programming. The proposed approach was applied to a set of well-known benchmarks as well as more than 2000 random medium and large-scale linear programming problems. The results are promising, indicating that the new approach contributes to reducing both the size of the problems and the total number of iterations required. A tree-based classification model also confirmed the need for combining the two approaches. A detailed numerical example, the general numerical results, and the statistical analysis for the decision tree procedure are presented.
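The cosine criterion described above can be sketched as follows. This is our illustrative reading of the abstract, not the paper's exact algorithm: constraint rows whose normal vectors are most aligned with the objective vector are treated as the most likely binding constraints and ranked first (function names are ours):

```python
import math

def cosine_sim(u, v):
    """Cosine of the angle between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def rank_constraints_by_cosine(c, A):
    """Rank the rows of constraint matrix A by cosine similarity with the
    objective vector c; the most aligned rows are the candidate binding
    constraints to keep in the reduced problem."""
    scored = [(cosine_sim(c, row), i) for i, row in enumerate(A)]
    return [i for _, i in sorted(scored, reverse=True)]
```

The reduced LP would then be solved over only the top-ranked rows, falling back to the full constraint set if the reduced solution is infeasible.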
Abstract: A naïve discussion of the Fermat's Last Theorem conundrum is described. The present theorem's proof is grounded on the well-known properties of sums of powers of the sine and cosine functions, the Minkowski norm definition, and some vector-specific structures.
Funding: The authors thank the Deanship of Scientific Research at King Khalid University for funding this work through the Small Groups Project under grant number (120/43), and acknowledge Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R281), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would also like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work through Grant Code: (22UQU4331004DSR06).
Abstract: Applied linguistics is an interdisciplinary domain which identifies, investigates, and offers solutions to language-related real-life problems. The new coronavirus disease, known as COVID-19, has severely affected the everyday life of people all over the world. Specifically, since access to vaccines was insufficient and there was no straightforward or reliable treatment for coronavirus infection, many countries initiated appropriate preventive measures (such as lockdowns, physical separation, and masking) to combat this extremely transmittable disease. As a result, individuals spent more time on online social media platforms (i.e., Twitter, Facebook, Instagram, LinkedIn, and Reddit) and expressed their thoughts and feelings about coronavirus infection. Twitter has become one of the most popular social media platforms and allows anyone to post tweets. This study proposes sine cosine optimization with bidirectional gated recurrent unit-based sentiment analysis (SCOBGRU-SA) on COVID-19 tweets. The SCOBGRU-SA technique aims to detect and classify the various sentiments in Twitter data during the COVID-19 pandemic. To accomplish this, the SCOBGRU-SA technique follows data pre-processing and the FastText word embedding process. Moreover, the BGRU model is utilized to recognise and classify the sentiments present in the tweets. Furthermore, the SCO algorithm is exploited to tune the BGRU method's hyperparameters, which helps attain improved classification performance. The experimental validation of the SCOBGRU-SA technique takes place using a benchmark dataset, and the results signify its promising performance compared to other DL models.
Funding: Supported by funds from the National Natural Science Foundation of China (32172693) and the Program of the National Beef Cattle and Yak Industrial Technology System (CARS-37).
Abstract: Background: Genomic selection (GS) has revolutionized animal and plant breeding since its first implementation enabled early selection before phenotypes are measured. Besides the genome, transcriptome and metabolome information are increasingly considered new sources for GS. Difficulties in building models with multi-omics data for GS and limited specimen availability have both delayed progress in investigating multi-omics approaches. Results: We utilized the cosine kernel to map genomic and transcriptomic data to n×n symmetric matrices (the G matrix and T matrix), combined with best linear unbiased prediction (BLUP) for GS. Here, we defined five kernel-based prediction models: genomic BLUP (GBLUP), transcriptome BLUP (TBLUP), multi-omics BLUP (MBLUP, M = ratio×G + (1-ratio)×T), multi-omics single-step BLUP (mssBLUP), and weighted multi-omics single-step BLUP (wmssBLUP), integrating transcribed individuals into the genotyped resource population. Predictive accuracy evaluations on four traits of the Chinese Simmental beef cattle population showed that (1) MBLUP was far preferred to GBLUP (ratio=1.0), (2) wmssBLUP and mssBLUP achieved average improvements in prediction accuracy of 4.18% and 3.37%, respectively, over GBLUP, and (3) the accuracy of wmssBLUP increased with the growing proportion of transcribed cattle in the whole resource population. Conclusions: We conclude that the inclusion of transcriptome data in GS has the potential to improve accuracy. Moreover, wmssBLUP is a promising alternative for the present situation, in which many individuals are genotyped while fewer are transcribed.
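The cosine kernel mapping and the MBLUP blend M = ratio×G + (1-ratio)×T described above can be sketched directly; the function names are ours, and a real G or T matrix would be built from SNP genotypes or transcript counts rather than the toy rows used here:

```python
import math

def cosine_kernel_matrix(X):
    """Map an n x p feature matrix (rows = individuals) to an n x n
    symmetric similarity matrix via the cosine kernel."""
    n = len(X)
    norms = [math.sqrt(sum(v * v for v in row)) for row in X]
    K = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            dot = sum(a * b for a, b in zip(X[i], X[j]))
            K[i][j] = K[j][i] = dot / (norms[i] * norms[j])
    return K

def blend_kernels(G, T, ratio):
    """MBLUP-style blend from the abstract: M = ratio*G + (1-ratio)*T."""
    n = len(G)
    return [[ratio * G[i][j] + (1 - ratio) * T[i][j] for j in range(n)]
            for i in range(n)]
```

With ratio = 1.0 the blend reduces to plain GBLUP, which is exactly the comparison baseline in the abstract.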
Funding: Supported by the National Natural Science Foundation of China (No. 61872219) and the Natural Science Foundation of Shandong Province (ZR2019MF001).
Abstract: Labeled data is widely used in various classification tasks. However, there is a huge challenge: labels are often added manually, and wrong labels added by malicious users will affect the training effect of the model. The unreliability of labeled data has hindered research. To solve these problems, we propose a framework of Label Noise Filtering and Missing Label Supplement (LNFS), and we take location labels in Location-Based Social Networks (LBSN) as an example to implement it. For label noise filtering, we first use FastText to transform a restaurant's labels into vectors, and then, based on the assumption that the label most similar to all other labels at a location is the most representative, we use cosine similarity to judge and select the most representative label. For missing labels, we use simple common-word similarity to judge the similarity of users' comments, and then use the labels of similar restaurants to supplement the missing ones. To optimize the performance of the model, we introduce game theory to simulate the game between malicious users and the model, improving the model's reliability. Finally, a case study is given to illustrate the effectiveness and reliability of LNFS.
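The selection rule described above — keep the label whose embedding is most similar to all the other labels at a location — can be sketched as follows (function names ours; real inputs would be FastText vectors rather than the toy 2-D vectors used here):

```python
import math

def cosine_sim(u, v):
    """Cosine similarity between two label embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def most_representative_label(label_vecs):
    """Return the index of the label whose total cosine similarity to all
    other labels is highest -- the filtering criterion in the abstract."""
    best_i, best_score = -1, -math.inf
    for i, vi in enumerate(label_vecs):
        score = sum(cosine_sim(vi, vj)
                    for j, vj in enumerate(label_vecs) if j != i)
        if score > best_score:
            best_i, best_score = i, score
    return best_i
```

Labels far from this consensus direction would be the candidates for noise filtering.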
Funding: Supported by the National Natural Science Foundation of China (Grant No. 51709228).
Abstract: Due to the complexity of the marine environment, underwater acoustic signals are affected by complex background noise during transmission, and denoising them has always been a difficult problem in underwater acoustic signal processing. To obtain a better denoising effect, a new denoising method for underwater acoustic signals is proposed, based on variational mode decomposition optimized by the black widow optimization algorithm (BVMD), a fluctuation-based dispersion entropy threshold improved by the Otsu method (OFDE), a cosine similarity stationary threshold (CSST), BVMD again, and fluctuation-based dispersion entropy (FDE), named BVMD-OFDE-CSST-BVMD-FDE. First, the original signal is decomposed into a series of intrinsic mode functions (IMFs) by BVMD. Afterwards, pure IMFs, mixed IMFs, and noise IMFs are distinguished by OFDE and CSST, and the pure and mixed IMFs are reconstructed to obtain a primary denoised signal. In the end, the primary denoised signal is decomposed into IMFs by BVMD again, the FDE value is used to distinguish noise IMFs from pure IMFs, and the pure IMFs are reconstructed to obtain the final denoised signal. The proposed method has three advantages: (i) BVMD can adaptively select the decomposition level and penalty factor of VMD. (ii) FDE and cosine similarity are used as dual criteria to distinguish noise IMFs from useful IMFs, and the Otsu and CSST algorithms effectively avoid the error caused by manually selected thresholds. (iii) The secondary decomposition compensates for the deficiency of the primary decomposition and further removes a small amount of noise. A chaotic signal and a real ship signal are denoised. The experimental results show that the proposed method denoises effectively, improves on the result of the primary decomposition alone, and has good practical value.
Funding: Supported by the National Natural Science Foundation of China (No. 11672098).
Abstract: Shape and size optimization with frequency constraints is a highly nonlinear problem with mixed design variables, a non-convex search space, and multiple local optima. Therefore, a hybrid sine cosine firefly algorithm (HSCFA) is proposed to acquire more accurate solutions with fewer finite element analyses. The full attraction model of the firefly algorithm (FA) is analyzed, and the factors that affect its computational efficiency and accuracy are revealed. A modified FA with a simplified attraction model and the adaptive parameter of the sine cosine algorithm (SCA) is proposed to reduce computational complexity and enhance the convergence rate. Then, the population is classified, and the different sub-populations are updated by the modified FA and SCA respectively. Besides, a random search strategy based on Lévy flight is adopted to update stagnant or infeasible solutions and enhance population diversity. An elitist selection technique is applied to preserve promising solutions and further improve the convergence rate. Moreover, an adaptive penalty function is employed to handle the constraints. Finally, the performance of HSCFA is demonstrated through numerical examples with nonstructural masses and frequency constraints. The results show that HSCFA is an efficient and competitive tool for shape and size optimization problems with frequency constraints.
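The full attraction model being modified above is the standard firefly update, in which firefly i moves toward a brighter firefly j with an attractiveness that decays with squared distance. A minimal sketch of one such step (function name and parameter defaults ours; the paper's simplified model differs in how pairs are chosen):

```python
import math
import random

def firefly_move(xi, xj, beta0=1.0, gamma=1.0, alpha=0.1, rng=None):
    """One attraction step of the standard firefly algorithm:
    x_i <- x_i + beta0*exp(-gamma*r^2)*(x_j - x_i) + alpha*noise,
    where r is the distance between fireflies i and j."""
    rng = rng or random.Random(0)
    r2 = sum((a - b) ** 2 for a, b in zip(xi, xj))
    beta = beta0 * math.exp(-gamma * r2)
    return [a + beta * (b - a) + alpha * (rng.random() - 0.5)
            for a, b in zip(xi, xj)]
```

In the full attraction model every firefly evaluates this step against every brighter firefly, which is the O(n²) cost per iteration that the simplified model in the abstract is designed to cut.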
Abstract: The occurrence of crimes has been on a constant rise despite the discoveries and advancements in technology over the past decade. One of the most tedious tasks is tracking a suspect once a crime is committed. As most crimes are committed by individuals with a history of felonies, it is essential for a monitoring system not only to detect the face of the person who committed the crime but also to establish their identity. Hence, we propose a smart criminal detection and identification system that uses the OpenCV deep neural network (DNN) module, which employs a Single Shot Multibox Detector for face detection, together with an auto-encoder model whose encoder part matches captured facial images against known criminals. After detection and extraction of the face in the image by face cropping, the captured face is compared with the images in the criminal database. The comparison is performed by calculating the similarity between each pair of images using the cosine similarity metric. After plotting the values in a graph to find the threshold value, we conclude that the confidence rate of the encoder model is 0.75 and above.
Funding: Supported in part by the Hangzhou Science and Technology Development Plan Project (Grant No. 20191203B30).
Abstract: Many complex optimization problems in the real world can easily fall into local optima, failing to find the optimal solution, so new techniques and methods are needed to solve such challenges. Metaheuristic algorithms have received a lot of attention in recent years because of their efficient performance and simple structure. The Sine Cosine Algorithm (SCA) is a recent metaheuristic algorithm based on the two trigonometric functions sine and cosine. However, like other metaheuristic algorithms, SCA converges slowly and may fail in sub-optimal regions. In this study, an enhanced version of SCA named RDSCA is suggested that depends on two techniques: random spare/replacement and a double adaptive weight. The first technique is employed in SCA to speed up convergence, whereas the second is used to enhance exploratory search capabilities. To evaluate RDSCA, 30 functions from CEC 2017 and 4 real-world engineering problems are used. Moreover, a nonparametric test, the Wilcoxon signed-rank test, is carried out at the 5% level to evaluate the significance of the differences between RDSCA and the other 5 variants of SCA. The results show that RDSCA is competitive with other metaheuristic algorithms.
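The baseline SCA position update being enhanced here is well known: each dimension moves toward or away from the best solution found so far along a sine or cosine trajectory, with an amplitude that shrinks linearly over iterations. A minimal sketch (function name and defaults ours):

```python
import math
import random

def sca_update(x, best, t, max_iter, a=2.0, rng=None):
    """One position update of the Sine Cosine Algorithm:
    x_d <- x_d + r1*sin(r2)*|r3*P_d - x_d|   if r4 < 0.5
    x_d <- x_d + r1*cos(r2)*|r3*P_d - x_d|   otherwise,
    where P is the best solution and r1 = a - t*a/max_iter decays to 0."""
    rng = rng or random.Random(0)
    r1 = a - t * (a / max_iter)  # linearly decreasing amplitude
    new_x = []
    for xd, pd in zip(x, best):
        r2 = rng.uniform(0, 2 * math.pi)
        r3 = rng.uniform(0, 2)
        r4 = rng.random()
        step = r1 * (math.sin(r2) if r4 < 0.5 else math.cos(r2)) * abs(r3 * pd - xd)
        new_x.append(xd + step)
    return new_x
```

Because r1 reaches 0 at the final iteration, late updates barely move, which is the slow-convergence behavior that RDSCA's random spare/replacement and double adaptive weight are designed to counter.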
Funding: Supported by the Science and Technology on Complex Electronic System Simulation Laboratory (Funding No. 6142401003022109).
Abstract: This study investigates the scheduling problem of multiple agile optical satellites with large-scale tasks. This problem is difficult to solve owing to the time-dependent characteristics of agile optical satellites, complex constraints, and a considerable solution space. To solve the problem, we propose a scheduling method based on an improved sine and cosine algorithm and a task merging approach. We first establish a scheduling model with task merging constraints and observation action constraints to describe the problem. Then, an improved sine and cosine algorithm is proposed to search for the optimal solution with the maximum profit ratio, adopting an adaptive cosine factor and an adaptive greedy factor to improve the algorithm. Besides, a task merging method with a task reallocation mechanism is developed to improve scheduling efficiency. Experimental results demonstrate the superiority of the proposed algorithm over the comparison algorithms.
Funding: Jointly sponsored by the Shenzhen Science and Technology Innovation Commission (Grant No. KCXFZ20201221173610028) and the key program of the National Natural Science Foundation of China (Grant No. 42130605).
Abstract: In the traditional incremental analysis update (IAU) process, all analysis increments are treated as constant forcing in a model's prognostic equations over a certain time window. This approach effectively reduces the high-frequency oscillations introduced by data assimilation. However, as increments at different scales have unique evolutionary speeds and life histories in a numerical model, the traditional IAU scheme cannot fully meet the requirements of short-term forecasting for the damping of high-frequency noise and may even cause systematic drifts. Therefore, a multi-scale IAU scheme is proposed in this paper. Analysis increments are divided into components at different scales using a spatial filtering technique. For each scale, the optimal relaxation time in the IAU scheme is determined by the skill of the forecasting results. Finally, the different scales of analysis increments are added to the model integration over their optimal relaxation times. The multi-scale IAU scheme can effectively reduce noise and further improve the balance between large-scale and small-scale increments in the model initialization stage. To evaluate its performance, several numerical experiments were conducted to simulate the path and intensity of Typhoon Mangkhut (2018). They showed that: (1) the multi-scale IAU scheme had an obvious effect on noise control at the initial stage of data assimilation; (2) the optimal relaxation times for large-scale and small-scale increments were estimated as 6 h and 3 h, respectively; and (3) the forecast performance of the multi-scale IAU scheme in predicting Typhoon Mangkhut (2018) was better than that of the traditional IAU scheme. The results demonstrate the superiority of the multi-scale IAU scheme.
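The IAU forcing described above can be sketched with a toy scalar model. `iau_forecast` is the traditional scheme (increment spread evenly over one window); `multiscale_iau_forecast` is only our illustrative reading of the multi-scale idea, with each scale's increment spread over its own relaxation window (function names and the per-step forcing form are assumptions):

```python
def iau_forecast(x0, model_step, increment, window_steps):
    """Traditional IAU: spread the full analysis increment evenly over
    `window_steps` model steps instead of adding it all at once."""
    x = x0
    for _ in range(window_steps):
        x = model_step(x) + increment / window_steps
    return x

def multiscale_iau_forecast(x0, model_step, scale_increments):
    """Multi-scale sketch: each (increment, window) pair is spread over its
    own relaxation window, e.g. 6 steps for the large-scale part and
    3 steps for the small-scale part, and the forcings overlap in time."""
    total_steps = max(w for _, w in scale_increments)
    x = x0
    for k in range(total_steps):
        forcing = sum(inc / w for inc, w in scale_increments if k < w)
        x = model_step(x) + forcing
    return x
```

With an identity model both schemes inject exactly the total increment; the difference is when each scale's share arrives during the initialization window.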
Funding: The authors thank the Deanship of Scientific Research at King Khalid University for funding this work through the Large Groups Project under grant number (61/43), and acknowledge Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R319), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would also like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work through Grant Code: (22UQU4340237DSR27).
Abstract: Nowadays, the Internet of Things (IoT) has penetrated all facets of human life, while IoT devices remain heavily prone to cyberattacks. It has become important to develop an accurate system that can detect malicious attacks in IoT environments in order to mitigate security risks. A botnet is one of the dreadful malicious entities that has affected many users over the past few decades, and it is challenging to recognize botnets because of their excellent carrying and hiding capacities. Various approaches have been employed to identify botnet sources at early stages, and machine learning (ML) and deep learning (DL) techniques in particular have heavily influenced botnet detection methodology. In spite of this, it is still challenging to detect botnets at early stages due to the low number of features accessible from botnet datasets. The current study devises an IoT with Cloud Assisted Botnet Detection and Classification utilizing Rat Swarm Optimizer with Deep Learning (BDC-RSODL) model. The presented BDC-RSODL model includes a series of processes: pre-processing, feature subset selection, classification, and parameter tuning. Initially, the network data is pre-processed to make it compatible with further processing. Besides, the RSO algorithm is exploited for effective selection of a subset of features. Additionally, a Long Short-Term Memory (LSTM) algorithm is utilized for both identification and classification of botnets. Finally, the Sine Cosine Algorithm (SCA) is executed to fine-tune the hyperparameters of the LSTM model. In order to validate the promising performance of the BDC-RSODL system, a comprehensive comparison analysis was conducted. The obtained results confirmed the supremacy of the BDC-RSODL model over recent approaches.
Funding: This work was supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2022R308), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Securing medical data during transmission on the network is required because it is sensitive, life-dependent data. Many methods are used for protection, such as steganography, digital signatures, cryptography, and watermarking. This paper introduces a novel robust algorithm that combines discrete wavelet transform (DWT), discrete cosine transform (DCT), and singular value decomposition (SVD) digital image-watermarking algorithms. In the embedding process, the host image is decomposed using a two-dimensional DWT (2D-DWT) to approximate the low-frequency sub-bands. Then the low-high (LH) sub-band is decomposed using 2D-DWT into four new sub-bands, and the resulting low-high (LH1) sub-band is decomposed again using 2D-DWT into four new sub-bands. Two frequency bands, high-high (HH_(2)) and high-low (HL_(2)), are transformed by DCT, and then SVD is applied to the DCT coefficients. The strongest modified singular values (SVs) vary very little under most attacks, which is an important property of SVD watermarking. The two watermark images are encrypted using two layers of encryption, circular and chaotic encryption techniques, to increase security. The first encrypted watermark is embedded in the S component of the DCT coefficients of the HL_(2) band, and the second encrypted watermark is embedded in the S component of the DCT coefficients of the HH_(2) band. The suggested technique has been tested against various attacks and proven to provide excellent stability and imperceptibility results.
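The SVD embedding step at the heart of schemes like this one can be sketched in isolation: the watermark is added to the singular values of a coefficient block, S' = S + αW. The surrounding DWT/DCT stages and the encryption layers are omitted, and the function names and the additive-S' rule are our generic sketch rather than this paper's exact procedure:

```python
import numpy as np

def embed_watermark_svd(block, watermark, alpha=0.05):
    """Embed a watermark vector into the singular values of a coefficient
    block (in the paper, a DCT-transformed DWT sub-band): S' = S + alpha*W.
    Returns the watermarked block and the original singular values, which
    are needed at extraction time."""
    U, S, Vt = np.linalg.svd(block, full_matrices=False)
    Sw = S + alpha * watermark
    return U @ np.diag(Sw) @ Vt, S

def extract_watermark_svd(marked_block, original_S, alpha=0.05):
    """Recover the watermark from the marked block's singular values."""
    _, Sw, _ = np.linalg.svd(marked_block, full_matrices=False)
    return (Sw - original_S) / alpha
```

Because singular values change little under common attacks (compression, mild noise), the recovered watermark degrades gracefully, which is the robustness property the abstract highlights.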