The Ms 6.4 earthquake occurred on May 21, 2021 in Yangbi County, Dali Prefecture, Yunnan Province; it was the largest earthquake in western Yunnan since the 2014 Jinggu Ms 6.6 earthquake. After the earthquake, rapid field investigation and earthquake relocation revealed that there was no obvious surface rupture and that the earthquake did not occur on a pre-existing active fault, but on a buried fault on the west side of the Weixi–Qiaohou–Weishan fault zone along the eastern boundary of the Baoshan sub-block. Significant foreshocks appeared three days before the mainshock. These phenomena attracted intensive attention from scholars. What physical process and seismogenic mechanism of the Yangbi Ms 6.4 earthquake are revealed by the foreshocks and aftershocks? These scientific questions urgently need to be answered.
Oscillation detection has been a hot research topic in industry due to the high incidence of oscillating loops and their negative impact on plant profitability. Although numerous automatic detection techniques have been proposed, most of them can only address part of the practical difficulties. An oscillation is heuristically defined as a visually apparent periodic variation. However, manual visual inspection is labor-intensive and prone to missed detections. Convolutional neural networks (CNNs), inspired by animal visual systems, have emerged with powerful feature extraction capabilities. In this work, an exploration of typical CNN models for visual oscillation detection is performed. Specifically, we tested MobileNet-V1, ShuffleNet-V2, EfficientNet-B0, and GhostNet models, and found that such a visual framework is well suited for oscillation detection. The feasibility and validity of this framework are verified using extensive numerical and industrial cases. Compared with state-of-the-art oscillation detectors, the suggested framework is more straightforward and more robust to noise and mean-nonstationarity. In addition, this framework generalizes well and is capable of handling features that are not present in the training data, such as multiple oscillations and outliers.
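As a rough illustration of the visual framing described above, the sketch below rasterizes a control-loop signal into a trend image and scores it with a small PyTorch CNN; the tiny network and the rendering scheme are stand-ins, not the MobileNet/ShuffleNet/EfficientNet/GhostNet pipeline evaluated in the paper.

```python
# Minimal sketch (not the paper's exact pipeline): render a 1-D control-loop
# signal as a grayscale trend image and score it with a small stand-in CNN.
import numpy as np
import torch
import torch.nn as nn

def signal_to_image(x, size=64):
    """Rasterize a 1-D signal into a size x size binary trend image."""
    x = (x - x.min()) / (x.ptp() + 1e-12)            # normalize to [0, 1]
    cols = np.linspace(0, len(x) - 1, size).astype(int)
    rows = ((1.0 - x[cols]) * (size - 1)).astype(int)
    img = np.zeros((size, size), dtype=np.float32)
    img[rows, np.arange(size)] = 1.0                 # one pixel per column
    return img

class TinyOscNet(nn.Module):                         # stand-in backbone
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.head = nn.Linear(16 * 16 * 16, 2)       # oscillating / not oscillating
    def forward(self, x):
        return self.head(self.features(x).flatten(1))

t = np.linspace(0, 20, 500)
sig = np.sin(2 * np.pi * t) + 0.1 * np.random.randn(t.size)   # toy oscillation
img = torch.from_numpy(signal_to_image(sig))[None, None]       # shape (1, 1, 64, 64)
logits = TinyOscNet()(img)                           # untrained: illustration only
print(logits.shape)                                  # torch.Size([1, 2])
```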
Large Language Models (LLMs) are increasingly demonstrating their ability to understand natural language and solve complex tasks, especially through text generation. One of the relevant capabilities is in-context learning, which involves the ability to receive instructions in natural language or task demonstrations and generate the expected outputs for test instances without additional training or gradient updates. In recent years, the popularity of social networking has provided a medium through which some users engage in offensive and harmful online behavior. In this study, we investigate the ability of different LLMs, ranging from zero-shot and few-shot learning to fine-tuning. Our experiments show that LLMs can identify sexist and hateful online texts using zero-shot and few-shot approaches through information retrieval. Furthermore, it is found that the Zephyr model achieves the best results with the fine-tuning approach, scoring 86.811% on the Explainable Detection of Online Sexism (EDOS) test set and 57.453% on the Multilingual Detection of Hate Speech Against Immigrants and Women in Twitter (HatEval) test set. Finally, it is confirmed that the evaluated models perform well in hate text detection, as they beat the best result on the HatEval task leaderboard. The error analysis shows that in-context learning had difficulty distinguishing between types of hate speech and figurative language. However, the fine-tuned approach tends to produce many false positives.
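To make the zero-/few-shot setup concrete, the following sketch assembles an in-context classification prompt from a couple of labeled demonstrations; the example messages and the `generate` stub are hypothetical placeholders for whatever LLM completion call is used, not the authors' prompts.

```python
# Minimal sketch of a few-shot (in-context) classification prompt.
# `generate` is a hypothetical stand-in for an LLM completion call.
FEW_SHOT_EXAMPLES = [
    ("You play like a girl, stay out of this forum.", "sexist"),
    ("Great goal last night, what a match!", "not sexist"),
]

def build_prompt(text, examples=FEW_SHOT_EXAMPLES):
    """Assemble an in-context classification prompt from labeled demonstrations."""
    lines = ["Classify each message as 'sexist' or 'not sexist'.", ""]
    for demo_text, label in examples:
        lines += [f"Message: {demo_text}", f"Label: {label}", ""]
    lines += [f"Message: {text}", "Label:"]
    return "\n".join(lines)

def generate(prompt: str) -> str:        # hypothetical LLM call (assumption)
    raise NotImplementedError("plug in your LLM client here")

prompt = build_prompt("Women shouldn't be allowed to referee matches.")
print(prompt)                            # the label would be read from generate(prompt)
```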
Diagnosing various diseases such as glaucoma, age-related macular degeneration, cardiovascular conditions, and diabetic retinopathy involves segmenting retinal blood vessels. The task is particularly challenging when dealing with color fundus images due to issues like non-uniform illumination, low contrast, and variations in vessel appearance, especially in the presence of different pathologies. Furthermore, the speed of the retinal vessel segmentation system is of utmost importance. With the surge of big data now available, the speed of the algorithm becomes increasingly important, carrying almost the same weight as its accuracy. To address these challenges, we present a novel approach for retinal vessel segmentation, leveraging efficient and robust techniques based on multiscale line detection and mathematical morphology. Our algorithm's performance is evaluated on two publicly available datasets, namely the Digital Retinal Images for Vessel Extraction (DRIVE) dataset and the Structured Analysis of the Retina (STARE) dataset. The experimental results demonstrate the effectiveness of our method, with mean accuracy values of 0.9467 for DRIVE and 0.9535 for STARE, as well as sensitivity values of 0.6952 for DRIVE and 0.6809 for STARE. Notably, our algorithm exhibits competitive performance with state-of-the-art methods. Importantly, it operates at an average speed of 3.73 s per image for DRIVE and 3.75 s for STARE. It is worth noting that these results were achieved using Matlab scripts containing multiple loops, which suggests that the processing time could be further reduced by replacing loops with vectorization, so the proposed algorithm can be deployed in real-time applications. In summary, our proposed system strikes a fine balance between swift computation and accuracy on par with the best available methods in the field.
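A simplified sketch of the multiscale line-detection idea is given below: line-shaped averaging kernels at several lengths and orientations are applied to the inverted green channel and compared with a local window mean. It is an illustration of the general technique, with assumed window sizes and angle steps, not the authors' exact algorithm.

```python
# Illustrative multiscale line detector for vessel enhancement (simplified).
import cv2
import numpy as np

def line_kernel(length, angle_deg):
    """Normalized averaging kernel for a line of given length and orientation."""
    k = np.zeros((length, length), np.float32)
    k[length // 2, :] = 1.0                           # horizontal line
    rot = cv2.getRotationMatrix2D((length / 2 - 0.5, length / 2 - 0.5), angle_deg, 1.0)
    k = cv2.warpAffine(k, rot, (length, length))
    return k / max(k.sum(), 1e-6)

def multiscale_line_response(green, window=15, scales=(3, 7, 11, 15)):
    inv = 255.0 - green.astype(np.float32)            # vessels become bright
    local_mean = cv2.blur(inv, (window, window))      # mean over the square window
    responses = []
    for L in scales:
        best = None
        for angle in range(0, 180, 15):               # 12 orientations
            line_mean = cv2.filter2D(inv, -1, line_kernel(L, angle))
            best = line_mean if best is None else np.maximum(best, line_mean)
        responses.append(best - local_mean)           # line strength at this scale
    return np.mean(responses, axis=0)                 # combine scales

# usage: green = cv2.imread("fundus.png")[:, :, 1]
#        vessels = multiscale_line_response(green) > 0
```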
Recognizing handwritten characters remains a critical and formidable challenge within the realm of computer vision. Although considerable strides have been made in enhancing English handwritten character recognition through various techniques, deciphering Arabic handwritten characters is particularly intricate. This complexity arises from the diverse array of writing styles among individuals, coupled with the various shapes that a single character can take when positioned differently within document images, rendering the task more perplexing. In this study, a novel segmentation method for Arabic handwritten scripts is suggested. This work aims to locate the local minima of the vertical and diagonal word-image densities to precisely identify the segmentation points between the cursive letters. The proposed method starts by pre-processing the word image without affecting its main features, then calculates the directional pixel density of the word image by scanning it vertically and at angles from 30° to 90° to count the pixel density from all directions and address the problem of overlapping letters, which is a common practice when writing Arabic texts. Local minima and thresholds are also determined to identify the ideal segmentation area. The proposed technique is tested on samples obtained from two datasets: a self-curated image dataset and the IFN/ENIT dataset. The results demonstrate that the proposed method achieves a significant improvement in cursive segmentation, reaching 92.96% on our dataset and 89.37% on the IFN/ENIT dataset.
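The sketch below illustrates the projection-profile part of this idea: binarize the word image, compute the column-wise ink density, and keep local minima below a threshold as candidate segmentation points. The directional 30°-90° scans of the proposed method are omitted, and the threshold value is an assumption.

```python
# Minimal projection-profile sketch for candidate segmentation points
# (illustrates the general technique, not the authors' full directional scan).
import cv2
import numpy as np
from scipy.signal import argrelmin

def candidate_cut_points(word_img_gray, rel_threshold=0.25):
    # Otsu binarization with ink as foreground (value 1)
    _, binary = cv2.threshold(word_img_gray, 0, 1,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    density = binary.sum(axis=0).astype(float)        # vertical projection profile
    minima = argrelmin(density, order=3)[0]           # local minima of ink density
    limit = rel_threshold * density.max()             # assumed relative threshold
    return [int(c) for c in minima if density[c] <= limit]

# usage: cuts = candidate_cut_points(cv2.imread("word.png", cv2.IMREAD_GRAYSCALE))
```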
System logs, serving as a pivotal data source for performance monitoring and anomaly detection, play an indispensable role in assuring service stability and reliability. Despite this, the majority of existing log-based anomaly detection methodologies depend predominantly on the sequence or quantity attributes of logs, using only a single Recurrent Neural Network (RNN) or one of its variant sequence models for detection. These approaches have not thoroughly exploited the semantic information embedded in logs, exhibit limited adaptability to novel logs, and a single model struggles to fully unearth the potential features within the log sequence. Addressing these challenges, this article proposes LogCEM, a hybrid architecture based on a multi-scale convolutional neural network, efficient channel attention, and Mogrifier gated recurrent unit networks, which amalgamates multiple neural network technologies. Capitalizing on the superior performance of the Robustly Optimized BERT approach (RoBERTa) in natural language processing, we employ RoBERTa to extract the original word vectors from each word in the log template. In conjunction with the enhanced Smooth Inverse Frequency (SIF) algorithm, we generate more precise log sentence vectors, thereby achieving an in-depth representation of log semantics. Subsequently, these log vector sequences are fed into a hybrid neural network, which fuses a 1D Multi-Scale Convolutional Neural Network (MSCNN), an Efficient Channel Attention mechanism (ECA), and a Mogrifier Gated Recurrent Unit (GRU). This amalgamation enables the model to concurrently capture the local and global dependencies of the log sequence and autonomously learn the significance of different log sequences, thereby markedly enhancing the efficacy of log anomaly detection. To validate the effectiveness of the LogCEM model, we conducted evaluations on two authoritative open-source datasets. The experimental results demonstrate that LogCEM not only exhibits excellent accuracy and robustness, but also outperforms the current mainstream log anomaly detection methods.
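For reference, a minimal version of the base SIF sentence-embedding step might look like the sketch below: word vectors are weighted by a/(a + p(w)), averaged, and the projection onto the first principal component is removed. This follows the standard SIF recipe of Arora et al., not the paper's enhanced variant, and the RoBERTa word vectors are assumed to be supplied externally.

```python
# Base SIF sentence embeddings: frequency-weighted average of word vectors,
# then removal of the common (first principal) component.
import numpy as np
from sklearn.decomposition import TruncatedSVD

def sif_embeddings(sentences, word_vecs, word_freqs, a=1e-3):
    """sentences: list of token lists; word_vecs: dict token -> np.ndarray;
    word_freqs: dict token -> unigram probability."""
    dim = len(next(iter(word_vecs.values())))
    mat = np.zeros((len(sentences), dim))
    for i, tokens in enumerate(sentences):
        vecs = [a / (a + word_freqs.get(t, 1e-6)) * word_vecs[t]
                for t in tokens if t in word_vecs]
        if vecs:
            mat[i] = np.mean(vecs, axis=0)
    svd = TruncatedSVD(n_components=1, n_iter=7, random_state=0).fit(mat)
    pc = svd.components_[0]                            # first principal direction
    return mat - np.outer(mat @ pc, pc)                # remove common component

# usage: embed tokenized log templates, supplying RoBERTa (or any) word
# vectors through `word_vecs` and corpus frequencies through `word_freqs`.
```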
Currently, most public higher learning institutions in Tanzania rely on traditional in-class examinations, requiring students to register and present identification documents for examination eligibility verification. This system, however, is prone to impersonation due to security vulnerabilities in the current student verification system. These vulnerabilities include weak authentication, lack of encryption, and inadequate anti-counterfeiting measures. Additionally, advanced printing technologies and online marketplaces make it easy to create convincing fake identity documents. The Improved Mechanism for Detecting Impersonations (IMDIs) system detects impersonations in in-class exams by integrating QR codes and dynamic question generation based on student profiles. It consists of a mobile verification app, built with Flutter and communicating via RESTful APIs, and a web system, developed with Laravel using HTML, CSS, and JavaScript. The two components communicate through APIs, with MySQL managing the database, and the mobile app and web server interact to ensure efficient verification and security during examinations. The implemented IMDIs system was validated through a mobile application integrated with a QR-code scanner that captures codes embedded in student identity cards and links them to a dynamic question generation model. The QG model uses natural language processing (NLP) algorithms and question generation (QG) techniques to create dynamic profile questions. Results show that the IMDIs system could generate four challenging profile-based questions within two seconds, allowing the verification of 200 students in 33 minutes by one operator. The IMDIs system also tracks exam-eligible students, aiding exam attendance, and integrates with a Short Message Service (SMS) to report impersonation incidents to a dedicated security officer in real time. The IMDIs system was tested and found to be 98% secure and 100% convenient, with a 0% false rejection rate and a 2% false acceptance rate, demonstrating its security, reliability, and high performance.
Food allergy has become an important food quality and safety issue, posing a challenge to the food industry and affecting consumer health. From the perspective of the food processing industry, the diversity of raw material ingredients, exogenous additives, and processing forms makes the presence of allergens in modern food processing more complex. In addition, due to the lack of effective allergen identification, detection, and allergenicity evaluation systems, there are serious deficiencies in the current theories and techniques for food allergen screening and detection, tracking and prediction, and intervention and control. From the perspective of public health, meeting consumers' right to know whether processed foods contain raw materials bearing food allergens, and improving government credibility and public satisfaction, have become urgent matters. Moreover, as people come into contact with more and more novel foods, the probability of food allergy is also increasing, and the food safety and health problems induced by increasingly complex, widespread, and severe food allergy are difficult to avoid. In view of this, in response to the increasingly serious food allergy issue, this paper introduces the detection methods for food allergens, summarizes techniques for allergen reduction and control, and describes hypoallergenic foods, with the aim of providing a basis for preventing and controlling food allergy and safeguarding the health of food allergy patients.
Data-driven tools such as principal component analysis (PCA) and independent component analysis (ICA) have been applied to different benchmarks as process monitoring methods. The difference between the two methods is that the components of PCA are still dependent, while ICA has no orthogonality constraint and its latent variables are independent. Process monitoring with PCA often assumes that the process data or principal components follow a Gaussian distribution. However, this kind of constraint cannot be satisfied by several practical processes. To extend the use of PCA, a nonparametric method is added to PCA to overcome this difficulty, and kernel density estimation (KDE) is a good choice. Though ICA is based on non-Gaussian distribution information, KDE can also help in the close monitoring of the data. Methods such as PCA, ICA, PCA with KDE (KPCA), and ICA with KDE (KICA) are demonstrated and compared by applying them to a practical industrial Spheripol-process polypropylene catalyzer reactor instead of a laboratory emulator.
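A compact sketch of the PCA-with-KDE idea is shown below: PCA is fitted on normal operating data, the T² statistic is computed, and the alarm limit is taken from a kernel density estimate of that statistic rather than from a Gaussian assumption. The data are synthetic and the 99% limit is an assumed choice, not the paper's settings.

```python
# PCA monitoring with a KDE-based control limit (illustrative sketch).
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 6))                   # normal operating data (toy)

pca = PCA(n_components=3).fit(X_train)
def t2(X):
    """Hotelling-style T^2 statistic in the retained PCA subspace."""
    scores = pca.transform(X)
    return np.sum(scores**2 / pca.explained_variance_, axis=1)

t2_train = t2(X_train)
kde = gaussian_kde(t2_train)                          # nonparametric density of T^2
grid = np.linspace(0, t2_train.max() * 3, 2000)
cdf = np.cumsum(kde(grid)); cdf /= cdf[-1]
limit = grid[np.searchsorted(cdf, 0.99)]              # 99% KDE control limit

X_new = rng.normal(size=(20, 6)) + np.array([0, 0, 0, 0, 0, 3.0])  # shifted fault
print("alarms:", np.sum(t2(X_new) > limit), "of", len(X_new))
```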
A television-based multistatic radar system is described. A commercial television transmitter is used as the illuminator in the multistatic radar system, and the reflected television signals are measured by an array of sensors. A data processing scheme is developed that accommodates the limited signal processing capability. The innovation is focused on the construction of the observation space, which reduces the nonlinearity error. The new method leads to better system stability than the traditional one. Monte Carlo simulation is used to compare it with the traditional method.
Large structures, such as bridges and highways, need to be inspected to evaluate their actual physical and functional condition, to predict future conditions, and to help decision makers allocate maintenance and rehabilitation resources. The assessment of civil infrastructure condition is carried out through information obtained by inspection and/or monitoring operations. Traditional techniques in structural health monitoring (SHM) involve visual inspection against inspection standards, with data collection that can be time-consuming, expensive, labor-intensive, and dangerous. To address these limitations, machine vision-based inspection procedures have increasingly been investigated within the research community. In this context, this paper proposes and compares four different computer vision procedures to identify damage by image processing: Otsu thresholding, Markov random field segmentation, the RGB color detection technique, and the K-means clustering algorithm. The first method is based on segmentation by thresholding and returns a binary image from a grayscale image. The Markov random field technique uses a probabilistic approach to assign labels that model the spatial dependencies among image pixels. The RGB technique uses color detection to evaluate the extent of defects. Finally, the K-means algorithm clusters the images based on Euclidean distance. The benefits and limitations of each technique are discussed, and the challenges of using them are highlighted. To show the effectiveness of the described techniques in damage detection for civil infrastructures, a case study is presented. Results show that various types of corrosion and cracks can be detected by image processing techniques, making the proposed techniques a suitable tool for predicting damage evolution in civil infrastructures.
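Two of the four procedures, Otsu thresholding and K-means color clustering, can be sketched with standard OpenCV/scikit-learn calls as below; parameter choices such as the number of clusters are assumptions, not the paper's tuned values.

```python
# Otsu thresholding and K-means pixel clustering (illustrative, untuned).
import cv2
import numpy as np
from sklearn.cluster import KMeans

def otsu_damage_mask(gray):
    """Binary mask from Otsu's global threshold."""
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask

def kmeans_segments(bgr, k=3, seed=0):
    """Cluster pixels by color; returns a label image with k segments."""
    pixels = bgr.reshape(-1, 3).astype(np.float32)
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(pixels)
    return labels.reshape(bgr.shape[:2])

# usage:
# img = cv2.imread("girder.jpg"); gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# mask = otsu_damage_mask(gray); segments = kmeans_segments(img)
```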
Sentiment analysis (SA) is the procedure of recognizing the emotions present in social networking data. The existence of sarcasm in textual data is a major challenge to the efficiency of SA. Earlier works on sarcasm detection in text utilize lexical as well as pragmatic cues, namely interjections, punctuation, and sentiment shift, which are vital indicators of sarcasm. With the advent of deep learning, recent works leverage neural networks to learn lexical and contextual features, removing the need for handcrafted features. In this aspect, this study designs a deep learning with natural language processing enabled SA (DLNLP-SA) technique for sarcasm classification. The proposed DLNLP-SA technique aims to detect and classify the occurrence of sarcasm in the input data. The DLNLP-SA technique comprises several sub-processes, namely preprocessing, feature vector conversion, and classification. Initially, preprocessing is performed in diverse ways such as single-character removal, multi-space removal, URL removal, stopword removal, and tokenization. Secondly, the transformation into feature vectors takes place using the N-gram feature vector technique. Finally, a mayfly optimization (MFO) with multi-head self-attention based gated recurrent unit (MHSA-GRU) model is employed for the detection and classification of sarcasm. To verify the enhanced outcomes of the DLNLP-SA model, a comprehensive experimental investigation is performed on the News Headlines Dataset from the Kaggle repository, and the results signify its supremacy over existing approaches.
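A toy sketch of the preprocessing and N-gram feature-vector steps follows, with a plain logistic-regression classifier standing in for the MFO-tuned MHSA-GRU model; the two example headlines and their labels are illustrative only.

```python
# Preprocessing + N-gram feature vectors, with a simple stand-in classifier.
import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def preprocess(text):
    text = re.sub(r"https?://\S+", " ", text)         # URL removal
    text = re.sub(r"\b\w\b", " ", text)               # single-character removal
    return re.sub(r"\s+", " ", text).strip().lower()  # multi-space removal

headlines = ["thirtysomething scientists unveil doomsday clock of hair loss",
             "dem rep. totally nails why congress is falling short on gender, racial equality"]
labels = [1, 0]                                       # 1 = sarcastic, 0 = not (toy labels)

model = make_pipeline(
    CountVectorizer(ngram_range=(1, 3), stop_words="english"),   # N-gram vectors
    LogisticRegression(max_iter=1000))                # stand-in for MHSA-GRU
model.fit([preprocess(h) for h in headlines], labels)
print(model.predict([preprocess("report: world just not ready for honest headline")]))
```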
Scientists are dedicated to studying the detection of Alzheimer's disease onset to find a cure, or at the very least, medication that can slow the progression of the disease. This article explores the effectiveness of longitudinal data analysis, artificial intelligence, and machine learning approaches based on magnetic resonance imaging and positron emission tomography neuroimaging modalities for progression estimation and the detection of Alzheimer's disease onset. The significance of feature extraction in highly complex neuroimaging data, the identification of vulnerable brain regions, and the determination of threshold values for plaques, tangles, and neurodegeneration of these regions are extensively evaluated. Developing automated methods to improve the aforementioned research areas would enable specialists to determine the progression of the disease and find the link between the biomarkers and more accurate detection of Alzheimer's disease onset.
A survey of the population densities of rice planthoppers is important for forecasting decisions and efficient control. Traditional manual surveying of rice planthoppers is time-consuming, fatiguing, and subjective. A new three-layer detection method was proposed to detect and identify white-backed planthoppers (WBPHs, Sogatella furcifera (Horvath)) and their developmental stages using image processing. In the first two detection layers, we used an AdaBoost classifier trained on histogram of oriented gradient (HOG) features and a support vector machine (SVM) classifier trained on Gabor and Local Binary Pattern (LBP) features to detect WBPHs and remove impurities. We achieved a detection rate of 85.6% and a false detection rate of 10.2%. In the third detection layer, an SVM classifier trained on HOG features was used to identify the different developmental stages of the WBPHs, and we achieved an identification rate of 73.1%, a false identification rate of 23.3%, and a 5.6% false detection rate for images without WBPHs. The proposed three-layer detection method is feasible and effective for identifying the different developmental stages of planthoppers on rice plants in paddy fields.
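The HOG-plus-SVM layer can be sketched as below with scikit-image and scikit-learn; the random patches and labels are placeholders for the labeled field images, and the AdaBoost, Gabor, and LBP stages are omitted.

```python
# HOG features from image patches fed to an SVM classifier (illustrative only).
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_features(patch):
    """HOG descriptor for a grayscale patch (e.g. 64x64, values in [0, 1])."""
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

rng = np.random.default_rng(0)
patches = rng.random((40, 64, 64))                    # stand-in for labeled crops
labels = rng.integers(0, 2, size=40)                  # 1 = planthopper, 0 = background
X = np.array([hog_features(p) for p in patches])

clf = SVC(kernel="rbf", C=1.0).fit(X, labels)
print(clf.predict(hog_features(rng.random((64, 64)))[None]))
```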
Perinatal hypoxic-ischemic encephalopathy significantly contributes to neonatal death and life-long disability such as cerebral palsy. Advances in signal processing and machine learning have provided the research community with an opportunity to develop automated, real-time identification techniques to detect the signs of hypoxic-ischemic encephalopathy in larger electroencephalography/amplitude-integrated electroencephalography data sets more easily. This review details the recent achievements, by a number of prominent research groups across the world, in the automatic identification and classification of hypoxic-ischemic epileptiform neonatal seizures using advanced signal processing and machine learning techniques. This review also addresses the clinical challenges that current automated techniques face before they can be fully utilized by clinicians, and highlights the importance of upgrading current clinical bedside sampling frequencies to higher sampling rates in order to provide better hypoxic-ischemic biomarker detection frameworks. Additionally, the article highlights that current clinical automated epileptiform detection strategies for human neonates have been concerned only with seizure detection after the therapeutic latent phase of injury. Recent animal studies, however, have demonstrated that the latent phase of opportunity is critically important for early diagnosis of hypoxic-ischemic encephalopathy electroencephalography biomarkers; although difficult, detection strategies could use biomarkers in the latent phase to also predict the onset of future seizures.
This paper presents a fault diagnosis method for process faults and sensor faults in a class of nonlinear uncertain systems. The fault detection and isolation architecture consists of a fault detection estimator and a bank of adaptive isolation estimators, each corresponding to a particular fault type. Adaptive thresholds for fault detection and isolation are presented. Fault detectability conditions characterizing the class of process faults and sensor faults that are detectable by the presented method are derived. A simulation example of a robotic arm is used to illustrate the effectiveness of the fault diagnosis method.
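As a generic illustration of residual-based detection with an adaptive threshold (not the estimators derived in the paper), the sketch below flags a fault when the output residual exceeds a time-varying uncertainty bound; the bound and the injected fault are assumed toy quantities.

```python
# Toy residual-based fault detection with an adaptive (time-varying) threshold.
import numpy as np

def detect_fault(y_meas, y_model, uncertainty_bound, margin=1.5):
    """Return boolean fault flags per sample.
    y_meas, y_model: measured and estimated outputs (arrays of equal length)
    uncertainty_bound: time-varying bound on the modeling uncertainty (assumed)"""
    residual = np.abs(y_meas - y_model)
    threshold = margin * uncertainty_bound            # adaptive, not a fixed constant
    return residual > threshold

t = np.linspace(0, 10, 1000)
y_model = np.sin(t)                                   # nominal-model output estimate
bound = 0.05 + 0.02 * np.abs(np.cos(t))               # assumed uncertainty envelope
y_meas = y_model + 0.03 * np.random.randn(t.size)
y_meas[600:] += 0.5                                   # inject an abrupt sensor fault
flags = detect_fault(y_meas, y_model, bound)
print("first alarm at t =", t[np.argmax(flags)])
```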
Observing and analyzing surface images is critical for studying the interaction between plasma and irradiated plasma-facing materials. This paper presents a method for the automatic recognition of bubbles in transmission electron microscope (TEM) images of W nanofibers using image processing techniques and a convolutional neural network (CNN). We employ a three-stage approach consisting of Otsu, local-threshold, and watershed segmentation to extract bubbles from noisy images. To address over-segmentation, we propose a combination of an area factor and radial pixel intensity scanning. A CNN is used to recognize bubbles, outperforming traditional neural network models such as AlexNet and GoogLeNet with an accuracy of 97.1% and a recall of 98.6%. Our method is tested on both clear and blurred TEM images and demonstrates human-like performance in recognizing bubbles. This work contributes to the development of quantitative image analysis in the field of plasma-material interactions, offering a scalable solution for analyzing material defects. Overall, this study's findings establish the potential of automatic defect recognition and its applications in the assessment of plasma-material interactions. The method can be employed in a variety of specialties, including plasma physics and materials science.
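The Otsu-plus-watershed stage can be sketched with the standard OpenCV marker-based watershed recipe shown below; the local-threshold stage and the area-factor/radial-scanning fixes for over-segmentation are omitted, and the foreground threshold fraction is an assumption.

```python
# Standard Otsu + marker-based watershed segmentation (illustrative recipe).
import cv2
import numpy as np

def segment_blobs(gray):
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
    _, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, cv2.THRESH_BINARY)
    sure_bg = cv2.dilate(binary, np.ones((3, 3), np.uint8), iterations=3)
    unknown = cv2.subtract(sure_bg, sure_fg.astype(np.uint8))
    n_markers, markers = cv2.connectedComponents(sure_fg.astype(np.uint8))
    markers = markers + 1                             # background label becomes 1
    markers[unknown == 255] = 0                       # unknown region left for watershed
    color = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
    markers = cv2.watershed(color, markers)           # boundaries marked with -1
    return markers, n_markers - 1                     # label image, blob count

# usage: markers, count = segment_blobs(cv2.imread("tem.png", cv2.IMREAD_GRAYSCALE))
```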
Based on the cognitive radar concept and the basic connotation of cognitive skywave over-the-horizon radar (SWOTHR), the system structure and information processing mechanism of cognitive SWOTHR are studied. A hybrid network system architecture, consisting of a distributed configuration combined with centralized cognition, and its software/hardware framework with sense-detection integration are proposed. An information processing frame based on the lens principle and an information processing flow with receive-transmit joint adaptation are designed, which build and parse the working law for cognition and its self-feedback adjustment through the lens focus model and a five-stage information processing sequence. A system simulation and a performance analysis and comparison are then provided, which initially prove the rationality and advantages of the proposed ideas. Finally, four important development directions of future SWOTHR toward a "high-frequency intelligent information processing system" are discussed: scene information fusion, dynamically reconfigurable systems, hierarchical and modular design, and sustainable development. The conclusion is that cognitive SWOTHR can deliver improved performance.
An improved detection method is proposed to address the disadvantages of the existing image-processing-based test of optical axis parallelism for a shipboard photoelectric theodolite (hereafter, theodolite). A pointolite replaces the 0.2'' collimator to reduce the errors in processing the crosshair images and improve image quality. Moreover, the higher-quality images help to optimize the image processing method and the testing accuracy. The errors between the trial results interpreted by the software and the results tested in dock were less than 10'', which indicates that the improved method has practical application value.
On-line monitoring and fault diagnosis of chemical processes are extremely important for operation safety and product quality. Principal component analysis (PCA) has been widely used in multivariate statistical process monitoring for its ability to reduce process dimensionality. PCA and other statistical techniques, however, have difficulties in differentiating faults correctly in complex chemical processes. The support vector machine (SVM) is a novel approach based on statistical learning theory, which has emerged for feature identification and classification. In this paper, an integrated method is applied for process monitoring and fault diagnosis, which combines PCA for fault feature extraction and multiple SVMs for identification of different fault sources. This approach is verified and illustrated on the Tennessee Eastman benchmark process as a case study. Results show that the proposed PCA-SVMs method has good diagnostic capability and a high overall diagnosis correctness rate.
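A minimal scikit-learn rendition of the PCA-SVM combination might look like the following; it uses synthetic data rather than the Tennessee Eastman process and a single multi-class SVM in place of the bank of SVMs.

```python
# PCA for feature extraction followed by an SVM fault classifier (sketch).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic "process" data: 3 fault classes, each shifting a different variable.
X = rng.normal(size=(300, 20))
y = np.repeat([0, 1, 2], 100)                         # fault source labels
for cls, var in [(1, 3), (2, 7)]:
    X[y == cls, var] += 2.5                           # class-specific mean shift

model = make_pipeline(StandardScaler(),
                      PCA(n_components=5),            # fault feature extraction
                      SVC(kernel="rbf", C=10.0))      # fault source classification
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```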