Sentiment analysis is becoming increasingly important in today’s digital age, with social media being a significant source of user-generated content. The development of sentiment lexicons that can support languages other than English is a challenging task, especially for sentiment analysis of social media reviews. Most existing sentiment analysis systems focus on English, leaving a significant research gap in other languages due to limited resources and tools. This research aims to address this gap by building a sentiment lexicon for local languages, which is then used with a machine learning algorithm for efficient sentiment analysis. In the first step, a lexicon is developed that includes five languages: Urdu, Roman Urdu, Pashto, Roman Pashto, and English. The sentiment scores from SentiWordNet are associated with each word in the lexicon to produce an effective sentiment score. In the second step, a naive Bayes algorithm is applied to the developed lexicon for efficient sentiment analysis of Roman Pashto. Both the sentiment lexicon and sentiment analysis steps were evaluated using information retrieval metrics, with an accuracy score of 0.89 for the sentiment lexicon and 0.83 for the sentiment analysis. The results showcase the potential for improving software engineering tasks related to user feedback analysis and product development.
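The second step described above, classifying lexicon-scored text with naive Bayes, can be sketched as follows. This is a minimal multinomial naive Bayes with Laplace smoothing over toy, hypothetical Roman Pashto-style snippets; it is not the paper's lexicon or training data.

```python
import math
from collections import defaultdict

# Toy labeled snippets (hypothetical examples, not from the paper's lexicon).
train = [
    ("kha da", "pos"), ("der kha", "pos"),
    ("bad da", "neg"), ("der bad", "neg"),
]

def train_nb(data):
    """Count words per class for multinomial naive Bayes."""
    word_counts = {"pos": defaultdict(int), "neg": defaultdict(int)}
    class_counts = defaultdict(int)
    vocab = set()
    for text, label in data:
        class_counts[label] += 1
        for w in text.split():
            word_counts[label][w] += 1
            vocab.add(w)
    return word_counts, class_counts, vocab

def classify(text, word_counts, class_counts, vocab):
    """Pick the class with the highest log-posterior, with Laplace smoothing."""
    total = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for label in class_counts:
        lp = math.log(class_counts[label] / total)
        n_label = sum(word_counts[label].values())
        for w in text.split():
            lp += math.log((word_counts[label][w] + 1) / (n_label + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

model = train_nb(train)
print(classify("kha", *model))  # -> pos: the word appears only in positive examples
```

In a full system, the per-word counts would be replaced or weighted by the SentiWordNet-derived lexicon scores.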
The rise or fall of the stock markets directly affects investors’ interest and loyalty. Therefore, it is necessary to measure the performance of stocks in the market in advance to prevent our assets from suffering significant losses. In our proposed study, six supervised machine learning (ML) strategies and deep learning (DL) models with long short-term memory (LSTM) were deployed for thorough analysis and measurement of the performance of technology stocks. Under discussion are Apple Inc. (AAPL), Microsoft Corporation (MSFT), Broadcom Inc., Taiwan Semiconductor Manufacturing Company Limited (TSM), NVIDIA Corporation (NVDA), and Avigilon Corporation (AVGO). The datasets were taken from the Yahoo Finance API from 06-05-2005 to 06-05-2022 (seventeen years), with 4280 samples. As already noted, multiple studies have addressed this problem using linear regression, support vector machines, deep long short-term memory (LSTM), and many other models. In this research, the Hidden Markov Model (HMM) outperformed the other employed machine learning ensembles, tree-based models, the ARIMA (Auto Regressive Integrated Moving Average) model, and long short-term memory, with a robust mean accuracy score of 99.98. Other statistical analyses and measurements for the machine learning ensemble algorithms, the long short-term memory model, and ARIMA were also carried out to further investigate the performance of advanced models for forecasting time series data. Thus, the proposed research found the best model to be HMM, with LSTM the second-best model, performing well in all aspects. The developed model will be highly recommended and helpful for early measurement of technology stock performance, supporting investment or withdrawal decisions based on future stock rises or falls and the creation of smart environments.
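The core of HMM-based regime analysis, filtering the hidden market state from observed returns, can be sketched with the forward algorithm. The two-state (bull/bear) model and all parameters below are illustrative assumptions, not values fitted to the paper's Yahoo Finance data.

```python
import numpy as np

# Toy 2-state HMM over discretized daily returns: observation 0 = down day, 1 = up day.
A = np.array([[0.8, 0.2],    # state transitions: bull tends to stay bull, etc.
              [0.3, 0.7]])
B = np.array([[0.3, 0.7],    # emission probabilities: bull state mostly emits "up"
              [0.8, 0.2]])   # bear state mostly emits "down"
pi = np.array([0.5, 0.5])    # uniform initial state distribution

def forward(obs):
    """Forward algorithm: unnormalized filtered state probabilities."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha

alpha = forward([1, 1, 1, 0, 0, 0])   # three up days, then three down days
posterior = alpha / alpha.sum()
print(posterior.argmax())  # -> 1: after three down days, the bear state dominates
```

A real pipeline would instead fit the parameters to return sequences (e.g. with Baum-Welch) and forecast from the filtered state.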
Mortar pumpability is essential in the construction industry; estimating it manually requires much labor and often causes material waste. This paper proposes an effective method that combines a 3-dimensional convolutional neural network (3D CNN) with a 2-dimensional convolutional long short-term memory network (ConvLSTM2D) to automatically classify mortar pumpability. Experiment results show that the proposed model has an accuracy rate of 100% with a fast convergence speed, based on a dataset organized by collecting the corresponding mortar image sequences. This work demonstrates the feasibility of using computer vision and deep learning for mortar pumpability classification.
This paper explores the reform and practice of software engineering-related courses based on the competency model of the Computing Curricula and proposes measures for teaching reform and talent cultivation in software engineering. The teaching reform emphasizes student-centered education and focuses on the cultivation and enhancement of students’ knowledge, skills, and dispositions. Based on the three elements of the competency model, specific reform measures are proposed for several professional courses in software engineering: strengthening course relevance, improving knowledge systems, reforming practical modes with a focus on skill development, and cultivating good dispositions through student-centered education. The reform is trialed in courses such as Advanced Web Technologies, Software Engineering, and Intelligent Terminal Systems and Application Development. Analysis and comparison of the implementation effects show significant improvements in teaching effectiveness: students’ mastery of knowledge and skills is noticeably improved, and the expected goals of the teaching reform are achieved.
The development of defect prediction plays a significant role in improving software quality. Such predictions are used to identify defective modules before testing and to minimize time and cost. Software with defects negatively impacts operational costs and ultimately affects customer satisfaction. Numerous approaches exist to predict software defects; however, timely and accurate prediction remains a major challenge. To improve timely and accurate software defect prediction, a novel technique called Nonparametric Statistical feature scaled QuAdratic regressive convolution Deep nEural Network (SQADEN) is introduced. The proposed SQADEN technique mainly includes two major processes, namely metric (feature) selection and classification. First, SQADEN uses the nonparametric statistical Torgerson–Gower scaling technique to identify the relevant software metrics, measuring similarity with the dice coefficient. The feature selection process is used to minimize the time complexity of software fault prediction. With the selected metrics, software faults are predicted using Quadratic Censored regressive convolution deep neural network-based classification. The deep learning classifier analyzes the training and testing samples using the contingency correlation coefficient. The softstep activation function is used to provide the final fault prediction results. To minimize the error, the Nelder–Mead method is applied to solve non-linear least-squares problems. Finally, accurate classification results with a minimum error are obtained at the output layer. Experimental evaluation is carried out with different quantitative metrics such as accuracy, precision, recall, F-measure, and time complexity. The analyzed results demonstrate the superior performance of the proposed SQADEN technique, with accuracy, sensitivity, and specificity higher by 3%, 3%, 2%, and 3%, and time and space lower by 13% and 15%, compared with two state-of-the-art methods.
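The dice-coefficient similarity used in the metric-selection step can be sketched as follows. The binary metric-presence vectors are hypothetical illustrations, not the paper's software metrics.

```python
def dice(a, b):
    """Dice coefficient between two binary vectors: 2|A ∩ B| / (|A| + |B|)."""
    inter = sum(x and y for x, y in zip(a, b))
    size = sum(a) + sum(b)
    return 2 * inter / size if size else 0.0

# Hypothetical presence vectors for two software metrics across 8 modules:
# 1 means the module scores high on that metric.
loc_high = [1, 1, 0, 1, 0, 0, 1, 0]   # e.g. high lines of code
cc_high  = [1, 1, 0, 0, 0, 0, 1, 0]   # e.g. high cyclomatic complexity
print(round(dice(loc_high, cc_high), 3))  # -> 0.857: the two metrics are largely redundant
```

In a feature-selection setting, highly similar metric pairs like this are candidates for pruning, shrinking the feature space before classification.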
Model checking is an automated formal verification method for checking whether epistemic multi-agent systems adhere to property specifications. Although there is an extensive literature on qualitative properties such as safety and liveness, quantitative and uncertain property verification for these systems is still lacking. In uncertain environments, agents must make judicious decisions based on subjective epistemic knowledge. To verify epistemic and measurable properties of multi-agent systems, this paper extends fuzzy computation tree logic by introducing epistemic modalities, yielding a new Fuzzy Computation Tree Logic of Knowledge (FCTLK). We represent fuzzy multi-agent systems as distributed knowledge bases with fuzzy epistemic interpreted systems. In addition, we provide a transformation algorithm from fuzzy epistemic interpreted systems to fuzzy Kripke structures, as well as transformation rules from FCTLK formulas to Fuzzy Computation Tree Logic (FCTL) formulas. Accordingly, we reduce the FCTLK model checking problem to FCTL model checking, which enables the verification of FCTLK formulas with the fuzzy model checking algorithm of FCTL without additional computational overhead. Finally, we present correctness proofs and complexity analyses of the proposed algorithms, and illustrate the practical application of our approach with an example of a train control system.
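Fuzzy model checking over a fuzzy Kripke structure can be sketched on a tiny example. The three-state structure and the max-min semantics chosen for the EX operator below are illustrative assumptions for one common possibility-based interpretation; they are not the paper's FCTLK algorithm.

```python
# A toy fuzzy Kripke structure: transitions and atomic propositions take
# degrees in [0, 1] instead of crisp true/false.
trans = {                      # trans[s][t] = degree of the transition s -> t
    "s0": {"s1": 0.9, "s2": 0.4},
    "s1": {"s1": 1.0},
    "s2": {"s0": 0.7},
}
val_p = {"s0": 0.2, "s1": 0.8, "s2": 0.6}   # fuzzy valuation of proposition p

def ex(values, state):
    """Fuzzy EX under max-min semantics: the best degree, over successors,
    of taking a transition into a state where the formula holds."""
    return max(min(d, values[t]) for t, d in trans[state].items())

print(ex(val_p, "s0"))  # -> 0.8, achieved via s1: min(0.9, 0.8)
```

The full logic layers fixpoint computations for EU/EG and epistemic modalities on top of this one-step operator.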
For years, foot ulcers linked with diabetes mellitus and neuropathy have significantly impacted diabetic patients’ health-related quality of life (HRQoL). Diabetic foot ulcers (DFUs) impact 15% of all diabetic patients at some point in their lives. The facilities and resources used for DFU detection and treatment are only available at hospitals and clinics, which means feasible and timely detection at an early stage is often unavailable. This necessitates the development of an at-home DFU detection system that enables timely predictions and seamless communication with users, thereby preventing amputations due to neglect and severity. This paper proposes a feasible system consisting of three major modules: an IoT device that senses foot nodes by sending vibrations onto the foot sole; a supervised machine learning model that predicts the severity level of the DFU using four classification techniques, namely XGBoost, K-SVM, random forest, and decision tree; and a mobile application that acts as an interface between the sensors and the patient. Based on the severity levels, necessary steps for prevention, treatment, and medication are recommended via the application.
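The severity-classification module can be sketched with one of the four listed techniques, a random forest. The three sensor features, the synthetic clusters, and the severity labels 0-2 below are all hypothetical stand-ins for the paper's real sensor data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features per reading: [vibration response, temperature, pressure],
# with three synthetic severity clusters (labels 0, 1, 2) for illustration only.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=m, scale=0.3, size=(30, 3)) for m in (0.0, 1.5, 3.0)])
y = np.repeat([0, 1, 2], 30)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# A new reading close to the most severe cluster:
new_reading = [[2.9, 3.1, 3.0]]
print(clf.predict(new_reading))  # predicted severity level for the reading
```

In the proposed system, the predicted level would be sent to the mobile application, which then surfaces the matching prevention or treatment recommendation.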
The use of metamaterials enhances the performance of a specific class of antennas known as metamaterial antennas. The radiation cost and quality factor of the antenna are influenced by its size, and metamaterial antennas allow the bandwidth restriction on small antennas to be circumvented. In the existing literature, antenna parameters have been predicted using machine learning algorithms: machine learning can replace the manual process of experimenting to find the ideal simulated antenna parameters, with prediction accuracy depending primarily on the model used. In this paper, a novel method for forecasting the bandwidth of the metamaterial antenna is proposed, based on using the Pearson kernel as a standard kernel. Along with this approach, the paper suggests a unique hypersphere-based normalization for the dataset attributes and a dimensionality reduction method based on the Pearson kernel. A novel algorithm for optimizing the parameters of a Convolutional Neural Network (CNN), based on improved Bat Algorithm-based Optimization with Pearson Mutation (BAO-PM), is also presented. The prediction results of the proposed work are better than those of existing models in the literature.
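A Pearson kernel, in the sense of a kernel matrix whose entries are Pearson correlations between samples, can be sketched as follows. The antenna-parameter rows are hypothetical values, not the paper's dataset.

```python
import numpy as np

def pearson_kernel(X):
    """Kernel matrix whose (i, j) entry is the Pearson correlation of rows i and j."""
    Xc = X - X.mean(axis=1, keepdims=True)            # center each sample
    Xn = Xc / np.linalg.norm(Xc, axis=1, keepdims=True)  # unit-normalize
    return Xn @ Xn.T                                   # correlations via dot products

# Hypothetical antenna-parameter samples (rows), e.g. geometric dimensions.
X = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],    # perfectly correlated with row 0
              [3.0, 2.0, 1.0]])   # perfectly anti-correlated with row 0
K = pearson_kernel(X)
print(np.round(K, 3))
```

Such a matrix can then play the role of a standard kernel in kernel-based regression or dimensionality reduction, as the abstract describes.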
Knowledge graphs can assist in improving recommendation performance and are widely applied in various personalized recommendation domains. However, existing knowledge-aware recommendation methods face challenges such as weak user-item interaction supervisory signals and noise in the knowledge graph. To tackle these issues, this paper proposes a neighbor information contrast-enhanced recommendation method that adds subtle noise to construct contrast views and employs contrastive learning to strengthen supervisory signals and reduce knowledge noise. Specifically, the method first adopts heterogeneous propagation and knowledge-aware attention networks to obtain multi-order neighbor embeddings of users and items, mining their high-order neighbor information. Next, it introduces weak noise following a uniform distribution into the neighbor information to construct neighbor contrast views, effectively reducing the time overhead of view construction. Contrastive learning is then performed between neighbor views to promote the uniformity of view information, adjusting the neighbor structure and reducing the knowledge noise in the knowledge graph. Finally, multi-task learning is introduced to mitigate the problem of weak supervisory signals. To validate the effectiveness of the method, experiments are conducted on the MovieLens-1M, MovieLens-20M, Book-Crossing, and Last-FM datasets. The results show that, compared to the best baselines, the method achieves significant improvements in AUC and F1.
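The view-construction step, perturbing neighbor embeddings with weak uniform noise so that each item's two noisy copies form a positive pair, can be sketched as follows. The embeddings are synthetic, and this omits the heterogeneous propagation and attention networks of the full method.

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic neighbor embeddings for 5 items (nearly orthogonal, for a clear illustration).
emb = np.eye(5, 8) * 3 + rng.normal(scale=0.1, size=(5, 8))

def make_view(e, eps=0.05):
    """Contrast view: add weak uniform noise, then L2-normalize each row."""
    v = e + rng.uniform(-eps, eps, size=e.shape)
    return v / np.linalg.norm(v, axis=1, keepdims=True)

v1, v2 = make_view(emb), make_view(emb)   # two independently noised views
sim = v1 @ v2.T                            # cosine similarities between the views

# Each item should be most similar to its own noisy copy (the positive pair),
# which is exactly what a contrastive loss would then reinforce.
print((sim.argmax(axis=1) == np.arange(5)).all())  # -> True
```

A contrastive objective such as InfoNCE would maximize the diagonal of this similarity matrix relative to the off-diagonal entries.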
Cardiovascular disease is the leading cause of death globally. This disease causes loss of heart muscle and the death of heart cells, sometimes damaging their functionality. A person’s life may depend on receiving timely assistance as soon as possible; thus, early detection of heart attack (HA) symptoms can minimize the death ratio. In the United States alone, an estimated 610,000 people die from heart attacks each year, accounting for one in every four fatalities. By identifying and reporting heart attack symptoms early, it is possible to significantly reduce damage and save many lives. Our objective is to devise an algorithm aimed at helping individuals, particularly elderly individuals living independently, to safeguard their lives. To address these challenges, we employ deep learning techniques, utilizing a vision transformer (ViT). However, the ViT has a significant overhead cost due to its memory consumption and the computational complexity of scaled dot-product attention; moreover, since transformer performance typically relies on large-scale or adequate data, adapting the ViT to smaller datasets is challenging. In response, we propose a three-in-one stream model, the Multi-Head Attention Vision Hybrid (MHAVH). This model integrates a real-time posture recognition framework to identify chest pain postures indicative of heart attacks, using transfer learning techniques such as ResNet-50 and VGG-16, renowned for their robust feature extraction capabilities. By incorporating multiple heads into the vision transformer to generate additional metrics and enhance heart-attack-detection capabilities, we leverage a 2019 posture-based dataset comprising RGB images, a novel creation by the authors that marks the first dataset tailored for posture-based heart attack detection. Given the limited online data availability, we segmented this dataset into gender categories (male and female) and conducted testing on both the segmented and original datasets. The training accuracy of our model reached an impressive 99.77%. Upon testing, the accuracy for the male and female datasets was 92.87% and 75.47%, respectively; the combined dataset accuracy is 93.96%, showcasing a commendable overall performance. Our proposed approach demonstrates versatility in accommodating both small and large datasets, offering promising prospects for real-world applications.
Pupil dynamics are important characteristics for face spoofing detection. The face recognition system is one of the most used biometrics for authenticating individual identity, and its main threats are presentation attacks such as print attacks, 3D mask attacks, and replay attacks. The proposed model uses pupil characteristics for liveness detection during the authentication process. The pupillary light reflex is an involuntary reaction that controls the pupil’s diameter at different light intensities. The proposed framework consists of a two-phase methodology. In the first phase, the pupil’s diameter is measured by applying a light stimulus to one eye of the subject and calculating the constriction of the pupil size in both eyes across different video frames. These measurements are converted into a feature space using the parameters defined by the Kohn and Clynes model. A Support Vector Machine classifies subjects as legitimate when the diameter change is normal (the eye is alive) or illegitimate when there is no change, or abnormal oscillation of pupil behavior, due to a printed photograph, video, or 3D mask of the subject in front of the camera. In the second phase, facial recognition is performed. The scale-invariant feature transform (SIFT) is used to extract features from the facial images, each feature being a 128-dimensional vector. These features are scale-, rotation-, and orientation-invariant and are used for recognizing facial images. A brute-force matching algorithm matches the features of two different images, with a threshold of 0.08 for good matches. To analyze the performance of the framework, we tested our model on two face anti-spoofing datasets, the Replay-Attack and CASIA-SURF datasets, chosen because they contain videos of subjects with three modalities per sample (RGB, IR, Depth). The CASIA-SURF dataset showed an 89.9% Equal Error Rate, while the Replay-Attack dataset showed a 92.1% Equal Error Rate.
Early identification and treatment of stroke can greatly improve patient outcomes and quality of life. Although clinical tests such as the Cincinnati Pre-hospital Stroke Scale (CPSS) and the Face Arm Speech Test (FAST) are commonly used for stroke screening, accurate administration depends on specialized training. In this study, we proposed a novel multimodal deep learning approach, based on the FAST, for assessing suspected stroke patients exhibiting symptoms such as limb weakness, facial paresis, and speech disorders in acute settings. We collected a dataset comprising videos and audio recordings of emergency room patients performing designated limb movements, facial expressions, and speech tests based on the FAST. We compared the constructed deep learning model, designed to process multi-modal datasets, with six prior models that achieved good action classification performance: I3D, SlowFast, X3D, TPN, TimeSformer, and MViT. We found that the findings of our deep learning model had higher clinical value than those of the other approaches. Moreover, the multi-modal model outperformed its single-module variants, highlighting the benefit of utilizing multiple types of patient data, such as action videos and speech audio. These results indicate that a multi-modal deep learning model combined with the FAST could greatly improve the accuracy and sensitivity of early stroke identification, providing a practical and powerful tool for assessing stroke patients in an emergency clinical setting.
In today’s rapidly evolving landscape of communication technologies, ensuring the secure delivery of sensitive data has become an essential priority. To meet this need, researchers have proposed various steganography and data encryption methods to secure communications. Most of the proposed steganography techniques achieve higher embedding capacities without compromising visual imperceptibility using LSB substitution. In this work, we present an approach that utilizes a combination of Most Significant Bit (MSB) matching and Least Significant Bit (LSB) substitution. The proposed algorithm divides confidential messages into pairs of bits and connects them with the MSBs of individual pixels using pair matching, enabling the storage of 6 bits in one pixel by modifying a maximum of three bits. The proposed technique is evaluated using embedding capacity and Peak Signal-to-Noise Ratio (PSNR). Compared with the Zakariya scheme, the results showed a significant increase in data concealment capacity: our algorithm improves hiding capacity by 11% to 22% for different data samples while maintaining a minimum PSNR of 37 dB. These findings highlight the effectiveness and trustworthiness of the proposed algorithm in securing the communication process and maintaining visual integrity.
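For readers unfamiliar with LSB substitution, the baseline the abstract builds on can be sketched in a few lines. This is a classic 2-bit LSB embed/extract on grayscale pixel values, shown only as a point of reference; it is not the paper's MSB-matching scheme, which packs 6 bits per pixel.

```python
def embed_2lsb(pixels, bits):
    """Classic 2-bit LSB substitution: overwrite the two lowest bits of each pixel."""
    out = list(pixels)
    for i in range(0, len(bits), 2):
        pair = int(bits[i:i + 2], 2)                # next two message bits
        out[i // 2] = (out[i // 2] & ~0b11) | pair  # replace the pixel's two LSBs
    return out

def extract_2lsb(pixels, nbits):
    """Read the two lowest bits of each pixel back out as a bit string."""
    return "".join(format(p & 0b11, "02b") for p in pixels)[:nbits]

msg = "101101"
cover = [200, 117, 54]              # hypothetical grayscale pixel values
stego = embed_2lsb(cover, msg)
print(extract_2lsb(stego, len(msg)))  # -> "101101"
```

Each pixel changes by at most 3 in value here, which is why LSB-style schemes keep the PSNR high; the paper's MSB-matching step triples the per-pixel payload under a similar distortion budget.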
This paper focuses on the effective utilization of data augmentation techniques for 3D lidar point clouds to enhance the performance of neural network models. These point clouds, which represent spatial information through collections of 3D coordinates, have found wide-ranging applications, and data augmentation has emerged as a potent solution to the challenges posed by limited labeled data and the need to enhance model generalization. Much of the existing research is devoted to crafting novel augmentation methods specifically for 3D lidar point clouds; however, little attention has been paid to making the most of the numerous existing techniques. Addressing this deficiency, this research investigates combining two fundamental data augmentation strategies. The paper introduces PolarMix and Mix3D, two commonly employed augmentation techniques, and presents a new approach named RandomFusion. Instead of using a fixed or predetermined combination of augmentation methods, RandomFusion randomly chooses one method from a pool of options for each sample, augmenting each point in the point cloud with either PolarMix or Mix3D. Experiments validate the efficacy of the RandomFusion strategy in enhancing the performance of neural network models for 3D lidar point cloud semantic segmentation tasks, without compromising computational efficiency. By examining the potential of merging different augmentation techniques, the research contributes to a more comprehensive understanding of how to utilize existing augmentation methods for 3D lidar point clouds. The RandomFusion data augmentation technique offers a simple yet effective way to leverage the diversity of augmentation techniques and boost the robustness of models, and the insights gained can pave the way for future work on more advanced and efficient data augmentation strategies for 3D lidar point cloud analysis.
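The selection logic of RandomFusion, drawing one augmentation at random from a pool per sample, can be sketched as follows. The two augmentation functions are simplified geometric placeholders standing in for the real PolarMix and Mix3D operations.

```python
import random

def polarmix(points):
    """Stand-in for PolarMix: rotate points 90° about the z-axis (placeholder only)."""
    return [(-y, x, z) for x, y, z in points]

def mix3d(points):
    """Stand-in for Mix3D: jitter point positions slightly (placeholder only)."""
    return [(x + 0.01, y + 0.01, z) for x, y, z in points]

def random_fusion(sample, rng=random):
    """RandomFusion as described: pick one augmentation at random for each sample."""
    aug = rng.choice([polarmix, mix3d])
    return aug(sample)

cloud = [(1.0, 0.0, 0.2), (0.5, 0.5, 0.0)]   # a tiny hypothetical point cloud
augmented = random_fusion(cloud)
print(len(augmented) == len(cloud))  # -> True: augmentation preserves the point count
```

Because the choice is made independently per sample, a training run sees both augmentations in roughly equal proportion without any scheduling logic.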
Patients with mild traumatic brain injury have a diverse clinical presentation, and the underlying pathophysiology remains poorly understood. Magnetic resonance imaging is a non-invasive technique that has been widely utilized to investigate neurobiological markers after mild traumatic brain injury and has emerged as a promising tool for investigating its pathogenesis. Graph theory is a quantitative method for analyzing complex networks that has been widely used to study changes in brain structure and function. However, most previous mild traumatic brain injury studies using graph theory have focused on specific populations, with limited exploration of simultaneous abnormalities in structural and functional connectivity. Given that mild traumatic brain injury is the most common type of traumatic brain injury encountered in clinical practice, further investigation of patient characteristics and the evolution of structural and functional connectivity is critical. In the present study, we explored whether abnormal structural and functional connectivity in the acute phase could serve as indicators of longitudinal changes in imaging data and cognitive function in patients with mild traumatic brain injury. In this longitudinal study, we enrolled 46 patients with mild traumatic brain injury who were assessed within 2 weeks of injury, as well as 36 healthy controls. Resting-state functional magnetic resonance imaging and diffusion-weighted imaging data were acquired for graph theoretical network analysis. In the acute phase, patients with mild traumatic brain injury demonstrated reduced structural connectivity in the dorsal attention network. More than 3 months of follow-up data revealed signs of recovery in structural and functional connectivity, as well as cognitive function, in 22 of the 46 patients. Furthermore, better cognitive function was associated with more efficient networks. Finally, our data indicated that small-worldness in the acute stage could serve as a predictor of longitudinal changes in connectivity in patients with mild traumatic brain injury. These findings highlight the importance of integrating structural and functional connectivity in understanding the occurrence and evolution of mild traumatic brain injury. Additionally, exploratory analysis based on subnetworks could serve a predictive function in the prognosis of patients with mild traumatic brain injury.
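The small-worldness measure mentioned above is typically summarized from two graph metrics: the clustering coefficient C and the characteristic path length L. A minimal sketch with networkx, on toy graphs rather than the study's brain networks, looks like this (it is assumed networkx is available):

```python
import networkx as nx

# Small-world analysis compares clustering (C) and characteristic path length (L),
# usually against matched random graphs. Here we just compute C and L for a
# regular ring lattice and a partially rewired (more small-world-like) version.
lattice = nx.watts_strogatz_graph(n=10, k=4, p=0.0)              # regular ring lattice
rewired = nx.connected_watts_strogatz_graph(n=10, k=4, p=0.3, seed=1)

for name, g in [("lattice", lattice), ("rewired", rewired)]:
    C = nx.average_clustering(g)
    L = nx.average_shortest_path_length(g)
    print(name, round(C, 3), round(L, 3))
```

In a brain-network pipeline, the nodes would be brain regions, edges would come from the imaging-derived connectivity matrices, and C and L would be normalized by values from degree-matched random graphs to obtain the small-world coefficient.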
Online Signature Verification (OSV), as a personal identification technology, is widely used in various industries. However, it faces challenges such as incomplete feature extraction, low accuracy, and computational heaviness. To address these issues, we propose a novel approach for online signature verification using a one-dimensional Ghost-ACmix Residual Network (1D-ACGRNet), a Ghost-ACmix residual network that combines convolution with a self-attention mechanism and is further improved with the Ghost method. The Ghost-ACmix residual structure leverages both self-attention and convolution to capture global feature information and extract local information, effectively complementing whole and local signature features and mitigating insufficient feature extraction. The Ghost-based Convolution and Self-Attention (ACG) block is then proposed to simplify the parts shared between convolution and self-attention using the Ghost module, employing feature transformation to obtain intermediate features and thus reducing computational costs. Additionally, feature selection is performed using the random forest method, and the data is dimensionally reduced using Principal Component Analysis (PCA). Finally, tests on the MCYT-100 and SVC-2004 Task2 datasets give equal error rates (EERs) of 3.07% and 4.17%, respectively, for small-sample training with five genuine and forged signatures, and EERs of 0.91% and 2.12% for training with ten genuine and forged signatures. The experimental results illustrate that the proposed approach effectively enhances the accuracy of online signature verification.
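The equal error rate (EER) reported above is the operating point where the false-accept and false-reject rates coincide. A simple empirical computation over matcher scores can be sketched as follows; the genuine and forged score lists are toy values, not the MCYT-100 or SVC-2004 results.

```python
import numpy as np

def eer(genuine, forged):
    """Empirical equal error rate: sweep thresholds over the observed scores and
    return the (FAR + FRR) / 2 value where the two rates are closest.
    Scores are similarity-like: higher means more likely genuine."""
    thresholds = np.sort(np.concatenate([genuine, forged]))
    best = (2.0, 1.0)                       # (rate gap, candidate EER)
    for t in thresholds:
        far = np.mean(forged >= t)          # forgeries accepted at threshold t
        frr = np.mean(genuine < t)          # genuine signatures rejected at t
        best = min(best, (abs(far - frr), (far + frr) / 2))
    return best[1]

gen = np.array([0.9, 0.8, 0.85, 0.7, 0.95])   # hypothetical matcher scores
forg = np.array([0.3, 0.4, 0.55, 0.2, 0.35])
print(eer(gen, forg))  # -> 0.0: these toy score distributions separate perfectly
```

With overlapping score distributions, the same sweep lands on a nonzero crossing point, which is the figure papers report as the EER.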
Because a memristor with memory properties is an ideal electronic component for implementing artificial neural synaptic functions, a brand-new tristable locally active memristor model is first proposed in this paper. A novel four-dimensional fractional-order memristive cellular neural network (FO-MCNN) model with hidden attractors is then constructed to enhance the engineering feasibility of the original CNN model and its performance. Its hardware circuit implementation and complicated dynamic properties are investigated on multiple simulation platforms, and it is subsequently applied to secure communication scenarios. Using it as a pseudo-random number generator (PRNG), a new privacy image security scheme is designed based on the adaptive sampling rate compressive sensing (ASR-CS) model. Simulation analysis and comparative experiments show that the proposed data encryption scheme possesses strong immunity against various security attack models and satisfactory compression performance.
The mining sector historically drove the global economy, but at the expense of severe environmental and health repercussions, posing sustainability challenges [1]-[3]. Recent advancements in artificial intelligence (AI) are revolutionizing mining through robotic and data-driven innovations [4]-[7]. While AI offers the mining industry advantages, it is crucial to acknowledge the potential risks associated with its widespread use. Over-reliance on AI may lead to a loss of human control over mining operations in the future, resulting in unpredictable consequences.
Computer vision (CV) was developed so that computers and other systems can act or make recommendations based on visual inputs, such as digital photos, movies, and other media. Deep learning (DL) methods are more successful than traditional machine learning (ML) methods in CV, producing state-of-the-art results for difficult CV problems like image categorization, object detection, and face recognition. In this review, a structured discussion of the history, methods, and applications of DL methods for CV problems is presented. The sector-wise presentation of applications may be particularly useful for researchers in niche fields who have limited or introductory knowledge of DL methods and CV, providing context and examples of how these techniques can be applied to specific areas. A curated list of popular datasets, with brief descriptions, is also included for the benefit of readers.
The Chinese tree shrew (Tupaia belangeri chinensis) has emerged as a promising model for investigating adrenal steroid synthesis, but it is unclear whether the same cells produce steroid hormones and whether their production is regulated in the same way as in humans. Here, we comprehensively mapped the cell types and pathways of steroid metabolism in the adrenal gland of Chinese tree shrews using single-cell RNA sequencing, spatial transcriptome analysis, mass spectrometry, and immunohistochemistry. We compared the transcriptomes of various adrenal cell types across tree shrews, humans, macaques, and mice. Results showed that tree shrew adrenal glands expressed many of the same key enzymes for steroid synthesis as humans, including CYP11B2, CYP11B1, CYB5A, and CHGA. Biochemical analysis confirmed the production of aldosterone, cortisol, and dehydroepiandrosterone, but not dehydroepiandrosterone sulfate, in the tree shrew adrenal glands. Furthermore, genes in adrenal cell types in tree shrews were correlated with genetic risk factors for polycystic ovary syndrome, primary aldosteronism, hypertension, and related disorders in humans based on genome-wide association studies. Overall, this study suggests that the adrenal glands of Chinese tree shrews may consist of closely related cell populations with functional similarity to those of the human adrenal gland. Our comprehensive results (publicly available at http://gxmujyzmolab.cn:16245/scAGMap/) should facilitate the advancement of this animal model for the investigation of adrenal gland disorders.
Funding: Researchers Supporting Project Number (RSPD2024R576), King Saud University, Riyadh, Saudi Arabia.
Abstract: Sentiment analysis is becoming increasingly important in today’s digital age, with social media being a significant source of user-generated content. The development of sentiment lexicons that can support languages other than English is a challenging task, especially for sentiment analysis of social media reviews. Most existing sentiment analysis systems focus on English, leaving a significant research gap in other languages due to limited resources and tools. This research aims to address this gap by building a sentiment lexicon for local languages, which is then used with a machine learning algorithm for efficient sentiment analysis. In the first step, a lexicon is developed that includes five languages: Urdu, Roman Urdu, Pashto, Roman Pashto, and English. The sentiment scores from SentiWordNet are associated with each word in the lexicon to produce an effective sentiment score. In the second step, a naive Bayesian algorithm is applied to the developed lexicon for efficient sentiment analysis of Roman Pashto. Both the sentiment lexicon and sentiment analysis steps were evaluated using information retrieval metrics, with an accuracy score of 0.89 for the sentiment lexicon and 0.83 for the sentiment analysis. The results showcase the potential for improving software engineering tasks related to user feedback analysis and product development.
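The two-step pipeline this abstract describes (lexicon scoring, then naive Bayes classification) can be sketched in miniature. The lexicon entries and the four-review training corpus below are invented placeholders; the paper's real lexicon spans five languages with SentiWordNet-derived scores.

```python
import math
from collections import Counter, defaultdict

# Hypothetical miniature lexicon: invented Roman Pashto-like tokens mapped to
# SentiWordNet-style scores in [-1, 1]; placeholders for illustration only.
LEXICON = {"kha": 0.8, "der kha": 0.9, "kharab": -0.7, "bekara": -0.6}

def lexicon_score(tokens):
    """Step 1: aggregate sentiment score of a review from the lexicon."""
    return sum(LEXICON.get(t, 0.0) for t in tokens)

class NaiveBayes:
    """Step 2: multinomial naive Bayes with Laplace (add-one) smoothing."""
    def fit(self, data):
        self.word_counts = defaultdict(Counter)
        self.class_counts = Counter()
        self.vocab = set()
        for tokens, label in data:
            self.class_counts[label] += 1
            for t in tokens:
                self.word_counts[label][t] += 1
                self.vocab.add(t)
        return self

    def predict(self, tokens):
        best, best_lp = None, float("-inf")
        total = sum(self.class_counts.values())
        for c in self.class_counts:
            lp = math.log(self.class_counts[c] / total)  # log prior
            denom = sum(self.word_counts[c].values()) + len(self.vocab)
            for t in tokens:
                lp += math.log((self.word_counts[c][t] + 1) / denom)
            if lp > best_lp:
                best, best_lp = c, lp
        return best

# Invented four-review training corpus.
train = [(["kha", "film"], "pos"), (["kharab", "service"], "neg"),
         (["der kha"], "pos"), (["bekara", "film"], "neg")]
nb = NaiveBayes().fit(train)
print(nb.predict(["kha", "film"]))        # → pos
print(lexicon_score(["kharab", "film"]))  # → -0.7
```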
Funding: Supported by the Kyungpook National University Research Fund, 2020.
Abstract: The rise or fall of the stock markets directly affects investors’ interest and loyalty. Therefore, it is necessary to measure the performance of stocks in the market in advance to prevent our assets from suffering significant losses. In our proposed study, six supervised machine learning (ML) strategies and deep learning (DL) models with long short-term memory (LSTM) were deployed for thorough analysis and measurement of the performance of technology stocks. Under discussion are Apple Inc. (AAPL), Microsoft Corporation (MSFT), Broadcom Inc., Taiwan Semiconductor Manufacturing Company Limited (TSM), NVIDIA Corporation (NVDA), and Avigilon Corporation (AVGO). The datasets were taken from the Yahoo Finance API from 06-05-2005 to 06-05-2022 (seventeen years), with 4280 samples. As already noted, multiple studies have been performed to resolve this problem using linear regression, support vector machines, deep long short-term memory (LSTM), and many other models. In this research, the Hidden Markov Model (HMM) outperformed the other employed machine learning ensembles, tree-based models, the ARIMA (Auto Regressive Integrated Moving Average) model, and long short-term memory, with a robust mean accuracy score of 99.98. Other statistical analyses and measurements for the machine learning ensemble algorithms, the long short-term memory model, and ARIMA were also carried out for further investigation of the performance of advanced models for forecasting time series data. Thus, the proposed research found the best model to be the HMM, while LSTM was the second-best model and performed well in all aspects. The developed model is highly recommended and helpful for early measurement of technology stock performance, supporting investment or withdrawal decisions based on future stock rises or falls and the creation of smart environments.
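The HMM at the core of this abstract can be illustrated with a toy two-regime model over daily "up"/"down" moves, scored with the forward algorithm. All probabilities below are invented for illustration; a real system would estimate them from the return series (e.g. via Baum-Welch).

```python
import math

# Toy discrete HMM: two hidden market regimes emitting daily up/down moves.
states = ["bull", "bear"]
start = {"bull": 0.5, "bear": 0.5}
trans = {"bull": {"bull": 0.8, "bear": 0.2}, "bear": {"bull": 0.3, "bear": 0.7}}
emit = {"bull": {"up": 0.7, "down": 0.3}, "bear": {"up": 0.2, "down": 0.8}}

def forward_loglik(obs):
    """Log-likelihood of an observation sequence (forward algorithm with rescaling)."""
    alpha = {s: start[s] * emit[s][obs[0]] for s in states}
    log_scale = 0.0
    for o in obs[1:]:
        alpha = {s: emit[s][o] * sum(alpha[p] * trans[p][s] for p in states)
                 for s in states}
        norm = sum(alpha.values())      # rescale each step to avoid underflow
        alpha = {s: a / norm for s, a in alpha.items()}
        log_scale += math.log(norm)
    return log_scale + math.log(sum(alpha.values()))

# A mostly rising week scored under the toy model.
print(forward_loglik(["up", "up", "up", "down", "up"]))
```

Comparing such likelihoods across candidate regime models is one plausible way an HMM-based forecaster ranks hypotheses before predicting the next move.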
Funding: Supported by the Key Project of the National Natural Science Foundation of China-Civil Aviation Joint Fund under Grant No. U2033212.
Abstract: Mortar pumpability is essential in the construction industry; estimating it manually requires much labor and often causes material waste. This paper proposes an effective method that combines a 3-dimensional convolutional neural network (3D CNN) with a 2-dimensional convolutional long short-term memory network (ConvLSTM2D) to automatically classify mortar pumpability. Experiment results show that the proposed model has an accuracy rate of 100% with a fast convergence speed, based on a dataset organized by collecting the corresponding mortar image sequences. This work demonstrates the feasibility of using computer vision and deep learning for mortar pumpability classification.
Funding: Supported by the Teaching Reform Projects of Colleges in Hunan Province (No. HNJG-2022-1410, No. HNJG-2020-0489, No. HNJG-2022-0785, and No. HNJG-2022-0792), the Industry-University Cooperative Project of the Ministry of Education (No. 220506194233806), and the Teaching Reform Project of Hunan University of Science and Technology (No. 2020XXJG07).
Abstract: This paper explores the reform and practice of software engineering-related courses based on the competency model of the Computing Curricula and proposes measures for teaching reform and talent cultivation in software engineering. The teaching reform emphasizes student-centered education and focuses on cultivating and enhancing students’ knowledge, skills, and dispositions. Based on the three elements of the competency model, specific reform measures are proposed for several professional courses in software engineering: strengthening course relevance, improving knowledge systems, reforming practical modes with a focus on skill development, and cultivating good dispositions through student-centered education. The reform is trialed in courses such as Advanced Web Technologies, Software Engineering, and Intelligent Terminal Systems and Application Development. Analysis and comparison of the implementation effects show significant improvements in teaching effectiveness, students’ mastery of knowledge and skills is noticeably improved, and the expected goals of the teaching reform are achieved.
Abstract: The development of defect prediction plays a significant role in improving software quality. Such predictions are used to identify defective modules before testing and to minimize time and cost. Software with defects negatively impacts operational costs and ultimately affects customer satisfaction. Numerous approaches exist to predict software defects; however, timely and accurate prediction of software bugs remains a major challenge. To improve timely and accurate software defect prediction, a novel technique called the Nonparametric Statistical feature scaled QuAdratic regressive convolution Deep nEural Network (SQADEN) is introduced. The proposed SQADEN technique mainly includes two major processes, namely metric (feature) selection and classification. First, SQADEN uses the nonparametric statistical Torgerson-Gower scaling technique to identify the relevant software metrics by measuring similarity with the Dice coefficient. The feature selection process is used to minimize the time complexity of software fault prediction. With the selected metrics, software fault prediction is performed with the help of Quadratic Censored regressive convolution deep neural network-based classification. The deep learning classifier analyzes the training and testing samples using the contingency correlation coefficient. The softstep activation function is used to provide the final fault prediction results. To minimize the error, the Nelder-Mead method is applied to solve non-linear least-squares problems. Finally, accurate classification results with a minimum error are obtained at the output layer. Experimental evaluation is carried out with different quantitative metrics such as accuracy, precision, recall, F-measure, and time complexity. The analyzed results demonstrate the superior performance of the proposed SQADEN technique, with maximum accuracy, sensitivity, and specificity higher by 3%, 3%, 2%, and 3%, and minimum time and space lower by 13% and 15%, when compared with two state-of-the-art methods.
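The Dice-coefficient similarity step in the abstract's feature selection can be sketched on binarized metric profiles. The metric names and module sets below are invented, and the greedy redundancy filter is a simplification; the paper combines this with Torgerson-Gower scaling.

```python
# Hypothetical binarized metric profiles: for each software metric, the set of
# modules in which that metric exceeds its median value (invented names).
profiles = {
    "loc":        {"m1", "m2", "m3", "m5"},
    "complexity": {"m1", "m2", "m3", "m4"},
    "churn":      {"m2", "m6"},
}

def dice_coefficient(a, b):
    """Dice similarity of two sets: 2|A ∩ B| / (|A| + |B|)."""
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

def select_metrics(profiles, threshold=0.7):
    """Greedily keep a metric only if it is not too similar to any kept metric."""
    kept = []
    for name, prof in profiles.items():
        if all(dice_coefficient(prof, profiles[k]) < threshold for k in kept):
            kept.append(name)
    return kept

print(select_metrics(profiles))  # → ['loc', 'churn']
```

Here "complexity" is dropped because its Dice similarity to "loc" is 0.75, above the 0.7 threshold, which is the redundancy-pruning intuition behind similarity-based feature selection.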
Funding: Partially supported by the Natural Science Foundation of Ningxia (Grant No. AAC03300), the National Natural Science Foundation of China (Grant No. 61962001), and the Graduate Innovation Project of North Minzu University (Grant No. YCX23152).
Abstract: Model checking is an automated formal verification method for verifying whether epistemic multi-agent systems adhere to property specifications. Although there is an extensive literature on qualitative properties such as safety and liveness, there is still a lack of verification of quantitative and uncertain properties for these systems. In uncertain environments, agents must make judicious decisions based on subjective epistemic states. To verify epistemic and measurable properties in multi-agent systems, this paper extends fuzzy computation tree logic by introducing epistemic modalities and proposing a new Fuzzy Computation Tree Logic of Knowledge (FCTLK). We represent fuzzy multi-agent systems as distributed knowledge bases with fuzzy epistemic interpreted systems. In addition, we provide a transformation algorithm from fuzzy epistemic interpreted systems to fuzzy Kripke structures, as well as transformation rules from FCTLK formulas to Fuzzy Computation Tree Logic (FCTL) formulas. Accordingly, we transform the FCTLK model checking problem into FCTL model checking. This enables the verification of FCTLK formulas using the fuzzy model checking algorithm of FCTL without additional computational overhead. Finally, we present correctness proofs and complexity analyses of the proposed algorithms, and we further illustrate the practical application of our approach through an example of a train control system.
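To make the fuzzy-Kripke-structure setting concrete, here is a tiny evaluation of a fuzzy EX operator under the usual min/max (Gödel) semantics: the degree to which EX φ holds at a state is the best degree to which some one-step successor satisfies φ, capped by the fuzzy transition degree. The structure, proposition, and semantics below are illustrative assumptions; the paper's exact FCTL/FCTLK semantics may differ in detail.

```python
# Toy fuzzy Kripke structure: transition degrees in [0, 1] and a fuzzy
# atomic proposition "safe" assigning a truth degree to each state.
trans = {
    "s0": {"s1": 0.9, "s2": 0.4},
    "s1": {"s1": 1.0},
    "s2": {"s0": 0.6},
}
safe = {"s0": 0.2, "s1": 0.8, "s2": 0.5}

def ex(phi, state):
    """Fuzzy EX: max over successors t of min(transition degree, phi(t))."""
    return max(min(deg, phi[t]) for t, deg in trans[state].items())

print(ex(safe, "s0"))  # → 0.8
```

From s0, the path through s1 dominates: min(0.9, 0.8) = 0.8 beats min(0.4, 0.5) = 0.4. Nested temporal operators would be evaluated by fixpoint iteration over such state valuations.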
Abstract: For years, foot ulcers linked with diabetes mellitus and neuropathy have significantly impacted diabetic patients’ health-related quality of life (HRQoL). Diabetic foot ulcers (DFUs) affect 15% of all diabetic patients at some point in their lives. The facilities and resources used for DFU detection and treatment are only available at hospitals and clinics, which results in the unavailability of feasible and timely detection at an early stage. This necessitates the development of an at-home DFU detection system that enables timely predictions and seamless communication with users, thereby preventing amputations due to neglect and severity. This paper proposes a feasible system consisting of three major modules: an IoT device that senses foot nodes to send vibrations onto the foot sole; a machine learning model based on supervised learning that predicts the severity level of the DFU using four different classification techniques, including XGBoost, K-SVM, Random Forest, and Decision Tree; and a mobile application that acts as an interface between the sensors and the patient. Based on the severity levels, necessary steps for prevention, treatment, and medication are recommended via the application.
Abstract: The use of metamaterials enhances the performance of a specific class of antennas known as metamaterial antennas. The radiation cost and quality factor of the antenna are influenced by the size of the antenna. Metamaterial antennas allow for the circumvention of the bandwidth restriction for small antennas. Antenna parameters have recently been predicted using machine learning algorithms in the existing literature. Machine learning can take the place of the manual process of experimenting to find the ideal simulated antenna parameters. The accuracy of the prediction will be primarily dependent on the model that is used. In this paper, a novel method for forecasting the bandwidth of the metamaterial antenna is proposed, based on using the Pearson kernel as a standard kernel. Along with this new approach, this paper suggests a unique hypersphere-based normalization to normalize the values of the dataset attributes and a dimensionality reduction method based on the Pearson kernel to reduce the dimension. A novel algorithm for optimizing the parameters of a Convolutional Neural Network (CNN) based on improved Bat Algorithm-based Optimization with Pearson Mutation (BAO-PM) is also presented in this work. The prediction results of the proposed work are better than those of the existing models in the literature.
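One plausible reading of the "Pearson kernel" is a Gram matrix whose entries are Pearson correlations between samples; the abstract does not spell out its exact definition, so the sketch below is an assumption, shown only to make the idea of a correlation-based kernel concrete.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def pearson_kernel_matrix(samples):
    """Gram matrix K with K[i][j] = Pearson correlation of samples i and j."""
    return [[pearson(a, b) for b in samples] for a in samples]

# Three toy attribute vectors (invented): the first two are perfectly
# correlated, the third is anti-correlated with them.
K = pearson_kernel_matrix([[1.0, 2.0, 3.0], [2.0, 4.0, 6.0], [3.0, 2.0, 1.0]])
print(K[0][1], K[0][2])  # → 1.0 -1.0
```

Entries lie in [-1, 1], the diagonal is 1, and the matrix is symmetric, which is what makes a correlation matrix usable as a similarity kernel for downstream dimensionality reduction.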
Funding: Supported by the Natural Science Foundation of Ningxia Province (No. 2023AAC03316), the Ningxia Hui Autonomous Region Education Department Higher Education Key Scientific Research Project (No. NYG2022051), and the North Minzu University Graduate Innovation Project (YCX23146).
Abstract: Knowledge graphs can assist in improving recommendation performance and are widely applied in various personalized recommendation domains. However, existing knowledge-aware recommendation methods face challenges such as weak user-item interaction supervisory signals and noise in the knowledge graph. To tackle these issues, this paper proposes a neighbor information contrast-enhanced recommendation method that adds subtle noise to construct contrast views and employs contrastive learning to strengthen supervisory signals and reduce knowledge noise. Specifically, this paper first adopts heterogeneous propagation and knowledge-aware attention networks to obtain multi-order neighbor embeddings of users and items, mining their high-order neighbor information. Next, weak noise following a uniform distribution is introduced into the neighbor information to construct neighbor contrast views, effectively reducing the time overhead of view construction. Contrastive learning is then performed between the neighbor views to promote the uniformity of view information, adjusting the neighbor structure and thereby reducing the knowledge noise in the knowledge graph. Finally, multi-task learning is introduced to mitigate the problem of weak supervisory signals. To validate the effectiveness of our method, experiments are conducted on the MovieLens-1M, MovieLens-20M, Book-Crossing, and Last-FM datasets. The results show that, compared to the best baselines, our method achieves significant improvements in AUC and F1.
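The core mechanism here, perturbing embeddings with weak uniform noise to form two views and pulling matched views together with a contrastive loss, can be sketched with a standard InfoNCE objective. The embeddings and noise scale below are invented, and InfoNCE is one common contrastive loss, not necessarily the paper's exact formulation.

```python
import math
import random

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def add_uniform_noise(v, eps=0.01, rng=random):
    """Build a contrast view by perturbing an embedding with weak uniform noise."""
    return [x + rng.uniform(-eps, eps) for x in v]

def info_nce(view1, view2, tau=0.2):
    """Average InfoNCE loss: embedding i in view1 treats embedding i in view2
    as its positive and all other view2 embeddings as in-batch negatives."""
    loss = 0.0
    for i, a in enumerate(view1):
        logits = [cosine(a, b) / tau for b in view2]
        m = max(logits)                       # stable log-sum-exp
        log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
        loss += log_denom - logits[i]
    return loss / len(view1)

rng = random.Random(0)
emb = [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]]   # toy user/item embeddings
view1 = [add_uniform_noise(e, rng=rng) for e in emb]
view2 = [add_uniform_noise(e, rng=rng) for e in emb]
loss = info_nce(view1, view2)
print(loss)  # small, since matching noisy views agree
```

Because the two views differ only by cheap element-wise noise, view construction avoids the cost of re-running graph propagation, which is the time-overhead argument the abstract makes.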
Funding: Researchers Supporting Project Number (RSPD2024R576), King Saud University, Riyadh, Saudi Arabia.
Abstract: Cardiovascular disease is the leading cause of death globally. This disease causes loss of heart muscle and is also responsible for the death of heart cells, sometimes damaging their functionality. A person’s life may depend on receiving timely assistance as soon as possible. Thus, minimizing the death ratio can be achieved by early detection of heart attack (HA) symptoms. In the United States alone, an estimated 610,000 people die from heart attacks each year, accounting for one in every four fatalities. However, by identifying and reporting heart attack symptoms early on, it is possible to significantly reduce damage and save many lives. Our objective is to devise an algorithm aimed at helping individuals, particularly elderly individuals living independently, to safeguard their lives. To address these challenges, we employ deep learning techniques. We have utilized a vision transformer (ViT) to address this problem. However, it has a significant overhead cost due to its memory consumption and computational complexity arising from scaled dot-product attention. Also, since transformer performance typically relies on large-scale or adequate data, adapting the ViT to smaller datasets is more challenging. In response, we propose a three-in-one stream model, the Multi-Head Attention Vision Hybrid (MHAVH). This model integrates a real-time posture recognition framework to identify chest pain postures indicative of heart attacks, using transfer learning techniques such as ResNet-50 and VGG-16, renowned for their robust feature extraction capabilities. By incorporating multiple heads into the vision transformer to generate additional metrics and enhance heart-attack detection capabilities, we leverage a 2019 posture-based dataset comprising RGB images, a novel creation by the author that marks the first dataset tailored for posture-based heart attack detection. Given the limited online data availability, we segmented this dataset into gender categories (male and female) and conducted testing on both the segmented and original datasets. The training accuracy of our model reached an impressive 99.77%. Upon testing, the accuracy for the male and female datasets was recorded at 92.87% and 75.47%, respectively. The combined dataset accuracy is 93.96%, showcasing a commendable overall performance. Our proposed approach demonstrates versatility in accommodating both small and large datasets, offering promising prospects for real-world applications.
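The scaled dot-product attention whose cost the abstract highlights is a standard, well-defined operation, sketched below for a single head in pure Python (the multi-head and MHAVH-specific parts are not reproduced).

```python
import math

def softmax(xs):
    m = max(xs)                      # subtract max for numerical stability
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(Q, K, V):
    """Single-head scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.
    The n x n score matrix is the quadratic memory/compute overhead the
    abstract attributes to the ViT."""
    d = len(Q[0])
    out = []
    for q in Q:
        scores = softmax([sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                          for k in K])
        out.append([sum(w * v[j] for w, v in zip(scores, V))
                    for j in range(len(V[0]))])
    return out

I2 = [[1.0, 0.0], [0.0, 1.0]]
out = attention(I2, I2, I2)   # each query attends mostly to its matching key
```

A multi-head variant runs several such maps on learned projections of Q, K, V and concatenates the results, which is the mechanism MHAVH extends with additional heads.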
Funding: Funded by the Researchers Supporting Program at King Saud University (RSPD2023R809).
Abstract: Pupil dynamics are important characteristics for face spoofing detection. The face recognition system is one of the most used biometrics for authenticating individual identity. The main threats to a facial recognition system are different types of presentation attacks, such as print attacks, 3D mask attacks, and replay attacks. The proposed model uses pupil characteristics for liveness detection during the authentication process. The pupillary light reflex is an involuntary reaction controlling the pupil’s diameter at different light intensities. The proposed framework consists of a two-phase methodology. In the first phase, the pupil’s diameter is calculated by applying a stimulus (light) to one eye of the subject and calculating the constriction of the pupil size in both eyes across different video frames. The above measurement is converted into a feature space using parameters defined by the Kohn and Clynes model. A Support Vector Machine is used to classify legitimate subjects, when the diameter change is normal (the eye is alive), and illegitimate subjects, when there is no change or abnormal oscillation of pupil behavior due to the presence of a printed photograph, video, or 3D mask of the subject in front of the camera. In the second phase, we perform the facial recognition process. The scale-invariant feature transform (SIFT) is used to find features in the facial images, each feature being a 128-dimensional vector. These features are scale, rotation, and orientation invariant and are used for recognizing facial images. The brute-force matching algorithm is used for matching features of two different images, with a threshold value of 0.08 for good matches. To analyze the performance of the framework, we tested our model on two face anti-spoofing datasets, the Replay-Attack and CASIA-SURF datasets, which were used because they contain videos of the subjects with three modalities in each sample (RGB, IR, Depth). The CASIA-SURF datasets showed an 89.9% Equal Error Rate, while the Replay-Attack datasets showed a 92.1% Equal Error Rate.
Funding: Supported by the Ministry of Science and Technology of China, No. 2020AAA0109605 (to XL), and the Meizhou Major Scientific and Technological Innovation Platforms Projects of Guangdong Provincial Science & Technology Plan Projects, No. 2019A0102005 (to HW).
Abstract: Early identification and treatment of stroke can greatly improve patient outcomes and quality of life. Although clinical tests such as the Cincinnati Pre-hospital Stroke Scale (CPSS) and the Face Arm Speech Test (FAST) are commonly used for stroke screening, accurate administration depends on specialized training. In this study, we proposed a novel multimodal deep learning approach, based on the FAST, for assessing suspected stroke patients exhibiting symptoms such as limb weakness, facial paresis, and speech disorders in acute settings. We collected a dataset comprising videos and audio recordings of emergency room patients performing designated limb movements, facial expressions, and speech tests based on the FAST. We compared the constructed deep learning model, which was designed to process multi-modal datasets, with six prior models that achieved good action classification performance: I3D, SlowFast, X3D, TPN, TimeSformer, and MViT. We found that the findings of our deep learning model had a higher clinical value compared with the other approaches. Moreover, the multi-modal model outperformed its single-module variants, highlighting the benefit of utilizing multiple types of patient data, such as action videos and speech audio. These results indicate that a multi-modal deep learning model combined with the FAST could greatly improve the accuracy and sensitivity of early stroke identification, thus providing a practical and powerful tool for assessing stroke patients in an emergency clinical setting.
Funding: Supported in part by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2021R1A6A1A03039493), and by the 2024 Yeungnam University Research Grant.
Abstract: In today’s rapidly evolving landscape of communication technologies, ensuring the secure delivery of sensitive data has become an essential priority. To overcome these difficulties, researchers have proposed various steganography and data encryption methods to secure communications. Most of the proposed steganography techniques achieve higher embedding capacities without compromising visual imperceptibility by using LSB substitution. In this work, we present an approach that utilizes a combination of Most Significant Bit (MSB) matching and Least Significant Bit (LSB) substitution. The proposed algorithm divides confidential messages into pairs of bits and connects them with the MSBs of individual pixels using pair matching, enabling the storage of 6 bits in one pixel by modifying a maximum of three bits. The proposed technique is evaluated using embedding capacity and Peak Signal-to-Noise Ratio (PSNR) score. We compared our work with the Zakariya scheme, and the results showed a significant increase in data concealment capacity. The achieved results show that our algorithm improves hiding capacity by 11% to 22% for different data samples while maintaining a minimum PSNR of 37 dB. These findings highlight the effectiveness and trustworthiness of the proposed algorithm in securing the communication process and maintaining visual integrity.
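To illustrate the embed/extract round trip and the PSNR metric the abstract evaluates against, here is classic 1-bit LSB substitution, deliberately simpler than the paper's MSB-pair-matching scheme, on an invented six-pixel grayscale "cover".

```python
import math

def embed_lsb(pixels, bits):
    """Hide one message bit in the least significant bit of each pixel."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b    # clear the LSB, then set it to b
    return out

def extract_lsb(pixels, n):
    """Recover the first n hidden bits."""
    return [p & 1 for p in pixels[:n]]

def psnr(orig, stego, peak=255):
    """Peak Signal-to-Noise Ratio in dB between cover and stego pixels."""
    mse = sum((a - b) ** 2 for a, b in zip(orig, stego)) / len(orig)
    return float("inf") if mse == 0 else 10 * math.log10(peak * peak / mse)

cover = [120, 37, 255, 0, 86, 201]   # invented pixel values
bits = [1, 0, 1, 1]
stego = embed_lsb(cover, bits)
print(extract_lsb(stego, 4))  # → [1, 0, 1, 1]
print(psnr(cover, stego))     # ≈ 51 dB: each pixel changed by at most 1
```

Because LSB substitution changes each pixel value by at most 1, PSNR stays high, which is why capacity-oriented schemes like the paper's (6 bits per pixel, at most 3 bits modified) must trade capacity against the 37 dB floor it reports.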
Funding: Funded in part by the Key Project of Natural Science Research for Universities of Anhui Province of China (No. 2022AH051720), in part by the Science and Technology Development Fund, Macao SAR (Grant Nos. 0093/2022/A2, 0076/2022/A2, and 0008/2022/AGJ), and in part by the China University Industry-University-Research Collaborative Innovation Fund (No. 2021FNA04017).
Abstract: This paper focuses on the effective utilization of data augmentation techniques for 3D lidar point clouds to enhance the performance of neural network models. These point clouds, which represent spatial information through collections of 3D coordinates, have found wide-ranging applications. Data augmentation has emerged as a potent solution to the challenges posed by limited labeled data and the need to enhance model generalization capabilities. Much of the existing research is devoted to crafting novel data augmentation methods specifically for 3D lidar point clouds; however, there has been a lack of focus on making the most of the numerous existing augmentation techniques. Addressing this deficiency, this research investigates the possibility of combining two fundamental data augmentation strategies. The paper introduces PolarMix and Mix3D, two commonly employed augmentation techniques, and presents a new approach named RandomFusion. Instead of using a fixed or predetermined combination of augmentation methods, RandomFusion randomly chooses one method from a pool of options for each instance or sample, augmenting each point in the point cloud with either PolarMix or Mix3D. The results of the experiments conducted validate the efficacy of the RandomFusion strategy in enhancing the performance of neural network models for 3D lidar point cloud semantic segmentation tasks, without compromising computational efficiency. By examining the potential of merging different augmentation techniques, the research contributes significantly to a more comprehensive understanding of how to utilize existing augmentation methods for 3D lidar point clouds. The RandomFusion data augmentation technique offers a simple yet effective way to leverage the diversity of augmentation techniques and boost the robustness of models. The insights gained from this research can pave the way for future work aimed at developing more advanced and efficient data augmentation strategies for 3D lidar point cloud analysis.
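The per-point random choice at the heart of RandomFusion can be sketched as follows. Real PolarMix and Mix3D mix content between two scans, which is too heavy to reproduce here, so the two augmentations below are simplified stand-ins (a small z-axis rotation and Gaussian jitter); only the random-selection scaffolding mirrors the paper's idea.

```python
import math
import random

def polar_style_rotate(pt, rng):
    """Stand-in for PolarMix (which really swaps angular sectors between
    scans); here: a small random rotation about the z-axis."""
    theta = rng.uniform(-0.1, 0.1)
    x, y, z = pt
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta), z)

def mix3d_style_jitter(pt, rng):
    """Stand-in for Mix3D (which really blends two scenes); here: jitter."""
    return tuple(c + rng.gauss(0.0, 0.01) for c in pt)

def random_fusion(cloud, rng=None):
    """RandomFusion: augment each point with one randomly chosen technique."""
    rng = rng or random.Random()
    return [polar_style_rotate(p, rng) if rng.random() < 0.5
            else mix3d_style_jitter(p, rng) for p in cloud]

cloud = [(1.0, 0.0, 0.5), (0.0, 2.0, 1.0), (0.3, -0.4, 0.2)]
aug = random_fusion(cloud, random.Random(42))
print(len(aug))  # → 3
```

Sampling the augmentation independently per point (or per sample) is what lets the strategy inherit the diversity of its constituent methods without any tuning of fixed mixing ratios.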
Funding: Supported by the National Natural Science Foundation of China, Nos. 81671671 (to JL), 61971451 (to JL), U22A2034 (to XK), and 62177047 (to XK); the National Defense Science and Technology Collaborative Innovation Major Project of Central South University, No. 2021gfcx05 (to JL); the Clinical Research Center for Medical Imaging of Hunan Province, No. 2020SK4001 (to JL); the Key Emergency Project of the Pneumonia Epidemic of Novel Coronavirus Infection of Hunan Province, No. 2020SK3006 (to JL); the Innovative Special Construction Foundation of Hunan Province, No. 2019SK2131 (to JL); the Science and Technology Innovation Program of Hunan Province, Nos. 2021RC4016 (to JL) and 2021SK53503 (to ML); the Scientific Research Program of the Hunan Commission of Health, No. 202209044797 (to JL); the Central South University Research Program of Advanced Interdisciplinary Studies, No. 2023QYJC020 (to XK); and the Natural Science Foundation of Hunan Province, No. 2022JJ30814 (to ML).
Abstract: Patients with mild traumatic brain injury have a diverse clinical presentation, and the underlying pathophysiology remains poorly understood. Magnetic resonance imaging is a non-invasive technique that has been widely utilized to investigate neurobiological markers after mild traumatic brain injury and has emerged as a promising tool for investigating its pathogenesis. Graph theory is a quantitative method of analyzing complex networks that has been widely used to study changes in brain structure and function. However, most previous mild traumatic brain injury studies using graph theory have focused on specific populations, with limited exploration of simultaneous abnormalities in structural and functional connectivity. Given that mild traumatic brain injury is the most common type of traumatic brain injury encountered in clinical practice, further investigation of patient characteristics and the evolution of structural and functional connectivity is critical. In the present study, we explored whether abnormal structural and functional connectivity in the acute phase could serve as indicators of longitudinal changes in imaging data and cognitive function in patients with mild traumatic brain injury. In this longitudinal study, we enrolled 46 patients with mild traumatic brain injury who were assessed within 2 weeks of injury, as well as 36 healthy controls. Resting-state functional magnetic resonance imaging and diffusion-weighted imaging data were acquired for graph theoretical network analysis. In the acute phase, patients with mild traumatic brain injury demonstrated reduced structural connectivity in the dorsal attention network. More than 3 months of follow-up data revealed signs of recovery in structural and functional connectivity, as well as cognitive function, in 22 of the 46 patients. Furthermore, better cognitive function was associated with more efficient networks. Finally, our data indicated that small-worldness in the acute stage could serve as a predictor of longitudinal changes in connectivity in patients with mild traumatic brain injury. These findings highlight the importance of integrating structural and functional connectivity in understanding the occurrence and evolution of mild traumatic brain injury. Additionally, exploratory analysis based on subnetworks could serve a predictive function in the prognosis of patients with mild traumatic brain injury.
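The small-worldness measure used as a predictor here is conventionally computed as sigma = (C / C_rand) / (L / L_rand), where C is the average clustering coefficient, L the characteristic path length, and the "rand" values come from degree-matched random graphs. The two ingredients can be sketched on a toy adjacency structure (the four-node graph below is invented; real analyses run on parcellated brain networks).

```python
from collections import deque

def clustering_coefficient(adj):
    """Average local clustering coefficient of an undirected graph
    given as a dict of adjacency sets."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue                      # degree-0/1 nodes contribute 0
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
        total += 2 * links / (k * (k - 1))
    return total / len(adj)

def char_path_length(adj):
    """Mean shortest-path length over all node pairs (connected graph),
    using breadth-first search from every node."""
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += sum(d for n, d in dist.items() if n != s)
        pairs += len(dist) - 1
    return total / pairs

# Toy undirected graph: triangle a-b-c with a pendant node d on c.
adj = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
C, L = clustering_coefficient(adj), char_path_length(adj)
# sigma = (C / C_rand) / (L / L_rand) would then be formed against
# degree-matched random graphs (not generated in this sketch).
```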
Funding: National Natural Science Foundation of China (Grant No. 62073227) and the Liaoning Provincial Science and Technology Department Foundation (Grant No. 2023JH2/101300212).
Abstract: Online Signature Verification (OSV), as a personal identification technology, is widely used in various industries. However, it faces challenges such as incomplete feature extraction, low accuracy, and computational heaviness. To address these issues, we propose a novel approach for online signature verification using a one-dimensional Ghost-ACmix Residual Network (1D-ACGRNet), which combines convolution with a self-attention mechanism and is improved with the Ghost method. The Ghost-ACmix residual structure is introduced to leverage both the self-attention and convolution mechanisms for capturing global feature information and extracting local information, effectively complementing whole and local signature features and mitigating the problem of insufficient feature extraction. Then, the Ghost-based Convolution and Self-Attention (ACG) block is proposed to simplify the common parts between convolution and self-attention using the Ghost module and to employ feature transformation to obtain intermediate features, thus reducing computational costs. Additionally, feature selection is performed using the random forest method, and the data is dimensionally reduced using Principal Component Analysis (PCA). Finally, tests are implemented on the MCYT-100 and SVC-2004 Task2 datasets; the equal error rates (EERs) for small-sample training using five genuine and forged signatures are 3.07% and 4.17%, respectively, and the EERs for training with ten genuine and forged signatures are 0.91% and 2.12% on the respective datasets. The experimental results illustrate that the proposed approach effectively enhances the accuracy of online signature verification.
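The EER figures this abstract reports can be computed from matcher scores by a threshold sweep. One common variant is sketched below, reporting the midpoint of FRR and FAR at the threshold where the two rates are closest; the score lists are invented for illustration.

```python
def equal_error_rate(genuine, forged):
    """EER via threshold sweep over similarity-style scores (higher =
    more likely genuine): return (FRR + FAR) / 2 at the threshold where
    the false-rejection and false-acceptance rates are closest."""
    best_gap, eer = float("inf"), 1.0
    for thr in sorted(set(genuine + forged)):
        frr = sum(s < thr for s in genuine) / len(genuine)   # false rejections
        far = sum(s >= thr for s in forged) / len(forged)    # false acceptances
        if abs(frr - far) < best_gap:
            best_gap, eer = abs(frr - far), (frr + far) / 2
    return eer

genuine = [0.9, 0.8, 0.55, 0.6]   # invented verifier scores
forged = [0.4, 0.7, 0.2, 0.3]
print(equal_error_rate(genuine, forged))  # → 0.25
```

Because FRR rises and FAR falls as the threshold increases, the crossing point summarizes verifier quality in a single number, which is why OSV papers report EER rather than a fixed-threshold accuracy.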
Abstract: Because a memristor with memory properties is an ideal electronic component for implementing the artificial neural synaptic function, a brand-new tristable locally active memristor model is first proposed in this paper. A novel four-dimensional fractional-order memristive cellular neural network (FO-MCNN) model with hidden attractors is then constructed to enhance the engineering feasibility of the original CNN model and its performance. Its hardware circuit implementation and complicated dynamic properties are investigated on multi-simulation platforms, and it is subsequently applied to secure communication scenarios. Using it as a pseudo-random number generator (PRNG), a new privacy image security scheme is designed based on the adaptive sampling rate compressive sensing (ASR-CS) model. Eventually, the simulation analysis and comparative experiments show that the proposed data encryption scheme possesses strong immunity against various security attack models and satisfactory compression performance.
Abstract: The mining sector has historically driven the global economy, but at the expense of severe environmental and health repercussions, posing sustainability challenges [1]-[3]. Recent advancements in artificial intelligence (AI) are revolutionizing mining through robotic and data-driven innovations [4]-[7]. While AI offers the mining industry advantages, it is crucial to acknowledge the potential risks associated with its widespread use. Over-reliance on AI may lead to a loss of human control over mining operations in the future, resulting in unpredictable consequences.
Funding: supported by Project SP2023/074, Application of Machine and Process Control Advanced Methods, funded by the Ministry of Education, Youth and Sports, Czech Republic.
Abstract: Computer vision (CV) was developed so that computers and other systems can act or make recommendations based on visual inputs, such as digital photos, movies, and other media. Deep learning (DL) methods are more successful than other traditional machine learning (ML) methods in CV. DL techniques can produce state-of-the-art results for difficult CV problems such as image classification, object detection, and face recognition. In this review, a structured discussion of the history, methods, and applications of DL methods for CV problems is presented. The sector-wise presentation of applications in this paper may be particularly useful for researchers in niche fields who have limited or introductory knowledge of DL methods and CV. This review will provide readers with context and examples of how these techniques can be applied to specific areas. A curated list of popular datasets, with a brief description of each, is also included for the benefit of readers.
Funding: supported by the Key Research and Development Program of Guangxi (2021AB13014); the Major Project of Guangxi Innovation Driven (AA18118016); the National Key Research and Development Program of China (2017YFC0908000); the National Key Research and Development Project (2020YFA0113200); the National Natural Science Foundation of China (81770759, 82060145, 31970814); the Natural Science Foundation of Guangxi Zhuang Autonomous Region (2021JJA140912); the Advanced Innovation Teams and Xinghu Scholars Program of Guangxi Medical University; the Guangxi Key Laboratory for Genomic and Personalized Medicine (19-050-22, 19-185-33, 20-065-33, 22-35-17); the Major Project of the Scientific Research and Technology Development Plan of Nanning (20221023); the Guangxi Natural Science Foundation (2022GXNSFAA035641); and a Self-funded Project of the Health Commission of Guangxi Zhuang Autonomous Region (Z-A20230620).
Abstract: The Chinese tree shrew (Tupaia belangeri chinensis) has emerged as a promising model for investigating adrenal steroid synthesis, but it is unclear whether the same cells produce steroid hormones and whether their production is regulated in the same way as in humans. Here, we comprehensively mapped the cell types and pathways of steroid metabolism in the adrenal gland of Chinese tree shrews using single-cell RNA sequencing, spatial transcriptome analysis, mass spectrometry, and immunohistochemistry. We compared the transcriptomes of various adrenal cell types across tree shrews, humans, macaques, and mice. Results showed that tree shrew adrenal glands expressed many of the same key enzymes for steroid synthesis as humans, including CYP11B2, CYP11B1, CYB5A, and CHGA. Biochemical analysis confirmed the production of aldosterone, cortisol, and dehydroepiandrosterone, but not dehydroepiandrosterone sulfate, in the tree shrew adrenal glands. Furthermore, genes in adrenal cell types in tree shrews were correlated with genetic risk factors for polycystic ovary syndrome, primary aldosteronism, hypertension, and related disorders in humans based on genome-wide association studies. Overall, this study suggests that the adrenal glands of Chinese tree shrews may consist of closely related cell populations with functional similarity to those of the human adrenal gland. Our comprehensive results (publicly available at http://gxmujyzmolab.cn:16245/scAGMap/) should facilitate the advancement of this animal model for the investigation of adrenal gland disorders.