One of the major causes of road accidents is sleepy drivers. Such accidents typically result in fatalities and financial losses and disadvantage other road users. Numerous studies have been conducted to identify the driver's sleepiness and integrate it into a warning system. Most studies have examined how the mouth and eyelids move. However, this limits the system's ability to identify drowsiness traits. Therefore, this study designed an Accident Detection Framework (RPK) that could be used to reduce road accidents due to sleepiness and detect the location of accidents. The drowsiness detection model used three facial parameters: yawning, closed eyes (blinking), and an upright head position. This model used a Convolutional Neural Network (CNN) consisting of two phases. The initial phase involves video processing and facial landmark coordinate detection. The second phase involves extracting frame-based features using normalization methods. All these phases used OpenCV and TensorFlow. The dataset contained 5017 images: 874 open-eye images, 850 closed-eye images, 723 open-mouth images, 725 closed-mouth images, 761 sleepy-head images, and 1084 non-sleepy-head images. The dataset was divided into a training set of 4505 images and a testing set of 512 images, a ratio of 90:10. The results showed that the RPK design could detect sleepiness using deep learning techniques with high accuracy on all three parameters, namely 98% for eye blinking, 96% for mouth yawning, and 97% for head movement. Overall, the test results have provided an overview of how the developed RPK prototype can accurately identify drowsy drivers. These findings will have a significant impact on the improvement of road users' safety and mobility. Funding: The Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, provided funding for this research through the Research Grant "An Intelligent 4IR Mobile Technology for Express Bus Safety System Scheme DCP-2017-020/2".
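As a rough illustration of the frame-classification stage, the sketch below trains a small binary open/closed-eye CNN in TensorFlow/Keras with a 90:10 train/validation split. The input size and layer configuration are illustrative assumptions, not the published RPK architecture.

```python
# Minimal sketch of an open/closed-eye classifier in TensorFlow/Keras.
# Crop size and layer widths are assumptions; only the 90:10 split mirrors the paper.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(24, 24, 1)),          # grayscale eye crop (assumed size)
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),    # open (1) vs. closed (0)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# x, y: preprocessed crops and labels; validation_split=0.1 gives the 90:10 ratio.
# model.fit(x, y, epochs=10, validation_split=0.1)
```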
The software engineering field has long focused on creating high-quality software despite limited resources. Detecting defects before the testing stage of software development can enable quality assurance engineers to concentrate on problematic modules rather than all the modules. This approach can enhance the quality of the final product while lowering development costs. Identifying defective modules early on can allow for early corrections and ensure the timely delivery of a high-quality product that satisfies customers and instills greater confidence in the development team. This process is known as software defect prediction, and it can improve end-product quality while reducing the cost of testing and maintenance. This study proposes a software defect prediction system that utilizes data fusion, feature selection, and ensemble machine learning fusion techniques. A novel filter-based metric selection technique is proposed in the framework to select the optimum features. A three-step nested approach is presented for predicting defective modules to achieve high accuracy. In the first step, three supervised machine learning techniques, including Decision Tree, Support Vector Machines, and Naïve Bayes, are used to detect faulty modules. The second step involves integrating the predictive accuracy of these classification techniques through three ensemble machine-learning methods: Bagging, Voting, and Stacking. Finally, in the third step, a fuzzy logic technique is employed to integrate the predictive accuracy of the ensemble machine learning techniques. The experiments are performed on a fused software defect dataset to ensure that the developed fused ensemble model can perform effectively on diverse datasets. Five NASA datasets are integrated to create the fused dataset: MW1, PC1, PC3, PC4, and CM1. According to the results, the proposed system exhibited superior performance to other advanced techniques for predicting software defects, achieving a remarkable accuracy rate of 92.08%. Funding: This work was supported by the Center for Cyber-Physical Systems, Khalifa University, under Grant 8474000137-RC1-C2PS-T5.
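The first two fusion steps map naturally onto scikit-learn; the sketch below wires the three base learners into Voting, Stacking, and Bagging combiners. The third, fuzzy-logic fusion stage is specific to the paper and not reproduced here.

```python
# Hedged sketch of the base learners and ensemble combiners with scikit-learn.
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import VotingClassifier, StackingClassifier, BaggingClassifier

base = [("dt", DecisionTreeClassifier()),
        ("svm", SVC(probability=True)),   # probabilities enable soft voting
        ("nb", GaussianNB())]

voting = VotingClassifier(estimators=base, voting="soft")
stacking = StackingClassifier(estimators=base, final_estimator=LogisticRegression())
bagging = BaggingClassifier(estimator=DecisionTreeClassifier(),  # scikit-learn >= 1.2
                            n_estimators=50)
# Each combiner is fitted on the fused dataset, combiner.fit(X_train, y_train),
# then scored on held-out modules with combiner.predict(X_test).
```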
Escalating cyber security threats and the increased use of Internet of Things (IoT) devices require utilisation of the latest technologies available to supply adequate protection. The aim of Intrusion Detection Systems (IDS) is to prevent malicious attacks that corrupt operations and interrupt data flow, which might have a significant impact on critical industries and infrastructure. This research examines existing IDS methods and techniques based on Artificial Intelligence (AI) for IoT devices. The contribution of this study is the identification of the most effective IDS in terms of accuracy, precision, recall, and F1-score; this research also considers training time. Results demonstrate that Graph Neural Networks (GNN) have several benefits over other traditional AI frameworks through their ability to achieve in excess of 99% accuracy in a relatively short training time, while also being capable of learning the inherent characteristics of different cyber-attacks from network traffic. These findings identify the GNN (a Deep Learning AI method) as the most efficient IDS. The novelty of this research also lies in the link between high-yielding AI-based IDS algorithms and the AI-based learning approach for data privacy protection. This research recommends Federated Learning (FL) as the AI training model, which increases data privacy protection and reduces network data flow, resulting in a more secure and efficient IDS solution.
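The core of the recommended FL training model is size-weighted parameter averaging across clients (FedAvg). A minimal NumPy sketch of that aggregation step, assuming each client reports its model weights and local sample count, is shown below; real deployments would use a framework such as Flower or TensorFlow Federated.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Size-weighted average of per-client parameter lists (the FedAvg step)."""
    total = sum(client_sizes)
    return [sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
            for i in range(len(client_weights[0]))]

# Two hypothetical clients, each holding one weight matrix and one bias vector.
client_a = [np.ones((2, 2)), np.zeros(2)]
client_b = [np.zeros((2, 2)), np.ones(2)]
global_model = fedavg([client_a, client_b], client_sizes=[100, 300])
print(global_model)   # averaged parameters, weighted toward the larger client
```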
Random pixel selection is one of the image steganography methods that has achieved significant success in enhancing the robustness of hidden data. This property makes it difficult for steganalysts' powerful data extraction tools to detect the hidden data and ensures high-quality stego image generation. However, using a seed key to generate non-repeated sequential numbers takes a long time because it requires specific mathematical equations. In addition, these numbers may cluster in certain ranges. The hidden data in these clustered pixels will reduce the image quality, which steganalysis tools can detect. Therefore, this paper proposes a data structure that safeguards the steganographic model data and maintains the quality of the stego image. This paper employs the Adelson-Velsky and Landis (AVL) tree data structure algorithm to implement the randomized pixel selection technique for data concealment. The AVL tree algorithm provides several advantages for image steganography. Firstly, it ensures balanced tree structures, which leads to efficient data retrieval and insertion operations. Secondly, the self-balancing nature of AVL trees minimizes clustering by maintaining an even distribution of pixels, thereby preserving the stego image quality. The data structure employs the pixel indicator technique for Red, Green, and Blue (RGB) channel extraction. The green channel serves as the foundation for building a balanced binary tree. First, the sender identifies the colored cover image and secret data. The sender uses the two least significant bits (2-LSB) of the RGB channels to conceal the data's size and associated information. The next step is to create a balanced binary tree based on the green channel. Utilizing the channel pixel indicator on the LSB of the green channel, bits can be concealed in the 2-LSB of the red or blue channel. The first four levels of the data structure tree mask the data size, while subsequent levels conceal the remaining digits of secret data. After embedding the bits in the binary tree level by level, the model restores the AVL tree to create the stego image. Ultimately, the receiver receives this stego image through the public channel, enabling secret data recovery without stego or crypto keys. This method ensures that the stego image appears unsuspicious to potential attackers. Without the extraction algorithm, a third party cannot extract the original secret information from an intercepted stego image. Experimental results showed high levels of imperceptibility and security.
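A minimal NumPy sketch of the 2-LSB embedding and extraction step follows. The AVL-tree traversal that determines the pixel order, and the green-channel indicator logic, are elided; `order` is a placeholder sequence standing in for the tree-derived ordering.

```python
# Minimal 2-LSB embed/extract sketch. `order` stands in for the AVL traversal.
import numpy as np

def embed_2lsb(channel, bits, order):
    """Write `bits` (0/1 ints), two per pixel, into the 2 LSBs of `channel`."""
    flat = channel.flatten()
    for k in range(0, len(bits), 2):
        idx = order[k // 2]
        pair = (bits[k] << 1) | bits[k + 1]
        flat[idx] = (flat[idx] & 0b11111100) | pair
    return flat.reshape(channel.shape)

def extract_2lsb(channel, n_bits, order):
    flat = channel.flatten()
    out = []
    for k in range(n_bits // 2):
        v = flat[order[k]] & 0b11
        out += [v >> 1, v & 1]
    return out

red = np.random.randint(0, 256, (4, 4), dtype=np.uint8)   # toy cover channel
bits = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_2lsb(red, bits, order=range(len(bits) // 2))
assert extract_2lsb(stego, len(bits), order=range(len(bits) // 2)) == bits
```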
Image steganography is one of the prominent technologies in data hiding standards. Steganographic system performance mostly depends on the embedding strategy, whose goal is to embed strictly confidential information into images without causing perceptible changes in the original image. The randomization strategies in data embedding techniques may utilize random domains, pixels, or regions of interest for concealing secrets in a cover image, preventing the information from being discovered by an attacker. Implementing an appropriate embedding technique can achieve a fair balance between embedding capability and stego image imperceptibility, but doing so is challenging. A systematic approach with a standard methodology is used to carry out this study. This review concentrates on the critical examination of several embedding strategies, incorporating experimental results with state-of-the-art methods and emphasizing the robustness, security, payload capacity, and visual quality metrics of the stego images. The fundamental ideas of steganography are presented in this work, along with a unique viewpoint that sets it apart from previous works by highlighting research gaps, important problems, and difficulties. Additionally, it offers a discussion of suggested directions for future study to advance and investigate uncharted territory in image steganography. Funding: This research was funded by the Ministry of Higher Education (MOHE) through the Fundamental Research Grant Scheme (FRGS) under Grant Number FRGS/1/2020/ICT01/UKM/02/4, and by Universiti Kebangsaan Malaysia for open access publication.
A nursing care planning system that automatically generates nursing summaries from information entered into the Psychiatric Outcome Management System (PSYCHOMS®, Tanioka et al.) was developed to enrich the content of nursing summaries at psychiatric hospitals, thereby reducing the workload of nurses. Preparing nursing summaries entails finding the required information in nursing records that span a long period of time and then concisely summarizing this information. This time-consuming process depends on the clinical experience and writing ability of the nurse. The system described here automatically generates the text data needed for nursing summaries using an algorithm that synthesizes patient information recorded in electronic charts, the Nursing Care Plan information, or the data entered for the North American Nursing Diagnosis Association (NANDA) 13 domains with predetermined fixed phrases. Advantages of this system are that it enables nursing summaries to be generated automatically in real time, simplifies the process, and permits the standardization of useful nursing summaries that reflect the course of the nursing care provided and its evaluation. Use of this system to automatically generate nursing summaries will allow more nursing time to be devoted to patient care. The system is also useful because it enables nursing summaries that contain the required information to be generated regardless of who prepares them.
A novel fiber optic sensor based on a hydrogel-immobilized enzyme complex was developed for the simultaneous measurement of two parameters, achieving the leap from a fiber optic sensor that detects a single parameter to one that can continuously detect two kinds of parameters. By controlling the temperature from high to low, the sensor can function as a fiber sulfide sensor and as a fiber DCP sensor, thereby realizing the continuous detection of both parameters. The different variables affecting the sensor performance were evaluated and optimized. Under the optimal conditions, the response curves, linear detection ranges, detection limits, and response times of the dual-parameter sensor for testing sulfide and DCP were obtained, respectively. The sensor displays high selectivity and good repeatability and stability, and it has good potential for analyzing the sulfide and DCP concentrations of practical water samples. Funding: Funded by the Natural Science Foundation of Hubei Province (No. 2022CFB861) and the Wenhua College Research and Innovation Team (No. 2022T01).
This research recognizes the limitations and challenges of adapting and applying Process Mining as a powerful tool and technique in a hypothetical Software Architecture (SA) Evaluation Framework with the features and factors of lightweightness. Process mining deals with the large-scale complexity of security and performance analysis, which are the goals of SA evaluation frameworks. Following these conjectures, all process mining research in the realm of SA is thoroughly reviewed, and nine challenges for process mining adaption are recognized. Process mining is embedded in the framework, and to boost the quality of the SA model for further analysis, the framework nominates the architectural discovery algorithms Flower, Alpha, Integer Linear Programming (ILP), Heuristic, and Inductive and compares them against twelve quality criteria. Finally, testing the framework on three case studies confirms the feasibility of applying process mining to architectural evaluation. The extraction of the SA model is also done by the best model discovery algorithm, selected by intensive benchmarking in this research. This research presents case studies of SA in service-oriented, Pipe and Filter, and component-based styles, modeled and simulated by Hierarchical Colored Petri Net techniques based on the cases' documentation. Process mining within this framework deals with the system's log files obtained from SA simulation. Applying process mining is challenging, especially for an SA evaluation framework, as it has not been done before. The research recognizes the problems of adapting process mining to a hypothetical lightweight SA evaluation framework and addresses these problems during the solution development. Funding: This paper is supported by Research Grant Number PP-FTSM-2022.
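Assuming the simulation logs are exported in XES format, the discovery step could look like the hedged pm4py sketch below; the file name is hypothetical, and the inductive miner can be swapped for the framework's other candidate algorithms.

```python
import pm4py

# Hypothetical XES export of the Petri-net simulation logs.
log = pm4py.read_xes("sa_simulation_log.xes")
# Inductive miner; pm4py also ships alpha and heuristics discovery variants.
net, initial_marking, final_marking = pm4py.discover_petri_net_inductive(log)
# Token-based replay fitness as one of the benchmarking criteria.
fitness = pm4py.fitness_token_based_replay(log, net, initial_marking, final_marking)
print(fitness)
```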
In wireless communications, the Ambient Backscatter Communication (AmBC) technique is a promising approach, detecting user presence accurately at low power levels. At low power or a low Signal-to-Noise Ratio (SNR), there is no dedicated power for the users. Instead, they can transmit information by reflecting the ambient Radio Frequency (RF) signals in the spectrum. Therefore, it is essential to detect user presence in the spectrum so that data can be transmitted at a given time without loss or collision. In this paper, the authors propose a novel Spectrum Sensing (SS) detection technique in the Cognitive Radio (CR) spectrum by developing the AmBC. Novel Matched Filter Detection with Inverse covariance (MFDI), Cyclostationary Feature Detection with Inverse covariance (CFDI), and Hybrid Filter Detection with Inverse covariance (HFDI) approaches are used with AmBC to detect the presence of users at low power levels. The performance of the three detection techniques is measured using the parameters of Probability of Detection (PD), Probability of False Alarm (Pfa), Probability of Missed Detection (Pmd), sensing time, and throughput at low power or low SNR. The results show that the HFDI technique yields a significant improvement on all the parameters. Funding: The authors thank the Ministry of Higher Education Malaysia for funding this research project through the Fundamental Research Grant Scheme (FRGS) with Project Code FRGS/1/2022/TK02/UCSI/02/1, and also UCSI University.
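A Monte-Carlo sketch of the matched-filter baseline at low SNR is given below in NumPy; the threshold and SNR values are illustrative, and the paper's inverse-covariance variants (MFDI/CFDI/HFDI) extend this basic statistic.

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials, snr_db = 256, 5000, -10        # samples per sensing window, low SNR
pilot = rng.choice([-1.0, 1.0], size=N)   # known reference waveform
amp = np.sqrt(10 ** (snr_db / 10))        # signal amplitude for unit-variance noise
thresh = 3.0 * np.sqrt(N)                 # H0 statistic has std sqrt(N); ~0.1% Pfa

def detect(signal_present):
    rx = rng.normal(size=N) + (amp * pilot if signal_present else 0.0)
    return pilot @ rx > thresh            # matched-filter test statistic

pd = np.mean([detect(True) for _ in range(trials)])
pfa = np.mean([detect(False) for _ in range(trials)])
print(f"PD={pd:.3f}  Pfa={pfa:.4f}  Pmd={1 - pd:.3f}")
```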
The Internet of Medical Things (IoMT) enables digital devices to gather, infer, and broadcast health data via the cloud platform. The phenomenal growth of the IoMT is fueled by many factors, including the widespread and growing availability of wearables and the ever-decreasing cost of sensor-based technology. There is a growing interest in providing solutions for elderly-living assistance in a world where the population is rising rapidly. The IoMT is a novel reality transforming our daily lives. It can renovate modern healthcare by delivering a more personalized, protective, and collaborative approach to care. However, the current healthcare system for outdoor senior citizens faces new challenges. Traditional healthcare systems are inefficient and lack user-friendly technologies and interfaces appropriate for elderly people in an outdoor environment. Hence, in this research work, an IoMT-based Smart Healthcare system for Elderly people using a Deep Extreme Learning Machine (SH-EDELM) is proposed to monitor senior citizens' healthcare. The proposed SH-EDELM technique achieves an accuracy of 0.9301 and a miss rate of 0.0699.
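The classical Extreme Learning Machine underlying this approach trains only the output layer, in closed form; a bare-bones NumPy sketch follows, with toy data standing in for IoMT sensor features. The paper's deep variant and its sensor pipeline are not reproduced.

```python
import numpy as np

class ELM:
    """Single-hidden-layer extreme learning machine: random features + pseudoinverse."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(n_in, n_hidden))   # fixed random input weights
        self.b = rng.normal(size=n_hidden)           # fixed random biases

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        self.beta = np.linalg.pinv(self._hidden(X)) @ y  # closed-form output weights
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

rng = np.random.default_rng(1)
X = rng.random((200, 8))                  # toy stand-in for IoMT vital-sign features
y = (X.sum(axis=1) > 4).astype(float)     # toy health/alert label
elm = ELM(8, 64).fit(X[:150], y[:150])
acc = np.mean((elm.predict(X[150:]) > 0.5) == y[150:])
print(f"toy accuracy: {acc:.2f}")
```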
Signature verification is regarded as the most beneficial behavioral characteristic-based biometric feature in security and fraud protection. It is also a popular biometric authentication technology in forensic and commercial transactions due to its various advantages, including noninvasiveness, user-friendliness, and social and legal acceptability. According to the literature, extensive research has been conducted on signature verification systems in a variety of languages, including English, Hindi, Bangla, and Chinese. However, the Arabic Offline Signature Verification (OSV) system remains a challenging issue that has not been investigated as much by researchers, as Arabic script is distinguished by changing letter shapes, diacritics, ligatures, and overlapping, making verification more difficult. Recently, signature verification systems have shown promising results for recognizing genuine and forged signatures; however, performance on skilled forgery detection is still unsatisfactory. Most existing methods require many learning samples to improve verification accuracy, which is a major drawback because the number of available signature samples is often limited in the practical application of signature verification systems. This study addresses these issues by presenting an OSV system based on multifeature fusion and discriminant feature selection using a genetic algorithm (GA). In contrast to existing methods, which use multiclass learning approaches, this study uses a one-class learning strategy to address imbalanced signature data in the practical application of a signature verification system. The proposed approach is tested on three signature databases (SID): Arabic handwritten signatures, CEDAR (Center of Excellence for Document Analysis and Recognition), and UTSig (University of Tehran Persian Signature). Experimental results show that the proposed system outperforms existing systems in reducing the False Acceptance Rate (FAR), False Rejection Rate (FRR), and Equal Error Rate (EER), achieving a 5% improvement.
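The one-class strategy can be illustrated with scikit-learn's OneClassSVM: train only on a writer's genuine samples, then flag outliers as forgeries. The features below are synthetic stand-ins; the paper's multifeature fusion and GA-based selection are elided.

```python
# One-class learning sketch on synthetic stand-in signature features.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
genuine = rng.normal(0.0, 1.0, size=(20, 16))      # few samples, as in practice
forged = rng.normal(2.5, 1.0, size=(10, 16))       # synthetic skilled-forgery stand-in

clf = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(genuine)
frr = np.mean(clf.predict(genuine) == -1)          # genuine samples rejected
far = np.mean(clf.predict(forged) == 1)            # forgeries accepted
print(f"FRR={frr:.2f}, FAR={far:.2f}")
```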
Knowing each other is obligatory in a multi-agent collaborative environment. Collaborators may develop the desired know-how of each other in various aspects such as habits, job roles, status, and behaviors. Among the distinguishing characteristics of a person, personality traits are an effective predictive tool for an individual's behavioral pattern. It has been observed that when people are asked to share their details through questionnaires, they intentionally or unintentionally become biased. In open writing about themselves, however, they knowingly or unknowingly provide enough information in a much less biased manner. Such writings can effectively assess an individual's personality traits, which may yield enormous possibilities for applications such as forensic departments, job interviews, mental health diagnoses, etc. Stream of consciousness, collected by James Pennebaker and Laura King, is one such way of writing, referring to a narrative technique in which the emotions and thoughts of the writer are presented in a way that carries the reader fluidly through the mental states of the narrator. Moreover, various computational attempts have been made to assess an individual's personality traits through deep learning algorithms; however, the effectiveness and reliability of the results vary with the word embedding technique used. This article proposes an empirical approach to assessing personality by applying convolutional networks to text documents. The Bidirectional Encoder Representations from Transformers (BERT) word embedding technique is used for word vector generation to enhance the contextual meanings.
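A hedged sketch of this pipeline, BERT token embeddings feeding a 1-D convolutional layer, using Hugging Face Transformers and PyTorch; the head sizes are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
conv = torch.nn.Conv1d(768, 64, kernel_size=3)   # convolution over token embeddings
head = torch.nn.Linear(64, 5)                    # e.g., Big Five trait scores

texts = ["I keep my thoughts to myself and plan everything ahead."]
batch = tok(texts, return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
    emb = bert(**batch).last_hidden_state        # (batch, seq_len, 768)
    feat = conv(emb.transpose(1, 2)).amax(dim=2) # max-pool over the sequence
    print(head(feat))                            # untrained, illustrative output
```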
In psychiatric hospitals, the ratios of patients to physicians and patients to nurses are low compared with those in general hospitals. Furthermore, usage of electronic medical records is also low, so nurse administrators are limited in their ability to compile, analyze, and generate patient care staffing information for their administrative use. Psychiatric nurse administrators anticipate the development of a nursing administration analysis system that could perform personnel data simulation, manage information on nursing staff, and manage ward/practice operations. Responding to this situation, the authors developed a nursing administration analysis system utilizing formulae from the Psychiatric Outcome Management System, PSYCHOMS®, to aid nurse administrators. Such formulae are awaiting patent approval. The purpose of this study was to examine the validity of the formulae and the Structured Query Language (SQL) statements, and their practical effectiveness in analyzing data. The study findings showed that two kinds of computational expressions, classification and extraction, were able to display the information desired by nurse administrators. Moreover, information critical to assigning staff was validated to ensure a high quality of nursing care according to the function and characteristics of the hospital ward. Funding: This work was supported by a grant from the Strategic Information and Communication R&D Promotion Program (SCOPE) in Japan (No. 122309008).
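Because the study's formulae and SQL are patent-pending and unpublished, the sketch below only conveys the flavor of a classification-style staffing query, using sqlite3 and a hypothetical shifts table.

```python
# Illustrative only: a hypothetical ward-staffing table and aggregation query,
# not the study's patent-pending SQL.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE shifts (ward TEXT, nurse TEXT, grade TEXT, hours REAL)")
con.executemany("INSERT INTO shifts VALUES (?,?,?,?)",
                [("A", "Sato", "RN", 8), ("A", "Kimura", "AN", 8),
                 ("B", "Mori", "RN", 12)])
# Classification-style query: staffing totals per ward and nurse grade.
for row in con.execute("""SELECT ward, grade, COUNT(*) AS nurses, SUM(hours)
                          FROM shifts GROUP BY ward, grade"""):
    print(row)
```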
Autism spectrum disorder (ASD) is a challenging and complex neurodevelopmental syndrome that affects the child's language, speech, social skills, communication skills, and logical thinking ability. The early detection of ASD is essential for delivering effective, timely interventions. Various facial features, such as a lack of eye contact, uncommon hand or body movements, babbling or talking in an unusual tone, and not using common gestures, could be used to detect and classify ASD at an early stage. Our study aimed to develop a deep transfer learning model to facilitate the early detection of ASD based on facial features. A dataset of facial images of autistic and non-autistic children was collected from the Kaggle data repository and was used to develop the transfer learning AlexNet (ASDDTLA) model. Our model achieved a detection accuracy of 87.7% and performed better than other established ASD detection models. Therefore, this model could facilitate the early detection of ASD in clinical practice.
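A standard transfer-learning setup matching the abstract's description can be sketched with torchvision: load a pretrained AlexNet, freeze the backbone, and swap the final layer for a two-class head. The training loop and the Kaggle data pipeline are omitted.

```python
# Transfer-learning sketch; the frozen backbone and head size are standard
# choices, not necessarily the exact ASDDTLA configuration.
import torch
from torchvision import models

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False                      # freeze convolutional backbone
model.classifier[6] = torch.nn.Linear(4096, 2)   # autistic vs. non-autistic head
x = torch.randn(1, 3, 224, 224)                  # dummy face image
print(model(x).shape)                            # torch.Size([1, 2])
```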
Physical sensors, intelligent sensors, and output recommendations are all examples of smart health technology that can be used to monitor patients' health and change their behavior. Smart health is an Internet-of-Things (IoT)-aware network and sensing infrastructure that provides real-time, intelligent, and ubiquitous healthcare services. Because of the rapid development of cloud computing, as well as related technologies such as fog computing, smart health research is progressively moving in the right direction. Cloud and fog computing, IoT sensors, blockchain, privacy and security, and other related technologies have been the focus of smart health research in recent years. At the moment, the focus in cloud and smart health research is on how to use the cloud to solve the problem of enormous health data and enhance service performance, including cloud storage, retrieval, and computation of health big data. This article reviews state-of-the-art edge computing methods, where the focus has shifted to the collection, transmission, and computation of health data; this includes the various sensors and wearable devices used to collect health data, various wireless sensor technologies, and how to process health data and improve edge performance, among other things. Finally, typical smart health application cases, blockchain's application in smart health, and related privacy and security issues are reviewed, as well as future difficulties and potential for smart health services. The comparative analysis provides a reference for mobile edge computing in healthcare systems. Funding: This work was supported by the Ministry of Education, Malaysia (Grant Code: FRGS/1/2018/ICT02/UKM/02/6).
Deep learning has been a catalyst for a transformative revolution in machine learning and computer vision in the past decade. Within these research domains, methods grounded in deep learning have exhibited exceptional performance across a spectrum of tasks. The success of deep learning methods can be attributed to their capability to derive potent representations from data, integral for a myriad of downstream applications. These representations encapsulate the intrinsic structure, features, or latent variables characterising the underlying statistics of visual data. Despite these achievements, the challenge persists in effectively conducting representation learning of visual data with deep models, particularly when confronted with vast and noisy datasets. This special issue is a dedicated platform for researchers worldwide to disseminate their latest, high-quality articles, aiming to enhance readers' comprehension of the principles, limitations, and diverse applications of representation learning in computer vision.
Energy management is an inspiring domain in the development of renewable energy sources. However, the growth of decentralized energy production reveals increased complexity for power grid managers, who require more quality and reliability to regulate electricity flows and less imbalance between electricity production and demand. The major objective of an energy management system is to achieve optimum energy procurement and utilization throughout the organization, minimize energy costs without affecting production, and minimize environmental effects. Modern energy management is an essential and complex subject because of the excessive consumption in residential buildings, which necessitates energy optimization and increased user comfort. To address the issue of energy management, many researchers have developed various frameworks; while the objective of each framework was to sustain a balance between user comfort and energy consumption, the problem remains unsolved because of its difficulty. An inclusive and Intelligent Energy Management System (IEMS) aims to provide overall energy efficiency in terms of increased power generation, greater flexibility, more renewable generation, improved energy consumption, reduced carbon dioxide emissions, improved stability, and reduced energy costs. Machine Learning (ML) is an emerging approach that may be beneficial for predicting energy efficiency more accurately with the assistance of the Internet of Energy (IoE) network. The IoE network plays a vital role in the energy sector by collecting effective data and usage, resulting in smart resource management. In this research work, an IEMS is proposed for Smart Cities (SC) using an ML technique to better resolve the energy management problem. The proposed system minimized energy consumption through its intelligent nature and provided better outcomes than previous approaches, with 92.11% accuracy and a 7.89% miss rate.
The behavioral responses of a tilapia (Oreochromis niloticus) school to low (0.13 mg/L), moderate (0.79 mg/L), and high (2.65 mg/L) levels of unionized ammonia (UIA) concentration were monitored using a computer vision system. The swimming activity and geometrical parameters such as the location of the gravity center and the distribution of the fish school were calculated continuously. These behavioral parameters of the tilapia school responded sensitively to moderate and high UIA concentrations. Under high UIA concentration, the fish activity showed a significant increase (P<0.05), exhibiting an avoidance reaction to the high-ammonia condition, and then decreased gradually. Under moderate and high UIA concentrations, the school's vertical location fluctuated significantly (P<0.05), with the school alternately moving up to the water surface and down to the bottom of the aquarium and tending to crowd together. After several hours' exposure to the high UIA level, the school finally stayed at the aquarium bottom. These observations indicate that alterations in fish behavior under acute stress can provide important information useful in predicting the stress. Funding: Projects (Nos. 2001AA620104 and 2003AA603140) supported by the Hi-Tech Research and Development Program (863) of China.
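The geometric measures can be sketched with OpenCV/NumPy: segment dark fish pixels from a bright background, then compute the school's gravity center and mean spread. The synthetic frame and threshold below are placeholders for the paper's calibrated imaging setup.

```python
import cv2
import numpy as np

# Synthetic stand-in frame: bright tank background with dark blobs as "fish".
frame = np.full((240, 320), 200, dtype=np.uint8)
frame[100:115, 60:90] = 30
frame[150:165, 200:230] = 30

_, mask = cv2.threshold(frame, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
ys, xs = np.nonzero(mask)                                   # segmented fish pixels
cx, cy = xs.mean(), ys.mean()                               # gravity center of the school
dispersion = np.hypot(xs - cx, ys - cy).mean()              # mean spread around the center
print(f"center=({cx:.1f}, {cy:.1f}), dispersion={dispersion:.1f} px")
```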
Three kinds of applications of ADCP for long-term monitoring in coastal seas are reported. (1) Routine monitoring of water quality. Water quality and ADCP echo data (600 kHz) observed over the long term are analyzed at the MT (Marine Tower) Station of Kansai International Airport in Osaka Bay, Japan. The correlation between turbidity and echo intensity in the surface layer is not good because air bubbles generated by breaking waves are not detected by the turbidity meter but are detected well by the ADCP. When estimating the turbidity due to plankton population from echo intensity, the effect of bubbles has to be eliminated. (2) Monitoring the stirring up of bottom sediment. A special observation was carried out using two ADCPs in Osaka Bay: one ADCP was installed on the sea bottom facing upward, and the other was hung facing downward from a gate-type stand about 3 m above the bottom. At spring tide, high echo intensities indicating the stirring up of bottom sediment were observed. (3) Monitoring the boundary condition of water mixing at an estuary. In the summer season, the ADCP was set at the mouth of Tanabe Bay in Wakayama Prefecture, Japan. During the observation, the water temperature near the bottom showed remarkable falls at intervals of about 5-7 d. When the bottom temperature fell, an inflow current of low-echo-intensity water appeared in the bottom layer of the ADCP record. It is concluded that when occasional weak northeast wind produces weak coastal upwelling at the mouth of the bay, the combination of upwelling with internal tidal flow causes remarkable water exchange and dispels the red tide.
In this article, the authors establish some new nonlinear difference inequalities in two independent variables, which generalize some existing results and can be used as handy tools in the study of qualitative as well as quantitative properties of solutions of certain classes of difference equations.
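The abstract does not state the inequalities' exact form; a representative two-variable discrete inequality of the Gronwall-Bellman type that such results typically generalize is:

```latex
% Representative (not the paper's) two-variable nonlinear difference inequality
% of discrete Gronwall--Bellman type, for nonnegative u, a, b on N_0 x N_0:
u(m,n) \le a(m,n) + \sum_{s=0}^{m-1} \sum_{t=0}^{n-1} b(s,t)\, u(s,t)^{p},
\qquad 0 < p \le 1,
% from which such articles derive explicit pointwise bounds on u(m,n).
```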
基金The Faculty of Information Science and Technology,Universiti Kebangsaan Malaysia,provided funding for this research through the Research Grant“An Intelligent 4IR Mobile Technology for Express Bus Safety System Scheme DCP-2017-020/2”.
文摘One of the major causes of road accidents is sleepy drivers.Such accidents typically result in fatalities and financial losses and disadvantage other road users.Numerous studies have been conducted to identify the driver’s sleepiness and integrate it into a warning system.Most studies have examined how the mouth and eyelids move.However,this limits the system’s ability to identify drowsiness traits.Therefore,this study designed an Accident Detection Framework(RPK)that could be used to reduce road accidents due to sleepiness and detect the location of accidents.The drowsiness detectionmodel used three facial parameters:Yawning,closed eyes(blinking),and an upright head position.This model used a Convolutional Neural Network(CNN)consisting of two phases.The initial phase involves video processing and facial landmark coordinate detection.The second phase involves developing the extraction of frame-based features using normalization methods.All these phases used OpenCV and TensorFlow.The dataset contained 5017 images with 874 open eyes images,850 closed eyes images,723 open-mouth images,725 closed-mouth images,761 sleepy-head images,and 1084 non-sleepy head images.The dataset of 5017 images was divided into the training set with 4505 images and the testing set with 512 images,with a ratio of 90:10.The results showed that the RPK design could detect sleepiness by using deep learning techniques with high accuracy on all three parameters;namely 98%for eye blinking,96%for mouth yawning,and 97%for head movement.Overall,the test results have provided an overview of how the developed RPK prototype can accurately identify drowsy drivers.These findings will have a significant impact on the improvement of road users’safety and mobility.
基金supported by the Center for Cyber-Physical Systems,Khalifa University,under Grant 8474000137-RC1-C2PS-T5.
文摘The software engineering field has long focused on creating high-quality software despite limited resources.Detecting defects before the testing stage of software development can enable quality assurance engineers to con-centrate on problematic modules rather than all the modules.This approach can enhance the quality of the final product while lowering development costs.Identifying defective modules early on can allow for early corrections and ensure the timely delivery of a high-quality product that satisfies customers and instills greater confidence in the development team.This process is known as software defect prediction,and it can improve end-product quality while reducing the cost of testing and maintenance.This study proposes a software defect prediction system that utilizes data fusion,feature selection,and ensemble machine learning fusion techniques.A novel filter-based metric selection technique is proposed in the framework to select the optimum features.A three-step nested approach is presented for predicting defective modules to achieve high accuracy.In the first step,three supervised machine learning techniques,including Decision Tree,Support Vector Machines,and Naïve Bayes,are used to detect faulty modules.The second step involves integrating the predictive accuracy of these classification techniques through three ensemble machine-learning methods:Bagging,Voting,and Stacking.Finally,in the third step,a fuzzy logic technique is employed to integrate the predictive accuracy of the ensemble machine learning techniques.The experiments are performed on a fused software defect dataset to ensure that the developed fused ensemble model can perform effectively on diverse datasets.Five NASA datasets are integrated to create the fused dataset:MW1,PC1,PC3,PC4,and CM1.According to the results,the proposed system exhibited superior performance to other advanced techniques for predicting software defects,achieving a remarkable accuracy rate of 92.08%.
文摘Escalating cyber security threats and the increased use of Internet of Things(IoT)devices require utilisation of the latest technologies available to supply adequate protection.The aim of Intrusion Detection Systems(IDS)is to prevent malicious attacks that corrupt operations and interrupt data flow,which might have significant impact on critical industries and infrastructure.This research examines existing IDS,based on Artificial Intelligence(AI)for IoT devices,methods,and techniques.The contribution of this study consists of identification of the most effective IDS systems in terms of accuracy,precision,recall and F1-score;this research also considers training time.Results demonstrate that Graph Neural Networks(GNN)have several benefits over other traditional AI frameworks through their ability to achieve in excess of 99%accuracy in a relatively short training time,while also capable of learning from network traffic the inherent characteristics of different cyber-attacks.These findings identify the GNN(a Deep Learning AI method)as the most efficient IDS system.The novelty of this research lies also in the linking between high yielding AI-based IDS algorithms and the AI-based learning approach for data privacy protection.This research recommends Federated Learning(FL)as the AI training model,which increases data privacy protection and reduces network data flow,resulting in a more secure and efficient IDS solution.
文摘Random pixel selection is one of the image steganography methods that has achieved significant success in enhancing the robustness of hidden data.This property makes it difficult for steganalysts’powerful data extraction tools to detect the hidden data and ensures high-quality stego image generation.However,using a seed key to generate non-repeated sequential numbers takes a long time because it requires specific mathematical equations.In addition,these numbers may cluster in certain ranges.The hidden data in these clustered pixels will reduce the image quality,which steganalysis tools can detect.Therefore,this paper proposes a data structure that safeguards the steganographic model data and maintains the quality of the stego image.This paper employs the AdelsonVelsky and Landis(AVL)tree data structure algorithm to implement the randomization pixel selection technique for data concealment.The AVL tree algorithm provides several advantages for image steganography.Firstly,it ensures balanced tree structures,which leads to efficient data retrieval and insertion operations.Secondly,the self-balancing nature of AVL trees minimizes clustering by maintaining an even distribution of pixels,thereby preserving the stego image quality.The data structure employs the pixel indicator technique for Red,Green,and Blue(RGB)channel extraction.The green channel serves as the foundation for building a balanced binary tree.First,the sender identifies the colored cover image and secret data.The sender will use the two least significant bits(2-LSB)of RGB channels to conceal the data’s size and associated information.The next step is to create a balanced binary tree based on the green channel.Utilizing the channel pixel indicator on the LSB of the green channel,we can conceal bits in the 2-LSB of the red or blue channel.The first four levels of the data structure tree will mask the data size,while subsequent levels will conceal the remaining digits of secret data.After embedding the bits in the binary tree level by level,the model restores the AVL tree to create the stego image.Ultimately,the receiver receives this stego image through the public channel,enabling secret data recovery without stego or crypto keys.This method ensures that the stego image appears unsuspicious to potential attackers.Without an extraction algorithm,a third party cannot extract the original secret information from an intercepted stego image.Experimental results showed high levels of imperceptibility and security.
基金This research was funded by the Ministry of Higher Education(MOHE)through Fundamental Research Grant Scheme(FRGS)under the Grand Number FRGS/1/2020/ICT01/UK M/02/4,and University Kebangsaan Malaysia for open access publication.
文摘Image steganography is one of the prominent technologies in data hiding standards.Steganographic system performance mostly depends on the embedding strategy.Its goal is to embed strictly confidential information into images without causing perceptible changes in the original image.The randomization strategies in data embedding techniques may utilize random domains,pixels,or region-of-interest for concealing secrets into a cover image,preventing information from being discovered by an attacker.The implementation of an appropriate embedding technique can achieve a fair balance between embedding capability and stego image imperceptibility,but it is challenging.A systematic approach is used with a standard methodology to carry out this study.This review concentrates on the critical examination of several embedding strategies,incorporating experimental results with state-of-the-art methods emphasizing the robustness,security,payload capacity,and visual quality metrics of the stego images.The fundamental ideas of steganography are presented in this work,along with a unique viewpoint that sets it apart from previous works by highlighting research gaps,important problems,and difficulties.Additionally,it offers a discussion of suggested directions for future study to advance and investigate uncharted territory in image steganography.
文摘A nursing care planning system that automatically generated nursing summaries from information entered into the Psychiatric Outcome Management System (PSYCHOMS?, Tanioka et al.), was developed to enrich the content of nursing summaries at psychiatric hospitals, thereby reducing the workload of nurses. Preparing nursing summaries entails finding the required information in nursing records that span a long period of time and then concisely summarizing this information. This time consuming process depends on the clinical experience and writing ability of the nurse. The system described here automatically generates the text data needed for nursing summaries using an algorithm that synthesizes patient information recorded in electronic charts, the Nursing Care Plan information or the data entered for North American Nursing Diagnosis Association (NANDA) 13 domains with predetermined fixed phrases. Advantages of this system are that it enables nursing summaries to be generated automatically in real time, simplifies the process, and permits the standardization of useful nursing summaries that reflect the course of the nursing care provided and its evaluation. Use of this system to automatically generate nursing summaries will allow more nursing time to be devoted to patient care. The system is also useful because it enables nursing summaries that contain the required information to be generated regardless of who prepares them.
基金Funded by the Natural Science Foundation of Hubei Province(No.2022CFB861)the Wenhua College Research and Innovation Team(No.2022T01)。
文摘A novel fiber optic sensor based on hydrogel-immobilized enzyme complex was developed for the simultaneous measurement of dual-parameter,the leap from a single parameter detecting fiber optic sensor to a fiber optic sensor that can continuously detect two kinds of parameters was achieved.By controlling the temperature from high to low,the function of fiber sulfide sensor and fiber DCP sensor can be realized,so as to realize the continuous detection of dual-parameter.The different variables affecting the sensor performance were evaluated and optimized.Under the optimal conditions,the response curves,linear detection ranges,detection limits and response times of the dual-parameter sensor for testing sulfide and DCP were obtained,respectively.The sensor displays high selectivity,good repeatability and stability,which have good potentials in analyzing sulfide and DCP concentration of practical water samples.
基金This paper is supported by Research Grant Number:PP-FTSM-2022.
文摘This research recognizes the limitation and challenges of adaptingand applying Process Mining as a powerful tool and technique in theHypothetical Software Architecture (SA) Evaluation Framework with thefeatures and factors of lightweightness. Process mining deals with the largescalecomplexity of security and performance analysis, which are the goalsof SA evaluation frameworks. As a result of these conjectures, all ProcessMining researches in the realm of SA are thoroughly reviewed, and ninechallenges for Process Mining Adaption are recognized. Process mining isembedded in the framework and to boost the quality of the SA model forfurther analysis, the framework nominates architectural discovery algorithmsFlower, Alpha, Integer Linear Programming (ILP), Heuristic, and Inductiveand compares them vs. twelve quality criteria. Finally, the framework’s testingon three case studies approves the feasibility of applying process mining toarchitectural evaluation. The extraction of the SA model is also done by thebest model discovery algorithm, which is selected by intensive benchmarkingin this research. This research presents case studies of SA in service-oriented,Pipe and Filter, and component-based styles, modeled and simulated byHierarchical Colored Petri Net techniques based on the cases’ documentation.Processminingwithin this framework dealswith the system’s log files obtainedfrom SA simulation. Applying process mining is challenging, especially for aSA evaluation framework, as it has not been done yet. The research recognizesthe problems of process mining adaption to a hypothetical lightweightSA evaluation framework and addresses these problems during the solutiondevelopment.
基金the Ministry of Higher Education Malaysia for funding this research project through Fundamental Research Grant Scheme(FRGS)with Project Code:FRGS/1/2022/TK02/UCSI/02/1 and also to UCSI University.
文摘In wireless communications, the Ambient Backscatter Communication (AmBC) technique is a promisingapproach, detecting user presence accurately at low power levels. At low power or a low Signal-to-Noise Ratio(SNR), there is no dedicated power for the users. Instead, they can transmit information by reflecting the ambientRadio Frequency (RF) signals in the spectrum. Therefore, it is essential to detect user presence in the spectrum forthe transmission of data without loss or without collision at a specific time. In this paper, the authors proposed anovel Spectrum Sensing (SS) detection technique in the Cognitive Radio (CR) spectrum, by developing the AmBC.Novel Matched Filter Detection with Inverse covariance (MFDI), Cyclostationary Feature Detection with Inversecovariance (CFDI) and Hybrid Filter Detection with Inverse covariance (HFDI) approaches are used with AmBCto detect the presence of users at low power levels. The performance of the three detection techniques is measuredusing the parameters of Probability of Detection (PD), Probability of False Alarms (Pfa), Probability of MissedDetection (Pmd), sensing time and throughput at low power or low SNR. The results show that there is a significantimprovement via the HFDI technique for all the parameters.
文摘The Internet of Medical Things(IoMT)enables digital devices to gather,infer,and broadcast health data via the cloud platform.The phenomenal growth of the IoMT is fueled by many factors,including the widespread and growing availability of wearables and the ever-decreasing cost of sensor-based technology.There is a growing interest in providing solutions for elderly people living assistance in a world where the population is rising rapidly.The IoMT is a novel reality transforming our daily lives.It can renovate modern healthcare by delivering a more personalized,protective,and collaborative approach to care.However,the current healthcare system for outdoor senior citizens faces new challenges.Traditional healthcare systems are inefficient and lack user-friendly technologies and interfaces appropriate for elderly people in an outdoor environment.Hence,in this research work,a IoMT based Smart Healthcare of Elderly people using Deep Extreme Learning Machine(SH-EDELM)is proposed to monitor the senior citizens’healthcare.The performance of the proposed SH-EDELM technique gives better results in terms of 0.9301 accuracy and 0.0699 miss rate,respectively.
文摘Signature verification is regarded as the most beneficial behavioral characteristic-based biometric feature in security and fraud protection.It is also a popular biometric authentication technology in forensic and commercial transactions due to its various advantages,including noninvasiveness,user-friendliness,and social and legal acceptability.According to the literature,extensive research has been conducted on signature verification systems in a variety of languages,including English,Hindi,Bangla,and Chinese.However,the Arabic Offline Signature Verification(OSV)system is still a challenging issue that has not been investigated as much by researchers due to the Arabic script being distinguished by changing letter shapes,diacritics,ligatures,and overlapping,making verification more difficult.Recently,signature verification systems have shown promising results for recognizing signatures that are genuine or forgeries;however,performance on skilled forgery detection is still unsatisfactory.Most existing methods require many learning samples to improve verification accuracy,which is a major drawback because the number of available signature samples is often limited in the practical application of signature verification systems.This study addresses these issues by presenting an OSV system based on multifeature fusion and discriminant feature selection using a genetic algorithm(GA).In contrast to existing methods,which use multiclass learning approaches,this study uses a oneclass learning strategy to address imbalanced signature data in the practical application of a signature verification system.The proposed approach is tested on three signature databases(SID)-Arabic handwriting signatures,CEDAR(Center of Excellence for Document Analysis and Recognition),and UTSIG(University of Tehran Persian Signature),and experimental results show that the proposed system outperforms existing systems in terms of reducing the False Acceptance Rate(FAR),False Rejection Rate(FRR),and Equal Error Rate(ERR).The proposed system achieved 5%improvement.
文摘Knowing each other is obligatory in a multi-agent collaborative environment.Collaborators may develop the desired know-how of each other in various aspects such as habits,job roles,status,and behaviors.Among different distinguishing characteristics related to a person,personality traits are an effective predictive tool for an individual’s behavioral pattern.It has been observed that when people are asked to share their details through questionnaires,they intentionally or unintentionally become biased.They knowingly or unknowingly provide enough information in much-unbiased comportment in open writing about themselves.Such writings can effectively assess an individual’s personality traits that may yield enormous possibilities for applications such as forensic departments,job interviews,mental health diagnoses,etc.Stream of consciousness,collected by James Pennbaker and Laura King,is one such way of writing,referring to a narrative technique where the emotions and thoughts of the writer are presented in a way that brings the reader to the fluid through the mental states of the narrator.More-over,computationally,various attempts have been made in an individual’s personality traits assessment through deep learning algorithms;however,the effectiveness and reliability of results vary with varying word embedding techniques.This article proposes an empirical approach to assessing personality by applying convolutional networks to text documents.Bidirectional Encoder Representations from Transformers(BERT)word embedding technique is used for word vector generation to enhance the contextual meanings.
基金supported by a grant from the Strategic Information and Communication R&D Promotion Program(SCOPE)in Japan(No.122309008).
文摘In psychiatric hospitals, the ratios between patients versus physician and patients versus nurse are low as compared to those in general hospitals. Furthermore, usages of electronic medical records are also low so that nurse administrators are limited in their ability to compile, analyze, and generate patient care staffing information for their administrative use. Psychiatric nurse administrators anticipate the development of a nursing administration analysis system that could perform personnel data simulation, manage information on nursing staff, and manage ward/ practice operations. Responding to this situation, the authors developed a nursing administration analysis system utilizing formulae from the Psychiatric Outcome Management System, PSYCHOMS®to aid nurse administrators. Such formulae are awaiting patent approval. The purpose of this study was to examine the validity of the formulae and the Structured Query Language (SQL) statement, and its practical effectiveness of analyzing data. The study findings showed that two kinds of computation expressions—a classification and extraction were able to display required information desired by nurse administrators. Moreover, significant information critical to assigning staff was validated to ensure high quality of nursing care according to the function and characteristic of the hospital ward.
文摘Autism spectrum disorder(ASD)is a challenging and complex neurodevelopment syndrome that affects the child’s language,speech,social skills,communication skills,and logical thinking ability.The early detection of ASD is essential for delivering effective,timely interventions.Various facial features such as a lack of eye contact,showing uncommon hand or body movements,bab-bling or talking in an unusual tone,and not using common gestures could be used to detect and classify ASD at an early stage.Our study aimed to develop a deep transfer learning model to facilitate the early detection of ASD based on facial fea-tures.A dataset of facial images of autistic and non-autistic children was collected from the Kaggle data repository and was used to develop the transfer learning AlexNet(ASDDTLA)model.Our model achieved a detection accuracy of 87.7%and performed better than other established ASD detection models.Therefore,this model could facilitate the early detection of ASD in clinical practice.
基金supported by the Ministry of Education,Malaysia(Grant Code:FRGS/1/2018/ICT02/UKM/02/6).
文摘Physical sensors,intelligent sensors,and output recommenda-tions are all examples of smart health technology that can be used to monitor patients’health and change their behavior.Smart health is an Internet-of-Things(IoT)-aware network and sensing infrastructure that provides real-time,intelligent,and ubiquitous healthcare services.Because of the rapid development of cloud computing,as well as related technologies such as fog computing,smart health research is progressively moving in the right direction.Cloud,fog computing,IoT sensors,blockchain,privacy and security,and other related technologies have been the focus of smart health research in recent years.At the moment,the focus in cloud and smart health research is on how to use the cloud to solve the problem of enormous health data and enhance service performance,including cloud storage,retrieval,and calculation of health big data.This article reviews state-of-the-art edge computing methods that has shifted to the collection,transmission,and calculation of health data,which includes various sensors and wearable devices used to collect health data,various wireless sensor technologies,and how to process health data and improve edge performance,among other things.Finally,the typical smart health application cases,blockchain’s application in smart health,and related privacy and security issues were reviewed,as well as future difficulties and potential for smart health services.The comparative analysis provides a reference for the the mobile edge computing in healthcare systems.
Abstract: Deep learning has been a catalyst for a transformative revolution in machine learning and computer vision over the past decade. Within these research domains, methods grounded in deep learning have exhibited exceptional performance across a spectrum of tasks. Their success can be attributed to their capability to derive potent representations from data, integral to a myriad of downstream applications. These representations encapsulate the intrinsic structure, features, or latent variables characterizing the underlying statistics of visual data. Despite these achievements, effectively conducting representation learning on visual data with deep models remains challenging, particularly when confronted with vast and noisy datasets. This special issue is a dedicated platform for researchers worldwide to disseminate their latest, high-quality articles, aiming to enhance readers' comprehension of the principles, limitations, and diverse applications of representation learning in computer vision.
Abstract: Energy management is a promising domain in the development of renewable energy sources. However, the growth of decentralized energy production confronts power grid managers with increased complexity, demanding higher quality and reliability in regulating electricity flows and a smaller imbalance between electricity production and demand. The major objectives of an energy management system are to achieve optimal energy procurement and utilization throughout the organization, minimize energy costs without affecting production, and minimize environmental effects. Modern energy management is an essential and complex subject because of excessive consumption in residential buildings, which necessitates energy optimization alongside increased user comfort. Many researchers have developed frameworks to address energy management; each seeks to balance user comfort against energy consumption, but the problem remains only partially solved owing to its inherent difficulty. An inclusive and Intelligent Energy Management System (IEMS) aims to provide overall energy efficiency: increased power generation, greater flexibility, more renewable generation, improved energy consumption, reduced carbon dioxide emissions, improved stability, and reduced energy costs. Machine Learning (ML) is an emerging approach that can predict energy efficiency more effectively with the assistance of the Internet of Energy (IoE) network, which plays a vital role in the energy sector by collecting accurate data on generation and usage, enabling smart resource management. In this work, an IEMS for Smart Cities (SC) is proposed that uses ML to better resolve the energy management problem. The proposed system minimized energy consumption and outperformed previous approaches, achieving 92.11% accuracy and a 7.89% miss rate.
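As a minimal sketch of the predictive core such an IEMS might contain, the snippet below trains a classifier on synthetic IoE-style sensor features and reports accuracy and miss rate in the same terms as the paper's evaluation. The feature set, labels, and model choice are all assumptions; the paper's actual architecture is not reproduced here.

```python
# Hedged sketch: ML classifier flagging inefficient energy usage from
# (synthetic) IoE sensor readings. Features and labels are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2000
# Hypothetical features: temperature, occupancy, appliance load, hour.
X = np.column_stack([
    rng.normal(22, 4, n),        # indoor temperature (C)
    rng.integers(0, 6, n),       # occupants
    rng.gamma(2.0, 1.5, n),      # appliance load (kW)
    rng.integers(0, 24, n),      # hour of day
])
# Synthetic label: "inefficient" when load is high relative to occupancy.
y = (X[:, 2] > 3.0 + 0.5 * X[:, 1]).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

acc = accuracy_score(y_te, clf.predict(X_te))
print(f"accuracy: {acc:.2%}, miss rate: {1 - acc:.2%}")
```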
Funding: Projects (Nos. 2001AA620104 and 2003AA603140) supported by the Hi-Tech Research and Development Program (863) of China.
Abstract: The behavioral responses of a tilapia (Oreochromis niloticus) school to low (0.13 mg/L), moderate (0.79 mg/L), and high (2.65 mg/L) levels of unionized ammonia (UIA) were monitored using a computer vision system. The school's swimming activity and geometrical parameters, such as the location of its gravity center and its spatial distribution, were calculated continuously. These behavioral parameters responded sensitively to moderate and high UIA concentrations. Under high UIA concentration, fish activity increased significantly (P<0.05), exhibiting an avoidance reaction to the high-ammonia condition, and then decreased gradually. Under moderate and high UIA concentrations, the school's vertical location fluctuated significantly (P<0.05), with the school alternately moving up to the water surface and down to the bottom of the aquarium and tending to crowd together. After several hours' exposure to the high UIA level, the school finally settled at the aquarium bottom. These observations indicate that alterations in fish behavior under acute stress can provide important information useful for predicting stress.
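The per-frame measurements described, the school's gravity center and spatial distribution, can be sketched with simple background subtraction, assuming fish show up as intensity deviations from a static background. The segmentation method and threshold below are assumptions; the paper's actual vision pipeline may differ.

```python
# Hedged sketch: per-frame center of gravity and dispersion of a fish
# school, using background subtraction (an assumed segmentation method).
import cv2
import numpy as np

def school_metrics(frame_gray, background_gray, thresh=30):
    """Return (cx, cy, dispersion) of the fish school in one frame."""
    # Fish appear as deviations from the static background.
    diff = cv2.absdiff(frame_gray, background_gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    cx, cy = xs.mean(), ys.mean()              # center of gravity
    # Dispersion: mean distance of fish pixels from the center.
    dispersion = np.hypot(xs - cx, ys - cy).mean()
    return cx, cy, dispersion

# Usage on synthetic data standing in for video frames:
bg = np.zeros((240, 320), np.uint8)
frame = bg.copy()
cv2.circle(frame, (100, 60), 5, 255, -1)       # two "fish"
cv2.circle(frame, (120, 80), 5, 255, -1)
print(school_metrics(frame, bg))
```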
Abstract: Three kinds of ADCP application to long-term monitoring in coastal seas are reported. (1) Routine monitoring of water quality. Long-term water quality and ADCP echo data (600 kHz) are analyzed at the MT (Marine Tower) Station of Kansai International Airport in Osaka Bay, Japan. The correlation between turbidity and echo intensity in the surface layer is poor because air bubbles generated by breaking waves are not detected by the turbidity meter but are detected well by the ADCP. When estimating turbidity due to plankton population from echo intensity, the effect of bubbles has to be eliminated. (2) Monitoring the stirring up of bottom sediment. A special observation was carried out in Osaka Bay using two ADCPs: one installed on the seabed looking upward, the other hung looking downward from a gate-type stand about 3 m above the bottom. At spring tide, high echo intensities indicating the stirring up of bottom sediment were observed. (3) Monitoring the boundary condition of water mixing at an estuary. In summer, an ADCP was set at the mouth of Tanabe Bay in Wakayama Prefecture, Japan. During the observation, the water temperature near the bottom fell markedly at intervals of about 5-7 days. When the bottom temperature fell, an inflow current of low-echo-intensity water appeared in the bottom layer of the ADCP record. It is concluded that when an occasional weak northeast wind produces weak coastal upwelling at the mouth of the bay, the combination of upwelling with internal tidal flow causes substantial water exchange and disperses the red tide.
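Point (1) amounts to a calibration problem: estimate turbidity from echo intensity while discarding the bubble-contaminated surface bins. A minimal sketch on synthetic profiles follows, with the bin depths, bubble-layer cutoff, and linear echo-to-turbidity model all assumed rather than taken from the paper.

```python
# Hedged sketch: echo-to-turbidity calibration excluding the near-surface
# layer, where bubbles inflate echo intensity but are invisible to the
# turbidity meter. All numbers below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
depth = np.linspace(0.5, 18.0, 36)                 # ADCP bin depths (m)
turbidity = 1.0 + 0.1 * depth                      # reference turbidity (FTU)
echo = 50 + 8 * turbidity + rng.normal(0, 1, 36)   # echo intensity (counts)
echo[depth < 3.0] += 15                            # bubbles inflate surface bins

# Calibrate echo -> turbidity using only bins below the bubble layer.
ok = depth >= 3.0
a, b = np.polyfit(echo[ok], turbidity[ok], 1)
print(f"turbidity ~= {a:.3f} * echo + {b:.2f} (surface bins excluded)")
```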
Funding: Supported by a HKU Seed grant and the Research Grants Council of the Hong Kong SAR (HKU7016/07P).
Abstract: In this article, the authors establish some new nonlinear difference inequalities in two independent variables, which generalize some existing results and can serve as handy tools in the study of qualitative and quantitative properties of solutions of certain classes of difference equations.
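For readers unfamiliar with this line of work, a classical prototype of the two-variable difference inequalities such results generalize is the discrete Wendroff/Gronwall bound below; the authors' new inequalities are nonlinear refinements of this kind of statement, not this specific one.

```latex
% Discrete Gronwall/Wendroff-type prototype in two independent variables:
% if $u, f \ge 0$ on $\mathbb{N}_0^2$ and $c \ge 0$ satisfy
\[
  u(m,n) \le c + \sum_{s=0}^{m-1} \sum_{t=0}^{n-1} f(s,t)\, u(s,t)
  \qquad (m, n \in \mathbb{N}_0),
\]
% then
\[
  u(m,n) \le c \prod_{s=0}^{m-1} \Bigl( 1 + \sum_{t=0}^{n-1} f(s,t) \Bigr).
\]
```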