Recently, there has been a notable surge of interest in scientific research regarding spectral images. The potential of these images to revolutionize the digital photography industry, for example through aerial photography with Unmanned Aerial Vehicles (UAVs), has captured considerable attention. One encouraging aspect is their combination with machine learning and deep learning algorithms, which have demonstrated remarkable outcomes in image classification. As a result of this powerful combination, the adoption of spectral images has grown rapidly across various domains, with agriculture being one of the prominent beneficiaries. This paper presents an extensive survey of multispectral and hyperspectral images, focusing on their applications to classification challenges in diverse agricultural areas, including plants, grains, fruits, and vegetables. By carefully examining primary studies, we identify the specific agricultural domains where multispectral and hyperspectral images have found practical use. Additionally, we direct our attention to the machine learning techniques used to classify hyperspectral images effectively within the agricultural context. The findings of our investigation reveal that deep learning and support vector machines have emerged as the most widely employed methods for hyperspectral image classification in agriculture. Nevertheless, we also shed light on the various issues and limitations of working with spectral images. This comprehensive analysis aims to provide valuable insights into the current state of spectral imaging in agriculture and its potential for future advancements.
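As context for why support vector machines recur in this literature: hyperspectral classification is commonly posed per pixel, with each pixel's spectrum (one reflectance value per band) as the feature vector. The scikit-learn sketch below illustrates that framing only; the band count, class count, and data are placeholders, not values from any surveyed study.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Each sample is one pixel's spectrum, labelled with an assumed crop class.
N_PIXELS, N_BANDS, N_CLASSES = 2000, 200, 4
X = np.random.rand(N_PIXELS, N_BANDS)            # stand-in reflectance spectra
y = np.random.randint(0, N_CLASSES, N_PIXELS)    # stand-in crop labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=10, gamma="scale").fit(X_tr, y_tr)  # RBF-kernel SVM
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")  # ~chance on random data
```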
COVID-19 pandemic restrictions limited all social activities to curtail the spread of the virus. Among the sectors affected first and most severely were schools, colleges, and universities, and the education systems of entire nations shifted to online education during this time. Many shortcomings of Learning Management Systems (LMSs) in supporting online education were exposed, which spawned research into Artificial Intelligence (AI)-based tools now being developed by the research community to improve the effectiveness of LMSs. This paper presents a detailed survey of the different enhancements to LMSs, led by key advances in AI, that improve the real-time and non-real-time user experience. The AI-based enhancements proposed for LMSs start from the Application and Presentation layers, in the form of flipped-classroom models for an efficient learning environment and appropriately designed UI/UX for efficient utilization of LMS utilities and resources, including AI-based chatbots. Session-layer enhancements are also required, such as AI-based online proctoring and user authentication using biometrics. These extend to the Transport layer to support real-time, rate-adaptive, encrypted video transmission for user security/privacy and satisfactory operation of AI algorithms. Support is also needed from the Network layer for IP-based geolocation features, Virtual Private Network (VPN) capability, and Software-Defined Networking (SDN) for optimum Quality of Service (QoS). Finally, the non-real-time user experience is enhanced by other AI-based components such as plagiarism-detection algorithms and data analytics.
In the contemporary era, the death rate due to lung cancer is increasing; at the same time, technology is continuously enhancing the quality of well-being. To improve the survival rate, radiologists rely on Computed Tomography (CT) scans for early detection and diagnosis of lung nodules. This paper presents a detailed, systematic review of several identification and categorization techniques for lung nodules. The analysis explores the challenges, advancements, and future prospects of computer-aided diagnosis (CAD) systems for detecting and classifying lung nodules using deep learning (DL) algorithms. The findings highlight the usefulness of DL networks, especially convolutional neural networks (CNNs), in elevating sensitivity, accuracy, and specificity, as well as in overcoming false positives in the initial stages of lung cancer detection. The paper further presents the integral nodule-classification stage, stressing the importance of differentiating between benign and malignant nodules for initial cancer diagnosis. Moreover, it provides a comprehensive analysis of multiple techniques and studies for nodule classification, highlighting the evolution of methodologies from conventional machine learning (ML) classifiers to transfer learning and integrated CNNs. While acknowledging the strides made by CAD systems, the review also addresses persistent challenges.
In recent years, there has been rapid growth in Underwater Wireless Sensor Networks (UWSNs). Research in this area now focuses on solving the problems associated with large-scale UWSNs, and one of the major issues in such networks is the localization of underwater nodes. Localization is required for tracking objects and detecting targets; it also serves to tag data, since sensed content is of little use to an application until the position at which it was sensed is confirmed. The major goal of this article is to review and analyze underwater node localization in order to address the localization issues in UWSNs. The paper describes various existing localization schemes and broadly categorizes them as centralized and distributed underwater localization schemes, along with a detailed subdivision of each category. These localization schemes are then compared from different perspectives, and a detailed analysis in terms of key performance metrics is discussed. At the end, the paper addresses several future directions for research on improving localization in UWSNs.
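Most of the surveyed schemes, centralized or distributed, ultimately reduce to estimating a node's coordinates from ranges to a few anchor nodes. As a minimal illustration of that core step (not any specific scheme from the survey), the sketch below solves the standard linearized multilateration problem with a least-squares fit; the anchor positions and noisy ranges are made-up values.

```python
import numpy as np

def multilaterate(anchors, ranges):
    """Least-squares position from anchor coordinates and measured ranges.

    Subtracting the first anchor's sphere equation |x - a_i|^2 = r_i^2
    from the others linearizes the problem into A x = b.
    """
    anchors = np.asarray(anchors, float)
    ranges = np.asarray(ranges, float)
    a0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (r0**2 - ranges[1:]**2
         + (anchors[1:]**2).sum(axis=1) - (a0**2).sum())
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Hypothetical anchor nodes at known (x, y, depth) positions.
rng = np.random.default_rng(0)
anchors = [(0, 0, 0), (100, 0, -5), (0, 100, -10), (100, 100, -2)]
node = np.array([40.0, 55.0, -30.0])
ranges = [np.linalg.norm(node - a) + rng.normal(0, 0.5) for a in anchors]
print(multilaterate(anchors, ranges))   # roughly recovers (40, 55, -30)
```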
Visible light communication (VLC) has a paramount role in industrial implementations, especially for better energy efficiency, high-speed data rates, and low susceptibility to interference. However, since studies on VLC for industrial implementations are scarce, areas concerning illumination optimisation and communication performance demand further investigation. As such, this paper presents a new model of light-fixture distribution for a warehouse in order to provide acceptable illumination and communication performance. The proposed model was evaluated at various semi-angles at half power (SAAHP) and different height levels for several parameters, including received power, signal-to-noise ratio (SNR), and bit error rate (BER). The results revealed improvements in received power and SNR at a 30 Mbps data rate. Various modulations were studied to improve the link quality, with better average BER values of 5.55×10^(−15) and 1.06×10^(−10) achieved with 4-PAM and 8-PPM, respectively. The simulation outcomes are viable for a practical warehouse model. Funding: supported by the Professional Development Research University Grant (UTM Vot No. 06E59).
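The received-power and SNR figures in such studies typically come from the textbook Lambertian line-of-sight (LOS) channel model. The sketch below computes received optical power and electrical SNR for one fixture-receiver pair under that model; all numeric parameters (transmit power, detector area, angles, noise) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def los_received_power(pt, d, phi, psi, half_angle, area, fov, ts=1.0, g=1.0):
    """Received optical power for a Lambertian LOS VLC link.

    pt: transmitted optical power (W); d: distance (m);
    phi: irradiance angle, psi: incidence angle (rad);
    half_angle: LED semi-angle at half power (rad);
    area: photodetector area (m^2); fov: receiver field of view (rad);
    ts, g: optical filter and concentrator gains.
    """
    if psi > fov:
        return 0.0                                  # outside the receiver FOV
    m = -np.log(2) / np.log(np.cos(half_angle))     # Lambertian order
    h = ((m + 1) * area / (2 * np.pi * d**2)) * np.cos(phi)**m * ts * g * np.cos(psi)
    return pt * h

# Illustrative link: 3 W fixture, 60 deg SAAHP, receiver 2.5 m below, on-axis.
pr = los_received_power(pt=3.0, d=2.5, phi=0.0, psi=0.0,
                        half_angle=np.radians(60), area=1e-4, fov=np.radians(70))
resp, noise_var = 0.54, 1e-13      # photodiode responsivity (A/W), noise power
snr_db = 10 * np.log10((resp * pr)**2 / noise_var)
print(f"Pr = {pr*1e3:.3f} mW, SNR = {snr_db:.1f} dB")
```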
In situations where the precise position of a machine is unknown, localization becomes crucial. This research focuses on improving position-prediction accuracy over a long-range (LoRa) network using an optimized machine learning (ML)-based technique, applied to data collected with the fingerprinting method over LoRa technology. Received signal strength indicator (RSSI) data from sensors at different positions was first gathered experimentally over the LoRa network in a multistory, round-layout building. The noise factor is also taken into account, with the signal-to-noise ratio (SNR) recorded for every RSSI measurement. The study then examines reference-point accuracy with a modified KNN method (MKNN), created to predict the position of the reference point more precisely. The findings showed that MKNN outperformed other algorithms in terms of accuracy and complexity. Funding: funded by Multimedia University, Department of Information Technology, Persiaran Multimedia, 63100 Cyberjaya, Selangor, Malaysia.
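Fingerprint-based localization of this kind matches a live RSSI vector against a database of (RSSI, position) records. The snippet below shows a plain distance-weighted KNN over such fingerprints; it is a generic baseline, not the paper's MKNN, whose exact modification the abstract does not describe, and the toy database is invented.

```python
import numpy as np

def wknn_locate(fingerprints, positions, rssi_query, k=3):
    """Distance-weighted KNN position estimate from RSSI fingerprints.

    fingerprints: (n_refs, n_gateways) RSSI training matrix (dBm)
    positions:    (n_refs, 2) reference-point coordinates
    rssi_query:   (n_gateways,) live RSSI vector to locate
    """
    d = np.linalg.norm(fingerprints - rssi_query, axis=1)  # signal-space distance
    idx = np.argsort(d)[:k]                                # k nearest fingerprints
    w = 1.0 / (d[idx] + 1e-9)                              # closer -> heavier weight
    return (w[:, None] * positions[idx]).sum(axis=0) / w.sum()

# Toy database: 4 reference points heard by 3 LoRa gateways.
fp  = np.array([[-60, -80, -90], [-70, -70, -85], [-85, -65, -75], [-90, -80, -60]])
pos = np.array([[0.0, 0.0], [5.0, 0.0], [5.0, 5.0], [0.0, 5.0]])
print(wknn_locate(fp, pos, np.array([-72, -71, -83])))  # lands near (5, 0)
```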
Due to the overwhelming characteristics of the Internet of Things (IoT) and its adoption in approximately every aspect of our lives, the privacy of individual devices has gained prominent attention from both customers, i.e., people, and industries, as wearable devices collect sensitive information about patients (both admitted and outpatient) in smart healthcare infrastructures. In addition to privacy, outliers or noise are among the crucial issues directly correlated with IoT infrastructures, as most member devices are resource-limited and could generate or transmit false data that must be refined before processing, i.e., transmission. Therefore, the development of privacy-preserving information fusion techniques is highly encouraged, especially those designed for smart IoT-enabled domains. In this paper, we present an effective hybrid approach that refines the raw data values captured by each member device before transmission while preserving privacy through the differential privacy technique in IoT infrastructures. A sliding-window, i.e., δi-based, dynamic programming methodology is implemented at the device level to ensure precise and accurate detection of outliers or noisy data and to refine them prior to the respective transmission activity. Additionally, an appropriate privacy budget has been selected, sufficient to ensure the privacy of every individual module, i.e., a wearable device such as a smartwatch attached to the patient's body, while the end module, i.e., the server in this case, can still extract important information with approximately the maximum level of accuracy. Moreover, the refined data are processed by adding appropriate noise through the Laplace mechanism to make them useless or meaningless to adversary modules in the IoT. The proposed hybrid approach is trustworthy from the perspectives of both device privacy and the integrity of the transmitted information. Simulation and analytical results have proved that the proposed privacy-preserving information fusion technique for wearable devices is an ideal solution for resource-constrained infrastructures such as the IoT and the Internet of Medical Things, where both device privacy and information integrity are important. Finally, the proposed hybrid approach is proven against well-known intruder attacks, especially those related to the privacy of the respective device in IoT infrastructures. Funding: supported by the Ministry of Higher Education of Malaysia under Research Grant LRGS/1/2019/UKM-UKM/5/2 and by Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia, through Supporting Project Number PNURSP2024R235.
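The Laplace mechanism the abstract refers to is standard: to release a value whose sensitivity is Δf under privacy budget ε, add noise drawn from Laplace(0, Δf/ε). A minimal sketch follows; the sensitivity and budget values are illustrative assumptions, not the paper's chosen parameters.

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng=np.random.default_rng()):
    """Release `value` with epsilon-differential privacy.

    Noise scale b = sensitivity / epsilon: a smaller privacy budget
    (stronger privacy) means wider noise.
    """
    return value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: a smartwatch reports an already outlier-filtered heart rate.
# Assume one reading can shift the reported value by at most 2 bpm.
true_reading = 78.0
private_reading = laplace_mechanism(true_reading, sensitivity=2.0, epsilon=0.5)
print(f"released: {private_reading:.1f} bpm")
```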
In light of the coronavirus disease 2019 (COVID-19) outbreak caused by the novel coronavirus, companies and institutions have instructed their employees to work from home as a precautionary measure to reduce the risk of contagion. Employees, however, have been exposed to different security risks because of working from home. Moreover, the rapid global spread of COVID-19 has increased the volume of data generated from various sources. Working from home depends mainly on cloud computing (CC) applications that help employees efficiently accomplish their tasks. The cloud computing environment (CCE) is an unsung hero of the COVID-19 pandemic crisis: it provides fast-paced service practices that reflect the trend toward rapidly deployable applications for maintaining data. Despite the increase in the use of CC applications, there are ongoing research challenges in the CCE domain concerning data, guaranteeing security, and the availability of CC applications. This paper, to the best of our knowledge, is the first to thoroughly explain the impact of the COVID-19 pandemic on the CCE. Additionally, it highlights the security risks of working from home during the COVID-19 pandemic.
This study was undertaken to examine the options for, and feasibility of, deploying new technologies to transform the aquaculture sector with the objective of increasing production efficiency. The selection of technologies to obtain the expected outcome should, obviously, be consistent with the criteria of sustainable development. A range of technologies has been suggested for driving change in aquaculture to enhance its contribution to food security. It is necessary to highlight the complexity of the issues for a systems approach that can shape the course of development of aquaculture, so that it can live up to the expected fish demand by 2030 beyond the current quantity of 82.1 million tons. Some of the Fourth Industrial Revolution (IR4.0) technologies suggested to achieve this target envisage the use of real-time monitoring, integration of a constant stream of data from connected production systems, and intelligent automation in controls. This requires the application of mobile devices, the Internet of Things (IoT), smart sensors, artificial intelligence (AI), big data analytics, and robotics, as well as augmented, virtual, and mixed reality. AI is receiving particular attention for many reasons; in aquaculture it can be used in many ways, for example in detecting and mitigating stress in captive fish, which is considered critical for the success of aquaculture. While technology intensification in aquaculture holds great potential, there are constraints to deploying IR4.0 tools in the sector. Possible solutions and practical options, especially with respect to future food choices, are highlighted in this paper. Funding: Aquaculture Flagship program of Universiti Malaysia Sabah.
The computational complexity of resource-allocation processes in cognitive radio networks (CRNs) is a major issue to be managed. Furthermore, the complexity of the optimal algorithm for handling resource allocation in CRNs makes it unsuitable for real-world applications in which both cognitive users (CRs) and primary users (PUs) exist in the same geographical area. Hence, this work offers a primarily price-based power algorithm to reduce computational complexity in uplink scenarios while limiting interference to PUs to an allowable threshold. Compared with other frameworks proposed in the literature, this paper proposes a two-step approach to reduce the complexity of the proposed mathematical model: in the first step, subcarriers are assigned to the users of the CRN, while in the second stage a cost function that includes a pricing scheme provides better power control with improved reliability. The main contribution of this paper is to lessen the complexity of the proposed algorithm and to offer flexibility in controlling the interference produced to users of the primary networks, which has been achieved by including a pricing function in the proposed cost function. Finally, the performance of the proposed power and subcarrier algorithm is confirmed for orthogonal frequency-division multiplexing (OFDM). Simulation results prove that the proposed algorithm performs better than other algorithms while having a lower complexity of O(NM)+O(N log(N)). Funding: the authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through the Large Groups Project under Grant Number RGP.2/111/43; supported in part by the Agencia Estatal de Investigación, Ministerio de Ciencia e Innovación (MCIN/AEI/10.13039/501100011033), the R+D+i Project under Grant PID2020-115323RB-C31, and in part by a grant from the Spanish Ministry of Economic Affairs and Digital Transformation and the European Union-NextGenerationEU under Grant UNICO-5G I+D/AROMA3D-Hybrid TSI-063000-2021-71.
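The O(NM)+O(N log N) figure is consistent with a greedy gain-based pass over N subcarriers and M users followed by a sort-based power step. The sketch below is only one plausible reading of such a two-step scheme, with a linear pricing term added to a log-utility; it is not the paper's actual model, and every numeric parameter is invented.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 8, 3                        # subcarriers, cognitive users
gain = rng.rayleigh(1.0, (N, M))   # per-subcarrier channel gains
c = 0.4                            # pricing coefficient in utility - c*p
noise = 0.1
p_max, power_budget = 1.0, 2.0     # per-subcarrier cap and total cap

# Step 1, O(NM): greedy assignment - each subcarrier to its best-gain user.
owner = gain.argmax(axis=1)
g = gain[np.arange(N), owner]

# Step 2: priced power p_n = max(0, 1/c - noise/g_n), the maximizer of
# log(1 + g_n p / noise) - c*p, then an O(N log N) sorted trim so the
# total stays under the budget that protects the primary users.
p = np.clip(1.0 / c - noise / g, 0.0, p_max)
for n in np.argsort(g):            # shed power on the weakest channels first
    excess = p.sum() - power_budget
    if excess <= 0:
        break
    p[n] -= min(p[n], excess)
print(owner, np.round(p, 3))
```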
This study focuses on testing, quality measurement, and analysis of VoIPv6 performance. Client and server code was developed using FreeBSD as a step toward analyzing VoIPv6 architectures in the current Internet, so that it can cope with IPv6 traffic-transmission requirements in general and voice traffic specifically, which is currently attracting the efforts of research bodies. These tests were conducted at the application level, without examining the network level. VoIPv6 performance tests were conducted over both tunneled and native IPv6, aiming for better end-to-end VoIPv6 performance. The results are reported for different codecs at different bit rates in kilobits per second and indicate the better performance of G.711 compared with the rest of the tested codecs.
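When comparing codecs at different bit rates, per-call bandwidth on the wire is the codec payload rate plus RTP/UDP/IP header overhead at the packet rate. The helper below computes this for IPv6; the codec figures used (G.711 at 64 kbps, G.729 at 8 kbps, 20 ms packetization) are standard textbook values, not measurements from the study.

```python
def voip_bandwidth_kbps(codec_kbps, packet_ms=20.0, ip_header=40):
    """Per-direction VoIP bandwidth including RTP(12) + UDP(8) + IP headers.

    ip_header=40 for IPv6 (20 for IPv4); link-layer overhead ignored.
    """
    packets_per_s = 1000.0 / packet_ms
    payload_bytes = codec_kbps * 1000 / 8 / packets_per_s
    total_bytes = payload_bytes + 12 + 8 + ip_header
    return total_bytes * 8 * packets_per_s / 1000.0

for name, rate in [("G.711", 64), ("G.729", 8)]:
    print(f"{name}: {voip_bandwidth_kbps(rate):.0f} kbps over IPv6")
# G.711: 88 kbps, G.729: 32 kbps -- header overhead dominates low-rate codecs.
```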
It is estimated that only 15 percent of Kenyans have made plans for retirement, and many people fall into poverty once they retire. A 2018 survey by the Unclaimed Property Asset Register found that insurance companies hold 25 percent of unclaimed funds, with 10 percent belonging to pensioners. This was attributed to a lack of effective information flow between insurance companies and their customers, and also between various departments within the insurance companies. Further, there were numerous cases of lost documents and files, and certain files were untraceable within the departments. This paper investigates ways in which mobile technology influences the dissemination of information for processing pension claims in the insurance industry. An improvement in the dissemination of information for processing pension claims can play a key role in increasing the percentage of Kenyans making plans for retirement. The study deployed a descriptive study design. The target population was 561 pensioners in Jubilee Insurance and 8 heads of departments (pensions business, finance, legal services, internal audit, operations, information and communication technology, actuary, and business development and strategy). The sample size was obtained using the Krejcie and Morgan formula for determining sample size; because of their small number, the heads of departments were not sampled. Through systematic sampling, a sample of 288 pensioners was selected from the list of pensioners in Jubilee Insurance. The findings led to the conclusion that the mobile application has a positive and significant association with the dissemination of information for pension-claims processing in Jubilee Insurance. It was further revealed that text messages have a positive and significant influence on the dissemination of information, as does unstructured supplementary service data (USSD). The study findings also revealed that voice calls have a positive and significant influence on the dissemination of information for pension-claims processing in Jubilee Insurance.
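The Krejcie and Morgan (1970) sample-size formula referenced here is n = χ²NP(1−P) / (e²(N−1) + χ²P(1−P)), with χ² = 3.841 (95% confidence, 1 degree of freedom), P = 0.5, and e = 0.05 as the conventional defaults. A direct implementation follows, applied to the stated population of 561 pensioners.

```python
import math

def krejcie_morgan(N, chi2=3.841, P=0.5, e=0.05):
    """Krejcie & Morgan (1970) sample size for a finite population N."""
    return math.ceil(chi2 * N * P * (1 - P) / (e**2 * (N - 1) + chi2 * P * (1 - P)))

print(krejcie_morgan(561))  # ~229 at the default 95% confidence level
```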
Because network space is becoming more limited, the implementation of ultra-dense networks (UDNs) has the potential to enhance not only network coverage but also network throughput. Unmanned Aerial Vehicle (UAV) communications have recently garnered considerable attention because they are extremely versatile and may be applied to a wide variety of contexts and purposes. In this article, a cognitive UAV is proposed as a solution for the wireless nodes of Internet of Things (IoT) ground terminals. In the IoT system, the UAV is utilised not only to determine how resources should be distributed but also to provide power to the wireless nodes. The quality of service (QoS) offered by the cognitive node is interpreted as a price-based utility function, formulated as a non-cooperative game in order to maximise customers' net utility functions. An energy-efficient non-cooperative game-theory power allocation with a pricing strategy, abbreviated EE-NGPAP, is implemented in this study with two trajectories, spiral and sigmoidal, to facilitate effective power management of IoT wireless nodes. It is also demonstrated, theoretically and through simulations, that the Nash equilibrium exists and is unique. Simulations show that the proposed energy-harvesting approach significantly reduces the average transmitted power, which agrees with the objectives of 5G networks. The advised method needs only roughly 4 iterations to converge to the Nash equilibrium (NE), which makes it easier to use in the real world, where conditions are not always the same. Funding: the authors are grateful to the Taif University Researchers Supporting Project number (TURSP-2020/36), Taif University, Taif, Saudi Arabia.
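A price-based non-cooperative power game of this general family can be solved by best-response iteration: each node repeatedly maximises its own net utility (here a log-utility minus a linear price on power) given the others' interference, until the power vector stops changing. The sketch below is a generic illustration of that fixed-point scheme, not EE-NGPAP itself; the gains, price, and noise values are made-up.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 4                               # IoT ground nodes served by the UAV
g = rng.uniform(0.5, 1.5, K)        # node-to-UAV channel gains
alpha = 0.1                         # residual cross-interference factor
c = 2.0                             # unit price on transmit power
noise, p_max = 0.05, 1.0

def best_response(p, i):
    """argmax over p_i of log(1 + g_i p_i / I_i) - c p_i, clipped to [0, p_max]."""
    interference = noise + alpha * (g @ p - g[i] * p[i])
    return float(np.clip(1.0 / c - interference / g[i], 0.0, p_max))

p = np.full(K, 0.5)
for sweep in range(100):            # Jacobi best-response iteration
    p_new = np.array([best_response(p, i) for i in range(K)])
    if np.max(np.abs(p_new - p)) < 1e-6:
        break                       # powers stopped moving: approximate NE
    p = p_new
print(f"approximate NE after {sweep} sweeps: {np.round(p, 3)}")
```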
A distributed denial-of-service (DDoS) attack is the most common attack that obstructs a network and makes it unavailable to legitimate users. We propose a deep neural network (DNN) model for the detection of DDoS attacks in the Software-Defined Networking (SDN) paradigm. SDN centralizes the control plane and separates it from the data plane, simplifying the network and eliminating vendor-specific device behaviour; because of this open nature and centralized control, however, SDN can easily become a victim of DDoS attacks. Our supervised Developed Deep Neural Network (DDNN) model classifies DDoS attack traffic and legitimate traffic, and takes a larger number of feature values than previously proposed machine learning (ML) models. The proposed model scans the data to find correlated features and delivers high-quality results, enhancing the security of SDN with better accuracy than previously proposed models. We chose the latest state-of-the-art dataset, which contains many novel attacks and overcomes the shortcomings and limitations of existing datasets. Our model achieves a high accuracy rate of 99.76% with a low false-positive rate and a low loss rate of 0.065%; the accuracy increases to 99.80% as the number of epochs is increased to 100. The proposed model classifies anomalous and normal traffic more accurately than previously proposed models, can handle huge amounts of structured and unstructured data, and can easily solve complex problems.
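As a shape for such a detector: a fully connected network over per-flow features with a sigmoid output for the attack/legitimate decision. The Keras sketch below illustrates that shape only; the layer sizes, feature count, and training settings are assumptions, since the abstract does not give the DDNN's exact architecture, and the data here are placeholders.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

N_FEATURES = 77        # assumed per-flow feature count (e.g., CIC-style flows)

# Binary classifier: attack (1) vs legitimate (0) traffic.
model = keras.Sequential([
    layers.Input(shape=(N_FEATURES,)),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.2),               # regularization against overfitting
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Placeholder arrays standing in for a labelled flow dataset.
X = np.random.rand(1000, N_FEATURES).astype("float32")
y = np.random.randint(0, 2, 1000)
model.fit(X, y, epochs=5, batch_size=64, validation_split=0.2, verbose=0)
```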
With the help of computer-aided diagnostic systems, cardiovascular diseases can be identified in a timely manner to minimize the mortality rate of patients suffering from cardiac disease. However, the early diagnosis of cardiac arrhythmia is one of the most challenging tasks, and the manual analysis of electrocardiogram (ECG) data with the help of a Holter monitor is laborious. Currently, the Convolutional Neural Network (CNN) is receiving considerable attention from researchers for automatically identifying ECG signals. This paper proposes a 9-layer CNN model to classify ECG signals into five primary categories according to the American National Standards Institute (ANSI) standards and the Association for the Advancement of Medical Instrumentation (AAMI). The Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) arrhythmia dataset is used for the experiments. The proposed model outperformed previous models in terms of accuracy, achieving a sensitivity of 99.0% and a positive predictivity of 99.2% in the detection of ventricular ectopic beats (VEBs), and likewise a sensitivity of 99.0% and a positive predictivity of 99.2% for the detection of supraventricular ectopic beats (SVEBs). The overall accuracy of the proposed model is 99.68%. Funding: supported by the Faculty of Computing and Informatics, University Malaysia Sabah, Jalan UMS, Kota Kinabalu, Sabah 88400, Malaysia.
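A beat classifier of this kind is typically a 1-D CNN over fixed-length beat segments with a five-way softmax over the AAMI classes (N, S, V, F, Q). The Keras sketch below illustrates that shape; the segment length, filter sizes, and exact nine-layer composition are assumptions, not the paper's published configuration.

```python
from tensorflow import keras
from tensorflow.keras import layers

SAMPLES = 260    # assumed beat window (~0.72 s at the MIT-BIH 360 Hz rate)

# 1-D CNN over single-lead ECG beats; 5 AAMI classes: N, S, V, F, Q.
model = keras.Sequential([
    layers.Input(shape=(SAMPLES, 1)),
    layers.Conv1D(16, 5, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(32, 5, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, 5, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(5, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```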
The Internet of Medical Things (IoMT) is mainly concerned with the efficient utilisation of wearable devices in the healthcare domain to manage various processes automatically, while machine learning approaches enable these smart systems to make informed decisions. Generally, broadcasting is used for the transmission of frames, and congestion, energy efficiency, and excessive load are among the common issues associated with existing approaches. In this paper, a machine learning-enabled shortest-path identification scheme is presented to ensure reliable transmission of frames with the minimum possible communication overhead in the IoMT network. For this purpose, the proposed scheme utilises a well-known technique, i.e., Kruskal's algorithm, to find an optimal path between source and destination wearable devices, and additional evaluation metrics are used to find a reliable and shortest possible communication path between the two interested parties. Apart from that, every device is bound to hold a supplementary path, preferably a second optimised path, for situations where the current communication path is no longer available, either due to device failure or heavy traffic. Furthermore, the machine learning approach enables these devices to update their routing tables simultaneously, and an optimal path can be replaced if a better one becomes available. The proposed mechanism has been tested in a smart environment developed for the healthcare domain using IoMT networks. Simulation results show that the proposed machine learning-oriented approach performs better than existing approaches, achieving the minimum ratios, i.e., 17% and 23%, in terms of end-to-end delay and packet losses, respectively. Moreover, the proposed scheme achieves an approximately 21% improvement in average throughput compared with existing schemes.
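Kruskal's algorithm itself builds a minimum spanning tree by scanning edges in order of increasing weight and joining components with a union-find structure; routes between any two devices then follow the unique tree path. A minimal sketch on a made-up IoMT link graph, with edge weights standing in for a combined link-cost metric:

```python
def kruskal(n, edges):
    """Minimum spanning tree over n nodes; edges = [(weight, u, v), ...]."""
    parent = list(range(n))

    def find(x):                       # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):      # lightest links first
        ru, rv = find(u), find(v)
        if ru != rv:                   # joining two components: no cycle
            parent[ru] = rv
            mst.append((u, v, w))
    return mst

# Hypothetical wearable-device graph; weights model delay/energy link cost.
links = [(4, 0, 1), (1, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3), (7, 3, 4), (6, 2, 4)]
print(kruskal(5, links))   # [(0, 2, 1), (1, 3, 2), (1, 2, 3), (2, 4, 6)]
```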
Visible light communication (VLC), a prominent emerging solution that complements radio frequency (RF) technology, exhibits the potential to meet the demands of fifth-generation (5G) and beyond technologies. The random movement of mobile terminals in the indoor environment is a challenge for VLC systems, and the optical attocell model plays a critical role in the uniform distribution and quality of communication links in terms of received power and signal-to-noise ratio (SNR). As such, the optical attocell positions were optimized in this study with a developed try-and-error (TE) algorithm, and the optimized attocells were examined and compared with previous models. This novel approach successfully increased the minimum received power from −1.29 to −0.225 dBm and enhanced SNR performance by 2.06 dB. The bit error rate (BER) was reduced to 4.42×10^(−8) and 6.63×10^(−14) by utilizing OOK-NRZ and BPSK modulation techniques, respectively. The optimized attocell positions displayed a more uniform distribution, as both received-power and SNR performance improved by 0.45 and 0.026, respectively. As the results of the proposed model are optimal, it is suitable for standard office and room model applications. Funding: supported by the grants named "Professional Development Research University Grant" (UTM Vot No. 05E69) and "TDR grant" (Vot No. 05G27).
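BER figures for OOK-NRZ and BPSK on a Gaussian channel follow the standard expressions BER_OOK = Q(√SNR) and BER_BPSK = Q(√(2·SNR)), where Q is the Gaussian tail function. The helper below evaluates both; the example SNR is illustrative, not a value from the paper.

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def ber(snr_db):
    """Return (BER_OOK, BER_BPSK) for an electrical SNR given in dB."""
    snr = 10 ** (snr_db / 10)
    return q_func(math.sqrt(snr)), q_func(math.sqrt(2 * snr))

ook, bpsk = ber(16.0)   # e.g., a 16 dB electrical SNR
print(f"OOK-NRZ: {ook:.2e}, BPSK: {bpsk:.2e}")   # BPSK gains ~3 dB over OOK
```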
In this paper, the application of transportation systems under real-time traffic conditions is evaluated with data-handling representations. The proposed method is designed to detect the number of loads present in a vehicle, with functionality tasks computed in the system. Compared to the existing approach, the design model in the proposed method divides the computing areas into several cluster regions, thereby reducing the complexity of the monitoring system and minimizing control errors. Furthermore, a route-management technique is combined with an Artificial Intelligence (AI) algorithm to transmit the data to the appropriate central servers. The combined objectives are therefore examined under both minimization and maximization criteria, increasing the efficiency of the proposed method. Finally, four scenarios are chosen to investigate the projected design's effectiveness. Across all simulated metrics, the proposed approach provides better operational outcomes, averaging 97 percent, thereby reducing the amount of traffic under real-time conditions. Funding: funded by the Research Management Centre (RMC), Universiti Malaysia Sabah, through the Journal Article Fund UMS/PPI-DPJ1.
In software-defined networks (SDNs), controller placement is a critical factor in the design and planning of future Internet of Things (IoT), telecommunication, and satellite communication systems. Existing research has concentrated largely on factors such as reliability, latency, controller capacity, propagation delay, and energy consumption. However, SDNs are vulnerable to distributed denial-of-service (DDoS) attacks that interfere with legitimate use of the network, and the ever-increasing frequency of DDoS attacks has made it necessary to consider them in network design, especially for critical applications such as military, health care, and financial-services networks requiring high availability. We propose a mathematical model for planning the deployment of SDN smart backup controllers (SBCs) to preserve service in the presence of DDoS attacks. Given a number of input parameters, our model has two distinct capabilities: first, it determines the optimal number of primary controllers to place at specific locations or nodes under normal operating conditions; second, it recommends an optimal number of smart backup controllers for use under different levels of DDoS attack. The goal of the model is to improve resistance to DDoS attacks while optimizing the overall cost based on the parameters. Our simulation results demonstrate that the model is useful in planning for SDN reliability in the presence of DDoS attacks while managing the overall cost. Funding: this research work was funded by TMR&D Sdn Bhd under project code RDTC160902.
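Controller-placement models of this family are usually written as small integer programs: binary variables select nodes for primary and backup controllers, coverage constraints keep every switch served, and the objective is total deployment cost. The PuLP sketch below shows only that generic shape; the abstract does not disclose the paper's actual formulation, so the cost figures, coverage sets, and survivability constraint here are invented.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, PULP_CBC_CMD

nodes = range(5)
cost_primary, cost_backup = 10, 4
# covers[i] = switches that a controller at node i can serve within its
# latency budget (made-up adjacency for illustration).
covers = {0: {0, 1}, 1: {0, 1, 2}, 2: {1, 2, 3}, 3: {2, 3, 4}, 4: {3, 4}}

prob = LpProblem("sbc_placement", LpMinimize)
x = LpVariable.dicts("primary", nodes, cat=LpBinary)
y = LpVariable.dicts("backup", nodes, cat=LpBinary)
prob += lpSum(cost_primary * x[i] + cost_backup * y[i] for i in nodes)

for s in nodes:
    # Every switch needs a primary in range, plus a second controller
    # (primary or backup) so one controller lost to DDoS is survivable.
    prob += lpSum(x[i] for i in nodes if s in covers[i]) >= 1
    prob += lpSum(x[i] + y[i] for i in nodes if s in covers[i]) >= 2

prob.solve(PULP_CBC_CMD(msg=False))
print([i for i in nodes if x[i].value() == 1],   # primary controller sites
      [i for i in nodes if y[i].value() == 1])   # smart backup sites
```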