Journal Articles
112 articles found.
1. Automatic Rule Discovery for Data Transformation Using Fusion of Diversified Feature Formats
Authors: G. Sunil Santhosh Kumar, M. Rudra Kumar. Computers, Materials & Continua (SCIE, EI), 2024, No. 7, pp. 695-713.
Abstract: This article presents an innovative approach to automatic rule discovery for data transformation tasks leveraging XGBoost, a machine learning algorithm renowned for its efficiency and performance. The proposed framework utilizes the fusion of diversified feature formats, specifically metadata, textual, and pattern features. The goal is to enhance the system's ability to discern and generalize transformation rules from source to destination formats in varied contexts. First, the article delves into the methodology for extracting these distinct features from raw data and the pre-processing steps undertaken to prepare the data for the model. Subsequent sections expound on the mechanism of feature optimization using Recursive Feature Elimination (RFE) with linear regression, aiming to retain the most contributive features and eliminate redundant or less significant ones. The core of the research revolves around the deployment of the XGBoost model for training, using the prepared and optimized feature sets. The article presents a detailed overview of the mathematical model and algorithmic steps behind this procedure. Finally, the process of rule discovery (the prediction phase) by the trained XGBoost model is explained, underscoring its role in real-time, automated data transformations. By employing machine learning, and particularly the XGBoost model, in the context of Business Rule Engine (BRE) data transformation, the article underscores a paradigm shift towards more scalable, efficient, and less human-dependent data transformation systems. This research opens doors for further exploration into automated rule discovery systems and their applications in various sectors.
Keywords: XGBoost; business rule engine; machine learning; categorical query language; humanitarian computing environment
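A minimal sketch of the pipeline this abstract describes, assuming scikit-learn and xgboost; the data, feature counts, and hyperparameters below are invented placeholders, not the paper's:

```python
# Hypothetical sketch: RFE feature selection with a linear-regression
# estimator, followed by XGBoost training, as outlined in the abstract.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 30))      # fused metadata/textual/pattern features
y = rng.integers(0, 4, size=500)    # transformation-rule labels (synthetic)

# Keep the most contributive features, drop redundant ones.
selector = RFE(LinearRegression(), n_features_to_select=15).fit(X, y)

model = XGBClassifier(n_estimators=200, max_depth=6, learning_rate=0.1)
model.fit(selector.transform(X), y)

# "Rule discovery" = predicting the transformation rule for new records.
print(model.predict(selector.transform(X[:5])))
```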
2. Fuzzy inference system using genetic algorithm and pattern search for predicting roof fall rate in underground coal mines
Authors: Ayush Sahu, Satish Sinha, Haider Banka. International Journal of Coal Science & Technology (EI, CAS, CSCD), 2024, No. 1, pp. 31-41.
Abstract: One of the most dangerous safety hazards in underground coal mines is roof falls during retreat mining. Roof falls may cause life-threatening and non-fatal injuries to miners and impede mining and transportation operations. As a result, a reliable roof fall prediction model is essential to tackle such challenges. Different parameters that substantially impact roof falls are ill-defined and intangible, making this an uncertain and challenging research issue. The National Institute for Occupational Safety and Health assembled a national database of roof performance from 37 coal mines to explore the factors contributing to roof falls. Data acquired for the 37 mines are limited due to several restrictions, which increases the likelihood of incompleteness. Fuzzy logic is a technique for coping with ambiguity, incompleteness, and uncertainty. Therefore, this paper presents a fuzzy inference method that employs a genetic algorithm to create fuzzy rules based on 109 records of roof fall data and pattern search to refine the membership functions of the parameters. The performance of the deployed model is evaluated using statistical measures such as the Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and coefficient of determination (R²). Based on these criteria, the suggested model outperforms the existing models in precisely predicting roof fall rates while using fewer fuzzy rules.
Keywords: Underground coal mining; Roof fall; Fuzzy logic; Genetic algorithm
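The GA/pattern-search tuning itself is not reproduced here; the sketch below only shows, with scikit-learn and placeholder numbers, the three evaluation metrics the abstract names:

```python
# Illustrative computation of RMSE, MAE, and R^2; the prediction values are
# invented stand-ins for the fuzzy model's outputs.
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

y_true = np.array([2.1, 0.5, 3.3, 1.8, 0.9])   # observed roof fall rates
y_pred = np.array([1.9, 0.7, 3.0, 2.0, 1.1])   # fuzzy-model predictions

rmse = mean_squared_error(y_true, y_pred) ** 0.5
mae = mean_absolute_error(y_true, y_pred)
r2 = r2_score(y_true, y_pred)
print(f"RMSE={rmse:.3f}  MAE={mae:.3f}  R2={r2:.3f}")
```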
3. A Comprehensive Image Processing Framework for Early Diagnosis of Diabetic Retinopathy
Authors: Kusum Yadav, Yasser Alharbi, Eissa Jaber Alreshidi, Abdulrahman Alreshidi, Anuj Kumar Jain, Anurag Jain, Kamal Kumar, Sachin Sharma, Brij B. Gupta. Computers, Materials & Continua (SCIE, EI), 2024, No. 11, pp. 2665-2683.
Abstract: In today's world, image processing techniques play a crucial role in the prognosis and diagnosis of various diseases, owing to the development of several precise and accurate methods for medical images. Automated analysis of medical images is essential for doctors, as manual investigation often leads to inter-observer variability. This research aims to enhance healthcare by enabling the early detection of diabetic retinopathy through an efficient image processing framework. The proposed hybridized method combines the Modified Inertia Weight Particle Swarm Optimization (MIWPSO) and Fuzzy C-Means clustering (FCM) algorithms. Traditional FCM does not incorporate spatial neighborhood features, making it highly sensitive to noise, which significantly affects segmentation output. Our method incorporates a modified FCM that includes spatial functions in the fuzzy membership matrix to eliminate noise. The results demonstrate that the proposed FCM-MIWPSO method achieves highly precise and accurate medical image segmentation. Furthermore, segmented images are classified as benign or malignant using the Decision Tree-Based Temporal Association Rule (DT-TAR) algorithm. Comparative analysis with existing state-of-the-art models indicates that the proposed FCM-MIWPSO segmentation technique achieves a remarkable accuracy of 98.42% on the dataset, highlighting its significant impact on improving diagnostic capabilities in medical imaging.
Keywords: Image processing; biological data; PSO; Fuzzy C-Means (FCM)
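A toy sketch of the idea behind spatially constrained FCM: the standard membership update plus a simple neighbourhood-smoothing term that damps noisy pixels. The MIWPSO-optimised variant from the paper is not reproduced; everything below is illustrative:

```python
# Standard FCM memberships on 1-D intensities, then a spatial function that
# averages memberships over a sliding neighbourhood before renormalising.
import numpy as np

def fcm_memberships(x, centers, m=2.0):
    """Standard FCM membership of 1-D intensities x to cluster centers."""
    d = np.abs(x[:, None] - centers[None, :]) + 1e-9      # distances
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1))
    return 1.0 / ratio.sum(axis=2)

intensities = np.array([0.1, 0.12, 0.8, 0.85, 0.5])
centers = np.array([0.1, 0.8])
u = fcm_memberships(intensities, centers)

# Spatial function: neighbourhood-averaged membership damps isolated noise.
kernel = np.ones(3) / 3.0
u_spatial = np.column_stack([np.convolve(u[:, j], kernel, mode="same")
                             for j in range(u.shape[1])])
u_final = (u * u_spatial) / (u * u_spatial).sum(axis=1, keepdims=True)
print(u_final.round(3))
```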
4. A Model of Cloud-Based Enterprise Resource Planning (ERP) for Small and Medium Enterprise
Authors: Mohammad Jahangir Alam, Shoman Das Sharma, Tanjia Chowdhury. Journal of Computer and Communications, 2024, No. 10, pp. 37-50.
Abstract: Cloud computing is an emerging technology in the rapidly growing IT world. Adoption of cloud computing is increasing rapidly, from very large business organizations to small institutions, due to its many advanced features, such as the SaaS, PaaS, and IaaS service models. Consequently, many organizations are now trying to implement cloud-based ERP systems to enjoy the benefits of cloud computing. An organization usually faces many challenges when implementing any ERP system. This research therefore shows how readily such a cloud system can be implemented in an organization. By using this ERP system, an organization can benefit in many ways; Small and Medium Enterprises (SMEs) in particular can enjoy the highest possible benefits from this system.
Keywords: Cloud Computing; Cloud Comparison; Cloud Models; Enterprise Resource Planning (ERP); Small and Medium Enterprise (SME)
5. Logistic Regression Trust–A Trust Model for Internet-of-Things Using Regression Analysis (Cited: 1)
Authors: Feslin Anish Mon Solomon, Godfrey Winster Sathianesan, R. Ramesh. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 2, pp. 1125-1142.
Abstract: The Internet of Things (IoT) is a popular social network in which devices are virtually connected for communicating and sharing information. It is applied widely in business enterprises and government sectors for delivering services to customers, clients, and citizens. However, the interaction is successful only on the basis of the trust that each device has in the others; trust is therefore essential for a social network. As the Internet of Things has access to sensitive information, it is exposed to many threats that put data management at risk. This issue is addressed by trust management, which helps to decide the trustworthiness of requestor and provider before communication and sharing take place. Several trust-based systems exist for different domains, using dynamic weight methods, fuzzy classification, Bayes inference, and, in very few cases for IoT, regression analysis. The proposed algorithm is based on logistic regression, which provides a strong statistical background for trust prediction. To strengthen the case for regression-based trust, we compared its performance with an equivalent Bayes analysis using the Beta distribution. The performance is studied in a simulated IoT setup with Quality of Service (QoS) and social parameters for the nodes, and the proposed model performs better in terms of various metrics. An IoT connects heterogeneous devices such as tags and sensor devices for sharing information and availing different application services. The most salient features of an IoT system are scalability, extendibility, compatibility, and resiliency against attack. In addition to these features, the existing work finds a way to integrate direct and indirect trust to converge quickly and to estimate the bias due to attacks.
Keywords: LRTrust; logistic regression; trust management; internet of things
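A minimal sketch of logistic-regression-based trust scoring in the spirit of the abstract, using scikit-learn; the feature columns and values are invented placeholders, not the paper's QoS/social parameters:

```python
# Hypothetical trust predictor: map node observations to a trust probability.
import numpy as np
from sklearn.linear_model import LogisticRegression

# columns: packet delivery ratio, response delay, social similarity
X = np.array([[0.95, 0.1, 0.8],
              [0.40, 0.7, 0.2],
              [0.90, 0.2, 0.6],
              [0.30, 0.9, 0.1]])
y = np.array([1, 0, 1, 0])               # 1 = trustworthy (synthetic labels)

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba([[0.85, 0.15, 0.7]])[0, 1])  # trust score of requester
```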
6. A Review and Analysis of Localization Techniques in Underwater Wireless Sensor Networks (Cited: 1)
Authors: Seema Rani, Anju, Anupma Sangwan, Krishna Kumar, Kashif Nisar, Tariq Rahim Soomro, Ag. Asri Ag. Ibrahim, Manoj Gupta, Laxmi Chand, Sadiq Ali Khan. Computers, Materials & Continua (SCIE, EI), 2023, No. 6, pp. 5697-5715.
Abstract: In recent years, there has been rapid growth in Underwater Wireless Sensor Networks (UWSNs). Research in this area now focuses on solving the problems associated with large-scale UWSNs. One of the major issues in such a network is the localization of underwater nodes. Localization is required for tracking objects and detecting targets. It can also be regarded as the tagging of data: sensed content is of no use to an application until its position is confirmed. The major goal of this article is to review and analyze underwater node localization in order to address the localization issues in UWSNs. The paper describes various existing localization schemes and broadly categorizes them as centralized and distributed underwater localization schemes, along with a detailed subdivision of these schemes. Further, these localization schemes are compared from different perspectives, and a detailed analysis in terms of certain performance metrics is discussed. At the end, the paper addresses several future directions for potential research on improving localization in UWSNs.
Keywords: Underwater wireless sensor networks; localization schemes; node localization; ranging algorithms; estimation based; prediction based
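The survey covers range-based schemes; as a concrete illustration, here is a minimal least-squares trilateration from anchor nodes (2-D for brevity, whereas UWSN work is usually 3-D; all coordinates and ranges are invented):

```python
# Linearise ||p - a_i||^2 = r_i^2 against the last anchor and solve Ax = b.
import numpy as np

anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
ranges = np.array([70.7, 70.7, 70.7])   # measured distances to the node

A = 2 * (anchors[:-1] - anchors[-1])
b = (ranges[-1] ** 2 - ranges[:-1] ** 2
     + np.sum(anchors[:-1] ** 2, axis=1) - np.sum(anchors[-1] ** 2))
pos, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pos)    # ~ [50, 50]
```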
7. A Robust Automated Framework for Classification of CT Covid-19 Images Using MSI-ResNet (Cited: 1)
Authors: Aghila Rajagopal, Sultan Ahmad, Sudan Jha, Ramachandran Alagarsamy, Abdullah Alharbi, Bader Alouffi. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 6, pp. 3215-3229.
Abstract: Nowadays, the COVID-19 virus disease is spreading rampantly. Some testing tools and kits are available for diagnosing the virus, but only in limited numbers. Automated COVID-19 diagnosis techniques are needed to diagnose the presence of the disease from radiological images. Previous research has focused on enhancing AI (Artificial Intelligence) approaches that use X-ray images for detecting COVID-19. The most common symptoms of COVID-19 are fever, dry cough, and sore throat, and these symptoms may progress to a severe form of pneumonia. Since medical imaging has recently not been recommended in Canada for primary COVID-19 diagnosis, computer-aided systems are implemented for the early identification of COVID-19, which aids in monitoring disease progression and thus decreases the death rate. Here, a deep learning-based automated method for feature extraction and classification is developed for the detection of COVID-19 from computer tomography (CT) images. The suggested method comprises three main processes: data preprocessing, feature extraction, and classification. The approach integrates the union of deep features obtained with the Inception 14 and VGG-16 models. Finally, a Multi-scale Improved ResNet (MSI-ResNet) classifier is developed to detect and classify the CT images into unique class labels. The experimental validation of the suggested method is carried out on an available open-source COVID-CT dataset consisting of 760 CT images. The results reveal that the proposed approach offers greater performance, with high specificity, accuracy, and sensitivity.
Keywords: Covid-19; CT images; multi-scale improved ResNet; AI; inception 14 and VGG-16 models
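A hypothetical sketch of the deep-feature-fusion step using PyTorch/torchvision: features from two CNN backbones are concatenated before a classifier head. InceptionV3 stands in for the paper's "Inception 14" (which we cannot verify), and MSI-ResNet is reduced to a single linear layer here:

```python
# Illustrative fusion of features from two backbones, then a classifier head.
import torch
import torchvision.models as models

vgg = models.vgg16(weights=None).eval()
inception = models.inception_v3(weights=None).eval()  # eval: tensor output

x = torch.randn(1, 3, 299, 299)          # one preprocessed CT slice
with torch.no_grad():
    f1 = vgg.features(x).flatten(1)      # VGG-16 convolutional features
    f2 = inception(x)                    # Inception output as feature proxy
fused = torch.cat([f1, f2], dim=1)       # union of deep features

head = torch.nn.Linear(fused.shape[1], 2)  # COVID vs. non-COVID
print(head(fused).shape)                   # torch.Size([1, 2])
```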
8. Qualitative Abnormalities of Peripheral Blood Smear Images Using Deep Learning Techniques
Authors: G. Arutperumjothi, K. Suganya Devi, C. Rani, P. Srinivasan. Intelligent Automation & Soft Computing (SCIE), 2023, No. 1, pp. 1069-1086.
Abstract: In recent years, the peripheral blood smear has been a generic analysis for assessing a person's health status. Manual testing of peripheral blood smear images is difficult and time-consuming and is subject to human intervention and visual error. This has encouraged researchers to present algorithms and techniques that perform peripheral blood smear analysis with the help of computer-assisted and decision-making techniques. Existing CAD-based methods fall short of accurately detecting the abnormalities present in the images. To mitigate this issue, a Deep Convolution Neural Network (DCNN)-based automatic classification technique is introduced for classifying groups of peripheral blood cells such as basophils, eosinophils, lymphocytes, monocytes, neutrophils, erythroblasts, platelets, myocytes, promyocytes, and metamyocytes. The proposed DCNN model employs a transfer learning approach and carries three stages: pre-processing, feature extraction, and classification. Initially, pre-processing steps eliminate noisy content from the image using Histogram Equalization (HE), which is included to improve image contrast. To distinguish the dissimilar classes, a segmentation approach is carried out with the help of the Fuzzy C-Means (FCM) model, whose centroid points are optimized with a Salp Swarm based optimization strategy. Moreover, a specific set of Gray Level Co-occurrence Matrix (GLCM) features of the segmented images is extracted to augment the performance of the proposed detection algorithm. Finally, the extracted features are passed to the DCNN, and the proposed classifier is also capable of extracting its own features. On this basis, the diverse classes are classified and distinguished according to the qualitative abnormalities found in the image.
Keywords: Peripheral blood smear; DCNN classifier; pre-processing; segmentation; feature extraction; salp swarm optimization; classification
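A minimal sketch of two stages the abstract names, histogram equalisation and GLCM texture features, using scikit-image; a random array stands in for a stained blood-smear patch:

```python
# Histogram equalisation for contrast, then GLCM texture descriptors.
import numpy as np
from skimage import exposure
from skimage.feature import graycomatrix, graycoprops

img = (np.random.rand(64, 64) * 255).astype(np.uint8)   # placeholder patch
eq = exposure.equalize_hist(img)                         # HE step
eq8 = (eq * 255).astype(np.uint8)

glcm = graycomatrix(eq8, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)
features = {p: graycoprops(glcm, p)[0, 0]
            for p in ("contrast", "homogeneity", "energy", "correlation")}
print(features)
```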
9. Moth Flame Optimization Based FCNN for Prediction of Bugs in Software
Authors: C. Anjali, Julia Punitha Malar Dhas, J. Amar Pratap Singh. Intelligent Automation & Soft Computing (SCIE), 2023, No. 5, pp. 1241-1256.
Abstract: Software engineering techniques make it possible to create high-quality software. One of the most significant qualities of good software is that it is devoid of bugs, and one of the most time-consuming and costly software procedures is finding and fixing bugs. Although it is impossible to eradicate all bugs, it is feasible to reduce their number and their negative effects. To broaden the scope of bug prediction techniques and increase software quality, numerous causes of software problems must be identified and successful bug prediction models implemented. This study employs a hybrid of a Faster Convolution Neural Network and the Moth Flame Optimization (MFO) algorithm to forecast the number of bugs in software based on the program data itself, such as the number of lines of code, method characteristics, and other essential software aspects. Here, the MFO method is used to train the neural network by identifying optimal weights. The proposed MFO-FCNN technique is compared with existing machine learning (ML) techniques such as AdaBoost (AB), Random Forest (RF), K-Nearest Neighbour (KNN), K-Means Clustering (KMC), Support Vector Machine (SVM), and Bagging Classifier (BC). The assessment revealed that machine learning techniques can be employed successfully with a high level of accuracy, and the obtained data show that the proposed strategy outperforms the traditional approaches.
Keywords: Faster convolution neural network; Moth Flame Optimization (MFO); Support Vector Machine (SVM); AdaBoost (AB); software bug prediction
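A simplified sketch of the MFO search loop (flame-count reduction and other refinements of the full algorithm are omitted); the fitness is a toy quadratic standing in for the network's training error:

```python
# Toy Moth Flame Optimization: moths spiral around the best solutions
# ("flames") found so far; here candidates are 5-D weight vectors.
import numpy as np

rng = np.random.default_rng(1)
fitness = lambda w: np.sum(w ** 2)          # placeholder for network loss

moths = rng.uniform(-1, 1, size=(10, 5))    # candidate weight vectors
for it in range(50):
    order = np.argsort([fitness(m) for m in moths])
    flames = moths[order]                   # best solutions become flames
    b = 1.0                                 # spiral shape constant
    t = rng.uniform(-1, 1, size=moths.shape)
    D = np.abs(flames - moths)              # distance moth-to-flame
    moths = D * np.exp(b * t) * np.cos(2 * np.pi * t) + flames
print(min(fitness(m) for m in moths))
```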
10. Computation of PoA for Selfish Node Detection and Resource Allocation Using Game Theory
Authors: S. Kanmani, M. Murali. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 11, pp. 2583-2598.
Abstract: The introduction of new technologies has increased communication network coverage and the number of associating nodes in dynamic communication networks (DCN). Because such networks are decentralized and dynamic, a few nodes may not associate with other nodes. These uncooperative nodes, also known as selfish nodes, degrade the performance of the cooperative nodes: they cause congestion, high delay, security concerns, and resource depletion. This study presents an effective selfish node detection method to address these problems. The Price of Anarchy (PoA) and the Price of Stability (PoS) in game theory, in the presence of a Nash Equilibrium (NE), are discussed for selfish node detection; this is a novel experiment in detecting selfish nodes in a network using PoA. Moreover, a least-response-dynamic-based Capacitated Selfish Resource Allocation (CSRA) game is introduced to improve resource usage among the nodes. The suggested strategy is simulated using the Solar Winds simulator, and the simulation results show that, compared with earlier methods, the new scheme offers promising performance in terms of delivery rate, delay, and throughput.
Keywords: Dynamic communication network (DCN); price of anarchy (PoA); nash equilibrium (NE); capacitated selfish resource allocation (CSRA) game; game theory; price of stability (PoS)
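For readers unfamiliar with the two ratios, a toy computation with made-up cost values (PoA compares the worst Nash equilibrium with the social optimum; PoS compares the best one):

```python
# Illustrative PoA / PoS arithmetic; all cost values are invented.
equilibrium_costs = [14.0, 16.0, 12.5]   # total cost at each Nash equilibrium
optimal_cost = 10.0                      # socially optimal total cost

poa = max(equilibrium_costs) / optimal_cost   # Price of Anarchy
pos = min(equilibrium_costs) / optimal_cost   # Price of Stability
print(f"PoA={poa:.2f}  PoS={pos:.2f}")
```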
11. Nonlinear Dynamic System Identification of ARX Model for Speech Signal Identification
Authors: Rakesh Kumar Pattanaik, Mihir N. Mohanty, Srikanta Ku. Mohapatra, Binod Ku. Pattanayak. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 7, pp. 195-208.
Abstract: System identification is crucial in the field of nonlinear and dynamic, i.e., practical, systems. As most practical systems lack prior information about their behaviour, mathematical modelling is required. The authors propose a stacked Bidirectional Long-Short Term Memory (Bi-LSTM) model to handle the problem of nonlinear dynamic system identification in this paper. The proposed model offers faster learning and more accurate modelling, as it can be trained in both forward and backward directions. The main advantage of Bi-LSTM over other algorithms is that it processes inputs in two ways: one from the past to the future, and the other from the future to the past. In the proposed model, a backward-running Long-Short Term Memory (LSTM) stores information from the future, and applying the two hidden states together allows information from both the past and the future to be stored at any moment in time. The proposed model is tested on a recorded speech signal to prove its superiority, with performance evaluated through the Mean Square Error (MSE) and Root Mean Square Error (RMSE). The RMSE and MSE obtained by the proposed model are found to be 0.0218 and 0.0162, respectively, for 500 epochs. The comparison of results and further analysis illustrate that the proposed model achieves better performance than other models, with higher prediction accuracy and faster convergence speed.
Keywords: Nonlinear dynamic system identification; long-short term memory; bidirectional-long-short term memory; auto-regressive with exogenous
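A minimal Keras sketch of a stacked Bi-LSTM doing one-step-ahead prediction of a scalar signal, mirroring the setup described above; layer sizes, window length, and epochs are illustrative, not the paper's, and a noisy sine stands in for speech:

```python
# Stacked bidirectional LSTM for one-step-ahead signal prediction.
import numpy as np
import tensorflow as tf

t = np.linspace(0, 20, 1000)
sig = np.sin(t) + 0.1 * np.random.randn(1000)      # stand-in for speech
win = 10
X = np.stack([sig[i:i + win] for i in range(len(sig) - win)])[..., None]
y = sig[win:]

model = tf.keras.Sequential([
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences=True),
                                  input_shape=(win, 1)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(16)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, verbose=0)
mse = model.evaluate(X, y, verbose=0)
print(f"MSE={mse:.4f}  RMSE={mse ** 0.5:.4f}")
```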
12. Implementation of VLSI on Signal Processing-Based Digital Architecture Using AES Algorithm
Authors: Mohanapriya Marimuthu, Santhosh Rajendran, Reshma Radhakrishnan, Kalpana Rengarajan, Shahzada Khurram, Shafiq Ahmad, Abdelaty Edrees Sayed, Muhammad Shafiq. Computers, Materials & Continua (SCIE, EI), 2023, No. 3, pp. 4729-4745.
Abstract: Continuous improvements in very-large-scale integration (VLSI) technology and design software have significantly broadened the scope of digital signal processing (DSP) applications. The use of application-specific integrated circuits (ASICs) and programmable digital signal processors for many DSP applications has changed, even as new system implementations based on reconfigurable computing become more complex. Adaptable platforms that efficiently combine hardware and software programmability are rapidly maturing with discrete wavelet transformation (DWT) and sophisticated computerized design techniques, which are much needed in today's modern world. New research and commercial efforts to sustain power optimization, cost savings, and improved runtime effectiveness have been initiated as initial reconfigurable technologies have emerged. Hence, this paper proposes implementing the DWT method on a field-programmable gate array in a digital architecture (FPGA-DA). We examined the effects of quantization on DWT performance in classification problems to demonstrate its reliability with respect to fixed-point math implementations. The Advanced Encryption Standard (AES) algorithm for DWT learning used in this architecture is less sensitive to resampling errors than the previously proposed solution in the literature using the artificial neural network (ANN) method. By reducing hardware area by 57%, the proposed system achieves a higher throughput rate of 88.72% and a reliability of 95.5% compared with other standard methods.
Keywords: VLSI; AES; discrete wavelet transformation; signal processing
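A one-level DWT is the signal-processing core the paper maps onto FPGA hardware; the sketch below uses PyWavelets purely to illustrate the mathematics, not the fixed-point FPGA implementation:

```python
# One-level Haar DWT: low-pass/high-pass subbands, then perfect reconstruction.
import numpy as np
import pywt

signal = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.05 * np.random.randn(256)
approx, detail = pywt.dwt(signal, "haar")    # each half the input length
print(approx.shape, detail.shape)
reconstructed = pywt.idwt(approx, detail, "haar")
print(np.allclose(signal, reconstructed))    # True: lossless analysis/synthesis
```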
13. Homogeneous Batch Memory Deduplication Using Clustering of Virtual Machines
Authors: N. Jagadeeswari, V. Mohan Raj. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 1, pp. 929-943.
Abstract: Virtualization is the backbone of cloud computing, which is a developing and widely used paradigm. By finding and merging identical memory pages, memory deduplication improves memory efficiency in virtualized systems. Kernel Same Page Merging (KSM) is a Linux service for sharing memory pages in virtualized environments. Memory deduplication is vulnerable to memory disclosure attacks, which use covert channels to reveal the contents of other co-located virtual machines. To avoid such attacks, sharing of identical pages within a single user's virtual machine is permitted, but sharing of contents between different users is forbidden. In our proposed approach, virtual machines with similar operating systems among the active domains on a node are recognised and organised into a homogeneous batch, and memory deduplication is performed within that batch to improve page-sharing efficiency. Compared with memory deduplication applied to the entire host, the implementation shows a significant increase in the number of pages shared when deduplication is applied batch-wise, although CPU (central processing unit) consumption also increases.
Keywords: Kernel same page merging; memory deduplication; virtual machine sharing; content-based sharing
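A toy illustration of content-based page sharing within one homogeneous batch: pages are grouped by a content hash and duplicates identified. KSM does this in-kernel; this only mimics the bookkeeping with invented page contents:

```python
# Group VM memory pages by content hash; duplicates are merge candidates.
import hashlib
from collections import defaultdict

batch = {                      # VM id -> its memory pages (same-OS batch)
    "vm1": [b"kernel_page_A", b"zero" * 1024, b"app_page_X"],
    "vm2": [b"kernel_page_A", b"zero" * 1024, b"app_page_Y"],
}

pages = defaultdict(list)
for vm, vm_pages in batch.items():
    for i, page in enumerate(vm_pages):
        pages[hashlib.sha256(page).hexdigest()].append((vm, i))

shared = {h: refs for h, refs in pages.items() if len(refs) > 1}
print(f"{len(shared)} page(s) mergeable across the batch:", list(shared.values()))
```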
14. Energy efficient indoor localisation for narrowband internet of things
Authors: Ismail Keshta, Mukesh Soni, Mohammed Wasim Bhatt, Azeem Irshad, Ali Rizwan, Shakir Khan, Renato R. Maaliw III, Arsalan Muhammad Soomar, Mohammad Shabaz. CAAI Transactions on Intelligence Technology (SCIE, EI), 2023, No. 4, pp. 1150-1163.
Abstract: An increasing number of Narrow Band IoT devices are being manufactured as the technology behind them develops quickly. The high co-channel interference and signal attenuation experienced by edge Narrow Band IoT devices make it challenging to guarantee their service quality. To maximise the data rate fairness of Narrow Band IoT devices, a multi-dimensional indoor localisation model is devised, consisting of transmission power, data scheduling, and time slot scheduling, based on a network model that employs non-orthogonal multiple access (NOMA) via a relay. Based on this network model, the authors first establish the optimisation goal of data rate proportional fairness across Narrow Band IoT devices, taking into account the minimum data rate, device energy constraints, the EH relay's energy and data buffer constraints, data scheduling, and time slot scheduling. As a result, each Narrow Band IoT device's data rate needs are met while the network's overall performance is optimised. We investigate the model's convexity and offer an algorithm for optimising the distribution of multiple resources using the KKT conditions. The current work primarily considers the NOMA Narrow Band IoT network under a single EH relay; however, the growth of Narrow Band IoT devices also leads to a rise in co-channel interference, which limits NOMA's performance enhancement. The proposed approach is successfully demonstrated through simulation: it boosts the network's energy efficiency by 44.1%, data rate proportional fairness by 11.9%, and spectrum efficiency by 55.4%.
Keywords: artificial intelligence; detection of moving objects; internet of things
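A toy convex version of the fairness objective the abstract describes: maximise the sum of log rates (proportional fairness) under a total power budget, the kind of problem a KKT-based derivation solves. The channel gains, budget, and the linear rate proxy are all invented for illustration, and the relay/buffer constraints are omitted:

```python
# Log-utility (proportionally fair) power allocation with CVXPY.
import numpy as np
import cvxpy as cp

gains = np.array([1.0, 0.5, 0.2])        # per-device channel gains
budget = 10.0                            # total transmit-power budget
p = cp.Variable(3, nonneg=True)          # transmit powers to allocate

rates = cp.multiply(gains, p)            # linear rate proxy for brevity
problem = cp.Problem(cp.Maximize(cp.sum(cp.log(rates))),
                     [cp.sum(p) <= budget])
problem.solve()
print(np.round(p.value, 3))              # log utility splits power equally
```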
15. Hybrid XGBoost model with hyperparameter tuning for prediction of liver disease with better accuracy (Cited: 1)
Authors: Surjeet Dalal, Edeh Michael Onyema, Amit Malik. World Journal of Gastroenterology (SCIE, CAS), 2022, No. 46, pp. 6551-6563.
Abstract: BACKGROUND: Liver disease denotes any pathology that can harm or destroy the liver or prevent it from functioning normally. The global community has recently witnessed an increase in the mortality rate due to liver disease, which could be attributed to many factors, among them human habits, awareness issues, poor healthcare, and late detection. To curb the growing threat of liver disease, early detection is critical to help reduce the risks and improve treatment outcomes. Emerging technologies such as machine learning, as shown in this study, could be deployed to assist in enhancing its prediction and treatment. AIM: To present a more efficient system for timely prediction of liver disease using a hybrid eXtreme Gradient Boosting model with hyperparameter tuning, with a view to assisting in early detection and diagnosis and reducing the risks and mortality associated with the disease. METHODS: The dataset used in this study consisted of 416 people with liver problems and 167 without such a history. The data were collected from the state of Andhra Pradesh, India, through https://www.kaggle.com/datasets/uciml/indian-liver-patientrecords. The population was divided into two sets depending on the disease state of the patient, and this binary information was recorded in the attribute "is_patient". RESULTS: The results indicated that the chi-square automated interaction detection and classification and regression tree models achieved accuracy levels of 71.36% and 73.24%, respectively, which was much better than the conventional method. The proposed solution would assist patients and physicians in tackling the problem of liver disease and ensuring that cases are detected early to prevent progression to cirrhosis (scarring) and to enhance patient survival. The study showed the potential of machine learning in healthcare, especially as it concerns disease prediction and monitoring. CONCLUSION: This study contributed to the knowledge of machine learning applications in health and to efforts toward combating liver disease. However, relevant authorities must invest more in machine learning research and other health technologies to maximize their potential.
Keywords: Liver infection; Machine learning; Chi-square automated interaction detection; Classification and regression trees; Decision tree; XGBoost; Hyperparameter tuning
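A minimal sketch of XGBoost hyperparameter tuning via grid search, the kind of procedure the abstract applies; synthetic features stand in for the real patient records, and the grid values are illustrative:

```python
# Hypothetical tuning loop: small grid over depth, learning rate, trees.
import numpy as np
from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

rng = np.random.default_rng(7)
X = rng.normal(size=(583, 10))            # 416 patients + 167 non-patients
y = rng.integers(0, 2, size=583)          # the "is_patient" attribute

grid = GridSearchCV(
    XGBClassifier(eval_metric="logloss"),
    param_grid={"max_depth": [3, 5], "learning_rate": [0.05, 0.1],
                "n_estimators": [100, 300]},
    cv=5, scoring="accuracy")
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```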
16. TBDDoSA-MD: Trust-Based DDoS Misbehave Detection Approach in Software-defined Vehicular Network (SDVN) (Cited: 1)
Authors: Rajendra Prasad Nayak, Srinivas Sethi, Sourav Kumar Bhoi, Kshira Sagar Sahoo, Nz Jhanjhi, Thamer A. Tabbakh, Zahrah A. Almusaylim. Computers, Materials & Continua (SCIE, EI), 2021, No. 12, pp. 3513-3529.
Abstract: Reliable vehicles are essential in vehicular networks for effective communication. Since vehicles in the network are dynamic, even a short span of misbehavior by a vehicle can disrupt the whole network, which may lead to catastrophic consequences. In this paper, a Trust-Based Distributed DoS Misbehave Detection Approach (TBDDoSA-MD) is proposed to secure the Software-Defined Vehicular Network (SDVN). A malicious vehicle in this network performs DDoS misbehavior by attacking other vehicles in its neighborhood: it uses jamming by sending unnecessary signals in the network, and as a result the network performance degrades. Attacked vehicles can no longer meet the service requests of other vehicles. Therefore, this paper proposes an approach that detects DDoS misbehavior by using the trust values of the vehicles. Trust values are calculated from direct trust and recommendations (indirect trust), and these values help decide whether a vehicle is legitimate or malicious. Messages from malicious vehicles are simply discarded, whereas the authenticity of messages from legitimate vehicles is checked further before acting on them. The performance of TBDDoSA-MD is evaluated in the Veins hybrid simulator, which uses OMNeT++ and Simulation of Urban Mobility (SUMO). We compared the performance of TBDDoSA-MD with the recently proposed Trust-Based Framework (TBF) scheme using performance parameters such as detection accuracy, packet delivery ratio, detection time, and energy consumption. Simulation results show that the proposed work achieves a detection accuracy of more than 90% while keeping the detection time as low as 30 s.
Keywords: Software-defined vehicular network; trust; evaluator node; denial of service; misbehavior
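A minimal sketch of the trust computation the abstract outlines: a vehicle's trust is a weighted blend of direct observation and neighbour recommendations, with a threshold separating legitimate from misbehaving vehicles. The weights, threshold, and values are illustrative, not the paper's calibrated parameters:

```python
# Hypothetical direct + indirect trust blend with a decision threshold.
def trust(direct, recommendations, alpha=0.6):
    indirect = sum(recommendations) / len(recommendations)
    return alpha * direct + (1 - alpha) * indirect

vehicles = {"v1": (0.9, [0.8, 0.85]), "v2": (0.2, [0.3, 0.1])}
THRESHOLD = 0.5
for vid, (d, recs) in vehicles.items():
    t = trust(d, recs)
    print(vid, round(t, 2), "legitimate" if t >= THRESHOLD else "malicious")
```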
17. Scope of machine learning applications for addressing the challenges in next-generation wireless networks (Cited: 1)
Authors: Raj Kumar Samanta, Bikash Sadhukhan, Hiranmay Samaddar, Suvobrata Sarkar, Chandan Koner, Monidepa Ghosh. CAAI Transactions on Intelligence Technology (SCIE, EI), 2022, No. 3, pp. 395-418.
Abstract: The convenience of availing quality services at affordable costs anytime and anywhere makes mobile technology very popular among users. Due to this popularity, there has been a huge rise in mobile data volume, applications, types of services, and number of customers. Furthermore, the worldwide lockdown during the COVID-19 pandemic has added fuel to this increase, as most professional and commercial activities are being done online from home. This massive increase in demand for multi-class services has posed numerous challenges for wireless network frameworks. Services offered through wireless networks are required to support this huge volume of data and multiple types of traffic, such as real-time live streaming of video, audio, text, and images, at a very high bit rate with negligible transmission delay and at permissible vehicular speeds of the customers. Next-generation wireless networks (NGWNs, i.e., 5G networks and beyond) are being developed to accommodate the service qualities mentioned above and many more. However, achieving all the desired service qualities in the design of the 5G network infrastructure imposes large challenges on designers and engineers. It requires the analysis of a huge volume of network data (structured and unstructured) received or collected from heterogeneous devices, applications, services, and customers, and the effective, dynamic management of network parameters based on this analysis in real time. Amid ever-increasing network heterogeneity and complexity, machine learning (ML) techniques may become an efficient tool for effectively managing these issues. In recent years, progress in artificial intelligence and ML techniques has generated interest in their application to the networking domain. This study discusses current wireless network research, briefly discusses ML methods that can be effectively applied to the wireless networking domain, describes tools available to support and customise efficient mobile system design, and outlines unresolved issues for future research directions.
Keywords: machine learning; network control; next-generation wireless networks
18. Low Profile UHF Antenna Design for Low Earth-Observation CubeSats (Cited: 1)
Authors: Md. Amanath Ullah, Touhidul Alam, Ali F. Almutairi, Mohammad Tariqul Islam. Computers, Materials & Continua (SCIE, EI), 2022, No. 5, pp. 2533-2542.
Abstract: This paper presents a new UHF CubeSat antenna design based on a modified Planar Inverted-F Antenna (PIFA) for CubeSat communication. The design utilizes a CubeSat face as the ground plane, and a 5 mm gap beneath the radiating element provides space for solar panels. The prototype was fabricated from aluminum sheet metal and measured, and the antenna achieved resonance at 419 MHz. The response of the antenna was investigated after placing a solar panel: the lossy properties of the solar panel shifted the resonance by about 20 MHz. The design addresses this frequency-shifting issue that arises when the antenna is mounted on the CubeSat body, a phenomenon analyzed for typical 1U and 2U CubeSat bodies. The antenna achieved a positive realized gain of 0.7 dB and approximately 78% efficiency at the resonant frequency, while leaving 85% of the face open for solar irradiance onto the solar panel.
Keywords: CubeSat antenna; UHF antenna; small satellite; satellite communication
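As a rule-of-thumb check (not the paper's design procedure), a basic PIFA resonates roughly where its length plus width equals a quarter wavelength, so a 419 MHz design implies:

```python
# First-order PIFA sizing estimate: L + W ~ lambda/4 at resonance.
C = 299_792_458.0                 # speed of light, m/s
f = 419e6                         # measured resonance, Hz

quarter_wave = C / f / 4          # required L + W for a basic PIFA
print(f"lambda/4 = {quarter_wave * 100:.1f} cm")   # ~17.9 cm
```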
19. THRFuzzy: Tangential holoentropy-enabled rough fuzzy classifier to classification of evolving data streams (Cited: 1)
Authors: Jagannath E. Nalavade, T. Senthil Murugan. Journal of Central South University (SCIE, EI, CAS, CSCD), 2017, No. 8, pp. 1789-1800.
Abstract: Rapid developments in the fields of telecommunication, sensor data, financial applications, data stream analysis, and so on have increased the rate of data arrival, making data mining a vital process. Data analysis consists of different tasks, among which data stream classification faces more challenges than the other commonly used techniques. Even though classification is a continuous process, it requires a design that can adapt the classification model to concept change, i.e., the shifting of boundaries between classes. Hence, we design a novel fuzzy classifier known as THRFuzzy to classify new incoming data streams. Rough set theory, along with a tangential holoentropy function, helps in designing the dynamic classification model. The classification approach uses kernel fuzzy c-means (FCM) clustering for the generation of the rules and the tangential holoentropy function to update the membership function. The performance of the proposed THRFuzzy method is verified using three datasets, namely the skin segmentation, localization, and breast cancer datasets, with accuracy and time as the evaluated metrics, comparing its performance with the HRFuzzy and adaptive k-NN classifiers. The experimental results show that the THRFuzzy classifier achieves better classification results, providing maximum accuracy while consuming minimal time compared with the existing classifiers.
Keywords: data stream classification; fuzzy rough set; tangential holoentropy; concept change
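The classifier generates its rules with kernel fuzzy c-means; below is a sketch of the standard Gaussian-kernel FCM membership computation that underlies such a step (the holoentropy-based membership update of THRFuzzy itself is not reproduced, and all data are synthetic):

```python
# Kernel FCM memberships: u_ij proportional to (1/(1-K(x_i,v_j)))^(1/(m-1)).
import numpy as np

def kernel_fcm_memberships(X, centers, sigma=1.0, m=2.0):
    """Memberships of samples X to centers under a Gaussian kernel."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    K = np.exp(-d2 / (2 * sigma ** 2))                 # kernel similarities
    w = (1.0 / (1.0 - K + 1e-9)) ** (1.0 / (m - 1))
    return w / w.sum(axis=1, keepdims=True)

X = np.array([[0.0, 0.1], [0.1, 0.0], [1.0, 1.1], [0.9, 1.0]])
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
print(kernel_fcm_memberships(X, centers).round(3))
```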
20. Quantitative and qualitative correlation analysis of optimal route discovery for vehicular ad-hoc networks
Authors: Mukund B. Wagh, Gomathi N. Journal of Central South University (SCIE, EI, CAS, CSCD), 2018, No. 7, pp. 1732-1745.
Abstract: Vehicular ad-hoc networks (VANETs) are a significant field within intelligent transportation systems (ITS) for improving road security, as interaction among vehicles is encompassed by VANETs. Many experiments have been performed in the area of VANET improvement. A familiar challenge is satisfying multiple constrained quality of service (QoS) metrics. To resolve this issue, this study derives a cost model for the vehicle routing problem by focusing on QoS metrics such as collision, travel cost, awareness, and congestion. The QoS awareness is fuzzified into a cost model that comprises the entire routing cost. Because the genetic algorithm (GA) suffers from significant challenges, such as complexity, unassisted mutation, slow convergence, difficulty reaching the global optimum, multifaceted features under genetic coding, and fitting issues, the recently established lion algorithm (LA) is employed. The computation is analyzed through three well-known studies: cost analysis, convergence analysis, and complexity analysis. A numerical analysis with quantitative outcomes has also been performed, based on the correlation analysis obtained among the various cost functions. It is found that LA performs better than GA, with reductions in complexity and routing cost.
Keywords: vehicular ad-hoc network; lion algorithm; fuzzy; quality of service; routing
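A minimal sketch of the fuzzified multi-metric route cost the abstract describes: the four QoS metrics, normalised to [0, 1], are blended into one routing cost that a search algorithm would minimise. The weights and metric values are illustrative, not the paper's:

```python
# Hypothetical route-cost aggregation over four QoS metrics.
def route_cost(collision, travel, awareness, congestion,
               w=(0.3, 0.3, 0.2, 0.2)):
    # awareness is a benefit, so it enters the cost as (1 - awareness)
    metrics = (collision, travel, 1 - awareness, congestion)
    return sum(wi * mi for wi, mi in zip(w, metrics))

routes = {"r1": (0.2, 0.5, 0.9, 0.3), "r2": (0.6, 0.4, 0.5, 0.7)}
best = min(routes, key=lambda r: route_cost(*routes[r]))
print(best)   # the candidate a lion/genetic search would favour
```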