Journal Articles
33 articles found
1. Identification of Software Bugs by Analyzing Natural Language-Based Requirements Using Optimized Deep Learning Features
Authors: Qazi Mazhar ul Haq, Fahim Arif, Khursheed Aurangzeb, Noor ul Ain, Javed Ali Khan, Saddaf Rubab, Muhammad Shahid Anwar. Computers, Materials & Continua (SCIE, EI), 2024, Issue 3, pp. 4379-4397.
Abstract: Software project outcomes heavily depend on natural language requirements, often causing diverse interpretations and issues like ambiguities and incomplete or faulty requirements. Researchers are exploring machine learning to predict software bugs, but a more precise and general approach is needed. Accurate bug prediction is crucial for software evolution and user training, prompting an investigation into deep and ensemble learning methods. However, these studies are not generalized and efficient when extended to other datasets. Therefore, this paper proposed a hybrid approach combining multiple techniques to explore their effectiveness on bug identification problems. The methods involved feature selection, which is used to reduce the dimensionality and redundancy of features and select only the relevant ones; transfer learning, which is used to train and test the model on different datasets to analyze how much of the learning is passed to other datasets; and the ensemble method, which is utilized to explore the increase in performance upon combining multiple classifiers in a model. Four National Aeronautics and Space Administration (NASA) and four Promise datasets are used in the study, showing an increase in the model's performance by providing better Area Under the Receiver Operating Characteristic Curve (AUC-ROC) values when different classifiers were combined. It reveals that an amalgam of the techniques used in this study (feature selection, transfer learning, and ensemble methods) proves helpful in optimizing software bug prediction models and providing a high-performing, useful end model.
Keywords: Natural language processing; software bug prediction; transfer learning; ensemble learning; feature selection
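A minimal sketch of the kind of pipeline entry 1 describes, assuming a tabular defect dataset: feature selection in front of an ensemble of classifiers, trained on one dataset and scored on another to mimic the cross-dataset (transfer) setting with AUC-ROC. The feature count, classifier choices, and dataset variables are illustrative assumptions, not the authors' exact configuration.

```python
# Hypothetical sketch: feature selection + ensemble, trained on one defect
# dataset and tested on another (cross-dataset "transfer" evaluation).
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score

def make_model(k_features=10):
    # Ensemble of heterogeneous classifiers behind a shared feature selector.
    ensemble = VotingClassifier(
        estimators=[("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                    ("lr", LogisticRegression(max_iter=1000))],
        voting="soft")
    return make_pipeline(StandardScaler(),
                         SelectKBest(f_classif, k=k_features),
                         ensemble)

# X_src/y_src: source dataset (e.g., one NASA project); X_tgt/y_tgt: target
# dataset (e.g., one Promise project) with the same feature columns (assumed).
def cross_dataset_auc(X_src, y_src, X_tgt, y_tgt):
    model = make_model(k_features=min(10, X_src.shape[1]))
    model.fit(X_src, y_src)
    scores = model.predict_proba(X_tgt)[:, 1]
    return roc_auc_score(y_tgt, scores)  # AUC-ROC on the unseen dataset
```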
2. Intelligent Resource Allocations for Software-Defined Mission-Critical IoT Services
Authors: Chaebeen Nam, Sa Math, Prohim Tam, Seokhoon Kim. Computers, Materials & Continua (SCIE, EI), 2022, Issue 11, pp. 4087-4102.
Abstract: Heterogeneous Internet of Things (IoT) applications generate a diversity of novel applications and services in next-generation networks (NGN), which makes it essential to guarantee end-to-end (E2E) communication resources for both the control plane (CP) and the data plane (DP). Likewise, the heterogeneous 5th-generation (5G) communication applications, including Mobile Broadband Communications (MBBC), massive Machine-Type Communication (mMTC), and ultra-reliable low-latency communications (URLLC), require intelligent Quality-of-Service (QoS) Class Identifier (QCI) handling, while the CP entities suffer from the complicated mass of heterogeneous IoT applications. Moreover, the existing management and orchestration (MANO) models are inappropriate for resource utilization and allocation in large-scale and complicated network environments. To cope with these issues, this paper presents software-defined mobile edge computing (SDMEC) combined with a lightweight machine learning (ML) algorithm, namely the support vector machine (SVM), to enable intelligent MANO for real-time, resource-constrained IoT applications that require lightweight computation models. The SVM algorithm performs the QCI classification, and the software-defined networking (SDN) controller allocates and configures priority resources according to the SVM classification outcomes. Thus, the combination of SVM and SDMEC conducts intelligent resource MANO for massive QCI environments and meets the requirements of mission-critical communication with resource-constrained applications. Based on E2E experimentation metrics, the proposed scheme shows remarkable outperformance in key performance indicator (KPI) QoS, including communication reliability, latency, and communication throughput, over various powerful reference methods.
Keywords: Mobile edge computing; Internet of Things; software-defined networks; traffic classification; machine learning; resource allocation
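A hedged sketch of the classify-then-allocate idea in entry 2: an SVM assigns a QCI-like class to each flow from simple per-flow statistics, and the class is mapped to a queue priority that an SDN controller could configure. The feature set, class labels, and priority map are assumptions for illustration, not the paper's design.

```python
# Hypothetical sketch: SVM-based QCI classification feeding a priority map
# that an SDN controller could use when configuring queues.
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

QCI_PRIORITY = {"urllc": 0, "mbb": 1, "mmtc": 2}  # smaller = higher priority (assumed)

def train_qci_classifier(flow_features, flow_labels):
    # flow_features: rows of assumed statistics, e.g.
    # [mean_packet_size, packet_rate, inter_arrival_variance]
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(flow_features, flow_labels)
    return clf

def allocate_priority(clf, new_flow_features):
    # Classify each new flow and return the queue priority to configure.
    classes = clf.predict(new_flow_features)
    return [QCI_PRIORITY[c] for c in classes]
```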
3. Multi-Agent Deep Q-Networks for Efficient Edge Federated Learning Communications in Software-Defined IoT
Authors: Prohim Tam, Sa Math, Ahyoung Lee, Seokhoon Kim. Computers, Materials & Continua (SCIE, EI), 2022, Issue 5, pp. 3319-3335.
Abstract: Federated learning (FL) activates distributed on-device computation techniques to model better algorithm performance through the interaction of local model updates and global model distributions in aggregation-averaging processes. However, in large-scale heterogeneous Internet of Things (IoT) cellular networks, massive multi-dimensional model update iterations and resource-constrained computation are challenging aspects to be tackled significantly. This paper introduces a system model converging software-defined networking (SDN) and network functions virtualization (NFV) to enable device/resource abstractions and provide NFV-enabled edge FL (eFL) aggregation servers for advancing automation and controllability. Multi-agent deep Q-networks (MADQNs) target to enforce self-learning softwarization, optimize resource allocation policies, and advocate computation offloading decisions. With gathered network conditions and resource states, the proposed agent explores various actions to estimate expected long-term rewards in a particular state observation. In the exploration phase, optimal actions for joint resource allocation and offloading decisions in different possible states are obtained by maximum Q-value selections. An action-based virtual network function (VNF) forwarding graph (VNFFG) is orchestrated to map VNFs towards an eFL aggregation server with sufficient communication and computation resources in the NFV infrastructure (NFVI). The proposed scheme indicates deficient allocation actions, modifies the VNF backup instances, and reallocates the virtual resources for the exploitation phase. A deep neural network (DNN) is used as a value function approximator, and an epsilon-greedy algorithm balances exploration and exploitation. The scheme primarily considers the criticalities of FL model services and congestion states to optimize the long-term policy. Simulation results present the outperformance of the proposed scheme over reference schemes in terms of Quality of Service (QoS) performance metrics, including packet drop ratio, packet drop counts, packet delivery ratio, delay, and throughput.
Keywords: Deep Q-networks; federated learning; network functions virtualization; quality of service; software-defined networking
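A small sketch of the epsilon-greedy rule that entry 3 relies on for its DQN agents: explore a random action with probability epsilon, otherwise exploit the action with the maximum estimated Q-value. The Q-values here are placeholders rather than outputs of the paper's DNN approximator.

```python
# Hypothetical sketch: epsilon-greedy action selection over Q-values, the
# exploration/exploitation rule used by DQN-style agents.
import numpy as np

rng = np.random.default_rng(0)

def epsilon_greedy(q_values, epsilon):
    # q_values: 1-D array of estimated long-term rewards for each action
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))  # explore: random action
    return int(np.argmax(q_values))              # exploit: best-known action

# Toy usage: Q-values for candidate (resource allocation, offloading) actions.
q = np.array([0.12, 0.40, 0.31])
action = epsilon_greedy(q, epsilon=0.1)
```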
4. Comparative Analysis of Machine Learning Models for PDF Malware Detection: Evaluating Different Training and Testing Criteria (Cited: 2)
Authors: Bilal Khan, Muhammad Arshad, Sarwar Shah Khan. Journal of Cyber Security, 2023, Issue 1, pp. 1-11.
Abstract: The proliferation of maliciously coded documents as file transfers increase has led to a rise in sophisticated attacks. Portable Document Format (PDF) files have emerged as a major attack vector for malware due to their adaptability and wide usage. Detecting malware in PDF files is challenging because of their ability to include various harmful elements such as embedded scripts, exploits, and malicious URLs. This paper presents a comparative analysis of machine learning (ML) techniques, including Naive Bayes (NB), K-Nearest Neighbor (KNN), Average One Dependency Estimator (A1DE), Random Forest (RF), and Support Vector Machine (SVM), for PDF malware detection. The study utilizes a dataset obtained from the Canadian Institute for Cybersecurity and employs different testing criteria, namely percentage splitting and 10-fold cross-validation. The performance of the techniques is evaluated using F1-score, precision, recall, and accuracy measures. The results indicate that KNN outperforms the other models, achieving an accuracy of 99.8599% using 10-fold cross-validation. The findings highlight the effectiveness of ML models in accurately detecting PDF malware and provide insights for developing robust systems to protect against malicious activities.
Keywords: Cyber-security; PDF malware; model training; testing
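A minimal sketch of the evaluation protocol in entry 4: several classifiers compared with 10-fold cross-validation on a labeled feature table. A1DE has no scikit-learn implementation, so only the scikit-learn models are shown, and the feature matrix X and labels y are assumed to come from a PDF-feature dataset such as the CIC one mentioned.

```python
# Hypothetical sketch: compare classifiers with 10-fold cross-validation.
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

def compare_models(X, y):
    models = {
        "NB": GaussianNB(),
        "KNN": KNeighborsClassifier(n_neighbors=5),
        "RF": RandomForestClassifier(n_estimators=100, random_state=0),
        "SVM": SVC(),
    }
    results = {}
    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
        results[name] = (scores.mean(), scores.std())  # mean and spread per model
    return results
```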
5. Context Awareness by Noise-Pattern Analysis of a Smart Factory
Authors: So-Yeon Lee, Jihoon Park, Dae-Young Kim. Computers, Materials & Continua (SCIE, EI), 2023, Issue 8, pp. 1497-1514.
Abstract: Recently, to build smart factories, research has been conducted on fault diagnosis and defect detection based on the vibration and noise signals generated when a mechanical system is driven, using deep-learning technology, a field of artificial intelligence. Most related studies apply various audio-feature extraction techniques to one-dimensional raw data to extract sound-specific features and then classify the sound by using the derived spectral image as a training dataset. However, compared to numerical raw data, learning based on image data has the disadvantage that creating a training dataset is very time-consuming. Therefore, we devised a two-step data preprocessing method that efficiently detects machine anomalies in numerical raw data. In the first preprocessing step, the sound signal is analyzed to extract features, and in the second step, data filtering is performed by applying the proposed algorithm. An efficient dataset was built for model learning through these two preprocessing steps. Both approaches showed excellent training accuracy, but building the image dataset took 203 s compared to 39 s for the proposed dataset, about 5.2 times longer.
Keywords: Noise-pattern recognition; context awareness; deep learning; fault detection; smart factory
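A hedged sketch of the two-step idea in entry 5, under assumed feature choices: first derive a few numerical features from raw sound frames, then filter out frames whose features fall inside a "normal" band so that only informative samples enter the training set. The RMS band and the features themselves are illustrative, not the paper's algorithm.

```python
# Hypothetical sketch: (1) extract simple features from raw audio frames,
# (2) keep only frames whose energy deviates from an assumed normal band.
import numpy as np

def frame_features(frame, sample_rate):
    # Root-mean-square energy and spectral centroid of one float-valued frame.
    rms = np.sqrt(np.mean(frame ** 2))
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
    return rms, centroid

def filter_frames(frames, sample_rate, rms_band=(0.01, 0.05)):
    # Step 2: discard frames whose energy sits inside the assumed normal band.
    kept = []
    for frame in frames:
        rms, centroid = frame_features(frame, sample_rate)
        if not (rms_band[0] <= rms <= rms_band[1]):
            kept.append((rms, centroid))
    return np.array(kept)
```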
6. Edge Cloud Selection in Mobile Edge Computing (MEC)-Aided Applications for Industrial Internet of Things (IIoT) Services
Authors: Dae-Young Kim, SoYeon Lee, MinSeung Kim, Seokhoon Kim. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 11, pp. 2049-2060.
Abstract: In many IIoT architectures, various devices connect to the edge cloud via gateway systems, and a great deal of data is delivered to the edge cloud for processing. Delivering data to an appropriate edge cloud is critical to improving IIoT service efficiency. There are two types of costs in this kind of IoT network: a communication cost and a computing cost. For service efficiency, the communication cost of data transmission should be minimized, and the computing cost in the edge cloud should also be minimized. Therefore, in this paper, the communication cost of data transmission is defined as a delay factor, and the computing cost in the edge cloud is defined as the waiting time of the computing intensity. The proposed method selects an edge cloud that minimizes the total of the communication and computing costs; that is, a device chooses a routing path to the selected edge cloud based on these costs. The proposed method controls the data flows in a mesh-structured network and appropriately distributes the data processing load. The performance of the proposed method is validated through extensive computer simulation. When the transition probability from good to bad is 0.3 and the transition probability from bad to good is 0.7 in the wireless and edge cloud states, the proposed method reduced both the average delay and the service pause counts to about 25% of the existing method.
Keywords: Industrial Internet of Things (IIoT) network; IIoT service; mobile edge computing (MEC); edge cloud selection; MEC-aided application
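A small sketch of the selection rule summarized in entry 6: for each candidate edge cloud, add a communication-delay cost to a computing (waiting-time) cost and pick the cloud with the smallest total. The cost values are placeholders; the paper's exact cost formulation is not reproduced.

```python
# Hypothetical sketch: pick the edge cloud minimizing communication delay
# plus expected computing (waiting-time) cost.
def select_edge_cloud(clouds):
    # clouds: list of dicts with assumed keys "name", "comm_delay", "wait_time"
    best = min(clouds, key=lambda c: c["comm_delay"] + c["wait_time"])
    return best["name"]

# Toy usage
clouds = [
    {"name": "edge-A", "comm_delay": 12.0, "wait_time": 30.0},
    {"name": "edge-B", "comm_delay": 20.0, "wait_time": 10.0},
]
print(select_edge_cloud(clouds))  # -> "edge-B" (total 30 vs 42)
```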
7. Adaptive Partial Task Offloading and Virtual Resource Placement in SDN/NFV-Based Network Softwarization
Authors: Prohim Tam, Sa Math, Seokhoon Kim. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 5, pp. 2141-2154.
Abstract: Edge intelligence brings the deployment of applied deep learning (DL) models to edge computing systems to alleviate core backbone network congestion. The setup of programmable software-defined networking (SDN) control and elastic virtual computing resources within network functions virtualization (NFV) cooperate to enhance the applicability of intelligent edge softwarization. To advance multi-dimensional model task offloading in edge networks with SDN/NFV-based control softwarization, this study proposes a DL mechanism to recommend the optimal edge node selection from primary features of congestion windows, link delays, and allocatable bandwidth capacities. An adaptive partial task offloading policy uses the DL-based recommendation to modify efficient virtual resource placement for minimizing the completion time and termination drop ratio. The optimization problem of resource placement is tackled by a deep reinforcement learning (DRL)-based policy following the Markov decision process (MDP). The agent observes the state spaces and applies value-maximized actions over the available computation resources and adjustable resource allocation steps. The reward formulation primarily considers task-required computing resources and action-applied allocation properties. With defined policies of resource determination, the orchestration procedure is configured within each virtual network function (VNF) descriptor using the topology and orchestration specification for cloud applications (TOSCA) by specifying the allocated properties. The simulation for control rule installation is conducted using Mininet and the Ryu SDN controller. Average delay and task delivery/drop ratios are used as the key performance metrics.
Keywords: Deep learning; partial task offloading; software-defined networking; virtual machine; virtual network functions
8. Sentiment Analysis Based on Performance of Linear Support Vector Machine and Multinomial Naïve Bayes Using Movie Reviews with Baseline Techniques
Authors: Mian Muhammad Danyal, Sarwar Shah Khan, Muzammil Khan, Muhammad Bilal Ghaffar, Bilal Khan, Muhammad Arshad. Journal on Big Data, 2023, Issue 1, pp. 1-18.
Abstract: Movies are a major source of entertainment, and a great number of movies are released every year. People comment on movies in the form of reviews after watching them. Since it is difficult to read all of the reviews for a movie, summarizing them helps viewers decide without spending time reading every review. Opinion mining, also known as sentiment analysis, is the process of extracting subjective information from textual data; it involves identifying and extracting the opinions of individuals, which can be positive, neutral, or negative. Sentiment analysis is performed here to understand people's emotions and attitudes in movie reviews, which are an important source of opinion data because they provide insight into the general public's opinion about a particular movie, and a summary of all reviews can give a general idea about it. This study compares baseline techniques, Logistic Regression, Random Forest Classifier, Decision Tree, K-Nearest Neighbor, Gradient Boosting Classifier, and Passive Aggressive Classifier, with Linear Support Vector Machines and Multinomial Naïve Bayes on the IMDB Dataset of 50K reviews and the Sentiment Polarity Dataset Version 2.0. Before applying these classifiers, both datasets are cleaned in preprocessing, duplicate data is dropped, and chat words are normalized for better results. On the IMDB Dataset of 50K reviews, Linear Support Vector Machines achieve the highest accuracy of 89.48%, and after hyperparameter tuning, the Passive Aggressive Classifier achieves the highest accuracy of 90.27%; on the Sentiment Polarity Dataset Version 2.0, Multinomial Naïve Bayes achieves the highest accuracy of 70.69%, and 71.04% after hyperparameter tuning. This study highlights the importance of sentiment analysis as a tool for understanding the emotions and attitudes in movie reviews and predicts the performance of a movie based on the average sentiment of all the reviews.
Keywords: Opinion mining; machine learning; movie reviews; IMDB Dataset of 50K reviews; Sentiment Polarity Dataset Version 2.0
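A minimal sketch of the core comparison in entry 8: TF-IDF features feeding a Linear SVM and a Multinomial Naive Bayes classifier on labeled reviews. The cleaning and chat-word handling the study performs are omitted; the split ratio and vectorizer settings are assumptions.

```python
# Hypothetical sketch: TF-IDF + Linear SVM vs. Multinomial Naive Bayes
# on labeled movie reviews (lists of strings and labels).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def compare_sentiment_models(reviews, labels):
    X_tr, X_te, y_tr, y_te = train_test_split(
        reviews, labels, test_size=0.2, random_state=0)
    models = {
        "LinearSVC": make_pipeline(TfidfVectorizer(), LinearSVC()),
        "MultinomialNB": make_pipeline(TfidfVectorizer(), MultinomialNB()),
    }
    # Fit each pipeline and report held-out accuracy.
    return {name: accuracy_score(y_te, m.fit(X_tr, y_tr).predict(X_te))
            for name, m in models.items()}
```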
9. Investigating the Use of Email Application in Illiterate and Semi-Illiterate Population (Cited: 1)
Authors: Sadeeq Jan, Imran Maqsood, Salman Ahmed, Zahid Wadud, Iftikhar Ahmad. Computers, Materials & Continua (SCIE, EI), 2020, Issue 3, pp. 1473-1486.
Abstract: The use of electronic communication has increased significantly over the last few decades, and email is one of the most well-known means of electronic communication. Traditional email applications are widely used by a large population; however, illiterate and semi-illiterate people face challenges in using them. A large portion of Pakistan's population is illiterate and has little or no experience with computers. In this paper, we investigate the challenges illiterate and semi-illiterate people face when using email applications. In addition, we propose a solution by developing an application tailored to their needs. Research shows that illiterate people are good at learning designs that convey information with pictures instead of text only and focus more on one object/action at a time. Our proposed solution is based on user interfaces that consist of icons and vocal/audio instructions instead of text. Further, we use background voice/audio, which is more helpful than flooding a picture with a lot of information. We tested our application with a large number of users of various skill levels (from no computer knowledge to expert). The results of our usability tests indicate that the application can be used by illiterate people without any training or third-party help.
Keywords: Illiterate; semi-illiterate; email; usability; user interfaces
10. Implementation of a Subjective Visual Vertical and Horizontal Testing System Using Virtual Reality
Authors: Sungjin Lee, Min Hong, Hongly Va, Ji-Yun Park. Computers, Materials & Continua (SCIE, EI), 2021, Issue 6, pp. 3669-3679.
Abstract: Subjective visual vertical (SVV) and subjective visual horizontal (SVH) tests can be used to evaluate the perception of verticality and horizontality, respectively, and can aid the diagnosis of otolith dysfunction in clinical practice. In this study, SVV and SVH screen-version tests are implemented using virtual reality (VR) equipment; the proposed test method promotes a more immersive feeling for the subject while using a simple equipment configuration and possessing excellent mobility. To verify the performance of the proposed VR-based SVV and SVH tests, a reliable comparison was made between the traditional screen-based SVV and SVH tests and the proposed method, based on 30 healthy subjects. The average results of our experimental tests on the VR-based binocular SVV and SVH equipment were −0.15° ± 1.74° and 0.60° ± 1.18°, respectively. The proposed VR-based method satisfies the normal tolerance for horizontal or vertical lines, i.e., a ±3° error, as defined in previous studies, and it can be used to replace existing test methods.
Keywords: Subjective visual vertical; subjective visual horizontal; virtual reality; UNITY3D; FOVE HMD; vestibular function tests; diagnostic equipment
11. Development of Cloud Based Air Pollution Information System Using Visualization
Authors: SangWook Han, JungYeon Seo, Dae-Young Kim, SeokHoon Kim, HwaMin Lee. Computers, Materials & Continua (SCIE, EI), 2019, Issue 6, pp. 697-711.
Abstract: Air pollution caused by fine dust is a serious problem all over the world, and fine dust has a fatal impact on human health. However, there are too few fine-dust measuring stations, and their installation cost is very high. In this paper, we propose a cloud-based air pollution information system using R. To measure fine dust, we developed an inexpensive measuring device and studied a technique to accurately measure the concentration of fine dust at the user's location. We also developed a smartphone application to provide air pollution information. Our system provides analytical results based on the collected data through effective data modeling. It presents fine-dust values and action tips through the air pollution information application and supports visualization on a map using the statistical program R, so that users can check the fine-dust statistics map and respond to fine dust accordingly.
Keywords: Air pollution; visualization; R; big data; cloud clusters
12. A Distributed Covert Channel of the Packet Ordering Enhancement Model Based on Data Compression
Authors: Lejun Zhang, Xiaoyan Hu, Zhijie Zhang, Weizheng Wang, Tianwen Huang, Donghai Guan, Chunhui Zhao, Seokhoon Kim. Computers, Materials & Continua (SCIE, EI), 2020, Issue 9, pp. 2013-2030.
Abstract: Packet-ordering covert channels are a hot research topic. Encryption alone is not enough to protect both sides of a communication; a covert channel also needs to hide the transmitted data and protect the content of the communication. Traditional methods usually rely on proxy technology, such as the Tor anonymity network, to hide communication from observers. However, because establishing proxy communication consumes traffic, the communication capacity is reduced, and in recent years Tor has often had vulnerabilities that led to the leakage of secret information. In this paper, the packet-ordering covert channel model is applied to a distributed system, and a distributed covert channel of the packet ordering enhancement model based on data compression (DCCPOEDC) is proposed. Data compression algorithms are used to reduce the amount of data and the transmission time. The distributed system and the data compression algorithms weaken the statistical probability of detecting the hidden information; furthermore, they enhance the unknowability of the data and weaken the time-distribution characteristics of the data packets. This paper selected a compression algorithm suitable for DCCPOEDC and analyzed DCCPOEDC in terms of anonymity, transmission efficiency, and transmission performance. According to the analysis results, DCCPOEDC optimizes the packet-ordering covert channel, saving transmission time and improving concealment compared with the original covert channel.
Keywords: Covert channels; information hiding; data compression; distributed system
13. Reverse Engineering of Mobile Banking Applications
Authors: Syeda Warda Asher, Sadeeq Jan, George Tsaramirsis, Fazal Qudus Khan, Abdullah Khalil, Muhammad Obaidullah. Computer Systems Science & Engineering (SCIE, EI), 2021, Issue 9, pp. 265-278.
Abstract: Software reverse engineering is the process of analyzing a software system to extract its design and implementation details. Reverse engineering exposes an application's source code, an inside view of its architecture, and its third-party dependencies. From a security perspective, it is mostly used for finding vulnerabilities and attacking or cracking an application. The process is carried out either by obtaining the code in plaintext or by reading it through binaries or mnemonics. Nowadays, reverse engineering is widely applied to mobile applications and is considered a security risk; the Open Web Application Security Project (OWASP), a leading security research forum, has included reverse engineering in its top 10 list of mobile application vulnerabilities. Mobile applications are used in many sectors, e.g., banking, education, and health. In particular, banking applications are critical in terms of security as they are used for financial transactions, and a security breach of such applications can result in huge financial losses for customers as well as banks. Various tools exist for reverse engineering of mobile applications; however, they have deficiencies, e.g., complex configurations and a lack of detailed analysis reports. In this research work, we analyze the available tools for reverse engineering of mobile applications. Our dataset consists of the mobile banking applications of the banks providing services in Pakistan. Our results indicate that none of the existing tools can carry out the complete reverse engineering process as a standalone tool. In addition, we observe significant differences in the execution time and the number of files generated by each tool for the same file.
Keywords: Reverse engineering; mobile banking applications; security analysis
14. Cardiac CT Image Segmentation for Deep Learning-Based Coronary Calcium Detection Using K-Means Clustering and Grabcut Algorithm (Cited: 1)
Authors: Sungjin Lee, Ahyoung Lee, Min Hong. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 8, pp. 2543-2554.
Abstract: Medical data for specific conditions is limited in quantity and is not standardized. To address these limitations, it is necessary to study how to process such limited amounts of data efficiently. In this paper, deep learning methods for automatically determining cardiovascular diseases are described, and an effective preprocessing method for CT images that can be applied to improve the performance of deep learning is presented. Cardiac CT images include several parts of the body such as the heart, lungs, spine, and ribs. The preprocessing step proposed in this paper divides CT image data into regions of interest and other regions using K-means clustering and the GrabCut algorithm. We compared the deep learning performance of the original data, data using only K-means clustering, and data using both K-means clustering and the GrabCut algorithm. All data used in this paper were collected at Soonchunhyang University Cheonan Hospital in Korea, and the experiments proceeded with IRB approval. Training was conducted using ResNet 50, VGG, and Inception ResNet V2 models, and ResNet 50 had the best accuracy in validation and testing. Through the proposed preprocessing process, the accuracy of the deep learning models improved significantly, by at least 10% and up to 40%.
Keywords: Deep learning; VGG; ResNet; CT image processing
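A hedged sketch of the preprocessing chain in entry 14, assuming an 8-bit grayscale CT slice whose region of interest lies in the brightest intensity cluster: K-means separates bright pixels, the bright cluster's bounding box seeds GrabCut, and the result masks the region of interest. K, the iteration counts, and the brightest-cluster assumption are illustrative, not the paper's exact settings.

```python
# Hypothetical sketch: K-means intensity clustering to seed a GrabCut
# segmentation of a grayscale CT slice (uint8 image).
import numpy as np
import cv2

def kmeans_grabcut_roi(gray, k=3):
    # Step 1: K-means on pixel intensities; assume the brightest cluster is the ROI.
    data = gray.reshape(-1, 1).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, centers = cv2.kmeans(data, k, None, criteria, 3,
                                    cv2.KMEANS_PP_CENTERS)
    bright = int(np.argmax(centers))
    mask = (labels.reshape(gray.shape) == bright).astype(np.uint8)

    # Step 2: bounding box of the bright cluster initializes GrabCut.
    ys, xs = np.nonzero(mask)
    rect = (int(xs.min()), int(ys.min()),
            int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1))
    gc_mask = np.zeros(gray.shape, np.uint8)
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    color = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)  # GrabCut expects 3 channels
    cv2.grabCut(color, gc_mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)

    # Keep definite and probable foreground; zero out everything else.
    fg = np.where((gc_mask == cv2.GC_FGD) | (gc_mask == cv2.GC_PR_FGD), 1, 0)
    return (gray * fg.astype(np.uint8))
```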
15. LoRa Backscatter Network Efficient Data Transmission Using RF Source Range Control
Authors: Dae-Young Kim, SoYeon Lee, Seokhoon Kim. Computers, Materials & Continua (SCIE, EI), 2023, Issue 2, pp. 4015-4025.
Abstract: Networks based on backscatter communication provide wireless data transmission in the absence of a power source. A backscatter device receives a radio frequency (RF) source and creates a backscattered signal that delivers data; this enables new services in battery-less domains with massive Internet of Things (IoT) devices. Connectivity is highly energy-efficient in the context of massive IoT applications, and outdoors, long-range (LoRa) backscattering facilitates large IoT services. A backscatter network supports timeslot- and contention-based transmission. Timeslot-based transmission ensures data delivery but does not scale to varying numbers of transmitting devices. If contention-based transmission is used, collisions are unavoidable. To reduce collisions and increase transmission efficiency, the number of devices transmitting data must be controlled. To control device activation, the RF source range can be modulated by adjusting the RF source power during LoRa backscatter. This reduces the number of transmitting devices, and thus collisions and retransmissions, thereby improving transmission efficiency. We performed extensive simulations to evaluate the performance of our method.
Keywords: Backscatter communication; LoRa backscatter; RF source range control; activated device control; Internet of Things
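A small sketch of the control idea in entry 15, under an assumed log-distance path-loss model: estimate the activation range for a given RF source power and nudge the power up or down so the number of in-range (activated) devices tracks a target, keeping contention manageable. The sensitivity, path-loss parameters, and step size are illustrative assumptions.

```python
# Hypothetical sketch: adjust RF source power so the number of activated
# backscatter devices (those within range) tracks a target count.
def activation_range(tx_power_dbm, sensitivity_dbm=-30.0, path_loss_exp=2.7,
                     pl_at_1m_db=40.0):
    # Assumed log-distance path-loss model: range where received power
    # falls to the activation sensitivity.
    margin_db = tx_power_dbm - pl_at_1m_db - sensitivity_dbm
    return 10 ** (margin_db / (10 * path_loss_exp))  # meters

def adjust_power(tx_power_dbm, device_distances, target_count, step_db=1.0):
    in_range = sum(d <= activation_range(tx_power_dbm) for d in device_distances)
    if in_range > target_count:
        return tx_power_dbm - step_db   # shrink range, fewer contenders
    if in_range < target_count:
        return tx_power_dbm + step_db   # grow range, serve more devices
    return tx_power_dbm
```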
16. Real-Time Prediction Algorithm for Intelligent Edge Networks with Federated Learning-Based Modeling
Authors: Seungwoo Kang, Seyha Ros, Inseok Song, Prohim Tam, Sa Math, Seokhoon Kim. Computers, Materials & Continua (SCIE, EI), 2023, Issue 11, pp. 1967-1983.
Abstract: Intelligent healthcare networks represent a significant component of digital applications, where the requirements center on quality-of-service (QoS) reliability and the safeguarding of privacy. This paper addresses these requirements through the integration of enabler paradigms, including federated learning (FL), cloud/edge computing, software-defined/virtualized networking infrastructure, and converged prediction algorithms. The study focuses on achieving reliability and efficiency in real-time prediction models, which depend on the interaction flows and network topology. In response to these challenges, we introduce a modified version of federated logistic regression (FLR) that takes into account convergence latencies and the accuracy of the final FL model within healthcare networks. To establish the FLR framework for mission-critical healthcare applications, we provide a comprehensive workflow covering framework setup, iterative round communications, and model evaluation/deployment. Our optimization process formulates the loss functions and gradients within the domain of federated optimization and concludes with the generation of service experience batches for model deployment. To assess the practicality of our approach, we conducted experiments using a hypertension prediction model with data sourced from the 2019 annual dataset (Version 2.0.1) of the Korea Medical Panel Survey. Performance metrics, including end-to-end execution delays, model drop/delivery ratios, and final model accuracies, are captured and compared between the proposed FLR framework and other baseline schemes. Our study offers an FLR framework for real-time prediction modeling within intelligent healthcare networks, addressing the critical demands of QoS reliability and privacy preservation.
Keywords: Edge computing; federated logistic regression; intelligent healthcare networks; prediction modeling; privacy-aware and real-time learning
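A minimal sketch of one federated round for a logistic-regression model, in the spirit of the FLR workflow in entry 16: each client fits locally on its own data and the server averages coefficients weighted by client sample counts. The latency/accuracy handling and privacy machinery described in the paper are omitted; binary labels present at every client are assumed.

```python
# Hypothetical sketch: one FedAvg-style round of federated logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

def local_update(X, y):
    # Each client trains on its own local data only.
    model = LogisticRegression(max_iter=1000)
    model.fit(X, y)
    return model.coef_, model.intercept_, len(y)

def federated_round(client_datasets):
    # client_datasets: list of (X, y) tuples, one per client/edge node.
    updates = [local_update(X, y) for X, y in client_datasets]
    total = sum(n for _, _, n in updates)
    coef = sum(c * (n / total) for c, _, n in updates)        # weighted average
    intercept = sum(b * (n / total) for _, b, n in updates)

    # Package the aggregated global model for the next round / deployment.
    global_model = LogisticRegression()
    global_model.coef_, global_model.intercept_ = coef, intercept
    global_model.classes_ = np.array([0, 1])  # assumed binary labels
    return global_model
```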
17. Data Aggregation-based Transmission Method in Ultra-Dense Wireless Networks
Authors: Dae-Young Kim, Seokhoon Kim. Intelligent Automation & Soft Computing (SCIE), 2023, Issue 1, pp. 727-737.
Abstract: As the Internet of Things (IoT) advances, machine-type devices are densely deployed and massive networks such as ultra-dense networks (UDNs) are formed. Various devices attach to the network to transmit data using machine-type communication (MTC), whereby a large volume of varied data is generated. MTC devices generally have resource constraints and use wireless communication. In this kind of network, data aggregation is a key function for transmission efficiency: it reduces the number of transmitted data units in the network, which leads to energy savings and lower transmission delays. To operate data aggregation effectively in UDNs, it is important to select the aggregation point well, because the total number of transmissions varies depending on the aggregation point to which the data are delivered. Therefore, in this paper, we propose a novel data aggregation scheme that selects an appropriate aggregation point and describe the data transmission method applying the proposed scheme. In addition, we evaluate the proposed scheme with extensive computer simulations, achieving better performance than the conventional approach.
Keywords: Data aggregation; data transmission; ultra-dense network; machine-type communication; Internet of Things
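A small sketch of the aggregation-point choice discussed in entry 17: among candidate nodes, pick the one minimizing the total number of transmissions, approximated here as the sum of hop counts from all source devices. The hop-count table is a placeholder for whatever routing metric the network exposes.

```python
# Hypothetical sketch: choose the aggregation point that minimizes total
# transmissions (sum of hop counts from every source device).
def select_aggregation_point(hop_counts):
    # hop_counts: dict {candidate_node: [hops from device 1, device 2, ...]}
    return min(hop_counts, key=lambda node: sum(hop_counts[node]))

# Toy usage: node "B" collects the data with the fewest total transmissions.
hops = {"A": [2, 3, 4, 1], "B": [1, 2, 2, 2], "C": [3, 1, 3, 3]}
print(select_aggregation_point(hops))  # -> "B" (total 7 vs 10 and 10)
```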
18. Hybrid Mobile Cloud Computing Architecture with Load Balancing for Healthcare Systems
Authors: Ahyoung Lee, Jui Mhatre, Rupak Kumar Das, Min Hong. Computers, Materials & Continua (SCIE, EI), 2023, Issue 1, pp. 435-452.
Abstract: Healthcare is a fundamental part of every individual's life, and the healthcare industry is developing very rapidly with the help of advanced technologies. Many researchers are trying to build cloud-based healthcare applications that can be accessed by healthcare professionals from their premises, as well as by patients from their mobile devices through communication interfaces. These systems promote reliable and remote interactions between patients and healthcare professionals. However, there are several limitations to these innovative cloud-computing-based systems, namely network availability, latency, battery life, and resource availability. We propose a hybrid mobile cloud computing (HMCC) architecture to address these challenges. Furthermore, we evaluate the performance of heuristic and dynamic machine-learning-based task scheduling and load balancing algorithms on our proposed architecture. We compare them to identify the strengths and weaknesses of each algorithm and provide comparative results showing latency and energy-consumption performance. Challenging issues for cloud-based healthcare systems are discussed in detail.
Keywords: Mobile cloud computing; hybrid mobile cloud computing; load balancing; healthcare solution
19. Network-Aided Intelligent Traffic Steering in 5G Mobile Networks (Cited: 4)
Authors: Dae-Young Kim, Seokhoon Kim. Computers, Materials & Continua (SCIE, EI), 2020, Issue 10, pp. 243-261.
Abstract: Recently, the fifth generation (5G) of mobile networks has been deployed and a wide range of mobile services is being provided. The 5G mobile network supports improved mobile broadband, ultra-low latency, and densely deployed massive devices. It allows multiple radio access technologies and interworks them for services. 5G mobile systems employ traffic steering techniques to use multiple radio access technologies efficiently. However, conventional traffic steering techniques do not consider dynamic network conditions efficiently. In this paper, we propose a network-aided traffic steering technique in the 5G mobile network architecture. 5G mobile systems monitor network conditions and learn from network data. Through a machine learning algorithm such as a feed-forward neural network, the system recognizes dynamic network conditions and then performs traffic steering. The proposed scheme controls traffic across multiple radio access technologies according to the ratio of measured throughput, and thus can be expected to improve traffic steering efficiency. The performance of the proposed traffic steering scheme is evaluated using extensive computer simulations.
Keywords: Mobile network; 5G; traffic steering; machine learning; MEC
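A small sketch of the steering rule mentioned in entry 19: split new traffic across the available radio access technologies in proportion to their recently measured throughput. The neural-network-based condition recognition is out of scope here; only the ratio-based split is illustrated, with assumed RAT names and numbers.

```python
# Hypothetical sketch: steer traffic across RATs in proportion to measured throughput.
def steering_weights(measured_throughput):
    # measured_throughput: dict {rat_name: recent throughput in Mbps}
    total = sum(measured_throughput.values())
    return {rat: tput / total for rat, tput in measured_throughput.items()}

# Toy usage: 5G NR gets 75% of new flows, Wi-Fi 25%.
print(steering_weights({"nr": 300.0, "wifi": 100.0}))
```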
20. A Covert Communication Method Using Special Bitcoin Addresses Generated by Vanitygen (Cited: 7)
Authors: Lejun Zhang, Zhijie Zhang, Weizheng Wang, Rasheed Waqas, Chunhui Zhao, Seokhoon Kim, Huiling Chen. Computers, Materials & Continua (SCIE, EI), 2020, Issue 10, pp. 597-616.
Abstract: As an extension of traditional encryption technology, information hiding has been increasingly used in the fields of communication and network media, and covert communication technology has gradually developed. The blockchain technology that has emerged in recent years has the characteristics of decentralization and tamper resistance, which can effectively alleviate the disadvantages and problems of traditional covert communication. However, its combination with covert communication has thus far been mostly at the theoretical level. The BLOCCE method, an early result of combining blockchain and covert communication technology, has the problems of low information-embedding efficiency, the use of too many Bitcoin addresses, low communication efficiency, and high costs. The present research improves on this method with V-BLOCCE, which uses Base58 to encode the plaintext and reuses the addresses generated by Vanitygen multiple times to embed information. This greatly improves the efficiency of information embedding and decreases the number of Bitcoin addresses used. Under the premise of preserving order, the Bitcoin transaction OP_RETURN field is used to store the information required to restore the plaintext, and the transactions are issued at the same time to improve the information transmission efficiency. Thus, a more efficient and feasible method for applying covert communication on the blockchain is proposed. In addition, this paper provides a more feasible scheme and theoretical support for covert communication in blockchain.
Keywords: Covert communication; blockchain; Bitcoin address
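A minimal sketch of the Base58 step that entry 20 says V-BLOCCE applies to the plaintext before embedding; the OP_RETURN transaction construction and Vanitygen address reuse are not reproduced. This is a generic Base58 encoder, not the authors' implementation.

```python
# Hypothetical sketch: Base58 encoding of a payload, using the alphabet of
# Bitcoin addresses (no checksum handling here).
B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58_encode(data: bytes) -> str:
    num = int.from_bytes(data, "big")
    out = ""
    while num > 0:
        num, rem = divmod(num, 58)
        out = B58_ALPHABET[rem] + out
    # Preserve leading zero bytes as '1' characters, as Bitcoin does.
    pad = len(data) - len(data.lstrip(b"\x00"))
    return "1" * pad + out

print(base58_encode(b"covert message"))
```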