Journal Articles
23 articles found
1. Intelligent Resource Allocations for Software-Defined Mission-Critical IoT Services
Authors: Chaebeen Nam, Sa Math, Prohim Tam, Seokhoon Kim. Computers, Materials & Continua, SCIE EI, 2022, No. 11, pp. 4087-4102 (16 pages)
Heterogeneous Internet of Things (IoT) applications generate a diversity of novel applications and services in next-generation networks (NGN), which makes it essential to guarantee end-to-end (E2E) communication resources for both the control plane (CP) and data plane (DP). Likewise, heterogeneous 5th-generation (5G) communication applications, including Mobile Broadband Communications (MBBC), massive Machine-Type Communication (mMTC), and ultra-reliable low-latency communications (URLLC), must perform intelligent Quality-of-Service (QoS) Class Identifier (QCI) classification, while the CP entities suffer from complicated massive heterogeneous IoT applications. Moreover, existing management and orchestration (MANO) models are inappropriate for resource utilization and allocation in large-scale and complicated network environments. To cope with these issues, this paper presents a software-defined mobile edge computing (SDMEC) approach with a lightweight machine learning (ML) algorithm, namely a support vector machine (SVM), to enable intelligent MANO for real-time, resource-constrained IoT applications that require lightweight computation models. The SVM algorithm plays an essential role in performing QCI classification, and the software-defined networking (SDN) controller allocates and configures priority resources according to the SVM classification outcomes. Thus, the combination of SVM and SDMEC conducts intelligent resource MANO for massive QCI environments and meets the perspectives of mission-critical communication with resource-constrained applications. Based on E2E experimentation metrics, the proposed scheme shows remarkable outperformance in key performance indicator (KPI) QoS, including communication reliability, latency, and communication throughput, over various powerful reference methods.
Keywords: Mobile edge computing, Internet of Things, software-defined networks, traffic classification, machine learning, resource allocation
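A minimal sketch of the classification step this abstract describes, assuming scikit-learn; the two toy flow features (latency budget, packet size) and class labels are illustrative assumptions, not the paper's actual feature set:

```python
# Hypothetical sketch: classify flows into QCI-like priority classes with a
# linear SVM, using (latency budget in ms, packet size in bytes) as toy features.
from sklearn import svm

# Toy training flows: class 0 ~ latency-critical (URLLC-like),
# class 1 ~ delay-tolerant massive IoT (mMTC-like).
X_train = [[1, 200], [2, 180], [5, 256], [50, 1400], [60, 1500], [80, 1200]]
y_train = [0, 0, 0, 1, 1, 1]

clf = svm.SVC(kernel="linear")
clf.fit(X_train, y_train)

# A new flow with a tight latency budget should map to the critical class,
# letting the SDN controller assign it priority resources.
predicted_class = int(clf.predict([[3, 220]])[0])
```

In the scheme above, the SDN controller would consume `predicted_class` to install priority forwarding rules; that wiring is beyond this sketch.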
2. Multi-Agent Deep Q-Networks for Efficient Edge Federated Learning Communications in Software-Defined IoT
Authors: Prohim Tam, Sa Math, Ahyoung Lee, Seokhoon Kim. Computers, Materials & Continua, SCIE EI, 2022, No. 5, pp. 3319-3335 (17 pages)
Federated learning (FL) activates distributed on-device computation techniques to model better algorithm performance through the interaction of local model updates and global model distributions in aggregation-averaging processes. However, in large-scale heterogeneous Internet of Things (IoT) cellular networks, massive multi-dimensional model update iterations and resource-constrained computation are challenging aspects to be tackled. This paper introduces a system model converging software-defined networking (SDN) and network functions virtualization (NFV) to enable device/resource abstractions and provide NFV-enabled edge FL (eFL) aggregation servers for advancing automation and controllability. Multi-agent deep Q-networks (MADQNs) target to enforce self-learning softwarization, optimize resource allocation policies, and advocate computation offloading decisions. With gathered network conditions and resource states, the proposed agent aims to explore various actions for estimating expected long-term rewards in a particular state observation. In the exploration phase, optimal actions for joint resource allocation and offloading decisions in different possible states are obtained by maximum Q-value selections. An action-based virtual network function (VNF) forwarding graph (VNFFG) is orchestrated to map VNFs towards an eFL aggregation server with sufficient communication and computation resources in the NFV infrastructure (NFVI). The proposed scheme indicates deficient allocation actions, modifies the VNF backup instances, and reallocates the virtual resources for the exploitation phase. A deep neural network (DNN) is used as a value-function approximator, and an epsilon-greedy algorithm balances exploration and exploitation. The scheme primarily considers the criticalities of FL model services and congestion states to optimize the long-term policy. Simulation results present the outperformance of the proposed scheme over reference schemes in terms of Quality of Service (QoS) performance metrics, including packet drop ratio, packet drop counts, packet delivery ratio, delay, and throughput.
Keywords: Deep Q-networks, federated learning, network functions virtualization, quality of service, software-defined networking
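The epsilon-greedy balance mentioned in this abstract can be sketched in a few lines; the Q-values and action count below are toy assumptions, not the paper's state/action spaces:

```python
# Minimal sketch: epsilon-greedy action selection over a table of Q-values,
# as used to balance exploration (random action) and exploitation (max-Q action).
import random

def select_action(q_values, epsilon, rng=random.Random(0)):
    """With probability epsilon pick a random action, else the max-Q action."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

# Q-values for three hypothetical joint allocation/offloading actions
# in one observed state.
q_state = [0.2, 1.5, -0.3]

greedy = select_action(q_state, epsilon=0.0)  # pure exploitation -> action 1
```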
3. Context Awareness by Noise-Pattern Analysis of a Smart Factory
Authors: So-Yeon Lee, Jihoon Park, Dae-Young Kim. Computers, Materials & Continua, SCIE EI, 2023, No. 8, pp. 1497-1514 (18 pages)
Recently, to build smart factories, research has been conducted on fault diagnosis and defect detection based on the vibration and noise signals generated when a mechanical system is driven, using deep-learning technology, a field of artificial intelligence. Most related studies apply various audio-feature extraction techniques to one-dimensional raw data to extract sound-specific features, and then classify the sound using the derived spectral image as a training dataset. However, compared to numerical raw data, learning based on image data has the disadvantage that creating a training dataset is very time-consuming. Therefore, we devised a two-step data preprocessing method that efficiently detects machine anomalies in numerical raw data. In the first preprocessing step, sound-signal information is analyzed to extract features; in the second step, data filtering is performed by applying the proposed algorithm. An efficient dataset was built for model learning through these two preprocessing steps. Both datasets yielded excellent training accuracy, but building the image dataset took 203 s compared to 39 s for the proposed numerical dataset, about 5.2 times longer.
Keywords: Noise-pattern recognition, context awareness, deep learning, fault detection, smart factory
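The first preprocessing step, extracting features directly from 1-D numerical sound data instead of building spectral images, might look like the following sketch; the specific features (RMS, peak, zero crossings) are common signal statistics chosen here for illustration, not necessarily the paper's feature set:

```python
# Illustrative first-step preprocessing: simple statistics from 1-D raw sound
# data, avoiding the cost of generating spectral-image training sets.
import numpy as np

def extract_features(signal):
    signal = np.asarray(signal, dtype=float)
    rms = float(np.sqrt(np.mean(signal ** 2)))            # overall energy
    peak = float(np.max(np.abs(signal)))                  # maximum amplitude
    zero_crossings = int(np.sum(np.diff(np.sign(signal)) != 0))
    return {"rms": rms, "peak": peak, "zero_crossings": zero_crossings}

features = extract_features([1.0, -1.0, 1.0, -1.0])
```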
4. Edge Cloud Selection in Mobile Edge Computing (MEC)-Aided Applications for Industrial Internet of Things (IIoT) Services
Authors: Dae-Young Kim, SoYeon Lee, MinSeung Kim, Seokhoon Kim. Computer Systems Science & Engineering, SCIE EI, 2023, No. 11, pp. 2049-2060 (12 pages)
In many IIoT architectures, various devices connect to the edge cloud via gateway systems, and numerous data are delivered to the edge cloud for processing. Delivering data to an appropriate edge cloud is critical to improving IIoT service efficiency. There are two types of costs in this kind of IoT network: a communication cost and a computing cost. For service efficiency, the communication cost of data transmission should be minimized, and the computing cost in the edge cloud should also be minimized. Therefore, in this paper, the communication cost of data transmission is defined as a delay factor, and the computing cost in the edge cloud is defined as the waiting time due to computing intensity. The proposed method selects the edge cloud that minimizes the total of the communication and computing costs; that is, a device chooses a routing path to the selected edge cloud based on these costs. The proposed method controls the data flows in a mesh-structured network and appropriately distributes the data-processing load. Its performance is validated through extensive computer simulation. When the transition probability from good to bad is 0.3 and from bad to good is 0.7 in the wireless and edge cloud states, the proposed method reduced both the average delay and the service pause counts to about 25% of those of the existing method.
Keywords: Industrial Internet of Things (IIoT) network, IIoT service, mobile edge computing (MEC), edge cloud selection, MEC-aided application
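The core selection rule, minimizing communication delay plus computing wait time, reduces to an argmin over candidate clouds; the cost values below are made-up illustrations of the trade-off:

```python
# Sketch of the selection rule: choose the edge cloud minimizing
# (communication delay) + (computing wait time). Cost values are toy numbers.
def select_edge_cloud(clouds):
    """clouds: {name: (comm_delay, compute_wait)}; returns the min-total-cost name."""
    return min(clouds, key=lambda c: clouds[c][0] + clouds[c][1])

candidates = {
    "edge-A": (4.0, 9.0),    # close to the device, but heavily loaded
    "edge-B": (6.0, 3.0),    # slightly farther, lightly loaded
    "edge-C": (10.0, 2.0),
}
best = select_edge_cloud(candidates)  # edge-B: lowest total cost (9.0)
```

Note the design point the abstract makes: a nearby cloud ("edge-A") is not chosen when its computing wait dominates, which is how the load gets distributed across the mesh.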
5. Adaptive Partial Task Offloading and Virtual Resource Placement in SDN/NFV-Based Network Softwarization
Authors: Prohim Tam, Sa Math, Seokhoon Kim. Computer Systems Science & Engineering, SCIE EI, 2023, No. 5, pp. 2141-2154 (14 pages)
Edge intelligence brings the deployment of applied deep learning (DL) models to edge computing systems to alleviate core backbone network congestion. The setup of programmable software-defined networking (SDN) control and elastic virtual computing resources within network functions virtualization (NFV) cooperate to enhance the applicability of intelligent edge softwarization. To advance multi-dimensional model task offloading in edge networks with SDN/NFV-based control softwarization, this study proposes a DL mechanism that recommends the optimal edge node selection using the primary features of congestion windows, link delays, and allocatable bandwidth capacities. An adaptive partial task offloading policy uses the DL-based recommendation to modify efficient virtual resource placement, minimizing completion time and the termination drop ratio. The optimization problem of resource placement is tackled by a deep reinforcement learning (DRL)-based policy following the Markov decision process (MDP). The agent observes the state spaces and applies value-maximized actions over the available computation resources and adjustable resource-allocation steps. The reward formulation primarily considers task-required computing resources and the properties of applied allocation actions. With the defined resource-determination policies, the orchestration procedure is configured within each virtual network function (VNF) descriptor using the topology and orchestration specification for cloud applications (TOSCA) by specifying the allocated properties. The simulation of control rule installation is conducted using Mininet and the Ryu SDN controller. Average delay and task delivery/drop ratios are used as the key performance metrics.
Keywords: Deep learning, partial task offloading, software-defined networking, virtual machine, virtual network functions
6. Design of Evolutionary Algorithm Based Energy Efficient Clustering Approach for Vehicular Adhoc Networks
Authors: V. Dinesh, S. Srinivasan, Gyanendra Prasad Joshi, Woong Cho. Computer Systems Science & Engineering, SCIE EI, 2023, No. 7, pp. 687-699 (13 pages)
In a vehicular ad hoc network (VANET), a massive quantity of data needs to be transmitted on a large scale within short time durations. At the same time, vehicles exhibit high velocity, leading to frequent vehicle disconnections. Both of these characteristics result in unreliable data communication in VANETs. A vehicle clustering algorithm groups vehicles to enhance network scalability and connection reliability, and clustering is considered one possible solution for attaining effective interaction in VANETs. One difficulty, however, is keeping the cluster count low as the number of transmitting nodes increases. This article introduces an Evolutionary Hide Objects Game Optimization based Distance Aware Clustering (EHOGO-DAC) scheme for VANETs. The major intention of the EHOGO-DAC technique is to partition the VANET into distinct sets of clusters by grouping vehicles. The technique is based on the HOGO algorithm, which is inspired by old games in which a searching agent tries to identify hidden objects in a given space. The EHOGO-DAC technique derives a fitness function for the clustering process that includes the total number of clusters and the Euclidean distance. The experimental assessment of the EHOGO-DAC technique was carried out under distinct aspects, and the comparison outcomes show its enhanced results compared to recent approaches.
Keywords: Vehicular networks, clustering, evolutionary algorithm, fitness function, distance metric
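A fitness function "including the total number of clusters and Euclidean distance," as the abstract states, could be sketched as a weighted sum; the weights, the helper names, and the toy topology below are all assumptions for illustration:

```python
# Hypothetical clustering fitness in the spirit described above: lower is
# better, trading off cluster count against member-to-head Euclidean distance.
import math

def fitness(cluster_heads, assignments, positions, w1=0.5, w2=0.5):
    """cluster_heads: {id: (x, y)}; assignments: {vehicle: head_id};
    positions: {vehicle: (x, y)}. Weights w1/w2 are assumed, not the paper's."""
    dists = [math.dist(positions[v], cluster_heads[h])
             for v, h in assignments.items()]
    return w1 * len(cluster_heads) + w2 * (sum(dists) / len(dists))

heads = {0: (0.0, 0.0), 1: (10.0, 0.0)}
vehicles = {"a": (1.0, 0.0), "b": (0.0, 1.0), "c": (9.0, 0.0)}
assign = {"a": 0, "b": 0, "c": 1}
score = fitness(heads, assign, vehicles)  # 0.5*2 clusters + 0.5*1.0 mean dist
```

An evolutionary search such as HOGO would then minimize this score over candidate head sets and assignments.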
7. Machine-Learning Based Packet Switching Method for Providing Stable High-Quality Video Streaming in Multi-Stream Transmission
Authors: Yumin Jo, Jongho Paik. Computers, Materials & Continua, SCIE EI, 2024, No. 3, pp. 4153-4176 (24 pages)
Broadcasting gateway equipment generally uses a method of simply switching to a spare input stream when a failure occurs in the main input stream. However, when the transmission environment is unstable, problems such as reduced equipment lifespan due to frequent switching, and interruption, delay, or stoppage of services may occur. It is therefore necessary to apply a machine learning (ML) method that can automatically judge and classify network-related service anomalies and switch multi-input signals without dropping or changing signals, by predicting or quickly determining the time of error occurrence for smooth stream switching when transmission errors arise. In this paper, we propose an intelligent packet switching method based on ML classification, one of the supervised learning methods, that presents the risk level of abnormal multi-streams occurring in broadcasting gateway equipment based on data. Furthermore, we subdivide the risk levels obtained from the classification techniques into probabilities, then derive vectorized representative values for each attribute of the collected input data and continuously update them. The obtained reference vector is used for the switching judgment through its cosine similarity with the input data obtained when a dangerous situation occurs. Broadcasting gateway equipment to which the proposed method is applied can perform more stable and smarter switching than before by solving equipment reliability problems and broadcasting accidents, and can maintain stable video streaming as well.
Keywords: Broadcasting and communication convergence, multi-stream packet switching, Advanced Television Systems Committee standard 3.0 (ATSC 3.0), data pre-processing, machine learning, cosine similarity
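The cosine-similarity switching judgment can be sketched directly; the attribute vectors and the 0.9 threshold below are assumptions, not values from the paper:

```python
# Sketch of the switching judgment: cosine similarity between the stored
# reference vector and the current input-attribute vector.
import math

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

reference = [0.8, 0.1, 0.2]    # learned representative attribute values
current = [0.79, 0.12, 0.18]   # attributes of the incoming stream
sim = cosine_similarity(reference, current)
switch = sim < 0.9             # assumed threshold: switch only on low similarity
```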
8. Malware Detection Using Dual Siamese Network Model
Authors: ByeongYeol An, JeaHyuk Yang, Seoyeon Kim, Taeguen Kim. Computer Modeling in Engineering & Sciences, SCIE EI, 2024, No. 10, pp. 563-584 (22 pages)
This paper proposes a new approach to counter cyberattacks that use the increasingly diverse malware seen in cyber security. Traditional signature detection methods that utilize static and dynamic features face limitations due to the continuous evolution and diversity of new malware. Recently, machine learning-based malware detection techniques, such as Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN), have gained attention. While these methods demonstrate high performance by leveraging static and dynamic features, they are limited in detecting new malware or variants because they learn from the characteristics of existing malware. To overcome these limitations, malware detection techniques employing One-Shot Learning and Few-Shot Learning have been introduced. Building on these, the Siamese Network, which can learn effectively from a small number of samples and perform predictions based on similarity rather than on the characteristics of the input data, enables the detection of new malware and variants. We propose a dual Siamese network-based detection framework that utilizes byte images, converted from malware binary data to grayscale, and opcode frequency-based images, generated after extracting opcodes and converting them into 2-gram frequencies. The framework integrates two independent Siamese network models, one learning from byte images and the other from opcode frequency-based images. The detection models, trained separately on the different kinds of images, apply the L1 distance measure to the output vectors the models generate, calculate the similarity, and then apply different weights to each model. Our proposed framework achieved malware detection accuracies of 95.9% and 99.83% in experiments using different malware datasets. The experimental results demonstrate that our model can effectively detect malware by utilizing two different types of features and the dual Siamese network-based model.
Keywords: Siamese network, malware detection, few-shot learning
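The fusion step, L1 distance on each branch's embeddings, mapped to a similarity, then weight-averaged, can be sketched as follows; the exp(-d) mapping and the 0.6/0.4 weights are assumptions standing in for whatever the trained models learn:

```python
# Assumed fusion sketch: each Siamese branch yields an embedding pair;
# L1 distance -> similarity in (0, 1]; branch scores are weight-averaged.
import numpy as np

def branch_similarity(emb_a, emb_b):
    """Map L1 distance between two embeddings to a similarity score in (0, 1]."""
    return float(np.exp(-np.sum(np.abs(np.asarray(emb_a) - np.asarray(emb_b)))))

def fused_score(byte_pair, opcode_pair, w_byte=0.6, w_opcode=0.4):
    return (w_byte * branch_similarity(*byte_pair)
            + w_opcode * branch_similarity(*opcode_pair))

# Toy embeddings: a near-identical pair vs. a clearly different pair.
same_family = fused_score(([1.0, 2.0], [1.0, 2.1]), ([0.5, 0.5], [0.5, 0.6]))
different = fused_score(([1.0, 2.0], [4.0, 7.0]), ([0.5, 0.5], [3.0, 2.0]))
```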
9. Implementation of a Subjective Visual Vertical and Horizontal Testing System Using Virtual Reality
Authors: Sungjin Lee, Min Hong, Hongly Va, Ji-Yun Park. Computers, Materials & Continua, SCIE EI, 2021, No. 6, pp. 3669-3679 (11 pages)
Subjective visual vertical (SVV) and subjective visual horizontal (SVH) tests can be used to evaluate the perception of verticality and horizontality, respectively, and can aid the diagnosis of otolith dysfunction in clinical practice. In this study, screen-version SVV and SVH tests are implemented using virtual reality (VR) equipment; the proposed test method promotes a more immersive feeling for the subject while using a simple equipment configuration and offering excellent mobility. To verify the performance of the proposed VR-based SVV and SVH tests, a reliable comparison was made between the traditional screen-based tests and the proposed method, based on 30 healthy subjects. The average results of our experimental tests on the VR-based binocular SVV and SVH equipment were −0.15° ± 1.74° and 0.60° ± 1.18°, respectively. The proposed VR-based method satisfies the normal tolerance for horizontal or vertical lines, i.e., a ±3° error, as defined in previous studies, and it can be used to replace existing test methods.
Keywords: Subjective visual vertical, subjective visual horizontal, virtual reality, Unity3D, FOVE HMD, vestibular function tests, diagnostic equipment
10. CT-NET: A Novel Convolutional Transformer-Based Network for Short-Term Solar Energy Forecasting Using Climatic Information (Cited: 1)
Authors: Muhammad Munsif, Fath U Min Ullah, Samee Ullah Khan, Noman Khan, Sung Wook Baik. Computer Systems Science & Engineering, SCIE EI, 2023, No. 11, pp. 1751-1773 (23 pages)
Photovoltaic (PV) systems are environmentally friendly, generate green energy, and receive support from policies and organizations. However, weather fluctuations make large-scale PV power integration and management challenging despite the economic benefits. Existing PV forecasting techniques (sequential and convolutional neural networks (CNN)) are sensitive to environmental conditions, reducing energy distribution system performance. To handle these issues, this article proposes an efficient, weather-resilient convolutional-transformer-based network (CT-NET) for accurate and efficient PV power forecasting. The network consists of three main modules. First, the acquired PV generation data are forwarded to the pre-processing module for data refinement. Next, to carry out data encoding, a CNN-based multi-head attention (MHA) module is developed, in which a single MHA is used to decode the encoded data. The encoder module is mainly composed of 1D convolutional and MHA layers, which extract local as well as contextual features, while the decoder part includes MHA and feedforward layers to generate the final prediction. Finally, the performance of the proposed network is evaluated using standard error metrics, including the mean squared error (MSE), root mean squared error (RMSE), and mean absolute percentage error (MAPE). An ablation study and comparative analysis with several competitive state-of-the-art approaches revealed lower error rates in terms of MSE (0.0471), RMSE (0.2167), and MAPE (0.6135) over publicly available benchmark data. In addition, the proposed model is less complex, with the lowest number of parameters (0.0135 M), size (0.106 MB), and inference time (2 ms/step), suggesting that it is easy to integrate into the smart grid.
Keywords: Solar energy forecasting, renewable energy systems, photovoltaic generation forecasting, time series data, transformer models, deep learning, machine learning
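The three reported error metrics follow their standard definitions, sketched below on toy values (the actual/forecast numbers are made up; the formulas are the standard ones):

```python
# Standard forecasting error metrics as reported in the abstract.
import math

def mse(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    return math.sqrt(mse(y_true, y_pred))

def mape(y_true, y_pred):
    """Mean absolute percentage error; y_true must be nonzero."""
    return 100.0 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)

actual = [100.0, 200.0]    # toy PV output (e.g., kW)
forecast = [110.0, 190.0]
```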
11. Cardiac CT Image Segmentation for Deep Learning-Based Coronary Calcium Detection Using K-Means Clustering and Grabcut Algorithm (Cited: 1)
Authors: Sungjin Lee, Ahyoung Lee, Min Hong. Computer Systems Science & Engineering, SCIE EI, 2023, No. 8, pp. 2543-2554 (12 pages)
Specific medical data is limited in that samples are few and not standardized; to overcome these limitations, it is necessary to study how to process such limited amounts of data efficiently. In this paper, deep learning methods for automatically determining cardiovascular diseases are described, and an effective preprocessing method for CT images that improves deep learning performance is presented. Cardiac CT images include several parts of the body, such as the heart, lungs, spine, and ribs. The preprocessing step proposed in this paper divides CT image data into regions of interest and other regions using K-means clustering and the GrabCut algorithm. We compared the deep learning performance of the original data, data using only K-means clustering, and data using both K-means clustering and the GrabCut algorithm. All data used in this paper were collected at Soonchunhyang University Cheonan Hospital in Korea, and the experimental tests proceeded with IRB approval. Training was conducted using the ResNet 50, VGG, and Inception ResNet V2 models, with ResNet 50 achieving the best accuracy in validation and testing. Through the proposed preprocessing process, the accuracy of the deep learning models was significantly improved, by at least 10% and up to 40%.
Keywords: Deep learning, VGG, ResNet, CT image processing
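The K-means half of the preprocessing idea can be reduced to a 1-D sketch on pixel intensities; the deterministic initialization and the toy pixel values are simplifying assumptions (the paper works on full 2-D CT slices):

```python
# Simplified sketch: 1-D K-means (k=2) on pixel intensities, splitting a slice
# into candidate region-of-interest vs. background clusters.
import numpy as np

def kmeans_1d(values, k=2, iters=10):
    values = np.asarray(values, dtype=float)
    centers = np.linspace(values.min(), values.max(), k)  # deterministic init
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

pixels = [12, 10, 14, 200, 205, 198]   # dark background vs. bright structure
labels, centers = kmeans_1d(pixels)
```

GrabCut would then refine the coarse cluster boundary; that step needs a full image library and is omitted here.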
12. LoRa Backscatter Network Efficient Data Transmission Using RF Source Range Control
Authors: Dae-Young Kim, SoYeon Lee, Seokhoon Kim. Computers, Materials & Continua, SCIE EI, 2023, No. 2, pp. 4015-4025 (11 pages)
Networks based on backscatter communication provide wireless data transmission in the absence of a power source. A backscatter device receives a radio frequency (RF) source and creates a backscattered signal that delivers data; this enables new services in battery-less domains with massive Internet of Things (IoT) devices. Such connectivity is highly energy-efficient in the context of massive IoT applications, and outdoors, long-range (LoRa) backscattering facilitates large IoT services. A backscatter network supports timeslot- and contention-based transmission. Timeslot-based transmission guarantees data transmission but does not scale to varying numbers of transmitting devices; if contention-based transmission is used, collisions are unavoidable. To reduce collisions and increase transmission efficiency, the number of devices transmitting data must be controlled. To control device activation, the RF source range can be modulated by adjusting the RF source power during LoRa backscatter. This reduces the number of transmitting devices, and thus collisions and retransmissions, thereby improving transmission efficiency. We performed extensive simulations to evaluate the performance of our method.
Keywords: Backscatter communication, LoRa backscatter, RF source range control, activated device control, Internet of Things
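How transmit power maps to activation range can be illustrated with a log-distance path-loss model; this model and all its constants are assumptions for intuition, not the propagation model used in the paper:

```python
# Assumed log-distance path-loss illustration: lowering the RF source transmit
# power shrinks the radius within which backscatter tags can activate.
def activation_range(tx_dbm, sensitivity_dbm, pl0_db=40.0, n=2.7, d0=1.0):
    """Largest distance d (meters) with received power >= sensitivity, under
    rx = tx - pl0 - 10*n*log10(d/d0). All constants are hypothetical."""
    margin = tx_dbm - pl0_db - sensitivity_dbm
    return d0 * 10 ** (margin / (10 * n))

wide = activation_range(tx_dbm=30.0, sensitivity_dbm=-20.0)    # high power
narrow = activation_range(tx_dbm=20.0, sensitivity_dbm=-20.0)  # reduced power
```

Fewer tags inside the smaller radius means fewer simultaneous contenders, which is the collision-reduction lever the abstract describes.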
13. Real-Time Prediction Algorithm for Intelligent Edge Networks with Federated Learning-Based Modeling
Authors: Seungwoo Kang, Seyha Ros, Inseok Song, Prohim Tam, Sa Math, Seokhoon Kim. Computers, Materials & Continua, SCIE EI, 2023, No. 11, pp. 1967-1983 (17 pages)
Intelligent healthcare networks represent a significant component of digital applications, where the requirements include quality-of-service (QoS) reliability and privacy safeguards. This paper addresses these requirements through the integration of enabler paradigms, including federated learning (FL), cloud/edge computing, software-defined/virtualized networking infrastructure, and converged prediction algorithms. The study focuses on achieving reliability and efficiency in real-time prediction models, which depend on the interaction flows and network topology. In response to these challenges, we introduce a modified version of federated logistic regression (FLR) that takes into account convergence latencies and the accuracy of the final FL model within healthcare networks. To establish the FLR framework for mission-critical healthcare applications, we provide a comprehensive workflow covering framework setup, iterative round communications, and model evaluation/deployment. Our optimization process delves into the formulation of loss functions and gradients within the domain of federated optimization, concluding with the generation of service experience batches for model deployment. To assess the practicality of our approach, we conducted experiments using a hypertension prediction model with data sourced from the 2019 annual dataset (Version 2.0.1) of the Korea Medical Panel Survey. Performance metrics, including end-to-end execution delays, model drop/delivery ratios, and final model accuracies, are captured and compared between the proposed FLR framework and other baseline schemes. Our study offers an FLR framework setup for enhancing real-time prediction modeling within intelligent healthcare networks, addressing the critical demands of QoS reliability and privacy preservation.
Keywords: Edge computing, federated logistic regression, intelligent healthcare networks, prediction modeling, privacy-aware and real-time learning
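The aggregation-averaging step at the heart of any FLR round can be sketched as a sample-count-weighted mean of client weight vectors (standard FedAvg); the client weights and sizes below are toy values:

```python
# Sketch of the FL aggregation-averaging step for logistic-regression weights:
# the global model is the sample-count-weighted mean of client updates.
import numpy as np

def fedavg(client_weights, client_sizes):
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack([np.asarray(w, dtype=float) for w in client_weights])
    return (stacked * sizes[:, None]).sum(axis=0) / sizes.sum()

# Two hypothetical hospitals' local updates (bias term omitted for brevity):
# the larger client (200 samples) pulls the global weights toward its update.
w_global = fedavg([[0.0, 0.0], [3.0, 3.0]], client_sizes=[100, 200])
```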
14. Metaheuristics Based Node Localization Approach for Real-Time Clustered Wireless Networks
Authors: R. Bhaskaran, P. S. Sujith Kumar, G. Shanthi, L. Raja, Gyanendra Prasad Joshi, Woong Cho. Computer Systems Science & Engineering, SCIE EI, 2023, No. 1, pp. 1-17 (17 pages)
In recent times, real-time wireless networks have found applicability in several practical applications, such as smart cities, healthcare, surveillance, and environmental monitoring. At the same time, proper localization of nodes in real-time wireless networks helps to improve the overall functioning of networks. This study presents an Improved Metaheuristics based Energy Efficient Clustering with Node Localization (IM-EECNL) approach for real-time wireless networks. The proposed IM-EECNL technique involves two major processes, namely node localization and clustering. Firstly, a Chaotic Water Strider Algorithm based Node Localization (CWSANL) technique determines the unknown positions of the nodes. Secondly, an Oppositional Archimedes Optimization Algorithm based Clustering (OAOAC) technique is applied to accomplish energy efficiency in the network. The OAOAC technique derives a fitness function comprising residual energy, distance to cluster heads (CHs), distance to the base station (BS), and load. The performance validation of the IM-EECNL technique is carried out with respect to localization and energy efficiency, and a wide-ranging comparative analysis highlights its improved performance over recent approaches, with a maximum packet delivery ratio (PDR) of 0.985.
Keywords: Wireless networks, real-time applications, clustering, node localization, energy efficiency, metaheuristics
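The underlying localization problem, recovering an unknown node position from anchor positions and range estimates, can be sketched with linearized trilateration; note this closed-form sketch is a stand-in for intuition, not the paper's chaotic metaheuristic search:

```python
# Localization sketch (linearized trilateration): given anchor positions and
# range estimates, solve a least-squares system for the unknown node.
import numpy as np

def locate(anchors, dists):
    a = np.asarray(anchors, dtype=float)
    d = np.asarray(dists, dtype=float)
    # Subtract the first range equation from the rest to linearize:
    #   2*(a_i - a_0) . x = d_0^2 - d_i^2 + |a_i|^2 - |a_0|^2
    A = 2.0 * (a[1:] - a[0])
    b = d[0] ** 2 - d[1:] ** 2 + (a[1:] ** 2).sum(axis=1) - (a[0] ** 2).sum()
    return np.linalg.lstsq(A, b, rcond=None)[0]

anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
true_node = np.array([1.0, 1.0])
dists = [float(np.linalg.norm(true_node - np.array(p))) for p in anchors]
estimate = locate(anchors, dists)  # recovers (1.0, 1.0) from exact ranges
```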
15. Intelligent Real-Time IoT Traffic Steering in 5G Edge Networks (Cited: 2)
Authors: Sa Math, Prohim Tam, Seokhoon Kim. Computers, Materials & Continua, SCIE EI, 2021, No. 6, pp. 3433-3450 (18 pages)
In Next Generation Radio Networks (NGRN), there will be extremely massive connectivity with Heterogeneous Internet of Things (HetIoT) devices. Millimeter-Wave (mmWave) communications will become a potential core technology to increase the capacity of Radio Networks (RN) and enable Multiple-Input Multiple-Output (MIMO) Radio Remote Head (RRH) technology. However, challenging key issues in unfair radio resource handling remain unsolved when massive requests occur concurrently. The imbalance of resource utilization is one of the main issues, occurring when overloaded connectivity to the closest RRH brings exceeding requests. To handle this issue effectively, a Machine Learning (ML) algorithm plays an important role in steering the requests of massive IoT devices to RRHs according to their capacity conditions. This paper proposes dynamic RRH gateway steering based on a lightweight supervised learning algorithm, namely K-Nearest Neighbor (KNN), to improve communication Quality of Service (QoS) in real-time IoT networks. KNN supervises the model to classify and recommend user requests to optimal RRHs that preserve higher power. The experimental dataset was generated using computer software, and the simulation results illustrate a remarkable outperformance of the proposed scheme over conventional methods in terms of multiple significant QoS parameters, including communication reliability, latency, and throughput.
Keywords: Machine learning, Internet of Things, traffic steering, mobile edge computing
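A minimal KNN sketch of the steering idea: classify a new request to the RRH whose past accepted requests it most resembles. The two features (load level, distance) and the history samples are illustrative assumptions:

```python
# Minimal KNN (k=3) steering sketch: majority vote among the k most similar
# historical requests decides which RRH gateway serves the new request.
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """train: list of ((features...), rrh_label); returns majority label of k nearest."""
    nearest = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# (load_level, distance_to_rrh) samples labeled with the RRH that served them.
history = [((0.10, 1.0), "RRH-1"), ((0.20, 1.2), "RRH-1"), ((0.15, 0.8), "RRH-1"),
           ((0.90, 3.0), "RRH-2"), ((0.80, 2.5), "RRH-2"), ((0.85, 2.8), "RRH-2")]
target = knn_predict(history, (0.12, 1.1))
```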
A Lightweight Certificate-Based Aggregate Signature Scheme Providing Key Insulation 被引量:1
16
作者 Yong-Woon Hwang Im-Yeong Lee 《Computers, Materials & Continua》 SCIE EI 2021年第11期1747-1764,共18页
Recently,with the advancement of Information and Communications Technology(ICT),Internet of Things(IoT)has been connected to the cloud and used in industrial sectors,medical environments,and smart grids.However,if dat... Recently,with the advancement of Information and Communications Technology(ICT),Internet of Things(IoT)has been connected to the cloud and used in industrial sectors,medical environments,and smart grids.However,if data is transmitted in plain text when collecting data in an IoTcloud environment,it can be exposed to various security threats such as replay attacks and data forgery.Thus,digital signatures are required.Data integrity is ensured when a user(or a device)transmits data using a signature.In addition,the concept of data aggregation is important to efficiently collect data transmitted from multiple users(or a devices)in an industrial IoT environment.However,signatures based on pairing during aggregation compromise efficiency as the number of signatories increases.Aggregate signature methods(e.g.,identity-based and certificateless cryptography)have been studied.Both methods pose key escrow and key distribution problems.In order to solve these problems,the use of aggregate signatures in certificate-based cryptography is being studied,and studies to satisfy the prevention of forgery of signatures and other security problems are being conducted.In this paper,we propose a new lightweight signature scheme that uses a certificate-based aggregate signature and can generate and verify signed messages from IoT devices in an IoT-cloud environment.In this proposed method,by providing key insulation,security threats that occur when keys are exposed due to physical attacks such as side channels can be solved.This can be applied to create an environment in which data is collected safely and efficiently in IoT-cloud is environments. 展开更多
Keywords: Internet of Things; certificate-based; aggregate signature; key insulation; cloud; lightweight; physical attack
Download PDF
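The key-insulation idea in the abstract can be illustrated with a toy sketch: a long-term helper key stays offline, and a fresh signing key is derived for each time period, so that compromise of one period key exposes no other period. This is only a conceptual illustration using an HMAC as a stand-in; the paper's actual scheme is a certificate-based public-key aggregate signature, and all names here are hypothetical.

```python
import hashlib
import hmac

def derive_period_key(helper_key: bytes, period: int) -> bytes:
    """Derive the signing key for one time period from the offline helper key.
    Each period key is an independent PRF output, so exposing one period key
    reveals nothing about the keys of other periods."""
    return hmac.new(helper_key, b"period-%d" % period, hashlib.sha256).digest()

def sign(period_key: bytes, message: bytes) -> bytes:
    # Toy MAC-based "signature" standing in for the paper's signature algorithm.
    return hmac.new(period_key, message, hashlib.sha256).digest()

helper = b"long-term-helper-key-kept-offline"
k1 = derive_period_key(helper, 1)
k2 = derive_period_key(helper, 2)
sig = sign(k1, b"sensor reading 42")
```

A real key-insulated scheme would make the period keys public-key signing keys and let anyone verify with a single public key, which an HMAC cannot provide; the sketch only shows the period-isolation property.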
Parallel Cloth Simulation Using OpenGL Shading Language (Cited by: 1)
17
Authors: Hongly Va, Min-Hyung Choi, Min Hong. Computer Systems Science & Engineering (SCIE, EI), 2022, Issue 5, pp. 427-443 (17 pages)
The primary goal of cloth simulation is to express object behavior in a realistic manner and achieve real-time performance by following the fundamental concepts of physics. In general, the mass-spring system is applied to real-time cloth simulation with three types of springs. However, stiff cloth simulation using the mass-spring system requires a small integration time-step in order to use a large stiffness coefficient. Furthermore, to obtain stable behavior, constraint enforcement is used instead of maintaining the force of each spring. Constraint force computation involves solving a large sparse linear system. Because of this large computation, we implement a cloth simulation using adaptive constraint activation and deactivation techniques that combine the mass-spring system and the constraint enforcement method to prevent excessive elongation of the cloth. When the length of a spring is stretched or compressed beyond a defined threshold, the adaptive constraint activation and deactivation method deactivates the spring and generates an implicit constraint. A traditional method that uses a serial process on the Central Processing Unit (CPU) to solve the system in every frame cannot handle a complex cloth model in real time. Our simulation utilizes Graphics Processing Unit (GPU) parallel processing with a compute shader in the OpenGL Shading Language (GLSL) to solve the system effectively. In this paper, we design and implement a parallel method for cloth simulation, and experimentally compare the performance and behavior of the mass-spring system, constraint enforcement, and adaptive constraint activation and deactivation techniques using the GPU-based parallel method.
Keywords: adaptive constraint; cloth simulation; constraint enforcement; GLSL compute shader; mass-spring system; parallel GPU
Download PDF
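The mass-spring core described in the abstract can be sketched as a minimal semi-implicit Euler step with Hooke's-law spring forces. This is a CPU/NumPy illustration, not the paper's GLSL compute-shader implementation, and the function name and parameters are assumptions; it also shows why a large stiffness `k` forces a small time-step `dt` in explicit integration.

```python
import numpy as np

def step_mass_spring(pos, vel, springs, rest, k, mass, dt, gravity=-9.8):
    """One semi-implicit Euler step for particles connected by springs.
    pos, vel: (n, 2) arrays; springs: list of (i, j) index pairs;
    rest: rest length per spring; k: stiffness; mass: per-particle mass."""
    force = np.zeros_like(pos)
    force[:, 1] += mass * gravity          # gravity on every particle
    for s, (i, j) in enumerate(springs):
        d = pos[j] - pos[i]
        length = np.linalg.norm(d)
        if length > 1e-9:
            f = k * (length - rest[s]) * (d / length)  # Hooke's law
            force[i] += f
            force[j] -= f
    vel = vel + dt * force / mass
    pos = pos + dt * vel                   # position uses the updated velocity
    return pos, vel
```

For a stretched spring the force pulls both endpoints toward each other; with a very stiff spring the same step overshoots unless `dt` shrinks, which is the instability that motivates the constraint-enforcement approach in the paper.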
Bandwidth-Efficient Transmission Method for User View-Oriented Video Services
18
Authors: Minjae Seo, Jong-Ho Paik. Computers, Materials & Continua (SCIE, EI), 2020, Issue 12, pp. 2571-2589 (19 pages)
The trend in video viewing has been evolving beyond simply providing a multi-view option. Recently, a function has been added that allows selecting and viewing a clip from a multi-view service that captures a specific range or object. In particular, the free-view service is an extended concept of multi-view and provides a freer viewpoint. However, since numerous videos and additional data are required for its construction, all of the clips constituting the content cannot be provided simultaneously. Only certain clips are selected and provided to the user. If a provided clip is not the preferred one, a change request is made, and a delay occurs during retransmission from the server. Delays due to frequent re-requests degrade the overall quality of service. For free-view services, it is important to selectively transmit video according to the user's desired viewpoint and region of interest within the limited network of available videos. In this study, we propose a method of screening and providing the correct video based on the objects in the content. Building on a method for recognizing the object in each clip, we designed a method of setting clip priority based on the object's location at each viewpoint. Using this information during transmission and reception, the selected video can be rapidly recognized and changed. Herein, we present a service system configuration method and propose video selection examples for free-view services.
Keywords: free-viewpoint video; multi-view video coding; scene change; object co-detection; transmission method
Download PDF
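The clip-priority idea in the abstract, ranking candidate clips by how close each clip's detected object lies to the user's region of interest, can be sketched as follows. The coordinate frame, clip names, and distance metric are illustrative assumptions, not the paper's actual priority formula.

```python
import math

def rank_clips(clips, roi_center):
    """Rank candidate clips by distance between each clip's detected object
    centre and the user's region-of-interest centre. `clips` maps a clip id
    to an (x, y) object centre in a shared normalized frame (assumed layout)."""
    def dist(clip_id):
        x, y = clips[clip_id]
        return math.hypot(x - roi_center[0], y - roi_center[1])
    return sorted(clips, key=dist)

# Hypothetical clips with object centres detected per viewpoint.
clips = {"cam_left": (0.2, 0.5), "cam_center": (0.5, 0.5), "cam_right": (0.8, 0.5)}
order = rank_clips(clips, roi_center=(0.55, 0.5))
```

The server would transmit the highest-ranked clips first, so a viewpoint change request can usually be satisfied from already-delivered clips instead of triggering a retransmission round trip.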
Real-time Volume Preserving Constraints for Volumetric Model on GPU
19
Authors: Hongly Va, Min-Hyung Choi, Min Hong. Computers, Materials & Continua (SCIE, EI), 2022, Issue 10, pp. 831-848 (18 pages)
This paper presents a parallel method for simulating real-time 3D deformable objects using a volume-preserving mass-spring system on tetrahedral meshes. In general, the conventional mass-spring system is manipulated as a force-driven method because it is fast, simple to implement, and its parameters can be controlled. However, the springs in a traditional mass-spring system can be excessively elongated, which causes severe stability and robustness issues that lead to failure to restore shape, simulation blow-up, and large volume loss of the deformable object. In addition, a traditional method that uses a serial process on the central processing unit (CPU) to solve the system in every frame cannot handle a complex deformable object in real time. Therefore, first-order implicit constraint enforcement for a mass-spring model is utilized to achieve accurate visual realism of deformable objects with tight constraint error. In this paper, we apply a distance constraint and a volume conservation constraint to each tetrahedral element to improve the stability of deformable object simulation using the mass-spring system, so that objects behave like their real-world counterparts. To reduce computational complexity while ensuring stable simulation, we apply a method that utilizes the OpenGL compute shader, a part of the OpenGL Shading Language (GLSL) that executes on the graphics processing unit (GPU), to solve the numerical problems effectively. We applied the proposed methods to experimental volumetric models and compared the volume percentages of all objects. The average volume percentages of all models during simulation using the mass-spring system, the distance constraint, and the volume constraint method were 68.21%, 89.64%, and 98.70%, respectively. The proposed approaches successfully improve the stability of the mass-spring system, and the performance comparison from our experimental tests also shows that the GPU-based method is faster than the CPU-based implementation in all cases.
Keywords: deformable object simulation; mass-spring system; implicit constraint enforcement; volume conservation constraint; GPU parallel computing
Download PDF
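The per-tetrahedron volume conservation constraint described in the abstract can be written as C = V - V0, where V is the current signed volume from the scalar triple product and V0 is the rest volume; constraint enforcement then drives C toward zero. The sketch below shows only the constraint evaluation, not the implicit solve or the GPU implementation, and the function names are assumptions.

```python
import numpy as np

def tet_volume(p0, p1, p2, p3):
    """Signed volume of a tetrahedron via the scalar triple product:
    V = ((p1 - p0) x (p2 - p0)) . (p3 - p0) / 6."""
    return np.dot(np.cross(p1 - p0, p2 - p0), p3 - p0) / 6.0

def volume_constraint(p0, p1, p2, p3, rest_volume):
    """Constraint value C = V - V0; enforcement drives C toward zero so the
    element keeps its rest volume even under large deformation."""
    return tet_volume(p0, p1, p2, p3) - rest_volume
```

In a full solver, the gradient of C with respect to each vertex would feed the first-order implicit constraint-enforcement step, analogously to the distance constraint on edges.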
Enhanced Metaheuristics-Based Clustering Scheme for Wireless Multimedia Sensor Networks
20
Authors: R. Uma Mageswari, Sara A. Althubiti, Fayadh Alenezi, E. Laxmi Lydia, Gyanendra Prasad Joshi, Woong Cho. Computers, Materials & Continua (SCIE, EI), 2022, Issue 11, pp. 4179-4192 (14 pages)
Traditional Wireless Sensor Networks (WSNs) comprise cost-effective sensors that can send physical parameters of the target environment to an intended user. With the evolution of technology, multimedia sensor nodes have become a hot research topic, since they can continuously gather multimedia content and scalar data from the target domain. The existence of multimedia sensors, integrated with effective signal processing and multimedia source coding approaches, has led to the increased application of Wireless Multimedia Sensor Networks (WMSNs). This sort of network has the potential to capture, transmit, and receive multimedia content. Since energy is a major resource in a WMSN, novel clustering approaches are essential to deal with the adaptive topologies of WMSNs and prolong network lifetime. With this motivation, the current study develops an Enhanced Spider Monkey Optimization-based Energy-Aware Clustering Scheme (ESMO-EACS) for WMSN. The proposed ESMO-EACS model derives the ESMO algorithm by incorporating the concepts of the SMO algorithm and quantum computing. The proposed ESMO-EACS model involves the design of fitness functions using distinct input parameters for effective construction of clusters. A comprehensive experimental analysis was conducted to validate the effectiveness of the proposed ESMO-EACS technique in terms of different performance measures. The simulation outcomes established the superiority of the proposed ESMO-EACS technique over other methods under various measures.
Keywords: wireless multimedia sensor networks; clustering; spider monkey optimization algorithm; energy efficiency; metaheuristics; quantum computing
Download PDF
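The kind of fitness function the abstract mentions for cluster-head selection can be sketched as a weighted score over node parameters such as residual energy, distance to the base station, and node degree. The weights, parameters, and normalization below are illustrative assumptions, not the paper's actual ESMO-EACS fitness.

```python
def cluster_head_fitness(residual_energy, dist_to_base, neighbor_count,
                         w_energy=0.5, w_dist=0.3, w_degree=0.2):
    """Illustrative weighted fitness for cluster-head candidates: favour
    nodes with more residual energy, a shorter distance to the base
    station, and more neighbours. Inputs are assumed normalized to [0, 1]
    except neighbor_count, which is a raw degree."""
    return (w_energy * residual_energy
            - w_dist * dist_to_base
            + w_degree * neighbor_count)
```

A metaheuristic such as SMO would then search over candidate cluster-head assignments, keeping the assignment whose nodes maximize this score, so energy-rich, well-placed nodes carry the relay burden and network lifetime is extended.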