Journal Articles
514 articles found
1. Privacy-Preserving Information Fusion Technique for Device-to-Server-Enabled Communication in the Internet of Things: A Hybrid Approach
Authors: Amal Al-Rasheed, Rahim Khan, Tahani Alsaed, Mahwish Kundi, Mohamad Hanif Md. Saad, Mahidur R. Sarker. Computers, Materials & Continua (SCIE, EI), 2024, No. 7, pp. 1305-1323 (19 pages)
Due to the overwhelming characteristics of the Internet of Things (IoT) and its adoption in approximately every aspect of our lives, the concept of individual devices' privacy has gained prominent attention from both customers, i.e., people, and industries, as wearable devices collect sensitive information about patients (both admitted and outdoor) in smart healthcare infrastructures. In addition to privacy, outliers or noise are among the crucial issues directly correlated with IoT infrastructures, as most member devices are resource-limited and could generate or transmit false data that must be refined before processing, i.e., transmitting. Therefore, the development of privacy-preserving information fusion techniques is highly encouraged, especially those designed for smart IoT-enabled domains. In this paper, we present an effective hybrid approach that can refine raw data values captured by the respective member device before transmission while preserving its privacy through the differential privacy technique in IoT infrastructures. A sliding-window, i.e., δi-based, dynamic programming methodology is implemented at the device level to ensure precise and accurate detection of outliers or noisy data and to refine it prior to the respective transmission activity. Additionally, an appropriate privacy budget has been selected, which is enough to ensure the privacy of every individual module, i.e., a wearable device such as a smartwatch attached to the patient's body, while the end module, i.e., the server in this case, can extract important information with approximately the maximum level of accuracy. Moreover, refined data has been processed by adding appropriate noise through the Laplace mechanism to make it useless or meaningless for adversary modules in the IoT. The proposed hybrid approach is trusted from the perspectives of both the device's privacy and the integrity of the transmitted information. Simulation and analytical results have proved that the proposed privacy-preserving information fusion technique for wearable devices is an ideal solution for resource-constrained infrastructures such as IoT and the Internet of Medical Things, where both device privacy and information integrity are important. Finally, the proposed hybrid approach is proven against well-known intruder attacks, especially those related to the privacy of the respective device in IoT infrastructures.
Keywords: Internet of Things; information fusion; differential privacy; dynamic programming; Laplace function
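The pipeline this abstract describes, device-level sliding-window outlier refinement followed by Laplace-mechanism perturbation before transmission, can be sketched as follows. The window size, the median/MAD refinement rule, and the ε value are illustrative assumptions, not the paper's exact parameters.

```python
import math
import random
import statistics

def refine(readings, window=5, threshold=3.0):
    """Sliding-window outlier refinement (illustrative rule): a reading far
    from the window median, in MAD units, is replaced by that median."""
    refined = []
    for i, x in enumerate(readings):
        w = readings[max(0, i - window + 1): i + 1]
        med = statistics.median(w)
        mad = statistics.median([abs(v - med) for v in w]) or 1.0
        refined.append(med if abs(x - med) / mad > threshold else x)
    return refined

def laplace_perturb(value, sensitivity, epsilon):
    """Laplace mechanism: add noise with scale b = sensitivity / epsilon,
    drawn by inverse-CDF sampling."""
    b = sensitivity / epsilon
    u = random.random() - 0.5          # uniform on [-0.5, 0.5)
    noise = -b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return value + noise
```

A larger ε (weaker privacy) lets the server recover values more accurately; a smaller ε adds more noise, which is exactly the privacy/accuracy trade-off the abstract describes.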
2. A Parallel Hybrid Testing Technique for Tri-Programming Model-Based Software Systems
Authors: Huda Basloom, Mohamed Dahab, Abdullah Saad AL-Ghamdi, Fathy Eassa, Ahmed Mohammed Alghamdi, Seif Haridi. Computers, Materials & Continua (SCIE, EI), 2023, No. 2, pp. 4501-4530 (30 pages)
Recently, researchers have shown increasing interest in combining more than one programming model in systems running on high-performance computing (HPC) systems to achieve exascale performance by applying parallelism at multiple levels. Combining different programming paradigms, such as the Message Passing Interface (MPI), Open Multi-Processing (OpenMP), and Open Accelerators (OpenACC), can increase computation speed and improve performance. During the integration of multiple models, the probability of runtime errors increases, making their detection difficult, especially in the absence of testing techniques that can detect these errors. Numerous studies have been conducted to identify such errors, but no technique exists for detecting errors in three-level programming models. Despite the increasing research that integrates the three programming models, MPI, OpenMP, and OpenACC, no testing technology has been developed to detect the runtime errors, such as deadlocks and race conditions, that can arise from this integration. Therefore, this paper begins with a definition and explanation of the runtime errors resulting from integrating the three programming models that compilers cannot detect. For the first time, this paper presents a classification of operational errors that can result from the integration of the three models. This paper also proposes a parallel hybrid testing technique for detecting runtime errors in systems built in the C++ programming language that use the three programming models MPI, OpenMP, and OpenACC. This hybrid technique combines static and dynamic analysis, given that some errors can be detected statically, whereas others can only be detected at runtime; combining the two catches more errors than either alone. The proposed static analysis detects a wide range of error types in less time, whereas the potential errors that may or may not occur depending on the operating environment are left to the dynamic analysis, which completes the validation.
Keywords: software testing; hybrid testing technique; OpenACC; OpenMP; MPI; tri-programming model; exascale computing
3. Information Management in Disaster and Humanitarian Response: A Case in United Nations Office for the Coordination of Humanitarian Affairs
Authors: Solomon M. Zewde. Intelligent Information Management, 2023, No. 2, pp. 47-65 (19 pages)
To guarantee a unified response to disasters, humanitarian organizations work together via the United Nations Office for the Coordination of Humanitarian Affairs (OCHA). Although OCHA has made great strides to improve its information management and increase the availability of accurate, real-time data for disaster and humanitarian response teams, significant gaps persist. There are inefficiencies in the emergency management of data at every stage of its lifecycle: collection, processing, analysis, distribution, storage, and retrieval. Disaster risk reduction and disaster risk management are the two main tenets of the United Nations' worldwide plan for disaster management. Information systems are crucial because of the roles they play in capturing, processing, and transmitting data, yet the management of information is seldom discussed in published works. The goal of this study is to employ qualitative research methods to provide insight by facilitating an expanded comprehension of relevant contexts, phenomena, and individual experiences. Humanitarian workers and OCHA staffers will take part in the research; the study subjects will be chosen using a random selection procedure, and online surveys with both closed- and open-ended questions will be used to compile the data. UN OCHA offers a structure for the handling of information via which all humanitarian actors may contribute to the overall response. This research will enable UN OCHA to better gather, process, analyze, disseminate, store, and retrieve data in the event of a catastrophe or humanitarian crisis.
Keywords: information systems management; information management; UN OCHA (United Nations Office for the Coordination of Humanitarian Affairs); humanitarian emergency actors; disaster risk reduction; response; emergency management
4. Olive Leaf Disease Detection via Wavelet Transform and Feature Fusion of Pre-Trained Deep Learning Models
Authors: Mahmood A. Mahmood, Khalaf Alsalem. Computers, Materials & Continua (SCIE, EI), 2024, No. 3, pp. 3431-3448 (18 pages)
Olive trees are susceptible to a variety of diseases that can cause significant crop damage and economic losses. Early detection of these diseases is essential for effective management. We propose a novel transformed-wavelet, feature-fused, pre-trained deep learning model for detecting olive leaf diseases. The proposed model combines wavelet transforms with pre-trained deep learning models to extract discriminative features from olive leaf images. The model has four main phases: preprocessing using data augmentation, three-level wavelet transformation, learning using pre-trained deep learning models, and a fused deep learning model. In the preprocessing phase, the image dataset is augmented using techniques such as resizing, rescaling, flipping, rotation, zooming, and contrast adjustment. In the wavelet transformation phase, the augmented images are decomposed into three frequency levels. Three pre-trained deep learning models, EfficientNet-B7, DenseNet-201, and ResNet-152-V2, are used in the learning phase; they were trained on the approximation images of the third-level sub-band of the wavelet transform. In the fusion phase, the fused model consists of a merge layer, three dense layers, and two dropout layers. The proposed model was evaluated on a dataset of images of healthy and infected olive leaves. It achieved an accuracy of 99.72% in the diagnosis of olive leaf diseases, which exceeds the accuracy of other methods reported in the literature. This finding suggests that our proposed method is a promising tool for the early detection of olive leaf diseases.
Keywords: olive leaf diseases; wavelet transform; deep learning; feature fusion
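The three-level decomposition feeding the pre-trained models can be illustrated with a minimal Haar transform. The abstract does not state which wavelet family is used, so Haar is an assumption here, and in practice a library such as PyWavelets would perform this step.

```python
import numpy as np

def haar_approx(img):
    """One level of the 2-D Haar transform, keeping only the low-low
    (approximation) band; image sides must be even."""
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    return (a + b + c + d) / 2.0   # orthonormal Haar scaling

def third_level_approximation(img):
    """The 'approximate image of the third-level sub-band' used for training."""
    for _ in range(3):
        img = haar_approx(img)
    return img
```

Each level halves both sides, so a 64 × 64 leaf image becomes an 8 × 8 approximation image after three levels.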
5. Ensemble Approach Combining Deep Residual Networks and BiGRU with Attention Mechanism for Classification of Heart Arrhythmias
Authors: Batyrkhan Omarov, Meirzhan Baikuvekov, Daniyar Sultan, Nurzhan Mukazhanov, Madina Suleimenova, Maigul Zhekambayeva. Computers, Materials & Continua (SCIE, EI), 2024, No. 7, pp. 341-359 (19 pages)
This research introduces an innovative ensemble approach, combining Deep Residual Networks (ResNets) and Bidirectional Gated Recurrent Units (BiGRU), augmented with an attention mechanism, for the classification of heart arrhythmias. The escalating prevalence of cardiovascular diseases necessitates advanced diagnostic tools that enhance accuracy and efficiency. The model leverages the deep hierarchical feature extraction capabilities of ResNets, which are adept at identifying intricate patterns within electrocardiogram (ECG) data, while BiGRU layers capture the temporal dynamics essential for understanding the sequential nature of ECG signals. The integration of an attention mechanism refines the model's focus on critical segments of ECG data, ensuring a nuanced analysis that highlights the most informative features for arrhythmia classification. Evaluated on a comprehensive dataset of 12-lead ECG recordings, our ensemble model demonstrates superior performance in distinguishing between various types of arrhythmias, with an accuracy of 98.4%, a precision of 98.1%, a recall of 98%, and an F-score of 98%. This novel combination of convolutional and recurrent neural networks, supplemented by attention-driven mechanisms, advances automated ECG analysis, contributing significantly to machine learning applications in healthcare and presenting a step forward in developing non-invasive, efficient, and reliable tools for the early diagnosis and management of heart diseases.
Keywords: CNN; BiGRU; ensemble deep learning; ECG; arrhythmia; heart disease
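The attention step, weighting BiGRU hidden states so the classifier focuses on the most informative ECG segments, reduces to a softmax-weighted sum. The scoring scheme below (a single scoring vector `w`) is one common choice, assumed here rather than taken from the paper.

```python
import numpy as np

def attention_pool(H, w):
    """Score each time step of H (T x d hidden states), softmax-normalize,
    and return the attention-weighted context vector plus the weights."""
    scores = H @ w
    scores = scores - scores.max()               # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()
    return alpha @ H, alpha
```

The context vector `alpha @ H` is what a downstream dense classifier would consume; the weights `alpha` also make the model interpretable, showing which ECG segments drove the decision.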
6. Optimizing Deep Learning for Computer-Aided Diagnosis of Lung Diseases: An Automated Method Combining Evolutionary Algorithm, Transfer Learning, and Model Compression
Authors: Hassen Louati, Ali Louati, Elham Kariri, Slim Bechikh. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 3, pp. 2519-2547 (29 pages)
Recent developments in computer vision have presented novel opportunities to tackle complex healthcare issues, particularly in the field of lung disease diagnosis. One promising avenue involves the use of chest X-rays, which are commonly utilized in radiology. To fully exploit their potential, researchers have suggested utilizing deep learning methods to construct computer-aided diagnostic systems. However, constructing and compressing these systems presents a significant challenge, as it relies heavily on the expertise of data scientists. To tackle this issue, we propose an automated approach that utilizes an evolutionary algorithm (EA) to optimize the design and compression of a convolutional neural network (CNN) for X-ray image classification. Our approach accurately classifies radiography images and detects potential chest abnormalities and infections, including COVID-19. Furthermore, our approach incorporates transfer learning, where a CNN model pre-trained on a vast dataset of chest X-ray images is fine-tuned for the specific task of detecting COVID-19. This method can reduce the amount of labeled data required for the task and enhance the overall performance of the model. We have validated our method via a series of experiments against state-of-the-art architectures.
Keywords: computer-aided diagnosis; deep learning; evolutionary algorithms; deep compression; transfer learning
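The automation the authors describe, letting an evolutionary algorithm search CNN design choices instead of a data scientist, follows the usual mutate/evaluate/select loop. The integer genome and toy fitness below are stand-ins; the paper's actual encoding covers CNN design and compression parameters, and its fitness would be validation performance.

```python
import random

def mutate(genome):
    """Perturb one architecture gene by +/-1 (toy integer encoding)."""
    g = dict(genome)
    key = random.choice(sorted(g))
    g[key] = max(1, g[key] + random.choice([-1, 1]))
    return g

def evolve(fitness, population, generations=200):
    """(mu + lambda)-style elitist loop: lower fitness is better."""
    for _ in range(generations):
        offspring = [mutate(p) for p in population]
        population = sorted(population + offspring, key=fitness)[:len(population)]
    return population[0]
```

With a real fitness, e.g. validation accuracy penalized by model size to reward compression, the same loop searches architectures without manual tuning.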
7. A Hybrid and Lightweight Device-to-Server Authentication Technique for the Internet of Things
Authors: Shaha Al-Otaibi, Rahim Khan, Hashim Ali, Aftab Ahmed Khan, Amir Saeed, Jehad Ali. Computers, Materials & Continua (SCIE, EI), 2024, No. 3, pp. 3805-3823 (19 pages)
The Internet of Things (IoT) is a smart networking infrastructure of physical devices, i.e., things, that are embedded with sensors, actuators, software, and other technologies to connect and share data with the respective server module. Although IoTs are cornerstones in different application domains, the authenticity of devices, i.e., of server(s) and ordinary devices, is the most crucial issue and must be resolved on a priority basis. Various field-proven methodologies have therefore been presented to streamline the verification process of communicating devices; however, to our knowledge, location-aware authentication has not been reported, even though it is a crucial metric, especially in scenarios where devices are mobile. This paper presents a lightweight and location-aware device-to-server authentication technique in which a device's membership with the nearest server is subject to its location information along with other measures. Initially, the Media Access Control (MAC) address and the Advanced Encryption Standard (AES), along with a 128-bit secret shared key λ_i, are utilized by a Trusted Authority (TA) to generate a MaskID for every device, i.e., server and member, which is used instead of the original ID and shared in the offline phase. Secondly, the TA shares a list of authentic devices, i.e., servers S_j and members C_i, with every device in the IoT for the onward verification process, which must be executed before the actual communication process is initialized. Additionally, every device must be located within the coverage area of a server, and this location information is used in the authentication process. A thorough analytical analysis was carried out to check the susceptibility of the proposed and existing authentication approaches against well-known intruder attacks, i.e., man-in-the-middle, masquerading, and device and server impersonation, especially in the IoT domain. Moreover, the proposed authentication scheme and existing state-of-the-art approaches were simulated in a real IoT environment to verify their performance in terms of various evaluation metrics, i.e., processing, communication, and storage overheads. These results have verified the superiority of the proposed scheme over existing state-of-the-art approaches in terms of communication, storage, and processing costs.
Keywords: Internet of Things; authenticity; security; location; communication
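The MaskID idea, replacing a device's real identifier with a pseudonym derived from its MAC address and the 128-bit shared key λ_i, can be sketched with standard-library primitives. The paper derives MaskIDs with AES; HMAC-SHA256 stands in here so the sketch needs no third-party crypto package, and the function names are hypothetical.

```python
import hashlib
import hmac

def make_mask_id(mac_address: str, shared_key: bytes) -> str:
    """Derive a stable 128-bit pseudonym (MaskID) from the device MAC and the
    shared key (AES in the paper; HMAC-SHA256 as a stand-in here)."""
    tag = hmac.new(shared_key, mac_address.encode(), hashlib.sha256).hexdigest()
    return tag[:32]  # 128 bits, hex-encoded

def verify_member(mask_id: str, mac_address: str, shared_key: bytes) -> bool:
    """Server-side check against the TA-distributed list: recompute the
    MaskID and compare in constant time."""
    return hmac.compare_digest(mask_id, make_mask_id(mac_address, shared_key))
```

Because the derivation is keyed, an eavesdropper who sees only MaskIDs learns neither the real MAC address nor how to forge a valid pseudonym, which is the impersonation resistance the abstract claims.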
8. Survey on digital twins for Internet of Vehicles: Fundamentals, challenges, and opportunities
Authors: Jiajie Guo, Muhammad Bilal, Yuying Qiu, Cheng Qian, Xiaolong Xu, Kim-Kwang Raymond Choo. Digital Communications and Networks (SCIE, CSCD), 2024, No. 2, pp. 237-247 (11 pages)
As autonomous vehicles and the other supporting infrastructures (e.g., smart cities and intelligent transportation systems) become more commonplace, the Internet of Vehicles (IoV) is getting increasingly prevalent. There have been attempts to utilize Digital Twins (DTs) to facilitate the design, evaluation, and deployment of IoV-based systems, for example by supporting high-fidelity modeling, real-time monitoring, and advanced predictive capabilities. However, the literature review undertaken in this paper suggests that integrating DTs into IoV-based system design and deployment remains an understudied topic. In addition, this paper explains how DTs can benefit IoV system designers and implementers, and describes several challenges and opportunities for future researchers.
Keywords: Internet of Vehicles; digital twin; simulation; traffic systems
9. Performance Comparison of Hyper-V and KVM for Cryptographic Tasks in Cloud Computing
Authors: Nader Abdel Karim, Osama A. Khashan, Waleed K. Abdulraheem, Moutaz Alazab, Hasan Kanaker, Mahmoud E. Farfoura, Mohammad Alshinwan. Computers, Materials & Continua (SCIE, EI), 2024, No. 2, pp. 2023-2045 (23 pages)
As the extensive use of cloud computing raises questions about the security of any personal data stored there, cryptography is being used more frequently as a security tool to protect data confidentiality and privacy in the cloud environment. A hypervisor is virtualization software used in cloud hosting to divide and allocate resources on various pieces of hardware. The choice of hypervisor can significantly impact the performance of cryptographic operations in the cloud environment. An important issue that must be carefully examined is that no hypervisor is completely superior in terms of performance; each hypervisor should be examined against specific needs. The main objective of this study is to provide accurate results comparing the performance of Hyper-V and the Kernel-based Virtual Machine (KVM) while implementing different cryptographic algorithms, to guide cloud service providers and end users in choosing the most suitable hypervisor for their cryptographic needs. This study evaluated the efficiency of the two hypervisors, Hyper-V and KVM, in implementing six cryptographic algorithms: Rivest-Shamir-Adleman (RSA), the Advanced Encryption Standard (AES), the Triple Data Encryption Standard (TripleDES), Carlisle Adams and Stafford Tavares (CAST-128), Blowfish, and Twofish. The study's findings show that KVM outperforms Hyper-V, with 12.2% less Central Processing Unit (CPU) use and 12.95% less time overall for encryption and decryption operations across various file sizes. The findings emphasize how crucial it is to pick a hypervisor appropriate for cryptographic needs in a cloud environment, which could assist both cloud service providers and end users. Future research may focus more on how various hypervisors perform while handling cryptographic workloads.
Keywords: cloud computing; performance; virtualization; hypervisors; Hyper-V; KVM; cryptographic algorithm
10. Artificial intelligence in physiological characteristics recognition for Internet of Things authentication
Authors: Zhimin Zhang, Huansheng Ning, Fadi Farha, Jianguo Ding, Kim-Kwang Raymond Choo. Digital Communications and Networks (SCIE, CSCD), 2024, No. 3, pp. 740-755 (16 pages)
Effective user authentication is key to ensuring equipment security, data privacy, and personalized services in Internet of Things (IoT) systems. However, conventional mode-based authentication methods (e.g., passwords and smart cards) may be vulnerable to a broad range of attacks (e.g., eavesdropping and side-channel attacks). Hence, there have been attempts to design biometric-based authentication solutions, which rely on physiological and behavioral characteristics. Behavioral characteristics need continuous monitoring and specific environmental settings, which can be challenging to implement in practice. However, we can also leverage Artificial Intelligence (AI) in the extraction and classification of physiological characteristics from IoT devices to facilitate authentication. Thus, we review the literature on the use of AI in physiological characteristics recognition published after 2015. We use the three-layer architecture of the IoT (i.e., sensing layer, feature layer, and algorithm layer) to guide the discussion of existing approaches and their limitations. We also identify a number of future research opportunities, which will hopefully guide the design of next-generation solutions.
Keywords: physiological characteristics recognition; artificial intelligence; Internet of Things; biological-driven authentication
11. Improving Prediction of Chronic Kidney Disease Using KNN Imputed SMOTE Features and TrioNet Model
Authors: Nazik Alturki, Abdulaziz Altamimi, Muhammad Umer, Oumaima Saidani, Amal Alshardan, Shtwai Alsubai, Marwan Omar, Imran Ashraf. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 6, pp. 3513-3534 (22 pages)
Chronic kidney disease (CKD) is a major health concern today, requiring early and accurate diagnosis. Machine learning has emerged as a powerful tool for disease detection, and medical professionals are increasingly using ML classifier algorithms to identify CKD early. This study explores the application of advanced machine learning techniques on a CKD dataset obtained from the University of California, Irvine (UCI) Machine Learning Repository. The research introduces TrioNet, an ensemble model combining extreme gradient boosting, random forest, and extra trees classifiers, which excels in providing highly accurate predictions for CKD. Furthermore, a K-nearest-neighbor (KNN) imputer is utilized to deal with missing values, while synthetic minority oversampling (SMOTE) is used for the class-imbalance problem. To ascertain the efficacy of the proposed model, a comprehensive comparative analysis is conducted with various machine learning models. The proposed TrioNet using the KNN imputer and SMOTE outperformed other models, with 98.97% accuracy for detecting CKD. This in-depth analysis demonstrates the model's capabilities and underscores its potential as a valuable tool in the diagnosis of CKD.
Keywords: precision medicine; chronic kidney disease detection; SMOTE; missing values; healthcare; KNN imputer; ensemble learning
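The KNN-imputation step can be sketched from scratch as below. Scikit-learn's `KNNImputer` is presumably what the study used; the distance-on-shared-features rule and the fallback of leaving a value missing when no usable neighbor exists are assumptions of this sketch.

```python
import numpy as np

def knn_impute(X, k=2):
    """Fill each NaN with the mean of that feature over the k nearest rows,
    where distance is computed on the features both rows observe."""
    X = np.asarray(X, dtype=float)
    filled = X.copy()
    for i in range(len(X)):
        miss = np.isnan(X[i])
        if not miss.any():
            continue
        candidates = []
        for j in range(len(X)):
            if i == j or np.isnan(X[j][miss]).any():
                continue  # a neighbor must observe the missing features
            shared = ~np.isnan(X[i]) & ~np.isnan(X[j])
            if shared.any():
                candidates.append((np.linalg.norm(X[i][shared] - X[j][shared]), j))
        neighbors = [j for _, j in sorted(candidates)[:k]]
        if neighbors:
            filled[i, miss] = X[neighbors][:, miss].mean(axis=0)
    return filled
```

Imputing before SMOTE matters: SMOTE interpolates between minority samples, so it needs a complete feature matrix to work on.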
12. ThyroidNet: A Deep Learning Network for Localization and Classification of Thyroid Nodules
Authors: Lu Chen, Huaqiang Chen, Zhikai Pan, Sheng Xu, Guangsheng Lai, Shuwen Chen, Shuihua Wang, Xiaodong Gu, Yudong Zhang. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 4, pp. 361-382 (22 pages)
Aim: This study aims to establish an artificial intelligence model, ThyroidNet, to accurately diagnose thyroid nodules using deep learning techniques. Methods: A novel method, ThyroidNet, is introduced and evaluated based on deep learning for the localization and classification of thyroid nodules. First, we propose the multitask TransUnet, which combines the TransUnet encoder and decoder with multitask learning. Second, we propose the DualLoss function, tailored to the thyroid nodule localization and classification tasks; it balances the learning of the two tasks to help improve the model's generalization ability. Third, we introduce strategies for augmenting the data. Finally, we present a novel deep learning model, ThyroidNet, to accurately detect thyroid nodules. Results: ThyroidNet was evaluated on private datasets and compared with other existing methods, including U-Net and TransUnet. Experimental results show that ThyroidNet outperformed these methods in localizing and classifying thyroid nodules, improving accuracy by 3.9% and 1.5%, respectively. Conclusion: ThyroidNet significantly improves the clinical diagnosis of thyroid nodules and supports medical image analysis tasks. Future research directions include optimization of the model structure, expansion of the dataset size, reduction of computational complexity and memory requirements, and exploration of additional applications of ThyroidNet in medical image analysis.
Keywords: ThyroidNet; deep learning; TransUnet; multitask learning; medical image analysis
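The abstract names DualLoss but does not give its form. The simplest way to "balance the learning of the localization and classification tasks" is a convex combination of a segmentation loss and a classification loss; the sketch below assumes soft Dice and cross-entropy as the two components, which is a common pairing, not necessarily the paper's.

```python
import numpy as np

def dice_loss(pred_mask, true_mask, eps=1e-6):
    """Soft Dice loss for the localization (segmentation) branch."""
    inter = (pred_mask * true_mask).sum()
    return 1.0 - (2.0 * inter + eps) / (pred_mask.sum() + true_mask.sum() + eps)

def cross_entropy(probs, label, eps=1e-12):
    """Cross-entropy for the nodule classification branch."""
    return -np.log(probs[label] + eps)

def dual_loss(pred_mask, true_mask, probs, label, alpha=0.5):
    """Hypothetical DualLoss: a weighted sum balancing the two tasks."""
    return alpha * dice_loss(pred_mask, true_mask) + (1 - alpha) * cross_entropy(probs, label)
```

Training both branches against one combined scalar is what lets the shared TransUnet encoder learn features useful for both localization and classification.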
13. CapsNet-FR: Capsule Networks for Improved Recognition of Facial Features
Authors: Mahmood Ul Haq, Muhammad Athar Javed Sethi, Najib Ben Aoun, Ala Saleh Alluhaidan, Sadique Ahmad, Zahid Farid. Computers, Materials & Continua (SCIE, EI), 2024, No. 5, pp. 2169-2186 (18 pages)
Face recognition (FR) technology has numerous applications in artificial intelligence, including biometrics, security, authentication, law enforcement, and surveillance. Deep learning (DL) models, notably convolutional neural networks (CNNs), have shown promising results in the field of FR. However, CNNs are easily fooled since they do not encode position and orientation correlations between features. Hinton et al. envisioned capsule networks as a more robust design capable of retaining pose information and spatial correlations to recognize objects more like the brain does. Lower-level capsules hold 8-dimensional vectors of attributes like position, hue, texture, and so on, which are routed to higher-level capsules via a new routing-by-agreement algorithm. This provides capsule networks with viewpoint invariance, which has previously evaded CNNs. This research presents an FR model based on capsule networks that was tested using the LFW dataset, the COMSATS face dataset, and our own photos acquired with cameras at 128 × 128, 40 × 40, and 30 × 30 pixels. The trained model outperforms state-of-the-art algorithms, achieving 95.82% test accuracy and performing well on unseen faces that have been blurred or rotated. Additionally, the suggested model outperformed recently released approaches on the COMSATS face dataset, achieving a high accuracy of 92.47%. Based on the results of this research as well as previous results, capsule networks perform better than deeper CNNs on unobserved altered data because of their special equivariance properties.
Keywords: CapsNet; face recognition; artificial intelligence
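The routing-by-agreement mechanism the abstract credits for viewpoint invariance can be sketched in a few lines: the squash non-linearity keeps capsule output lengths in (0, 1), and coupling coefficients are iteratively reinforced where lower-level predictions agree with the upper-level output. This is a bare numpy sketch of the published algorithm, not the authors' implementation.

```python
import numpy as np

def squash(s, eps=1e-9):
    """Capsule non-linearity: keeps direction, maps length into (0, 1)."""
    norm2 = float((s ** 2).sum())
    return (norm2 / (1.0 + norm2)) * s / np.sqrt(norm2 + eps)

def route(u_hat, iterations=3):
    """Dynamic routing by agreement.
    u_hat: (num_lower, num_upper, dim) prediction vectors."""
    b = np.zeros(u_hat.shape[:2])                              # routing logits
    for _ in range(iterations):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)   # coupling coeffs
        s = (c[..., None] * u_hat).sum(axis=0)                 # per-upper sums
        v = np.array([squash(sj) for sj in s])                 # squashed outputs
        b = b + (u_hat * v[None]).sum(-1)                      # agreement update
    return v
```

Because the output length encodes detection probability while the direction encodes pose attributes, routing preserves the spatial relationships that max-pooling in a CNN discards.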
14. Machine Learning Empowered Security and Privacy Architecture for IoT Networks with the Integration of Blockchain
Authors: Sohaib Latif, M. Saad Bin Ilyas, Azhar Imran, Hamad Ali Abosaq, Abdulaziz Alzubaidi, Vincent Karovic Jr. Intelligent Automation & Soft Computing, 2024, No. 2, pp. 353-379 (27 pages)
The Internet of Things (IoT) is growing rapidly and impacting almost every aspect of our lives, from wearables and healthcare to security, traffic management, and fleet management systems. This has generated massive volumes of data, and security and data privacy risks are increasing with the advancement of technology and network connections. Traditional access control solutions are inadequate for establishing access control in IoT systems to provide data protection, owing to their vulnerability to a single point of failure. Additionally, conventional privacy preservation methods have high latency costs and overhead for resource-constrained devices, and previous machine learning approaches were unable to detect denial-of-service (DoS) attacks. This study introduces a novel decentralized and secure framework with blockchain integration. To avoid a single point of failure, an accredited access control scheme is incorporated, combining blockchain with local peers to record each transaction and verify the signature for access. Blockchain-based attribute-based cryptography is implemented to protect data-storage privacy by generating threshold parameters, managing keys, and revoking users on the blockchain. An innovative contract-based DoS attack mitigation method is also incorporated to validate devices with smart contracts as trusted or untrusted, preventing the server from becoming overwhelmed. The proposed framework effectively controls access, safeguards data privacy, and reduces the risk of cyberattacks. The results show that the suggested framework outperforms comparable approaches in terms of accuracy, precision, sensitivity, recall, and F-measure, at 96.9%, 98.43%, 98.8%, 98.43%, and 98.4%, respectively.
Keywords: machine learning; Internet of Things; blockchain; data privacy; security; Industry 4.0
15. Building Custom Spreadsheet Functions with Python: End-User Software Engineering Approach
Authors: Tamer Bahgat Elserwy, Atef Tayh Nour El-Din Raslan, Tarek Ali, Mervat H. Gheith. Journal of Software Engineering and Applications, 2024, No. 5, pp. 246-258 (13 pages)
End-user computing empowers non-developers to manage data and applications, enhancing collaboration and efficiency. Spreadsheets are a prime example of an end-user programming environment and are widely used in business for data analysis. However, Excel's built-in functionality is limited compared to dedicated programming languages. This paper addresses this gap by proposing a prototype for integrating Python's capabilities into Excel through an on-premises desktop approach, building custom spreadsheet functions (CSFs) with Python and thereby avoiding the latency issues associated with cloud-based solutions. The prototype utilizes Excel-DNA and IronPython: Excel-DNA allows creating custom functions that integrate seamlessly with Excel's calculation engine, while IronPython enables the execution of these Python CSFs directly within Excel. C# and VSTO add-ins form the core components, facilitating communication between Python and Excel. This approach gives users a potentially open-ended set of Python CSFs for tasks like mathematical calculations, statistical analysis, and even predictive modeling, all within the familiar Excel interface. The prototype demonstrates smooth integration, allowing users to call Python CSFs just like standard Excel functions. This research contributes to enhancing spreadsheet capabilities for end-user programmers by leveraging Python's power within Excel. Future research could expand the CSFs toward complex calculations, statistical analysis, data manipulation, and external library integration, as well as explore integrating machine learning models through CSFs within the familiar Excel environment.
Keywords: End-User Software Engineering, Custom Spreadsheet Functions (CSFs)
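The abstract does not include the prototype's source code; as an illustration only, a Python function of the kind such a prototype might register as a CSF (the function name, signature, and statistical task are hypothetical, not taken from the paper) could look like:

```python
# Hypothetical example of a Python custom spreadsheet function (CSF)
# of the sort the prototype might expose to Excel via Excel-DNA and
# IronPython; the name and behavior are illustrative, not the paper's.

def moving_average(values, window):
    """Return the simple moving averages of a numeric range,
    as a CSF might compute them over a spreadsheet column."""
    values = [float(v) for v in values]
    if window <= 0 or window > len(values):
        raise ValueError("window must be between 1 and len(values)")
    return [
        sum(values[i:i + window]) / window
        for i in range(len(values) - window + 1)
    ]

print(moving_average([1, 2, 3, 4, 5], 2))  # [1.5, 2.5, 3.5, 4.5]
```

In the prototype's architecture, a function like this would be called from a cell just like a built-in Excel function, with the C#/VSTO layer marshalling the range arguments to the Python engine.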
Detection of Knowledge on Social Media Using Data Mining Techniques
16
Authors: Aseel Abdullah Alolayan, Ahmad A. Alhamed 《Open Journal of Applied Sciences》 2024, Issue 2, pp. 472-482 (11 pages)
In light of the rapid growth and development of social media, it has become a focus of interest in many scientific fields that seek to extract useful information, called knowledge, from it — for example, information about people's behaviors and interactions used to analyze sentiment or to understand the conduct of users or groups. This extracted knowledge plays an important role in decision-making; in creating and improving marketing objectives and competitive advantage; in monitoring political and economic events; and in development across many fields. To extract this knowledge, the vast amount of data found within social media must be analyzed using the most popular data mining techniques and applications related to social media sites.
Keywords: Data Mining, Knowledge, Data Mining Techniques, Social Media
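The abstract names no specific algorithm; a minimal sketch of one technique it alludes to — lexicon-based sentiment analysis of social media posts — could look like the following (the word lists and posts are invented for illustration, not from the paper):

```python
from collections import Counter

# Toy lexicon-based sentiment scoring, one common data mining technique
# for social media text; lexicons and posts are invented for illustration.
POSITIVE = {"great", "love", "good", "happy"}
NEGATIVE = {"bad", "hate", "awful", "sad"}

def sentiment(post):
    """Score a post as (#positive words) - (#negative words)."""
    words = post.lower().split()
    counts = Counter(
        "pos" if w in POSITIVE else "neg"
        for w in words if w in POSITIVE | NEGATIVE
    )
    return counts["pos"] - counts["neg"]

posts = ["I love this great product", "awful service, sad experience"]
scores = [sentiment(p) for p in posts]
print(scores)  # [2, -2]: positive posts score > 0, negative < 0
```

Real systems replace the hand-made lexicons with learned models and add tokenization and negation handling, but the pipeline shape — collect posts, extract features, aggregate scores for decision-making — is the same.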
How signaling and search costs affect information asymmetry in P2P lending: the economics of big data (Cited by: 6)
17
Authors: Jiaqi Yan, Wayne Yu, J. Leon Zhao 《Financial Innovation》 2015, Issue 1, pp. 279-289 (11 pages)
In the past decade, online peer-to-peer (P2P) lending platforms have transformed the lending industry, which has historically been dominated by commercial banks. Information technology breakthroughs such as big data-based financial technologies (Fintech) have been identified as important disruptive driving forces behind this paradigm shift. In this paper, we take an information economics perspective to investigate how big data affects the transformation of the lending industry. By identifying how big data analytics reduces signaling and search costs in credit risk management for P2P lending, we discuss how information asymmetry is reduced in the big data era. Rooted in the lending business, we propose a theory on the economics of big data and outline a number of research opportunities and challenging issues.
Keywords: Lending industry, P2P lending, Big data, Economics of big data, Fintech, Information economics
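As a toy numerical illustration of the search-cost argument (all numbers are invented, not from the paper): if big data signals raise a lender's screening precision — the probability that a screened applicant turns out to be creditworthy — the expected number of applicants examined before a match falls, and with it the expected search cost.

```python
# Toy illustration of the search-cost argument: with screening precision p,
# the expected number of applicants examined before finding a creditworthy
# borrower is 1/p (geometric search), so better big-data signals (higher p)
# lower the lender's expected search cost. Numbers are invented.

def expected_screens(precision):
    """Expected applicants screened before a creditworthy match."""
    if not 0 < precision <= 1:
        raise ValueError("precision must be in (0, 1]")
    return 1.0 / precision

cost_per_screen = 50.0  # hypothetical cost of evaluating one applicant
without_big_data = expected_screens(0.10) * cost_per_screen
with_big_data = expected_screens(0.40) * cost_per_screen
print(without_big_data, with_big_data)  # 500.0 125.0
```

The same geometric logic applies symmetrically to borrowers' signaling costs: cheaper, more credible signals shrink the information gap between the two sides of the platform.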
Robust control for a class of nonlinear networked systems with stochastic communication delays via sliding mode conception (Cited by: 2)
18
Authors: Lifeng Ma, Zidong Wang, Xuemin Chen, Zhi Guo 《控制理论与应用(英文版)》 (Journal of Control Theory and Applications) EI 2010, Issue 1, pp. 34-39 (6 pages)
This paper deals with the robust control problem for a class of uncertain nonlinear networked systems with stochastic communication delays via sliding mode control (SMC). A sequence of variables obeying the Bernoulli distribution is employed to model randomly occurring communication delays, which may differ across state variables. A discrete switching function that differs from those in the existing literature is first proposed. Then, sufficient conditions, expressed as the feasibility of a linear matrix inequality (LMI) with an equality constraint, are derived to ensure the globally mean-square asymptotic stability of the system dynamics on the sliding surface. A discrete-time SMC controller is then synthesized to guarantee the discrete-time sliding mode reaching condition with the specified sliding surface. Finally, a simulation example is given to show the effectiveness of the proposed method.
Keywords: Sliding mode control, Nonlinear systems, Networked systems, Stochastic communication delays
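The paper's simulation is not reproduced in the abstract; the following toy simulation only illustrates the setting it describes — state feedback where each measurement is delayed by one step according to a Bernoulli random variable. The scalar plant, gains, and switching law here are illustrative choices, not the controller designed in the paper.

```python
import random

# Toy discrete-time sketch of sliding mode control under Bernoulli-
# distributed one-step measurement delays; plant and gains are invented.
random.seed(0)

a = 0.5           # scalar plant: x(k+1) = a*x(k) + u(k)
k_gain = 0.5      # nominal feedback gain (cancels the plant dynamics)
eta = 0.1         # switching gain driving the state toward s = 0
delay_prob = 0.3  # Bernoulli probability that a measurement is delayed

def sign(v):
    return (v > 0) - (v < 0)

x_prev, x = 1.0, 1.0
for _ in range(200):
    delayed = random.random() < delay_prob  # Bernoulli delay indicator
    m = x_prev if delayed else x            # controller sees a stale state if delayed
    s = m                                   # sliding variable s(k) = x(k)
    u = -k_gain * m - eta * sign(s)         # equivalent control + switching term
    x_prev, x = x, a * x + u

converged = abs(x) < 1.0
print(converged)  # the state stays in a small band around the surface s = 0
```

With no delay each step gives x(k+1) = -eta*sign(x(k)), so the state chatters in a band of width eta around the sliding surface; the random delays perturb but do not destroy this behavior in the toy setup.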
Guest Editorial for Special Issue on Blockchain for Internet-of-Things and Cyber-Physical Systems (Cited by: 2)
19
Authors: Mohammad Mehedi Hassan, Giancarlo Fortino, Laurence T. Yang, Hai Jiang, Kim-Kwang Raymond Choo, Jun Jason Zhang, Fei-Yue Wang 《IEEE/CAA Journal of Automatica Sinica》 SCIE EI CSCD 2021, Issue 12, p. 1867 (1 page)
Cyber-physical systems (CPS) are increasingly commonplace, with applications in energy, health, transportation, and many other sectors. One of the major requirements in CPS is that the interaction between the cyber world and the man-made physical world (exchanging and sharing data and information with other physical objects and systems) must be safe, especially in bi-directional communications. In particular, security and privacy concerns must be suitably addressed in this human-in-the-loop CPS ecosystem. However, existing centralized architecture models in CPS, and in the more general IoT systems, have a number of limitations in terms of single point of failure, data privacy, security, robustness, etc. Such limitations reinforce the importance of designing reliable, secure, and privacy-preserving distributed solutions and other novel approaches, such as those based on blockchain technology, owing to its features (e.g., decentralization, transparency, and immutability of data). This is the focus of this special issue.
Keywords: Internet, Limitations, Transparency
Intrusion Detection Systems in Internet of Things and Mobile Ad-Hoc Networks (Cited by: 2)
20
Authors: Vasaki Ponnusamy, Mamoona Humayun, N. Z. Jhanjhi, Aun Yichiet, Maram Fahhad Almufareh 《Computer Systems Science & Engineering》 SCIE EI 2022, Issue 3, pp. 1199-1215 (17 pages)
Internet of Things (IoT) devices work mainly over wireless media, requiring Intrusion Detection System (IDS) solutions that leverage 802.11 header information for intrusion detection. Wireless-specific traffic features with high information gain are found primarily at the data link layer, rather than at the application layer as in wired networks. This survey investigates the complexities and challenges of deploying wireless IDS in terms of data collection methods, IDS techniques, IDS placement strategies, and traffic data analysis techniques. The paper's main finding highlights the lack of available network traces for training modern machine-learning models against IoT-specific intrusions. Specifically, the Knowledge Discovery in Databases (KDD) Cup dataset is reviewed to highlight the design challenges of wireless intrusion detection based on current data attributes, and several guidelines are proposed to future-proof traffic capture methods in the wireless network (WN). The paper starts with a review of intrusion detection techniques, data collection methods, and placement methods. Because of architectural complexities, deploying an IDS in a wireless environment is not as straightforward as in a wired network, so the paper reviews traditional wired IDS deployment methods, discusses how these techniques could be adapted to the wireless environment, and highlights the associated design challenges. The main wireless environments examined are Wireless Sensor Networks (WSN), Mobile Ad Hoc Networks (MANET), and IoT, as these are future trends and many attacks have targeted such networks, making it crucial to design an IDS specifically for wireless networks.
Keywords: Internet of Things, MANET, intrusion detection systems, wireless networks
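The survey's feature-selection criterion, information gain, can be sketched on a tiny invented dataset (the feature names and labels below are hypothetical, chosen only to mimic a link-layer 802.11 feature versus a weaker application-layer one):

```python
import math
from collections import Counter

# Toy information-gain computation of the kind used to rank traffic
# features for wireless intrusion detection; the labeled samples and
# the two candidate features are invented for illustration.
def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """Entropy of the labels minus the split-weighted remainder."""
    n = len(labels)
    remainder = 0.0
    for value in set(feature):
        subset = [lab for f, lab in zip(feature, labels) if f == value]
        remainder += (len(subset) / n) * entropy(subset)
    return entropy(labels) - remainder

labels = ["attack", "normal", "attack", "normal"]
deauth_flag = [1, 0, 1, 0]    # hypothetical 802.11 deauth-frame indicator
payload_high = [1, 0, 0, 0]   # hypothetical application-layer feature

ig_link = information_gain(deauth_flag, labels)
ig_app = information_gain(payload_high, labels)
print(ig_link > ig_app)  # the link-layer feature separates the classes better
```

Here the link-layer feature splits the classes perfectly (gain 1 bit), while the application-layer feature leaves most samples mixed, mirroring the survey's point that discriminative wireless features live below the application layer.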