Big data analytics has been widely adopted by large companies to achieve measurable benefits including increased profitability, customer demand forecasting, cheaper product development, and improved stock control. Small and medium-sized enterprises (SMEs) are the backbone of the global economy, comprising 90% of businesses worldwide. However, only 10% of SMEs have adopted big data analytics despite the competitive advantage it could deliver. Previous research has analysed the barriers to adoption, and a strategic framework has been developed to help SMEs adopt big data analytics. The framework was converted into a scoring tool which has been applied to multiple case studies of SMEs in the UK. This paper documents the process of evaluating the framework based on structured feedback from a focus group of experienced practitioners. The results of the evaluation are presented and discussed, and the paper concludes with recommendations to improve the scoring tool based on the proposed framework. The research demonstrates that this positioning tool helps SMEs achieve competitive advantage by increasing the application of business intelligence and big data analytics.
The advent of healthcare information management systems (HIMSs) continues to produce large volumes of healthcare data for patient care and for compliance and regulatory requirements at a global scale. Analysis of this big data allows for boundless potential outcomes for discovering knowledge. Big data analytics (BDA) in healthcare can, for instance, help determine causes of diseases, generate effective diagnoses, enhance QoS guarantees by increasing the efficiency of healthcare delivery and the effectiveness and viability of treatments, generate accurate predictions of readmissions, enhance clinical care, and pinpoint opportunities for cost savings. However, BDA implementations in any domain are generally complicated and resource-intensive, with a high failure rate and no roadmap or success strategies to guide practitioners. In this paper, we present a comprehensive roadmap to derive insights from BDA in the healthcare (patient care) domain, based on the results of a systematic literature review. We initially determine big data characteristics for healthcare and then review BDA applications to healthcare in academic research, focusing particularly on NoSQL databases. We also identify the limitations and challenges of these applications and justify the potential of NoSQL databases to address these challenges and further enhance BDA healthcare research. We then propose and describe a state-of-the-art BDA architecture called Med-BDA for the healthcare domain, which addresses current BDA challenges and is based on the latest zeta big data paradigm. We also present success strategies to ensure the working of Med-BDA, along with outlining the major benefits of BDA applications to healthcare. Finally, we compare our work with other related literature reviews across twelve hallmark features to justify the novelty and importance of our work. These contributions are collectively unique and clearly present a roadmap for clinical administrators, practitioners and professionals to successfully implement BDA initiatives in their organizations.
As financial criminal methods become increasingly sophisticated, traditional anti-money laundering and fraud detection approaches face significant challenges. This study focuses on the application technologies and challenges of big data analytics in anti-money laundering and financial fraud detection. The research begins by outlining the evolutionary trends of financial crimes and highlighting the new characteristics of the big data era. Subsequently, it systematically analyzes the application of big data analytics technologies in this field, including machine learning, network analysis, and real-time stream processing. Through case studies, the research demonstrates how these technologies enhance the accuracy and efficiency of anomalous transaction detection. However, the study also identifies challenges faced by big data analytics, such as data quality issues, algorithmic bias, and privacy protection concerns. To address these challenges, the research proposes solutions from both technological and managerial perspectives, including the application of privacy-preserving technologies like federated learning. Finally, the study discusses the development prospects of Regulatory Technology (RegTech), emphasizing the importance of synergy between technological innovation and regulatory policies. This research provides guidance for financial institutions and regulatory bodies in optimizing their anti-money laundering and fraud detection strategies.
With the concepts of Industry 4.0 and smart manufacturing gaining popularity, there is a growing notion that conventional manufacturing will witness a transition toward a new paradigm, targeting innovation, automation, better response to customer needs, and intelligent systems. Within this context, this review focuses on the concept of the cyber–physical production system (CPPS) and presents a holistic perspective on the role of the CPPS in three key drivers of this transformation: data-driven manufacturing, decentralized manufacturing, and integrated blockchains for data security. The paper aims to connect these three aspects of smart manufacturing and proposes that, through the application of data-driven modeling, CPPS will aid in transforming manufacturing to become more intuitive and automated. In turn, automated manufacturing will pave the way for the decentralization of manufacturing. Layering blockchain technologies on top of CPPS will ensure the reliability and security of data sharing and integration across decentralized systems. Each of these claims is supported by relevant case studies recently published in the literature and from industry; a brief on existing challenges and the way forward is also provided.
This paper focuses on facilitating state-of-the-art applications of big data analytics (BDA) architectures and infrastructures in the telecommunications (telecom) industrial sector. Telecom companies deal with terabytes to petabytes of data on a daily basis, and IoT applications in telecom further contribute to this data deluge. Recent advances in BDA have exposed new opportunities to get actionable insights from telecom big data. These benefits and the fast-changing BDA technology landscape make it important to investigate existing BDA applications in the telecom sector. For this, we determine published research on BDA applications to telecom through a systematic literature review, through which we filter 38 articles and categorize them into frameworks, use cases, literature reviews, white papers and experimental validations. We also discuss the benefits and challenges mentioned in these articles. We find that the experiments are all proofs of concept (POC) on a severely limited BDA technology stack (compared to the available technology stack); i.e., we did not find any work focusing on a full-fledged BDA implementation in an operational telecom environment. To facilitate these applications at research level, we propose a state-of-the-art lambda architecture for BDA pipeline implementation (called Lambda Tel), based completely on open-source BDA technologies and the standard Python language, along with relevant guidelines. We discovered only one research paper which presented a relatively limited lambda architecture using the proprietary AWS cloud infrastructure. We believe Lambda Tel presents a clear roadmap for telecom industry practitioners to implement and enhance BDA applications in their enterprises.
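The batch/speed/serving split at the heart of a lambda architecture such as Lambda Tel can be illustrated with a minimal, library-free sketch. The class and field names below are illustrative only and are not taken from Lambda Tel: a batch layer periodically recomputes views over the immutable master dataset, a speed layer maintains incremental real-time views, and the serving layer merges both at query time.

```python
from collections import defaultdict

class LambdaPipeline:
    """Toy lambda architecture: batch + speed layers merged at query time."""

    def __init__(self):
        self.master = []                    # immutable master dataset (batch input)
        self.batch_view = defaultdict(int)  # precomputed batch view
        self.speed_view = defaultdict(int)  # incremental real-time view

    def ingest(self, record):
        # New events land in the master dataset AND the speed layer.
        self.master.append(record)
        self.speed_view[record["subscriber"]] += record["bytes"]

    def run_batch(self):
        # Batch layer: recompute the view from scratch over all data, then
        # reset the speed layer (its events are now covered by the batch view).
        self.batch_view = defaultdict(int)
        for record in self.master:
            self.batch_view[record["subscriber"]] += record["bytes"]
        self.speed_view.clear()

    def query(self, subscriber):
        # Serving layer: merge batch and real-time views.
        return self.batch_view[subscriber] + self.speed_view[subscriber]

pipeline = LambdaPipeline()
pipeline.ingest({"subscriber": "A", "bytes": 100})
pipeline.run_batch()                              # batch view covers the first event
pipeline.ingest({"subscriber": "A", "bytes": 50}) # only in the speed layer so far
print(pipeline.query("A"))                        # → 150
```

In a real deployment the batch layer would be a Hadoop/Spark job and the speed layer a stream processor; the merge-at-query-time logic is the part this toy preserves.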
Big Data and Data Analytics affect almost all aspects of modern organisations' decision-making and business strategies. Big Data and Data Analytics create opportunities, challenges, and implications for the external auditing procedure. The purpose of this article is to reveal essential aspects of the impact of Big Data and Data Analytics on external auditing. It appears that Big Data Analytics is a critical tool for organisations, as well as auditors, that contributes to the enhancement of the auditing process. Also, legislative implications must be taken into consideration, since existing standards may need to change. Last, auditors need to develop new skills and competencies, and educational organisations need to change their educational programs in order to correspond to new market needs.
To obtain the platform's big data analytics support, manufacturers in the traditional retail channel must decide whether to use the direct online channel. A retail supply chain model and a direct online supply chain model are built, in which manufacturers design products alone in the retail channel, while the platform and the manufacturer complete the product design together in the direct online channel. These two models are analyzed using a game-theoretical model and numerical simulation. The findings indicate that if the manufacturers' design capabilities are not very high and the commission rate is not very low, the manufacturers will choose the direct online channel if the platform's technical efforts are within a certain interval. When the platform's technical efforts are exogenous, they positively influence the manufacturers' decisions; however, in the endogenous case, the platform's effect on the manufacturers is reflected in the interaction of the commission rate and cost efficiency. The manufacturers and the platform should make joint effort decisions based on the manufacturer's development capabilities, the intensity of market competition, and the cost efficiency of the platform.
In recent years, huge volumes of healthcare data have been generated in various forms. The advancements made in medical imaging are tremendous, owing to which biomedical image acquisition has become easier and quicker. Due to such massive generation of big data, the utilization of new methods based on Big Data Analytics (BDA), Machine Learning (ML), and Artificial Intelligence (AI) has become essential. In this respect, the current research work develops a new Big Data Analytics with Cat Swarm Optimization based Deep Learning (BDA-CSODL) technique for medical image classification in an Apache Spark environment. The aim of the proposed BDA-CSODL technique is to classify medical images and diagnose disease accurately. The BDA-CSODL technique involves different stages of operation such as preprocessing, segmentation, feature extraction, and classification. In addition, the BDA-CSODL technique follows a multi-level thresholding-based image segmentation approach for the detection of infected regions in medical images. Moreover, a deep convolutional neural network-based Inception v3 method is utilized in this study as the feature extractor. A Stochastic Gradient Descent (SGD) model is used for the parameter-tuning process. Furthermore, a CSO with Long Short-Term Memory (CSO-LSTM) model is employed as the classification model to assign the appropriate class labels. Both the SGD and CSO design approaches help in improving the overall image classification performance of the proposed BDA-CSODL technique. A wide range of simulations was conducted on benchmark medical image datasets, and the comprehensive comparative results demonstrate the superiority of the proposed BDA-CSODL technique under different measures.
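The multi-level thresholding segmentation stage mentioned above can be sketched as follows. The thresholds here are fixed by hand purely for illustration, whereas BDA-CSODL derives them as part of its optimization; the toy one-dimensional "image" is invented.

```python
def multilevel_threshold(pixels, thresholds):
    """Assign each pixel intensity to a class by multi-level thresholding.

    With sorted thresholds [t1, t2, ...], intensity p gets class k equal to
    the number of thresholds it meets or exceeds; class 0 is below t1.
    """
    labels = []
    for p in pixels:
        k = 0
        for t in sorted(thresholds):
            if p >= t:
                k += 1
        labels.append(k)
    return labels

# A toy 1-D "image": dark background, mid-intensity tissue, a bright region.
image = [10, 12, 90, 95, 200, 210, 15]
print(multilevel_threshold(image, [50, 150]))  # → [0, 0, 1, 1, 2, 2, 0]
```

With two thresholds this partitions intensities into three classes; the brightest class would correspond to the candidate infected regions in this toy setup.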
Lately, Internet of Things (IoT) applications generate millions of structured and unstructured data records, raising numerous problems such as data organization, production, and capture. To address these shortcomings, big data analytics is the most suitable technology to adopt. Even though big data and IoT could make human life more convenient, those benefits come at the expense of security. To manage these kinds of threats, intrusion detection systems have been extensively applied to identify malicious network traffic, particularly once preventive techniques fail at the level of endpoint IoT devices. As cyberattacks targeting IoT have gradually become stealthier and more sophisticated, intrusion detection systems (IDS) must continually evolve to manage emerging security threats. This study devises the Big Data Analytics with Internet of Things Assisted Intrusion Detection using Modified Buffalo Optimization Algorithm with Deep Learning (IDMBOA-DL) algorithm. In the presented IDMBOA-DL model, the Hadoop MapReduce tool is exploited for managing big data. The MBOA algorithm is applied to derive an optimal subset of features. Finally, the sine cosine algorithm (SCA) with a convolutional autoencoder (CAE) mechanism is utilized to recognize and classify intrusions in the IoT network. A wide range of simulations was conducted to demonstrate the enhanced results of the IDMBOA-DL algorithm. The comparison outcomes emphasize the better performance of the IDMBOA-DL model over other approaches.
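The sine cosine algorithm (SCA) referenced above has a simple core position-update rule. Below is a sketch of that update applied to a toy continuous objective; the IDMBOA-DL paper pairs SCA with a convolutional autoencoder for intrusion classification, which is not reproduced here, and the driver loop and objective are invented for illustration.

```python
import math, random

def sca_step(positions, best, t, max_iter, a=2.0, rng=random):
    """One sine cosine algorithm (SCA) iteration over a population.

    positions: list of candidate solutions (lists of floats)
    best: the best ("destination") solution found so far
    """
    r1 = a - t * (a / max_iter)      # shrinks over time: exploration -> exploitation
    new_positions = []
    for x in positions:
        new_x = []
        for j, xj in enumerate(x):
            r2 = rng.uniform(0, 2 * math.pi)
            r3 = rng.uniform(0, 2)
            r4 = rng.random()
            # Half the moves use sine, half cosine, pulled toward the best point.
            step = r1 * (math.sin(r2) if r4 < 0.5 else math.cos(r2))
            new_x.append(xj + step * abs(r3 * best[j] - xj))
        new_positions.append(new_x)
    return new_positions

# Toy usage: minimize f(x) = sum(x_j^2) in 2-D.
rng = random.Random(0)
f = lambda x: sum(v * v for v in x)
pop = [[rng.uniform(-5, 5) for _ in range(2)] for _ in range(20)]
best = min(pop, key=f)
for t in range(100):
    pop = sca_step(pop, best, t, 100, rng=rng)
    best = min(pop + [best], key=f)
```

In a feature-selection setting the continuous positions would be binarized (e.g., by a sigmoid transfer function) to pick feature subsets.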
With high computational capacity, e.g. many cores and wide floating-point SIMD units, the Intel Xeon Phi shows promising prospects for accelerating high-performance computing (HPC) applications. But the application of Intel Xeon Phi to data analytics workloads in the data center is still an open question. PhiBench 2.0 is built for the latest generation of Intel Xeon Phi (KNL, Knights Landing), based on the prior work PhiBench (also named BigDataBench-Phi), which was designed for the former generation of Intel Xeon Phi (KNC, Knights Corner). Workloads of PhiBench 2.0 are carefully chosen based on BigDataBench 4.0 and PhiBench 1.0. In addition, these workloads are well optimized on KNL and run on real-world datasets to evaluate their performance and scalability. Further, microarchitecture-level characteristics including CPI, cache behavior, vectorization intensity, and branch prediction efficiency are analyzed, and the impact of affinity and scheduling policy on performance is investigated. It is believed that these observations will help other researchers working on Intel Xeon Phi and data analytics workloads.
Big data applications face different types of complexity in classification. Cleaning and purifying data by eliminating irrelevant or redundant data becomes a complex operation when attempting to maintain discriminative features in the processed data. Existing schemes have many disadvantages, including the need for continual training, more samples and training time for feature selection, and increased classification execution times. Recently, ensemble methods have made a mark in classification tasks, as they combine multiple results into a single representation. Compared to a single model, this technique offers improved prediction. Ensemble-based feature selection parallels multiple experts' judgments on a single topic. The major goal of this research is to propose HEFSM (Heterogeneous Ensemble Feature Selection Model), a hybrid approach that combines multiple algorithms. Further, the individual outputs produced by methods generating feature subsets, rankings, or votes are also combined in this work. A KNN (K-Nearest Neighbor) classifier is used to classify the big dataset obtained from the ensemble learning approach. The results of the study are good, demonstrating the proposed model's efficiency in classification in terms of the performance metrics used: precision, recall, F-measure and accuracy.
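Combining per-selector rankings, as described above, is commonly done with a Borda-style count; the sketch below shows that aggregation step. The feature names and selector outputs are invented for illustration and are not HEFSM's actual components.

```python
def aggregate_rankings(rankings):
    """Combine feature rankings from several selectors via Borda count.

    rankings: list of rankings; each ranking is a list of feature names
    ordered best-first. A feature scores (n - position) points per ranking.
    """
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for pos, feat in enumerate(ranking):
            scores[feat] = scores.get(feat, 0) + (n - pos)
    # Best total score first; ties broken alphabetically for determinism.
    return sorted(scores, key=lambda f: (-scores[f], f))

# Three hypothetical selectors disagree on the ordering of four features:
r1 = ["f3", "f1", "f2", "f4"]
r2 = ["f1", "f3", "f4", "f2"]
r3 = ["f3", "f2", "f1", "f4"]
print(aggregate_rankings([r1, r2, r3]))  # → ['f3', 'f1', 'f2', 'f4']
```

The top-k features from the aggregated ranking would then feed the downstream KNN classifier.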
The information gained from data analysis is vital for implementing its outcomes to optimize processes and systems for more straightforward problem-solving. Therefore, the first step of data analytics deals with identifying data requirements, mainly how the data should be grouped or labeled. For example, for data about cybersecurity in organizations, grouping can be done into categories such as denial of service (DoS), unauthorized access from local or remote hosts, and surveillance and other probing. Next, after identifying the groups, the researcher, or whoever is carrying out the data analytics, goes out into the field and collects the data. The data collected is then organized in an orderly fashion to enable easy analysis. We aim to study different articles and compare the performance of each algorithm to choose the most suitable classifier.
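The grouping step described above amounts to mapping raw attack labels onto coarse categories before analysis. A minimal sketch, with illustrative attack names in the spirit of the KDD-style taxonomy the text alludes to (the specific labels are assumptions, not from this article):

```python
# Illustrative mapping from raw attack labels to the coarse categories the
# text mentions: DoS, unauthorized remote access (r2l), unauthorized
# local/root access (u2r), and surveillance/probing.
CATEGORY = {
    "smurf": "dos", "neptune": "dos", "teardrop": "dos",
    "guess_passwd": "r2l", "ftp_write": "r2l",
    "rootkit": "u2r", "buffer_overflow": "u2r",
    "portsweep": "probe", "nmap": "probe",
}

def group_records(records):
    """Attach a coarse category to each (label, features) record."""
    return [
        {"label": label, "category": CATEGORY.get(label, "unknown"), "features": feats}
        for label, feats in records
    ]

grouped = group_records([("smurf", [0.1, 3]), ("nmap", [0.9, 1])])
print([g["category"] for g in grouped])  # → ['dos', 'probe']
```

Once records carry category labels, per-category datasets can be handed to the candidate classifiers for comparison.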
Climate change and global warming result in natural hazards, including flash floods. Flash floods can create blue spots: areas where transport networks (roads, tunnels, bridges, passageways) and other engineering structures within them are at flood risk. The economic and social impact of flooding reveals that the damage caused by flash floods leading to blue spots is very high in dollar terms and in direct impacts on people's lives. The impact of flooding within blue spots is either infrastructural or social, affecting lives and properties. Currently, more than 16.1 million properties in the U.S. are vulnerable to flooding, and this is projected to increase by 3.2% within the next 30 years. Some models have been developed for flood risk analysis and management, including hydrological models, algorithms, machine learning, and geospatial models. The models and methods reviewed are based on location data collection, statistical analysis and computation, and visualization (mapping). This research aims to create a blue spots model for the State of Tennessee using the ArcGIS visual programming language (model) and a data analytics pipeline.
Big data analytics is emerging as one of the most important classes of workloads in modern data centers. Hence, it is of great interest to identify methods of achieving the best performance for big data analytics workloads running on state-of-the-art SMT (simultaneous multithreading) processors, which requires a comprehensive understanding of workload characteristics. This paper chooses Spark workloads as representative big data analytics workloads and performs comprehensive measurements on the POWER8 platform, which supports a wide range of multithreading. The research finds that the thread assignment policy and cache contention have significant impacts on application performance. In order to identify potential optimization methods from the experimental results, this study performs microarchitecture-level characterization by means of hardware performance counters and gives implications accordingly.
One of the biggest dangers to society today is terrorism, where attacks have become one of the most significant risks to international peace and national security. Big data, information analysis, and artificial intelligence (AI) have become the basis for making strategic decisions in many sensitive areas, such as fraud detection, risk management, medical diagnosis, and counter-terrorism. However, there is still a need to assess how terrorist attacks are related, initiated, and detected. For this purpose, we propose a novel framework for classifying and predicting terrorist attacks. The proposed framework posits that neglected text attributes included in the Global Terrorism Database (GTD) can influence the accuracy of the model's classification of terrorist attacks, where each part of the data can provide vital information to enrich the ability of classifier learning. Each data point in a multiclass taxonomy has one or more tags attached to it, referred to as "related tags." We applied machine learning classifiers to classify terrorist attack incidents obtained from the GTD. A transformer-based technique called DistilBERT extracts and learns contextual features from text attributes to acquire more information from the text data. The extracted contextual features are combined with the "key features" of the dataset and used to perform the final classification. The study explored different experimental setups with various classifiers to evaluate the model's performance. The experimental results show that the proposed framework outperforms the latest techniques for classifying terrorist attacks, with an accuracy of 98.7% using a combined feature set and an extreme gradient boosting classifier.
Data science is an interdisciplinary discipline that employs big data, machine learning algorithms, data mining techniques, and scientific methodologies to extract insights and information from massive amounts of structured and unstructured data. The healthcare industry constantly creates large, important databases on patient demographics, treatment plans, results of medical exams, insurance coverage, and more. The data that IoT (Internet of Things) devices collect is also of interest to data scientists. Data science can help with the healthcare industry's massive amounts of disparate, structured, and unstructured data by processing, managing, analyzing, and integrating it. To get reliable findings from this data, proper management and analysis are essential. This article provides a comprehensive study and discussion of the data analysis process as it pertains to healthcare applications. The article discusses the advantages and disadvantages of using big data analytics (BDA) in the medical industry. The insights offered by BDA, which can also aid in making strategic decisions, can assist the healthcare system.
Smart metering has gained considerable attention as a research focus due to its reliability and energy-efficient nature compared to traditional electromechanical metering systems. Existing methods primarily focus on data management rather than efficiency. Accurate prediction of electricity consumption is crucial for enabling intelligent grid operations, including resource planning and demand-supply balancing. Smart metering solutions offer users the benefits of effectively interpreting their energy utilization and optimizing costs. Motivated by this, this paper presents an Intelligent Energy Utilization Analysis using Smart Metering Data (IUA-SMD) model to determine energy consumption patterns. The proposed IUA-SMD model comprises three major processes: data pre-processing, feature extraction, and classification, with parameter optimization. We employ the extreme learning machine (ELM) based classification approach within the IUA-SMD model to derive optimal energy utilization labels. Additionally, we apply the shell game optimization (SGO) algorithm to enhance the classification efficiency of the ELM by optimizing its parameters. The effectiveness of the IUA-SMD model is evaluated using an extensive dataset of smart metering data, and the results are analyzed in terms of accuracy and mean square error (MSE). The proposed model demonstrates superior performance, achieving a maximum accuracy of 65.917% and a minimum MSE of 0.096. These results highlight the potential of the IUA-SMD model for enabling efficient energy utilization through intelligent analysis of smart metering data.
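The ELM classifier at the core of IUA-SMD trains only its output weights, via a least-squares solve over random fixed hidden features, which is what makes ELM training fast. A minimal sketch on invented toy data; the SGO parameter tuning from the paper is omitted, and the hidden-layer size and seed are arbitrary.

```python
import numpy as np

def elm_train(X, y, n_hidden, seed=0):
    """Train a basic extreme learning machine (ELM).

    Input->hidden weights are random and never updated; only the output
    weights beta are learned, by a Moore-Penrose least-squares solve.
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input->hidden weights
    b = rng.normal(size=n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                 # output weights (least squares)
    return W, b, beta

def elm_predict(X, model):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy binary task: label is 1 when the two inputs agree in sign (XOR-like).
X = np.array([[1., 1.], [1., -1.], [-1., 1.], [-1., -1.]])
y = np.array([1., 0., 0., 1.])
model = elm_train(X, y, n_hidden=20)
preds = (elm_predict(X, model) > 0.5).astype(float)
print(preds)  # → [1. 0. 0. 1.]
```

An optimizer like SGO would wrap this training step, searching over hyperparameters (e.g., hidden-layer size) to minimize validation error.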
Gestational Diabetes Mellitus (GDM) is a significant health concern affecting pregnant women worldwide. It is characterized by elevated blood sugar levels during pregnancy and poses risks to both maternal and fetal health. Maternal complications of GDM include an increased risk of developing type 2 diabetes later in life, as well as hypertension and preeclampsia during pregnancy. Fetal complications may include macrosomia (large birth weight), birth injuries, and an increased risk of developing metabolic disorders later in life. Understanding the demographics, risk factors, and biomarkers associated with GDM is crucial for effective management and prevention strategies. This research aims to address these aspects comprehensively through the analysis of a dataset comprising 600 pregnant women. By exploring the demographics of the dataset and employing data modeling techniques, the study seeks to identify key risk factors associated with GDM. Moreover, by analyzing various biomarkers, the research aims to gain insights into the physiological mechanisms underlying GDM and its implications for maternal and fetal health. The significance of this research lies in its potential to inform clinical practice and public health policies related to GDM. By identifying demographic patterns and risk factors, healthcare providers can better tailor screening and intervention strategies for pregnant women at risk of GDM. Additionally, insights into biomarkers associated with GDM may contribute to the development of novel diagnostic tools and therapeutic approaches. Ultimately, by enhancing our understanding of GDM, this research aims to improve maternal and fetal outcomes and reduce the burden of this condition on healthcare systems and society.
However, it’s important to acknowledge the limitations of the dataset used in this study. Further research utilizing larger and more diverse datasets, perhaps employing advanced data analysis techniques such as Power BI, is warranted to corroborate and expand upon the findings of this research. This underscores the ongoing need for continued investigation into GDM to refine our understanding and improve clinical management strategies.
This study investigates the transformative potential of big data analytics in healthcare, focusing on its application for forecasting patient outcomes and enhancing clinical decision-making. The primary challenges addressed include data integration, quality, privacy issues, and the interpretability of complex machine-learning models. An extensive literature review evaluates the current state of big data analytics in healthcare, particularly predictive analytics. The research employs machine learning algorithms to develop predictive models aimed at specific patient outcomes, such as disease progression and treatment responses. The models are assessed based on three key metrics: accuracy, interpretability, and clinical relevance. The findings demonstrate that big data analytics can significantly revolutionize healthcare by providing data-driven insights that inform treatment decisions, anticipate complications, and identify high-risk patients. The predictive models developed show promise for enhancing clinical judgment and facilitating personalized treatment approaches. Moreover, the study underscores the importance of addressing data quality, integration, and privacy to ensure the ethical application of predictive analytics in clinical settings. The results contribute to the growing body of research on practical big data applications in healthcare, offering valuable recommendations for balancing patient privacy with the benefits of data-driven insights. Ultimately, this research has implications for policy-making, guiding the implementation of predictive models and fostering innovation aimed at improving healthcare outcomes.
The advent of the digital era and computer-based remote communications has significantly enhanced the applicability of various sciences over the past two decades, notably data science (DS) and cryptography (CG). Data science involves clustering and categorizing unstructured data, while cryptography ensures security and privacy. Despite certain CG laws and requirements mandating fully randomized or pseudo-noise outputs from CG primitives and schemes, it may appear that CG policies impede data scientists from working on ciphers or analyzing information systems supporting security and privacy services. However, this study posits that CG does not entirely preclude data scientists from operating in the presence of ciphers, as there are several examples of successful collaboration, including homomorphic encryption schemes, searchable encryption algorithms, secret-sharing protocols, and protocols offering conditional privacy. These instances, along with others, indicate numerous potential avenues for fostering collaboration between DS and CG. Therefore, this study classifies the challenges faced by DS and CG into three distinct groups: challenging problems (which can be conditionally solved with tools available today, e.g., secret-sharing protocols, zero-knowledge proofs, partially homomorphic encryption algorithms), open problems (which are well posed but remain unsolved, e.g., proposing an efficient functional encryption algorithm or a practical fully homomorphic encryption scheme), and hard problems (infeasible to solve with current knowledge and tools). Ultimately, the paper addresses specific solutions and outlines future directions to tackle the challenges arising at the intersection of DS and CG, such as providing
specific access for DS experts in secret-sharing algorithms,assigning data index dimensions to DS experts in ultra-dimension encryption algorithms,defining some functional keys in functional encryption schemes for DS experts,and giving limited shares of data to them for analytics.展开更多
Abstract: Big data analytics has been widely adopted by large companies to achieve measurable benefits, including increased profitability, customer demand forecasting, cheaper product development, and improved stock control. Small and medium-sized enterprises (SMEs) are the backbone of the global economy, comprising 90% of businesses worldwide. However, only 10% of SMEs have adopted big data analytics despite the competitive advantage they could achieve. Previous research has analysed the barriers to adoption, and a strategic framework has been developed to help SMEs adopt big data analytics. The framework was converted into a scoring tool, which has been applied to multiple case studies of SMEs in the UK. This paper documents the process of evaluating the framework based on structured feedback from a focus group of experienced practitioners. The results of the evaluation are presented and discussed, and the paper concludes with recommendations to improve the scoring tool based on the proposed framework. The research demonstrates that this positioning tool helps SMEs achieve competitive advantage by increasing the application of business intelligence and big data analytics.
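The abstract does not reproduce the scoring tool itself. As a rough sketch of how such a positioning tool can aggregate dimension ratings into an adoption-readiness band, consider the following, where the dimensions, weights, and thresholds are illustrative assumptions rather than the framework's actual contents:

```python
# Minimal sketch of a BDA-readiness positioning score for an SME.
# The dimensions, weights, and banding thresholds below are assumed
# for illustration; they are not the framework described in the paper.

WEIGHTS = {
    "data_maturity": 0.30,
    "skills": 0.25,
    "leadership_buyin": 0.25,
    "it_infrastructure": 0.20,
}

def readiness_score(ratings):
    """Weighted average of 0-5 ratings, one per dimension."""
    return sum(WEIGHTS[d] * ratings[d] for d in WEIGHTS)

def position(score):
    """Map a score onto a coarse adoption-readiness band."""
    if score >= 4.0:
        return "ready to adopt"
    if score >= 2.5:
        return "develop capabilities first"
    return "address foundational barriers"

example = {"data_maturity": 3, "skills": 2,
           "leadership_buyin": 4, "it_infrastructure": 3}
band = position(readiness_score(example))
```

A real tool would derive the dimensions and weights from the barrier analysis the paper describes, not fix them by hand.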
Funding: Supported by two research grants provided by the Karachi Institute of Economics and Technology (KIET) and the Big Data Analytics Laboratory at the Institute of Business Administration (IBA), Karachi.
Abstract: The advent of healthcare information management systems (HIMSs) continues to produce large volumes of healthcare data for patient care and for compliance and regulatory requirements at a global scale. Analysis of this big data allows for boundless potential outcomes for discovering knowledge. Big data analytics (BDA) in healthcare can, for instance, help determine the causes of diseases, generate effective diagnoses, enhance QoS guarantees by increasing the efficiency of healthcare delivery and the effectiveness and viability of treatments, generate accurate predictions of readmissions, enhance clinical care, and pinpoint opportunities for cost savings. However, BDA implementations in any domain are generally complicated and resource-intensive, with a high failure rate and no roadmap or success strategies to guide practitioners. In this paper, we present a comprehensive roadmap to derive insights from BDA in the healthcare (patient care) domain, based on the results of a systematic literature review. We initially determine big data characteristics for healthcare and then review BDA applications to healthcare in academic research, focusing particularly on NoSQL databases. We also identify the limitations and challenges of these applications and justify the potential of NoSQL databases to address these challenges and further enhance BDA healthcare research. We then propose and describe a state-of-the-art BDA architecture called Med-BDA for the healthcare domain, which solves all current BDA challenges and is based on the latest zeta big data paradigm. We also present success strategies to ensure the working of Med-BDA, along with outlining the major benefits of BDA applications to healthcare. Finally, we compare our work with other related literature reviews across twelve hallmark features to justify the novelty and importance of our work. The aforementioned contributions of our work are collectively unique and clearly present a roadmap for clinical administrators, practitioners and professionals to successfully implement BDA initiatives in their organizations.
Abstract: As financial criminal methods become increasingly sophisticated, traditional anti-money laundering and fraud detection approaches face significant challenges. This study focuses on the application technologies and challenges of big data analytics in anti-money laundering and financial fraud detection. The research begins by outlining the evolutionary trends of financial crimes and highlighting the new characteristics of the big data era. Subsequently, it systematically analyzes the application of big data analytics technologies in this field, including machine learning, network analysis, and real-time stream processing. Through case studies, the research demonstrates how these technologies enhance the accuracy and efficiency of anomalous transaction detection. However, the study also identifies challenges faced by big data analytics, such as data quality issues, algorithmic bias, and privacy protection concerns. To address these challenges, the research proposes solutions from both technological and managerial perspectives, including the application of privacy-preserving technologies like federated learning. Finally, the study discusses the development prospects of Regulatory Technology (RegTech), emphasizing the importance of synergy between technological innovation and regulatory policies. This research provides guidance for financial institutions and regulatory bodies in optimizing their anti-money laundering and fraud detection strategies.
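The detection techniques surveyed above are far richer than any toy, but the core idea of flagging anomalous transactions can be sketched with a simple statistical rule; the z-score threshold and transaction amounts below are assumptions, not the study's method:

```python
# Toy illustration of statistical anomaly flagging on transaction amounts.
# Real AML systems combine many features, models, and network signals;
# this sketch only flags amounts far from a customer's historical mean.
from statistics import mean, stdev

def flag_anomalies(history, new_amounts, z_threshold=3.0):
    """Return amounts whose z-score against history exceeds the threshold."""
    mu, sigma = mean(history), stdev(history)
    return [a for a in new_amounts
            if sigma > 0 and abs(a - mu) / sigma > z_threshold]

# Hypothetical customer history (typical spend around 110 units).
history = [120, 95, 130, 110, 105, 90, 125, 115]
flagged = flag_anomalies(history, [100, 118, 5000])
```

In practice such a rule would be one weak signal among many, feeding a supervised model rather than acting alone.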
Abstract: With the concepts of Industry 4.0 and smart manufacturing gaining popularity, there is a growing notion that conventional manufacturing will witness a transition toward a new paradigm, targeting innovation, automation, better response to customer needs, and intelligent systems. Within this context, this review focuses on the concept of the cyber–physical production system (CPPS) and presents a holistic perspective on the role of the CPPS in three key and essential drivers of this transformation: data-driven manufacturing, decentralized manufacturing, and integrated blockchains for data security. The paper aims to connect these three aspects of smart manufacturing and proposes that, through the application of data-driven modeling, CPPS will aid in transforming manufacturing to become more intuitive and automated. In turn, automated manufacturing will pave the way for the decentralization of manufacturing. Layering blockchain technologies on top of CPPS will ensure the reliability and security of data sharing and integration across decentralized systems. Each of these claims is supported by relevant case studies recently published in the literature and from industry; a brief on existing challenges and the way forward is also provided.
Funding: Supported in part by the Big Data Analytics Laboratory (BDALAB) at the Institute of Business Administration, under a research grant approved by the Higher Education Commission of Pakistan (www.hec.gov.pk), and by the Darbi company (www.darbi.io).
Abstract: This paper focuses on facilitating state-of-the-art applications of big data analytics (BDA) architectures and infrastructures in the telecommunications (telecom) industrial sector. Telecom companies deal with terabytes to petabytes of data on a daily basis, and IoT applications in telecom are further contributing to this data deluge. Recent advances in BDA have exposed new opportunities to get actionable insights from telecom big data. These benefits and the fast-changing BDA technology landscape make it important to investigate existing BDA applications in the telecom sector. To this end, we initially determine published research on BDA applications to telecom through a systematic literature review, through which we filter 38 articles and categorize them into frameworks, use cases, literature reviews, white papers and experimental validations. We also discuss the benefits and challenges mentioned in these articles. We find that the experiments are all proofs of concept (POC) on a severely limited BDA technology stack (compared to the available technology stack); i.e., we did not find any work focusing on a full-fledged BDA implementation in an operational telecom environment. To facilitate these applications at the research level, we propose a state-of-the-art lambda architecture for BDA pipeline implementation (called Lambda Tel), based completely on open-source BDA technologies and the standard Python language, along with relevant guidelines. We discovered only one research paper which presented a relatively limited lambda architecture using the proprietary AWS cloud infrastructure. We believe Lambda Tel presents a clear roadmap for telecom industry practitioners to implement and enhance BDA applications in their enterprises.
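The batch-plus-speed split that defines a lambda architecture such as the one proposed above can be sketched in a few lines; the views and the calls-per-subscriber metric here are hypothetical, and a real pipeline would use the open-source stack the paper describes:

```python
# Minimal sketch of the lambda-architecture serving idea: a batch view
# precomputed over historical records is merged at query time with a
# speed layer holding recent, not-yet-batched increments. The metric
# (call counts per subscriber) and the IDs are invented for illustration.

batch_view = {"sub-001": 1240, "sub-002": 873}   # from the batch layer
speed_view = {"sub-001": 12, "sub-003": 4}       # from the speed layer

def serve(subscriber):
    """Answer a query by combining batch and speed views."""
    return batch_view.get(subscriber, 0) + speed_view.get(subscriber, 0)
```

When the batch layer next recomputes, the absorbed increments are dropped from the speed view, keeping the merged answer consistent.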
Abstract: Big Data and Data Analytics affect almost all aspects of modern organisations' decision-making and business strategies. Big Data and Data Analytics create opportunities, challenges, and implications for the external auditing procedure. The purpose of this article is to reveal essential aspects of the impact of Big Data and Data Analytics on external auditing. It seems that Big Data Analytics is a critical tool for organisations, as well as auditors, that contributes to the enhancement of the auditing process. Also, legislative implications must be taken into consideration, since existing standards may need to change. Last, auditors need to develop new skills and competences, and educational organisations need to change their educational programs in order to be able to correspond to new market needs.
Funding: The National Natural Science Foundation of China (No. 72071039) and the Foundation of China Scholarship Council (No. 202106090197).
Abstract: To obtain the platform's big data analytics support, manufacturers in the traditional retail channel must decide whether to use the direct online channel. A retail supply chain model and a direct online supply chain model are built, in which manufacturers design products alone in the retail channel, while the platform and manufacturer complete the product design in the direct online channel. These two models are analyzed using a game-theoretical model and numerical simulation. The findings indicate that if the manufacturers' design capabilities are not very high and the commission rate is not very low, the manufacturers will choose the direct online channel if the platform's technical efforts are within an interval. When the platform's technical efforts are exogenous, they positively influence the manufacturers' decisions; however, in the endogenous case, the platform's effect on the manufacturers is reflected in the interaction of the commission rate and cost efficiency. The manufacturers and the platform should make synthetic effort decisions based on the manufacturers' development capabilities, the intensity of market competition, and the cost efficiency of the platform.
Funding: The author extends his appreciation to the Deanship of Scientific Research at Majmaah University for funding this study under Project Number (R-2022-61).
Abstract: In recent years, huge volumes of healthcare data are being generated in various forms. The advancements made in medical imaging are tremendous, owing to which biomedical image acquisition has become easier and quicker. Due to such massive generation of big data, the utilization of new methods based on Big Data Analytics (BDA), Machine Learning (ML), and Artificial Intelligence (AI) has become essential. In this aspect, the current research work develops a new Big Data Analytics with Cat Swarm Optimization based Deep Learning (BDA-CSODL) technique for medical image classification in an Apache Spark environment. The aim of the proposed BDA-CSODL technique is to classify medical images and diagnose disease accurately. The BDA-CSODL technique involves different stages of operation, such as preprocessing, segmentation, feature extraction, and classification. In addition, the BDA-CSODL technique follows a multi-level thresholding-based image segmentation approach for the detection of infected regions in medical images. Moreover, a deep convolutional neural network-based Inception v3 method is utilized in this study as the feature extractor. A Stochastic Gradient Descent (SGD) model is used for the parameter tuning process. Furthermore, a Cat Swarm Optimization with Long Short-Term Memory (CSO-LSTM) model is employed as the classification model to determine the appropriate class labels. Both the SGD and CSO design approaches help in improving the overall image classification performance of the proposed BDA-CSODL technique. A wide range of simulations was conducted on benchmark medical image datasets, and the comprehensive comparative results demonstrate the supremacy of the proposed BDA-CSODL technique under different measures.
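The multi-level thresholding stage mentioned above can be illustrated with an exhaustive two-threshold search maximizing Otsu-style between-class variance over an intensity histogram; the 8-level toy histogram is an assumption, and the paper's actual pipeline operates on real medical images in Spark:

```python
# Illustrative multi-level (two-threshold) segmentation by exhaustive
# search for the thresholds that maximize between-class variance.
# The 8-level histogram below is a toy stand-in for a medical image.

def between_class_variance(hist, t1, t2):
    """Between-class variance for classes [0,t1), [t1,t2), [t2,len(hist))."""
    total = sum(hist)
    total_mean = sum(i * h for i, h in enumerate(hist)) / total
    var = 0.0
    for lo, hi in ((0, t1), (t1, t2), (t2, len(hist))):
        mass = sum(hist[lo:hi])
        if mass == 0:
            continue
        m = sum(i * hist[i] for i in range(lo, hi)) / mass
        var += (mass / total) * (m - total_mean) ** 2
    return var

def best_two_thresholds(hist):
    """Try every (t1, t2) pair and keep the variance-maximizing one."""
    return max(
        ((t1, t2) for t1 in range(1, len(hist) - 1)
                  for t2 in range(t1 + 1, len(hist))),
        key=lambda ts: between_class_variance(hist, *ts),
    )

hist = [40, 35, 5, 2, 3, 6, 30, 45]   # toy trimodal histogram
t1, t2 = best_two_thresholds(hist)
```

Exhaustive search is fine for a few thresholds over 256 levels; metaheuristics like CSO become attractive as the number of thresholds grows.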
Abstract: Lately, Internet of Things (IoT) applications involve millions of structured and unstructured data items, raising numerous problems in data organization, production, and capture. To address these shortcomings, big data analytics is the most suitable technology to adopt. Even though big data and IoT can make human life more convenient, those benefits come at the expense of security. To manage these kinds of threats, intrusion detection systems have been extensively applied to identify malicious network traffic, particularly once preventive techniques fail at the level of endpoint IoT devices. As cyberattacks targeting IoT have gradually become stealthier and more sophisticated, intrusion detection systems (IDS) must continually evolve to manage emerging security threats. This study devises a Big Data Analytics with Internet of Things Assisted Intrusion Detection using Modified Buffalo Optimization Algorithm with Deep Learning (IDMBOA-DL) algorithm. In the presented IDMBOA-DL model, the Hadoop MapReduce tool is exploited for managing big data. The MBOA algorithm is applied to derive an optimal subset of features from the feature space. Finally, the sine cosine algorithm (SCA) with a convolutional autoencoder (CAE) mechanism is utilized to recognize and classify the intrusions in the IoT network. A wide range of simulations was conducted to demonstrate the enhanced results of the IDMBOA-DL algorithm. The comparison outcomes emphasize the better performance of the IDMBOA-DL model over other approaches.
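The feature-selection stage can be sketched without the buffalo-optimization metaheuristic itself; a simple greedy forward search over a toy separation score stands in for MBOA here, and the traffic records are invented:

```python
# Illustrative stand-in for metaheuristic feature-subset selection.
# The paper uses a modified buffalo optimization algorithm; here a greedy
# forward search with a toy class-separation fitness plays the same role.

def score(data, labels, subset):
    """Toy fitness: summed absolute class-mean separation over the subset."""
    if not subset:
        return 0.0
    s = 0.0
    for f in subset:
        a = [row[f] for row, y in zip(data, labels) if y == 0]
        b = [row[f] for row, y in zip(data, labels) if y == 1]
        s += abs(sum(a) / len(a) - sum(b) / len(b))
    return s

def greedy_select(data, labels, k):
    """Add, one at a time, the feature that most improves the fitness."""
    chosen, remaining = [], list(range(len(data[0])))
    while len(chosen) < k:
        best = max(remaining, key=lambda f: score(data, labels, chosen + [f]))
        chosen.append(best)
        remaining.remove(best)
    return sorted(chosen)

# Toy traffic records: feature 1 separates the classes; 0 and 2 do not.
data = [[1, 10, 3], [2, 11, 2], [1, 1, 3], [2, 0, 2]]
labels = [0, 0, 1, 1]
selected = greedy_select(data, labels, 1)
```

A population-based metaheuristic like MBOA explores subsets globally instead of greedily, but optimizes the same kind of fitness.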
Funding: Supported by the National High Technology Research and Development Program of China (No. 2015AA015308), the National Key Research and Development Plan of China (Nos. 2016YFB1000600, 2016YFB1000601), and the Major Program of the National Natural Science Foundation of China (No. 61432006).
Abstract: With high computational capacity, e.g. many cores and wide floating-point SIMD units, the Intel Xeon Phi shows promising prospects for accelerating high-performance computing (HPC) applications. But the application of the Intel Xeon Phi to data analytics workloads in the data center is still an open question. PhiBench 2.0 is built for the latest generation of Intel Xeon Phi (KNL, Knights Landing), based on the prior work PhiBench (also named BigDataBench-Phi), which was designed for the former generation of Intel Xeon Phi (KNC, Knights Corner). The workloads of PhiBench 2.0 are carefully chosen based on BigDataBench 4.0 and PhiBench 1.0. Beyond that, these workloads are well optimized on KNL and run on real-world datasets to evaluate their performance and scalability. Further, microarchitecture-level characteristics including CPI, cache behavior, vectorization intensity, and branch prediction efficiency are analyzed, and the impact of affinity and scheduling policy on performance is investigated. It is believed that these observations will help other researchers working on Intel Xeon Phi and data analytics workloads.
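The microarchitecture-level metrics listed above are ratios over hardware performance counter readings; a minimal sketch with hypothetical counter values (and an assumed, simplified definition of vectorization intensity):

```python
# Sketch of how counter-derived metrics are computed. The counter values
# are invented, and "vectorization intensity" is given an assumed
# simplified definition (fraction of FP work issued as vector ops);
# the benchmark's exact formulas may differ.

def cpi(cycles, instructions):
    """Cycles per instruction: lower is better."""
    return cycles / instructions

def vectorization_intensity(vector_ops, total_fp_ops):
    """Assumed definition: share of FP operations that were vector ops."""
    return vector_ops / total_fp_ops

# Hypothetical counter sample from one core.
sample = {"cycles": 2_400_000, "instructions": 1_600_000}
sample_cpi = cpi(sample["cycles"], sample["instructions"])
```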
Abstract: Big data applications face different types of complexity in classification. Cleaning and purifying data by eliminating irrelevant or redundant data becomes a complex operation when attempting to maintain discriminative features in the processed data. Existing schemes have many disadvantages, including continuity in training, more samples and training time in feature selection, and increased classification execution times. Recently, ensemble methods have made a mark in classification tasks, as they combine multiple results into a single representation. Compared to a single model, this technique offers improved prediction. Ensemble-based feature selection parallels multiple experts' judgments on a single topic. The major goal of this research is to suggest HEFSM (Heterogeneous Ensemble Feature Selection Model), a hybrid approach that combines multiple algorithms. Further, the individual outputs produced by methods yielding feature subsets, rankings, or votes are also combined in this work. A KNN (K-Nearest Neighbor) classifier is used to classify the big dataset obtained from the ensemble learning approach. The results of the study have been good, proving the proposed model's efficiency in classification in terms of the performance metrics used: precision, recall, F-measure and accuracy.
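The combination of per-method outputs can be illustrated for the rankings case; Borda-style aggregation is one common rule and is used here purely as a sketch, since the abstract does not pin down HEFSM's exact combination rule:

```python
# Sketch of combining per-selector feature rankings into one consensus
# ranking. Borda-count aggregation is an assumed, common choice, not
# necessarily the rule HEFSM uses.

def borda_aggregate(rankings):
    """Each ranking lists feature indices, best first; lower total wins."""
    scores = {}
    for ranking in rankings:
        for position, feature in enumerate(ranking):
            scores[feature] = scores.get(feature, 0) + position
    return sorted(scores, key=lambda f: scores[f])

# Three hypothetical selectors rank four features differently.
rankings = [[2, 0, 1, 3], [2, 1, 0, 3], [0, 2, 1, 3]]
combined = borda_aggregate(rankings)
```

The top of the consensus ranking would then feed the KNN classifier in place of the raw feature set.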
Abstract: The information gained from data analysis is vital for implementing its outcomes to optimize processes and systems for more straightforward problem-solving. Therefore, the first step of data analytics deals with identifying data requirements, mainly how the data should be grouped or labeled. For example, for data about cybersecurity in organizations, grouping can be done into categories such as DoS (denial of service), unauthorized access from local or remote hosts, and surveillance and other probing. Next, after identifying the groups, the researcher or whoever is carrying out the data analytics goes out into the field and collects the data. The data collected is then organized in an orderly fashion to enable easy analysis. We aim to study different articles and compare the performance of each algorithm to choose the most suitable classifier.
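The grouping step described above amounts to mapping raw event names onto categories; a minimal sketch with assumed attack-name mappings (the names echo common intrusion datasets, but the mapping itself is illustrative):

```python
# Toy labeling step: map raw event names onto the coarse cybersecurity
# categories mentioned above. The specific attack names and their
# category assignments are assumptions for illustration.
CATEGORY = {
    "smurf": "DoS", "neptune": "DoS",
    "guess_passwd": "unauthorized access",
    "portsweep": "probing", "ipsweep": "probing",
}

def label(records):
    """Pair each raw event name with its category ('other' if unknown)."""
    return [(r, CATEGORY.get(r, "other")) for r in records]

labeled = label(["smurf", "portsweep", "xterm"])
```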
Abstract: Climate change and global warming result in natural hazards, including flash floods. Flash floods can create blue spots: areas where transport networks (roads, tunnels, bridges, passageways) and other engineering structures within them are at flood risk. The economic and social impact of flooding reveals that the damage caused by flash floods leading to blue spots is very high in dollar terms, with direct impacts on people's lives. The impact of flooding within blue spots is either infrastructural or social, affecting lives and properties. Currently, more than 16.1 million properties in the U.S. are vulnerable to flooding, and this is projected to increase by 3.2% within the next 30 years. Models have been developed for flood risk analysis and management, including hydrological models, algorithms, machine learning, and geospatial models. The models and methods reviewed are based on location data collection, statistical analysis and computation, and visualization (mapping). This research aims to create a blue spots model for the State of Tennessee using the ArcGIS visual programming language (model) and a data analytics pipeline.
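The paper builds its model in ArcGIS, but the underlying blue-spot idea, flagging structures whose elevation sits below a modeled flood level, can be sketched independently; all segment names and numbers below are hypothetical:

```python
# Toy illustration of blue-spot identification. A real workflow derives
# flood levels and elevations from hydrological and DEM data in a GIS;
# here both are invented to show only the flagging logic.

segments = {          # road segment -> lowest elevation (m), hypothetical
    "I-40 underpass": 174.2,
    "SR-1 bridge": 181.0,
    "Main St tunnel": 172.8,
}

def blue_spots(segment_elevations, flood_level_m):
    """Return segments lying below the modeled flood level, sorted by name."""
    return sorted(s for s, z in segment_elevations.items()
                  if z < flood_level_m)

at_risk = blue_spots(segments, 175.0)
```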
Funding: Supported by the National High Technology Research and Development Program of China (No. 2015AA015308) and the State Key Development Program for Basic Research of China (No. 2014CB340402).
Abstract: Big data analytics is emerging as one of the most important workloads in modern data centers. Hence, it is of great interest to identify how to achieve the best performance for big data analytics workloads running on state-of-the-art SMT (simultaneous multithreading) processors, which requires a comprehensive understanding of workload characteristics. This paper chooses Spark workloads as representative big data analytics workloads and performs comprehensive measurements on the POWER8 platform, which supports a wide range of multithreading. The research finds that the thread assignment policy and cache contention have significant impacts on application performance. In order to identify potential optimization methods from the experimental results, this study performs microarchitecture-level characterization by means of hardware performance counters and gives implications accordingly.
Abstract: One of the biggest dangers to society today is terrorism, where attacks have become one of the most significant risks to international peace and national security. Big data, information analysis, and artificial intelligence (AI) have become the basis for making strategic decisions in many sensitive areas, such as fraud detection, risk management, medical diagnosis, and counter-terrorism. However, there is still a need to assess how terrorist attacks are related, initiated, and detected. For this purpose, we propose a novel framework for classifying and predicting terrorist attacks. The proposed framework posits that neglected text attributes included in the Global Terrorism Database (GTD) can influence the accuracy of the model's classification of terrorist attacks, where each part of the data can provide vital information to enrich the ability of classifier learning. Each data point in a multiclass taxonomy has one or more tags attached to it, referred to as "related tags." We applied machine learning classifiers to classify terrorist attack incidents obtained from the GTD. A transformer-based technique called DistilBERT extracts and learns contextual features from text attributes to acquire more information from text data. The extracted contextual features are combined with the "key features" of the dataset and used to perform the final classification. The study explored different experimental setups with various classifiers to evaluate the model's performance. The experimental results show that the proposed framework outperforms the latest techniques for classifying terrorist attacks, with an accuracy of 98.7% using a combined feature set and an extreme gradient boosting classifier.
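The feature-combination step can be sketched without DistilBERT: a hashing-trick text vector stands in for the learned contextual features and is concatenated with tabular "key features" (all example values are invented):

```python
# Sketch of combining text-derived features with tabular key features
# before classification. A hashing-trick bag-of-words vector is a crude
# stand-in for DistilBERT embeddings, used only to keep the example
# self-contained; the text and key-feature values are hypothetical.

def hash_text_features(text, dim=8):
    """Map tokens into a fixed-size vector via the hashing trick."""
    vec = [0.0] * dim
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    return vec

def combine(text, key_features, dim=8):
    """Concatenate text-derived features with the tabular key features."""
    return hash_text_features(text, dim) + list(key_features)

features = combine("improvised explosive device", [1995, 0, 1], dim=8)
```

The concatenated vector is what a downstream classifier (extreme gradient boosting, in the paper) would consume.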
Abstract: Data science is an interdisciplinary discipline that employs big data, machine learning algorithms, data mining techniques, and scientific methodologies to extract insights and information from massive amounts of structured and unstructured data. The healthcare industry constantly creates large, important databases on patient demographics, treatment plans, results of medical exams, insurance coverage, and more. The data that IoT (Internet of Things) devices collect is also of interest to data scientists. Data science can help with the healthcare industry's massive amounts of disparate, structured, and unstructured data by processing, managing, analyzing, and integrating it. To get reliable findings from this data, proper management and analysis are essential. This article provides a comprehensive study and discussion of process data analysis as it pertains to healthcare applications. The article discusses the advantages and disadvantages of using big data analytics (BDA) in the medical industry. The insights offered by BDA, which can also aid in making strategic decisions, can assist the healthcare system.
Abstract: Smart metering has gained considerable attention as a research focus due to its reliability and energy-efficient nature compared to traditional electromechanical metering systems. Existing methods primarily focus on data management rather than emphasizing efficiency. Accurate prediction of electricity consumption is crucial for enabling intelligent grid operations, including resource planning and demand-supply balancing. Smart metering solutions offer users the benefits of effectively interpreting their energy utilization and optimizing costs. Motivated by this, this paper presents an Intelligent Energy Utilization Analysis using Smart Metering Data (IUA-SMD) model to determine energy consumption patterns. The proposed IUA-SMD model comprises three major processes: data pre-processing, feature extraction, and classification, with parameter optimization. We employ an extreme learning machine (ELM) based classification approach within the IUA-SMD model to derive optimal energy utilization labels. Additionally, we apply the shell game optimization (SGO) algorithm to enhance the classification efficiency of the ELM by optimizing its parameters. The effectiveness of the IUA-SMD model is evaluated using an extensive dataset of smart metering data, and the results are analyzed in terms of accuracy and mean square error (MSE). The proposed model demonstrates superior performance, achieving a maximum accuracy of 65.917% and a minimum MSE of 0.096. These results highlight the potential of the IUA-SMD model for enabling efficient energy utilization through intelligent analysis of smart metering data.
Abstract: Gestational Diabetes Mellitus (GDM) is a significant health concern affecting pregnant women worldwide. It is characterized by elevated blood sugar levels during pregnancy and poses risks to both maternal and fetal health. Maternal complications of GDM include an increased risk of developing type 2 diabetes later in life, as well as hypertension and preeclampsia during pregnancy. Fetal complications may include macrosomia (large birth weight), birth injuries, and an increased risk of developing metabolic disorders later in life. Understanding the demographics, risk factors, and biomarkers associated with GDM is crucial for effective management and prevention strategies. This research aims to address these aspects comprehensively through the analysis of a dataset comprising 600 pregnant women. By exploring the demographics of the dataset and employing data modeling techniques, the study seeks to identify key risk factors associated with GDM. Moreover, by analyzing various biomarkers, the research aims to gain insights into the physiological mechanisms underlying GDM and its implications for maternal and fetal health. The significance of this research lies in its potential to inform clinical practice and public health policies related to GDM. By identifying demographic patterns and risk factors, healthcare providers can better tailor screening and intervention strategies for pregnant women at risk of GDM. Additionally, insights into biomarkers associated with GDM may contribute to the development of novel diagnostic tools and therapeutic approaches. Ultimately, by enhancing our understanding of GDM, this research aims to improve maternal and fetal outcomes and reduce the burden of this condition on healthcare systems and society. However, it's important to acknowledge the limitations of the dataset used in this study.
Further research utilizing larger and more diverse datasets, perhaps employing advanced data analysis techniques such as Power BI, is warranted to corroborate and expand upon the findings of this research. This underscores the ongoing need for continued investigation into GDM to refine our understanding and improve clinical management strategies.
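The dataset analyzed above is not public here, but the kind of biomarker group comparison such a study performs can be sketched with made-up fasting-glucose values and a standard effect-size measure:

```python
# Sketch of a biomarker group comparison between GDM and control groups.
# The fasting-glucose values (mmol/L) are invented for illustration;
# Cohen's d is a standard standardized mean difference.
from statistics import mean, stdev

gdm = [5.8, 6.1, 5.9, 6.4, 6.0]       # hypothetical GDM group
control = [4.7, 5.0, 4.9, 5.1, 4.8]   # hypothetical non-GDM group

def cohens_d(a, b):
    """Standardized mean difference using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = (((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
              / (na + nb - 2)) ** 0.5
    return (mean(a) - mean(b)) / pooled

effect = cohens_d(gdm, control)
```

A real analysis would pair such effect sizes with significance tests and multivariable models over the full set of demographics and biomarkers.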
Abstract: This study investigates the transformative potential of big data analytics in healthcare, focusing on its application for forecasting patient outcomes and enhancing clinical decision-making. The primary challenges addressed include data integration, quality, privacy issues, and the interpretability of complex machine-learning models. An extensive literature review evaluates the current state of big data analytics in healthcare, particularly predictive analytics. The research employs machine learning algorithms to develop predictive models aimed at specific patient outcomes, such as disease progression and treatment responses. The models are assessed based on three key metrics: accuracy, interpretability, and clinical relevance. The findings demonstrate that big data analytics can significantly revolutionize healthcare by providing data-driven insights that inform treatment decisions, anticipate complications, and identify high-risk patients. The predictive models developed show promise for enhancing clinical judgment and facilitating personalized treatment approaches. Moreover, the study underscores the importance of addressing data quality, integration, and privacy to ensure the ethical application of predictive analytics in clinical settings. The results contribute to the growing body of research on practical big data applications in healthcare, offering valuable recommendations for balancing patient privacy with the benefits of data-driven insights. Ultimately, this research has implications for policy-making, guiding the implementation of predictive models and fostering innovation aimed at improving healthcare outcomes.
Abstract: The advent of the digital era and computer-based remote communications has significantly enhanced the applicability of various sciences over the past two decades, notably data science (DS) and cryptography (CG). Data science involves clustering and categorizing unstructured data, while cryptography ensures security and privacy. Given certain CG laws and requirements mandating fully randomized or pseudonoise outputs from CG primitives and schemes, it appears that CG policies might impede data scientists from working on ciphers or analyzing information systems supporting security and privacy services. However, this study posits that CG does not entirely preclude data scientists from operating in the presence of ciphers, as there are several examples of successful collaborations, including homomorphic encryption schemes, searchable encryption algorithms, secret-sharing protocols, and protocols offering conditional privacy. These instances, along with others, indicate numerous potential solutions for fostering collaboration between DS and CG. Therefore, this study classifies the challenges faced by DS and CG into three distinct groups: challenging problems (which can be conditionally solved and whose solutions are available to use now, e.g., secret-sharing protocols, zero-knowledge proofs, and partially homomorphic encryption algorithms), open problems (for which solutions are believed possible but remain unsolved, e.g., proposing efficient functional encryption algorithms and fully homomorphic encryption schemes), and hard problems (infeasible to solve with current knowledge and tools). Ultimately, the paper addresses specific solutions and outlines future directions for tackling the challenges arising at the intersection of DS and CG, such as providing specific access for DS experts in secret-sharing algorithms, assigning data index dimensions to DS experts in ultra-dimension encryption algorithms, defining some functional keys in functional encryption schemes for DS experts, and giving them limited shares of data for analytics.
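The last of these directions, giving DS experts limited shares of data, can be illustrated with additive secret sharing: any single share is uniformly random and reveals nothing, yet shares can be recombined, and can even be added sharewise to analyze sums without reconstructing the inputs.

```python
# Additive secret sharing over a prime field: one concrete instance of
# the "limited shares of data" collaboration pattern described above.
# The modulus choice and the toy values are illustrative.
import random

P = 2**61 - 1  # a large prime modulus

def share(value, n):
    """Split value into n additive shares modulo P."""
    parts = [random.randrange(P) for _ in range(n - 1)]
    parts.append((value - sum(parts)) % P)
    return parts

def reconstruct(parts):
    """Recover the secret by summing all shares modulo P."""
    return sum(parts) % P

secret = 123456
shares = share(secret, 3)

# Sharewise addition of two shared secrets yields shares of their sum,
# so analysts holding one share each can compute aggregates privately.
other = share(1000, 3)
sum_shares = [(a + b) % P for a, b in zip(shares, other)]
```

This additive scheme requires all shares for reconstruction; threshold variants such as Shamir's scheme relax that at the cost of polynomial interpolation.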