Customer churn remains the key focus of this research, which applies machine learning, an artificial intelligence-based technique. The work rests on feature-based analysis: four main features were selected, based on customer churn behaviour, to draw meaningful conclusions from the data set. The data set, taken from Kaggle, contains more than half a million Amazon fine food reviews. The feature-based analysis is evaluated with a confusion matrix, from which three quantities are derived: precision, recall, and accuracy. Such analysis helps e-commerce businesses pursue real-time growth of specific products, focus sales effort, and identify which products are falling out of favour. After applying the techniques, Support Vector Machine and K-Nearest Neighbour perform better than Random Forest in this particular scenario, and Support Vector Machine performs best in the overall comparison of the feature-based analysis of the Amazon fine food reviews for customer churn.
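As a point of reference for the metrics named above, the short sketch below shows how precision, recall and accuracy can be read off a binary confusion matrix with scikit-learn; it is a minimal illustration with synthetic stand-in features and labels, not the study's actual pipeline or data.

```python
# Minimal sketch (not the authors' code): deriving precision, recall and accuracy
# from a confusion matrix for a binary churn label, using scikit-learn.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

def churn_metrics(y_true, y_pred):
    # Rows = actual class, columns = predicted class: [[TN, FP], [FN, TP]]
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return precision, recall, accuracy

# Synthetic data standing in for the four engineered review features.
X = np.random.rand(1000, 4)          # four selected features, as in the study
y = np.random.randint(0, 2, 1000)    # 1 = churned, 0 = retained (placeholder labels)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)
model = SVC().fit(X_tr, y_tr)
print(churn_metrics(y_te, model.predict(X_te)))
```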
Big data has had significant impacts on our lives, economies, academia and industries over the past decade. The current questions are: What is the future of big data? What era do we live in? This article addresses these questions by looking at meta as an operation and argues that we are living in the era of big intelligence, reasoning from meta(big data) to big intelligence. More specifically, the article analyzes big data from an evolutionary perspective. It overviews data, information, knowledge, and intelligence (DIKI) and reveals their relationships. After analyzing meta as an operation, the article explores meta(DIKI) and its relationships. It reveals 5 Bigs consisting of big data, big information, big knowledge, big intelligence and big analytics. Applying meta to the 5 Bigs, the article infers that Big Data 4.0 = meta(big data) = big intelligence. It then analyzes how intelligent big analytics support big intelligence. The proposed approach might facilitate the research and development of big data, big data analytics, business intelligence, artificial intelligence, and data science.
Explainable Artificial Intelligence (XAI) enhances decision-making and improves rule-based techniques by using more advanced Machine Learning (ML) and Deep Learning (DL) based algorithms. In this paper, we chose e-healthcare systems for efficient decision-making and data classification, especially in data security, data handling, diagnostics, laboratories, and decision-making. Federated Machine Learning (FML) is a new and advanced technology that helps maintain privacy for Personal Health Records (PHR) and handle large amounts of medical data effectively. In this context, XAI, along with FML, increases efficiency and improves the security of e-healthcare systems. The experiments show efficient system performance by implementing a federated averaging algorithm on an open-source Federated Learning (FL) platform. The experimental evaluation demonstrates the accuracy rate with an epoch size of 5, a batch size of 16, and 5 clients, which yields a higher accuracy rate (19,104). We conclude the paper by discussing the existing gaps and future work in e-healthcare systems.
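For readers unfamiliar with the aggregation step reported above, the following NumPy sketch illustrates a plain federated averaging (FedAvg) update in which the server combines client weights in proportion to their local sample counts; the client counts and model shapes are placeholders, and this is not the platform code used in the paper.

```python
# Hedged sketch of the federated averaging (FedAvg) aggregation step in NumPy.
# Each client trains locally and returns its weights plus its number of local samples;
# the server combines them into a new global model, weighted by data volume.
import numpy as np

def federated_average(client_weights, client_sizes):
    """client_weights: one list of ndarrays per client, all with matching shapes.
    client_sizes: number of training samples held by each client."""
    total = float(sum(client_sizes))
    num_layers = len(client_weights[0])
    averaged = []
    for layer in range(num_layers):
        acc = np.zeros_like(client_weights[0][layer])
        for weights, size in zip(client_weights, client_sizes):
            acc += weights[layer] * (size / total)
        averaged.append(acc)
    return averaged

# Toy example: 5 clients, each holding a two-layer model of the same shape.
rng = np.random.default_rng(0)
clients = [[rng.normal(size=(4, 3)), rng.normal(size=(3,))] for _ in range(5)]
sizes = [120, 80, 200, 150, 90]   # hypothetical per-client record counts
global_weights = federated_average(clients, sizes)
print([w.shape for w in global_weights])
```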
One of the biggest dangers to society today is terrorism, where attacks have become one of the most significant risks to international peace and national security. Big data, information analysis, and artificial intelligence (AI) have become the basis for making strategic decisions in many sensitive areas, such as fraud detection, risk management, medical diagnosis, and counter-terrorism. However, there is still a need to assess how terrorist attacks are related, initiated, and detected. For this purpose, we propose a novel framework for classifying and predicting terrorist attacks. The proposed framework posits that neglected text attributes included in the Global Terrorism Database (GTD) can influence the accuracy of the model's classification of terrorist attacks, where each part of the data can provide vital information to enrich the ability of classifier learning. Each data point in a multiclass taxonomy has one or more tags attached to it, referred to as "related tags." We applied machine learning classifiers to classify terrorist attack incidents obtained from the GTD. A transformer-based technique called DistilBERT extracts and learns contextual features from text attributes to acquire more information from text data. The extracted contextual features are combined with the "key features" of the dataset and used to perform the final classification. The study explored different experimental setups with various classifiers to evaluate the model's performance. The experimental results show that the proposed framework outperforms the latest techniques for classifying terrorist attacks with an accuracy of 98.7% using a combined feature set and an extreme gradient boosting classifier.
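A minimal sketch of the kind of pipeline described above, assuming the Hugging Face transformers and xgboost packages: contextual embeddings are extracted from text attributes with a pretrained DistilBERT model, concatenated with numeric "key features", and passed to an extreme gradient boosting classifier. The text snippets, feature columns and labels are illustrative placeholders, not GTD data or the authors' code.

```python
# Hedged sketch: DistilBERT embeddings for text attributes + tabular "key features",
# fed to an XGBoost classifier.
import numpy as np
import torch
from transformers import DistilBertTokenizerFast, DistilBertModel
from xgboost import XGBClassifier

tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")
bert = DistilBertModel.from_pretrained("distilbert-base-uncased").eval()

def embed_texts(texts, batch_size=16):
    """Mean-pooled DistilBERT embeddings for a list of incident summaries."""
    vectors = []
    with torch.no_grad():
        for i in range(0, len(texts), batch_size):
            enc = tokenizer(texts[i:i + batch_size], padding=True,
                            truncation=True, max_length=128, return_tensors="pt")
            hidden = bert(**enc).last_hidden_state            # (B, T, 768)
            vectors.append(hidden.mean(dim=1).cpu().numpy())  # simple mean pooling
    return np.vstack(vectors)

# Placeholder data standing in for GTD text attributes and key features.
texts = ["armed assault on a police checkpoint", "bombing of a rail line"] * 50
key_features = np.random.rand(len(texts), 10)     # hypothetical numeric attributes
labels = np.random.randint(0, 4, len(texts))      # placeholder multiclass labels

X = np.hstack([embed_texts(texts), key_features])
clf = XGBClassifier(n_estimators=200, max_depth=6, eval_metric="mlogloss")
clf.fit(X, labels)
```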
This article delves into the intricate relationship between big data, cloud computing, and artificial intelligence, shedding light on their fundamental attributes and interdependence. It explores the seamless amalgamation of AI methodologies within cloud computing and big data analytics, encompassing the development of a cloud computing framework built on the robust foundation of the Hadoop platform, enriched by AI learning algorithms. Additionally, it examines the creation of a predictive model empowered by tailored artificial intelligence techniques. Rigorous simulations are conducted to extract valuable insights, facilitating method evaluation and performance assessment, all within the dynamic Hadoop environment, thereby reaffirming the precision of the proposed approach. The results and analysis section reveals compelling findings derived from comprehensive simulations within the Hadoop environment. These outcomes demonstrate the efficacy of the Sport AI Model (SAIM) framework in enhancing the accuracy of sports-related outcome predictions. Through meticulous mathematical analyses and performance assessments, integrating AI with big data emerges as a powerful tool for optimizing decision-making in sports. The discussion section extends the implications of these results, highlighting the potential for SAIM to revolutionize sports forecasting, strategic planning, and performance optimization for players and coaches. The combination of big data, cloud computing, and AI offers a promising avenue for future advancements in sports analytics. This research underscores the synergy between these technologies and paves the way for innovative approaches to sports-related decision-making and performance enhancement.
In the context of the digital economy, the volume of data is growing exponentially, the types of data are becoming more diverse, and its value is increasing, often providing critical support for decision-making by enterprises and government institutions. Effective data governance is a crucial tool for maximizing data value and mitigating data risks. This article examines the application of data governance models in the digital economy, aiming to offer technical insights and guidance for data-driven enterprises and governments in China. By elevating their data governance standards in the new era, this approach will comprehensively enhance their ability to harness digital value and ensure security in the digital economy, ultimately driving the continued growth of both the digital economy and society.
This paper describes the function, structure and working status of the data buffer unit DBU, one of the most important functional units on ITM-1. It also discusses DBU's support to the multiprocessor system and the Prolog language.
With the continuous development of the social economy, science and technology are also progressing, and the era of big data, which relies on Internet technology, has arrived in an all-round way. On the basis of the development of cloud computing and Internet technology, artificial intelligence technology has emerged as the times require, and it offers further advantages. Applying it to computer network technology can effectively improve the data-processing efficiency and quality of computer networks and bring greater convenience to people's lives and production. This paper studies and analyzes the practical application requirements of computer networks and discusses the application characteristics and timeliness of artificial intelligence technology.
Artificial intelligence is a new technological science that researches and develops theories, methods, technologies and application systems for simulating, extending and expanding human intelligence. It simulates certain human thought processes and intelligent behaviors (such as learning, reasoning, thinking, planning, etc.), and produces a new type of intelligent machine that can respond in a similar way to human intelligence. In the past 30 years, it has achieved rapid development in various industries and related disciplines such as manufacturing, medical care, finance, and transportation.
Crop improvement is crucial for addressing the global challenges of food security and sustainable agriculture. Recent advancements in high-throughput phenotyping (HTP) technologies and artificial intelligence (AI) have revolutionized the field, enabling rapid and accurate assessment of crop traits on a large scale. The integration of AI and machine learning algorithms with HTP data has unlocked new opportunities for crop improvement. AI algorithms can analyze and interpret large datasets, and extract meaningful patterns and correlations between phenotypic traits and genetic factors. These technologies have the potential to revolutionize plant breeding programs by providing breeders with efficient and accurate tools for trait selection, thereby reducing the time and cost required for variety development. However, further research and collaboration are needed to overcome the existing challenges and fully unlock the power of HTP and AI in crop improvement. By leveraging AI algorithms, researchers can efficiently analyze phenotypic data, uncover complex patterns, and establish predictive models that enable precise trait selection and crop breeding. The aim of this review is to explore the transformative potential of integrating HTP and AI in crop improvement. This review will encompass an in-depth analysis of recent advances and applications, highlighting the numerous benefits and challenges associated with HTP and AI.
Mechatronic product development is a complex and multidisciplinary field that encompasses various domains, including, among others, mechanical engineering, electrical engineering, control theory and software engineering. The integration of artificial intelligence technologies is revolutionizing this domain, offering opportunities to enhance design processes, optimize performance, and leverage vast amounts of knowledge. However, human expertise remains essential in contextualizing information, considering trade-offs, and ensuring ethical and societal implications are taken into account. This paper therefore explores the existing literature regarding the application of artificial intelligence as a comprehensive database, decision support system, and modeling tool in mechatronic product development. It analyzes the benefits of artificial intelligence in enabling domain linking, replacing human expert knowledge, improving prediction quality, and enhancing intelligent control systems. For this purpose, a consideration of the V-cycle takes place, a standard in mechatronic product development. Along this, an initial assessment of the AI potential is shown and important categories of AI support are formed. This is followed by an examination of the literature with regard to these aspects. As a result, the integration of artificial intelligence in mechatronic product development opens new possibilities and transforms the way innovative mechatronic systems are conceived, designed, and deployed. However, the approaches are only taking place selectively, and a holistic view of the development processes and the potential for robust and context-sensitive artificial intelligence along them is still needed.
In this in-depth exploration, I delve into the complex implications and costs of cybersecurity breaches. Venturing beyond just the immediate repercussions, the research unearths both the overt and concealed long-term consequences that businesses encounter. This study integrates findings from various research, including quantitative reports, drawing upon real-world incidents faced by both small and large enterprises. This investigation emphasizes the profound intangible costs, such as trade name devaluation and potential damage to brand reputation, which can persist long after the breach. By collating insights from industry experts and a myriad of research, the study provides a comprehensive perspective on the profound, multi-dimensional impacts of cybersecurity incidents. The overarching aim is to underscore the often-underestimated scope and depth of these breaches, emphasizing the entire timeline post-incident and the urgent need for fortified preventative and reactive measures in the digital domain.
Artificial Intelligence (AI) has gained popularity in applications aimed at containing the COVID-19 pandemic. Several AI techniques provide efficient mechanisms for handling pandemic situations. AI methods, protocols, data sets, and various validation mechanisms empower users towards proper decision-making and procedures to handle the situation. Despite so many tools, there still exist conditions in which AI has a long way to go. To increase the adaptability and potential of these techniques, a combination of AI and big data is currently gaining popularity. This paper surveys and analyzes the methods within the various computational paradigms used by different researchers and national governments, such as China and South Korea, to fight against this pandemic. The process of vaccine development requires multiple medical experiments and the analysis of datasets from different parts of the world. Deep learning and the Internet of Things (IoT) have revolutionized the field of disease diagnosis and disease prediction. Accurate observations from different datasets across the world have empowered the process of drug development and drug repurposing. To overcome the issues generated by the pandemic, the use of sophisticated computing paradigms such as AI, Machine Learning (ML), deep learning, robotics and big data is essential.
Biomedical data classification has become a hot research topic in recent years, thanks to the latest technological advancements made in healthcare. Biomedical data is usually examined by physicians for the decision-making process in patient treatment. Since manual diagnosis is a tedious and time-consuming task, numerous automated models using Artificial Intelligence (AI) techniques have been presented so far. With this motivation, the current research work presents a novel Biomedical Data Classification using Cat and Mouse Based Optimizer with AI (BDC-CMBOAI) technique. The aim of the proposed BDC-CMBOAI technique is to determine the occurrence of diseases using biomedical data. Besides, the proposed BDC-CMBOAI technique involves the design of a Cat and Mouse Optimizer-based Feature Selection (CMBO-FS) technique to derive a useful subset of features. In addition, a Ridge Regression (RR) model is also utilized as a classifier to identify the existence of disease. The novelty of the current work lies in its design of the CMBO-FS model for data classification. Moreover, the CMBO-FS technique removes unwanted features and boosts the classification accuracy. The results of the experimental analysis accomplished by the BDC-CMBOAI technique on a benchmark medical dataset established the supremacy of the proposed technique under different evaluation measures.
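The CMBO-FS optimizer itself is not reproduced here; as a hedged illustration of the general wrapper-style pattern the abstract describes (a search over feature subsets scored by a ridge classifier), the sketch below substitutes a simple random-mask search on a public benchmark dataset.

```python
# Hedged illustration of wrapper-style feature selection with a ridge classifier.
# A random binary-mask search stands in for the Cat and Mouse Based Optimizer.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)   # stand-in biomedical dataset
rng = np.random.default_rng(1)

def score_mask(mask):
    # Cross-validated accuracy of a ridge classifier on the selected columns.
    if mask.sum() == 0:
        return 0.0
    clf = RidgeClassifier(alpha=1.0)
    return cross_val_score(clf, X[:, mask], y, cv=5).mean()

best_mask, best_score = None, -1.0
for _ in range(50):                          # budget of candidate subsets
    mask = rng.random(X.shape[1]) < 0.5      # random feature subset
    s = score_mask(mask)
    if s > best_score:
        best_mask, best_score = mask, s

print(f"selected {best_mask.sum()} of {X.shape[1]} features, CV accuracy {best_score:.3f}")
```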
The object detection technique depends on various methods for duplicating the dataset without adding more images. Data augmentation is a popular method that assists deep neural networks in achieving better generalization performance and can be seen as a type of implicit regularization. This method is recommended in the case where the amount of high-quality data is limited and gaining new examples is costly and time-consuming. In this paper, we trained YOLOv7 with a dataset that is part of the Open Images dataset and has 8,600 images with four classes (Car, Bus, Motorcycle, and Person). We used five different data augmentation techniques to duplicate and improve our dataset. The performance of the object detection algorithm on the proposed augmented dataset, using combinations of two and three types of data augmentation, was compared with the result on the original data. The evaluation result for the augmented data is promising for every object, and each kind of data augmentation gives a different improvement. The mAP@.5 of all classes was 76%, and the F1-score was 74%. The proposed method increased the mAP@.5 value by +13% and the F1-score by +10% for all objects.
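The paper's five specific augmentation techniques are not listed in this abstract, so the sketch below shows a representative bounding-box-aware augmentation pipeline built with the albumentations library (flip, brightness/contrast, affine scale/rotate, blur, hue/saturation); the image path, boxes and labels are hypothetical placeholders.

```python
# Hedged sketch of bounding-box-aware data augmentation with albumentations,
# producing extra training samples in YOLO box format.
import albumentations as A
import cv2

augment = A.Compose(
    [
        A.HorizontalFlip(p=0.5),
        A.RandomBrightnessContrast(p=0.5),
        A.Affine(scale=(0.9, 1.1), rotate=(-10, 10), translate_percent=0.05, p=0.5),
        A.Blur(blur_limit=3, p=0.3),
        A.HueSaturationValue(p=0.3),
    ],
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)

image = cv2.imread("sample.jpg")               # hypothetical input image
bboxes = [[0.5, 0.5, 0.2, 0.3]]                # YOLO format: cx, cy, w, h (normalized)
class_labels = ["Car"]

out = augment(image=image, bboxes=bboxes, class_labels=class_labels)
aug_image, aug_bboxes = out["image"], out["bboxes"]   # ready to append to the dataset
```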
Mobile edge computing (MEC) provides effective cloud services and functionality at the edge device to improve the quality of service (QoS) of end users by offloading high-computation tasks. Currently, the introduction of deep learning (DL) and hardware technologies provides methods for detecting the current traffic status, data offloading, and cyberattacks in MEC. This study introduces an artificial intelligence with metaheuristic based data offloading technique for Secure MEC (AIMDO-SMEC) systems. The proposed AIMDO-SMEC technique incorporates an effective traffic prediction module using Siamese Neural Networks (SNN) to determine the traffic status in the MEC system. Also, an adaptive sampling cross entropy (ASCE) technique is utilized for data offloading in MEC systems. Moreover, the modified salp swarm algorithm (MSSA) with the extreme gradient boosting (XGBoost) technique is implemented for the identification and classification of cyberattacks that exist in MEC systems. For examining the enhanced outcomes of the AIMDO-SMEC technique, a comprehensive experimental analysis is carried out, and the results demonstrate the enhanced outcomes of the AIMDO-SMEC technique with a minimal completion time of tasks (CTT) of 0.680.
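The exact SNN architecture is not specified in the abstract; the following PyTorch sketch shows one plausible Siamese setup for comparing two traffic-feature windows with a contrastive loss, using placeholder dimensions and random data rather than the paper's model or dataset.

```python
# Hedged PyTorch sketch of a small Siamese network that scores the similarity of two
# traffic-feature windows; not the paper's architecture.
import torch
import torch.nn as nn

class SiameseTraffic(nn.Module):
    def __init__(self, in_dim=20, embed_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, a, b):
        ea, eb = self.encoder(a), self.encoder(b)
        # Euclidean distance between embeddings: small distance = similar traffic status
        return torch.norm(ea - eb, dim=1)

def contrastive_loss(distance, label, margin=1.0):
    """label = 1 when the two windows share the same traffic status, else 0."""
    pos = label * distance.pow(2)
    neg = (1 - label) * torch.clamp(margin - distance, min=0).pow(2)
    return (pos + neg).mean()

# Toy batch: pairs of 20-dimensional traffic feature vectors with similarity labels.
model = SiameseTraffic()
a, b = torch.randn(8, 20), torch.randn(8, 20)
labels = torch.randint(0, 2, (8,)).float()
loss = contrastive_loss(model(a, b), labels)
loss.backward()
```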
The implementation of artificial intelligence (AI) in a smart society, in which the analysis of human habits is mandatory, requires automated data scheduling and analysis using smart applications, a smart infrastructure, smart systems, and a smart network. In this context, which is characterized by a large gap between training and operative processes, a dedicated method is required to manage and extract the massive amount of data and the related information mining. The method presented in this work aims to reduce this gap with near-zero-failure advanced diagnostics (AD) for smart management, which is exploitable in any context of Society 5.0, thus reducing the risk factors at all management levels and ensuring quality and sustainability. We have also developed innovative applications for a human-centered management system to support scheduling in the maintenance of operative processes, for reducing training costs, for improving production yield, and for creating a human–machine cyberspace for smart infrastructure design. The results obtained in 12 international companies demonstrate a possible global standardization of operative processes, leading to the design of a near-zero-failure intelligent system that is able to learn and upgrade itself. Our new method provides guidance for selecting the new generation of intelligent manufacturing and smart systems in order to optimize human–machine interactions, with the related smart maintenance and education.
In this paper, we conduct research on big data and artificial-intelligence-aided decision-making mechanisms, with applications to program innovation for video websites' homemade content. Homemade video shows open new possibilities for content production on new-media platform sites and give traditional media a breakthrough point in the Internet age. A site's homemade video programming helps reduce the demand for copyright purchases, lower costs, and avoid homogeneous competition; at the same time, it enriches advertising and marketing, improves the profit model, organically combines content production with operation, and completes the strategic transformation. Building on these advantages, a site's homemade video programs can form a brand with greater brand influence. Our later research provides a literature survey of the related issues.
The history of educational technology in the last 50 years contains few instances of dramatic improvements in learning based on the adoption of a particular technology. An example involving artificial intelligence occurred in the 1990s with the development of intelligent tutoring systems (ITSs). The success of ITSs was limited to well-defined and relatively simple declarative and procedural learning tasks (e.g., learning how to write a recursive function in LISP; doing multi-column addition), and the improvements that were observed tended to be more limited than promised (e.g., one standard deviation improvement at best rather than the larger improvement that had been promised). Still, there was some progress in terms of how to conceptualize learning. A seldom-documented limitation was the notion of viewing learning only from content and cognitive perspectives (i.e., in terms of memory limitations, prior knowledge, bug libraries, learning hierarchies and sequences, etc.). Little attention was paid to education conceived more broadly than developing specific cognitive skills with highly constrained problems. New technologies offer the potential to create dynamic and multi-dimensional models of a particular learner, and to track large data sets of learning activities, resources, interventions, and outcomes over a great many learners. Using those data to personalize learning for a particular learner developing knowledge, competence and understanding in a specific domain of inquiry is finally a real possibility. While significant progress is clearly possible, the reality is not so promising. There are many as-yet-unmet challenges, some of which will be mentioned in this paper. A persistent worry is that educational technologists and computer scientists will again promise too much, too soon, at too little cost and with too little effort and attention to the realities in schools and universities.
The Internet of Things (IoT) has received much attention over the past decade. With the rapid increase in the use of smart devices, we are now able to collect big data on a daily basis. The data we are gathering (and the related problems) are becoming more complex and uncertain. Researchers have therefore turned to artificial intelligence (AI) to efficiently deal with the problems created by big data.