Information security and quality management are often considered two different fields. However, organizations must be mindful of how software security may affect quality control. This paper examines and promotes methods through which secure software development processes can be integrated into the Systems Software Development Life-cycle (SDLC) to improve system quality. Cyber-security and quality assurance are both involved in reducing risk. Software security teams work to reduce security risks, whereas quality assurance teams work to decrease risks to quality. There is a need for clear standards, frameworks, processes, and procedures to be followed by organizations to ensure high-level quality while reducing security risks. This research uses a survey of industry professionals to help identify best practices for developing software with fewer defects from the early stages of the SDLC to improve both the quality and security of software. Results show that there is a need for better security awareness among all members of software development teams.
In the cloud environment, ensuring a high level of data security is in high demand. Data planning storage optimization is part of the whole security process in the cloud environment. It enables data security by avoiding the risk of data loss and data overlapping. The development of data flow scheduling approaches for the cloud environment that take security parameters into account is insufficient. In our work, we propose a data scheduling model for the cloud environment. The model is made up of three parts that together help dispatch user data flows to the appropriate cloud VMs. The first component is the collector agent, which must periodically collect information on the state of the network links. The second is the monitoring agent, which must then analyze and classify the state of each link, make a decision, and finally transmit this information to the scheduler. The third is the scheduler, which must use this information to transfer user data while ensuring fair distribution and reliable paths. It should be noted that each part of the proposed model requires the development of its own algorithms. In this article, we are interested in the development of data transfer algorithms that provide fair distribution while taking stable link states into account. These algorithms are based on the grouping of transmitted files and an iterative method. The proposed algorithms show good performance in obtaining an approximate solution to the studied problem, which is NP-hard. The experimental results show that the best algorithm is the half-grouped minimum excluding (HME) algorithm, with a percentage of 91.3%, an average deviation of 0.042, and an execution time of 0.001 s.
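The HME algorithm itself is not detailed in this abstract, so the following is only a minimal Python sketch of the general idea it describes: group the files to be transferred and dispatch each group over the links judged stable, keeping the assigned load balanced across them. The link names, stability scores, threshold, and file sizes are illustrative placeholders, not values from the paper.

```python
# Minimal sketch (not the paper's HME algorithm): group user files and assign
# each group to the currently least-loaded of the links judged "stable".
def schedule_groups(file_sizes, links, group_size=2, min_stability=0.7):
    """Greedy, fairness-oriented assignment of file groups to stable links."""
    # Sort files largest-first so big transfers are spread out early.
    files = sorted(file_sizes, reverse=True)
    # Build groups of `group_size` files each.
    groups = [files[i:i + group_size] for i in range(0, len(files), group_size)]
    # Keep only links whose stability score passes the threshold.
    assigned = {name: 0.0 for name, score in links.items() if score >= min_stability}
    plan = []
    for group in groups:
        # Fairness: pick the stable link with the least data assigned so far.
        target = min(assigned, key=assigned.get)
        assigned[target] += sum(group)
        plan.append((group, target))
    return plan

links = {"link_A": 0.95, "link_B": 0.80, "link_C": 0.40}   # stability scores
print(schedule_groups([120, 80, 60, 40, 30, 10], links))
```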
Ontology-Driven Analytic Models for Pension Management are sophisticated approaches that integrate the principles of ontology and analytics to optimize the management and decision-making processes within pension systems. While Ontology-Driven Analytic Models offer significant benefits for pension management, there are also challenges associated with implementing and utilizing the models. Developing a comprehensive and accurate ontology for pension management requires a deep understanding of the domain, including regulatory frameworks, investment strategies, retirement planning, and the integration of data from heterogeneous sources. Integrating these data into a cohesive ontology can be challenging. This research work leverages semantic ontology as an approach for the structured representation of knowledge about concepts and their relationships, and applies it to analyze and optimize decision support for pension management. The proposed ontology presents a formal and explicit specification of concepts (classes), their attributes, and the relationships between them, and provides a shared and standardized understanding of the domain, enabling precise communication and knowledge representation for decision support. The ontology deploys computational frameworks and analytic models to assess and evaluate data, generate insights, predict future pension fund performance, and assess risk exposure. The research adopts a Reasoner, SPARQL queries, and an OWL Visualizer executed over a Java IDE for modelling the ontology-driven analytics. The approach encapsulates and integrates semantic ontologies with analytical models to enhance the accuracy, contextuality, and comprehensiveness of analyses and decisions within pension systems.
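As a flavour of the kind of decision-support query such an ontology enables, here is a small hedged sketch using Python's rdflib rather than the authors' Java-based Reasoner and OWL Visualizer; the PensionFund class and hasRiskLevel property are hypothetical names invented for illustration.

```python
# Tiny illustrative sketch with rdflib; classes/properties are hypothetical.
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/pension#")
g = Graph()
g.add((EX.GlobalEquityFund, RDF.type, EX.PensionFund))
g.add((EX.GlobalEquityFund, EX.hasRiskLevel, Literal("high")))
g.add((EX.GovBondFund, RDF.type, EX.PensionFund))
g.add((EX.GovBondFund, EX.hasRiskLevel, Literal("low")))

# Decision-support style query: list funds with a low risk exposure.
query = """
PREFIX ex: <http://example.org/pension#>
SELECT ?fund WHERE {
    ?fund a ex:PensionFund ;
          ex:hasRiskLevel "low" .
}
"""
for row in g.query(query):
    print(row.fund)
```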
Fire alarm sensors require highly predictive variables to ensure accurate detection, injury prevention, and loss prevention. Bayesian networks can aid in enhancing early fire detection capabilities and reducing the frequency of erroneous fire alerts, thereby enhancing the effectiveness of numerous safety monitoring systems. This research explores the development of optimized probabilistic graphical models for the discretization thresholds of alarm system predictor variables. The study presents a statistical model framework that increases the efficacy of fire detection by predicting the discretization thresholds of the alarm system predictor variable fluctuations used to detect the onset of fire. The work applies Bayesian networks and probabilistic graphical models to reveal the specific characteristics required to cope with fire detection strategies and patterns. The adopted methodology utilizes a combination of prior knowledge and statistical data to draw conclusions from observations. Utilizing domain knowledge to compute conditional dependencies between network variables enabled predictions to be made through the application of specialized analytical and simulation techniques.
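The paper's actual network structure and probabilities are not given here, so the following is a hand-worked two-node example (in Python, with invented numbers) of the underlying calculation: after discretizing a smoke-sensor reading at a chosen threshold, Bayes' rule combines the prior probability of fire with that threshold's detection and false-alarm rates.

```python
# Hand-worked two-node example (illustrative numbers, not the paper's model):
# a prior P(Fire) and P(reading above threshold | Fire) after discretizing the
# smoke-sensor reading at a chosen threshold; Bayes' rule gives the posterior.
p_fire = 0.01                      # prior probability of a fire
p_alarm_given_fire = 0.95          # reading exceeds threshold when fire present
p_alarm_given_no_fire = 0.08       # false-alarm rate at this threshold

p_alarm = (p_alarm_given_fire * p_fire
           + p_alarm_given_no_fire * (1.0 - p_fire))
p_fire_given_alarm = p_alarm_given_fire * p_fire / p_alarm
print(f"P(fire | reading above threshold) = {p_fire_given_alarm:.3f}")
# Lowering the threshold raises p_alarm_given_fire but also the false-alarm
# rate, which is exactly the trade-off the discretization thresholds control.
```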
Enterprise Information System management has become an increasingly vital factor for many firms. Several organizations have encountered problems when attempting to evaluate organizational performance. Measurement of performance metrics is a key challenge for many firms. In order to preserve relevance and adaptability in competitive markets, it has become essential to respond proactively to complex events through informed decision-making that is supported by technology. Therefore, the objective of this study was to apply neural networks to the modeling, simulation, and forecasting of the effects of the performance indicators of Enterprise Information Systems on the achievement of corporate objectives and value creation. A set of quantifiable and sizeable conditionally independent associations was derived using a simplified joint probability distribution technique. Bayesian Neural Networks were utilized to describe the links between random variables (features) and to concisely and easily specify the joint probability distribution. The research demonstrated that Bayesian networks could effectively explore complex logical linkages by employing probability to represent uncertainty and probabilistic rules, and by applying impact models from Bayesian taxonomies to achieve learning and reasoning processes.
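As a minimal numeric illustration of how a Bayesian network specifies a joint distribution compactly through conditional independence, the sketch below factorizes three invented performance indicators; the variables and probabilities are hypothetical and not taken from the study.

```python
# Minimal numeric illustration (invented variables and numbers):
# P(avail, timely, goal) = P(avail) * P(timely | avail) * P(goal | avail, timely)
p_avail = {True: 0.9, False: 0.1}                       # system availability KPI
p_timely = {True: {True: 0.8, False: 0.2},              # P(timely | avail)
            False: {True: 0.3, False: 0.7}}
p_goal = {(True, True): 0.9, (True, False): 0.6,        # P(goal | avail, timely)
          (False, True): 0.5, (False, False): 0.2}

# Marginal probability that the corporate objective is achieved.
p_goal_total = sum(p_avail[a] * p_timely[a][t] * p_goal[(a, t)]
                   for a in (True, False) for t in (True, False))
print(round(p_goal_total, 3))
```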
The efficacy of an automated collision detection system is contingent upon the caliber and volume of data at its disposal. If the data are deficient, inconsistent, or erroneous, the system may produce false positive or false negative outcomes, thereby compromising its credibility. False positives occur when the system erroneously identifies normal activity as a collision risk. False negatives arise when the system fails to identify a genuine collision risk. Collision detection systems are required to handle substantial volumes of data in real time and must be capable of analyzing relationships between different objects. The intricate nature of collisions can pose difficulties in devising and executing efficient systems for their detection. The present study proposes an automated anti-collision system that utilizes sensor devices to detect objects and activate an alert mechanism when the vehicle comes into close proximity to an object. The study introduces a novel methodology for mitigating vehicular accidents by implementing a combined system that integrates collision detection and alert mechanisms. The proposed system comprises an ultrasonic sensor, a microcontroller, and an alarm system. The sensor transmits a signal to the microcontroller, which in turn sends a signal to the warning unit. The warning unit is designed to prevent potential accidents by emitting an audible warning signal through a buzzer. Additionally, the distance information is displayed on an LCD screen. The Proteus Design Suite is utilized for simulation, while the Arduino platform is employed for implementation.
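The prototype is built around an ultrasonic sensor and a microcontroller, but the control loop it runs can be sketched in a few lines; the version below is in Python rather than Arduino C++, and the 50 cm warning threshold is an illustrative assumption rather than the paper's value.

```python
# Rough sketch of the control loop such a prototype runs (Python stand-in for
# Arduino C++; the 50 cm threshold and readings are illustrative).
WARN_DISTANCE_CM = 50

def handle_reading(distance_cm):
    """Decide whether to sound the buzzer and what to show on the LCD."""
    buzzer_on = distance_cm <= WARN_DISTANCE_CM
    lcd_text = f"Obstacle: {distance_cm} cm" + (" !" if buzzer_on else "")
    return buzzer_on, lcd_text

for reading in [120, 80, 49, 20]:          # simulated ultrasonic readings
    buzzer, lcd = handle_reading(reading)
    print(lcd, "| buzzer:", "ON" if buzzer else "off")
```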
This research involved an exploratory evaluation of the dynamics of vehicular traffic on a road network across two traffic light-controlled junctions. The study uses the case of a one-kilometer road system modelled in AnyLogic version 8.8.4. AnyLogic is a multi-paradigm simulation tool that supports three main simulation methodologies: discrete event simulation, agent-based modeling, and system dynamics modeling. The model is used to evaluate the implications of stochastic time-based vehicle variables on the general efficiency of road use. Road use efficiency, as reflected in this model, is based on the percentage of entering vehicles that exit the model within a one-hour simulation period. The study deduced that, for the model under review, an increase in entry-point time delay has a dominant influence on the efficiency of road use, far beyond any other consideration. This study therefore presents a novel approach that leverages discrete event simulation to facilitate efficient road management with a focus on optimum road use efficiency. The study also determined that the inclusion of appropriate random parameters to reflect road use activities at critical event points in a simulation can help in the effective representation of authentic traffic models. The AnyLogic simulation software leverages the Classic DEVS and Parallel DEVS formalisms to achieve these objectives.
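The AnyLogic model cannot be reproduced here, but a minimal discrete-event sketch of the same question, written with Python's SimPy library, shows how an entry delay and two signal-controlled junctions determine how many vehicles exit within the simulated hour; all timings are illustrative, not the paper's calibrated values.

```python
# Minimal discrete-event sketch (SimPy, not AnyLogic): vehicles enter with a
# random headway, pass two junctions modelled as single-server resources, and
# we count how many exit within one simulated hour.  Timings are illustrative.
import random
import simpy

def vehicle(env, junctions, exited):
    for j in junctions:                                 # pass both junctions in order
        with j.request() as req:
            yield req
            yield env.timeout(random.uniform(5, 15))    # service time at junction (s)
    exited.append(env.now)

def source(env, junctions, exited, mean_headway):
    while True:
        yield env.timeout(random.expovariate(1.0 / mean_headway))
        env.process(vehicle(env, junctions, exited))

random.seed(1)
env = simpy.Environment()
junctions = [simpy.Resource(env, capacity=1) for _ in range(2)]
exited = []
env.process(source(env, junctions, exited, mean_headway=10))   # mean 10 s entry delay
env.run(until=3600)                                            # one simulated hour
print(f"vehicles exiting within the hour: {len(exited)}")
```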
In vehicular ad hoc networks (VANETs), the topology information (TI) is updated frequently due to vehicle mobility. These frequent changes in topology increase the topology maintenance overhead. To reduce the control message overhead, cluster-based routing schemes have been proposed. In cluster-based routing schemes, the nodes are divided into different virtual groups, and each group (logical node) is considered a cluster. The topology changes are accommodated within each cluster, and broadcasting TI to the whole VANET is not required. The cluster head (CH) is responsible for managing the communication of a node with other nodes outside the cluster. However, transmitting real-time data via a CH may cause delays in VANETs. Such real-time data require quick service and should be routed through the shortest path when quality of service (QoS) is required. This paper proposes a hybrid scheme which transmits time-critical data through the QoS shortest path and normal data through CHs. In this way, real-time data are delivered efficiently to the destination on time. Similarly, routine data are transmitted through CHs to reduce the topology maintenance overhead. The work is validated through a series of simulations, and results show that the proposed scheme outperforms existing algorithms in terms of topology maintenance overhead, QoS, and real-time and routine packet transmission.
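Only the dispatch decision of such a hybrid scheme is sketched below, using Python's networkx: time-critical packets take the delay-weighted shortest path, while routine packets are simply handed to the cluster head. The topology, node names, and delay values are invented for illustration.

```python
# Sketch of the dispatch decision only (topology and delays are invented):
# time-critical packets take the delay-weighted shortest path, routine packets
# are handed to the cluster head for forwarding.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("src", "A", 4), ("src", "CH", 1), ("A", "dst", 3),
    ("CH", "B", 5), ("B", "dst", 6),
], weight="delay")

def route(packet_is_time_critical, src="src", dst="dst", cluster_head="CH"):
    if packet_is_time_critical:
        return nx.shortest_path(G, src, dst, weight="delay")   # QoS path
    return [src, cluster_head]        # normal data: forward via the CH

print(route(True))    # ['src', 'A', 'dst']
print(route(False))   # ['src', 'CH']
```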
The traditional roles of a university are teaching and research, with the aim of developing society and contributing positively to national economic development by producing skilled and well-tutored graduates. However, recruitment by these higher institutions is too reliant on the eligibility evidenced by candidates' resumes, while neglecting the suitability reflected in their online research activity and publications. This study identifies insights into recruitment trends in higher institutions of learning and uses Artificial Intelligence to produce a more rounded and balanced decision-making process that caters for both eligibility and suitability. The methodology employs a machine learning process that uses Multinomial Naïve Bayes to train the model and the VADER sentiment analyzer for testing and accuracy evaluation. The datasets used contained resume instances as well as author publication information. The results show a score of 83.9% for the model as well as a sentiment analysis score of 1, indicating an overall positive sentiment. The results show that sentiment analysis can help educational institutions improve their recruitment models and attract more suitable candidates for such roles.
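A toy sketch of the two components the study combines is given below, using scikit-learn's Multinomial Naive Bayes and the vaderSentiment package; the resume snippets and labels are made up and not from the study's datasets.

```python
# Toy sketch of the two components (data is invented, not the study's datasets):
# a Multinomial Naive Bayes classifier over text, plus VADER sentiment scoring.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

resumes = ["published three journal papers on machine learning",
           "managed retail store inventory and sales",
           "taught undergraduate courses and supervised research projects",
           "drove delivery trucks across the region"]
labels = [1, 0, 1, 0]                      # 1 = suitable for an academic role

vec = CountVectorizer().fit(resumes)
model = MultinomialNB().fit(vec.transform(resumes), labels)
print(model.predict(vec.transform(["supervised postgraduate research students"])))

analyzer = SentimentIntensityAnalyzer()
print(analyzer.polarity_scores("excellent, well-cited publication record"))
```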
The problem of traffic congestion is a significant phenomenon that has had a substantial impact on the transportation system within the country. It has given rise to numerous complications, particularly where emergency situations occur at traffic light intersections that are consistently congested with a high volume of vehicles. The traffic light controller system implemented here is designed to address this problem. The purpose of the system was to facilitate the operation of a 3-way traffic control light and give priority to emergency vehicles using a Radio Frequency Identification (RFID) sensor and a Reduced Instruction Set Computing (RISC) architecture-based microcontroller. This research work involved designing a system to mitigate the accidents commonly observed at traffic light intersections, where vehicles often need to maneuver in order to make way for emergency vehicles following a designated route. The research effectively achieved the analysis, simulation, and implementation of wireless communication devices for traffic light control. The implemented prototype utilizes RFID transmission, operates in conjunction with the sequential mode of the traffic lights to alter the light sequence accordingly, and reverts the traffic lights to their normal sequence after the emergency vehicle has passed.
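The preemption logic described can be sketched independently of the hardware; the Python fragment below is only an illustration in which the tag IDs, approach names, and phase order are hypothetical.

```python
# Sketch of the preemption logic only (tag IDs and phase order are illustrative):
# when a registered emergency tag is read on an approach, that approach is forced
# green; once the vehicle has passed, the normal 3-way cycle resumes.
NORMAL_CYCLE = ["north", "east", "west"]          # approaches served in turn
EMERGENCY_TAGS = {"E-001", "E-002"}               # hypothetical registered tags

def next_green(current_index, rfid_read=None):
    if rfid_read and rfid_read["tag"] in EMERGENCY_TAGS:
        return rfid_read["approach"], current_index    # preempt, remember place
    current_index = (current_index + 1) % len(NORMAL_CYCLE)
    return NORMAL_CYCLE[current_index], current_index

idx = 0
print(next_green(idx))                                          # ('east', 1)
print(next_green(1, {"tag": "E-001", "approach": "north"}))     # ('north', 1)
print(next_green(1))                                            # ('west', 2)
```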
Automatic text summarization involves reducing a text document or a larger corpus of multiple documents to a short set of sentences or paragraphs that convey the main meaning of the text. In this paper, we discuss multi-document summarization, which differs from single-document summarization in that the issues of compression, speed, redundancy, and passage selection are critical to the formation of useful summaries. Since the number and variety of online medical news items make it difficult for experts in the medical field to read them all, automatic multi-document summarization can be useful for conveniently studying information on the web. Hence, we propose a new summarization approach based on the machine learning meta-learner algorithm AdaBoost. We treat a document as a set of sentences, and the learning algorithm must learn to classify sentences as positive or negative examples based on their scores. For this learning task, we apply the AdaBoost meta-learning algorithm with a C4.5 decision tree as the base learner. In our experiments, we use 450 news items downloaded from different medical websites and compare our results with some existing approaches.
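A toy version of the proposed classification step is sketched below with scikit-learn, where a shallow decision tree stands in for the C4.5 base learner inside AdaBoost; the sentence features and labels are invented for illustration.

```python
# Toy sketch: AdaBoost over a shallow decision tree (standing in for C4.5),
# classifying sentences as summary-worthy (1) or not (0) from simple features
# such as [position in document, length, title-word overlap] - all invented here.
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

X = [[0.0, 25, 3], [0.1, 12, 0], [0.5, 30, 2], [0.9, 8, 0],
     [0.2, 22, 2], [0.7, 15, 1], [0.05, 28, 3], [0.95, 10, 0]]
y = [1, 0, 1, 0, 1, 0, 1, 0]            # 1 = include sentence in the summary

clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=2), n_estimators=50)
clf.fit(X, y)
print(clf.predict([[0.1, 27, 2]]))      # an early, long, on-topic sentence
```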
Dear Editor, Visual localization relies on local features and searches a prestored GPS-tagged image database to retrieve the reference image with the highest similarity in feature space to predict the current location [1]-[3]. In conventional methods [4]-[6], local features are generally explored by multiple-stage feature extraction, which first detects and then describes key-point features [4], [7].
This work presents the design of an Expert System that aims to advise club teams on buying a football player for the position they need. An expert suggests different players for various positions based on football experience and knowledge about the player and the club he plays for. To mechanize this person's ability, we use an Expert System, because it can model a person's ability to solve a problem. The Visual Prolog language is used as the tool for designing our Expert System.
The dramatic improvement of information and communication technology (ICT) has driven an evolution in learning management systems (LMSs). The rapid growth in LMSs has caused users to demand more advanced, automated, and intelligent services. This paper discusses how Artificial Intelligence and Machine Learning techniques are adopted to fulfill users' needs in a social learning management system named "CourseNetworking". The paper explains how machine learning contributed to developing an intelligent agent called "Rumi" as a personal assistant in the CourseNetworking platform to add personalization, gamification, and more dynamics to the system. This paper aims to introduce machine learning to traditional learning platforms and to guide developers working in the LMS field to benefit from advanced technologies in learning platforms by offering customized services.
Cloud computing has attracted significant interest due to the increasing service demands from organizations offloading computationally intensive tasks to datacenters. Meanwhile, datacenter infrastructure comprises hardware resources that consume a high amount of energy and give out carbon emissions at hazardous levels. In a cloud datacenter, Virtual Machines (VMs) need to be allocated on various Physical Machines (PMs) in order to minimize resource wastage and increase energy efficiency. The resource allocation problem is NP-hard; hence, finding an exact solution is complicated, especially for large-scale datacenters. In this context, this paper proposes an Energy-oriented Flower Pollination Algorithm (E-FPA) for VM allocation in cloud datacenter environments. A system framework for the scheme was developed to enable energy-oriented allocation of various VMs on a PM. The allocation uses a strategy called Dynamic Switching Probability (DSP). The framework finds a near-optimal solution quickly and balances exploration of the global search with exploitation of the local search. It considers the processor, storage, and memory constraints of a PM while prioritizing energy-oriented allocation for a set of VMs. Simulations performed on MultiRecCloudSim utilizing the planet workload show that E-FPA outperforms the Genetic Algorithm for Power-Aware allocation (GAPA) by 21.8%, the Order of Exchange Migration (OEM) ant colony system by 21.5%, and First Fit Decreasing (FFD) by 24.9%. Therefore, E-FPA significantly improves datacenter performance and thus enhances environmental sustainability.
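The full E-FPA with its Dynamic Switching Probability is not reproduced here; the sketch below shows only the scoring step any such allocator needs, checking a candidate VM-to-PM assignment against capacity constraints and rating it with a simple utilization-based energy proxy. All figures are invented.

```python
# Sketch of the scoring step only (not the E-FPA itself): check a candidate
# VM-to-PM assignment against CPU/memory/storage capacities and score it with
# a utilization-based energy proxy.  All specifications are invented.
PMS = {"pm1": {"cpu": 16, "mem": 64, "disk": 500, "p_idle": 100, "p_max": 250},
       "pm2": {"cpu": 8,  "mem": 32, "disk": 250, "p_idle": 70,  "p_max": 180}}
VMS = {"vm1": {"cpu": 4, "mem": 8, "disk": 50},
       "vm2": {"cpu": 6, "mem": 16, "disk": 100},
       "vm3": {"cpu": 2, "mem": 4, "disk": 20}}

def energy(assignment):
    """Return total power of the used PMs, or None if any capacity is violated."""
    total = 0.0
    for pm, spec in PMS.items():
        hosted = [VMS[v] for v, p in assignment.items() if p == pm]
        if not hosted:
            continue                                   # empty PMs stay switched off
        for res in ("cpu", "mem", "disk"):
            if sum(v[res] for v in hosted) > spec[res]:
                return None                            # infeasible assignment
        util = sum(v["cpu"] for v in hosted) / spec["cpu"]
        total += spec["p_idle"] + (spec["p_max"] - spec["p_idle"]) * util
    return total

print(energy({"vm1": "pm1", "vm2": "pm1", "vm3": "pm1"}))   # consolidate on pm1
print(energy({"vm1": "pm1", "vm2": "pm2", "vm3": "pm2"}))   # spread across both
```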
Most entity ranking research aims to retrieve a ranked list of entities from a Web corpus given a user query. The rank order of entities is determined by the relevance between the query and contexts of entities. However, entities can be ranked directly based on their relative importance in a document collection, independent of any queries. In this paper, we introduce an entity ranking algorithm named NERank+. Given a document collection, NERank+ first constructs a graph model called Topical Tripartite Graph, consisting of document, topic and entity nodes. We design separate ranking functions to compute the prior ranks of entities and topics, respectively. A meta-path constrained random walk algorithm is proposed to propagate prior entity and topic ranks based on the graph model. We evaluate NERank+ over real-life datasets and compare it with baselines. Experimental results illustrate the effectiveness of our approach.
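The exact NERank+ ranking functions are not given in this abstract; the sketch below only illustrates, on a tiny topic-entity slice of such a graph, how prior ranks can be propagated by alternating random-walk steps with a damping factor. The weights and priors are invented.

```python
# Toy sketch of prior-rank propagation on a topic-entity slice of such a graph
# (not the NERank+ formulation itself; weights and priors are invented).
import numpy as np

# rows = topics, cols = entities; w[t, e] = strength of the topic-entity link
w = np.array([[3.0, 1.0, 0.0],
              [0.0, 2.0, 2.0]])
topic_prior = np.array([0.7, 0.3])             # prior topic ranks
entity_rank = np.full(3, 1.0 / 3)              # start from uniform entity ranks
damping = 0.85

p_entity_given_topic = w / w.sum(axis=1, keepdims=True)   # topic -> entity step
p_topic_given_entity = w / w.sum(axis=0, keepdims=True)   # entity -> topic step

for _ in range(50):                            # alternate the two walk steps
    topic_rank = (damping * (p_topic_given_entity @ entity_rank)
                  + (1 - damping) * topic_prior)
    entity_rank = p_entity_given_topic.T @ topic_rank
    entity_rank /= entity_rank.sum()
print(np.round(entity_rank, 3))                # relative importance of entities
```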