Abstract: The integration of an organisation's information security policy into threat modeling enhances the effectiveness of security strategies for information security management. These security policies define the sets of security issues, controls and the organisation's commitment that must be seamlessly integrated with knowledge-based platforms in order to protect critical assets and data. Such platforms are needed to evaluate and share violations which can create security loopholes. The lack of rule-based approaches for discovering potential threats in an organisational context poses a challenge for many organisations in safeguarding their critical assets. To address this challenge, this paper introduces a Platform for Organisation Security Threat Analytic and Management (POSTAM) that uses a rule-based approach. The platform enhances strategies for combating information security threats and thus strengthens organisations' commitment to protecting their critical assets. The R scripting language was used for data visualization, and Java-based scripts were used to develop a prototype running over web protocols. The MySQL database management system served as the back-end for data storage during threat analytic processes.
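To make the rule-based idea concrete, here is a minimal Python sketch of how events could be checked against policy-derived rules. The rule names, event fields and thresholds are hypothetical examples for illustration only and are not taken from the POSTAM platform.

```python
# Hypothetical illustration of a rule-based policy check: each rule pairs a
# policy control with a predicate that flags a potential threat when an event
# violates it. Rule names, fields and thresholds are invented for this sketch
# and are not taken from the POSTAM platform.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Rule:
    name: str                           # policy control the rule enforces
    predicate: Callable[[dict], bool]   # True when the event violates the control


RULES: List[Rule] = [
    Rule("failed-login-threshold", lambda e: e.get("failed_logins", 0) > 5),
    Rule("off-hours-access", lambda e: not 7 <= e.get("hour", 12) <= 19),
    Rule("unclassified-data-export", lambda e: e.get("export_mb", 0) > 100
         and e.get("classification") == "unknown"),
]


def evaluate(event: dict) -> List[str]:
    """Return the names of all policy controls the event violates."""
    return [rule.name for rule in RULES if rule.predicate(event)]


if __name__ == "__main__":
    event = {"failed_logins": 8, "hour": 23, "export_mb": 10, "classification": "public"}
    print(evaluate(event))  # ['failed-login-threshold', 'off-hours-access']
```

Keeping each policy control as a separate rule object means new controls can be added without changing the evaluation loop, which is the main appeal of a rule-based design like the one the abstract describes.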
Abstract: One of the most useful Information Extraction (IE) solutions for harnessing Web information is Named Entity Recognition (NER). Hand-coded rule methods are still the best performers. These methods and statistical methods exploit Natural Language Processing (NLP) features and characteristics (e.g. capitalization) to extract Named Entities (NEs) such as personal and company names. For non-speech entities composed of multiple sub-entities of higher cardinality (e.g. Linux commands, citations), these systems fail to deliver efficiently. Promising Machine Learning (ML) methods would require large numbers of training examples, which are impossible to produce manually. We call these entities Named High Cardinality Entities (NHCEs). We propose a sequence-validation-based approach for the extraction and validation of NHCEs. In this approach, the sub-entities of NHCE candidates are statistically and structurally characterized during a top-down annotation process and guided towards transformation into either value types (v-type) or user-defined types (u-type) using an ML model. Treated as sequences of sub-entities, NHCE candidates with transformed sub-entities are then validated (and subsequently labeled) using a series of validation operators. We present a case study to demonstrate the approach and show how it helps to bridge the gap between IE and Intelligent Systems (IS) through the use of transformed sub-entities in supervised learning.
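As a rough illustration of the sequence-validation idea, the sketch below maps the sub-entities of a command-line candidate to coarse type labels and then validates the resulting type sequence against a pattern. The type rules, the pattern and the validation step are simplified stand-ins for the paper's ML model and series of validation operators, not a reimplementation of them.

```python
# A minimal, hypothetical sketch of sequence validation for an NHCE candidate.
# Sub-entities of a candidate (here, a Linux command line) are first mapped to
# value-type-like or user-defined-type-like labels; the resulting type sequence
# is then validated against a pattern before the candidate is labeled.

import re

def transform(sub_entity: str) -> str:
    """Map a sub-entity to a coarse type label (stand-in for the ML model)."""
    if re.fullmatch(r"-{1,2}[A-Za-z][\w-]*", sub_entity):
        return "FLAG"      # option flag
    if re.fullmatch(r"\d+", sub_entity):
        return "NUM"       # numeric value
    if re.fullmatch(r"[./~][\w./-]*", sub_entity):
        return "PATH"      # file path
    return "WORD"          # command name or bare argument

# A command-like NHCE: one leading word followed by flags, numbers, paths or words.
COMMAND_PATTERN = re.compile(r"WORD( (FLAG|NUM|PATH|WORD))*$")

def validate(candidate: str) -> bool:
    """Label the candidate as a command-like NHCE if its type sequence matches."""
    types = [transform(tok) for tok in candidate.split()]
    return bool(COMMAND_PATTERN.match(" ".join(types)))

print(validate("tail -n 50 /var/log/syslog"))  # True
print(validate("-n 50"))                       # False: no leading command word
```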
Abstract: The growing need to use Artificial Intelligence (AI) technologies in addressing challenges in the education sectors of developing countries is undermined by low awareness, limited skills and poor data quality. One particular persistent challenge, which can be addressed by AI, is school dropout, whereby hundreds of thousands of children drop out of school annually in Africa. This article presents a data-driven approach to proactively predict the likelihood of dropping out of school and enable effective management of dropouts. The approach is guided by a carefully crafted conceptual framework and the new concepts of average absenteeism, current cumulative absenteeism and dropout risk appetite. In this study, a typical scenario of missing quality data is considered, for which synthetic data is generated to enable the development of a functioning prediction model using a neural network. The results show that, using the proposed approach, the level of risk of dropping out of school can be practically determined using data that is largely available in schools. Potentially, the study will inspire further research, encourage deployment of these technologies in real life, and inform processes of formulating or improving policies.
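The following sketch illustrates the kind of model the article describes: a small neural network trained on synthetic records to score dropout risk from absenteeism-style features. The feature set, the synthetic data generator and the use of a risk threshold are assumptions made for this example, not the authors' dataset or model configuration.

```python
# Illustrative sketch only: a small neural network trained on synthetic data to
# estimate dropout risk from absenteeism-style features. Feature names, the
# synthetic label model and the risk-appetite comparison are assumptions for
# this example, not the article's actual data or configuration.

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 2000

# Synthetic features: average absenteeism (days/month), current cumulative
# absenteeism (days this term), and age; label: dropped out or not.
avg_abs = rng.uniform(0, 10, n)
cum_abs = rng.uniform(0, 40, n)
age = rng.integers(6, 18, n)
X = np.column_stack([avg_abs, cum_abs, age])

# Higher absenteeism raises the (synthetic) probability of dropping out.
p = 1 / (1 + np.exp(-(0.4 * avg_abs + 0.1 * cum_abs - 4)))
y = rng.random(n) < p

model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=0)
model.fit(X, y)

# Dropout risk for a pupil with moderate absenteeism; comparing this score
# against a "dropout risk appetite" threshold would flag pupils for follow-up.
risk = model.predict_proba([[6.0, 25.0, 13]])[0, 1]
print(f"estimated dropout risk: {risk:.2f}")
```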