Abstract: The Analects, the Mengzi and the Xunzi are the three foremost classical works of pre-Qin Confucianism, epitomizing the thought of Confucius, Mencius and Xun Kuang. There has been much spirited and in-depth discussion of their ideological inheritance and development across academic circles of all kinds. This paper tries to cast new light on these discussions through "machine reading".
Funding: The author extends his appreciation to the Deanship of Scientific Research at King Khalid University for funding this work under grant number (R.G.P.2/55/40/2019), received by Fahd N. Al-Wesabi. www.kku.edu.sa.
Abstract: Due to the rapid increase in the exchange of text information over internet networks, the security and reliability of digital content have become a major research issue. The main challenges faced by researchers are authentication, integrity verification, and tampering detection of digital content. In this paper, a text zero-watermarking, text feature-based approach is proposed to improve the tampering detection accuracy of English text. The proposed approach embeds and detects the watermark logically, without altering the original English text document. Based on a hidden Markov model (HMM), a fourth-order word-level mechanism is used to analyze the given English text and find the interrelationships between its contexts. The extracted features are used as watermark information and integrated with digital zero-watermarking techniques. To detect eventual tampering, the proposed approach was implemented and validated against attacked English text. Experiments were performed on four standard datasets of varying lengths, with insertion, reorder, and deletion attacks applied at multiple random locations. The experimental and simulation results demonstrate the tampering detection accuracy of the method against all such attacks, and comparison results show that the proposed approach outperforms the baseline approaches in tampering detection accuracy.
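As a rough illustration of the zero-watermarking idea described above, the sketch below derives a logical watermark from fourth-order word-level Markov transitions and compares it against a possibly tampered copy. This is not the paper's exact algorithm; the feature set and the scoring function are assumptions made purely for illustration.

```python
from collections import Counter

ORDER = 4  # word-level context length, matching the paper's "fourth level order"

def markov_features(text, order=ORDER):
    """Count (context -> next word) transitions; a hypothetical feature set."""
    words = text.lower().split()
    feats = Counter()
    for i in range(len(words) - order):
        context = tuple(words[i:i + order])
        feats[(context, words[i + order])] += 1
    return feats

def tamper_score(original, received, order=ORDER):
    """Fraction of the original's transitions missing from the received text."""
    ref, got = markov_features(original, order), markov_features(received, order)
    missing = sum((ref - got).values())  # multiset difference: lost transitions
    total = sum(ref.values())
    return missing / total if total else 0.0

doc = "the quick brown fox jumps over the lazy dog near the quiet river bank"
print(tamper_score(doc, doc))                            # → 0.0 (untouched)
print(tamper_score(doc, doc.replace("lazy", "sleepy")))  # → 0.5 (tampered)
```

In the zero-watermark spirit, nothing is inserted into the text itself: the features are stored separately and re-derived from the received copy at verification time.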
Abstract: This study introduces the Orbit Weighting Scheme (OWS), a novel approach aimed at enhancing the precision and efficiency of vector-space information retrieval (IR) models, which have traditionally relied on weighting schemes such as tf-idf and BM25. These conventional methods often struggle to capture document relevance accurately, leading to inefficiencies in both retrieval performance and index-size management. OWS proposes a dynamic weighting mechanism that evaluates the significance of terms based on their orbital position within the vector space, emphasizing term relationships and distribution patterns overlooked by existing models. Our research focuses on evaluating OWS's impact on model accuracy using IR metrics such as Recall, Precision, Interpolated Average Precision (IAP), and Mean Average Precision (MAP). Additionally, we assess OWS's effectiveness in reducing the inverted-index size, which is crucial for model efficiency. We compare OWS-based retrieval models against models using other schemes, including tf-idf variations and BM25Delta. The results reveal OWS's superiority, achieving 54% Recall and 81% MAP, along with a notable 38% reduction in inverted-index size. This highlights OWS's potential for optimizing retrieval processes and underscores the need for further research in this under-explored area to fully leverage OWS's capabilities in information retrieval methodologies.
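The abstract does not give the OWS formula itself; for context, here is a minimal sketch of the classical tf-idf/cosine vector-space baseline that OWS is compared against. The toy corpus and query are invented for illustration.

```python
import math
from collections import Counter

docs = ["orbit weighting for retrieval",
        "vector space retrieval models",
        "weighting schemes like tf idf"]

def tfidf_vectors(corpus):
    """Per-document tf-idf weights: tf * log(N / df)."""
    N = len(corpus)
    tokenized = [d.split() for d in corpus]
    df = Counter(t for doc in tokenized for t in set(doc))
    vecs = [{t: tf[t] * math.log(N / df[t]) for t in tf}
            for tf in (Counter(doc) for doc in tokenized)]
    return vecs, df

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

vecs, df = tfidf_vectors(docs)
query = Counter("vector space retrieval".split())
qvec = {t: query[t] * math.log(len(docs) / df[t]) for t in query if t in df}

# Rank documents by cosine similarity to the query vector.
ranking = sorted(range(len(docs)), key=lambda i: cosine(qvec, vecs[i]), reverse=True)
print(ranking[0])  # → 1: "vector space retrieval models" ranks first
```

Any replacement scheme such as OWS plugs into the same pipeline by substituting the weight function while keeping the cosine-ranking step unchanged.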
Abstract: In the 21st century, the surge in natural and human-induced disasters necessitates robust disaster management frameworks. This research addresses a critical gap by exploring the dynamics of successful implementation and performance monitoring of disaster management. Focusing on eleven key elements, such as Vulnerability and Risk Assessment, Training, Disaster Preparedness, Communication, and Community Resilience, the study uses the Scopus database for secondary data, employing text mining and MS Excel for analysis and data management. IBM SPSS (26) and IBM AMOS (20) facilitate Exploratory Factor Analysis (EFA) and Structural Equation Modeling (SEM) for model evaluation. The research raises questions about crafting a comprehensive, adaptable model; understanding the interplay between vulnerability assessment, training, and disaster preparedness; and integrating effective communication and collaboration. The findings offer actionable insights for policy, practice, and community resilience against disasters. By scrutinizing each factor's role and interactions, the research lays the groundwork for a flexible model. Ultimately, the study aspires to cultivate communities that are more resilient amid the escalating threats of an unpredictable world, fostering their ability to navigate and thrive.
Abstract: Background: With mounting global environmental, social and economic pressures, the resilience and stability of forests, and thus the provisioning of vital ecosystem services, are increasingly threatened. Intensified monitoring can help to detect ecological threats and changes earlier, but monitoring resources are limited. Participatory forest monitoring with the help of "citizen scientists" can provide additional resources for forest monitoring and at the same time help to communicate with stakeholders and the general public. Examples of citizen science projects in the forestry domain can be found, but a solid, applicable larger framework for utilising public participation in forest monitoring seems to be lacking. We propose that a better understanding of shared and related topics in citizen science and forest monitoring might be a first step towards such a framework. Methods: We conduct a systematic meta-analysis of 1015 publication abstracts addressing "forest monitoring" and "citizen science" in order to explore the combined topical landscape of these subjects. We employ topic modelling, an unsupervised probabilistic machine learning method, to identify latent shared topics in the analysed publications. Results: We find that large shared topics exist, but that these are primarily topics that would be expected in scientific publications in general. Common domain-specific topics are under-represented, indicating a topical separation of the two document sets on "forest monitoring" and "citizen science", and thus of the represented domains. While topic modelling proves to be a scalable and useful analytical tool, we propose that our approach could deliver even more useful data if a larger document set and full-text publications were available for analysis. Conclusions: We propose that these results, together with the observation of non-shared but related topics, point to under-utilised opportunities for public participation in forest monitoring.
Citizen science could be applied as a versatile tool in forest ecosystem monitoring, complementing traditional forest monitoring programmes, assisting early threat recognition and helping to connect forest management with the general public. We conclude that the presented approach should be pursued further, as it may aid the understanding and setup of citizen science efforts in the forest monitoring domain.
Funding: The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work under grant number (RGP.1/147/42), received by Fahd N. Al-Wesabi. www.kku.edu.sa.
Abstract: Content authentication, integrity verification, and tampering detection of digital content exchanged via the internet are a major concern in information and communication technology. In this paper, a text zero-watermarking approach known as the Smart-Fragile Approach based on Soft Computing and Digital Watermarking (SFASCDW) is proposed for content authentication and tampering detection of English text. A first-order alphanumeric mechanism, based on a hidden Markov model, is integrated with digital zero-watermarking techniques to improve the watermark robustness of the proposed approach. The first-order alphanumeric Markov mechanism is used as a soft computing technique to analyze English text: features of the interrelationships among text contexts are extracted, used as watermark information, and later validated against the studied English text to detect any tampering. SFASCDW has been implemented in PHP using the VS Code IDE. The robustness, effectiveness, and applicability of SFASCDW are demonstrated in experiments involving four datasets of various lengths, with the three common attacks, namely insertion, reorder, and deletion, applied at random locations. SFASCDW was found to be effective and applicable for detecting any possible tampering.
Funding: The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work under grant number (G.R.P./14/42), received by Fahd N. Al-Wesabi. www.kku.edu.sa.
Abstract: In this article, a highly sensitive approach for detecting tampering attacks on Arabic text transmitted over the internet (HFDATAI) is proposed, integrating digital watermarking and a hidden Markov model as a soft computing strategy. The HFDATAI solution embeds and senses the watermark logically, without modifying the original text. In the first stage, an alphanumeric order mechanism keyed to the Markov model secret is incorporated into an automated zero-watermarking approach to enhance the approach's efficiency, accuracy, and sensitivity. The first-order alphanumeric Markov model technique is used as a soft computing strategy to analyze the Arabic text. In addition, features of the interrelationships among text contexts are extracted as watermark information and later validated to detect any tampering with the attacked Arabic text. HFDATAI was implemented in PHP using the VS Code IDE. Experiments on four datasets of varying lengths, with attacks applied at random locations, illustrate the fragility, efficacy, and applicability of HFDATAI under the three common tampering attacks, i.e., insertion, reorder, and deletion. HFDATAI was found to be effective, applicable, and very sensitive in detecting any possible tampering of Arabic text.
Funding: The author extends his appreciation to the Deanship of Scientific Research at King Khalid University for funding this work under grant number (R.G.P.2/25/42), received by Fahd N. Al-Wesabi. www.kku.edu.sa.
Abstract: In this paper, a text analysis-based approach, RTADZWA (Reliable Text Analysis and Digital Zero-Watermarking Approach), is proposed for transferring and receiving authentic English text via the internet. A second-order alphanumeric mechanism of the hidden Markov model is used in RTADZWA as a natural language processing technique to analyze English text: it extracts features of the interrelationships between text contexts, uses the extracted features as watermark information, and later validates them against attacked English text to detect any tampering. RTADZWA integrates text analysis and text zero-watermarking techniques to improve on the performance, accuracy, capacity, and robustness of previous approaches in the literature. The approach embeds and detects the watermark logically, without altering the original text document. RTADZWA has been implemented in PHP using the VS Code IDE. Experimental and simulation results on standard datasets of varying lengths show that the proposed approach achieves high robustness and better detection accuracy against common random insertion, reorder, and deletion attacks. Comparison with baseline approaches also shows the advantages of the proposed approach.
Abstract: The study used a crawler to collect 842,917 popular English-language tweets containing the keyword "Chinese" or "China". Topic modeling and sentiment analysis were used to explore the tweets. Thirty topics were extracted. Overall, 33% of the tweets relate to politics, 20% to the economy, 21% to culture, and 26% to society. Regarding polarity, 55% of the tweets are positive, 31% are negative and the remaining 14% are neutral. Only 25.3% of the tweets carry obvious sentiment, and most of these express joy.
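The sentiment-analysis step can be illustrated with a minimal lexicon-based polarity classifier. The tiny lexicon and example tweets below are invented; the study's actual tooling is not specified in the abstract.

```python
from collections import Counter

# Hypothetical miniature sentiment lexicon.
POSITIVE = {"love", "great", "joy", "amazing"}
NEGATIVE = {"hate", "bad", "awful", "angry"}

def polarity(tweet):
    """Classify a tweet by counting positive vs. negative lexicon hits."""
    words = set(tweet.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

tweets = ["i love chinese food it is amazing",
          "awful traffic in the city today",
          "china announced new trade figures"]

counts = Counter(polarity(t) for t in tweets)
print(counts)  # one positive, one negative, one neutral tweet
```

Aggregating such per-tweet labels over the full corpus is what yields polarity proportions of the kind reported in the abstract.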