With the widespread use of Chinese globally, the number of Chinese learners has been increasing, leading to various grammatical errors among beginners. Additionally, as domestic efforts to develop industrial informatization grow, electronic documents have also proliferated. Among the many electronic documents and texts written by Chinese beginners, manually written texts often contain hidden grammatical errors, posing a significant challenge to traditional manual proofreading. Correcting these grammatical errors is crucial to ensure fluency and readability. In certain special types of text, grammatical or logical errors can have a severe impact, and manually proofreading a large number of texts one by one is clearly impractical. Consequently, research on text error correction techniques has garnered significant attention in recent years. The advent and advancement of deep learning have paved the way for sequence-to-sequence learning methods to be extensively applied to the task of text error correction. This paper presents a comprehensive analysis of Chinese grammatical error correction technology, elaborates on its current research status, discusses existing problems, proposes preliminary solutions, and conducts experiments using judicial documents as an example. The aim is to provide a feasible research approach for Chinese text error correction technology.
Desertification is increasingly serious in Xinjiang, and the construction of water conservancy is a precondition for the development of agriculture. The main project for the development of agriculture and water conservancy in Xinjiang was the building of karez, which played a vital role in the development of Xinjiang agriculture in the Qing Dynasty. Karez are recorded many times in historical documents of the Qing Dynasty, such as Lin Zexu's Diary, Tao Baolian's Diary, the Xinjiang Atlas, and Zuo Zongtang's Memorial to the Emperor, which record their situation and historical origin. Karez made a significant contribution to the development of agriculture in the Qing Dynasty: they increased the cultivated land in Xinjiang at the time, increased the types and yields of crops, and were conducive to the stability and development of Xinjiang's economy. To this day, karez remain an important water source for agricultural irrigation in Xinjiang.
Traditional human rights theory tends to hold that human rights should be aimed at defending against public authority and that the legal issue of human rights is a matter of public law. However, the development of human rights concepts and practices is not confined to this. A textual search shows that the term "human rights" exists widely in China's civil judicial documents. Among the 3,412 civil judicial documents we researched, the concept of "human rights" penetrates all kinds of disputes in lawsuits, ranging from property rights, contracts, labor, and torts to marital property, and is embedded in both the claims of the parties concerned and the reasoning of judges. Human rights have become the discourse and yardstick for understanding and evaluating social behavior. The widespread use of the term "human rights" in civil judicial documents reflects at least three concepts related to human rights: first, the rights to subsistence and development are the primary basic human rights; second, the judicial protection of human rights is a bottom-line guarantee; third, the protection of human rights aims to achieve equal rights. Today, judges quote the theory of human rights in judicial judgments from time to time, evidencing that human rights have a practical function in judicial adjudication. In practice this is mainly manifested in declaring righteous values and strengthening arguments with the values and ideas related to human rights, using the provisions concerning human rights in the Constitution to interpret constitutionality, and using the principles of human rights to interpret ambiguous rules and to rank the importance of different rights.
In the information age, electronic documents (e-documents) have become a popular alternative to paper documents due to their lower costs, higher dissemination rates, and ease of knowledge sharing. However, digital copyright infringements occur frequently because copying is easy, which not only infringes the rights of creators but also weakens their creative enthusiasm. It is therefore crucial to establish an e-document sharing system that enforces copyright protection. However, existing centralized systems have outstanding vulnerabilities, the plagiarism detection algorithms they use cannot fully examine the context, semantics, style, and other features of a text, and digital watermark technology serves only as a means of tracing infringement. This paper proposes a decentralized framework for e-document sharing based on a decentralized autonomous organization (DAO) and non-fungible tokens (NFTs) on a blockchain. Using the blockchain as a distributed credit base resolves the vulnerabilities inherent in traditional centralized systems. The e-document evaluation and plagiarism detection mechanisms based on the DAO model effectively address the challenge of comprehensive text checks, thereby promoting higher e-document quality. The mechanism for protecting and circulating e-document copyrights using NFT technology effectively safeguards users' e-document copyrights and facilitates e-document sharing. Moreover, recognizing the security issues within the DAO governance mechanism, we introduce an innovative optimization solution. Through experimentation, we validate the enhanced security of the optimized governance mechanism, which reduces manipulation risks by up to 51%. Additionally, by using evolutionary game analysis to deduce the equilibrium strategies of the framework, we found that adjusting the reward and penalty parameters of the incentive mechanism motivates creators to generate higher-quality, unique e-documents, while evaluators are more likely to engage in assessments.
Purpose: Accurately assigning the document type of review articles in citation index databases such as Web of Science (WoS) and Scopus is important. This study investigates the document type assignment of review articles in WoS, Scopus, and publishers' websites on a large scale. Design/methodology/approach: 27,616 papers from 160 journals in 10 review journal series indexed in SCI are analyzed. The document types of these papers as labeled on the journals' websites and as assigned by WoS and Scopus are retrieved and compared to determine assignment accuracy and to identify possible reasons for incorrect assignment. For the document types labeled on the websites, we further differentiate explicit reviews from implicit reviews based on whether the website directly states that the paper is a review. Findings: Overall, WoS and Scopus performed similarly, with an average precision of about 99% and recall of about 80%. However, there were some differences between WoS and Scopus across journal series and within the same journal series. The assignment accuracy of WoS and Scopus dropped significantly for implicit reviews, especially for Scopus. Research limitations: The document types used as the gold standard were based on the journal websites' labels, which were not manually validated one by one. We only studied labeling performance for review articles published during 2017-2018 in review journals; whether the conclusions extend to review articles published in non-review journals and to the current situation is not clear. Practical implications: This study provides a reference for the accuracy of document type assignment for review articles in WoS and Scopus, and the identified pattern for assigning implicit reviews may help improve labeling on websites, in WoS, and in Scopus. Originality/value: This study investigated the assignment accuracy of the review document type and identified some patterns of incorrect assignment.
The Gannet Optimization Algorithm (GOA) and the Whale Optimization Algorithm (WOA) demonstrate strong performance; however, there remains room for improvement in convergence and practical applications. This study introduces a hybrid optimization algorithm, the adaptive inertia weight whale optimization algorithm and gannet optimization algorithm (AIWGOA), which addresses challenges in enhancing handwritten documents. The hybrid strategy integrates the strengths of both algorithms, significantly enhancing their capabilities, while the adaptive parameter strategy removes the need for manual parameter setting. By amalgamating the hybrid strategy and the parameter-adaptive approach, the Gannet Optimization Algorithm was refined to yield the AIWGOA. In a performance analysis on the CEC2013 benchmark, the AIWGOA demonstrates notable advantages across various metrics. An evaluation index was then employed to assess the enhanced handwritten documents and images, affirming the superior practical applicability of the AIWGOA compared with other algorithms.
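The adaptive inertia weight idea can be illustrated with a simple decay schedule blended into a one-dimensional WOA-style "encircling" move. Everything below is a hedged sketch: the linear decay, the blending of the weight `w`, and the coefficient updates are illustrative assumptions, not the AIWGOA's published update rules.

```python
import random

def adaptive_inertia_weight(t, t_max, w_max=0.9, w_min=0.4):
    """Linearly decay the inertia weight from w_max to w_min over the run.
    (An illustrative schedule; the paper's exact adaptive rule may differ.)"""
    return w_max - (w_max - w_min) * t / t_max

def encircle_step(x, best, t, t_max, rng=random):
    """One simplified 1-D WOA 'encircling prey' move, with the inertia
    weight blending the current position into the update (illustrative)."""
    a = 2.0 * (1.0 - t / t_max)      # WOA coefficient, decays 2 -> 0
    A = 2.0 * a * rng.random() - a   # random step scale in [-a, a]
    C = 2.0 * rng.random()           # random attraction coefficient
    D = abs(C * best - x)            # distance to the (scaled) best solution
    w = adaptive_inertia_weight(t, t_max)
    return w * x + (1 - w) * (best - A * D)
```

Early in the run the large inertia weight favors exploration around the current position; late in the run the small weight pulls candidates toward the best solution, which is the usual motivation for inertia-weight schedules.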
As digital technologies have advanced rapidly, the number of paper documents recently converted into digital format has increased exponentially. To respond to the urgent need to categorize the growing number of digitized documents, real-time classification of digitized documents was identified as the primary goal of our study. Document classification is the first stage in automating document control and efficient knowledge discovery with little or no human involvement. Artificial intelligence methods such as deep learning are now combined with segmentation to study and interpret traits that were not conceivable ten years ago. Deep learning aids in comprehending input patterns so that object classes can be predicted, while the segmentation process divides the input image into separate segments for a more thorough image study. This study proposes a deep learning-enabled framework for automated document classification that can be implemented in higher education. To this end, a dataset was developed that includes seven categories: Diplomas, Personal documents, Journal of Accounting of higher education diplomas, Service letters, Orders, Production orders, and Student orders. A deep learning model based on Conv2D layers is then proposed for the document classification process. In the final part of this research, the proposed model is evaluated and compared with other machine learning techniques. The results demonstrate that the proposed deep learning model surpasses the other machine learning models in document categorization, reaching 94.84%, 94.79%, 94.62%, 94.43%, and 94.07% in accuracy, precision, recall, F-score, and AUC-ROC, respectively. The achieved results prove that the proposed deep model is acceptable for use in practice as an assistant to an office worker.
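The Conv2D layers at the core of such a model slide a kernel over the document image and accumulate windowed products. A minimal NumPy sketch of that single operation (illustrative only; the authors' layer sizes and architecture are not reproduced here):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Plain 'valid' 2-D cross-correlation, the operation behind a Conv2D
    layer (no padding, stride 1, single channel)."""
    h, w = kernel.shape
    H, W = image.shape
    out = np.empty((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Multiply the kernel against the current window and sum
            out[i, j] = np.sum(image[i:i + h, j:j + w] * kernel)
    return out
```

A real classifier stacks many such filters with nonlinearities and pooling; deep learning frameworks implement the same arithmetic with vectorized kernels rather than Python loops.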
Research on the use of EHRs presents contradictory results regarding the time spent documenting: some studies support electronic records as a tool that speeds documentation, while others find them time-consuming. The purpose of this quantitative retrospective before-after project was to measure the impact of using the laboratory value flowsheet within the EHR on documentation time. The research question was: "Does the use of a laboratory value flowsheet in the EHR impact documentation time by primary care providers (PCPs)?" The theoretical framework was the Donabedian Model. The population was the two PCPs in a small primary care clinic in northwest Puerto Rico, and the sample comprised all encounters during October 2019 and December 2019. The data were obtained through data mining and analyzed using SPSS 27. The evaluative outcome of this project is that documentation time decreased after implementation of the laboratory value flowsheet in the EHR. Patients per day also increased, affecting the number of patients seen per day, week, and month. The implications for clinical practice include the use of templates to improve workflow and documentation, decreasing documentation time while also increasing the number of patients seen per day.
Background: Document images such as statistical reports and scientific journals are widely used in information technology. Accurate detection of table areas in document images is an essential prerequisite for tasks such as information extraction, and incorrect detection results can lead to the loss of critical information. However, because tables vary widely in shape and size, existing table detection methods adapted from general object detection algorithms have not yet achieved satisfactory results. Methods: We propose a novel end-to-end trainable deep network combined with a self-supervised pretrained transformer for feature extraction to minimize incorrect detections. To better handle table areas of different shapes and sizes, we add a dual-branch context content attention module (DCCAM) on high-dimensional features to extract context content information, thereby enhancing the network's ability to learn shape features. For feature fusion at different scales, we replace the original 3×3 convolution with a multilayer residual module, which carries enhanced gradient flow information to improve feature representation and extraction capability. Results: We evaluated our method on public document datasets and compared it with previous methods, achieving state-of-the-art results on evaluation metrics such as recall and F1-score. Code: https://github.com/YongZ-Lee/TD-DCCAM.
Ghanaian construction projects encounter a number of challenges, including low health, safety, and environmental standards, poor performance, and time and cost overruns. To provide value for money (VFM) on government infrastructure projects in Ghana, this research assesses the roles of project consultants, specifically architects and quantity surveyors, and highlights important obstacles. A cross-sectional survey of architects and quantity surveyors yielded a 96% response rate from 100 distributed questionnaires. Consultants' responsibilities include monitoring standards compliance, providing advice on delays, controlling budgets, and advising on project completion dates. Difficulties encompass slow decision-making, unethical conduct, political pressure, and inadequate focus on contract administration and construction audits. Project urgency, longevity, political clout, timely decision-making, and team experience are important variables that affect VFM. Policymakers and construction management practitioners should take note of the implications for Ghana's public infrastructure projects.
In the consultative committee for space data systems (CCSDS) file delivery protocol (CFDP) recommendation for reliable transmission, there are no detailed transmission procedures or delay calculations for the prompted negative acknowledgment and asynchronous negative acknowledgment models. CFDP is designed to provide data and storage management, store and forward, custody transfer, and reliable end-to-end delivery over deep-space links characterized by huge latency, intermittent connectivity, asymmetric bandwidth, and high bit error rates (BER). Four reliable transmission models are analyzed, and the expected file-delivery time is calculated for different transmission rates, numbers and sizes of packet data units, BERs, frequencies of external events, etc. By comparing the four CFDP models, the BER requirement for typical deep-space missions is obtained, and rules for choosing among CFDP models under different uplink state information are given, which provides a reference for protocol model selection, utilization, and modification.
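The dependence of expected file-delivery time on BER can be sketched with a simplified geometric-retransmission model: the probability that a PDU survives is (1 − BER) raised to its bit length, and each lost PDU costs another round. This is a hedged back-of-the-envelope sketch, not the paper's four CFDP delay models.

```python
def packet_error_rate(ber, packet_bits):
    """P(at least one bit error in a PDU), assuming independent bit errors."""
    return 1.0 - (1.0 - ber) ** packet_bits

def expected_transmissions(ber, packet_bits):
    """Expected number of sends per PDU under selective retransmission
    (geometric model: each attempt independently fails with rate `per`)."""
    per = packet_error_rate(ber, packet_bits)
    return 1.0 / (1.0 - per)

def expected_delivery_time(n_pdus, packet_bits, rate_bps, rtt_s, ber):
    """Rough expected file-delivery time: serial transmission time scaled by
    the expected retransmission factor, plus one round-trip for the final
    acknowledgment (a simplification of the CFDP NAK procedures)."""
    tx = expected_transmissions(ber, packet_bits)
    send_time = n_pdus * packet_bits / rate_bps
    return send_time * tx + rtt_s
```

Even this crude model reproduces the qualitative conclusion that delivery time blows up as BER approaches the point where the per-PDU error probability nears one.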
Purpose: Detection of research fields or topics and understanding their dynamics help the scientific community make decisions regarding the establishment of scientific fields; this also supports better collaboration with governments and businesses. This study investigates the development of research fields over time, translating it into a topic detection problem. Design/methodology/approach: To achieve the objectives, we propose a modified deep clustering method to detect research trends from the abstracts and titles of academic documents. Document embedding approaches are used to transform documents into vector-based representations. The proposed method is evaluated against combinations of different embedding and clustering approaches and against classical topic modeling algorithms (i.e., LDA) on a benchmark dataset. A case study is also conducted exploring the evolution of Artificial Intelligence (AI), detecting the research topics or sub-fields in related AI publications. Findings: Clustering performance indicators show that the proposed method outperforms similar approaches on the benchmark dataset. Using the proposed method, we also show how the topics have evolved over the past 30 years, taking advantage of a keyword extraction method for cluster tagging and labeling to demonstrate the context of the topics. Research limitations: We noticed that one solution cannot be generalized to all downstream tasks; solutions must be fine-tuned or optimized for each task and even each dataset. In addition, interpretation of cluster labels can be subjective and vary with readers' opinions, and labeling techniques are very difficult to evaluate, which further limits the explanation of the clusters. Practical implications: As demonstrated in the case study, the proposed method enables researchers and reviewers of academic research to detect, summarize, analyze, and visualize research topics from decades of academic documents in a real-world setting. This helps the scientific community and related organizations analyze fields quickly and effectively by establishing and explaining the topics. Originality/value: In this study, we introduce a modified and tuned deep embedding clustering coupled with Doc2Vec representations for topic extraction, and we use a concept extraction method as a labeling approach. The effectiveness of the method is evaluated in a case study of AI publications, analyzing AI topics over the past three decades.
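The pipeline described above (document vectors, then clustering, then cluster labeling) can be sketched at toy scale. The snippet below substitutes a tiny hand-rolled k-means for the paper's deep embedding clustering and uses made-up 2-D points in place of Doc2Vec embeddings; it is a structural illustration only.

```python
import random

def kmeans(vectors, k, iters=20, seed=0):
    """Tiny k-means over tuples; stands in for the deep-embedding clustering
    step (the paper's method is a tuned deep clustering, not plain k-means)."""
    rng = random.Random(seed)
    centers = rng.sample(vectors, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in vectors:
            # Assign each vector to its nearest center (squared Euclidean)
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(v, centers[c])))
            groups[i].append(v)
        # Recompute centers as group means; keep old center for empty groups
        centers = [tuple(sum(col) / len(g) for col in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return [min(range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(v, centers[c])))
            for v in vectors]
```

In the real pipeline the input tuples would be Doc2Vec vectors of abstracts and titles, and each resulting cluster would then be tagged with extracted keywords.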
The major problem with most current information models is that individual words provide unreliable evidence about the content of texts. When the document is short, e.g. when only the abstract is available, the word-use variability problem has a substantial impact on Information Retrieval (IR) performance. To solve this problem, a new technique for short-document retrieval named the Reference Document Model (RDM) is put forward in this letter. RDM obtains the statistical semantics of the query/document by pseudo feedback, for both the query and the document, from reference documents. The contributions of this model are three-fold: (1) pseudo feedback for both the query and the document; (2) building the query model and the document model from reference documents; (3) flexible indexing units, which can be any linguistic elements such as documents, paragraphs, sentences, n-grams, terms, or characters. For short-document retrieval, RDM achieves significant improvements over classical probabilistic models on the ad hoc retrieval task on Text REtrieval Conference (TREC) test sets. Results also show that the shorter the document, the better RDM performs.
A large number of composed documents are produced in universities during teaching, and most must be checked for similarity for validation. A similarity computation system is constructed for composed documents containing images and text. First, each document is split into two parts: images and text. The documents are then compared by computing the similarities of the images and of the text contents independently. Using the Hadoop system, the text contents are separated easily and quickly. Experimental results show that the proposed system is efficient and practical.
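The two-channel comparison the system performs (text similarity and image similarity computed independently, then combined) can be sketched as follows. The cosine measure over word counts and the equal weighting are illustrative assumptions, not formulas stated in the abstract.

```python
from collections import Counter
from math import sqrt

def text_similarity(a, b):
    """Cosine similarity over raw word counts (one common choice for the
    text channel; the system's exact measure is not specified here)."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = sqrt(sum(v * v for v in ca.values()))
    nb = sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def document_similarity(text_sim, image_sim, alpha=0.5):
    """Combine the independently computed text and image similarities;
    alpha weights the text channel (0.5 is an illustrative default)."""
    return alpha * text_sim + (1 - alpha) * image_sim
```

Computing the two channels separately is what lets a Hadoop-style system parallelize them: text and image jobs can run as independent map tasks before the final combination.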
This paper examines automatic recognition and extraction of tables from a large collection of heterogeneous documents. The heterogeneous documents are first pre-processed and converted to HTML, after which an algorithm recognises the table portion of each document. A Hidden Markov Model (HMM) is then applied to the HTML code to extract the tables. The model was trained and tested with 526 self-generated tables (321 for training and 205 for testing), with the Viterbi algorithm implemented for the testing part. The system was evaluated in terms of accuracy, precision, recall, and F-measure. The overall evaluation results show 88.8% accuracy, 96.8% precision, 91.7% recall, and 88.8% F-measure, revealing that the method is good at solving the problem of table extraction.
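Viterbi decoding, used in the testing phase above, recovers the most probable hidden-state sequence of an HMM given the observations. A compact sketch on a classic toy HMM follows; the states and probabilities are illustrative, not the paper's table/non-table model.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Standard Viterbi decoding: V[t][s] holds (best probability of any
    path ending in state s at time t, predecessor state)."""
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for o in obs[1:]:
        prev = V[-1]
        V.append({
            s: max(((prev[p][0] * trans_p[p][s] * emit_p[s][o], p) for p in states),
                   key=lambda t: t[0])
            for s in states
        })
    # Backtrack from the most probable final state
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(V) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))
```

In the paper's setting the hidden states would mark table versus non-table regions of the HTML token stream, with emissions drawn from the HTML code.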
Deacidification and self-cleaning are important for the preservation of paper documents. In this study, nano-CaCO₃ was used as a deacidification agent and stabilized with nanocellulose (CNC) and hydroxypropyl methylcellulose (HPMC) to form a uniform dispersion. Following polydimethylsiloxane (PDMS) treatment and chemical vapor deposition (CVD) of methyltrimethoxysilane (MTMS), a hydrophobic coating was constructed for self-cleaning purposes. The pH value of the treated paper was approximately 8.20, and the static contact angle was as high as 152.29°. Compared with untreated paper, the tensile strength of the treated paper increased by 12.6%. This treatment endows the paper with a good deacidification effect and a self-cleaning property, which are beneficial for its long-term preservation.
The issue of document management has been raised for a long time, especially with the appearance of office automation in the 1980s, which led to dematerialization and Electronic Document Management (EDM). In the same period, workflow management experienced significant development but became focused mainly on industry; document workflows, it seems to us, have not attracted the same interest from the scientific community. Nowadays, however, the emergence and supremacy of the Internet in electronic exchanges are leading to a massive dematerialization of documents, which requires a conceptual reconsideration of the organizational framework for processing those documents in both public and private administrations. This problem seems open to us and deserves the interest of the scientific community. Indeed, EDM has mainly focused on the storage (referencing) and circulation (traceability) of documents and has paid little attention to the overall behavior of the system in processing them. The purpose of our research is to model document processing systems. In previous work, we proposed a general model and its specialization to the case of small documents (any document processed by a single person at a time during its processing life cycle), which, according to our study, represent 70% of the documents processed by administrations. In this contribution, we extend the model for processing small documents to the case where they are managed in a system comprising document classes organized into subclasses, which is the case in most administrations. We observe that this model is a Markovian M^(L×K)/M^(L×K)/1 queueing network. We analyze the constraints of this model and deduce certain characteristics and metrics. In fine, the ultimate objective of our work is to design a document workflow management system integrating a component for predicting global behavior.
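The building block of the M^(L×K)/M^(L×K)/1 network above is the single M/M/1 queue, whose steady-state metrics follow directly from the arrival rate λ and service rate μ. A small sketch using the standard formulas (the network-level analysis in the text is more involved):

```python
def mm1_metrics(lam, mu):
    """Steady-state metrics of a single M/M/1 queue.
    lam: Poisson arrival rate; mu: exponential service rate; requires lam < mu."""
    assert lam < mu, "queue is unstable when lam >= mu"
    rho = lam / mu                    # server utilization
    return {
        "rho": rho,
        "L": rho / (1 - rho),         # mean number in system
        "W": 1 / (mu - lam),          # mean time in system
        "Lq": rho ** 2 / (1 - rho),   # mean number waiting in queue
        "Wq": rho / (mu - lam),       # mean waiting time in queue
    }
```

Little's law (L = λW) ties the occupancy and delay metrics together, which is a convenient sanity check when extending such formulas to a network of queues.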
While XML web services are becoming recognized as a solution for business-to-business transactions, many problems remain to be solved. For example, it is not easy to manipulate business documents of existing standards such as RosettaNet and UN/EDIFACT EDI, traditionally regarded as an important resource for managing B2B relationships. As a starting point for the complete implementation of B2B web services, this paper deals with how to support B2B business documents in XML web services. In the first phase, basic requirements for driving XML web services by business documents are introduced. As a solution, this paper presents how to express B2B business documents in WSDL, a core standard for XML web services. This approach facilitates the reuse of existing business documents and enhances interoperability between implemented web services. Furthermore, it suggests how to link with other conceptual modeling frameworks such as ebXML/UMM, built on a rich heritage of electronic business experience.
文摘With the widespread use of Chinese globally, the number of Chinese learners has been increasing, leading to various grammatical errors among beginners. Additionally, as domestic efforts to develop industrial information grow, electronic documents have also proliferated. When dealing with numerous electronic documents and texts written by Chinese beginners, manually written texts often contain hidden grammatical errors, posing a significant challenge to traditional manual proofreading. Correcting these grammatical errors is crucial to ensure fluency and readability. However, certain special types of text grammar or logical errors can have a huge impact, and manually proofreading a large number of texts individually is clearly impractical. Consequently, research on text error correction techniques has garnered significant attention in recent years. The advent and advancement of deep learning have paved the way for sequence-to-sequence learning methods to be extensively applied to the task of text error correction. This paper presents a comprehensive analysis of Chinese text grammar error correction technology, elaborates on its current research status, discusses existing problems, proposes preliminary solutions, and conducts experiments using judicial documents as an example. The aim is to provide a feasible research approach for Chinese text error correction technology.
文摘Desertification is increasingly serious in Xinjiang,and the construction of water conservancy is a precondition for the development of agriculture.The main project for the development of agriculture and water conservancy in Xinjiang is to build Karez,which played a vital role in the development of Xinjiang agriculture in the Qing Dynasty.It has been recorded many times in historical documents of the Qing Dynasty,such as Lin Zexu s Diary,Tao Baolian s Diary,Xinjiang Atlas and Zuo Zongtang s Memorial to the Emperor,etc.,which recorded the situation and historical origin of Karez.Karez made a significant contribution to the development of agriculture in the Qing Dynasty.It increased the cultivated land in Xinjiang at that time,and increased the types and yields of crops.It is conducive to the stability and development of Xinjiang s economy.Until today,Karez is still an important water source for agricultural irrigation in Xinjiang.
文摘Traditional human rights theory tends to hold that human rights should be aimed at defending public authority and that the legal issue of human rights is a matter of public law.However,the development of human rights concepts and practices is not just confined to this.A textual search shows that the term“human rights”exists widely in China’s civil judicial documents.Among the 3,412 civil judicial documents we researched,the concept of“human rights”penetrates all kinds of disputes in lawsuits,ranging from property rights,contracts,labor,and torts to marital property,which is embedded in both the claims of the parties concerned and the reasoning of judges.Human rights have become the discourse and yardstick for understanding and evaluating social behavior.The widespread use of the term“human rights”in civil judicial documents reflects at least three concepts related to human rights:first,the rights to subsistence and development are the primary basic human rights;second,the judicial protection of human rights is a bottom-line guarantee;third,the protection of human rights aims to achieve equal rights.Today,judges quote the theory of human rights in judicial judgments from time to time,evidencing that human rights have a practical function in judicial adjudication activities,and in practice this is mainly manifested in declaring righteous values and strengthening arguments with the values and ideas related to human rights,using the provisions concerning human rights in the Constitution to interpret the constitutionality,and using the principles of human rights to interpret blurred rules and rank the importance of different rights.
Funding: This work is supported by the National Key Research and Development Program (2022YFB2702300), the National Natural Science Foundation of China (Grant No. 62172115), the Innovation Fund Program of the Engineering Research Center for Integration and Application of Digital Learning Technology of the Ministry of Education (Grant No. 1331005), the Guangdong Higher Education Innovation Group (2020KCXTD007), and the Guangzhou Fundamental Research Plan of Municipal-School Jointly Funded Projects (No. 202102010445).
Abstract: In the information age, electronic documents (e-documents) have become a popular alternative to paper documents due to their lower costs, higher dissemination rates, and ease of knowledge sharing. However, digital copyright infringements occur frequently due to the ease of copying, which not only infringes on the rights of creators but also weakens their creative enthusiasm. Therefore, it is crucial to establish an e-document sharing system that enforces copyright protection. However, existing centralized systems have outstanding vulnerabilities, and the plagiarism detection algorithms used cannot fully account for the context, semantics, style, and other features of a text; digital watermark technology is used only as a means of infringement tracing. This paper proposes a decentralized framework for e-document sharing based on a decentralized autonomous organization (DAO) and non-fungible tokens (NFTs) on a blockchain. The use of blockchain as a distributed credit base resolves the vulnerabilities inherent in traditional centralized systems. The e-document evaluation and plagiarism detection mechanisms based on the DAO model effectively address challenges in comprehensive text-information checks, thereby promoting the enhancement of e-document quality. The mechanism for protecting and circulating e-document copyrights using NFT technology ensures effective safeguarding of users' e-document copyrights and facilitates e-document sharing. Moreover, recognizing the security issues within the DAO governance mechanism, we introduce an innovative optimization solution; through experimentation, we validate the enhanced security of the optimized governance mechanism, reducing manipulation risks by up to 51%. Additionally, by utilizing evolutionary game analysis to deduce the equilibrium strategies of the framework, we discovered that adjusting the reward and penalty parameters of the incentive mechanism motivates creators to generate higher-quality, unique e-documents, while evaluators are more likely to engage in assessments.
Abstract: Purpose: Accurately assigning the document type of review articles in citation index databases like Web of Science (WoS) and Scopus is important. This study investigates the document-type assignment of review articles in Web of Science, Scopus, and publishers' websites on a large scale. Design/methodology/approach: 27,616 papers from 160 journals in 10 review journal series indexed in SCI are analyzed. The document types of these papers as labeled on the journals' websites and as assigned by WoS and Scopus are retrieved and compared to determine assignment accuracy and to identify possible reasons for wrong assignments. For the document type labeled on the website, we further differentiate explicit reviews from implicit reviews based on whether the website directly indicates that the paper is a review. Findings: Overall, WoS and Scopus performed similarly, with an average precision of about 99% and recall of about 80%. However, there were some differences between WoS and Scopus across different journal series and within the same journal series. The assignment accuracy of WoS and Scopus for implicit reviews dropped significantly, especially for Scopus. Research limitations: The document types we used as the gold standard were based on the journal websites' labeling and were not manually validated one by one. We only studied the labeling performance for review articles published during 2017-2018 in review journals; whether these conclusions extend to review articles published in non-review journals and to the current situation is not clear. Practical implications: This study provides a reference for the accuracy of document-type assignment of review articles in WoS and Scopus, and the identified patterns for assigning implicit reviews may help improve labeling on websites, in WoS, and in Scopus. Originality/value: This study investigated the assignment accuracy of the document type of reviews and identified some patterns of wrong assignments.
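The precision and recall figures reported in this abstract can be computed from simple label counts; a minimal illustrative sketch (the function name and toy labels are hypothetical, not from the paper):

```python
def precision_recall(assigned, gold):
    """Precision/recall for the 'review' label.

    assigned, gold: parallel lists of document-type labels.
    """
    tp = sum(1 for a, g in zip(assigned, gold) if a == "review" and g == "review")
    fp = sum(1 for a, g in zip(assigned, gold) if a == "review" and g != "review")
    fn = sum(1 for a, g in zip(assigned, gold) if a != "review" and g == "review")
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# A database that misses one review (labels it 'article') keeps precision
# at 1.0 but loses recall -- the pattern the study observes for implicit reviews.
p, r = precision_recall(["review", "article", "review"],
                        ["review", "review", "review"])
```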
Abstract: The Gannet Optimization Algorithm (GOA) and the Whale Optimization Algorithm (WOA) demonstrate strong performance; however, there remains room for improvement in convergence and practical applications. This study introduces a hybrid optimization algorithm, the adaptive inertia weight whale optimization algorithm and gannet optimization algorithm (AIWGOA), which addresses challenges in enhancing handwritten documents. The hybrid strategy integrates the strengths of both algorithms, significantly enhancing their capabilities, whereas the adaptive parameter strategy mitigates the need for manual parameter setting. By amalgamating the hybrid strategy and the parameter-adaptive approach, the Gannet Optimization Algorithm was refined to yield the AIWGOA. In a performance analysis on the CEC2013 benchmark, the AIWGOA demonstrates notable advantages across various metrics. Subsequently, an evaluation index was employed to assess the enhanced handwritten documents and images, affirming the superior practical applicability of the AIWGOA compared with other algorithms.
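The abstract does not specify the AIWGOA's exact inertia schedule; the sketch below shows one common form, a linearly decreasing inertia weight damping a simplified WOA "encircling prey" move. All names and constants here are illustrative assumptions, not the paper's formulation:

```python
import random

def adaptive_inertia_weight(t, t_max, w_max=0.9, w_min=0.4):
    # Linearly decreasing inertia weight: w_max at t=0 down to w_min at t=t_max.
    return w_max - (w_max - w_min) * t / t_max

def whale_step(x, best, t, t_max):
    # Simplified WOA "encircling prey" move, damped by the inertia weight.
    w = adaptive_inertia_weight(t, t_max)
    a = 2.0 * (1.0 - t / t_max)          # WOA coefficient, decreases from 2 to 0
    A = 2.0 * a * random.random() - a
    C = 2.0 * random.random()
    return [w * bi - A * abs(C * bi - xi) for xi, bi in zip(x, best)]
```

Early iterations (large w, large a) favor exploration; late iterations contract toward the best-known position.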
Abstract: As digital technologies have advanced more rapidly, the number of paper documents converted into digital format has increased exponentially. To respond to the urgent need to categorize the growing number of digitized documents, real-time classification of digitized documents was identified as the primary goal of our study. Paper classification is the first stage in automating document control and efficient knowledge discovery with little or no human involvement. Artificial intelligence methods such as deep learning are now combined with segmentation to study and interpret traits that were not conceivable ten years ago. Deep learning aids in comprehending input patterns so that object classes may be predicted. The segmentation process divides the input image into separate segments for a more thorough image study. This study proposes a deep learning-enabled framework for automated document classification, which can be implemented in higher education. To further this goal, a dataset was developed that includes seven categories: Diplomas, Personal documents, Journal of Accounting of higher education diplomas, Service letters, Orders, Production orders, and Student orders. Subsequently, a deep learning model based on Conv2D layers is proposed for the document classification process. In the final part of this research, the proposed model is evaluated and compared with other machine-learning techniques. The results demonstrate that the proposed deep learning model outperforms the other machine-learning models in document categorization, reaching 94.84%, 94.79%, 94.62%, 94.43%, and 94.07% in accuracy, precision, recall, F-score, and AUC-ROC, respectively. The achieved results show that the proposed deep model is practical to use as an assistant to an office worker.
Abstract: Research on the use of EHRs presents contradictory results regarding time spent documenting: some studies support electronic records as a tool that speeds documentation, while others find them time-consuming. The purpose of this quantitative retrospective before-after project was to measure the impact of using the laboratory value flowsheet within the EHR on documentation time. The research question was: "Does the use of a laboratory value flowsheet in the EHR impact documentation time by primary care providers (PCPs)?" The theoretical framework utilized in this project was the Donabedian Model. The population in this research was the two PCPs in a small primary care clinic in northwest Puerto Rico. The sample was composed of all the encounters during the months of October 2019 and December 2019. The data were obtained through data mining and analyzed using SPSS 27. The evaluative outcome of this project is that documentation time decreased after implementation of the laboratory value flowsheet in the EHR, while the number of patients seen per day increased. The implications for clinical practice include the use of templates to improve workflow and documentation, decreasing documentation time while increasing the number of patients seen per day.
Abstract: Background: Document images such as statistical reports and scientific journals are widely used in information technology. Accurate detection of table areas in document images is an essential prerequisite for tasks such as information extraction. However, because of the diversity in the shapes and sizes of tables, existing table detection methods adapted from general object detection algorithms have not yet achieved satisfactory results, and incorrect detection results might lead to the loss of critical information. Methods: We therefore propose a novel end-to-end trainable deep network combined with a self-supervised pretraining transformer for feature extraction to minimize incorrect detections. To better deal with table areas of different shapes and sizes, we add a dual-branch context content attention module (DCCAM) to high-dimensional features to extract context content information, thereby enhancing the network's ability to learn shape features. For feature fusion at different scales, we replace the original 3×3 convolution with a multilayer residual module, which carries enhanced gradient-flow information to improve feature representation and extraction capability. Results: We evaluated our method on public document datasets and compared it with previous methods, achieving state-of-the-art results in evaluation metrics such as recall and F1-score. Code: https://github.com/YongZ-Lee/TD-DCCAM.
文摘Ghanaian construction projects encounter a number of challenges, including low health, safety, and environmental requirements, poor performance, and time and cost overruns. To provide value for money (VFM) on government infrastructure projects in Ghana, this research assesses the roles of project consultants, specifically architects and quantity surveyors, and highlights important obstacles. A cross-sectional survey design involving architects and quantity surveyors yielded a 96% response rate after 100 questionnaires were distributed. Consultants’ responsibilities also include monitoring standards compliance, providing advice on delays, controlling budgets, and advising on project completion dates. Difficulties encompass a lack of promptness in decision-making, unethical conduct, political pressure, and inadequate focus on contract administration and construction audits. Project urgency, longevity, political clout, timely decision-making, and team experience are important variables that impact VFM. Policy makers and construction management practitioners should take note of the implications for Ghana’s public infrastructure projects.
Funding: Supported by the National Natural Science Foundation of China (60672089, 60772075).
Abstract: The consultative committee for space data systems (CCSDS) file delivery protocol (CFDP) recommendation for reliable transmission gives no detailed transmission procedure or delay calculation for the prompted negative acknowledgment and asynchronous negative acknowledgment models. CFDP is designed to provide data and storage management, store and forward, custody transfer, and reliable end-to-end delivery over deep space links characterized by huge latency, intermittent connectivity, asymmetric bandwidth, and high bit error rate (BER). Four reliable transmission models are analyzed, and the expected file-delivery time is calculated for different transmission rates, numbers and sizes of packet data units, BERs, frequencies of external events, etc. By comparing the four CFDP models, the BER requirement for typical missions in deep space is obtained, and rules for choosing CFDP models under different uplink state information are given, which provides a reference for protocol model selection, utilization, and modification.
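The kind of expected file-delivery time calculation described above can be sketched with a deliberately simplified retransmission model; this is not the paper's derivation, and the independence and selective-repeat assumptions, like every name below, are illustrative:

```python
def pdu_loss_prob(ber, pdu_bits):
    # Probability a PDU of pdu_bits is corrupted, assuming independent bit errors.
    return 1.0 - (1.0 - ber) ** pdu_bits

def expected_delivery_time(file_bits, pdu_bits, rate_bps, ber, rtt_s):
    """Rough expected file-delivery time under selective retransmission.

    Each PDU needs on average 1/(1 - P_loss) transmissions; one RTT is
    added for the closing acknowledgment exchange.
    """
    n_pdu = -(-file_bits // pdu_bits)            # ceiling division
    p_loss = pdu_loss_prob(ber, pdu_bits)
    mean_tx = 1.0 / (1.0 - p_loss)               # expected transmissions per PDU
    return n_pdu * mean_tx * (pdu_bits / rate_bps) + rtt_s
```

With BER = 0 this reduces to pure serialization time plus one RTT, which makes the model easy to sanity-check before adding the external-event terms the paper considers.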
Abstract: Purpose: Detection of research fields or topics and understanding their dynamics help the scientific community in decisions regarding the establishment of scientific fields, and also support better collaboration with governments and businesses. This study investigates the development of research fields over time, translating it into a topic detection problem. Design/methodology/approach: To achieve the objectives, we propose a modified deep clustering method to detect research trends from the abstracts and titles of academic documents. Document embedding approaches are utilized to transform documents into vector-based representations. The proposed method is evaluated by comparing it with combinations of different embedding and clustering approaches and with classical topic modeling algorithms (i.e., LDA) against a benchmark dataset. A case study is also conducted exploring the evolution of Artificial Intelligence (AI), detecting the research topics or sub-fields in related AI publications. Findings: Evaluating the performance of the proposed method using clustering performance indicators shows that it outperforms similar approaches on the benchmark dataset. Using the proposed method, we also show how the topics have evolved over the past 30 years, taking advantage of a keyword extraction method for cluster tagging and labeling to demonstrate the context of the topics. Research limitations: We noticed that it is not possible to generalize one solution for all downstream tasks; the solutions must be fine-tuned or optimized for each task and even each dataset. In addition, interpretation of cluster labels can be subjective and vary based on readers' opinions. It is also very difficult to evaluate the labeling techniques, rendering the explanation of the clusters further limited. Practical implications: As demonstrated in the case study, we show how, in a real-world example, the proposed method would enable researchers and reviewers of academic research to detect, summarize, analyze, and visualize research topics from decades of academic documents. This helps the scientific community and all related organizations in fast and effective analysis of the fields, by establishing and explaining the topics. Originality/value: In this study, we introduce a modified and tuned deep embedding clustering coupled with Doc2Vec representations for topic extraction. We also use a concept extraction method as a labeling approach. The effectiveness of the method has been evaluated in a case study of AI publications, where we analyze the AI topics of the past three decades.
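The clustering stage can be illustrated, in greatly simplified form, with plain k-means over document vectors. The paper couples a deep embedding clustering network with Doc2Vec; everything below, including the toy vectors, is a simplified stand-in:

```python
import random

def kmeans(vectors, k, iters=20, seed=0):
    """Plain k-means: a simplified stand-in for deep embedding clustering."""
    rng = random.Random(seed)
    centers = rng.sample(vectors, k)
    for _ in range(iters):
        # Assign each vector to its nearest center (squared Euclidean distance).
        clusters = [[] for _ in range(k)]
        for v in vectors:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(v, centers[c])))
            clusters[i].append(v)
        # Move each center to the mean of its members.
        for c, members in enumerate(clusters):
            if members:
                centers[c] = [sum(dim) / len(members) for dim in zip(*members)]
    return centers, clusters

# Two well-separated toy "documents-in-embedding-space" groups.
centers, clusters = kmeans([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]], 2)
```

In the paper's pipeline, each recovered cluster would then be tagged with extracted keywords to name the research topic.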
Funding: Supported by the Heilongjiang Outstanding Young Teacher Fund (1151G037).
Abstract: The major problem with most current information models lies in the fact that individual words provide unreliable evidence about the content of texts. When the document is short, e.g. when only the abstract is available, the word-use variability problem has a substantial impact on Information Retrieval (IR) performance. To solve this problem, a new technique for short-document retrieval named the Reference Document Model (RDM) is put forward in this letter. RDM obtains the statistical semantics of the query/document by pseudo feedback for both the query and the document from reference documents. The contributions of this model are three-fold: (1) pseudo feedback for both the query and the document; (2) building the query model and the document model from reference documents; (3) flexible indexing units, which can be any linguistic elements, such as documents, paragraphs, sentences, n-grams, terms, or characters. For short-document retrieval, RDM achieves significant improvements over the classical probabilistic models on the task of ad hoc retrieval on Text REtrieval Conference (TREC) test sets. Results also show that the shorter the document, the better the RDM performance.
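Pseudo feedback of the kind RDM builds on can be sketched generically: expand a query with frequent terms from the top-ranked reference documents. This is a textbook pseudo-relevance-feedback sketch, not the RDM estimator itself; the function name and toy documents are assumptions:

```python
from collections import Counter

def pseudo_feedback_expand(query_terms, reference_docs, top_k=2, n_terms=3):
    """Expand a query with frequent terms from the top-k reference documents.

    reference_docs is assumed to be already ranked by an initial retrieval pass.
    """
    counts = Counter()
    for doc in reference_docs[:top_k]:
        # Count terms not already in the query.
        counts.update(t for t in doc.split() if t not in query_terms)
    expansion = [t for t, _ in counts.most_common(n_terms)]
    return list(query_terms) + expansion

expanded = pseudo_feedback_expand(
    ["deep", "space"],
    ["deep space probe probe telemetry", "probe relay"],
    top_k=2, n_terms=1)
```

The expanded query carries statistical evidence the short original could not, which is the intuition behind building query and document models from reference documents.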
Abstract: Universities produce a large number of composed documents in the teaching process, most of which must be checked for similarity for validation. A similarity computation system is constructed for composed documents containing both images and text. First, each document is split into two parts, images and text. These documents are then compared by computing the similarities of the images and of the text contents independently. Through the Hadoop system, the text contents are separated easily and quickly. Experimental results show that the proposed system is efficient and practical.
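One simple way to score the text part of two composed documents is bag-of-words cosine similarity; the abstract does not state the exact measure used, so the sketch below is an illustrative assumption:

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Bag-of-words cosine similarity between two texts."""
    a, b = Counter(text_a.split()), Counter(text_b.split())
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0
```

An image-side score computed separately (e.g. from perceptual hashes) could then be combined with this text score, matching the split-and-compare design described above.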
Abstract: This paper examines automatic recognition and extraction of tables from a large collection of heterogeneous documents. The heterogeneous documents are first pre-processed and converted to HTML, after which an algorithm recognizes the table portion of each document. A Hidden Markov Model (HMM) is then applied to the HTML code in order to extract the tables. The model was trained and tested with five hundred and twenty-six self-generated tables (three hundred and twenty-one (321) tables for training and two hundred and five (205) tables for testing). The Viterbi algorithm was implemented for the testing part. The system was evaluated in terms of accuracy, precision, recall, and F-measure. The overall evaluation results show 88.8% accuracy, 96.8% precision, 91.7% recall, and an 88.8% F-measure, revealing that the method is good at solving the problem of table extraction.
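The Viterbi decoding used in the testing phase is standard; below is a compact implementation, applied to a toy two-state "table vs. text" model whose probabilities are invented for illustration (the paper's trained parameters are not given in the abstract):

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most probable state path for an observation sequence."""
    # Each cell stores (best probability, predecessor state).
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for o in obs[1:]:
        row = {}
        for s in states:
            prob, prev = max(
                (V[-1][p][0] * trans_p[p][s] * emit_p[s][o], p) for p in states)
            row[s] = (prob, prev)
        V.append(row)
    # Backtrack from the best final state.
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for row in reversed(V[1:]):
        path.append(row[path[-1]][1])
    return list(reversed(path))

# Toy model: does a token stream come from a table region or plain text?
states = ("table", "text")
start_p = {"table": 0.5, "text": 0.5}
trans_p = {"table": {"table": 0.8, "text": 0.2},
           "text": {"table": 0.2, "text": 0.8}}
emit_p = {"table": {"tr": 0.7, "p": 0.3},
          "text": {"tr": 0.1, "p": 0.9}}
path = viterbi(("p", "p", "tr"), states, start_p, trans_p, emit_p)
```

The sticky transition probabilities mimic the fact that HTML table markup tends to occur in contiguous runs, which is what makes an HMM a reasonable fit for this extraction task.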
Funding: This work was supported by the Science and Technology Plan Special Project of Guangzhou, China (No. GZDD201808) and the National Key Research Program for International Cooperation-MOST/STDF (2021YFE0104500).
Abstract: Deacidification and self-cleaning are important for the preservation of paper documents. In this study, nano-CaCO3 was used as a deacidification agent and stabilized by nanocellulose (CNC) and hydroxypropyl methylcellulose (HPMC) to form a uniform dispersion. Following treatment with polydimethylsiloxane (PDMS) and chemical vapor deposition (CVD) of methyltrimethoxysilane (MTMS), a hydrophobic coating was constructed for self-cleaning purposes. The pH value of the treated paper was approximately 8.20, and the static contact angle was as high as 152.29°. Compared to the untreated paper, the tensile strength of the treated paper increased by 12.6%. This treatment endows the paper with a good deacidification effect and a self-cleaning property, which are beneficial for its long-term preservation.
Abstract: The issue of document management has long been raised, especially with the appearance of office automation in the 1980s, which led to dematerialization and Electronic Document Management (EDM). In the same period, workflow management experienced significant development but became more focused on industry. It seems to us, however, that document workflows have not received the same interest from the scientific community. Nowadays, the emergence and supremacy of the Internet in electronic exchanges are leading to a massive dematerialization of documents, which requires a conceptual reconsideration of the organizational framework for processing said documents in both public and private administrations. This problem seems open to us and deserves the interest of the scientific community. Indeed, EDM has mainly focused on the storage (referencing) and circulation of documents (traceability); it has paid little attention to the overall behavior of the system in processing documents. The purpose of our research is to model document processing systems. In previous works, we proposed a general model and its specialization to the case of small documents (any document processed by a single person at a time during its processing life cycle), which, according to our study, represent 70% of the documents processed by administrations. In this contribution, we extend the model for processing small documents to the case where they are managed in a system comprising document classes organized into subclasses, which is the case for most administrations. We have thus observed that this model is a Markovian M^(L×K)/M^(L×K)/1 queueing network. We have analyzed the constraints of this model and deduced certain characteristics and metrics. In fine, the ultimate objective of our work is to design a document workflow management system integrating a component for predicting global behavior.
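For a single M/M/1 station, the standard steady-state formulas give the kind of metrics the abstract refers to; the paper's M^(L×K)/M^(L×K)/1 network is more elaborate, so this is only the base case:

```python
def mm1_metrics(lam, mu):
    """Steady-state metrics of an M/M/1 queue with arrival rate lam < service rate mu."""
    if lam >= mu:
        raise ValueError("queue is unstable: need lam < mu")
    rho = lam / mu                 # server utilization
    L = rho / (1.0 - rho)          # mean number of documents in the system
    W = 1.0 / (mu - lam)           # mean time a document spends in the system
    return rho, L, W               # Little's law holds: L == lam * W
```

For example, a clerk who can process 4 documents/hour facing 2 arrivals/hour is 50% utilized, holds 1 document on average, and takes half an hour per document end to end; a network model composes many such stations per document class.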
Abstract: As XML web services become recognized as a solution for business-to-business transactions, many problems remain to be solved. For example, it is not easy to manipulate business documents of existing standards such as RosettaNet and UN/EDIFACT EDI, traditionally regarded as an important resource for managing B2B relationships. As a starting point for a complete implementation of B2B web services, this paper deals with how to support B2B business documents in XML web services. In the first phase, the basic requirements for driving XML web services by business documents are introduced. As a solution, the paper presents how to express B2B business documents in WSDL, a core standard for XML web services. This approach facilitates the reuse of existing business documents and enhances interoperability between implemented web services. Furthermore, it suggests how to link with other conceptual modeling frameworks such as ebXML/UMM, built on a rich heritage of electronic business experience.