The issue of document management has been raised for a long time, especially with the appearance of office automation in the 1980s, which led to dematerialization and Electronic Document Management (EDM). In the same period, workflow management experienced significant development, but became more focused on industry. However, it seems to us that document workflows have not received the same interest from the scientific community. Nowadays, the emergence and supremacy of the Internet in electronic exchanges are leading to a massive dematerialization of documents, which requires a conceptual reconsideration of the organizational framework for processing those documents in both public and private administrations. This problem seems open to us and deserves the interest of the scientific community. Indeed, EDM has mainly focused on the storage (referencing) and circulation (traceability) of documents; it has paid little attention to the overall behavior of the system in processing documents. The purpose of our research is to model document processing systems. In previous work, we proposed a general model and its specialization to the case of small documents (any document processed by a single person at a time during its processing life cycle), which represent 70% of documents processed by administrations, according to our study. In this contribution, we extend the model for processing small documents to the case where they are managed in a system comprising document classes organized in subclasses, which is the case for most administrations. We have thus observed that this model is a network of Markovian <i>M<sup>L×K</sup>/M<sup>L×K</sup>/</i>1 queues.
We have analyzed the constraints of this model and deduced certain characteristics and metrics. <i>In fine</i>, the ultimate objective of our work is to design a document workflow management system integrating a component for predicting global behavior.
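The abstract's <i>M<sup>L×K</sup>/M<sup>L×K</sup>/</i>1 network generalizes the classical single-station M/M/1 queue. As a minimal illustration of the kind of metrics such a model yields (not the paper's actual network model), the standard M/M/1 steady-state formulas can be computed as follows; the arrival and service rates used are hypothetical.

```python
def mm1_metrics(lam, mu):
    """Standard steady-state metrics for a single M/M/1 queue.

    lam: mean arrival rate (documents per unit time)
    mu:  mean service rate (documents processed per unit time)
    Requires lam < mu for stability.
    """
    if lam >= mu:
        raise ValueError("unstable: arrival rate must be below service rate")
    rho = lam / mu                 # server utilization
    L = rho / (1 - rho)            # mean number of documents in the system
    W = 1 / (mu - lam)             # mean time a document spends in the system
    Lq = rho ** 2 / (1 - rho)      # mean number waiting (excluding in service)
    Wq = rho / (mu - lam)          # mean waiting time before service starts
    return {"rho": rho, "L": L, "W": W, "Lq": Lq, "Wq": Wq}

# hypothetical rates: 4 documents/hour arriving, 5 processed/hour
m = mm1_metrics(4.0, 5.0)
print(m)  # rho=0.8, L=4.0, W=1.0 h, Lq=3.2, Wq=0.8 h
```

A global-behavior predictor of the kind the abstract envisions would evaluate such metrics per station and propagate flows through the network.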
With the continuous development of big data and artificial intelligence technologies, Large Language Models (LLMs) have shown great potential in fields such as knowledge mining and document integration. Through methods including knowledge graph construction, text classification, and information retrieval, this paper discusses the architecture of large language models and their applications in different scenarios, and examines in depth the refinement and integration of knowledge. It studies how to improve the efficiency of multi-document collaborative processing through standardized structures and semantic fusion techniques. Combined with analyses of real-world cases, it demonstrates the effectiveness of large language models applied to complex knowledge systems, as a reference for their practical use.
The production model of "multi-specification and low-quantity" is becoming the main trend in the manufacturing industry. As a key activity in the manufacturing chain, the traditional computer aided process planning (CAPP) system fails to adapt to this customized production model. Therefore, a novel method for variant design of process planning was proposed to develop a CAPP system based on Tabular Layouts of Article Characteristics (Sach-Merk Leisten in German, SML for short). With the support of a standard database of master process planning documents developed by a parameterization technique, instance process planning for a specific product (instance product) can be generated automatically by the variant-design sub-system of process planning. Finally, a CAPP system was developed for the process design of a steam turbine rotor to validate the feasibility and applicability of the method.
This critical review assesses the application of artificial intelligence to handling legal documents, with specific reference to medical negligence cases, in order to identify its transformative potential, issues, and ethical concerns. The review consolidates findings on the impact of AI in improving efficiency, accuracy, and justice delivery in the legal profession. The studies show faster document review and improved accuracy of the reviewed documents, with estimated time savings of around 60%. However, the review also outlines problems that continue to characterize AI, such as data quality issues, biased algorithms, and opaque decision-making. It assesses ethical issues related to patient autonomy, justice, and non-maleficence, with particular focus on patient privacy, fair process, and potential unfairness to patients. Its survey of AI innovations finds that regulation lags behind AI development, leaving unsettled issues regarding legal responsibility for AI and user control over AI-generated results and findings in legal proceedings. Future avenues presented in the study include explainable AI (XAI) for legal purposes, federated learning for resolving privacy issues, and the need to foster adaptive regulation. Finally, the review advocates for legal subject matter experts to collaborate with legal informatics experts, ethicists, and policy makers to develop the best solutions for implementing AI in medical negligence claims.
It reasons that AI has great potential to deeply impact the practice of law, but that this must be done in a way that respects justice and the rights of individuals.
In order to study the feasibility of treating petrochemical wastewater by a combination of anaerobic and aerobic biological processes, a study of wastewater treatment in a UASB reactor and an aeration basin was conducted. The test results show that at moderate temperature, with a volumetric COD<sub>Cr</sub> load of 5.2 kgCOD/(m³·d) in the UASB reactor and an HRT of 24 h, removal rates of 85% for BOD<sub>5</sub> and 83% for COD<sub>Cr</sub> and a volumetric gas production rate of 1.34 m³/(m³·d) can be obtained. The aerobic biodegradability of the petrochemical wastewater can be increased by 20%–30% after anaerobic treatment. With Ns = 0.45 kgCOD/(kgMLSS·d) and HRT = 4 h in the aeration tank, removal rates of 94% for BOD<sub>5</sub> and 93% for COD<sub>Cr</sub>, and total removal rates of 98.8% for COD<sub>Cr</sub> and 99% for BOD<sub>5</sub>, can be reached.
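The total removal rates in this abstract follow from the per-stage rates: for treatment stages in series, the overall removal is one minus the product of the remaining fractions. A quick sketch confirms that the anaerobic and aerobic stage figures reproduce the reported totals:

```python
def overall_removal(stage_rates):
    """Overall removal across treatment stages in series.

    Each stage removes a fraction r of what reaches it, so the
    remaining fraction is prod(1 - r_i) and removal is 1 - that.
    """
    remaining = 1.0
    for r in stage_rates:
        remaining *= (1.0 - r)
    return 1.0 - remaining

# COD_Cr: 83% in the UASB reactor, then 93% in the aeration tank
cod = overall_removal([0.83, 0.93])
# BOD5: 85% in the UASB reactor, then 94% in the aeration tank
bod = overall_removal([0.85, 0.94])
print(f"overall COD_Cr removal: {cod:.1%}")  # ~98.8%, matching the abstract
print(f"overall BOD5 removal:   {bod:.1%}")  # ~99.1%, consistent with the 99% reported
```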
The Extensible Markup Language (XML) is becoming a de facto standard for exchanging information among web applications. Efficient implementation of a web application requires efficient design of its XML and XML schema documents. The quality of an XML document has a great impact on the design quality of its schema document. Therefore, the design of XML schema documents plays an important role in the web engineering process and needs to satisfy many schema qualities: functionality, extensibility, reusability, understandability, maintainability, and so on. Three schema metrics are proposed: the Reusable Quality metric (RQ), the Extensible Quality metric (EQ), and the Understandable Quality metric (UQ), which measure the reusability, extensibility, and understandability of XML schema documents in the web engineering process, respectively. The base attributes are selected according to the XML Quality Assurance Design Guidelines. The metrics are formulated using the Binary Entropy Function and the Rank Order Centroid method. To check the validity of the proposed metrics empirically and analytically, the self-organizing feature map (SOM) and Weyuker's 9 properties are used.
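The two mathematical ingredients named here are both standard and easy to state. The binary entropy function is H(p) = −p·log₂p − (1−p)·log₂(1−p), and the Rank Order Centroid assigns weight wᵢ = (1/n)·Σₖ₌ᵢⁿ 1/k to the i-th most important of n criteria. A minimal sketch of both (how the paper combines them into RQ, EQ, and UQ is not shown here):

```python
import math

def binary_entropy(p):
    """H(p) = -p*log2(p) - (1-p)*log2(1-p); defined as 0 at p = 0 or 1."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def roc_weights(n):
    """Rank Order Centroid weights for n criteria ranked by importance.

    w_i = (1/n) * sum_{k=i}^{n} 1/k for i = 1..n; the weights sum to 1,
    with the highest-ranked criterion receiving the largest weight.
    """
    return [sum(1.0 / k for k in range(i, n + 1)) / n for i in range(1, n + 1)]

print(binary_entropy(0.5))   # entropy is maximal (1.0 bit) at p = 0.5
print(roc_weights(3))        # approx [0.611, 0.278, 0.111]
```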
This research develops a knowledge model for Software Process Improvement (SPI) projects based on knowledge creation theory and its twenty-four measurement items, and proposes two hypotheses about the interaction of explicit and tacit knowledge in SPI. Eleven factors are extracted through statistical analysis. Three knowledge-creation practices for capturing tacit knowledge contribute greatly to SPI: communication among members, crossover collaboration in practical work, and pair programming. Two knowledge-creation practices for capturing explicit knowledge have a significant positive impact on SPI: integrating project documents and on-the-job training. Finally, suggestions for improvement are put forward, namely encouraging communication among staff and integrating documents in real time, and future research is outlined.