The digital development rights in developing countries are based on establishing a new international economic order and ensuring equal participation in the digital globalization process to achieve people's well-rounded development in the digital society. The relationship between cross-border data flows and the realization of digital development rights in developing countries is quite complex. Currently, developing countries seek to safeguard their existing digital interests through unilateral regulation to protect data sovereignty and multilateral regulation for cross-border data cooperation. However, developing countries still have to face internal conflicts between national digital development rights and individual and corporate digital development rights during the process of realizing digital development rights. They also encounter external contradictions such as developed countries interfering with developing countries' data sovereignty, developed countries squeezing the policy space of developing countries through dominant rules, and developing countries facing conflicts between domestic and international rules. This article argues that balancing openness and security on digital trade platforms is the optimal solution for developing countries to realize their digital development rights. The establishment of WTO digital trade rules should inherently reflect the fundamental demands of developing countries in cross-border data flows. At the same time, given China's dual role as a digital powerhouse and a developing country, it should actively promote the realization of digital development rights in developing countries.
This paper proposes a method of data-flow testing for Web services composition. First, to facilitate data-flow analysis and constraint collection, the existing model representation of the business process execution language (BPEL) is modified in accordance with an analysis of data dependency, and an exact representation of dead path elimination (DPE) is proposed, which overcomes the difficulties DPE poses for data-flow analysis. Then, def-use information based on data-flow rules is collected by parsing BPEL and Web services description language (WSDL) documents, and a def-use annotated control flow graph is created. Based on this model, data-flow anomalies that indicate potential errors can be discovered by traversing the paths of the graph, and the all-du-paths used in dynamic data-flow testing for Web services composition are generated automatically; testers can then design test cases according to the constraints collected for each selected path.
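The def-use machinery described in the abstract can be illustrated with a small sketch; the toy control flow graph, node names, and single variable below are invented for illustration and are not the paper's BPEL-derived model:

```python
# Sketch of all-du-paths enumeration on a toy def-use annotated control
# flow graph. Node names and the tiny graph are hypothetical.

# successors of each node
CFG = {
    "start": ["n1"],
    "n1": ["n2", "n3"],
    "n2": ["n4"],
    "n3": ["n4"],
    "n4": ["end"],
    "end": [],
}
DEFS = {"start": {"x"}, "n2": {"x"}}      # nodes that (re)define x
USES = {"n3": {"x"}, "n4": {"x"}}         # nodes that use x

def du_paths(var):
    """Enumerate def-clear paths from each definition of var to each use."""
    paths = []
    for d in (n for n, vs in DEFS.items() if var in vs):
        stack = [[d]]
        while stack:
            path = stack.pop()
            node = path[-1]
            if node != d and var in USES.get(node, set()):
                paths.append(path)          # reached a use, def-clear
            for s in CFG[node]:
                # do not extend through a redefinition of var or a cycle
                if s in path or (s != d and var in DEFS.get(s, set())):
                    continue
                stack.append(path + [s])
    return paths

print(sorted(du_paths("x")))
```

Each returned path is one test requirement; a tester would then derive input constraints forcing execution down that path.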
With its high repeatability, the airgun source has been used to monitor the temporal variations of subsurface structures. However, under different working conditions, there will be subtle differences in the airgun source signals. To some extent, deconvolution can eliminate changes in the recorded signals due to source variations. Generally speaking, in order to remove the airgun source wavelet signal and obtain the Green's functions between the airgun source and stations, we need to select an appropriate method to perform the deconvolution of the seismic waveform data. Frequency-domain water-level deconvolution and time-domain iterative deconvolution are two deconvolution methods widely used in fields such as receiver function analysis. We use the Binchuan (Yunnan Province, China) airgun data as an example to compare the performance of these two deconvolution methods in airgun source data processing. The results indicate that frequency-domain water-level deconvolution is better in terms of computational efficiency, while time-domain iterative deconvolution is better in terms of the signal-to-noise ratio (SNR), and the initial motion of the P-wave is also clearer. We further discuss the order of deconvolution and stacking for multiple-shot airgun data processing. Finally, we propose a general processing flow for airgun source data to extract the Green's functions between the airgun source and stations.
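Time-domain iterative deconvolution, one of the two methods compared above, can be sketched in a few lines; the wavelet, spike train, and iteration count below are synthetic examples, not Binchuan data:

```python
# Minimal pure-Python sketch of time-domain iterative deconvolution:
# repeatedly find the lag where the wavelet best matches the residual,
# add a spike there, and subtract the fitted arrival.

def xcorr_lag(res, w):
    """Lag and amplitude that best align wavelet w with residual res."""
    best_lag, best_amp = 0, 0.0
    energy = sum(v * v for v in w)
    for lag in range(len(res) - len(w) + 1):
        c = sum(res[lag + i] * w[i] for i in range(len(w)))
        if abs(c) > abs(best_amp * energy):
            best_lag, best_amp = lag, c / energy
    return best_lag, best_amp

def iterative_deconv(rec, w, n_iter=5):
    """Estimate a spike train g such that rec is approximately g * w."""
    res = list(rec)
    g = [0.0] * (len(rec) - len(w) + 1)
    for _ in range(n_iter):
        lag, amp = xcorr_lag(res, w)
        g[lag] += amp
        for i in range(len(w)):          # subtract the fitted arrival
            res[lag + i] -= amp * w[i]
    return g

# Synthetic record: two arrivals at lags 2 and 6 with amplitudes 1.0, 0.5
w = [1.0, 0.6, 0.2]
true_g = [0, 0, 1.0, 0, 0, 0, 0.5, 0, 0]
rec = [sum(true_g[k] * w[i - k] for k in range(len(true_g))
           if 0 <= i - k < len(w)) for i in range(len(true_g) + len(w) - 1)]
print(iterative_deconv(rec, w))
```

For a noise-free record like this, the recovered spike train matches the true one; real airgun data would also need the water-level or damping safeguards the paper discusses.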
Volatile nitrosamines (VNAs) are a group of compounds classified as probable (group 2A) and possible (group 2B) carcinogens in humans. Along with certain foods and contaminated drinking water, VNAs are detected at high levels in tobacco products and in both mainstream and side-stream smoke. Our laboratory monitors six urinary VNAs—N-nitrosodimethylamine (NDMA), N-nitrosomethylethylamine (NMEA), N-nitrosodiethylamine (NDEA), N-nitrosopiperidine (NPIP), N-nitrosopyrrolidine (NPYR), and N-nitrosomorpholine (NMOR)—using isotope dilution GC-MS/MS (QQQ) for large population studies such as the National Health and Nutrition Examination Survey (NHANES). In this paper, we report for the first time a new automated sample preparation method to quantitate these VNAs more efficiently. Automation is done using Hamilton STAR™ and Caliper Staccato™ workstations. This new automated method reduces sample preparation time from 4 hours to 2.5 hours while maintaining precision (inter-run CV < 10%) and accuracy (85% - 111%). More importantly, this method increases sample throughput while maintaining a low limit of detection (<10 pg/mL) for all analytes. A streamlined sample data flow was created in parallel to the automated method, in which samples can be tracked from receiving to final LIMS output with minimal human intervention, further minimizing human error in the sample preparation process. This new automated method and the sample data flow are currently applied in bio-monitoring of VNAs in the US non-institutionalized population in the NHANES 2013-2014 cycle.
SOZL (structured methodology + object-oriented methodology + Z language) is a language that attempts to integrate the structured method, the object-oriented method and the formal method. The core of this language is the predicate data flow diagram (PDFD). In order to eliminate the ambiguity of predicate data flow diagrams and their associated textual specifications, a formalization of the syntax and semantics of predicate data flow diagrams is necessary. In this paper we use Z notation to define an abstract syntax and the related structural constraints for the PDFD notation, and provide it with an axiomatic semantics based on the concepts of data availability and the functionality of predicate operations. Finally, an example is given to establish functionality-consistent decomposition on hierarchical PDFDs (HPDFDs).
Architectures based on the data flow computing model provide an alternative to the conventional Von Neumann architecture that is widely used for general-purpose computing. Processors based on the data flow architecture employ fine-grain, data-driven parallelism. These architectures have the potential to exploit the inherent parallelism in compute-intensive applications like signal processing and image and video processing, and can thus achieve faster throughputs and higher power efficiency. In this paper, several data flow computing architectures are explored, and their main architectural features are studied. Furthermore, a classification of the processors is presented based on whether they employ the data flow execution model exclusively or in combination with the control flow model; they are accordingly grouped as exclusive data flow or hybrid architectures. The hybrid category is further subdivided into conjoint or accelerator-style architectures depending on how they deploy and separate the data flow and control flow execution models within their execution blocks. Lastly, a brief comparison and discussion of their advantages and drawbacks is also provided. From this study we conclude that although data flow architectures have matured significantly, issues like data-structure handling and the lack of efficient placement and scheduling algorithms have prevented them from becoming commercially viable.
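The data-driven firing rule that distinguishes these architectures from control flow execution can be illustrated with a toy software interpreter; the graph and node names are invented, and real data flow processors implement this in hardware:

```python
# Toy data flow interpreter: a node "fires" as soon as all of its input
# tokens are available, regardless of program order. The invented graph
# computes (a + b) * (a - b).

graph = {
    "add": {"op": lambda x, y: x + y, "ins": ["a", "b"], "out": "s"},
    "sub": {"op": lambda x, y: x - y, "ins": ["a", "b"], "out": "d"},
    "mul": {"op": lambda x, y: x * y, "ins": ["s", "d"], "out": "p"},
}

def run(tokens):
    tokens = dict(tokens)
    pending = set(graph)
    while pending:
        # fire every node whose input tokens are all present (data-driven)
        ready = [n for n in pending
                 if all(i in tokens for i in graph[n]["ins"])]
        if not ready:
            raise RuntimeError("deadlock: no node can fire")
        for n in ready:
            node = graph[n]
            tokens[node["out"]] = node["op"](*(tokens[i] for i in node["ins"]))
        pending -= set(ready)
    return tokens

print(run({"a": 5, "b": 3})["p"])  # prints 16
```

Note that "add" and "sub" fire in the same step because their inputs arrive together; this is the fine-grain parallelism the abstract refers to.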
A new synthetic knowledge representation model that integrates the attribute grammar model with the semantic network model is presented. The model mainly uses the symbols of attribute grammar to establish a set of syntax and semantic rules suitable for a semantic network. Based on the model, the paper introduces a formal method for defining data flow diagrams (DFDs) and briefly explains how to use the method.
In this paper, on the basis of experimental data from two kinds of chemical explosions, the piston-pushing model of spherical blast waves and a second-order Godunov-type finite difference scheme with high resolution at discontinuities are used for the numerical reconstruction of part of an actual hemispherical blast-wave flow field by properly adjusting the moving boundary conditions of a piston. This method is simple and reliable, and it is suitable for evaluating the effects of the blast-wave flow field away from the explosion center.
Nowadays, data are increasingly used for intelligent modeling and prediction, and the comprehensive evaluation of data quality is receiving more and more attention as a necessary means of measuring whether the data are usable. However, comprehensive evaluation methods for data quality mostly involve the subjective judgment of the evaluator, so how to evaluate data comprehensively and objectively has become a bottleneck in research on comprehensive evaluation methods. In order to evaluate data more comprehensively, objectively, and discriminatively, a novel comprehensive evaluation method based on particle swarm optimization (PSO) and grey correlation analysis (GCA) is presented in this paper. First, an improved GCA evaluation model based on the technique for order preference by similarity to an ideal solution (TOPSIS) is proposed. Then, an objective function model maximizing the difference of the comprehensive evaluation values is built, and the PSO algorithm is used to optimize the weights of the improved GCA evaluation model based on this objective function. Finally, the performance of the proposed method is investigated through parameter analysis. A performance comparison on traffic flow data is carried out, and the simulation results show that the maximum average difference between the evaluation results and their mean value (MDR) of the proposed comprehensive evaluation method is 33.24% higher than that of TOPSIS-GCA and 6.86% higher than that of GCA. The proposed method has better differentiation than the other methods, which means that it evaluates the data objectively and comprehensively from both the relevance and the differentiation of the data, and the results more effectively reflect differences in data quality, which will provide more effective data support for intelligent modeling, prediction, and other applications.
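The grey correlation analysis at the core of the proposed method can be sketched as follows; the reference series, candidate series, and rho = 0.5 distinguishing coefficient are illustrative choices, not the paper's traffic-flow data or its PSO-optimized weights:

```python
# Pure-Python sketch of grey relational grades against a TOPSIS-style
# ideal reference series. All numbers below are made up; rho = 0.5 is
# the customary distinguishing coefficient.

def grey_relational_grades(reference, candidates, rho=0.5):
    """Grade each candidate series by closeness to the reference."""
    deltas = [[abs(r - c) for r, c in zip(reference, cand)]
              for cand in candidates]
    flat = [d for row in deltas for d in row]
    d_min, d_max = min(flat), max(flat)   # global extrema over all series
    grades = []
    for row in deltas:
        coeffs = [(d_min + rho * d_max) / (d + rho * d_max) for d in row]
        grades.append(sum(coeffs) / len(coeffs))  # equal weights here
    return grades

ideal = [1.0, 1.0, 1.0]                       # ideal solution
candidates = [[0.9, 0.8, 1.0], [0.4, 0.5, 0.6]]
print(grey_relational_grades(ideal, candidates))
```

The paper's contribution is, in effect, replacing the equal weights in the averaging step with weights tuned by PSO to maximize the spread of the resulting grades.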
The present study aims to improve the efficiency of typical procedures used for post-processing flow field data by applying neural-network technology. Taking a problem of aircraft design as the workhorse, a regression model, FCN-VGG19, for processing aircraft flow data is elaborated based on VGGNet (Visual Geometry Group Net) and FCN (Fully Convolutional Network) techniques. As shown by the results, the model displays a strong fitting ability, and there is almost no over-fitting in training. Moreover, the model has good accuracy and convergence. For different input data and different grids, the model basically achieves convergence, showing good performance. It is shown that the proposed simulation regression model based on FCN has great potential in typical problems of computational fluid dynamics (CFD) and related data processing.
The “Citizen-Centric Complaint Reporting and Analyzing Mechanism” project is designed to create an online complaint system, called “e-Complaint”, that allows citizens to file complaints related to crime and misconduct in a secure and user-friendly way. The proposed system addresses the challenges of the current complaint process, ensuring transparency and accountability in the police force, and offers significant benefits for both citizens and police departments.
This paper states the basic principle of program data flow analysis in a formal way and introduces the concept of the data flow expression. On the basis of this concept, an algorithm for finding data flow anomalies is presented. The algorithm has great generality, making it easy to develop a program-testing tool, so it is practical in application.
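The kind of data flow anomaly such an algorithm finds can be sketched with a simple per-variable state machine; the event encoding below is our simplification for illustration, not the paper's data flow expressions:

```python
# Sketch of detecting classic data-flow anomalies from a per-variable
# event sequence: dd (redefinition without use), ur (use before
# definition), du (defined but never used before being killed).

def anomalies(events):
    """events: sequence of ('d'|'u'|'k', var) actions in execution order.
    'd' = define, 'u' = use, 'k' = kill/undefine."""
    state, found = {}, []
    for action, var in events:
        prev = state.get(var, "k")           # variables start undefined
        if action == "d" and prev == "d":
            found.append(("dd", var))
        if action == "u" and prev == "k":
            found.append(("ur", var))
        if action == "k" and prev == "d":
            found.append(("du", var))
        state[var] = action
    return found

trace = [("d", "x"), ("d", "x"), ("u", "y"), ("u", "x"), ("k", "x")]
print(anomalies(trace))  # prints [('dd', 'x'), ('ur', 'y')]
```

A practical tool would derive such event sequences from every path of the program's control flow graph rather than from a single trace.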
Cross-border data flows not only involve cross-border trade issues, but also severely challenge personal information protection, national data security, and the jurisdiction of justice and enforcement. As current digital trade negotiations cannot accommodate these challenges, China has initiated the concept of secure cross-border data flow and has launched a dual-track, multi-level regulatory system, including a control system for the overseas transfer of important data, a system for the cross-border provision of personal information, and a system for cross-border data requests for justice and enforcement. To explore a global regulatory framework for cross-border data flows, legitimate and controllable cross-border data flows should be promoted, supervision should be categorized based on the risk concerned, and the rule of law should be coordinated at home and abroad to promote system compatibility. To this end, the key is to build a compatible regulatory framework, which includes clarifying the scope of important data to define the “Negative List” for preventing national security risks, improving cross-border accountability for protecting personal information rights and interests to ease pre-supervision pressure, and focusing on data access rights instead of data localization for upholding the jurisdiction of justice and enforcement.
The regulation of cross-border data flows is a growing challenge for the international community. International trade agreements, however, appear to be pioneering legal methods to cope, as they have grappled with this issue since the 1990s. The World Trade Organization (WTO) rules system offers a partial solution under the General Agreement on Trade in Services (GATS), which covers aspects related to cross-border data flows. The Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP) and the United States-Mexico-Canada Agreement (USMCA) have also been perceived to provide forward-looking resolutions. In this context, this article analyzes why a resolution to this issue may be illusory. While they regulate cross-border data flows in various ways, the structure and wording of the exception articles of both the CPTPP and USMCA have the potential to pose significant challenges to the international legal system. The new system, attempting to weigh societal values against economic development, is imbalanced, often valuing free trade more than individual online privacy and cybersecurity. Furthermore, the inclusion of poison-pill clauses is, by nature, antithetical to cooperation. Thus, for the international community generally, and China in particular, cross-border data flows would best be regulated under the WTO-centered multilateral trade law system.
In this paper, a case study is carried out comparing the pipes-and-filters architecture and the batch sequential architecture. Concepts of a data flow system and the two architectures are presented. A Java template class design for implementing the "pipes" and "filters" in the pipes-and-filters architecture is given at the design level. Finally, this paper uses a concrete example to show how to use Java to implement the pipes-and-filters architecture. Using varied amounts of data from text files, the performance and memory usage of the two architectures are illustrated.
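The incremental streaming that distinguishes pipes-and-filters from batch sequential processing can be sketched with generators (the paper's implementation is a Java template class; this analogous Python version is ours for illustration):

```python
# Generator-based sketch of the pipes-and-filters style: each filter
# consumes an upstream iterator and yields downstream incrementally,
# so data flows through without intermediate batch files.

def source(lines):
    for line in lines:
        yield line

def to_upper(stream):                 # filter 1
    for item in stream:
        yield item.upper()

def drop_empty(stream):               # filter 2
    for item in stream:
        if item.strip():
            yield item

def pipeline(lines):
    return list(drop_empty(to_upper(source(lines))))

print(pipeline(["data", "", "flow"]))  # prints ['DATA', 'FLOW']
```

In a batch sequential version, each stage would instead run to completion and hand a whole file to the next stage, which is the memory-usage difference the case study measures.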
In the global scenario, one of the important goals for sustainable development in the industrial field is to innovate new technology and invest in building infrastructure. Developed and developing countries alike focus on building resilient infrastructure and promoting sustainable development by fostering innovation. At this juncture, cloud computing has become an important information and communication technology model influencing the sustainable development of industries in developing countries. As part of the innovations happening in the industrial sector, a new concept termed ‘smart manufacturing’ has emerged, which employs the benefits of emerging technologies like the internet of things and cloud computing. Cloud services deliver on-demand access to computing, storage, and infrastructural platforms for industrial users through the Internet. In the recent era of information technology, the number of business and individual users of cloud services has increased, and larger volumes of data are being processed and stored in the cloud. As a consequence, data breaches in cloud services are also increasing day by day. Due to various security vulnerabilities in the cloud architecture, the cloud environment has become non-resilient. To restore the normal behavior of the cloud, detect deviations, and achieve higher resilience, anomaly detection becomes essential. Deep-learning-based anomaly detection mechanisms use various monitoring metrics to characterize the normal behavior of cloud services and identify abnormal events. This paper focuses on designing an intelligent deep-learning-based approach for detecting cloud anomalies in real time to make the cloud more resilient. The deep learning models are trained using features extracted from the system-level and network-level performance metrics observed in the Transmission Control Protocol (TCP) traces of the simulation. The experimental results of the proposed approach demonstrate superior performance in terms of a higher detection rate and a lower false alarm rate when compared to the Support Vector Machine (SVM).
With the development of computer vision research, and due to its state-of-the-art performance on image and video processing tasks, the deep neural network (DNN) has been widely applied in various applications (autonomous vehicles, weather forecasting, counter-terrorism, surveillance, traffic management, etc.). However, to achieve such performance, DNN models have become increasingly complicated and deeper, resulting in heavy computational stress. Thus, general central processing unit (CPU) processors are not sufficient to meet real-time application requirements. To deal with this bottleneck, research on hardware acceleration solutions for DNNs has attracted great attention. Specifically, to meet various real-life applications, DNN acceleration solutions mainly focus on hardware acceleration under intense memory and calculation resource constraints. In this paper, a novel resource-saving architecture based on the Field Programmable Gate Array (FPGA) is proposed. Due to the newly designed processing element (PE), the proposed architecture achieves good performance with extremely limited calculating resources. The on-chip buffer allocation helps enhance resource-saving performance on memory. Moreover, the accelerator improves its performance by exploiting the sparsity of the input feature map. Compared to other state-of-the-art solutions based on FPGA, our architecture achieves good performance with quite limited resource consumption, thus fully meeting the requirements of real-time applications.
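The benefit of exploiting input feature map sparsity can be illustrated with a software analogy of a zero-skipping processing element; the feature values below are invented:

```python
# Illustration of the sparsity idea: a processing element that skips
# zero entries of the input feature map performs far fewer multiplies
# while producing the same accumulated result.

def dense_dot(features, weights):
    """Multiply every element; returns (result, multiply count)."""
    return sum(f * w for f, w in zip(features, weights)), len(features)

def sparse_dot(features, weights):
    """Skip zero activations; returns (result, multiply count)."""
    acc, muls = 0.0, 0
    for f, w in zip(features, weights):
        if f != 0:                      # zero-skipping
            acc += f * w
            muls += 1
    return acc, muls

features = [0, 0, 3.0, 0, 1.0, 0, 0, 2.0]   # sparse feature map row
weights = [0.5] * 8
print(dense_dot(features, weights), sparse_dot(features, weights))
```

In hardware the saving shows up as fewer PE cycles and less buffer traffic rather than a smaller multiply counter, but the proportionality is the same.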
Funding: a preliminary result of the Chinese Government Scholarship High-level Graduate Program sponsored by the China Scholarship Council (Program No. CSC202206310052).
Funding: the National Natural Science Foundation of China (60425206, 60503033), the National Basic Research Program of China (973 Program, 2002CB312000), and the Opening Foundation of the State Key Laboratory of Software Engineering at Wuhan University.
Funding: jointly sponsored by the Special Fund for Earthquake Scientific Research in the Public Welfare of the China Earthquake Administration (201508008), the Fundamental Research Funds for the Central Universities (WK2080000053), and the Academician Chen Yong Workstation Project in Yunnan Province.
文摘With its high repeatability,the airgun source has been used to monitor the temporal variations of subsurface structures. However,under different working conditions,there will be subtle differences in the airgun source signals. To some extent,deconvolution can eliminate changes of the recorded signals due to source variations. Generally speaking,in order to remove the airgun source wavelet signal and obtain the Green's functions between the airgun source and stations,we need to select an appropriate method to perform the deconvolution process for seismic waveform data. Frequency domain water level deconvolution and time domain iterative deconvolution are two kinds of deconvolution methods widely used in the field of receiver functions,etc. We use the Binchuan( in Yunnan Province,China) airgun data as an example to compare the performance of these two deconvolution methods in airgun source data processing. The results indicate that frequency domain water level deconvolution is better in terms of computational efficiency;time domain iterative deconvolution is better in terms of the signal-to-noise ratio( SNR),and the initial motion of P-wave is also clearer. We further discuss the sequence issue of deconvolution and stack for multiple-shot airgun data processing. Finally,we propose a general processing flow for the airgun source data to extract the Green 's functions between the airgun source and stations.
文摘Volatile nitrosamines (VNAs) are a group of compounds classified as probable (group 2A) and possible (group 2B) carcinogens in humans. Along with certain foods and contaminated drinking water, VNAs are detected at high levels in tobacco products and in both mainstream and side-stream smoke. Our laboratory monitors six urinary VNAs—N-nitrosodimethylamine (NDMA), N-nitrosomethylethylamine (NMEA), N-nitrosodiethylamine (NDEA), N-nitrosopiperidine (NPIP), N-nitrosopyrrolidine (NPYR), and N-nitrosomorpholine (NMOR)—using isotope dilution GC-MS/ MS (QQQ) for large population studies such as the National Health and Nutrition Examination Survey (NHANES). In this paper, we report for the first time a new automated sample preparation method to more efficiently quantitate these VNAs. Automation is done using Hamilton STAR<sup>TM</sup> and Caliper Staccato<sup>TM</sup> workstations. This new automated method reduces sample preparation time from 4 hours to 2.5 hours while maintaining precision (inter-run CV < 10%) and accuracy (85% - 111%). More importantly this method increases sample throughput while maintaining a low limit of detection (<10 pg/mL) for all analytes. A streamlined sample data flow was created in parallel to the automated method, in which samples can be tracked from receiving to final LIMs output with minimal human intervention, further minimizing human error in the sample preparation process. This new automated method and the sample data flow are currently applied in bio-monitoring of VNAs in the US non-institutionalized population NHANES 2013-2014 cycle.
文摘SOZL (structured methodology + object-oriented methodology + Z language) is a language that attempts to integrate structured method, object-oriented method and formal method. The core of this language is predicate data flow diagram (PDFD). In order to eliminate the ambiguity of predicate data flow diagrams and their associated textual specifications, a formalization of the syntax and semantics of predicate data flow diagrams is necessary. In this paper we use Z notation to define an abstract syntax and the related structural constraints for the PDFD notation, and provide it with an axiomatic semantics based on the concept of data availability and functionality of predicate operation. Finally, an example is given to establish functionality consistent decomposition on hierarchical PDFD (HPDFD).
文摘Architectures based on the data flow computing model provide an alternative to the conventional Von-Neumann architecture that are widelyused for general purpose computing.Processors based on the data flow architecture employ fine-grain data-driven parallelism.These architectures have thepotential to exploit the inherent parallelism in compute intensive applicationslike signal processing,image and video processing and so on and can thusachieve faster throughputs and higher power efficiency.In this paper,severaldata flow computing architectures are explored,and their main architecturalfeatures are studied.Furthermore,a classification of the processors is presented based on whether they employ either the data flow execution modelexclusively or in combination with the control flow model and are accordinglygrouped as exclusive data flow or hybrid architectures.The hybrid categoryis further subdivided as conjoint or accelerator-style architectures dependingon how they deploy and separate the data flow and control flow executionmodel within their execution blocks.Lastly,a brief comparison and discussionof their advantages and drawbacks is also considered.From this study weconclude that although the data flow architectures are seen to have maturedsignificantly,issues like data-structure handling and lack of efficient placementand scheduling algorithms have prevented these from becoming commerciallyviable.
文摘A new synthetical knowledge representation model that integrates the attribute grammar model with the semantic network model was presented. The model mainly uses symbols of attribute grammar to establish a set of syntax and semantic rules suitable for a semantic network. Based on the model,the paper introduces a formal method defining data flow diagrams (DFD) and also simply explains how to use the method.
Abstract: In this paper, on the basis of experimental data from two kinds of chemical explosions, the piston-pushing model of spherical blast waves and a second-order Godunov-type finite difference scheme with high resolution at discontinuities are used for the numerical reconstruction of part of an actual hemispherical blast-wave flow field by properly adjusting the moving boundary conditions of a piston. This method is simple and reliable, and is suitable for evaluating the effects of the blast-wave flow field away from the explosion center.
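The abstract above relies on a Godunov-type scheme. As a rough illustration of the upwind-flux idea behind such schemes, the sketch below solves the much simpler linear advection equation with a first-order Godunov method (the paper itself uses a second-order scheme for the full blast-wave equations; the function and parameters here are illustrative only):

```python
# A minimal first-order Godunov sketch for linear advection u_t + c u_x = 0.
# This toy version only illustrates the upwind flux selection that underlies
# Godunov-type schemes; it is not the paper's second-order blast-wave solver.
def godunov_advection(u, c, dx, dt, steps):
    u = list(u)
    nu = c * dt / dx            # CFL number; stability requires |nu| <= 1
    for _ in range(steps):
        new = u[:]
        for i in range(1, len(u)):
            # For c > 0 the Riemann problem at each cell interface selects
            # the left (upwind) state, giving the interface flux c * u[i-1]
            new[i] = u[i] - nu * (u[i] - u[i - 1])
        new[0] = u[0]           # fixed inflow boundary
        u = new
    return u

# A step profile advected to the right; with nu == 1 the scheme is exact
profile = godunov_advection([1.0, 1.0, 0.0, 0.0, 0.0], c=1.0, dx=0.1, dt=0.1, steps=1)
```

With the CFL number equal to one the discontinuity moves exactly one cell per step, which is a convenient sanity check for an upwind implementation.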
Funding: the Scientific Research Funding Project of the Liaoning Education Department of China under Grant No. JDL2020005 and No. LJKZ0485, and the National Key Research and Development Program of China under Grant No. 2018YFA0704605.
Abstract: Nowadays, data are increasingly used for intelligent modeling and prediction, and the comprehensive evaluation of data quality is receiving growing attention as a necessary means of measuring whether data are usable. However, most comprehensive evaluation methods for data quality involve the subjective judgment of the evaluator, so how to evaluate data comprehensively and objectively has become a bottleneck in research on comprehensive evaluation methods. In order to evaluate data more comprehensively, objectively and distinctively, a novel comprehensive evaluation method based on particle swarm optimization (PSO) and grey correlation analysis (GCA) is presented in this paper. First, an improved GCA evaluation model based on the technique for order preference by similarity to an ideal solution (TOPSIS) is proposed. Then, an objective function model that maximizes the difference between comprehensive evaluation values is built, and the PSO algorithm is used to optimize the weights of the improved GCA evaluation model under this objective function. Finally, the performance of the proposed method is investigated through parameter analysis. A performance comparison on traffic flow data is carried out, and the simulation results show that the maximum average difference between the evaluation results and their mean value (MDR) of the proposed method is 33.24% higher than that of TOPSIS-GCA and 6.86% higher than that of GCA. The proposed method offers better differentiation than the other methods: it evaluates the data objectively and comprehensively from the perspectives of both relevance and differentiation, and its results more effectively reflect differences in data quality, providing more effective data support for intelligent modeling, prediction and other applications.
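To make the GCA building block concrete, the sketch below computes classic grey relational grades against an ideal reference series. It is a plain textbook formulation, not the paper's TOPSIS-improved variant; the equal default weights (which the paper instead tunes with PSO) and the distinguishing coefficient `rho = 0.5` are conventional assumptions:

```python
import numpy as np

def grey_relational_grades(data, weights=None, rho=0.5):
    """Classic grey correlation analysis (GCA) against an ideal reference.

    data    : (n_samples, n_criteria) matrix, larger-is-better criteria
    weights : per-criterion weights (the paper optimizes these with PSO)
    rho     : distinguishing coefficient, conventionally 0.5
    """
    data = np.asarray(data, dtype=float)
    # Min-max normalize each criterion to [0, 1]
    lo, hi = data.min(axis=0), data.max(axis=0)
    norm = (data - lo) / np.where(hi > lo, hi - lo, 1.0)
    # Reference series: the ideal (best) value on every criterion
    ref = norm.max(axis=0)
    diff = np.abs(norm - ref)
    dmin, dmax = diff.min(), diff.max()
    # Grey relational coefficients per sample and criterion
    xi = (dmin + rho * dmax) / (diff + rho * dmax)
    if weights is None:
        weights = np.full(norm.shape[1], 1.0 / norm.shape[1])
    return xi @ np.asarray(weights, dtype=float)

scores = grey_relational_grades([[0.9, 0.2], [0.5, 0.8], [0.1, 0.1]])
```

Each grade lies in (0, 1], and a higher grade means the sample is closer to the ideal reference across the weighted criteria, which is what the PSO step then sharpens by maximizing the spread between grades.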
Abstract: The present study aims to improve the efficiency of typical procedures used for post-processing flow field data by applying neural-network technology. Taking a problem of aircraft design as the workhorse, a regression model for processing aircraft flow data, termed FCN-VGG19, is elaborated based on VGGNet (Visual Geometry Group Network) and FCN (Fully Convolutional Network) techniques. As shown by the results, the model displays a strong fitting ability, with almost no over-fitting in training. Moreover, the model has good accuracy and convergence: for different input data and different grids, it basically achieves convergence, showing good performance. It is shown that the proposed regression model based on FCN has great potential in typical problems of computational fluid dynamics (CFD) and related data processing.
Abstract: The "Citizen-Centric Complaint Reporting and Analyzing Mechanism" project is designed to create an online complaint system, called "e-Complaint", that allows citizens to file complaints related to crime and misconduct in a secure and user-friendly way. The proposed system addresses the shortcomings of the current complaint process, increasing police accountability and transparency and offering significant benefits for both citizens and police departments.
Abstract: This paper states the basic principle of program data flow analysis in a formal way and introduces the concept of the data flow expression. On the basis of this concept, an algorithm for finding data flow anomalies is presented. The algorithm is highly general, making it easy to develop a program-testing tool from it, so it is practical in application.
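The anomaly-finding idea can be sketched as a scan over per-variable action sequences along a path. The `'d'/'u'/'k'` action encoding and the function names below are illustrative conventions (in the style of classic define-use analysis), not the paper's own notation:

```python
# A minimal sketch of path-based data-flow anomaly detection. Classic
# two-action anomalies on a single execution path:
#   'dd' - redefined before any use
#   'dk' - defined, then killed without ever being used
#   'ku' - used while undefined (killed, or never defined at all)
ANOMALIES = {"dd", "dk", "ku"}

def find_anomalies(path):
    """path: list of (action, variable) pairs in execution order,
    where action is 'd' (define), 'u' (use) or 'k' (kill/undefine)."""
    last = {}            # variable -> last action seen
    found = []
    for i, (action, var) in enumerate(path):
        prev = last.get(var, "k")    # variables start out undefined
        if prev + action in ANOMALIES:
            found.append((i, var, prev + action))
        last[var] = action
    return found

# 'x' is used before definition (ku), then defined twice in a row (dd)
issues = find_anomalies([("u", "x"), ("d", "x"), ("d", "x"), ("u", "x")])
```

A real tool would derive the action sequences from the control flow graph rather than from a hand-written path, but the pairwise pattern check is the same.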
Funding: This article is funded by the National Social Science Foundation's general project "Theoretical and Practical Research on International Criminal Judicial Assistance in Combating Cybercrime" (Project No. 19BFX073) and the National Social Science Foundation's major project "Translation, Research and Database Construction of Cyberspace Policies and Regulations" (Project No. 20&ZD179).
Abstract: Cross-border data flows not only involve cross-border trade issues, but also severely challenge personal information protection, national data security, and the jurisdiction of justice and enforcement. As the current digital trade negotiations cannot accommodate these challenges, China has initiated the concept of secure cross-border data flow and has launched a dual-track, multi-level regulatory system, including a control system for the overseas transfer of important data, a system for the cross-border provision of personal information, and a system for cross-border data requests for justice and enforcement. To explore a global regulatory framework for cross-border data flows, legitimate and controllable cross-border data flows should be promoted, supervision should be categorized based on the risks concerned, and the rule of law should be coordinated at home and abroad to promote system compatibility. To this end, the key is to build a compatible regulatory framework, which includes clarifying the scope of important data to define the "Negative List" for preventing national security risks, improving cross-border accountability for protecting personal information rights and interests to ease pre-supervision pressure, and focusing on data access rights instead of data localization to uphold the jurisdiction of justice and enforcement.
Funding: This article is supported by the National Social Science Fund Project "China's Non-Market Economy Status in WTO Trade Remedies" (Project No. 15XFX023) and the Human Rights Institute of Southwest University of Political Science and Law (SWUPL HRI) 2015 research project "Global Human Rights Governance under the TPP." All mistakes and omissions are my responsibility.
Abstract: The regulation of cross-border data flows is a growing challenge for the international community. International trade agreements, however, appear to be pioneering legal methods to cope, as they have grappled with this issue since the 1990s. The World Trade Organization (WTO) rules system offers a partial solution under the General Agreement on Trade in Services (GATS), which covers aspects of cross-border data flows. The Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP) and the United States-Mexico-Canada Agreement (USMCA) have also been perceived to provide forward-looking resolutions. In this context, this article analyzes why a resolution to this issue may be illusory. While they regulate cross-border data flows in various ways, the structure and wording of the exception articles of both the CPTPP and the USMCA have the potential to pose significant challenges to the international legal system. The new system, attempting to weigh societal values against economic development, is imbalanced, often valuing free trade more than individual online privacy and cybersecurity. Furthermore, the inclusion of poison-pill clauses is, by nature, antithetical to cooperation. Thus, for the international community in general, and China in particular, cross-border data flows would best be regulated under the WTO-centered multilateral trade law system.
Abstract: In this paper, a case study is carried out comparing the pipes-and-filters architecture with the batch sequential architecture. Concepts of a data flow system and the two architectures are presented. A Java template class design implementing the "pipes" and "filters" of the pipes-and-filters architecture is given at the design level. Finally, the paper uses a concrete example to show how to use Java to implement the pipes-and-filters architecture. Using varying amounts of data from text files, the performance and memory usage of the two architectures are illustrated.
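The architectural contrast in this abstract can be illustrated compactly. The sketch below (in Python rather than the paper's Java, and with illustrative filter functions) shows the pipes-and-filters style: each filter lazily consumes a stream and yields results item by item, unlike a batch sequential design where each stage must finish processing the whole data set before the next begins:

```python
# A minimal pipes-and-filters sketch. Generators act as filters; connecting
# them forms the pipeline, and items stream through one at a time.
def read_lines(lines):
    yield from lines

def strip_blank(stream):
    return (line for line in stream if line.strip())

def to_upper(stream):
    return (line.upper() for line in stream)

def pipeline(source, *filters):
    stream = source
    for f in filters:
        stream = f(stream)      # each "pipe" connects two filters
    return stream

result = list(pipeline(read_lines(["a", "", "b"]), strip_blank, to_upper))
```

The streaming behavior is what drives the memory-usage difference the paper measures: a batch sequential version would materialize every intermediate stage in full.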
Abstract: In the global scenario, one of the important goals for sustainable development in the industrial field is to innovate new technology and invest in building infrastructure. Developed and developing countries alike focus on building resilient infrastructure and promoting sustainable development by fostering innovation. At this juncture, cloud computing has become an important information and communication technology model influencing the sustainable development of industries in developing countries. As part of the innovations happening in the industrial sector, a new concept termed "smart manufacturing" has emerged, which employs the benefits of emerging technologies such as the internet of things and cloud computing. Cloud services deliver on-demand access to computing, storage, and infrastructural platforms for industrial users over the Internet. In the recent era of information technology, the number of business and individual users of cloud services has increased, and ever larger volumes of data are being processed and stored in them. As a consequence, data breaches in cloud services are also increasing day by day, and various security vulnerabilities in the cloud architecture have made the cloud environment non-resilient. To restore the normal behavior of the cloud, detect deviations, and achieve higher resilience, anomaly detection becomes essential. Deep learning based anomaly detection mechanisms use various monitoring metrics to characterize the normal behavior of cloud services and identify abnormal events. This paper focuses on designing an intelligent deep learning based approach for detecting cloud anomalies in real time to make the cloud more resilient. The deep learning models are trained using features extracted from the system-level and network-level performance metrics observed in the Transmission Control Protocol (TCP) traces of the simulation. The experimental results of the proposed approach demonstrate superior performance in terms of a higher detection rate and a lower false alarm rate when compared to the Support Vector Machine (SVM).
Abstract: With the development of computer vision research, thanks to state-of-the-art performance on image and video processing tasks, deep neural networks (DNNs) have been widely applied in various applications (autonomous vehicles, weather forecasting, counter-terrorism, surveillance, traffic management, etc.). However, to achieve such performance, DNN models have become increasingly complicated and deeper, resulting in heavy computational loads. Thus, general central processing unit (CPU) processors are not sufficient to meet real-time application requirements. To deal with this bottleneck, research on hardware acceleration solutions for DNNs has attracted great attention. Specifically, to serve various real-life applications, DNN acceleration solutions mainly address hardware acceleration under intense memory and calculation resource constraints. In this paper, a novel resource-saving architecture based on a Field Programmable Gate Array (FPGA) is proposed. Thanks to a newly designed processing element (PE), the proposed architecture achieves good performance with extremely limited calculation resources. The on-chip buffer allocation helps enhance the resource savings in memory. Moreover, the accelerator improves its performance by exploiting the sparsity of the input feature map. Compared with other state-of-the-art FPGA-based solutions, our architecture achieves good performance with quite limited resource consumption, and thus fully meets the requirements of real-time applications.
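The sparsity argument in this abstract has a simple arithmetic core, illustrated below. This toy model is not the paper's PE design; it only shows why a multiply-accumulate unit that skips zero activations does work proportional to the number of non-zeros rather than to the full feature-map size:

```python
# Dense vs. sparsity-aware multiply-accumulate over one weight vector.
def dense_mac(activations, weights):
    ops, acc = 0, 0.0
    for a, w in zip(activations, weights):
        acc += a * w
        ops += 1                  # every position costs a multiplication
    return acc, ops

def sparse_mac(activations, weights):
    ops, acc = 0, 0.0
    for a, w in zip(activations, weights):
        if a != 0:                # zero activations contribute nothing
            acc += a * w
            ops += 1
    return acc, ops

acts = [0, 0, 3.0, 0, 1.0, 0, 0, 2.0]   # 62.5% zeros, typical after ReLU
wts = [0.5] * 8
dense = dense_mac(acts, wts)    # (3.0, 8)
sparse = sparse_mac(acts, wts)  # (3.0, 3)
```

Both paths produce the same sum, but the sparse path performs 3 multiplications instead of 8, which is the kind of saving a sparsity-aware accelerator realizes in hardware.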