The digital development rights of developing countries are grounded in establishing a new international economic order and ensuring equal participation in digital globalization, so as to achieve people's well-rounded development in the digital society. The relationship between cross-border data flows and the realization of digital development rights in developing countries is complex. Currently, developing countries seek to safeguard their existing digital interests through unilateral regulation to protect data sovereignty and multilateral regulation for cross-border data cooperation. However, in realizing digital development rights, developing countries still face internal conflicts between national digital development rights and the digital development rights of individuals and enterprises. They also encounter external contradictions: developed countries interfere with developing countries' data sovereignty, developed countries squeeze developing countries' policy space through dominant rules, and developing countries face conflicts between domestic and international rules. This article argues that balancing openness and security on digital trade platforms is the optimal path for developing countries to realize their digital development rights. WTO digital trade rules should inherently reflect the fundamental demands of developing countries regarding cross-border data flows. At the same time, given China's dual role as a digital powerhouse and a developing country, it should actively promote the realization of digital development rights in developing countries.
With the rapid growth of the global digital economy, cross-border e-commerce, as an emerging form of trade, has gradually become a powerful engine for the development of global trade. BRICS is an important force in the global economy, and progress in the BRICS countries' trade facilitation has an important impact on the global trade environment. This paper conducts an in-depth study of the dynamic changes in BRICS trade facilitation from 2013 to 2022 and uses an extended gravity model to analyze the specific impact of these changes on China's cross-border e-commerce exports. The results show that although the BRICS countries have made some progress in trade facilitation, the overall level still needs to be improved, and there are obvious differences among member countries. Nevertheless, the improvement of trade facilitation among BRICS countries has brought significant positive effects to China's cross-border e-commerce exports.
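The extended gravity model mentioned above is typically estimated in log-linear form by ordinary least squares. The sketch below is a minimal illustration on synthetic data, not the paper's actual specification or data: the covariates (partner GDP, distance, a trade facilitation index) and their "true" coefficients are assumed for demonstration.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200  # synthetic country-year observations

# Hypothetical covariates: partner GDP, bilateral distance, trade facilitation index.
log_gdp = rng.normal(10, 1, n)
log_dist = rng.normal(8, 0.5, n)
tfi = rng.uniform(0.3, 0.9, n)

# Simulate log exports from an assumed gravity relationship plus noise.
log_exports = 1.0 + 0.9 * log_gdp - 1.1 * log_dist + 2.0 * tfi + rng.normal(0, 0.1, n)

# Estimate ln(X_ij) = b0 + b1*ln(GDP_j) + b2*ln(D_ij) + b3*TFI_j by OLS.
X = np.column_stack([np.ones(n), log_gdp, log_dist, tfi])
beta, *_ = np.linalg.lstsq(X, log_exports, rcond=None)
print(np.round(beta, 2))  # slope estimates land near the assumed 0.9, -1.1, 2.0
```

A positive, significant coefficient on the facilitation index is what the abstract's finding of "significant positive effects" corresponds to in this framework.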
This paper explores the operational strategies of cross-border micro, small, and medium enterprises (MSMEs). Against the backdrop of globalization and digitalization, cross-border trade has become one of the important pathways for many MSMEs to achieve growth and competitive advantage. First, the paper outlines the concept and characteristics of cross-border MSMEs and analyzes their operational environment in the context of globalization, covering political, economic, social, technological, environmental, and legal factors. Second, the paper proposes operational strategies for cross-border MSMEs, including international market selection and positioning, cross-border marketing strategies, supply chain management, cross-border financial management, and cross-border risk management. Finally, the paper summarizes the importance of effectively implementing these strategies for cross-border MSMEs to seize international market opportunities, reduce operational risks, and enhance competitiveness and profitability.
Cross-border data flows not only involve cross-border trade issues but also pose serious challenges to personal information protection, national data security, and the jurisdiction of justice and enforcement. As current digital trade negotiations cannot accommodate these challenges, China has put forward the concept of secure cross-border data flow and launched a dual-track, multi-level regulatory system, including a control system for the overseas transfer of important data, a system for the cross-border provision of personal information, and a system for cross-border data requests for justice and enforcement. To explore a global regulatory framework for cross-border data flows, legitimate and controllable cross-border data flows should be promoted, supervision should be categorized according to the risks concerned, and the rule of law should be coordinated at home and abroad to promote system compatibility. To this end, the key is to build a compatible regulatory framework, which includes clarifying the scope of important data to define the "Negative List" for preventing national security risks, improving cross-border accountability for protecting personal information rights and interests to ease pre-supervision pressure, and focusing on data access rights instead of data localization to uphold the jurisdiction of justice and enforcement.
The regulation of cross-border data flows is a growing challenge for the international community. International trade agreements, however, appear to be pioneering legal methods to cope, as they have grappled with this issue since the 1990s. The World Trade Organization (WTO) rules system offers a partial solution under the General Agreement on Trade in Services (GATS), which covers aspects related to cross-border data flows. The Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP) and the United States-Mexico-Canada Agreement (USMCA) have also been perceived to provide forward-looking resolutions. In this context, this article analyzes why a resolution to this issue may be illusory. While they regulate cross-border data flows in various ways, the structure and wording of the exception articles of both the CPTPP and the USMCA have the potential to pose significant challenges to the international legal system. The new system, attempting to weigh societal values against economic development, is imbalanced, often valuing free trade more than individual online privacy and cybersecurity. Furthermore, the inclusion of poison-pill clauses is, by nature, antithetical to cooperation. Thus, for the international community generally, and China in particular, cross-border data flows would best be regulated under the WTO-centered multilateral trade law system.
Cross-border investment is essential for western China's globalization. The global value chain (GVC) forms cross-border investment networks between industries in western China and overseas cities. Focusing on the GVC, this study uses social network analysis, the entropy method, a multi-index comprehensive evaluation method, and quadratic assignment procedure analysis to examine the characteristics and influencing factors of the urban networks of research and development (R&D), production, and sales formed by the overseas investments of listed manufacturing companies in western China. Results showed that the three types of investment networks involved multiple industry types and multiple central cities with differentiated diversity and multicentrality. The R&D urban network's leading sub-industries were the mechanical equipment and instruments, medicine and biological products, and metal and nonmetal industries; its destination cities were mostly home to educational and scientific research centers. The production urban network's leading sub-industries were the mechanical equipment, instrument, and food and beverage industries; its destination cities were mostly regional central cities in developing countries. The sales urban network's leading sub-industries were the mechanical equipment and instrument, metal and nonmetal, and petrochemical and plastics industries; its destination cities were numerous and scattered. In addition, the R&D urban network easily formed specialized clusters, core nodes easily controlled the production urban network, and individual nodes did not easily control the sales urban network. Technological and economic system advantages greatly impacted all three network types. Considering the different influencing factors, this study suggests optimizing the institutional investment environment to narrow the institutional gap, adjusting and optimizing the investment layout to expand overseas markets, and increasing R&D funding to stimulate technological progress and overseas investment in western China.
With the development of Industry 4.0 and big data technology, the Industrial Internet of Things (IIoT) is hampered by inherent issues such as privacy, security, and fault tolerance, which pose challenges to its rapid development. Blockchain technology offers immutability, decentralization, and autonomy, which can greatly mitigate these inherent defects of the IIoT. In a traditional blockchain, data is stored in a Merkle tree. As data continues to grow, the size of the proofs used to validate it grows as well, threatening the efficiency, security, and reliability of blockchain-based IIoT. Accordingly, this paper first analyzes the inefficiency of the traditional blockchain structure in verifying the integrity and correctness of data. To solve this problem, a new Vector Commitment (VC) structure, Partition Vector Commitment (PVC), is proposed by improving the traditional VC structure. Second, this paper uses PVC instead of the Merkle tree to store big data generated by IIoT; PVC improves the efficiency of traditional VC in the commitment and opening processes. Finally, this paper uses PVC to build a blockchain-based IIoT data security storage mechanism and carries out a comparative experimental analysis. This mechanism can greatly reduce communication overhead and make rational use of storage space, which is of great significance for maintaining the security and stability of blockchain-based IIoT.
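To illustrate the proof-size problem the abstract describes, the following minimal Merkle-tree sketch (hypothetical sensor data; this is the baseline being improved upon, not the paper's PVC construction) shows the authentication path growing logarithmically with the number of stored items:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root_and_proof(leaves, index):
    """Build a Merkle tree over `leaves`; return (root, proof) for leaves[index].
    The proof is the list of sibling hashes from the leaf level up to the root."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    i = index
    while len(level) > 1:
        if len(level) % 2 == 1:          # duplicate the last node on odd-sized levels
            level.append(level[-1])
        proof.append(level[i ^ 1])       # sibling at this level
        level = [h(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
        i //= 2
    return level[0], proof

def verify(root, leaf, index, proof):
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

leaves = [f"sensor-reading-{k}".encode() for k in range(1024)]
root, proof = merkle_root_and_proof(leaves, 137)
assert verify(root, leaves[137], 137, proof)
print(len(proof))  # 10 siblings for 1024 leaves: proof size is log2(n)
```

A vector commitment aims to replace this logarithmically growing path with a short, ideally constant-size, opening proof per position.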
Getting insight into the spatiotemporal distribution patterns of knowledge innovation is receiving increasing attention from policymakers and economic research organizations. Many studies use bibliometric data to analyze the popularity of certain research topics, well-adopted methodologies, influential authors, and the interrelationships among research disciplines. However, the visual exploration of research-topic patterns with an emphasis on their spatial and temporal distribution remains challenging. This study combined a Space-Time Cube (STC) and a 3D glyph to represent complex multivariate bibliographic data, and further implemented the visual design as an interactive interface, ST-Map. The effectiveness, understandability, and engagement of ST-Map were evaluated by seven experts in geovisualization. The results suggest that three-dimensional visualization is a promising way to show both an overview and on-demand details on a single screen.
Ratoon rice, which refers to a second harvest of rice obtained from regenerated tillers originating from the stubble of the first harvested crop, plays an important role in both food security and agroecology while requiring minimal agricultural inputs. However, accurately identifying ratoon rice crops is challenging because their spectral features are similar to those of other rice cropping systems (e.g., double rice). Moreover, images with high spatiotemporal resolution are essential, since ratoon rice is generally cultivated in fragmented croplands within regions that frequently experience cloudy and rainy weather. In this study, taking Qichun County in Hubei Province, China as an example, we developed a new phenology-based ratoon rice vegetation index (PRVI) for ratoon rice mapping at 30 m spatial resolution using a robust time series generated from Harmonized Landsat and Sentinel-2 (HLS) images. The PRVI, which incorporates the red, near-infrared, and shortwave infrared 1 bands, was developed based on an analysis of spectro-phenological separability and feature selection. Based on field samples, the performance of the PRVI for ratoon rice mapping was carefully evaluated against several vegetation indices, including the normalized difference vegetation index (NDVI), enhanced vegetation index (EVI), and land surface water index (LSWI). The results suggested that the PRVI sufficiently captures the specific characteristics of ratoon rice, leading to favorable separability between ratoon rice and other land cover types. Furthermore, the PRVI showed the best performance for identifying ratoon rice in the phenological phases from grain filling and harvesting to tillering of the ratoon crop (GHS-TS2), indicating that only a few images are required to obtain an accurate ratoon rice map. Finally, the PRVI performed better than NDVI, EVI, LSWI, and their combination at the GHS-TS2 stages, with a producer's accuracy of 92.22% and a user's accuracy of 89.30%. These results demonstrate that the proposed PRVI based on HLS data can effectively identify ratoon rice in fragmented croplands at crucial phenological stages, which is promising for identifying the earliest timing of ratoon rice planting and can provide a fundamental dataset for crop management activities.
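The abstract does not give the PRVI formula itself, but the benchmark indices it is compared against are standard band-math over the same HLS reflectance bands. The sketch below computes two of them (NDVI and LSWI, using their well-known definitions) on toy reflectance values; the pixel values are invented for illustration:

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

def lswi(nir, swir1):
    """Land Surface Water Index: (NIR - SWIR1) / (NIR + SWIR1)."""
    return (nir - swir1) / (nir + swir1)

# Toy surface reflectances (0-1) for two pixels: dense green canopy vs. bare soil.
red   = np.array([0.05, 0.25])
nir   = np.array([0.45, 0.30])
swir1 = np.array([0.20, 0.35])

print(ndvi(red, nir))    # [0.8, ~0.09]: canopy pixel scores far higher than soil
print(lswi(nir, swir1))  # positive for the moist canopy, negative for dry soil
```

A phenology-based index like the PRVI adds the temporal dimension on top of such per-image band math, exploiting how these values change across the ratoon crop's growth stages.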
The increasing dependence on data highlights the need for a detailed understanding of its behavior, encompassing the challenges involved in processing and evaluating it. However, current research lacks a comprehensive structure for measuring the worth of data elements, hindering effective navigation of the changing digital environment. This paper aims to fill this research gap by introducing the innovative concept of "data components." It proposes a graph-theoretic representation model that presents a clear mathematical definition and demonstrates the superiority of data components over traditional processing methods. Additionally, the paper introduces an information measurement model that provides a way to calculate the information entropy of data components and establish their increased informational value. The paper also assesses the value of information, suggesting a pricing mechanism based on its significance. In conclusion, this paper establishes a robust framework for understanding and quantifying the value of implicit information in data, laying the groundwork for future research and practical applications.
With the conclusion of the COVID-19 pandemic and an increasingly complex market environment, China's cross-border e-commerce has entered a new phase of development. The external landscape is evolving rapidly, laws and regulations governing cross-border e-commerce are gradually improving, and government support has increased. Despite the pandemic's impact on the market economy, overall development has been steadily improving. The Internet population is expanding, the online retail market is growing rapidly, the consumption structure is undergoing transformation and upgrading, and the e-commerce market is demonstrating significant potential. The advancement of technologies such as big data, artificial intelligence, blockchain, and supply chain management has provided more efficient operational support for the cross-border e-commerce industry. Against the backdrop of new forms of cross-border e-commerce emerging in China post-pandemic, this paper uses the PEST model to analyze the macro environment of cross-border e-commerce in China and to project its future development trends.
The security of Federated Learning (FL) and Distributed Machine Learning (DML) is gravely threatened by data poisoning attacks, which destroy a model's usability by contaminating its training samples; such attacks are therefore called causative availability indiscriminate attacks. Because existing data sanitization methods are hard to apply in real-time applications due to their tedious processes and heavy computations, we propose a new supervised batch detection method for poison that can rapidly sanitize the training dataset before local model training. We design a training dataset generation method that helps to enhance accuracy and uses data complexity features to train a detection model, which is then used in an efficient batch hierarchical detection process. Our model accumulates knowledge about poison, which can be expanded by retraining to adapt to new attacks. Being neither attack-specific nor scenario-specific, our method is applicable to FL/DML as well as other online or offline scenarios.
Big data resources are characterized by large scale, wide sources, and strong dynamics. Existing access control mechanisms based on manual policy formulation by security experts suffer from drawbacks such as low policy management efficiency and difficulty in accurately describing access control policies. To overcome these problems, this paper proposes a big data access control mechanism based on a two-layer permission decision structure. This mechanism extends the attribute-based access control (ABAC) model: business attributes are introduced into the ABAC model as business constraints between entities. The proposed mechanism implements a two-layer permission decision structure composed of the inherent attributes of access control entities and the business attributes, which constitute, respectively, a general permission decision algorithm based on logical calculation and a business permission decision algorithm based on a bi-directional long short-term memory (BiLSTM) neural network. The general permission decision algorithm implements accurate policy decisions, while the business permission decision algorithm implements fuzzy decisions based on the business constraints. The BiLSTM neural network calculates the similarity of business attributes to realize intelligent, adaptive, and efficient access control permission decisions. Through the two-layer permission decision structure, complex and diverse big data access control management requirements can be satisfied while considering both the security and the availability of resources. Experimental results show that the proposed mechanism is effective and reliable and can efficiently support the secure sharing of big data resources.
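The two-layer decision can be sketched as a conjunction of an exact logical check and a fuzzy similarity check. In this minimal illustration the BiLSTM similarity scorer is replaced by a simple Jaccard overlap, and all attribute names are invented for demonstration:

```python
# First layer: exact logical match on inherent attributes.
def general_decision(subject: dict, policy: dict) -> bool:
    """Grant only if every attribute required by the policy matches exactly."""
    return all(subject.get(k) == v for k, v in policy.items())

# Second layer: fuzzy match on business attributes. The paper scores similarity
# with a BiLSTM; a Jaccard overlap stands in for it here.
def business_decision(subject_tags: set, resource_tags: set, threshold: float = 0.5) -> bool:
    if not subject_tags and not resource_tags:
        return True
    overlap = len(subject_tags & resource_tags) / len(subject_tags | resource_tags)
    return overlap >= threshold

def authorize(subject, policy, subject_tags, resource_tags) -> bool:
    # Both layers must grant: the precise policy check AND the business-constraint check.
    return (general_decision(subject, policy)
            and business_decision(subject_tags, resource_tags))

alice = {"role": "analyst", "department": "risk"}
policy = {"role": "analyst"}
print(authorize(alice, policy, {"credit", "fraud"}, {"fraud", "credit", "aml"}))  # True
print(authorize(alice, {"role": "admin"}, {"credit"}, {"credit"}))                # False
```

The design point is that the first layer gives auditable, deterministic decisions while the second layer absorbs the business constraints that are too fluid to encode as exact rules.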
With the rapid development of information technology, IoT devices play a huge role in physiological health data detection. The exponential growth of medical data requires us to reasonably allocate storage space between cloud servers and edge nodes. The storage capacity of edge nodes close to users is limited, so hotspot data should be stored in edge nodes as much as possible to ensure response timeliness and access hit rate. However, current schemes cannot guarantee that every sub-message of a complete data item stored at an edge node meets the requirements of hot data. How to detect and delete redundant data at edge nodes while protecting user privacy and preserving dynamic data integrity has therefore become a challenging problem. Our paper proposes a redundant data detection method that meets privacy protection requirements: by scanning the ciphertext, it determines whether each sub-message of the data at an edge node meets the requirements of hot data. This has the same effect as a zero-knowledge proof and does not reveal user privacy. In addition, for redundant sub-data that does not meet the hot-data requirements, our paper proposes a redundant data deletion scheme that preserves the dynamic integrity of the data: a Content Extraction Signature (CES) is used to generate a signature over the remaining hot data after the redundant data is deleted. The feasibility of the scheme is proved through security analysis and efficiency analysis.
A significant obstacle in intelligent transportation systems (ITS) is the capacity to predict traffic flow. Recent advancements in deep neural networks have enabled the development of models that represent traffic flow accurately. However, accurately predicting traffic flow at the individual road level is extremely difficult due to the complex interplay of spatial and temporal factors. This paper proposes a technique for predicting short-term traffic flow using an architecture that combines convolutional bidirectional long short-term memory (Conv-BiLSTM) with attention mechanisms. Prior studies neglected data on factors such as holidays, weather conditions, and vehicle types, which are interconnected and significantly affect forecast accuracy. In addition, this research incorporates recurring monthly periodic pattern data, which significantly enhances forecast accuracy. The experimental findings demonstrate a performance improvement of 21.68% when incorporating the vehicle type feature.
Missing values are one of the main factors that cause dirty data. Without high-quality data, there can be no reliable analysis results or precise decision-making, so the data warehouse needs to consistently integrate high-quality data. In the power system, the electricity consumption data of some large users cannot be collected normally, resulting in missing data; this affects the calculation of power supply and eventually leads to large errors in the daily power line loss rate. To address missing electricity consumption data, this study proposes a data interpolation method for distribution power networks based on the group method of data handling (GMDH) and applies it to actually collected electricity data. First, the dependent and independent variables are defined from the original data, and the upper and lower limits of the missing values are determined according to prior knowledge or existing data information; all missing data are randomly interpolated within these limits. Then, a GMDH network is established to obtain the optimal-complexity model, which is used to predict the missing data and replace the last imputed electricity consumption values. Finally, this process is iterated until the missing values no longer change. Under a relatively small noise level (α = 0.25), the proposed approach achieves a maximum error of no more than 0.605%. Experimental findings demonstrate the efficacy and feasibility of the proposed approach, which realizes the transformation from incomplete data to complete data. The proposed interpolation approach also provides a strong basis for electricity theft diagnosis and metering fault analysis in electricity enterprises.
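The iterate-until-stable imputation loop described above can be sketched as follows. This is a toy version on synthetic consumption data: a cubic polynomial fit stands in for the GMDH network, and the trend, noise, and missing positions are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy daily consumption: a smooth trend plus noise; some readings are missing.
days = np.arange(60, dtype=float)
true_load = 100 + 0.8 * days + 5 * np.sin(days / 5)
observed = true_load + rng.normal(0, 1.0, size=days.size)
missing = np.zeros(days.size, dtype=bool)
missing[[7, 19, 33, 48]] = True

# Step 1: random fill within upper/lower bounds taken from the observed data.
lo, hi = observed[~missing].min(), observed[~missing].max()
filled = observed.copy()
filled[missing] = rng.uniform(lo, hi, missing.sum())

# Steps 2-3: refit the model on the completed series and re-predict the gaps,
# iterating until the imputed values stop changing.
for _ in range(50):
    coeffs = np.polyfit(days, filled, deg=3)
    new_vals = np.polyval(coeffs, days[missing])
    if np.allclose(new_vals, filled[missing], atol=1e-6):
        break
    filled[missing] = new_vals

print(np.abs(filled[missing] - true_load[missing]).max())  # residual imputation error
```

The fixed point of this loop is a completed series that is self-consistent with the fitted model, which is what the stopping criterion "missing values no longer change" expresses.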
Cloud computing has emerged as a viable alternative to traditional computing infrastructures, offering various benefits. However, the adoption of cloud storage poses significant risks to data secrecy and integrity. This article presents an effective mechanism to preserve the secrecy and integrity of data stored on the public cloud by leveraging blockchain technology, smart contracts, and cryptographic primitives. The proposed approach utilizes a Solidity-based smart contract as an auditor for maintaining and verifying the integrity of outsourced data. To preserve data secrecy, symmetric encryption systems are employed to encrypt user data before outsourcing it. An extensive performance analysis is conducted to illustrate the efficiency of the proposed mechanism. Additionally, a rigorous assessment is conducted to ensure that the developed smart contract is free from vulnerabilities and to measure its associated running costs. The security analysis of the proposed system confirms that our approach can securely maintain the confidentiality and integrity of cloud storage, even in the presence of malicious entities. The proposed mechanism contributes to enhancing data security in cloud computing environments and can be used as a foundation for developing more secure cloud storage systems.
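The client-side half of such a scheme (encrypt before outsourcing, then audit by digest comparison) can be sketched without the on-chain component. The toy keystream cipher below only illustrates the encrypt-then-digest flow; it is not the paper's construction, and a real deployment would use an authenticated cipher such as AES-GCM with the auditor role played by the smart contract:

```python
import hashlib
import secrets

def keystream_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Toy symmetric stream cipher built from SHA-256 (illustration only)."""
    out = bytearray()
    counter = 0
    while len(out) < len(plaintext):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(p ^ k for p, k in zip(plaintext, out))

key = secrets.token_bytes(32)
document = b"patient-record-2024: glucose=5.4 mmol/L"

ciphertext = keystream_encrypt(key, document)    # secrecy: encrypt before upload
digest = hashlib.sha256(ciphertext).hexdigest()  # integrity: digest held by the auditor

# Later audit: recompute the digest of what the cloud returns and compare.
assert hashlib.sha256(ciphertext).hexdigest() == digest
assert keystream_encrypt(key, ciphertext) == document  # XOR stream cipher is symmetric
tampered = ciphertext[:-1] + bytes([ciphertext[-1] ^ 1])
assert hashlib.sha256(tampered).hexdigest() != digest  # tampering is detected
```

Storing only the digest with the auditor keeps the sensitive content off-chain while still making any modification of the outsourced ciphertext detectable.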
Genome-wide association mapping studies (GWAS) based on Big Data are a potential approach to improve marker-assisted selection in plant breeding. The number of available phenotypic and genomic data sets in which medium-sized populations of several hundred individuals have been studied is rapidly increasing. Combining these data and using them in GWAS could increase both the power of QTL discovery and the accuracy of estimating the underlying genetic effects, but this is hindered by data heterogeneity and a lack of interoperability. In this study, we used genomic and phenotypic data sets focusing on Central European winter wheat populations evaluated for heading date. We explored strategies for integrating these data and, subsequently, the resulting potential for GWAS. Establishing interoperability between data sets was greatly aided by some overlapping genotypes and a linear relationship between the different phenotyping protocols, resulting in high-quality integrated phenotypic data. In this context, genomic prediction proved to be a suitable tool to study the relevance of interactions between genotypes and experimental series, which was low in our case. Contrary to expectations, fewer marker-trait associations were found in the larger combined data set than in the individual experimental series. However, the predictive power based on the marker-trait associations of the integrated data set was higher across data sets. Therefore, the results show that integrating medium-sized data sets into Big Data is an approach to increase the power to detect QTL in GWAS. The results encourage further efforts to standardize and share data in the plant breeding community.
To address the problems of single encryption algorithms, such as low encryption efficiency and unreliable metadata, for static data storage on big data platforms in cloud computing environments, we propose a Hadoop-based big data secure storage scheme. First, to disperse the NameNode service from a single server to multiple servers, we combine the HDFS federation and HDFS high-availability mechanisms and use the ZooKeeper distributed coordination mechanism to coordinate the nodes and achieve dual-channel storage. Then, we improve the ECC encryption algorithm for encrypting ordinary data and adopt a homomorphic encryption algorithm for data that needs to be computed on. To accelerate encryption, we adopt a dual-thread encryption mode. Finally, an HDFS control module is designed to combine the encryption algorithms with the storage model. Experimental results show that the proposed solution solves the single point of failure for metadata, performs well in terms of metadata reliability, and can realize server fault tolerance. The improved encryption algorithm, integrated with the dual-channel storage mode, improves encryption storage efficiency by 27.6% on average.
Funding (first abstract above): a preliminary result of the Chinese Government Scholarship High-level Graduate Program, sponsored by the China Scholarship Council (Program No. CSC202206310052).
Fund: Supported by the Western Project of the National Social Science Fund of China (23XJY013) and a Project of the Social Science Foundation of Shaanxi Province (2022D032).
Abstract: With the rapid growth of the global digital economy, cross-border e-commerce, as an emerging form of trade, has gradually become a powerful engine for the development of global trade. BRICS is an important force in the global economy, and progress in the BRICS countries' trade facilitation has an important impact on the global trade environment. This paper studies the dynamic changes in BRICS trade facilitation from 2013 to 2022 and uses an extended gravity model to analyze the specific impact of these changes on China's cross-border e-commerce exports. The results show that although the BRICS countries have made some progress in trade facilitation, the overall level still needs to be improved, and there are obvious differences among member countries. Nevertheless, the improvement of trade facilitation among BRICS countries has brought significant positive effects to China's cross-border e-commerce exports.
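The extended gravity model estimation can be sketched in log-linear form. The data below are synthetic, and the variable set (partner GDP, distance, a trade-facilitation index) is a simplified assumption; the paper's actual specification may include further controls.

```python
import numpy as np

# Hypothetical illustrative data: each row is one partner country, with
# GDP, distance to China, and a trade-facilitation index (TFI) in [0, 1];
# y is China's cross-border e-commerce exports to that partner.
rng = np.random.default_rng(0)
n = 50
gdp = rng.uniform(1e3, 1e5, n)
dist = rng.uniform(1e3, 1e4, n)
tfi = rng.uniform(0.3, 0.9, n)

# Generate log-exports from a known gravity relationship plus noise,
# so the regression below has a ground truth to recover.
log_y = 1.0 + 0.8 * np.log(gdp) - 1.1 * np.log(dist) + 2.0 * tfi \
        + rng.normal(0, 0.05, n)

# Extended gravity model in log-linear form, estimated by OLS:
# log(exports) = b0 + b1*log(GDP) + b2*log(distance) + b3*TFI
X = np.column_stack([np.ones(n), np.log(gdp), np.log(dist), tfi])
beta, *_ = np.linalg.lstsq(X, log_y, rcond=None)
# beta[3] captures the effect of trade facilitation on exports;
# a positive estimate matches the abstract's finding.
```

The sign and magnitude of `beta[3]` is what a study like this interprets: the estimated boost to exports from improved trade facilitation, holding economic mass and distance fixed.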
Abstract: This paper explores the operational strategies of cross-border micro, small, and medium enterprises (MSMEs). Against the backdrop of globalization and digitalization, cross-border trade has become an important pathway for many MSMEs to achieve growth and competitive advantage. Firstly, the paper outlines the concept and characteristics of cross-border MSMEs and analyzes their operational environment in the context of globalization, including political, economic, social, technological, environmental, and legal factors. Secondly, it proposes operational strategies for cross-border MSMEs, including international market selection and positioning, cross-border marketing strategies, supply chain management, cross-border financial management, and cross-border risk management. Finally, it summarizes the importance of effectively implementing these strategies for cross-border MSMEs to seize international market opportunities, reduce operational risks, and enhance competitiveness and profitability.
Fund: This article is funded by the National Social Science Foundation's general project "Theoretical and Practical Research on International Criminal Judicial Assistance in Combating Cybercrime" (Project No. 19BFX073) and the National Social Science Foundation's major project "Translation, Research and Database Construction of Cyberspace Policies and Regulations" (Project No. 20&ZD179).
Abstract: Cross-border data flows not only involve cross-border trade issues but also severely challenge personal information protection, national data security, and the jurisdiction of justice and enforcement. As current digital trade negotiations cannot accommodate these challenges, China has initiated the concept of secure cross-border data flow and launched a dual-track, multi-level regulatory system, including a control system for the overseas transfer of important data, a system for the cross-border provision of personal information, and a system for cross-border data requests for justice and enforcement. To explore a global regulatory framework for cross-border data flows, legitimate and controllable cross-border data flows should be promoted, supervision should be categorized based on the risks concerned, and the rule of law should be coordinated at home and abroad to promote system compatibility. To this end, the key is to build a compatible regulatory framework, which includes clarifying the scope of important data to define the "Negative List" for preventing national security risks, improving cross-border accountability for protecting personal information rights and interests to ease pre-supervision pressure, and focusing on data access rights instead of data localization to uphold the jurisdiction of justice and enforcement.
Fund: This article is supported by the National Social Science Fund Project "China's Non-Market Economy Status in WTO Trade Remedies" (Project No. 15XFX023) and the Human Rights Institute of Southwest University of Political Science and Law (SWUPL HRI) 2015 Yearly Research Project "Global Human Rights Governance under the TPP." All mistakes and omissions are my responsibility.
Abstract: The regulation of cross-border data flows is a growing challenge for the international community. International trade agreements, however, appear to be pioneering legal methods to cope, as they have grappled with this issue since the 1990s. The World Trade Organization (WTO) rules system offers a partial solution under the General Agreement on Trade in Services (GATS), which covers aspects related to cross-border data flows. The Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP) and the United States-Mexico-Canada Agreement (USMCA) have also been perceived to provide forward-looking resolutions. In this context, this article analyzes why a resolution to this issue may be illusory. While they regulate cross-border data flows in various ways, the structure and wording of the exception articles of both the CPTPP and USMCA have the potential to pose significant challenges to the international legal system. The new system, attempting to weigh societal values against economic development, is imbalanced, often valuing free trade more than individual online privacy and cybersecurity. Furthermore, the inclusion of poison-pill clauses is, by nature, antithetical to cooperation. Thus, for the international community generally, and China in particular, cross-border data flows would best be regulated under the WTO-centered multilateral trade law system.
Fund: Under the auspices of the National Natural Science Foundation of China (No. 41971198).
Abstract: Cross-border investment is essential for western China's globalization. The global value chain (GVC) forms cross-border investment networks between industries in western China and overseas cities. Focusing on the GVC, this study uses the social network analysis method, the entropy method, the multi-index comprehensive evaluation method, and the quadratic assignment procedure analysis method to examine the characteristics and influencing factors of the urban networks of research and development (R&D), production, and sales formed by the overseas investments of listed manufacturing companies in western China. Results showed that the three types of investment networks involved multiple industry types and multiple central cities with differentiated diversity and multicentrality. The R&D urban network's leading sub-industries were the mechanical equipment and instruments, medicine and biological products, and metal and nonmetal industries; its destination cities were mostly those home to educational and scientific research centers. The production urban network's leading sub-industries were the mechanical equipment, instrument, and food and beverage industries; its destination cities were mostly regional central cities in developing countries. The sales urban network's leading sub-industries were the mechanical equipment and instrument, metal and nonmetal, and petrochemical and plastics industries; its destination cities were numerous and scattered. In addition, the R&D urban network easily formed specialized clusters, core nodes easily controlled the production urban network, and individual nodes did not easily control the sales urban network. Technological and economic system advantages greatly impacted the three network types. Considering the different influencing factors, this study suggests optimizing the institutional investment environment to narrow the institutional gap, adjusting and optimizing the investment layout to expand overseas markets, and increasing R&D funds to stimulate technological progress and overseas investments in western China.
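Of the methods listed, the entropy method for multi-index comprehensive evaluation is straightforward to sketch: indicators whose values are more dispersed across cities carry more information and receive larger weights. A minimal sketch, assuming all indicators are positive, benefit-type values:

```python
import numpy as np

def entropy_weights(X):
    # Entropy weight method for an evaluation matrix X
    # (rows = cities, columns = indicators; all values positive).
    X = np.asarray(X, dtype=float)
    # Normalize each indicator column to proportions.
    P = X / X.sum(axis=0)
    # Shannon entropy of each indicator, scaled so a uniform column has
    # entropy 1 (k = 1/ln m for m cities); 0*log(0) is treated as 0.
    k = 1.0 / np.log(X.shape[0])
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)
    e = -k * (P * logs).sum(axis=0)
    # Lower entropy = more dispersion = more informative = higher weight.
    d = 1.0 - e
    return d / d.sum()

def composite_scores(X):
    # Weighted sum of indicators gives each city's comprehensive score.
    w = entropy_weights(X)
    return np.asarray(X, dtype=float) @ w
```

A constant indicator column (identical across all cities) has entropy 1 and therefore weight 0, which matches the intuition that it cannot discriminate between cities.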
Fund: Supported by China's National Natural Science Foundation (Nos. 62072249 and 62072056). This work is also funded by the Natural Science Foundation of Hunan Province (2020JJ2029).
Abstract: With the development of Industry 4.0 and big data technology, the Industrial Internet of Things (IIoT) is hampered by inherent issues such as privacy, security, and fault tolerance, which pose certain challenges to its rapid development. Blockchain technology has immutability, decentralization, and autonomy, which can greatly mitigate the inherent defects of the IIoT. In a traditional blockchain, data is stored in a Merkle tree. As data continues to grow, the scale of the proofs used to validate it grows as well, threatening the efficiency, security, and reliability of blockchain-based IIoT. Accordingly, this paper first analyzes the inefficiency of the traditional blockchain structure in verifying the integrity and correctness of data. To solve this problem, a new Vector Commitment (VC) structure, Partition Vector Commitment (PVC), is proposed by improving the traditional VC structure. Secondly, this paper uses PVC instead of the Merkle tree to store big data generated by IIoT. PVC improves the efficiency of traditional VC in the commitment and opening processes. Finally, this paper uses PVC to build a blockchain-based IIoT data security storage mechanism and carries out a comparative experimental analysis. This mechanism can greatly reduce communication loss and maximize the rational use of storage space, which is of great significance for maintaining the security and stability of blockchain-based IIoT.
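The Merkle-tree scaling issue the paper targets can be made concrete: a membership proof carries one sibling hash per tree level, so its size grows as ceil(log2 n) in the number of leaves. A minimal sketch with hypothetical helper names (this illustrates the baseline being improved, not the paper's PVC construction):

```python
import hashlib

def _h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root_and_proof(leaves, index):
    # Build a Merkle tree over the leaves and collect the sibling hashes
    # needed to prove membership of leaves[index]. The proof contains
    # ceil(log2(n)) hashes -- the growth in proof size that motivates PVC.
    level = [_h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:            # duplicate last node on odd levels
            level.append(level[-1])
        proof.append(level[index ^ 1])  # sibling at this level
        level = [_h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
        index //= 2
    return level[0], proof

def verify(root, leaf, index, proof):
    # Recompute the path from the leaf to the root using the proof.
    h = _h(leaf)
    for sibling in proof:
        h = _h(h + sibling) if index % 2 == 0 else _h(sibling + h)
        index //= 2
    return h == root
```

With 8 leaves the proof holds 3 hashes; with a million leaves it holds 20. Vector commitments aim to shrink or restructure this per-element opening cost.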
Abstract: Getting insight into the spatiotemporal distribution patterns of knowledge innovation is receiving increasing attention from policymakers and economic research organizations. Many studies use bibliometric data to analyze the popularity of certain research topics, well-adopted methodologies, influential authors, and the interrelationships among research disciplines. However, the visual exploration of the patterns of research topics with an emphasis on their spatial and temporal distribution remains challenging. This study combined a Space-Time Cube (STC) and a 3D glyph to represent complex multivariate bibliographic data. We further implemented the visual design by developing an interactive interface. The effectiveness, understandability, and engagement of ST-Map were evaluated by seven experts in geovisualization. The results suggest that it is promising to use three-dimensional visualization to show the overview and on-demand details on a single screen.
Fund: Supported by the National Natural Science Foundation of China (42271360 and 42271399), the Young Elite Scientists Sponsorship Program by the China Association for Science and Technology (CAST) (2020QNRC001), and the Fundamental Research Funds for the Central Universities, China (2662021JC013, CCNU22QN018).
Abstract: Ratoon rice, which refers to a second harvest of rice obtained from the regenerated tillers originating from the stubble of the first harvested crop, plays an important role in both food security and agroecology while requiring minimal agricultural inputs. However, accurately identifying ratoon rice crops is challenging due to the similarity of its spectral features with other rice cropping systems (e.g., double rice). Moreover, images with a high spatiotemporal resolution are essential, since ratoon rice is generally cultivated in fragmented croplands within regions that frequently exhibit cloudy and rainy weather. In this study, taking Qichun County in Hubei Province, China as an example, we developed a new phenology-based ratoon rice vegetation index (PRVI) for ratoon rice mapping at a 30 m spatial resolution using a robust time series generated from Harmonized Landsat and Sentinel-2 (HLS) images. The PRVI, which incorporates the red, near-infrared, and shortwave infrared 1 bands, was developed based on an analysis of spectro-phenological separability and feature selection. Based on actual field samples, the performance of the PRVI for ratoon rice mapping was carefully evaluated by comparing it to several vegetation indices, including the normalized difference vegetation index (NDVI), enhanced vegetation index (EVI), and land surface water index (LSWI). The results suggested that the PRVI could sufficiently capture the specific characteristics of ratoon rice, leading to favorable separability between ratoon rice and other land cover types. Furthermore, the PRVI showed the best performance for identifying ratoon rice in the phenological phases from grain filling and harvesting to tillering of the ratoon crop (GHS-TS2), indicating that only several images are required to obtain an accurate ratoon rice map. Finally, the PRVI performed better than NDVI, EVI, LSWI, and their combination at the GHS-TS2 stages, with producer's and user's accuracies of 92.22% and 89.30%, respectively. These results demonstrate that the proposed PRVI based on HLS data can effectively identify ratoon rice in fragmented croplands at crucial phenological stages, which is promising for identifying the earliest timing of ratoon rice planting and can provide a fundamental dataset for crop management activities.
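The abstract does not give the PRVI formula, but the three comparison indices are standard and can be sketched directly from their band definitions (EVI uses the common MODIS-style coefficients):

```python
def ndvi(red, nir):
    # Normalized difference vegetation index.
    return (nir - red) / (nir + red)

def lswi(nir, swir1):
    # Land surface water index, sensitive to canopy and soil water content;
    # it uses the shortwave infrared 1 band, as does the proposed PRVI.
    return (nir - swir1) / (nir + swir1)

def evi(blue, red, nir):
    # Enhanced vegetation index with the widely used coefficients
    # G=2.5, C1=6, C2=7.5, L=1.
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)
```

Inputs are surface reflectances in [0, 1] per band; in a mapping workflow these functions would be applied pixel-wise to each image in the HLS time series.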
Fund: Supported by the EU H2020 Research and Innovation Program under the Marie Sklodowska-Curie Grant Agreement (Project DEEP, Grant number: 101109045), the National Key R&D Program of China (Grant number 2018YFB1800804), the National Natural Science Foundation of China (Nos. NSFC 61925105 and 62171257), the Tsinghua University-China Mobile Communications Group Co., Ltd. Joint Institute, and the Fundamental Research Funds for the Central Universities, China (No. FRF-NP-20-03).
Abstract: The increasing dependence on data highlights the need for a detailed understanding of its behavior, encompassing the challenges involved in processing and evaluating it. However, current research lacks a comprehensive structure for measuring the worth of data elements, hindering effective navigation of the changing digital environment. This paper aims to fill this research gap by introducing the innovative concept of "data components." It proposes a graph-theoretic representation model that presents a clear mathematical definition and demonstrates the superiority of data components over traditional processing methods. Additionally, the paper introduces an information measurement model that provides a way to calculate the information entropy of data components and establish their increased informational value. The paper also assesses the value of information, suggesting a pricing mechanism based on its significance. In conclusion, this paper establishes a robust framework for understanding and quantifying the value of implicit information in data, laying the groundwork for future research and practical applications.
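One plausible building block for such an information measurement model is the Shannon entropy of a data component's value distribution; the function below is an illustrative assumption, not the paper's exact definition.

```python
import math
from collections import Counter

def shannon_entropy(values):
    # Shannon entropy, in bits, of the empirical value distribution of a
    # discrete data component. Higher entropy means the component carries
    # more information per observation.
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A constant component has entropy 0 (no information), a fair binary component has entropy 1 bit, and a component uniform over four values has entropy 2 bits; a pricing mechanism could weight components by such measures.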
Fund: 2023 National College Students' Innovation and Entrepreneurship Training Program "Research on Big Data Analysis and Application of Cross-Border E-commerce in the Context of Digital Trade" (Project number: 202310621323).
Abstract: With the conclusion of the novel coronavirus pandemic and an increasingly complex market environment, China's cross-border e-commerce has entered a new phase of development. The external landscape is evolving rapidly; laws and regulations governing cross-border e-commerce are gradually improving, and government support has increased. Despite the impact of the COVID-19 pandemic on the market economy, overall development has been steadily improving. The Internet population is expanding, the online retail market is experiencing rapid growth, the consumption structure is undergoing transformation and upgrading, and the e-commerce market is demonstrating significant potential. The advancement of technologies such as big data, artificial intelligence, blockchain, and supply chain management has provided more efficient operational support for the cross-border e-commerce industry. Against the backdrop of new forms of cross-border e-commerce emerging in China post-pandemic, this paper uses the PEST model to analyze the macro environment of cross-border e-commerce in China and project its future development trends.
Fund: Supported in part by the "Pioneer" and "Leading Goose" R&D Program of Zhejiang (Grant No. 2022C03174), the National Natural Science Foundation of China (No. 92067103), the Key Research and Development Program of Shaanxi, China (No. 2021ZDLGY06-02), the Natural Science Foundation of Shaanxi Province (No. 2019ZDLGY12-02), the Shaanxi Innovation Team Project (No. 2018TD-007), the Xi'an Science and Technology Innovation Plan (No. 201809168CX9JC10), the Fundamental Research Funds for the Central Universities (No. YJS2212), and the National 111 Program of China (B16037).
Abstract: The security of Federated Learning (FL) / Distributed Machine Learning (DML) is gravely threatened by data poisoning attacks, which destroy the usability of the model by contaminating the training samples; such attacks are therefore called causative availability indiscriminate attacks. Since existing data sanitization methods are hard to apply to real-time applications due to their tedious process and heavy computations, we propose a new supervised batch detection method for poison, which can quickly sanitize the training dataset before local model training. We design a training dataset generation method that helps to enhance accuracy, and we use data complexity features to train a detection model, which is then used in an efficient batch hierarchical detection process. Our model stockpiles knowledge about poison, which can be expanded by retraining to adapt to new attacks. Being neither attack-specific nor scenario-specific, our method is applicable to FL/DML as well as other online or offline scenarios.
Fund: Key Research and Development and Promotion Program of Henan Province (No. 222102210069), Zhongyuan Science and Technology Innovation Leading Talent Project (224200510003), and the National Natural Science Foundation of China (No. 62102449).
Abstract: Big data resources are characterized by large scale, wide sources, and strong dynamics. Existing access control mechanisms based on manual policy formulation by security experts suffer from drawbacks such as low policy management efficiency and difficulty in accurately describing the access control policy. To overcome these problems, this paper proposes a big data access control mechanism based on a two-layer permission decision structure. This mechanism extends the attribute-based access control (ABAC) model. Business attributes are introduced into the ABAC model as business constraints between entities. The proposed mechanism implements a two-layer permission decision structure composed of the inherent attributes of access control entities and the business attributes, which constitute the general permission decision algorithm based on logical calculation and the business permission decision algorithm based on a bi-directional long short-term memory (BiLSTM) neural network, respectively. The general permission decision algorithm is used to implement accurate policy decisions, while the business permission decision algorithm implements fuzzy decisions based on the business constraints. The BiLSTM neural network is used to calculate the similarity of the business attributes to realize intelligent, adaptive, and efficient access control permission decisions. Through the two-layer permission decision structure, the complex and diverse big data access control management requirements can be satisfied while considering both the security and availability of resources. Experimental results show that the proposed mechanism is effective and reliable. In summary, it can efficiently support the secure sharing of big data resources.
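The first layer, the general permission decision based on logical calculation, can be sketched as an exact check over inherent attributes. The attribute names and policy shape below are hypothetical, and the BiLSTM-based business layer is omitted.

```python
def general_decision(subject: dict, resource: dict, action: str,
                     policy: dict) -> bool:
    # Layer 1 of the two-layer structure: an exact, logic-based ABAC check
    # over the inherent attributes of the entities. Every condition must
    # hold for the request to pass to the business-attribute layer.
    return (
        subject.get("department") in policy["allowed_departments"]
        and subject.get("clearance", 0) >= policy["min_clearance"]
        and resource.get("sensitivity", 0) <= subject.get("clearance", 0)
        and action in policy["allowed_actions"]
    )

# Hypothetical policy for a shared analytics dataset.
policy = {
    "allowed_departments": {"analytics", "audit"},
    "min_clearance": 2,
    "allowed_actions": {"read"},
}
```

In the paper's design, requests that pass this exact layer would then go through the fuzzy business permission decision, where BiLSTM-computed attribute similarity handles constraints too ambiguous for hard logic.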
Fund: Sponsored by the National Natural Science Foundation of China under grant numbers 62172353, 62302114, U20B2046, and 62172115, the Innovation Fund Program of the Engineering Research Center for Integration and Application of Digital Learning Technology of the Ministry of Education (Nos. 1331007 and 1311022), the Natural Science Foundation of the Jiangsu Higher Education Institutions (Grant No. 17KJB520044), and the Six Talent Peaks Project in Jiangsu Province (No. XYDXX-108).
Abstract: With the rapid development of information technology, IoT devices play a huge role in physiological health data detection. The exponential growth of medical data requires us to reasonably allocate storage space between cloud servers and edge nodes. The storage capacity of edge nodes close to users is limited, so we should store hotspot data in edge nodes as much as possible to ensure response timeliness and the access hit rate. However, current schemes cannot guarantee that every sub-message of a complete piece of data stored by an edge node meets the requirements of hot data. How to detect and delete redundant data in edge nodes while protecting user privacy and dynamic data integrity has thus become a challenging problem. Our paper proposes a redundant data detection method that meets privacy protection requirements. By scanning the ciphertext, it determines whether each sub-message of the data in the edge node meets the requirements of hot data. This has the same effect as a zero-knowledge proof and does not reveal users' privacy. In addition, for redundant sub-data that does not meet the requirements of hot data, our paper proposes a redundant data deletion scheme that preserves the dynamic integrity of the data. We use Content Extraction Signature (CES) to generate the signature of the remaining hot data after the redundant data is deleted. The feasibility of the scheme is proved through security analysis and efficiency analysis.
Abstract: A significant obstacle in intelligent transportation systems (ITS) is the capacity to predict traffic flow. Recent advancements in deep neural networks have enabled the development of models that represent traffic flow accurately. However, accurately predicting traffic flow at the individual road level is extremely difficult due to the complex interplay of spatial and temporal factors. This paper proposes a technique for predicting short-term traffic flow data using an architecture that combines convolutional bidirectional long short-term memory (Conv-BiLSTM) with attention mechanisms. Prior studies neglected to include data on factors such as holidays, weather conditions, and vehicle types, which are interconnected and significantly impact the accuracy of forecast outcomes. In addition, this research incorporates recurring monthly periodic pattern data, which significantly enhances the accuracy of forecast outcomes. The experimental findings demonstrate a performance improvement of 21.68% when incorporating the vehicle type feature.
基金This research was funded by the National Nature Sciences Foundation of China(Grant No.42250410321).
Abstract: Missing values are one of the main causes of dirty data. Without high-quality data, there can be no reliable analysis results or precise decision-making; therefore, the data warehouse needs to integrate high-quality data consistently. In the power system, the electricity consumption data of some large users cannot be collected normally, resulting in missing data, which affects the calculation of power supply and eventually leads to a large error in the daily power line loss rate. For the problem of missing electricity consumption data, this study proposes a group method of data handling (GMDH) based data interpolation method for distribution power networks and applies it to the analysis of actually collected electricity data. First, the dependent and independent variables are defined from the original data, and the upper and lower limits of the missing values are determined according to prior knowledge or existing data information. All missing data are randomly interpolated within the upper and lower limits. Then, the GMDH network is established to obtain the optimal complexity model, which is used to predict the missing data to replace the last imputed electricity consumption data. Finally, this process is implemented iteratively until the missing values no longer change. Under a relatively small noise level (α = 0.25), the proposed approach achieves a maximum error of no more than 0.605%. Experimental findings demonstrate the efficacy and feasibility of the proposed approach, which realizes the transformation from incomplete data to complete data. The proposed data interpolation approach also provides a strong basis for electricity theft diagnosis and metering fault analysis in electricity enterprises.
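The iterative workflow described above can be sketched with a simple stand-in model: initialize the missing values within known bounds, fit a model on the current data, re-predict the missing entries, and repeat until they stabilize. A quadratic least-squares fit replaces the GMDH network here, so this is an illustration of the loop, not the paper's model.

```python
import numpy as np

def iterative_impute(x, y, missing_mask, iters=20):
    # Iterative imputation in the spirit of the GMDH workflow:
    # 1) randomly initialize missing values within the observed bounds,
    # 2) fit a model on all current values (quadratic stand-in for GMDH),
    # 3) re-predict the missing entries, clipped to the bounds,
    # 4) repeat until the imputed values no longer change.
    y = np.asarray(y, dtype=float).copy()
    observed = y[~missing_mask]
    lo, hi = observed.min(), observed.max()
    rng = np.random.default_rng(0)
    y[missing_mask] = rng.uniform(lo, hi, missing_mask.sum())
    for _ in range(iters):
        coeffs = np.polyfit(x, y, 2)
        pred = np.polyval(coeffs, x[missing_mask])
        if np.allclose(pred, y[missing_mask]):
            break
        y[missing_mask] = np.clip(pred, lo, hi)
    return y
```

Because only a few entries are imputed, each refit is dominated by the clean observations, so the imputed values contract toward the fitted curve across iterations.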
Abstract: Cloud computing has emerged as a viable alternative to traditional computing infrastructures, offering various benefits. However, the adoption of cloud storage poses significant risks to data secrecy and integrity. This article presents an effective mechanism to preserve the secrecy and integrity of data stored on the public cloud by leveraging blockchain technology, smart contracts, and cryptographic primitives. The proposed approach utilizes a Solidity-based smart contract as an auditor for maintaining and verifying the integrity of outsourced data. To preserve data secrecy, symmetric encryption systems are employed to encrypt user data before outsourcing it. An extensive performance analysis is conducted to illustrate the efficiency of the proposed mechanism. Additionally, a rigorous assessment is conducted to ensure that the developed smart contract is free from vulnerabilities and to measure its associated running costs. The security analysis of the proposed system confirms that our approach can securely maintain the confidentiality and integrity of cloud storage, even in the presence of malicious entities. The proposed mechanism contributes to enhancing data security in cloud computing environments and can be used as a foundation for developing more secure cloud storage systems.
Fund: Funding within the Wheat BigData Project (German Federal Ministry of Food and Agriculture, FKZ 2818408B18).
Abstract: Genome-wide association mapping studies (GWAS) based on big data are a potential approach to improve marker-assisted selection in plant breeding. The number of available phenotypic and genomic data sets in which medium-sized populations of several hundred individuals have been studied is rapidly increasing. Combining these data and using them in GWAS could increase both the power of QTL discovery and the accuracy of estimation of the underlying genetic effects, but is hindered by data heterogeneity and lack of interoperability. In this study, we used genomic and phenotypic data sets, focusing on Central European winter wheat populations evaluated for heading date. We explored strategies for integrating these data and, subsequently, the resulting potential for GWAS. Establishing interoperability between data sets was greatly aided by some overlapping genotypes and a linear relationship between the different phenotyping protocols, resulting in high-quality integrated phenotypic data. In this context, genomic prediction proved to be a suitable tool to study the relevance of interactions between genotypes and experimental series, which was low in our case. Contrary to expectations, fewer associations between markers and traits were found in the larger combined data than in the individual experimental series. However, the predictive power based on the marker-trait associations of the integrated data set was higher across data sets. Therefore, the results show that the integration of medium-sized data into big data is an approach to increase the power to detect QTL in GWAS. The results encourage further efforts to standardize and share data in the plant breeding community.