Nowadays, most cloud applications process large amounts of data to produce the desired results. In the Internet environment, enterprise network advertising and network marketing plans require partner sites selected as carriers and publishers. Websites present enterprise marketing solutions to users through static pages, dynamic pages, floating windows, ad links, and active push. When users access the web pages, eye-catching and concentration effects attract them to read the pages or click through again, giving them a detailed and comprehensive understanding of the marketing plan, which in turn influences their real purchase decisions. We therefore combine the cloud environment with search engine optimization techniques, and the results show that our method outperforms other approaches.
In this paper, we study mass structured data storage and sorting algorithms and methodology for SQL databases in the big data environment. As the data storage market develops, storage is shifting from a server-centric model to a data-centric model. Traditionally, storage simply retained a series of data, and the management system and storage devices rarely considered the intrinsic value of the stored data. The prosperity of the Internet has changed how the world stores data and has given rise to many new applications. Theoretically, the proposed algorithm can handle massive data, and numerically it improves processing accuracy and speed, which is meaningful.
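The abstract above does not specify its sorting algorithm, but a standard baseline for sorting structured data that exceeds memory is external merge sort: sort fixed-size chunks in memory, spill each sorted run to disk, then k-way merge the runs. The sketch below is illustrative only (the function name `external_sort` and the integer-record restriction are assumptions, not the paper's method):

```python
import heapq
import os
import tempfile

def external_sort(records, chunk_size=4):
    """External merge sort sketch: sort fixed-size chunks in memory,
    spill each sorted run to a temp file, then k-way merge the runs.
    Integer records are assumed for brevity."""
    run_paths = []

    def spill(buf):
        # Write one sorted run to its own temporary file.
        fd, path = tempfile.mkstemp(text=True)
        with os.fdopen(fd, "w") as f:
            f.writelines(f"{r}\n" for r in sorted(buf))
        run_paths.append(path)

    buf = []
    for r in records:
        buf.append(r)
        if len(buf) >= chunk_size:
            spill(buf)
            buf = []
    if buf:
        spill(buf)

    # k-way merge of the sorted runs without loading them all at once.
    files = [open(p) for p in run_paths]
    try:
        runs = (map(int, f) for f in files)
        merged = list(heapq.merge(*runs))
    finally:
        for f in files:
            f.close()
        for p in run_paths:
            os.unlink(p)
    return merged
```

In practice `chunk_size` would be sized to available memory; the temp files stand in for the on-disk runs a database engine would manage.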
The meteorological big data of the Beijing area are typical multi-dimensional big data with spatiotemporal characteristics, and they have important research value for studies of the urban human settlement environment. With the help of computer programming and software processing, big data crawling, integration, extraction, and multi-dimensional information fusion can be realized quickly and effectively, so as to obtain the data sets needed for research and achieve the goal of visualization. Big data analysis of the wind environment, thermal environment, and total suspended atmospheric particulate pollutants in the Beijing area showed that the average wind speed has decreased significantly over the past 40 years, while the surface temperature has increased significantly; the urban heat island effect is significant, and atmospheric suspended particulate pollution is relatively common. The spatial distributions of the three climatic and environmental data sets are unbalanced and show significant regularity and correlation. Improving urban ventilation corridors and urban ventilation capacity is a feasible way to mitigate the urban heat island effect and reduce urban climate problems such as atmospheric particulate pollution.
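The decreasing 40-year wind speed reported above is the kind of claim typically checked with a least-squares trend on annual means. The sketch below uses synthetic data (the series and the function name `linear_trend` are illustrative assumptions, not the study's data or code):

```python
def linear_trend(years, values):
    """Ordinary least-squares fit of values = slope * year + intercept:
    slope = cov(year, value) / var(year). A negative slope indicates
    a declining trend."""
    n = len(years)
    mx = sum(years) / n
    my = sum(values) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(years, values))
    var = sum((x - mx) ** 2 for x in years)
    slope = cov / var
    return slope, my - slope * mx

# Synthetic annual mean wind speeds (m/s) drifting downward over 40 years.
years = list(range(1980, 2020))
speeds = [3.0 - 0.01 * (y - 1980) for y in years]
slope, intercept = linear_trend(years, speeds)
```

On real station data one would also test the slope's significance before claiming a trend.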
The New Austrian Tunneling Method (NATM) has been widely used in the construction of mountain tunnels, urban metro lines, underground storage tanks, underground power houses, mining roadways, and so on. The variation patterns of advance geological prediction data, stress-strain data of supporting structures, and deformation data of the surrounding rock are vitally important in assessing the rationality and reliability of construction schemes, and provide essential information for ensuring the safety and scheduling of tunnel construction. However, as the quantity of these data increases significantly, the uncertainty and discreteness of the mass data make it extremely difficult to produce a reasonable construction scheme; they also reduce the forecast accuracy for accidents and dangerous situations, creating huge challenges for tunnel construction safety. To solve this problem, a novel data service system is proposed that uses data-association technology and the NATM, with the support of a big data environment. This system can integrate data resources from distributed monitoring sensors during the construction process, and then identify associations and build relations among data resources under the same construction conditions. These data associations and relations are then stored in a data pool. As the data pool develops and is supplemented, similar relations can be reused under similar conditions to provide data references for construction schematic designs and resource allocation. The proposed data service system also provides valuable guidance for the construction of similar projects.
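The abstract does not say how associations between sensor streams are identified; one plausible, minimal measure is Pearson correlation between equally sampled series, with strongly correlated pairs stored in the data pool. Everything below (function names, the correlation threshold, the dict-keyed pool) is an illustrative assumption, not the paper's system:

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def build_data_pool(streams, threshold=0.9):
    """Store strongly associated pairs of monitoring streams in a
    'data pool' keyed by sensor-id pair, mimicking the association
    step described in the abstract."""
    pool = {}
    ids = sorted(streams)
    for i, u in enumerate(ids):
        for v in ids[i + 1:]:
            r = pearson(streams[u], streams[v])
            if abs(r) >= threshold:
                pool[(u, v)] = r
    return pool
```

A production system would use association measures robust to lag and noise (e.g. cross-correlation), but the pooling pattern is the same.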
This study explores the application of Bayesian analysis based on neural networks and deep learning in data visualization. Its background is that, with the increasing volume and complexity of data, traditional data analysis methods can no longer meet the need. The research methods include building neural network and deep learning models, optimizing and improving them through Bayesian analysis, and applying them to the visualization of large-scale data sets. The results show that neural networks combined with Bayesian analysis and deep learning can effectively improve the accuracy and efficiency of data visualization and enhance the intuitiveness and depth of data interpretation. The significance of the research is that it provides a new solution for data visualization in the big data environment and helps to further promote the development and application of data science.
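As a minimal illustration of the Bayesian ingredient, the conjugate update for linear regression shows how a prior plus data yields a posterior over a parameter, with uncertainty shrinking as evidence accumulates. This stand-in (the function name and the one-parameter model are assumptions) is far simpler than the paper's neural-network setting but uses the same inference principle:

```python
def bayesian_linreg_posterior(xs, ys, alpha=1.0, beta=25.0):
    """Posterior over the slope w of y = w * x + noise, with prior
    w ~ N(0, 1/alpha) and known observation precision beta.
    The conjugate update gives posterior N(mean, var) with
    var = 1 / (alpha + beta * sum(x^2)) and mean = beta * var * sum(x*y)."""
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    var = 1.0 / (alpha + beta * sxx)
    mean = beta * var * sxy
    return mean, var
```

In a Bayesian neural network the same idea is applied (approximately) to every weight, which is what yields uncertainty estimates for visualization.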
This study analyzes and summarizes seven main characteristics of the marine data sampled by multiple underwater gliders. Characteristics such as the large data volume and data sparseness make it extremely difficult to build meaningful applications such as early warning of the marine environment. To make full use of the sea trial data, this paper defines two types of marine data cube that can integrate the big marine data sampled by multiple underwater gliders along saw-tooth paths, and proposes a data fitting algorithm based on time extraction and space compression (DFTS) to construct the temperature and conductivity data cubes. It also presents an early warning algorithm based on the data cube (EWDC) to realize early warning from a newly sampled data file. Experimental results show that the proposed methods are reasonable and effective. This work is the first study to build realistic applications on data sampled by multiple underwater vehicles, and it provides a research framework for processing and analyzing big marine data oriented to the applications of underwater gliders.
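A data cube over sawtooth-path samples can be pictured as gridding irregular (time, depth) measurements onto regular cells and averaging per cell. The sketch below is a crude stand-in for that idea only; the DFTS algorithm itself, with its time extraction and space compression, is more involved, and the bin sizes and function name here are assumptions:

```python
def build_data_cube(samples, t_bin=10.0, d_bin=5.0):
    """Bin irregular glider samples (time_s, depth_m, temp_C) onto a
    regular (time, depth) grid, averaging all samples that fall in the
    same cell. Returns {(time_index, depth_index): mean_temperature}."""
    sums, counts = {}, {}
    for t, d, temp in samples:
        cell = (int(t // t_bin), int(d // d_bin))
        sums[cell] = sums.get(cell, 0.0) + temp
        counts[cell] = counts.get(cell, 0) + 1
    return {cell: sums[cell] / counts[cell] for cell in sums}
```

An early-warning check could then compare a newly sampled file's cell averages against the cube's historical values, flagging cells that deviate beyond a threshold.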
The Cloud Computing Environment (CCE), developed to exploit the dynamic cloud, allows software and services to grow with any business. It has transformed how enterprise data are stored, accessed, and shared through Data Sharing (DS). Big data frames a constant way of uploading and sharing cloud data in a hierarchical architecture with different kinds of separate privileges for accessing the data. Given the vast volumes of storage required in CCEs, establishing a secure data access framework is an important issue. This paper proposes an Improved Secure Identification-based Multilevel Structure of Data Sharing (ISIMSDS) to support the DS of big data in CCEs. A complex file partitioning technique is proposed to verify the access privilege context for sharing data in complex CCEs. An access control Encryption Method (EM) is used to improve the encryption, and complexity is measured to raise the authentication standard. The ISIMSDS methodology protects against active attacks and helps reduce complexity as the user population grows rapidly. The security analysis proves that the proposed ISIMSDS methodology is more secure against chosen-plaintext attacks and provides more efficient computation and storage space than related methods. The proposed ISIMSDS methodology is also more efficient in communication costs such as encryption, decryption, and retrieval of the data.
Big data and associated analytics have the potential to revolutionize healthcare through the tools and techniques they offer to manage and exploit the large volumes of heterogeneous data being collected in the healthcare domain. The strict security and privacy constraints on this data, however, pose a major obstacle to the successful use of these tools and techniques. The paper first describes the security challenges associated with big data analytics in healthcare research from a unique perspective based on the big data analytics pipeline. The paper then examines the use of data safe havens as an approach to addressing the security challenges and argues for the approach by providing a detailed introduction to the security mechanisms implemented in a novel data safe haven. The CIMVHR Data Safe Haven (CDSH) was developed to support research into the health and well-being of Canadian military, Veterans, and their families. The CDSH is shown to overcome the security challenges presented in the different stages of the big data analytics pipeline.
The retail food environment (RFE) has a significant impact on people's dietary behavior and diet-related outcomes. Although RFE research has received a lot of attention, very few studies shed light on the foodscape and assessment methodologies in the Chinese context. Based on open data obtained from Dianping.com and the AutoNavi map, we classified all food outlets into six types. Geographic Information Systems (GIS) techniques were employed to create two network buffer areas (1-km and 3-km) and calculate absolute measures and relative measures (i.e., mRFEI and Rmix). We modified the calculation of the relative measures by adding items and assigning weights. The mean mRFEI across communities was 10.45 for the 1-km buffer and 20.12 for the 3-km buffer, while the mean mRmix for the two buffer sizes was 20.97 and 58.04, respectively, indicating that residents in Wuhan have better access to fresh and nutritious food within 3-km network buffers. Residents in urban areas are more likely to be exposed to an unhealthy food environment than those in rural areas, and residents in the Xinzhou and Qiaokou districts are more likely to be subjected to an unfavorable neighborhood RFE. The open data-driven methods for assessing the RFE in Wuhan, China may guide community-level food policy interventions and promote active living by shifting built environments to increase residents' access to healthy food.
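The mRFEI used above is conventionally computed as healthy food retailers as a percentage of healthy plus less-healthy retailers within a buffer. The sketch below illustrates that ratio; the two-way healthy/unhealthy split of outlet types is an illustrative assumption and does not reproduce the paper's six-type classification or its added weights:

```python
def mrfei(outlets):
    """Modified Retail Food Environment Index for one buffer area:
    healthy outlets as a percentage of healthy + less-healthy outlets.
    `outlets` maps outlet type -> count; the type lists below are
    illustrative, not the paper's classification."""
    healthy = ("supermarket", "greengrocer")
    unhealthy = ("fast_food", "convenience")
    h = sum(outlets.get(k, 0) for k in healthy)
    u = sum(outlets.get(k, 0) for k in unhealthy)
    return 100.0 * h / (h + u) if h + u else 0.0
```

The buffer itself (1-km or 3-km network distance) would be computed in GIS; this function only scores the outlet counts that fall inside it.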
The Mongolian Plateau (MP), situated in the transitional zone between the Siberian taiga and the arid grasslands of Central Asia, plays a significant role as an Ecological Barrier (EB), with crucial implications for ecological and resource security in Northeast Asia. The EB is a vast concept and a complex issue involving many aspects, such as water, land, air, vegetation, animals, and people. It is very difficult to understand the EB as a whole without a comprehensive perspective, which traditional, disparate studies cannot provide. Big data and artificial intelligence (AI) have enabled a shift in the research paradigm. Facing these requirements, this study identified issues in the construction of the EB on the MP from a big data perspective, covering the issues, progress, and future recommendations for EB construction-related studies using big data and AI. Current issues include the status of theoretical studies, technical bottlenecks, and insufficient synergistic analyses related to EB construction. The review of research progress introduces advances in scientific research driven by big data in three key areas of the MP: natural resources, the ecological environment, and sustainable development. For the future development of EB construction on the MP, it is recommended to utilize big data and intelligent computing technologies, integrate extensive regional data resources, develop precise algorithms and automated tools, and build a big data collaborative innovation platform. This study aims to call more attention to big data and AI applications in EB studies, thereby supporting the achievement of sustainable development goals on the MP and advancing the research paradigm shift in the fields of resources and the environment.
Nowadays, healthcare applications require large volumes of medical data to help physicians, academicians, pathologists, doctors, and other healthcare professionals. Advances in the domains of Wireless Sensor Networks (WSN) and Multimedia Wireless Sensor Networks (MWSN) are tremendous. The M-WMSN is an advanced form of the conventional WSN extended to networks that use multimedia devices. Compared with a traditional WSN, the quantity of data transmitted in an M-WMSN is significantly higher due to the presence of multimedia content; hence, clustering techniques are deployed to keep energy utilization low. The current research work introduces a new Density Based Clustering (DBC) technique to achieve energy efficiency in WMSN. The DBC technique is mainly employed for data collection in healthcare environments and primarily depends on three input parameters, namely remaining energy level, distance, and node centrality. In addition, two static data collector points called Super Cluster Heads (SCH) are placed, which collect the data from normal CHs and forward it directly to the Base Station (BS). The SCH supports multi-hop data transmission, which helps balance the available energy effectively. A detailed simulation analysis was conducted to showcase the superior performance of the DBC technique, and the results were examined from diverse aspects. The simulation outcomes show that the proposed DBC technique extends the network lifetime to a maximum of 16,500 rounds, which is significantly higher than existing methods.
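Cluster-head election from the three inputs the abstract names (remaining energy, distance, node centrality) can be pictured as scoring each candidate and picking the maximum. The weights, function names, and the assumption that all inputs are normalized to [0, 1] are illustrative; they are not taken from the DBC paper:

```python
def ch_score(node, w_energy=0.5, w_dist=0.3, w_central=0.2):
    """Weighted eligibility score for cluster-head election.
    `node` is (remaining_energy, distance_to_bs, centrality), each
    normalized to [0, 1]; a shorter distance scores higher."""
    energy, dist, centrality = node
    return w_energy * energy + w_dist * (1.0 - dist) + w_central * centrality

def elect_cluster_head(nodes):
    """Return the id of the highest-scoring candidate node."""
    return max(nodes, key=lambda nid: ch_score(nodes[nid]))
```

In a full protocol this election would be repeated per round so that the cluster-head role, and its energy cost, rotates among nodes.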
The year 2011 is considered the first year of the big data market in China. Compared with the global scale, China's big data growth will be faster than the global average growth rate, and China will see rapid expansion of the big data market in the next few years. This paper presents the overall development of big data in China in terms of market scale and development stages, enterprise development in the industry chain, technology standards, and industrial applications. It points out the issues and challenges facing big data development in China and proposes making policies and creating support approaches for big data transactions and personal privacy protection.
In the era of big data, traditional regression models cannot handle uncertain big data efficiently and accurately. To make up for this deficiency, this paper proposes a quantum fuzzy regression model, which uses fuzzy theory to describe the uncertainty in big data sets and uses quantum computing to exponentially improve the efficiency of data set preprocessing and parameter estimation. Data envelopment analysis (DEA) is used to calculate the degree of importance of each data point, while the Harrow-Hassidim-Lloyd (HHL) algorithm and quantum swap circuits are used to improve the efficiency of high-dimensional data matrix calculation. Applying the quantum fuzzy regression model to small-scale financial data shows that its accuracy is greatly improved compared with the quantum regression model; moreover, owing to the introduction of quantum computing, the speed of handling high-dimensional data matrices improves exponentially compared with the fuzzy regression model. The proposed quantum fuzzy regression model combines the advantages of fuzzy theory and quantum computing: it can efficiently process high-dimensional data matrices and complete parameter estimation using quantum computing while retaining the uncertainty in big data. It is thus a new model for efficient and accurate big data processing in uncertain environments.
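The classical core that the quantum model accelerates can be pictured as weighted least squares, where per-point weights play the role of the importance degrees the paper derives via DEA. The sketch below is a classical stand-in only; it does not reproduce the DEA step or the HHL speed-up, and the function name is an assumption:

```python
def weighted_linreg(xs, ys, weights):
    """Weighted least squares for y = a + b * x, where each point's
    weight acts like an importance degree: heavily weighted points
    pull the fit more strongly. Returns (a, b)."""
    sw = sum(weights)
    mx = sum(w * x for w, x in zip(weights, xs)) / sw
    my = sum(w * y for w, y in zip(weights, ys)) / sw
    num = sum(w * (x - mx) * (y - my) for w, x, y in zip(weights, xs, ys))
    den = sum(w * (x - mx) ** 2 for w, x in zip(weights, xs))
    b = num / den
    return my - b * mx, b
```

With uniform weights this reduces to ordinary least squares; down-weighting suspect points is one classical way to express the uncertainty that fuzzy membership degrees capture.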
Environmental pollution, food safety and health are closely linked. A key challenge in addressing the problem of food safety and protecting public health is building an integrated knowledge base to inform policy and strengthen governance. This requires breaking down the trans-departmental information barrier across the environment, food and health domains to ensure the effective flow of data and the efficient utilization of resources, and facilitate the collaborative governance of food safety. Achieving this will be crucial for the development of health and medical care in China in the era of big data. Currently, the information resources commanded by various departments are incomplete and fragmented. Data resources are also organized in vertical silos and there is a lack of data sharing within and across policy streams. To provide the basis for more effective integrated collection and analysis of data in future, this study summarizes the information resources of various departments whose work relates to interactions between environment, food and health, and presents measures to strengthen top-down design, and establish unified data standards and a big data sharing platform. It also points to the need for increased training of data analysts with interdisciplinary expertise.
The industrial Internet has germinated from the integration of traditional industry and information technologies. An identifier is the identification of an object in the industrial Internet, and identifier technology is a method to validate the identification of an object and trace it. The identifier is a bridge connecting information islands in industry, as well as the data basis for building a technology application ecosystem based on identifier resolution. We propose three practical applications and application scenarios of the industrial Internet identifier in this paper. Future applications of identifier resolution in the industrial Internet field are also presented.
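Industrial-Internet identifier resolution is commonly described as a two-step lookup: resolve the enterprise prefix to a resolution node, then resolve the object suffix within it. The sketch below illustrates only that shape; the identifier format, registry layout, and example values are hypothetical, not any real resolution system's API:

```python
def resolve(identifier, registry):
    """Two-step resolution sketch for a prefix/suffix identifier
    (e.g. '88.123.456/device-42'): look up the enterprise prefix,
    then the object suffix within that node's records.
    Returns the registered record, or None if either step fails."""
    prefix, _, suffix = identifier.partition("/")
    node = registry.get(prefix)
    if node is None:
        return None
    return node.get(suffix)
```

A real deployment would replace the in-memory dicts with networked resolution services, but the prefix-then-suffix flow is the same.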
The Xingmeng orogenic belt lies in the eastern section of the Central Asian Orogenic Belt and is one of the key areas for studying the formation and evolution of the Central Asian Orogenic Belt. At present, the closure time of the Paleo-Asian Ocean in the Xingmeng orogenic belt is highly controversial, partly because the tectonic setting in which the Carboniferous volcanic rocks formed is unclear. Owing to the diversity of volcanic rock geochemical characteristics and their interpretations, there are two different views on the tectonic setting of the Carboniferous volcanic rocks in the Xingmeng orogenic belt: island arc and continental rift. In recent years, analyzing geochemical data with machine learning methods to infer the tectonic setting of basalts has become an important direction in the application of geological big data technology. This paper systematically collects Carboniferous basic rock data from the Dongwuqi and Keyouzhongqi areas of Inner Mongolia and the Beishan area in the southern section of the Central Asian Orogenic Belt. A random forest algorithm is trained on major- and trace-element data from global island arc basalts and rift basalts, and the trained model is then used to predict the tectonic setting of the Carboniferous magmatic rock samples from the Xingmeng orogenic belt. The predictions show that the island arc probability of most research samples is between 0.65 and 1, indicating that an island arc tectonic setting is more credible. This paper concludes that magmatism in the Beishan area in the Early Carboniferous may have formed at the height of subduction, while the Xingmeng orogenic belt in the Late Carboniferous may have been in the late subduction stage through collision or even early rifting. This temporal and spatial evolution shows that the subduction of the Paleo-Asian Ocean differed from west to east. The results of this paper therefore indicate that the subduction beneath the Xingmeng orogenic belt had not yet ended in the Carboniferous.
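The classification workflow above (train on labeled global basalts, predict setting for new samples) can be pictured with a deliberately tiny stand-in: a nearest-centroid rule over two geochemical features. This is not the paper's random forest, and the centroid values below are invented for illustration, not real training data:

```python
def nearest_centroid_predict(sample, centroids):
    """Toy geochemical discriminator: assign a basalt sample to the
    tectonic setting whose training centroid is nearest in feature
    space (squared Euclidean distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(sample, centroids[label]))

# Hypothetical class centroids over (TiO2 wt%, Nb/Y); illustrative only.
centroids = {"island_arc": (0.8, 0.2), "continental_rift": (2.5, 1.0)}
```

A random forest improves on this by learning nonlinear decision boundaries over many elements and by reporting class probabilities, which is what the 0.65-1 island-arc probabilities above refer to.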
Funding (Beijing meteorological big data study): Sponsored by the National Natural Science Foundation of China (51708004); the YuYou Talent Training Program of North University of Technology (215051360020XN160/009); the General Program of the Beijing Natural Science Foundation (8202017); and the 2018 Beijing Municipal University Academic Human Resources Development: Youth Talent Support Program (PXM2018-014212-000043).
Funding (underwater glider marine data study): Financially supported by the National Natural Science Foundation of China (Grant Nos. U1709202 and 61502069); the Foundation of the State Key Laboratory of Robotics (Grant No. 2015-o03); and the Fundamental Research Funds for the Central Universities (Grant Nos. DUT18JC39 and DUT17JC45).
文摘This study analyzes and summarizes seven main characteristics of the marine data sampled by multiple underwater gliders. These characteristics such as the big data volume and data sparseness make it extremely difficult to do some meaningful applications like early warning of marine environment. In order to make full use of the sea trial data, this paper gives the definition of two types of marine data cube which can integrate the big marine data sampled by multiple underwater gliders along saw-tooth paths, and proposes a data fitting algorithm based on time extraction and space compression(DFTS) to construct the temperature and conductivity data cubes. This research also presents an early warning algorithm based on data cube(EWDC) to realize the early warning of a new sampled data file.Experiments results show that the proposed methods are reasonable and effective. Our work is the first study to do some realistic applications on the data sampled by multiple underwater vehicles, and it provides a research framework for processing and analyzing the big marine data oriented to the applications of underwater gliders.
文摘The Cloud Computing Environment(CCE)developed for using the dynamic cloud is the ability of software and services likely to grow with any business.It has transformed the methodology for storing the enterprise data,accessing the data,and Data Sharing(DS).Big data frame a constant way of uploading and sharing the cloud data in a hierarchical architecture with different kinds of separate privileges to access the data.With the requirement of vast volumes of storage area in the CCEs,capturing a secured data access framework is an important issue.This paper proposes an Improved Secure Identification-based Multilevel Structure of Data Sharing(ISIMSDS)to hold the DS of big data in CCEs.The complex file partitioning technique is proposed to verify the access privilege context for sharing data in complex CCEs.An access control Encryption Method(EM)is used to improve the encryption.The Complexity is measured to increase the authentication standard.The active attack is protected using this ISIMSDS methodology.Our proposed ISIMSDS method assists in diminishing the Complexity whenever the user’s population is increasing rapidly.The security analysis proves that the proposed ISIMSDS methodology is more secure against the chosen-PlainText(PT)attack and provides more efficient computation and storage space than the related methods.The performance of the proposed ISIMSDS methodology provides more efficiency in communication costs such as encryption,decryption,and retrieval of the data.
Abstract: Big data and associated analytics have the potential to revolutionize healthcare through the tools and techniques they offer for managing and exploiting the large volumes of heterogeneous data collected in the healthcare domain. The strict security and privacy constraints on these data, however, pose a major obstacle to the successful use of these tools and techniques. The paper first describes the security challenges associated with big data analytics in healthcare research from a unique perspective based on the big data analytics pipeline. It then examines data safe havens as an approach to addressing these security challenges and argues for the approach by providing a detailed introduction to the security mechanisms implemented in a novel data safe haven. The CIMVHR Data Safe Haven (CDSH) was developed to support research into the health and well-being of Canadian military personnel, Veterans, and their families. The CDSH is shown to overcome the security challenges presented in the different stages of the big data analytics pipeline.
Abstract: The retail food environment (RFE) has a significant impact on people's dietary behavior and diet-related outcomes. Although RFE research has received a great deal of attention, very few studies shed light on the foodscape and assessment methodologies in the Chinese context. Based on open data obtained from Dianping.com and the AutoNavi map, we classified all food outlets into six types. Geographic Information Systems (GIS) techniques were employed to create two network buffer areas (1-km and 3-km) and calculate both absolute measures and relative measures (i.e., mRFEI and mRmix). We modified the calculation of the relative measures by adding items and assigning weights. The mean mRFEI across the communities was 10.45 for the 1-km buffer and 20.12 for the 3-km buffer, while the mean mRmix values for the two buffer sizes were 20.97 and 58.04, indicating that residents in Wuhan have better access to fresh and nutritious food within 3-km network buffers. Residents in urban areas are more likely to be exposed to an unhealthy food environment than those in rural areas, and residents in the Xinzhou and Qiaokou districts are more likely to be subjected to an unfavorable neighborhood RFE. These open data-driven methods for assessing the RFE in Wuhan, China may guide community-level food policy interventions and promote active living by shifting built environments to increase residents' access to healthy food.
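The standard mRFEI is the share of healthy retailers among all food retailers in a buffer, and the abstract says the authors modified the relative measures by assigning weights to outlet types. A minimal sketch of such a weighted index follows; the outlet-type names and weight values are illustrative assumptions, since the paper's actual weights are not given here:

```python
def weighted_rfei(counts, weights):
    """Weighted retail-food-environment index over one buffer area.

    counts  -- dict mapping outlet type -> number of outlets in the buffer
    weights -- dict mapping outlet type -> 'healthiness' weight in [0, 1]
               (types missing from `weights` are treated as weight 0)
    Returns the index as a percentage, or None for an empty buffer.
    """
    total = sum(counts.values())
    if total == 0:
        return None  # no outlets: index undefined rather than zero
    healthy = sum(n * weights.get(t, 0.0) for t, n in counts.items())
    return 100.0 * healthy / total
```

With binary weights (1 for supermarkets and fresh markets, 0 for everything else) this reduces to the conventional mRFEI; fractional weights let mixed outlet types contribute partially, which is the kind of modification the abstract describes.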
Funding: The National Natural Science Foundation of China (32161143025); The National Key R&D Program of China (2022YFE0119200); The Science & Technology Fundamental Resources Investigation Program of China (2022FY101902); The Mongolian Foundation for Science and Technology (NSFC_2022/01, CHN2022/276); The Key R&D and Achievement Transformation Plan Project in Inner Mongolia Autonomous Region (2023KJHZ0027); The Key Project of Innovation LREIS (KPI006); The Construction Project of China Knowledge Center for Engineering Sciences and Technology (CKCEST-2023-1-5).
Abstract: The Mongolian Plateau (MP), situated in the transitional zone between the Siberian taiga and the arid grasslands of Central Asia, plays a significant role as an Ecological Barrier (EB) with crucial implications for ecological and resource security in Northeast Asia. The EB is a vast concept and a complex issue involving many aspects, such as water, land, air, vegetation, animals, and people. It is very difficult to understand the EB as a whole without a comprehensive perspective, which traditional, separate studies cannot provide. Big data and artificial intelligence (AI) have enabled a shift in the research paradigm. Facing these requirements, this study identifies issues in the construction of the EB on the MP from a big data perspective, covering the issues, progress, and future recommendations for EB-construction studies using big data and AI. Current issues include the status of theoretical studies, technical bottlenecks, and insufficient synergistic analyses related to EB construction. Research progress introduces advances in scientific research driven by big data in three key areas of the MP: natural resources, the ecological environment, and sustainable development. For the future development of EB construction on the MP, it is recommended to utilize big data and intelligent computing technologies, integrate extensive regional data resources, develop precise algorithms and automated tools, and construct a big data collaborative innovation platform. This study aims to draw more attention to big data and AI applications in EB studies, thereby supporting the achievement of sustainable development goals on the MP and advancing the transformation of the research paradigm in the fields of resources and the environment.
Abstract: Nowadays, healthcare applications require large volumes of medical data to support physicians, academicians, pathologists, doctors, and other healthcare professionals. Advancements in the domains of Wireless Sensor Networks (WSN) and Multimedia Wireless Sensor Networks (MWSN) are tremendous. The MWSN is an advanced form of the conventional WSN that incorporates multimedia devices. Compared with a traditional WSN, the quantity of data transmitted in an MWSN is significantly higher due to the presence of multimedia content, so clustering techniques are deployed to keep energy utilization low. The current research work introduces a new Density Based Clustering (DBC) technique to achieve energy efficiency in MWSNs. The DBC technique is mainly employed for data collection in healthcare environments and primarily depends on three input parameters, namely remaining energy level, distance, and node centrality. In addition, two static data-collector points called Super Cluster Heads (SCHs) are placed, which collect the data from normal CHs and forward it directly to the Base Station (BS). The SCH supports multi-hop data transmission, which helps balance the available energy effectively. A detailed simulation analysis was conducted to showcase the superior performance of the DBC technique, and the results were examined under diverse aspects. The simulation outcomes show that the proposed DBC technique extends the network lifetime to a maximum of 16,500 rounds, which is significantly higher than that of existing methods.
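The abstract names the three inputs to cluster-head selection (remaining energy, distance, node centrality) but not the combination rule. A minimal sketch of electing a cluster head from a weighted fitness score follows; the weight values and the inverse-distance form are illustrative assumptions, not the paper's DBC formula:

```python
def ch_score(residual_energy, dist_to_sink, centrality, w=(0.5, 0.3, 0.2)):
    """Composite cluster-head fitness built from the three inputs the abstract
    names. Higher residual energy, shorter distance to the sink, and higher
    centrality all raise the score. Weights `w` are assumed, not from the paper."""
    return w[0] * residual_energy + w[1] / (1.0 + dist_to_sink) + w[2] * centrality

def elect_cluster_head(nodes):
    """nodes: list of (node_id, energy, distance, centrality) tuples.
    Returns the id of the node with the best fitness score."""
    best = max(nodes, key=lambda n: ch_score(n[1], n[2], n[3]))
    return best[0]
```

Re-running the election each round with updated residual energies is what rotates the cluster-head role and balances energy drain across the network.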
Abstract: The year 2011 is considered the first year of the big data market in China. China's big data market will grow faster than the global average, and China will see rapid expansion of this market in the next few years. This paper presents the overall development of big data in China in terms of market scale and development stages, enterprise development along the industry chain, technology standards, and industrial applications. The paper points out the issues and challenges facing big data development in China and proposes policies and support measures for big data transactions and personal privacy protection.
Funding: This work is supported by the National Natural Science Foundation of China (No. 62076042); the Key Research and Development Project of Sichuan Province (Nos. 2021YFSY0012, 2020YFG0307, 2021YFG0332); the Science and Technology Innovation Project of Sichuan (No. 2020017); the Key Research and Development Project of Chengdu (No. 2019-YF05-02028-GX); the Innovation Team of Quantum Security Communication of Sichuan Province (No. 17TD0009); and the Academic and Technical Leaders Training Funding Support Projects of Sichuan Province (No. 2016120080102643).
Abstract: In the era of big data, traditional regression models cannot handle uncertain big data efficiently and accurately. To make up for this deficiency, this paper proposes a quantum fuzzy regression model, which uses fuzzy theory to describe the uncertainty in big data sets and uses quantum computing to exponentially improve the efficiency of data-set preprocessing and parameter estimation. Data envelopment analysis (DEA) is used to calculate the degree of importance of each data point, while the Harrow-Hassidim-Lloyd (HHL) algorithm and quantum swap circuits are used to improve the efficiency of high-dimensional data-matrix calculation. Applying the quantum fuzzy regression model to small-scale financial data shows that its accuracy is greatly improved compared with the quantum regression model. Moreover, due to the introduction of quantum computing, the speed of dealing with high-dimensional data matrices improves exponentially compared with the fuzzy regression model. The proposed model combines the advantages of fuzzy theory and quantum computing: it can efficiently calculate high-dimensional data matrices and complete parameter estimation using quantum computing while retaining the uncertainty in big data. It is thus a new model for efficient and accurate big data processing in uncertain environments.
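The classical core that the quantum pipeline accelerates is a weighted linear solve: importance weights (from DEA in the paper) enter a least-squares estimate, and HHL replaces the classical linear-system solver. As a classical reference sketch only (not the quantum algorithm), the weighted estimate is:

```python
import numpy as np

def weighted_least_squares(X, y, w):
    """Classical weighted least squares: beta = (X^T W X)^{-1} X^T W y,
    with W = diag(w). In the paper the weights come from DEA and the linear
    solve is the step delegated to the HHL quantum algorithm; this function
    is only the classical analogue of that step."""
    W = np.diag(w)
    # Solve the normal equations rather than inverting explicitly.
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```

Down-weighting a point (small w_i) shrinks its influence on the fit, which is how fuzzy importance degrees soften the effect of uncertain data points.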
Funding: FORHEAD, with funding from the Rockefeller Brothers Fund (RBF).
Abstract: Environmental pollution, food safety, and health are closely linked. A key challenge in addressing food safety problems and protecting public health is building an integrated knowledge base to inform policy and strengthen governance. This requires breaking down cross-departmental information barriers across the environment, food, and health domains to ensure the effective flow of data and the efficient utilization of resources, and to facilitate the collaborative governance of food safety. Achieving this will be crucial for the development of health and medical care in China in the era of big data. Currently, the information resources commanded by various departments are incomplete and fragmented. Data resources are also organized in vertical silos, and there is a lack of data sharing within and across policy streams. To provide the basis for more effective integrated collection and analysis of data in the future, this study summarizes the information resources of the various departments whose work relates to interactions between environment, food, and health, and presents measures to strengthen top-down design and establish unified data standards and a big data sharing platform. It also points to the need for increased training of data analysts with interdisciplinary expertise.
Abstract: The industrial Internet has emerged from the integration of traditional industry and information technologies. An identifier is the identification of an object in the industrial Internet, and identifier technology is a method to validate the identification of an object and trace it. The identifier is a bridge connecting information islands in industry, as well as the data basis for building a technology application ecosystem based on identifier resolution. We propose three practical applications and application scenarios of the industrial Internet identifier in this paper. Future applications of identifier resolution in the industrial Internet field are also presented.
Abstract: The Xingmeng orogenic belt is located in the eastern section of the Central Asian Orogenic Belt and is one of the key areas for studying the formation and evolution of the Central Asian Orogenic Belt. At present, there is great controversy over the closure time of the Paleo-Asian Ocean in the Xingmeng orogenic belt, in part because the genetic tectonic setting of the Carboniferous volcanic rocks is not clear. Owing to the diversity of volcanic-rock geochemical characteristics and their interpretations, there are two different views on the tectonic setting of the Carboniferous volcanic rocks in the Xingmeng orogenic belt: island arc and continental rift. In recent years, analyzing geochemical data with machine learning methods to infer the tectonic setting of basalts has become an important direction in the application of geological big data technology. This paper systematically collects Carboniferous basic-rock data from the Dongwuqi and Keyouzhongqi areas of Inner Mongolia and from the Beishan area in the southern section of the Central Asian Orogenic Belt. A random forest algorithm is trained on major- and trace-element data from global island-arc basalts and rift basalts, and the trained model is then used to predict the tectonic setting of the Carboniferous magmatic-rock samples in the Xingmeng orogenic belt. The prediction results show that the island-arc probability of most of the research samples is between 0.65 and 1, indicating that an island-arc tectonic setting is more credible. This paper concludes that magmatism in the Beishan area in the Early Carboniferous may have formed at the height of subduction, while the Xingmeng orogenic belt in the Late Carboniferous may have been in the late subduction stage, transitioning to collision or even early rifting. This temporal and spatial evolution shows that the subduction of the Paleo-Asian Ocean differed from west to east. Therefore, the results of this paper indicate that subduction in the Xingmeng orogenic belt had not yet ended in the Carboniferous.
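The described workflow — train a random forest on labelled arc/rift basalt geochemistry, then read off per-sample arc probabilities — can be sketched as follows. The feature columns, hyperparameters, and synthetic data are illustrative assumptions; only the overall train-then-predict-probability structure mirrors the abstract:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_tectonic_classifier(X_train, y_train, n_trees=200, seed=0):
    """Fit a random forest on element-concentration feature vectors labelled
    rift (0) or island arc (1). Hyperparameters are assumed defaults, not
    those of the paper."""
    clf = RandomForestClassifier(n_estimators=n_trees, random_state=seed)
    clf.fit(X_train, y_train)
    return clf

def arc_probability(clf, X_new):
    """Per-sample probability of an island-arc setting for new samples,
    i.e. the quantity the paper reports as lying between 0.65 and 1."""
    arc_col = list(clf.classes_).index(1)
    return clf.predict_proba(X_new)[:, arc_col]
```

A threshold such as 0.65 on `arc_probability` then separates samples whose arc classification is considered credible, matching how the abstract reads off its prediction results.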