The main obstacle to the open sharing of scientific data is the lack of a legal protection system for intellectual property. This article analyzes the progress of research papers on intellectual property in scientific data in China through literature search and statistics. Currently, research subjects are unbalanced, research content is uneven, research methods are relatively uniform, and research depth is insufficient. It is recommended that different stakeholders engage in deep cross-disciplinary cooperation, further improve China's legal and policy protection system for scientific data intellectual property, and promote the open sharing of scientific data.
Scientific data refers to the data or data sets generated in the course of scientific research through observations, experiments, calculations, and analyses. These data are fundamental components for developing new knowledge, advancing technological progress, and creating wealth. In recent years, scientific data has been attracting more and more attention with respect to its preservation, archiving, and sharing.
Feature representation is one of the key issues in data clustering. The existing feature representation of scientific data is not sufficient, which to some extent affects the results of scientific data clustering. Therefore, this paper proposes the concept of a composite text description (CTD) and a CTD-based feature representation method for biomedical scientific data. The method uses different feature weighting algorithms to represent candidate features based on two types of data sources respectively, then combines and strengthens the two feature sets. Experiments show that the proposed feature representation method is more effective than traditional methods and can significantly improve the performance of biomedical data clustering.
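The two-source weighting-and-combination step described in this abstract can be sketched as follows. The paper's actual weighting algorithms and data sources are not specified here, so TF-IDF over two toy token lists is used purely as an illustrative stand-in, and the `boost` factor for shared features is a hypothetical parameter:

```python
import math
from collections import Counter

def tf_idf(docs):
    """Compute TF-IDF weights per document for a list of token lists."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # document frequency per term
    weights = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        weights.append({t: (tf[t] / total) * math.log(n / df[t]) for t in tf})
    return weights

def combine(features_a, features_b, boost=2.0):
    """Merge two candidate feature sets; terms found in both sources are strengthened."""
    merged = dict(features_a)
    for term, w in features_b.items():
        if term in merged:
            merged[term] = (merged[term] + w) * boost  # strengthen shared features
        else:
            merged[term] = w
    return merged

# Two toy "data sources" describing the same biomedical records
titles = [["gene", "expression", "cancer"], ["protein", "expression", "assay"]]
abstracts = [["cancer", "gene", "sequencing"], ["assay", "protein", "binding"]]

w_titles = tf_idf(titles)
w_abstracts = tf_idf(abstracts)
doc0 = combine(w_titles[0], w_abstracts[0])
```

Terms appearing in both sources (here "gene" and "cancer") end up with the largest weights, which is one plausible reading of "combines and finally strengthens the two feature sets".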
The analysis of current big data policy for scientific research can promote the ecological optimization of big data policy and is a positive response to the national big data strategy. This paper constructs a "dual three-dimensional framework" to analyze central and local scientific data policies from 2013 to 2022. With the dissemination and popularization of the concept of scientific data sharing, policies and regulations related to scientific data management have been issued, which has promoted the emergence of a scientific data policy ecology. The scientific data policy ecology is a complex, multi-collaborative dynamic system composed of policy texts, the policy environment, and related personnel; its core lies in the policy itself, and it aims to ensure the security of scientific data and promote the development of science. The scientific data policy ecology faces the following problems. In terms of policy texts, policy effectiveness is low and the use of policy tools is uneven. In terms of relevant personnel, the density of the cooperation network among the various subjects is low and there is a lack of high-quality talent. In terms of the policy environment, regional funding support is unbalanced. The paper also puts forward optimization strategies, such as strengthening the systematization of policy texts, improving the coordination of policy subjects to form a long-term cooperation network, and improving the compatibility between the environment, personnel, and policies.
As the Earth enters the Anthropocene, global sustainable development requires ecological research to evolve into a large-scale, quantitative, and predictive era. This necessitates a revolution in ecological observation technology and a long-term accumulation of scientific data. Ecosystem flux tower observation technology is well suited to meet this requirement. However, the unique advantages and potential value of global-scale flux tower observation are still not fully appreciated. Reviewing the development history of global meteorological observation and its scientific contributions to society offers important insights for re-recognizing the scientific mission of flux observation.
Purpose: In the open science era, it is typical to share project-generated scientific data by depositing it in an open and accessible database. Moreover, scientific publications are preserved in digital library archives. It is challenging to identify the data usage mentioned in the literature and associate it with its source. Here, we investigated the data usage of a government-funded cancer genomics project, The Cancer Genome Atlas (TCGA), via a full-text literature analysis.
Design/methodology/approach: We focused on identifying articles using the TCGA dataset and constructing linkages between the articles and the specific TCGA dataset. First, we collected 5,372 TCGA-related articles from PubMed Central (PMC). Second, we constructed a benchmark set of 25 full-text articles that truly used TCGA data in their studies, and we summarized the key features of the benchmark set. Third, the key features were applied to the remaining PMC full-text articles.
Findings: The number of publications that use TCGA data has increased significantly since 2011, although the TCGA project was launched in 2005. Additionally, we found that the critical areas of focus in studies using TCGA data were glioblastoma multiforme, lung cancer, and breast cancer; meanwhile, data from the RNA-sequencing (RNA-seq) platform are the most preferred.
Research limitations: The current workflow to identify articles that truly used TCGA data is labor-intensive. An automatic method is expected to improve performance.
Practical implications: This study will help cancer genomics researchers determine the latest advancements in cancer molecular therapy, and it will promote data sharing and data-intensive scientific discovery.
Originality/value: Few studies have investigated data usage by government-funded projects/programs since their launch. In this preliminary study, we extracted articles that use TCGA data from PMC, and we created a link between the full-text articles and the source data.
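The "apply key features to full text" step could be automated along the following lines. The benchmark-derived key features from the study are not listed in this abstract, so the regular expressions below are hypothetical stand-ins for "true usage" signals such as download statements and TCGA project codes:

```python
import re

# Illustrative usage signals (assumptions, not the study's actual feature set)
USAGE_PATTERNS = [
    re.compile(r"\bdownloaded\s+from\s+(the\s+)?TCGA\b", re.I),   # download statement
    re.compile(r"\bTCGA\s+(barcode|cohort|dataset|data\s+portal)\b", re.I),
    re.compile(r"\bTCGA-[A-Z0-9]{2,4}\b"),                        # project codes, e.g. TCGA-GBM
]

def truly_uses_tcga(full_text: str) -> bool:
    """Flag an article as truly using TCGA data if any usage signal appears."""
    return any(p.search(full_text) for p in USAGE_PATTERNS)
```

A passing mention in related work ("Prior work such as TCGA exists.") is not flagged, whereas "RNA-seq profiles were downloaded from the TCGA data portal." is.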
In the era of big data, national strategies and the rapid development of computing and storage technologies bring both opportunities and challenges to libraries' data services. Based on a survey of the literature on scientific data services in university libraries in the United States, the development of scientific data services is analyzed from three aspects: service types, service modes, and service contents. The paper also identifies opportunities and challenges in five areas (policy support, strengthening publicity, self-learning, self-positioning, and relying on embedded subject librarians) to promote the development of library scientific data services.
Scientific data citation is common practice in scientific research and academic writing under the data-intensive scientific research paradigm. Standardized citation of scientific data has received continuous attention from academia and policy management departments in recent years. To explore the characteristics and correlations of Chinese researchers' scientific data citations, based on the citations presented in academic papers, this study used CNKI as the data source to extract 771 papers from 12 academic journals published between 2017 and 2019. Drawing on the Chinese national standard Information Technology-Scientific Data Citation (GB/T 35294-2017), a set of variables was defined to reflect citation characteristics. First, 4,992 citation records of scientific data were manually identified and coded one by one, and the citation characteristics were presented as statistical frequency distributions. Then, chi-square tests, log-linear model analysis, and correspondence analysis were used to explore significant correlations among the characteristic variables. The study found that scientific data citation by Chinese researchers is widespread and the number of citations has increased year by year, but there are also a large number of irregular citations. At present, there are roughly two modes of citation labeling behavior, and the traditional document citation mode is still the mainstream method for data citation. Furthermore, the distributor type of scientific data may affect how citations are labeled. In addition, the completeness of the labeling elements differs across the bibliographic elements of scientific data. Irregular references to unique identifiers and resolution addresses are particularly prominent, which may be related to the type of distributor.
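A chi-square test of the kind used in this study checks whether two coded variables (say, distributor type vs. citation labeling mode) are independent. The contingency table below is invented for illustration, not taken from the study's data:

```python
# Hypothetical contingency table: distributor type (rows) x citation mode (cols)
table = [[120, 30],
         [60, 90]]

row_tot = [sum(r) for r in table]
col_tot = [sum(c) for c in zip(*table)]
grand = sum(row_tot)

# Pearson chi-square statistic: sum of (observed - expected)^2 / expected
chi2 = 0.0
for i, row in enumerate(table):
    for j, obs in enumerate(row):
        exp = row_tot[i] * col_tot[j] / grand
        chi2 += (obs - exp) ** 2 / exp

dof = (len(table) - 1) * (len(table[0]) - 1)
# chi2 = 50.0 with 1 degree of freedom far exceeds the 3.84 critical value
# at alpha = 0.05, so independence would be rejected for this toy table.
```

In practice `scipy.stats.chi2_contingency` performs this computation (plus the p-value) directly; the manual version above just makes the arithmetic explicit.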
The growing collection of scientific data in various web repositories is referred to as Scientific Big Data, as it fulfills the four "V's" of Big Data: volume, variety, velocity, and veracity. This phenomenon has created new opportunities for startups; for instance, extracting pertinent research papers from enormous knowledge repositories using innovative methods has become an important task for researchers and entrepreneurs. Traditionally, the content of papers is compared to list the relevant papers in a repository. This conventional method results in a long list of papers that is often impossible to interpret productively. Therefore, a novel approach that intelligently utilizes the available data is needed. The primary element of the scientific knowledge base is the research article, which consists of logical sections such as the Abstract, Introduction, Related Work, Methodology, Results, and Conclusion. This study utilizes these logical sections of research articles, because they hold significant potential for finding relevant papers. Comprehensive experiments were performed to determine the role of a logical-sections-based term indexing method in improving the quality of results (i.e., retrieving relevant papers). We proposed, implemented, and evaluated the logical-sections-based content comparison method against a standard term indexing method. The section-based approach outperformed the standard content-based approach in identifying relevant documents across all classified topics of computer science. Overall, the proposed approach extracted 14% more relevant results from the entire dataset. As the experimental results suggest that employing a finer content similarity technique improves the quality of results, the proposed approach lays a foundation for knowledge-based startups.
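One simple way to realize section-based term indexing is to weight a term by the section it occurs in before computing document similarity. The section weights and toy documents below are assumptions for illustration; the study's actual indexing scheme and weights are not given in this abstract:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def section_vector(paper, weights):
    """Index terms per logical section, scaling term counts by a section weight."""
    vec = Counter()
    for section, text in paper.items():
        w = weights.get(section, 1.0)
        for term in text.lower().split():
            vec[term] += w
    return vec

# Hypothetical weights: methodology terms count double compared to abstract terms
weights = {"abstract": 1.0, "methodology": 2.0}
query = {"abstract": "graph clustering",
         "methodology": "spectral clustering on citation graphs"}
paper_a = {"abstract": "spectral methods",
           "methodology": "spectral clustering of graphs"}
paper_b = {"abstract": "database indexing",
           "methodology": "btree storage layout"}

qa = cosine(section_vector(query, weights), section_vector(paper_a, weights))
qb = cosine(section_vector(query, weights), section_vector(paper_b, weights))
```

Here the methodologically related `paper_a` scores higher than the unrelated `paper_b`, which is the behavior the section-based comparison is meant to amplify.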
In order to realize the visualization of a three-dimensional data field (TDDF) in an instrument, two methods of TDDF visualization and the usual manner of fast graphics and image processing are analyzed. How to use the OpenGL technique and the characteristics of the analyzed data to construct a TDDF, along with the approaches to realistic processing and interactive processing, is described. Then the medium geometric element and a related realistic model are constructed by means of the first algorithm. Models for attaching the third dimension in a three-dimensional data field are presented. An example of TDDF realization in machine measuring is provided. The analysis of the resulting graphics indicates that the three-dimensional graphics built by the developed method feature good realism, fast processing, and strong interactivity.
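Before any OpenGL rendering, a TDDF pipeline needs to derive geometric elements from the scalar data. The abstract's algorithms are not detailed, so the sketch below only shows the data-side step under assumed conventions: build a small scalar field and extract the grid points lying on a chosen iso-value, which a renderer would then draw:

```python
def scalar_field(nx, ny, nz):
    """A tiny synthetic 3-D scalar field: value = x^2 + y^2 + z^2 at each grid point."""
    return [[[x * x + y * y + z * z for z in range(nz)]
             for y in range(ny)] for x in range(nx)]

def iso_points(field, iso):
    """Extract grid points whose value equals the iso-value (geometric elements to draw)."""
    pts = []
    for x, plane in enumerate(field):
        for y, row in enumerate(plane):
            for z, v in enumerate(row):
                if v == iso:
                    pts.append((x, y, z))
    return pts

field = scalar_field(4, 4, 4)
pts = iso_points(field, 9)  # points on the sphere x^2 + y^2 + z^2 = 9
```

Real systems use surface-extraction algorithms (e.g., marching cubes) rather than exact-value matching, but the separation of "extract geometry from the data field" from "render it with OpenGL" is the same.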
Much research is dependent on Information and Communication Technologies (ICT). Researchers in different research domains have set up their own ICT systems (data labs) to support their research, from data collection (observation, experiment, simulation) through analysis (analytics, visualisation) to publication. However, too frequently the Digital Objects (DOs) upon which the research results are based are not curated and are thus available neither for reproduction of the research nor for utilization in other (e.g., multidisciplinary) research. The key to curation is rich metadata recording not only a description of the DO and the conditions of its use but also the provenance: the trail of actions performed on the DO along the research workflow. There are increasing real-world requirements for multidisciplinary research. With DOs in domain-specific ICT systems (silos), commonly with inadequate metadata, such research is hindered. Despite wide agreement on principles for achieving FAIR (findable, accessible, interoperable, and reusable) utilization of research data, current practices fall short. FAIR DOs offer a way forward. The paradoxes, barriers, and possible solutions are examined. The key is persuading researchers to adopt best practices, which implies decreasing the cost (easy-to-use autonomic tools) and increasing the benefit (incentives such as acknowledgement and citation) while maintaining researcher independence and flexibility.
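A minimal DO metadata record of the kind this abstract calls for (description, conditions of use, and an ordered provenance trail) might look like the sketch below. The field names and the example handle-style identifier are illustrative assumptions, not a FAIR DO standard:

```python
from datetime import datetime, timezone

def new_digital_object(identifier, description, license_):
    """A minimal metadata record: description, terms of use, and a provenance trail."""
    return {
        "id": identifier,            # persistent identifier (illustrative format)
        "description": description,
        "license": license_,         # conditions of use
        "provenance": [],            # ordered trail of actions along the workflow
    }

def record_action(do, agent, action):
    """Append one provenance entry: who did what, and when."""
    do["provenance"].append({
        "agent": agent,
        "action": action,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

do = new_digital_object("hdl:21.T11148/example", "Simulated flux dataset", "CC-BY-4.0")
record_action(do, "alice", "collected via simulation run")
record_action(do, "bob", "normalized units to SI")
```

The point of the ordered trail is that a later researcher can replay or audit exactly what was done to the DO, which is what makes reproduction and multidisciplinary reuse feasible.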
Big data is a strategic highland in the era of knowledge-driven economies, and it is also a new type of strategic resource for all nations. Big data collected from space for Earth observation, so-called Big Earth Data, is creating new opportunities for the Earth sciences and revolutionizing methodologies and thought patterns. It has the potential to advance the in-depth development of the Earth sciences and bring more exciting scientific discoveries. The Academic Divisions of the Chinese Academy of Sciences Forum on Frontiers of Science and Technology for Big Earth Data from Space was held in Beijing in June 2015. The forum analyzed the development of Earth observation technology and big data, explored the concepts and scientific connotations of Big Earth Data from space, discussed the correlation between Big Earth Data and Digital Earth, and dissected the potential of Big Earth Data from space to promote scientific discovery in the Earth sciences, especially concerning global change.
Big data is a revolutionary innovation that has allowed the development of many new methods in scientific research. This new way of thinking has encouraged the pursuit of new discoveries. Big data occupies the strategic high ground in the era of knowledge economies and constitutes a new national and global strategic resource. "Big Earth data", derived from, but not limited to, Earth observation, has macro-level capabilities that enable rapid and accurate monitoring of the Earth, and is becoming a new frontier contributing to the advancement of Earth science and significant scientific discoveries. Within the context of the development of big data, this paper analyzes the characteristics of scientific big data and recognizes its great potential for development, particularly with regard to the role that big Earth data can play in promoting the development of Earth science. On this basis, the paper outlines the Big Earth Data Science Engineering Project (CASEarth) of the Chinese Academy of Sciences Strategic Priority Research Program. Big data is at the forefront of the integration of geoscience, information science, and space science and technology, and big Earth data is expected to provide new prospects for the development of Earth science.
Animal models are crucial for the study of severe infectious diseases and are essential for determining their pathogenesis and developing vaccines and drugs. Animal experiments involving risk group 3 agents such as SARS-CoV, HIV, M. tb, H7N9, and Brucella must be conducted in an Animal Biosafety Level 3 (ABSL-3) facility. Because of the in vivo work, the biosafety risk in ABSL-3 facilities is higher than in BSL-3 facilities. Management practices must therefore be strengthened to ensure biosafety in ABSL-3 facilities. At the same time, the reliability of scientific results obtained from animal experiments conducted in ABSL-3 laboratories cannot be ignored. It is of great practical significance to study overall biosafety concepts that can increase the quality of scientific data. Based on the management of animal experiments in the ABSL-3 Laboratory of Wuhan University, combined with relevant international and domestic literature, we identify the main safety issues and the factors affecting animal experiment results in ABSL-3 facilities. Based on these issues, management practices for animal experiments in ABSL-3 facilities are proposed that take into account both biosafety and scientifically sound data.
The impressive conversational and programming abilities of ChatGPT make it an attractive tool for facilitating the education of beginners in bioinformatics data analysis. In this study, we propose an iterative model to fine-tune instructions for guiding a chatbot in generating code for bioinformatics data analysis tasks. We demonstrate the feasibility of the model by applying it to various bioinformatics topics. Additionally, we discuss practical considerations and limitations regarding the use of the model in chatbot-aided bioinformatics education.
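The iterative instruction-refinement loop described here can be sketched as follows. No LLM API is called; a deterministic stub stands in for the chatbot, and the acceptance test and refinement rule are hypothetical simplifications of what the study's model would involve:

```python
def chatbot(instruction):
    """Stub chatbot: returns complete code only once the instruction names the input format."""
    if "FASTA" in instruction:
        return ("def gc_content(seq):\n"
                "    return (seq.count('G') + seq.count('C')) / len(seq)")
    return "def gc_content(seq): ..."  # under-specified prompt yields a placeholder

def refine(instruction, max_rounds=3):
    """Iteratively fine-tune the instruction until the generated code passes a check."""
    for _ in range(max_rounds):
        code = chatbot(instruction)
        if "..." not in code:  # acceptance test: no placeholder left in the code
            return instruction, code
        # refinement step: add the missing detail the chatbot needs
        instruction += " The input is a FASTA sequence string."
    return instruction, code

final_instruction, final_code = refine("Write code to compute GC content.")
```

The loop converges in two rounds for this toy case: the first response is a placeholder, the refined instruction then elicits working code.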
Funding: Ministry of Science and Technology "National Science and Technology Platform Program" (2005DKA31800)
Funding: supported by Agridata, a sub-program of the National Science and Technology Infrastructure Program (Grant No. 2005DKA31800)
Funding: supported by a grant from the project "Trends, Priorities, and Logic of Science and Technology Policy in the New U.S. Administration (Biden Administration)", commissioned by the International Department of the Ministry of Science and Technology of China (2021ICR12)
Funding: Science and Technology Service Network Initiative of the Chinese Academy of Sciences (KFJ-SW-STS-169); National Natural Science Foundation of China (41671045 and 31600347)
Funding: supported by the National Population and Health Scientific Data Sharing Program of China, the Knowledge Centre for Engineering Sciences and Technology (Medical Centre), and the Fundamental Research Funds for the Central Universities (Grant No. 13R0101)
Funding: supported by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (2020-0-01592), and by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2019R1F1A1058548)
Fund: This project is supported by the National Natural Science Foundation of China (No. 50405009).
Abstract: To realize the visualization of a three-dimensional data field (TDDF) in an instrument, two methods of TDDF visualization and the usual means of fast graphics and image processing are analyzed, and the use of the OpenGL technique, together with the characteristics of the analyzed data, to construct a TDDF is described, along with approaches to realistic rendering and interactive processing. The medium geometric element and a related realistic model are then constructed by means of the first algorithm, and models for attaching the third dimension to the three-dimensional data field are presented. An example of TDDF realization in machine measuring is provided. Analysis of the resulting graphics indicates that the three-dimensional graphics built by the developed method feature good realism, fast processing, and strong interactivity.
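The abstract describes the geometric-element construction only at a high level. As an illustration of the underlying idea (extracting a renderable surface from a scalar data field), here is a minimal NumPy sketch on a toy field; it is not the paper's algorithm, just a crude voxel-level surface extraction under assumed data:

```python
import numpy as np

# A toy 3-D scalar field on a regular grid -- a stand-in for measured instrument data.
n = 32
x, y, z = np.meshgrid(*([np.linspace(-1, 1, n)] * 3), indexing="ij")
field = np.sqrt(x**2 + y**2 + z**2)  # distance from the origin: iso-surfaces are spheres

# Crude surface extraction at an isovalue: a voxel lies on the surface if it is
# inside the iso-region but has at least one neighbour outside it.
iso = 0.5
inside = field < iso
surface = inside & ~(
    np.roll(inside, 1, 0) & np.roll(inside, -1, 0)
    & np.roll(inside, 1, 1) & np.roll(inside, -1, 1)
    & np.roll(inside, 1, 2) & np.roll(inside, -1, 2)
)
print(int(surface.sum()), "surface voxels")
```

A real renderer would turn such voxels (or, better, interpolated cell crossings as in marching cubes) into triangles and hand them to OpenGL for shading and interaction.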
Abstract: Much research is dependent on Information and Communication Technologies (ICT). Researchers in different research domains have set up their own ICT systems (data labs) to support their research, from data collection (observation, experiment, simulation) through analysis (analytics, visualisation) to publication. However, too frequently the Digital Objects (DOs) on which the research results are based are not curated and are thus neither available for reproduction of the research nor usable for other (e.g., multidisciplinary) research purposes. The key to curation is rich metadata recording not only a description of the DO and the conditions of its use but also the provenance: the trail of actions performed on the DO along the research workflow. There are increasing real-world requirements for multidisciplinary research, yet with DOs held in domain-specific ICT systems (silos), commonly with inadequate metadata, such research is hindered. Despite wide agreement on principles for achieving FAIR (findable, accessible, interoperable, and reusable) utilization of research data, current practices fall short. FAIR DOs offer a way forward. The paradoxes, barriers, and possible solutions are examined. The key is persuading the researcher to adopt best practices, which implies decreasing the cost (easy-to-use autonomic tools) and increasing the benefit (incentives such as acknowledgement and citation) while maintaining researcher independence and flexibility.
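The combination of a descriptive record plus a provenance trail can be pictured with a minimal data structure. The field names below are illustrative assumptions, not taken from any specific FAIR metadata standard:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# A hypothetical, minimal metadata record for a Digital Object (DO).
@dataclass
class ProvenanceEvent:
    action: str     # e.g. "collected", "cleaned", "analysed"
    agent: str      # the person or tool that performed the action
    timestamp: str  # when it happened, in ISO 8601

@dataclass
class DigitalObject:
    identifier: str  # a persistent identifier makes the DO findable
    description: str
    licence: str     # explicit conditions of use support reusability
    provenance: list = field(default_factory=list)

    def record(self, action: str, agent: str) -> None:
        """Append one step of the research workflow to the provenance trail."""
        self.provenance.append(
            ProvenanceEvent(action, agent, datetime.now(timezone.utc).isoformat())
        )

do = DigitalObject("hdl:0000/example", "simulated temperature grid", "CC-BY-4.0")
do.record("collected", "simulation-run-42")
do.record("analysed", "analytics-pipeline")
print(len(do.provenance), "provenance events")
```

The point of the `record` method is that provenance accumulates automatically as the workflow runs, rather than being reconstructed (or omitted) at publication time.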
基金supported by the Academic Divisions of the Chinese Academy of Sciences Forum on Frontiers of Science and Technology for Big Earth Data from Space
Abstract: Big data is a strategic highland in the era of knowledge-driven economies and a new type of strategic resource for all nations. Big data collected from space for Earth observation, so-called Big Earth Data, is creating new opportunities for the Earth sciences and revolutionizing methodologies and thought patterns. It has the potential to advance the in-depth development of the Earth sciences and bring more exciting scientific discoveries. The Academic Divisions of the Chinese Academy of Sciences Forum on Frontiers of Science and Technology for Big Earth Data from Space was held in Beijing in June 2015. The forum analyzed the development of Earth observation technology and big data, explored the concepts and scientific connotations of Big Earth Data from space, discussed the relationship between Big Earth Data and Digital Earth, and examined the potential of Big Earth Data from space to promote scientific discovery in the Earth sciences, especially concerning global change.
Fund: This work is supported by the Strategic Priority Research Program of the Chinese Academy of Sciences, project title: CASEarth (XDA19000000) and Digital Belt and Road (XDA19030000).
Abstract: Big data is a revolutionary innovation that has enabled the development of many new methods in scientific research. This new way of thinking has encouraged the pursuit of new discoveries. Big data occupies the strategic high ground in the era of knowledge economies and also constitutes a new national and global strategic resource. “Big Earth data”, derived from, but not limited to, Earth observation, has macro-level capabilities that enable rapid and accurate monitoring of the Earth, and it is becoming a new frontier contributing to the advancement of Earth science and significant scientific discoveries. Within the context of the development of big data, this paper analyzes the characteristics of scientific big data and recognizes its great potential for development, particularly the role that big Earth data can play in promoting the development of Earth science. On this basis, the paper outlines the Big Earth Data Science Engineering Project (CASEarth) of the Chinese Academy of Sciences Strategic Priority Research Program. Big data is at the forefront of the integration of geoscience, information science, and space science and technology, and big Earth data is expected to provide new prospects for the development of Earth science.
Fund: We are grateful for funding from the National Key Research and Development Program of China (grant No. 2016YFC1202203).
Abstract: Animal models are crucial for the study of severe infectious diseases and essential for determining their pathogenesis and for developing vaccines and drugs. Animal experiments involving risk grade 3 agents such as SARS-CoV, HIV, M. tb, H7N9, and Brucella must be conducted in an Animal Biosafety Level 3 (ABSL-3) facility. Because of the in vivo work, the biosafety risk in ABSL-3 facilities is higher than that in BSL-3 facilities, so management practices must undoubtedly be strengthened to ensure biosafety in the ABSL-3 facility. At the same time, the need for reliable scientific results from animal experiments conducted in ABSL-3 laboratories cannot be ignored, and it is of great practical significance to study overall biosafety concepts that can also increase the quality of scientific data. Based on the management of animal experiments in the ABSL-3 Laboratory of Wuhan University, combined with relevant international and domestic literature, we identify the main safety issues and the factors affecting animal experiment results at ABSL-3 facilities. On that basis, we propose management practices for animal experiments in ABSL-3 facilities that take into account both biosafety and scientifically sound data.
Fund: NIH-NIGMS grants P20 GM103434, U54 GM-104942, and 1P20 GM121322 to GH; NIH-NLM grant R01LM013438 to LL.
Abstract: The impressive conversational and programming abilities of ChatGPT make it an attractive tool for facilitating the education of bioinformatics data analysis for beginners. In this study, we proposed an iterative model for fine-tuning the instructions that guide a chatbot in generating code for bioinformatics data analysis tasks. We demonstrated the feasibility of the model by applying it to various bioinformatics topics. Additionally, we discussed practical considerations and limitations regarding the use of the model in chatbot-aided bioinformatics education.
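An iterative instruction-refinement model of this kind can be pictured as a refine-and-test loop. The sketch below is an assumption about the general shape of such a loop, not the authors' exact procedure; `ask_chatbot` and `run_and_check` are hypothetical stand-ins for a chatbot API call and a check that the generated code works on a small example:

```python
# Hedged sketch of an iterative instruction-refinement loop for chatbot-aided
# code generation. The two stand-in functions simulate the chatbot and the check.

def ask_chatbot(instruction: str) -> str:
    """Stand-in for a chatbot call: echo the instruction into a 'generated' reply."""
    return f"# code generated for: {instruction}"

def run_and_check(code: str) -> bool:
    """Stand-in for executing the generated code and validating its output."""
    return "input format" in code and "expected output" in code

def refine(instruction: str, feedback: str) -> str:
    """Tighten the instruction with one more constraint drawn from the feedback."""
    return f"{instruction}; {feedback}"

def iterative_instruction(task: str, max_rounds: int = 5) -> str:
    """Refine the instruction until the generated code passes the check."""
    instruction = task
    feedback_queue = ["specify the input format", "specify the expected output"]
    for _ in range(max_rounds):
        code = ask_chatbot(instruction)
        if run_and_check(code):
            return instruction  # the instruction is now good enough to reuse in teaching
        if feedback_queue:
            instruction = refine(instruction, feedback_queue.pop(0))
    return instruction

final = iterative_instruction("count reads per gene from a BAM file")
print(final)
```

The pedagogical payoff is the final instruction itself: once it reliably elicits working code, it can be handed to students as a template for similar analysis tasks.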