Abstract: Based on the Vlasov-Poisson equation of kinetic theory, a surface Coulomb explosion model of SiO2 material irradiated by ultra-short laser pulses is established. The non-equilibrium free-electron distribution resulting from the two mechanisms of multi-photon ionization and avalanche ionization is computed. A quantitative analysis of the Coulomb explosion induced by the self-consistent electric field is given, and the impact of the laser pulse parameters on surface ablation is also discussed. The results show that the electron relaxation time is not constant but depends on the microscopic state of the electrons, so the relaxation-time approximation is not valid on the femtosecond time scale. The ablation depths computed with the model agree well with experimental results for pulse durations from 0 to 1 ps.
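For reference, the collisionless Vlasov equation coupled to the Poisson equation for the self-consistent field takes the standard form below; the paper's specific ionization source terms and boundary conditions are not reproduced here, and the symbols follow common convention (electron distribution f, charge q, mass m, potential \varphi) rather than the paper's own notation.

\frac{\partial f}{\partial t} + \mathbf{v}\cdot\nabla_{\mathbf{x}} f + \frac{q}{m}\,\mathbf{E}\cdot\nabla_{\mathbf{v}} f = \left(\frac{\partial f}{\partial t}\right)_{\mathrm{ion}}, \qquad \mathbf{E} = -\nabla\varphi, \qquad \nabla^{2}\varphi = -\frac{\rho}{\varepsilon_{0}},

where \rho is the net charge density of the ions and free electrons, and the right-hand side stands for the multi-photon and avalanche ionization sources.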
Funding: Financial support for this study was jointly provided by the National Natural Science Foundation of China (Grant No. 41430750), the National Key Basic Research Program of China (Grant Nos. 2015CB452704, 2016YFC0402301) and the Chinese Academy of Sciences (Grant Nos. KFJ-EW-STS-008, KFJSW-STS-175).
Abstract: Surface soil/sediment samples were collected from the water-level fluctuation zone (WLFZ), cultivated land and forest land at 50 grid points in the Shenjia watershed, Three Gorges Reservoir area, in August 2013. The spatial distribution, sources and ecological risk of Arsenic (As), Cadmium (Cd), Chromium (Cr), Copper (Cu), Nickel (Ni), Lead (Pb) and Zinc (Zn) were analyzed in this study. The results showed that all tested metals had similar distribution patterns except Ni and Cr, with high concentrations distributed in the southwest of the study area (WLFZ and watershed outlet). Ni and Cr, which were highly positively correlated and present in high concentrations, were primarily distributed in the southern and middle zones of the study area. Lower concentrations of all metals were uniformly distributed west of the high-elevation zones and in the forest land. Factor analysis (FA) and factor analysis-multiple linear regression (FA-MLR) showed that the major sources of Cd were fertilizer and traffic, which together accounted for 87% of Cd. As, Zn and Cu were primarily supplied by industrial and domestic sources, accounting for 76% of As, 75% of Cu and 67% of Zn. Surface soils/sediments of the study watershed contaminated by Cd represent a high ecological risk, whereas the other metals represent low ecological risks. The potential ecological risk index (PERI) analysis indicated a low (wide-range) ecological risk and a moderate (small-range) ecological risk primarily distributed at the outlet of the study watershed. Fertilizers and traffic are the primary sources of Cd pollution and should be more closely controlled for the purposes of water quality and ecological conservation.
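As a concrete illustration of the PERI analysis mentioned above, the sketch below computes Hakanson-style single-metal risk factors and their sum (RI) for one sample. The toxic-response factors are the commonly used Hakanson values; the background and measured concentrations are placeholder numbers, not data from this study.

# Hakanson-style potential ecological risk index (PERI) for one sample.
# Toxic-response factors T_r are the commonly used Hakanson values;
# background values C_n and measured concentrations below are placeholders.

T_R = {"As": 10, "Cd": 30, "Cr": 2, "Cu": 5, "Ni": 5, "Pb": 5, "Zn": 1}

def peri(measured, background):
    """Return (per-metal risk factors E_r, overall risk index RI)."""
    e_r = {}
    for metal, c in measured.items():
        c_f = c / background[metal]      # contamination factor C_f = C / C_n
        e_r[metal] = T_R[metal] * c_f    # single-metal risk factor E_r
    return e_r, sum(e_r.values())        # RI is the sum over all metals

# Placeholder example values (mg/kg), for illustration only.
measured   = {"As": 12.0, "Cd": 0.9, "Cr": 75.0, "Cu": 35.0, "Ni": 32.0, "Pb": 28.0, "Zn": 95.0}
background = {"As": 10.0, "Cd": 0.3, "Cr": 80.0, "Cu": 30.0, "Ni": 30.0, "Pb": 25.0, "Zn": 80.0}

e_r, ri = peri(measured, background)
print(e_r, ri)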
Abstract: Nowadays, the data collected by the Data Acquisition System (DAS) or Distributed Control System (DCS) of a power plant and stored in a database is growing larger and larger, yet abundant valuable knowledge is hidden within it. Analyzing and understanding data stored at such a scale is beyond manual capacity; fortunately, data mining techniques have emerged to meet this need. In this paper, we explain the basic concepts and general knowledge of data mining, analyze its characteristics and research methods, and give some typical applications of a data mining system based on a power plant real-time database on an intranet.
Abstract: The distribution of vertical stress for both the active and passive states in a silo with a central inner downcomer is reported in this paper. Experimental measurements of the axial distribution of vertical stress for both states in the silo are in good agreement with those predicted by theoretical analysis. The mean axial stress is reduced by the presence of the inner downcomer in the silo.
Abstract: This paper focuses on exporting relational data into Extensible Markup Language (XML). First, the characteristics of relational schemas represented by E-R diagrams and of XML document type definitions (DTDs) are analyzed. Secondly, the corresponding mapping rules are proposed. Finally, an algorithm based on edge tables is presented. There are two key points in the algorithm. One is that the edge table is used to store the information of the relational dictionary, which makes the algorithm efficient. The other is that structural information can be obtained from the resulting DTDs, and other applications can use this structural information to optimize their query processing.
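The paper's edge-table algorithm is not reproduced in the abstract; the sketch below only illustrates the general idea of deriving a DTD from relational metadata, using a hypothetical two-table schema in which a foreign-key relationship becomes parent-child nesting.

# Derive a simple DTD from relational metadata (hypothetical schema).
# Each table becomes an element whose columns are PCDATA children;
# a table referenced by a foreign key nests the referencing table's element.

schema = {
    "department": {"columns": ["dept_id", "dept_name"], "children": ["employee"]},
    "employee":   {"columns": ["emp_id", "emp_name", "salary"], "children": []},
}

def schema_to_dtd(schema, root="department"):
    dtd = ["<!ELEMENT db (%s*)>" % root]
    leaf_decls = []
    for table, meta in schema.items():
        kids = ["%s*" % c for c in meta["children"]]       # nested child tables
        content = ", ".join(meta["columns"] + kids)
        dtd.append("<!ELEMENT %s (%s)>" % (table, content))
        leaf_decls += ["<!ELEMENT %s (#PCDATA)>" % c for c in meta["columns"]]
    return "\n".join(dtd + sorted(set(leaf_decls)))

print(schema_to_dtd(schema))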
Abstract: Future usage of heterogeneous databases will increasingly involve WWW and CORBA environments. The integration of WWW databases and CORBA standards is discussed. These two techniques need to be merged to make distributed usage of heterogeneous databases user friendly. In an environment integrating WWW databases and CORBA technologies, CORBA can be used to access heterogeneous data sources on the Internet, and such applications can run distributed transactions to ensure data consistency and integrity. This technology has good application prospects.
Funding: Project (NZ1628) supported by the Natural Science Foundation of Ningxia, China.
Abstract: Building a cloud geodatabase for a sponge city is crucial for integrating the geospatial information dispersed in various departments, so as to provide multi-user highly concurrent access and retrieval, high scalability and availability, and efficient storage and management. In this study, a Hadoop distributed computing framework, comprising the Hadoop distributed file system and MapReduce (mapper and reducer), is first designed as a parallel computing framework for processing massive spatial data. Then, access control with a series of standard application programming interfaces for different functions is designed, including a spatial data storage layer, a cloud geodatabase access layer, a spatial data access layer and a spatial data analysis layer. Subsequently, a retrieval model is designed, including direct addressing by file name, three-level concurrent retrieval and block data retrieval strategies. The main functions are realised, including real-time concurrent access, high-performance computing, communication, massive data storage, and efficient retrieval and scheduling decisions on the multi-scale, multi-source and massive spatial data. Finally, the performance of the Hadoop cloud geodatabase is validated and compared with that of an Oracle database. The cloud geodatabase for the sponge city can avoid redundant configuration of personnel, hardware and software; support data transfer, model debugging and application development; and provide accurate, real-time, virtual, intelligent, reliable, elastically scalable, dynamic and on-demand cloud services of basic and thematic geographic information for the construction and management of the sponge city.
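The abstract does not give the actual mapper and reducer code; the sketch below is a minimal in-process imitation of the MapReduce pattern it describes, binning point records into grid tiles (map) and summarizing records per tile (reduce) so that tile keys can later drive block-wise retrieval. The record format and tile size are assumptions for illustration; in Hadoop Streaming the two functions would be separate scripts exchanging key/value lines via stdin and stdout.

from collections import defaultdict

TILE = 1000.0  # assumed tile size in map units

def mapper(record):
    """Map one 'id,x,y' record to a (tile_key, record_id) pair."""
    rec_id, x, y = record.split(",")
    tx, ty = int(float(x) // TILE), int(float(y) // TILE)
    yield ("%d_%d" % (tx, ty), rec_id)

def reducer(tile_key, rec_ids):
    """Reduce all records of one tile to a (tile_key, count) summary."""
    return tile_key, len(list(rec_ids))

def run(records):
    groups = defaultdict(list)                # shuffle/sort stage
    for record in records:
        for key, value in mapper(record):
            groups[key].append(value)
    return [reducer(k, v) for k, v in groups.items()]

print(run(["p1,120.5,300.2", "p2,1500.0,80.0", "p3,130.0,310.0"]))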
Funding: Supported by the National Natural Science Foundation of China (No. 50475117) and the Tianjin Natural Science Foundation (No. 06YFJMJC03700).
Abstract: Integrating heterogeneous data sources is a precondition for enterprises to share data. Highly efficient data updating can both save system expense and offer real-time data, and rapid data modification in the pre-processing area of the data warehouse is a hot research issue. An extract-transform-load (ETL) design is proposed based on a new data algorithm called Diff-Match, which is developed using schema matching and data-filtering technology. It can accelerate data renewal, filter the heterogeneous data, and seek out differing data sets. Its efficiency has been proved by successful application in an electrical apparatus enterprise group.
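The Diff-Match algorithm itself is not detailed in the abstract; as a rough illustration of the difference-seeking step, the sketch below compares a source extract with the warehouse copy by primary key and a row hash, producing the insert, update and delete sets that an ETL load would apply. The table layout and key names are hypothetical.

import hashlib

def row_hash(row, key="id"):
    """Hash the non-key columns so changed rows can be detected cheaply."""
    payload = "|".join("%s=%s" % (c, row[c]) for c in sorted(row) if c != key)
    return hashlib.md5(payload.encode()).hexdigest()

def diff(source_rows, warehouse_rows, key="id"):
    src = {r[key]: r for r in source_rows}
    dwh = {r[key]: r for r in warehouse_rows}
    inserts = [src[k] for k in src.keys() - dwh.keys()]
    deletes = [dwh[k] for k in dwh.keys() - src.keys()]
    updates = [src[k] for k in src.keys() & dwh.keys()
               if row_hash(src[k]) != row_hash(dwh[k])]
    return inserts, updates, deletes

source    = [{"id": 1, "name": "pump", "qty": 5}, {"id": 2, "name": "valve", "qty": 7}]
warehouse = [{"id": 1, "name": "pump", "qty": 4}, {"id": 3, "name": "motor", "qty": 2}]
print(diff(source, warehouse))   # one insert (id 2), one update (id 1), one delete (id 3)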
Funding: Funded by the National 973 Program of China (No. 2003CB415205), the National Natural Science Foundation of China (Nos. 40523005, 60573183, 60373019) and the Open Research Fund Program of LIESMARS (No. WKL(04)0303).
Abstract: Spatial objects have two types of attributes: geometrical attributes and non-geometrical attributes, which belong to two different attribute domains (the geometrical and non-geometrical domains). Although geometrically scattered in the geometrical domain, spatial objects may be similar to each other in the non-geometrical domain. Most existing clustering algorithms group spatial datasets into compact regions in the geometrical domain without considering the non-geometrical domain. However, many application scenarios require clustering results in which a cluster has not only high proximity in the geometrical domain but also high similarity in the non-geometrical domain; that is, constraints are imposed on the clustering goal from both domains simultaneously. Such a clustering problem is called dual clustering. As distributed clustering applications become more and more popular, it is necessary to tackle the dual clustering problem in distributed databases, and the DCAD algorithm is proposed to solve it. DCAD consists of two levels of clustering: local clustering and global clustering. First, clustering is conducted at each local site with a local clustering algorithm, and the features of the local clusters are extracted. Second, the local features from each site are sent to a central site, where global clustering is obtained based on those features. Experiments on both artificial and real spatial datasets show that DCAD is effective and efficient.
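The abstract does not specify which local or global clustering algorithms DCAD uses; the sketch below only illustrates its two-level structure, using a simple leader-style clustering on a combined geometric-plus-attribute distance at each site, sending (centroid, mean attribute, size) features to a central site, and clustering those features again there. The thresholds, weights and data are placeholders.

import math

def dist(a, b, w_geo=1.0, w_attr=1.0):
    """Combined distance: geometric part (x, y) plus non-geometric part (attr)."""
    geo = math.hypot(a[0] - b[0], a[1] - b[1])
    attr = abs(a[2] - b[2])
    return w_geo * geo + w_attr * attr

def leader_cluster(points, threshold):
    """One-pass leader clustering: assign each point to the first close leader."""
    clusters = []
    for p in points:
        for c in clusters:
            if dist(p, c["leader"]) <= threshold:
                c["members"].append(p)
                break
        else:
            clusters.append({"leader": p, "members": [p]})
    return clusters

def local_features(clusters):
    """Summarize each local cluster as (mean x, mean y, mean attr) plus its size."""
    feats = []
    for c in clusters:
        n = len(c["members"])
        cx = sum(p[0] for p in c["members"]) / n
        cy = sum(p[1] for p in c["members"]) / n
        ca = sum(p[2] for p in c["members"]) / n
        feats.append((cx, cy, ca, n))
    return feats

# Two local sites; each point is (x, y, non_geometric_attribute).
site_a = [(1, 1, 10), (2, 1, 11), (9, 9, 50)]
site_b = [(1.5, 2, 10), (10, 9, 52)]

feats = []
for site in (site_a, site_b):                      # level 1: local clustering
    feats += local_features(leader_cluster(site, threshold=5.0))

global_clusters = leader_cluster([f[:3] for f in feats], threshold=6.0)  # level 2
print(global_clusters)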
Abstract: In this paper, we study an automatic pattern abstraction and recognition method for large-scale database systems based on natural language processing. In a distributed database, data spread across different nodes, and even across regions, can be recognized through the network connections between the nodes. Because the model design of such a database usually contains many forms and data redundancy needs to be reduced, we combine NLP theory with the traditional method to optimize it. Experimental analysis and simulation prove the correctness of our method.
Funding: Supported by the Scientific and Technological Council of Turkey, TUBITAK (No. 109Y013).
Abstract: A highland reservoir in the West Black Sea region of Turkey, which belongs to the Mediterranean climatic zone, was examined. Both littoral and profundal zones were sampled from October 2009 to September 2010 to determine the taxonomic composition, biodiversity and abundance of benthic invertebrates, as well as the seasonal variation of these measures. A total of 35 taxa were identified, of which 12 belong to Chironomidae and 10 to Oligochaeta. The highest diversity and abundance of benthic macroinvertebrates were found at the littoral stations. Macroinvertebrates showed significant positive correlations with water temperature and with NO_2 and NO_3 concentrations, and a negative correlation with dissolved oxygen.
Abstract: This study investigates foreign learners' acquisition of the Chinese modal adverb fanzheng on the basis of an interlanguage corpus. We find that sentences containing fanzheng written by foreign learners of Chinese (hereafter FLC) are quite similar to those written by Chinese native speakers in the distribution of syntactic categories, semantic types, semantic functions, and pragmatic or discourse functions. These basically characterize fanzheng as a modal adverb in the three aspects of syntax, semantics and pragmatics. Most FLCs learn the basic meanings and typical contexts of fanzheng at the primary stage, then the functions of emphasizing reasons, summarizing or explaining, and finally the function of textual cohesion at the intermediate and advanced stages. The main causes of errors reflect that FLCs are not able to differentiate the usage of fanzheng from that of other causal or adversative conjunctions.
Abstract: To further investigate the IgE antibody repertoire in Trichosanthin (TCS) allergic responses, a murine IgE phage surface display library was constructed (3.0×10^5 independent clones). We first constructed the Vε cDNA library (4.6×10^5 independent clones) and the Vκ cDNA library (3.0×10^5 independent clones). Then, the Vε and Vκ gene segments were amplified from both libraries by PCR and assembled into Fab fragments by SOE PCR. The phage library containing Fabs was thus constructed. The diversity of Vε from this library was analyzed and proved. Fab clones with high specificity to TCS have been screened out.
Funding: Under the auspices of the National Basic Research Program of China (No. 2011CB952001) and the National Natural Science Foundation of China (Nos. 41340016, 412013860).
Abstract: As an important part of land use/cover change (LUCC), historical LUCC over long time series attracts considerable attention from scholars. Currently, based on the overall control of cropland area and 'top-down' decision-making behaviour, there are two global historical land-use datasets, generally referred to as the SAGE (Sustainability and the Global Environment) and HYDE (History Database of the Global Environment) datasets. At the regional level, however, these global datasets have coarse resolutions and inevitable errors. Considering various factors that influence cropland distribution, including cropland connectivity and the limitations imposed by natural and human factors, this study developed a 'bottom-up' reconstruction model of historical cropland based on a constrained cellular automaton (CA). An available labour force index is then used as a proxy for the amount of cropland to inspect and calibrate the simulated spatial patterns. Applying the reconstruction model to Shandong Province, we reconstructed the spatial distribution of cropland for eight periods. The reconstructed results show that: 1) the constrained CA is well suited to simulating and reconstructing the spatial distribution of cropland in the traditional cultivated region of China; and 2) compared with the SAGE and HYDE datasets, which are in fractional format, this study produces higher-resolution Boolean spatial distribution datasets of historical cropland with a more definite spatial pattern.
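The model's actual transition rules are described elsewhere; the sketch below is only a toy version of a constrained-CA allocation step: starting from seed cropland, non-excluded cells are converted in order of a score combining land suitability and neighbourhood cropland density until the target cropland amount (e.g. derived from the labour-force proxy) is reached. The grids, weights and target are invented for illustration.

# Toy constrained-CA allocation: convert the best-scoring eligible cell,
# one per iteration, until the target number of cropland cells is reached.

suitability = [                      # invented 0-1 suitability surface
    [0.9, 0.8, 0.2, 0.1],
    [0.8, 0.7, 0.3, 0.1],
    [0.4, 0.5, 0.6, 0.2],
]
excluded = {(0, 3), (1, 3)}          # e.g. water bodies or steep slopes
cropland = {(0, 0)}                  # seed cells already cultivated
TARGET = 6                           # cell count from the labour-force proxy

def neighbour_density(cell, cropland):
    r, c = cell
    neigh = [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
             if (dr, dc) != (0, 0)]
    return sum((n in cropland) for n in neigh) / 8.0

def step(cropland):
    """Convert the single best eligible cell; return False if none is left."""
    best, best_score = None, -1.0
    for r, row in enumerate(suitability):
        for c, s in enumerate(row):
            cell = (r, c)
            if cell in cropland or cell in excluded:
                continue
            score = 0.6 * s + 0.4 * neighbour_density(cell, cropland)
            if score > best_score:
                best, best_score = cell, score
    if best is None:
        return False
    cropland.add(best)
    return True

while len(cropland) < TARGET and step(cropland):
    pass
print(sorted(cropland))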
Abstract: The recent explosion of biological data and the concomitant proliferation of distributed databases make it challenging for biologists and bioinformaticians to discover the best data resources for their needs and the most efficient way to access and use them. For the biologist, running bioinformatics analyses involves time-consuming management of data and tools. Users need support to organize their work, retrieve parameters and reproduce their analyses, and they need to be able to combine their analytic tools using a safe data-flow software mechanism. We have therefore designed a system, Bioinfo-Portal, to provide a flexible and usable web environment for defining and running bioinformatics analyses. It embeds simple yet powerful data management features that allow the user to reproduce analyses and to combine tools using an Adobe Flex tool. Bioinfo-Portal can also act as a front end providing a unified view of existing collections of bioinformatics resources. Users can analyze genomic and proteomic data with the tools that have been integrated into the portal (tools for alignments, dot plots, motif detection, domain analysis, profile searching and tertiary structure prediction). Sequences that users obtain from the portal's nucleotide and protein databases can be analyzed immediately by the portal tools in the same interface, and users can also benefit from the animations.
Funding: Supported by the National Natural Science Foundation of China and the Commission of Science, Technology and Industry for National Defense.
Abstract: This paper formally defines and analyses a new notion of correctness called quasi serializability, and then outlines a corresponding concurrency control protocol, QDHP, for distributed real-time databases. Finally, through a series of simulation studies, it shows that the new concurrency control protocol can considerably improve the performance of distributed real-time databases.
Abstract: Since the early 1990s, significant progress in database technology has provided a new platform for emerging dimensions of data engineering. New models were introduced to make use of the data sets stored in the new generations of databases, and these models have had a deep impact on evolving decision-support systems. However, they suffer from a variety of practical problems when accessing real-world data sources. In particular, a type of data storage model based on data distribution theory has increasingly been used by large-scale enterprises in recent years, yet it is not compatible with existing decision-support models. This storage model keeps data at the geographical sites where they are most regularly accessed, which leads to considerably less inter-site data transfer, can reduce data security issues in some circumstances, and significantly improves the speed of data manipulation transactions. The aim of this paper is to propose a new approach for supporting proactive decision-making built on a workable data source management methodology. The new model can effectively organize and use complex data sources even when they are distributed across different sites in fragmented form. At the same time, it provides a high level of intelligent management decision support by exploiting the data collections through new smart methods for synthesizing useful knowledge. The results of an empirical study evaluating the model are provided.
Abstract: Different equipment, different systems and heterogeneous databases give rise to the "information island" problem, while equipment data must be updated in real time on the business nodes. This paper proposes a data synchronization platform based on J2EE (JMS) and XML, and gives a detailed analysis and description of the system workflow, its framework structure and the key technologies. Practice shows that this scheme is convenient and provides real-time updating.
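The platform's actual message schema is not given in the abstract; as a small illustration of XML-based synchronization, the sketch below builds an update message for one equipment record using the Python standard library's ElementTree. In the described platform such a message would be published to a JMS destination by the J2EE layer, which is outside this sketch; the node, table and field names are hypothetical.

import xml.etree.ElementTree as ET
from datetime import datetime, timezone

def build_sync_message(node_id, table, row, operation="update"):
    """Build an XML change message for one record of one business node."""
    msg = ET.Element("syncMessage", {
        "node": node_id, "table": table, "operation": operation,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
    record = ET.SubElement(msg, "record")
    for column, value in row.items():
        field = ET.SubElement(record, "field", {"name": column})
        field.text = str(value)
    return ET.tostring(msg, encoding="unicode")

# Hypothetical equipment record to be propagated to other nodes.
print(build_sync_message("node-07", "equipment",
                         {"equip_id": "E-1024", "status": "running", "temp": 68.5}))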