Funding: Supported partly by the National Basic Research Program (973) of China (No. 2002B312103), the National Natural Science Foundation of China (No. 3027466), and the Chinese Academy of Sciences.
Abstract: This study aimed to investigate the characteristics that people perceive in tables and graphs, and the data types for which people consider each display most appropriate. Participants in this survey were 195 teachers and undergraduates from four universities in Beijing. The results showed that people hold different attitudes towards the two forms of display.
Funding: Supported by the National Natural Science Foundation of China.
Abstract: In this paper, we discuss some characteristic properties of the partial abstract data type (PADT) and show the difference between PADT and the abstract data type (ADT) in the specification of programming languages. Finally, we clarify that PADT is necessary in programming language description.
Funding: Supported by the National Natural Science Foundation of China under Grant Nos. 11175093, 11222545, 11435006 and 11375092, and the K. C. Wong Magna Fund of Ningbo University.
Abstract: We use the latest baryon acoustic oscillation and Union 2.1 type Ia supernova data to test the cosmic opacity between different redshift regions without assuming any cosmological model. It is found that the universe may be opaque in the redshift regions 0.35–0.44, 0.44–0.57 and 0.6–0.73, since the best-fit values of cosmic opacity in these regions are positive, while a transparent universe is favored in the redshift region 0.57–0.63. In general, however, a transparent universe is still consistent with observations at the 1σ confidence level.
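The opacity parameterization underlying such tests relates observed and intrinsic fluxes through an optical depth τ(z); a common form (the study's exact parameterization may refine this) is:

```latex
F_{\mathrm{obs}}(z) = F_{\mathrm{true}}(z)\, e^{-\tau(z)}
\quad\Longrightarrow\quad
D_{L,\mathrm{obs}}(z) = D_{L,\mathrm{true}}(z)\, e^{\tau(z)/2},
```

since the luminosity distance scales as the inverse square root of the flux. A best-fit τ > 0 thus corresponds to an opaque universe, and τ = 0 to a transparent one.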
Funding: The National Natural Science Foundation of China (No. 40671145), the Natural Science Foundation of Guangdong Province (Nos. 04300504 and 05006623), and the Science and Technology Plan Foundation of Guangdong Province (Nos. 2005B20701008, 2005B10101028, and 2004B20701006).
Abstract: Land evaluation factors often contain continuous-, discrete- and nominal-valued attributes. In traditional land evaluation, these different attributes are usually graded into categorical indexes by land resource experts, and the evaluation results rely heavily on the experts' experience. To overcome this shortcoming, we presented a fuzzy neural network ensemble method that did not require grading the evaluation factors into categorical indexes and could evaluate land resources using the three kinds of attribute values directly. A fuzzy back-propagation neural network (BPNN), a fuzzy radial basis function neural network (RBFNN), a fuzzy BPNN ensemble, and a fuzzy RBFNN ensemble were used to evaluate the land resources of Guangdong Province. The evaluation results obtained with the fuzzy BPNN ensemble and the fuzzy RBFNN ensemble were much better than those obtained with the single fuzzy BPNN and the single fuzzy RBFNN, and the error rate of the single fuzzy RBFNN or fuzzy RBFNN ensemble was lower than that of the single fuzzy BPNN or fuzzy BPNN ensemble, respectively. By using the fuzzy neural network ensembles, the validity of land resource evaluation was improved and reliance on land evaluators' experience was considerably reduced.
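The core idea of a neural network ensemble, as used above, is to combine the outputs of independently trained member networks. A minimal sketch of output averaging (the stand-in member functions below are illustrative only; the paper's fuzzy BPNN/RBFNN members are not reproduced here):

```python
def ensemble_predict(members, x):
    """Average the outputs of independently trained member networks."""
    outputs = [m(x) for m in members]
    return sum(outputs) / len(outputs)

# Hypothetical 'members': any callables mapping a feature vector to a score.
# Real members would be trained fuzzy BPNN / RBFNN models.
members = [
    lambda x: 0.8 * x[0] + 0.2 * x[1],
    lambda x: 0.7 * x[0] + 0.3 * x[1],
    lambda x: 0.9 * x[0] + 0.1 * x[1],
]
score = ensemble_predict(members, [1.0, 0.5])
# score is the mean of the three member outputs
```

Averaging reduces the variance of individual members' errors, which is consistent with the ensembles outperforming single networks in the study.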
Abstract: Deep learning (DL) has developed exponentially in recent years, with major impact on many medical fields, especially medical imaging. The purpose of this work is to determine the importance of each component and to describe the specificity of, and correlations among, the elements involved in achieving precise interpretation of medical images using DL. The major contribution of this work is, first, an updated characterisation of the constituent elements of the deep learning process: scientific data, methods of knowledge incorporation, and DL models according to the objectives for which they were designed, together with a presentation of medical applications corresponding to these tasks. Second, it describes the specific correlations between the quality, type and volume of data, the deep learning models used in the interpretation of diagnostic medical images, and their applications in medicine. Finally, it presents open problems and directions for future research. Data quality and volume, annotations and labels, and the identification and automatic extraction of specific medical terms can help deep learning models perform image analysis tasks. Moreover, the development of models capable of extracting features without supervision and of being easily incorporated into the architecture of DL networks, together with techniques for searching for a network architecture suited to the objectives set, leads to better performance in the interpretation of medical images.
Funding: Supported by the National Natural Science Foundation of China (No. 69773041).
Abstract: Ada provides full support for object orientation, but the diversified object patterns in Ada are so intricate that the aims of Ada95 could be undermined. To compensate for Ada's lack of a pure notion of class, this paper presents a remolded object pattern known as A object, an Ada-based class description language A ObjAda that supports the A object pattern, and the related key algorithms and implementation. A ObjAda thereby extends Ada with explicit object orientation, which not only effectively exploits the capabilities of Ada95 but also reasonably hides its more confusing concepts.
Funding: Under the auspices of the National Natural Science Foundation of China (Nos. 42071342, 31870713), the Beijing Natural Science Foundation Program (No. 8182038), and the Fundamental Research Funds for the Central Universities (Nos. 2015ZCQ-LX-01, 2018ZY06).
Abstract: With the continuous development of urbanization in China, the country's growing population brings great challenges to urban development. By mastering the refined spatial distribution of population in administrative units, the quantity and agglomeration of the population can be estimated and visualized, providing a basis for more rational urban planning. This paper takes Beijing as the research area and uses a new Luojia 1-01 nighttime light image with high resolution, land use type data, Points of Interest (POI) data, and other data to construct a population spatial index system, establishing the index weights based on principal component analysis. The comprehensive weight values of population distribution in the study area were then used to calculate the street-level population distribution of Beijing in 2018, and the population spatial distribution was visualized using GIS technology. In accuracy assessments comparing the result with WorldPop data, the accuracy reached 0.74, validating the proposed method as a qualified way to generate population spatial maps. By comparison of local areas, Luojia 1-01 data is more suitable for population distribution estimation than NPP/VIIRS (Suomi National Polar-orbiting Partnership/Visible Infrared Imaging Radiometer Suite) nighttime light data. More geospatial big data and mathematical models can be combined to create more accurate population maps in the future.
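Deriving indicator weights from a principal component analysis, as in the workflow above, can be sketched roughly as follows (synthetic data; the indicator names and the variance-weighted-loading scheme are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

rng = np.random.default_rng(0)
# Rows: spatial units; columns: indicators
# (e.g., nighttime light intensity, land use score, POI density)
X = rng.random((100, 3))
Xs = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each indicator

cov = np.cov(Xs, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)      # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]           # reorder to descending variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Weight each indicator by its loading magnitudes scaled by explained variance.
contrib = np.abs(eigvecs) * eigvals
weights = contrib.sum(axis=1)
weights /= weights.sum()                    # normalize weights to sum to 1
```

The resulting `weights` vector can then be applied to each spatial unit's standardized indicators to form the comprehensive weight value used for population allocation.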
Funding: Supported by the National Key Research and Development Program of China (Nos. 2018AAA0103300, 2017YFA0700900, 2017YFA0700902, 2017YFA0700901, 2019AAA0103802, 2020AAA0103802).
Abstract: With increasing data and model sizes, deep neural networks (DNNs) show outstanding performance in many artificial intelligence (AI) applications. However, large model sizes make high-performance, low-power DNN execution a challenge on processors such as the central processing unit (CPU), graphics processing unit (GPU), and tensor processing unit (TPU). This paper proposes LOGNN, an 8-bit data representation, and LACC, a hardware/software co-designed deep neural network accelerator, to meet this challenge. The LOGNN data representation replaces multiplication operations with addition and shift operations when running DNNs. The LACC accelerator achieves higher efficiency than state-of-the-art DNN accelerators through domain-specific arithmetic computing units. Overall, LACC improves performance per watt by 1.5 times on average compared to state-of-the-art DNN accelerators.
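The general principle behind logarithmic number systems of this kind is that multiplying two values becomes adding their (fixed-point) base-2 logarithms. A simplified software illustration of that principle (this is not the paper's LOGNN format; the 4-bit fractional quantization below is an assumption for demonstration):

```python
import math

def to_log2(x, frac_bits=4):
    """Quantize |x| to a fixed-point base-2 logarithm; keep the sign separately."""
    sign = -1 if x < 0 else 1
    if x == 0:
        return sign, None          # zero needs a special encoding
    return sign, round(math.log2(abs(x)) * (1 << frac_bits))

def log_mul(a, b, frac_bits=4):
    """Multiply two log-encoded numbers: the product is the sum of exponents."""
    sa, la = a
    sb, lb = b
    if la is None or lb is None:   # anything times zero is zero
        return 0.0
    # Integer addition of log codes replaces a hardware multiplication;
    # decoding back to linear is shown here only for verification.
    return sa * sb * 2 ** ((la + lb) / (1 << frac_bits))

approx = log_mul(to_log2(3.0), to_log2(-5.0))
# approx is close to 3.0 * -5.0 = -15.0, up to quantization error
```

In hardware, the decode step is realized with shifters, so a multiply-accumulate reduces to additions and shifts, which is the efficiency source the abstract describes.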
Abstract: During pre-clinical pharmacokinetic research, it is not easy to gather complete pharmacokinetic data from each animal. In some cases, an animal can provide only a single observation, and it is not clear how to use such data to estimate pharmacokinetic parameters effectively. This study aimed to compare a new method of handling such single-observation-per-animal data with the conventional method for estimating pharmacokinetic parameters. We assumed there were 15 animals in the study, each receiving a single dose by intravenous injection and providing one observation point. There were five time points in total, and each time point contained three measurements. The data were simulated with a one-compartment model with first-order elimination. The inter-individual variabilities (IIV) were set to 10%, 30% and 50% for both clearance (CL) and apparent volume of distribution (V). A proportional model was used to describe the residual error, which was also set to 10%, 30% and 50%. Two methods of handling the simulated single-observation-per-animal data in estimating pharmacokinetic parameters were compared. The conventional method (M1) estimated pharmacokinetic parameters directly from the original data, i.e., single-observation-per-animal data. The finite resampling method (M2) expanded the original data into a new dataset by resampling the original data over all combinations by time. After resampling, each individual in the new dataset contained complete pharmacokinetic data; in this study, there were 243 (C_3^1 × C_3^1 × C_3^1 × C_3^1 × C_3^1 = 3^5) possible combinations, each of which was a virtual animal. The study was simulated 100 times with the NONMEM software.
According to the results, the parameter estimates of CL and V by M2 based on the simulated dataset were closer to their true values, though there were small differences among the combinations of IIVs and residual errors. In general, the advantage of M2 over M1 decreased as the residual error increased. It was also influenced by the level of IIV, as higher levels of IIV reduced the advantage of M2. However, neither M2 nor M1 was able to estimate the IIV of the parameters. The finite resampling method provided more reliable results than the conventional method in estimating pharmacokinetic parameters from single-observation-per-animal data. Compared with the inter-individual variability, the estimation results were mainly influenced by the residual error.
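The finite resampling expansion described above, i.e., forming every combination of one measurement per time point, can be sketched as follows (the time points and concentration values are hypothetical stand-ins for the simulated data):

```python
from itertools import product

# Five time points (h), three measurements each:
# single-observation-per-animal data pooled by time.
observations = {
    0.5: [12.1, 11.8, 12.5],
    1.0: [9.7, 10.2, 9.9],
    2.0: [6.4, 6.1, 6.6],
    4.0: [2.9, 3.1, 3.0],
    8.0: [0.8, 0.7, 0.9],
}

times = sorted(observations)
# Every combination of one measurement per time point is one virtual animal
# with a complete concentration-time profile.
virtual_animals = [
    dict(zip(times, combo))
    for combo in product(*(observations[t] for t in times))
]

print(len(virtual_animals))  # 3**5 = 243 virtual complete profiles
```

Each of the 243 virtual profiles can then be fitted as a complete individual, which is how M2 turns sparse data into a dataset usable by standard population-PK estimation.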
Abstract: We address the well-posedness of the 2D (Euler)–Boussinesq equations with zero viscosity and positive diffusivity in polygonal-like domains with Yudovich-type data, which gives a positive answer to part of the questions raised in Lai (Arch Ration Mech Anal 199(3):739–760, 2011). Our analysis on the polygonal-like domains relies essentially on the recent elliptic regularity results for such domains proved in Bardos et al. (J Math Anal Appl 407(1):69–89, 2013) and Di Plinio (SIAM J Math Anal 47(1):159–178, 2015).
Funding: Supported by projects TR32039 and TR32047 of the Ministry of Science and Technological Development of Serbia.
Abstract: In this paper we present a novel GPU-oriented method of creating an inherently continuous triangular mesh for tile-based rendering of regular height fields. The method is based on tiling data-independent semi-regular meshes of non-uniform structure, a technique quite different from other mesh tiling approaches. A complete, memory-efficient set of mesh patterns is created by an off-line procedure and stored in the graphics adapter's memory at runtime. At rendering time, one of the precomputed mesh patterns is selected for each tile. The selected mesh pattern fits the required level of detail of the tile and ensures seamless connection with adjacent mesh patterns, like in a game of dominoes. The scalability potential of the proposed method is demonstrated through quadtree hierarchical grouping of tiles. Its efficiency is verified by experimental results on height fields for terrain representation, where the method achieves high frame rates and sustained triangle throughput on high-resolution viewports with sub-pixel error tolerance. Frame-rate sensitivity to real-time modifications of the height field is measured, and the method is shown to be very tolerant and consequently well suited to applications dealing with rapidly changing phenomena represented by height fields.
Abstract: Accurate age estimates of immature necrophagous insects associated with a human or animal body can provide evidence of how long the body has been dead. These estimates are based on species-specific details of the insects' aging processes, and therefore require accurate species identification and developmental stage estimation. Many professionals who produce or use identified organisms as forensic evidence have little training in taxonomy or metrology, and appreciate the availability of formalized principles and standards for biological identification. Taxonomic identifications are usually most readily and economically made using categorical and qualitative morphological characters, but it may be necessary to use less convenient and potentially more ambiguous characters that are continuous and quantitative if two candidate species are closely related, or if identifying developmental stages within a species. Characters should be selected by criteria such as taxonomic specificity and metrological repeatability and relative error. We propose such a hierarchical framework, critique various measurements of immature insects, and suggest some standard approaches to determine the reliability of organismal identifications and measurements in estimating postmortem intervals. Relevant criteria for good characters include high repeatability (including low scope for ambiguity or parallax effects), pronounced discreteness, and small relative error in measurements. These same principles apply to the individuation of unique objects in general.
Abstract: Decompiling, as a means of analysing and understanding software, has great practical value. This paper presents a decompiling method developed by the authors, in which the techniques of library-function pattern recognition, intermediate language, symbolic execution, rule-based data type recovery, program transformation, and knowledge engineering are applied to different phases of decompiling. It then discusses how expert-system techniques are adopted to build a decompiling system shell independent of the knowledge of the source language and the program's running environment. The shell becomes a real decompiler for a given setting once new knowledge of the application environment is interactively acquired.