Funding: This paper is supported by the National Natural Science Foundation of China (No. 40371107).
Abstract: In order to evaluate radiometric normalization techniques, two image normalization algorithms for absolute radiometric correction of Landsat imagery were quantitatively compared in this paper: the Illumination Correction Model proposed by Markham and Irish, and the Illumination and Atmospheric Correction Model developed by the Remote Sensing and GIS Laboratory of Utah State University. Relative noise, the correlation coefficient, and the slope value were used as the criteria for evaluation and comparison; they were derived from pseudo-invariant features identified in multitemporal Landsat image pairs of the Xiamen (厦门) and Fuzhou (福州) areas, both located in eastern Fujian (福建) Province, China. Compared with the unnormalized images, the radiometric differences between the normalized multitemporal images were significantly reduced when the images came from different seasons. However, there was no significant difference between the normalized and unnormalized images under similar seasonal conditions. Furthermore, the correction results of the two algorithms were similar when the images were relatively clear with uniform atmospheric conditions. Therefore, radiometric normalization procedures should be carried out when the multitemporal images show a significant seasonal difference.
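As a minimal illustration (not the authors' implementation), the three criteria can be computed from pixel values sampled at the same pseudo-invariant features in a reference and a subject image; the relative-noise formula used here is an assumed definition.

```python
import numpy as np

def evaluate_normalization(ref_pif, sub_pif):
    """Compare pseudo-invariant-feature (PIF) pixel values from two dates.

    ref_pif, sub_pif: 1-D arrays of band values sampled at the same PIFs
    in the reference and subject images.  Returns (correlation, slope,
    relative noise); the relative-noise formula (RMS difference scaled by
    the mean PIF brightness) is an assumed definition for illustration.
    """
    ref_pif = np.asarray(ref_pif, dtype=float)
    sub_pif = np.asarray(sub_pif, dtype=float)

    # Correlation coefficient between the two dates at the PIFs.
    corr = np.corrcoef(ref_pif, sub_pif)[0, 1]

    # Slope of the least-squares fit of the subject band on the reference
    # band; a slope close to 1 indicates consistent radiometry.
    slope, _intercept = np.polyfit(ref_pif, sub_pif, 1)

    # Relative noise between the two dates.
    rel_noise = np.sqrt(np.mean((sub_pif - ref_pif) ** 2)) / np.mean(ref_pif)
    return corr, slope, rel_noise
```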
Funding: Supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2019-2016-0-00313) supervised by the IITP (Institute for Information & communication Technology Promotion), and by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning (2017R1E1A1A01074345).
Abstract: The localization accuracy of magnetic field-based localization approaches is predominantly limited by two factors: smartphone heterogeneity and smaller data lengths. The use of multifarious smartphones cripples the performance of such approaches owing to the variability of the magnetic field data. In the same vein, smaller lengths of magnetic field data decrease the localization accuracy substantially. The current study proposes the use of multiple neural networks, namely a deep neural network (DNN), a long short-term memory network (LSTM), and a gated recurrent unit network (GRN), to perform indoor localization based on the embedded magnetic sensor of the smartphone. A voting scheme is introduced that takes the predictions of the neural networks into consideration to estimate the current location of the user. Contrary to conventional magnetic field-based localization approaches that rely on magnetic field intensity, this study utilizes normalized magnetic field data for this purpose. Training of the neural networks is carried out using Galaxy S8 data, while testing is performed with three devices, i.e., LG G7, Galaxy S8, and LG Q6. Experiments are performed at different times of the day to analyze the impact of time variability. Results indicate that the proposed approach minimizes the impact of smartphone variability and elevates the localization accuracy. Performance comparison with three approaches reveals that the proposed approach outperforms them in mean, 50%, and 75% error, even while using a smaller amount of magnetic field data than the other approaches.
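A minimal sketch of the two ingredients described above, assuming per-axis min-max normalization of a magnetometer window and a simple majority vote with a DNN tie-break (both assumptions, since the abstract does not spell out these exact rules):

```python
import numpy as np
from collections import Counter

def normalize_window(mag_xyz):
    """Scale a window of raw magnetometer readings into [0, 1] per axis.

    mag_xyz: array of shape (window_length, 3) holding x, y, z field values.
    Min-max scaling is one plausible normalization; the paper's exact
    scheme may differ.
    """
    lo, hi = mag_xyz.min(axis=0), mag_xyz.max(axis=0)
    return (mag_xyz - lo) / np.maximum(hi - lo, 1e-9)

def vote(dnn_pred, lstm_pred, gru_pred):
    """Majority vote over the location labels predicted by the three
    networks; ties fall back to the DNN prediction (an assumed rule)."""
    label, count = Counter([dnn_pred, lstm_pred, gru_pred]).most_common(1)[0]
    return label if count >= 2 else dnn_pred
```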
Funding: This work was supported by the Research Support Fund (RSF) of Symbiosis International (Deemed University), Pune, India.
Abstract: In the era of digital signal processing, as in graphics and computation systems, multiplication-accumulation is one of the prime operations. A MAC unit is a vital component of digital systems such as the various Fast Fourier Transform (FFT) algorithms, convolution, image processing algorithms, etc. Normalization architectures are used very widely in the digital signal processing domain; the main objective of normalization is to perform comparison and shift operations. In this research paper, an evolutionary approach for designing an optimized normalization algorithm is proposed using basic logical blocks such as multiplexers and adders. The proposed normalization algorithm is further used in designing an 8×8 bit Signed Floating-Point Multiply-Accumulate (SFMAC) architecture. Since the SFMAC accepts an 8-bit significand and a 3-bit exponent, the input to this architecture can lie between −(7.96872)_(10) and +(7.96872)_(10). The proposed architecture is designed and implemented in Cadence Virtuoso using 90 nm and 130 nm technologies (in the Generic Process Design Kit (GPDK) and Taiwan Semiconductor Manufacturing Company (TSMC) kits, respectively). To reduce the power consumption of the proposed normalization architecture, techniques such as "block enabling" and "clock gating" are used rigorously. According to the analysis done in Cadence, the proposed architecture uses the least amount of power compared with its current predecessors.
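To make the role of normalization concrete, the behavioral sketch below (plain Python, not the paper's gate-level design) shows the basic compare-and-shift step a hardware normalizer performs: the significand is shifted left until its most significant bit is 1, and the exponent is decremented for every shift.

```python
def normalize(significand, exponent, width=8):
    """Behavioral sketch of a floating-point normalization step.

    Shifts an unsigned `width`-bit significand left until its MSB is 1,
    decrementing the exponent for every shift, which is the basic
    compare-and-shift behavior a hardware normalizer implements.
    Illustrative model only, not the paper's circuit.
    """
    if significand == 0:
        return 0, 0                      # zero needs no normalization
    msb_mask = 1 << (width - 1)
    while not (significand & msb_mask):
        significand = (significand << 1) & ((1 << width) - 1)
        exponent -= 1
    return significand, exponent

# Example: 0b00010110 with exponent 3 normalizes to 0b10110000 with exponent 0.
print(normalize(0b00010110, 3))
```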
Funding: Under the auspices of the National Natural Science Foundation of China (No. 41230751, 41101547) and the Scientific Research Foundation of the Graduate School of Nanjing University (No. 2012CL14).
Abstract: Hyperspectral data are an important source for monitoring soil salt content on a large scale. However, in previous studies, barriers such as interference due to the presence of vegetation restricted the precision of soil salt content mapping. This study tested a new method for predicting soil salt content with improved precision by using Chinese hyperspectral data, the Huan Jing-Hyper Spectral Imager (HJ-HSI), in the coastal area of Rudong County, Eastern China. The vegetation-covered area and the coastal bare flat area were distinguished by using the normalized difference vegetation index at the 705 nm band (NDVI705). The soil salt content of each area was predicted by different algorithms. A Normal Soil Salt Content Response Index (NSSRI) was constructed from continuum-removed reflectance (CR-reflectance) at wavelengths of 908.95 nm and 687.41 nm to predict the soil salt content in the coastal bare flat area (NDVI705 < 0.2). The soil adjusted salinity index (SAVI) was applied to predict the soil salt content in the vegetation-covered area (NDVI705 ≥ 0.2). The results demonstrate that 1) the new method significantly improves the accuracy of soil salt content mapping (R2 = 0.6396, RMSE = 0.3591), and 2) HJ-HSI data can be used to map soil salt content precisely and are suitable for monitoring soil salt content on a large scale.
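A small sketch of the zoning and index logic, assuming simple normalized-difference forms; the NSSRI expression below is only a placeholder, since the paper defines its own index and regression models.

```python
def ndvi705(r_nir, r_705):
    """Red-edge NDVI built on the 705 nm band; the choice of the
    near-infrared band for r_nir is an assumption in this sketch."""
    return (r_nir - r_705) / (r_nir + r_705)

def salt_model_zone(r_nir, r_705):
    """Assign a pixel to the bare-flat or vegetation-covered model
    using the NDVI705 < 0.2 threshold reported in the abstract."""
    return "bare_flat" if ndvi705(r_nir, r_705) < 0.2 else "vegetated"

def nssri_placeholder(cr_908, cr_687):
    """Hypothetical normalized-difference stand-in for the NSSRI, built
    from continuum-removed reflectance at 908.95 nm and 687.41 nm; the
    paper's actual NSSRI formula may differ."""
    return (cr_908 - cr_687) / (cr_908 + cr_687)
```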
Funding: Supported by the Crohn's & Colitis Foundation Senior Research Award (No. 902766 to J.S.), the National Institute of Diabetes and Digestive and Kidney Diseases (No. R01DK105118-01 and R01DK114126 to J.S.), the United States Department of Defense Congressionally Directed Medical Research Programs (No. BC191198 to J.S.), and VA Merit Award BX-19-00 to J.S.
Abstract: Metabolomics, as a research field and a set of techniques, studies the entire set of small molecules in biological samples. Metabolomics is emerging as a powerful tool, generally for precision medicine. In particular, integration of the microbiome and metabolome has revealed the mechanisms and functionality of the microbiome in human health and disease. However, metabolomics data are very complicated, and preprocessing/pretreating and normalizing procedures are usually required before statistical analysis. In this review article, we comprehensively review the methods used to preprocess and pretreat metabolomics data, including MS-based and NMR-based data preprocessing; handling of zero and/or missing values and detection of outliers; data normalization; data centering and scaling; and data transformation. We discuss the advantages and limitations of each method. The choice of a suitable preprocessing method is determined by the biological hypothesis, the characteristics of the data set, and the selected statistical data analysis method. We then provide a perspective on their applications in microbiome and metabolome research.
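As a minimal, generic example of such a preprocessing chain (the specific choices of imputation, normalization, transformation, and scaling are assumptions, one option among the many the review covers):

```python
import numpy as np

def preprocess(intensities):
    """Minimal metabolomics preprocessing sketch (one of many choices).

    intensities: samples x features matrix of peak intensities with NaNs
    for missing values.  Steps: half-minimum imputation, total-sum
    normalization per sample, log transformation, and autoscaling
    (mean-centering plus unit-variance scaling) per feature.
    """
    x = np.asarray(intensities, dtype=float)

    # Impute missing values with half the smallest observed feature value.
    col_min = np.nanmin(x, axis=0)
    x = np.where(np.isnan(x), col_min / 2.0, x)

    # Total-sum normalization: make every sample sum to the same constant.
    x = x / x.sum(axis=1, keepdims=True)

    # Log transformation to reduce skewness of intensity distributions.
    x = np.log1p(x)

    # Autoscaling: center each metabolite and scale to unit variance.
    return (x - x.mean(axis=0)) / x.std(axis=0, ddof=1)
```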
Funding: This study was supported by grants from the Korea Health Technology R&D Project through the Korea Health Industry Development Institute (KHIDI), funded by the Ministry of Health & Welfare (HI18C1216), a grant from the National Research Foundation of Korea (NRF-2020R1I1A1A01074256), and the Soonchunhyang University Research Fund.
Abstract: Owing to technological developments, medical image analysis has received considerable attention for the rapid detection and classification of diseases. The brain is an essential organ in humans. Brain tumors cause loss of memory, vision, and name. In 2020, approximately 18,020 deaths occurred due to brain tumors. These cases can be minimized if a brain tumor is diagnosed at a very early stage. Computer vision researchers have introduced several techniques for brain tumor detection and classification. However, owing to many factors, this is still a challenging task. These challenges relate to the tumor size, the shape of the tumor, the location of the tumor, and the selection of important features, among others. In this study, we propose a framework for multimodal brain tumor classification using an ensemble of optimal deep learning features. In the proposed framework, a database is first normalized in the form of high-grade glioma (HGG) and low-grade glioma (LGG) patients, and then two pre-trained deep learning models (ResNet50 and DenseNet201) are chosen. The deep learning models are modified and trained using transfer learning. Subsequently, an enhanced ant colony optimization algorithm is proposed for best feature selection from both deep models. The selected features are fused using a serial-based approach and classified using a cubic support vector machine. The experimental process was conducted on the BraTS2019 dataset and achieved accuracies of 87.8% and 84.6% for HGG and LGG, respectively. The comparison is performed with several classification methods and shows the significance of the proposed technique.
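The snippet below sketches only the final fusion-and-classification stage under common interpretations: serial fusion as feature concatenation and the cubic SVM as a degree-3 polynomial-kernel classifier; the transfer-learning and ant colony feature-selection steps are omitted.

```python
import numpy as np
from sklearn.svm import SVC

def fuse_and_classify(resnet_feats, densenet_feats, labels):
    """Serial (concatenation) fusion of two deep-feature matrices followed
    by a cubic-kernel SVM; a rough sketch of the classification stage only.
    """
    fused = np.concatenate([resnet_feats, densenet_feats], axis=1)
    clf = SVC(kernel="poly", degree=3)  # "cubic SVM" read as a degree-3 polynomial kernel
    clf.fit(fused, labels)
    return clf
```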
Funding: Supported by the National Institutes of Health, United States (Grant Nos. U01CA200147 and DP1HD087990), awarded to SZ.
Abstract: Interactions between chromatin segments play a large role in functional genomic assays, and developments in genomic interaction detection methods have revealed interacting topological domains within the genome. Among these methods, Hi-C plays a key role. Here, we present the Genome Interaction Tools and Resources (GITAR), a software suite to perform comprehensive Hi-C data analysis, including data preprocessing, normalization, and visualization, as well as analysis of topologically associated domains (TADs). GITAR is composed of two main modules: (1) HiCtool, a Python library to process and visualize Hi-C data, including TAD analysis; and (2) a processed data library, a large collection of human and mouse datasets processed using HiCtool. HiCtool leads the user step by step through a pipeline that goes from the raw Hi-C data to the computation, visualization, and optimized storage of intra-chromosomal contact matrices and TAD coordinates. A large collection of standardized processed data allows users to compare different datasets in a consistent way, while saving time when obtaining data for visualization or additional analyses. More importantly, GITAR enables users without any programming or bioinformatics expertise to work with Hi-C data. GITAR is publicly available at http://genomegitar.org as open-source software.
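For readers unfamiliar with what Hi-C normalization does, the toy matrix-balancing routine below illustrates the kind of step such a pipeline performs; it is a generic ICE-style sketch, not HiCtool's actual algorithm or API.

```python
import numpy as np

def balance_contact_matrix(matrix, n_iter=30):
    """Toy ICE-style balancing of an intra-chromosomal Hi-C contact matrix.

    Repeatedly rescales rows and columns so that every genomic bin ends up
    with roughly equal total coverage; returns the balanced matrix and the
    per-bin bias factors.  Illustrative only.
    """
    m = np.asarray(matrix, dtype=float).copy()
    bias = np.ones(m.shape[0])
    for _ in range(n_iter):
        coverage = m.sum(axis=1)
        nonzero = coverage > 0
        scale = np.ones_like(coverage)
        scale[nonzero] = coverage[nonzero] / coverage[nonzero].mean()
        m /= np.outer(scale, scale)      # equalize bin coverage
        bias *= scale
    return m, bias
```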
Abstract: A new dimension-reduction graphical method for testing high-dimensional normality is developed by using the theory of spherical distributions and the idea of principal component analysis. The dimension reduction is realized by projecting high-dimensional data onto selected eigenvector directions. The asymptotic statistical independence of the plotting functions on the selected eigenvector directions provides the principle for the new plot. A departure from multivariate normality of the raw data can be captured by at least one plot on a selected eigenvector direction. Acceptance regions associated with the plots are provided to enhance their interpretability. Monte Carlo studies and an illustrative example show that the proposed graphical method has competitive power performance and significantly improves on the existing graphical method in testing high-dimensional normality.
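A simplified numerical stand-in for the graphical procedure, assuming the selected eigenvector directions are the leading eigenvectors of the sample covariance matrix and replacing the plots with a univariate normality test on each projection:

```python
import numpy as np
from scipy import stats

def projected_normality_checks(x, n_directions=3):
    """Project data onto leading eigenvectors of the sample covariance and
    run a univariate normality test on each projection.  A departure from
    multivariate normality should surface in at least one direction; this
    is a stand-in for the paper's plots, not its exact plotting functions.
    """
    x = np.asarray(x, dtype=float)
    centered = x - x.mean(axis=0)
    _eigvals, eigvecs = np.linalg.eigh(np.cov(centered, rowvar=False))
    # eigh returns eigenvalues in ascending order; take the largest ones.
    directions = eigvecs[:, ::-1][:, :n_directions]
    projections = centered @ directions
    return [stats.shapiro(projections[:, j]).pvalue for j in range(n_directions)]
```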
Funding: Supported by the National Natural Science Foundation of China (No. 10871146).
Abstract: In this paper, we discuss the asymptotic normality of the wavelet estimator of the density function based on censored data, when the survival and censoring times form a stationary α-mixing sequence. To simulate the distribution of the estimator, so that statistical inference for the density function is easy to perform, a random weighted estimator of the density function is also constructed and investigated. The finite-sample behavior of the estimator is also investigated via simulations.
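For intuition about the kind of estimator being studied, here is a linear Haar-wavelet density estimator for uncensored data rescaled to [0, 1); the censored-data version analyzed in the paper reweights the empirical coefficients (for example, via the Kaplan-Meier estimator), which this sketch does not attempt.

```python
import numpy as np

def haar_wavelet_density(data, x, level=4):
    """Linear wavelet density estimator with the Haar scaling function.

    data: sample values in [0, 1); x: points at which to evaluate the
    estimate; level: resolution level j.  Uses the scaling functions
    phi_{j,k}(t) = 2^(j/2) * 1[k <= 2^j t < k + 1] and the empirical
    coefficients alpha_{j,k} = mean of phi_{j,k}(X_i).
    """
    data = np.asarray(data, dtype=float)
    x = np.asarray(x, dtype=float)
    scale = 2.0 ** level
    est = np.zeros_like(x)
    for k in range(int(scale)):
        # Empirical scaling coefficient for translate k.
        phi_data = np.sqrt(scale) * ((data * scale >= k) & (data * scale < k + 1))
        alpha = phi_data.mean()
        # Scaling function evaluated at the query points.
        phi_x = np.sqrt(scale) * ((x * scale >= k) & (x * scale < k + 1))
        est += alpha * phi_x
    return est
```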