Based on the ideas of controlling relative quality and rearranging bitplanes, a new ROI coding method for JPEG2000 was proposed, which shifts and rearranges bitplanes in units of bitplane groups. It can code arbitrarily shaped ROIs without shape coding and reserve an almost arbitrary percentage of the background information. It can also control the relative quality of progressively decoded images. In addition, it is easy to implement and has low computational cost.
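The baseline this method builds on can be illustrated with the standard Maxshift-style bitplane shift (a hedged sketch of the general ROI scaling idea, not the paper's group-based rearrangement): ROI coefficients are scaled above the most significant background bitplane, so the decoder can separate ROI from background by magnitude alone, without any shape information.

```python
import numpy as np

def maxshift_roi(coeffs, roi_mask, shift):
    """Shift all ROI coefficient bitplanes above the background
    bitplanes; `shift` must exceed the number of background bitplanes."""
    out = coeffs.astype(np.int64)
    out[roi_mask] <<= shift          # move ROI bitplanes up
    return out

def maxshift_decode(coeffs, shift):
    """Any coefficient whose magnitude reaches 2**shift must be ROI;
    shift it back down. No shape mask is needed at the decoder."""
    out = coeffs.copy()
    roi = np.abs(out) >= (1 << shift)
    out[roi] >>= shift
    return out
```

The group-based scheme in the abstract refines this by shifting bitplane groups rather than whole coefficients, which is what lets it trade off ROI and background quality.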
Accurate crop distribution mapping is required for crop yield prediction and field management. Due to rapid progress in remote sensing technology, fine spatial resolution (FSR) remotely sensed imagery now offers great opportunities for mapping crop types in great detail. However, within-class variance can hamper attempts to discriminate crop classes at fine resolutions. Multi-temporal FSR remotely sensed imagery provides a means of increasing crop classification accuracy, although current methods do not exploit the available information fully. In this research, a novel Temporal Sequence Object-based Convolutional Neural Network (TS-OCNN) was proposed to classify agricultural crop type from FSR image time-series. An object-based CNN (OCNN) model was adopted in the TS-OCNN to classify images at the object level (i.e., segmented objects or crop parcels), thus maintaining the precise boundary information of crop parcels. The combination of image time-series was first utilized as the input to the OCNN model to produce an 'original' or baseline classification. Then the single-date images were fed automatically into the deep learning model scene-by-scene, in order of image acquisition date, to increase successively the crop classification accuracy. By doing so, the joint information in the FSR multi-temporal observations and the unique individual information from the single-date images were exploited comprehensively for crop classification. The effectiveness of the proposed approach was investigated using multi-temporal SAR and optical imagery, respectively, over two heterogeneous agricultural areas.
The experimental results demonstrated that the newly proposed TS-OCNN approach consistently increased crop classification accuracy, achieving the greatest accuracies (82.68% and 87.40%) in comparison with state-of-the-art benchmark methods, including the object-based CNN (OCNN) (81.63% and 85.88%), object-based image analysis (OBIA) (78.21% and 84.83%), and the standard pixel-wise CNN (79.18% and 82.90%). The proposed approach is the first known attempt to explore simultaneously the joint information from image time-series and the unique information from single-date images for crop classification using a deep learning framework. The TS-OCNN, therefore, represents a new approach for agricultural landscape classification from multi-temporal FSR imagery. Besides, it is readily generalizable to other landscapes (e.g., forest landscapes), with wide application prospects.
Efficient and accurate access to coastal land cover information is of great significance for marine disaster prevention and mitigation. Although the popular sensors of land resource satellites provide free and valuable images for mapping land cover, coastal areas often encounter significant cloud cover, especially in tropical regions, which makes classification in those areas non-ideal. To solve this problem, we proposed a framework combining medium-resolution optical images and synthetic aperture radar (SAR) data with the recently popular object-based image analysis (OBIA) method, using Landsat Operational Land Imager (OLI) and Phased Array type L-band Synthetic Aperture Radar (PALSAR) images acquired in Singapore in 2017 as a case study. We designed experiments to confirm two critical factors of this framework: one is the segmentation scale, which determines the average object size, and the other is the classification feature. Accuracy assessments of the land cover indicated that the optimal segmentation scale was between 40 and 80, and that the combination of OLI and SAR features resulted in higher accuracy than any individual features, especially in areas with cloud cover. Based on the land cover generated by this framework, we assessed the vulnerability of Singapore to marine disasters in 2008 and 2017 and found that the high-vulnerability areas were mainly located in the southeast and increased by 118.97 km2 over the past decade. To clarify the disaster response plan for different geographical environments, we classified risk based on altitude and distance from shore. The newly increased high-vulnerability regions within 4 km offshore and below 30 m above sea level are at high risk; these regions may need to focus on strengthening disaster prevention construction. This study serves as a typical example of using remote sensing techniques for the vulnerability assessment of marine disasters, especially in cloudy coastal areas.
To compress hyperspectral images, a low-complexity discrete cosine transform (DCT)-based distributed source coding (DSC) scheme with Gray code is proposed. Unlike most existing DSC schemes, which apply the transform in the spatial domain, the proposed algorithm applies the transform in the spectral domain. A set-partitioning-based approach is applied to reorganize DCT coefficients into a wavelet-like tree structure and extract the sign, refinement, and significance bitplanes. The extracted refinement bits are Gray encoded. Because of the dependency along the line dimension of hyperspectral images, a low-density parity-check (LDPC)-based Slepian-Wolf coder is adopted to implement the DSC strategy. Experimental results on the airborne visible/infrared imaging spectrometer (AVIRIS) dataset show that the proposed paradigm achieves up to 6 dB improvement over DSC-based coders that apply the transform in the spatial domain, with significantly reduced computational complexity and memory storage.
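The Gray coding applied to the refinement bits is the reflected binary code, in which consecutive integers differ in exactly one bit, so small inter-band magnitude changes flip few bits and the Slepian-Wolf correlation model improves. A minimal encoder/decoder pair:

```python
def gray_encode(n):
    """Reflected binary (Gray) code: adjacent integers differ in one bit."""
    return n ^ (n >> 1)

def gray_decode(g):
    """Invert the Gray code by cumulative XOR of progressively shifted values."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```

For example, 5 (binary 101) encodes to 7 (binary 111), and 5 and 6 map to codes differing in a single bit.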
Gully feature mapping is an indispensable prerequisite for the monitoring and control of gully erosion, which is a widespread natural hazard. The increasing availability of high-resolution Digital Elevation Models (DEMs) and remote sensing imagery, combined with developed object-based methods, enables automatic gully feature mapping. However, few studies have specifically focused on gully feature mapping at different scales. In this study, an object-based approach to two-level gully feature mapping, covering gully-affected areas and bank gullies, was developed and tested on a 1-m DEM and Worldview-3 imagery of a catchment in the Chinese Loess Plateau. The methodology comprises a sequence of data preparation, image segmentation, metric calculation, and random forest based classification. The results of the two-level mapping were based on a random forest model after investigating the effects of feature selection and the class-imbalance problem. Results show that the segmentation strategy adopted in this paper, which considers topographic information and an optimal parameter combination, can improve the segmentation results. The distribution of the gully-affected area is closely related to topographic information, whereas spectral features are more dominant for bank gully mapping. The highest overall accuracy of gully-affected area mapping was 93.06% with four topographic features. The highest overall accuracy of bank gully mapping was 78.5% when all features were adopted. The proposed approach is a creditable option for hierarchical mapping of gully feature information and is suitable for application in the hilly Loess Plateau region.
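The random-forest classification stage can be sketched with a toy bagged ensemble of decision stumps in plain NumPy (a hedged illustration only; the study uses a full random forest over object-level topographic and spectral metrics, and the data here are synthetic):

```python
import numpy as np

def fit_stump(X, y, rng):
    """Fit a decision stump: random feature, random threshold,
    majority-vote labels on each side of the split."""
    f = int(rng.integers(X.shape[1]))
    thr = float(rng.choice(X[:, f]))
    vote = lambda a: int(a.mean() > 0.5) if a.size else 0
    return f, thr, vote(y[X[:, f] <= thr]), vote(y[X[:, f] > thr])

def predict_stump(stump, X):
    f, thr, left_vote, right_vote = stump
    return np.where(X[:, f] <= thr, left_vote, right_vote)

def random_forest_fit(X, y, n_trees=50, seed=0):
    """Bagging: each stump is trained on a bootstrap resample."""
    rng = np.random.default_rng(seed)
    stumps = []
    for _ in range(n_trees):
        idx = rng.integers(0, len(X), len(X))
        stumps.append(fit_stump(X[idx], y[idx], rng))
    return stumps

def random_forest_predict(stumps, X):
    """Majority vote across the ensemble."""
    votes = np.mean([predict_stump(s, X) for s in stumps], axis=0)
    return (votes > 0.5).astype(int)
```

Real forests grow deep trees with random feature subsets at every split; the bootstrap-plus-vote structure shown here is the part that makes the ensemble robust to the class-imbalance and feature-selection effects the study investigates.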
The majority of the population and economic activity of the northern half of Vietnam is clustered in the Red River Delta, and about half of the country's rice production takes place there. There are significant problems associated with its geographical position and the intensive exploitation of resources by an overabundant population (population density of 962 inhabitants/km2). Some thirty years after economic liberalization and the opening of the country to international markets, agricultural land use patterns in the Red River Delta, particularly in the coastal area, have undergone many changes. Remote sensing is a particularly powerful tool for processing and providing spatial information for monitoring land use changes. The main methodological objective is to find a solution for processing the many heterogeneous coastal land use parameters, so as to describe land use in all its complexity, specifically by making use of the latest European satellite data (Sentinel-2). This complexity is due to local variations in ecological conditions, but also to anthropogenic factors that directly and indirectly influence land use dynamics. The methodological objective was to develop a new Geographic Object-based Image Analysis (GEOBIA) approach for mapping coastal areas using Sentinel-2 and Landsat 8 data. By developing a new segmentation accuracy measure, this study determined that segmentation accuracies decrease with increasing segmentation scales and that the negative impact of under-segmentation errors increases significantly at large scales. An Estimation of Scale Parameter (ESP) tool was then used to determine the optimal segmentation parameter values. A popular machine learning algorithm (Random Forests, RF) was used.
For all classification algorithms, an increase in overall accuracy was observed with the full synergistic combination of available data sets.
A two-level Bregmanized method with graph regularized sparse coding (TBGSC) is presented for image interpolation. The outer-level Bregman iterative procedure enforces the observation data constraints, while the inner-level Bregmanized method is devoted to dictionary updating and sparse representation of small overlapping image patches. The introduced constraint of graph regularized sparse coding can capture local image features effectively, and consequently enables accurate reconstruction from highly undersampled partial data. Furthermore, the modified sparse coding and simple dictionary updating applied in the inner minimization make the proposed algorithm converge within a relatively small number of iterations. Experimental results demonstrate that the proposed algorithm can effectively reconstruct images and outperforms current state-of-the-art approaches in terms of visual comparisons and quantitative measures.
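The sparse-coding subproblem inside Bregman-style iterations is typically solved with soft-thresholding, the proximal operator of the l1 penalty (a general sketch of the standard inner step, not necessarily the paper's exact modified solver):

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of lam * ||x||_1: shrink every coefficient
    toward zero, zeroing those with magnitude below lam. This is the
    closed-form solution of min_z 0.5*(z - x)**2 + lam*|z|."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
```

Each inner iteration alternates a step like this on the patch coefficients with a least-squares dictionary update, which is why the overall scheme converges in few iterations.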
This paper presents an efficient quadtree-based fractal image coding scheme in the wavelet transform domain, based on the wavelet-based theory of fractal image compression introduced by Davis. In the scheme, zerotrees of wavelet coefficients are used to reduce the number of domain blocks, which leads to a lower bit cost for representing the location information of fractal coding, and overall entropy-constrained optimization is performed for the decision trees as well as for the sets of scalar quantizers and self-quantizers of wavelet subtrees. Experimental results show that at low bit rates the proposed scheme gives about 1 dB improvement in PSNR over the reported results.
A modular architecture for the two-dimensional (2-D) discrete wavelet transform (DWT) is designed. The image data can be wavelet transformed in real time, and the structure can be easily scaled up to higher levels of DWT. A fast zerotree image coding (FZIC) algorithm is proposed by using a simple sequential scan order and two flag maps. The VLSI structure for FZIC is then presented. By combining the 2-D DWT and FZIC, a wavelet image coder is finally designed. The coder is programmed, simulated, synthesized, and successfully verified on an ALTERA CPLD.
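One level of the 2-D DWT that such an architecture computes can be sketched in software with the Haar filter pair: averaging and differencing along rows, then along columns, yielding the four subbands (a reference model only, not the VLSI datapath):

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2-D Haar DWT (unnormalised averaging/differencing),
    returning the LL, LH, HL, HH subbands. Assumes even dimensions."""
    a = img.astype(float)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2        # row lowpass
    hi = (a[:, 0::2] - a[:, 1::2]) / 2        # row highpass
    ll = (lo[0::2] + lo[1::2]) / 2            # column lowpass of lowpass
    lh = (lo[0::2] - lo[1::2]) / 2
    hl = (hi[0::2] + hi[1::2]) / 2
    hh = (hi[0::2] - hi[1::2]) / 2
    return ll, lh, hl, hh
```

Recursing on the LL subband gives the higher DWT levels that the scalable hardware structure supports, and the resulting subband pyramid is exactly what the zerotree coder scans.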
Since real-world communication channels are not error free, coded data transmitted on them may be corrupted, and block-based image coding systems are vulnerable to transmission impairment. Therefore, the best-neighborhood-match method using a genetic algorithm is used to conceal erroneous blocks. Experimental results show that the search space can be greatly reduced by using a genetic algorithm compared with an exhaustive search, and good image quality is achieved. The peak signal-to-noise ratios (PSNRs) of the restored images are increased greatly.
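The genetic search over candidate replacement blocks can be sketched generically: a population of (x, y) candidate positions evolves by elitism, crossover, and mutation toward the position minimizing a boundary-match cost, visiting far fewer candidates than an exhaustive scan (population size, rates, and the toy fitness below are illustrative, not the paper's):

```python
import random

def genetic_search(fitness, bounds, pop_size=20, generations=30, seed=1):
    """Minimise `fitness` over integer (x, y) positions in [0, bounds]."""
    rnd = random.Random(seed)
    pop = [(rnd.randint(0, bounds[0]), rnd.randint(0, bounds[1]))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[:pop_size // 2]            # elitism: keep best half
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = rnd.sample(elite, 2)
            child = (a[0], b[1])               # crossover: mix coordinates
            if rnd.random() < 0.3:             # mutation: small jitter
                child = (min(bounds[0], max(0, child[0] + rnd.randint(-2, 2))),
                         min(bounds[1], max(0, child[1] + rnd.randint(-2, 2))))
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)
```

In the concealment setting, `fitness` would measure the mismatch between a candidate block's border pixels and the intact neighborhood of the lost block.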
Owing to the constraints on the fabrication of γ-ray coding plates with many pixels, few studies have been carried out on γ-ray computational ghost imaging. Thus, the development of coding plates with fewer pixels is essential to achieve γ-ray computational ghost imaging. Based on the regional similarity between Hadamard sub-coding plates, this study presents an optimization method to reduce the number of pixels of Hadamard coding plates. First, a moving distance matrix was obtained to describe the regional similarity quantitatively. Second, based on the matrix, we used two ant colony optimization arrangement algorithms to maximize the reuse of pixels in the regional similarity area and obtain new compressed coding plates. With full sampling, these two algorithms improved the pixel utilization of the coding plate, and the compression ratio values were 54.2% and 58.9%, respectively. In addition, three undersampled sequences (the Haar, Russian dolls, and cake-cutting sequences) with different sampling rates were tested and discussed. At different sampling rates, our method reduced the number of pixels of all three sequences, especially for the Russian dolls and cake-cutting sequences. Therefore, our method can reduce the number of pixels, the manufacturing cost, and the difficulty of fabricating the coding plate, which is beneficial for the implementation and application of γ-ray computational ghost imaging.
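The Hadamard patterns underlying the sub-coding plates come from the Sylvester construction, whose rows are mutually orthogonal ±1 sequences; each row, reshaped to 2-D, gives one illumination pattern (the construction only, not the paper's pixel-reuse optimization):

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2):
    start from [[1]] and repeatedly form [[H, H], [H, -H]]."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H
```

Orthogonality (H Hᵀ = n·I) is what makes the ghost-imaging reconstruction from these patterns well conditioned, and the regional similarity between rows is what the ant colony arrangement exploits to share physical pixels.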
Based on Fisher–Yates scrambling and DNA coding technology, a chaotic image encryption method is proposed. First, the SHA-3 algorithm is used to calculate the hash value of the initial password, which is used as the initial value of the chaotic system. Second, the chaotic sequence and Fisher–Yates scrambling are used to scramble the plaintext, and a sorting scrambling algorithm is used for secondary scrambling. Then, the chaotic sequence and DNA coding rules are used to change the plaintext pixel values, which makes the ciphertext more random and resistant to attacks, and thus ensures that the encrypted ciphertext is more secure. Finally, we add plaintext statistics for pixel-level diffusion to ensure plaintext sensitivity. The experimental results and security analysis show that the new algorithm has a good encryption effect and speed, and can also resist common attacks.
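The first two stages can be sketched as follows: the SHA-3 hash of the password seeds a chaotic map (a logistic map here, since the abstract does not fix one), and the resulting sequence drives the Fisher–Yates swap indices, so the permutation is fully determined by the key:

```python
import hashlib

def logistic_sequence(x0, n, r=3.99):
    """Iterate the chaotic logistic map x -> r*x*(1-x)."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

def chaotic_fisher_yates(data, password):
    """Fisher-Yates shuffle whose swap indices come from a chaotic
    sequence seeded by SHA3-256(password) (an illustrative sketch,
    not the paper's exact parameters)."""
    digest = hashlib.sha3_256(password.encode()).digest()
    x0 = int.from_bytes(digest[:8], "big") / 2 ** 64   # hash -> (0, 1)
    seq = logistic_sequence(x0, len(data))
    out = list(data)
    for i in range(len(out) - 1, 0, -1):
        j = int(seq[i] * (i + 1)) % (i + 1)            # chaotic swap index
        out[i], out[j] = out[j], out[i]
    return out
```

Because the swaps are a bijection, replaying them in reverse order with the same key inverts the scrambling at decryption time.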
A chaos-based cryptosystem for fractal image coding is proposed. The Renyi chaotic map is employed to determine the order of processing the range blocks and to generate the keystream for masking the encoded sequence. Compared with the standard approach of fractal image coding followed by the Advanced Encryption Standard, our scheme offers a higher sensitivity to both plaintext and ciphertext at a comparable operating efficiency. The keystream generated by the Renyi chaotic map passes the randomness tests set by the United States National Institute of Standards and Technology, and so the proposed scheme is sensitive to the key.
In the sorting system of a production line, object movement, a fixed angle of view, light intensity, and other factors lead to blurred images, which results in a low bar code recognition rate and poor real-time performance. Aiming at the above problems, a progressive bar code compressed recognition algorithm is proposed. First, assuming that the source image is not tilted, the direct recognition method is used to quickly identify the compressed source image; failure indicates that the compression ratio is improper or the image is skewed. Then, the source image is enhanced to identify it directly. Finally, the inclination of the compressed image is detected by the barcode region recognition method, and the source image is corrected to locate the barcode information in the barcode region recognition image. The results of experiments on multiple image types show that the proposed method improves computational efficiency more than fivefold compared with previous methods and can better recognize blurred images.
Based on Jacquin's work, this paper presents an adaptive block-based fractal image coding scheme. Firstly, masking functions are used to classify range blocks and weight the mean square error (MSE) of images. Secondly, an adaptive block partition scheme is introduced by developing the quadtree partition method. Thirdly, a piecewise uniform quantization strategy is applied to quantize the luminance shifting. Finally, experimental results are shown and compared with those reported by Jacquin and Lu to verify the validity of the methods addressed by the authors.
In this paper, we propose a sparse overcomplete image approximation method based on the ideas of the overcomplete log-Gabor wavelet, mean shift, and energy concentration. The proposed approximation method selects the necessary wavelet coefficients with a mean shift based algorithm and concentrates energy on the selected coefficients. It can sparsely approximate the original image, and it converges faster than the existing local competition based method. We then propose a new compression scheme based on the above approximation method. The scheme has compression performance similar to JPEG 2000. The images decoded with the proposed compression scheme appear more pleasant to the human eye than those decoded with JPEG 2000.
Given one specific image, it would be quite significant if one could simply retrieve all the pictures that fall into a similar image category. However, traditional methods tend to achieve high-quality retrieval by utilizing numerous learning instances, ignoring the extraction of the image's essential information, which makes it difficult to retrieve similar-category images using just one reference image. To solve this problem, we propose a refined sparse representation based similar-category image retrieval model. On the one hand, saliency detection and multi-level decomposition help take salient and spatial information into consideration more fully. On the other hand, the cross mutual sparse coding model aims to extract the image's essential feature to the maximum extent possible. Finally, we set up a database comprising a large number of multi-source images. Extensive comparative experiments show that our method retrieves similar-category images effectively. Moreover, extensive ablation experiments show that nearly all procedures play their respective roles.
An edge-oriented image sequence coding scheme is presented. On the basis of edge detection, an image can be divided into a sensitized region and a smooth region. In this scheme, the architecture of the sensitized region is approximated with linear segments. A rectangular belt is then constructed for each segment. Finally, the gray-value distribution in the region is fitted by normal-form polynomials. Model matching and motion analysis are also based on the architecture of the sensitized region. For the smooth region, we use run-length scanning and linear approximation. By means of normal-form polynomial fitting and motion prediction by matching, the images are compressed. Simulations show that the subjective quality of the reconstructed picture is excellent at 0.0075 bits per pel.
In signal processing and communication systems, digital filters are widely employed. In some circumstances, the reliability of those systems is crucial, necessitating the use of fault-tolerant filter implementations. Many strategies have been presented over the years to achieve fault tolerance by utilizing the structure and properties of the filters. As technology advances, more complex systems with several filters become possible. Some of the filters in those complex systems frequently operate in parallel, for example, by applying the same filter to various input signals. Recently, a simple strategy for achieving fault tolerance that takes advantage of the availability of parallel filters was presented. The primary idea is to use structured authentication scan chains to study the internal states of finite impulse response (FIR) components in order to detect and recover the exact state of faulty modules through the state of non-faulty modules. A simple double modular redundancy (DMR) based fault-tolerance solution was then developed that takes advantage of the availability of parallel filters for image denoising. This approach is expanded in this brief to show how parallel filters can be protected using error correction codes (ECCs), in which each filter is comparable to a bit in a standard ECC. The suggested technique, "advanced error recovery for parallel systems," can find and eliminate hidden defects in FIR modules and restore the system from multiple failures impacting two FIR modules. From the implementation, Xilinx ISE 14.7 was found to give significant error-reduction capability in the fault calculations and a reduction in area, which lowers the cost of implementation. Faults were introduced in all the outputs of the functional filters, and it was found that the fault in every output was corrected.
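The linearity argument behind ECC-protected parallel filters can be sketched with a single parity "bit": because FIR filtering is linear, a redundant filter run on the sum of the inputs must equal the sum of the individual outputs, and any mismatch flags a faulty module (a minimal detection-only sketch; the scheme in the brief adds enough check filters to also locate and correct failures):

```python
import numpy as np

def fir(x, h):
    """Causal FIR filter, output truncated to the input length."""
    return np.convolve(x, h)[:len(x)]

def parity_check(inputs, h, faulty=None):
    """Run parallel FIR filters plus one parity filter on sum(inputs).
    By linearity, filter(sum(x_i)) == sum(filter(x_i)); a mismatch
    means some module's output is corrupted."""
    ys = [fir(x, h) for x in inputs]
    if faulty is not None:
        ys[faulty] = ys[faulty] + 1.0          # inject a fault for testing
    parity = fir(np.sum(inputs, axis=0), h)
    return bool(np.allclose(np.sum(ys, axis=0), parity))
```

With several parity filters over different input subsets, arranged like the check bits of a Hamming code, the pattern of failed checks identifies which filter is faulty so its output can be reconstructed from the others.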
A mean-match correlation vector quantizer (MMCVQ) was presented for fast image encoding. In this algorithm, a sorted codebook is generated with respect to the mean values of all codewords. During the encoding stage, the high correlation of adjacent image blocks is utilized, and a search range in the sorted codebook is obtained according to the mean value of the current processing vector. In order to achieve good performance, proper THd and NS are predefined on the basis of experimental experience and an additional distortion limitation. The experimental results show that the MMCVQ algorithm is much faster than the full-search VQ algorithm, and the encoding quality degradation of the proposed algorithm is only 0.3-0.4 dB compared to full-search VQ.
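The mean-match search can be sketched as follows (the window size and codebook here are illustrative, and the paper's THd and NS thresholds are not reproduced): with the codebook pre-sorted by codeword mean, only a window around the entry whose mean is closest to the input block's mean is searched exhaustively.

```python
import numpy as np

def mmcvq_encode(vector, codebook, window=8):
    """Return the index of the best codeword, searching only a window
    of the mean-sorted codebook around the input vector's mean."""
    means = codebook.mean(axis=1)                  # codebook pre-sorted by mean
    pos = int(np.searchsorted(means, vector.mean()))
    lo, hi = max(0, pos - window), min(len(codebook), pos + window)
    cand = codebook[lo:hi]                         # restricted search range
    return lo + int(np.argmin(((cand - vector) ** 2).sum(axis=1)))
```

Since the nearest codeword in the MSE sense usually has a similar mean, the windowed search typically returns the same index as a full search at a fraction of the cost.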
Funding: Electronic Development Fund of the Ministry of Information Industry of China (No. [2004]479)
Funding: supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (XDA28070503), the National Key Research and Development Program of China (2021YFD1500100), the Open Fund of the State Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University (20R04), the Land Observation Satellite Supporting Platform of the National Civil Space Infrastructure Project (CASPLOS-CCSI), and a PhD studentship "Deep Learning in massive area, multi-scale resolution remotely sensed imagery" (EAA7369), sponsored by Lancaster University and Ordnance Survey (the national mapping agency of Great Britain)
Funding: Supported by the National Key Research and Development Program of China (No. 2016YFC1402003), the CAS Earth Big Data Science Project (No. XDA19060303), and the Innovation Project of the State Key Laboratory of Resources and Environmental Information System (No. O88RAA01YA)
Fund: Supported by the National Natural Science Foundation of China (60702012) and the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry.
Abstract: To compress hyperspectral images, a low-complexity discrete cosine transform (DCT)-based distributed source coding (DSC) scheme with Gray code is proposed. Unlike most existing DSC schemes, which apply a transform in the spatial domain, the proposed algorithm applies the transform in the spectral domain. A set-partitioning-based approach is applied to reorganize DCT coefficients into a wavelet-like tree structure and extract the sign, refinement, and significance bitplanes. The extracted refinement bits are Gray encoded. Because of the dependency along the line dimension of hyperspectral images, a low-density parity-check (LDPC)-based Slepian-Wolf coder is adopted to implement the DSC strategy. Experimental results on the airborne visible/infrared imaging spectrometer (AVIRIS) dataset show that the proposed paradigm achieves up to 6 dB improvement over DSC-based coders that apply the transform in the spatial domain, with significantly reduced computational complexity and memory storage.
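Gray encoding ensures that adjacent refinement values differ in exactly one bit, which strengthens the bit-level correlation that the Slepian-Wolf coder exploits. A minimal sketch of the standard bitwise Gray mapping and its inverse (a generic construction, not code from the paper):

```python
def to_gray(n):
    """Map a binary-coded integer to its Gray-code equivalent."""
    return n ^ (n >> 1)

def from_gray(g):
    """Invert the Gray mapping by successive XOR folding."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```

Consecutive values such as 3 (011) and 4 (100) would flip three bitplanes in natural binary but only one after Gray mapping, which is why the refinement bitplanes become easier to code distributively.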
Fund: Under the auspices of the Priority Academic Program Development of Jiangsu Higher Education Institutions and the National Natural Science Foundation of China (Nos. 41271438, 41471316, 41401440, 41671389).
Abstract: Gully feature mapping is an indispensable prerequisite for the monitoring and control of gully erosion, a widespread natural hazard. The increasing availability of high-resolution Digital Elevation Models (DEMs) and remote sensing imagery, combined with developed object-based methods, enables automatic gully feature mapping. But still few studies have specifically focused on gully feature mapping at different scales. In this study, an object-based approach to two-level gully feature mapping, including gully-affected areas and bank gullies, was developed and tested on a 1-m DEM and Worldview-3 imagery of a catchment in the Chinese Loess Plateau. The methodology comprises a sequence of data preparation, image segmentation, metric calculation, and random forest based classification. The results of the two-level mapping were based on a random forest model after investigating the effects of feature selection and the class-imbalance problem. Results show that the segmentation strategy adopted in this paper, which considers topographic information and an optimal parameter combination, can improve the segmentation results. The distribution of the gully-affected area is closely related to topographic information, whereas spectral features are more dominant for bank gully mapping. The highest overall accuracy of gully-affected area mapping was 93.06% with four topographic features. The highest overall accuracy of bank gully mapping was 78.5% when all features were adopted. The proposed approach is a creditable option for hierarchical mapping of gully feature information and is suitable for application in the hilly Loess Plateau region.
Abstract: The majority of the population and economic activity of the northern half of Vietnam is clustered in the Red River Delta, and about half of the country's rice production takes place here. There are significant problems associated with its geographical position and the intensive exploitation of resources by an overabundant population (population density of 962 inhabitants/km2). Some thirty years after the economic liberalization and the opening of the country to international markets, agricultural land use patterns in the Red River Delta, particularly in the coastal area, have undergone many changes. Remote sensing is a particularly powerful tool for processing and providing spatial information for monitoring land use changes. The main methodological objective is to find a solution to process the many heterogeneous coastal land use parameters, so as to describe the area in all its complexity, specifically by making use of the latest European satellite data (Sentinel-2). This complexity is due to local variations in ecological conditions, but also to anthropogenic factors that directly and indirectly influence land use dynamics. The methodological objective was to develop a new Geographic Object-based Image Analysis (GEOBIA) approach for mapping coastal areas using Sentinel-2 and Landsat 8 data. By developing a new segmentation accuracy measure, this study determined that segmentation accuracies decrease with increasing segmentation scales and that the negative impact of under-segmentation errors increases significantly at large scales. An Estimation of Scale Parameter (ESP) tool was then used to determine the optimal segmentation parameter values. A popular machine learning algorithm (Random Forests, RF) was used. For all classifications, an increase in overall accuracy was observed with the full synergistic combination of available data sets.
Fund: The National Natural Science Foundation of China (Nos. 61362001, 61102043, 61262084, 20132BAB211030, 20122BAB211015) and the Basic Research Program of Shenzhen (No. JC201104220219A).
Abstract: A two-level Bregmanized method with graph regularized sparse coding (TBGSC) is presented for image interpolation. The outer-level Bregman iterative procedure enforces the observation data constraints, while the inner-level Bregmanized method is devoted to dictionary updating and sparse representation of small overlapping image patches. The introduced constraint of graph regularized sparse coding can capture local image features effectively, and consequently enables accurate reconstruction from highly undersampled partial data. Furthermore, the modified sparse coding and simple dictionary updating applied in the inner minimization make the proposed algorithm converge within a relatively small number of iterations. Experimental results demonstrate that the proposed algorithm can effectively reconstruct images and outperforms current state-of-the-art approaches in terms of visual comparisons and quantitative measures.
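The inner sparse coding step can be illustrated with plain ISTA (iterative soft thresholding) for the basic l1 problem. This is a simplified stand-in: the paper's inner solver is Bregmanized and adds a graph-Laplacian regularizer, neither of which is shown here.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_sparse_code(D, x, lam=0.1, n_iter=100):
    """Plain ISTA for min_a 0.5*||x - D@a||^2 + lam*||a||_1.
    Gradient step on the quadratic term, then soft thresholding."""
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)
        a = soft_threshold(a - grad / L, lam / L)
    return a
```

With an orthonormal dictionary the iteration reduces to a single shrinkage of the analysis coefficients, which makes the role of the l1 penalty easy to see.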
Abstract: This paper presents an efficient quadtree-based fractal image coding scheme in the wavelet transform domain, based on the wavelet-based theory of fractal image compression introduced by Davis. In the scheme, zerotrees of wavelet coefficients are used to reduce the number of domain blocks, which leads to a lower bit cost for representing the location information of fractal coding, and overall entropy-constrained optimization is performed for the decision trees as well as for the sets of scalar quantizers and self-quantizers of wavelet subtrees. Experimental results show that at low bit rates, the proposed scheme gives about 1 dB improvement in PSNR over the reported results.
Abstract: A modular architecture for the two-dimensional (2-D) discrete wavelet transform (DWT) is designed. Image data can be wavelet transformed in real time, and the structure can easily be scaled up to higher levels of DWT. A fast zerotree image coding (FZIC) algorithm is proposed by using a simple sequential scan order and two flag maps. The VLSI structure for FZIC is then presented. By combining the 2-D DWT and FZIC, a wavelet image coder is finally designed. The coder is programmed, simulated, synthesized, and successfully verified on an ALTERA CPLD.
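One separable level of a 2-D DWT (rows, then columns) is the core operation such an architecture pipelines. A minimal Haar version is sketched below purely as an illustration; the abstract does not specify which wavelet filter the hardware implements.

```python
import numpy as np

def haar_dwt2(img):
    """One separable level of a 2-D Haar DWT: average/difference along
    rows, then along columns, yielding the LL, LH, HL, HH subbands."""
    a = (img[:, 0::2] + img[:, 1::2]) / 2   # horizontal lowpass
    d = (img[:, 0::2] - img[:, 1::2]) / 2   # horizontal highpass
    ll = (a[0::2, :] + a[1::2, :]) / 2
    lh = (a[0::2, :] - a[1::2, :]) / 2
    hl = (d[0::2, :] + d[1::2, :]) / 2
    hh = (d[0::2, :] - d[1::2, :]) / 2
    return ll, lh, hl, hh
```

Applying the same function again to `ll` produces the next decomposition level, which is exactly the scalability to higher DWT levels that the modular structure targets.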
Abstract: Since real-world communication channels are not error free, the coded data transmitted on them may be corrupted, and block-based image coding systems are vulnerable to transmission impairment. Therefore, a best-neighborhood-match method using a genetic algorithm is used to conceal the erroneous blocks. Experimental results show that the search space can be greatly reduced by using the genetic algorithm compared with the exhaustive search method, and good image quality is achieved. The peak signal-to-noise ratios (PSNRs) of the restored images are increased greatly.
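The concealment search can be sketched as a small genetic algorithm over candidate replacement blocks, scored by a boundary-mismatch cost against the lost block's neighbors. Everything below, the `cost` interface, the operator choices, and the parameters, is a hypothetical illustration of the technique, not the paper's exact configuration:

```python
import random

def conceal_block_ga(cost, candidates, pop_size=20, generations=30, seed=0):
    """Tiny genetic algorithm over indices into `candidates`.
    `cost(c)` measures boundary mismatch between candidate block c and
    the neighbours of the lost block; far fewer candidates are evaluated
    than in an exhaustive search."""
    rng = random.Random(seed)
    pop = [rng.randrange(len(candidates)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda i: cost(candidates[i]))
        elite = pop[:pop_size // 2]          # elitism keeps the best found
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = (a + b) // 2             # crossover: midpoint of indices
            if rng.random() < 0.3:           # mutation: random jump
                child = rng.randrange(len(candidates))
            children.append(child)
        pop = elite + children
    best = min(pop, key=lambda i: cost(candidates[i]))
    return candidates[best]
```

Because the elite half survives each generation, the best candidate found so far is never lost, so the search converges even with aggressive mutation.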
Fund: Supported by the Youth Science Foundation of Sichuan Province (Nos. 22NSFSC3816 and 2022NSFSC1231), the General Project of the National Natural Science Foundation of China (Nos. 12075039 and 41874121), and the Key Project of the National Natural Science Foundation of China (No. U19A2086).
Abstract: Owing to the constraints on the fabrication of γ-ray coding plates with many pixels, few studies have been carried out on γ-ray computational ghost imaging. Thus, the development of coding plates with fewer pixels is essential to achieve γ-ray computational ghost imaging. Based on the regional similarity between Hadamard sub-coding plates, this study presents an optimization method to reduce the number of pixels of Hadamard coding plates. First, a moving-distance matrix was obtained to describe the regional similarity quantitatively. Second, based on this matrix, we used two ant colony optimization arrangement algorithms to maximize the reuse of pixels in the regionally similar areas and obtain new compressed coding plates. With full sampling, these two algorithms improved the pixel utilization of the coding plate, and the compression ratios were 54.2% and 58.9%, respectively. In addition, three undersampled sequences (the Haar, Russian dolls, and cake-cutting sequences) with different sampling rates were tested and discussed. At different sampling rates, our method reduced the number of pixels for all three sequences, especially the Russian dolls and cake-cutting sequences. Therefore, our method can reduce the number of pixels, the manufacturing cost, and the difficulty of fabricating the coding plate, which is beneficial for the implementation and application of γ-ray computational ghost imaging.
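In Hadamard ghost imaging, each row of an N×N Hadamard matrix is reshaped into a 2-D on/off mask and used as one sub-coding plate; the pixel-reuse optimization above operates on the similarity between these masks. A minimal Sylvester-construction sketch (the mapping {+1 → open, −1 → blocked} is a common convention, assumed here rather than taken from the paper):

```python
import numpy as np

def hadamard(n):
    """Sylvester construction; n must be a power of two."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def sub_coding_plate(k, side):
    """Reshape row k of a (side*side)-order Hadamard matrix into a
    2-D binary mask (1 = open pixel, 0 = blocked pixel)."""
    H = hadamard(side * side)
    return (H[k].reshape(side, side) > 0).astype(int)
```

The rows are mutually orthogonal, which is what lets the object image be reconstructed from the bucket measurements; the compression methods in the paper exploit the fact that many of these masks share large identical sub-regions.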
Fund: Project supported by the National Natural Science Foundation of China (Grant Nos. 61173183, 61672124, 61370145, and 11501064), the Password Theory Project of the 13th Five-Year Plan National Cryptography Development Fund, China (Grant No. MMJJ20170203), the China Postdoctoral Science Foundation (Grant No. 2016M590850), and the Scientific and Technological Research Program of Chongqing Municipal Education Commission, China (Grant No. KJ1500605).
Abstract: Based on Fisher–Yates scrambling and DNA coding technology, a chaos-based image encryption method is proposed. First, the SHA-3 algorithm is used to calculate the hash value of the initial password, which is used as the initial value of the chaotic system. Second, the chaotic sequence and Fisher–Yates scrambling are used to scramble the plaintext, and a sorting scrambling algorithm is used for secondary scrambling. Then, the chaotic sequence and DNA coding rules are used to change the plaintext pixel values, which makes the ciphertext more random and resistant to attacks, and thus ensures that the encrypted ciphertext is more secure. Finally, we add plaintext statistics for pixel-level diffusion to ensure plaintext sensitivity. The experimental results and security analysis show that the new algorithm has a good encryption effect and speed, and can also resist common attacks.
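The scrambling step can be sketched as a Fisher–Yates shuffle whose swap indices are drawn from a chaotic sequence instead of a PRNG. The logistic map below is a stand-in for the paper's SHA-3-seeded chaotic system, and the parameter names are illustrative:

```python
def logistic_sequence(x0, r, n):
    """Generate n chaotic values from the logistic map x <- r*x*(1-x)."""
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        seq.append(x)
    return seq

def chaotic_fisher_yates(data, x0=0.61, r=3.99):
    """Fisher-Yates shuffle whose swap index at each step is derived
    from the chaotic sequence; the same key (x0, r) always reproduces
    the same permutation, which is what makes decryption possible."""
    perm = list(data)
    keys = logistic_sequence(x0, r, len(perm))
    for i in range(len(perm) - 1, 0, -1):
        j = int(keys[i] * (i + 1)) % (i + 1)
        perm[i], perm[j] = perm[j], perm[i]
    return perm
```

Decryption regenerates the same key-derived sequence and replays the swaps in reverse order.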
Fund: Project supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (Grant No. CityU123009).
Abstract: A chaos-based cryptosystem for fractal image coding is proposed. The Renyi chaotic map is employed to determine the order of processing the range blocks and to generate the keystream for masking the encoded sequence. Compared with the standard approach of fractal image coding followed by the Advanced Encryption Standard, our scheme offers a higher sensitivity to both plaintext and ciphertext at a comparable operating efficiency. The keystream generated by the Renyi chaotic map passes the randomness tests set by the United States National Institute of Standards and Technology, and so the proposed scheme is sensitive to the key.
Fund: This work was supported by the Scientific Research Starting Project of SWPU [Zheng, D., No. 0202002131604], the Major Science and Technology Project of Sichuan Province [Zheng, D., No. 8ZDZX0143], the Ministry of Education Collaborative Education Project of China [Zheng, D., No. 952], and Fundamental Research Projects [Zheng, D., Nos. 549, 550].
Abstract: In production-line sorting systems, object movement, a fixed angle of view, light intensity, and other factors lead to blurred images, resulting in a low bar code recognition rate and poor real-time performance. Aiming at these problems, a progressive bar code compressed recognition algorithm is proposed. First, assuming that the source image is not tilted, the direct recognition method is used to quickly identify the compressed source image; failure indicates that the compression ratio is improper or the image is skewed. Then, the source image is enhanced and identified directly. Finally, the inclination of the compressed image is detected by the barcode region recognition method, and the source image is corrected to locate the barcode information in the barcode region recognition image. Experiments on multiple types of images show that the proposed method improves computational efficiency more than fivefold compared with former methods and recognizes fuzzy images better.
Abstract: Based on Jacquin's work, this paper presents an adaptive block-based fractal image coding scheme. First, masking functions are used to classify range blocks and weight the mean square error (MSE) of images. Second, an adaptive block partition scheme is introduced by developing the quadtree partition method. Third, a piecewise uniform quantization strategy is applied to quantize the luminance shifting. Finally, experimental results are shown and compared with those reported by Jacquin and Lu to verify the validity of the proposed methods.
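The adaptive partition can be sketched as a variance-driven quadtree split: a block is subdivided until it is smooth enough to code as a single range block. The variance criterion and the returned (row, col, size) leaf format below are illustrative assumptions, not the paper's exact rule:

```python
import numpy as np

def quadtree_partition(block, threshold, min_size, origin=(0, 0)):
    """Recursively split a square block until its variance falls to
    `threshold` or below, or the minimum block size is reached.
    Returns leaf regions as (row, col, size) tuples."""
    n = block.shape[0]
    if n <= min_size or block.var() <= threshold:
        return [(origin[0], origin[1], n)]
    h = n // 2
    r, c = origin
    leaves = []
    leaves += quadtree_partition(block[:h, :h], threshold, min_size, (r, c))
    leaves += quadtree_partition(block[:h, h:], threshold, min_size, (r, c + h))
    leaves += quadtree_partition(block[h:, :h], threshold, min_size, (r + h, c))
    leaves += quadtree_partition(block[h:, h:], threshold, min_size, (r + h, c + h))
    return leaves
```

Smooth regions stay as large blocks (cheap to code), while detailed regions are split down to small range blocks, which is the bit-allocation behaviour an adaptive fractal coder relies on.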
Abstract: In this paper, we propose a sparse overcomplete image approximation method based on the ideas of overcomplete log-Gabor wavelets, mean shift, and energy concentration. The proposed approximation method selects the necessary wavelet coefficients with a mean-shift-based algorithm and concentrates energy on the selected coefficients. It can sparsely approximate the original image and converges faster than the existing local-competition-based method. We then propose a new compression scheme based on this approximation method. The scheme has compression performance similar to JPEG 2000, and the images decoded with the proposed scheme appear more pleasant to the human eye than those produced by JPEG 2000.
基金sponsored by the National Natural Science Foundation of China(Grants:62002200,61772319)Shandong Natural Science Foundation of China(Grant:ZR2020QF012).
Abstract: Given a query image, it would be quite significant if all pictures falling into a similar category could simply be retrieved. However, traditional methods tend to achieve high-quality retrieval by relying on abundant learning instances, ignoring the extraction of the image's essential information, which makes it difficult to retrieve similar-category images from just one reference image. To solve this problem, we propose a refined sparse-representation-based similar-category image retrieval model. On the one hand, saliency detection and multi-level decomposition allow salient and spatial information to be taken into consideration more fully. On the other hand, the cross mutual sparse coding model extracts the image's essential features to the maximum extent possible. Finally, we set up a database containing a large number of multi-source images. Extensive comparative experiments show that our method retrieves similar-category images effectively, and ablation experiments show that nearly all procedures play their respective roles.
Abstract: An edge-oriented image sequence coding scheme is presented. On the basis of edge detection, an image can be divided into a sensitized region and a smooth region. In this scheme, the architecture of the sensitized region is approximated with linear segments, a rectangular belt is constructed for each segment, and the gray-value distribution in the region is fitted by normal-form polynomials. The model matching and motion analysis are also based on the architecture of the sensitized region. For the smooth region, run-length scanning and linear approximation are used. By means of normal-form polynomial fitting and motion prediction by matching, the images are compressed. Simulations show that the subjective quality of the reconstructed pictures is excellent at 0.0075 bits per pel.
Abstract: Digital filters are widely employed in signal processing and communication systems. In some circumstances the reliability of those systems is crucial, necessitating fault-tolerant filter implementations. Many strategies that exploit the structure and properties of the filters have been proposed over the years to achieve fault tolerance. As technology advances, more complicated systems with several filters become possible, and some of the filters in those systems frequently operate in parallel, for example by applying the same filter to various input signals. Recently, a simple strategy for achieving fault tolerance that takes advantage of the availability of such parallel filters was presented. The primary idea is to use structured authentication scan chains to examine the internal states of finite impulse response (FIR) components, so that the exact state of faulty modules can be detected and recovered from the state of non-faulty modules. In addition, a simple double modular redundancy (DMR)-based fault-tolerance solution that exploits the availability of parallel filters was developed for image denoising. This brief expands that approach to show how parallel filters can be protected using error correction codes (ECCs), in which each filter is comparable to a bit in a conventional ECC. The proposed technique, "advanced error recovery for parallel systems," can find and eliminate hidden defects in FIR modules and can restore the system after multiple failures affecting two FIR modules. In the implementation, Xilinx ISE 14.7 showed significant error-reduction capability in the fault calculations and a reduction in area, which lowers the implementation cost. Faults were introduced into all outputs of the functional filters, and the fault in every output was corrected.
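The ECC idea rests on linearity: a redundant filter applied to a sum of inputs produces the sum of the corresponding outputs, so mismatches act like parity-check failures that locate a faulty filter. Below is a minimal single-fault sketch for four parallel FIR filters; the check structure and helper names are illustrative, not the paper's exact design:

```python
import numpy as np

def fir(x, h):
    """Direct-form FIR filtering (full convolution)."""
    return np.convolve(x, h)

def run_protected(inputs, h, fault=None):
    """Four parallel FIR filters protected by three redundant filters in a
    Hamming-style check structure. `fault` optionally corrupts one output
    to simulate a hardware error; a single fault is located via the
    syndrome and the output is rebuilt from a check equation."""
    z = [fir(x, h) for x in inputs]
    # By linearity, filtering a sum of inputs equals the sum of outputs,
    # so each redundant filter acts as a parity check over three outputs.
    r1 = fir(inputs[0] + inputs[1] + inputs[2], h)
    r2 = fir(inputs[0] + inputs[1] + inputs[3], h)
    r3 = fir(inputs[0] + inputs[2] + inputs[3], h)
    if fault is not None:
        z[fault] = z[fault] + 1.0  # inject an error into one filter
    s = (not np.allclose(r1, z[0] + z[1] + z[2]),
         not np.allclose(r2, z[0] + z[1] + z[3]),
         not np.allclose(r3, z[0] + z[2] + z[3]))
    locate = {(True, True, True): 0, (True, True, False): 1,
              (True, False, True): 2, (False, True, True): 3}
    if s in locate:
        k = locate[s]
        if k == 3:
            z[3] = r2 - z[0] - z[1]       # rebuild from a check containing z4
        else:
            others = [i for i in (0, 1, 2) if i != k]
            z[k] = r1 - z[others[0]] - z[others[1]]
    return z
```

The attraction over DMR is cost: three extra filters protect four, instead of one full duplicate per filter, at the price of only locating a single faulty module per check cycle.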
Abstract: A mean-match correlation vector quantizer (MMCVQ) is presented for fast image encoding. In this algorithm, a sorted codebook is generated with respect to the mean values of all codewords. During the encoding stage, the high correlation of adjacent image blocks is exploited, and a search range in the sorted codebook is obtained according to the mean value of the current input vector. To achieve good performance, proper THd and NS values are predefined on the basis of experimental experience and an additional distortion limit. Experimental results show that the MMCVQ algorithm is much faster than the full-search VQ algorithm, while the encoding quality degradation of the proposed algorithm is only 0.3~0.4 dB compared with full-search VQ.
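The speedup comes from sorting the codebook by codeword mean and searching only a window of codewords whose means are near the input vector's mean. A minimal sketch; the fixed `window` parameter below is a simplified stand-in for the paper's THd/NS rules:

```python
import numpy as np

def sort_codebook_by_mean(codebook):
    """Sort codewords (rows) by their mean value."""
    means = codebook.mean(axis=1)
    order = np.argsort(means)
    return codebook[order], means[order]

def mmcvq_encode(vector, sorted_cb, sorted_means, window=8):
    """Full-search distortion minimisation restricted to the codewords
    whose means bracket the input vector's mean."""
    m = vector.mean()
    center = np.searchsorted(sorted_means, m)
    lo = max(0, center - window)
    hi = min(len(sorted_cb), center + window)
    cand = sorted_cb[lo:hi]
    dists = ((cand - vector) ** 2).sum(axis=1)
    return lo + int(np.argmin(dists))
```

Since the squared distance to any codeword is lower-bounded by the squared mean difference (times the vector dimension), restricting the search to mean-similar codewords discards mostly hopeless candidates, which is why the quality loss stays small.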