Rotary motion deblurring is an inevitable procedure when an imaging seeker is mounted in a rotating missile. Traditional rotary motion deblurring methods suffer from ringing artifacts and noise, especially for large blur extents. To solve these problems, we propose a progressive rotary motion deblurring framework consisting of a coarse deblurring stage and a refinement stage. In the first stage, we design an adaptive blur-extent factor (BE factor) to balance noise suppression and detail reconstruction, and we propose a novel deconvolution model based on the BE factor. In the second stage, a triple-scale deformable-module CNN (TDM-CNN) is designed to reduce the ringing artifacts; it can exploit the 2D information of an image and adaptively adjust spatial sampling locations. To establish a standard evaluation benchmark, a real-world rotary motion blur dataset is constructed and released, which includes rotary blurred images and corresponding ground-truth images with different blur angles. Experimental results demonstrate that the proposed method outperforms state-of-the-art models on synthetic and real-world rotary motion blur datasets. The code and dataset are available at https://github.com/JinhuiQin/RotaryDeblurring.
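The key geometric fact behind rotary blur is that the blur extent of a pixel grows linearly with its distance from the rotation center. A minimal sketch of that geometry, with a purely illustrative saturating form for an adaptive blur-extent factor (the paper's actual BE factor formula is not given here):

```python
import math

def rotary_blur_extent(x, y, cx, cy, theta_rad):
    """Arc length (in pixels) swept by pixel (x, y) when the image
    rotates by theta_rad about the center (cx, cy)."""
    r = math.hypot(x - cx, y - cy)
    return r * theta_rad

def be_factor(extent, k=0.1):
    """Hypothetical adaptive blur-extent factor in [0, 1): larger blur
    extents get stronger regularization (noise suppression), smaller
    extents keep more detail. Form and constant k are illustrative."""
    return k * extent / (1.0 + k * extent)
```

Pixels near the center are almost unblurred, which is why a single global regularization weight tends to either over-smooth the center or under-regularize the periphery.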
In current dual porosity/permeability models, there exists a fundamental assumption that adsorption-induced swelling is distributed uniformly within the representative elementary volume (REV), irrespective of its internal structures and transient processes. However, both internal structures and transient processes can lead to non-uniform swelling. In this study, we hypothesize that non-uniform swelling is responsible for why coal permeability in experimental measurements is not only controlled by the effective stress but also affected by adsorption-induced swelling. We propose the concept of a swelling triangle composed of swelling paths to characterize the evolution of non-uniform swelling and to serve as a core link in coupled multiphysics. A swelling path is determined by a dimensionless volumetric ratio and a dimensionless swelling ratio. Different swelling paths have the same start and end points, and each swelling path represents a unique swelling case. The path along the diagonal of the triangle represents the case of uniform swelling, while the path along the two perpendicular boundaries represents the case of localized swelling; the paths of all intermediate cases populate the interior of the triangle. The corresponding relations between the swelling path and the response of coal multiphysics are established by a non-uniform swelling coefficient. We define this method as the triangle approach and the corresponding models as swelling path-based models. The proposed concept and models are verified against a long-term experimental measurement of permeability and strains under constant effective stress. Our results demonstrate that during gas injection, coal multiphysics responses depend closely on the swelling path, and that this dependence must be considered in both future experiments and field predictions.
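The swelling-path idea can be sketched numerically: a path maps the dimensionless volumetric ratio to the dimensionless swelling ratio on the unit square, and deviation from the diagonal quantifies non-uniformity. The coefficient below (normalized area between a path and the diagonal) is an illustrative stand-in for the paper's non-uniform swelling coefficient, whose exact definition is not reproduced here:

```python
def uniform_path(v):
    # Diagonal of the swelling triangle: swelling strictly
    # proportional to the swollen volume fraction.
    return v

def localized_path(v):
    # The two perpendicular boundaries: no bulk swelling until the
    # entire volume has swollen (illustrative limiting case).
    return 0.0 if v < 1.0 else 1.0

def nonuniform_coefficient(path, n=1000):
    """Illustrative non-uniform swelling coefficient: normalized area
    between a swelling path s(v) and the uniform diagonal s = v.
    Evaluates to 0 for the uniform case, ~1 for the localized case."""
    dv = 1.0 / n
    area = sum(abs(path(i * dv) - i * dv) * dv for i in range(n))
    return area / 0.5  # maximum possible deviation area is 1/2
```

Intermediate paths inside the triangle then receive intermediate coefficients, which is what lets the path serve as a single scalar link into the coupled multiphysics model.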
Polymer flooding in fractured wells has been extensively applied in oilfields to enhance oil recovery. In contrast to water, polymer solution exhibits non-Newtonian and nonlinear behavior, including shear thinning and shear thickening, polymer convection, diffusion, adsorption retention, inaccessible pore volume, and reduced effective permeability. Meanwhile, the flux density and fracture conductivity along a hydraulic fracture are generally non-uniform due to the effects of pressure distribution, formation damage, and proppant breakage. In this paper, we present an oil-water two-phase flow model that captures this complex non-Newtonian, nonlinear behavior and the non-uniform fracture characteristics in fractured polymer flooding. The hydraulic fracture is first divided into two parts: a high-conductivity fracture near the wellbore and a low-conductivity fracture in the far-wellbore section. A hybrid grid system, combining perpendicular bisection (PEBI) and Cartesian grids, is applied to discretize the partial differential flow equations, and local grid refinement is applied in the near-wellbore region to accurately calculate the pressure distribution and the shear rate of the polymer solution. Polymer behavior characterizations and numerical flow simulations are combined to calculate the distributions of water saturation, polymer concentration, and reservoir pressure. Compared with a polymer flooding well with uniform fracture conductivity, the non-uniform fracture conductivity model exhibits a larger pressure difference and a shorter bilinear flow period due to the decreased fracture flow ability in the far-wellbore section. A field case of a fall-off test demonstrates that the proposed method characterizes fracture properties more accurately and yields fracture half-lengths that better match engineering reality, enabling a quantitative segmented characterization of the near-wellbore section with high fracture conductivity and the far-wellbore section with low fracture conductivity. The novelty of this paper lies in the analysis of pressure performance caused by fracture dynamics and polymer rheology, as well as an analysis method that derives formation and fracture parameters from the pressure and pressure-derivative curves.
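Shear-rate-dependent viscosity is the core of the non-Newtonian behavior described above. The Carreau model below is one common characterization of shear thinning in polymer solutions; the paper's specific rheology model and all parameter values here are assumptions for illustration:

```python
def carreau_viscosity(shear_rate, mu0=100.0, mu_inf=1.0, lam=1.0, n=0.5):
    """Carreau model for a shear-thinning polymer solution (mPa*s).
    mu0: zero-shear viscosity, mu_inf: infinite-shear viscosity,
    lam: relaxation time (s), n: power-law index (n < 1 => thinning).
    Parameter values are illustrative, not taken from the paper."""
    return mu_inf + (mu0 - mu_inf) * (
        1.0 + (lam * shear_rate) ** 2
    ) ** ((n - 1.0) / 2.0)
```

Because the shear rate is highest near the wellbore and inside the fracture, viscosity varies sharply there, which is one reason the local grid refinement in the near-wellbore region matters.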
Radioheliographs can obtain solar images at high temporal and spatial resolution with a high dynamic range. They are among the most important instruments for studying solar radio bursts, understanding solar eruption events, and conducting space weather forecasting. This study explores the effective use of radioheliographs for solar observations, specifically for imaging coronal mass ejections (CMEs), to track their evolution and provide space weather warnings. We have developed an imaging simulation program based on the principle of aperture synthesis imaging, covering the entire data processing flow from antenna configuration to dirty map generation. For gridding, we propose an improved non-uniform fast Fourier transform (NUFFT) method that provides superior image quality. Using simulated imaging of radio coronal mass ejections, we provide practical recommendations for the performance of radioheliographs. This study provides important support for the validation and calibration of radioheliograph data processing and is expected to deepen our understanding of solar activity.
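The dirty map that an NUFFT accelerates is, at its core, a sum of complex exponentials over non-uniformly sampled (u, v) points. A brute-force sketch of that sum (the baseline an NUFFT gridding scheme replaces with interpolation onto a regular grid plus an FFT); the point-source check below is a standard sanity test, not the paper's pipeline:

```python
import cmath

def dirty_image_value(vis, uv, l, m):
    """Direct (brute-force) evaluation of the dirty image at direction
    cosines (l, m) from visibilities vis sampled at non-uniform (u, v)
    points. An NUFFT replaces this O(N)-per-pixel sum with gridding
    onto a regular grid followed by an FFT."""
    return sum(v * cmath.exp(2j * cmath.pi * (u * l + vv * m))
               for v, (u, vv) in zip(vis, uv)).real

# A point source at (l0, m0) has visibilities
# V(u, v) = exp(-2*pi*i*(u*l0 + v*m0)), so the dirty image peaks there.
```

Because every visibility contributes coherently only at the source position, the peak value equals the number of visibilities, which makes this a convenient check when validating a gridding implementation.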
Finger vein extraction and recognition are significant in various applications due to the unique and reliable nature of finger vein patterns. While finger vein recognition has recently gained popularity, challenges remain in extracting and processing finger vein patterns related to image quality, positioning and alignment, skin conditions, security concerns, and the processing techniques applied. In this paper, a method for robust segmentation of line patterns in strongly blurred images is presented and evaluated on vessel network extraction from infrared images of human fingers. The method comprises four steps: local normalization of brightness, image enhancement, segmentation, and cleaning. A novel image enhancement method re-establishes the line patterns from the brightness sum of the independent closed-form solutions of the adopted optimization criterion derived in small windows; this reduces the computational cost significantly compared to the solution derived when the whole image is processed. In the enhanced image, where the concave structures have been sufficiently emphasized, accurate detection of line patterns is obtained by local entropy thresholding. Typical segmentation errors appearing in the binary image are removed using morphological dilation with a line structuring element and morphological filtering with a majority filter to eliminate isolated blobs. As the experimental results on both real and artificial images show, the proposed method accurately detects the vessel network in infrared finger images and can readily be applied in many image enhancement and segmentation applications.
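Entropy thresholding picks the gray level that maximizes the combined entropy of the two classes it induces (Kapur's criterion). The sketch below applies the criterion globally on a flat pixel list for clarity; the paper applies it in local windows, and this is an illustrative reconstruction rather than the paper's exact implementation:

```python
import math

def entropy_threshold(pixels, levels=256):
    """Maximum-entropy (Kapur-style) threshold selection: return the
    gray level t that maximizes H(background) + H(foreground)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    prob = [h / total for h in hist]
    best_t, best_h = 0, -1.0
    for t in range(1, levels):
        p0 = sum(prob[:t])          # mass below the threshold
        p1 = 1.0 - p0               # mass at or above it
        if p0 == 0.0 or p1 == 0.0:
            continue                # degenerate split, skip
        h0 = -sum(q / p0 * math.log(q / p0) for q in prob[:t] if q > 0)
        h1 = -sum(q / p1 * math.log(q / p1) for q in prob[t:] if q > 0)
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return best_t
```

On a bimodal window (dark vessels against brighter tissue) the selected threshold falls between the two modes, which is what separates the line patterns from the background.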
Blur is produced in a digital image by low-pass filtering, moving objects, or defocus of the camera lens during capture. Image viewers are annoyed by blur artifacts, and the image's perceived quality suffers as a result. High-quality input is relevant to communication service providers and imaging product makers because it may help them improve their processes. Human-based blur assessment is time-consuming, expensive, and must adhere to subjective evaluation standards. This paper presents a novel no-reference blur assessment algorithm based on re-blurring blurred images using a special mask developed with a Markov basis and a Laplace filter. The final blur score of a blurred image is calculated from the local variation in horizontal and vertical pixel intensity of the blurred and re-blurred images. Objective scores are generated by applying the proposed algorithm to two image databases: the Laboratory for Image and Video Engineering (LIVE) database and the Tampere Image Database (TID2013). Finally, performance is analyzed against objective and subjective scores in terms of the Pearson linear correlation coefficient (PLCC), Spearman rank-order correlation coefficient (SROCC), mean absolute error (MAE), root mean square error (RMSE), and outlier ratio (OR). Existing no-reference blur assessment algorithms have used various methods to evaluate blur, such as just noticeable blur (JNB), the cumulative probability of blur detection (CPBD), and the edge-model-based blur metric (EMBM). The results illustrate that the proposed method predicts blur scores with higher accuracy than the existing JNB, CPBD, and EMBM algorithms.
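The re-blurring principle is that a sharp image loses much of its local variation when blurred again, while an already-blurred image loses little. A 1-D sketch of that idea with a simple box filter and squared differences; the paper's Markov-basis/Laplace mask and 2-D horizontal/vertical aggregation are not reproduced here:

```python
def reblur_variation_loss(signal):
    """Fraction of local (squared-difference) variation destroyed by
    re-blurring with a 3-tap box filter. Large for sharp inputs,
    small for inputs that were already blurred. 1-D for clarity."""
    blurred = [signal[0]] + [
        (signal[i - 1] + signal[i] + signal[i + 1]) / 3.0
        for i in range(1, len(signal) - 1)
    ] + [signal[-1]]
    v_in = sum((signal[i] - signal[i - 1]) ** 2
               for i in range(1, len(signal)))
    v_bl = sum((blurred[i] - blurred[i - 1]) ** 2
               for i in range(1, len(blurred)))
    return (v_in - v_bl) / v_in if v_in else 0.0
```

A no-reference blur score can then be derived from this loss ratio: the smaller the loss, the blurrier the input is judged to be.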
Background: For static scenes with multiple depth layers, existing defocused image deblurring methods suffer from edge-ringing artifacts or insufficient deblurring owing to inaccurate estimation of the blur amount, and the prior knowledge used in non-blind deconvolution is not strong, which makes image detail recovery challenging. Methods: To this end, this study proposes a blur map estimation method for defocused images based on the gradient difference of the boundary neighborhood, which uses this gradient difference to accurately obtain the blur amount, thereby preventing boundary ringing artifacts. The obtained blur map is then used for blur detection to determine whether the image needs to be deblurred, improving the efficiency of deblurring without manual intervention and judgment. Finally, a non-blind deconvolution algorithm is designed to achieve image deblurring based on a blur amount selection strategy and a sparse prior. Results: Experiments showed that our method improves PSNR (peak signal-to-noise ratio) and SSIM (structural similarity index) by an average of 4.6% and 7.3%, respectively, compared to existing methods. Conclusions: The proposed method outperforms existing methods and better addresses boundary ringing artifacts and the preservation of detail information in defocused image deblurring.
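Estimating the blur amount from gradients near an edge is a well-studied idea; a classical variant (Zhuo and Sim's gradient-ratio method, shown here as a stand-in since the paper's boundary-neighborhood formula is not given) recovers the defocus sigma from how much a known re-blur attenuates the edge gradient:

```python
import math

def edge_gradient(amplitude, sigma):
    # Peak gradient of an ideal step edge of the given amplitude after
    # Gaussian blur with standard deviation sigma: A / (sqrt(2*pi)*sigma).
    return amplitude / (math.sqrt(2 * math.pi) * sigma)

def estimate_blur_sigma(g_edge, g_reblurred, sigma0):
    """Recover the unknown defocus sigma at a step edge from the ratio
    of gradient magnitudes before and after re-blurring with a known
    sigma0. For Gaussians, r = sqrt(sigma^2 + sigma0^2) / sigma, so
    sigma = sigma0 / sqrt(r^2 - 1)."""
    r = g_edge / g_reblurred
    return sigma0 / math.sqrt(r * r - 1.0)
```

Per-edge estimates like this, propagated from boundaries into regions, are what a blur map assembles; an inaccurate estimate at a boundary is exactly what produces the ringing the paper targets.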
A fuzzy extractor can extract an almost uniform random string from a noisy source with enough entropy, such as biometric data. To reproduce an identical key from repeated readings of biometric data, the fuzzy extractor generates helper data and a random string from the biometric data, and uses the helper data to reproduce the random string from a second reading. In 2013, Fuller et al. proposed a computational fuzzy extractor based on the learning with errors (LWE) problem. Their construction, however, can tolerate only a sub-linear fraction of errors and has an inefficient decoding algorithm, which causes the reproduction time to increase significantly. In 2016, Canetti et al. proposed a fuzzy extractor for inputs from low-entropy distributions based on a strong primitive called a digital locker. However, their construction requires an excessive amount of storage space for the helper data, which is stored on the authentication server. Based on these observations, we propose a new, efficient computational fuzzy extractor with small helper data. Our scheme supports reusability and robustness, security notions that must be satisfied for a fuzzy extractor to serve as a secure authentication method in practice. It also leaks no information about the biometric data and, thanks to a new decoding algorithm, can tolerate linear errors. Based on the non-uniform learning with errors problem, we present a formal security proof for the proposed fuzzy extractor. Furthermore, we analyze the performance of our scheme and provide parameter sets that meet the security requirements. Our implementation and analysis show that our scheme outperforms previous fuzzy extractor schemes in the efficiency of the generation and reproduction algorithms, as well as in the size of the helper data.
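The generate/reproduce interface described above can be illustrated with the classical code-offset construction: the helper data is a codeword XORed with the biometric reading, and errors up to the code's correction radius are absorbed by decoding. This toy uses a 5-bit repetition code for a single key bit; real schemes, including the paper's LWE-based one, differ substantially:

```python
import secrets

def encode(bit, n=5):
    # Repetition code: one key bit -> n identical bits.
    return [bit] * n

def decode(bits):
    # Majority decode: corrects up to floor(n/2) bit flips.
    return 1 if sum(bits) > len(bits) // 2 else 0

def generate(w, n=5):
    """Gen: from biometric bits w, output a random key and helper data
    P = codeword(key) XOR w. Toy code-offset sketch only."""
    key = secrets.randbelow(2)
    helper = [c ^ b for c, b in zip(encode(key, n), w[:n])]
    return key, helper

def reproduce(w2, helper):
    """Rep: XOR the helper with a fresh reading w2 and decode; the
    noise w XOR w2 lands on the codeword and is corrected."""
    return decode([h ^ b for h, b in zip(helper, w2)])
```

The trade-off the abstract discusses lives in `decode`: a weak code (like repetition) tolerates few errors per key bit, while stronger decoders tolerate more errors at the cost of decoding time or helper-data size.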
An image can be degraded by many environmental factors, such as foggy or hazy weather, low-light conditions, or excess light. An image captured under poor lighting conditions is generally known as a non-uniform illumination image. Non-uniform illumination hides important information present in the image during capture and degrades its visual quality, which creates the need for enhancement of such images. Various techniques have been presented in the literature for the enhancement of such images. In this paper, a novel architecture is proposed for the enhancement of poorly illuminated images that uses radial basis approximation-based BEMD (bi-dimensional empirical mode decomposition). The enhancement algorithm is applied to the intensity and saturation components of the image. First, the intensity component is decomposed into bi-dimensional intrinsic mode functions and a residue using the sifting algorithm. Second, linear transformations are applied to the bi-dimensional intrinsic modes and the residue, and the transformed modes are recombined with the residue to obtain the enhanced intensity component. The saturation component is then enhanced in accordance with the enhanced intensity component. The final enhanced image is obtained by combining the hue, enhanced intensity, and enhanced saturation components. The proposed algorithm not only gives a visually pleasant image but also maintains the naturalness of the image.
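The recombination and saturation steps above can be sketched compactly: each intrinsic mode function is linearly re-weighted before being added back to the residue, and saturation is scaled by the local intensity gain. The gains are illustrative (the paper derives its own linear transformations), and 1-D lists stand in for 2-D images:

```python
def enhance_intensity(imfs, residue, gains):
    """Recombine linearly re-weighted bi-dimensional IMFs with the
    residue to form the enhanced intensity component. gains[i] scales
    imfs[i]; values are illustrative assumptions."""
    return [sum(g * imf[i] for g, imf in zip(gains, imfs)) + residue[i]
            for i in range(len(residue))]

def enhance_saturation(sat, intensity, enhanced_intensity):
    # Scale saturation in proportion to the per-pixel intensity gain so
    # colors stay consistent with the brightened image.
    return [s * (ei / i if i else 1.0)
            for s, i, ei in zip(sat, intensity, enhanced_intensity)]
```

With all gains equal to 1, the recombination reproduces the original intensity exactly, which is a useful invariant when testing a sifting implementation.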
In this paper, we present our efforts to find a novel solution for motion deblurring in videos. In addition, our solution must be camera-independent: it is implemented entirely in software and is not aware of any characteristics of the camera. We found a solution by implementing a hybrid Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM) model. Our CNN-LSTM is able to deblur video without any knowledge of the camera hardware, so it can be deployed on any system even when the camera is swapped for a model with different physical characteristics.
Background: Non-uniformity in signal intensity occurs commonly in magnetic resonance (MR) imaging and may pose substantial problems when using a 3T scanner; therefore, image non-uniformity correction is usually applied. Purpose: To compare the correction effects of phased-array uniformity enhancement (PURE), a calibration-based image non-uniformity correction method, among three software versions in 3T Gd-EOB-DTPA-enhanced MR imaging. Material and Methods: Hepatobiliary-phase images of 120 patients who underwent Gd-EOB-DTPA-enhanced MR imaging on the same 3T scanner were analyzed retrospectively. Forty patients each were examined using one of three software versions (DV25, DV25.1, and DV26). The effects of PURE were compared by visual assessment, histogram analysis of liver signal intensity, evaluation of the spatial distribution of correction effects, and evaluation of quantitative indices of liver parenchymal enhancement. Results: The visual assessment indicated the highest uniformity of PURE-corrected images for DV26, followed by DV25 and DV25.1. Histogram analysis of corrected images demonstrated significantly larger variations in liver signal for DV25.1 than for the other two versions. Although PURE caused a relative increase in pixel values in the central and lateral regions, these effects were weaker for DV25.1 than for the other two versions. Among the quantitative indices of liver parenchymal enhancement, the liver-to-muscle ratio (LMR) was significantly higher for corrected images than for uncorrected images, whereas the liver-to-spleen ratio (LSR) showed no significant differences. For corrected images, the LMR was significantly higher for DV25 and DV26 than for DV25.1, but the LSR showed no significant differences among the three versions. Conclusion: The effects of PURE differed among the three software versions in 3T Gd-EOB-DTPA-enhanced MR imaging. Even if a non-uniformity correction method has the same brand name, its effects may differ depending on the software version, and these differences may affect visual and quantitative evaluations.
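The two quantitative indices compared above are simple signal-intensity ratios; the sketch below uses their standard definitions (mean ROI signal intensities), since the study's exact ROI protocol is not given here:

```python
def liver_to_muscle_ratio(si_liver, si_muscle):
    # LMR: mean liver signal intensity / mean muscle signal intensity.
    return si_liver / si_muscle

def liver_to_spleen_ratio(si_liver, si_spleen):
    # LSR: mean liver signal intensity / mean spleen signal intensity.
    return si_liver / si_spleen
```

Because muscle lies peripherally and spleen lies closer to the liver, a spatially varying correction such as PURE can shift the LMR while leaving the LSR nearly unchanged, which matches the pattern the study reports.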
Funding: Supported by the National Natural Science Foundation of China under Grant 62075169, Grant 62003247, and Grant 62061160370; the Hubei Province Key Research and Development Program under Grant 2021BBA235; and the Zhuhai Basic and Applied Basic Research Foundation under Grant ZH22017003200010PWC.
Funding: Supported by the Australian Research Council (Grant No. DP200101293) and the UWA-China Joint Scholarships (201906430030).
Funding: Supported by the National Natural Science Foundation of China (No. 52104049), the Young Elite Scientist Sponsorship Program by the Beijing Association for Science and Technology (No. BYESS2023262), and the Science Foundation of China University of Petroleum, Beijing (No. 2462022BJRC004).
Funding: Supported by grants from the National Natural Science Foundation of China (42374219, 42127804) and the Qilu Young Researcher Project of Shandong University.
Funding: Supported by the National Natural Science Foundation of China (62172190), the "Double Creation" Plan of Jiangsu Province (JSSCRC2021532), and the "Taihu Talent-Innovative Leading Talent" Plan of Wuxi City (Certificate Date: 202110).
Funding: Supported by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2022-0-00518, Blockchain privacy preserving techniques based on data encryption).
Abstract: A fuzzy extractor can extract an almost uniform random string from a noisy source with enough entropy, such as biometric data. To reproduce an identical key from repeated readings of biometric data, the fuzzy extractor generates helper data and a random string from the biometric data and uses the helper data to reproduce the random string from a second reading. In 2013, Fuller et al. proposed a computational fuzzy extractor based on the learning with errors problem. Their construction, however, can tolerate only a sub-linear fraction of errors and has an inefficient decoding algorithm, which increases the reproduction time significantly. In 2016, Canetti et al. proposed a fuzzy extractor for inputs from low-entropy distributions based on a strong primitive called a digital locker. However, their construction requires an excessive amount of storage space for the helper data, which is stored on the authentication server. Based on these observations, we propose a new, efficient computational fuzzy extractor with a small helper-data size. Our scheme supports reusability and robustness, security notions that must be satisfied for a fuzzy extractor to serve as a secure authentication method in practice. It also conceals all information about the biometric data and, thanks to the new decoding algorithm, can tolerate linear errors. Based on the non-uniform learning with errors problem, we present a formal security proof for the proposed fuzzy extractor. Furthermore, we analyze the performance of our scheme and provide parameter sets that meet the security requirements. Our implementation and analysis show that the scheme outperforms previous fuzzy extractors in the efficiency of the generation and reproduction algorithms as well as in helper-data size.
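For readers unfamiliar with the generate/reproduce interface, here is a toy code-offset fuzzy extractor (in the classic Dodis et al. style) using a repetition code. It is an information-theoretic illustration of the interface only, NOT the paper's LWE-based construction; key length, repetition factor, and the SHA-256 key derivation are assumptions of the sketch.

```python
import hashlib
import secrets

def _rep_encode(bits, r):
    """Repetition-code encoding: repeat each bit r times."""
    return [b for b in bits for _ in range(r)]

def _rep_decode(bits, r):
    """Majority vote over each block of r copies (corrects < r/2 flips)."""
    return [int(sum(bits[i:i + r]) > r // 2) for i in range(0, len(bits), r)]

def gen(w, key_len=16, r=5):
    """Generate (key, helper) from a reading w (list of bits, len key_len*r).
    helper = w XOR encode(k) is the code-offset secure sketch."""
    assert len(w) == key_len * r
    k = [secrets.randbelow(2) for _ in range(key_len)]
    helper = [wi ^ ci for wi, ci in zip(w, _rep_encode(k, r))]
    key = hashlib.sha256(bytes(k)).hexdigest()
    return key, helper

def reproduce(w2, helper, r=5):
    """Recover the key from a noisy second reading w2 via the helper data:
    w2 XOR helper = errors XOR encode(k), then decode away the errors."""
    k = _rep_decode([wi ^ hi for wi, hi in zip(w2, helper)], r)
    return hashlib.sha256(bytes(k)).hexdigest()
```

With the repetition factor 5 used here, up to two bit flips per block are corrected; the paper's contribution is precisely to achieve a linear error rate with small helper data, which this toy code does not attempt.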
Funding: This research is financially supported by the Deanship of Scientific Research at King Khalid University under research grant number R.G.P 2/157/43.
Abstract: An image can be degraded by many environmental factors, such as foggy or hazy weather, low-light conditions, or excess light. An image captured under poor lighting is generally known as a non-uniform illumination image. Non-uniform illumination hides important information present in the image during capture and degrades its visual quality, which creates the need for enhancement of such images. Various techniques have been presented in the literature for enhancing images of this type. In this paper, a novel architecture is proposed for enhancing poorly illuminated images that uses radial-basis-approximation-based BEMD (bi-dimensional empirical mode decomposition). The enhancement algorithm is applied to the intensity and saturation components of the image. First, the intensity component is decomposed into bi-dimensional intrinsic mode functions and a residue using the sifting algorithm. Second, linear transformation techniques are applied to the bi-dimensional intrinsic modes and the residue, and joining the transformed modes with the residue yields the enhanced intensity component. The saturation part of the image is then enhanced in accordance with the enhanced intensity component. The final enhanced image is obtained by joining the hue, the enhanced intensity, and the enhanced saturation of the given image. The proposed algorithm not only gives a visually pleasant image but also maintains its naturalness.
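The decompose-transform-recombine pipeline on the intensity channel can be illustrated with a heavily simplified stand-in: a box blur replaces the sifting of true bi-dimensional IMFs, giving one "detail" component plus a smooth residue; a linear gain on the detail and a gamma stretch on the residue then model the linear transformations before recombination. All parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

def enhance_intensity(I, detail_gain=1.5, residue_gamma=0.6):
    """Toy one-level analogue of the BEMD enhancement pipeline.
    I: intensity channel in [0, 1].  A box blur stands in for sifting:
    residue = smooth base, detail = coarse analogue of the first BIMF."""
    I = I.astype(np.float64)
    k = np.ones(15) / 15
    residue = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, I)
    residue = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, residue)
    detail = I - residue
    # linear gain on detail, gamma stretch on the dark residue, recombine
    enhanced = detail_gain * detail + np.clip(residue, 0.0, 1.0) ** residue_gamma
    return np.clip(enhanced, 0.0, 1.0)
```

On a dark input the gamma-stretched residue brightens the base illumination while the gained detail term preserves local texture, which is the qualitative behaviour the full BEMD pipeline targets.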
Abstract: In this paper, we present our efforts to find a novel solution for motion deblurring in videos. In addition, our solution must be camera-independent: it is implemented entirely in software and is not aware of any characteristics of the camera. We found a solution by implementing a Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM) hybrid model. Our CNN-LSTM can deblur video without any knowledge of the camera hardware, allowing it to be deployed on any system in which the camera can be swapped for another model with different physical characteristics.
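The abstract does not specify the architecture, but the CNN-LSTM hybrid pattern it names can be sketched as follows: a shared CNN encodes each frame, an LSTM aggregates temporal context across the clip, and a decoder predicts the deblurred frames. All layer sizes and the tiny 32x32 frame resolution are assumptions of this sketch, not the authors' model.

```python
import torch
import torch.nn as nn

class DeblurCNNLSTM(nn.Module):
    """Toy CNN-LSTM video-deblurring sketch (illustrative sizes only)."""
    def __init__(self, hidden=64):
        super().__init__()
        # per-frame CNN encoder, shared across time steps
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8), nn.Flatten())           # -> 32*8*8 = 2048
        # LSTM aggregates temporal context over the clip
        self.lstm = nn.LSTM(32 * 8 * 8, hidden, batch_first=True)
        # decoder predicts a (tiny) deblurred frame per time step
        self.decoder = nn.Sequential(
            nn.Linear(hidden, 3 * 32 * 32), nn.Sigmoid())

    def forward(self, clip):                 # clip: (B, T, 3, 32, 32)
        b, t = clip.shape[:2]
        feats = self.encoder(clip.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.decoder(out).view(b, t, 3, 32, 32)
```

Nothing in this model depends on camera parameters, which is the point of the camera-independence claim: the temporal blur pattern is learned from the frames themselves.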
Abstract: Background: Non-uniformity in signal intensity occurs commonly in magnetic resonance (MR) imaging and may pose substantial problems when using a 3T scanner; therefore, image non-uniformity correction is usually applied. Purpose: To compare the correction effects of phased-array uniformity enhancement (PURE), a calibration-based image non-uniformity correction method, among three different software versions in 3T Gd-EOB-DTPA-enhanced MR imaging. Material and Methods: Hepatobiliary-phase images of 120 patients who underwent Gd-EOB-DTPA-enhanced MR imaging on the same 3T scanner were analyzed retrospectively. Forty patients each were examined using one of three software versions (DV25, DV25.1, and DV26). The effects of PURE were compared by visual assessment, histogram analysis of liver signal intensity, evaluation of the spatial distribution of correction effects, and evaluation of quantitative indices of liver parenchymal enhancement. Results: The visual assessment indicated the highest uniformity of PURE-corrected images for DV26, followed by DV25 and DV25.1. Histogram analysis of corrected images demonstrated significantly larger variations in liver signal for DV25.1 than for the other two versions. Although PURE caused a relative increase in pixel values for central and lateral regions, these effects were weaker for DV25.1 than for the other two versions. Among the quantitative indices of liver parenchymal enhancement, the liver-to-muscle ratio (LMR) was significantly higher for the corrected images than for the uncorrected images, whereas the liver-to-spleen ratio (LSR) showed no significant differences. For corrected images, the LMR was significantly higher for DV25 and DV26 than for DV25.1, but the LSR showed no significant differences among the three versions. Conclusion: The effects of PURE differed among the three software versions in 3T Gd-EOB-DTPA-enhanced MR imaging. Even if a non-uniformity correction method carries the same brand name, its correction effects may differ depending on the software version, and these differences may affect visual and quantitative evaluations.
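The two quantitative indices compared in the study reduce to simple ratios of mean region-of-interest (ROI) signal intensities. The helper below illustrates that arithmetic; ROI placement and the statistical comparison across software versions are outside the scope of this sketch, and the function name is an assumption.

```python
import numpy as np

def enhancement_ratios(liver_roi, muscle_roi, spleen_roi):
    """Compute liver-to-muscle ratio (LMR) and liver-to-spleen ratio (LSR)
    from mean signal intensities of ROI pixel arrays."""
    liver = float(np.mean(liver_roi))
    lmr = liver / float(np.mean(muscle_roi))
    lsr = liver / float(np.mean(spleen_roi))
    return lmr, lsr
```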