The classification of high-dimensional hyperspectral images is a challenging task because of the large spectral feature vectors. The high correlation between these features and the presence of noise greatly affect classification performance. To overcome this, dimensionality reduction techniques are widely used. Numerous deep learning models have recently been proposed for traditional image processing applications; however, their features are less explored for hyperspectral image classification. Thus, for efficient hyperspectral image classification, a depth-wise convolutional neural network is presented in this work. To handle the dimensionality issue in the classification process, a self-organizing map (SOM) optimized by the water strider optimization algorithm is employed. The water strider optimization tunes the network parameters of the SOM, which reduces the dimensionality issues and enhances classification performance. Standard datasets such as Indian Pines and the University of Pavia (UP) are considered for the experimental analysis. Existing dimensionality reduction methods, including Enhanced Hybrid-Graph Discriminant Learning (EHGDL), local geometric structure Fisher analysis (LGSFA), Discriminant Hyper-Laplacian Projection (DHLP), the group-based tensor model (GBTM), and lower-rank tensor approximation (LRTA), are compared with the proposed optimized SOM model. The results confirm the superior performance of the proposed model, with 98.22% accuracy on the Indian Pines dataset and 98.21% accuracy on the University of Pavia dataset, over the existing maximum likelihood classifier and support vector machine (SVM).
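The SOM component of the abstract can be illustrated with a minimal, self-contained sketch. This is hypothetical code, not the authors' implementation: the water strider optimization of the SOM parameters is omitted, and the grid size, learning-rate schedule, and neighbourhood kernel are illustrative choices. Each grid node holds a weight vector, and projecting a sample to its best-matching unit yields a low-dimensional (grid-coordinate) representation.

```python
import math
import random

def train_som(samples, grid_w=2, grid_h=2, epochs=40, lr0=0.5, sigma0=1.5, seed=0):
    """Minimal self-organizing map: each grid node holds a weight vector;
    the best-matching unit (BMU) and its neighbours move toward each sample."""
    rng = random.Random(seed)
    dim = len(samples[0])
    nodes = {(i, j): [rng.random() for _ in range(dim)]
             for i in range(grid_w) for j in range(grid_h)}
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)               # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 1e-9  # shrinking neighbourhood
        for x in samples:
            # BMU = node whose weight vector is closest to the sample
            bmu = min(nodes, key=lambda n: sum((w - v) ** 2
                                               for w, v in zip(nodes[n], x)))
            for n, w in nodes.items():
                d2 = (n[0] - bmu[0]) ** 2 + (n[1] - bmu[1]) ** 2
                h = math.exp(-d2 / (2 * sigma ** 2))  # neighbourhood kernel
                nodes[n] = [wi + lr * h * (xi - wi) for wi, xi in zip(w, x)]
    return nodes

def project(nodes, x):
    """Reduce a sample to the grid coordinates of its BMU."""
    return min(nodes, key=lambda n: sum((w - v) ** 2
                                        for w, v in zip(nodes[n], x)))
```

After training on two well-separated clusters, samples from different clusters should project to different grid nodes, which is the dimensionality-reduced representation a downstream classifier would consume.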
A method and procedure are presented to reconstruct the three-dimensional (3D) positions of scattering centers from multiple synthetic aperture radar (SAR) images. First, two-dimensional (2D) attributed scattering centers of targets are extracted from the 2D SAR images. Second, a similarity measure is developed based on the locations and types of the 2D attributed scattering centers and on the radargrammetry principle between multiple SAR images. Using this similarity, 2D scattering centers are associated to obtain candidate 3D scattering centers. Third, these candidate scattering centers are clustered in 3D space to reconstruct the final 3D positions. Compared with existing methods, the proposed method can describe distributed scattering centers, reduces false and missing 3D scattering centers, and imposes fewer restrictions on the modeling data. Finally, experimental results demonstrate the effectiveness of the proposed method.
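The third step, clustering candidate 3D scattering centers, can be sketched with a simple greedy centroid clustering; the `radius` parameter and the greedy assignment scheme are illustrative assumptions, not the paper's actual clustering rule.

```python
def cluster_centers(points, radius=1.0):
    """Greedy clustering of 3-D candidate points: assign each point to the
    first cluster whose running centroid lies within `radius`, otherwise
    start a new cluster; return the final centroids."""
    clusters = []  # each entry: [sum_x, sum_y, sum_z, count]
    for x, y, z in points:
        for c in clusters:
            cx, cy, cz = c[0] / c[3], c[1] / c[3], c[2] / c[3]
            if (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 <= radius ** 2:
                c[0] += x; c[1] += y; c[2] += z; c[3] += 1
                break
        else:
            clusters.append([x, y, z, 1])
    return [(c[0] / c[3], c[1] / c[3], c[2] / c[3]) for c in clusters]
```

Nearby candidates produced by different image pairs collapse into one reconstructed scattering center, while outliers form their own (small) clusters that can be discarded by a count threshold.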
In order to improve the diagnostic and analytic ability of 3D spiral CT and to reconstruct the contours of 3D spiral CT damage images, a contour reconstruction method based on sharpening template enhancement is proposed. The method uses an active contour LASSO model to extract the contour features of the 3D spiral CT damage image, enhances the information with a sharpening template enhancement technique, and separates the noise from the image. The spiral CT image is enhanced, and a statistical shape model of the 3D spiral CT damage image is established. A gradient algorithm is used to decompose the features to realize the analysis and reconstruction of the contour features, so as to improve the adaptive feature matching ability and the ability to locate abnormal feature points. The simulation results show that, in 3D spiral CT damage image contour reconstruction, the proposed method performs well in the feature matching of the output pixels, shortens the contour reconstruction time by 20 ms, and provides a strong ability to express the image information. The normalized reconstruction error is 30% lower, which improves the recognition ability for 3D spiral CT damage images, and the peak output signal-to-noise ratio is increased by 40 dB over other methods.
At present, many chaos-based image encryption algorithms have been proved unsafe, and few encryption schemes permute the plain image as a three-dimensional (3D) bit matrix, so bits cannot move to arbitrary positions and their movement range is limited. Motivated by this, we present a novel image encryption algorithm based on 3D Brownian motion and chaotic systems. The architecture of confusion and diffusion is adopted. First, the plain image is converted into a 3D bit matrix and split into sub-blocks. Second, block confusion based on 3D Brownian motion (BCB3DBM) is proposed to permute the positions of the bits within the sub-blocks, with the direction of particle movement generated by the logistic-tent system (LTS). Furthermore, block confusion based on position sequence group (BCBPSG) is introduced: a fourth-order memristive chaotic system is used to generate random chaotic sequences, the sequences are sorted, a position sequence group is chosen based on the plain image, and the sub-blocks are then confused. The proposed confusion strategy can change the positions of the bits and modify their weights, effectively improving the statistical performance of the algorithm. Finally, a pixel-level confusion is employed to enhance the encryption effect. The initial values and parameters of the chaotic systems are produced by the SHA-256 hash of the plain image. Simulation results and security analyses illustrate that our algorithm has excellent encryption performance in terms of security and speed.
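The LTS that drives the Brownian directions has a commonly cited piecewise form; the sketch below assumes that form, and the discretisation of chaotic values into six axis directions is a hypothetical illustration, not necessarily the paper's exact construction.

```python
def lts_sequence(x0, r, n):
    """Logistic-tent system (LTS), one common form:
       x_{k+1} = (r*x*(1-x) + (4-r)*x/2)       mod 1,  if x < 0.5
       x_{k+1} = (r*x*(1-x) + (4-r)*(1-x)/2)   mod 1,  otherwise
    with control parameter r in (0, 4]; returns n chaotic values in [0, 1)."""
    seq, x = [], x0
    for _ in range(n):
        if x < 0.5:
            x = (r * x * (1 - x) + (4 - r) * x / 2) % 1
        else:
            x = (r * x * (1 - x) + (4 - r) * (1 - x) / 2) % 1
        seq.append(x)
    return seq

def directions_from_chaos(seq):
    """Map each chaotic value to one of six axis-aligned unit moves for a
    3-D Brownian-style bit walk (hypothetical discretisation)."""
    moves = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    return [moves[int(v * 6) % 6] for v in seq]
```

Because the initial value and parameter would come from the SHA-256 hash of the plain image, the generated direction stream differs per image, which is what ties the permutation to the plaintext.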
A method of fabricating multi-core polymer image fiber is proposed. The image fiber preform is fabricated in a single step by orderly stacking thousands of polymer fibers, each with a diameter of 0.25 mm, in a die. The preform is heated and drawn into image fiber with an outer diameter of 2 mm. A portable eyewear-style three-dimensional (3D) endoscope system is then designed, fabricated, and characterized. The endoscopic system is composed of two graded-index lenses, two 0.35-m lengths of image guide fiber, and a pair of oculars. It shows good flexibility and portability, and can provide depth information accordingly.
This paper advances a three-dimensional space interpolation method for grey/depth image sequences, which breaks free from the limits of the original photographing route: virtual views can be placed anywhere in the space. By using sparse spatial sampling, a great deal of storage can be saved and the reproduced scenes can be controlled. To reduce the time-consuming and complex computations of three-dimensional interpolation, we have studied a fast, practical algorithm based on a scattered space lattice and a 'warp' algorithm with proper depth. By exploiting several simple properties of three-dimensional space interpolation, we have developed some simple and practical algorithms. Results of simulated computer experiments have shown that the new method is entirely feasible.
BACKGROUND: Fewer than 200 cases of diaphragmatic tumors have been reported in the past century, and diaphragmatic hemangiomas are extremely rare; only nine cases have been reported in the English literature to date. We report a case of cavernous hemangioma arising from the diaphragm. Preoperative three-dimensional (3D) simulation and minimally invasive thoracoscopic excision were performed successfully, and we describe the radiologic findings and the surgical procedure in the following article. CASE SUMMARY: A 40-year-old man was referred for further examination of a mass over the right basal lung without specific symptoms. Contrast-enhanced computed tomography revealed a poorly enhanced lesion in the right basal lung, abutting the diaphragm and measuring 3.1 cm × 1.5 cm. The mediastinum appeared clear, with no evidence of an abnormal mass or lymphadenopathy. A preoperative 3D image was reconstructed, which revealed a diaphragmatic lesion. Video-assisted thoracic surgery was performed, and a red papillary tumor originating from the right diaphragm was found. The tumor was resected, and the pathological diagnosis was cavernous hemangioma. CONCLUSION: In this rare case of diaphragmatic hemangioma, 3D image simulation was helpful for preoperative evaluation and surgical decision making.
This paper studies the application of a radial basis function network (RBFN), trained by the recursive least squares algorithm (RLSA), to the recognition of one-dimensional images of radar targets. The equivalence between the RBFN and the Parzen-window probability density estimate is proved. It is pointed out that the I/O functions of the RBFN hidden units can also be generalized to general Parzen-window probability kernel functions or potential functions. The paper discusses the effects of the shape parameter of the RBFN and the forgetting factor of the RLSA on the recognition results for three kinds of kernel function (Gaussian, triangular, and double-exponential), and also discusses the relationship between the forgetting factor and the training time of the RBFN.
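The proved equivalence can be made concrete: a Parzen-window density estimate is an average of kernel bumps centred on the training samples, which is what an RBFN hidden layer with uniform output weights computes up to normalisation. A small one-dimensional sketch with the three kernels the paper compares (the bandwidth `h` is an illustrative stand-in for the shape parameter):

```python
import math

def parzen_density(x, samples, h=1.0, kernel="gaussian"):
    """1-D Parzen-window density estimate: the average of kernel bumps
    centred on the samples, scaled by the bandwidth h."""
    kernels = {
        "gaussian": lambda u: math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi),
        "triangle": lambda u: max(0.0, 1.0 - abs(u)),
        "double-exponential": lambda u: 0.5 * math.exp(-abs(u)),
    }
    k = kernels[kernel]
    return sum(k((x - s) / h) for s in samples) / (len(samples) * h)
```

Swapping the kernel swaps the hidden-unit I/O function, which is exactly the generalisation the abstract describes.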
The properties and characteristics of the polymer used to prepare matrix drug delivery systems considerably influence their performance, the extent of drug release, and its mechanism. The objective of this research was to examine, using an image analysis method, the dimensional changes and gel evolution of polymer matrices consisting of three different polymers, Polyox and sodium alginate (hydrophilic) and Ethocel (hydrophobic), and furthermore to explore how these changes influence the release rate of a soluble drug, venlafaxine. All tablets displayed marked dimensional expansion and gel growth, particularly those consisting of the two hydrophilic polymers Polyox/sodium alginate (POL/SA/V) compared with the hydrophilic/hydrophobic Polyox/Ethocel (POL/ET/V). Similarly, the thickness of the gel layer in the POL/SA/V matrices increased considerably with time, up to 8 hours. In general, our findings show that the POL/SA/V matrices, owing to their thicker gel layer, produced a more effective barrier, resulting in more pronounced sustained release. This accounts for the slower and smaller overall drug release observed with the POL/SA/V matrices compared with POL/ET/V and indicates that the formation of a thick and durable gel barrier is a necessary characteristic for the preparation of sustained drug release systems. Moreover, the solubility of venlafaxine in combination with the polymer's properties appears to play an important role in the extent of drug release and the release mechanism. Overall, the polymer mixtures examined comprise a useful and promising combination of materials for the development and manufacture of sustained release preparations.
Hyperspectral images (HSIs) contain a wealth of spectral information, which makes fine classification of ground objects possible. Meanwhile, the overly redundant information in an HSI brings many challenges; specifically, the lack of training samples and the high computational cost are inevitable obstacles in classifier design. To solve these problems, dimensionality reduction is usually adopted, and graph-based dimensionality reduction has recently become a hot topic. In this paper, graph-based methods for HSI dimensionality reduction are summarized from the following aspects. 1) Traditional graph-based methods employ Euclidean distance to explore the local information of samples in the spectral feature space. 2) Dimensionality reduction methods based on sparse or collaborative representation regard the sparse or collaborative coefficients as graph weights, effectively reducing reconstruction errors and representing the most important information of the HSI in a dictionary. 3) Improved methods based on sparse or collaborative graphs have made great progress by considering global low-rank information, local intra-class information, and spatial information. To compare typical techniques, three real HSI datasets were used to carry out relevant experiments, and the experimental results are analysed and discussed. Finally, the future development of this research field is discussed.
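Aspect 1) can be sketched in a few lines: a k-nearest-neighbour graph over the spectral vectors, with edges weighted by the heat kernel of the Euclidean distance (the LPP-style construction; the parameter names `k` and `t` are illustrative).

```python
import math

def heat_kernel_graph(samples, k=2, t=1.0):
    """Weight matrix of a traditional graph-based reduction method:
    connect each sample to its k nearest Euclidean neighbours and weight
    each edge with the heat kernel exp(-d^2 / t)."""
    n = len(samples)

    def d2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))

    W = [[0.0] * n for _ in range(n)]
    for i in range(n):
        order = sorted((j for j in range(n) if j != i),
                       key=lambda j: d2(samples[i], samples[j]))
        for j in order[:k]:
            w = math.exp(-d2(samples[i], samples[j]) / t)
            W[i][j] = max(W[i][j], w)
            W[j][i] = W[i][j]  # keep the graph symmetric
    return W
```

The sparse- and collaborative-representation variants in aspect 2) replace these heat-kernel weights with representation coefficients, but the downstream use of the weight matrix is the same.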
Image matching technology is theoretically significant and practically promising in the field of autonomous navigation. Addressing the shortcomings of existing image matching navigation technologies, the concept of a high-dimensional combined feature is presented based on sequence image matching navigation. To balance the distribution of high-dimensional combined features against the shortcomings of using geometric relations alone, we propose a method based on Delaunay triangulation to improve the feature, adding the regional characteristics of the features to their geometric characteristics. Finally, the k-nearest neighbor (KNN) algorithm is adopted to optimize the search process. Simulation results show that matching can be realized at rotation angles of -8° to 8° and scale factors of 0.9 to 1.1, and that for an image size of 160 pixels × 160 pixels, the matching time is less than 0.5 s. The proposed algorithm can therefore substantially reduce computational complexity, improve the matching speed, and exhibit robustness to rotation and scale changes.
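The KNN search step can be illustrated with brute-force 2-NN descriptor matching plus a Lowe-style ratio test; the descriptors and the ratio threshold are hypothetical illustrations, since the abstract does not specify how KNN is applied to the combined features.

```python
def knn_match(desc_a, desc_b, ratio=0.8):
    """Match each descriptor in desc_a to its nearest neighbour in desc_b,
    accepting the match only if the nearest distance is clearly smaller
    than the second-nearest (ratio test). Returns (index_a, index_b) pairs."""
    matches = []
    for i, a in enumerate(desc_a):
        dists = sorted(
            (sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5, j)
            for j, b in enumerate(desc_b))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches
```

In practice a k-d tree or similar index replaces the brute-force scan, which is where the claimed speed-up of the search process comes from.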
In general, to reconstruct the accurate shape of a building, we need at least one stereo model (two photographs) of it. In most cases, however, only a single non-metric photograph is available, usually taken either by an amateur, such as a tourist, or obtained from a newspaper or a postcard. To evaluate the validity of 3D reconstruction from a single non-metric image, this study analyzes, by simulation, the effects of object depth on the accuracy of dimensional shape in the X and Y directions, since a single non-metric image was considered to be, in most cases, the main source of data for recording and documenting buildings.
The image of a city embodies its fundamental values and unique characteristics, representing the essence of its urban culture and spirit. It is considered one of the most valuable intangible assets of a city and serves as a crucial driving force for its ongoing development. Taking Chengdu as an example, this paper conducts a comprehensive analysis of Chengdu's city image communication strategy along various dimensions, such as the city's value orientation and modes of communication. First, it is necessary to explore Chengdu's image resources; on this basis, the urban value orientation of "Man-Chengdu," an extensive strategy used by Chengdu to better communicate its culture, is proposed to facilitate image communication. Second, it is necessary to expand the dimensions of Chengdu's city image communication. This is achieved by building a resource pool of city image elements, leveraging major media events to promote communication, and enhancing the correlation between content and channel platforms. Moreover, efforts should also be made to develop people-oriented narrative strategies and to give full play to the advantages of new technologies so as to form an integrated communication mode. Finally, it is crucial to bridge the official and folk communication systems so that multiple subjects can share Chengdu's stories from diverse perspectives, thus improving the breadth and validity of Chengdu's image communication.
As a key technique in hyperspectral image pre-processing, dimensionality reduction has received a lot of attention. However, most graph-based dimensionality reduction methods consider only a single structure in the data and ignore the interfusion of multiple structures. In this paper, we propose two methods that combine intra-class competition with locality-preserving graphs by constructing a new dictionary containing neighbourhood information. The two methods inject local information into the collaborative graph through competing constraints, effectively relieving the overcrowded distribution of intra-class coefficients in the collaborative graph and enhancing the discriminative power of the algorithm. Classification experiments on four benchmark hyperspectral datasets show that the proposed methods are superior to several advanced algorithms, even under small-sample-size conditions.
In this paper, we propose a new semi-supervised multi-manifold learning method, called semi-supervised sparse multi-manifold embedding (S3MME), for dimensionality reduction of hyperspectral image data. S3MME exploits both labeled and unlabeled data to adaptively find the neighbors of each sample on the same manifold through an optimization program based on sparse representation, and naturally gives relative importance to the labeled samples through a graph-based methodology. It then extracts discriminative features on each manifold such that the data points on the same manifold become closer. The effectiveness of the proposed multi-manifold learning algorithm is demonstrated and compared through experiments on real hyperspectral images.
In order to improve the registration accuracy of brain magnetic resonance images (MRI), some deep learning registration methods use segmentation images to train the model. However, the segmentation values are constant for each label, which causes the gradient variation to concentrate on the boundary. Thus, the dense deformation field (DDF) is gathered on the boundary, where folding can even appear. To fully leverage the label information, morphological opening and closing information maps are introduced to enlarge the non-zero gradient regions and improve the accuracy of DDF estimation. The opening information maps supervise the registration model to focus on smaller, narrow brain regions, while the closing information maps supervise it to pay more attention to complex boundary regions. Opening and closing morphology networks (OC_Net) are then designed to generate the opening and closing information maps automatically, enabling end-to-end training. Finally, a new registration architecture, VM_(seg+oc), is proposed by combining OC_Net and VoxelMorph. Experimental results show that the registration accuracy of VM_(seg+oc) is significantly improved on the LPBA40 and OASIS1 datasets; in particular, VM_(seg+oc) improves registration accuracy in smaller brain regions and narrow regions.
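Morphological opening and closing of a binary label map can be sketched in a few lines. The 4-neighbourhood structuring element and the border handling (out-of-range neighbours are simply ignored) are simplifying assumptions; in the paper, OC_Net learns to generate these maps rather than computing them this way.

```python
def _morph(img, op):
    """Apply op (min = erosion, max = dilation) over each pixel's
    4-neighbourhood; out-of-range neighbours are ignored."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [img[i][j]]
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w:
                    vals.append(img[ni][nj])
            out[i][j] = op(vals)
    return out

def opening(img):
    """Erosion then dilation: removes thin or narrow foreground structures."""
    return _morph(_morph(img, min), max)

def closing(img):
    """Dilation then erosion: fills small gaps along boundaries."""
    return _morph(_morph(img, max), min)
```

What opening removes (narrow regions) and what closing fills (boundary gaps) is exactly the information the opening and closing maps expose to the registration loss.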
The segmentation effect of the Tsallis entropy method is superior to that of the Shannon entropy method, and the computation speed of the two-dimensional Shannon cross-entropy method can be further improved by optimization. The existing two-dimensional Tsallis cross-entropy method is not a strict two-dimensional extension. Thus, two new image thresholding methods using two-dimensional Tsallis cross entropy, based on either chaotic particle swarm optimization (CPSO) or decomposition, are proposed. The former uses CPSO to find the optimal threshold, with a recursive algorithm adopted to avoid the repetitive computation of the fitness function in the iterative procedure, which greatly improves the computing speed. The latter converts the two-dimensional computation into two one-dimensional spaces, further reducing the computational complexity from O(L²) to O(L). Experimental results show that, compared with the recently proposed two-dimensional Shannon and Tsallis cross-entropy methods, the two new methods achieve superior segmentation results and greatly reduce running time.
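The one-dimensional analogue of Tsallis-entropy thresholding conveys the idea behind these methods: choose the threshold that maximises the pseudo-additive combination of the foreground and background Tsallis entropies. The value of q and the exhaustive search below are illustrative; the paper replaces exhaustive two-dimensional search with CPSO or decomposition.

```python
def tsallis_threshold(hist, q=0.8):
    """1-D Tsallis-entropy thresholding sketch: return the grey level t that
    maximises S_A(t) + S_B(t) + (1-q)*S_A(t)*S_B(t), where S_A and S_B are
    the Tsallis entropies of the two classes' normalised histograms."""
    total = sum(hist)
    probs = [h / total for h in hist]
    best_t, best_val = 0, float("-inf")
    for t in range(1, len(hist)):
        pa = sum(probs[:t])
        pb = 1.0 - pa
        if pa <= 0.0 or pb <= 0.0:
            continue  # one class is empty; skip this threshold
        sa = (1 - sum((p / pa) ** q for p in probs[:t])) / (q - 1)
        sb = (1 - sum((p / pb) ** q for p in probs[t:])) / (q - 1)
        val = sa + sb + (1 - q) * sa * sb
        if val > best_val:
            best_t, best_val = t, val
    return best_t
```

The two-dimensional versions replace the grey-level histogram with a grey-level/local-average joint histogram, which is what drives the cost from O(L) per candidate up to the O(L²) that the decomposition method removes.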
Machine learning methods, one type of method used in artificial intelligence, are now widely used to analyze two-dimensional (2D) images in various fields. In these analyses, estimating the boundary between two regions is basic but important. If the model contains stochastic factors such as random observation errors, determining the boundary is not easy. When the probability distributions are mis-specified, ordinary methods such as the probit and logit maximum likelihood estimators (MLEs) have large biases. The grouping estimator is a semiparametric estimator based on grouping the data that does not require specific probability distributions; for 2D images, the grouping is simple. Monte Carlo experiments show that the grouping estimator clearly improves on the probit MLE in many cases. The grouping estimator essentially lowers the resolution density, and the present findings imply that methods using low-resolution image analyses might not be the proper ones for high-density image analyses. It is necessary to combine and compare the results of high- and low-resolution image analyses, and the grouping estimator may provide theoretical justification for such analysis.
Three high-dimensional spatial standardization algorithms are used for diffusion tensor image (DTI) registration, and seven methods are used to evaluate their performance. First, the template used in this paper was obtained by spatial transformation of 16 subjects by means of tensor-based standardization. Then, high-dimensional standardization algorithms for diffusion tensor images were performed, including a fractional anisotropy (FA) based diffeomorphic registration algorithm, an FA-based elastic registration algorithm, and a tensor-based registration algorithm. Finally, seven evaluation methods, including normalized standard deviation, dyadic coherence, diffusion cross-correlation, overlap of eigenvalue-eigenvector pairs, Euclidean distance of the diffusion tensor, and Euclidean distance of the deviatoric tensor and the deviatoric of tensors, were used to qualitatively compare and summarize the above standardization algorithms. Experimental results revealed that the high-dimensional tensor-based standardization algorithm performs well and can maintain the consistency of anatomical structures.
Funding (image encryption based on 3D Brownian motion and chaotic systems): Project supported by the National Natural Science Foundation of China (Grant Nos. 41571417 and 61305042), the National Science Foundation of the United States (Grant Nos. CNS-1253424 and ECCS-1202225), the Science and Technology Foundation of Henan Province, China (Grant No. 152102210048), the Foundation and Frontier Project of Henan Province, China (Grant No. 162300410196), the China Postdoctoral Science Foundation (Grant No. 2016M602235), the Natural Science Foundation of the Educational Committee of Henan Province, China (Grant No. 14A413015), and the Research Foundation of Henan University, China (Grant No. xxjc20140006).
Funding (multi-core polymer image fiber): Project supported by the National Natural Science Foundation of China (Grant Nos. 61275106 and 61275086).
Abstract: A method of fabricating multi-core polymer image fiber is proposed. The image fiber preform is fabricated in a single step by orderly stacking thousands of polymer fibers, each 0.25 mm in diameter, in a die. The preform is then heated and stretched into image fiber with an outer diameter of 2 mm. A portable eyewear-style three-dimensional (3D) endoscope system is subsequently designed, fabricated, and characterized. The endoscopic system is composed of two graded-index lenses, two 0.35-m lengths of image guide fiber, and a pair of oculars. It shows good flexibility and portability, and can provide depth information accordingly.
Abstract: This paper advances a three-dimensional space interpolation method for grey/depth image sequences, which breaks free of the limits of the original photographing route: views can cruise at will in space. By using sparse spatial sampling, great memory capacity is saved and the reproduced scenes can be controlled. To tame the time-consuming and complex computations of three-dimensional interpolation, we have studied a fast, practical scattered-space-lattice algorithm and a 'Warp' algorithm with proper depth. By simplifying several aspects of three-dimensional space interpolation, we develop some simple and practical algorithms. Computer simulation results show that the new method is entirely feasible.
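The depth-aware 'Warp' idea can be illustrated with a minimal forward-warping sketch: each pixel is shifted horizontally by a disparity inversely proportional to its depth. The pinhole disparity model and the function name are assumptions for illustration; the paper's algorithm, including the scattered space lattice, is more elaborate.

```python
import numpy as np

def warp_view(img, depth, baseline, focal):
    """Forward-warp a grey image to a horizontally shifted viewpoint.
    Assumed pinhole model: disparity = focal * baseline / depth; the
    paper's 'Warp' variant may differ."""
    h, w = img.shape
    out = np.zeros_like(img)
    disp = np.rint(focal * baseline / depth).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x + disp[y, x]
            if 0 <= nx < w:
                out[y, nx] = img[y, x]  # last write wins; no z-buffer in this sketch
    return out
```

With a constant depth map, the warp reduces to a uniform horizontal shift, which makes the behaviour easy to verify.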
Abstract: BACKGROUND Fewer than 200 cases of diaphragmatic tumors have been reported in the past century, and diaphragmatic hemangiomas are extremely rare: only nine cases have been reported in the English literature to date. We report a case of cavernous hemangioma arising from the diaphragm. Pre-operative three-dimensional (3D) simulation and minimally invasive thoracoscopic excision were performed successfully; the radiologic findings and the surgical procedure are described below. CASE SUMMARY A 40-year-old man was referred for further examination of a mass over the right basal lung, without specific symptoms. Contrast-enhanced computed tomography revealed a poorly enhanced lesion in the right basal lung, abutting the diaphragm and measuring 3.1 cm × 1.5 cm. The mediastinum appeared clear, with no evidence of an abnormal mass or lymphadenopathy. A preoperative 3D image was reconstructed, which revealed a diaphragmatic lesion. Video-assisted thoracic surgery was performed, and a red papillary tumor was found originating from the right diaphragm. The tumor was resected, and the pathological diagnosis was cavernous hemangioma. CONCLUSION In this rare case of diaphragmatic hemangioma, 3D image simulation was helpful for preoperative evaluation and surgical decision making.
Fund: Supported by the National Natural Science Foundation and the Doctoral Foundation of the State Education Commission of China.
Abstract: This paper studies the application of a Radial Basis Function Network (RBFN) trained by the Recursive Least Squares Algorithm (RLSA) to the recognition of one-dimensional radar target images. The equivalence between the RBFN and the Parzen-window probability density estimate is proved, and it is pointed out that the I/O functions of the RBFN hidden units can be generalized to general Parzen-window probability kernel functions or potential functions. The paper discusses the effects of the shape parameter a in the RBFN and the forgetting factor λ in the RLSA on the recognition results for three kinds of kernel function (Gaussian, triangular, and double-exponential), and also discusses the relationship between λ and the training time of the RBFN.
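The claimed equivalence can be made concrete: an RBFN hidden layer with Gaussian units centred on the training samples, averaged with uniform output weights, computes exactly a Parzen-window density estimate. A minimal 1-D sketch under that assumption (names are illustrative):

```python
import numpy as np

def parzen_gaussian(x, samples, sigma):
    """Parzen-window density estimate at x with a Gaussian kernel of width
    (shape parameter) sigma -- identical to averaging the outputs of
    Gaussian RBF hidden units centred on the training samples."""
    d = x - np.asarray(samples, float)
    k = np.exp(-d ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
    return k.mean()
```

Because each kernel integrates to one, the averaged estimate is itself a proper density, which is what licenses reading the hidden layer probabilistically.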
Abstract: The properties and characteristics of the polymer used to prepare matrix drug delivery systems considerably influence their performance, the extent of drug release, and the release mechanism. The objective of this research was to examine, using an image analysis method, the dimensional changes and gel evolution of polymer matrices consisting of three different polymers: Polyox and sodium alginate (hydrophilic) and Ethocel (hydrophobic), and furthermore to explore how these changes influence the release rate of a soluble drug, venlafaxine. All tablets displayed marked dimensional expansion and gel growth, particularly those consisting of the two hydrophilic polymers Polyox/sodium alginate (POL/SA/V) compared with those consisting of the hydrophilic/hydrophobic pair Polyox/Ethocel (POL/ET/V). Similarly, the thickness of the gel layer in POL/SA/V matrices increased considerably with time, up to 8 hours. In general, our findings show that the POL/SA/V matrices, owing to their thicker gel layer, produced a more effective barrier and hence a more pronounced sustained-release delivery. This accounts for the slower and smaller overall drug release observed with the POL/SA/V matrices compared with POL/ET/V, and indicates that the formation of a thick and durable gel barrier is necessary for preparing sustained-release systems. Moreover, the solubility of venlafaxine, in combination with the polymer properties, appears to play an important role in the extent of drug release and the release mechanism. Overall, the polymer mixtures examined comprise a useful and promising combination of materials for the development and manufacture of sustained-release preparations.
Fund: Supported by the National Key Research and Development Project (No. 2020YFC1512000), the National Natural Science Foundation of China (No. 41601344), the Fundamental Research Funds for the Central Universities (Nos. 300102320107 and 201924), the General Projects of Key R&D Programs in Shaanxi Province (No. 2020GY-060), and the Xi'an Science & Technology Project (Nos. 2020KJRC0126 and 202018).
Abstract: A hyperspectral image (HSI) contains a wealth of spectral information, which makes fine classification of ground objects possible. Meanwhile, the highly redundant information in HSI poses many challenges; in particular, the lack of training samples and the high computational cost are unavoidable obstacles in classifier design. Dimensionality reduction is usually adopted to address these problems, and graph-based dimensionality reduction has recently become a hot topic. In this paper, graph-based methods for HSI dimensionality reduction are summarized from the following aspects. 1) Traditional graph-based methods employ Euclidean distance to explore the local information of samples in the spectral feature space. 2) Dimensionality-reduction methods based on sparse or collaborative representation regard the sparse or collaborative coefficients as graph weights, effectively reducing reconstruction errors and representing the most important information of the HSI in the dictionary. 3) Improved methods based on sparse or collaborative graphs have made great progress by considering global low-rank information, local intra-class information, and spatial information. To compare typical techniques, relevant experiments were carried out on three real HSI datasets, and the experimental results were analysed and discussed. Finally, the future development of this research field is surveyed.
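Category 1), a Euclidean k-NN graph capturing local structure, can be illustrated with a generic Laplacian-Eigenmaps-style embedding. This is a representative of the family rather than any specific method from the survey, and the function name is illustrative.

```python
import numpy as np

def knn_graph_embedding(X, k=2, dim=1):
    """Embed samples via eigenvectors of the normalized Laplacian of a
    Euclidean k-NN graph (generic graph-based dimensionality reduction)."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        W[i, np.argsort(d2[i])[1:k + 1]] = 1.0  # skip self at index 0
    W = np.maximum(W, W.T)                       # symmetrize adjacency
    deg = W.sum(1)
    Dm = np.diag(1.0 / np.sqrt(deg))
    L_norm = np.eye(n) - Dm @ W @ Dm             # normalized Laplacian
    vals, U = np.linalg.eigh(L_norm)             # ascending eigenvalues
    V = Dm @ U                                   # generalized eigenvectors
    return V[:, 1:1 + dim]                       # drop the trivial one
```

On points along a line the k-NN graph is a path, and the returned Fiedler-style coordinate varies monotonically along it, i.e. the embedding preserves the 1-D local ordering.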
Fund: Supported by the National Natural Science Foundation of China (Nos. 51205193 and 51475221).
Abstract: Image matching technology is theoretically significant and practically promising in the field of autonomous navigation. Addressing the shortcomings of existing image-matching navigation technologies, the concept of high-dimensional combined features is presented for sequence-image matching navigation. To balance the distribution of high-dimensional combined features against the shortcomings of using geometric relations alone, we propose a Delaunay-triangulation-based method that improves the features by adding their regional characteristics to their geometric characteristics. Finally, the k-nearest neighbor (KNN) algorithm is adopted to optimize the search process. Simulation results show that matching can be achieved at rotation angles of -8° to 8° and scale factors of 0.9 to 1.1, and that for a 160 pixel × 160 pixel image the matching time is less than 0.5 s. The proposed algorithm therefore substantially reduces computational complexity, improves matching speed, and is robust to rotation and scale changes.
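The KNN search step can be sketched in a few lines. A brute-force Euclidean version is shown here; in practice a KD-tree would accelerate it, and the Delaunay-based construction of the combined features is omitted. Names are illustrative.

```python
import numpy as np

def knn_match(query, features, k=1):
    """Return indices and distances of the k feature vectors nearest to
    `query` (Euclidean). Brute force; the paper uses KNN to speed up the
    search over high-dimensional combined features."""
    d = np.linalg.norm(features - query, axis=1)
    idx = np.argsort(d)[:k]
    return idx, d[idx]
```

The returned indices are sorted by distance, so `idx[0]` is the best match candidate to be verified by the geometric/regional consistency checks.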
Abstract: In general, reconstructing the accurate shape of a building requires at least one stereo model (two photographs) per building. In most cases, however, only a single non-metric photograph is available, usually taken by an amateur such as a tourist, or reproduced from a newspaper or postcard. To evaluate the validity of 3D reconstruction from a single non-metric image, this study uses simulation to analyze how object depth affects the accuracy of dimensional shape in the X and Y directions, since such images are, in most cases, the main source of data for recording and documenting buildings.
Fund: This study was supported by the key project of Philosophy and Social Sciences Planning of Sichuan Province, "Research on Communication Strategies of the 31st Summer World University Games in Enhancing the Image of Chengdu under the New Media Environment" (SC21A019).
Abstract: The image of a city embodies its fundamental values and unique characteristics, representing the essence of its urban culture and spirit. It is considered one of a city's most valuable intangible assets and serves as a crucial driving force for its ongoing development. Taking Chengdu as an example, this paper conducts a comprehensive analysis of the city's image communication strategy along several dimensions, such as its value orientation and modes of communication. First, Chengdu's image resources must be explored; on this basis, the urban value orientation of "Man-Chengdu," an extensive strategy Chengdu uses to communicate its culture, is proposed to facilitate image communication. Second, the dimensions of Chengdu's city image communication must be expanded, by building a resource pool of city image elements, leveraging major media events to promote communication, and enhancing the correlation between content and channel platforms. Efforts should also be made to develop people-oriented narrative strategies and to exploit the advantages of new technologies to form an integrated communication mode. Finally, it is crucial to bridge the official and folk communication systems so that multiple subjects can share Chengdu's stories from diverse perspectives, improving the breadth and validity of Chengdu's image communication.
Fund: Supported by the National Natural Science Foundation of China (No. 41601344), the Fundamental Research Funds for the Central Universities (Nos. 300102320107 and 201924), the National Key Research and Development Project (No. 2020YFC1512000), the General Projects of Key R&D Programs in Shaanxi Province (No. 2020GY-060), and the Xi'an Science & Technology Project (Nos. 2020KJRC0126 and 202018).
Abstract: As a key technique in hyperspectral image pre-processing, dimensionality reduction has received much attention. However, most graph-based dimensionality-reduction methods consider only a single structure in the data and ignore the interfusion of multiple structures. In this paper, we propose two methods that introduce intra-class competition into locality-preserving collaborative graphs by constructing a new dictionary containing neighbourhood information. Through competing constraints, these methods inject local information into the collaborative graph, effectively relieving the overcrowded distribution of intra-class coefficients and enhancing the discriminative power of the algorithm. Classification experiments on four benchmark hyperspectral datasets show that the proposed methods are superior to several advanced algorithms, even under small-sample-size conditions.
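The collaborative-graph weights these methods build on have a closed form: each sample is regressed on the remaining samples with an ℓ2 penalty, and the coefficient magnitudes serve as edge weights. A minimal sketch of that base step (the paper's intra-class competition constraint is not included, and the names are illustrative):

```python
import numpy as np

def collaborative_weights(x, D, lam=0.1):
    """Coefficients of min_w ||x - D w||^2 + lam * ||w||^2, where the
    columns of D are the other samples; |w| gives the collaborative-graph
    edge weights for sample x."""
    A = D.T @ D + lam * np.eye(D.shape[1])
    return np.linalg.solve(A, D.T @ x)
```

With an orthonormal dictionary the solution shrinks each coordinate by 1/(1+lam), so a sample aligned with one dictionary atom puts almost all of its weight on that atom.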
Abstract: In this paper, we propose a new semi-supervised multi-manifold learning method, called semi-supervised sparse multi-manifold embedding (S3MME), for dimensionality reduction of hyperspectral image data. S3MME exploits both labeled and unlabeled data to adaptively find the neighbors of each sample on the same manifold via an optimization program based on sparse representation, and naturally gives relative importance to the labeled samples through a graph-based methodology. It then extracts discriminative features on each manifold so that data points on the same manifold become closer. The effectiveness of the proposed multi-manifold learning algorithm is demonstrated through comparative experiments on real hyperspectral images.
Fund: Supported by the Shandong Provincial Natural Science Foundation (No. ZR2023MF062) and the National Natural Science Foundation of China (No. 61771230).
Abstract: To improve the registration accuracy of brain magnetic resonance images (MRI), some deep learning registration methods use segmentation images to train the model. However, the segmentation values are constant within each label, so the gradient variation concentrates on the boundary; the dense deformation field (DDF) therefore gathers on the boundary, and folding can even appear. To fully leverage the label information, morphological opening and closing information maps are introduced to enlarge the non-zero-gradient regions and improve the accuracy of DDF estimation. The opening information maps guide the registration model to focus on smaller, narrow brain regions, while the closing information maps guide it to pay more attention to complex boundary regions. Opening and closing morphology networks (OC_Net) are then designed to generate the opening and closing information maps automatically, enabling end-to-end training. Finally, a new registration architecture, VM_(seg+oc), is proposed by combining OC_Net with VoxelMorph. Experimental results show that the registration accuracy of VM_(seg+oc) is significantly improved on the LPBA40 and OASIS1 datasets; in particular, VM_(seg+oc) improves registration accuracy in smaller brain regions and narrow regions.
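The opening and closing operators behind the information maps can be illustrated on a 2-D binary label mask with plain 3×3 morphology. This sketches only the underlying operators, not OC_Net (which learns to produce such maps in 3-D); function names are illustrative.

```python
import numpy as np

def dilate(m):
    """3x3 binary dilation (zero-padded borders)."""
    p = np.pad(m, 1)
    return np.max([p[i:i + m.shape[0], j:j + m.shape[1]]
                   for i in range(3) for j in range(3)], axis=0)

def erode(m):
    """3x3 binary erosion (zero-padded borders, so edges shrink)."""
    p = np.pad(m, 1)
    return np.min([p[i:i + m.shape[0], j:j + m.shape[1]]
                   for i in range(3) for j in range(3)], axis=0)

def opening(m):
    """Erosion then dilation: removes thin protrusions and small regions."""
    return dilate(erode(m))

def closing(m):
    """Dilation then erosion: fills small holes and gaps along boundaries."""
    return erode(dilate(m))
```

Comparing the mask with its opening highlights narrow structures; comparing it with its closing highlights holes and concavities along the boundary, which is the intuition behind the two kinds of information maps.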
Fund: Supported by the National Natural Science Foundation of China (Grant No. 60872065) and the Open Foundation of the State Key Laboratory for Novel Software Technology at Nanjing University (Grant No. KFKT2010B17).
Abstract: The segmentation performance of the Tsallis entropy method is superior to that of the Shannon entropy method, and the speed of the two-dimensional Shannon cross-entropy method can be further improved by optimization. The existing two-dimensional Tsallis cross-entropy method, however, is not a strict two-dimensional extension. Thus, two new image-thresholding methods using two-dimensional Tsallis cross entropy are proposed, based on either Chaotic Particle Swarm Optimization (CPSO) or decomposition. The former uses CPSO to find the optimal threshold, with a recursive algorithm that avoids repeated computation of the fitness function in the iterative procedure, greatly improving computing speed. The latter converts the two-dimensional computation into two one-dimensional spaces, reducing the computational complexity from O(L²) to O(L). Experimental results show that, compared with the recently proposed two-dimensional Shannon or Tsallis cross-entropy methods, the two new methods achieve superior segmentation results and greatly reduce running time.
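For reference, the 1-D Tsallis criterion underlying such methods can be written down directly: the threshold maximizes the pseudo-additive combination of the Tsallis entropies of the two classes. This sketch follows the common Portes de Albuquerque form (an assumption; the paper's 2-D cross-entropy variant additionally uses the local-average grey level).

```python
import numpy as np

def tsallis_threshold(hist, q=0.8):
    """Exhaustive 1-D Tsallis entropy thresholding of a grey-level
    histogram: maximize S_A + S_B + (1-q) * S_A * S_B over thresholds."""
    p = hist / hist.sum()
    c = np.cumsum(p)
    best_t, best_s = 0, -np.inf
    for t in range(1, len(p)):
        pa, pb = c[t - 1], 1 - c[t - 1]
        if pa <= 0 or pb <= 0:
            continue
        sa = (1 - ((p[:t] / pa) ** q).sum()) / (q - 1)   # class A entropy
        sb = (1 - ((p[t:] / pb) ** q).sum()) / (q - 1)   # class B entropy
        s = sa + sb + (1 - q) * sa * sb                  # pseudo-additivity
        if s > best_s:
            best_t, best_s = t, s
    return best_t
```

On a clearly bimodal histogram the maximizer falls in the empty valley between the two modes.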
Abstract: Machine learning methods, a class of artificial intelligence techniques, are now widely used to analyze two-dimensional (2D) images in various fields. In these analyses, estimating the boundary between two regions is basic but important. If the model contains stochastic factors such as random observation errors, determining the boundary is not easy, and when the probability distributions are mis-specified, ordinary methods such as the probit and logit maximum likelihood estimators (MLEs) have large biases. The grouping estimator is a semiparametric estimator based on grouping the data that does not require specific probability distributions, and for 2D images the grouping is simple. Monte Carlo experiments show that the grouping estimator clearly improves on the probit MLE in many cases. The grouping estimator essentially lowers the resolution density, and the present findings imply that methods using low-resolution image analysis might not be the proper ones in high-density image analysis; it is necessary to combine and compare the results of high- and low-resolution image analyses. The grouping estimator may provide theoretical justification for such analysis.
Fund: Supported by the National Key Research and Development Program of China (2016YFC0100300), the National Natural Science Foundation of China (61402371 and 61771369), the Natural Science Basic Research Plan in Shaanxi Province of China (2017JM6008), and the Fundamental Research Funds for the Central Universities of China (3102017zy032 and 3102018zy020).
Abstract: Three high-dimensional spatial standardization algorithms are applied to diffusion tensor image (DTI) registration, and seven evaluation methods are used to assess their performance. First, the template used in this paper was obtained by tensor-based spatial standardization of 16 subjects. Then, the high-dimensional standardization algorithms for diffusion tensor images, including a fractional anisotropy (FA)-based diffeomorphic registration algorithm, an FA-based elastic registration algorithm, and a tensor-based registration algorithm, were performed. Finally, seven evaluation measures, including normalized standard deviation, dyadic coherence, diffusion cross-correlation, overlap of eigenvalue-eigenvector pairs, Euclidean distance of the diffusion tensor, and Euclidean distances of the deviatoric tensor and the deviatoric of tensors, were used to qualitatively compare and summarize the above standardization algorithms. Experimental results revealed that the high-dimensional tensor-based standardization algorithm performs well and can maintain the consistency of anatomical structures.
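Two of the compared algorithms are driven by fractional anisotropy (FA) maps; FA is computed from the diffusion tensor's eigenvalues by the standard formula FA = sqrt(3/2) · ||λ − λ̄|| / ||λ||. A minimal sketch:

```python
import numpy as np

def fractional_anisotropy(evals):
    """FA of a diffusion tensor from its three eigenvalues (standard
    formula): 0 for isotropic diffusion, approaching 1 for a single
    dominant direction."""
    ev = np.asarray(evals, float)
    dev = ev - ev.mean()
    den = np.sqrt((ev ** 2).sum())
    return np.sqrt(1.5) * np.sqrt((dev ** 2).sum()) / den if den > 0 else 0.0
```

FA is rotation-invariant, which is why FA maps are a convenient scalar channel for driving the diffeomorphic and elastic registrations before full tensor-based comparison.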
Fund: This paper is supported by the National Natural Science Foundation of China (Nos. 60871093 and 60872126), the National Defense Prediction Foundation (No. 9140C80002080C80), and the Guangdong Province Natural Science Foundation (No. 8151806001000002).