Funding: This work was supported by the National Natural Science Foundation of China (Grant No. U20A20197).
Abstract: We propose a novel image segmentation algorithm to tackle the challenge of limited recognition and segmentation performance in identifying welding seam images during robotic intelligent operations. First, to enhance the capability of deep neural networks in extracting geometric attributes from depth images, we developed a novel deep geometric convolution operator (DGConv). DGConv is used to construct a deep local geometric feature extraction module, facilitating a more comprehensive exploration of the intrinsic geometric information within depth images. Second, we integrate the proposed deep geometric feature module with the Fully Convolutional Network (FCN8) to establish a high-performance deep neural network algorithm tailored for depth image segmentation. Concurrently, we enhance the FCN8 detection head by separating the segmentation and classification processes, which significantly boosts the network's overall detection capability. Third, for a comprehensive assessment of the proposed algorithm and its applicability in real-world industrial settings, we curated a line-scan image dataset featuring weld seams. This dataset, named the Standardized Linear Depth Profile (SLDP) dataset, was collected from actual industrial sites where autonomous robots are in operation. Finally, we conducted experiments on the SLDP dataset, achieving an average accuracy of 92.7%; the proposed approach shows a marked performance improvement over the prior method on the same dataset. Moreover, we have deployed the algorithm in real industrial environments, meeting the requirements of unmanned robot operations.
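The abstract does not spell out the DGConv operator itself. As a rough, hypothetical illustration of the kind of geometric attributes a depth image exposes to such a module, the NumPy sketch below derives per-pixel gradient and surface-normal channels from a depth map; it is not the paper's DGConv, and all array names and shapes are made up.

```python
import numpy as np

def geometric_channels(depth, dx=1.0, dy=1.0):
    """Stack simple geometric attributes (depth, gradients, unit normals)
    derived from a depth image. Illustrative only -- not the DGConv operator."""
    dz_dy, dz_dx = np.gradient(depth, dy, dx)               # derivatives along rows/cols
    normals = np.dstack([-dz_dx, -dz_dy, np.ones_like(depth)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    return np.dstack([depth, dz_dx, dz_dy, normals])         # (H, W, 6) feature map

depth = np.random.rand(64, 256).astype(np.float32)           # fake line-scan depth profile
features = geometric_channels(depth)
print(features.shape)                                         # (64, 256, 6)
```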
Abstract: Aeromagnetic data over the Mamfe Basin have been processed. A regional magnetic gridded dataset was obtained from the Total Magnetic Intensity (TMI) data grid using a 3 × 3 convolution (Hanning) filter to remove regional trends. Major similarities in magnetic field orientation and intensity were observed at identical locations on both the regional and TMI data grids. From the regional and TMI gridded datasets, the residual dataset was generated, which represents the very shallow geological features of the basin. Processing this residual data grid using Source Parameter Imaging (SPI) for magnetic depth suggests that the estimated depths to magnetic sources in the basin range from about 271 m to 3552 m. The greatest depths occur in two main locations around the central portion of the study area, which correspond to the area with positive magnetic susceptibilities, as well as in areas extending outwards across the eastern boundary of the study area. Shallow magnetic depths are prominent towards the NW portion of the basin and correspond to areas of negative magnetic susceptibilities. The basin generally exhibits a variation in the depth of magnetic sources, with deep, intermediate and shallow sources. The presence of intrusive igneous rocks was also observed in this basin. This characteristic points to the existence of geologic resources of interest for exploration in the basin.
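A minimal sketch of the regional/residual separation step, assuming a conventional 3 × 3 Hanning smoothing kernel (the paper does not list its exact weights) and hypothetical grid data:

```python
import numpy as np
from scipy.ndimage import convolve

# One conventional 3x3 Hanning smoothing kernel used for regional/residual
# separation of potential-field grids; the paper's exact weights are not stated.
HANNING_3X3 = np.array([[1., 2., 1.],
                        [2., 4., 2.],
                        [1., 2., 1.]]) / 16.0

def regional_residual(tmi_grid, passes=10):
    """Approximate the regional field by repeated 3x3 Hanning smoothing and
    subtract it from the TMI grid to obtain the residual (shallow sources)."""
    regional = tmi_grid.copy()
    for _ in range(passes):
        regional = convolve(regional, HANNING_3X3, mode="nearest")
    return regional, tmi_grid - regional

tmi = np.random.randn(200, 200)                  # stand-in for the gridded TMI data (nT)
regional, residual = regional_residual(tmi)
```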
Funding: Supported by the National Natural Science Foundation of China (Nos. 61575059, 61675062, 21501038) and the Fundamental Research Funds for the Central Universities (Nos. JZ2018HGPB0275, JZ2018HGTA0220, and JZ2018HGXC0001).
Abstract: In this study, we have developed a high-sensitivity, near-infrared photodetector based on a PdSe2/GaAs heterojunction, which was made by transferring a multilayered PdSe2 film onto planar GaAs. The as-fabricated PdSe2/GaAs heterojunction device exhibited obvious photovoltaic behavior under 808 nm illumination, indicating that the near-infrared photodetector can be used as a self-driven device without an external power supply. Further device analysis showed that the hybrid heterojunction exhibited a high on/off ratio of 1.16×10^5 measured at 808 nm under zero bias voltage. The responsivity and specific detectivity of the photodetector were estimated to be 171.34 mA/W and 2.36×10^11 Jones, respectively. Moreover, the device showed excellent stability and reliable repeatability. After 2 months, the photoelectric characteristics of the near-infrared photodetector hardly degraded in air, attributable to the good stability of the PdSe2. Finally, the PdSe2/GaAs-based heterojunction device can also function as a near-infrared light sensor.
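The two figures of merit quoted above follow the standard definitions R = I_ph/P_opt and, in the shot-noise-limited case, D* = R·√A/√(2qI_dark). The sketch below evaluates them with made-up photocurrent, power, area and dark-current values purely to show the arithmetic; the numbers are not the device parameters reported in the paper.

```python
import numpy as np

Q = 1.602e-19   # elementary charge (C)

def responsivity(i_photo, p_incident):
    """R = I_ph / P_opt, in A/W."""
    return i_photo / p_incident

def detectivity(resp, area_cm2, i_dark):
    """Shot-noise-limited specific detectivity D* = R*sqrt(A)/sqrt(2*q*I_dark), in Jones."""
    return resp * np.sqrt(area_cm2) / np.sqrt(2.0 * Q * i_dark)

# Hypothetical operating point (illustration only, not the reported device values):
R = responsivity(i_photo=8.6e-7, p_incident=5.0e-6)        # -> ~0.17 A/W
D_star = detectivity(R, area_cm2=1.0e-2, i_dark=1.0e-9)
print(f"R = {R * 1e3:.1f} mA/W, D* = {D_star:.2e} Jones")
```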
Funding: National Natural Science Foundation of China (No. 61671434); Key Projects of the Provincial Natural Science Foundation of Anhui Universities (Nos. KJ2019A0952, KJ2017ZD32).
Abstract: The conventional photoelectric detection system requires complex circuitry and spectroscopic systems as well as specialized personnel for its operation. To replace such a system, a method of measuring turbidity using a camera is proposed by combining the imaging characteristics of a digital camera and the high-speed information processing capability of a computer. Two turbidity measurement devices based on visible and near-infrared (NIR) light cameras and a light source driving circuit with constant light intensity were designed. The RGB data in the turbidity images were acquired using self-developed image processing software and converted to the CIE Lab color space. Based on the relationship between luminance, chromatic aberration, and turbidity, turbidity detection models for the luminance and chromatic aberration of the visible and NIR light devices were established for values of 0-1000 NTU, less than 100 NTU, and more than 100 NTU. By comparing and analyzing the proposed models, the two measurement models with the best all-around performance were selected and fused to generate new measurement models. The experimental results show that the correlation between the three models and the commercial turbidity meter measurements exhibits a significance value higher than 0.999. The error of the fusion model is within 1.05%, with a mean square error of 1.14. The visible light device has less error at low turbidity measurements and is less influenced by the color of the image. The NIR light device is more stable and accurate over the full range and at high turbidity measurements and is therefore more suitable for such measurements.
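As a rough sketch of the colorimetric front end, the snippet below converts RGB patches to CIE Lab (via scikit-image) and fits a simple least-squares luminance-versus-turbidity model. The calibration points, the choice of √(a*² + b*²) as the chromatic measure, and the linear model form are all assumptions for illustration; the paper's actual models (and their separate <100 NTU and >100 NTU ranges) may differ.

```python
import numpy as np
from skimage.color import rgb2lab

def lab_features(rgb_patch):
    """Mean CIE-Lab luminance and chroma of an image patch (RGB floats in [0, 1])."""
    lab = rgb2lab(rgb_patch)
    L = lab[..., 0].mean()
    chroma = np.hypot(lab[..., 1].mean(), lab[..., 2].mean())   # one possible chromatic measure
    return L, chroma

# Hypothetical calibration set: patches imaged at known turbidity standards (NTU).
turbidity_std = np.array([0., 50., 100., 250., 500., 1000.])
patches = [np.random.rand(32, 32, 3) for _ in turbidity_std]    # stand-in camera patches
L_vals = np.array([lab_features(p)[0] for p in patches])

# Simple luminance model NTU ~ c1*L + c0 fitted by least squares (illustrative only).
c1, c0 = np.polyfit(L_vals, turbidity_std, deg=1)
predict_ntu = lambda L: c1 * L + c0
```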
Abstract: Background: Even though NIR fluorescence imaging has many advantages in sentinel lymph node (SLN) mapping and cancer detection, it has a serious drawback: NIR light cannot be seen by the naked eye without a detector. This limitation disturbs accurate SLN detection and adequate tumor resection, resulting in the presence of cancerous cells near the boundaries of surgically removed tissues. Materials and methods: To overcome the drawback of the conventional NIR imaging method, we suggest a novel NIR imaging system that makes the NIR fluorescence image visible to the naked eye: the NIR fluorescence image detected by a video camera is processed by a computer and then projected back onto the NIR fluorescence excitation position with a projector using conspicuous colored light. Image processing techniques were used for projection onto the exact position of the NIR fluorescence image. We also performed a phantom experiment to evaluate the performance of the developed NIR fluorescence projection system using indocyanine green (ICG). Results: The developed NIR fluorescence projection system was applied in a normal mouse model to confirm the usefulness of the system in the clinical field. A BALB/c nude mouse was prepared as the normal mouse model, and a 0.25 mg/ml stock solution of ICG was injected through a tail vein of the mouse. From this application, we confirmed that the injected ICG stayed in the liver of the mouse and, by performing a laparotomy, verified that the projection system projected the ICG fluorescence image at the exact location of the ICG. Conclusions: The application in the normal mouse model verified that the ICG fluorescence image was precisely projected back onto the site where the ICG fluorescence was generated. This demonstrates that the NIR fluorescence projection system makes it possible to visualize the otherwise invisible NIR fluorescence image and to realize SLN mapping and cancer detection in clinical surgery.
Funding: This work is financially supported by the National Natural Science Foundation of China (61005015), the third National Post-Doctoral Special Foundation of China (201003280), and the 2011 Shanghai city young teachers' subsidy scheme. The authors would like to thank the reviewers for their useful comments.
Abstract: This paper presents a new fall detection method for elderly people in a room environment based on shape analysis of 3D depth images captured by a Kinect sensor. Depth images are pre-processed by a median filter, both for the background and the target. The silhouette of the moving individual in the depth images is obtained by a background-frame subtraction method. The depth images are converted to a disparity map, which is obtained from horizontal and vertical projection histogram statistics. The initial floor plane information is obtained from the V-disparity map, and the floor plane equation is estimated by the least squares method. Shape information of the human subject in the depth images is analyzed by a set of moment functions. Coefficients of ellipses are calculated to determine the direction of the individual. The centroids of the human body are calculated, and the angle between the human body and the floor plane is computed. When both the distance from the centroid of the human body to the floor plane and the angle between the human body and the floor plane are lower than certain thresholds, a fall incident is detected. Experiments with different falling directions were performed. Experimental results show that the proposed method can detect fall incidents effectively.
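A minimal sketch of the final decision step described above: fit the floor plane by least squares, then flag a fall when the body centroid is close to the plane and the body axis is nearly parallel to it. The threshold values and the synthetic points are hypothetical, not the paper's settings.

```python
import numpy as np

def fit_floor_plane(points):
    """Least-squares fit of z = a*x + b*y + c to Nx3 floor points.
    Returns the unit normal n and offset d of the plane n.p + d = 0."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    (a, b, c), *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    n = np.array([a, b, -1.0])
    norm = np.linalg.norm(n)
    return n / norm, c / norm

def fall_detected(centroid, body_axis, n, d,
                  height_thresh=0.4, angle_thresh=np.deg2rad(30)):
    """Fall = centroid near the floor AND body axis nearly parallel to the floor."""
    height = abs(np.dot(n, centroid) + d)                  # centroid-to-plane distance
    u = body_axis / np.linalg.norm(body_axis)
    angle_to_plane = np.arcsin(abs(np.dot(n, u)))           # 0 rad = lying, pi/2 = upright
    return height < height_thresh and angle_to_plane < angle_thresh

floor_pts = np.random.rand(500, 3) * [3.0, 3.0, 0.02]       # synthetic, nearly flat floor
n, d = fit_floor_plane(floor_pts)
print(fall_detected(centroid=np.array([1.5, 1.5, 0.2]),
                    body_axis=np.array([1.0, 0.0, 0.1]), n=n, d=d))
```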
Funding: Supported by the National Key Research and Development Program of China (No. 2016YFA0201400) and the National Natural Science Foundation of China (No. 81671431).
Abstract: Objective: To evaluate the imaging potential of a novel near-infrared (NIR) probe conjugated to COC183B2 monoclonal antibodies (MAb) in ovarian cancer (OC). Methods: The expression of the OC183B2 antigen in OC was determined by immunohistochemical (IHC) staining using tissue microarrays with the H-score system and by immunofluorescence (IF) staining of tumor cell lines. Imaging probes with the NIR fluorescent dye cyanine 7 (Cy7) conjugated to the COC183B2 MAb were chemically engineered. OC183B2-positive human OC cells (SKOV3-Luc) were injected subcutaneously into BALB/c nude mice. Bioluminescent imaging (BLI) was performed to detect tumor location and growth. COC183B2-Cy7 at 1.1, 3.3, 10, or 30 μg was used for in vivo fluorescence imaging, and phosphate-buffered saline (PBS), free Cy7 dye and mouse isotype immunoglobulin G (IgG)-Cy7 (delivered at the same doses as COC183B2-Cy7) were used as controls. Results: Expression of OC183B2 with a high H-score was more prevalent in OC tissue than in fallopian tube (FT) tissue. Among 417 OC patients, the expression of OC183B2 was significantly correlated with histological subtype, histological grade, residual tumor size, relapse state and survival status. IF staining demonstrated that COC183B2 was specifically expressed in SKOV3 cells but not HeLa cells. In vivo NIR fluorescence imaging indicated that COC183B2-Cy7 was mainly distributed in the xenograft and liver, with optimal tumor-to-background (T/B) ratios in the xenograft at the 30 μg dose. The highest fluorescent signals in the tumor were observed at 96 h post-injection (hpi). Ex vivo fluorescence imaging revealed fluorescent signals mainly from the tumor and liver. IHC analysis confirmed that the xenografts were OC183B2 positive. Conclusions: COC183B2 is a good candidate for NIR fluorescence imaging and imaging-guided surgery in OC.
Abstract: Surgery is still the primary curative treatment for gastric cancer, which includes resection of the tumor with adequate margins and extended lymphadenectomy. In order to improve operative results and the quality of life of patients, several endeavors have been made toward precision medicine through image-guided surgery, allowing access to real-time intraoperative anatomy and accurate tumor staging. The goal of the surgeon is to achieve a more precise, individualized, and less invasive surgery without compromising oncological efficiency and safety. In this perspective, we have demonstrated the role of the indocyanine green (ICG) and near-infrared (NIR) fluorescence imaging method in gastric cancer surgery. This technique may be used to improve localization of the tumor, detection of sentinel lymph nodes (SLN), real-time lymphatic mapping, and blood flow assessment (anastomosis perfusion).
Abstract: Hyperspectral imaging is gaining a significant role in agricultural remote sensing applications. Its data unit is the hyperspectral cube, which holds spatial information in two dimensions and the spectral band information of each pixel in the third dimension. The classification accuracy of hyperspectral images (HSI) increases significantly when both spatial and spectral features are employed. For this work, the data were acquired using an airborne hyperspectral imager system that collected HSI in the visible and near-infrared (VNIR) range of 400 to 1000 nm wavelength within 180 spectral bands. The dataset covers nine different crops on agricultural land with a spectral resolution of 3.3 nm for each pixel. The data were cleaned of geometric distortions and stored with class labels and annotations of global localization using the inertial navigation system. In this study, a unique pixel-based approach was designed to improve crop classification accuracy by using edge-preserving features (EPF) and principal component analysis (PCA) in conjunction. The preliminary processing generated a high-dimensional EPF stack by applying edge-preserving filters to the acquired HSI. In the second step, this high-dimensional stack was treated with PCA for dimensionality reduction without losing significant spectral information. The resultant feature space (PCA-EPF) demonstrated enhanced class separability for improved crop classification with reduced dimensionality and computational cost. A support vector machine classifier was employed for multiclass classification of the target crops using PCA-EPF. The classification performance was evaluated in terms of individual class accuracy, overall accuracy, average accuracy, and the Cohen kappa factor. The proposed scheme achieved greater than 90% results for all the performance evaluation metrics. PCA-EPF proved to be an effective attribute for crop classification using hyperspectral imaging in the VNIR range. The proposed scheme is well suited for practical applications of crop and landfill estimation using agricultural remote sensing methods.
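A minimal sketch of the dimensionality-reduction and classification stage (PCA followed by a multiclass SVM) on a per-pixel feature stack, assuming hypothetical array shapes and random stand-in data; the construction of the edge-preserving feature (EPF) stack itself is omitted, and the SVM parameters are defaults rather than the paper's settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

# Hypothetical stand-ins: an EPF feature stack (H, W, B) and per-pixel crop labels.
H, W, B = 60, 60, 180
epf_stack = np.random.rand(H, W, B)                # would come from edge-preserving filters
labels = np.random.randint(0, 9, size=(H, W))      # nine crop classes

X = epf_stack.reshape(-1, B)
y = labels.ravel()

# PCA keeps enough components to explain 99% of the variance, then an RBF-kernel SVM
# classifies every pixel of the reduced PCA-EPF feature space.
X_pca = PCA(n_components=0.99).fit_transform(X)
clf = SVC(kernel="rbf").fit(X_pca[::10], y[::10])   # subsampled training for speed
overall_accuracy = clf.score(X_pca, y)
```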
Funding: Supported by the National Natural Science Foundation of China (No. 81300805).
Abstract: AIM: To characterize spectral-domain optical coherence tomography (SD-OCT) features of chorioretinal folds in orbital masses imaged using enhanced depth imaging (EDI). METHODS: A prospective observational case-control study was conducted in 20 eyes of 20 patients; the uninvolved eye served as a control. All patients underwent clinical fundus photography, computed tomography, and EDI SD-OCT imaging before and after surgery. Two patients with cavernous hemangiomas underwent intratumoral injection of bleomycin A5; the remaining patients underwent tumor excision. Patients were followed for 1 to 14mo following surgery (average follow-up, 5.8mo). RESULTS: Visual acuity prior to surgery ranged from 20/20 to 20/200. Following surgery, 5 patients' visual acuity remained unchanged, while the remaining 15 patients had a mean letter improvement of 10 (range 4 to 26 letters). Photoreceptor inner/outer segment defects were found in 10 of 15 patients prior to surgery. Following surgical excision, photoreceptor inner/outer segment defects fully resolved in 8 of these 10 patients. CONCLUSION: Persistence of photoreceptor inner/outer segment defects caused by compression of the globe by an orbital mass can be associated with reduced visual prognosis. Our findings suggest that photoreceptor inner/outer segment defects on EDI SD-OCT could be an indicator for immediate surgical excision of an orbital mass causing choroidal compression.
Funding: Supported by the National Natural Science Foundation of China (No. 81202032), the Key University Science Research Project of Jiangsu Province (No. 16KJB320004), the Jiangsu Provincial Health and Family Planning Commission Foundation (No. Z201502), the Jiangsu Provincial Health and Family Planning Research Projects (No. H2018029), and the Key Laboratory of Nuclear Medicine of the Ministry of Health, Jiangsu Provincial Key Laboratory of Molecular Nuclear Medicine (No. KF201501).
Abstract: Interleukin-11 (IL-11) and the IL-11 receptor α-subunit (IL-11Rα) have been demonstrated to regulate the invasion and proliferation of tumor cells. Our study intends to evaluate noninvasive imaging of IL-11Rα expression in breast tumors using the near-infrared (NIR) fluorescent dye Cy7-labeled IL-11 mimic peptide CGRRAGGSC. This work evaluated the IL-11Rα expression of breast tumor cells and the binding of this peptide to IL-11Rα in vitro and in vivo using Western blotting, immunofluorescence staining and near-infrared fluorescence imaging. Our biochemical study showed that IL-11Rα was overexpressed in breast tumor cells (MCF-7). The cell-binding assay demonstrated specific binding of the peptide CGRRAGGSC to MCF-7 cells in vitro. In vivo imaging results showed that NIR fluorescent signals of Cy7-CGRRAGGSC were selectively accumulated in the tumor and metabolic organs, while in the blocking experiment, free CGRRAGGSC obviously blocked the accumulation of Cy7-CGRRAGGSC in the tumors. These results suggest that IL-11Rα may be used as a potential target for noninvasive imaging in IL-11Rα-overexpressing tumors. Furthermore, the NIR fluorescent dye Cy7-labeled CGRRAGGSC is a suitable imaging agent for studying IL-11Rα expression in vivo.
Funding: Supported by the Natural Science Foundation of Sichuan Province of China, Nos. 2022NSFSC1545 (to YG) and 2022NSFSC1387 (to ZF); the Natural Science Foundation of Chongqing of China, Nos. CSTB2022NSCQ-LZX0038 and cstc2021ycjh-bgzxm0035 (both to XT); the National Natural Science Foundation of China, No. 82001378 (to XT); the Joint Project of Chongqing Health Commission and Science and Technology Bureau, No. 2023QNXM009 (to XT); the Science and Technology Research Program of Chongqing Education Commission of China, No. KJQN202200435 (to XT); and the Chongqing Talents: Exceptional Young Talents Project, No. CQYC202005014 (to XT).
Abstract: Epilepsy can be defined as a dysfunction of the brain network, and each type of epilepsy involves different brain-network changes that are implicated differently in the control and propagation of interictal or ictal discharges. Gaining more detailed information on brain network alterations can help us to further understand the mechanisms of epilepsy and pave the way for brain network-based precise therapeutic approaches in clinical practice. An increasing number of advanced neuroimaging and electrophysiological techniques, such as diffusion tensor imaging-based fiber tractography, diffusion kurtosis imaging-based fiber tractography, fiber ball imaging-based tractography, electroencephalography, functional magnetic resonance imaging, magnetoencephalography, positron emission tomography, molecular imaging, and functional ultrasound imaging, have been extensively used to delineate epileptic networks. In this review, we summarize the relevant neuroimaging and neuroelectrophysiological techniques for assessing structural and functional brain networks in patients with epilepsy, and extensively analyze the imaging mechanisms, advantages, limitations, and clinical application ranges of each technique. A greater focus on emerging advanced technologies, new data analysis software, the combination of multiple techniques, and the construction of personalized virtual epilepsy models can provide a theoretical basis to better understand the brain network mechanisms of epilepsy and make surgical decisions.
Abstract: This paper advances a three-dimensional space interpolation method for grey/depth image sequences, which breaks free from the limits of the original photographing route: viewpoints can cruise at will in space. By using sparse spatial sampling, considerable memory capacity can be saved and the reproduced scenes can be controlled. To overcome the time-consuming and complex computations of the three-dimensional interpolation algorithm, we have studied a fast and practical scattered space lattice algorithm and a 'Warp' algorithm with proper depth. Through several simple aspects of three-dimensional space interpolation, we succeed in developing some simple and practical algorithms. Results of simulated computer experiments show that the new method is feasible.
Abstract: This paper proposes a new technique for embedding depth maps into their corresponding 2-dimensional (2D) images. Since a 2D image and its depth map are integrated into one type of image format, they can be treated as if they were a single 2D image. This reduces the amount of data in 3D images by half and simplifies the processes for sending them through networks, because synchronization between the images for the left and right eyes becomes unnecessary. We embed the depth maps in the quantized discrete cosine transform (DCT) data of the 2D images. The key to this technique is whether the depth maps can be embedded into 2D images without perceivably deteriorating their quality. We try to reduce this deterioration by compressing the depth map data using the differences from the next pixel to the left. We assume that there is at most one non-zero pixel on each horizontal line in the DCT block because the depth map values change abruptly. We conduct an experiment to evaluate the quality of the 2D images embedded with depth maps and find that satisfactory quality can be achieved.
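A toy sketch of the general idea of hiding depth data in block-DCT coefficients: each 8×8 block of a grayscale image carries one depth sample in a high-frequency coefficient. This is deliberately simpler than the scheme described above (no quantization, no left-difference coding of the depth map, no one-coefficient-per-row constraint), and every parameter is illustrative.

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_depth_in_dct(image, depth, strength=8.0):
    """Hide a coarse depth map in one high-frequency DCT coefficient per 8x8 block.
    Toy illustration only -- not the paper's difference-coded embedding scheme."""
    out = image.astype(float).copy()
    h, w = image.shape
    coarse = depth[::8, ::8]                               # one depth sample per block
    for bi, i in enumerate(range(0, h - 7, 8)):
        for bj, j in enumerate(range(0, w - 7, 8)):
            block = dctn(out[i:i + 8, j:j + 8], norm="ortho")
            block[7, 7] = strength * coarse[bi, bj]         # overwrite one HF coefficient
            out[i:i + 8, j:j + 8] = idctn(block, norm="ortho")
    return out   # a receiver reads block[7, 7] / strength back out of each block

image = np.random.rand(64, 64) * 255.0                      # stand-in grayscale 2D image
depth = np.random.rand(64, 64)                               # stand-in normalized depth map
stego = embed_depth_in_dct(image, depth)
```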
Abstract: In general, to reconstruct the accurate shape of buildings, we need at least one stereo model (two photographs) for each building. In most cases, however, only a single non-metric photograph is available, usually obtained either by an amateur, such as a tourist, or from a newspaper or a post card. To evaluate the validity of 3D reconstruction from a single non-metric image, this study analyzes the effects of object depth on the accuracy of dimensional shape in the X and Y directions using a single non-metric image by means of a simulation technique, since such images were considered to be, in most cases, a main source of data acquisition in recording and documenting buildings.
Funding: Supported by the National Institutes of Health, Grant CA95885.
Abstract: We have examined ten human subjects with a previously developed instrument for near-infrared diffuse spectral imaging of the female breast. The instrument is based on a tandem, planar scan of two collinear optical fibers (one for illumination and one for collection) to image a gently compressed breast in a transmission geometry. The optical data collection features a spatial sampling of 25 points/cm2 over the whole breast and a spectral sampling of 2 points/nm in the 650-900 nm wavelength range. Of the ten human subjects examined, eight are healthy subjects and two are cancer patients, with unilateral invasive ductal carcinoma and ductal carcinoma in situ, respectively. For each subject, we generate second-derivative images that identify a network of highly absorbing structures in the breast that we assign to blood vessels. A previously developed paired-wavelength spectral method assigns oxygenation values to the absorbing structures displayed in the second-derivative images. The resulting oxygenation images feature average values over the whole breast that are significantly lower in cancerous breasts (69±14%, n=2) than in healthy breasts (85±7%, n=18) (p<0.01). Furthermore, in the two patients with breast cancer, the average oxygenation values in the cancerous regions are also significantly lower than in the remainder of the breast (invasive ductal carcinoma: 49±11% vs 61±16%, p<0.01; ductal carcinoma in situ: 58±8% vs 77±11%, p<0.001).
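A loose sketch of the two spectral operations mentioned above for a single pixel spectrum: a Savitzky-Golay second derivative of the absorbance spectrum, and a two-wavelength solve for oxy- and deoxy-hemoglobin. The wavelength pair, window settings, and the rough relative extinction coefficients are assumptions for illustration, not the paper's calibration.

```python
import numpy as np
from scipy.signal import savgol_filter

wavelengths = np.arange(650.0, 900.5, 0.5)           # nm, matching the 2 points/nm sampling
absorbance = np.random.rand(wavelengths.size)        # stand-in spectrum at one scan position

# Second derivative of the absorbance spectrum (used to localize absorbing structures).
d2A = savgol_filter(absorbance, window_length=21, polyorder=3, deriv=2, delta=0.5)

# Paired-wavelength hemoglobin estimate: A(l) = eHbO2(l)*[HbO2] + eHb(l)*[Hb] at two
# wavelengths. Extinction values below are rough relative literature numbers, not calibrated.
eps = np.array([[0.30, 2.05],    # ~690 nm: [eHbO2, eHb]
                [0.97, 0.69]])   # ~830 nm
A_pair = np.array([absorbance[np.searchsorted(wavelengths, 690.0)],
                   absorbance[np.searchsorted(wavelengths, 830.0)]])
hbo2, hb = np.linalg.solve(eps, A_pair)
so2 = hbo2 / (hbo2 + hb)                             # tissue oxygen saturation estimate
```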
Abstract: For traffic object detection in foggy environments based on convolutional neural networks (CNN), data sets collected in fog-free environments are generally used to train the network directly. As a result, the network cannot learn the object characteristics of the foggy environment from the training set, and the detection effect is poor. To improve traffic object detection in foggy environments, we propose a method of generating foggy images from fog-free images from the perspective of data set construction. First, taking the KITTI object detection data set as the original fog-free images, we generate the depth image of each original image by using an improved Monodepth unsupervised depth estimation method. Then, a geometric prior depth template is constructed to fuse the image entropy, taken as a weight, with the depth image. After that, a foggy image is acquired from the depth image based on the atmospheric scattering model. Finally, we take two typical object-detection frameworks, the two-stage Faster region-based convolutional neural network (Faster-RCNN) and the one-stage network YOLOv4, and train them on the original data set, the foggy data set and the mixed data set, respectively. According to the test results on the RESIDE-RTTS data set in an outdoor natural foggy environment, the model trained on the mixed data set shows the best effect: the mean average precision (mAP) values are increased by 5.6% and 5.0% under the YOLOv4 model and the Faster-RCNN network, respectively. This proves that the proposed method can effectively improve object identification in foggy environments.
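The fog-rendering step uses the standard atmospheric scattering model I = J·t + A·(1 − t) with transmission t = exp(−β·d). The sketch below applies it to a clear image given a per-pixel depth map; the fog density β and airlight A are illustrative values, and the entropy-weighted geometric depth prior described above is not reproduced here.

```python
import numpy as np

def add_synthetic_fog(image, depth, beta=0.05, airlight=0.9):
    """Render fog on a clear RGB image (floats in [0, 1]) with the atmospheric
    scattering model I = J*t + A*(1 - t), where t = exp(-beta * depth)."""
    t = np.exp(-beta * depth)[..., None]         # per-pixel transmission, broadcast over RGB
    return image * t + airlight * (1.0 - t)

clear = np.random.rand(256, 512, 3)              # stand-in for a KITTI frame
depth = np.random.rand(256, 512) * 80.0          # stand-in metric depth (m), e.g. from Monodepth
foggy = add_synthetic_fog(clear, depth)
```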
Abstract: We observed atherosclerotic plaque phantoms using a novel near-infrared (NIR) hyperspectral imaging (HSI) technique. Data were obtained through saline or blood layers to simulate an angioscopic environment for the phantom. For the study, we developed an NIR-HSI system with an NIR supercontinuum light source and a mercury-cadmium-telluride camera. Apparent spectral absorbance was obtained at wavelengths of 1150-2400 nm. Hyperspectral images of lipid were constructed using a spectral angle mapper algorithm. Bovine fat covered with saline or blood was observed in hyperspectral images at a wavelength around 1200 nm. Our results show that NIR-HSI is a promising angioscopic technique with the potential to identify lipid-rich plaques without clamping and saline injection.
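The spectral angle mapper (SAM) used to build the lipid maps scores each pixel by the angle between its spectrum and a reference spectrum, cos θ = (x·r)/(‖x‖‖r‖). A minimal NumPy sketch, with made-up cube dimensions, reference spectrum, and threshold:

```python
import numpy as np

def spectral_angle_map(cube, reference):
    """Per-pixel spectral angle (radians) between a hyperspectral cube (H, W, L)
    and a reference spectrum (L,). Smaller angles indicate a closer match."""
    dots = cube @ reference                          # (H, W) dot products
    norms = np.linalg.norm(cube, axis=-1) * np.linalg.norm(reference)
    cos_theta = np.clip(dots / np.maximum(norms, 1e-12), -1.0, 1.0)
    return np.arccos(cos_theta)

cube = np.random.rand(64, 64, 120)        # stand-in absorbance cube over 1150-2400 nm bands
lipid_ref = np.random.rand(120)           # stand-in lipid reference spectrum
angles = spectral_angle_map(cube, lipid_ref)
lipid_mask = angles < 0.10                # classification threshold (radians), illustrative
```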