Journal Articles
42 articles found
1. Scientific data products and the data pre-processing subsystem of the Chang'e-3 mission (Cited by: 1)
Authors: Xu Tan, Jian-Jun Liu, Chun-Lai Li, Jian-Qing Feng, Xin Ren, Fen-Fei Wang, Wei Yan, Wei Zuo, Xiao-Qian Wang, Zhou-Bin Zhang. Research in Astronomy and Astrophysics (SCIE, CAS, CSCD), 2014, No. 12, pp. 1682-1694 (13 pages)
Abstract: The Chang'e-3 (CE-3) mission is China's first exploration mission on the surface of the Moon that uses a lander and a rover. The eight instruments that form the scientific payloads have the following objectives: (1) investigate the morphological features and geological structures at the landing site; (2) perform integrated in-situ analysis of minerals and chemical compositions; (3) conduct integrated exploration of the structure of the lunar interior; (4) explore the lunar-terrestrial space environment and lunar surface environment, and acquire Moon-based ultraviolet astronomical observations. The Ground Research and Application System (GRAS) is in charge of data acquisition and pre-processing, management of the payload in orbit, and managing the data products and their applications. The Data Pre-processing Subsystem (DPS) is a part of GRAS. The task of the DPS is the pre-processing of raw data from the eight instruments on CE-3, including channel processing, unpacking, package sorting, calibration and correction, identification of geographical location, calculation of the probe azimuth angle, probe zenith angle, solar azimuth angle and solar zenith angle, and quality checks. These processes produce Level 0, Level 1 and Level 2 data. The computing platform of this subsystem is a high-performance computing cluster comprising a real-time subsystem used for processing Level 0 data and a post-time subsystem for generating Level 1 and Level 2 data. This paper describes the CE-3 data pre-processing method, the data pre-processing subsystem, data classification, data validity and the data products that are used for scientific studies.
Keywords: Moon: data products; methods: data pre-processing; space vehicles: instruments
2. High Speed Regular Expression Matching Engine with Fast Pre-Processing
Authors: Zhe Fu, Jun Li. China Communications (SCIE, CSCD), 2019, No. 2, pp. 177-188 (12 pages)
Abstract: Regular expression matching plays an important role in deep inspection. The rapid development of SDN and NFV makes the network more dynamic, bringing serious challenges to traditional deep inspection matching engines. However, state-of-the-art matching methods often require a significant amount of pre-processing time and hence are not suitable for this fast-updating scenario. In this paper, a novel matching engine called BFA is proposed to achieve high-speed regular expression matching with fast pre-processing. Experiments demonstrate that BFA achieves 5 to 20 times higher update ability than existing regular expression matching methods, and scales well on multi-core platforms.
Keywords: deep inspection; finite automaton; regular expression matching; pre-processing
3. The Role of Combined OSR and SDF Method for Pre-Processing of Microarray Data that Accounts for Effective Denoising and Quantification
Authors: Jayakishan Meher, Mukesh Kumar Raval, Pramod Kumar Meher, Gananath Dash. Journal of Signal and Information Processing, 2011, No. 3, pp. 190-195 (6 pages)
Abstract: Microarray data is inherently noisy due to noise contamination from various sources during the preparation of the microarray slide, which greatly affects the accuracy of the gene expression. How to eliminate the effect of the noise constitutes a challenging problem in microarray analysis. Efficient denoising is often a necessary first step, taken before the image data is analyzed, to compensate for data corruption and to allow effective utilization of these data. Hence, pre-processing of the microarray image is essential to eliminate the background noise, enhance image quality and enable effective quantification. Existing denoising techniques based on transformed domains have been utilized for microarray noise reduction, each with its own limitations. The objective of this paper is to introduce novel pre-processing techniques, namely optimized spatial resolution (OSR) and spatial domain filtering (SDF), for reducing noise in microarray data and reducing error during the quantification process, so that microarray spots can be estimated accurately to determine the expression level of genes. In addition, a combined optimized spatial resolution and spatial filtering method is proposed and is found to improve denoising of microarray data with effective quantification of spots. The proposed method has been validated on microarray images of gene expression profiles of myeloid leukemia from the Stanford Microarray Database, using various quality measures such as signal-to-noise ratio, peak signal-to-noise ratio, image fidelity, structural content, absolute average difference and correlation quality. Quantitative analysis shows that the proposed technique is more efficient for denoising the microarray image, which makes it suitable for effective quantification.
Keywords: denoising; microarray; pre-processing; quantification; spatial domain filtering; optimized spatial resolution; quality measures
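The quality measures cited in this abstract, such as peak signal-to-noise ratio, are standard image metrics; a minimal sketch of PSNR between a reference and a denoised image in plain NumPy (the synthetic arrays are illustrative assumptions, not data from the paper):

```python
import numpy as np

def psnr(reference: np.ndarray, denoised: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between a reference and a processed image."""
    mse = np.mean((reference.astype(np.float64) - denoised.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Example with synthetic data: a clean image plus Gaussian noise.
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
noisy = clean + rng.normal(0, 10, size=clean.shape)
print(f"PSNR of noisy vs. clean: {psnr(clean, noisy):.2f} dB")
```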
4. Concurrent multi-task pre-processing method for LEO mega-constellation based on dynamic spatio-temporal grids
Authors: Xibin Cao, Ning Li, Shi Qiu. Chinese Journal of Aeronautics (SCIE, EI, CAS, CSCD), 2023, No. 11, pp. 233-248 (16 pages)
Abstract: A Low Earth Orbit (LEO) remote sensing satellite mega-constellation is characterized by a large number of satellites of various types, which gives it a unique advantage in carrying out concurrent multiple tasks. However, the large number of tasks and satellites increases the complexity of resource allocation. Therefore, the primary problem in implementing concurrent multiple tasks via a LEO mega-constellation is to pre-process the tasks and observation resources. To address this challenge, we propose a pre-processing algorithm for the mega-constellation based on highly Dynamic Spatio-Temporal Grids (DSTG). In the first stage, this paper describes the management model of the mega-constellation and the multiple tasks. Then, the coding method of the DSTG is proposed, based on which the description of complex mega-constellation observation resources is realized. In the third part, the DSTG algorithm is used to process concurrent multiple tasks at multiple levels, such as task space attributes, time attributes and grid task importance evaluation. Finally, simulation results for the constellation case are given to verify the effectiveness of concurrent multi-task pre-processing based on DSTG. The autonomous processing of task decomposition, task fusion and mapping to grids, and the convenient indexing of time windows are verified.
Keywords: LEO mega-constellation; concurrent multiple tasks; task pre-processing; highly dynamic spatio-temporal grids; multi-task fusion merging; importance evaluation
5. Hardware pre-processing for data of SBL underwater positioning system
Authors: Hu Gozhi (Harbin Shipbuilding Engineering Institute). Chinese Journal of Acoustics, 1990, No. 3, pp. 249-255 (7 pages)
Abstract: The synchro double-pulse signal mode is frequently used in Short Base Line (SBL) underwater positioning systems so as to obtain information on both the distance and the depth of a target simultaneously. However, this signal mode also brings about ranging ambiguity, resulting in an effective positioning distance much shorter than that limited by the period of the synchro signal. This paper presents a hardware distance-gate data acquisition scheme. It puts the original data sent to the computer in the order 'direct first pulse - depth information pulse (or first pulse reflected by the water surface) ...' to guarantee the effective positioning distance of the system. It has the advantage of reducing the processing time of the computer, thus ensuring real-time operation of the system. A figure of the trajectory of an underwater moving target measured in practice is attached at the end of the paper.
Keywords: hardware pre-processing; SBL underwater positioning system; data
6. A Survey on Pre-Processing in Image Matting (Cited by: 4)
Authors: Gui-Lin Yao. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2017, No. 1, pp. 122-138 (17 pages)
Abstract: Pre-processing is an important step in digital image matting, which aims to classify more accurate foreground and background pixels from the unknown region of the input three-region mask (Trimap). This step has no relation to the well-known matting equation and only compares color differences between the current unknown pixel and known pixels. These newly classified pure pixels are then fed to the matting process as samples to improve the quality of the final matte. However, in the research field of image matting, the importance of the pre-processing step remains unclear. Moreover, there are no corresponding review articles for this step, and the quantitative comparison of Trimaps and alpha mattes after this step remains unaddressed. In this paper, the necessity and importance of the pre-processing step in image matting are first discussed in detail. Next, current pre-processing methods are introduced in two categories: static thresholding methods and dynamic thresholding methods. Analyses and experimental results show that static thresholding methods, especially the most popular iterative method, can make accurate pixel classifications in general Trimaps with relatively few unknown pixels. However, in much larger Trimaps, these methods are limited by conservative color and spatial thresholds. In contrast, dynamic thresholding methods can make more aggressive classifications in more difficult cases, but still suffer strongly from noise and false classifications. In addition, the sharp boundary detector is further discussed as a prior for pure pixels. Finally, summaries and a more effective approach to pre-processing are presented in comparison with existing methods.
Keywords: image matting; pixel classification; pre-processing; Trimap expansion
7. Optimization scheme with pre-processing for cooperative relay multicast networks in cellular system (Cited by: 1)
Authors: Wang Cheng-jin, Li Xi, Zhang Lin, Ji Hong. The Journal of China Universities of Posts and Telecommunications (EI, CSCD), 2011, No. 3, pp. 16-21 (6 pages)
Abstract: For cooperative relay multicast networks, general cross-layer optimization approaches converge to the global optimal value slowly because of the large number of relay terminals. However, the mobility of relay terminals requires quickly converging optimization strategies to refresh the relay links frequently. Based on a capacity analysis of multiple relay channels, an improved cross-layer optimization scheme is proposed to resolve this problem, in which the bound of the relay selecting region is determined as a pre-processing step. Utilizing the primal-dual algorithm, a cross-layer framework with pre-processing optimizes both relay terminal selection and power allocation with quick convergence. Simulation results prove the effectiveness of the proposed algorithm.
Keywords: cooperative relay; cross-layer optimization; pre-processing; multicast
8. Pre-processing filter design at transmitters for IBI mitigation in an OFDM system
Authors: Xia Wang, Lei Wang. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2013, No. 5, pp. 722-728 (7 pages)
Abstract: In order to meet the demands for high transmission rates and high service quality in broadband wireless communication systems, orthogonal frequency division multiplexing (OFDM) has been adopted in some standards. However, inter-block interference (IBI) and inter-carrier interference (ICI) in an OFDM system degrade its performance. To mitigate IBI and ICI, some pre-processing approaches based on full channel state information (CSI) have been proposed, which improve system performance. In this paper, a pre-processing filter based on partial CSI at the transmitter is designed and investigated. The filter coefficients are obtained by an optimization procedure, the symbol error rate (SER) is evaluated, and the computational complexity of the proposed scheme is analyzed. Computer simulation results show that the proposed pre-processing filter can effectively mitigate IBI and ICI and improve performance. Compared with transmitter pre-processing approaches based on full CSI, the proposed scheme has higher spectral efficiency, limited CSI feedback and lower computational complexity.
Keywords: pre-processing filter; inter-block interference (IBI) mitigation; limited feedback; orthogonal frequency division multiplexing (OFDM)
9. Review of Recent Trends in the Hybridisation of Preprocessing-Based and Parameter Optimisation-Based Hybrid Models to Forecast Univariate Streamflow
Authors: Baydaa Abdul Kareem, Salah L. Zubaidi, Nadhir Al-Ansari, Yousif Raad Muhsen. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 1, pp. 1-41 (41 pages)
Abstract: Forecasting river flow is crucial for optimal planning, management and sustainable use of freshwater resources. Many machine learning (ML) approaches have been enhanced to improve streamflow prediction. Hybrid techniques are viewed as a viable method for enhancing the accuracy of univariate streamflow estimation compared with standalone approaches, and recent research has emphasised the use of hybrid models to improve forecast accuracy. Accordingly, this paper conducts an updated literature review of applications of hybrid models for estimating streamflow over the last five years, summarising data pre-processing, univariate machine learning modelling strategies, advantages and disadvantages of standalone ML techniques, hybrid models, and performance metrics. This study focuses on two types of hybrid models: parameter optimisation-based hybrid models (OBH) and the hybridisation of parameter optimisation-based and preprocessing-based hybrid models (HOPH). Overall, this research supports the idea that meta-heuristic approaches can precisely improve ML techniques. It is also one of the first efforts to comprehensively examine the efficiency of various meta-heuristic approaches (classified into four primary classes) hybridised with ML techniques. This study revealed that previous research applied swarm, evolutionary, physics-based and hybrid metaheuristics in 77%, 61%, 12% and 12% of cases, respectively. Finally, there is still room for improving OBH and HOPH models by examining different data pre-processing techniques and metaheuristic algorithms.
Keywords: univariate streamflow; machine learning; hybrid model; data pre-processing; performance metrics
10. Product quality prediction based on RBF optimized by firefly algorithm
Authors: Han Huihui, Wang Jian, Chen Sen, Yan Manting. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2024, No. 1, pp. 105-117 (13 pages)
Abstract: With the development of information technology, a large amount of product quality data is accumulated across the entire manufacturing process, but it is not explored and used effectively. Traditional product quality prediction models have many disadvantages, such as high complexity and low accuracy. To overcome these problems, we propose an optimized data equalization method to pre-process the dataset and design a simple but effective product quality prediction model: a radial basis function model optimized by the firefly algorithm with a Levy flight mechanism (RBFFALM). First, the new data equalization method is introduced to pre-process the dataset, which reduces the dimension of the data, removes redundant features and improves the data distribution. Then the RBFFALM is used to predict product quality. Comprehensive experiments conducted on real-world product quality datasets validate that the new model, combined with the new data pre-processing method, outperforms previous methods in predicting product quality.
Keywords: product quality prediction; data pre-processing; radial basis function; swarm intelligence optimization algorithm
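As a rough illustration of the prediction side of this entry, here is a minimal radial basis function (RBF) regression sketch in NumPy; the centres and kernel width are fixed by hand here, whereas the paper tunes its model with a firefly algorithm with Levy flight, which is not reproduced, and all data and parameter values below are made up:

```python
import numpy as np

def rbf_design(X: np.ndarray, centers: np.ndarray, width: float) -> np.ndarray:
    """Gaussian RBF design matrix: one column per center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

rng = np.random.default_rng(3)
# Made-up "process parameters -> quality score" training data.
X_train = rng.uniform(0, 1, size=(200, 3))
y_train = X_train @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=200)

centers = X_train[rng.choice(len(X_train), size=20, replace=False)]  # fixed centers
width = 0.3                                                          # fixed kernel width
Phi = rbf_design(X_train, centers, width)
weights, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)              # least-squares output weights

X_new = rng.uniform(0, 1, size=(5, 3))
print(rbf_design(X_new, centers, width) @ weights)                   # predicted quality scores
```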
11. Machine-Learning Based Packet Switching Method for Providing Stable High-Quality Video Streaming in Multi-Stream Transmission
Authors: Yumin Jo, Jongho Paik. Computers, Materials & Continua (SCIE, EI), 2024, No. 3, pp. 4153-4176 (24 pages)
Abstract: Broadcasting gateway equipment generally uses a method of simply switching to a spare input stream when a failure occurs in the main input stream. However, when the transmission environment is unstable, problems such as a reduced equipment lifespan due to frequent switching, as well as interruption, delay and stoppage of services, may occur. Therefore, it is necessary to apply a machine learning (ML) method that can automatically judge and classify network-related service anomalies and switch multi-input signals without dropping or changing them, by predicting or quickly determining the time of error occurrence so that stream switching is smooth when problems such as transmission errors arise. In this paper, we propose an intelligent packet switching method based on classification, one of the supervised learning methods, which estimates from data the risk level of abnormal multi-streams occurring in broadcasting gateway equipment. Furthermore, we subdivide the risk levels obtained from the classification into probabilities, derive vectorized representative values for each attribute of the collected input data, and continuously update them. The resulting reference vector is used for the switching judgment through its cosine similarity with the input data obtained when a dangerous situation occurs. Broadcasting gateway equipment to which the proposed method is applied can perform more stable and smarter switching than before, solving reliability problems and broadcasting accidents, and can maintain stable video streaming as well.
Keywords: broadcasting and communication convergence; multi-stream packet switching; Advanced Television Systems Committee standard 3.0 (ATSC 3.0); data pre-processing; machine learning; cosine similarity
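The switching judgment described above compares a stored reference vector with an incoming feature vector via cosine similarity; a minimal sketch of that comparison follows, where the feature names, vector values and threshold are hypothetical and not taken from the paper:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors (1.0 = identical direction)."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom > 0 else 0.0

# Hypothetical reference vector for a "risky" stream state and a new observation.
reference = np.array([0.8, 0.1, 0.6])   # e.g. packet-loss, jitter, delay features
observed = np.array([0.7, 0.2, 0.5])

SWITCH_THRESHOLD = 0.95  # assumed threshold, not from the paper
if cosine_similarity(reference, observed) >= SWITCH_THRESHOLD:
    print("Observation resembles the risky reference: switch to the spare stream.")
else:
    print("Keep the main stream.")
```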
12. Design of a Multi-Stage Ensemble Model for Thyroid Prediction Using Learning Approaches
Authors: M. L. Maruthi Prasad, R. Santhosh. Intelligent Automation & Soft Computing, 2024, No. 1, pp. 1-13 (13 pages)
Abstract: This research concentrates on modelling an efficient thyroid prediction approach, which addresses a significant problem faced by the women's community. The major research problem is the lack of an automated model for earlier prediction, and some existing models fail to give good prediction accuracy. Here, a novel clinical decision support system is framed to make the proper decision in times of complexity. Multiple stages are followed in the proposed framework, each playing a substantial role in thyroid prediction. These steps include (i) data acquisition, (ii) outlier prediction, and (iii) a multi-stage weight-based ensemble learning process (MS-WEL). The weighted analysis of the base classifier and the other classifier models helps bridge the gap encountered in any single classifier model. Various classifiers are merged to handle the issues identified in the others, with the intention of enhancing the prediction rate. The proposed model provides superior outcomes and a good prediction rate. The simulation is done in the MATLAB 2020a environment and establishes a better trade-off than various existing approaches, giving a prediction accuracy of 97.28%, higher than the other models.
Keywords: thyroid; machine learning; pre-processing; classification; prediction rate
13. Pre-process algorithm for satellite laser ranging data based on curve recognition from points cloud
Authors: Liu Yanyu, Zhao Dongming, Wu Shan. Geodesy and Geodynamics, 2012, No. 2, pp. 53-59 (7 pages)
Abstract: The quality of satellite laser ranging (SLR) data from COMPASS was analyzed, the difference between curve recognition in computer vision and pre-processing of SLR data was discussed, and finally a new algorithm for SLR data based on curve recognition from a point cloud is proposed. The results obtained by the new algorithm are 85% (or even higher) consistent with those of the screen displaying method; furthermore, the new method can process SLR data automatically, which makes it possible to use it in the development of the COMPASS navigation system.
Keywords: satellite laser ranging (SLR); curve recognition; point cloud; pre-process algorithm; COMPASS; screen displaying
14. An Optimized Deep Learning Approach for Improving Airline Services
Authors: Shimaa Ouf. Computers, Materials & Continua (SCIE, EI), 2023, No. 4, pp. 1213-1233 (21 pages)
Abstract: The aviation industry is one of the most competitive markets. The most common approach for airline service providers is to improve passenger satisfaction. Passenger satisfaction in the aviation industry occurs when passengers' expectations are met during flights. Airline service quality is critical in attracting new passengers and retaining existing ones, so it is crucial to identify passengers' pain points and enhance their satisfaction with the services offered. Airlines have used a variety of techniques to improve service quality, applying data analysis approaches to passenger data. These solutions have focused mainly on surveys; consequently, deep learning approaches have received insufficient attention. In this study, deep neural networks with the adaptive moment estimation (Adam) optimization algorithm were applied to enhance classification performance. In previous studies, the quality of the dataset has been ignored. The proposed approach was applied to the airline passenger satisfaction dataset from the Kaggle repository. It was validated by applying artificial neural networks (ANNs), random forests and support vector machine techniques to the same dataset, and compared with other research papers that used the same dataset for a similar problem. The experimental results showed that the proposed approach outperformed previous studies, achieving an accuracy of 99.3%.
Keywords: Adam optimizer; data pre-processing; airlines; machine learning; deep learning; optimization techniques
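For reference, the adaptive moment estimation (Adam) update named in this entry maintains exponential moving averages of the gradient and its square; a minimal NumPy sketch of one parameter update is given below, with the commonly used default hyperparameters, which are not necessarily those of the paper, and a toy objective in place of the paper's network:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: returns new parameters and updated moment estimates."""
    m = beta1 * m + (1 - beta1) * grad            # first moment (mean of gradients)
    v = beta2 * v + (1 - beta2) * grad ** 2       # second moment (uncentered variance)
    m_hat = m / (1 - beta1 ** t)                  # bias correction
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy example: minimize f(theta) = theta^2, whose gradient is 2*theta.
theta = np.array([5.0])
m = v = np.zeros_like(theta)
for t in range(1, 2001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t, lr=0.05)
print(theta)  # approaches 0
```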
15. An Automatic Threshold Selection Using ALO for Healthcare Duplicate Record Detection with Reciprocal Neuro-Fuzzy Inference System
Authors: Ala Saleh Alluhaidan, Pushparaj, Anitha Subbappa, Ved Prakash Mishra, P. V. Chandrika, Anurika Vaish, Sarthak Sengupta. Computers, Materials & Continua (SCIE, EI), 2023, No. 3, pp. 5821-5836 (16 pages)
Abstract: Systems based on EHRs (electronic health records) have been in use for many years and their amplified adoption has been felt recently; they have pioneered the collection of massive volumes of health data. Duplicate detection involves discovering records that refer to the same real-world entity, a task that generally depends on several input parameters provided by experts. Record linkage refers to the problem of finding identical records across various data sources. The similarity between two records is characterized by domain-based similarity functions over different features. De-duplication of one dataset, or the linkage of multiple datasets, has become a highly significant operation in the data processing stages of different data mining programmes; the objective is to match all the records associated with the same entity. Various measures have been used for representing the quality and complexity of data linkage algorithms, and many novel metrics have been introduced. An outline of the problems in measuring data linkage and de-duplication quality and complexity is presented. This article focuses on the processing of health data that is horizontally divided among data custodians, with the custodians supplying similar features for sets of patients. The first step in this technique is the automatic selection of high-quality training examples from the compared record pairs, and the second step involves training the reciprocal neuro-fuzzy inference system (RANFIS) classifier. For the optimal threshold classifier, it is presumed that information about the original match status is available for all compared record pairs, and an optimal threshold can therefore be computed with Ant Lion Optimization based on the respective RANFIS. The Febrl, Clinical Decision (CD) and Cork Open Research Archive (CORA) data repositories are used to evaluate the proposed method against current techniques.
Keywords: duplicate detection; healthcare; record linkage; dataset pre-processing; reciprocal neuro-fuzzy inference system; ant lion optimization; fuzzy system
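Record-pair comparison of the kind described here scores similarity field by field and combines the scores; a minimal sketch using a simple Jaccard similarity over character trigrams follows, where the field names, weights and example records are hypothetical and the paper's actual similarity functions may differ:

```python
def trigrams(text: str) -> set:
    """Set of character trigrams of a lower-cased field value."""
    t = text.lower().strip()
    return {t[i:i + 3] for i in range(len(t) - 2)} if len(t) >= 3 else {t}

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity between the trigram sets of two field values."""
    sa, sb = trigrams(a), trigrams(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def record_similarity(rec1: dict, rec2: dict, weights: dict) -> float:
    """Weighted average of field-wise similarities for a candidate record pair."""
    total = sum(weights.values())
    return sum(w * jaccard(rec1.get(f, ""), rec2.get(f, "")) for f, w in weights.items()) / total

# Hypothetical patient records and field weights.
r1 = {"name": "Jonathan Smith", "dob": "1980-02-14", "city": "Cork"}
r2 = {"name": "Jonathon Smyth", "dob": "1980-02-14", "city": "Cork"}
weights = {"name": 0.5, "dob": 0.3, "city": 0.2}
print(f"pair similarity: {record_similarity(r1, r2, weights):.2f}")
```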
16. Emotion Deduction from Social Media Text Data Using Machine Learning Algorithm
Authors: Thambusamy Velmurugan, Baskaran Jayapradha. Journal of Computer and Communications, 2023, No. 11, pp. 183-196 (14 pages)
Abstract: Emotion represents the feeling of an individual in a given situation. There are various ways to express the emotions of an individual: they can be categorized into verbal expressions, written expressions, facial expressions and gestures. Among these, the written form is a challenging one from which to extract emotions, as the data is textual. Finding the different kinds of emotions is also a tedious task, as it requires a lot of preparation of the textual data taken for the research. This research work is carried out to analyse and extract the emotions hidden in text data. The text data taken for the analysis is from a social media dataset. Using raw text data directly from social media will not serve the purpose; therefore, the text data has to be pre-processed and then utilised for further processing. Pre-processing makes the text data more efficient to work with and yields valuable insights into the emotions hidden in it; the pre-processing steps also help to manage the text data for identifying the emotions conveyed in the text. This work proposes to deduce the emotions in social media text data by applying a machine learning algorithm. Finally, the usefulness of the emotions is suggested for various stakeholders, to find the attitude of individuals at the moment the data is produced.
Keywords: data pre-processing; machine learning algorithms; emotion deduction; sentiment analysis
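Typical text pre-processing of the kind described above lower-cases the text, strips URLs and punctuation, and tokenizes it; a minimal sketch using only the Python standard library follows, where the cleaning rules and the example post are illustrative assumptions rather than the paper's exact pipeline:

```python
import re

def preprocess(post: str) -> list[str]:
    """Basic social-media text cleaning: lower-case, drop URLs/mentions/punctuation, tokenize."""
    text = post.lower()
    text = re.sub(r"https?://\S+", " ", text)    # remove URLs
    text = re.sub(r"[@#]\w+", " ", text)         # remove mentions and hashtags
    text = re.sub(r"[^a-z\s]", " ", text)        # keep letters only
    return text.split()

print(preprocess("Feeling GREAT today!! :) #blessed https://t.co/xyz @friend"))
# ['feeling', 'great', 'today']
```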
17. Salp Swarm Algorithm with Multilevel Thresholding Based Brain Tumor Segmentation Model
Authors: Hanan T. Halawani. Computers, Materials & Continua (SCIE, EI), 2023, No. 3, pp. 6775-6788 (14 pages)
Abstract: Biomedical image processing is an essential part of several medical applications in supporting computer-aided disease diagnosis. Magnetic Resonance Imaging (MRI) is a commonly utilized imaging tool used to save images of gliomas for clinical examination. Biomedical image segmentation plays a vital role in the healthcare decision-making process and helps to identify the affected regions in the MRI. Though numerous segmentation models are available in the literature, effective segmentation models for brain tumors are still needed. This study develops a salp swarm algorithm with multi-level thresholding based brain tumor segmentation (SSAMLT-BTS) model. The presented SSAMLT-BTS model initially employs bilateral filtering-based noise removal and skull stripping as a pre-processing phase. In addition, the Otsu thresholding approach is applied to segment the biomedical images, and the optimum threshold values are chosen by the use of the SSA. Finally, the active contour (AC) technique is used to identify suspicious regions in the medical image. A comprehensive experimental analysis of the SSAMLT-BTS model is performed using a benchmark dataset and the outcomes are inspected in many aspects. The simulation outcomes report improved performance of the SSAMLT-BTS model over recent approaches, with a maximum accuracy of 95.95%.
Keywords: brain tumor segmentation; noise removal; multilevel thresholding; healthcare; pre-processing
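Multilevel Otsu thresholding of the kind described here selects thresholds that maximize the between-class variance of the grey-level histogram; a minimal sketch of that objective for a candidate threshold set follows, which a metaheuristic such as the SSA would maximize (the salp swarm search itself is not reproduced, and the example image is synthetic):

```python
import numpy as np

def between_class_variance(image: np.ndarray, thresholds) -> float:
    """Otsu objective: probability-weighted spread of class means for the classes the thresholds induce."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    p = hist / hist.sum()                      # grey-level probabilities
    levels = np.arange(256)
    edges = [0, *sorted(int(t) for t in thresholds), 256]
    total_mean = (p * levels).sum()
    variance = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()                     # class probability
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w   # class mean
            variance += w * (mu - total_mean) ** 2
    return variance

# Synthetic two-region "image": exhaustive search over a single threshold as a sanity check.
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 10, 5000)]).clip(0, 255)
best_t = max(range(1, 256), key=lambda t: between_class_variance(img, [t]))
print("best single threshold:", best_t)   # expected to fall between the two modes
```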
18. Qualitative Abnormalities of Peripheral Blood Smear Images Using Deep Learning Techniques
Authors: G. Arutperumjothi, K. Suganya Devi, C. Rani, P. Srinivasan. Intelligent Automation & Soft Computing (SCIE), 2023, No. 1, pp. 1069-1086 (18 pages)
Abstract: In recent years, the peripheral blood smear has become a generic analysis for assessing a person's health status. Manual testing of peripheral blood smear images is difficult, time-consuming and subject to human intervention and visual error. This has encouraged researchers to present algorithms and techniques that perform peripheral blood smear analysis with the help of computer-assisted and decision-making techniques. Existing CAD-based methods fall short of accurately detecting the abnormalities present in the images. To mitigate this issue, a Deep Convolutional Neural Network (DCNN) based automatic classification technique is introduced for the classification of peripheral blood cell groups such as basophil, eosinophil, lymphocyte, monocyte, neutrophil, erythroblast, platelet, myelocyte, promyelocyte and metamyelocyte. The proposed DCNN model employs a transfer learning approach and carries three stages: pre-processing, feature extraction and classification. Initially, pre-processing steps are incorporated to eliminate noisy content in the image by using histogram equalization (HE), which also improves image contrast. Segmentation to distinguish the dissimilar classes is carried out with the Fuzzy C-Means (FCM) model, whose centroid points are optimized with a salp swarm based optimization strategy. Moreover, a specific set of Gray Level Co-occurrence Matrix (GLCM) features of the segmented images is extracted to augment the performance of the proposed detection algorithm. Finally, the extracted features are passed to the DCNN, and the proposed classifier has the capability to extract its own features. On this basis, the diverse set of classes is classified and distinguished from the qualitative abnormalities found in the image.
Keywords: peripheral blood smear; DCNN classifier; pre-processing; segmentation; feature extraction; salp swarm optimization; classification
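Histogram equalization, used in the pre-processing stage described above, remaps grey levels through the cumulative distribution function of the image histogram; a minimal NumPy sketch follows (the low-contrast example image is synthetic, not a blood smear):

```python
import numpy as np

def histogram_equalization(image: np.ndarray) -> np.ndarray:
    """Equalize an 8-bit greyscale image by mapping grey levels through the normalized CDF."""
    hist, _ = np.histogram(image.flatten(), bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())    # normalize CDF to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)           # grey-level lookup table
    return lut[image]

# Synthetic low-contrast image: values clustered in a narrow band.
rng = np.random.default_rng(2)
low_contrast = rng.integers(100, 140, size=(32, 32)).astype(np.uint8)
equalized = histogram_equalization(low_contrast)
print("range before:", low_contrast.min(), low_contrast.max(), "after:", equalized.min(), equalized.max())
```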
19. Adaptive Deep Learning Model for Software Bug Detection and Classification
Authors: S. Sivapurnima, D. Manjula. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 5, pp. 1233-1248 (16 pages)
Abstract: Software bugs are unavoidable in software development and maintenance. Many methods discussed in the literature fail to achieve efficient software bug detection and classification. In this paper, an efficient Adaptive Deep Learning Model (ADLM) is developed for automatic duplicate bug report detection and classification. The proposed ADLM is a combination of Conditional Random Fields decoding with Long Short-Term Memory (CRF-LSTM) and the Dingo Optimizer (DO). In the CRF, the DO is used to choose efficient weight values in the network. The proposed automatic bug report detection proceeds in three stages: pre-processing, feature extraction, and bug detection with classification. Initially, the bug report input dataset is gathered from an online source system. In the pre-processing phase, the unwanted information in the input data is removed by text cleaning, data type conversion and null value replacement. The pre-processed data is then sent to the feature extraction phase, where four types of features are extracted: contextual, categorical, temporal and textual. Finally, the features are sent to the proposed ADLM for automatic duplicate bug report detection and classification. The proposed methodology proceeds in two phases, training and testing, through which bugs are detected and classified from the input data. The proposed technique is assessed by analyzing performance metrics such as accuracy, precision, recall, F-measure and kappa.
Keywords: software bug detection; classification; pre-processing; feature extraction; deep belief neural network; long short-term memory
20. Application of HEC-HMS for flood forecasting in Misai and Wan'an catchments in China (Cited by: 10)
Authors: James Oloche Oleyiblo. Water Science and Engineering (EI, CAS), 2010, No. 1, pp. 14-22 (9 pages)
Abstract: The hydrologic model HEC-HMS (Hydrologic Engineering Center, Hydrologic Modeling System), used in combination with the Geospatial Hydrologic Modeling Extension HEC-GeoHMS, is not a site-specific hydrologic model. Although China has seen applications of many hydrologic and hydraulic models, HEC-HMS is seldom applied in China, and where it is applied, it is not applied holistically. This paper presents a holistic application of HEC-HMS and examines its applicability, capability and suitability for flood forecasting in catchments. The DEMs (digital elevation models) of the study areas were processed using HEC-GeoHMS, an ArcView GIS extension for catchment delineation, terrain pre-processing and basin processing. The model was calibrated and verified using historical observed data. The determination coefficients and coefficients of agreement for all flood events were above 0.9, and the relative errors in peak discharge were all within the acceptable range.
Keywords: hydrologic model; HEC-HMS; catchment delineation; DEM; terrain pre-processing; Misai Catchment; Wan'an Catchment
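The calibration statistics cited in this abstract, the determination coefficient and the relative error in peak discharge, can be computed directly from observed and simulated hydrographs; a minimal NumPy sketch follows, using the Nash-Sutcliffe form of the determination coefficient, which may differ from the paper's exact definition, and made-up flow values:

```python
import numpy as np

def determination_coefficient(observed: np.ndarray, simulated: np.ndarray) -> float:
    """Nash-Sutcliffe style determination coefficient (1.0 = perfect agreement)."""
    ss_res = np.sum((observed - simulated) ** 2)
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def peak_relative_error(observed: np.ndarray, simulated: np.ndarray) -> float:
    """Relative error in peak discharge, as a percentage."""
    return 100.0 * (simulated.max() - observed.max()) / observed.max()

# Illustrative hourly discharge values (m^3/s) for one flood event.
obs = np.array([12.0, 30.0, 85.0, 140.0, 110.0, 60.0, 25.0])
sim = np.array([14.0, 28.0, 80.0, 133.0, 115.0, 58.0, 27.0])
print(f"R^2 = {determination_coefficient(obs, sim):.3f}, "
      f"peak error = {peak_relative_error(obs, sim):.1f}%")
```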