Abstract: Diabetes is a metabolic disorder that results in a retinal complication called diabetic retinopathy (DR), one of the four main causes of blindness across the globe. DR usually shows no clear symptoms before onset, which makes disease identification a challenging task. The healthcare industry may face unfavorable consequences if the gap in identifying DR is not filled with effective automation. Our objective is therefore to develop an automatic and cost-effective method for classifying DR samples. In this work, we present a custom Faster-RCNN technique for the recognition and classification of DR lesions in retinal images. After pre-processing, we generate the dataset annotations required for model training. We then introduce DenseNet-65 at the feature-extraction level of Faster-RCNN to compute a representative set of key points. Finally, the Faster-RCNN localizes the lesions and classifies the input sample into five classes. Rigorous experiments on a Kaggle dataset comprising 88,704 images show that the introduced methodology achieves an accuracy of 97.2%. We compare our technique with state-of-the-art approaches to show its robustness in terms of DR localization and classification. Additionally, we performed cross-dataset validation on the Kaggle and APTOS datasets and achieved remarkable results in both the training and testing phases.
Abstract: COVID-19 has become a pandemic, with cases all over the world and widespread disruption in some countries, such as Italy, the US, India, South Korea, and Japan. Early and reliable detection of COVID-19 is essential to control the spread of infection, and prediction of COVID-19 spread in the near future is likewise crucial for planning disease control. For this purpose, we propose a robust framework for the analysis, prediction, and detection of COVID-19. We make reliable estimates of key pandemic parameters and predict the point of inflection and possible washout time for various countries around the world. The estimates, analysis, and predictions are based on data gathered from the Johns Hopkins Center over the span of April 21 to June 27, 2020. We use the normal distribution for simple and quick predictions of the coronavirus pandemic model and estimate the parameters of Gaussian curves using least-squares curve fitting for several countries on different continents. The predictions rely on the possible outcomes of Gaussian time evolution, with the central limit theorem of statistics justifying them. The parameters of the Gaussian distribution, i.e., maximum time and width, are determined through a statistical χ² fit to the doubling times after April 21, 2020. For COVID-19 detection, we propose a novel method based on the Histogram of Oriented Gradients (HOG) and a CNN in a multi-class classification scenario, i.e., Normal, COVID-19, viral pneumonia, etc. Experimental results show the effectiveness of our framework for reliable prediction and detection of COVID-19.
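The Gaussian-fit step can be sketched in a few lines: the logarithm of A·exp(−(t−t₀)²/(2w²)) is quadratic in t, so an ordinary least-squares polynomial fit recovers the peak time and width. The data below are synthetic, not the Johns Hopkins counts.

```python
import numpy as np

# Synthetic daily-case curve: amplitude A, peak time t0, width w.
t = np.arange(60, dtype=float)
A, t0, w = 5000.0, 30.0, 8.0
cases = A * np.exp(-((t - t0) ** 2) / (2 * w ** 2))

# log(cases) = -(t - t0)^2 / (2 w^2) + log A is quadratic in t, so an
# ordinary least-squares fit (np.polyfit) recovers the Gaussian parameters.
c2, c1, _ = np.polyfit(t, np.log(cases), 2)
width = np.sqrt(-1.0 / (2.0 * c2))   # w  = sqrt(-1 / (2 c2))
peak = c1 * width ** 2               # t0 = c1 * w^2, since c1 = t0 / w^2

print(round(peak, 2), round(width, 2))  # 30.0 8.0
```

On real counts one would fit the log of reported cases (or, as the abstract describes, run a χ² fit against doubling times), but the parameter recovery follows the same algebra.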
Funding: supported by the Research Management Center, Xiamen University Malaysia, under XMUM Research Program Cycle 3 (Grant No. XMUMRF/2019-C3/IECE/0006).
Abstract: In IoT networks, nodes communicate with each other for computational services, data processing, and resource sharing. Huge volumes of data are often generated at the network edge due to extensive communication between IoT devices, so this tidal data is transferred to the cloud data center (CDC) for efficient processing and effective storage. In the CDC, leader nodes are responsible for higher performance, reliability, deadlock handling, and reduced latency, and for providing cost-effective computational services to users. However, optimal leader selection is a computationally hard problem, as several factors, such as memory, CPU MIPS, and bandwidth, must be considered while selecting a leader from the set of available nodes. Existing approaches to leader selection are monolithic, as they identify leader nodes without taking an optimal approach to leader resources. Therefore, a genetic algorithm (GA) based leader election (GLEA) approach for optimal leader node selection is presented in this paper. The proposed GLEA uses the available resources to evaluate candidate nodes during the leader election process. In the first phase of the algorithm, the cost of individual nodes and the overall cluster cost are computed on the basis of the available resources. In the second phase, the best computational nodes are selected as leader nodes by applying genetic operations against a cost function over the available resources. GLEA is then compared against the Bees Life Algorithm (BLA). The experimental results show that the proposed scheme outperforms BLA in terms of execution time, SLA violations, and resource utilization when compared with state-of-the-art schemes.
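A minimal sketch of the GA idea, assuming a weighted-sum cost over memory, CPU MIPS, and bandwidth; the paper's actual cost function, weights, and genetic operators are not given in the abstract, so everything below is illustrative.

```python
import random

random.seed(7)

# Candidate nodes advertise their available resources (synthetic values).
NODES = [{"mem": random.randint(4, 64),          # GB
          "mips": random.randint(1000, 9000),    # CPU MIPS
          "bw": random.randint(100, 1000)}       # Mbps
         for _ in range(20)]

def cost(idx):
    # Illustrative weighted-sum cost; higher means a better leader candidate.
    n = NODES[idx]
    return 0.3 * n["mem"] + 0.5 * n["mips"] / 1000 + 0.2 * n["bw"] / 100

def elect_leader(pop_size=10, generations=30, mutation_rate=0.2):
    # Each chromosome is simply a candidate node index.
    pop = random.sample(range(len(NODES)), pop_size)
    for _ in range(generations):
        pop.sort(key=cost, reverse=True)
        elite = pop[: pop_size // 2]              # selection: keep top half
        children = [random.choice(elite) for _ in elite]
        children = [random.randrange(len(NODES))  # mutation: random jump
                    if random.random() < mutation_rate else c
                    for c in children]
        pop = elite + children                    # elitism preserves the best
    return max(pop, key=cost)

leader = elect_leader()
```

Because the top half of each generation is carried over unchanged, the best node seen so far is never lost, which mirrors the two-phase structure the abstract describes: compute per-node costs first, then let the genetic operations converge on the best-resourced leader.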
Funding: This work was supported by the Deanship of Scientific Research at King Saud University through the Research Group under Grant RG-1438-071.
Abstract: Searching images in large image databases is one of the important research areas of multimedia research. The most challenging task for any CBIR system is to capture the high-level semantics of the user. Researchers in the multimedia domain are trying to address this issue with the help of Relevance Feedback (RF). However, existing RF-based approaches need a number of iterations to fulfill the user's requirements. This paper proposes a novel methodology that achieves better results in early iterations, reducing the user's interaction with the system. Previous research has reported that SVM-based RF approaches generate better results for CBIR; therefore, this paper focuses on the SVM-based RF approach. To enhance its performance, this work applies Particle Swarm Optimization (PSO) and a Genetic Algorithm (GA) before applying the SVM to user feedback. The main objective of using these meta-heuristics is to increase the positive image sample size for the SVM. First, PSO is applied by incorporating the user feedback; second, the GA is applied to the result generated by PSO; finally, the SVM is applied using the positive samples generated by the GA. The proposed technique is named Particle Swarm Optimization Genetic Algorithm-Support Vector Machine Relevance Feedback (PSO-GA-SVM-RF). Precision, recall, and F-score are used as performance metrics for the assessment and validation of the PSO-GA-SVM-RF approach, and experiments are conducted on the Corel image dataset of 10,908 images. The experimental results show that the PSO-GA-SVM-RF approach outperforms various well-known CBIR approaches.
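The PSO stage can be sketched with plain NumPy. The abstract does not give the fitness function, so this sketch assumes fitness is closeness (negative distance) to the centroid of the user's positive-feedback feature vectors, and treats converged particles as the extra positive samples handed to the later GA/SVM stages; all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Feature vectors of images the user marked as relevant (synthetic).
positives = rng.normal(loc=2.0, scale=0.3, size=(5, 8))
centroid = positives.mean(axis=0)

def fitness(x):
    # Assumed fitness: negative distance to the positive-feedback centroid.
    return -np.linalg.norm(x - centroid)

# Standard PSO update: inertia 0.7, cognitive/social coefficients 1.5.
n, dim = 30, 8
pos = rng.normal(size=(n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
gbest = max(pos, key=fitness).copy()

for _ in range(100):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    improved = [fitness(p) > fitness(b) for p, b in zip(pos, pbest)]
    pbest[improved] = pos[improved]
    best = max(pbest, key=fitness)
    if fitness(best) > fitness(gbest):
        gbest = best.copy()

# Converged particles act as synthetic positives for the GA/SVM stages.
extra_positives = pbest[[fitness(p) > -1.0 for p in pbest]]
```

In the full pipeline the abstract describes, the GA would further refine `extra_positives` before the enlarged positive set is used to train the SVM for the next feedback round.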
Funding: This work was supported and funded by the Directorate ASR&TD of UET-Taxila (UET/ASR&TD/RG-1002).
Abstract: 1 Introduction Advertisement detection, and replacement with different ads based on user preferences, is employed during sports rebroadcasts and offers more value to both the distributor and the viewer. Manual advertisement detection is a laborious activity, so there is an urgent need to develop automated advertisement detection techniques to save time, storage space, and transmission bandwidth.
Funding: This work was supported and funded by the Directorate ASR&TD of UET-Taxila.
Abstract: Detection and segmentation of defocus blur is a challenging task in digital imaging applications, as blurry images comprise blur and sharp regions that carry significant information and require effective methods for information extraction. Existing defocus blur detection and segmentation methods have several limitations, e.g., difficulty discriminating sharp smooth from blurred smooth regions, a low recognition rate in noisy images, and high computational cost when no prior knowledge of the images (e.g., blur degree and camera configuration) is available. Hence, there is a dire need for an effective defocus blur detection and segmentation method that is robust to the above limitations. This paper presents a novel feature descriptor, local directional mean patterns (LDMP), for defocus blur detection, and employs KNN matting over the detected LDMP-Trimap for robust segmentation of sharp and blur regions. We hypothesize that most image regions located in blurry areas exhibit significantly fewer distinctive local patterns than those in sharp regions; the proposed LDMP descriptor should therefore reliably detect defocus-blurred regions. The fusion of LDMP features with KNN matting yields superior performance in terms of high-quality segmented regions. Additionally, the proposed LDMP descriptor is robust to noise and successfully detects defocus blur in highly noisy images. Experimental results on the Shi and Zhao datasets demonstrate the effectiveness of the proposed method for defocus blur detection. Evaluation and comparative analysis show that our method achieves superior segmentation performance at a low computational cost of 15 seconds.
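The abstract's core hypothesis, that blurred regions exhibit weaker local patterns than sharp ones, can be illustrated with a simple directional-difference measure. This is not the paper's LDMP formulation (which the abstract does not specify), only a stand-in for the underlying idea.

```python
import numpy as np

def local_pattern_strength(img):
    """Standard deviation of the eight directional differences per pixel.

    Low responses indicate smooth/blurred regions, high responses sharp
    ones -- a crude proxy for the LDMP idea, not the actual descriptor.
    """
    img = img.astype(float)
    h, w = img.shape
    p = np.pad(img, 1, mode="edge")
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    diffs = np.stack([p[1 + dy:h + 1 + dy, 1 + dx:w + 1 + dx] - img
                      for dy, dx in offsets])
    return diffs.std(axis=0)

sharp = np.random.default_rng(0).random((32, 32))  # high-texture patch
blurred = np.full((32, 32), 0.5)                   # smooth stand-in for blur
assert local_pattern_strength(sharp).mean() > local_pattern_strength(blurred).mean()
```

Thresholding such a response map yields the trimap-style separation of confidently sharp, confidently blurred, and uncertain pixels that the KNN matting step would then refine.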