With the ever-increasing popularity of the Internet of Things (IoT), numerous enterprises are attempting to encapsulate their development outcomes into various lightweight Web Application Programming Interfaces (APIs) that can be accessed remotely. In this context, finding and composing a list of existing Web APIs that collectively meet the functional needs of software developers has become a promising approach to developing successful mobile applications economically and easily. However, the number and diversity of candidate IoT Web APIs place an additional burden on application developers' Web API selection decisions, as it is often challenging to simultaneously ensure the diversity and compatibility of the final set of selected Web APIs. Considering this challenge and the latest successful applications of game theory in IoT, a Diversified and Compatible Web APIs Recommendation approach, named DivCAR, is put forward in this paper. First, to achieve API diversity, DivCAR employs a random-walk sampling technique on a pre-built "API-API" correlation graph to generate diverse "API-API" correlation subgraphs. Afterwards, with these diverse subgraphs, the compatible Web API recommendation problem is modeled as a minimum group Steiner tree search problem; solving it returns a sorted set of compatible and diverse Web APIs to the application developer. Finally, a set of experiments is designed and implemented on a real dataset crawled from www.programmableweb.com. Experimental results validate the effectiveness and efficiency of the proposed DivCAR approach in balancing diversity and compatibility in Web API recommendation.
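To make the sampling step concrete, the following Python sketch samples a correlation subgraph from an adjacency-list "API-API" graph by repeated random walks. It is a minimal illustration under assumed data structures (the dict-based graph, the function name `random_walk_subgraph`, and its parameters are all illustrative), not the authors' implementation:

```python
import random

def random_walk_subgraph(graph, start, walk_length=5, num_walks=10):
    """Sample a subgraph around `start` via repeated random walks.

    `graph` is an adjacency dict: API name -> list of correlated APIs.
    A hypothetical stand-in for DivCAR's sampling step.
    """
    visited_edges = set()
    for _ in range(num_walks):
        node = start
        for _ in range(walk_length):
            neighbors = graph.get(node, [])
            if not neighbors:
                break
            nxt = random.choice(neighbors)  # uniform step to a neighbor
            visited_edges.add((node, nxt))
            node = nxt
    # Rebuild the sampled edges as a subgraph adjacency dict
    sub = {}
    for u, v in visited_edges:
        sub.setdefault(u, []).append(v)
    return sub
```

Running this sampler from several seed APIs would yield the multiple diverse subgraphs over which the group Steiner tree search is then performed.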
Packet classification is a fundamental process in provisioning security and quality of service for many intelligent network-embedded systems running in the Internet of Things (IoT). In recent years, researchers have tried to develop hardware-based solutions for classifying Internet packets. Owing to their higher throughput and shorter delays, these solutions are considered a major key to improving quality of service. Most of these efforts have attempted to implement a software algorithm on an FPGA to reduce processing time and enhance throughput. The proposed architectures, however, fail to strike a balance among power consumption, memory usage, and throughput rate. In view of this, the architecture proposed in this paper contains a pipeline-based micro-core, used in network processors, to classify packets. To this end, three architectures have been implemented using the proposed micro-core. The first performs parallel classification based on header fields; the second classifies packets serially; the last is the pipeline-based classifier, which increases performance ninefold. The proposed architectures have been implemented on an FPGA chip. The results indicate a reduction in memory usage as well as an increase in speedup and throughput. The architecture has a power consumption of 1.294 W, and its throughput at a frequency of 233 MHz exceeds 147 Gbps.
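For readers unfamiliar with the task being accelerated, the sketch below shows the software baseline of multi-field header classification that such hardware micro-cores implement: match each packet's header fields against an ordered rule set. The rule format and field choices here are illustrative assumptions; the paper's contribution is the FPGA architecture, not this code:

```python
from ipaddress import ip_address, ip_network

# Each rule: (priority, src_net, dst_net, protocol, dst_port_range, action).
# Example rules are hypothetical.
RULES = [
    (1, ip_network("10.0.0.0/8"), ip_network("192.168.1.0/24"), 6, (80, 80), "allow"),
    (2, ip_network("0.0.0.0/0"), ip_network("0.0.0.0/0"), 17, (53, 53), "allow"),
]

def classify(src, dst, proto, dport):
    """Return the action of the highest-priority (lowest number) matching rule."""
    for prio, snet, dnet, p, (lo, hi), action in sorted(RULES, key=lambda r: r[0]):
        if (ip_address(src) in snet and ip_address(dst) in dnet
                and proto == p and lo <= dport <= hi):
            return action
    return "deny"  # default action when no rule matches

# e.g. classify("10.1.2.3", "192.168.1.7", 6, 80) -> "allow"
```

A parallel hardware design evaluates the header fields concurrently, and a pipelined one overlaps the stages of this loop across successive packets, which is where the reported ninefold speedup comes from.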
The network switches in the data plane of Software Defined Networking (SDN) are empowered by an elementary process in which an enormous number of packets, representing large volumes of data, are classified into specific flows by matching them against a set of dynamic rules. This basic process accelerates data processing: instead of processing individual packets repeatedly, corresponding actions are performed on corresponding flows of packets. In this paper, we first address the limitations of a typical packet classification algorithm, Tuple Space Search (TSS). Then, we present a set of scenarios to parallelize it on different parallel processing platforms, including Graphics Processing Units (GPUs), clusters of Central Processing Units (CPUs), and hybrid clusters. Experimental results show that the hybrid cluster provides the best platform for parallelizing packet classification algorithms, achieving an average throughput of 4.2 million packets per second (Mpps). That is, the hybrid cluster built from the integration of the Compute Unified Device Architecture (CUDA), the Message Passing Interface (MPI), and the OpenMP programming model could classify 0.24 million packets per second more than the GPU cluster scheme. Such a packet classifier satisfies the processing speed required in programmable network systems used to communicate big medical data.
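As background on the algorithm being parallelized, the sketch below is a minimal sequential Tuple Space Search over (source, destination) prefix pairs: rules are grouped into one hash table per prefix-length tuple, and a lookup performs one exact-match probe per tuple. All names and the two-field simplification are illustrative assumptions, not the paper's parallel implementation:

```python
from collections import defaultdict

def prefix_key(ip_int, length):
    """Keep only the top `length` bits of a 32-bit address."""
    return ip_int >> (32 - length) if length else 0

class TupleSpace:
    def __init__(self):
        # One hash table per (src_len, dst_len) prefix-length tuple
        self.tables = defaultdict(dict)

    def insert(self, src, src_len, dst, dst_len, priority, action):
        key = (prefix_key(src, src_len), prefix_key(dst, dst_len))
        self.tables[(src_len, dst_len)][key] = (priority, action)

    def lookup(self, src, dst):
        best = None
        for (sl, dl), table in self.tables.items():  # one probe per tuple
            rule = table.get((prefix_key(src, sl), prefix_key(dst, dl)))
            if rule and (best is None or rule[0] < best[0]):
                best = rule  # keep the highest-priority (lowest number) match
        return best
```

The outer loop over tuples is embarrassingly parallel, which is what makes TSS a natural candidate for distribution across GPU threads (CUDA), cluster nodes (MPI), and CPU cores (OpenMP).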
Energy management in smart homes is one of the most critical problems for Quality of Life (QoL) and for preserving energy resources. A related issue is environmental contamination, which threatens the world's future. Green-computing-enabled Artificial Intelligence (AI) algorithms can provide impactful solutions to this problem. This research proposes using a Recurrent Neural Network (RNN) algorithm known as Long Short-Term Memory (LSTM) to examine how cloud/fog/edge-enabled prediction of a building's energy can be performed. Four parameters, namely electricity power, heating power, cooling power, and total power, in an office/home in cold-climate cities are taken as the features in this study. Based on the collected data, we evaluate the LSTM approach for forecasting these parameters for the next year, predicting energy consumption and monitoring the model's performance online under various conditions. Towards implementing the AI predictive algorithm, several existing tools are studied. The results have been generated through simulations, and we find them promising for future applications.
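A compact PyTorch sketch of a sequence-to-one LSTM forecaster over the four features is shown below. The four input features follow the abstract; the layer sizes, class name, and single-step prediction head are illustrative assumptions rather than the authors' configuration:

```python
import torch
import torch.nn as nn

class EnergyLSTM(nn.Module):
    """LSTM that maps a window of past readings to the next time step.

    Features: electricity power, heating power, cooling power, total power.
    """
    def __init__(self, n_features=4, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_features)  # predict next reading

    def forward(self, x):             # x: (batch, seq_len, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # last hidden state -> next values

# e.g. EnergyLSTM()(torch.randn(8, 24, 4)).shape -> torch.Size([8, 4])
```

Feeding each prediction back as the next input would extend such a model to the year-ahead forecasting horizon described in the study.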
In the construction industry, non-destructive tests are necessary and cost-effective for preventing accidents. Electrical impedance tomography is a new non-invasive imaging technology in which an image of the interior of conductive bodies is reconstructed from arrays of external electrodes attached to the periphery of the object. The equipment is cheap, fast, and edge-compatible. This imaging method reconstructs the distribution of electrical conductivity (or its inverse, electrical impedance) in the internal parts of the target object. The image reconstruction process is performed by injecting a precise electric current into the peripheral boundaries of the object, measuring the peripheral voltages it induces, and processing the collected data. In an electrical impedance tomography system, the voltages measured at the peripheral boundaries have a non-linear relationship with the electrical conductivity distribution. This paper presents a cheap Electrical Impedance Tomography (EIT) instrument for detecting impurities in concrete. A voltage-controlled current source, a micro-controller, a set of multiplexers, a set of electrodes, and a personal computer constitute the structure of the system. Tests conducted on concrete with impurities show that the designed EIT system can reveal impurities with good accuracy in a reasonable time.
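The data-acquisition cycle of such a system can be sketched as follows, using the common adjacent-drive pattern: current is injected through one electrode pair while voltages are read from every non-driven pair. The electrode count, the drive pattern, and the `inject_current`/`read_voltage` callables are hypothetical placeholders for the micro-controller and multiplexer interface, not the paper's firmware:

```python
N_ELECTRODES = 16  # assumed ring of boundary electrodes

def measurement_frame(inject_current, read_voltage):
    """Collect one frame of boundary voltages for image reconstruction."""
    frame = []
    for i in range(N_ELECTRODES):
        drive = (i, (i + 1) % N_ELECTRODES)       # adjacent injection pair
        inject_current(*drive)
        for j in range(N_ELECTRODES):
            sense = (j, (j + 1) % N_ELECTRODES)
            if set(sense) & set(drive):
                continue                          # skip the driven electrodes
            frame.append(read_voltage(*sense))
    return frame
```

Each collected frame is then passed to the personal computer, where a non-linear inverse solver maps the boundary voltages back to a conductivity image.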
This study investigates the different aspects of multimedia computing in Video Synthetic Aperture Radar (Video-SAR) as a new mode of radar imaging for real-time remote sensing and surveillance. It also offers new suggestions on the systematic design, research taxonomy, and future trends of radar data processing. Unlike the conventional modes of SAR imaging, Video-SAR can generate video sequences for online monitoring and green surveillance throughout the day and night (regardless of light sources) in all weather conditions. First, an introduction to Video-SAR is presented. Then, some specific properties of this imaging mode are reviewed. In particular, this research covers one of the most important aspects of Video-SAR systems, namely the systematic design requirements, as well as some new types of visual distortion that differ from the distortions, artifacts, and noise observed in conventional imaging radar. In addition, topics on the general features and high-performance computing of Video-SAR, radar communications through Unmanned Aerial Vehicle (UAV) platforms, the Internet of Multimedia Things (IoMT), Video-SAR data processing issues, and real-world applications are investigated.