Funding: Supported by National Research Foundation of Korea (NRF) grants funded by the Ministry of Science and ICT (MSIT) (RS-2023-00251283 and 2022M3D1A2083618) and by the Ministry of Education (2020R1A6A1A03040516).
Abstract: Advancements in sensor technology have significantly enhanced atmospheric monitoring. Notably, metal oxide and carbon (MO_x/C) hybrids have gained attention for their exceptional sensitivity and room-temperature sensing performance. However, previous methods of synthesizing MO_x/C composites suffer from problems including inhomogeneity, aggregation, and challenges in micropatterning. Herein, we introduce a refined method that employs a metal–organic framework (MOF) as a precursor combined with direct laser writing. The inherent structure of MOFs ensures a uniform distribution of metal ions and organic linkers, yielding homogeneous MO_x/C structures. The laser processing facilitates precise micropatterning (<2 μm, comparable to typical photolithography) of the MO_x/C crystals. The optimized MOF-derived MO_x/C sensor rapidly detected ethanol gas even at room temperature (105 and 18 s for response and recovery, respectively), with a broad sensing range from 170 to 3,400 ppm and a high response value of up to 3,500%. Additionally, this sensor exhibited enhanced stability and thermal resilience compared to previous MOF-based counterparts. This research opens up promising avenues for practical applications in MOF-derived sensing devices.
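As a rough illustration of the figures quoted above, the sketch below computes a chemiresistive response value and a 90% response time from a synthetic resistance trace. The |R_gas - R_air|/R_air definition and all numbers are assumptions for illustration, not the paper's data.

```python
import numpy as np

def sensor_response_percent(r_air: float, r_gas: float) -> float:
    # Common chemiresistive convention: |R_gas - R_air| / R_air * 100.
    # The paper may normalize differently (assumption).
    return abs(r_gas - r_air) / r_air * 100.0

def response_time(t: np.ndarray, r: np.ndarray, frac: float = 0.9) -> float:
    # Time to reach `frac` (here 90%) of the total resistance change
    # after exposure begins at t[0]; assumes a monotonic trace.
    target = r[0] + frac * (r[-1] - r[0])
    return float(t[np.argmax(r >= target)] - t[0])

# Synthetic exponential transient on exposure (kOhm vs seconds).
t = np.linspace(0.0, 300.0, 3001)
r = 2.7 + 100.0 * (1.0 - np.exp(-t / 45.0))
print(f"response = {sensor_response_percent(r[0], r[-1]):.0f}%")  # ~3700%
print(f"t90 = {response_time(t, r):.0f} s")                       # ~103 s
```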
Abstract: The performance of state-of-the-art deep reinforcement learning algorithms such as Proximal Policy Optimization, Twin Delayed Deep Deterministic Policy Gradient, and Soft Actor-Critic for generating a quadruped walking gait in a virtual environment was presented in previous research work titled “A Comparison of PPO, TD3, and SAC Reinforcement Algorithms for Quadruped Walking Gait Generation”. We demonstrated that the Soft Actor-Critic algorithm had the best performance generating the walking gait for a quadruped in certain instances of sensor configurations in the virtual environment. In this work, we present a performance analysis of the above state-of-the-art deep reinforcement learning algorithms for quadruped walking gait generation in a physical environment. The performance is determined in the physical environment by transfer learning augmented by real-time reinforcement learning for gait generation on a physical quadruped. The performance is analyzed on a quadruped equipped with a range of sensors: position tracking using a stereo camera, contact sensing of each of the robot legs through force-resistive sensors, and proprioceptive information of the robot body and legs using nine inertial measurement units. The performance comparison is presented using the metrics associated with the walking gait: average forward velocity (m/s), average forward velocity variance, average lateral velocity (m/s), average lateral velocity variance, and quaternion root mean square deviation. The strengths and weaknesses of each algorithm for the given task on the physical quadruped are discussed.
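A minimal sketch of how the listed gait metrics can be computed from logged trajectories, assuming the quaternion deviation is measured against a fixed reference orientation with a sign-invariant quaternion distance (the paper's exact metric may differ); all trajectory data here is synthetic.

```python
import numpy as np

def velocity_stats(pos: np.ndarray, dt: float):
    # Mean and variance of per-sample velocity along one axis (m/s).
    v = np.diff(pos) / dt
    return float(v.mean()), float(v.var())

def quaternion_rmsd(q: np.ndarray, q_ref: np.ndarray) -> float:
    # Sign-invariant per-sample distance 1 - |<q, q_ref>| between unit
    # quaternions and a reference orientation, reduced to an RMS value.
    dots = np.abs(q @ q_ref)
    return float(np.sqrt(np.mean((1.0 - dots) ** 2)))

# Toy 10 s log at 100 Hz: steady forward motion with lateral sway.
dt, n = 0.01, 1000
x = np.cumsum(np.full(n, 0.30 * dt))                  # ~0.3 m/s forward
y = 0.02 * np.sin(np.linspace(0.0, 20.0 * np.pi, n))  # lateral sway (m)
q = np.tile([1.0, 0.0, 0.0, 0.0], (n, 1))             # level body, w-first
print("forward:", velocity_stats(x, dt))
print("lateral:", velocity_stats(y, dt))
print("quat RMSD:", quaternion_rmsd(q, np.array([1.0, 0.0, 0.0, 0.0])))
```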
Abstract: Cyber threats and risks are increasing exponentially with time. For prevention and defense against these threats and risks, precise risk perception for effective mitigation is the first step. Risk perception is a necessary requirement for mitigating risk, as it drives the security strategy at the organizational level and human attitude at the individual level. Sometimes, individuals understand there is a risk that a negative event or incident can occur, but they do not believe there will be a personal impact if the risk comes to realization; instead, they believe that the negative event will impact others. This supports the common finding that individuals tend to think of themselves as invulnerable, i.e., optimistically biased about the situation, thus affecting their attitude toward taking preventive measures due to inappropriate risk perception or overconfidence. The main motivation of this meta-analysis is to assess how cyber optimism bias affects individuals' cyber security risk perception and how it changes their decisions. Applying a meta-analysis, this study found that optimistic bias has an overall negative impact on cyber security due to inappropriate risk perception and individuals considering themselves invulnerable, believing that the threat will not occur to them. Due to cyber optimism bias, individuals will sometimes share passwords in the belief that they will not be maliciously used, fail to adopt preventive measures, ignore security incidents, misperceive cyber threats, and be overconfident about their own cyber security.
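The abstract does not spell out the pooling procedure; a standard random-effects meta-analysis (DerSimonian-Laird) over per-study effect sizes could look like the sketch below, with invented placeholder effects.

```python
import numpy as np

def dersimonian_laird(effects: np.ndarray, variances: np.ndarray):
    # Random-effects pooling: fixed-effect weights, Cochran's Q,
    # between-study variance tau^2, then re-weighted pooled estimate.
    w = 1.0 / variances
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)      # Cochran's Q
    k = len(effects)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)          # between-study variance
    w_star = 1.0 / (variances + tau2)
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se

# Hypothetical per-study effects of optimism bias on risk perception.
effects = np.array([-0.42, -0.31, -0.55, -0.18, -0.37])
variances = np.array([0.010, 0.022, 0.015, 0.030, 0.012])
pooled, se = dersimonian_laird(effects, variances)
print(f"pooled effect = {pooled:.3f} +/- {1.96 * se:.3f} (95% CI)")
```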
Abstract: Bundle adjustment is a camera and point refinement technique in a 3D scene reconstruction pipeline. The camera parameters and the 3D points are refined by minimizing the difference between the computed projections and the observed projections of the image points, formulated as a non-linear least-squares problem. The Levenberg-Marquardt method is used to solve the non-linear least-squares problem. Solving this problem is computationally expensive, proportional to the number of cameras, points, and projections. In this paper, we implement the Bundle Adjustment (BA) algorithm and analyze techniques to improve algorithmic performance by reducing the mean square error. We investigate using an additional radial distortion camera parameter in the BA algorithm and demonstrate better convergence of the mean square error. We also demonstrate the use of explicitly computed analytical derivatives. In addition, we implement the BA algorithm on GPUs using the CUDA parallel programming model to reduce the computational time burden of the BA algorithm. CUDA streams, atomic operations, and the cuBLAS library in the CUDA programming model are proposed, implemented, and demonstrated to improve the performance of the BA algorithm. Our implementation has demonstrated better convergence of the BA algorithm and achieved a speedup of up to 16× on various datasets.
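A minimal sketch of the reprojection-error minimization described above, using SciPy's Levenberg-Marquardt solver on a single synthetic camera (the paper's BA jointly refines many cameras and points, and adds a radial distortion term):

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(params, pts3d):
    # Pinhole projection: params = [rvec(3), t(3), f]. No distortion here;
    # the paper adds a radial distortion parameter to improve convergence.
    rvec, t, f = params[:3], params[3:6], params[6]
    cam = Rotation.from_rotvec(rvec).apply(pts3d) + t
    return f * cam[:, :2] / cam[:, 2:3]       # perspective divide

def residuals(params, pts3d, observed):
    return (project(params, pts3d) - observed).ravel()

# Toy problem: recover a perturbed camera from known points/observations.
rng = np.random.default_rng(0)
pts3d = rng.uniform(-1, 1, (50, 3)) + [0, 0, 5]
true = np.array([0.1, -0.05, 0.02, 0.2, -0.1, 0.3, 800.0])
observed = project(true, pts3d) + rng.normal(0, 0.3, (50, 2))  # pixel noise
guess = true + [0.05, 0.05, -0.05, 0.1, 0.1, -0.1, 40.0]
fit = least_squares(residuals, guess, args=(pts3d, observed), method="lm")
print("RMS reprojection error (px):", np.sqrt(np.mean(fit.fun ** 2)))
```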
Abstract: Deep reinforcement learning (deep RL) has the potential to replace classic robotic controllers. State-of-the-art deep RL algorithms such as Proximal Policy Optimization, Twin Delayed Deep Deterministic Policy Gradient, and Soft Actor-Critic, to mention a few, have been investigated for training robots to walk. However, conflicting performance results for these algorithms have been reported in the literature. In this work, we present a performance analysis of the above three state-of-the-art deep RL algorithms for a constant-velocity walking task on a quadruped. The performance is analyzed by simulating the walking task of a quadruped equipped with a range of sensors present on a physical quadruped robot. Simulations of the three algorithms across a range of sensor inputs and with domain randomization are performed. The strengths and weaknesses of each algorithm for the given task are discussed. We also identify a set of sensors that contribute to the best performance of each deep RL algorithm.
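The papers' training code is not shown here; as an illustrative harness, the three algorithms can be compared under a common interface with Stable-Baselines3, where Pendulum-v1 stands in for the custom quadruped environment:

```python
import gymnasium as gym
from stable_baselines3 import PPO, SAC, TD3
from stable_baselines3.common.evaluation import evaluate_policy

# Pendulum-v1 is a stand-in: any continuous-action Gymnasium environment,
# including a custom quadruped simulation, fits this loop unchanged.
for algo in (PPO, SAC, TD3):
    env = gym.make("Pendulum-v1")
    model = algo("MlpPolicy", env, seed=0, verbose=0)
    model.learn(total_timesteps=20_000)     # small budget for illustration
    mean_r, std_r = evaluate_policy(model, env, n_eval_episodes=10)
    print(f"{algo.__name__}: mean return {mean_r:.1f} +/- {std_r:.1f}")
```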
Abstract: The 3D reconstruction pipeline uses the Bundle Adjustment algorithm to refine the camera and point parameters. The Bundle Adjustment algorithm is compute-intensive, and many researchers have improved its performance by implementing the algorithm on GPUs. In the previous research work, “Improving Accuracy and Computational Burden of Bundle Adjustment Algorithm using GPUs,” the authors first demonstrated improved Bundle Adjustment algorithmic performance by reducing the mean square error using an additional radial distortion parameter and explicitly computed analytical derivatives, and then reduced the computational burden of the Bundle Adjustment algorithm using GPUs. With the naïve implementation of the CUDA code, a speedup of 10× was achieved for the largest dataset of 13,678 cameras, 4,455,747 points, and 28,975,571 projections. In this paper, we present the optimization of the Bundle Adjustment algorithm CUDA code on GPUs to achieve a higher speedup. We propose a new data memory layout for the parameters in the Bundle Adjustment algorithm, resulting in contiguous memory access. We demonstrate that it improves the memory throughput on the GPUs, thereby improving the overall performance. We also demonstrate an increase in the computational throughput of the algorithm by optimizing the CUDA kernels to utilize the GPU resources effectively. A comparative performance study of explicitly computing an algorithm parameter versus using the Jacobians instead is presented. In the previous work, the Bundle Adjustment algorithm failed to converge for certain datasets because several camera block matrices in the augmented normal equation were rank-deficient. In this work, we identify the cameras that cause rank-deficient matrices and preprocess the datasets to ensure the convergence of the BA algorithm. Our optimized CUDA implementation achieves convergence of the Bundle Adjustment algorithm in around 22 seconds for the largest dataset, compared to 654 seconds for the sequential implementation, resulting in a speedup of 30×. The optimized CUDA implementation presented in this paper achieves a 3× speedup for the largest dataset compared to the previous naïve CUDA implementation.
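The contiguous-layout idea is straightforward to illustrate outside CUDA: a structure-of-arrays layout stores each camera parameter in its own contiguous block, so threads reading the same parameter across cameras touch adjacent addresses. A NumPy sketch of the two layouts (illustrative only; the paper's kernels are CUDA):

```python
import numpy as np

n_cams = 10_000

# Array-of-structures: each camera's 9 parameters interleaved. Reading
# one parameter across all cameras strides through memory.
aos = np.zeros((n_cams, 9))
focal_aos = aos[:, 6]                 # stride of 9 doubles between reads

# Structure-of-arrays: one contiguous block per parameter, the coalesced
# access pattern GPU memory controllers reward.
soa = {name: np.zeros(n_cams) for name in
       ("rx", "ry", "rz", "tx", "ty", "tz", "f", "k1", "k2")}
focal_soa = soa["f"]                  # unit stride, fully contiguous

print(focal_aos.flags["C_CONTIGUOUS"], focal_soa.flags["C_CONTIGUOUS"])
# -> False True
```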
Abstract: In an effort to reduce vehicle collisions with snowplows in poor weather conditions, this paper details the development of a real-time, thermal-image-based machine learning approach to an early collision avoidance system for snowplows, which is intended to detect and estimate the distance of trailing vehicles. Due to the operational conditions of snowplows, which include heavy blowing snow, traditional optical sensors like LiDAR and visible-spectrum cameras have reduced effectiveness in detecting objects in such environments. Thus, we propose using a thermal infrared camera as the primary sensor along with machine learning algorithms. First, we curate a large dataset of thermal images of vehicles in heavy snow conditions. Using the curated dataset, two machine learning models based on modified ResNet architectures were trained to detect trailing vehicles and estimate their distance from real-time thermal images. The trained detection network was capable of detecting trailing vehicles 99.0% of the time at a distance of 1500.0 ft from the snowplow. The trained trailing-distance network was capable of estimating distance with an average estimation error of 10.70 ft. The inference performance of the trained models is discussed, along with the interpretation of the performance.
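A hedged sketch of the kind of modified-ResNet regression head the abstract describes, assuming a torchvision ResNet-18 backbone adapted to one-channel thermal input with a scalar distance output; the paper's exact architecture changes are not reproduced here.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

def make_distance_net() -> nn.Module:
    # ResNet-18 adapted for single-channel thermal frames, regressing
    # trailing distance in feet (an assumed configuration).
    net = resnet18(weights=None)
    net.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3,
                          bias=False)          # 1-channel thermal input
    net.fc = nn.Linear(net.fc.in_features, 1)  # scalar distance head
    return net

net = make_distance_net()
thermal = torch.randn(8, 1, 224, 224)          # batch of thermal frames
pred_ft = net(thermal)                         # (8, 1) predicted distances
loss = nn.functional.l1_loss(pred_ft, torch.full((8, 1), 350.0))
print(pred_ft.shape, f"MAE-style loss: {loss.item():.1f} ft")
```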
Abstract: A machine learning model, using the transformer architecture, is used to design a feedback compensator and prefilter for various simulated plants. The output of the transformer is a sequence of compensator and prefilter parameters. The compensator and prefilter are linear models, preserving the ability to analyze the system with linear control theory. The input to the network is a window of recent reference and output samples. The goal of the transformer is to minimize tracking error at each time step. The plants under consideration range from simple to challenging; the more difficult plants contain closely spaced, lightly damped, complex-conjugate pairs of poles and zeros. Results are compared to PID controllers tuned for a similar crossover frequency and optimal phase margin. For simple plants, the transformer converges to solutions that overly rely on the prefilter, neglecting the maximization of negative feedback. For more complex plants, the transformer designs a compensator and prefilter with more desirable qualities. In all cases, the transformer can start with random model parameters and modify them to minimize tracking error on the step reference.
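As a toy illustration of the per-step tracking-error objective, the sketch below closes a loop around an assumed first-order plant with a PI compensator and static prefilter standing in for the parameter vector a trained model would emit (the paper's compensator class is more general):

```python
import numpy as np

def closed_loop_tracking_error(kp, ki, kf, ref, a=0.95, b=0.05):
    # Plant: y[k+1] = a*y[k] + b*u[k]; compensator: PI on the prefiltered
    # error. Returns the per-step tracking error the model would minimize.
    y, integ, err = 0.0, 0.0, []
    for r in ref:
        e = kf * r - y            # prefilter shapes the reference
        integ += e
        u = kp * e + ki * integ   # linear compensator output
        y = a * y + b * u
        err.append(r - y)
    return np.asarray(err)

ref = np.ones(200)                # unit step reference
e = closed_loop_tracking_error(kp=2.0, ki=0.1, kf=1.0, ref=ref)
print(f"mean |tracking error| = {np.mean(np.abs(e)):.4f}")
```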
Abstract: The main objective of this study is to estimate the environmental pollution of hybrid biomass and co-generation power plants. The efficiency of direct tapping of biomass is about 15%-20%; consequently, about 80% of the energy would be wasted in this method, while in a co-generation power plant this figure could improve to more than 50%. Therefore, to achieve higher efficiency in utilizing biomass energy, a co-generation power plant using biogas as fuel instead of natural gas is proposed. The proposed system would supply thermal and electrical energy for non-urban areas of Iran. In this regard, the process of fermentation and gas production from biomass in a vertical digester is studied and simulated using analytic methods. Various factors affecting the fermentation, such as temperature, humidity, and pH, and the optimal conditions for the extraction of gas from agricultural and animal waste are also determined. A comparison between the pollution emissions from fossil-fuel power plants and power plants fed by biomass shows about an 88% reduction in greenhouse gas emissions, which is a significant figure.
Abstract: Flotation is a complex, multifaceted process that is widely used for the separation of finely ground minerals. The theory of froth flotation is complex and not completely understood, a fact that has brought many monitoring challenges to coal processing plants. To solve those challenges, it is important to understand the effect of different parameters on fine particle separation and to control flotation performance for a particular system. This study examines the effect of various parameters (particle characteristics and hydrodynamic conditions) on coal flotation responses (flotation rate constant and recovery) using different modeling techniques. A comprehensive coal flotation database was prepared for the statistical and soft computing methods. Statistical factors were used for variable selection. Results were in good agreement with recent theoretical flotation investigations. The computational models can accurately estimate the flotation rate constant and coal recovery (correlation coefficients of 0.85 and 0.99, respectively). According to the results, it can be concluded that the soft computing models can overcome the complexity of the process and be used as an expert system to control and optimize the parameters of the coal flotation process.
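The flotation rate constant in such studies is conventionally obtained by fitting the first-order model R(t) = R_inf(1 - exp(-kt)) to timed-recovery data; a sketch with placeholder data:

```python
import numpy as np
from scipy.optimize import curve_fit

def first_order_recovery(t, r_inf, k):
    # Classical first-order flotation model: R(t) = R_inf * (1 - exp(-kt)).
    return r_inf * (1.0 - np.exp(-k * t))

# Synthetic timed-recovery data (minutes, % recovery) -- placeholders,
# not values from the paper's database.
t = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 12.0])
r = np.array([22.0, 38.0, 58.0, 77.0, 88.0, 90.0])
(r_inf, k), _ = curve_fit(first_order_recovery, t, r, p0=(90.0, 0.5))
print(f"ultimate recovery = {r_inf:.1f}%, rate constant k = {k:.2f} 1/min")
```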
Funding: The authors thank the council of the Iran National Science Foundation and the University of Kashan for supporting this work under Grant No. 159271/999.
Abstract: In this work we synthesize a novel and highly efficient photocatalyst for the degradation of methyl orange and rhodamine B. In addition, a new method for the synthesis of Fe3O4@SiO2@TiO2@Ho magnetic core-shell nanoparticles with spherical morphology is proposed. The crystal structures, morphology, and chemical properties of the as-synthesized nanoparticles were characterized using Fourier transform infrared spectroscopy (FT-IR), scanning electron microscopy (SEM), transmission electron microscopy (TEM), energy-dispersive X-ray spectroscopy (EDS), X-ray diffraction (XRD), UV–vis diffuse reflectance spectroscopy (DRS), and vibrating sample magnetometer (VSM) techniques. The photocatalytic activity of Fe3O4@SiO2@TiO2@Ho was investigated through the degradation of methyl orange (MO) as an anionic dye and rhodamine B (RhB) as a cationic dye in aqueous solution under UV/vis irradiation. The results indicate that about 92.1% of RhB and 78.4% of MO were degraded after 120 and 150 min, respectively. These degradation results show that the holmium-doped Fe3O4@SiO2@TiO2@Ho nanoparticles are a better photocatalyst than undoped Fe3O4@SiO2@TiO2 for the degradation of MO and RhB. In addition, the catalyst shows high recoverability and stability even after several separation cycles.
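As a worked illustration of the reported degradation figures, the sketch below computes degradation efficiency and a pseudo-first-order rate constant from a concentration profile; the profile is a placeholder chosen to match the ~92% RhB figure, not measured data.

```python
import numpy as np

def degradation_percent(c0: float, ct: float) -> float:
    # Degradation efficiency (%) from initial and current concentration.
    return (c0 - ct) / c0 * 100.0

def pseudo_first_order_k(t: np.ndarray, c: np.ndarray) -> float:
    # Slope of ln(C0/C) vs t: the usual pseudo-first-order rate constant
    # reported for dye photodegradation.
    return float(np.polyfit(t, np.log(c[0] / c), 1)[0])

# Placeholder RhB concentration profile over 120 min under UV/vis.
t = np.array([0.0, 30.0, 60.0, 90.0, 120.0])   # minutes
c = np.array([1.00, 0.55, 0.29, 0.15, 0.079])  # C/C0
print(f"{degradation_percent(c[0], c[-1]):.1f}% degraded")   # ~92.1%
print(f"k = {pseudo_first_order_k(t, c):.4f} 1/min")
```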
Abstract: Information-Centric Networking (ICN) is an innovative paradigm for the future internet architecture which addresses IP network limitations in supporting content distribution and information access by decoupling content from hosts and providing the ability to retrieve a content object by its name (identifier) rather than its storage location (IP address). Name resolution and routing are critical for content retrieval in ICN networks. In this research, we perform a comparative study of two widely used classes of ICN name resolution and routing schemes, namely flooding and the Distributed Hash Table (DHT). We consider flooding-based routing in Content-Centric Networks due to its wide acceptance. For the DHT scheme, we design a multi-level DHT that takes into account the underlying network topology and uses name aggregation to further reduce control overhead and improve network efficiency. We then compare the characteristics and performance of these two classes of name resolution and routing through extensive simulations. The evaluation results show that the performance of the two approaches depends on several factors, including network size, content location dynamics, and content popularity. Our study reveals insights into the design tradeoffs and offers guidelines for design strategies.
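The mechanics of DHT-based name resolution can be sketched with a single-level consistent-hashing ring, where looking up by a name prefix rather than the full name gives a crude form of name aggregation; the paper's multi-level, topology-aware DHT is considerably richer:

```python
import bisect
import hashlib

def h(key: str) -> int:
    # Stable 32-bit hash of a content name or node id.
    return int.from_bytes(hashlib.sha1(key.encode()).digest()[:4], "big")

class ToyDHT:
    """Single-level consistent-hashing ring: maps a content name (or an
    aggregated name prefix) to the responsible node."""
    def __init__(self, nodes):
        self.ring = sorted((h(n), n) for n in nodes)

    def resolve(self, name: str) -> str:
        # Aggregate: register/look up by publisher prefix, not full name.
        prefix = "/".join(name.split("/")[:2])
        keys = [k for k, _ in self.ring]
        i = bisect.bisect(keys, h(prefix)) % len(self.ring)
        return self.ring[i][1]

dht = ToyDHT([f"node{i}" for i in range(8)])
print(dht.resolve("/cnn/videos/clip1.mp4"))   # same node as ...
print(dht.resolve("/cnn/videos/clip2.mp4"))   # ... every "/cnn" name
```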
Abstract: Considerable amounts of coal particles accumulate in the tailing dams of washing plants, which can create serious environmental problems. Recovering these particles from tailings has several economic and environmental advantages, the most important being the conservation of natural resources and the reduction of discharges to the dams. This study examined the possibility of recovering coal particles from a tailing dam with 56.29% ash content by using a series of processing techniques. For this purpose, gravity separation (jig, shaking table, and spiral) and flotation tests were conducted to upgrade the products. Based on the optimum values of these processing methods, a flowsheet was designed to increase the recovery of a wide range of coal particle sizes. Results indicated that the designed circuit can recover over 90% of the valuable coal particles and reduce the ash content of the product to less than 14%. These results can potentially be used for designing an industrial recycling plant and serve as a suitable example for other areas seeking to reduce the environmental issues of coal tailing dams.
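Circuit performance in coal preparation is usually reported through yield, product ash, and combustible recovery; a small sketch using the standard formula, where the 56.29% feed ash and <14% product ash echo the abstract and the mass yield is an assumed placeholder:

```python
def combustible_recovery(yield_pct: float, feed_ash: float,
                         product_ash: float) -> float:
    # Combustible recovery (%) of a separation stage:
    # yield * (100 - product ash) / (100 - feed ash).
    return yield_pct * (100.0 - product_ash) / (100.0 - feed_ash)

# 46% mass yield is hypothetical; with 56.29% feed ash and 14% product
# ash it lands near the >90% combustible recovery the study reports.
print(f"{combustible_recovery(46.0, 56.29, 14.0):.1f}% combustibles recovered")
```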
Abstract: Mobile Cloud Computing (MCC) brings rich computational resources to mobile users, network operators, and cloud computing providers. It can be realized in many ways, and the ultimate goal of MCC is to enable the execution of rich mobile applications with a rich user experience. Mobility is one of the main characteristics of the MCC environment, where users can continue their work regardless of movement. This literature review presents a state-of-the-art survey of MCC. We also provide the communication architecture of MCC and a taxonomy of the mobile cloud that specifically concentrates on offloading, mobile distributed computing, and privacy. Through an extensive literature review, we found that MCC is a technologically beneficial and expedient paradigm for virtual environments in terms of virtual servers in a distributed environment, multi-tenant architecture, and data storage in a cloud. We further identify the drawbacks in offloading, mobile distributed computing, and privacy in MCC, and discuss how this technology can be used effectively.
Funding: Supported by the National Natural Science Foundation of China under Grant Nos 60708006 and 60978026, the Specialized Research Fund for the Doctoral Program of Higher Education of China under Grant No 20070335118, and the Zhejiang Provincial Natural Science Foundation of China under Grant No Y1090379.
Abstract: We experimentally demonstrate C-band wavelength conversion using four-wave mixing in a 17-mm-long silicon-on-insulator waveguide pumped by a dispersed mode-locked femtosecond laser pulse. The idler can be observed with an incident average pump power lower than 4 dBm, and about 35 nm of conversion bandwidth, from 1530 nm to 1565 nm, is measured using a 1550-nm pump wavelength. The pulse-pumped efficiency is demonstrated to be higher than the cw-pumped efficiency by more than 22 dB. The variations of the conversion efficiency with respect to the pump and signal powers are also investigated.
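Two quick calculations behind the reported numbers, under the usual undepleted-pump assumption that the idler grows with the square of the instantaneous pump power; the duty cycle below is a hypothetical value chosen to reproduce a ~22 dB pulsed-over-cw advantage:

```python
import numpy as np

def db(x: float) -> float:
    return 10.0 * np.log10(x)

def fwm_efficiency_db(p_idler_out_mw: float, p_signal_in_mw: float) -> float:
    # Conversion efficiency eta = P_idler(out) / P_signal(in) in dB;
    # definitions vary between papers (assumption).
    return db(p_idler_out_mw / p_signal_in_mw)

# With the same average pump power, a pulsed pump of duty cycle D raises
# the average idler power by ~10*log10(1/D) dB over cw, because the idler
# scales quadratically with the instantaneous pump power.
duty_cycle = 1.0 / 160.0   # hypothetical; gives ~22 dB
print(f"pulsed-over-cw advantage ~ {db(1.0 / duty_cycle):.1f} dB")
print(f"example eta = {fwm_efficiency_db(0.002, 1.0):.1f} dB")  # placeholder powers
```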
Funding: This work made use of the Engineering Research Center Shared Facilities supported by the Engineering Research Center Program of the National Science Foundation and DOE under the ARPA-E and Power America programs and the CURENT Industry Partnership Program.
Abstract: Research on high-voltage (HV) silicon carbide (SiC) power semiconductor devices has attracted much attention in recent years. This paper overviews the development and status of HV SiC devices and presents their benefits. The technologies and challenges for HV SiC device application in converter design are discussed, and the state-of-the-art applications of HV SiC devices are also reviewed.
Abstract: Cadmium zinc telluride (CdZnTe) semiconductor has applications in the detection of X-rays and gamma-rays at room temperature without having to use a cooling system. Chemical etching and chemo-mechanical polishing are processes used to smoothen CdZnTe wafers during detector device fabrication; these processes reduce the surface damage left after polishing the wafers. In this paper, we compare the effects of chemical etching and chemo-mechanical polishing on CdZnTe nuclear detectors, using a solution of hydrogen bromide in a hydrogen peroxide and ethylene glycol mixture. X-ray photoelectron spectroscopy (XPS) was used to monitor TeO2 on the wafer surfaces. Current-voltage and detector-response measurements were made to study the electrical properties and energy resolution. XPS results showed that the chemical etching process resulted in the formation of more TeO2 on the detector surfaces compared to chemo-mechanical polishing. The electrical resistivity of the detector is of the order of 10^10 Ω·cm. The chemo-mechanical polishing process increased the leakage current more than chemical etching did. For freshly treated surfaces, the etching process is more detrimental to the energy resolution than chemo-mechanical polishing.
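Bulk resistivity of a planar detector is typically extracted from the linear region of the I-V sweep as rho = R*A/t; a sketch with placeholder geometry and a synthetic sweep of the right order of magnitude:

```python
import numpy as np

def resistivity_ohm_cm(v: np.ndarray, i: np.ndarray,
                       area_cm2: float, thickness_cm: float) -> float:
    # rho = R * A / t, with R the least-squares slope dV/dI of the sweep.
    r = np.polyfit(i, v, 1)[0]          # ohms
    return r * area_cm2 / thickness_cm

# Placeholder planar-detector sweep consistent with ~1e10 ohm-cm bulk.
v = np.linspace(-100.0, 100.0, 21)                                 # volts
i = v / 2.0e9 + np.random.default_rng(1).normal(0, 2e-10, v.size)  # amps
rho = resistivity_ohm_cm(v, i, area_cm2=1.0, thickness_cm=0.5)
print(f"rho ~ {rho:.2e} ohm-cm")
```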
Abstract: Design patterns are object-oriented software design practices for solving common design problems, and they affect software quality. In this study, we investigate the relationship between design patterns and software defects in a number of open-source software projects. Design pattern instances are extracted from the source code repositories of these projects, and software defect metrics are extracted from their bug tracking systems. Using correlation and regression analysis on the extracted data, we examine the relationship between design patterns and software defects. Our findings indicate that there is little correlation between the total number of design pattern instances and the number of defects. However, our regression analysis reveals that individual design pattern instances as a group have a strong influence on the number of defects. Furthermore, we find that the number of design pattern instances is positively correlated with defect priority. Individual design pattern instances may have positive or negative impacts on defect priority.
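A sketch of the two statistical steps named above (Pearson correlation for the total count, multiple regression for individual counts), on synthetic per-project data constructed so that the total carries little signal while the joint regression explains most of the variance, mirroring the reported finding:

```python
import numpy as np
from scipy.stats import pearsonr

# Synthetic stand-in for the extracted data: per-project instance counts
# of three pattern types, with defects rising with one pattern type and
# falling with another, so the *total* count tends to wash out.
rng = np.random.default_rng(7)
patterns = rng.integers(0, 30, size=(12, 3)).astype(float)
defects = 40 + 4 * patterns[:, 0] - 4 * patterns[:, 1] + rng.normal(0, 5, 12)

r, p = pearsonr(patterns.sum(axis=1), defects)
print(f"total instances vs defects: r = {r:+.2f} (p = {p:.2f})")

# Individual pattern counts regressed jointly (ordinary least squares).
X = np.column_stack([patterns, np.ones(len(defects))])
coef, *_ = np.linalg.lstsq(X, defects, rcond=None)
resid = defects - X @ coef
print(f"joint regression R^2 = {1 - resid.var() / defects.var():.2f}")
```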
Funding: Supported by the National Natural Science Foundation of China under Grant No. 61301101.
Abstract: The transmission delay of a real-time video packet depends mainly on the sensing time delay (a short-term factor) and the entire frame transmission delay (a long-term factor). Therefore, the optimization problem in the spectrum handoff process should be formulated as the combination of microscopic and macroscopic optimization. In this paper, we focus on combining these two optimization models and propose a novel Evolution Spectrum Handoff (ESH) strategy to minimize the expected transmission delay of real-time video packets. In the micro-optimized model, considering the tradeoff between the Primary User's (PU's) allowable collision percentage on each channel and the transmission delay of video packets, we propose a mixed-integer non-linear programming scheme. The scheme is able to achieve the minimum sensing time, which is termed an optimal stopping time. In the macro-optimized model, using the optimal stopping time as the reward function within the partially observable Markov decision process framework, the ESH strategy is designed to search for an optimal target channel set and minimize the expected packet delay in long-term real-time video transmission. Meanwhile, the minimum expected transmission delay is obtained under practical cognitive radio network conditions, i.e., secondary user mobility, the PU's random access, imperfect sensing information, etc. Theoretical analysis and simulation results show that the ESH strategy can effectively reduce the transmission delay of video packets in the spectrum handoff process.
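The paper's micro model is a mixed-integer non-linear program; as a much simpler illustration of an optimal stopping time, the sketch below runs backward induction on i.i.d. channel-quality observations to get per-slot stopping thresholds (a toy stand-in, not the paper's formulation):

```python
import numpy as np

def stopping_thresholds(n_slots: int, samples: np.ndarray) -> np.ndarray:
    # Backward induction for "stop when the current observation beats the
    # value of continuing" on i.i.d. observations. Returns the threshold
    # in force before each remaining sensing slot.
    v = float(samples.mean())                       # value of the last slot
    thresholds = [v]
    for _ in range(n_slots - 1):
        v = float(np.mean(np.maximum(samples, v)))  # E[max(obs, continue)]
        thresholds.append(v)
    return np.array(thresholds[::-1])

rng = np.random.default_rng(0)
quality = rng.uniform(0.0, 1.0, 100_000)   # simulated channel qualities
print(stopping_thresholds(5, quality).round(3))
# -> about [0.775 0.742 0.695 0.625 0.5]; thresholds fall as slots run out
```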
Funding: Partially supported by the National Natural Science Foundation of China (41930644, 61972439), the Collaborative Innovation Project of Anhui Province (GXXT-2022-093), and the Key Program in the Youth Elite Support Plan in Universities of Anhui Province (gxyqZD2019010).
Abstract: Tourism route planning is widely applied in the smart tourism field. The Pareto-optimal front obtained by traditional multi-objective evolutionary algorithms exhibits long tails, sharp peaks, and disconnected regions, which leads to uneven distribution and weak diversity of the optimized tourism route solutions. Motivated by these limitations, we propose a multi-objective evolutionary algorithm for tourism route recommendation (MOTRR) with two stages and Pareto layering based on decomposition. The method decomposes the multi-objective problem into several subproblems and improves the distribution of solutions through a two-stage method. A crowding-degree mechanism between extreme and intermediate populations is used in the two-stage method, and the neighborhood is determined according to the weight of the subproblem for crossover and mutation. Finally, Pareto layering is used to improve the updating efficiency and population diversity of the solution. The two-stage method is combined with the Pareto layering structure, which not only maintains the distribution and diversity of the algorithm but also avoids duplicate solutions. Compared with several classical benchmark algorithms, the experimental results demonstrate competitive advantages on five test functions under the hypervolume (HV) and inverted generational distance (IGD) metrics. Using real scenic-spot datasets from two famous tourism social networking sites with vast numbers of users and large-scale online comments in Beijing, our proposed algorithm shows better distribution. This demonstrates that the tourism routes recommended by our algorithm have better distribution and diversity, so the recommended routes can better meet the personalized needs of tourists.
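Decomposition-based algorithms of this kind scalarize the multi-objective problem with weight vectors; a common choice is the Tchebycheff aggregation, sketched below with hypothetical route objectives:

```python
import numpy as np

def tchebycheff(f: np.ndarray, weights: np.ndarray, z_star: np.ndarray):
    # Tchebycheff scalarization used in decomposition-based MOEAs:
    # g(x | lambda, z*) = max_i lambda_i * |f_i(x) - z*_i|.
    # Each weight vector defines one subproblem along the Pareto front.
    return np.max(weights * np.abs(f - z_star), axis=-1)

# Three hypothetical route objectives (cost, travel time, -attractiveness),
# all minimized, with ideal point z*.
z_star = np.array([0.0, 0.0, 0.0])
routes = np.array([[0.3, 0.8, 0.2],
                   [0.5, 0.4, 0.5],
                   [0.9, 0.1, 0.7]])
lam = np.array([0.2, 0.5, 0.3])           # one subproblem's weight vector
print(tchebycheff(routes, lam, z_star))   # lower = better for this weight
# Neighboring subproblems (nearby weight vectors) exchange mating partners.
```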