In light of the coronavirus disease 2019 (COVID-19) outbreak caused by the novel coronavirus, companies and institutions have instructed their employees to work from home as a precautionary measure to reduce the risk of contagion. Employees, however, have been exposed to different security risks because of working from home. Moreover, the rapid global spread of COVID-19 has increased the volume of data generated from various sources. Working from home depends mainly on cloud computing (CC) applications that help employees to accomplish their tasks efficiently. The cloud computing environment (CCE) is an unsung hero of the COVID-19 pandemic crisis: it provides rapidly deployable services for maintaining data. Despite the increase in the use of CC applications, ongoing research challenges remain in the CCE domain concerning data, guaranteeing security, and the availability of CC applications. This paper is, to the best of our knowledge, the first to thoroughly examine the impact of the COVID-19 pandemic on the CCE. It also highlights the security risks of working from home during the pandemic.
Offloading applications to the cloud can augment mobile devices' computation capabilities for emerging resource-hungry mobile applications; however, offloading an application remotely to the cloud can also consume considerable time and energy on the mobile device. In this paper, we develop an adaptive application offloading decision-transmission scheduling scheme that solves this problem efficiently. Specifically, we first propose an adaptive application offloading model that allows multiple target clouds to coexist. Second, based on Lyapunov optimization theory, we propose a low-complexity adaptive offloading decision-transmission scheduling scheme, together with its performance analysis. Finally, simulation results show that, compared with executing all applications locally, the mobile device saves 68.557% of average execution time and 67.095% of average energy consumption.
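The Lyapunov-based offloading decision described above can be illustrated with a minimal sketch. This is not the paper's algorithm; it is a generic drift-plus-penalty rule under made-up parameters: each time slot, the device picks the execution target (local, or one of several coexisting target clouds) that minimizes `V * energy + Q * delay`, where `Q` is a virtual queue tracking accumulated delay over a budget and `V` trades energy against delay. All delays, energies, and the budget are hypothetical.

```python
# Hypothetical sketch of a Lyapunov drift-plus-penalty offloading decision:
# each slot, pick the execution target minimizing V * energy + Q * delay,
# then update the virtual delay queue Q.

def choose_target(Q, options, V):
    """options: dict target -> (delay, energy); returns the target that
    minimizes the drift-plus-penalty expression V*energy + Q*delay."""
    return min(options, key=lambda t: V * options[t][1] + Q * options[t][0])

def run_slots(tasks, options, V=10.0, delay_budget=1.0):
    Q, decisions = 0.0, []
    for _ in range(tasks):
        target = choose_target(Q, options, V)
        delay, _energy = options[target]
        # The virtual queue grows when the chosen delay exceeds the budget,
        # pushing later decisions toward faster (cloud) targets.
        Q = max(Q + delay - delay_budget, 0.0)
        decisions.append(target)
    return decisions

# Example: local execution is slow but cheap in radio energy; two target
# clouds (coexisting, as in the proposed model) trade delay for energy.
opts = {"local": (3.0, 1.0), "cloud_A": (1.0, 2.0), "cloud_B": (0.5, 4.0)}
print(run_slots(5, opts))  # ['local', 'local', 'local', 'cloud_A', 'cloud_A']
```

As the delay backlog `Q` builds up, the rule switches from energy-frugal local execution to the faster hypothetical cloud, which is the qualitative behavior a drift-plus-penalty controller is designed to exhibit.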
Cloud computing technology is changing the development and usage patterns of IT infrastructure and applications. Virtualized and distributed systems, together with unified management and scheduling, have greatly improved computing and storage. Management has become easier, and OAM costs have been significantly reduced. Cloud desktop technology is developing rapidly. With this technology, users can flexibly and dynamically use virtual machine resources, companies' efficiency in using and allocating resources is greatly improved, and information security is ensured. In most existing virtual cloud desktop solutions, however, computing and storage are bound together and data is stored as image files. This limits the flexibility and expandability of such systems and is insufficient for meeting customers' requirements in different scenarios.
In 2010, cloud computing gained momentum. Cloud computing is a model for real-time, on-demand, pay-for-use network access to a shared pool of configurable computing and storage resources. It has matured from a promising business concept to a working reality in both the private and public IT sectors. The U.S. government, for example, has requested all its agencies to evaluate cloud computing alternatives as part of their budget submissions for new IT investment.
Social computing and online groups have ushered in a new age of the network, in which information, networking, and communication technologies enable organized human effort in fundamentally new ways. Communities working across social network domains face different hurdles, including new research challenges in social computing. Researchers should broaden the field's scope and draw on ideas and methods from other disciplines to address these challenges. The field combines diverse academic associations, social links, and technical characteristics, and thus offers an excellent opportunity for researchers to identify issues in social computing and propose innovative solutions for conveying information among online social groups over network computing. In this paper we investigate issues in social media such as users' privacy and security, network reliability, availability of desired data, users' awareness of social networks, and problems faced by academic domains. A huge number of users operate social networks to retrieve and disseminate real-time and offline information, transmitted over local or global networks. The main concerns of users on social media are secure and fast communication channels; both Facebook and YouTube claim efficient security mechanisms and fast communication channels for multimedia data. For this research, a survey was conducted in the most populated cities, where large numbers of Facebook and YouTube users were found.
During the survey, regular users pointed to potential issues that occur continually on these sites' interfaces, for example unwanted advertisements, fake IDs, uncensored videos, and unknown friend requests, which lead to poor channel communication speed, poor upload and download speeds, channel interference, and problems with data security, user privacy, and the integrity and reliability of user communication on these social sites. The major issues faced by active users of Facebook and YouTube are highlighted in this research.
Computational seismology is a relatively new interdisciplinary field spanning computational techniques in theoretical and observational seismology. It studies numerical methods and their implementation in various theoretical and applied problems in seismology.
Three recent AI breakthroughs in the arts and sciences serve as motivation: an award-winning digital image, protein folding, and fast matrix multiplication. Many recent developments in artificial neural networks, particularly deep learning (DL), applied and relevant to computational mechanics (solids, fluids, finite-element technology) are reviewed in detail. Both hybrid and pure machine learning (ML) methods are discussed. Hybrid methods combine traditional PDE discretizations with ML methods either (1) to help model complex nonlinear constitutive relations, (2) to nonlinearly reduce the model order for efficient simulation (turbulence), or (3) to accelerate simulation by predicting certain components of traditional integration methods. Here, methods (1) and (2) rely on the Long Short-Term Memory (LSTM) architecture, while method (3) relies on convolutional neural networks. Pure ML methods for solving (nonlinear) PDEs are represented by Physics-Informed Neural Network (PINN) methods, which can be combined with an attention mechanism to address discontinuous solutions. Both LSTM and attention architectures, together with modern and generalized classic optimizers that incorporate stochasticity for DL networks, are extensively reviewed. Kernel machines, including Gaussian processes, are covered in sufficient depth for more advanced work such as shallow networks with infinite width. The review does not address only experts: readers are assumed to be familiar with computational mechanics, but not with DL, whose concepts and applications are built up from the basics, with the aim of bringing first-time learners quickly to the forefront of research. The history and limitations of AI are recounted and discussed, with particular attention to pointing out misstatements and misconceptions about the classics, even in well-known references. Positioning and pointing control of a large-deformable beam is given as an example.
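The attention mechanism reviewed above can be summarized by scaled dot-product attention, softmax(QKᵀ/√d_k)V, which is standard and not specific to this review. A minimal NumPy sketch with illustrative shapes and random data:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # (n_q, n_k) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted sum of values

rng = np.random.default_rng(0)
Q = rng.standard_normal((2, 4))   # 2 queries, d_k = 4
K = rng.standard_normal((3, 4))   # 3 keys
V = rng.standard_normal((3, 4))   # 3 values
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (2, 4): one attended vector per query
```

When all scores are equal, the softmax weights are uniform and each output row reduces to the mean of the value vectors, a quick sanity check on the implementation.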
This paper presents a trainable Generative Adversarial Network (GAN)-based end-to-end system for image dehazing, named DehazeGAN. DehazeGAN can be used for edge computing-based applications such as roadside monitoring. It adopts two networks: a generator (G) and a discriminator (D). The G adopts the U-Net architecture, whose layers are specifically designed to incorporate the atmospheric scattering model of image dehazing. Using a reformulated atmospheric scattering model, the weights of the generator network are initialized with the coarse transmission map, and the biases are adaptively adjusted using the previous round's trained weights. Since details may be blurry after the fog is removed, a contrast loss is added to actively enhance visibility. Aside from the typical GAN adversarial loss, the pixel-wise Mean Square Error (MSE) loss, the contrast loss, and the dark channel loss are introduced into the generator loss function. Extensive experiments on benchmark images, compared against several state-of-the-art methods, demonstrate that the proposed DehazeGAN performs better and is more effective.
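The atmospheric scattering model that DehazeGAN builds into its generator is I(x) = J(x)·t(x) + A·(1 − t(x)), where I is the hazy image, J the scene radiance, t the transmission map, and A the atmospheric light. A minimal sketch of the model and its inversion on synthetic data (values are made up; a real system estimates t, e.g. a coarse transmission map, and A from the image):

```python
import numpy as np

# Atmospheric scattering model: I = J * t + A * (1 - t).
# Given estimates of t and A, the radiance J is recovered by inversion.

def add_haze(J, t, A):
    return J * t + A * (1.0 - t)

def dehaze(I, t, A, t_min=0.1):
    # Clamp t to avoid amplifying noise where transmission is near zero.
    return (I - A) / np.maximum(t, t_min) + A

J = np.random.default_rng(1).uniform(0.0, 1.0, size=(4, 4, 3))  # radiance
t = np.full((4, 4, 1), 0.6)   # uniform synthetic transmission map
A = 0.9                       # global atmospheric light
I = add_haze(J, t, A)
J_hat = dehaze(I, t, A)
print(np.allclose(J_hat, J))  # True: exact inversion when t and A are known
```

In practice t and A are only estimates, which is why DehazeGAN learns the mapping end to end rather than inverting the model directly.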
Tsinghua Science and Technology began publication in 1996. It is an international academic journal sponsored by Tsinghua University and published bimonthly. The journal aims to present state-of-the-art scientific achievements in computer science and other IT fields.
Vehicular ad hoc networks (VANETs) are a kind of mobile ad hoc network (MANET) consisting of mobile vehicles with on-board units (OBUs) and roadside units (RSUs). With the rapid development of computation and communication technologies, incremental changes in VANETs are evolving into a revolution in progress. Cloud computing has been deployed as a solution to satisfy vehicles in VANETs, which are expected to require resources such as computing, storage, and networking. Recently, driven by special requirements of mobility, location awareness, and low latency, there has been growing research interest in the role of fog computing in VANETs. Merging fog computing with VANETs opens up possibilities for applications and services at the edge of the cloud. Fog computing deploys highly virtualized computing and communication facilities in the proximity of mobile vehicles, which can then obtain low-latency, short-distance local connections. This paper presents the current state of research on, and future perspectives of, fog computing in VANETs. We discuss the characteristics of fog computing and the services that a fog computing platform can provide for VANETs. Opportunities, challenges, and open issues are identified, and the related techniques that need to be considered are discussed in the context of fog computing in VANETs. Finally, we discuss potential directions for future work. This article should give readers a more thorough understanding of fog computing for VANETs and the trends in this domain.
The state of the art and future prospects of applying computational fluid dynamics to engineering are described. 2D and 3D simulations are presented for flow about a pair of bridges, flow about a cylinder in waves, flow about an airplane and a ship, flow past a sphere, a two-layer flow, and flow in a wall boundary layer. The choice of grid system and of turbulence model is discussed.
The dataflow architecture, characterized by the absence of redundant unified control logic, has been shown to have an advantage over the control-flow architecture: it improves computational performance and power efficiency, especially for applications used in high-performance computing (HPC). The high computational efficiency of dataflow systems is achieved by allowing program kernels to be activated simultaneously, so a proper acknowledgment mechanism is required to distinguish data that logically belongs to different contexts. Possible solutions include a tagged-token matching mechanism, in which data is sent before acknowledgments are received but retried after rejection, and a handshake mechanism, in which data is only sent after acknowledgments are received. However, these mechanisms suffer from inefficient data transfer and increased area cost, and good performance of the dataflow architecture depends on the efficiency of data transfer. To optimize data-transfer efficiency in existing dataflow architectures with a minimal increase in area and power cost, we propose a Look-Ahead Acknowledgment (LAA) mechanism. LAA accelerates the execution flow by speculatively acknowledging ahead without penalties. Our simulation analysis, using a handshake mechanism as the baseline, shows that LAA increases the average utilization of computational units by 23.9%, reduces average execution time by 17.4%, and increases the average power efficiency of dataflow processors by 22.4%. Crucially, our approach increases the area and power consumption of the on-chip logic by less than 0.9%. The evaluation results suggest that Look-Ahead Acknowledgment is an effective improvement for data transfer in existing dataflow architectures.
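The intuition behind look-ahead acknowledgment can be shown with a toy timing model (not the paper's simulator, and with invented latencies): under a handshake, the producer waits one acknowledgment round-trip before every data send; if acknowledgments can instead be issued speculatively ahead of time, the round-trip overlaps with the previous transfer and only the first transfer pays it.

```python
# Toy timing comparison of a handshake vs. a look-ahead acknowledgment.
# The `ack` and `data` cycle counts are hypothetical.

def handshake_cycles(n_transfers, ack=4, data=2):
    # Every transfer pays the ack round-trip, then the data transfer.
    return n_transfers * (ack + data)

def laa_cycles(n_transfers, ack=4, data=2):
    # Only the first transfer waits for an acknowledgment; later acks are
    # issued speculatively and hidden behind earlier data transfers.
    return ack + n_transfers * data

for n in (1, 10, 100):
    print(n, handshake_cycles(n), laa_cycles(n))
```

The toy model ignores speculation failures and buffering; it only illustrates why hiding the acknowledgment latency raises the utilization of the compute units as the number of back-to-back transfers grows.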
Off-chip replacement (capacity and conflict) and coherent read misses in a distributed shared memory system cause execution to stall for hundreds of cycles. These misses recur, forming sequences of two or more misses called streams. Prior streaming techniques ignored the reordering of misses and not-recently-accessed streams while streaming data. In this paper, we present a stream prefetcher design that deals with both problems. Our design uses stream waiting rooms to store not-recently-accessed streams, which helps remove more off-chip misses. Using trace-based simulation, our stream prefetcher removes 8% to 66% (on average 40%) of replacement misses and 17% to 63% (on average 39%) of coherent read misses. Using cycle-accurate full-system simulation, our design yields speedups from 1.00 to 1.17 on Princeton Application Repository for Shared-Memory Computers (PARSEC) workloads running on a distributed shared memory system, with the exception of the dedup and swaptions workloads.
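The waiting-room idea can be sketched as follows. This is not the paper's design, only a toy illustration under invented table sizes: runs of consecutive miss addresses are tracked as streams; when a stream is evicted from the small active table it is parked in a "waiting room" instead of being discarded, so a not-recently-accessed stream can resume prefetching when its addresses miss again.

```python
from collections import OrderedDict

# Toy stream prefetcher with a waiting room for not-recently-accessed
# streams. Table sizes and prefetch depth are hypothetical.

class StreamPrefetcher:
    def __init__(self, active_size=2, room_size=8, depth=2):
        self.active = OrderedDict()   # next expected addr of hot streams
        self.waiting = OrderedDict()  # parked (not-recently-accessed) streams
        self.active_size, self.room_size, self.depth = active_size, room_size, depth

    def on_miss(self, addr):
        for table in (self.active, self.waiting):
            if addr in table:
                del table[addr]
                self._track(addr + 1)
                # Stream confirmed: prefetch the next `depth` blocks.
                return [addr + i for i in range(1, self.depth + 1)]
        self._track(addr + 1)  # start tracking a new potential stream
        return []

    def _track(self, expected):
        self.active[expected] = expected
        if len(self.active) > self.active_size:
            evicted, _ = self.active.popitem(last=False)
            self.waiting[evicted] = evicted       # park rather than discard
            if len(self.waiting) > self.room_size:
                self.waiting.popitem(last=False)

pf = StreamPrefetcher()
print(pf.on_miss(100))  # [] - first miss, stream not yet established
print(pf.on_miss(101))  # [102, 103] - consecutive miss confirms the stream
```

The key behavior is that a stream displaced from the active table by newer streams still triggers prefetches when it returns, which is how the waiting room removes additional off-chip misses.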
Funding (adaptive offloading paper): supported by the National Natural Science Foundation of China (Grant Nos. 61261017, 61571143, and 61561014); the Guangxi Natural Science Foundation (2013GXNSFAA019334 and 2014GXNSFAA118387); the Key Laboratory of Cognitive Radio and Information Processing, Ministry of Education (No. CRKL150112); the Guangxi Key Lab of Wireless Wideband Communication & Signal Processing (GXKL0614202, GXKL0614101, and GXKL061501); the Sci. and Tech. on Info. Transmission and Dissemination in Communication Networks Lab (No. ITD-U14008/KX142600015); and the Graduate Student Research Innovation Project of Guilin University of Electronic Technology (YJCXS201523).
Funding (DehazeGAN paper): this research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (grant number NRF-2018R1D1A1B07043331).
Funding (fog computing in VANETs paper): supported by the National Natural Science Foundation of China (61271184, 61571065).
Funding (dataflow LAA paper): this work was supported by the 2020 State Grid Corporation of China project "Integration Technology Research and Prototype Development for High End Controller Chip" under Grant No. 5700-202041264A-0-0-00.
Funding (stream prefetcher paper): supported by the Higher Education Commission (Pakistan); the National High Technology Research and Development Program of China (863 Program) (No. 2008AA01A201); the Natural Science Foundation of China (Nos. 60833004 and 60970002); and the TNList Cross-discipline Foundation.