The recent development of channel technology has promised to reduce the transaction verification time in blockchain operations. When transactions are transmitted through the channels created by nodes, the nodes need to cooperate with each other. If one party refuses to do so, the channel is unstable; a stable channel is thus required. Because nodes may behave uncooperatively, they can negatively affect the stability of such channels. To address this issue, this work proposes a dynamic evolutionary game model based on node behavior. The model considers the cost of various defense strategies and the attack success ratio under each of them. Nodes can dynamically adjust their strategies according to the behavior of attackers to achieve effective defense. The equilibrium stability of the proposed model can be achieved, and the model can be applied to general channel networks. It is compared with two state-of-the-art blockchain channels: the Lightning Network and Spirit channels. The experimental results show that the proposed model improves a channel's stability and keeps it in a good cooperative stable state. Its use thus enables a blockchain to enjoy a higher transaction success ratio and lower transaction transmission delay than its two peers.
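Evolutionary game models of this kind are typically analyzed with replicator dynamics, where the fraction of cooperating nodes evolves according to how cooperation's fitness compares with the population average. The following is a minimal sketch of that mechanism; the payoff values are illustrative assumptions, not taken from the paper.

```python
# Replicator dynamics for a two-strategy game between cooperating and
# defecting channel nodes. Payoffs below are invented for illustration.

def replicator_step(x, payoff, dt=0.01):
    """One Euler step of replicator dynamics.
    x: fraction of cooperating nodes; payoff[i][j] = payoff to
    strategy i against strategy j (0 = cooperate, 1 = defect)."""
    f_c = payoff[0][0] * x + payoff[0][1] * (1 - x)  # cooperator fitness
    f_d = payoff[1][0] * x + payoff[1][1] * (1 - x)  # defector fitness
    avg = x * f_c + (1 - x) * f_d                    # population average
    return x + dt * x * (f_c - avg)

# Assumed payoffs under which cooperation strictly dominates defection.
payoff = [[3.0, 0.5],
          [1.0, 0.2]]

x = 0.2  # start with 20% cooperators
for _ in range(5000):
    x = replicator_step(x, payoff)

print(round(x, 3))  # the population converges toward full cooperation
```

Under these assumed payoffs the cooperative state is the evolutionarily stable equilibrium, mirroring the "good cooperative stable state" the abstract describes.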
The development of communication technologies that support traffic-intensive applications presents new challenges in designing a real-time traffic analysis architecture and an accurate method suitable for a wide variety of traffic types. Current traffic analysis methods are executed on the cloud, which requires uploading the traffic data. Fog computing is a more promising way to save bandwidth resources by offloading these tasks to fog nodes. However, traffic analysis models based on traditional machine learning need to retrain on all traffic data when updating the trained model, which is unsuitable for fog computing because of its limited computing power. In this study, we design a novel fog-computing-based traffic analysis system using broad learning. For one thing, fog computing provides a distributed architecture that saves bandwidth resources. For another, we use broad learning to incrementally train on the traffic data, which is better suited to fog computing because it supports incremental model updates without retraining on all data. We implement our system on a Raspberry Pi, and experimental results show a 98% probability of accurately identifying the traffic data. Moreover, our method trains faster than a Convolutional Neural Network (CNN).
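The incremental-update property that makes broad learning attractive here comes from the fact that its output weights solve a ridge-regression problem, so a node can accumulate the sufficient statistics A^T A and A^T y batch by batch and never revisit old data. A minimal sketch of that idea, with a two-feature toy dataset invented for the example:

```python
# Incremental ridge regression via running sufficient statistics:
# new traffic batches update G = A^T A + lam*I and h = A^T y without
# retraining on earlier data. Feature values are illustrative.

def solve2(a, b, c, d, e, f):
    """Solve the 2x2 system [[a, b], [c, d]] @ w = [e, f]."""
    det = a * d - b * c
    return ((e * d - b * f) / det, (a * f - e * c) / det)

class IncrementalRidge:
    def __init__(self, lam=1e-3):
        self.g = [[lam, 0.0], [0.0, lam]]  # G = A^T A + lam*I
        self.h = [0.0, 0.0]                # h = A^T y

    def partial_fit(self, xs, ys):
        for (x1, x2), y in zip(xs, ys):
            self.g[0][0] += x1 * x1; self.g[0][1] += x1 * x2
            self.g[1][0] += x2 * x1; self.g[1][1] += x2 * x2
            self.h[0] += x1 * y;     self.h[1] += x2 * y

    def weights(self):
        return solve2(self.g[0][0], self.g[0][1],
                      self.g[1][0], self.g[1][1],
                      self.h[0], self.h[1])

model = IncrementalRidge()
# First batch of traffic features, then an incremental second batch.
model.partial_fit([(1.0, 0.0), (0.0, 1.0)], [2.0, 3.0])
model.partial_fit([(1.0, 1.0), (2.0, 1.0)], [5.0, 7.0])
w1, w2 = model.weights()
print(round(w1, 2), round(w2, 2))  # close to the true weights (2, 3)
```

Each `partial_fit` call costs only the size of the new batch, which is what makes the scheme viable on low-power fog hardware such as a Raspberry Pi.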
Social computing and online groups have ushered in a new age of the network, in which information, networking, and communication technologies enable organized human effort in fundamentally innovative ways. The social network communities working on various social network domains face different hurdles, including new research studies and challenges in social computing. Researchers should try to expand the scope and establish new ideas and methods, even from other disciplines, to address these challenges. This idea has diverse academic associations, social links, and technical characteristics. It thus offers an ideal opportunity for researchers to identify the issues in social computing and provide innovative solutions for conveying information between social online groups on network computing. In this research paper we investigate different issues in social media, such as users' privacy and security, network reliability, the desired data availability on these platforms, users' awareness of social networks, and problems faced by academic domains. A huge number of users operate social networks to retrieve and disseminate their real-time and offline information to various places. The information may be transmitted on local networks or on global networks. Users' main concerns on social media are secure and fast communication channels. Both Facebook and YouTube claim efficient security mechanisms and fast communication channels for multimedia data. In this research, a survey was conducted in the most populated cities, where a large number of Facebook and YouTube users were found.
During the survey, several regular users indicated certain potential issues continuously occurring on these social websites' interfaces, for example unwanted advertisements, fake IDs, uncensored videos, and unknown friend requests, which cause poor channel communication speed, poor uploading and downloading speed, channel interference, and problems with data security, user privacy, and the integrity and reliability of user communication on these social sites. The major issues faced by active users of Facebook and YouTube are highlighted in this research.
Emotions of users do not converge in a single application but are scattered across diverse applications. Mobile devices are the closest media for handling user data, and these devices have the advantage of integrating private user information and emotions spread over different applications. In this paper, we first analyze the user profile on a mobile device by describing the problem of a user sentiment profile system in terms of data granularity, media diversity, and the server-side solution. Fine-grained data requires additional data and structural analysis on mobile devices. Media diversity requires standard parameters to integrate user data from various applications. A server-side solution presents a potential risk when handling individual privacy information. Therefore, to overcome these problems, we propose a general-purpose user profile system based on sentiment analysis that extracts individual emotional preferences by comparing the difference between public and individual data on particular features. The proposed system is built on a sentiment hierarchy, which is created using unstructured data on mobile devices. It can compensate for the concentration of single media and analyze individual private data without invading privacy on mobile devices.
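The core extraction step, comparing individual data against public data on particular features, can be sketched very simply: a user's emotional preference for a feature is the deviation of their sentiment score from the public baseline. Feature names and scores below are invented for illustration; the actual system works over a learned sentiment hierarchy.

```python
# Per-feature deviation of an individual's sentiment from the public
# baseline (positive = the user likes this feature more than average).
# All feature names and scores are illustrative assumptions.

def preference_profile(individual, public):
    return {feature: round(individual[feature] - public.get(feature, 0.0), 2)
            for feature in individual}

public = {"food": 0.6, "travel": 0.4, "sports": 0.1}
individual = {"food": 0.3, "travel": 0.9, "sports": 0.1}

profile = preference_profile(individual, public)
print(profile)  # {'food': -0.3, 'travel': 0.5, 'sports': 0.0}
```

Because the comparison runs on-device against an aggregate public baseline, no raw individual data needs to leave the phone, which is the privacy advantage the abstract emphasizes.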
Cognitive computing and artificial intelligence (AI) have changed how organizations analyze and use data for decision-making. Cognitive computing solutions can translate enormous amounts of data into valuable insights by harnessing cutting-edge algorithms and machine learning, empowering enterprises to make sound decisions quickly and efficiently. This article explores the idea of cognitive computing and AI in decision-making, emphasizing their role in converting untapped data into valuable knowledge. It details the advantages of utilizing these technologies, such as greater productivity, accuracy, and efficiency. By understanding their capabilities and possibilities, businesses can use cognitive computing and AI to gain a competitive edge in today's data-driven world [1].
How should a human face pattern be represented? While the human visual system perceives it continuously, computers often store and process it discretely as 2D arrays of pixels. The authors attempt to learn a continuous surface representation for the face image with an explicit function. First, an explicit model (EmFace) for human face representation is proposed in the form of a finite sum of mathematical terms, where each term is an analytic function element. Further, to estimate the unknown parameters of EmFace, a novel neural network, EmNet, is designed with an encoder-decoder structure and trained on massive face images, where the encoder is a deep convolutional neural network and the decoder is the explicit mathematical expression of EmFace. The authors demonstrate that EmFace represents face images more accurately than the comparison method, with average mean square errors of 0.000888, 0.000936, and 0.000953 on the LFW, IARPA Janus Benchmark-B, and IJB-C datasets, respectively. Visualisation results show that EmFace has higher representation performance on faces with various expressions, postures, and other factors. Furthermore, EmFace achieves reasonable performance on several face image processing tasks, including face image restoration, denoising, and transformation.
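The idea of representing an image as a finite sum of analytic function elements can be illustrated in one dimension: fit an intensity profile with a few Gaussian elements whose amplitudes are solved by least squares. This is only a toy analogue; in EmFace, all element parameters are predicted by a neural network, and the elements, image dimensionality, and fitting procedure differ.

```python
import math

# Fit a 1D intensity profile with a finite sum of Gaussian "function
# elements" (fixed centers/widths, amplitudes from least squares).
# All centers, widths, and the target signal are invented.

def gauss(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def solve(a, b):
    """Gauss-Jordan elimination for a small dense system a @ w = b."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [v - f * u for v, u in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

xs = [i / 31 for i in range(32)]
target = [0.8 * gauss(x, 0.3, 0.1) + 0.5 * gauss(x, 0.7, 0.15) for x in xs]

centers = [(0.3, 0.1), (0.5, 0.2), (0.7, 0.15)]  # assumed elements
design = [[gauss(x, mu, s) for mu, s in centers] for x in xs]

# Normal equations: (D^T D) w = D^T t.
dtd = [[sum(design[i][a] * design[i][b] for i in range(32)) for b in range(3)]
       for a in range(3)]
dtt = [sum(design[i][a] * target[i] for i in range(32)) for a in range(3)]
w = solve(dtd, dtt)

mse = sum((sum(wj * design[i][j] for j, wj in enumerate(w)) - target[i]) ** 2
          for i in range(32)) / 32
print(round(mse, 6))  # near-zero reconstruction error
```

Because the target here lies exactly in the span of the assumed elements, the reconstruction error is essentially zero, which is the sense in which a small explicit sum can represent a continuous signal compactly.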
In recent years, container-based cloud virtualization solutions have emerged to mitigate the performance gap between non-virtualized and virtualized physical resources. However, there is a noticeable absence of techniques for predicting microservice performance in current research, which impacts cloud service users' ability to determine when to provision or de-provision microservices. Predicting microservice performance poses challenges due to overheads such as variations in processing time caused by resource contention, which potentially leads to user confusion. In this paper, we propose, develop, and validate a probabilistic architecture named Microservice Performance Diagnosis and Prediction (MPDP). MPDP considers various factors such as response time, throughput, CPU usage, and other metrics to dynamically model the interactions between microservice performance indicators for diagnosis and prediction. Using experimental data from our monitoring tool, stakeholders can build various networks for probabilistic analysis of microservice performance diagnosis and prediction, and estimate the best microservice resource combination for a given Quality of Service (QoS) level. We generated a dataset of microservices with 2726 records across four benchmarks, including CPU, memory, response time, and throughput, to demonstrate the efficacy of the proposed MPDP architecture. We validate MPDP and demonstrate its capability to predict microservice performance. We compared various Bayesian networks, namely the Noisy-OR Network (NOR), Naive Bayes Network (NBN), and Complex Bayesian Network (CBN), achieving an overall accuracy rate of 89.98% when using CBN.
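Of the Bayesian networks compared above, the Noisy-OR model is the simplest to sketch: the probability of a performance violation given several independent contributing causes is one minus the product of each active cause's failure-to-trigger probability. The per-cause probabilities below are invented for illustration, not taken from the MPDP dataset.

```python
# Noisy-OR gate: P(violation) = 1 - (1 - leak) * prod over active
# causes of (1 - p_i). Cause names and probabilities are assumptions.

def noisy_or(cause_probs, active, leak=0.01):
    p_no = 1.0 - leak
    for name, is_on in active.items():
        if is_on:
            p_no *= 1.0 - cause_probs[name]
    return 1.0 - p_no

causes = {"high_cpu": 0.6, "mem_pressure": 0.4, "net_congestion": 0.3}

p_one = noisy_or(causes, {"high_cpu": True, "mem_pressure": False,
                          "net_congestion": False})
p_two = noisy_or(causes, {"high_cpu": True, "mem_pressure": True,
                          "net_congestion": False})
print(round(p_one, 3), round(p_two, 3))  # 0.604 0.762
```

Each additional active cause monotonically raises the violation probability, which is why Noisy-OR needs only one parameter per cause rather than a full conditional probability table, at the cost of the expressiveness that gives the CBN its higher accuracy.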
In previous papers, a high-performance sparse static solver with two-level unrolling based on a cell-sparse storage scheme was reported. Although the solver reaches quite a high efficiency for a large percentage of finite element analysis benchmark tests, the MFLOPS (millions of floating-point operations per second) of LDL^T factorization in benchmark tests vary on a Dell Pentium IV 850 MHz machine from 100 to 456, depending on the average size of the super-equations, i.e., on the average depth of unrolling. In this paper, a new sparse static solver with two-level unrolling is proposed that employs the concept of master-equations and searches for an appropriate depth of unrolling. The new solver provides higher MFLOPS for LDL^T factorization of the benchmark tests and therefore speeds up the solution process.
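The kernel operation whose MFLOPS is being tuned is the LDL^T factorization itself. A minimal dense sketch of that factorization on a small symmetric matrix follows; a real sparse solver performs the same recurrences block-wise over super-equations with the unrolling depths discussed above.

```python
# LDL^T factorization of a small dense symmetric matrix:
# a = L @ diag(d) @ L.T with L unit lower triangular.

def ldlt(a):
    n = len(a)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    d = [0.0] * n
    for j in range(n):
        d[j] = a[j][j] - sum(L[j][k] ** 2 * d[k] for k in range(j))
        for i in range(j + 1, n):
            L[i][j] = (a[i][j] - sum(L[i][k] * L[j][k] * d[k]
                                     for k in range(j))) / d[j]
    return L, d

a = [[4.0, 2.0, 2.0],
     [2.0, 5.0, 3.0],
     [2.0, 3.0, 6.0]]
L, d = ldlt(a)
print(d)  # [4.0, 4.0, 4.0]
```

The inner sums over `k` are exactly the loops that two-level unrolling targets: factoring groups of consecutive equations with identical sparsity structure together turns them into dense register-friendly blocks.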
Up to now, much casting analysis software has continued to develop new ways of approaching real casting processes. These include melt flow analysis, heat transfer analysis for solidification calculation, mechanical property prediction, and microstructure prediction. These efforts succeeded in obtaining results that agree well with real situations, so that CAE technologies became indispensable for designing and developing new casting processes. In manufacturing fields, however, CAE technologies are not used so frequently, because the software is difficult to use or computing performance is insufficient. To introduce CAE technologies to the manufacturing field, high-performance analysis is essential to shorten the gap between product design time and prototyping time. Software code optimization can help, but it is not enough, because codes developed by software experts are already well optimized. As an alternative route to high-performance computation, parallel computing technologies are being eagerly applied to CAE to shorten analysis time. In this research, SMP (Shared Memory Processing) and MPI (Message Passing Interface) parallelization methods were applied to the commercial software "Z-Cast" to calculate casting processes. In the course of parallelizing the code, network stabilization and core optimization were also carried out under the Microsoft Windows platform, and the performance and results were compared with those of the normal serial analysis codes.
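The shared-memory side of such parallelization usually splits the simulation domain into chunks that worker threads update concurrently each time step. The following is a schematic 1D analogue of a heat-transfer step with invented grid values; the actual Z-Cast code parallelizes 3D solidification kernels and its internals are not shown here.

```python
from concurrent.futures import ThreadPoolExecutor

# One explicit (FTCS) heat-diffusion time step over a 1D casting
# section, with the interior split into chunks updated in parallel.
# Grid values and the diffusion number r are illustrative.

def diffuse_chunk(temps, lo, hi, r=0.25):
    """T_i + r*(T_{i-1} - 2*T_i + T_{i+1}) for i in [lo, hi)."""
    return [temps[i] + r * (temps[i - 1] - 2 * temps[i] + temps[i + 1])
            for i in range(lo, hi)]

def step_parallel(temps, workers=2):
    n = len(temps)
    mid = n // 2
    chunks = [(1, mid), (mid, n - 1)]  # interior points only
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = list(pool.map(lambda c: diffuse_chunk(temps, *c), chunks))
    # Fixed-temperature boundaries model the mold walls.
    return [temps[0]] + parts[0] + parts[1] + [temps[-1]]

# Hot casting core between cooler mold walls.
temps = [100.0] + [700.0] * 6 + [100.0]
for _ in range(200):
    temps = step_parallel(temps)
print(round(temps[4], 1))  # interior relaxes toward the 100.0 walls
```

In CPython, threads illustrate only the decomposition pattern rather than a real speedup; a production SMP code does the same split with OpenMP-style threads on native arrays, and an MPI code exchanges the chunk-boundary values between processes instead.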
The varied network performance in the cloud hurts application performance. This increases tenants' costs and is a key hindrance to cloud adoption. The reason is that virtual machines (VMs) belonging to one tenant can reside on multiple physical servers, and communication interference across tenants occasionally occurs under network congestion. To prevent such unpredictability, it is critical for cloud providers to offer guaranteed network performance at the tenant level. This critical issue has drawn increasing attention in both academia and industry. Many elaborate mechanisms have been proposed to provide guaranteed network performance, such as guaranteed bandwidth or bounded message delay across tenants. However, due to the intrinsic complexity and limited capabilities of commodity hardware, deploying these mechanisms still faces great challenges in current cloud datacenters. Moreover, with the rapid development of new technologies, there are new opportunities to improve the performance of existing works, but these possibilities are not yet fully discussed. Therefore, in this paper we survey the latest network performance guarantee approaches and summarize them by their features. Then, we explore and discuss the possibility of using emerging technologies as knobs to upgrade performance or overcome the inherent shortcomings of existing advances. We hope this article will help readers quickly understand the causes of the problems and serve as a guide to motivate researchers to develop innovative algorithms and frameworks. (Received: Apr. 07, 2020; Revised: Oct. 23, 2020; Editor: Haifeng Zheng)
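Many of the guaranteed-bandwidth mechanisms this survey covers rest on traffic shaping. A minimal sketch of the classic token-bucket shaper follows, with invented rates and packet sizes; production implementations live in hypervisor vswitches or NIC hardware rather than Python.

```python
# Token-bucket shaper of the kind used to enforce a per-tenant
# bandwidth guarantee. Rate, burst, and packet sizes are illustrative.

class TokenBucket:
    def __init__(self, rate, burst):
        self.rate = rate      # tokens (bytes) added per tick
        self.burst = burst    # bucket capacity in bytes
        self.tokens = burst

    def tick(self):
        self.tokens = min(self.burst, self.tokens + self.rate)

    def try_send(self, size):
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False

bucket = TokenBucket(rate=100, burst=300)  # ~100 B/tick guaranteed
sent = 0
for _ in range(10):                        # tenant offers 200 B per tick
    if bucket.try_send(200):
        sent += 200
    bucket.tick()
print(sent)  # 1200
```

The tenant offered 2000 bytes but got through roughly the guaranteed rate plus the initial burst, which is exactly the isolation behavior a provider wants when tenants contend for a congested link.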
With the rapid development of high-rise buildings and long-span structures in recent years, high performance computation (HPC) is becoming more and more important, sometimes even crucial, for the design and construction of complex building structures. To satisfy the engineering requirements of HPC, a parallel FEA computing kernel, designed specifically for the analysis of complex building structures, is presented and illustrated in this paper. The kernel program is based on the Intel Math Kernel Library (MKL) and coded in FORTRAN 2008 syntax, which supports parallel programming. To improve the capability and efficiency of the computing kernel, the parallel concepts of modern FORTRAN, such as elemental procedures and do concurrent, have been applied extensively in the coding, and the well-known PARDISO solver in MKL is called to solve the large sparse systems of linear equations. The ultimate objective of developing the computing kernel is to enable a personal computer to analyze large building structures with up to ten million degrees of freedom (DOFs). Up to now, linear static analysis and dynamic analysis have been achieved, while nonlinear analysis, including geometric and material nonlinearity, has not been finished yet. Therefore, the numerical examples in this paper concentrate on demonstrating the validity and efficiency of linear analysis and modal analysis for large FE models, leaving the verification of the nonlinear analysis capabilities aside.
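The central step of such a kernel is solving the assembled stiffness system K u = f. The paper uses MKL's PARDISO direct solver; as a compact language-agnostic stand-in, the sketch below solves a tiny symmetric positive-definite system with conjugate gradients. The matrix values are illustrative, and nothing here reflects PARDISO's actual interface.

```python
# Solve K u = f for a small SPD "spring chain" stiffness matrix using
# conjugate gradients (a stand-in for the direct PARDISO solve).

def matvec(K, x):
    return [sum(kij * xj for kij, xj in zip(row, x)) for row in K]

def cg(K, f, iters=50, tol=1e-10):
    n = len(f)
    x = [0.0] * n
    r = f[:]              # residual r = f - K @ x, with x = 0
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        Kp = matvec(K, p)
        alpha = rs / sum(pi * qi for pi, qi in zip(p, Kp))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, Kp)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Tridiagonal stiffness matrix and a unit load at the free end.
K = [[2.0, -1.0, 0.0],
     [-1.0, 2.0, -1.0],
     [0.0, -1.0, 2.0]]
f = [0.0, 0.0, 1.0]
u = cg(K, f)
print([round(ui, 6) for ui in u])  # [0.25, 0.5, 0.75]
```

For the ten-million-DOF targets mentioned above, the matrix-vector products and triangular solves are exactly the loops that `do concurrent` and MKL threading parallelize.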
The paper describes modern technologies for computer network reliability. A software tool is developed to estimate the critical failure probability of corporate computer networks (CCN), by constructing a criticality matrix, from the results of the FME(C)A technique. Internal information factors, such as collisions and congestion of switches, routers, and servers, influence network reliability and safety, in addition to hardware and software reliability and external extreme factors. The means and features of Failure Modes and Effects (Criticality) Analysis (FME(C)A) for the reliability and criticality analysis of CCNs are considered. An example of the FME(C)A technique for a structured cable system (SCS) is given. We also discuss measures that can be used for criticality analysis and possible means of criticality reduction. Finally, we describe a technique and basic principles for the dependable development and deployment of computer networks, based on the results of FMECA analysis and procedures for the optimal choice of fault-tolerance means.
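A criticality matrix of the kind FME(C)A produces can be sketched directly: each failure mode receives a severity and an occurrence rating, the matrix bins the modes by those ratings, and a simplified risk priority number ranks them. The failure modes and ratings below are invented for illustration.

```python
# Build a 5x5 criticality matrix and rank failure modes by a
# simplified RPN (severity * occurrence). Modes/ratings are invented.

failure_modes = [
    ("switch congestion", 4, 4),   # (name, severity 1-5, occurrence 1-5)
    ("router collision",  3, 5),
    ("server NIC fault",  5, 2),
    ("patch cord damage", 2, 3),
]

def criticality_matrix(modes, levels=5):
    m = [[0] * levels for _ in range(levels)]
    for _, severity, occurrence in modes:
        m[severity - 1][occurrence - 1] += 1
    return m

def risk_priority(modes):
    return sorted(modes, key=lambda t: t[1] * t[2], reverse=True)

matrix = criticality_matrix(failure_modes)
top = risk_priority(failure_modes)[0][0]
print(top)  # switch congestion (4*4 = 16 outranks 3*5 = 15 and 5*2 = 10)
```

High-severity/high-occurrence cells of the matrix flag the modes whose criticality-reduction measures (redundancy, fault tolerance) the paper's methodology would prioritize.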
After analysing and discussing the goals and problems of computer network design, the authors developed a computer network CAD tool, described here in detail. Its architecture, functions, and modules are given. Lastly, a set of parameters for performance evaluation is proposed.
In this study, the design of a computational heuristic for the nonlinear Liénard model is presented, using the efficiency of artificial neural networks (ANNs) along with hybridized global and local search procedures. The global-search genetic algorithm (GA) and the local-search sequential quadratic programming scheme (SQPS) are implemented to solve the nonlinear Liénard model. An objective function built from the differential model and its boundary conditions is designed and optimized by the hybrid computing strength of the GA-SQPS. The motivation for the ANN procedures along with GA-SQPS is to present reliable, feasible, and precise frameworks for tackling stiff and highly nonlinear differential models. The designed procedures of ANNs along with GA-SQPS are applied to three highly nonlinear differential models. The numerical outcomes achieved over multiple trials using the designed procedures are compared to authenticate their correctness, viability, and efficacy. Moreover, statistical performance on different measures is also provided to check the reliability of the ANN along with GA-SQPS.
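The global/local hybridization can be sketched on a scalar problem: a tiny genetic algorithm explores broadly, then a local refinement polishes the best candidate. Plain hill climbing stands in for the SQP step, and the multimodal objective is invented; the paper instead optimizes an ANN-based residual of the Liénard model with a true SQP solver.

```python
import math
import random

random.seed(1)

def objective(x):
    # Invented multimodal test function; global minimum 0 at x = 2.
    return (x - 2) ** 2 * (1.0 + 0.3 * math.sin(5 * x))

def ga(pop_size=30, gens=40):
    """Tiny elitist GA: keep the best half, breed by blending + noise."""
    pop = [random.uniform(-5, 5) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=objective)
        elite = pop[:pop_size // 2]
        children = [0.5 * (random.choice(elite) + random.choice(elite))
                    + random.gauss(0, 0.3) for _ in elite]
        pop = elite + children
    return min(pop, key=objective)

def local_refine(x, step=0.1, iters=200):
    """Shrinking-step hill climbing, standing in for the SQP stage."""
    for _ in range(iters):
        for cand in (x - step, x + step):
            if objective(cand) < objective(x):
                x = cand
        step *= 0.97
    return x

x_star = local_refine(ga())
print(round(x_star, 2))
```

The division of labor mirrors the GA-SQPS design: the population search avoids being trapped by the ripples, and the local stage supplies the precision that a coarse global search alone lacks.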
Differential equations with delays attract paramount interest in the research community due to their fundamental role in interpreting and analyzing the mathematical models arising in biological studies. This study exploits the knack of an artificial-intelligence-based computing paradigm for the numerical treatment of the functional delay differential systems that portray the dynamics of the nonlinear influenza-A epidemic model (IA-EM), by implementing neural network backpropagation with the Levenberg-Marquardt scheme (NNBLMS). The nonlinear IA-EM represents four classes of the population dynamics: susceptible, exposed, infectious, and recovered individuals. The reference datasets for the NNBLMS are assembled by employing the Adams method on a sufficiently large number of scenarios of the nonlinear IA-EM, obtained through variation of the infection, turnover, disease-associated death, and recovery rates. Arbitrarily selected training, testing, and validation samples of the dataset are used by the designed NNBLMS to calculate approximate numerical solutions of the nonlinear IA-EM that develop good agreement with the reference results. The proficiency, reliability, and accuracy of the designed NNBLMS are further substantiated via exhaustive simulation-based outcomes in terms of mean square error, regression index, and error histogram studies.
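Generating reference data for a four-class epidemic model means numerically integrating an SEIR-type system over many parameter scenarios. The sketch below uses a classical RK4 step as a stand-in for the Adams method named above, omits the delay terms, and invents all rate constants; it only illustrates the data-generation stage, not the paper's model.

```python
# Integrate a delay-free SEIR-style influenza model (susceptible,
# exposed, infectious, recovered) with RK4. All rates are invented.

def seir_rhs(state, beta=0.9, sigma=0.5, gamma=0.3):
    s, e, i, r = state
    return (-beta * s * i,              # susceptible
            beta * s * i - sigma * e,   # exposed
            sigma * e - gamma * i,      # infectious
            gamma * i)                  # recovered

def rk4_step(state, h=0.1):
    def add(u, v, c):
        return tuple(ui + c * vi for ui, vi in zip(u, v))
    k1 = seir_rhs(state)
    k2 = seir_rhs(add(state, k1, h / 2))
    k3 = seir_rhs(add(state, k2, h / 2))
    k4 = seir_rhs(add(state, k3, h))
    return tuple(x + h / 6 * (a + 2 * b + 2 * c + d)
                 for x, a, b, c, d in zip(state, k1, k2, k3, k4))

state = (0.99, 0.01, 0.0, 0.0)  # fractions of the population
for _ in range(1000):           # integrate to t = 100
    state = rk4_step(state)

total = sum(state)
print(round(total, 6))  # population fractions are conserved: 1.0
```

Sweeping `beta`, `sigma`, and `gamma` over many such runs and logging the trajectories is exactly how a reference dataset for supervised training of a surrogate network would be assembled.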
BACKGROUND: Regulatory T cells (Tregs) and natural killer (NK) cells play an essential role in the development of bladder urothelial carcinoma (BUC). AIM: To construct a prognosis-related model to judge the prognosis of patients with bladder cancer and, at the same time, to predict patients' sensitivity to chemotherapy and immunotherapy. METHODS: Bladder cancer data were obtained from The Cancer Genome Atlas and GSE32894. CIBERSORT was used to calculate the immune score of each sample. Weighted gene co-expression network analysis was used to find genes with the same or similar expression patterns. Subsequently, multivariate Cox regression and lasso regression were used to further screen prognosis-related genes. The pRRophetic package was used to predict phenotype from gene expression data, the drug sensitivity of external cell lines, and clinical data. RESULTS: Stage and risk score are independent prognostic factors in patients with BUC. Mutations in FGFR3 lead to an increase in Treg infiltration and affect tumor prognosis. Additionally, EMP1, TCHH, and CNTNAP3B in the model are mainly positively correlated with the expression of immune checkpoints, while CMTM8, SORT1, and IQSEC1 are negatively correlated with immune checkpoints, and the high-risk group had higher sensitivity to chemotherapy drugs. CONCLUSION: We constructed a prognosis-related model for bladder cancer patients based on Treg and NK cell infiltration in tumor tissue. Besides judging the prognosis of patients with bladder cancer, it can also predict patients' sensitivity to chemotherapy and immunotherapy. Patients were divided into high- and low-risk groups based on this model, and differences in genetic mutations were found between the two groups.
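Prognostic models of this type ultimately reduce to a risk score: a weighted sum of gene-expression values, with patients split into high- and low-risk groups at the cohort median. The sketch below uses the gene names from the abstract, but every coefficient and expression value is invented; the real weights come from the Cox/lasso fit on the TCGA and GSE32894 cohorts.

```python
import statistics

# Toy risk score: weighted sum of expression values, median split.
# Gene names follow the abstract; all numbers are illustrative.
coefs = {"EMP1": 0.42, "TCHH": 0.31, "CNTNAP3B": 0.18,
         "CMTM8": -0.27, "SORT1": -0.35, "IQSEC1": -0.22}

def risk_score(expr):
    return sum(coefs[g] * expr.get(g, 0.0) for g in coefs)

patients = {
    "P1": {"EMP1": 2.1, "TCHH": 1.4, "CNTNAP3B": 0.9,
           "CMTM8": 0.5, "SORT1": 0.7, "IQSEC1": 1.1},
    "P2": {"EMP1": 0.4, "TCHH": 0.2, "CNTNAP3B": 0.3,
           "CMTM8": 1.8, "SORT1": 2.2, "IQSEC1": 1.5},
    "P3": {"EMP1": 1.2, "TCHH": 0.9, "CNTNAP3B": 0.6,
           "CMTM8": 1.0, "SORT1": 1.1, "IQSEC1": 0.9},
}

scores = {p: risk_score(e) for p, e in patients.items()}
cutoff = statistics.median(scores.values())
groups = {p: ("high" if s > cutoff else "low") for p, s in scores.items()}
print(groups["P1"], groups["P2"])  # high low
```

The signs of the toy coefficients follow the correlations reported above (EMP1/TCHH/CNTNAP3B risk-increasing, CMTM8/SORT1/IQSEC1 risk-decreasing), which is what lets the median split separate the groups whose chemotherapy sensitivity differs.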
Funding: Supported by the National Natural Science Foundation of China (61872006); the Scientific Research Activities Foundation of Academic and Technical Leaders and Reserve Candidates in Anhui Province (2020H233); the Top-notch Discipline (Specialty) Talents Foundation in Colleges and Universities of Anhui Province (gxbj2020057); the Startup Foundation for Introducing Talent of NUIST; and Institutional Fund Projects from the Ministry of Education and the Deanship of Scientific Research (DSR), King Abdulaziz University (KAU), Jeddah, Saudi Arabia (IFPDP-216-22).
Funding: Supported by JSPS KAKENHI Grant Numbers JP16K00117 and JP19K20250; the KDDI Foundation; and the China Scholarship Council (201808050016).
Funding: This work was supported by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2019-0-00231, Development of artificial intelligence based video security technology and systems for public infrastructure safety).
Funding: National Natural Science Foundation of China, Grant/Award Number: 92370117.
Abstract: How can a human face pattern be represented? While it is perceived in a continuous way by the human visual system, computers often store and process it discretely as 2D arrays of pixels. The authors attempt to learn a continuous surface representation for face images with an explicit function. First, an explicit model (EmFace) for human face representation is proposed in the form of a finite sum of mathematical terms, where each term is an analytic function element. Further, to estimate the unknown parameters of EmFace, a novel neural network, EmNet, is designed with an encoder-decoder structure and trained on massive face images, where the encoder is a deep convolutional neural network and the decoder is the explicit mathematical expression of EmFace. The authors demonstrate that EmFace represents face images more accurately than the comparison method, with average mean square errors of 0.000888, 0.000936, and 0.000953 on the LFW, IARPA Janus Benchmark-B, and IJB-C datasets. Visualisation results show that EmFace has higher representation performance on faces with various expressions, postures, and other factors. Furthermore, EmFace achieves reasonable performance on several face image processing tasks, including face image restoration, denoising, and transformation.
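The "finite sum of analytic function elements" idea can be illustrated in 1-D; the Gaussian element and parameter values below are an illustrative assumption, not the actual EmFace formulation:

```python
import math

def em_repr(x, terms):
    """Evaluate a finite sum of Gaussian function elements:
    f(x) = sum_i a_i * exp(-((x - m_i) / s_i)^2)."""
    return sum(a * math.exp(-((x - m) / s) ** 2) for a, m, s in terms)

# Two hypothetical elements given as (amplitude, mean, width).
terms = [(1.0, 0.0, 1.0), (0.5, 2.0, 0.5)]
value = em_repr(0.0, terms)  # continuous evaluation at any real x
```

In EmFace the sum is over 2-D analytic elements whose parameters are predicted by the EmNet encoder; the sketch only shows how an explicit sum of elements yields a continuous representation that can be evaluated at arbitrary coordinates.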
Abstract: In recent years, container-based cloud virtualization solutions have emerged to mitigate the performance gap between non-virtualized and virtualized physical resources. However, there is a noticeable absence of techniques for predicting microservice performance in current research, which impacts cloud service users' ability to determine when to provision or de-provision microservices. Predicting microservice performance poses challenges due to overheads such as variations in processing time caused by resource contention, which can lead to user confusion. In this paper, we propose, develop, and validate a probabilistic architecture named Microservice Performance Diagnosis and Prediction (MPDP). MPDP considers various factors such as response time, throughput, CPU usage, and other metrics to dynamically model interactions between microservice performance indicators for diagnosis and prediction. Using experimental data from our monitoring tool, stakeholders can build various networks for probabilistic analysis of microservice performance diagnosis and prediction and estimate the best microservice resource combination for a given Quality of Service (QoS) level. We generated a dataset of microservices with 2726 records across four metrics, CPU, memory, response time, and throughput, to demonstrate the efficacy of the proposed MPDP architecture. We validate MPDP and demonstrate its capability to predict microservice performance. We compared various Bayesian networks, such as the Noisy-OR Network (NOR), Naive Bayes Network (NBN), and Complex Bayesian Network (CBN), achieving an overall accuracy of 89.98% when using CBN.
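A minimal sketch of the simplest of the compared models, a Naive Bayes network over discretized metrics, follows; the metric levels and labels are hypothetical, and the paper's best-performing CBN is considerably more elaborate:

```python
from collections import defaultdict

class NaiveBayes:
    """Tiny Naive Bayes classifier over categorical features,
    with Laplace smoothing of the conditional counts."""
    def fit(self, X, y):
        self.n = len(y)
        self.classes = sorted(set(y))
        self.prior = defaultdict(int)   # class -> count
        self.cond = defaultdict(int)    # (class, feature index, value) -> count
        for xs, c in zip(X, y):
            self.prior[c] += 1
            for i, v in enumerate(xs):
                self.cond[(c, i, v)] += 1

    def predict(self, xs):
        def score(c):
            p = self.prior[c] / self.n
            for i, v in enumerate(xs):
                p *= (self.cond[(c, i, v)] + 1) / (self.prior[c] + 2)
            return p
        return max(self.classes, key=score)

# Hypothetical (cpu_level, response_time_level) observations with QoS labels.
nb = NaiveBayes()
nb.fit([("high", "slow"), ("high", "slow"), ("low", "fast"), ("low", "fast")],
       ["degraded", "degraded", "ok", "ok"])
label = nb.predict(("high", "slow"))
```

A CBN additionally models dependencies between the metrics themselves instead of assuming conditional independence, which is what lifts the reported accuracy.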
Funding: Project supported by the Research Fund for the Doctoral Program of Higher Education (No. 20030001112).
Abstract: In previous papers, a high-performance sparse static solver with two-level unrolling based on a cell-sparse storage scheme was reported. Although the solver reaches quite a high efficiency on a large share of finite element analysis benchmark tests, the MFLOPS (million floating-point operations per second) of the LDL^T factorization of the benchmark tests vary on a Dell Pentium IV 850 MHz machine from 100 to 456, depending on the average size of the super-equations, i.e., on the average depth of unrolling. In this paper, a new sparse static solver with two-level unrolling is proposed that employs the concept of master-equations and searches for an appropriate depth of unrolling. The new solver provides higher MFLOPS for the LDL^T factorization of the benchmark tests and therefore speeds up the solution process.
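The LDL^T factorization at the heart of the solver can be sketched without unrolling or sparse storage; this dense, pure-Python version is only illustrative of the kernel the paper optimizes:

```python
def ldlt(A):
    """Factor a symmetric positive-definite matrix A as L * D * L^T,
    with L unit lower triangular and D diagonal."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    D = [0.0] * n
    for j in range(n):
        D[j] = A[j][j] - sum(L[j][k] ** 2 * D[k] for k in range(j))
        L[j][j] = 1.0
        for i in range(j + 1, n):
            L[i][j] = (A[i][j] - sum(L[i][k] * L[j][k] * D[k]
                                     for k in range(j))) / D[j]
    return L, D

L, D = ldlt([[4.0, 2.0], [2.0, 3.0]])
```

Unrolling in the paper amounts to processing several consecutive equations (a super-equation) at once inside these inner loops to improve register and cache utilization, which is why the achievable MFLOPS depends on the unrolling depth.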
Abstract: Up to now, a great deal of casting analysis software has continued to develop new ways of modeling real casting processes, including melt flow analysis, heat transfer analysis for solidification calculation, mechanical property prediction, and microstructure prediction. These efforts have been successful in obtaining results close to real situations, so that CAE technologies have become indispensable for designing or developing new casting processes. In manufacturing fields, however, CAE technologies are not used so frequently because of the difficulty of using the software or insufficient computing performance. To introduce CAE technologies to the manufacturing field, high-performance analysis is essential to shorten the gap between product design time and prototyping time. Software code optimization can help, but it is not enough, because the codes developed by software experts are already well optimized. As an alternative for high-performance computation, parallel computing technologies are eagerly being applied to CAE to shorten analysis time. In this research, SMP (Shared Memory Processing) and MPI (Message Passing Interface) [1] parallelization methods were applied to the commercial software "Z-Cast" to calculate casting processes. In the code parallelization process, network stabilization and core optimization were also carried out under the Microsoft Windows platform, and the performance and results were compared with those of the normal serial analysis codes.
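The shared-memory side of such parallelization can be sketched by splitting a grid of cell temperatures into chunks handled by a thread pool; the cooling law and constants are hypothetical and have nothing to do with Z-Cast's actual solver:

```python
from concurrent.futures import ThreadPoolExecutor

def cool_chunk(chunk, dt=0.1, k=0.5, ambient=25.0):
    """One explicit cooling step per cell: T -= k * dt * (T - ambient)."""
    return [t - k * dt * (t - ambient) for t in chunk]

def parallel_step(temps, workers=4):
    """Split the cell array into chunks and update them in parallel."""
    size = (len(temps) + workers - 1) // workers
    chunks = [temps[i:i + size] for i in range(0, len(temps), size)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        results = ex.map(cool_chunk, chunks)
    return [t for chunk in results for t in chunk]

temps = parallel_step([125.0] * 8)
```

An MPI version would instead distribute the chunks across processes on different machines and exchange boundary cells each time step, which is where the network stabilization mentioned above matters.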
Funding: This project is partially supported by the National Natural Science Foundation of China (No. 61872401) and the Fok Ying Tung Education Foundation (No. 171059).
Abstract: Varied network performance in the cloud hurts application performance. This increases the tenant's cost and is a key hindrance to cloud adoption. It arises because virtual machines (VMs) belonging to one tenant can reside on multiple physical servers, and communication interference across tenants occasionally occurs under network congestion. To prevent such unpredictability, it is critical for cloud providers to offer guaranteed network performance at the tenant level. This critical issue has drawn increasing attention in both academia and industry. Many elaborate mechanisms have been proposed to provide guaranteed network performance, such as guaranteed bandwidth or bounded message delay across tenants. However, due to intrinsic complexities and the limited capabilities of commodity hardware, deploying these mechanisms still faces great challenges in current cloud datacenters. Moreover, with the rapid development of new technologies, there are new opportunities to improve the performance of existing works, but these possibilities are not yet under full discussion. Therefore, in this paper, we survey the latest developments in network performance guarantee approaches and summarize them based on their features. We then explore and discuss the possibilities of using emerging technologies as knobs to upgrade performance or overcome the inherent shortcomings of existing advances. We hope this article will help readers quickly understand the causes of the problems and serve as a guide motivating researchers to develop innovative algorithms and frameworks. (Received: Apr. 07, 2020; Revised: Oct. 23, 2020; Editor: Haifeng Zheng)
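One classic building block for bandwidth guarantees in this line of work is rate limiting with a token bucket; a minimal sketch (the rate and burst values are illustrative, not from any surveyed system):

```python
class TokenBucket:
    """Admit traffic at a sustained rate while allowing a bounded burst."""
    def __init__(self, rate, burst):
        self.rate = rate      # tokens replenished per second
        self.burst = burst    # bucket capacity (maximum burst size)
        self.tokens = burst
        self.last = 0.0

    def allow(self, now, size=1.0):
        """Return True if a packet costing `size` tokens may be sent at time `now`."""
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False

tb = TokenBucket(rate=1.0, burst=2.0)
decisions = [tb.allow(0.0), tb.allow(0.0), tb.allow(0.0), tb.allow(1.0)]
```

Per-tenant buckets enforce a minimum share in the presence of congestion, but as the survey notes, doing this at datacenter scale on commodity hardware is exactly where deployment becomes hard.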
Abstract: With the rapid development of high-rise buildings and long-span structures in recent years, high-performance computation (HPC) is becoming more and more important, sometimes even crucial, for the design and construction of complex building structures. To satisfy the engineering requirements of HPC, a parallel FEA computing kernel designed specifically for the analysis of complex building structures is presented and illustrated in this paper. The kernel program is based on the Intel Math Kernel Library (MKL) and coded in FORTRAN 2008 syntax, which supports parallel programming. To improve the capability and efficiency of the computing kernel, the parallel features of modern FORTRAN, such as elemental procedures and do concurrent, have been applied extensively in coding, and the well-known PARDISO solver in MKL has been called to solve the large sparse systems of linear equations. The ultimate objective of developing the computing kernel is to enable a personal computer to analyze large building structures with up to ten million degrees of freedom (DOFs). Up to now, linear static analysis and dynamic analysis have been achieved, while nonlinear analysis, including geometric and material nonlinearity, has not yet been finished. Therefore, the numerical examples in this paper concentrate on demonstrating the validity and efficiency of linear analysis and modal analysis for large FE models, leaving the verification of the nonlinear analysis capabilities to future work.
Abstract: The paper describes modern technologies for computer network reliability. A software tool is developed to estimate the CCN critical failure probability (construction of a criticality matrix) from the results of the FME(C)A technique. Internal information factors, such as collisions and congestion of switches, routers, and servers, influence network reliability and safety (besides hardware and software reliability and external extreme factors). The means and features of Failure Modes and Effects (Criticality) Analysis (FME(C)A) for the reliability and criticality analysis of corporate computer networks (CCN) are considered. An example of the FME(C)A technique for a structured cable system (SCS) is given. We also discuss measures that can be used for criticality analysis and possible means of criticality reduction. Finally, we describe a technique and basic principles for the dependable development and deployment of computer networks based on the results of FMECA analysis and procedures for the optimal choice of fault-tolerance means.
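A criticality matrix of the FME(C)A kind can be sketched as a severity-times-occurrence score mapped to a criticality class; the scales, thresholds, and failure modes below are illustrative assumptions, not the paper's data:

```python
def criticality(severity, occurrence):
    """Map a failure mode's severity and occurrence (both on a 1-5 scale)
    to a criticality class via their product."""
    score = severity * occurrence
    if score >= 15:
        return "critical"
    if score >= 6:
        return "major"
    return "minor"

# Hypothetical SCS failure modes: (name, severity, occurrence).
modes = [("patch-panel fault", 5, 4),
         ("connector wear", 3, 2),
         ("label loss", 1, 2)]
classes = {name: criticality(s, o) for name, s, o in modes}
```

Sorting modes by this score is what drives the "optimal choice of fault-tolerance means": redundancy budget goes to the critical cells of the matrix first.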
Abstract: After analysing and discussing the goals and problems of computer network design, the authors developed a computer network CAD tool, which is described in detail. Its architecture, functions, and modules are given. Lastly, a set of parameters for performance evaluation is proposed.
Abstract: In this study, the design of a computational heuristic for the nonlinear Liénard model is presented using the efficiency of artificial neural networks (ANNs) along with the hybridization of global and local search approaches. The global-search genetic algorithm (GA) and the local-search sequential quadratic programming scheme (SQPS) are implemented to solve the nonlinear Liénard model. An objective function using the differential model and boundary conditions is designed and optimized by the hybrid computing strength of the GA-SQPS. The motivation for the ANN procedures along with GA-SQPS is to provide reliable, feasible, and precise frameworks to tackle stiff and highly nonlinear differential models. The designed procedures of ANNs along with GA-SQPS are applied to three highly nonlinear differential models. The numerical outcomes achieved over multiple trials using the designed procedures are compared to authenticate their correctness, viability, and efficacy. Moreover, statistical performance based on different measures is also provided to check the reliability of the ANN along with GA-SQPS.
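An objective function built from the differential model can be sketched for the Liénard equation x'' + f(x)x' + g(x) = 0 using finite differences; the discretization below is an illustrative assumption, not the paper's exact ANN-based formulation:

```python
import math

def lienard_residual(x, h, f, g):
    """Mean squared residual of x'' + f(x)x' + g(x) = 0 for a candidate
    solution x sampled on a uniform grid with step h."""
    total = 0.0
    for i in range(1, len(x) - 1):
        d1 = (x[i + 1] - x[i - 1]) / (2 * h)            # central first derivative
        d2 = (x[i + 1] - 2 * x[i] + x[i - 1]) / h ** 2  # central second derivative
        total += (d2 + f(x[i]) * d1 + g(x[i])) ** 2
    return total / (len(x) - 2)

# With f = 0 and g(x) = x the equation reduces to x'' + x = 0, which
# cos(t) solves exactly, so the residual should be near zero.
h = 0.01
x = [math.cos(i * h) for i in range(101)]
res = lienard_residual(x, h, lambda v: 0.0, lambda v: v)
```

In the paper this kind of residual, plus boundary-condition terms, is minimized over ANN weights by the GA-SQPS hybrid rather than evaluated on a fixed grid of samples.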
Abstract: Differential equations with delays take paramount interest in the research community due to their fundamental role in interpreting and analyzing the mathematical models arising in biological studies. This study exploits the knack of an artificial-intelligence-based computing paradigm for the numerical treatment of functional delay differential systems that portray the dynamics of the nonlinear influenza-A epidemic model (IA-EM), by implementing neural network backpropagation with the Levenberg-Marquardt scheme (NNBLMS). The nonlinear IA-EM represents four classes of population dynamics: susceptible, exposed, infectious, and recovered individuals. The reference datasets for the NNBLMS are assembled by employing the Adams method for a sufficiently large number of scenarios of the nonlinear IA-EM through variation in the infection, turnover, disease-associated death, and recovery rates. Arbitrarily selected training, testing, and validation samples of the dataset are utilized by the designed NNBLMS to calculate approximate numerical solutions of the nonlinear IA-EM, which develop good agreement with the reference results. The proficiency, reliability, and accuracy of the designed NNBLMS are further substantiated via exhaustive simulation-based outcomes in terms of mean square error, regression index, and error histogram studies.
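The four-class population dynamics can be sketched with a single explicit integration step of a standard SEIR system; the rate constants are hypothetical, and the paper uses the Adams method on a model with delay terms rather than this plain Euler step:

```python
def seir_step(s, e, i, r, beta=0.5, sigma=0.2, gamma=0.1, dt=0.1):
    """One Euler step of a susceptible-exposed-infectious-recovered model.
    beta: infection rate, sigma: incubation rate, gamma: recovery rate."""
    n = s + e + i + r
    new_inf = beta * s * i / n   # new exposures per unit time
    return (s - dt * new_inf,
            e + dt * (new_inf - sigma * e),
            i + dt * (sigma * e - gamma * i),
            r + dt * gamma * i)

# Hypothetical initial state: 990 susceptible, 5 exposed, 5 infectious.
state = seir_step(990.0, 5.0, 5.0, 0.0)
```

Iterating such steps over many parameter scenarios is how a reference dataset of trajectories can be assembled for supervised training of the network.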
Abstract: BACKGROUND: Regulatory T cells (Tregs) and natural killer (NK) cells play an essential role in the development of bladder urothelial carcinoma (BUC). AIM: To construct a prognosis-related model to judge the prognosis of patients with bladder cancer and, at the same time, predict their sensitivity to chemotherapy and immunotherapy. METHODS: Bladder cancer data were obtained from The Cancer Genome Atlas and GSE32894. CIBERSORT was used to calculate the immune score of each sample. Weighted gene co-expression network analysis was used to find genes with the same or similar expression patterns. Subsequently, multivariate Cox regression and lasso regression were used to further screen prognosis-related genes. The pRRophetic package was used to predict phenotype from gene expression data, the drug sensitivity of external cell lines, and clinical outcomes. RESULTS: Stage and risk score are independent prognostic factors in patients with BUC. Mutations in FGFR3 lead to an increase in Treg infiltration and affect the prognosis of the tumor. Additionally, EMP1, TCHH, and CNTNAP3B in the model are mainly positively correlated with the expression of immune checkpoints, while CMTM8, SORT1, and IQSEC1 are negatively correlated with immune checkpoints, and the high-risk group had higher sensitivity to chemotherapy drugs. CONCLUSION: We constructed a prognosis-related model for bladder tumor patients based on Treg and NK cell infiltration in tumor tissue. Besides judging the prognosis of patients with bladder cancer, it can also predict their sensitivity to chemotherapy and immunotherapy. Patients were divided into high- and low-risk groups based on this model, and differences in genetic mutations were found between the two groups.
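The risk score produced by a Cox/lasso model of this kind is typically a linear combination of gene expression values and fitted coefficients; the gene set and coefficient values below are hypothetical and merely follow the signs described in the abstract, not the paper's fitted model:

```python
def risk_score(expression, coefficients):
    """Linear risk score: sum of coefficient * expression over model genes;
    genes missing from the expression profile contribute zero."""
    return sum(coefficients[g] * expression.get(g, 0.0) for g in coefficients)

# Hypothetical coefficients: EMP1 risk-increasing, CMTM8 protective.
coefs = {"EMP1": 0.8, "CMTM8": -0.5}
score = risk_score({"EMP1": 1.0, "CMTM8": 2.0}, coefs)
```

Patients are then split into high- and low-risk groups by thresholding this score, for example at the cohort median.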