The corrosion rate is a crucial factor that impacts the longevity of materials in different applications. After undergoing friction stir processing (FSP), the refined grain structure leads to a notable decrease in corrosion rate. However, a better understanding of the correlation between the FSP process parameters and the corrosion rate is still lacking. The current study used machine learning to establish the relationship between the corrosion rate and FSP process parameters (rotational speed, traverse speed, and shoulder diameter) for WE43 alloy. The Taguchi L27 design of experiments was used for the experimental analysis. In addition, synthetic data was generated using particle swarm optimization for virtual sample generation (VSG). The application of VSG has led to an increase in the prediction accuracy of machine learning models. A sensitivity analysis was performed using Shapley Additive Explanations to determine the key factors affecting the corrosion rate. The shoulder diameter had a significant impact in comparison to the traverse speed. A graphical user interface (GUI) has been created to predict the corrosion rate using the identified factors. This study focuses on the WE43 alloy, but its findings can also be used to predict the corrosion rate of other magnesium alloys.
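As an illustration of the modelling step described in this abstract, the hedged sketch below fits a regressor to the three FSP parameters and ranks their influence with SHAP values. The data, parameter ranges, and model choice are assumptions made only so the example runs; they are not taken from the study.

```python
# Illustrative sketch only: synthetic FSP samples, not the paper's data or model.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Hypothetical process parameters: rotational speed (rpm),
# traverse speed (mm/min), shoulder diameter (mm).
X = np.column_stack([
    rng.uniform(800, 1600, 200),
    rng.uniform(20, 80, 200),
    rng.uniform(12, 20, 200),
])
# Placeholder corrosion-rate response (mm/year), only to make the sketch runnable.
y = 0.5 - 0.0001 * X[:, 0] - 0.001 * X[:, 1] - 0.01 * X[:, 2] + rng.normal(0, 0.01, 200)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
# The mean absolute SHAP value per parameter approximates its global importance.
for name, imp in zip(["rotational speed", "traverse speed", "shoulder diameter"],
                     np.abs(shap_values).mean(axis=0)):
    print(f"{name}: {imp:.4f}")
```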
In recent decades, fog computing has played a vital role in executing parallel computational tasks, specifically scientific workflow tasks. In cloud data centers, fog computing takes more time to run workflow applications. Therefore, it is essential to develop effective models for Virtual Machine (VM) allocation and task scheduling in fog computing environments. Effective task scheduling, VM migration, and allocation together optimize the use of computational resources across different fog nodes. This process ensures that tasks are executed with minimal energy consumption, which reduces the chances of resource bottlenecks. In this manuscript, the proposed framework comprises two phases: (i) effective task scheduling using a fractional selectivity approach and (ii) VM allocation using a proposed algorithm named Fitness Sharing Chaotic Particle Swarm Optimization (FSCPSO). The FSCPSO algorithm integrates the concepts of chaos theory and fitness sharing to effectively balance global exploration and local exploitation. This balance enables the exploration of a wide range of solutions, leading to minimal total cost and makespan in comparison to other traditional optimization algorithms. The FSCPSO algorithm's performance is analyzed using six evaluation measures, namely Load Balancing Level (LBL), Average Resource Utilization (ARU), total cost, makespan, energy consumption, and response time. In relation to the conventional optimization algorithms, the FSCPSO algorithm achieves a higher LBL of 39.12%, an ARU of 58.15%, a minimal total cost of 1175, and a makespan of 85.87 ms, particularly when evaluated for 50 tasks.
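The following is a minimal sketch of how a chaotic particle swarm with fitness sharing can be organised; it is one plausible reading of the FSCPSO idea rather than the authors' implementation, and the objective, coefficients, and niche radius are placeholders.

```python
# Generic chaotic PSO with fitness sharing; all constants are illustrative.
import numpy as np

def fscpso(objective, dim=10, swarm=30, iters=200, bounds=(0.0, 1.0), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (swarm, dim))
    v = np.zeros((swarm, dim))
    chaos = rng.uniform(0.1, 0.9, swarm)            # logistic-map state per particle
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        chaos = 4.0 * chaos * (1.0 - chaos)         # chaotic sequence in (0, 1)
        w = 0.4 + 0.5 * chaos[:, None]              # chaotic inertia weight
        r1, r2 = rng.random((swarm, dim)), rng.random((swarm, dim))
        v = w * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        # Fitness sharing (assumes a nonnegative objective): personal bests crowded
        # by many neighbours look worse, so the leader comes from a sparser niche.
        dist = np.linalg.norm(pbest[:, None] - pbest[None, :], axis=2)
        niche = np.clip(1.0 - dist / 0.2, 0.0, 1.0).sum(axis=1)
        gbest = pbest[(pbest_f * niche).argmin()].copy()
    return gbest, float(objective(gbest))

# Toy usage: a sphere function stands in for a cost/makespan objective.
best, val = fscpso(lambda p: float(np.sum(p ** 2)))
print(val)
```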
Early non-invasive diagnosis of coronary heart disease (CHD) is critical. However, it is challenging to achieve accurate CHD diagnosis by detecting breath. In this work, heterostructured complexes of black phosphorus (BP) and two-dimensional carbide and nitride (MXene) with high gas sensitivity and photoresponsiveness were formulated using a self-assembly strategy. A light-activated virtual sensor array (LAVSA) based on BP/Ti3C2Tx was prepared under photomodulation and further assembled into an instant gas sensing platform (IGSP). In addition, a machine learning (ML) algorithm was introduced to help the IGSP detect and recognize the signals of breath samples to diagnose CHD. Due to the synergistic effect of BP and Ti3C2Tx as well as photoexcitation, the synthesized heterostructured complexes exhibited higher performance than pristine Ti3C2Tx, with a response value 26% higher than that of pristine Ti3C2Tx. In addition, with the help of a pattern recognition algorithm, LAVSA successfully detected and identified 15 odor molecules affiliated with alcohols, ketones, aldehydes, esters, and acids. Meanwhile, with the assistance of ML, the IGSP achieved 69.2% accuracy in detecting the breath odor of 45 volunteers from healthy people and CHD patients. In conclusion, an immediate, low-cost, and accurate prototype was designed and fabricated for the noninvasive diagnosis of CHD, which provides a generalized solution for diagnosing other diseases and more complex application scenarios.
Background Virtual reality technology has been widely used in surgical simulators, providing new opportunities for assessing and training surgical skills. Machine learning algorithms are commonly used to analyze and evaluate the performance of participants. However, their interpretability limits the personalization of the training for individual participants. Methods Seventy-nine participants were recruited and divided into three groups based on their skill level in intracranial tumor resection. Data on the use of surgical tools were collected using a surgical simulator. Feature selection was performed using the Minimum Redundancy Maximum Relevance and SVM-RFE algorithms to obtain the final metrics for training the machine learning model. Five machine learning algorithms were trained to predict the skill level, and the support vector machine performed the best, with an accuracy of 92.41% and an Area Under Curve value of 0.98253. The machine learning model was interpreted using Shapley values to identify the important factors contributing to the skill level of each participant. Results This study demonstrates the effectiveness of machine learning in differentiating the evaluation and training of virtual reality neurosurgical performances. The use of Shapley values enables targeted training by identifying deficiencies in individual skills. Conclusions This study provides insights into the use of machine learning for personalized training in virtual reality neurosurgery. The interpretability of the machine learning models enables the development of individualized training programs. In addition, this study highlighted the potential of explanatory models in training external skills.
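A hedged sketch of the feature-selection and classification pipeline named above (SVM-RFE followed by an SVM) is given below; the synthetic data, feature count, kernel, and split are assumptions, not the study's settings.

```python
# Illustrative pipeline: SVM-RFE feature selection, then an RBF SVM classifier.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC, LinearSVC

# Synthetic stand-in for the participants' tool-usage metrics (three skill groups).
X, y = make_classification(n_samples=79, n_features=30, n_informative=8,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# SVM-RFE: recursively drop the weakest features according to linear-SVM weights.
rfe = RFE(LinearSVC(dual=False, max_iter=5000), n_features_to_select=10)
clf = make_pipeline(StandardScaler(), rfe, SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print(f"held-out accuracy (toy data): {clf.score(X_te, y_te):.3f}")
```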
A virtual human is a simulation of a human produced through the synthesis of virtual reality, artificial intelligence, and other technologies. Modern virtual human technology simulates both the external characteristics and the internal emotions and personality of humans. The relationship between the virtual human and the human is a concrete expression of the modern symbiotic relationship between human and machine. This human-machine symbiosis can either be a fusion of the virtual human and the human, or it can cause a split in the human itself.
Software Defined Network (SDN) and Network Function Virtualization (NFV) technology promote several benefits to network operators, including reduced maintenance costs, increased network operational performance, simplified network lifecycle, and policy management. Network vulnerabilities try to modify services provided by Network Function Virtualization MANagement and Orchestration (NFV MANO), and malicious attacks in different scenarios disrupt the NFV Orchestrator (NFVO) and Virtualized Infrastructure Manager (VIM) lifecycle management related to network services or individual Virtualized Network Functions (VNF). This paper proposes an anomaly detection mechanism that monitors threats in NFV MANO and manages security functions promptly and adaptively in order to enhance the quality of experience for end users. An anomaly detector investigates these identified risks and provides secure network services. It enables virtual network security functions and identifies anomalies in Kubernetes (a cloud-based platform). For training and testing purposes of the proposed approach, an intrusion-containing dataset is used that holds multiple malicious activities such as Smurf, Neptune, Teardrop, Pod, Land, IPsweep, etc., categorized as Probing (Prob), Denial of Service (DoS), User to Root (U2R), and Remote to User (R2L) attacks. The anomaly detector is equipped with the capabilities of a Machine Learning (ML) technique, making use of supervised learning techniques like Logistic Regression (LR), Support Vector Machine (SVM), Random Forest (RF), Naïve Bayes (NB), and Extreme Gradient Boosting (XGBoost). The proposed framework has been evaluated by deploying the identified ML algorithms on a Jupyter notebook in Kubeflow to simulate Kubernetes for validation purposes. The RF classifier has shown better outcomes (99.90% accuracy) than other classifiers in detecting anomalies/intrusions in the containerized environment.
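To make the classification step concrete, the sketch below trains the Random Forest classifier reported to perform best; a synthetic five-class dataset stands in for the preprocessed intrusion records, so the numbers it prints are illustrative only.

```python
# Synthetic stand-in for labelled intrusion records (normal, Prob, DoS, U2R, R2L).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=40, n_informative=12,
                           n_classes=5, weights=[0.6, 0.1, 0.25, 0.02, 0.03],
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                          random_state=42)

# Random Forest anomaly/intrusion classifier, per-class metrics on the held-out set.
rf = RandomForestClassifier(n_estimators=300, n_jobs=-1, random_state=42)
rf.fit(X_tr, y_tr)
print(classification_report(y_te, rf.predict(X_te),
                            target_names=["normal", "Prob", "DoS", "U2R", "R2L"]))
```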
The demand for cloud computing has increased manifold in the recent past. More specifically, on-demand computing has seen a rapid rise as organizations rely mostly on cloud service providers for their day-to-day computing needs. The cloud service provider fulfills different user requirements using virtualization, where a single physical machine can host multiple Virtual Machines. Each virtual machine potentially represents a different user environment, such as operating system, programming environment, and applications. However, these cloud services use a large amount of electrical energy and produce greenhouse gases. To reduce the electricity cost and greenhouse gases, energy-efficient algorithms must be designed. One specific area where energy-efficient algorithms are required is virtual machine consolidation. With virtual machine consolidation, the objective is to utilize the minimum possible number of hosts to accommodate the required virtual machines, keeping in mind the service level agreement requirements. This research work formulates virtual machine migration as an online problem and develops optimal offline and online algorithms for the single-host virtual machine migration problem under a service level agreement constraint for an over-utilized host. The online algorithm is analyzed using a competitive analysis approach. In addition, an experimental analysis of the proposed algorithm on real-world data is conducted to showcase the improved performance of the proposed algorithm against the benchmark algorithms. Our proposed online algorithm consumed 25% less energy and performed 43% fewer migrations than the benchmark algorithms.
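As a hedged illustration of an online migrate-or-wait rule (in the ski-rental spirit, not the paper's specific algorithm), the sketch below waits until the accumulated SLA penalty on an over-utilized host matches the one-off migration cost before migrating, which is the classical, roughly 2-competitive break-even policy for this kind of trade-off. The costs and the overload trace are invented.

```python
# Online migrate-or-wait decision; penalty and cost values are placeholders.
def online_migration(overload_trace, sla_penalty_per_step=1.0, migration_cost=5.0):
    accumulated = 0.0
    for t, overloaded in enumerate(overload_trace):
        if not overloaded:
            accumulated = 0.0                    # overload episode ended; reset
            continue
        accumulated += sla_penalty_per_step
        if accumulated >= migration_cost:
            return t                             # break-even reached: migrate now
    return None                                  # never migrated

# Example: the host becomes over-utilized from step 2 onward.
trace = [False, False, True, True, True, True, True, True, True]
print(online_migration(trace))                   # step index at which migration triggers
```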
The drug development process takes a long time since it requires sorting through a large number of inactive compounds from a large collection of compounds chosen for study and choosing just the most pertinent compounds that can bind to a disease protein. The use of virtual screening in pharmaceutical research is growing in popularity. During the early phases of medication research and development, it is crucial. Chemical compound searches are now more narrowly targeted. Because the databases contain more and more ligands, this method needs to be quick and exact. Neural network fingerprints were created more effectively than the well-known Extended Connectivity Fingerprint (ECFP). Only the largest sub-graph is taken into consideration to learn the representation, despite the fact that the conventional graph network generates a better-encoded fingerprint. When using the average or maximum pooling layer, it also contains unrelated data. This article suggests the Graph Convolutional Attention Network (GCAN), a graph neural network with an attention mechanism, to address these problems. Additionally, it makes the nodes or sub-graphs that are used to create the molecular fingerprint more significant. The generated fingerprint is used to classify drugs using ensemble learning. As base classifiers, ensemble stacking is applied to Support Vector Machines (SVM), Random Forest, Naïve Bayes, Decision Trees, AdaBoost, and Gradient Boosting. When compared to existing models, the proposed GCAN fingerprint with an ensemble model achieves relatively high accuracy, sensitivity, specificity, and area under the curve. Additionally, it is revealed that our ensemble learning with the generated molecular fingerprint yields 91% accuracy, outperforming earlier approaches.
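The ensemble-stacking stage can be sketched with scikit-learn as below; random vectors stand in for the GCAN fingerprints, and the base-learner settings are assumptions rather than the article's configuration.

```python
# Stacking ensemble over the base classifiers listed above; placeholder fingerprints.
import numpy as np
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 128))                  # stand-in for GCAN fingerprints
y = rng.integers(0, 2, 300)                      # active / inactive labels

base = [
    ("svm", SVC(probability=True)),
    ("rf", RandomForestClassifier(n_estimators=100)),
    ("nb", GaussianNB()),
    ("dt", DecisionTreeClassifier(max_depth=5)),
    ("ada", AdaBoostClassifier()),
    ("gb", GradientBoostingClassifier()),
]
stack = StackingClassifier(estimators=base, final_estimator=LogisticRegression(), cv=5)
stack.fit(X, y)
print(f"training accuracy (toy data): {stack.score(X, y):.3f}")
```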
Virtualization is the backbone of cloud computing, which is a developing and widely used paradigm. By finding and merging identical memory pages, memory deduplication improves memory efficiency in virtualized systems. Kernel Same Page Merging (KSM) is a Linux service for memory page sharing in virtualized environments. Memory deduplication is vulnerable to a memory disclosure attack, which uses covert channel establishment to reveal the contents of other colocated virtual machines. To avoid a memory disclosure attack, sharing of identical pages within a single user's virtual machines is permitted, but sharing of contents between different users is forbidden. In our proposed approach, virtual machines with similar operating systems among the active domains in a node are recognised and organised into a homogeneous batch, with memory deduplication performed inside that batch, to improve the efficiency of memory page sharing. When compared to memory deduplication applied to the entire host, implementation results demonstrate a significant increase in the number of pages shared when deduplication is applied batch-wise, although CPU (central processing unit) consumption also increases.
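A toy sketch of batch-wise deduplication is shown below: VMs are grouped by operating system and identical pages are merged only inside each group, mirroring the restriction described above. The VM names and page contents are invented.

```python
# Batch-wise page deduplication demo using content hashes; purely illustrative.
import hashlib
from collections import defaultdict

def dedup_batchwise(vms):
    """vms: dict vm_name -> {'os': str, 'pages': list[bytes]}; returns pages shared."""
    batches = defaultdict(list)
    for name, vm in vms.items():
        batches[vm["os"]].append((name, vm["pages"]))
    shared = 0
    for _, members in batches.items():
        seen = {}                                # page hash -> first owner in this batch
        for name, pages in members:
            for page in pages:
                h = hashlib.sha1(page).hexdigest()
                if h in seen:
                    shared += 1                  # identical page merged within the batch
                else:
                    seen[h] = name
    return shared

vms = {
    "vm1": {"os": "ubuntu22", "pages": [b"A" * 4096, b"B" * 4096]},
    "vm2": {"os": "ubuntu22", "pages": [b"A" * 4096, b"C" * 4096]},
    "vm3": {"os": "windows11", "pages": [b"A" * 4096]},  # same page, other batch: not merged
}
print(dedup_batchwise(vms))                      # -> 1
```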
Cloud data centers consume a large amount of power, leading to the problem of high energy consumption. In order to solve this problem, an energy-efficient virtual machine (VM) consolidation algorithm named PVDE (prediction-based VM deployment algorithm for energy efficiency) is presented. The proposed algorithm uses a linear weighted method to predict the load of a host and classifies the hosts in the data center, based on the predicted host load, into four classes for the purpose of VM migration. We also propose four types of VM selection algorithms for the purpose of determining potential VMs to be migrated. We performed extensive performance analysis of the proposed algorithms. Experimental results show that, in contrast to other energy-saving algorithms, the algorithm proposed in this work significantly reduces the energy consumption and maintains low service level agreement (SLA) violations.
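The two ingredients named above, a linear weighted load forecast and a four-way host classification, can be sketched as follows; the weights and thresholds are illustrative assumptions, not the values used by PVDE.

```python
# Linear weighted load prediction and threshold-based host classification (illustrative).
def predict_load(history, weights=(0.1, 0.2, 0.3, 0.4)):
    """Linearly weighted forecast; more recent utilization samples get larger weights."""
    recent = history[-len(weights):]
    return sum(w * u for w, u in zip(weights, recent)) / sum(weights[:len(recent)])

def classify_host(predicted, light=0.2, proper=0.6, heavy=0.85):
    if predicted < light:
        return "light"        # candidate for full evacuation
    if predicted < proper:
        return "proper"       # leave as is
    if predicted < heavy:
        return "middle"
    return "heavy"            # migrate some VMs away

history = [0.35, 0.42, 0.55, 0.63]               # recent CPU utilization samples
p = predict_load(history)
print(round(p, 3), classify_host(p))
```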
This paper interprets the essence of XEN and hardware virtualization technology, which have made virtual machine technology the focus of people's attention again because of its impressive performance. The security challenges of XEN are mainly researched from the following points of view: security bottleneck, security isolation and sharing, life cycle, digital copyright protection, trusted virtual machine and management, etc. These security problems significantly affect the security of virtual machine systems based on XEN. Finally, security measures are put forward, which will provide useful guidance for enhancing XEN security in the future.
Finding energetic materials with tailored properties is always a significant challenge due to the low research efficiency of trial and error. Herein, a methodology combining domain knowledge, a machine learning algorithm, and experiments is presented for accelerating the discovery of novel energetic materials. A high-throughput virtual screening (HTVS) system integrating on-demand molecular generation and machine learning models covering the prediction of molecular properties and crystal packing mode scoring is established. With the proposed HTVS system, candidate molecules with promising properties and a desirable crystal packing mode are rapidly targeted from the generated molecular space containing 25112 molecules. Furthermore, a study of the crystal structure and properties shows that the good comprehensive performance of the target molecule is in agreement with the predicted results, thus verifying the effectiveness of the proposed methodology. This work demonstrates a new research paradigm for discovering novel energetic materials and can be extended to other organic materials without manifest obstacles.
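A high-level sketch of such a screening loop is shown below; the generator, property predictor, and packing scorer are stand-in functions rather than the authors' models, and the property cut-offs are invented.

```python
# Generate-predict-score-filter loop in the spirit of an HTVS pipeline (illustrative).
import random
random.seed(0)

def generate_candidates(n):                       # stand-in molecular generator
    return [f"mol_{i}" for i in range(n)]

def predict_properties(mol):                      # stand-in property predictor
    return {"density": random.uniform(1.6, 2.0),
            "detonation_velocity": random.uniform(7.0, 9.5)}

def packing_score(mol):                           # stand-in crystal-packing scorer
    return random.random()

def screen(n, top_k=5):
    hits = []
    for mol in generate_candidates(n):
        props = predict_properties(mol)
        if props["density"] > 1.85 and props["detonation_velocity"] > 8.8:
            hits.append((packing_score(mol), mol, props))
    hits.sort(key=lambda h: h[0], reverse=True)   # best packing mode first
    return hits[:top_k]

for score, mol, props in screen(2000):
    print(f"{mol}: packing={score:.2f}, density={props['density']:.2f} g/cm^3")
```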
Current orchestration and choreography process engines only support dedicated process languages. To solve this problem, an Event-driven Process Execution Model (EPEM) was developed. Formalization and mapping principles of the model were presented to guarantee the correctness and efficiency of process transformation. As a case study, the EPEM descriptions of the Web Services Business Process Execution Language (WS-BPEL) were represented, and a Process Virtual Machine (PVM), OncePVM, was implemented in compliance with the EPEM.
Cloud computing represents a novel computing model in the contemporary technology world. In a cloud system, the computing power of virtual machines (VMs) and network status can greatly affect the completion time of data-intensive tasks. However, most of the current resource allocation policies focus only on network conditions and physical hosts, and the computing power of VMs is largely ignored. This paper proposes a comprehensive resource allocation policy which consists of a data-intensive task scheduling algorithm that takes account of the computing power of VMs and a VM allocation policy that considers bandwidth between storage nodes and hosts. The VM allocation policy includes VM placement and VM migration algorithms. Related simulations show that the proposed algorithms can greatly reduce the task completion time and keep good load balance of physical hosts at the same time.
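A toy sketch of the two decisions discussed above is given below: each data-intensive task goes to the VM that finishes it earliest given its computing power, and each VM is placed on the host with the widest link to its storage node. All names and numbers are made up for illustration.

```python
# MIPS-aware task scheduling plus bandwidth-aware VM placement (illustrative only).
def schedule_tasks(tasks, vms):
    """tasks: list of (name, length_in_MI); vms: dict vm -> MIPS. Returns a plan."""
    ready = {vm: 0.0 for vm in vms}                           # time each VM becomes free
    plan = []
    for name, length in sorted(tasks, key=lambda t: -t[1]):   # longest task first
        finish = {vm: ready[vm] + length / vms[vm] for vm in vms}
        best = min(finish, key=finish.get)                    # earliest finish time wins
        ready[best] = finish[best]
        plan.append((name, best, round(finish[best], 2)))
    return plan

def place_vm(vm, hosts, bandwidth):
    """bandwidth: dict (host, storage_node) -> Mbps; pick the host with the widest link."""
    return max(hosts, key=lambda h: bandwidth.get((h, vm["storage"]), 0))

tasks = [("t1", 4000), ("t2", 9000), ("t3", 2500)]
vms = {"vm_small": 500, "vm_big": 2000}
print(schedule_tasks(tasks, vms))
print(place_vm({"storage": "s1"}, ["h1", "h2"], {("h1", "s1"): 100, ("h2", "s1"): 400}))
```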
Seismic reservoir prediction plays an important role in oil exploration and development. With the progress of artificial intelligence, many achievements have been made in machine learning seismic reservoir prediction. However, due to factors such as economic cost, exploration maturity, and technical limitations, it is often difficult to obtain a large number of training samples for machine learning. In this case, the prediction accuracy cannot meet the requirements. To overcome this shortcoming, we develop a new machine learning reservoir prediction method based on virtual sample generation. In this method, the virtual samples, which are generated in a high-dimensional hypersphere space, are more consistent with the original data characteristics. Furthermore, at the model-building stage after virtual sample generation, virtual sample screening and iterative model optimization are used to eliminate noise samples and ensure the rationality of the virtual samples. The proposed method has been applied to standard function data and real seismic data. The results show that this method can improve the prediction accuracy of machine learning significantly.
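A minimal sketch of hypersphere-based virtual sample generation is shown below: candidate samples are drawn inside the hypersphere enclosing the training data and then screened by distance to the nearest real sample. This is a generic illustration of the idea, not the authors' exact construction.

```python
# Hypersphere virtual sample generation with a simple distance-based screening step.
import numpy as np

def generate_virtual_samples(X, n_virtual=100, keep_ratio=0.9, seed=0):
    rng = np.random.default_rng(seed)
    center = X.mean(axis=0)
    radius = np.linalg.norm(X - center, axis=1).max()
    d = X.shape[1]
    # Uniform points in a d-dimensional ball: random direction, radius ~ u**(1/d).
    directions = rng.normal(size=(n_virtual, d))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    radii = radius * rng.random(n_virtual) ** (1.0 / d)
    virtual = center + directions * radii[:, None]
    # Screening: drop the virtual samples farthest from any real sample.
    nearest = np.min(np.linalg.norm(virtual[:, None] - X[None, :], axis=2), axis=1)
    keep = nearest.argsort()[: int(keep_ratio * n_virtual)]
    return virtual[keep]

X = np.random.default_rng(1).normal(size=(30, 5))   # small real training set
print(generate_virtual_samples(X).shape)            # -> (90, 5)
```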
With an analysis of the limitations Trusted Computing Group (TCG) has encountered, we argue that the virtual machine monitor (VMM) is the appropriate architecture for implementing the TCG specification. Putting together the VMM architecture, TCG hardware, and an application-oriented "thin" virtual machine (VM), a trusted VMM-based security architecture is presented in this paper, with the character of a reduced and distributed trusted computing base (TCB). It provides isolation and integrity guarantees based on which general security requirements can be satisfied.
A virtual computerized numerical control (CNC) processing system is built for spiral bevel and hypoid gears. The pre-designed process of the solution is investigated to locate the way of realization. A combined programming method and the principle of solid modeling are chosen. Multi-environmental programming and the inter-connection mechanisms between different environments are applied in the proposed system. The problems of data exchange and module compatibility are settled. The environment of the system is founded on object-oriented programming. AutoCAD is used as the graphic environment, Matlab is used for editing the computation module, and Visual C++ 6.0 is the realization environment of the main module. Windows is the platform for realizing the multi-environmental method. Through establishing the virtual system based on the Windows message handling mechanism and the Component Object Model, the application of multi-environmental programming is realized in the manufacturing system simulation. The virtual gear product can be achieved in the accomplished software.
In order to improve the energy efficiency of large-scale data centers, a virtual machine (VM) deployment algorithm called the three-threshold energy saving algorithm (TESA), which is based on the linear relation between energy consumption and (processor) resource utilization, is proposed. In TESA, according to load, hosts in data centers are divided into four classes: hosts with light load, proper load, middle load, and heavy load. Under TESA, VMs on a lightly loaded or heavily loaded host are migrated to another host with proper load, while VMs on a host with proper or middle load are kept in place. Then, based on TESA, five kinds of VM selection policies (minimization of migrations policy based on TESA (MIMT), maximization of migrations policy based on TESA (MAMT), highest potential growth policy based on TESA (HPGT), lowest potential growth policy based on TESA (LPGT), and random choice policy based on TESA (RCT)) are presented, and MIMT is chosen as the representative policy through experimental comparison. Finally, five research directions are put forward on future energy management. The results of simulation indicate that, as compared with the single threshold (ST) algorithm and the minimization of migrations (MM) algorithm, MIMT significantly improves the energy efficiency in data centers.
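A sketch of the three-threshold classification together with an MIMT-style selection is given below: a heavily loaded host gives up the fewest VMs needed to fall back under the upper threshold. The threshold values and the greedy choice are illustrative assumptions, not TESA's actual settings.

```python
# Three-threshold host classification plus a greedy, migration-minimizing selection.
def classify(util, t_light=0.25, t_proper=0.6, t_heavy=0.85):
    if util < t_light:
        return "light"
    if util < t_proper:
        return "proper"
    if util < t_heavy:
        return "middle"
    return "heavy"

def select_vms_mimt(vm_utils, host_util, t_heavy=0.85):
    """Greedy minimization of migrations: move the largest VMs first."""
    selected = []
    for vm, u in sorted(vm_utils.items(), key=lambda kv: -kv[1]):
        if host_util < t_heavy:
            break
        selected.append(vm)
        host_util -= u
    return selected, round(host_util, 3)

host_util = 0.95
vm_utils = {"vm_a": 0.30, "vm_b": 0.25, "vm_c": 0.20, "vm_d": 0.20}
print(classify(host_util))                       # -> heavy
print(select_vms_mimt(vm_utils, host_util))      # -> (['vm_a'], 0.65)
```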