In the graph signal processing (GSP) framework, distributed algorithms are highly desirable for processing signals defined on large-scale networks. However, in most existing distributed algorithms, all nodes perform the local computation homogeneously, which incurs heavy computational and communication costs. Moreover, in many real-world networks, such as those with straggling nodes, the homogeneous manner may result in serious delay or even failure. To this end, we propose active network decomposition algorithms to select non-straggling nodes (normal nodes) that perform the main computation and communication across the network. To accommodate the decomposition in different kinds of networks, two approaches are developed: a centralized decomposition that leverages the adjacency of the network, and a distributed decomposition that employs indicator message transmission between neighboring nodes; these constitute the main contribution of this paper. By incorporating the active decomposition scheme, a distributed Newton method is employed to solve the least squares problem in GSP, where the Hessian inverse is approximately evaluated by patching together a series of inverses of local Hessian matrices, each governed by one normal node. The proposed algorithm inherits the fast convergence of second-order algorithms while maintaining low computational and communication cost. Numerical examples demonstrate the effectiveness of the proposed algorithm.
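The patched-inverse idea can be illustrated in a toy setting. The sketch below is not the paper's exact update rule: the node partition, the 2-unknown problem, and the uniform 1/K weighting of local Newton directions are all illustrative assumptions. Each "normal node" holds a local block (A_k, b_k), inverts only its local Hessian, and the directions are combined; for a consistent system with invertible local Hessians, one patched step already recovers the solution.

```python
# Toy sketch of a Newton-type step for least squares min ||Ax - b||^2,
# where each normal node k holds a local block (A_k, b_k) and inverts only
# its local Hessian H_k = A_k^T A_k. The global inverse Hessian is
# approximated by averaging local Newton directions (the 1/K weight is an
# illustrative choice, not the paper's exact rule).

def mat2_inv(H):
    """Inverse of a 2x2 matrix given as [[a, b], [c, d]]."""
    (a, b), (c, d) = H
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def local_newton_direction(A_k, b_k, x):
    # Local Hessian H = A_k^T A_k and local gradient g = A_k^T (A_k x - b_k)
    H = [[sum(r[i] * r[j] for r in A_k) for j in range(2)] for i in range(2)]
    resid = [sum(r[i] * x[i] for i in range(2)) - bk for r, bk in zip(A_k, b_k)]
    g = [sum(r[i] * e for r, e in zip(A_k, resid)) for i in range(2)]
    Hinv = mat2_inv(H)
    return [Hinv[i][0] * g[0] + Hinv[i][1] * g[1] for i in range(2)]

def patched_newton_step(blocks, x):
    K = len(blocks)
    dirs = [local_newton_direction(A_k, b_k, x) for A_k, b_k in blocks]
    return [x[i] - sum(d[i] for d in dirs) / K for i in range(2)]

# Two normal nodes with consistent measurements of x_true = (2, -1):
blocks = [
    ([[1.0, 0.0], [0.0, 1.0]], [2.0, -1.0]),
    ([[1.0, 1.0], [1.0, -1.0]], [1.0, 3.0]),
]
x = patched_newton_step(blocks, [0.0, 0.0])  # recovers (2, -1) in one step
```

In the general (inconsistent) case the patched direction is only an approximation of the global Newton direction, which is why the algorithm iterates.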
A reduction in network energy consumption and the establishment of green networks have become key scientific problems in academic and industrial research. Existing energy-efficiency schemes are based on a known traffic matrix, and acquiring a real-time traffic matrix in current complex networks is difficult. Therefore, this research investigates how to reduce network energy consumption without a real-time traffic matrix. In particular, this paper proposes an intra-domain energy-efficient routing scheme based on multipath routing. It analyzes the relationship between routing availability and energy-efficient routing and integrates the two mechanisms to satisfy the requirements of availability and energy efficiency. The main contributions are as follows: (1) A link criticality model is proposed to quantitatively measure the importance of links in a network. (2) On the basis of the link criticality model, this paper analyzes an energy-efficient routing technology based on multipath routing to achieve the goals of availability and energy efficiency simultaneously. (3) An energy-efficient routing algorithm based on multipath routing in large-scale networks is proposed. (4) The proposed method does not require a real-time traffic matrix and is thus easy to apply in practice. (5) The proposed algorithm is verified in several network topologies. Experimental results show that the algorithm can not only reduce network energy consumption but also ensure routing availability.
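The abstract does not give the link criticality formula. One natural, betweenness-like proxy, shown below purely as an assumption, is the fraction of node pairs whose shortest path traverses a link: high-criticality links should stay powered on, while low-criticality links are candidates for sleeping.

```python
from collections import deque

# Illustrative link-criticality proxy (not the paper's exact model): the
# fraction of node pairs whose shortest path traverses the link.

def shortest_path(adj, src, dst):
    """BFS shortest path in an unweighted graph; returns a node list."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            break
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    path, u = [], dst
    while u is not None:
        path.append(u)
        u = prev[u]
    return path[::-1]

def link_criticality(adj):
    nodes = sorted(adj)
    pairs = [(s, t) for i, s in enumerate(nodes) for t in nodes[i + 1:]]
    counts = {}
    for s, t in pairs:
        p = shortest_path(adj, s, t)
        for u, v in zip(p, p[1:]):
            e = tuple(sorted((u, v)))
            counts[e] = counts.get(e, 0) + 1
    return {e: c / len(pairs) for e, c in counts.items()}

# Triangle a-b-c plus a pendant node d: the bridge link (c, d) carries every
# shortest path toward d and is the most critical.
adj = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}
crit = link_criticality(adj)  # crit[("c", "d")] == 0.5
```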
With an increasingly urgent demand for fast-recovery routing mechanisms in large-scale networks, minimizing the disruption caused by network failure has become critical. A large number of studies have shown that network failures occur on the Internet inevitably and frequently. The routing protocols currently deployed on the Internet adopt a reconvergence mechanism to cope with network failures. During the reconvergence process, packets may be lost because of inconsistent routing information, which greatly reduces the network's availability and seriously affects the Internet service provider's (ISP's) service quality and reputation. Therefore, improving network availability has become an urgent problem. The Internet Engineering Task Force suggests the use of the downstream path criterion (DC) to address all single-link failure scenarios. However, existing methods for implementing DC schemes are time-consuming, require a large amount of router CPU resources, and may degrade router capability. The computation overhead introduced by existing DC schemes is thus significant, especially in large-scale networks. Therefore, this study proposes an efficient intra-domain routing protection algorithm (ERPA) for large-scale networks. Theoretical analysis indicates that the time complexity of ERPA is less than that of constructing a shortest-path tree. Experimental results show that ERPA can reduce the computation overhead significantly compared with existing algorithms while offering the same network availability as DC.
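The downstream path criterion itself is simple to state: a neighbor n of node s is a loop-free backup next hop toward destination d if dist(n, d) < dist(s, d), so the backup can never loop traffic back through s. A minimal sketch, using BFS distances on an unweighted topology (real IGPs use weighted shortest paths, and the expensive part ERPA targets is computing these distances efficiently):

```python
from collections import deque

# Sketch of the downstream path criterion (DC): neighbor n of node s is a
# loop-free backup next hop toward destination d iff dist(n, d) < dist(s, d).
# Distances come from BFS here (unweighted topology, illustrative only).

def bfs_dist(adj, src):
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def dc_backups(adj, s, d):
    dist_to_d = bfs_dist(adj, d)  # dist(x, d) for every node x
    return [n for n in adj[s] if dist_to_d[n] < dist_to_d[s]]

# Ring topology a-b-d-c-a: from s=a toward d, both neighbors of a are one
# hop closer to d than a is, so both satisfy DC.
adj = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a", "d"], "d": ["b", "c"]}
backups = dc_backups(adj, "a", "d")
```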
Recent advances in sensing, communication, and computing have opened the door to the deployment of large-scale networks of sensors and actuators that allow fine-grained monitoring and control of a multitude of physical processes and infrastructures. Field experts call these paradigms Cyber-Physical Systems (CPS) because the dynamics among computers, networking media/resources, and physical systems interact in a way that requires multi-disciplinary technologies (embedded systems, computers, communications, and controls) to accomplish prescribed missions. Moreover, CPS are expected to play a significant role in the design and development of future engineering applications such as smart grids, transportation systems, nuclear plants, and smart factories.
A major challenge of network virtualization is the virtual network resource allocation problem, which deals with efficiently mapping virtual nodes and virtual links onto the substrate network resources. However, existing algorithms concentrate almost exclusively on randomly generated small-scale network topologies and are not suitable for practical large-scale network environments, because more time is spent traversing the substrate network (SN) and virtual network (VN), resulting in congestion of VN requests. To address this problem, a virtual network mapping algorithm for large-scale networks is proposed based on the small-world characteristic of complex networks and a network coordinate system. Compared with algorithm D-ViNE, experimental results show that our algorithm improves the overall performance.
The virtual network embedding (VNE) problem, which is NP-hard, is a key issue in implementing the software-defined networks brought about by network virtualization. In contrast to studies that focus on designing heuristic algorithms to reduce the hardness of the NP-hard problem, we propose a robust VNE algorithm based on component connectivity in large-scale networks. We distinguish the different components and embed VN requests onto them respectively, and k-core decomposition is applied to identify different VN topologies so that each VN request can be embedded onto its corresponding component. Load balancing is also considered, which avoids blocked or bottlenecked areas of the substrate network. Simulation experiments show that, compared with other algorithms in large-scale networks, our algorithm obviously improves acceptance ratio, average revenue, and robustness while reducing average cost. The experiments also reveal the relationship between component connectivity (including the giant component and small components) and the performance metrics.
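The k-core identification step can be sketched with the standard iterative pruning definition: the k-core is the maximal subgraph in which every node has degree at least k. The topology below is illustrative, not taken from the paper.

```python
# Minimal k-core extraction by iterative degree pruning: repeatedly remove
# nodes whose degree (within the surviving set) falls below k. Shown only to
# illustrate how a topology could be stratified by coreness before embedding.

def k_core(adj, k):
    nodes = set(adj)
    changed = True
    while changed:
        changed = False
        for u in list(nodes):
            deg = sum(1 for v in adj[u] if v in nodes)
            if deg < k:
                nodes.remove(u)
                changed = True
    return nodes

# Triangle a-b-c plus pendant node d attached to c: the 2-core drops d.
adj = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}
core2 = k_core(adj, 2)  # {"a", "b", "c"}
```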
Many science and engineering applications involve solving a linear least-squares system formed from field measurements. In distributed cyber-physical systems (CPS), each sensor node used for measurement often knows only partial independent rows of the least-squares system. To solve the least-squares problem centrally, all the measurements must be gathered at a centralized location where the computation is then performed. Such data collection and computation are inefficient because of bandwidth and time constraints, and are sometimes infeasible because of data-privacy concerns. Iterative methods are natural candidates for solving this problem, and there are many studies on them; however, most proposed solutions concern centralized/parallel computation, while only a few have the potential to be applied in distributed networks. Distributed computation is strongly preferred or demanded in many real-world applications, e.g., smart grids and target tracking. This paper surveys the representative iterative methods for distributed least-squares in networks.
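One classic iterative method in this family is the Kaczmarz method: the running estimate is passed from node to node, and each node projects it onto the hyperplane defined by its own row, so raw measurements never leave the node. A minimal sketch, assuming a consistent system (the node-ring order and the example rows are illustrative):

```python
# Kaczmarz-style sketch of distributed least squares: each node holds one
# row (a_i, b_i) of a consistent system A x = b; the estimate is passed
# around and each node projects it onto its own hyperplane a_i . x = b_i.

def project_onto_row(x, a, b):
    dot = sum(ai * xi for ai, xi in zip(a, x))
    norm2 = sum(ai * ai for ai in a)
    step = (b - dot) / norm2
    return [xi + step * ai for xi, ai in zip(x, a)]

def kaczmarz_sweeps(rows, x, n_sweeps):
    for _ in range(n_sweeps):
        for a, b in rows:          # one pass over the ring of nodes
            x = project_onto_row(x, a, b)
    return x

# Two sensor nodes measuring x_true = (2, 1); the rows happen to be
# orthogonal, so one sweep is already exact.
rows = [([1.0, 1.0], 3.0), ([1.0, -1.0], 1.0)]
x = kaczmarz_sweeps(rows, [0.0, 0.0], 5)  # -> [2.0, 1.0]
```

For general (non-orthogonal) consistent rows the same sweeps converge linearly rather than in one pass, which is what makes the method a natural fit for repeated message passing.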
With the development of big data and social computing, large-scale group decision making (LGDM) is now merging with social networks. Using social network analysis (SNA), this study proposes an LGDM consensus model that considers the trust relationships among decision makers (DMs). In the consensus-measurement process, the social network is constructed according to the social relationships among DMs, and the Louvain method is introduced to partition the network into subgroups. The weights of each decision maker and each subgroup are computed by combining comprehensive network weights and trust weights. In the consensus-improvement process, a feedback mechanism with four identification rules and two direction rules is designed to guide the improvement. Based on the trust relationships among DMs, the preferences are modified, and the corresponding social network is updated to accelerate consensus. Compared with previous research, the proposed model not only allows the subgroups to be reconstructed and updated during the adjustment process but also improves the accuracy of the adjustment through the feedback mechanism. Finally, an example analysis is conducted to verify the effectiveness and flexibility of the proposed method, and comparison with previous studies highlights its superiority in solving the LGDM problem.
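The abstract does not give the weight or consensus formulas, so the sketch below is purely illustrative: a DM's weight mixes a network weight (normalized out-degree in the trust network) with a trust weight (normalized trust received), and the group consensus level is one minus the weighted mean deviation of individual preferences from the weighted group preference. The mixing parameter beta, the trust values, and the preference scale are all assumptions.

```python
# Illustrative sketch (not the paper's exact formulas): combine a network
# weight and a trust weight per DM, then score consensus in [0, 1].

def dm_weights(trust, beta=0.5):
    dms = sorted(trust)
    deg = {d: sum(1 for t in trust[d].values() if t > 0) for d in dms}
    recv = {d: sum(trust[o].get(d, 0.0) for o in dms) for d in dms}
    w = {}
    for d in dms:
        net = deg[d] / max(sum(deg.values()), 1)          # network weight
        tr = recv[d] / max(sum(recv.values()), 1e-12)     # trust weight
        w[d] = beta * net + (1 - beta) * tr
    s = sum(w.values())
    return {d: wi / s for d, wi in w.items()}             # normalized

def consensus_level(prefs, weights):
    group = sum(weights[d] * prefs[d] for d in prefs)
    return 1.0 - sum(weights[d] * abs(prefs[d] - group) for d in prefs)

trust = {"d1": {"d2": 0.9, "d3": 0.4}, "d2": {"d1": 0.8}, "d3": {"d1": 0.6}}
w = dm_weights(trust)
cl = consensus_level({"d1": 0.7, "d2": 0.6, "d3": 0.8}, w)
```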
Objective: Epigenetic abnormalities play a critical role in breast cancer by regulating gene expression; however, the intricate interrelationships and key roles of the approximately 400 epigenetic regulators in breast cancer remain elusive. It is important to decipher the comprehensive epigenetic regulatory network in breast cancer cells to identify master epigenetic regulators and potential therapeutic targets. Methods: We employed high-throughput sequencing-based high-throughput screening (HTS2) to detect changes in the expression of 2,986 genes following the knockdown of 400 epigenetic regulators. Bioinformatics tools were then applied to the resulting gene expression signatures to investigate epigenetic regulation in breast cancer. Results: Using these gene expression signatures, we classified the epigenetic regulators into five distinct clusters, each characterized by specific functions. We discovered functional similarities between BAZ2B and SETMAR, as well as between CLOCK and CBX3, and observed that CLOCK functions in a manner opposite to that of HDAC8 in downstream gene regulation. Notably, we constructed an epigenetic regulatory network based on the gene expression signatures, which revealed 8 distinct modules and identified 10 master epigenetic regulators in breast cancer. Conclusions: Our work deciphers the extensive regulation among hundreds of epigenetic regulators. The identification of 10 master epigenetic regulators offers promising therapeutic targets for breast cancer treatment.
Assessment of past-climate simulations by regional climate models (RCMs) is important for understanding the reliability of RCMs when used to project future regional climate. Here, we assess the performance of, and discuss possible causes of biases in, a WRF-based RCM with a grid spacing of 50 km, named WRFG, from the North American Regional Climate Change Assessment Program (NARCCAP) in simulating wet-season precipitation over the Central United States for a period with available observational data. The RCM reproduces key features of the precipitation distribution during late spring to early summer, although it tends to underestimate the magnitude of precipitation. This dry bias is partially due to the model's lack of skill in simulating nocturnal precipitation, related to the absence of eastward-propagating convective systems in the simulation. Inaccuracy in reproducing the large-scale circulation and environmental conditions is another contributing factor. The simulated pressure gradient between the Rocky Mountains and the Gulf of Mexico is too weak, resulting in weaker southerly winds between them and reduced transport of warm moist air from the Gulf to the Central Great Plains. The simulated low-level horizontal convergence fields are also less favorable for upward motion than in the NARR, and hence for the development of moist convection. Therefore, a careful examination of an RCM's deficiencies and identification of the sources of error are important when using the RCM to project precipitation changes in future climate scenarios.
Accurate positioning is one of the essential requirements for numerous applications of remote sensing data, especially in the event of a noisy or unreliable satellite signal. Toward this end, we present a novel framework for large-range aircraft geo-localization that requires only a downward-facing monocular camera, an altimeter, a compass, and an open-source Vector Map (VMAP). The algorithm combines matching and particle-filter methods. A shape vector and the correlation between two building contour vectors are defined, and a coarse-to-fine building vector matching (CFBVM) method is proposed for the matching stage, in which the original matching results are described by a Gaussian mixture model (GMM). Subsequently, an improved resampling strategy is designed to reduce the computational expense of a huge number of initial particles, and a credibility indicator is designed to avoid localization mistakes in the particle-filter stage. An experimental evaluation based on flight data is provided. On a flight at a height of 0.2 km over a distance of 2 km, the aircraft is geo-localized in a reference map of 11,025 km² using 0.09 km² aerial images without any prior information. The absolute localization error is less than 10 m.
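The improved resampling strategy itself is not reproduced here; a common low-variance baseline it would be compared against is systematic resampling, where a single uniform offset generates N evenly spaced positions over the cumulative weights. The weights and offset below are illustrative.

```python
# Baseline low-variance (systematic) resampling for a particle filter: one
# uniform offset u0 in [0, 1) generates N evenly spaced positions over the
# cumulative weights, selecting particles in O(N) and dropping those with
# negligible weight. (Standard baseline, not the paper's improved strategy.)

def systematic_resample(weights, u0):
    n = len(weights)
    positions = [(i + u0) / n for i in range(n)]
    cumulative, c = [], 0.0
    for w in weights:
        c += w
        cumulative.append(c)
    indices, j = [], 0
    for p in positions:
        while cumulative[j] < p:
            j += 1
        indices.append(j)
    return indices

# Half the particles carry all the weight; with u0 = 0.1 the zero-weight
# particles 2 and 3 are never selected.
idx = systematic_resample([0.5, 0.5, 0.0, 0.0], 0.1)  # -> [0, 0, 1, 1]
```

In a real filter u0 would be drawn once per resampling step from Uniform(0, 1); it is fixed here only to keep the example deterministic.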
Background: A task assigned to space exploration satellites involves detecting the physical environment within a certain region of space. However, space detection data are complex and abstract, and are not conducive to researchers' visual perception of the evolution and interaction of events in the space environment. Methods: A time-series dynamic data sampling method for large-scale space was proposed to sample detection data in space and time, and the corresponding relationships between data location features and other attribute features were established. A tone-mapping method based on statistical histogram equalization was proposed and applied to the final attribute feature data. The visualization process was optimized for rendering by merging materials, reducing the number of patches, and performing other operations. Results: Sampling, feature extraction, and uniform visualization were achieved for detection data of complex types, long time spans, and uneven spatial distributions. The real-time visualization of large-scale spatial structures using augmented reality devices, particularly low-performance devices, was also investigated. Conclusions: The proposed visualization system can reconstruct the three-dimensional structure of a large-scale space, express the structure and changes of the spatial environment using augmented reality, and assist in intuitively discovering spatial environmental events and evolutionary rules.
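The histogram-equalization step of the tone mapping can be sketched with the standard CDF remapping: attribute values are pushed through their empirical cumulative histogram so that the output occupies the display range more uniformly. The 256-level output range and the sample values are illustrative choices, not details from the paper.

```python
# Sketch of histogram-equalization tone mapping: remap values through their
# empirical CDF into [0, levels - 1] (256 levels assumed for illustration).

def equalize(values, levels=256):
    hist = {}
    for v in values:
        hist[v] = hist.get(v, 0) + 1
    cdf, c = {}, 0
    for v in sorted(hist):
        c += hist[v]
        cdf[v] = c
    cdf_min = min(cdf.values())
    n = len(values)
    if n == cdf_min:               # all values identical -> flat output
        return [0 for _ in values]
    scale = (levels - 1) / (n - cdf_min)
    return [round((cdf[v] - cdf_min) * scale) for v in values]

# Clustered attribute values get spread across the full display range:
out = equalize([10, 10, 12, 14])  # -> [0, 0, 128, 255]
```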
WiFi has become one of the most popular ways to access the Internet. However, in large-scale campus wireless networks, it is challenging for network administrators to provide optimized access quality without knowledge of fine-grained traffic characteristics and real network performance. In this paper, we implement PerfMon, a network performance measurement and diagnosis system that integrates multi-source datasets and analysis methods. Based on PerfMon, we first conduct a comprehensive, multi-dimensional measurement of application-level traffic patterns and behaviors in the wireless network of T university (TWLAN), one of the largest campus wireless networks. We then systematically study application-level network performance. We observe that application-level traffic behaviors and performance vary greatly across locations and device types, and the performance is far from satisfactory in some cases. To diagnose these problems, we distinguish locations and device types and locate the most crucial factors that affect performance. Case studies show that the identified factors can effectively characterize performance changes and explain performance degradation.
The financial aspects of large-scale engineering construction projects profoundly influence their success. Strengthening cost control and establishing a scientific financial evaluation system can enhance a project's economic benefits, minimize unnecessary costs, and provide decision-makers with a robust financial foundation. Additionally, implementing an effective cash-flow control mechanism and conducting a comprehensive assessment of potential project risks can ensure financial stability and mitigate the risk of fund shortages. Developing a practical and feasible fundraising plan, along with stringent fund-management practices, can prevent fund wastage and optimize fund-utilization efficiency. These measures not only facilitate smooth project progression and improve project-management efficiency but also enhance the project's economic and social outcomes.
The global energy transition is a widespread phenomenon that requires international exchange of experience and mutual learning. Germany's success in the first phase of its energy transition can be attributed to its adoption of smart energy technology and its implementation of electricity futures and spot marketization, which enabled multiple spatial-temporal energy complementarities and overall grid balance through energy conversion and reconversion technologies. While China can draw on Germany's experience to inform its own energy transition, its annual electricity consumption, eleven times higher, requires a distinct approach. We recommend a clean energy system based on smart sector coupling (ENSYSCO) as a suitable pathway for achieving sustainable energy in China, given that renewable energy is expected to supply 85% of China's energy production by 2060, requiring significant future electricity storage capacity. Nonetheless, renewable energy storage remains a significant challenge. We propose four large-scale underground energy storage methods based on ENSYSCO to address this challenge while considering China's national conditions. These proposals have culminated in pilot projects for large-scale underground energy storage in China, which we believe is a necessary choice for achieving carbon neutrality in China and enabling efficient and safe grid integration of renewable energy within the framework of ENSYSCO.
As a result of rapid development in electronics and communication technology, large-scale unmanned aerial vehicles (UAVs) are being harnessed for various promising applications in a coordinated manner. Despite the numerous advantages, resource management across the various domains of large-scale UAV communication networks is a key challenge that must be solved urgently. In particular, given the inherent requirements and future development trends, distributed resource management is appropriate. In this article, we investigate the resource management problem for large-scale UAV communication networks from a game-theoretic perspective, which coincides exactly with the distributed and autonomous manner of such networks. By exploring their inherent features, the distinctive challenges are discussed. We then explore several game-theoretic models that not only combat these challenges but also have broad application prospects, providing the basics of each model and discussing its potential applications for resource management in large-scale UAV communication networks; specifically, the mean-field game, graphical game, Stackelberg game, coalition game, and potential game are included. After that, we present two innovative case studies to highlight the feasibility of these novel game-theoretic models. Finally, we give some future research directions to shed light on future opportunities and applications.
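Among the listed models, the potential game is the easiest to sketch concretely. The example below is an assumption-laden toy, not a case study from the article: UAVs choosing transmission channels form a congestion game (an exact potential game), so asynchronous best-response updates are guaranteed to converge to a pure Nash equilibrium.

```python
# Toy potential game: each UAV selects one channel; its cost is the number
# of UAVs sharing that channel (a congestion game, hence an exact potential
# game). Best-response dynamics converge to a pure Nash equilibrium.

def cost(choice, choices, player):
    # Congestion cost: 1 + number of other players on the same channel.
    return sum(1 for p, c in choices.items() if c == choice and p != player) + 1

def best_response_dynamics(players, channels, choices):
    improved = True
    while improved:
        improved = False
        for p in players:
            best = min(channels, key=lambda ch: cost(ch, choices, p))
            if cost(best, choices, p) < cost(choices[p], choices, p):
                choices[p] = best
                improved = True
    return choices

# Four UAVs all start on channel 0; the dynamics spread them evenly.
players = ["u1", "u2", "u3", "u4"]
channels = [0, 1]
eq = best_response_dynamics(players, channels, {p: 0 for p in players})
load = [sum(1 for p in players if eq[p] == ch) for ch in channels]  # [2, 2]
```

Convergence here follows from the potential-function argument: every unilateral improvement strictly decreases the potential, so the dynamics cannot cycle.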
A Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN) has driven tremendous improvements over acoustic models based on the Gaussian Mixture Model (GMM). However, such hybrid models require a force-aligned Hidden Markov Model (HMM) state sequence obtained from a GMM-based acoustic model, and therefore require a long computation time to train both the GMM-based acoustic model and the deep-learning-based acoustic model. To solve this problem, an acoustic model using the CTC algorithm is proposed. The CTC algorithm does not require a GMM-based acoustic model because it does not use a force-aligned HMM state sequence. However, previous work on LSTM RNN-based acoustic models using CTC used small-scale training corpora. In this paper, an LSTM RNN-based acoustic model using CTC is trained on a large-scale training corpus and its performance is evaluated. The implemented acoustic model achieves a Word Error Rate (WER) of 6.18% for clean speech and 15.01% for noisy speech, similar to the performance of the acoustic model based on the hybrid method.
System design and optimization problems require large-scale chemical kinetic models. Pure kinetic models of naphtha pyrolysis need to solve a complete set of stiff ODEs and are therefore too computationally expensive. On the other hand, artificial neural networks that completely neglect the topology of the reaction network often generalize poorly. In this paper, a framework is proposed for learning local representations from large-scale chemical reaction networks. First, the features of naphtha pyrolysis reactions are extracted by applying complex-network characterization methods. The selected features are then used as inputs to convolutional architectures. Different CNN models are established and compared to optimize the neural network structure. After the pre-training and fine-tuning steps, the final CNN model reduces the computational cost of the previous kinetic model by over 300 times and predicts the yields of the main products with an average error of less than 3%. The obtained results demonstrate the high efficiency of the proposed framework.
To improve the ductility of commercial WE43 alloy and reduce its cost, a Mg-3Y-2Gd-1Nd-0.4Zr alloy with a low rare-earth content was developed and prepared by sand casting with a differential-pressure casting system. Its microstructure, mechanical properties, and fracture behavior in the as-cast, solution-treated, and as-aged states were evaluated. The aged alloy exhibits excellent comprehensive mechanical properties owing to the fine, dense, plate-shaped β′ precipitates formed on prismatic habit planes during aging at 200 °C for 192 h after solution treatment at 500 °C for 24 h. Its ultimate tensile strength, yield strength, and elongation reach 319±10 MPa, 202±2 MPa, and 8.7±0.3% at ambient temperature, and 230±4 MPa, 155±1 MPa, and 16.0±0.5% at 250 °C. The fracture mode of the as-aged alloy changes from cleavage at room temperature to quasi-cleavage and ductile fracture at a test temperature of 300 °C. The properties of large-scale components fabricated from the developed Mg-3Y-2Gd-1Nd-0.4Zr alloy are better than those of commercial WE43 alloy, suggesting that the new alloy is a good candidate for fabricating large, complex, thin-walled components.
Funding: supported by the National Natural Science Foundation of China (Grant No. 61761011) and the Natural Science Foundation of Guangxi (Grant No. 2020GXNSFBA297078).
Funding: supported by the Program of Hainan Association for Science and Technology Plans to Youth R&D Innovation (QCXM201910), the National Natural Science Foundation of China (Nos. 61702315, 61802092), the Applied Basic Research Plan of Shanxi Province (No. 2201901D211168), and the Key R&D Program (International Science and Technology Cooperation Project) of Shanxi Province, China (No. 201903D421003).
Funding: supported by the National Natural Science Foundation of China (No. 61702315), the Key R&D Program (International Science and Technology Cooperation Project) of Shanxi Province, China (No. 201903D421003), and the National Key Research and Development Program of China (No. 2018YFB1800401).
文摘With an increasing urgent demand for fast recovery routing mechanisms in large-scale networks,minimizing network disruption caused by network failure has become critical.However,a large number of relevant studies have shown that network failures occur on the Internet inevitably and frequently.The current routing protocols deployed on the Internet adopt the reconvergence mechanism to cope with network failures.During the reconvergence process,the packets may be lost because of inconsistent routing information,which reduces the network’s availability greatly and affects the Internet service provider’s(ISP’s)service quality and reputation seriously.Therefore,improving network availability has become an urgent problem.As such,the Internet Engineering Task Force suggests the use of downstream path criterion(DC)to address all single-link failure scenarios.However,existing methods for implementing DC schemes are time consuming,require a large amount of router CPU resources,and may deteriorate router capability.Thus,the computation overhead introduced by existing DC schemes is significant,especially in large-scale networks.Therefore,this study proposes an efficient intra-domain routing protection algorithm(ERPA)in large-scale networks.Theoretical analysis indicates that the time complexity of ERPA is less than that of constructing a shortest path tree.Experimental results show that ERPA can reduce the computation overhead significantly compared with the existing algorithms while offering the same network availability as DC.
Abstract: Recent advances in sensing, communication and computing have opened the door to the deployment of large-scale networks of sensors and actuators that allow fine-grained monitoring and control of a multitude of physical processes and infrastructures. The appellation used by field experts for these paradigms is Cyber-Physical Systems (CPS), because the dynamics among computers, networking media/resources and physical systems interact in such a way that multi-disciplinary technologies (embedded systems, computers, communications and controls) are required to accomplish prescribed missions. Moreover, they are expected to play a significant role in the design and development of future engineering applications such as smart grids, transportation systems, nuclear plants and smart factories.
Funding: Sponsored by the Funds for Creative Research Groups of China (Grant No. 60821001), the National Natural Science Foundation of China (Grant Nos. 60973108 and 60902050), and the 973 Project of China (Grant No. 2007CB310703).
Abstract: A major challenge of network virtualization is the virtual network resource allocation problem, which deals with the efficient mapping of virtual nodes and virtual links onto substrate network resources. However, existing algorithms mostly concentrate on randomly generated small-scale network topologies, which are not suitable for practical large-scale network environments, because more time is spent traversing the substrate network (SN) and virtual network (VN), resulting in congestion of VN requests. To address this problem, a virtual network mapping algorithm for large-scale networks is proposed based on the small-world characteristic of complex networks and a network coordinate system. Comparing our algorithm with the D-ViNE algorithm, experimental results show that our algorithm improves overall performance.
Funding: supported in part by the National Natural Science Foundation of China under Grant No. 61471055.
Abstract: The virtual network embedding (VNE) problem, which is NP-hard, is a key issue in implementing the software-defined networks brought about by network virtualization. In contrast to other studies, which focus on designing heuristic algorithms to reduce the hardness of the NP-hard problem, we propose a robust VNE algorithm based on component connectivity in large-scale networks. We distinguish the different components and embed VN requests onto them respectively, and k-core decomposition is applied to identify different VN topologies so that each VN request can be embedded onto its corresponding component. Load balancing is also considered in this paper; it avoids blocked or bottlenecked areas of the substrate network. Simulation experiments show that, compared with other algorithms in large-scale networks, our algorithm obviously improves the acceptance ratio, average revenue and robustness while reducing the average cost. The experiments also show the relationship between component connectivity, including the giant component and small components, and the performance metrics.
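The k-core used here to characterize VN topologies can be computed by repeatedly peeling off nodes of degree below k until none remain; a minimal sketch (the function name and adjacency format are illustrative, not the paper's API):

```python
def k_core(adj, k):
    """Return the node set of the k-core: the maximal subgraph in which
    every node has degree >= k.  adj maps each node to a set of neighbors."""
    adj = {u: set(vs) for u, vs in adj.items()}  # work on a copy
    changed = True
    while changed:
        changed = False
        for u in list(adj):
            if u in adj and len(adj[u]) < k:
                for v in adj.pop(u):             # peel u and detach it
                    if v in adj:
                        adj[v].discard(u)
                changed = True                   # peeling may expose new low-degree nodes
    return set(adj)
```

For a triangle a-b-c with a pendant node d attached to a, the 2-core is exactly the triangle: d is peeled first, and the remaining nodes all keep degree 2.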
Funding: partially supported by the US NSF under Grant Nos. NSF-CNS-1066391, NSF-CNS-0914371, NSF-CPS-1135814 and NSF-CDI-1125165.
Abstract: Many science and engineering applications involve solving a linear least-squares system formed from field measurements. In distributed cyber-physical systems (CPS), each sensor node used for measurement often knows only some independent rows of the least-squares system. To solve the least-squares problem, all the measurements must be gathered at a centralized location, where the computation is then performed. Such data collection and computation are inefficient because of bandwidth and time constraints, and are sometimes infeasible because of data privacy concerns. Iterative methods are natural candidates for solving the aforementioned problem, and there are many studies regarding this. However, most of the proposed solutions concern centralized/parallel computation, while only a few have the potential to be applied in distributed networks. Distributed computation is strongly preferred or demanded in many real-world applications, e.g., smart grids and target tracking. This paper surveys representative iterative methods for distributed least-squares in networks.
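One of the simplest iterative methods in this family is distributed gradient descent (DGD): each node averages its estimate with its neighbors' estimates and then steps along the gradient of its own local rows. The following toy sketch (the setup and step size are illustrative assumptions, not a specific surveyed method) solves a two-unknown system whose rows are split across three fully connected nodes:

```python
def matvec(A, x):
    """Matrix-vector product with plain lists."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def local_grad(A, b, x):
    """Gradient of (1/2)||A x - b||^2 for one node's partial rows."""
    r = [ri - bi for ri, bi in zip(matvec(A, x), b)]
    return [sum(A[i][j] * r[i] for i in range(len(A))) for j in range(len(x))]

def dgd(blocks, neighbors, steps=500, alpha=0.1):
    """Distributed gradient descent: each node keeps a local estimate,
    averages it with its neighbors' estimates (self included), then takes
    a gradient step using only its own rows (A_i, b_i)."""
    n = len(blocks[0][0][0])                  # number of unknowns
    x = [[0.0] * n for _ in blocks]           # one estimate per node
    for _ in range(steps):
        avg = [[sum(x[j][c] for j in nb) / len(nb) for c in range(n)]
               for nb in neighbors]           # consensus averaging step
        x = []
        for i, (A, b) in enumerate(blocks):
            g = local_grad(A, b, avg[i])
            x.append([avg[i][c] - alpha * g[c] for c in range(n)])
    return x
```

For a consistent system, every node's estimate converges to the global least-squares solution even though no node ever sees another node's measurements, which is the privacy and bandwidth argument the survey makes.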
Funding: The work was supported by the Humanities and Social Sciences Fund of the Ministry of Education (No. 22YJA630119), the National Natural Science Foundation of China (No. 71971051), and the Natural Science Foundation of Hebei Province (No. G2021501004).
Abstract: With the development of big data and social computing, large-scale group decision making (LGDM) is now merging with social networks. Using social network analysis (SNA), this study proposes an LGDM consensus model that considers the trust relationships among decision makers (DMs). In the consensus measurement process, the social network is constructed according to the social relationships among DMs, and the Louvain method is introduced to partition the social network into subgroups. In this study, the weights of each DM and each subgroup are computed from comprehensive network weights and trust weights. In the consensus improvement process, a feedback mechanism with four identification rules and two direction rules is designed to guide the improvement process. Based on the trust relationships among DMs, the preferences are modified and the corresponding social network is updated to accelerate consensus. Compared with previous research, the proposed model not only allows the subgroups to be reconstructed and updated during the adjustment process but also improves the accuracy of the adjustment through the feedback mechanism. Finally, an example analysis is conducted to verify the effectiveness and flexibility of the proposed method, and comparison with previous studies highlights its superiority in solving the LGDM problem.
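As a minimal illustration of the consensus-measurement step (the model's trust weighting and subgroup structure are omitted; this is only a common textbook-style measure, not the paper's exact formula), a group consensus degree can be taken as one minus the mean absolute deviation of each DM's pairwise preference matrix from the collective average:

```python
def consensus_degree(prefs):
    """Group consensus degree in [0, 1]: 1 minus the mean absolute deviation
    of each DM's pairwise-preference matrix from the collective (average)
    matrix.  prefs is a list of n x n matrices with entries in [0, 1]."""
    m, n = len(prefs), len(prefs[0])
    coll = [[sum(p[i][j] for p in prefs) / m for j in range(n)]
            for i in range(n)]                 # collective preference matrix
    dev = sum(abs(p[i][j] - coll[i][j])
              for p in prefs for i in range(n) for j in range(n))
    return 1 - dev / (m * n * n)
```

A feedback mechanism like the one in the abstract would then nudge the most deviating DMs' entries toward the collective matrix until this degree exceeds a preset threshold.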
Funding: supported by grants from the National Natural Science Foundation of China (Grant No. 82172723), the Natural Science Foundation of Sichuan (Grant Nos. 2023NSFSC1828 and 2022NSFSC1289), the "Xinglin Scholar" Scientific Research Promotion Plan of Chengdu University of Traditional Chinese Medicine (Grant No. BSH2021003), the Innovation Team and Talents Cultivation Program of the National Administration of Traditional Chinese Medicine (Grant No. ZYYCXTD-D-202209), and the Research Funding of the Department of Science and Technology of Qinghai Province (Grant No. 2023-ZJ-729).
Abstract: Objective: Epigenetic abnormalities play a critical role in breast cancer by regulating gene expression; however, the intricate interrelationships and key roles of the approximately 400 epigenetic regulators in breast cancer remain elusive. It is important to decipher the comprehensive epigenetic regulatory network in breast cancer cells to identify master epigenetic regulators and potential therapeutic targets. Methods: We employed high-throughput sequencing-based high-throughput screening (HTS²) to effectively detect changes in the expression of 2,986 genes following the knockdown of 400 epigenetic regulators. Bioinformatics analysis tools were then applied to the resulting gene expression signatures to investigate epigenetic regulation in breast cancer. Results: Utilizing these gene expression signatures, we classified the epigenetic regulators into five distinct clusters, each characterized by specific functions. We discovered functional similarities between BAZ2B and SETMAR, as well as between CLOCK and CBX3. Moreover, we observed that CLOCK functions in a manner opposite to that of HDAC8 in downstream gene regulation. Notably, we constructed an epigenetic regulatory network based on the gene expression signatures, which revealed 8 distinct modules and identified 10 master epigenetic regulators in breast cancer. Conclusions: Our work deciphered the extensive regulation among hundreds of epigenetic regulators. The identification of 10 master epigenetic regulators offers promising therapeutic targets for breast cancer treatment.
Abstract: Assessment of past-climate simulations of regional climate models (RCMs) is important for understanding the reliability of RCMs when used to project future regional climate. Here, we assess the performance and discuss possible causes of biases in a WRF-based RCM with a grid spacing of 50 km, named WRFG, from the North American Regional Climate Change Assessment Program (NARCCAP) in simulating wet season precipitation over the Central United States for a period when observational data are available. The RCM reproduces key features of the precipitation distribution characteristics during late spring to early summer, although it tends to underestimate the magnitude of precipitation. This dry bias is partially due to the model's lack of skill in simulating nocturnal precipitation, related to the lack of eastward-propagating convective systems in the simulation. Inaccuracy in reproducing large-scale circulation and environmental conditions is another contributing factor. The too-weak simulated pressure gradient between the Rocky Mountains and the Gulf of Mexico results in weaker southerly winds in between, leading to a reduction of warm moist air transport from the Gulf to the Central Great Plains. The simulated low-level horizontal convergence fields are also less favorable for upward motion than in the NARR and, hence, for the development of moist convection. Therefore, a careful examination of an RCM's deficiencies and the identification of the sources of errors are important when using the RCM to project precipitation changes in future climate scenarios.
Abstract: Accurate positioning is one of the essential requirements for numerous applications of remote sensing data, especially in the event of a noisy or unreliable satellite signal. Toward this end, we present a novel framework for aircraft geo-localization over a large range that only requires a downward-facing monocular camera, an altimeter, a compass, and an open-source Vector Map (VMAP). The algorithm combines matching and particle filter methods. A shape vector and the correlation between two building contour vectors are defined, and a coarse-to-fine building vector matching (CFBVM) method is proposed for the matching stage, in which the original matching results are described by a Gaussian mixture model (GMM). Subsequently, an improved resampling strategy is designed to reduce the computing expense of a huge number of initial particles, and a credibility indicator is designed to avoid location mistakes in the particle filter stage. An experimental evaluation of the approach based on flight data is provided. On a flight at a height of 0.2 km over a flight distance of 2 km, the aircraft is geo-localized in a reference map of 11,025 km² using 0.09 km² aerial images without any prior information. The absolute localization error is less than 10 m.
Abstract: Background: A task assigned to space exploration satellites involves detecting the physical environment within a certain space. However, space detection data are complex and abstract, and are not conducive to researchers' visual perception of the evolution and interaction of events in the space environment. Methods: A time-series dynamic data sampling method for large-scale space was proposed to sample detection data in space and time, and the correspondence between data location features and other attribute features was established. A tone-mapping method based on statistical histogram equalization was proposed and applied to the final attribute feature data. The visualization process was optimized for rendering by merging materials, reducing the number of patches, and performing other operations. Results: The results of sampling, feature extraction, and uniform visualization of detection data with complex types, long duration spans, and uneven spatial distributions were obtained. The real-time visualization of large-scale spatial structures using augmented reality devices, particularly low-performance devices, was also investigated. Conclusions: The proposed visualization system can reconstruct the three-dimensional structure of a large-scale space, express the structure and changes of the spatial environment using augmented reality, and assist in intuitively discovering spatial environmental events and evolutionary rules.
Funding: supported by the National Key Research and Development Program of China (No. 2020YFE0200500).
Abstract: WiFi has become one of the most popular ways to access the Internet. However, in large-scale campus wireless networks, it is challenging for network administrators to provide optimized access quality without knowledge of fine-grained traffic characteristics and real network performance. In this paper, we implement PerfMon, a network performance measurement and diagnosis system that integrates collected multi-source datasets and analysis methods. Based on PerfMon, we first conduct a comprehensive measurement of application-level traffic patterns and behaviors from multiple dimensions in the wireless network of T university (TWLAN), one of the largest campus wireless networks. We then systematically study application-level network performance. We observe that application-level traffic behaviors and performance vary greatly across locations and device types, and that performance is far from satisfactory in some cases. To diagnose these problems, we distinguish locations and device types, and further locate the most crucial factors that affect performance. The results of case studies show that the influential factors can effectively characterize performance changes and explain performance degradation.
Abstract: The financial aspects of large-scale engineering construction projects profoundly influence their success. Strengthening cost control and establishing a scientific financial evaluation system can enhance a project's economic benefits, minimize unnecessary costs, and provide decision-makers with a robust financial foundation. Additionally, implementing an effective cash flow control mechanism and conducting a comprehensive assessment of potential project risks can ensure financial stability and mitigate the risk of fund shortages. Developing a practical and feasible fundraising plan, along with stringent fund management practices, can prevent fund wastage and optimize fund utilization efficiency. These measures not only facilitate smooth project progression and improve project management efficiency but also enhance the project's economic and social outcomes.
Funding: the Henan Institute for Chinese Development Strategy of Engineering & Technology (No. 2022HENZDA02) and the Science & Technology Department of Sichuan Province (No. 2021YFH0010).
Abstract: The global energy transition is a widespread phenomenon that requires international exchange of experiences and mutual learning. Germany's success in the first phase of its energy transition can be attributed to its adoption of smart energy technology and implementation of electricity futures and spot marketization, which enabled the achievement of multiple energy spatial–temporal complementarities and overall grid balance through energy conversion and reconversion technologies. While China can draw from Germany's experience to inform its own energy transition efforts, its 11-fold higher annual electricity consumption requires a distinct approach. We recommend a clean energy system based on smart sector coupling (ENSYSCO) as a suitable pathway for achieving sustainable energy in China, given that renewable energy is expected to supply 85% of China's energy production by 2060, requiring significant future electricity storage capacity. Nonetheless, renewable energy storage remains a significant challenge. We propose four large-scale underground energy storage methods based on ENSYSCO to address this challenge, while considering China's national conditions. These proposals have culminated in pilot projects for large-scale underground energy storage in China, which we believe is a necessary choice for achieving carbon neutrality in China and enabling efficient and safe grid integration of renewable energy within the framework of ENSYSCO.
Funding: This work was supported by the National Key R&D Program of China under Grant 2018YFB1800802; in part by the National Natural Science Foundation of China under Grant Nos. 61771488, 61631020 and 61827801; in part by the State Key Laboratory of Air Traffic Management System and Technology under Grant No. SKLATM201808; and in part by the Postgraduate Research and Practice Innovation Program of Jiangsu Province under No. KYCX190188.
Abstract: As a result of rapid developments in electronics and communication technology, large-scale unmanned aerial vehicle (UAV) fleets are being harnessed for various promising applications in a coordinated manner. Although this offers numerous advantages, resource management across the various domains of large-scale UAV communication networks is a key challenge that must be solved urgently. Specifically, given the inherent requirements and future development trends, distributed resource management is suitable. In this article, we investigate the resource management problem for large-scale UAV communication networks from a game-theoretic perspective, which exactly coincides with the distributed and autonomous manner of such networks. By exploring their inherent features, the distinctive challenges are discussed. Then, we explore several game-theoretic models that not only combat these challenges but also have broad application prospects. We provide the basics of each game-theoretic model and discuss its potential applications for resource management in large-scale UAV communication networks; specifically, mean-field games, graphical games, Stackelberg games, coalition games and potential games are included. After that, we propose two innovative case studies to highlight the feasibility of such novel game-theoretic models. Finally, we give some future research directions to shed light on future opportunities and applications.
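As a concrete taste of why potential games fit distributed resource management, consider a toy channel-selection congestion game (an illustrative setup, not one of the article's case studies): each UAV's cost is the number of others sharing its channel, and because such games admit an exact potential function, asynchronous best responses reach a pure Nash equilibrium:

```python
def best_response_dynamics(n_players, n_channels, rounds=20):
    """Best-response dynamics in a channel-selection congestion game.
    Each UAV's cost is the number of other UAVs on its channel; the game
    is a potential game, so these updates converge to a pure NE."""
    choice = [0] * n_players                  # everyone starts on channel 0
    for _ in range(rounds):
        changed = False
        for i in range(n_players):            # asynchronous (one at a time)
            load = [sum(1 for j, c in enumerate(choice) if c == k and j != i)
                    for k in range(n_channels)]
            best = min(range(n_channels), key=lambda k: load[k])
            if load[best] < load[choice[i]]:  # strictly improving move only
                choice[i] = best
                changed = True
        if not changed:                       # no profitable deviation: NE
            break
    return choice
```

Each UAV decides using only locally observable channel loads, which is exactly the distributed, autonomous decision-making the article argues game theory captures.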
Funding: supported by the Ministry of Trade, Industry & Energy (MOTIE, Korea) under the Industrial Technology Innovation Program (No. 10063424, "development of distant speech recognition and multi-task dialog processing technologies for in-door conversational robots").
Abstract: A Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN) has driven tremendous improvements over an acoustic model based on the Gaussian Mixture Model (GMM). However, models based on this hybrid method require a forced-aligned Hidden Markov Model (HMM) state sequence obtained from the GMM-based acoustic model, and therefore a long computation time for training both the GMM-based acoustic model and the deep learning-based acoustic model. To solve this problem, an acoustic model using the Connectionist Temporal Classification (CTC) algorithm is proposed. The CTC algorithm does not require the GMM-based acoustic model because it does not use the forced-aligned HMM state sequence. However, previous work on LSTM RNN-based acoustic models using CTC used small-scale training corpora. In this paper, an LSTM RNN-based acoustic model using CTC is trained on a large-scale training corpus and its performance is evaluated. The implemented acoustic model achieves a Word Error Rate (WER) of 6.18% for clean speech and 15.01% for noisy speech, which is similar to the performance of the acoustic model based on the hybrid method.
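The core of CTC is the forward algorithm, which sums the probabilities of every frame-level alignment (including blanks) that collapses to the target label sequence, removing the need for a forced alignment. A minimal pure-Python sketch of that recursion (not the paper's implementation):

```python
def ctc_prob(probs, labels, blank=0):
    """CTC forward algorithm: total probability that the per-frame label
    distributions emit `labels`, summed over all alignments.
    probs[t][k] = P(symbol k at frame t)."""
    ext = [blank]
    for l in labels:
        ext += [l, blank]                     # interleave blanks: [_, l1, _, l2, _]
    S, T = len(ext), len(probs)
    alpha = [0.0] * S                         # alpha[s] at the current frame
    alpha[0] = probs[0][ext[0]]               # start with blank ...
    if S > 1:
        alpha[1] = probs[0][ext[1]]           # ... or with the first label
    for t in range(1, T):
        new = [0.0] * S
        for s in range(S):
            a = alpha[s]                      # stay on the same symbol
            if s >= 1:
                a += alpha[s - 1]             # advance by one
            if s >= 2 and ext[s] != blank and ext[s] != ext[s - 2]:
                a += alpha[s - 2]             # skip the blank between distinct labels
            new[s] = a * probs[t][ext[s]]
        alpha = new
    return alpha[-1] + (alpha[-2] if S > 1 else 0.0)  # end on last label or blank
```

In training, the negative log of this quantity (computed in log space for stability) is the CTC loss that is backpropagated through the LSTM RNN.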
Funding: Supported by the National Natural Science Foundation of China (U1462206).
Abstract: System design and optimization problems require large-scale chemical kinetic models. Pure kinetic models of naphtha pyrolysis need to solve a complete set of stiff ODEs and are therefore too computationally expensive. On the other hand, artificial neural networks that completely neglect the topology of the reaction networks often generalize poorly. In this paper, a framework is proposed for learning local representations from large-scale chemical reaction networks. First, the features of naphtha pyrolysis reactions are extracted by applying complex-network characterization methods. The selected features are then used as inputs to convolutional architectures. Different CNN models are established and compared to optimize the neural network structure. After the pre-training and fine-tuning steps, the final CNN model reduces the computational cost of the previous kinetic model by over 300 times and predicts the yields of the main products with an average error of less than 3%. The obtained results demonstrate the high efficiency of the proposed framework.
Funding: This work was funded by the National Natural Science Foundation of China (Nos. U2037601 and 52074183). The authors appreciate Ge Chen, Wenbin Zou and Shiwei Wang for preparing the alloys; Wenyu Liu and Xuehao Zheng from ZKKF (Beijing) Science & Technology Co., Ltd for the TEM measurements; Gert Wiese and Petra Fischer for SEM and hardness measurements; and Yunting Li from the Instrument Analysis Center of Shanghai Jiao Tong University (China) for SEM measurements. Lixiang Yang also gratefully thanks the China Scholarship Council (201906230111) for awarding a fellowship to support his study stay at Helmholtz-Zentrum Geesthacht.
Abstract: In order to improve the ductility of commercial WE43 alloy and reduce its cost, a Mg-3Y-2Gd-1Nd-0.4Zr alloy with a low rare-earth content was developed and prepared by sand casting with a differential pressure casting system. Its microstructure, mechanical properties and fracture behavior in the as-cast, solution-treated and as-aged states were evaluated. The aged alloy exhibits excellent comprehensive mechanical properties owing to the fine, dense, plate-shaped β′ precipitates formed on prismatic habit planes during aging at 200 °C for 192 h after solution treatment at 500 °C for 24 h. Its ultimate tensile strength, yield strength and elongation reach 319±10 MPa, 202±2 MPa and 8.7±0.3% at ambient temperature, and 230±4 MPa, 155±1 MPa and 16.0±0.5% at 250 °C. The fracture mode of the as-aged alloy changes from cleavage at room temperature to quasi-cleavage and ductile fracture at a test temperature of 300 °C. The properties of large-scale components fabricated using the developed Mg-3Y-2Gd-1Nd-0.4Zr alloy are better than those of commercial WE43 alloy, suggesting that the newly developed alloy is a good candidate for fabricating large complex thin-walled components.