The feasibility of using an ANN method to predict the mercury emission and speciation in the flue gas of a power station under untested combustion/operational conditions is evaluated. Based on existing field testing datasets for the emissions of three utility boilers, a 3-layer back-propagation network is applied to predict the mercury speciation at the stack. The whole prediction procedure includes: collection of data, structuring an artificial neural network (ANN) model, the training process, and error evaluation. A total of 59 parameters of coal and ash analyses and power plant operating conditions are treated as input variables, and the actual mercury emissions and their speciation data are used to supervise the training process and verify the performance of prediction modeling. The precision of model prediction (root-mean-square error of 0.8 μg/Nm3 for elemental mercury and 0.9 μg/Nm3 for total mercury) is acceptable since the spikes of the semi-continuous mercury emission monitor (SCEM) with wet conversion modules are taken into consideration.
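A 3-layer back-propagation network of the kind described above can be sketched as follows. This is a minimal illustration, not the paper's model: the toy one-input dataset stands in for the 59 coal/ash/operating-condition variables, and the network size, learning rate, and epoch count are arbitrary choices.

```python
import math
import random

random.seed(0)

def train_bp(samples, n_hidden=4, lr=0.1, epochs=2000):
    """Train a 3-layer (input / hidden / output) back-propagation
    network for a scalar regression target."""
    n_in = len(samples[0][0])
    # small random initial weights; each row carries a trailing bias weight
    w1 = [[random.uniform(-0.5, 0.5) for _ in range(n_in + 1)]
          for _ in range(n_hidden)]
    w2 = [random.uniform(-0.5, 0.5) for _ in range(n_hidden + 1)]
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    for _ in range(epochs):
        for x, y in samples:
            xb = x + [1.0]                                   # input + bias
            h = [sig(sum(w * xi for w, xi in zip(row, xb))) for row in w1]
            hb = h + [1.0]
            out = sum(w * hi for w, hi in zip(w2, hb))       # linear output unit
            err = out - y
            # hidden deltas use the current output weights (chain rule)
            deltas = [err * w2[j] * h[j] * (1.0 - h[j]) for j in range(n_hidden)]
            for j in range(n_hidden + 1):
                w2[j] -= lr * err * hb[j]
            for j in range(n_hidden):
                for i in range(n_in + 1):
                    w1[j][i] -= lr * deltas[j] * xb[i]

    def predict(x):
        xb = x + [1.0]
        hb = [sig(sum(w * xi for w, xi in zip(row, xb))) for row in w1] + [1.0]
        return sum(w * hi for w, hi in zip(w2, hb))

    return predict

# hypothetical toy data standing in for the real field-testing inputs
data = [([x / 10.0], math.sin(x / 10.0)) for x in range(-30, 31, 3)]
model = train_bp(data)
rmse = math.sqrt(sum((model(x) - y) ** 2 for x, y in data) / len(data))
```

As in the paper, the supervised targets drive the weight updates and RMSE serves as the error evaluation.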
A system model is formulated as the maximization of a total utility function to achieve fair downlink data scheduling in multiuser orthogonal frequency division multiplexing (OFDM) wireless networks. A dynamic subcarrier allocation algorithm (DSAA) is proposed to optimize the system model. The subcarrier allocation decision is made by the proposed DSAA according to the maximum value of the total utility function with respect to the queue mean waiting time. Simulation results demonstrate that, compared to conventional algorithms, the proposed algorithm has better delay performance and can provide fairness under different loads by using different utility functions.
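The core decision — give each subcarrier to whichever user currently maximizes total utility — can be sketched as a greedy pass. The rate matrix, waiting times, and the particular delay-utility function below are hypothetical stand-ins, not the paper's model:

```python
def allocate_subcarriers(rates, waits, utility):
    """Greedy DSAA-style pass: assign each subcarrier to the user whose
    utility on it is largest.

    rates[u][s] -- achievable rate of user u on subcarrier s
    waits[u]    -- mean waiting time of user u's queue
    utility     -- delay-utility function of (rate, wait)
    """
    n_users, n_sub = len(rates), len(rates[0])
    return [max(range(n_users), key=lambda u: utility(rates[u][s], waits[u]))
            for s in range(n_sub)]

# hypothetical utility: weight a user's rate by how long its queue has waited
delay_utility = lambda rate, wait: rate * wait
alloc = allocate_subcarriers([[3, 1], [2, 2]], [1.0, 2.0], delay_utility)
```

Here user 1's longer queue wait outweighs user 0's better channel on subcarrier 0, so both subcarriers go to user 1 — the fairness-versus-throughput tradeoff the utility function encodes.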
We study the tradeoff between network utility and network lifetime using a cross-layer optimization approach. The tradeoff model in this paper is based on the framework of layering as optimization decomposition, and it is the first such model to incorporate time-slot allocation into this framework. Using the Lagrangian dual decomposition method, we decompose the tradeoff model into two subproblems: a routing problem at the network layer and a resource allocation problem at the medium access control (MAC) layer. The interfaces between the layers are precisely the dual variables. A partially distributed algorithm is proposed to solve the nonlinear, convex, and separable tradeoff model. Numerical simulation results are presented to support our algorithm.
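A one-constraint toy version of the dual decomposition idea: the dual price (Lagrange multiplier) is the interface between layers, the primal layer best-responds to it, and a subgradient step updates the price. The objective, capacity, and step size here are illustrative, not taken from the paper:

```python
def dual_subgradient(c=2.0, alpha=0.05, iters=500):
    """Maximize log(x) subject to x <= c via the Lagrangian dual.
    For a price lam, the primal layer maximizes log(x) - lam*x,
    giving the response x = 1/lam; the price then moves along the
    constraint violation (subgradient ascent on the dual)."""
    lam = 1.0
    for _ in range(iters):
        x = 1.0 / lam                              # primal best response
        lam = max(1e-6, lam + alpha * (x - c))     # dual price update
    return x, lam

x_opt, lam_opt = dual_subgradient()
```

At the fixed point the constraint is tight (x = c) and the price settles at lam = 1/c, mirroring how the dual variables coordinate the routing and MAC subproblems.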
A new approach, named TCP-I2NC, is proposed to improve the interaction between network coding and TCP and to maximize the network utility in interference-free multi-radio multi-channel wireless mesh networks. It is grounded on a Network Utility Maximization (NUM) formulation which can be decomposed into a rate control problem and a packet scheduling problem. The solutions to these two problems perform resource allocation among different flows. Simulations demonstrate that TCP-I2NC achieves a significant throughput gain and a small delay jitter. Network resources are fairly allocated via the solution to the NUM problem, and the whole system runs stably. Moreover, TCP-I2NC is compatible with traditional TCP variants.
The service and application of a network is a behavioral process oriented toward its operations and tasks, whose metrics and evaluation are still somewhat of a rough comparison. This paper describes scenes of network behavior as differential manifolds. Using the homeomorphic transformation of smooth differential manifolds, we provide a mathematical definition of network behavior and propose a mathematical description of the network behavior path and behavior utility. Based on the principles of differential geometry, this paper puts forward the function of network behavior and a calculation method to determine behavior utility, and establishes the calculation principle of network behavior utility. We also provide a calculation framework for assessing a network's attack-defense confrontation on the strength of behavior utility. This paper thereby establishes a mathematical foundation for the objective measurement and precise evaluation of network behavior.
The combination of orthogonal frequency division multiple access (OFDMA) with relaying techniques provides plentiful opportunities for high-performance and cost-effective networks, and it requires intelligent radio resource management schemes to harness these opportunities. This paper investigates the utility-based resource allocation problem in an OFDMA cellular relay network carrying mixed real-time and non-real-time traffic, in order to exploit the potential of relaying. To apply utility theory to obtain an efficient tradeoff between throughput and fairness while satisfying the delay requirements of real-time traffic, a joint routing and scheduling scheme is proposed to resolve the resource allocation problem. Additionally, a low-complexity iterative algorithm is introduced to realize the scheme. The numerical results indicate that, besides meeting the delay requirements of real-time traffic, the scheme can effectively achieve the tradeoff between throughput and fairness.
Background Promoting the synchronization of glucose and amino acid release in the digestive tract of pigs could effectively improve dietary nitrogen utilization. The rational allocation of dietary starch sources and the exploration of appropriate dietary glucose release kinetics may promote the dynamic balance of dietary glucose and amino acid supplies. However, research on the effects of diets with different glucose release kinetic profiles on amino acid absorption and portal amino acid appearance in piglets is limited. This study aimed to investigate the effects of the kinetic pattern of dietary glucose release on nitrogen utilization, the portal amino acid profile, and nutrient transporter expression in intestinal enterocytes in piglets. Methods Sixty-four barrows (15.00 ± 1.12 kg) were randomly allotted to 4 groups and fed diets formulated with starch from corn, corn/barley, corn/sorghum, or corn/cassava combinations (diets coded A, B, C, or D respectively). Protein retention, the concentrations of portal amino acids and glucose, and the relative expression of amino acid and glucose transporter mRNAs were investigated. In vitro digestion was used to compare the dietary glucose release profiles. Results Four piglet diets with different glucose release kinetics were constructed by adjusting starch sources. The in vivo appearance dynamics of portal glucose were consistent with the in vitro dietary glucose release kinetics. Total nitrogen excretion was reduced in the piglets in group B, while apparent nitrogen digestibility and nitrogen retention increased (P<0.05). Regardless of the time (2 h or 4 h after morning feeding), the portal total free amino acid content and the contents of some individual amino acids (Thr, Glu, Gly, Ala, and Ile) of the piglets in group B were significantly higher than those in groups A, C, and D (P<0.05). Cluster analysis showed that different glucose release kinetic patterns resulted in different portal amino acid patterns in piglets, which decreased gradually with the extension of feeding time. The portal His/Phe, Pro/Glu, Leu/Val, Lys/Met, Tyr/Ile, and Ala/Gly ratios showed higher similarity among the diet treatments. In the anterior jejunum, the glucose transporter SGLT1 was significantly positively correlated with the amino acid transporters B0AT1, EAAC1, and CAT1. Conclusions Rational allocation of starch resources could regulate dietary glucose release kinetics. In the present study, the group B (corn/barley) diet exhibited a better glucose release kinetic pattern than the other groups, which could affect the portal amino acid contents and patterns by regulating the expression of amino acid transporters in the small intestine, thereby promoting nitrogen deposition in the body and improving the utilization efficiency of dietary nitrogen.
A novel backoff algorithm for CSMA/CA-based medium access control (MAC) protocols in clustered sensor networks was proposed. The algorithm requires that all sensor nodes in a cluster have the same contention window (CW) value, which is revealed by formulating resource allocation as a network utility maximization problem. Then, by maximizing the total network utility under the constraint of minimizing collision probability, the optimal value of CW (Wopt) can be computed from the number of sensor nodes. The new backoff algorithm uses the common optimal value Wopt and leads to fewer collisions than the binary exponential backoff algorithm. The simulation results show that the proposed algorithm outperforms standard 802.11 DCF and S-MAC in average collision times, packet delay, total energy consumption, and system throughput.
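Under the common slotted model where each of n contending nodes transmits with probability p = 2/(W+1), the per-slot success probability n·p·(1−p)^(n−1) peaks at p = 1/n, i.e. Wopt = 2n − 1. A sketch of computing Wopt from the node count (the paper's exact utility formulation is not reproduced here):

```python
def optimal_cw(n, w_max=1024):
    """Contention window maximizing the per-slot success probability
    n*p*(1-p)**(n-1), under the access-probability model p = 2/(W+1)."""
    def success(w):
        p = 2.0 / (w + 1)
        return n * p * (1.0 - p) ** (n - 1)
    # brute-force the discrete argmax; analytically it is W = 2n - 1
    return max(range(2, w_max + 1), key=success)
```

Larger clusters thus get proportionally larger common windows, which is what keeps the collision probability near its minimum as the node count grows.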
A decision system based on the network economy is the foundation of an enterprise's success in its market. This paper describes a decision-makers' utility model based on the network economy, and argues that decision-makers not only play the roles of decision making, coordinating, controlling, and monitoring in their enterprises, but also act mainly as designers, executants, and educators in the network economy mode.
In this paper, based on utility preferential attachment, we propose a new unified model to generate different network topologies such as scale-free, small-world, and random networks. Moreover, a new network structure named the super scale network is found, which exhibits a monopoly characteristic in our simulation experiments. Finally, the characteristics of this new network are given.
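The growth mechanism can be sketched as follows. As an illustrative special case, the utility score here is simply a node's degree, which reduces the unified model to scale-free (Barabási–Albert-style) growth; other utility choices would steer the topology elsewhere. The parameters are arbitrary:

```python
import random

def utility_pref_attach(n, m=2, seed=1):
    """Grow a network where each new node links to m existing nodes,
    chosen with probability proportional to a utility score (here the
    node's degree, an assumed special case of the unified model)."""
    random.seed(seed)
    edges = [(0, 1)]
    degree = {0: 1, 1: 1}
    for new in range(2, n):
        targets = set()
        while len(targets) < min(m, len(degree)):
            # roulette-wheel selection proportional to utility (= degree)
            pick = random.choices(list(degree), weights=list(degree.values()))[0]
            targets.add(pick)
        for t in targets:
            edges.append((new, t))
            degree[t] += 1
        degree[new] = len(targets)
    return edges, degree

edges, degree = utility_pref_attach(50)
```

Swapping the `weights=` expression for a different utility (uniform for random graphs, distance-biased for small-world-like structure) is the knob the unified model turns.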
This paper proposes a joint-layer scheme for fair downlink data scheduling in multiuser OFDM wireless networks. Based on the optimization model formulated as the maximization of a total utility function with respect to the mean waiting time of user queues, we present a low-complexity algorithm for dynamic subcarrier allocation (DSA). The subcarrier allocation decision is made according to a delay utility function obtained by an algorithm that instantaneously estimates both channel condition and queue length, using an exponentially weighted low-pass time window and pilot signals respectively. The complexity of the algorithm is reduced by varying the length of the time window to exploit time diversity, which provides a higher throughput ratio. Simulation results demonstrate that, compared with the conventional approach, the proposed scheme achieves better performance and can significantly improve fairness among users, with very limited delay performance degradation, by using a decreasing concave utility function as the traffic load increases.
In Wireless Mesh Networks (WMNs), the performance of conventional TCP significantly deteriorates due to the unreliable wireless channel. To enhance TCP performance in WMNs, TCP/LT is proposed in this paper. It introduces fountain codes into packet reorganization in the protocol stack of mesh gateways and mesh clients, and it is compatible with conventional TCP. Acting as a Performance Enhancement Proxy (PEP), a mesh gateway buffers TCP packets into several blocks, processes them simultaneously with fountain encoders, and then sends them to mesh clients. Apart from improving the throughput of a single TCP flow, entire network utility maximization can also be ensured by adaptively adjusting the scale of the coding blocks for each TCP flow. Simulations show that TCP/LT presents high throughput gains over plain TCP on lossy links of WMNs while preserving fairness for multiple TCP flows. As losses increase, the transmission delay of TCP/LT experiences slow linear growth, in contrast to the exponential growth of TCP.
This paper studies the optimal portfolio allocation of a fund manager who bases decisions on both the absolute level of terminal relative performance and the change in terminal relative performance compared to a predefined reference point. We find the optimal investment strategy by maximizing a weighted average of a concave utility and an S-shaped utility via a concavification technique and the martingale method. Numerical results show how the extent to which the manager pays attention to the change of relative performance around the reference point affects the optimal terminal relative performance.
Dear Editor, This letter deals with a new second-level-discretization method with higher precision than the traditional first-level-discretization method. Specifically, the traditional discretization method utilizes only the first-order time derivative information and is termed the first-level-discretization method. By contrast, the new discretization method makes use of the second-order derivative information in addition to the first-order time derivative information. By combining the new second-level-discretization method with the zeroing neural network (ZNN), the second-level-discrete ZNN (SLDZNN) model is proposed to solve dynamic (i.e., time-variant or time-dependent) linear systems. Numerical experiments and an application to angle-of-arrival (AoA) localization show the effectiveness and superiority of the SLDZNN model.
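The precision claim can be illustrated with a one-step Taylor comparison: a first-level update uses only ẋ, while a second-level update adds the (τ²/2)ẍ term, shrinking the local truncation error from O(τ²) to O(τ³). The toy trajectory x(t) = sin t below is an illustrative stand-in, not the letter's ZNN dynamics:

```python
import math

def first_level(x, dx, tau):
    """First-level discretization: x_{k+1} = x_k + tau * x'_k."""
    return x + tau * dx

def second_level(x, dx, ddx, tau):
    """Second-level discretization: adds the second-derivative term,
    x_{k+1} = x_k + tau * x'_k + (tau**2 / 2) * x''_k."""
    return x + tau * dx + 0.5 * tau ** 2 * ddx

# one step along x(t) = sin t from t = 0.3 with step tau = 0.1
t, tau = 0.3, 0.1
x, dx, ddx = math.sin(t), math.cos(t), -math.sin(t)
exact = math.sin(t + tau)
err1 = abs(first_level(x, dx, tau) - exact)
err2 = abs(second_level(x, dx, ddx, tau) - exact)
```

The second-level step lands roughly an order of magnitude closer to the exact value at this step size, which is the effect the SLDZNN model exploits.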
High-energy gamma-ray radiography has exceptional penetration ability and has become an indispensable nondestructive testing (NDT) tool in various fields. For high-energy photons, point projection radiography is almost the only feasible imaging method, and its spatial resolution is primarily constrained by the size of the gamma-ray source. In conventional industrial applications, gamma-ray sources are commonly based on accelerator-driven electron beams, utilizing the process of bremsstrahlung radiation; the size of the gamma-ray source depends on the dimensional characteristics of the electron beam. Extensive research has been conducted on various advanced accelerator technologies that have the potential to greatly improve spatial resolution in NDT. In our investigation of laser-driven gamma-ray sources, a spatial resolution of about 90 μm is achieved when the areal density of the penetrated object is 120 g/cm². A virtual source approach is proposed to optimize the size of the gamma-ray source used for imaging, with the aim of maximizing spatial resolution. In this approach, the gamma rays can be considered as being emitted from a virtual source within the convertor, where the equivalent gamma-ray source size in imaging is much smaller than the actual emission area. On the basis of Monte Carlo simulations, we derive a set of evaluation formulas for the virtual source scale and the gamma-ray emission angle. Under optimal conditions, the virtual source size can be as small as 15 μm, which can significantly improve the spatial resolution of high-penetration imaging to less than 50 μm.
Nowadays, the global climate is increasingly disrupted and fluctuations in ambient temperature are becoming more frequent, yet conventional single-mode thermal management strategies (heating or cooling) fail to cope with such dynamic temperature changes. Moreover, developing thermal management devices capable of accommodating these temperature variations while remaining simple to fabricate and durable has remained a formidable obstacle. To address these bottlenecks, we design and successfully fabricate a novel dual-mode hierarchical (DMH) composite film featuring a micro-nanofiber network structure, achieved through a straightforward two-step continuous electrospinning process. In cooling mode, it presents a high solar reflectivity of up to 97.7% and an excellent atmospheric transparent window (ATW) infrared emissivity of up to 98.9%. Notably, this DMH film can realize cooling of 8.1 ℃ below the ambient temperature outdoors. In heating mode, it exhibits a high solar absorptivity of 94.7% and heats up to 11.9 ℃ higher than black cotton fabric when worn. In practical application scenarios, a seamless transition between efficient cooling and heating is achieved by simply flipping the film. More importantly, the DMH film, combining the benefits of composites, demonstrates portability, durability, and easy cleaning, promising large-scale production and use of thermally managed textiles in the future. The energy savings offered by film applications provide a viable solution for the early realization of carbon neutrality.
Cloud Datacenter Network (CDN) providers usually have the option to scale their network structures to allow for far more resource capacity, though such scaling options may come with exponential costs that contradict their utility objectives. Besides the cost of the physical assets and network resources, such scaling may also impose more load on the electricity power grids to feed the added nodes with the energy required to run and cool them, which comes with extra costs too. Thus, CDN providers who utilize their resources better can afford to offer their services at lower price units compared to others who simply choose the scaling solutions. Resource utilization is a quite challenging process; indeed, clients of CDNs usually tend to exaggerate their true resource requirements when they lease their resources. Service providers are committed to their clients through Service Level Agreements (SLAs); therefore, any amendment to the resource allocations needs to be approved by the clients first. In this work, we propose deploying a Stackelberg leadership framework to formulate a negotiation game between the cloud service providers and their client tenants, through which the providers seek to retrieve leased but unused resources from their clients. Cooperation is not expected from the clients, and they may ask high price units to return their extra resources to the provider's premises. Hence, to motivate cooperation in such a non-cooperative game, as an extension to Vickrey auctions, we developed an incentive-compatible pricing model for the returned resources. Moreover, we also propose building a behavior belief function that shapes the negotiation and compensation for each client. Compared to other benchmark models, the assessment results show that our proposed models provide timely negotiation schemes, allowing for better resource utilization rates, higher utilities, and grid-friendly CDNs.
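The incentive-compatible buyback idea can be sketched as a reverse second-price (Vickrey-style) auction. The client names, single-unit asks, and uniform clearing rule below are illustrative simplifications; the paper's full model with behavior belief functions is not reproduced:

```python
def buyback_auction(asks, demand):
    """Reverse auction for reclaiming `demand` leased-but-unused units.
    Lowest asks win, and every winner is paid the first rejected ask
    (a uniform, Vickrey-style clearing price, so asking one's true
    valuation is the dominant strategy). Single-unit asks assumed.

    asks: list of (client, price_per_unit) tuples
    """
    ranked = sorted(asks, key=lambda a: a[1])
    winners = [client for client, _ in ranked[:demand]]
    # clearing price: first rejected ask, else the highest accepted ask
    price = ranked[demand][1] if demand < len(ranked) else ranked[-1][1]
    return winners, price

winners, price = buyback_auction([("c1", 3.0), ("c2", 1.0), ("c3", 2.0)], 2)
```

Because a winner's payment is set by the first losing ask rather than its own, overstating the ask cannot raise a client's payoff — the property that motivates cooperation in the non-cooperative setting.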
Evolutionary algorithms (EAs) have been used in high utility itemset mining (HUIM) to address the problem of discovering high utility itemsets (HUIs) in the exponential search space. EAs have good running and mining performance, but they still require huge computational resources and may miss many HUIs. Given how well EAs combine with graphics processing units (GPUs), we propose a parallel genetic algorithm (GA) on the GPU platform for mining HUIs (PHUI-GA). The improved evolution steps are performed on the central processing unit (CPU), and the computation-intensive steps are sent to the GPU for evaluation by multi-threaded processors. Experiments show that the mining performance of PHUI-GA outperforms the existing EAs. When mining 90% of HUIs, PHUI-GA is up to 188 times faster than the existing EAs and up to 36 times faster than the CPU parallel approach.
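The computation-intensive step that such miners offload is fitness evaluation: scoring a candidate itemset's utility over the whole transaction database (e.g. one transaction per GPU thread). A serial sketch of that evaluation, with a hypothetical toy database:

```python
def itemset_utility(itemset, transactions):
    """Utility of an itemset: summed utilities of its items over every
    transaction that contains all of them -- the fitness function a
    PHUI-GA-style miner evaluates in parallel; shown serially here."""
    total = 0
    for tx in transactions:   # tx maps item -> utility in that transaction
        if all(item in tx for item in itemset):
            total += sum(tx[item] for item in itemset)
    return total

# hypothetical transaction database with per-transaction item utilities
txs = [{"a": 4, "b": 3}, {"a": 2, "c": 5}, {"b": 1, "c": 2}]
u_ab = itemset_utility({"a", "b"}, txs)   # only the first transaction qualifies
```

An itemset is a "high utility itemset" when this score clears a user-set threshold; the GA's population of candidate itemsets is ranked by exactly this number.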
Funding: The National Basic Research Program of China (973 Program) (No. 2006CB200302); the Natural Science Foundation of Jiangsu Province (No. BK2007224).
Funding: supported by the Natural Science Foundation of China (Nos. 60704046, 60725312, 60804067) and the National 863 High Technology Research and Development Plan (Nos. 2007AA04Z173, 2007AA041201).
Funding: This work was supported by the State Key Program of the National Natural Science Foundation of China under Grants No. U0835003 and No. 60872087.
Funding: Sponsored by the Self-Determined Research Funds of Huazhong Normal University from the Colleges' Basic Research and Operation of MOE.
Funding: partially supported by the National Key Research and Development Program of China (2021YFD1300201) and the Jilin Province Key Research and Development Program of China (20220202044NC).
Funding: Project (60772088) supported by the National Natural Science Foundation of China.
文摘A novel backoff algorithm in CSMA/CA-based medium access control (MAC) protocols for clustered sensor networks was proposed. The algorithm requires that all sensor nodes have the same value of contention window (CW) in a cluster, which is revealed by formulating resource allocation as a network utility maximization problem. Then, by maximizing the total network utility with constrains of minimizing collision probability, the optimal value of CW (Wopt) can be computed according to the number of sensor nodes. The new backoff algorithm uses the common optimal value Wopt and leads to fewer collisions than binary exponential backoff algorithm. The simulation results show that the proposed algorithm outperforms standard 802.11 DCF and S-MAC in average collision times, packet delay, total energy consumption, and system throughput.
Abstract A decision system based on the network economy is the foundation of an enterprise's success in its market. This paper describes a utility model for decision makers in the network economy, considering that their roles in the enterprise are not only decision making, coordinating, controlling, and monitoring, but that under the network economy they also act mainly as designers, executants, and educators.
Fund: The project was partly supported by the State Outstanding Youth Foundation under Grant No. 70225005, the National Natural Science Foundation of China under Grant Nos. 70501005, 70501004, and 70471088, the Natural Science Foundation of Beijing under Grant No. 9042006, the Special Program for Preliminary Research of Momentous Fundamental Research under Grant No. 2005CCA03900, and the Innovation Foundation of Science and Technology for Excellent Doctorial Candidates of Beijing Jiaotong University under Grant No. 48006
Abstract In this paper, based on utility preferential attachment, we propose a new unified model that generates different network topologies such as scale-free, small-world, and random networks. Moreover, a new network structure named the super scale network is found, which exhibits a monopoly characteristic in our simulation experiments. Finally, the characteristics of this new network are given.
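The mechanism can be sketched as a toy growth model in which a newcomer links to an existing node with probability proportional to degree times a random per-node utility. This attachment rule is an assumption for illustration; the paper's exact model (and how it interpolates between topologies) may differ.

```python
import random

def grow_network(n, rng):
    """Toy utility-preferential-attachment growth (illustrative only).
    Each node gets a random utility at creation; each newcomer links to
    one existing node chosen with probability ~ degree * utility."""
    utility = [rng.random(), rng.random()]
    degree = [1, 1]           # start from a single edge 0--1
    edges = [(0, 1)]
    for new in range(2, n):
        weights = [d * u for d, u in zip(degree, utility)]
        target = rng.choices(range(new), weights=weights)[0]
        edges.append((new, target))
        degree[target] += 1
        degree.append(1)
        utility.append(rng.random())
    return degree, edges

degree, edges = grow_network(500, random.Random(42))
print(max(degree))   # high-utility early nodes tend to become hubs
```

Coupling attachment to utility rather than degree alone is what lets a single high-utility node dominate, the intuition behind the "monopoly" behavior the abstract mentions.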
Abstract This paper proposes a joint-layer scheme for fair downlink data scheduling in multiuser OFDM wireless networks. Based on an optimization model formulated as the maximization of a total utility function with respect to the mean waiting time of user queues, we present a low-complexity algorithm for dynamic subcarrier allocation (DSA). Subcarrier allocation decisions are made according to a delay utility function obtained by an algorithm that instantaneously estimates channel condition and queue length using an exponentially weighted low-pass time window and pilot signals, respectively. The complexity of the algorithm is reduced by varying the length of the time window to exploit time diversity, which provides a higher throughput ratio. Simulation results demonstrate that, compared with the conventional approach, the proposed scheme achieves better performance and can significantly improve fairness among users, with very limited delay performance degradation, by using a decreasing concave utility function when the traffic load increases.
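The "exponentially weighted low-pass time window" estimator can be sketched as a standard EWMA filter; the smoothing factors below are assumed values for illustration, not parameters from the paper.

```python
def ewma(samples, alpha):
    """Exponentially weighted low-pass filter, a stand-in for the
    paper's time-window estimator of channel condition / queue length.
    alpha is the smoothing factor (assumed, not from the paper)."""
    est = samples[0]
    out = []
    for s in samples:
        est = alpha * s + (1 - alpha) * est
        out.append(est)
    return out

# A smaller alpha means a longer effective window: more smoothing and
# stability, less responsiveness -- the lever the scheme varies to
# trade estimation quality against complexity.
noisy = [1, 9, 1, 9, 1, 9]
print(ewma(noisy, 0.2))
print(ewma(noisy, 0.8))
```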
基金supported by the State Key Program of National Nature Science Foundation of China under Grants No.U0835003,No.60872087
Abstract In Wireless Mesh Networks (WMNs), the performance of conventional TCP deteriorates significantly due to the unreliable wireless channel. To enhance TCP performance in WMNs, TCP/LT is proposed in this paper. It introduces fountain codes into packet reorganization in the protocol stack of mesh gateways and mesh clients, and it remains compatible with conventional TCP. Acting as a Performance Enhancement Proxy (PEP), a mesh gateway buffers TCP packets into several blocks, processes them simultaneously with fountain encoders, and then sends them to mesh clients. Beyond improving the throughput of a single TCP flow, overall network utility maximization can also be ensured by adaptively adjusting the scale of the coding blocks for each TCP flow. Simulations show that TCP/LT provides high throughput gains over plain TCP on lossy WMN links while preserving fairness among multiple TCP flows. As losses increase, the transmission delay of TCP/LT grows slowly and linearly, in contrast to the exponential growth of TCP.
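The fountain-coding idea behind TCP/LT can be sketched with a toy LT-style encoder and peeling decoder: each encoded symbol is the XOR of a random subset of source blocks, and any sufficiently large set of symbols lets the receiver recover the originals regardless of which symbols were lost. The degree distribution and block sizes here are ad-hoc illustrations, not the protocol's actual parameters.

```python
import random

def lt_encode(blocks, n_symbols, rng):
    """Toy LT-style fountain encoder: each output symbol XORs a random
    subset (of random small degree) of the source blocks."""
    symbols = []
    for _ in range(n_symbols):
        deg = rng.choice([1, 1, 2, 2, 2, 3, 4])   # ad-hoc degree distribution
        idx = frozenset(rng.sample(range(len(blocks)), deg))
        val = 0
        for i in idx:
            val ^= blocks[i]
        symbols.append((idx, val))
    return symbols

def lt_decode(symbols, n_blocks):
    """Peeling decoder: repeatedly find a symbol with exactly one
    unrecovered block, XOR out the known blocks, and record the rest."""
    recovered = {}
    progress = True
    while progress and len(recovered) < n_blocks:
        progress = False
        for idx, val in symbols:
            unknown = set(idx) - recovered.keys()
            if len(unknown) == 1:
                i = unknown.pop()
                v = val
                for j in set(idx) - {i}:
                    v ^= recovered[j]
                recovered[i] = v
                progress = True
    return recovered

rng = random.Random(7)
data = [rng.randrange(256) for _ in range(8)]   # 8 source blocks (one byte each)
enc = lt_encode(data, 20, rng)                  # modest symbol overhead
dec = lt_decode(enc, len(data))
print(f"recovered {len(dec)} of {len(data)} blocks")
```

Because any symbols suffice (only their count matters), the gateway never needs per-packet retransmissions over the lossy link, which is where the throughput gain comes from.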
Fund: Supported by the National Natural Science Foundation of China (12071335) and the Humanities and Social Science Research Projects of the Ministry of Education (20YJAZH025).
Abstract This paper studies the optimal portfolio allocation of a fund manager who bases decisions on both the absolute level of terminal relative performance and the change in terminal relative performance compared to a predefined reference point. We find the optimal investment strategy by maximizing a weighted average of a concave utility and an S-shaped utility via a concavification technique and the martingale method. Numerical results show how the extent to which the manager pays attention to the change in relative performance around the reference point affects the optimal terminal relative performance.
基金supported in part by the National Natural Science Foundation of China(62303174)the Fundamental Research Funds for the Central Universities(531118010815)the Changsha Municipal Natural Science Foundation(kq2208043).
Abstract Dear Editor, This letter presents a new second-level discretization method with higher precision than the traditional first-level discretization method. Specifically, the traditional discretization method uses only first-order time derivative information and is termed the first-level discretization method. By contrast, the new discretization method uses not only first-order but also second-order time derivative information. By combining the new second-level discretization method with the zeroing neural network (ZNN), the second-level discrete ZNN (SLDZNN) model is proposed for solving dynamic (i.e., time-variant or time-dependent) linear systems. Numerical experiments and an application to angle-of-arrival (AoA) localization show the effectiveness and superiority of the SLDZNN model.
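The precision gain from adding second-order derivative information can be illustrated with a generic Taylor-type step; this is a textbook illustration of why a second-level scheme has smaller truncation error, not the SLDZNN update formula itself, which is given in the letter.

```python
import math

def predict_first(x, dx, tau):
    """First-level step: uses only first-order derivative information."""
    return x + tau * dx

def predict_second(x, dx, ddx, tau):
    """Second-level step: also uses second-order derivative information
    (illustrative Taylor expansion, truncation error O(tau^3) vs O(tau^2))."""
    return x + tau * dx + 0.5 * tau**2 * ddx

# Track x(t) = sin(t) one step ahead from t = 1.0 with step tau = 0.1.
tau, t = 0.1, 1.0
x, dx, ddx = math.sin(t), math.cos(t), -math.sin(t)
truth = math.sin(t + tau)
e1 = abs(predict_first(x, dx, tau) - truth)
e2 = abs(predict_second(x, dx, ddx, tau) - truth)
print(e1, e2)   # the second-level step has a much smaller error
```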
Fund: This work was supported by the National Natural Science Foundation of China (Grant Nos. 12175212, 11991071, 12004353, 11975214, and 11905202), the National Key R&D Program of China (Grant No. 2022YFA1603300), the Science Challenge Project (Project No. TZ2018005), and the Sciences and Technology on Plasma Physics Laboratory at CAEP (Grant No. 6142A04200103).
Abstract High-energy gamma-ray radiography has exceptional penetration ability and has become an indispensable nondestructive testing (NDT) tool in various fields. For high-energy photons, point-projection radiography is almost the only feasible imaging method, and its spatial resolution is primarily constrained by the size of the gamma-ray source. In conventional industrial applications, gamma-ray sources are commonly based on accelerator-driven electron beams, utilizing the process of bremsstrahlung radiation; the size of the gamma-ray source then depends on the dimensional characteristics of the electron beam. Extensive research has been conducted on various advanced accelerator technologies that have the potential to greatly improve spatial resolution in NDT. In our investigation of laser-driven gamma-ray sources, a spatial resolution of about 90 μm is achieved when the areal density of the penetrated object is 120 g/cm². A virtual source approach is proposed to optimize the size of the gamma-ray source used for imaging, with the aim of maximizing spatial resolution. In this approach, the gamma rays can be considered as being emitted from a virtual source within the convertor, where the equivalent gamma-ray source size in imaging is much smaller than the actual emission area. On the basis of Monte Carlo simulations, we derive a set of evaluation formulas for the virtual source scale and the gamma-ray emission angle. Under optimal conditions, the virtual source size can be as small as 15 μm, which can significantly improve the spatial resolution of high-penetration imaging to less than 50 μm.
Fund: financially supported by the Fundamental Research Funds for the Central Universities (2232021G-04 and 2232020D-20) and the Student Innovation Fund of Donghua University (GSIF-DH-M-2021003).
Abstract Nowadays, the global climate is being continually disrupted and fluctuations in ambient temperature are becoming more frequent, yet conventional single-mode thermal management strategies (heating or cooling) fail to cope with such dynamic temperature changes. Moreover, developing thermal management devices that accommodate these temperature variations while remaining simple to fabricate and durable has remained a formidable obstacle. To address these bottlenecks, we design and successfully fabricate a novel dual-mode hierarchical (DMH) composite film featuring a micro-nanofiber network structure, achieved through a straightforward two-step continuous electrospinning process. In cooling mode, it presents a high solar reflectivity of up to 97.7% and an excellent atmospheric transparent window (ATW) infrared emissivity of up to 98.9%; notably, the DMH film achieves cooling of 8.1 ℃ below ambient temperature outdoors. In heating mode, it exhibits a high solar absorptivity of 94.7% and heats up to 11.9 ℃ higher than black cotton fabric when worn. In practical application scenarios, a seamless transition between efficient cooling and heating is achieved by simply flipping the film. More importantly, the DMH film combines the benefits of composites, demonstrating portability, durability, and easy cleaning, promising large-scale production and use of thermally managed textiles in the future. The energy savings offered by film applications provide a viable path toward the early realization of carbon neutrality.
Fund: This work was partially funded by the Deanship of Scientific Research at the Hashemite University and by the Deanship of Scientific Research at the Northern Border University, Arar, KSA, through project number "NBU-FFR-2024-1580-08".
Abstract Cloud Datacenter Network (CDN) providers usually have the option to scale their network structures to provide far more resource capacity, though such scaling may come with exponential costs that contradict their utility objectives. Besides the cost of the physical assets and network resources, scaling also imposes more load on the electricity power grids to feed the added nodes with the energy required to run and cool them, which carries extra costs too. Thus, CDN providers who utilize their resources better can afford to offer their services at lower price units than those who simply choose to scale. Resource utilization is a challenging process; indeed, clients of CDNs tend to exaggerate their true resource requirements when they lease resources. Service providers are committed to their clients through Service Level Agreements (SLAs); therefore, any amendment to the resource allocations must first be approved by the clients. In this work, we propose deploying a Stackelberg leadership framework to formulate a negotiation game between cloud service providers and their client tenants, through which providers seek to retrieve leased but unused resources from their clients. Cooperation is not expected from the clients, who may ask high price units to return their extra resources to the provider's premises. Hence, to motivate cooperation in this non-cooperative game, we developed an incentive-compatible pricing model for the returned resources as an extension of Vickrey auctions. Moreover, we propose building a behavior belief function that shapes the negotiation and compensation for each client. Compared with other benchmark models, the assessment results show that our proposed models provide timely negotiation schemes, better resource utilization rates, higher utilities, and grid-friendly CDNs.
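The Vickrey-style incentive idea can be sketched with a textbook reverse second-price rule: the client asking the lowest compensation returns its resources but is paid the second-lowest ask, making truthful asking a dominant strategy. The function and tenant names are hypothetical, and the paper's model extends this basic rule with behavior beliefs.

```python
def buyback_auction(asks):
    """Reverse Vickrey-style resource buyback (illustrative sketch).
    asks maps each client tenant to its per-unit price for returning
    unused resources. The lowest asker wins but is paid the
    second-lowest ask, so overstating one's ask cannot help."""
    ranked = sorted(asks, key=asks.get)      # clients by ascending ask
    winner, runner_up = ranked[0], ranked[1]
    return winner, asks[runner_up]

winner, price = buyback_auction({"tenantA": 3.0, "tenantB": 5.0, "tenantC": 4.0})
print(winner, price)   # tenantA returns resources, paid tenantC's ask of 4.0
```

Paying the second-lowest ask is what makes the mechanism incentive-compatible: a tenant's own ask only decides whether it wins, never the price it receives.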
Fund: This work was supported by the National Natural Science Foundation of China (62073155, 62002137, 62106088, 62206113), the High-End Foreign Expert Recruitment Plan (G2023144007L), and the Fundamental Research Funds for the Central Universities (JUSRP221028).
Abstract Evolutionary algorithms (EAs) have been used in high utility itemset mining (HUIM) to address the problem of discovering high utility itemsets (HUIs) in an exponential search space. EAs have good running and mining performance, but they still require huge computational resources and may miss many HUIs. Exploiting the good fit between EAs and graphics processing units (GPUs), we propose a parallel genetic algorithm (GA) on the GPU platform for HUIM (PHUI-GA). The evolution steps with improvements are performed on the central processing unit (CPU), and the computationally intensive steps are sent to the GPU for evaluation by multi-threaded processors. Experiments show that the mining performance of PHUI-GA outperforms the existing EAs. When mining 90% of the HUIs, PHUI-GA is up to 188 times faster than the existing EAs and up to 36 times faster than the CPU parallel approach.
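The objective each GA individual is scored against can be sketched as the standard HUIM utility function: the utility of a candidate itemset is the sum of its items' utilities over every transaction that contains the whole itemset. The toy database and utility values below are illustrative, not from the paper; this per-individual evaluation is the step that GPU threads parallelize.

```python
def itemset_utility(itemset, transactions):
    """Standard HUIM objective: total utility of an itemset, summing
    its items' utilities over every transaction that contains all of
    its items. transactions is a list of {item: utility} dicts."""
    total = 0
    for t in transactions:
        if itemset <= t.keys():              # itemset fully present?
            total += sum(t[i] for i in itemset)
    return total

# Toy transaction database with per-transaction item utilities.
db = [
    {"a": 5, "b": 2, "c": 1},
    {"a": 4, "c": 3},
    {"b": 6, "c": 2},
]
print(itemset_utility({"a", "c"}, db))   # only the first two transactions qualify
```

An itemset is a HUI when this score exceeds a user-set minimum utility threshold; the exponential number of candidate itemsets is why EAs (and GPU parallelism) are applied.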