Abstract: This article broadens terminology and approaches that continue to advance time modelling within a relationalist framework. Time is modeled as a single dimension, flowing continuously through independent privileged points. Introduced as absolute point-time, abstract continuous time is a backdrop for concrete relational-based time that is finite and discrete, bound to the limits of a real-world system. We discuss how discrete signals at a point are used to temporally anchor zero-temporal points [t = 0] in linear time. Object-oriented temporal line elements, flanked by temporal point elements, have a proportional geometric identity quantifiable by a standard unit system and can be mapped onto a natural number line. Durations, as line elements, are divisible into ordered unit-ratio elements using ancient timekeeping formulas. The divisional structure provides temporal classes for rotational (Rt24t) and orbital (Rt18) sample periods, as well as a more general temporal class (Rt12) applicable to either sample or frame periods. We introduce notation for additive cyclic counts of sample periods, including divisional units, for calendar-like formatting. For system modeling, unit structures with dihedral symmetry, group order, and numerical order are shown to be applicable to Euclidean modelling. We introduce new functions for bijective and non-bijective mapping, modular arithmetic for cyclic time counts, and a novel formula relating to a subgroup of Pythagorean triples that preserves dihedral n-polygon symmetries. This article presents a new approach to modelling time in a relationalist framework.
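As a rough illustration of the cyclic counting and calendar-like formatting described above, the sketch below maps an additive sample-period count onto nested divisional units with repeated divmod (modular arithmetic) steps. The paper's Rt temporal classes are not fully specified in the abstract, so conventional day/hour/minute/second ratios stand in for the ordered unit-ratio elements.

```python
# Illustrative sketch only: the paper's divisional classes are not fully specified
# in the abstract, so conventional day/hour/minute/second units stand in for the
# "ordered unit ratio elements" of a cyclic sample-period count.

def cyclic_format(sample_count: int, samples_per_second: int = 1) -> str:
    """Map an additive count of sample periods to a calendar-like string
    using modular arithmetic (repeated divmod over the unit ratios)."""
    seconds, remainder = divmod(sample_count, samples_per_second)
    minutes, sec = divmod(seconds, 60)
    hours, minute = divmod(minutes, 60)
    days, hour = divmod(hours, 24)
    return f"{days}d {hour:02d}:{minute:02d}:{sec:02d} (+{remainder} samples)"

print(cyclic_format(1_000_003, samples_per_second=10))  # -> "1d 03:46:40 (+3 samples)"
```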
Funding: Project supported by the National Natural Science Foundation of China (Grant Nos. 92150105, 11834004, 12227807, and 12241407) and the Science and Technology Commission of Shanghai Municipality (Grant No. 21ZR1420100).
Abstract: Electronic processes within atoms and molecules reside on the timescale of attoseconds. Recent advances in laser-based pump-probe interrogation techniques have made possible the temporal resolution of ultrafast electronic processes on the attosecond timescale, including photoionization and tunneling ionization. These interrogation techniques include the attosecond streak camera, the reconstruction of attosecond beating by interference of two-photon transitions, and the attoclock. While the former two are usually employed to study photoionization processes, the latter is typically used to investigate tunneling ionization. In this review, we briefly overview these timing techniques towards an attosecond temporal resolution of ionization processes in atoms and molecules under intense laser fields. In particular, we review the backpropagation method, which is a novel hybrid quantum-classical approach towards the full characterization of tunneling ionization dynamics. Continued advances in the interrogation techniques promise to pave the pathway towards the exploration of ever faster dynamical processes on an ever shorter timescale.
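For context on the attoclock mentioned above, the sketch below converts a photoelectron emission-angle offset into a time offset using the optical period of a near-circularly polarized pulse, which is the basic angular-streaking relation. It is background only, not the backpropagation method reviewed in the paper, and the 800 nm wavelength is just a common example value.

```python
import math

# Illustrative sketch of the attoclock (angular-streaking) timing principle:
# a close-to-circularly polarized pulse rotates its field once per optical cycle,
# so an offset in the photoelectron emission angle maps onto a time offset.
# Generic background only, not the backpropagation method reviewed in the paper.

def angle_offset_to_time(delta_deg: float, wavelength_nm: float = 800.0) -> float:
    """Convert an emission-angle offset (degrees) to a time offset (seconds)."""
    c = 299_792_458.0                       # speed of light, m/s
    period = wavelength_nm * 1e-9 / c       # optical period, s
    return (delta_deg / 360.0) * period

# One degree of angular offset at 800 nm corresponds to roughly 7.4 attoseconds.
print(angle_offset_to_time(1.0) * 1e18, "as")
```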
Funding: Supported by the National Natural Science Foundation of China (12271062, 11731012) and the Hunan Provincial National Natural Science Foundation of China (2019JJ50405).
Abstract: The approach of Li and Zhou (2014) is adopted to find the Laplace transform of the occupation time over the interval (0, a) and the joint occupation times over the semi-infinite intervals (-∞, a) and (b, ∞) for a time-homogeneous diffusion process up to an independent exponential time e_q, for 0 < a < b. The results are expressed in terms of solutions to the differential equations associated with the diffusion generator. Applying these results, we obtain explicit expressions for the Laplace transform of the occupation time and the joint occupation time for Brownian motion with drift.
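For readers unfamiliar with the objects involved, the block below states the standard definitions of the occupation time and of the Laplace-transform quantities in question; the paper's exact conventions and boundary conditions may differ.

```latex
% Standard definitions behind the quantities studied (conventions in the paper may differ):
% occupation time of a set A by the diffusion X, evaluated at an independent exponential time e_q.
\[
  \Gamma_A(t) \;=\; \int_0^{t} \mathbf{1}_{A}\bigl(X_s\bigr)\,\mathrm{d}s,
  \qquad
  \mathbb{P}(e_q > t) = e^{-qt}, \quad e_q \perp X .
\]
\[
  \text{Single interval: }\;
  \mathbb{E}_x\!\left[e^{-\lambda\,\Gamma_{(0,a)}(e_q)}\right],
  \qquad
  \text{joint: }\;
  \mathbb{E}_x\!\left[e^{-\lambda_1\,\Gamma_{(-\infty,a)}(e_q)\,-\,\lambda_2\,\Gamma_{(b,\infty)}(e_q)}\right].
\]
```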
Abstract: Newton already mentioned indivisible time in the Principia. In 1899, Max Planck derived a unique time period from three universal constants, G, c, and ħ; today this is known as the Planck time. The Planck time is of the order of about 10^−44 seconds, while the best atomic clocks are down to 10^−19 seconds. An approach has recently been outlined that puts an upper limit on the quantization of time at 10^−33 seconds; this is, however, still far from the Planck time. We demonstrate that the Planck time can easily be measured without any knowledge of any other physical constants. This is remarkable, as it means we have demonstrated that the Planck time, and therefore the Planck scale, is real and detectable. It has taken more than 100 years to understand this. The reason for the breakthrough in Planck-scale physics in recent years comes from understanding that G is a composite constant and that the true matter wavelength is the Compton wavelength rather than the de Broglie wavelength. When this is understood, the mysteries of the Planck scale can be uncovered. In this paper, we also demonstrate how to measure the number of Planck events in a gravitational mass without relying on any constants. This directly relates to a new and simple method for quantizing general relativity theory that we will also briefly discuss.
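To back the order-of-magnitude figure quoted above, the snippet below evaluates the conventional Planck time formula from G, c, and ħ. It only illustrates the textbook definition; the paper's claim is precisely that this value can be obtained without knowing these constants, which is not reproduced here.

```python
import math

# Conventional Planck time, t_P = sqrt(hbar * G / c**5), shown only to back the
# "order of 10^-44 s" figure in the abstract; the paper's constant-free measurement
# method is not reproduced here.
hbar = 1.054_571_817e-34   # reduced Planck constant, J*s
G = 6.674_30e-11           # Newtonian gravitational constant, m^3 kg^-1 s^-2
c = 299_792_458.0          # speed of light, m/s

t_planck = math.sqrt(hbar * G / c**5)
print(f"Planck time ≈ {t_planck:.3e} s")   # ≈ 5.391e-44 s
```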
Abstract: The frequency of any periodic event can be defined in terms of units of time. Planck constructed a unit of time, called the Planck time, from other physical constants. Vyasa defined a natural unit of time, the kshana, or moment, based on the motion of a fundamental particle: it is the time taken by an elementary particle to change its direction from east to north. According to Vyasa, the kshana is discrete, exceedingly small, indivisible, and a constant time quantum. When the intrinsic spin angular momentum of an electron was related to the angular momentum of a simple thin circular plate, a spherical shell, and a solid sphere model of the electron, we found that the value of the kshana was equal to 10^−21 seconds. The disc model of the spinning electron provides an accurate value for the number of kshanas per second, as determined previously and compared with the other spinning models of the electron. These results indicate that the disc-like model of the spinning electron is the correct model for electrons. Vyasa's definition of the kshana opens the possibility of a new foundation for the theory of physical time, and new perspectives in theoretical and philosophical research.
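The order of magnitude can be checked with a back-of-the-envelope sketch, under assumptions that are ours rather than the paper's: a thin-disc electron with the reduced Compton wavelength as radius and spin angular momentum ħ/2, for which a quarter rotation takes about 2 × 10^−21 s.

```python
import math

# Rough order-of-magnitude sketch only. Assumptions (not taken from the paper):
# the electron is a thin disc with moment of inertia I = (1/2) m r^2, its radius is
# the reduced Compton wavelength, and its spin angular momentum is hbar/2. A quarter
# rotation ("east to north") then takes ~2e-21 s, matching the 10^-21 s order quoted
# for the kshana; the paper's actual model parameters may differ.
hbar = 1.054_571_817e-34   # J*s
m_e = 9.109_383_7e-31      # electron mass, kg
c = 299_792_458.0          # m/s

r = hbar / (m_e * c)               # reduced Compton wavelength, m
I = 0.5 * m_e * r**2               # thin-disc moment of inertia
omega = (hbar / 2) / I             # angular velocity giving spin hbar/2
quarter_turn = (math.pi / 2) / omega
print(f"quarter-turn time ≈ {quarter_turn:.2e} s")   # ≈ 2.0e-21 s
```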
Funding: This work is supported by the National Key Research and Development Program of China (2022YFF1203001), the National Natural Science Foundation of China (Nos. 62072465, 62102425), and the Science and Technology Innovation Program of Hunan Province (Nos. 2022RC3061, 2023RC3027).
Abstract: Time series segmentation has attracted increasing interest in recent years. It aims to segment a time series into different segments, each reflecting a state of the monitored objects. Although there have been many surveys on time series segmentation, most of them focus on change point detection (CPD) methods and overlook advances in boundary detection (BD) and state detection (SD) methods. In this paper, we categorize time series segmentation methods into CPD, BD, and SD methods, with a specific focus on recent advances in BD and SD methods. Within the scope of BD and SD, we subdivide the methods based on their underlying models/techniques and focus on the milestones that have shaped the development trajectory of each category. In conclusion, we find that: (1) existing methods fail to provide sufficient support for online operation, with only a few methods supporting online deployment; (2) most existing methods require the specification of parameters, which hinders their ability to work adaptively; (3) existing SD methods do not attach importance to the accurate detection of boundary points in evaluation, which may lead to limitations in boundary point detection. We highlight the ability to work online and adaptively as important attributes of segmentation methods, and boundary detection accuracy as a neglected metric for SD methods.
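To make the CPD category concrete, here is a minimal, generic mean-shift change-point detector based on comparing two adjacent sliding windows. It illustrates the category only, is not a method surveyed in the paper, and its window size and threshold are arbitrary choices.

```python
import numpy as np

# Generic illustration of the change-point-detection (CPD) category discussed in the
# survey: score each index by the mean difference between the two adjacent windows
# and report indices whose score exceeds a threshold. Window size and threshold are
# arbitrary choices for the sketch, not values taken from the paper.

def mean_shift_change_points(x: np.ndarray, window: int = 20, thresh: float = 1.0):
    scores = np.zeros(len(x))
    for i in range(window, len(x) - window):
        scores[i] = abs(x[i:i + window].mean() - x[i - window:i].mean())
    return np.where(scores > thresh)[0], scores

rng = np.random.default_rng(0)
series = np.concatenate([rng.normal(0, 0.5, 100), rng.normal(3, 0.5, 100)])
candidates, _ = mean_shift_change_points(series)
print(candidates[:5])   # indices clustered around the true change point at 100
```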
Funding: Supported in part by the Gansu Province Higher Education Institutions Industrial Support Program: Security Situational Awareness with Artificial Intelligence and Blockchain Technology (Project Number 2020C-29).
Abstract: In the fast-evolving landscape of digital networks, the incidence of network intrusions has escalated alarmingly. At the same time, the crucial role of time series data in intrusion detection remains largely underappreciated, with most systems failing to capture the time-bound nuances of network traffic. This leads to compromised detection accuracy and overlooked temporal patterns. Addressing this gap, we introduce a novel SSAE-TCN-BiLSTM (STL) model that integrates time series analysis, significantly enhancing detection capabilities. Our approach reduces feature dimensionality with a Stacked Sparse Autoencoder (SSAE) and extracts temporally relevant features through a Temporal Convolutional Network (TCN) and a Bidirectional Long Short-Term Memory network (Bi-LSTM). By carefully adjusting time steps, we underscore the significance of temporal data in bolstering detection accuracy. On the UNSW-NB15 dataset, our model achieved an F1-score of 99.49%, accuracy of 99.43%, precision of 99.38%, recall of 99.60%, and an inference time of 4.24 s. For the CICIDS2017 dataset, we recorded an F1-score of 99.53%, accuracy of 99.62%, precision of 99.27%, recall of 99.79%, and an inference time of 5.72 s. These findings confirm not only the STL model's superior performance but also its operational efficiency, underpinning its significance in real-world cybersecurity scenarios where rapid response is paramount. Our contribution represents a significant advance in cybersecurity, proposing a model that excels in accuracy and adaptability to the dynamic nature of network traffic, setting a new benchmark for intrusion detection systems.
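The following PyTorch sketch shows one plausible arrangement of the SSAE, TCN, and BiLSTM stages described above. The layer sizes, the single dilated convolution standing in for a full TCN, and the plain encoder standing in for a stacked sparse autoencoder are our assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Structural sketch of an SSAE -> TCN -> BiLSTM intrusion-detection pipeline in the
# spirit of the STL model. Layer sizes, the single dilated conv standing in for a
# full TCN, and the plain autoencoder standing in for a stacked sparse autoencoder
# are assumptions for illustration, not the authors' implementation.

class STLSketch(nn.Module):
    def __init__(self, n_features: int, n_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(          # SSAE stand-in: reduce feature dim
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
        )
        self.tcn = nn.Sequential(              # TCN stand-in: one dilated convolution
            nn.Conv1d(32, 64, kernel_size=3, padding=2, dilation=2), nn.ReLU(),
        )
        self.bilstm = nn.LSTM(64, 32, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * 32, n_classes)

    def forward(self, x):                      # x: (batch, seq_len, n_features)
        z = self.encoder(x)                    # (batch, seq_len, 32)
        z = self.tcn(z.transpose(1, 2))        # conv over time: (batch, 64, seq_len)
        z = z.transpose(1, 2)                  # back to (batch, seq_len, 64)
        out, _ = self.bilstm(z)                # (batch, seq_len, 64)
        return self.head(out[:, -1, :])        # classify from the last time step

logits = STLSketch(n_features=42)(torch.randn(8, 10, 42))
print(logits.shape)                            # torch.Size([8, 2])
```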
Funding: Supported by the Shenzhen Sustainable Development Project (KCXFZ 20201221173013036) and the National Natural Science Foundation of China (91746107).
Abstract: In this paper, we mainly discuss a discrete estimation of the average differential entropy for a continuous time-stationary ergodic space-time random field. By estimating the probability value of a time-stationary random field in a small range, we give an entropy estimation and obtain the average entropy estimation formula in a certain bounded space region. It can be proven that the estimation of the average differential entropy converges to the theoretical value with probability 1. In addition, we also conduct numerical experiments for different parameters to verify the convergence result obtained in the theoretical proofs.
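As a concrete, if simplistic, counterpart to the estimation idea above, the sketch below applies a plug-in histogram estimator of differential entropy to samples of a toy field and averages it over a bounded spatial region; it is not the estimator or the convergence construction used in the paper.

```python
import numpy as np

# Generic plug-in (histogram) estimate of differential entropy from samples of a
# space-time field, averaged over a bounded spatial region. This illustrates the
# idea of estimating entropy from small-range probability values; it is not the
# specific estimator used in the paper.

def differential_entropy_hist(samples: np.ndarray, bins: int = 50) -> float:
    counts, edges = np.histogram(samples, bins=bins)
    width = edges[1] - edges[0]
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() + np.log(width))   # h ≈ -Σ p log p + log Δ

rng = np.random.default_rng(1)
field = rng.normal(size=(20, 20, 500))        # toy field: 20x20 spatial grid, 500 times
region_avg = np.mean([differential_entropy_hist(field[i, j])
                      for i in range(20) for j in range(20)])
print(region_avg, 0.5 * np.log(2 * np.pi * np.e))  # compare with Gaussian h ≈ 1.419
```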
Funding: The National Key Research and Development Program of China (Grant No. 2021YFA1402102), the National Natural Science Foundation of China (Grant No. 62171249), and the Tsinghua University Initiative Scientific Research Program.
Abstract: The composite time scale (CTS) provides a stable, accurate, and reliable time scale for modern society. Improving the CTS's real-time performance improves its stability, which strengthens the performance of related applications. Aiming at this goal, a method based on determining the optimal calculation interval and accelerating the adjustment stage is proposed in this paper. The determinants of the CTS's calculation interval (the characteristics of the clock ensemble, the measurement noise, the noise of the time and frequency synchronization system, and the auxiliary output generator noise floor) are studied, and the optimal calculation interval is obtained. We also investigate the effect of the ensemble algorithm's initial parameters on the CTS's adjustment stage, and design a strategy to obtain reasonable initial parameters for the ensemble algorithm. The results show that with reasonable initial parameters the adjustment stage can be finished rapidly or even shortened to zero. On this basis, we experimentally generate a distributed CTS with a calculation interval of 500 s whose stability outperforms those of the member clocks when the averaging time is longer than 1700 s. The experimental result proves that the CTS's real-time performance is significantly improved.
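For orientation, the snippet below shows the core idea underlying a composite time scale: an inverse-variance-weighted average over member clocks. Practical ensemble algorithms, with prediction, calculation intervals, and adjustment stages, are far more involved, and this is not the algorithm used in the paper; the offsets and variances are invented.

```python
import numpy as np

# Simplified illustration of the core idea of a composite time scale: the ensemble
# time is a weighted average of member-clock deviations, with weights inversely
# proportional to each clock's instability. Real ensemble algorithms (e.g. AT1 or
# Kalman based, with prediction and adjustment stages) are considerably more
# involved; this is not the algorithm used in the paper.

def ensemble_offset(clock_offsets: np.ndarray, variances: np.ndarray) -> float:
    """clock_offsets: measured offsets of member clocks vs a common reference (s)."""
    weights = 1.0 / variances
    weights /= weights.sum()
    return float(np.dot(weights, clock_offsets))

offsets = np.array([12e-9, 9e-9, 15e-9])        # three member clocks, seconds
variances = np.array([1e-26, 4e-26, 2e-26])     # assumed instability variances
print(f"ensemble offset ≈ {ensemble_offset(offsets, variances):.2e} s")
```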
Funding: Supported in part by the National Natural Science Foundation of China (Grant No. 61971259), the National Key R&D Program of China (Grant No. 2021YFA1402102), and the Tsinghua University Initiative Scientific Research Program.
Abstract: The composite time scale (CTS) provides an accurate and stable time-frequency reference for modern science and technology. A conventional CTS always features a centralized network topology, which means that the CTS is accompanied by a local master clock. This largely restricts the stability and reliability of the CTS. We simulate this restriction and analyze the influence of the master clock on the CTS. It is shown that the CTS's long-term stability is also positively related to that of the master clock, up to the region dominated by the frequency drift of the H-maser (averaging times longer than ~10^5 s). To address this restriction, a real-time clock network is utilized. Based on the network, a real-time CTS referenced to a stable remote master clock is achieved. An experiment comparing two real-time CTSs, referenced to a local and a remote master clock respectively, reveals that under open-loop steering the stability of the CTS is improved by referencing a remote, more stable master clock instead of a local, less stable one. In this way, with the help of the proposed scheme, the CTS can be referenced to the most stable master clock within the network in real time, whether it is local or remote, making democratic polycentric timekeeping possible.
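Stability statements of the kind quoted above are usually expressed through the Allan deviation; the generic overlapping estimator below is included for context only and runs on synthetic data rather than the paper's measurements.

```python
import numpy as np

# Overlapping Allan deviation from time-error (phase) data: the standard yardstick
# behind statements such as "stability for averaging times longer than 1e5 s".
# Generic metric for context only; the measurement data in the paper are not reproduced.

def overlapping_adev(x: np.ndarray, tau0: float, m: int) -> float:
    """x: time-error samples (s), tau0: sampling interval (s), m: averaging factor."""
    n = len(x)
    d = x[2 * m:n] - 2 * x[m:n - m] + x[:n - 2 * m]       # second differences at lag m
    avar = np.sum(d ** 2) / (2.0 * (n - 2 * m) * (m * tau0) ** 2)
    return float(np.sqrt(avar))

rng = np.random.default_rng(2)
tau0 = 1.0
x = np.cumsum(rng.normal(0, 1e-11, 100_000))   # toy clock: white-FM-like phase walk
for m in (1, 10, 100, 1000):
    print(f"tau = {m * tau0:7.0f} s   ADEV ≈ {overlapping_adev(x, tau0, m):.2e}")
```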
Funding: Supported by the National Natural Science Foundation of China (12002370).
Abstract: To solve the finite-time error-tracking problem in missile guidance, this paper presents a unified design approach based on error dynamics and free-time convergence theory. The proposed approach starts by establishing a desired model for free-time convergent error dynamics, characterized by its independence from initial conditions and guidance parameters and by an adjustable convergence time. This foundation facilitates the derivation of specific guidance laws that integrate constraints such as the leading angle, impact angle, and impact time. The theoretical framework of this study elucidates the nuances and synergies between the proposed guidance laws and existing methodologies. Empirical evaluations through simulation comparisons underscore the enhanced accuracy and adaptability of the proposed laws.
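A simple way to see what "convergence time independent of initial conditions" means is the prescribed-time error dynamics below, which we use only as a generic stand-in; the paper's free-time convergent model and its guidance laws are not reproduced.

```python
import numpy as np

# Illustration of error dynamics whose convergence time does not depend on the
# initial condition: the prescribed-time form  de/dt = -k*e/(T - t)  has the exact
# solution e(t) = e0*((T - t)/T)**k, which reaches zero at t = T for every e0 and
# every k > 0. This generic form is a stand-in, not the paper's free-time
# convergent error-dynamics model.

def error_trajectory(e0: float, t: np.ndarray, T: float = 5.0, k: float = 2.0):
    return e0 * ((T - t) / T) ** k          # closed-form solution of the ODE

t = np.array([0.0, 1.0, 3.0, 4.9, 5.0])
for e0 in (0.1, 1.0, 10.0):
    print(e0, np.round(error_trajectory(e0, t), 6))   # last entry is 0 for every e0
```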
Abstract: The standard approach to organ preservation in liver transplantation is static cold storage, and the time between the cross-clamping of a graft in the donor and its reperfusion in the recipient is defined as the cold ischemia time (CIT). This simple definition reveals a multifactorial time frame that depends on donor hepatectomy time, transit time, and recipient surgery time, and it is one of the most important donor-related risk factors that may influence graft and recipient survival. Recently, the growing demand for the use of marginal liver grafts has prompted scientific exploration to analyze ischemia time factors and develop different organ preservation strategies. This review details the CIT definition and analyzes its different factors. It also explores the most recent strategies developed to implement each timestamp of CIT and to protect the graft from ischemic injury.
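As a worked example of the definition above, the snippet computes CIT and its donor-hepatectomy, transit, and recipient-surgery components from timestamps; all times below are invented for illustration.

```python
from datetime import datetime

# Worked example of the definition in the review: cold ischemia time (CIT) runs from
# donor cross-clamp to recipient reperfusion and can be broken into donor hepatectomy,
# transit, and recipient-surgery components. All timestamps are invented.
cross_clamp = datetime(2024, 5, 1, 22, 10)
hepatectomy_end = datetime(2024, 5, 1, 23, 5)
arrival_recipient_centre = datetime(2024, 5, 2, 3, 40)
reperfusion = datetime(2024, 5, 2, 5, 55)

print("donor hepatectomy :", hepatectomy_end - cross_clamp)
print("transit           :", arrival_recipient_centre - hepatectomy_end)
print("recipient surgery :", reperfusion - arrival_recipient_centre)
print("total CIT         :", reperfusion - cross_clamp)   # 7:45:00
```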
Funding: Key R&D Program of Tianjin, China (No. 20YFYSGX00060).
Abstract: The electric vehicle (EV) is an ideal solution to the carbon emission issue and the future scarcity of fossil fuels. However, under the guidance of a static time-of-use tariff, a large number of EVs will concentrate their charging during the valley hours, leading to new load peaks. Therefore, this paper proposes a dynamic time-of-use tariff mechanism, which redefines the peak and valley time periods according to the predicted loads using the fuzzy C-means (FCM) clustering algorithm and then dynamically adjusts the peak and valley tariffs according to the actual load in each time period. Based on the proposed tariff mechanism, an EV charging optimization model is established with the lowest cost to the users and the lowest variance of the grid-side load as the objective functions. A weight selection principle with an equal loss rate for the two objectives is then proposed to transform the multi-objective optimization problem into a single-objective one. Finally, the EV charging load optimization model under three tariff strategies is set up and solved with the mathematical solver GUROBI. The results show that the EV charging load optimization strategy based on the dynamic time-of-use tariff can better balance the benefits between charging stations and users under different numbers and proportions of EVs connected to the grid, and can effectively reduce the grid load variance and improve the grid load curve.
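The period-redefinition step can be illustrated with a minimal fuzzy C-means clustering of an hourly load profile into valley, flat, and peak classes. The load values and FCM settings below are invented; the paper additionally drives this with predicted loads and then adjusts the tariffs themselves.

```python
import numpy as np

# Minimal fuzzy C-means (FCM) sketch used to split 24 hourly load values into
# valley / flat / peak periods, illustrating the period-redefinition step of a
# dynamic time-of-use tariff. Load profile and FCM parameters are invented.

def fcm_1d(x, c=3, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.random((c, len(x)))
    u /= u.sum(axis=0)                              # memberships sum to 1 per point
    for _ in range(iters):
        um = u ** m
        centers = um @ x / um.sum(axis=1)           # weighted cluster centres
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=0))
    return centers, u

hourly_load = np.array([38, 35, 33, 32, 31, 33, 45, 60, 72, 75, 74, 70,
                        68, 66, 65, 67, 72, 80, 85, 83, 75, 62, 50, 42.0])
centers, u = fcm_1d(hourly_load)
labels = np.argsort(np.argsort(centers))[u.argmax(axis=0)]
print(labels)   # per hour: 0 = valley, 1 = flat, 2 = peak
```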
Abstract: This paper presents a hypothesis regarding the existence of time fused in spacetime, assuming that time possesses the properties of both a particle and a field. This duality is referred to as the field-particle of time (FPT). The analysis shows that when the FPT moves through matter, it causes time dilation. The FPT is also a significant element that appears in the relativistic kinetic energy, KE = (γ − 1)·mc^2. Accelerating matter to near the speed of light requires relativistic energy approaching infinity, which corresponds to the relativistic kinetic energy, while the potential energy from the rest mass, PE = mc^2, remains constant. The mass-energy equation can then be rearranged in terms of PE and KE, as E = (1 + (γ − 1))·mc^2. The relativistic energy of the FPT also directly affects the gravitational attraction of matter; energy is transferred through spacetime. The analysis demonstrates that the gravitational force is inversely proportional to the distance squared, following Newton's law of gravity, and that it varies with the relative velocity of matter. The relationship between relative time and the gravitational constant indicates that a higher intensity of the gravitational field leads to a slower reference time for matter, in accordance with the general theory of relativity. A thought experiment compares two atomic clocks placed in different locations: the first at room temperature, around 25°C, on the surface of the Earth, and the second in a high-density region. The analysis, considering the presence of the FPT, shows that the reference time slows down in high-density regions; therefore, the second clock must run noticeably slower than the first, indicating the existence of the FPT passing through both atomic clocks at different speeds.
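The energy decomposition quoted above rearranges standard special-relativity relations; the short check below computes γ, the rest-energy and kinetic-energy terms, and verifies that they sum to γmc². The FPT interpretation itself is not modeled, and the electron mass and 0.9c speed are just example values.

```python
import math

# Standard special-relativity bookkeeping behind the decomposition quoted in the
# abstract, E = (1 + (gamma - 1)) * m c^2 = mc^2 + KE. Only the textbook relations
# are computed here; the field-particle-of-time (FPT) interpretation is not modeled.
c = 299_792_458.0          # m/s
m = 9.109_383_7e-31        # electron mass as an example, kg
v = 0.9 * c                # sample speed

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
rest_energy = m * c ** 2                 # the PE term in the abstract's notation
kinetic_energy = (gamma - 1.0) * m * c ** 2
total_energy = gamma * m * c ** 2        # equals rest_energy + kinetic_energy

print(f"gamma = {gamma:.4f}")
print(f"E_total - (PE + KE) = {total_energy - (rest_energy + kinetic_energy):.1e} J")
```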
Abstract: This paper is a further elaboration of the author's Time Dilation Cosmology (TDC) holographic model, which ties gravitation and celestial mechanics and kinematics directly to time dilation, resolving all the major conundrums in astrophysics, and ties astrophysics directly to quantum physics. It begins with a brief summary of the TDC model and contains the new derivation of the time dilation version of the formula for summing relativistic velocities, of Einstein's gravitational constant, and of the time dilation versions of the Lorentz factor and the Euclidean norm of the 3D velocity vector, the latter two of which can then be used in the four-velocity formula. It is demonstrated how orbital curvature is manifested as the resultant of two time dilation-manifested velocities. It also explains why an interferometer cannot distinguish free fall from zero gravity, further elaborates on the author's previous explanations of how spiral galaxies are formed, and contains mathematical proof that black holes are actually Magnetospheric Eternally Collapsing Objects (MECOs) that are massless spacetime vortices.
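For reference, the standard special-relativity forms that the abstract's "time dilation versions" reformulate are collected below; the paper's own derivations are not reproduced.

```latex
% Standard special-relativity forms referenced by the abstract (the paper derives
% "time dilation versions" of these, which are not reproduced here):
\[
  w \;=\; \frac{u + v}{1 + uv/c^{2}},
  \qquad
  \gamma \;=\; \frac{1}{\sqrt{1 - \lVert \mathbf{v} \rVert^{2}/c^{2}}},
  \qquad
  U^{\mu} \;=\; \gamma\,(c,\ \mathbf{v}),
  \quad U^{\mu}U_{\mu} = c^{2}.
\]
```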
Abstract: A time series model is used in this paper to describe the progress of circulating direct condensation heat recovery in the compound condensing process (CCP), which consists of two water-cooled condensing processes in series for a centrifugal chiller. A finite-time thermodynamics method is used to set up the time series simulation model. As a result, an upper bound on the recoverable condensation heat for the compound condensing process is obtained, which is in good agreement with the experimental results. The result is valuable and useful for the optimization design of condensing heat recovery.
Abstract: The IndyCar series distinguishes itself by providing the same design and operation of the single-seater to its drivers. The difference in times is then attributable to the skills of the drivers, but considering the data from the races makes it possible to test this assumption. The objective of this work was to establish a trajectory model to predict race times. A cross-sectional, correlational, and explanatory study was carried out with a sample of 18,474 records from the 2020-2023 IndyCar series. The results show that the time span predicts the time differences. In relation to studies of technology acceptance, the adjustment of the technology to human capacities as an explanation of the time differences in racing series is discussed.
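As an illustrative stand-in for the kind of regression behind "the time span predicts the time differences", the sketch below fits an ordinary least-squares line on synthetic data; the actual 18,474 IndyCar records are not available here.

```python
import numpy as np

# Illustrative ordinary-least-squares fit of "time differences" on "time span",
# standing in for the kind of trajectory/regression analysis described. The data
# below are synthetic, not the 18,474 IndyCar records.
rng = np.random.default_rng(3)
time_span = rng.uniform(60, 110, size=500)                 # e.g. lap time span, s
time_diff = 0.8 * time_span + rng.normal(0, 4, size=500)   # synthetic linear relation

X = np.column_stack([np.ones_like(time_span), time_span])
beta, *_ = np.linalg.lstsq(X, time_diff, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((time_diff - pred) ** 2) / np.sum((time_diff - time_diff.mean()) ** 2)
print(f"slope = {beta[1]:.3f}, intercept = {beta[0]:.2f}, R^2 = {r2:.3f}")
```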
Funding: Supported by the National Natural Science Foundation of China (Grant No. 52308340), the Innovative Projects of Universities in Guangdong (Grant No. 2022KTSCX208), and the Sichuan Transportation Science and Technology Project (Grant No. 2018-ZL-01).
Abstract: Historically, landslides have been the primary type of geological disaster worldwide. Generally, the stability of reservoir banks is primarily affected by rainfall and reservoir water level fluctuations. Moreover, the stability of reservoir banks changes with the long-term dynamics of external disaster-causing factors. Thus, assessing the time-varying reliability of reservoir landslides remains a challenge. In this paper, a machine learning (ML) based approach is proposed to analyze the long-term reliability of reservoir bank landslides in spatially variable soils through time series prediction. This study systematically investigates the prediction performance of three ML algorithms, i.e., the multilayer perceptron (MLP), the convolutional neural network (CNN), and long short-term memory (LSTM). Additionally, the effects of data quantity and data ratio on the predictive power of the deep learning models are considered. The results show that all three ML models can accurately depict the changes in the time-varying failure probability of reservoir landslides. The CNN model outperforms both the MLP and LSTM models in predicting the failure probability. Furthermore, selecting the right data ratio can improve the prediction accuracy of the failure probability obtained by the ML models.
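A generic version of the time-series setup compared in the paper is sketched below: past values of the failure-probability series are windowed to predict the next value, here with a small scikit-learn MLP on a synthetic series rather than the paper's monitoring data or its tuned MLP/CNN/LSTM models.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Generic one-step-ahead time-series framing of the kind compared in the paper
# (MLP/CNN/LSTM): the past `window` values of the failure-probability series are
# used to predict the next value. The series is synthetic and the MLP configuration
# is arbitrary; the paper's monitoring data and tuned models are not reproduced.

def make_windows(series: np.ndarray, window: int = 12):
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X, y

t = np.arange(400)
failure_prob = 0.05 + 0.02 * np.sin(2 * np.pi * t / 50) + 0.0001 * t   # toy Pf series

X, y = make_windows(failure_prob)
split = 300
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0)
model.fit(X[:split], y[:split])
rmse = np.sqrt(np.mean((model.predict(X[split:]) - y[split:]) ** 2))
print(f"test RMSE = {rmse:.4f}")
```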