Funding: Supported by the National Key R&D Program of China (Grant No. 2022YFF0503700), the Special Fund of Hubei Luojia Laboratory (220100011), and the Dragon 5 Cooperation 2020-2024 (Project No. 59236).
Abstract: In this study, we provide the first detailed analysis of variations in the spacecraft potential (Vs) of the three Swarm satellites, which fly at about 400-500 km. Unlike previous studies that have investigated extreme charging events, usually with spacecraft potentials as negative as −100 V, this study focuses on variations of Swarm Vs readings, which fall within a few negative volts. The Swarm observations show that spacecraft at low Earth orbit (LEO) altitudes are charged only slightly negatively, varying between −7 V and 0 V, with the majority of recorded potentials clustering close to −2 V. However, a second peak of Vs data is found at −5.5 V, though the number of these more-negative events is an order of magnitude lower than that near the −2 V peak. These two distinct Vs peaks suggest two different causes. We have thus divided the Swarm spacecraft Vs data into two categories: less-negatively charged (−5 < Vs < 0 V) and more-negatively charged (−6.5 < Vs < −5 V). These two Vs categories exhibit different spatial and temporal distributions. The Vs observations in the first category remain relatively closer to 0 V above the magnetic equator, but become much more negative at low and middle latitudes on the day side; at high latitudes, these first-category Vs readings are relatively more negative during local summer. Second-category Vs events cluster into two bands at middle latitudes (between ±20°-50° magnetic latitude), with slightly more negative readings in the South Atlantic Anomaly (SAA) region; at high latitudes, these rarer but more-negative second-category Vs events exhibit relatively more negative values during local winter, which is opposite to the seasonal pattern seen in the first category. 
By comparing Vs data to the distributions of background plasma density at Swarm altitudes, we find for the first category that more-negative Vs readings are recorded in regions with higher background plasma density, while for the second category the more-negative Vs data are observed in regions with lower background plasma density. This can be explained as follows: the electron and ion fluxes incident on the Swarm surface, whose difference determines the potential of Swarm, are dominated by the background "cold" plasma (due to ionization) for the first Vs category and by "hot" plasma (due to particles precipitating from the magnetosphere) for the second.
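The current-balance argument in the final sentence can be illustrated with the textbook Langmuir floating-potential estimate for a surface immersed in a Maxwellian plasma (a standard approximation for illustration only, not the analysis used in the study): the surface charges negative until the repelled electron flux matches the ion flux.

```python
import math

def floating_potential(te_ev, ion_mass_amu=16.0):
    """Estimate the floating potential (V) of a planar surface in a
    Maxwellian plasma: Vf = -(kTe/2e) * ln(mi / (2*pi*me)), where
    te_ev is the electron temperature kTe/e expressed in volts."""
    me_amu = 1.0 / 1836.15  # electron mass in atomic mass units
    return -0.5 * te_ev * math.log(ion_mass_amu / (2.0 * math.pi * me_amu))

# For an O+-dominated ionosphere with Te ~ 0.2-0.5 eV (illustrative
# values), the estimate lands at a few negative volts, consistent in
# magnitude with the -2 V peak reported above.
for te in (0.2, 0.35, 0.5):
    print(f"Te = {te:.2f} eV -> Vf ~ {floating_potential(te):.2f} V")
```

This simple balance already explains why LEO charging stays within a few negative volts in dense cold plasma, and why a hotter (higher-Te) precipitating population drives the potential further negative.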
Funding: This work has been jointly supported by the National Natural Science Foundation of China (Grant Nos. 81900970 and 81921002), the Young Elite Scientists Sponsorship Program by CAST (2018QNRC001), and the Shanghai Sailing Program (19YF1426000).
Abstract: Bone tissue engineering has emerged as a promising alternative therapy for patients who suffer bone fractures or defects caused by trauma, congenital diseases or tumours. However, the reconstruction of bone defects combined with osteoporosis remains a great challenge for clinicians and researchers. In our previous study, Ca–Si-based bioceramics (MSCs) showed enhanced bone formation capabilities under normal conditions, and strontium has been demonstrated to be therapeutic in improving bone quality in osteoporosis patients. Therefore, in the present study, we attempted to enlarge the application range of MSCs through Sr incorporation in an osteoporotic bone regeneration model, to evaluate whether Sr could improve regeneration outcomes. In vitro results suggested that Sr-incorporated MSC scaffolds could enhance the expression of osteogenic and angiogenic markers in osteoporotic bone mesenchymal stem cells (OVX BMSCs). Animal experiments showed a larger new bone area; in particular, there was a tendency for blood vessel formation to be enhanced in the Sr-MSC scaffold group, demonstrating its positive osteogenic capacity in bone regeneration. This study systematically illustrated the effective delivery of a low-cost therapeutic Sr agent in an osteoporotic model and provides new insight into the treatment of bone defects in osteoporosis patients.
Funding: Supported by the Second Tibetan Plateau Scientific Expedition and Research Program (STEP) (Grant No. 2019QZKK010203), the National Natural Science Foundation of China (Grant Nos. 42175174 and 41975130), the Natural Science Foundation of Sichuan Province (Grant No. 2022NSFSC1092), and the Sichuan Provincial Innovation Training Program for College Students (Grant No. S202210621009).
Abstract: In a convective scheme featuring a discretized cloud size density, the assumed lateral mixing rate is inversely proportional to the exponential coefficient of plume size. This coefficient is typically assumed to be −1, but that assumption carries inherent uncertainties, especially for deep-layer clouds. Addressing this knowledge gap, we conducted comprehensive large eddy simulations and comparative analyses focused on terrestrial regions. Our investigation revealed that cloud formation adheres to the tenets of Bernoulli trials, exhibiting power-law scaling between cloud size and cloud number that remains consistent regardless of the inherent attributes of deep-layer clouds. This scaling encompasses liquid, ice, and mixed phases in deep-layer clouds. The exponent characterizing the relationship between cloud size and number in deep-layer clouds, whether liquid, ice, or mixed-phase, resembles that of shallow convection but converges closer to zero. This convergence signifies a propensity for diminished cloud numbers and sizes within deep-layer clouds. Notably, the infusion of abundant moisture and the release of latent heat by condensation within the lower atmospheric strata make substantial contributions, although their role in ice-phase formation is limited. The emergence of liquid and ice phases in deep-layer clouds is facilitated by latent heat and influenced by wind shear at middle levels. These interrelationships hold potential applications in formulating parameterizations and post-processing model outcomes.
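The size–number power law described above is conventionally estimated as a slope in log-log space. The sketch below illustrates that procedure with a plain least-squares fit on synthetic counts; the exponent of −1.7 and the size range are hypothetical placeholders, not values from the study.

```python
import math
import random

def fit_power_law(sizes, counts):
    """Fit counts ~ C * sizes**b by least squares in log-log space
    and return the exponent b (the power-law slope)."""
    xs = [math.log(s) for s in sizes]
    ys = [math.log(n) for n in counts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Synthetic cloud-size distribution with an assumed exponent of -1.7
# (hypothetical): larger clouds are progressively rarer.
random.seed(0)
sizes = [100 * 2 ** k for k in range(8)]  # cloud sizes, e.g. in metres
counts = [5000 * (s / sizes[0]) ** -1.7 * random.uniform(0.9, 1.1)
          for s in sizes]
print(f"fitted exponent: {fit_power_law(sizes, counts):.2f}")
```

A fitted exponent closer to zero, as the abstract reports for deep-layer clouds, would correspond to a flatter distribution, i.e. relatively fewer small clouds dominating the counts.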
Abstract: It is fundamentally challenging to build a secure system atop current computer architectures. The complexity of software, hardware and ASIC manufacture has grown beyond the capability of existing verification methodologies. Without whole-system verification, current systems have no proven security. Current systems are exposed to a variety of attacks due to the large number of exploitable security vulnerabilities. Some vulnerabilities are difficult to remove without significant performance impact, because performance and security can conflict with each other. Even worse, attacks are constantly evolving, and sophisticated attacks are now capable of systematically exploiting multiple vulnerabilities while remaining hidden from detection. In seeking to harden current computer architectures, existing defenses are mostly ad hoc and passive in nature. They are normally developed in response to specific attacks, after specific vulnerabilities have been discovered. As a result, they are not yet systematic in protecting systems from existing attacks and are likely defenseless against zero-day attacks. To confront these challenges, this paper proposes the Security-first Architecture, a concept that enforces systematic and active defenses using Active Security Processors. In systems built on this concept, traditional processors (i.e., Computation Processors) are monitored and protected by Active Security Processors. The two types of processors execute on their own physically isolated resources, including memory, disks, network and I/O devices. The Active Security Processors are provided with dedicated channels to access all the resources of the Computation Processors, but not vice versa. This allows the Active Security Processors to actively detect and tackle malicious activities in the Computation Processors with minimum performance degradation, while the resource isolation protects them from attacks launched from the Computation Processors.