Cutaneous neurofibroma (cNF) is a prevalent clinical manifestation of neurofibromatosis type 1, significantly affecting the well-being and quality of life of the affected individuals. The adoption of reliable and reproducible volumetric measurement techniques is essential for precisely evaluating tumor burden and plays a critical role in the development of effective treatments for cNF. This study focuses on widely used volumetric measurement techniques, including vernier calipers, ultrasound, computed tomography, magnetic resonance imaging, and three-dimensional scanning imaging. It outlines the merits and drawbacks of each technique in assessing the cNF load, providing an overview of their current applications and ongoing research advancements in this domain.
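Where caliper measurements are the only available data, lesion volume is often approximated with an ellipsoid formula; the short sketch below illustrates that generic approximation only. It is not a method prescribed by the review, and the dimensions are hypothetical.

```python
# Minimal sketch: ellipsoid approximation of tumor volume from caliper
# measurements. This is a generic formula often used for externally
# measurable lesions; it is NOT taken from the review itself.
import math

def ellipsoid_volume_mm3(length_mm: float, width_mm: float, height_mm: float) -> float:
    """Approximate lesion volume (mm^3) assuming an ellipsoidal shape."""
    return math.pi / 6.0 * length_mm * width_mm * height_mm

print(ellipsoid_volume_mm3(12.0, 8.0, 5.0))  # ~251 mm^3 for a 12 x 8 x 5 mm lesion
```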
This work carried out a measurement study of the Ethereum Peer-to-Peer (P2P) network to gain a better understanding of the underlying nodes. Ethereum was chosen because it pioneered distributed applications, smart contracts, and Web3. Moreover, its application-layer language "Solidity" is widely used in smart contracts across different public and private blockchains. To this end, we wrote a new Ethereum client based on Geth to collect Ethereum node information. Moreover, various web scrapers were written to collect nodes' historical data from the Internet Archive and the Wayback Machine project. The collected data were compared with two other services that harvest the number of Ethereum nodes; our method collected more than 30% more nodes than the other services. The data were used to train a time-series neural network model to predict the number of online nodes in the future. Our findings show that fewer than 20% of nodes are the same from day to day, indicating that most nodes in the network change frequently, which raises a question about the stability of the network. Furthermore, historical data show that the top ten countries hosting Ethereum clients have not changed since 2016. The predominant operating system of the underlying nodes has shifted from Windows to Linux over time, increasing node security. The results also show that the number of Middle East and North Africa (MENA) Ethereum nodes is negligible compared with nodes recorded in other regions, which opens the door to developing new mechanisms to encourage users from these regions to contribute to this technology. Finally, the trained model demonstrated an accuracy of 92% in predicting the future number of nodes in the Ethereum network.
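The abstract states that a neural-network time-series model predicts the number of online nodes, but the architecture is not given here. The sketch below only illustrates a minimal sliding-window (autoregressive) formulation of that prediction task, using a linear least-squares fit and made-up daily counts rather than the authors' model or data.

```python
# Minimal sketch of sliding-window time-series prediction of daily node
# counts. The paper trains a neural network; a linear autoregression is
# used here only to illustrate the windowed setup (hypothetical data).
import numpy as np

counts = np.array([5200, 5310, 5290, 5400, 5380, 5460, 5520, 5490, 5570, 5610], float)
window = 3

# Build (X, y): predict the next day's count from the previous `window` days.
X = np.array([counts[i:i + window] for i in range(len(counts) - window)])
y = counts[window:]
Xb = np.hstack([X, np.ones((len(X), 1))])          # add bias column
coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)      # least-squares fit

next_day = np.append(counts[-window:], 1.0) @ coef
print(f"predicted next-day node count: {next_day:.0f}")
```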
We present a quantitative measurement of the horizontal component of the microwave magnetic field of a coplanar waveguide using a quantum diamond probe in fiber format. The measurement results are compared in detail with simulation, showing good consistency. Further simulation shows that the fiber diamond probe introduces negligible disturbance to the field under measurement compared with a bulk diamond. This method will find important applications ranging from electromagnetic compatibility testing to failure analysis of high-frequency, highly complex integrated circuits.
With the maturity and development of the 5G field, Mobile Edge CrowdSensing (MECS), as an intelligent data collection paradigm, provides a broad prospect for various applications in IoT. However, sensing users as data uploaders lack a balance between data benefits and privacy threats, leading to conservative data uploads and low revenue or excessive uploads and privacy breaches. To solve this problem, a Dynamic Privacy Measurement and Protection (DPMP) framework is proposed based on differential privacy and reinforcement learning. Firstly, a DPM model is designed to quantify the amount of data privacy, and a calculation method for the personalized privacy threshold of different users is also designed. Furthermore, a Dynamic Private sensing data Selection (DPS) algorithm is proposed to help sensing users maximize data benefits within their privacy thresholds. Finally, theoretical analysis and extensive experimental results show that the DPMP framework effectively and efficiently achieves a balance between data benefits and sensing-user privacy protection; in particular, the proposed DPMP framework achieves 63% and 23% higher training efficiency and data benefits, respectively, compared with the Monte Carlo algorithm.
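Since the DPM model builds on differential privacy, a useful reference point is the standard Laplace mechanism, which perturbs a value with noise scaled to sensitivity/ε. The sketch below shows only that textbook mechanism under a hypothetical per-user privacy budget; it does not reproduce the paper's DPMP quantification or its reinforcement-learning selection algorithm.

```python
# Minimal sketch of the standard Laplace mechanism of differential privacy,
# shown only to illustrate how a per-user privacy budget (epsilon) can bound
# what an uploaded sensing value reveals. The full DPMP framework (privacy
# quantification plus reinforcement-learning data selection) is not reproduced.
import numpy as np

def laplace_mechanism(value: float, sensitivity: float, epsilon: float,
                      rng: np.random.Generator) -> float:
    """Release `value` with epsilon-differential privacy via Laplace noise."""
    scale = sensitivity / epsilon
    return value + rng.laplace(loc=0.0, scale=scale)

rng = np.random.default_rng(0)
true_reading = 23.7                       # hypothetical sensed value
for eps in (0.1, 1.0, 10.0):              # smaller epsilon -> stronger privacy
    print(eps, laplace_mechanism(true_reading, sensitivity=1.0, epsilon=eps, rng=rng))
```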
A dedicated weak current measurement system was designed to measure the weak currents generated by the neutron ionization chamber. This system incorporates a second-order low-pass filter circuit and the Kalman filtering algorithm to effectively filter out noise and minimize interference in the measurement results. Testing conducted under normal temperature conditions demonstrated the system's high-precision performance. However, it was observed that temperature variations can affect the measurement performance. Data were collected across temperatures ranging from -20 to 70℃, and a temperature correction model was established through linear regression fitting to address this issue. The feasibility of the temperature correction model was confirmed at temperatures of -5 and 40℃, where relative errors remained below 0.1% after applying the temperature correction. The research indicates that the designed measurement system exhibits excellent temperature adaptability and high precision, making it particularly suitable for measuring weak currents.
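To make the two signal-processing steps named in the abstract concrete, the sketch below pairs a scalar (random-walk) Kalman filter with a linear-regression temperature correction. Every noise level, temperature, and reading in it is invented for illustration and is not taken from the described system.

```python
# Minimal sketch: scalar Kalman filter for a slowly varying current signal,
# followed by a linear temperature correction obtained by least-squares
# fitting, as described qualitatively in the abstract. All noise levels,
# temperatures, and readings below are made up for illustration.
import numpy as np

def kalman_1d(measurements, q=1e-6, r=1e-2):
    """Filter a noisy scalar sequence with a random-walk state model."""
    x, p = measurements[0], 1.0
    out = []
    for z in measurements:
        p += q                       # predict
        k = p / (p + r)              # Kalman gain
        x += k * (z - x)             # update
        p *= (1.0 - k)
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(1)
true_current = 50.0                                  # pA, hypothetical
raw = true_current + rng.normal(0.0, 0.5, 200)
filtered = kalman_1d(raw)

# Temperature correction: fit reading drift vs. temperature, then subtract it.
temps = np.array([-20, 0, 20, 40, 70], float)        # deg C
drift = np.array([0.42, 0.18, 0.0, -0.15, -0.47])    # pA, hypothetical drift
slope, intercept = np.polyfit(temps, drift, 1)
corrected = filtered[-1] - (slope * 40.0 + intercept)  # correct a reading taken at 40 deg C
print(round(filtered[-1], 3), round(corrected, 3))
```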
Owing to the complex lithology of unconventional reservoirs, field interpreters usually need to provide a basis for interpretation using logging simulation models. Among the various detection tools that use nuclear sources, the detector response can reflect various types of information about the medium. The Monte Carlo method is one of the primary methods used to obtain nuclear detection responses in complex environments. However, it requires a computational process with extensive random sampling, consumes considerable resources, and does not provide real-time response results. Therefore, a novel fast forward computational method (FFCM) for nuclear measurement that uses volumetric detection constraints to rapidly calculate the detector response in various complex environments is proposed. First, the data library required for the FFCM is built by collecting the detection volume, detector counts, and flux sensitivity functions through a Monte Carlo simulation. Then, based on perturbation theory and the Rytov approximation, a model for the detector response is derived using the flux sensitivity function method and a one-group diffusion model. The environmental perturbation is constrained to optimize the model according to the tool structure and the impact of the formation and borehole within the effective detection volume. Finally, the method is applied to a neutron porosity tool for verification. In various complex simulation environments, the maximum relative error between the porosity calculated by the Monte Carlo method and by the FFCM was 6.80%, with a root-mean-square error of 0.62 p.u. In field well applications, the formation porosity model obtained using the FFCM was in good agreement with the model obtained by interpreters, which demonstrates the validity and accuracy of the proposed method.
Knowledge about seismic elastic modulus dispersion, and the associated attenuation, in fluid-saturated rocks is essential for better interpretation of seismic observations taken as part of hydrocarbon identification and time-lapse seismic surveillance of both conventional and unconventional reservoir and overburden performances. A Seismic Elastic Moduli Module has been developed, based on the forced-oscillation method, to experimentally investigate the frequency dependence of Young's modulus and Poisson's ratio, as well as the inferred attenuation, of cylindrical samples under different confining pressure conditions. Calibration with three standard samples showed that the measured elastic moduli were consistent with published data, indicating that the new apparatus can operate reliably over a wide frequency range of f ∈ [1-2000, 10^6] Hz. The Young's modulus and Poisson's ratio of the shale and tight sandstone samples were measured under axial stress oscillations to assess frequency- and pressure-dependent effects. Under dry conditions, both samples appear to be nearly frequency independent, with weak pressure dependence for the shale and significant pressure dependence for the sandstone. In particular, it was found that the tight sandstone with complex pore microstructure exhibited apparent dispersion and attenuation under brine or glycerin saturation conditions, the levels of which were strongly influenced by the increased effective pressure. In addition, the measured Young's moduli were compared with theoretical predictions from a scaled poroelastic model with reasonably good agreement, revealing that combined fluid-flow mechanisms at both mesoscopic and microscopic scales are possibly responsible for the measured dispersion.
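In forced-oscillation measurements of this kind, Young's modulus is commonly taken from the stress/strain amplitude ratio at the drive frequency and the attenuation from the stress-strain phase lag (1/Q ≈ tan φ). The sketch below reduces synthetic signals that way; it is only a schematic of the data reduction, not the apparatus's processing chain, and all numbers are invented.

```python
# Minimal sketch of how forced-oscillation data are commonly reduced:
# Young's modulus from the stress/strain amplitude ratio, attenuation
# (1/Q) from the stress-strain phase lag, using a projection onto the
# drive frequency. Synthetic signals below stand in for real sensor data.
import numpy as np

f = 10.0                                   # drive frequency, Hz
t = np.linspace(0.0, 1.0, 2000, endpoint=False)
phase_lag = 0.02                           # rad, hypothetical
strain = 1e-6 * np.sin(2 * np.pi * f * t)
stress = 30e9 * 1e-6 * np.sin(2 * np.pi * f * t + phase_lag)   # Pa

def amp_phase(signal, t, f):
    """Amplitude and phase of `signal` at frequency f via quadrature projection."""
    s = 2.0 * np.mean(signal * np.sin(2 * np.pi * f * t))
    c = 2.0 * np.mean(signal * np.cos(2 * np.pi * f * t))
    return np.hypot(s, c), np.arctan2(c, s)

a_eps, ph_eps = amp_phase(strain, t, f)
a_sig, ph_sig = amp_phase(stress, t, f)
young = a_sig / a_eps                      # Pa
inv_q = np.tan(ph_sig - ph_eps)            # attenuation 1/Q ~ tan(phase lag)
print(f"E = {young/1e9:.1f} GPa, 1/Q = {inv_q:.4f}")
```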
This study employs the generalized method of moments (GMM) and panel vector autoregression (PVAR) models for a multi-factor quantitative dissection of China's poverty reduction process across multiple stages, using provincial panel data from 2000 to 2019. According to our research, economic growth and social development are the key drivers of poverty reduction in China, but the trickle-down effect of economic growth is diminishing and marketization is having a lesser pro-poor effect. Public expenditure has failed to provide social protection and income redistribution benefits due to issues such as targeting error and elite capture. Increasing the efficiency of the poverty reduction system calls for adaptive adjustments. Finally, this study highlights China's poverty reduction experiences and analyzes current challenges, which serve as inspiration for consolidating poverty-reduction achievements, combating relative poverty, and attaining countryside vitalization.
In this paper, to study the mechanical responses of a solid propellant subjected to ultrahigh acceleration overload during the gun-launch process, specifically designed projectile flight tests with an onboard measurement system were performed. Two projectiles containing dummy HTPB propellant grains were successfully recovered after the flight tests with an ultrahigh acceleration overload value of 8100 g. The onboard-measured time-resolved axial displacement, contact stress, and overload values were successfully obtained and analysed. Uniaxial compression tests of the dummy HTPB propellant used in the gun-launched tests were carried out at low and intermediate strain rates to characterize the propellant's dynamic properties. A linear viscoelastic constitutive model was employed and applied in finite-element simulations of the projectile-launching process. During the launch process, the dummy propellant grain exhibited large deformation due to the high acceleration overload, possibly leading to friction between the motor case and propellant grain. The calculated contact stress showed good agreement with the experimental results, though discrepancies in the overall displacement of the dummy propellant grain were observed. The dynamic mechanical response process of the dummy propellant grain was analysed in detail. The results can be used to estimate the structural integrity of the analysed dummy propellant grain during the gun-launch process.
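A linear viscoelastic constitutive model of the kind referenced here is usually written as a Prony-series relaxation modulus evaluated through a hereditary (convolution) integral. The sketch below shows that generic form with placeholder moduli and relaxation times; it is not the calibrated HTPB parameterization or the finite-element implementation used in the paper.

```python
# Minimal sketch of a linear viscoelastic (Prony-series) stress update:
# sigma(t) = integral of E(t - s) * d(eps)/ds ds, evaluated by a discrete
# hereditary integral. Modulus terms below are placeholders, not the
# calibrated HTPB-propellant parameters from the paper.
import numpy as np

E_inf = 2.0e6                         # long-term modulus, Pa (hypothetical)
E_i   = np.array([4.0e6, 1.5e6])      # Prony moduli, Pa
tau_i = np.array([1e-3, 1e-1])        # relaxation times, s

def relaxation_modulus(t):
    return E_inf + np.sum(E_i * np.exp(-t[:, None] / tau_i), axis=1)

dt = 1e-4
t = np.arange(0.0, 0.05, dt)
strain = 0.02 * np.clip(t / 0.01, 0.0, 1.0)       # ramp-and-hold strain history
dstrain = np.diff(strain, prepend=0.0)

E_t = relaxation_modulus(t)
# Hereditary integral: sigma(t_n) = sum_k E(t_n - t_k) * d(eps)_k
stress = np.array([np.sum(E_t[n::-1][:n + 1] * dstrain[:n + 1]) for n in range(len(t))])
print(f"peak stress ~ {stress.max()/1e3:.1f} kPa")
```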
Ultrafast charge exchange recombination spectroscopy (UF-CXRS) has been developed on the EAST tokamak (Yingying Li et al 2019 Fusion Eng. Des. 146 522) to measure fast evolutions of ion temperature and toroidal velocity. Here, we report the preliminary diagnostic measurements after relative sensitivity calibration. The measurement results show a much higher temporal resolution compared with conventional CXRS, benefiting from the use of a prism-coupled, high-dispersion volume-phase holographic transmission grating and a high quantum efficiency, high-gain detector array. Utilizing the UF-CXRS diagnostic, the fast evolutions of the ion temperature and rotation velocity during a set of high-frequency, small-amplitude edge-localized modes (ELMs) are obtained on the EAST tokamak, which are then compared with the case of large-amplitude ELMs.
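CXRS diagnostics commonly infer ion temperature from the Doppler (thermal) broadening of a charge-exchange line and rotation from its Doppler shift. The sketch below evaluates those two textbook relations with made-up fit results; the carbon line wavelength is approximate, and nothing here reproduces the EAST calibration or analysis chain.

```python
# Minimal sketch of how ion temperature and toroidal rotation are commonly
# extracted from a charge-exchange line: temperature from the Doppler
# (Gaussian) width and rotation from the Doppler shift of the line centre.
# Constants and the fitted numbers below are illustrative, not EAST data.
import math

C = 2.998e8            # speed of light, m/s
AMU_EV = 931.494e6     # ion rest-mass energy per amu, eV

def ti_from_fwhm(fwhm_nm, lambda0_nm, mass_amu):
    """Ion temperature (eV) from the Doppler FWHM of a thermally broadened line."""
    return mass_amu * AMU_EV * (fwhm_nm / lambda0_nm) ** 2 / (8.0 * math.log(2.0))

def velocity_from_shift(dlambda_nm, lambda0_nm):
    """Line-of-sight velocity (m/s) from the Doppler shift of the line centre."""
    return C * dlambda_nm / lambda0_nm

lambda0 = 529.05       # nm, C VI charge-exchange line (approximate)
print(ti_from_fwhm(0.25, lambda0, 12.0), "eV")       # hypothetical fitted FWHM
print(velocity_from_shift(0.05, lambda0), "m/s")     # hypothetical fitted shift
```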
This study aims to improve the accuracy and safety of steel plate thickness calibration. A differential noncontact thickness measurement calibration system based on laser displacement sensors was designed to address the problems of the low precision of traditional contact thickness gauges and the radiation risks of radiation-based thickness gauges. First, the measurement method and measurement structure of the thickness calibration system were introduced. Then, the hardware circuit of the system was established based on an STM32 core chip. Finally, the system software was designed to implement system control, filtering algorithms, and human-computer interaction. Experiments have proven the excellent performance of the differential noncontact thickness measurement calibration system based on laser displacement sensors, which not only considerably improves measurement accuracy but also effectively reduces safety risks during the measurement process. The system offers guiding significance and application value in the field of steel plate production and processing.
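The differential principle behind such a gauge is that two opposed laser displacement sensors with a known, calibrated separation each measure the gap to one surface of the plate, so the thickness is the separation minus both gaps. The sketch below illustrates only that arithmetic with invented numbers; it does not represent the STM32 firmware, filtering, or calibration described in the study.

```python
# Minimal sketch of the differential principle behind a two-sensor,
# non-contact thickness gauge: two opposed laser displacement sensors
# measure the gaps to the top and bottom surfaces, and thickness is the
# fixed sensor-to-sensor distance minus both gaps. Numbers are illustrative.
SENSOR_SPAN_MM = 60.000        # calibrated distance between the two sensors

def plate_thickness(d_top_mm: float, d_bottom_mm: float) -> float:
    """Thickness from the two standoff readings (differential measurement)."""
    return SENSOR_SPAN_MM - d_top_mm - d_bottom_mm

# Example: readings of 24.987 mm and 25.010 mm imply a ~10.003 mm plate.
print(plate_thickness(24.987, 25.010))
```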
To compare finite element analysis (FEA) predictions and stereovision digital image correlation (StereoDIC) strain measurements at the same spatial positions throughout a region of interest, a field comparison procedure is developed. The procedure includes (a) conversion of the finite element data into a triangular mesh, (b) selection of a common coordinate system, (c) determination of the rigid body transformation to place both measurements and FEA data in the same system, and (d) interpolation of the FEA nodal information to the same spatial locations as the StereoDIC measurements using barycentric coordinates. For an aluminum Al-6061 double edge notched tensile specimen, FEA results are obtained using both the von Mises isotropic yield criterion and Hill's quadratic anisotropic yield criterion, with the unknown Hill model parameters determined using full-field specimen strain measurements for the nominally plane stress specimen. Using Hill's quadratic anisotropic yield criterion, the point-by-point comparison of experimentally based full-field strains and stresses to finite element predictions is shown to be in excellent agreement, confirming the effectiveness of the field comparison process.
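Step (d) relies on barycentric interpolation: each DIC point is located inside a triangle of the FEA mesh, and the nodal values are blended with the point's barycentric coordinates. The sketch below shows that standard formula for a single triangle; the mesh search and the authors' implementation are not reproduced, and the nodal values are hypothetical.

```python
# Minimal sketch of step (d): interpolating nodal FEA values to a StereoDIC
# measurement point with barycentric coordinates of the enclosing triangle.
# This is the standard barycentric formula, not the authors' implementation.
import numpy as np

def barycentric_interp(p, tri_xy, tri_vals):
    """Interpolate nodal values `tri_vals` (3,) at point `p` inside triangle `tri_xy` (3, 2)."""
    a, b, c = tri_xy
    # Solve p = l1*a + l2*b + l3*c with l1 + l2 + l3 = 1.
    T = np.array([[a[0] - c[0], b[0] - c[0]],
                  [a[1] - c[1], b[1] - c[1]]])
    l1, l2 = np.linalg.solve(T, np.asarray(p) - c)
    l3 = 1.0 - l1 - l2
    return l1 * tri_vals[0] + l2 * tri_vals[1] + l3 * tri_vals[2]

tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
strains = np.array([0.0010, 0.0016, 0.0022])             # nodal strain values (hypothetical)
print(barycentric_interp((0.25, 0.25), tri, strains))    # point lies inside the triangle
```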
Context: The advent of Artificial Intelligence (AI) requires modeling prior to its implementation in algorithms for most human skills. This observation requires us to have a detailed and precise understanding of the interfaces of verbal and emotional communication. The progress of AI is significant on the verbal level but modest in terms of the recognition of facial emotions, even though this functionality is one of the oldest in humans and is omnipresent in our daily lives. Dysfunction in the ability for facial emotional expressions is present in many brain pathologies encountered by psychiatrists, neurologists, psychotherapists, and mental health professionals including social workers. It cannot be objectively verified and measured due to a lack of reliable tools that are valid and consistently sensitive. Indeed, the articles in the scientific literature dealing with Visual-Facial-Emotions-Recognition (ViFaEmRe) suffer from the absence of 1) consensual and rational tools for continuous quantified measurement and 2) operational concepts. We have invented software that uses computer morphing in an attempt to respond to these two obstacles. It is identified as the Method of Analysis and Research of the Integration of Emotions (M.A.R.I.E.). Our primary goal is to use M.A.R.I.E. to understand the physiology of ViFaEmRe in normal healthy subjects by standardizing the measurements. Then, it will allow us to focus on subjects manifesting abnormalities in this ability. Our second goal is to contribute to the progress of AI, hoping to add the dimension of recognition of facial emotional expressions. Objective: To study: 1) categorical vs dimensional aspects of ViFaEmRe, 2) universality vs idiosyncrasy, 3) immediate vs ambivalent Emotional-Decision-Making, 4) the Emotional-Fingerprint of a face, and 5) creation of population reference data. Methods: M.A.R.I.E. enables a rational quantified measurement of Emotional-Visual-Acuity (EVA) of a) an individual observer and b) a population aged 20 to 70 years old; measurement of the range and intensity of emotions expressed by 3 Face-Tests; quantification of the performance of a sample of 204 observers with hypernormal measures of cognition, “thymia” (defined elsewhere), and low levels of anxiety; and analysis of the 6 primary emotions. Results: We have individualized the following continuous parameters: 1) “Emotional-Visual-Acuity”, 2) “Visual-Emotional-Feeling”, 3) “Emotional-Quotient”, 4) “Emotional-Decision-Making”, 5) “Emotional-Decision-Making Graph” or “Individual-Gun-Trigger”, 6) “Emotional-Fingerprint” or “Key-graph”, 7) “Emotional-Fingerprint-Graph”, 8) detection of “misunderstanding”, and 9) detection of “error”. This allowed us a taxonomy with coding of the face-emotion pair. Each face has specific measurements and graphics. EVA improves from ages 20 to 55 years, then decreases. It does not depend on the sex of the observer, nor on the face studied. In addition, 1% of people endowed with normal intelligence do not recognize emotions. The categorical dimension is a variable for everyone. The range and intensity of ViFaEmRe are idiosyncratic and not universally uniform. The recognition of emotions is purely categorical for a single individual. It is dimensional for a population sample.
Conclusions: Firstly, M.A.R.I.E. has made it possible to bring out new concepts and new continuous measurement variables. The comparison between healthy and abnormal individuals makes it possible to appreciate the significance of this line of study. From now on, these new functional parameters will allow us to identify and name “emotional” disorders or illnesses, which can give an additional dimension to behavioral disorders in all pathologies that affect the brain. Secondly, ViFaEmRe is idiosyncratic, categorical, and a function of the identity of the observer and of the observed face. These findings stack up against Artificial Intelligence, which cannot have a globalist or regionalist algorithm that can be programmed into a robot, nor can AI compete with human abilities and judgment in this domain. *Here “Emotional disorders” refers to disorders of emotional expression and recognition.
Context: The advent of Artificial Intelligence (AI) requires modeling prior to its implementation in algorithms for most human skills. This observation requires us to have a detailed and precise understanding of the interfaces of verbal and emotional communication. The progress of AI is significant on the verbal level but modest in terms of the recognition of facial emotions, even though this functionality is one of the oldest in humans and is omnipresent in our daily lives. Dysfunction in the ability for facial emotional expressions is present in many brain pathologies encountered by psychiatrists, neurologists, psychotherapists, and mental health professionals including social workers. It cannot be objectively verified and measured due to a lack of reliable tools that are valid and consistently sensitive. Indeed, the articles in the scientific literature dealing with Visual-Facial-Emotions-Recognition (ViFaEmRe) suffer from the absence of 1) consensual and rational tools for continuous quantified measurement and 2) operational concepts. We have invented software that uses computer morphing in an attempt to respond to these two obstacles. It is identified as the Method of Analysis and Research of the Integration of Emotions (M.A.R.I.E.). Our primary goal is to use M.A.R.I.E. to understand the physiology of ViFaEmRe in normal healthy subjects by standardizing the measurements. Then, it will allow us to focus on subjects manifesting abnormalities in this ability. Our second goal is to contribute to the progress of AI, hoping to add the dimension of recognition of facial emotional expressions. Objective: To study: 1) categorical vs dimensional aspects of ViFaEmRe, 2) universality vs idiosyncrasy, 3) immediate vs ambivalent Emotional-Decision-Making, 4) the Emotional-Fingerprint of a face, and 5) creation of population reference data. Methods: M.A.R.I.E. enables the rational, quantified measurement of Emotional-Visual-Acuity (EVA) in an individual observer and in a population aged 20 to 70 years. Meanwhile, it can measure the range and intensity of emotions expressed through three Face-Tests, quantify the performance of a sample of 204 observers with hypernormal measures of cognition, “thymia” (defined elsewhere), and low levels of anxiety, and perform analysis of the six primary emotions. Results: We have individualized the following continuous parameters: 1) “Emotional-Visual-Acuity”, 2) “Visual-Emotional-Feeling”, 3) “Emotional-Quotient”, 4) “Emotional-Decision-Making”, 5) “Emotional-Decision-Making Graph” or “Individual-Gun-Trigger”, 6) “Emotional-Fingerprint” or “Key-graph”, 7) “Emotional-Fingerprint-Graph”, 8) detection of “misunderstanding”, and 9) detection of “error”. This allowed us a taxonomy with coding of the face-emotion pair. Each face has specific measurements and graphics. EVA improves from ages 20 to 55 years, then decreases. It does not depend on the sex of the observer, nor on the face studied. In addition, 1% of people endowed with normal intelligence do not recognize emotions. The categorical dimension is a variable for everyone. The range and intensity of ViFaEmRe are idiosyncratic and not universally uniform. The recognition of emotions is purely categorical for a single individual. It is dimensional for a population sample.
Conclusions: Firstly, M.A.R.I.E. has made it possible to bring out new concepts and new continuous measurement variables. The comparison between healthy and abnormal individuals makes it possible to appreciate the significance of this line of study. From now on, these new functional parameters will allow us to identify and name “emotional” disorders or illnesses, which can give an additional dimension to behavioral disorders in all pathologies that affect the brain. Secondly, ViFaEmRe is idiosyncratic, categorical, and a function of the identity of the observer and of the observed face. These findings stack up against Artificial Intelligence, which cannot have a globalist or regionalist algorithm that can be programmed into a robot, nor can AI compete with human abilities and judgment in this domain. *Here “Emotional disorders” refers to disorders of emotional expression and recognition.
The evolution of the current network has challenges of programmability, maintainability, and manageability due to network ossification. This challenge led to the concept of software-defined networking (SDN), which decouples the control plane from the infrastructure (data) plane. The innovation created the controller placement problem: how to effectively place controllers within a network topology so that they manage the data plane devices from the control plane. The study was designed to empirically evaluate and compare the functionalities of two controller placement algorithms: POCO and MOCO. The methodology adopted in the study comprises explorative and comparative investigation techniques. The study evaluated the performance of the Pareto optimal combination (POCO) and multi-objective combination (MOCO) algorithms in relation to calibrated positions of the controller within a software-defined network. The network environment and measurement metrics were held constant for both the POCO and MOCO models during the evaluation, and the strengths and weaknesses of each model were justified. The results showed that, for the GoodNet network, the overall latencies of the two algorithms are 3100 ms for POCO and 2500 ms for MOCO; the Switch to Controller Average Case latency is 2598 ms for POCO and 2769 ms for MOCO, and the Worst Case Switch to Controller latency is 2776 ms for POCO and 2987 ms for MOCO. For the Savvis network, the two algorithms compare as follows: 2912 ms and 2784 ms for POCO and MOCO respectively in Switch to Controller Average Case latency, 3129 ms and 3017 ms in Worst Case Switch to Controller latency, 2789 ms and 2693 ms in Average Case Controller to Controller latency, and 2873 ms and 2756 ms in Worst Case Controller to Controller latency. For the AARNet network, they compare as follows: 2473 ms and 2129 ms for POCO and MOCO respectively in Switch to Controller Average Case latency, 2198 ms and 2268 ms in Worst Case Switch to Controller latency, 2598 ms and 2471 ms in Average Case Controller to Controller latency, and 2689 ms and 2814 ms in Worst Case Controller to Controller latency. The Average Case and Worst Case latencies for Switch to Controller and Controller to Controller are minimal and favourable to the POCO model as against the MOCO model when evaluated on the GoodNet, Savvis, and AARNet networks. This indicates that the POCO model has a speed advantage over the MOCO model, whereas the MOCO model appears to be more resilient than the POCO model.
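The latency figures being compared are, for a given controller placement, the average and the maximum shortest-path latency from each switch to its nearest controller. The sketch below computes those two metrics for a hypothetical placement on a toy topology; it implements neither POCO's Pareto-based search nor MOCO's multi-objective optimization.

```python
# Minimal sketch of the latency metrics used to compare controller
# placements: for a given placement, the average-case and worst-case
# shortest-path latency from every switch to its nearest controller.
# The graph and latencies are made up; the POCO/MOCO optimization itself
# (Pareto vs. multi-objective search) is not reproduced here.
import itertools

INF = float("inf")
# Symmetric link-latency matrix (ms) for a 5-node toy topology.
lat = [[0,  10, INF, 30, INF],
       [10,  0,  15, INF, INF],
       [INF, 15,  0,  12, 20],
       [30, INF, 12,  0,  8],
       [INF, INF, 20,  8,  0]]

n = len(lat)
dist = [row[:] for row in lat]
for k, i, j in itertools.product(range(n), repeat=3):   # Floyd-Warshall all-pairs shortest paths
    if dist[i][k] + dist[k][j] < dist[i][j]:
        dist[i][j] = dist[i][k] + dist[k][j]

controllers = [1, 3]                                    # candidate placement
to_nearest = [min(dist[s][c] for c in controllers) for s in range(n)]
print("average-case latency:", sum(to_nearest) / n, "ms")
print("worst-case latency:", max(to_nearest), "ms")
```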
The measurement uncertainty analysis is carried out in this paper to investigate the measurable dimensions of cylindrical workpieces by the rotary-scan method. Because a workpiece with a diameter of less than 3 mm is difficult to align by the rotary-scan method, the measurement uncertainty of a cylindrical workpiece with a diameter of 3 mm and a length of 50 mm, measured by a roundness measuring machine, is evaluated according to the GUM (Guide to the Expression of Uncertainty in Measurement) as an example. Since the uncertainty caused by the eccentricity of the measured workpiece changes with the dimension, the measurement uncertainty of cylindrical workpieces with other dimensions can be evaluated in the same way as for the 3 mm diameter but with different eccentricity. Measurement uncertainties caused by different eccentricities, corresponding to different dimensions of the measured cylindrical workpiece, are set to simulate the evaluations. By comparison with the target measurement uncertainty of 0.1 μm, the measurable dimensions of the cylindrical workpiece can be obtained. Experiments and analysis are presented to quantitatively evaluate the reliability of the rotary-scan method for the roundness measurement of cylindrical workpieces.
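In a GUM-style budget, the standard uncertainty components, including the eccentricity-dependent term that grows with workpiece dimension, are combined in quadrature and the result is checked against the 0.1 μm target. The sketch below illustrates that combination with invented component values; it is not the budget evaluated in the paper.

```python
# Minimal sketch of a GUM-style combination: standard uncertainty components
# (including an eccentricity-dependent term) are combined in quadrature and
# compared against the 0.1 um target. Component values are illustrative only.
import math

def combined_uncertainty_um(components_um):
    """Root-sum-square combination of uncorrelated standard uncertainties."""
    return math.sqrt(sum(u * u for u in components_um))

base_components = [0.03, 0.02, 0.04]          # e.g. probe, spindle, repeatability (hypothetical)
for eccentricity_term in (0.02, 0.06, 0.12):  # grows with workpiece dimension (hypothetical)
    u_c = combined_uncertainty_um(base_components + [eccentricity_term])
    print(f"u_c = {u_c:.3f} um ->", "within 0.1 um target" if u_c <= 0.1 else "exceeds target")
```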
Cancer is a major societal public health and economic problem, responsible for one in every six deaths. Radiotherapy is the main treatment technique for more than half of cancer patients. To achieve a successful outcome, the radiation dose must be delivered accurately and precisely to the tumor, within ±5% accuracy, and smaller uncertainties are required for better treatment outcomes. The objective of the study is to investigate the uncertainty of measurement of an external radiotherapy beam using a standard ionization chamber under reference conditions. A clinical Farmer-type ionization chamber measurement was compared against the national reference standard by exposing the chamber in a 60Co gamma-ray beam. The measurement setup followed the IAEA TRS 498 protocol, and the uncertainty of measurement was evaluated according to GUM and TECDOC-1585. Evaluation and analysis were carried out for the identified uncertainty contributors. The expanded uncertainty associated with the N_D,W calibration coefficient of 56 mGy/nC was found to be 0.9%, corresponding to a confidence level of approximately 95% with a coverage factor of k = 2. The study established the impact of dosimetry measurement uncertainty in estimating the external radiotherapy dose. The investigation established that the largest contributor to the uncertainty is the stability of the ionization chamber at 36%, followed by temperature at 22% and positioning of the chamber in the beam at 8%. The effects of pressure, the electrometer, resolution, and reproducibility were found to be minimal contributors to the overall uncertainty. The study indicates that there is no flawless measurement, as there are many prospective sources of variation. Measurement results have a component of unreliability and should be regarded as best estimates of the true value.
Platforms facilitate information exchange, streamline resources, and reduce production and management costs for companies. However, some viral information may invade and steal company resources or lead to information leakage. For this reason, this paper discusses the standards for cybersecurity protection, examines the current state of cybersecurity management and the risks faced by cloud platforms, expands the time and space for training on cloud platforms, and provides recommendations for measuring the level of cybersecurity protection within cloud platforms in order to build a solid foundation for them.
The method of using pulsed eddy currents to determine the thickness of a conducting plate is extended to enable the simultaneous measurement of the plate thickness and material properties. For optimal performance, a probe must be designed for the thickness range that should be accessible. The need to calibrate the material properties of a conducting plate in order to measure its thickness has been removed. All that is needed is a probe with known dimensions and suitable hardware to create a current pulse and measure a transient magnetic induction.
Spacecraft orbit evasion is an effective method to ensure space safety. In the spacecraft's orbital plane, a non-cooperative space target autonomously approaching the spacecraft may pose a dangerous rendezvous. To deal with this problem, an optimal maneuvering strategy based on the relative-navigation observability degree is proposed for angles-only measurements. A maneuver-evasion relative navigation model in the spacecraft's orbital plane is constructed, and observability criteria accounting for process noise and measurement noise are defined based on the posterior Cramer-Rao lower bound. Further, an optimal maneuver evasion strategy in the spacecraft's orbital plane based on observability is proposed. The strategy provides a new idea for spacecraft to evade safety threats autonomously. Compared with the spacecraft evasion problem based on absolute navigation, more accurate evasion results can be obtained. The simulation indicates that this optimal strategy can weaken the system's observability and reduce the state estimation accuracy of the non-cooperative target, making it impossible for the non-cooperative target to accurately approach the spacecraft.
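The observability criteria referenced here derive from the posterior Cramer-Rao lower bound; a closely related scalar measure for a linearized measurement model is built from the Fisher information J = H^T R^-1 H of the measurements. The sketch below evaluates that static quantity for a single angles-only (bearing) measurement at two hypothetical relative geometries, only to illustrate why geometry changes the attainable estimation accuracy; the paper's full recursive bound with process noise and the maneuver optimization are not reproduced.

```python
# Minimal sketch of an observability measure built from Fisher information
# for a single angles-only (bearing) measurement: J = H^T R^-1 H, where H is
# the measurement Jacobian with respect to relative position. The paper uses
# the full recursive posterior Cramer-Rao bound with process noise; this
# static example only illustrates how geometry affects attainable accuracy.
import numpy as np

def bearing_fisher_info(rel_pos, sigma_rad):
    """Fisher information of one bearing measurement about relative position [x, y]."""
    x, y = rel_pos
    r2 = x * x + y * y
    H = np.array([[-y / r2, x / r2]])          # d(atan2(y, x)) / d[x, y]
    R_inv = np.array([[1.0 / sigma_rad**2]])
    return H.T @ R_inv @ H

sigma = 1e-3                                    # 1 mrad bearing noise (hypothetical)
for pos in ([10e3, 2e3], [30e3, 2e3]):          # two candidate relative geometries, metres
    J = bearing_fisher_info(pos, sigma)
    print(f"range ~{np.hypot(*pos)/1e3:.1f} km, trace(J) = {np.trace(J):.3e}")
```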
Funding (Ethereum P2P measurement study): this work was supported by the Arab Open University through AOU Research Fund No. (AOURG-2023-006).
Funding (fiber-based quantum diamond probe study): project supported by the National Key Research and Development Program of China (Grant No. 2021YFB2012600).
Funding (DPMP privacy framework study): supported in part by the National Natural Science Foundation of China under Grants U1905211, 61872088, 62072109, 61872090, and U1804263; in part by the Guangxi Key Laboratory of Trusted Software under Grant KX202042; in part by the Science and Technology Major Support Program of Guizhou Province under Grant 20183001; in part by the Science and Technology Program of Guizhou Province under Grant 20191098; in part by the Project of High-level Innovative Talents of Guizhou Province under Grant 20206008; and in part by the Open Research Fund of the Key Laboratory of Cryptography of Zhejiang Province under Grant ZCL21015.
Funding (weak current measurement system study): supported by the Youth Science Foundation of Sichuan Province (Nos. 2022NSFSC1230 and 2022NSFSC1231), the Science and Technology Innovation Seedling Project of Sichuan Province (No. MZGC20230080), the General Project of the National Natural Science Foundation of China (No. 12075039), and the Key Project of the National Natural Science Foundation of China (No. U19A2086).
Funding (FFCM nuclear logging study): supported by the National Natural Science Foundation of China (Nos. U23B20151 and 52171253).
Funding (seismic elastic moduli study): the authors acknowledge financial support from the NSFC Basic Research Program on Deep Petroleum Resource Accumulation and Key Engineering Technologies (U19B6003-04-03), the National Natural Science Foundation of China (41930425), the Beijing Natural Science Foundation (8222073), the R&D Department of China National Petroleum Corporation (Investigations on fundamental experiments and advanced theoretical methods in geophysical prospecting applications, 2022DQ0604-01), the Scientific Research and Technology Development Project of PetroChina (2021DJ1206), and the National Key Research and Development Program of China (2018YFA0702504).
Funding (China poverty reduction study): Key Project of the National Social Science Foundation of China (NSSFC), “Study on the Theory and Practice of Inclusive Green Growth” (19ZDA048); General Project of the China Postdoctoral Science Fund, “Study on the Impact and Mechanism of Talent Dividend on High-Quality Development of Manufacturing Industry from the Perspective of Common Prosperity” (2023M733865).
Funding (UF-CXRS diagnostic study): supported by the National Magnetic Confinement Fusion Science Program of China (No. 2019YFE03030004) and the National Natural Science Foundation of China (Nos. 11535013 and 11975232).
Funding (FEA-StereoDIC field comparison study): financial support provided by Correlated Solutions Incorporated to perform the StereoDIC experiments and by the Department of Mechanical Engineering at the University of South Carolina for the simulation studies is deeply appreciated.
文摘To compare finite element analysis(FEA)predictions and stereovision digital image correlation(StereoDIC)strain measurements at the same spatial positions throughout a region of interest,a field comparison procedure is developed.The procedure includes(a)conversion of the finite element data into a triangular mesh,(b)selection of a common coordinate system,(c)determination of the rigid body transformation to place both measurements and FEA data in the same system and(d)interpolation of the FEA nodal information to the same spatial locations as the StereoDIC measurements using barycentric coordinates.For an aluminum Al-6061 double edge notched tensile specimen,FEA results are obtained using both the von Mises isotropic yield criterion and Hill’s quadratic anisotropic yield criterion,with the unknown Hill model parameters determined using full-field specimen strain measurements for the nominally plane stress specimen.Using Hill’s quadratic anisotropic yield criterion,the point-by-point comparison of experimentally based full-field strains and stresses to finite element predictions are shown to be in excellent agreement,confirming the effectiveness of the field comparison process.
文摘Context: The advent of Artificial Intelligence (AI) requires modeling prior to its implementation in algorithms for most human skills. This observation requires us to have a detailed and precise understanding of the interfaces of verbal and emotional communications. The progress of AI is significant on the verbal level but modest in terms of the recognition of facial emotions even if this functionality is one of the oldest in humans and is omnipresent in our daily lives. Dysfunction in the ability for facial emotional expressions is present in many brain pathologies encountered by psychiatrists, neurologists, psychotherapists, mental health professionals including social workers. It cannot be objectively verified and measured due to a lack of reliable tools that are valid and consistently sensitive. Indeed, the articles in the scientific literature dealing with Visual-Facial-Emotions-Recognition (ViFaEmRe), suffer from the absence of 1) consensual and rational tools for continuous quantified measurement, 2) operational concepts. We have invented a software that can use computer-morphing attempting to respond to these two obstacles. It is identified as the Method of Analysis and Research of the Integration of Emotions (M.A.R.I.E.). Our primary goal is to use M.A.R.I.E. to understand the physiology of ViFaEmRe in normal healthy subjects by standardizing the measurements. Then, it will allow us to focus on subjects manifesting abnormalities in this ability. Our second goal is to make our contribution to the progress of AI hoping to add the dimension of recognition of facial emotional expressions. Objective: To study: 1) categorical vs dimensional aspects of recognition of ViFaEmRe, 2) universality vs idiosyncrasy, 3) immediate vs ambivalent Emotional-Decision-Making, 4) the Emotional-Fingerprint of a face and 5) creation of population references data. Methods: With M.A.R.I.E. enable a rational quantified measurement of Emotional-Visual-Acuity (EVA) of 1) a) an individual observer, b) in a population aged 20 to 70 years old, 2) measure the range and intensity of expressed emotions by 3 Face-Tests, 3) quantify the performance of a sample of 204 observers with hyper normal measures of cognition, “thymia,” (ibid. defined elsewhere) and low levels of anxiety 4) analysis of the 6 primary emotions. Results: We have individualized the following continuous parameters: 1) “Emotional-Visual-Acuity”, 2) “Visual-Emotional-Feeling”, 3) “Emotional-Quotient”, 4) “Emotional-Deci-sion-Making”, 5) “Emotional-Decision-Making Graph” or “Individual-Gun-Trigger”6) “Emotional-Fingerprint” or “Key-graph”, 7) “Emotional-Finger-print-Graph”, 8) detecting “misunderstanding” and 9) detecting “error”. This allowed us a taxonomy with coding of the face-emotion pair. Each face has specific measurements and graphics. The EVA improves from ages of 20 to 55 years, then decreases. It does not depend on the sex of the observer, nor the face studied. In addition, 1% of people endowed with normal intelligence do not recognize emotions. The categorical dimension is a variable for everyone. The range and intensity of ViFaEmRe is idiosyncratic and not universally uniform. The recognition of emotions is purely categorical for a single individual. It is dimensional for a population sample. Conclusions: Firstly, M.A.R.I.E. has made possible to bring out new concepts and new continuous measurements variables. 
The comparison between healthy and abnormal individuals makes it possible to take into consideration the significance of this line of study. From now on, these new functional parameters will allow us to identify and name “emotional” disorders or illnesses which can give additional dimension to behavioral disorders in all pathologies that affect the brain. Secondly, the ViFaEmRe is idiosyncratic, categorical, and a function of the identity of the observer and of the observed face. These findings stack up against Artificial Intelligence, which cannot have a globalist or regionalist algorithm that can be programmed into a robot, nor can AI compete with human abilities and judgment in this domain. *Here “Emotional disorders” refers to disorders of emotional expressions and recognition.
文摘Context: The advent of Artificial Intelligence (AI) requires modeling prior to its implementation in algorithms for most human skills. This observation requires us to have a detailed and precise understanding of the interfaces of verbal and emotional communications. The progress of AI is significant on the verbal level but modest in terms of the recognition of facial emotions even if this functionality is one of the oldest in humans and is omnipresent in our daily lives. Dysfunction in the ability for facial emotional expressions is present in many brain pathologies encountered by psychiatrists, neurologists, psychotherapists, mental health professionals including social workers. It cannot be objectively verified and measured due to a lack of reliable tools that are valid and consistently sensitive. Indeed, the articles in the scientific literature dealing with Visual-Facial-Emotions-Recognition (ViFaEmRe), suffer from the absence of 1) consensual and rational tools for continuous quantified measurement, 2) operational concepts. We have invented a software that can use computer-morphing attempting to respond to these two obstacles. It is identified as the Method of Analysis and Research of the Integration of Emotions (M.A.R.I.E.). Our primary goal is to use M.A.R.I.E. to understand the physiology of ViFaEmRe in normal healthy subjects by standardizing the measurements. Then, it will allow us to focus on subjects manifesting abnormalities in this ability. Our second goal is to make our contribution to the progress of AI hoping to add the dimension of recognition of facial emotional expressions. Objective: To study: 1) categorical vs dimensional aspects of recognition of ViFaEmRe, 2) universality vs idiosyncrasy, 3) immediate vs ambivalent Emotional-Decision-Making, 4) the Emotional-Fingerprint of a face and 5) creation of population references data. Methods: M.A.R.I.E. enables the rational, quantified measurement of Emotional Visual Acuity (EVA) in an individual observer and a population aged 20 to 70 years. Meanwhile, it can measure the range and intensity of expressed emotions through three Face- Tests, quantify the performance of a sample of 204 observers with hypernormal measures of cognition, “thymia” (defined elsewhere), and low levels of anxiety, and perform analysis of the six primary emotions. Results: We have individualized the following continuous parameters: 1) “Emotional-Visual- Acuity”, 2) “Visual-Emotional-Feeling”, 3) “Emotional-Quotient”, 4) “Emotional-Decision-Making”, 5) “Emotional-Decision-Making Graph” or “Individual-Gun-Trigger”, 6) “Emotional-Fingerprint” or “Key-graph”, 7) “Emotional-Fingerprint-Graph”, 8) detecting “misunderstanding” and 9) detecting “error”. This allowed us a taxonomy with coding of the face-emotion pair. Each face has specific measurements and graphics. The EVA improves from ages of 20 to 55 years, then decreases. It does not depend on the sex of the observer, nor the face studied. In addition, 1% of people endowed with normal intelligence do not recognize emotions. The categorical dimension is a variable for everyone. The range and intensity of ViFaEmRe is idiosyncratic and not universally uniform. The recognition of emotions is purely categorical for a single individual. It is dimensional for a population sample. Conclusions: Firstly, M.A.R.I.E. has made possible to bring out new concepts and new continuous measurements variables. 
The comparison between healthy and abnormal individuals underscores the significance of this line of study. From now on, these new functional parameters will allow us to identify and name "emotional" disorders or illnesses, adding a further dimension to behavioral disorders in all pathologies that affect the brain. Secondly, ViFaEmRe is idiosyncratic, categorical, and a function of the identity of the observer and of the observed face. These findings set a benchmark for Artificial Intelligence: no single globalist or regionalist algorithm can be programmed into a robot, nor can AI currently compete with human abilities and judgment in this domain. *Here "emotional disorders" refers to disorders of emotional expression and recognition.
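As an illustration of the kind of continuous measurement M.A.R.I.E. is built around, the sketch below estimates a recognition threshold from categorical responses to morphed faces. The function name, data layout, and the 75% criterion are assumptions for illustration only and do not reproduce M.A.R.I.E.'s actual scoring.

```python
# Illustrative sketch only: estimates a recognition threshold (an EVA-like value)
# as the lowest morph intensity at which an observer reaches a chosen accuracy
# criterion. All names and the 75% criterion are assumptions.

from collections import defaultdict

def recognition_threshold(trials, criterion=0.75):
    """trials: iterable of (morph_intensity_percent, correct: bool)."""
    by_level = defaultdict(list)
    for intensity, correct in trials:
        by_level[intensity].append(correct)
    # Scan intensities from the weakest to the strongest expression.
    for intensity in sorted(by_level):
        results = by_level[intensity]
        if sum(results) / len(results) >= criterion:
            return intensity          # weakest expression reliably recognized
    return None                       # criterion never reached

# Example: one observer, one face, one emotion morphed from 10% to 50% intensity.
trials = [(10, False), (10, False), (20, False), (20, True),
          (30, True), (30, True), (40, True), (50, True)]
print(recognition_threshold(trials))  # -> 30
```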
Abstract: The evolution of the current network faces challenges of programmability, maintainability, and manageability due to network ossification. This challenge led to the concept of software-defined networking (SDN), which decouples the control plane from the infrastructure (data) plane. The innovation, however, created the controller placement problem: how to effectively place controllers within a network topology so that the control plane can manage the data-plane devices. This study was designed to empirically evaluate and compare the functionality of two controller placement algorithms, POCO and MOCO. The methodology adopted is an explorative and comparative investigation. The study evaluated the performance of the Pareto optimal combination (POCO) and multi-objective combination (MOCO) algorithms in relation to calibrated positions of the controller within a software-defined network. The network environment and measurement metrics were held constant for both models during the evaluation, and the strengths and weaknesses of each were identified. For the GoodNet network, the overall latencies of the two algorithms are 3100 ms and 2500 ms for POCO and MOCO respectively; Switch-to-Controller Average Case latency is 2598 ms and 2769 ms, and Worst Case Switch-to-Controller latency is 2776 ms and 2987 ms, for POCO and MOCO respectively. For the Savvis network, the corresponding values are 2912 ms and 2784 ms (Switch-to-Controller Average Case), 3129 ms and 3017 ms (Worst Case Switch-to-Controller), 2789 ms and 2693 ms (Average Case Controller-to-Controller), and 2873 ms and 2756 ms (Worst Case Controller-to-Controller) for POCO and MOCO respectively. For the AARNet network, the values are 2473 ms and 2129 ms (Switch-to-Controller Average Case), 2198 ms and 2268 ms (Worst Case Switch-to-Controller), 2598 ms and 2471 ms (Average Case Controller-to-Controller), and 2689 ms and 2814 ms (Worst Case Controller-to-Controller) for POCO and MOCO respectively. The Average Case and Worst Case latencies for Switch-to-Controller and Controller-to-Controller are minimal and favourable to the POCO model as against the MOCO model when evaluated on the GoodNet, Savvis, and AARNet networks. This indicates that the POCO model has a speed advantage over the MOCO model, which in turn appears to be more resilient than the POCO model.
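For readers unfamiliar with the metrics being compared, the following minimal sketch (not the POCO or MOCO implementations) computes the average-case and worst-case switch-to-controller and controller-to-controller latencies for one candidate controller placement. The toy topology and link latencies are assumptions.

```python
# Minimal sketch of the placement metrics evaluated above for one candidate
# controller set: average/worst-case switch-to-controller latency and
# controller-to-controller latency over shortest paths.

def all_pairs_shortest(nodes, edges):
    """Floyd-Warshall over symmetric link latencies given as {(u, v): latency}."""
    INF = float("inf")
    d = {(u, v): (0 if u == v else INF) for u in nodes for v in nodes}
    for (u, v), w in edges.items():
        d[(u, v)] = d[(v, u)] = min(d[(u, v)], w)
    for k in nodes:
        for i in nodes:
            for j in nodes:
                if d[(i, k)] + d[(k, j)] < d[(i, j)]:
                    d[(i, j)] = d[(i, k)] + d[(k, j)]
    return d

def placement_metrics(nodes, edges, controllers):
    d = all_pairs_shortest(nodes, edges)
    # Each switch is served by its nearest controller.
    s2c = [min(d[(s, c)] for c in controllers) for s in nodes]
    c2c = [d[(a, b)] for a in controllers for b in controllers if a != b]
    return {
        "s2c_avg": sum(s2c) / len(s2c),
        "s2c_worst": max(s2c),
        "c2c_avg": sum(c2c) / len(c2c) if c2c else 0.0,
        "c2c_worst": max(c2c) if c2c else 0.0,
    }

# Assumed five-node topology with link latencies in milliseconds.
nodes = ["A", "B", "C", "D", "E"]
edges = {("A", "B"): 10, ("B", "C"): 15, ("C", "D"): 10, ("D", "E"): 20, ("A", "E"): 25}
print(placement_metrics(nodes, edges, controllers=["B", "D"]))
```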
Funding: Supported by the National Defense Basic Scientific Research Program of China (Grant No. JCKY2019427D002).
Abstract: A measurement uncertainty analysis is carried out to investigate the dimensions of cylindrical workpieces that can be measured by the rotary-scan method. Because workpieces with diameters of less than 3 mm are difficult to align in the rotary-scan method, the measurement uncertainty of a cylindrical workpiece with a diameter of 3 mm and a length of 50 mm, measured on a roundness measuring machine, is evaluated according to the GUM (Guide to the Expression of Uncertainty in Measurement) as an example. Since the uncertainty caused by the eccentricity of the measured workpiece changes with its dimensions, the measurement uncertainty of cylindrical workpieces of other dimensions can be evaluated in the same way as for the 3 mm diameter, but with a different eccentricity. Measurement uncertainties caused by different eccentricities, corresponding to the dimensions of the measured cylindrical workpiece, are simulated in the evaluation. By comparing against the target measurement uncertainty of 0.1 μm, the measurable dimensions of cylindrical workpieces can be obtained. Experiments and analysis are presented to quantitatively evaluate the reliability of the rotary-scan method for the roundness measurement of cylindrical workpieces.
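A hedged sketch of the GUM-style arithmetic described above: standard uncertainty components are combined in quadrature and compared with the 0.1 μm target, with only the eccentricity term varied to mimic re-evaluating other workpiece dimensions. The component names and magnitudes are placeholders, not the values from the paper.

```python
# Illustrative GUM-style combination: root-sum-of-squares of standard uncertainty
# components, compared with the 0.1 um target. All component values are assumed.

from math import sqrt

def combined_uncertainty(components_um):
    """Root-sum-of-squares of standard uncertainty components (micrometres)."""
    return sqrt(sum(u * u for u in components_um.values()))

base_components = {            # placeholder standard uncertainties, in um
    "spindle_error": 0.02,
    "probe_noise": 0.03,
    "temperature_drift": 0.02,
}

target_um = 0.1
for eccentricity_term in (0.02, 0.05, 0.09):   # grows with workpiece dimension
    u_c = combined_uncertainty({**base_components, "eccentricity": eccentricity_term})
    print(f"u(ecc)={eccentricity_term:.2f} um -> u_c={u_c:.3f} um,"
          f" within target: {u_c <= target_um}")
```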
Abstract: Cancer is a major public health and economic problem, responsible for one in every six deaths. Radiotherapy is the main treatment technique for more than half of cancer patients. To achieve a successful outcome, the radiation dose must be delivered to the tumor accurately and precisely, within ±5%; smaller uncertainties are required for better treatment outcomes. The objective of this study is to investigate the measurement uncertainty of an external radiotherapy beam using a standard ionization chamber under reference conditions. A clinical Farmer-type ionization chamber measurement was compared against the national reference standard by exposing it to a 60Co gamma beam. The measurement setup followed the IAEA TRS 498 protocol, and the uncertainty of measurement was evaluated according to GUM TECDOC-1585. The identified uncertainty contributors were evaluated and analyzed. The expanded uncertainty associated with an N_D,W of 56 mGy/nC was found to be 0.9%, corresponding to a confidence level of approximately 95% with a coverage factor of k = 2. The study established the impact of dosimetry measurement uncertainty on estimating external radiotherapy dose. The largest contributor to the uncertainty is the stability of the ionization chamber at 36%, followed by temperature at 22% and positioning of the chamber in the beam at 8%. The effects of pressure, the electrometer, resolution, and reproducibility were found to be minimal. The study indicates that there is no flawless measurement, as there are many potential sources of variation; measurement results carry a component of uncertainty and should be regarded as best estimates of the true value.
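The following sketch illustrates the uncertainty-budget arithmetic referred to above: standard uncertainties combined in quadrature, an expanded uncertainty with coverage factor k = 2 (approximately 95% confidence), and each component's share of the variance. The numerical values are illustrative assumptions, not the budget reported in the study.

```python
# Illustrative uncertainty budget: quadrature combination, expanded uncertainty
# with k = 2, and each contributor's share of the variance. Values are assumed.

from math import sqrt

budget = {                      # relative standard uncertainties, in %
    "chamber_stability": 0.27,
    "temperature": 0.21,
    "positioning": 0.13,
    "pressure": 0.05,
    "electrometer": 0.05,
    "resolution": 0.04,
}

u_c = sqrt(sum(u * u for u in budget.values()))   # combined standard uncertainty
U = 2 * u_c                                       # expanded uncertainty, k = 2

for name, u in sorted(budget.items(), key=lambda kv: -kv[1]):
    print(f"{name:18s} contributes {100 * u * u / u_c ** 2:5.1f}% of the variance")
print(f"combined u_c = {u_c:.2f}%, expanded U (k=2) = {U:.2f}%")
```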
Abstract: Platforms facilitate information exchange, streamline resources, and reduce production and management costs for companies. However, malicious information may invade and steal company resources or lead to information leakage. This paper therefore discusses the standards for cybersecurity protection, examines the current state of cybersecurity management and the risks faced by cloud platforms, expands the time and space available for training on cloud platforms, and provides recommendations for measuring the level of cybersecurity protection within cloud platforms, in order to build a solid foundation for them.
Abstract: The method of using pulsed eddy currents to determine the thickness of a conducting plate is extended to enable simultaneous measurement of the plate thickness and its material properties. For optimal performance, the probe must be designed for the thickness range that should be accessible. The need to calibrate the material properties of a conducting plate before its thickness can be measured has been removed: all that is needed is a probe with known dimensions and suitable hardware to generate a current pulse and measure the transient magnetic induction.
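As a toy illustration of inverting a transient signal, the sketch below fits the late-time decay of a simulated eddy-current response with a log-linear least-squares fit. The single-exponential model, constants, and noise level are assumptions; they do not reproduce the probe design or the inversion used in the paper.

```python
# Toy illustration only: in a simplified model the late-time eddy-current decay in
# a conducting plate behaves roughly as A*exp(-t/tau), with tau growing with the
# product of conductivity and thickness squared. This fits that decay by
# log-linear least squares; model, constants, and noise level are assumptions.

import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(1e-3, 20e-3, 200)            # s, late-time window
tau_true, amp_true = 4e-3, 1.0               # assumed decay constant and amplitude
signal = amp_true * np.exp(-t / tau_true) * (1 + 0.01 * rng.standard_normal(t.size))

# Fit log(signal) = log(A) - t/tau with ordinary least squares.
slope, intercept = np.polyfit(t, np.log(signal), 1)
tau_fit, amp_fit = -1.0 / slope, np.exp(intercept)
print(f"tau_fit = {tau_fit * 1e3:.2f} ms (true {tau_true * 1e3:.2f} ms), A = {amp_fit:.3f}")
```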
Funding: Supported by the National Key R&D Program of China (2020YFA0713502), the Special Fund Project for Guiding Local Scientific and Technological Development (2020ZYT003), the National Natural Science Foundation of China (U20B2055, 61773021, 61903086), and the Natural Science Foundation of Hunan Province (2019JJ20018, 2020JJ4280).
Abstract: Spacecraft orbit evasion is an effective method of ensuring space safety. Within the spacecraft's orbital plane, a non-cooperative space target autonomously approaching the spacecraft may create a dangerous rendezvous. To deal with this problem, an optimal maneuvering strategy based on the observability degree of relative navigation is proposed using angles-only measurements. A maneuver-evasion relative navigation model in the spacecraft's orbital plane is constructed, and observability criteria accounting for process noise and measurement noise are defined based on the posterior Cramér-Rao lower bound. On this basis, an optimal maneuver evasion strategy in the spacecraft's orbital plane is proposed. The strategy provides a new way for spacecraft to evade safety threats autonomously. Compared with evasion based on absolute navigation, more accurate evasion results can be obtained. Simulations indicate that this optimal strategy can weaken the system's observability and reduce the state estimation accuracy of the non-cooperative target, making it impossible for the target to accurately approach the spacecraft.
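A hedged sketch of the observability bookkeeping mentioned above: for a linearized angles-only (bearing) measurement of planar relative motion, the Fisher information recursion gives the posterior Cramér-Rao lower bound as the inverse of the information matrix. The dynamics, noise levels, and trace-based observability measure below are illustrative assumptions, not the paper's maneuver-optimization strategy.

```python
# Illustrative posterior CRLB recursion for angles-only relative navigation:
# J_{k+1} = inv(Q + F inv(J_k) F^T) + H^T inv(R) H, with PCRLB = inv(J).
# Dynamics, noise covariances, and the sample trajectory are assumptions.

import numpy as np

dt = 10.0                                    # s, step between bearing measurements
F = np.array([[1, 0, dt, 0],                 # planar constant-velocity relative motion
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
Q = np.diag([1e-2, 1e-2, 1e-4, 1e-4])        # process noise covariance (assumed)
R = np.array([[np.deg2rad(0.05) ** 2]])      # bearing noise covariance (assumed)

def bearing_jacobian(state):
    """Jacobian of atan2(y, x) with respect to the state [x, y, vx, vy]."""
    x, y, _, _ = state
    r2 = x * x + y * y
    return np.array([[-y / r2, x / r2, 0.0, 0.0]])

def pcrlb_trace(trajectory, J0):
    """Propagate Fisher information along a relative trajectory; return trace of PCRLB."""
    J = J0.copy()
    for state in trajectory:
        H = bearing_jacobian(state)
        J = np.linalg.inv(Q + F @ np.linalg.inv(J) @ F.T) + H.T @ np.linalg.inv(R) @ H
    return np.trace(np.linalg.inv(J))

J0 = np.linalg.inv(np.diag([100.0, 100.0, 1.0, 1.0]))   # prior information matrix
traj = [np.array([5000.0 - 20.0 * k * dt, 2000.0, -20.0, 0.0]) for k in range(30)]
print(f"trace of PCRLB after 30 bearings: {pcrlb_trace(traj, J0):.2f}")
```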