As the risks associated with air turbulence are intensified by climate change and the growth of the aviation industry, it has become imperative to monitor and mitigate these threats to ensure civil aviation safety. The eddy dissipation rate (EDR) has been established as the standard metric for quantifying turbulence in civil aviation. This study aims to explore a universally applicable symbolic classification approach based on genetic programming to detect turbulence anomalies using quick access recorder (QAR) data. The detection of atmospheric turbulence is approached as an anomaly detection problem. Comparative evaluations demonstrate that this approach performs on par with direct EDR calculation methods in identifying turbulence events. Moreover, comparisons with alternative machine learning techniques indicate that the proposed technique outperforms the alternatives evaluated. In summary, symbolic classification via genetic programming enables accurate turbulence detection from QAR data, comparable to established EDR approaches and surpassing the machine learning algorithms tested. This finding highlights the potential of integrating symbolic classifiers into turbulence monitoring systems to enhance civil aviation safety amidst rising environmental and operational hazards.
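The anomaly-detection framing can be illustrated with a minimal sketch (this is not the paper's genetic-programming classifier): flag QAR vertical-acceleration samples that deviate sharply from a rolling baseline. The function name, window size, and threshold are illustrative assumptions.

```python
def detect_turbulence(accel, window=5, z_thresh=3.0):
    """Flag samples whose deviation from a trailing rolling mean
    exceeds z_thresh standard deviations (toy anomaly detector)."""
    flags = []
    for i in range(len(accel)):
        lo = max(0, i - window)
        ref = accel[lo:i] or [accel[i]]          # trailing window (fallback: self)
        mean = sum(ref) / len(ref)
        var = sum((x - mean) ** 2 for x in ref) / len(ref)
        std = var ** 0.5 or 1e-9                 # avoid division by zero
        flags.append(abs(accel[i] - mean) / std > z_thresh)
    return flags
```

A genuine GP-based symbolic classifier would instead evolve the detection expression itself; this fixed rule only conveys the anomaly-detection problem framing.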
In order to improve the accuracy and completeness of mining data records from the web, the concepts of isomorphic page and directory page and three algorithms are proposed. An isomorphic web page is a set of web pages that have a uniform structure, differing only in their main information. A web page that contains many links to isomorphic web pages is called a directory page. Algorithm 1 finds directory pages within a website using an adjacent-link similarity analysis method: it first sorts the links, then counts the links in each directory; if the count exceeds a given threshold, it finds the similar sub-page links in the directory and outputs the results. A function for judging whether pages are isomorphic is also proposed. Algorithm 2 mines data records from an isomorphic page using a noise-information filter, based on the fact that the noise information is identical across two isomorphic pages and only the main information differs. Algorithm 3 mines data records from an entire website using spider (crawler) technology. Experiments show that the proposed algorithms mine data records more completely than existing algorithms, and that mining data records from isomorphic pages is an efficient method.
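The idea behind Algorithm 1, that a page is a directory page when many of its links point to structurally similar sub-pages, can be sketched as follows. This assumes sibling detail pages differ only in a numeric URL segment; it is an illustrative stand-in for the paper's adjacent-link similarity analysis, not its implementation.

```python
import re
from collections import Counter

def link_template(url):
    # Collapse numeric path segments so sibling detail pages share one template.
    return re.sub(r"\d+", "N", url)

def find_directory_links(links, threshold=3):
    """Return link templates repeated more than `threshold` times on a page --
    a hint that the page is a directory of isomorphic sub-pages."""
    counts = Counter(link_template(u) for u in links)
    return {t: c for t, c in counts.items() if c > threshold}
```

For example, a page whose links include `/item/1` through `/item/4` plus `/about` would report `/item/N` as a candidate group of isomorphic sub-pages.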
This paper presents a rule merging and simplifying method and an improved deviation analysis algorithm. Fuzzy equivalence theory avoids the rigid either/or judgment of traditional equivalence theory. During a data cleaning task, some rules stand in inclusion relations with one another (one rule's condition set is contained in another's). The equivalence degree of the included rule is smaller than that of the including rule, so a rule merging and simplifying method is introduced to reduce the total computing time. This inclusion relation also affects the deviation of the fuzzy equivalence degree, so an improved deviation analysis algorithm that omits the influence of the included rules' equivalence degrees is also presented. Normally, duplicate records are logged to a file and users have to check and verify them one by one, which is time-consuming. The proposed algorithm saves users' labor during duplicate-record checking. Finally, an experiment is presented that demonstrates the feasibility of the approach.
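A minimal sketch of the two ingredients, a fuzzy equivalence degree computed as a weighted field similarity, and merging away rules whose field set is included in another rule's, under assumed record and rule representations (not the paper's exact formulation):

```python
def field_sim(a, b):
    # Toy similarity: 1.0 for equal strings, else Jaccard over character sets.
    if a == b:
        return 1.0
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def equivalence_degree(rec1, rec2, rule):
    """Fuzzy equivalence of two records under a rule = list of (field, weight)."""
    total = sum(w for _, w in rule)
    return sum(w * field_sim(rec1[f], rec2[f]) for f, w in rule) / total

def merge_rules(rules):
    """Drop any rule whose field set is strictly included in another rule's,
    so only the including rules are evaluated (the merging/simplifying step)."""
    kept = []
    for r in rules:
        fields = {f for f, _ in r}
        if not any(fields < {f for f, _ in s} for s in rules if s is not r):
            kept.append(r)
    return kept
```

Records whose equivalence degree exceeds a chosen threshold would then be proposed as duplicates for the user to confirm.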
The capability of accurately predicting mineralogical brittleness index (BI) from basic suites of well logs is desirable, as it provides a useful indicator of the fracability of tight formations. Measuring mineralogical components in rocks is expensive and time consuming. However, the basic well log curves are not well correlated with BI, so correlation-based machine-learning methods are not able to derive highly accurate BI predictions from such data. A correlation-free, optimized data-matching algorithm is configured to predict BI on a supervised basis from well log and core data available from two published wells in the Lower Barnett Shale Formation (Texas). This transparent open box (TOB) algorithm matches data records by calculating the sum of squared errors between their variables and selecting the best matches as those with the minimum squared errors. It then applies optimizers to adjust the weights applied to individual variable errors to minimize the root mean square error (RMSE) between calculated and predicted BI. The prediction accuracy achieved by TOB using just five well logs (Gr, ρb, Ns, Rs, Dt) to predict BI depends on the density of data records sampled. At a sampling density of about one sample per 0.5 ft, BI is predicted with RMSE ~0.056 and R^(2) ~0.790. At a sampling density of about one sample per 0.1 ft, BI is predicted with RMSE ~0.008 and R^(2) ~0.995. Adding a stratigraphic height index as an additional (sixth) input variable improves BI prediction accuracy to RMSE ~0.003 and R^(2) ~0.999 for the two wells, with only 1 record in 10,000 yielding a BI prediction error of >±0.1. The model has the potential to be applied on an unsupervised basis to predict BI from basic well log data in surrounding wells lacking mineralogical measurements but with similar lithofacies and burial histories. The method could also be extended to predict elastic rock properties and seismic attributes from well and seismic data to improve the precision of brittleness index and fracability mapping spatially.
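The core TOB matching step can be sketched as follows: score each training record by the weighted sum of squared input errors against the target record and average the outputs of the best matches. The optimizer stage that tunes the weights is omitted here; the fixed weights and k-nearest averaging are illustrative assumptions.

```python
def tob_predict(train, target, weights, k=3):
    """Predict a value for `target` as the mean over the k training records
    with the smallest weighted sum of squared input errors (TOB-style matching).
    Each training record is {"x": [inputs...], "y": output}; inputs are assumed
    pre-normalized so the squared errors are comparable across variables."""
    def score(rec):
        return sum(w * (rec["x"][i] - target[i]) ** 2
                   for i, w in enumerate(weights))
    best = sorted(train, key=score)[:k]
    return sum(r["y"] for r in best) / len(best)
```

In the published workflow the weights themselves are adjusted by optimizers to minimize RMSE on the supervised wells; this sketch only shows the matching-and-averaging core.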
In data management software, records of different lengths often need to be stored in an array, and the number of records typically grows as the software is used. A universal data structure is presented that provides a unified interface for dynamically storing records of different lengths, so that developers can call the unified interface directly for data storage, simplifying the design of the data management system.
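One common realization of such a universal structure, sketched here as an assumption rather than the paper's actual design, is a contiguous byte heap plus an (offset, length) index, giving a single interface for records of any length:

```python
class RecordStore:
    """Append-only store for variable-length byte records: one contiguous
    bytearray heap plus an (offset, length) index entry per record."""
    def __init__(self):
        self._heap = bytearray()
        self._index = []                      # record id -> (offset, length)

    def append(self, data: bytes) -> int:
        self._index.append((len(self._heap), len(data)))
        self._heap.extend(data)
        return len(self._index) - 1           # record id

    def get(self, rid: int) -> bytes:
        off, ln = self._index[rid]
        return bytes(self._heap[off:off + ln])
```

The caller always uses the same `append`/`get` interface regardless of record length, which is the unification the abstract describes.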
An overview of basic research on climate change in recent years in China is presented. In the past 100 years in China, the average annual mean surface air temperature (SAT) has increased at a rate ranging from 0.03℃ (10 yr)^-1 to 0.12℃ (10 yr)^-1. This warming is more evident in northern China and is more significant in winter and spring. In the past 50 years in China, at least 27% of the average annual warming has been caused by urbanization. Overall, no significant trends have been detected in annual and/or summer precipitation in China as a whole for the past 100 years or 50 years. Both increases and decreases in the frequencies of major extreme climate events have been observed over the past 50 years. The frequencies of extreme temperature events have generally displayed a consistent pattern of change across the country, while the frequencies of extreme precipitation events have shown only regionally and seasonally significant trends. The frequency of tropical cyclone landfall decreased slightly, while the frequency of sand/dust storms decreased significantly. Proxy records indicate that the annual mean SAT of the past few decades is the highest in the past 400-500 years in China, but it may not have exceeded the highest level of the Medieval Warm Period (1000-1300 AD). Proxy records also indicate that droughts and floods in eastern China have been characterized by continuously abnormal rainfall periods, with the frequencies of extreme droughts and floods in the 20th century most likely being near the average levels of the past 2000 years. Attribution studies suggest that increasing greenhouse gas (GHG) concentrations in the atmosphere are likely to be a main factor for the observed surface warming nationwide. The Yangtze River and Huaihe River basins underwent a cooling trend in summer over the past 50 years, which might have been caused by increased aerosol concentrations and cloud cover. However, natural climate variability might have been a main driver of the mean and extreme precipitation variations observed over the past century. Climate models generally perform well in simulating the variations of annual mean SAT in China. They have also been used to project future changes in SAT under varied GHG emission scenarios. Large uncertainties remain in these model-based projections, however, especially for the projected trends of regional precipitation and extreme climate events.
Sea state bias (SSB) is an important error component of radar altimeter measurements of sea surface height (SSH). However, existing SSB estimation methods are almost all based on single-task learning (STL), where one model is built on the data from only one radar altimeter. In this paper, taking into account the data available from multiple radar altimeters, we introduce a multi-task learning method, called trace-norm regularized multi-task learning (TNR-MTL), for SSB estimation. Corresponding to each individual task, TNR-MTL involves only three parameters, so it is easy to implement. More importantly, the convergence of TNR-MTL is theoretically guaranteed. Compared with the commonly used STL models, TNR-MTL can effectively utilize the information shared between data from multiple altimeters. During the training of TNR-MTL, we used the JASON-2 and JASON-3 cycle data to solve two correlated SSB estimation tasks. The optimal model was then selected to estimate SSB on the JASON-2 and the HY-2 cycle 70-71 intersection data. For the JASON-2 cycle intersection data, the corrected variance (M) was reduced by 0.60 cm^2 compared to the geophysical data records (GDR), while for the HY-2 cycle intersection data, M was reduced by 1.30 cm^2 compared to GDR. TNR-MTL is therefore shown to be effective for SSB estimation tasks.
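For orientation, a single task's three-parameter SSB model can be fitted by ordinary least squares. The linear dependence on significant wave height and wind speed assumed below is a common SSB parameterization, not necessarily the one used in the paper, and the trace-norm coupling across tasks that defines TNR-MTL is not reproduced in this sketch.

```python
def solve3(A, b):
    # Gauss-Jordan elimination with partial pivoting for a 3x3 system.
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

def fit_ssb(swh, wind, ssb):
    """Least-squares fit of ssb ~ a + b*swh + c*wind: one task's
    three parameters, via the normal equations X^T X beta = X^T y."""
    X = [[1.0, s, w] for s, w in zip(swh, wind)]
    A = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    rhs = [sum(r[i] * y for r, y in zip(X, ssb)) for i in range(3)]
    return solve3(A, rhs)
```

TNR-MTL would jointly fit one such parameter triple per altimeter while penalizing the trace norm of the stacked parameter matrix, so that the tasks share structure.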
Four different states of Si15Sb85 and Ge2Sb2Te5 phase change memory thin films are obtained by modulating the crystallization degree through laser initialization at different powers or annealing at different temperatures. The polarization characteristics of these two four-level phase change recording media are analyzed systematically. A simple and effective readout scheme is then proposed, and the readout signal is numerically simulated. The results show that a high-contrast polarization readout can be obtained over a wide wavelength range for four-level phase change recording media using common phase change materials. This study will aid in-depth understanding of the physical mechanisms and provide technical approaches to multilevel phase change recording.
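Once the four crystallization states map to distinct readout amplitudes, decoding reduces to thresholding. The [0, 1] normalization and the three threshold values below are illustrative assumptions, not values from the study:

```python
def decode_levels(signal, thresholds=(0.25, 0.5, 0.75)):
    """Map each analog readout value in [0, 1] to one of four
    crystallization levels (2 bits per cell) via three decision thresholds."""
    levels = []
    for v in signal:
        level = sum(v >= t for t in thresholds)   # count thresholds crossed
        levels.append(level)
    return levels
```

With well-separated polarization contrast between the four states, the decision thresholds can sit comfortably between the level amplitudes, which is what the high-contrast readout result enables.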
The recording and playback of information using a reverse stimulated photon-echo hologram, when the recording medium is exposed to a pulse of a non-resonant electromagnetic standing wave, is considered. It is shown that the spatial intensity distribution in the stimulated echo hologram response depends on the electric field intensity of the non-resonant standing wave, which allows the reproduced image to be controlled.
Flight data of a twin-jet transport aircraft in revenue flight are analyzed for potential safety problems. Data from the quick access recorder (QAR) are first filtered through a kinematic compatibility analysis. The filtered data are then organized into longitudinal- and lateral-directional aerodynamic model data with dynamic ground effect; the dynamic ground effect requires the radio height and sink rate in the models. The model data are then refined into numerical models through a fuzzy logic algorithm without data smoothing in advance. These numerical models describe nonlinear and unsteady aerodynamics and are used in nonlinear flight dynamics simulation. For the jet transport under study, the effect of crosswind is found to be significant enough to excite the Dutch roll motion. Through a linearized flight dynamics analysis at every instant of time, the Dutch roll motion is found to be a nonlinear oscillation without clear damping of the amplitude. In this analysis, all stability derivatives vary with time and hence are nonlinear functions of the state variables. Since the Dutch roll motion is not damped despite a full-time yaw damper being engaged, it is concluded that the design data for the yaw damper are not sufficiently realistic and that the contribution of the time derivative of sideslip angle to damping should be considered. From the nonlinear flight simulation, the vertical wind acting on the aircraft is estimated to be mostly updraft, varying along the flight path before touchdown. Varying updraft appears to make the descent rate more difficult to control, resulting in a higher g-load at touchdown.
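The instant-by-instant linearized analysis can be illustrated on a toy 2x2 sideslip/yaw-rate system: for an oscillatory pair, the Dutch roll damping ratio and natural frequency follow from the trace and determinant of the state matrix. The matrix values used in the test are invented for illustration, not taken from the aircraft data.

```python
import math

def dutch_roll_damping(A):
    """Damping ratio and natural frequency of a 2x2 linearized
    system x' = A x, assuming a complex-conjugate (oscillatory) pair.
    Characteristic polynomial: s^2 - tr(A) s + det(A) = 0."""
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    wn = math.sqrt(det)            # natural frequency (requires det > 0)
    zeta = -tr / (2.0 * wn)        # damping ratio
    return zeta, wn
```

Repeating this at every instant with the time-varying stability derivatives gives the evolving damping estimate described in the abstract; a persistently near-zero zeta corresponds to the undamped Dutch roll oscillation observed.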
The need for accessing historical Earth Observation (EO) data series has increased strongly in the last ten years, particularly for long-term science and environmental monitoring applications. This trend is likely to increase even more in the future, in particular given the growing interest in global change monitoring, which is driving users to request time series of data spanning 20 years and more, and also due to the need to support the United Nations Framework Convention on Climate Change (UNFCCC). While much of the satellite observations are accessible from different data centers, analyzing measurements collected from various instruments for time-series analysis is both difficult and critical. Climate research is a big data problem that involves high data volumes of measurements, methods for on-the-fly extraction and reduction to keep up with the speed and data volume, and the ability to address uncertainties from data collection, processing, and analysis. The content of EO data archives is extending from a few years to decades, and therefore their value as scientific time series is continuously increasing. Hence there is a strong need to preserve EO space data without time constraints and to keep them accessible and exploitable. The preservation of EO space data can also be considered a responsibility of the space agencies or data owners, as these data constitute a humankind asset. This publication describes the activities supported by the European Space Agency relating to long time series generation, with the relevant best practices and models needed to organise and measure the preservation and stewardship processes. The Data Stewardship Reference Model has been defined to give an overview and a way to help data owners and space agencies preserve and curate space datasets so that they are ready for long time data series composition and analysis.
Accurate prediction of vehicle speed plays an important role in a vehicle's real-time energy management and online optimization control. However, current forecast methods are mostly based on traffic conditions, ignoring the impact of the driver-vehicle-road system on the actual speed profile. In this paper, the correlation of velocity and its influencing factors under various driving conditions is first analyzed based on driver-vehicle-road-traffic data records, toward a more accurate prediction model. With the modeling time and prediction time considered separately, the effectiveness and accuracy of several typical artificial-intelligence speed prediction algorithms are analyzed. The results show that combining the niche immune genetic algorithm-support vector machine (NIGA-SVM) prediction algorithm on city roads with the genetic algorithm-support vector machine (GA-SVM) prediction algorithm on suburban roads and freeways can sharply improve the accuracy and timeliness of vehicle speed forecasting. The optimized GA-SVM vehicle speed prediction model was then established in accordance with the optimized GA-SVM prediction algorithm at different times, and the test results verified the validity and rationality of the prediction algorithm.
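The combined scheme, one trained model per road class, amounts to a dispatch on road type. The stand-in predictors below are simple placeholders for the trained NIGA-SVM and GA-SVM models, not reimplementations of them:

```python
def predict_city(history):
    # Stand-in for the NIGA-SVM city-road model: damped persistence.
    return 0.8 * history[-1] + 0.2 * (sum(history) / len(history))

def predict_highway(history):
    # Stand-in for the GA-SVM suburb/freeway model: plain persistence.
    return history[-1]

def predict_speed(history, road_type):
    """Dispatch to the road-type-specific model, as the combined scheme does."""
    model = predict_city if road_type == "city" else predict_highway
    return model(history)
```

In the paper, each branch would instead call the SVM trained for that road class; the dispatch structure is the point of this sketch.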
The presence of green spaces within city centres has been recognized as a valuable component of the city landscape. Vegetation provides a variety of benefits including energy saving, improved air quality, reduced noise pollution, decreased ambient temperature and psychological restoration. Evidence also shows that the amount of vegetation, known as 'greenness', in densely populated areas can be an indicator of the relative wealth of a neighbourhood. The 'grey-green divide', the contrast between built-up areas with a dominant grey colour and green spaces, is taken as a proxy indicator of sustainable management of cities and planning of urban growth. Consistent and continuous assessment of greenness in cities is therefore essential for monitoring progress towards United Nations Sustainable Development Goal 11. The availability of multi-temporal greenness information from Landsat data archives, together with data derived from the city centres database of the Global Human Settlement Layer (GHSL) initiative, offers a unique perspective to quantify and analyse changes in greenness across 10,323 urban centres around the globe. In this research, we assess differences between greenness within and outside the built-up area for all the urban centres described by the city centres database of the GHSL. We also analyse changes in the amount of green space over time, taking into account changes in the built-up areas across 1990, 2000 and 2014. The results show an overall trend of increased greenness between 1990 and 2014 in most cities; the greening effect is also observed for most of the 32 world megacities. We conclude that using simple yet effective approaches exploiting open and free global data, it is possible to provide quantitative information on the greenness of cities and its changes over time. This information is of direct interest for urban planners and decision-makers seeking to mitigate urban environmental and social impacts.
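Greenness from Landsat imagery is typically measured via NDVI. A toy sketch of the grey-green contrast, comparing the green-pixel share inside and outside a built-up mask, is given below; the band values, the 0.4 greenness threshold, and the mask are illustrative assumptions rather than the study's actual processing chain.

```python
def ndvi(red, nir):
    # Normalized Difference Vegetation Index per pixel: (NIR - red) / (NIR + red).
    return [(n - r) / (n + r) if (n + r) else 0.0 for r, n in zip(red, nir)]

def grey_green_divide(red, nir, built_mask, green_thresh=0.4):
    """Share of 'green' pixels (NDVI > threshold) inside vs outside
    the built-up mask -- a toy version of the grey-green contrast."""
    v = ndvi(red, nir)
    def share(inside):
        sel = [x for x, b in zip(v, built_mask) if b == inside]
        return sum(x > green_thresh for x in sel) / len(sel) if sel else 0.0
    return share(True), share(False)
```

Computing these shares per epoch (e.g. 1990, 2000, 2014) over a consistent built-up mask would give the kind of greenness trend the study reports.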
Funding (turbulence detection study): supported by the Meteorological Soft Science Project (Grant No. 2023ZZXM29), the Natural Science Fund Project of Tianjin, China (Grant No. 21JCYBJC00740), and the Key Research and Development-Social Development Program of Jiangsu Province, China (Grant No. BE2021685).
Funding (climate change overview): supported by the Ministry of Science and Technology of China (Grant Nos. 2007BAC29B02, 2007BAC03A01 and GYHY201206012).
Funding (sea state bias study): supported by the Major Project for New Generation of AI (No. 2018AAA0100400), the National Natural Science Foundation of China (No. 41706010), the Joint Fund of the Equipments Pre-Research and Ministry of Education of China (No. 6141A020337), and the Fundamental Research Funds for the Central Universities of China.
Funding (multilevel phase change recording study): supported by the National Natural Science Foundation of China (Grant Nos. 61178059 and 61137002) and the Key Program of the Science and Technology Commission of Shanghai Municipality, China (Grant No. 11jc1413300).
文摘Four different states of Si15Sb85 and Ge2Sb2Te5 phase change memory thin films are obtained by crystallization degree modulation through laser initialization at different powers or annealing at different temperatures. The polarization characteristics of these two four-level phase change recording media are analyzed systematically. A simple and effective readout scheme is then proposed, and the readout signal is numerically simulated. The results show that a high-contrast polarization readout can be obtained in an extensive wavelength range for the four-level phase change recording media using common phase change materials. This study will help in-depth understanding of the physical mechanisms and provide technical approaches to multilevel phase change recording.
Abstract: The recording and playback of information using a reverse stimulated photon-echo hologram, with the recording medium exposed to a pulse of a non-resonant electromagnetic standing wave, was considered. It was shown that the spatial intensity distribution in the stimulated echo-hologram response depended on the electric field intensity of the non-resonant standing wave, which allowed the reproduced image to be controlled.
Funding: National Natural Science Foundation of China (Grant No. 60832012).
Abstract: Flight data of a twin-jet transport aircraft in revenue flight are analyzed for potential safety problems. Data from the quick access recorder (QAR) are first filtered through kinematic compatibility analysis. The filtered data are then organized into longitudinal- and lateral-directional aerodynamic model data with dynamic ground effect, which requires the radio height and sink rate in the models. The model data are then refined into numerical models through a fuzzy logic algorithm, without smoothing the data in advance. These numerical models describe nonlinear and unsteady aerodynamics and are used in nonlinear flight dynamics simulation. For the jet transport under study, the effect of crosswind is found to be significant enough to excite the Dutch roll motion. Through a linearized flight-dynamics analysis at every instant of time, the Dutch roll motion is found to be a nonlinear oscillation without clear damping of the amplitude. In this analysis, all stability derivatives vary with time and are hence nonlinear functions of the state variables. Since the Dutch roll motion is not damped despite a full-time yaw damper being engaged, it is concluded that the design data for the yaw damper are not sufficiently realistic and that the contribution of the time derivative of the sideslip angle to damping should be considered. From the nonlinear flight simulation, the vertical wind acting on the aircraft is estimated to be mostly an updraft that varies along the flight path before touchdown. The varying updraft appears to make the descent rate more difficult to control, resulting in a higher g-load at touchdown.
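The instant-by-instant linearized analysis mentioned above amounts to extracting the oscillatory eigenvalue pair of the lateral-directional state matrix at each time step and reading off the Dutch roll frequency and damping. A minimal sketch of that extraction, assuming a conventional state ordering such as [sideslip, roll rate, yaw rate, bank angle] (the matrix values and ordering here are hypothetical, not from the study):

```python
import numpy as np

def dutch_roll_mode(A):
    """Return (natural frequency, damping ratio) of the oscillatory
    (Dutch roll) mode of a linearized lateral-directional state matrix A."""
    eig = np.linalg.eigvals(A)
    osc = eig[np.abs(eig.imag) > 1e-6]   # the complex pair is the Dutch roll
    lam = osc[np.argmax(osc.imag)]       # take the upper-half-plane root
    wn = abs(lam)                        # undamped natural frequency (rad/s)
    zeta = -lam.real / wn                # damping ratio; <= 0 means undamped
    return wn, zeta
```

Applied at every instant along the QAR record, a damping ratio hovering near or below zero would indicate exactly the sustained, undamped Dutch roll oscillation the study reports.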
Abstract: The need to access historical Earth Observation (EO) data series has increased strongly over the last ten years, particularly for long-term science and environmental monitoring applications. This trend is likely to continue, in particular given the growing interest in global change monitoring, which is driving users to request time series of data spanning 20 years and more, and the need to support the United Nations Framework Convention on Climate Change (UNFCCC). While many satellite observations are accessible from different data centers, analyzing measurements collected from various instruments as time series is both difficult and critical. Climate research is a big data problem that involves high volumes of measurements, methods for on-the-fly extraction and reduction to keep up with the speed and data volume, and the ability to address uncertainties arising from data collection, processing, and analysis. The content of EO data archives is extending from a few years to decades, and their value as scientific time series is therefore continuously increasing. Hence there is a strong need to preserve EO space data without time constraints and to keep them accessible and exploitable. The preservation of EO space data can also be considered a responsibility of the space agencies or data owners, as these data constitute a humankind asset. This publication describes the activities supported by the European Space Agency relating to long time series generation, together with the relevant best practices and models needed to organize and measure the preservation and stewardship processes. The Data Stewardship Reference Model has been defined to give an overview and to help data owners and space agencies preserve and curate space datasets so that they are ready for long time series composition and analysis.
Funding: Supported by the Nanjing University of Aeronautics and Astronautics Research Funding (Grant No. NS2015028).
Abstract: Accurate prediction of vehicle speed plays an important role in a vehicle's real-time energy management and online optimization control. However, current forecast methods are mostly based on traffic conditions and ignore the impact of the driver-vehicle-road system on the actual speed profile. In this paper, the correlation between velocity and its influencing factors under various driving conditions was first analyzed based on driver-vehicle-road-traffic data records, in order to obtain a more accurate prediction model. With the modeling time and prediction time considered separately, the effectiveness and accuracy of several typical artificial-intelligence speed prediction algorithms were analyzed. The results show that combining the niche immune genetic algorithm-support vector machine (NIGA-SVM) prediction algorithm on city roads with the genetic algorithm-support vector machine (GA-SVM) prediction algorithm on suburban roads and freeways can sharply improve the accuracy and timeliness of vehicle speed forecasting. The optimized GA-SVM vehicle speed prediction model was then established in accordance with the optimized GA-SVM prediction algorithm at different times, and the test results verified the validity of the prediction algorithm.
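The GA-SVM idea above, a genetic algorithm searching the SVM hyperparameter space, can be sketched as a small evolutionary loop over (log10 C, log10 gamma) for a support vector regressor, selected by validation error. This is only an illustration of the concept; the chromosome encoding, operators, population sizes, and the niche/immune extensions of NIGA-SVM are assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.svm import SVR

def ga_svm_fit(X, y, pop=8, gens=5, seed=0):
    """GA-tuned SVR sketch: evolve (log10 C, log10 gamma) to minimize
    validation MSE, then refit the best model on all data."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    split = int(0.8 * len(X))
    Xtr, Xva = X[idx[:split]], X[idx[split:]]
    ytr, yva = y[idx[:split]], y[idx[split:]]

    def fitness(g):
        m = SVR(C=10.0 ** g[0], gamma=10.0 ** g[1]).fit(Xtr, ytr)
        return -np.mean((m.predict(Xva) - yva) ** 2)

    genes = rng.uniform(-2, 2, size=(pop, 2))   # initial random population
    for _ in range(gens):
        scores = np.array([fitness(g) for g in genes])
        parents = genes[np.argsort(scores)[-pop // 2:]]            # elitism
        children = parents + rng.normal(0.0, 0.3, parents.shape)   # mutation
        genes = np.vstack([parents, children])
    best = max(genes, key=fitness)
    return SVR(C=10.0 ** best[0], gamma=10.0 ** best[1]).fit(X, y)
```

In practice the feature matrix X would hold lagged speeds and driver-vehicle-road descriptors rather than time alone, and separate populations could be evolved per road type (city, suburb, freeway) as the study's comparison suggests.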
Abstract: The presence of green spaces within city centres has been recognized as a valuable component of the city landscape. Vegetation provides a variety of benefits including energy saving, improved air quality, reduced noise pollution, decreased ambient temperature, and psychological restoration. Evidence also shows that the amount of vegetation, known as 'greenness', in densely populated areas can be an indicator of the relative wealth of a neighbourhood. The 'grey-green divide', the contrast between built-up areas with a dominant grey colour and green spaces, is taken as a proxy indicator of sustainable management of cities and planning of urban growth. Consistent and continuous assessment of greenness in cities is therefore essential for monitoring progress towards United Nations Sustainable Development Goal 11. The availability of multi-temporal greenness information from the Landsat data archives, together with data derived from the city centres database of the Global Human Settlement Layer (GHSL) initiative, offers a unique perspective for quantifying and analysing changes in greenness across 10,323 urban centres around the globe. In this research, we assess differences between greenness within and outside the built-up area for all the urban centres described by the GHSL city centres database. We also analyse changes in the amount of green space over time, accounting for changes in the built-up areas in 1990, 2000, and 2014. The results show an overall trend of increased greenness between 1990 and 2014 in most cities, and the greening effect is also observed for most of the 32 world megacities. We conclude that simple yet effective approaches exploiting open and free global data can provide quantitative information on the greenness of cities and its changes over time. This information is of direct interest to urban planners and decision-makers seeking to mitigate urban environmental and social impacts.
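Greenness from Landsat archives is commonly derived from the Normalized Difference Vegetation Index (NDVI), computed from the red and near-infrared reflectance bands, and then summarized over a built-up mask such as the GHSL layer. The sketch below illustrates that general workflow; the NDVI threshold of 0.4 and the function names are assumptions for illustration, not the study's exact method.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index from NIR and red
    reflectance arrays: (NIR - red) / (NIR + red)."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

def greenness_share(nir, red, built_mask, threshold=0.4):
    """Fraction of built-up pixels whose NDVI exceeds `threshold`.

    built_mask -- boolean array marking built-up pixels (e.g. from GHSL).
    The 0.4 vegetation threshold is an illustrative assumption.
    """
    green = ndvi(nir, red) > threshold
    return green[built_mask].mean()
```

Comparing `greenness_share` inside and outside the built-up mask for two epochs (e.g. 1990 vs 2014 composites) gives exactly the kind of 'grey-green divide' change statistic the abstract describes.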