Purpose - The purpose of this paper is to eliminate the fluctuations in train arrival and departure times caused by skewed distributions of interval operation times. These fluctuations arise from random origin and process factors during interval operations and can accumulate over multiple intervals. The aim is to enhance the robustness of high-speed rail station arrival and departure track utilization schemes.
Design/methodology/approach - To achieve this objective, the paper simulates actual train operations, incorporating the fluctuations in interval operation times into the utilization of arrival and departure tracks at the station. The Monte Carlo simulation method is adopted to solve this problem. This approach transforms a nonlinear model, which includes probability-distribution-function constraints and is difficult to solve directly, into a linear programming model that is easier to handle. The method then linearly weights the two objectives to optimize the solution.
Findings - Through the application of Monte Carlo simulation, the study successfully converts the complex nonlinear model with probability-distribution-function constraints into a manageable linear programming model. By continuously adjusting the weighting coefficients of the linear objectives, the method is able to optimize the Pareto solution. Notably, this approach does not require extensive scenario data to obtain a satisfactory Pareto solution set.
Originality/value - The paper contributes to the field by introducing a novel method for optimizing high-speed rail station arrival and departure track utilization in the presence of fluctuations in interval operation times. The use of Monte Carlo simulation to transform the problem into a tractable linear programming model represents a significant advancement. Furthermore, the method's ability to produce satisfactory Pareto solutions without relying on extensive data sets adds to its practical value and applicability in real-world scenarios.
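The weighted-sum step in the Findings (sweeping the weighting coefficients of two linear objectives to trace out Pareto solutions) can be illustrated with a minimal sketch. The two toy objectives and the candidate set below are hypothetical stand-ins, not the paper's track-utilization model:

```python
import numpy as np

def weighted_sum_pareto(f1, f2, candidates, weights):
    """Trace an approximate Pareto front by minimizing w*f1 + (1-w)*f2
    over a sweep of weighting coefficients w."""
    front = []
    for w in weights:
        # linear weighting of the two objectives
        scores = w * f1(candidates) + (1.0 - w) * f2(candidates)
        best = candidates[np.argmin(scores)]
        point = (float(f1(best)), float(f2(best)))
        if point not in front:  # keep each trade-off point once
            front.append(point)
    return front

# hypothetical objectives: pull the decision variable toward 0 and toward 1
xs = np.linspace(0.0, 1.0, 101)
front = weighted_sum_pareto(lambda x: x ** 2, lambda x: (x - 1.0) ** 2,
                            xs, np.linspace(0.0, 1.0, 11))
```

For convex problems this sweep recovers the Pareto front; each weight yields one non-dominated trade-off point without requiring extensive scenario enumeration.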
In response to the lack of reliable physical parameters in process simulation of butadiene extraction, a large amount of phase equilibrium data was collected in the context of the actual process of butadiene production by the acetonitrile method. The accuracy of five prediction methods, UNIFAC (UNIQUAC Functional-group Activity Coefficients), UNIFAC-LL, UNIFAC-LBY, UNIFAC-DMD and COSMO-RS, applied to the butadiene extraction process was verified using partial phase equilibrium data. The results showed that the UNIFAC-DMD method had the highest accuracy in predicting phase equilibrium data for the missing systems, and COSMO-RS showed good accuracy across multiple systems; a large number of missing phase equilibrium data were therefore estimated using the UNIFAC-DMD and COSMO-RS methods. The predicted phase equilibrium data were checked for consistency. The NRTL-RK (Non-Random Two-Liquid/Redlich-Kwong equation of state) and UNIQUAC thermodynamic models were used to correlate the phase equilibrium data. Industrial device simulations were used to verify the accuracy of the thermodynamic model applied to the butadiene extraction process. The simulation results showed that the average deviations of the results obtained with the correlated thermodynamic model from the actual values were less than 2%, much smaller than the deviations (>10%) of simulations relying on the commercial simulator Aspen Plus and its built-in database, indicating that the obtained phase equilibrium data are highly accurate and reliable. The best phase equilibrium data and thermodynamic model parameters for butadiene extraction are provided. This improves the accuracy and reliability of the design, optimization and control of the process, and provides a basis and guarantee for developing a more environmentally friendly and economical butadiene extraction process.
Due to the restricted satellite payloads in LEO mega-constellation networks (LMCNs), remote sensing image analysis, online learning and other big data services urgently need onboard distributed processing (OBDP). In existing technologies, the efficiency of big data applications (BDAs) in distributed systems hinges on stable, low-latency links between worker nodes. However, LMCNs with highly dynamic nodes and long-distance links cannot provide these conditions, which makes the performance of OBDP hard to measure intuitively. To bridge this gap, a multidimensional simulation platform is indispensable, one that can simulate the network environment of LMCNs and place BDAs in it for performance testing. Using STK's APIs and a parallel computing framework, we achieve real-time simulation of thousands of satellite nodes, which are mapped to application nodes through software-defined networking (SDN) and container technologies. We elaborate the architecture and mechanism of the simulation platform, and take Starlink and Hadoop as realistic examples for simulations. The results indicate that LMCNs have dynamic end-to-end latency that fluctuates periodically with the constellation movement. Compared to ground data center networks (GDCNs), LMCNs degrade computing and storage job throughput, which can be alleviated by the use of erasure codes and data-flow scheduling across worker nodes.
A novel method for noise removal for the rotating accelerometer gravity gradiometer (MAGG) is presented. It introduces a head-to-tail data expansion technique based on the zero-phase filtering principle. A scheme for determining band-pass filter parameters based on signal-to-noise ratio gain, smoothness index, and cross-correlation coefficient is designed using the Chebyshev optimal consistent approximation theory. Additionally, a wavelet denoising evaluation function is constructed, with the dmey wavelet basis function identified as the most effective for processing gravity gradient data. The results of hardware-in-the-loop simulations and prototype experiments show that, compared with other commonly used methods, the proposed processing method achieves a 14% improvement in the measurement variance of gravity gradient signals and a measurement accuracy within 4 E. This verifies that the proposed method effectively removes noise from the gradient signals, improves gravity gradiometry accuracy, and offers technical insight for high-precision airborne gravity gradiometry.
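The combination of head-to-tail data expansion and zero-phase filtering can be sketched as follows. This is a minimal illustration with a simple first-order low-pass run forward and backward; the filter choice, mirror-padding convention, and parameter values are assumptions, not the paper's band-pass design:

```python
import numpy as np

def ema(x, alpha):
    """Causal first-order low-pass (exponential moving average)."""
    y = np.empty_like(x, dtype=float)
    acc = x[0]
    for i, v in enumerate(x):
        acc = alpha * v + (1.0 - alpha) * acc
        y[i] = acc
    return y

def zero_phase_lowpass(x, alpha, pad):
    """Head-to-tail expansion + forward/backward filtering: mirror the
    series at both ends to suppress edge transients, filter forward and
    then backward so the phase lags of the two passes cancel, then crop
    back to the original support."""
    ext = np.concatenate([x[pad:0:-1], x, x[-2:-pad - 2:-1]])
    y = ema(ext, alpha)             # forward pass
    y = ema(y[::-1], alpha)[::-1]   # backward pass (cancels phase lag)
    return y[pad:pad + len(x)]

# a flat record should pass through unchanged
flat = zero_phase_lowpass(np.ones(50), 0.3, 10)
```

Because the second pass runs time-reversed, the net filter has zero phase, so smoothed features are not shifted in time, which is the property the expansion technique preserves at the record edges.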
The convergence of the Internet of Things (IoT), 5G, and cloud collaboration offers tailored solutions to the rigorous demands of multi-flow integrated energy aggregation dispatch data processing. While generative adversarial networks (GANs) are instrumental in resource scheduling, their application in this domain is impeded by challenges such as slow convergence, inferior optimality-searching capability, and the inability to learn from feedback on failed decisions. Therefore, a cloud-edge collaborative federated GAN-based communication and computing resource scheduling algorithm with sensitivity to long-term constraint violations is proposed to address these challenges. The proposed algorithm facilitates real-time, energy-efficient data processing by optimizing transmission power control, data migration, and computing resource allocation. It employs federated learning for global parameter aggregation to enhance GAN parameter updating, and dynamically adjusts GAN learning rates and global aggregation weights based on energy-consumption constraint violations. Simulation results indicate that the proposed algorithm effectively reduces data processing latency, energy consumption, and convergence time.
Attitude is one of the crucial parameters for space objects and plays a vital role in collision prediction and debris removal. Analyzing light curves to determine attitude is the most commonly used method. In photometric observations, outliers may exist in the obtained light curves due to various reasons, so preprocessing is required to remove these outliers and obtain high-quality light curves. Through statistical analysis, the causes of outliers can be categorized into two main types: first, the brightness of the object significantly increases due to the passage of a nearby star, referred to as "stellar contamination," and second, the brightness markedly decreases due to cloud cover, referred to as "cloudy contamination." The traditional approach of manually inspecting images for contamination is time-consuming and labor-intensive, so we propose machine learning methods as a substitute. Convolutional neural networks and support vector machines (SVMs) are employed to identify cases of stellar contamination and cloudy contamination, achieving F1 scores of 1.00 and 0.98 on a test set, respectively. We also explore other machine learning methods, such as ResNet-18 and the Light Gradient Boosting Machine, and conduct comparative analyses of the results.
The interleaving/multiplexing technique was used to realize a 200 MHz real-time data acquisition system. Two 100 MHz ADC modules worked in parallel, and each ADC played out data in ping-pong fashion. The design improved the system conversion rate to 200 MHz while reducing the speed of data transport and storage to 50 MHz. High-speed HDPLD and ECL logic parts were used to control the system timing and the memory addressing. A multilayer printed circuit board and shielding were used to reduce the interference produced by the high-speed circuitry, and the system timing was designed carefully. The interleaving/multiplexing technique greatly improves the system conversion rate while greatly reducing the speed of the external digital interfaces, effectively resolving the difficulties of high-speed system design. Experiments proved that the data acquisition system is stable and accurate.
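The time-interleaving idea can be sketched numerically: two converters sample the same input at 100 MS/s each, with the second clock delayed by half a conversion period, and the ping-pong merge of the two streams yields an effective 200 MS/s record. The ramp input and variable names below are illustrative only:

```python
import numpy as np

def interleaved_acquire(signal, t, f_adc=100e6):
    """Time-interleaving sketch: two ADCs run at f_adc each, with ADC B's
    sample clock delayed by half a conversion period (ping-pong), so the
    merged stream has an effective rate of 2*f_adc."""
    ts = 1.0 / f_adc
    adc_a = signal(t)             # ADC A sampling instants
    adc_b = signal(t + ts / 2.0)  # ADC B instants, half a period later
    merged = np.empty(2 * len(t))
    merged[0::2] = adc_a          # even output slots come from ADC A
    merged[1::2] = adc_b          # odd output slots come from ADC B
    return merged

# each ADC takes 8 samples of a ramp; merged stream is the 200 MS/s record
t = np.arange(8) / 100e6
out = interleaved_acquire(lambda tt: tt * 1e6, t)
```

Each ADC (and the downstream memory) only ever runs at half the output rate, which is exactly why the external digital interfaces can be slower than the aggregate conversion rate.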
Since 2008, a network of five sea-level monitoring stations has been progressively installed in French Polynesia. The stations are autonomous, and the data, collected at a sampling rate of 1 or 2 min, are not only recorded locally but also transferred in real time by a radio link to NOAA through the GOES satellite. The new ET34-ANA-V80 version of ETERNA, initially developed for Earth tides analysis, is now able to analyze ocean tide records. Through a two-step validation scheme, we took advantage of the flexibility of this new version, operated in conjunction with the preprocessing facilities of the Tsoft software, to recover corrected data series able to model sea-level variations after elimination of the ocean tide signal. We performed the tidal analysis of the tide gauge data with the highest possible selectivity (optimal wave grouping) and a maximum of additional terms (shallow-water constituents). Our goal was to provide corrected data series and a modelled ocean tide signal to compute tide-free sea-level variations, as well as tidal prediction models with centimeter precision. We also present the characteristics of the ocean tides in French Polynesia and preliminary results on the non-tidal variations of the sea level related to the tide gauge setting.
Networks are fundamental to our modern world, appearing throughout science and society, and access to massive amounts of network data presents a unique opportunity to the research community. As networks grow in size, their complexity increases, and our ability to analyze them with the current state of the art is at severe risk of failing to keep pace. Therefore, this paper initiates a discussion on graph signal processing for large-scale data analysis. We first provide a comprehensive overview of the core ideas in graph signal processing (GSP) and their connection to conventional digital signal processing (DSP). We then summarize recent developments in basic GSP tools, including methods for graph filtering and graph learning, graph signals, the graph Fourier transform (GFT), spectra, and graph frequencies. Graph filtering is a basic task that isolates the contributions of individual frequencies and therefore enables the removal of noise. We then consider a graph filter as a model that helps to extend the application of GSP methods to large datasets. To show its suitability and effectiveness, we created a noisy graph signal and applied the filter to it. After several rounds of simulation, the filtered signal appears smoother and is closer to the original noise-free, distance-based signal. Through this example application, we demonstrate that graph filtering is efficient for big data analytics.
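The denoising experiment described above can be sketched with a spectral low-pass graph filter. This is a generic GFT-based sketch, not the paper's specific filter; the path graph and the number of retained frequencies are illustrative assumptions:

```python
import numpy as np

def graph_lowpass(adjacency, signal, keep):
    """Graph-filtering sketch: build the combinatorial Laplacian, use its
    eigendecomposition as the graph Fourier transform (GFT), zero out the
    high graph frequencies, and transform back. Keeping only the `keep`
    lowest frequencies smooths the signal over the graph."""
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    eigvals, eigvecs = np.linalg.eigh(laplacian)  # frequencies ascend
    coeffs = eigvecs.T @ signal                   # forward GFT
    coeffs[keep:] = 0.0                           # low-pass mask
    return eigvecs @ coeffs                       # inverse GFT

# path graph on 5 nodes as a toy example
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1.0
# a constant signal lies entirely in the lowest graph frequency
smoothed = graph_lowpass(A, np.ones(5), keep=1)
```

Zeroing high-frequency GFT coefficients is the graph analogue of classical low-pass filtering: noise, which varies rapidly across neighbouring nodes, concentrates in the discarded frequencies.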
The current velocity observation of the LADCP (Lowered Acoustic Doppler Current Profiler) has the advantages of a large vertical observation range and high operability compared with traditional current measurement methods, and it is widely used in ocean observation. Shear and inverse methods are now commonly used by the international marine community to process LADCP data and calculate ocean current profiles. The two methods have their respective advantages and shortcomings: the shear method calculates the current shear more accurately but is less accurate in the absolute value of the current velocity, while the inverse method calculates the absolute value of the current velocity more accurately but resolves the current shear less accurately. Based on the shear method, this paper proposes a layering shear method that calculates the current velocity profile by "layering averaging," and proposes corresponding current calculation methods for the different types of problems found in several field observation datasets from the western Pacific, forming an independent LADCP data processing system. The comparison results show that the layering shear method achieves the same effect as the inverse method in calculating the absolute value of the current velocity, while retaining the advantage of the shear method in calculating the current shear.
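The "layering averaging" idea can be sketched as follows: compute local vertical shear, average it within depth layers, integrate the layer-mean shear downward, and anchor the result to an external reference velocity. The variable names, layering convention, and the mean-based re-referencing step are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def layered_shear_profile(depths, velocities, bin_edges, v_ref):
    """Layering-shear sketch: local shear -> layer-mean shear ->
    integrated velocity profile anchored to a reference velocity
    (e.g. a depth-averaged current from navigation data)."""
    shear = np.diff(velocities) / np.diff(depths)   # local vertical shear
    mids = 0.5 * (depths[:-1] + depths[1:])
    layer_shear = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (mids >= lo) & (mids < hi)
        layer_shear.append(shear[mask].mean() if mask.any() else 0.0)
    layer_shear = np.asarray(layer_shear)
    profile = np.cumsum(layer_shear * np.diff(bin_edges))  # integrate shear
    return profile - profile.mean() + v_ref                # re-reference

# toy cast: velocity increasing linearly with depth (constant shear 0.01/s)
depths = np.arange(0.0, 101.0, 5.0)
velocities = 0.01 * depths
profile = layered_shear_profile(depths, velocities,
                                np.arange(0.0, 101.0, 20.0), v_ref=0.6)
```

Averaging shear within layers before integrating suppresses single-bin noise (the shear method's strength), while the final referencing step restores an accurate absolute velocity (the inverse method's strength).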
Delay-causing text data contain valuable information, such as the specific reason for a delay and the location and time of the disturbance, which can provide efficient support for the prediction of train delays and improve the efficiency of train operation control. Based on the train operation data and delay-causing data of the Wuhan-Guangzhou high-speed railway, algorithms from the natural language processing field are used to process the delay-causing text data. The work also integrates train operating-environment information with delay-causing text information to develop a cause-based train delay propagation prediction model. The Word2vec model is first used to vectorize the delay-causing text descriptions after word segmentation. The mean model or the term frequency-inverse document frequency (TF-IDF) weighted model is then used to generate a delay-causing sentence vector from the original word vectors. Afterward, the train operating-environment features and the delay-causing sentence vector are input into the extreme gradient boosting (XGBoost) regression algorithm to develop a delay propagation prediction model. In this work, 4 text feature processing methods and 8 regression algorithms are considered. The results demonstrate that the XGBoost regression algorithm has the highest prediction accuracy when using the text features processed by the continuous bag-of-words and mean models. Compared with a prediction model that considers only the train operating-environment features, the prediction accuracy is significantly improved across multiple regression algorithms after integrating the delay-causing features.
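The "mean model" step, averaging word vectors into a sentence vector, can be sketched directly. The tiny hand-made embedding table below is a hypothetical stand-in for a trained Word2vec model (which the paper learns from real delay records), and the tokens are invented for illustration:

```python
import numpy as np

def sentence_vector(tokens, word_vectors):
    """Mean model: the delay-cause sentence vector is the average of the
    (Word2vec-style) vectors of its tokens; out-of-vocabulary tokens are
    skipped, and an empty sentence maps to the zero vector."""
    vecs = [word_vectors[t] for t in tokens if t in word_vectors]
    if not vecs:
        return np.zeros(next(iter(word_vectors.values())).shape)
    return np.mean(vecs, axis=0)

# toy embedding table (hypothetical, for illustration only)
wv = {"signal":  np.array([1.0, 0.0]),
      "failure": np.array([0.0, 1.0]),
      "storm":   np.array([1.0, 1.0])}
```

The resulting fixed-length vector can then be concatenated with the operating-environment features and fed to a regressor such as XGBoost, as the abstract describes; the TF-IDF-weighted variant simply replaces the plain mean with a weighted mean.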
The Kuiyang-ST2000 deep-towed high-resolution multichannel seismic system was designed by the First Institute of Oceanography, Ministry of Natural Resources (FIO, MNR). The system is mainly composed of a plasma spark source (source level: 216 dB, main frequency: 750 Hz, frequency bandwidth: 150-1200 Hz) and a towed hydrophone streamer with 48 channels. Because the source and the towed hydrophone streamer are constantly moving according to the towing configuration, accurately positioning the towed hydrophone array and applying moveout correction to deep-towed multichannel seismic data before imaging are challenging. Initially, according to the characteristics of the system and the shape of the towed streamer in deep water, a travel-time positioning method was used to construct the hydrophone streamer shape, and the results were corrected using polynomial curve fitting. Then, a new data-processing workflow for Kuiyang-ST2000 data was introduced, mainly including float datum setting, residual static correction, and phase-based moveout correction, which allows the imaging algorithms of conventional marine seismic data processing to be extended to deep-towed seismic data. We successfully applied the Kuiyang-ST2000 system and data-processing methodology to a gas hydrate survey of the Qiongdongnan and Shenhu areas in the South China Sea; the results show that the profiles have very high vertical and lateral resolution (0.5 m and 8 m, respectively), which can provide full and accurate details of gas hydrate-related and geohazard sedimentary and structural features in the South China Sea.
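The polynomial-correction step can be sketched in a few lines: channel positions estimated by travel-time positioning are noisy, so a low-order polynomial fitted over the streamer recovers a smooth shape. The function name, the offset/depth parameterization, and the degree are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def smooth_streamer_shape(offsets, depths, degree=3):
    """Fit a low-order polynomial to noisy (offset, depth) estimates of
    the towed streamer and return the smoothed shape evaluated at the
    same offsets."""
    coeffs = np.polyfit(offsets, depths, degree)
    return np.polyval(coeffs, offsets)

# toy check: a smooth quadratic streamer shape (normalized offsets)
offsets = np.linspace(-1.0, 1.0, 48)          # 48 channels, as in the system
true_shape = 2000.0 + 10.0 * offsets + 5.0 * offsets ** 2
fitted = smooth_streamer_shape(offsets, true_shape, degree=3)
```

Because the fit has far fewer degrees of freedom than there are channels, per-channel positioning noise is averaged down while the overall catenary-like shape is preserved.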
The Main Optical Telescope (MOT) is an important payload of the Space Solar Telescope (SST), with various instruments and observation modes, so its real-time data handling and its management and control tasks are arduous. Drawing on advanced techniques from abroad, an improved structure for onboard data handling systems feasible for the SST is proposed. This article concentrates on the development of a Central Management & Control Unit (MCU) based on an FPGA and a DSP. By reconfiguring the FPGA and DSP programs, the prototype can perform different tasks, which improves the reusability of the whole system. The completed dual-channel prototype proves that the system meets all requirements of the MOT. Its high reliability and safety features also meet the requirements of harsh environments such as mine detection.
With the continued development of multiple Global Navigation Satellite Systems (GNSS) and the emergence of various frequencies, UnDifferenced and UnCombined (UDUC) data processing has become an increasingly attractive option. In this contribution, we provide an overview of the current status of UDUC GNSS data processing activities in China. These activities encompass the formulation of Precise Point Positioning (PPP) models and PPP Real-Time Kinematic (PPP-RTK) models for processing single-station and multi-station GNSS data, respectively. Regarding single-station data processing, we discuss the advancements in PPP models, particularly the extension from a single system to multiple systems, and from dual frequencies to single and multiple frequencies. Additionally, we introduce the modified PPP model, which accounts for the time variation of receiver code biases, a departure from the conventional PPP model that typically assumes these biases to be time-constant. In the realm of multi-station PPP-RTK data processing, we introduce the ionosphere-weighted PPP-RTK model, which enhances the model strength by considering the spatial correlation of ionospheric delays. We also review the phase-only PPP-RTK model, designed to mitigate the impact of unmodelled code-related errors. Furthermore, we explore GLONASS PPP-RTK, achieved through the application of the integer-estimable model. For large-scale network data processing, we introduce the all-in-view PPP-RTK model, which alleviates the strict common-view requirement at all receivers. Moreover, we present the decentralized PPP-RTK data processing strategy, designed to improve computational efficiency. Overall, this work highlights the various advancements in UDUC GNSS data processing, providing insights into the state-of-the-art techniques employed in China to achieve precise GNSS applications.
A field-programmable gate array (FPGA)-based high-speed broadband data acquisition system is designed. The system has a dual-channel simultaneous acquisition function; the maximum sampling rate is 500 MSa/s and the bandwidth is 200 MHz, which solves the problems of large-bandwidth, high-speed signal acquisition and processing. At present, the data acquisition system is successfully used in broadband receiver test systems.
Inter-agency government information sharing (IAGIS) plays an important role in improving the service and efficiency of government agencies. Currently, there is still no effective and secure way for data-driven IAGIS to fulfill the dynamic demands of information sharing between government agencies. Motivated by blockchain and data mining, a data-driven framework for IAGIS is proposed in this paper. Firstly, a blockchain is used as the core of the framework for monitoring and preventing the leakage and abuse of government information, in order to guarantee information security. Secondly, a four-layer architecture is designed for implementing the proposed framework. Thirdly, the classical data mining algorithms PageRank and Apriori are applied to dynamically design smart contracts for information sharing, for the purpose of flexibly adjusting the information-sharing strategies according to the practical demands of government agencies for public management and public service. Finally, a case study is presented to illustrate the operation of the proposed framework.
The Internet of Multimedia Things (IoMT) refers to a network of interconnected multimedia devices that communicate with each other over the Internet. Recently, smart healthcare has emerged as a significant application of the IoMT, particularly in the context of knowledge-based learning systems. Smart healthcare systems leverage knowledge-based learning to become more context-aware, adaptable, and auditable while maintaining the ability to learn from historical data. In smart healthcare systems, devices capture images such as X-rays and magnetic resonance imaging (MRI) scans. The security and integrity of these images are crucial for the databases used in knowledge-based learning systems to foster structured decision-making and enhance the learning abilities of AI. Moreover, in knowledge-driven systems, the storage and transmission of HD medical images burden the limited bandwidth of the communication channel, leading to data transmission delays. To address these security and latency concerns, this paper presents a lightweight medical image encryption scheme utilising bit-plane decomposition and chaos theory. The experiments yield entropy, energy, and correlation values of 7.999, 0.0156, and 0.0001, respectively. This validates the effectiveness of the proposed encryption system, which offers high-quality encryption, a large key space, key sensitivity, and resistance to statistical attacks.
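The bit-plane decomposition that the scheme builds on can be sketched as follows. This shows only the lossless split/recombine step, not the paper's chaos-based permutation and diffusion stages:

```python
import numpy as np

def bit_planes(image):
    """Split an 8-bit image into 8 binary planes, one per bit of
    significance (LSB first). Chaos-based schemes typically scramble the
    information-rich high-order planes more aggressively than the
    noise-like low-order planes."""
    planes = [(image >> b) & 1 for b in range(8)]
    return np.stack(planes, axis=0).astype(np.uint8)

def recombine(planes):
    """Inverse operation: weight each plane by its bit value and sum."""
    weights = (1 << np.arange(8)).reshape(8, 1, 1)
    return (planes.astype(np.uint16) * weights).sum(axis=0).astype(np.uint8)
```

Because the decomposition is exactly invertible, any per-plane encryption operation can be undone at the receiver, which is what keeps such schemes lossless, a requirement for medical imagery.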
One of the biggest dangers to society today is terrorism, and attacks have become one of the most significant risks to international peace and national security. Big data, information analysis, and artificial intelligence (AI) have become the basis for making strategic decisions in many sensitive areas, such as fraud detection, risk management, medical diagnosis, and counter-terrorism. However, there is still a need to assess how terrorist attacks are related, initiated, and detected. For this purpose, we propose a novel framework for classifying and predicting terrorist attacks. The proposed framework posits that neglected text attributes in the Global Terrorism Database (GTD) can influence the accuracy of a model's classification of terrorist attacks, as each part of the data can provide vital information to enrich classifier learning. Each data point in the multiclass taxonomy has one or more tags attached to it, referred to as "related tags." We applied machine learning classifiers to classify terrorist attack incidents obtained from the GTD. A transformer-based technique called DistilBERT extracts and learns contextual features from the text attributes to acquire more information from the text data. The extracted contextual features are combined with the "key features" of the dataset and used to perform the final classification. The study explored different experimental setups with various classifiers to evaluate the model's performance. The experimental results show that the proposed framework outperforms the latest techniques for classifying terrorist attacks, with an accuracy of 98.7% using the combined feature set and an extreme gradient boosting classifier.
The Yutu-2 rover onboard the Chang'E-4 mission performed the first lunar-penetrating radar detection on the farside of the Moon. The high-frequency channel presented us with many unprecedented details of the subsurface structures within a depth of approximately 50 m. However, it was still difficult to identify finer layers among the cluttered reflections and scattered waves. We applied deconvolution to improve the vertical resolution of the radar profile by extending the limited bandwidth associated with the emitted radar pulse. To overcome the challenges arising from mixed-phase wavelets and the problematic amplification of noise, we performed predictive deconvolution to remove the minimum-phase components from the Chang'E-4 dataset, followed by a comprehensive phase rotation to rectify phase anomalies in the radar image. Subsequently, we implemented irreversible migration filtering to mitigate the noise and diminutive clutter echoes amplified by deconvolution. The processed data show an evident enhancement of the vertical resolution, with a widened bandwidth in the frequency domain and better signal clarity in the time domain, providing more undisputed details of the subsurface structures near the Chang'E-4 landing site.
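The predictive-deconvolution step can be sketched on a one-dimensional trace: estimate the trace autocorrelation, solve the normal equations for a one-step prediction filter (with a small white-noise "pre-whitening" term for stability), and output the prediction error. This is a generic textbook sketch with unit prediction distance, not the exact parameterization used on the Chang'E-4 data:

```python
import numpy as np

def predictive_decon(trace, n_lags, eps=1e-3):
    """Predictive deconvolution sketch: the prediction filter captures
    the predictable (minimum-phase wavelet) part of the trace, and the
    returned prediction error is the whitened, deconvolved trace."""
    n = len(trace)
    # autocorrelation up to n_lags
    r = np.array([np.dot(trace[:n - k], trace[k:]) for k in range(n_lags + 1)])
    # Toeplitz normal equations with white-noise stabilisation
    R = np.array([[r[abs(i - j)] for j in range(n_lags)] for i in range(n_lags)])
    R[np.diag_indices(n_lags)] *= 1.0 + eps
    a = np.linalg.solve(R, r[1:])                 # prediction coefficients
    pred = np.zeros(n)
    for i in range(1, n):
        past = trace[max(0, i - n_lags):i][::-1]  # most recent sample first
        pred[i] = np.dot(a[:len(past)], past)
    return trace - pred                           # prediction error
```

On a strongly autocorrelated input, the output is nearly white, which is the spectral-broadening effect that sharpens radar reflections; the residual mixed-phase component is what the subsequent phase rotation addresses.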
In order to obtain high-precision GPS control point results and provide high-precision known points for various projects, this study uses a variety of mature GPS post-processing software packages to process the observation data of the GPS control network of the Guanyinge Reservoir and compares the results obtained by the different packages. Based on the test results, the reasons for the accuracy differences between the software packages are analyzed, and the optimal results are identified through analysis and comparison. The purpose of this paper is to provide a useful reference for GPS software users when processing data.
Funding: Supported by the National Natural Science Foundation of China (22178190).
Abstract: In response to the lack of reliable physical parameters in the process simulation of butadiene extraction, a large amount of phase equilibrium data was collected in the context of the actual process of butadiene production by acetonitrile extraction. The accuracy of five prediction methods applied to the butadiene extraction process, UNIFAC (UNIQUAC Functional-group Activity Coefficients), UNIFAC-LL, UNIFAC-LBY, UNIFAC-DMD, and COSMO-RS, was verified using partial phase equilibrium data. The results showed that the UNIFAC-DMD method had the highest accuracy in predicting phase equilibrium data for the missing systems. The COSMO-RS method showed good accuracy across multiple systems, and a large number of missing phase equilibrium data were estimated using the UNIFAC-DMD and COSMO-RS methods. The predicted phase equilibrium data were checked for consistency. The NRTL-RK (Non-Random Two-Liquid model with the Redlich-Kwong equation of state) and UNIQUAC thermodynamic models were used to correlate the phase equilibrium data. Industrial device simulations were used to verify the accuracy of the thermodynamic model applied to the butadiene extraction process. The simulation results showed that the average deviations of the results obtained with the correlated thermodynamic model from the actual values were less than 2%, much smaller than the deviations of simulations using the commercial simulation software Aspen Plus and its built-in database (>10%), indicating that the obtained phase equilibrium data are highly accurate and reliable. The best phase equilibrium data and thermodynamic model parameters for butadiene extraction are provided. This improves the accuracy and reliability of the design, optimization, and control of the process, and provides a basis and guarantee for developing a more environmentally friendly and economical butadiene extraction process.
Funding: Supported by the National Natural Science Foundation of China (Nos. 62271165, 62027802, and 62201307), the Guangdong Basic and Applied Basic Research Foundation (No. 2023A1515030297), the Shenzhen Science and Technology Program (ZDSYS20210623091808025), the Stable Support Plan Program (GXWD20231129102638002), and the Major Key Project of PCL (No. PCL2024A01).
Abstract: Due to the restricted satellite payloads in LEO mega-constellation networks (LMCNs), remote sensing image analysis, online learning, and other big data services urgently need onboard distributed processing (OBDP). In existing technologies, the efficiency of big data applications (BDAs) in distributed systems hinges on stable, low-latency links between worker nodes. However, LMCNs with highly dynamic nodes and long-distance links cannot provide these conditions, which makes the performance of OBDP hard to measure intuitively. To bridge this gap, a multidimensional simulation platform is indispensable that can simulate the network environment of LMCNs and place BDAs in it for performance testing. Using STK's APIs and a parallel computing framework, we achieve real-time simulation of thousands of satellite nodes, which are mapped to application nodes through software-defined networking (SDN) and container technologies. We elaborate the architecture and mechanism of the simulation platform, and take Starlink and Hadoop as realistic examples for simulations. The results indicate that LMCNs have dynamic end-to-end latency which fluctuates periodically with the constellation movement. Compared to ground data center networks (GDCNs), LMCNs degrade computing and storage job throughput, which can be alleviated by the use of erasure codes and data-flow scheduling among worker nodes.
Abstract: A novel method for noise removal for the rotating accelerometer gravity gradiometer (MAGG) is presented. It introduces a head-to-tail data expansion technique based on the zero-phase filtering principle. A scheme for determining band-pass filter parameters based on signal-to-noise ratio gain, smoothness index, and cross-correlation coefficient is designed using the Chebyshev optimal consistent approximation theory. Additionally, a wavelet denoising evaluation function is constructed, with the dmey wavelet basis function identified as the most effective for processing gravity gradient data. The results of hardware-in-the-loop simulations and prototype experiments show that, compared with other commonly used methods, the proposed processing method achieves a 14% improvement in the measurement variance of gravity gradient signals, and the measurement accuracy reaches within 4 E. This verifies that the proposed method effectively removes noise from the gradient signals, improves gravity gradiometry accuracy, and offers technical insights for high-precision airborne gravity gradiometry.
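The head-to-tail expansion idea behind zero-phase filtering can be illustrated in miniature: mirror-extend the record before a forward-backward filtering pass so the filter transient falls outside the data window, then crop. The moving-average filter and padding length below are arbitrary stand-ins for the paper's Chebyshev-designed band-pass filter:

```python
import numpy as np

def zero_phase_smooth(x, k=9, pad=None):
    """Zero-phase smoothing sketch: mirror the head and tail of the record
    ("head-to-tail expansion"), run a length-k moving average forward and
    then backward, and crop back to the original span."""
    if pad is None:
        pad = 3 * k
    # mirror-extend: reflect 'pad' samples at each end (excluding endpoints)
    ext = np.concatenate([x[pad:0:-1], x, x[-2:-pad - 2:-1]])
    h = np.ones(k) / k
    y = np.convolve(ext, h, mode="same")             # forward pass
    y = np.convolve(y[::-1], h, mode="same")[::-1]   # backward pass -> zero phase
    return y[pad:pad + len(x)]                       # crop the expansion off

t = np.linspace(0, 1, 400)
clean = np.sin(2 * np.pi * 3 * t)                    # slow "gradient" signal
rng = np.random.default_rng(0)
noisy = clean + 0.3 * rng.standard_normal(t.size)
smoothed = zero_phase_smooth(noisy)
```

Running the filter in both directions cancels its phase delay, so peaks in the gradient record are not shifted in time, which is the property the paper's method relies on.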
Funding: Supported by the China Southern Power Grid Technology Project under Grant 03600KK52220019 (GDKJXM20220253).
Abstract: The convergence of the Internet of Things (IoT), 5G, and cloud collaboration offers tailored solutions to the rigorous demands of multi-flow integrated energy aggregation dispatch data processing. While generative adversarial networks (GANs) are instrumental in resource scheduling, their application in this domain is impeded by challenges such as slow convergence, inferior optimality-searching capability, and the inability to learn from failed decision-making feedback. Therefore, a cloud-edge collaborative federated GAN-based communication and computing resource scheduling algorithm with long-term constraint violation sensitivity is proposed to address these challenges. The proposed algorithm facilitates real-time, energy-efficient data processing by optimizing transmission power control, data migration, and computing resource allocation. It employs federated learning for global parameter aggregation to enhance GAN parameter updating, and dynamically adjusts GAN learning rates and global aggregation weights based on energy-consumption constraint violations. Simulation results indicate that the proposed algorithm effectively reduces data processing latency, energy consumption, and convergence time.
Funding: Funded by the National Natural Science Foundation of China (NSFC, Nos. 12373086 and 12303082), the CAS "Light of West China" Program, the Yunnan Revitalization Talent Support Program of Yunnan Province, and the National Key R&D Program of China, Gravitational Wave Detection Project No. 2022YFC2203800.
Abstract: Attitude is one of the crucial parameters of space objects and plays a vital role in collision prediction and debris removal. Analyzing light curves to determine attitude is the most commonly used method. In photometric observations, outliers may exist in the obtained light curves due to various reasons. Therefore, preprocessing is required to remove these outliers to obtain high-quality light curves. Through statistical analysis, the causes of outliers can be categorized into two main types: first, the brightness of the object significantly increases due to the passage of a star nearby, referred to as "stellar contamination," and second, the brightness markedly decreases due to cloud cover, referred to as "cloudy contamination." The traditional approach of manually inspecting images for contamination is time-consuming and labor-intensive. Instead, we propose the utilization of machine learning methods as a substitute. Convolutional neural networks and support vector machines (SVMs) are employed to identify cases of stellar contamination and cloudy contamination, achieving F1 scores of 1.00 and 0.98 on a test set, respectively. We also explore other machine learning methods such as ResNet-18 and the Light Gradient Boosting Machine, and conduct comparative analyses of the results.
Abstract: The interleaving/multiplexing technique was used to realize a 200 MHz real-time data acquisition system. Two 100 MHz ADC modules work in parallel, and each ADC outputs data in ping-pong fashion. The design raises the system conversion rate to 200 MHz while reducing the speed of data transport and storage to 50 MHz. High-speed HDPLD and ECL logic parts are used to control the system timing and the memory addressing. A multilayer printed circuit board and shielding are used to reduce the interference produced by the high-speed circuits, and the system timing was designed carefully. The interleaving/multiplexing technique can greatly improve the system conversion rate while greatly reducing the speed of the external digital interfaces, effectively resolving the difficulties of high-speed system design. Experiments proved that the data acquisition system is stable and accurate.
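The interleave step itself is simple to show in miniature: two ADCs clocked on opposite phases each capture every other sample, and merging their ping-pong output streams reconstructs the full-rate record. The sample values below are invented:

```python
# Two ADCs on opposite 100 MHz clock phases each see every other sample of a
# 200 MSa/s input; interleaving their output streams rebuilds the full record.
def interleave(adc_a, adc_b):
    out = []
    for a, b in zip(adc_a, adc_b):
        out.extend((a, b))   # A-phase sample, then B-phase sample
    return out

signal = list(range(16))                   # pretend full-rate samples
adc_a, adc_b = signal[0::2], signal[1::2]  # what each half-rate ADC captures
assert interleave(adc_a, adc_b) == signal  # lossless reconstruction
```

Each downstream path (transport, memory) then only has to run at half the conversion rate, which is the benefit the abstract describes.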
Funding: Funding from the "Talent Introduction Scientific Research Start-Up Fund" of Shandong University of Science and Technology (Grant No. 0104060510217) and the Open Fund of the State Key Laboratory of Geodesy and Earth's Dynamics (Grant No. SKLGED2021-3-5).
Abstract: Since 2008, a network of five sea-level monitoring stations has been progressively installed in French Polynesia. The stations are autonomous, and the data, collected at a sampling rate of 1 or 2 min, are not only recorded locally but also transferred in real time by a radio link to the NOAA through the GOES satellite. The new ET34-ANA-V80 version of ETERNA, initially developed for Earth tides analysis, is now able to analyze ocean tides records. Through a two-step validation scheme, we took advantage of the flexibility of this new version, operated in conjunction with the preprocessing facilities of the Tsoft software, to recover corrected data series able to model sea-level variations after elimination of the ocean tides signal. We performed the tidal analysis of the tide gauge data with the highest possible selectivity (optimal wave grouping) and a maximum of additional terms (shallow-water constituents). Our goal was to provide corrected data series and a modelled ocean tides signal to compute tide-free sea-level variations, as well as tidal prediction models with centimeter precision. We also present in this study the characteristics of the ocean tides in French Polynesia and preliminary results concerning the non-tidal variations of sea level related to the tide gauge setting.
Funding: Supported in part by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2019R1A2C1006159 and NRF-2021R1A6A1A03039493), and by the 2021 Yeungnam University Research Grant.
Abstract: Networks are fundamental to our modern world, and they appear throughout science and society. Access to massive amounts of data presents a unique opportunity to the research community. As networks grow in size, their complexity increases, and our ability to analyze them using the current state of the art is at severe risk of failing to keep pace. Therefore, this paper initiates a discussion on graph signal processing for large-scale data analysis. We first provide a comprehensive overview of the core ideas in graph signal processing (GSP) and their connection to conventional digital signal processing (DSP). We then summarize recent developments in basic GSP tools, including methods for graph filtering and graph learning, the graph signal, the graph Fourier transform (GFT), spectrum, graph frequency, etc. Graph filtering is a basic task that allows for isolating the contribution of individual frequencies and therefore enables the removal of noise. We then consider a graph filter as a model that helps to extend the application of GSP methods to large datasets. To show its suitability and effectiveness, we first created a noisy graph signal and then applied it to the filter. After several rounds of simulation, we see that the filtered signal appears smoother and is closer to the original noise-free distance-based signal. By using this example application, we demonstrate that graph filtering is effective for big data analytics.
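A toy version of the noisy-graph-signal experiment described above can be sketched with a standard low-pass graph filter. The paper does not specify its filter, so the sketch uses the common Tikhonov smoother h(L) = (I + alpha*L)^(-1) on a path graph, with illustrative parameters:

```python
import numpy as np

# Low-pass graph filtering sketch: denoise a smooth signal on a path graph
# with the Tikhonov filter h(L) = (I + alpha*L)^(-1), a standard GSP choice.
n = 50
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0        # path-graph adjacency
L = np.diag(A.sum(axis=1)) - A             # combinatorial graph Laplacian

rng = np.random.default_rng(1)
smooth = np.sin(np.linspace(0, np.pi, n))  # smooth "distance-based" signal
noisy = smooth + 0.3 * rng.standard_normal(n)

alpha = 5.0                                # smoothing strength (illustrative)
# Solving (I + alpha*L) x = y minimizes ||x - y||^2 + alpha * x^T L x,
# i.e. it trades fidelity against graph smoothness.
filtered = np.linalg.solve(np.eye(n) + alpha * L, noisy)
```

The filtered signal varies slowly across graph edges, mirroring the paper's observation that the output is smoother and closer to the noise-free signal.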
Funding: The National Natural Science Foundation of China under contract No. 42206033; the Marine Geological Survey Program of China Geological Survey under contract No. DD20221706; the Research Foundation of the National Engineering Research Center for Gas Hydrate Exploration and Development (Innovation Team Project) under contract No. 2022GMGSCXYF41003; and the Scientific Research Fund of the Second Institute of Oceanography, Ministry of Natural Resources, under contract No. JG2006.
Abstract: Current velocity observation with the LADCP (Lowered Acoustic Doppler Current Profiler) has the advantages of a large vertical observation range and high operability compared with traditional current measurement methods, and it is being widely used in the field of ocean observation. The shear and inverse methods are commonly used by the international marine community to process LADCP data and calculate ocean current profiles, and each has its advantages and shortcomings. The shear method calculates the current shear more accurately, while its accuracy in the absolute value of the current velocity is lower. The inverse method calculates the absolute value of the current velocity more accurately, but its current shear is less accurate. Based on the shear method, this paper proposes a layering shear method that calculates the current velocity profile by "layering averaging," and proposes corresponding current calculation methods according to the different types of problems found in several field observation datasets from the western Pacific, forming an independent LADCP data processing system. The comparison results show that the layering shear method can achieve the same effect as the inverse method in the calculation of the absolute value of current velocity, while retaining the advantages of the shear method in the calculation of the current shear.
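A heavily simplified sketch of the "layering averaging" idea: bin-average many noisy shear estimates per depth layer, integrate the averaged shear to a relative velocity profile, then anchor it with an independent depth-mean velocity. The profile, noise level, and absolute reference below are all synthetic stand-ins, not the paper's processing chain:

```python
import numpy as np

dz = 10.0                                   # layer thickness (m), illustrative
depths = np.arange(0.0, 1000.0, dz)         # 100 layers
true_vel = 0.5 * np.exp(-depths / 300.0)    # synthetic current profile (m/s)
true_shear = np.gradient(true_vel, dz)

rng = np.random.default_rng(2)
# 30 noisy shear estimates per layer, as if from overlapping ADCP ensembles
noisy_shear = true_shear + 0.001 * rng.standard_normal((30, depths.size))
layer_shear = noisy_shear.mean(axis=0)      # the "layering average"

rel_vel = np.cumsum(layer_shear) * dz       # integrate shear -> relative profile
ref_mean = true_vel.mean()                  # independent absolute reference
abs_vel = rel_vel - rel_vel.mean() + ref_mean  # anchor the integration constant

rmse = float(np.sqrt(np.mean((abs_vel - true_vel) ** 2)))
print(round(rmse, 4))
```

Averaging within layers suppresses random shear noise before integration, which is why the integrated profile tracks the true velocity closely once the unknown integration constant is fixed by the reference.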
Funding: This work was supported by the National Natural Science Foundation of China (Nos. 71871188 and U1834209) and the Research and Development Project of China National Railway Group Co., Ltd. (No. P2020X016).
Abstract: Delay-causing text data contain valuable information, such as the specific reason for the delay and the location and time of the disturbance, which can provide efficient support for the prediction of train delays and improve the efficiency of train operation guidance. Based on the train operation data and delay-causing data of the Wuhan-Guangzhou high-speed railway, the relevant algorithms from the natural language processing field are used to process the delay-causing text data. The work also integrates train operating-environment information and delay-causing text information to develop a cause-based train delay propagation prediction model. The Word2vec model is first used to vectorize the delay-causing text description after word segmentation. The mean model or the term frequency-inverse document frequency (TF-IDF) weighted model is then used to generate the delay-causing sentence vector from the original word vectors. Afterward, the train operating-environment features and the delay-causing sentence vector are input into the extreme gradient boosting (XGBoost) regression algorithm to develop a delay propagation prediction model. In this work, 4 text feature processing methods and 8 regression algorithms are considered. The results demonstrate that the XGBoost regression algorithm has the highest prediction accuracy using the text features processed by the continuous bag-of-words and mean models. Compared with a prediction model that only considers the train operating-environment features, the results show that the prediction accuracy of the model is significantly improved across multiple regression algorithms after integrating the delay-causing features.
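The two sentence-vector constructions named above (mean pooling and TF-IDF-weighted pooling of word vectors) can be sketched with a toy vocabulary. The 3-dimensional "word vectors" below are made up for illustration rather than taken from a trained Word2vec model:

```python
import math
from collections import Counter

# Made-up word vectors standing in for trained Word2vec output
vectors = {
    "signal":  [0.9, 0.1, 0.0],
    "failure": [0.8, 0.2, 0.1],
    "heavy":   [0.1, 0.9, 0.3],
    "rain":    [0.0, 0.8, 0.5],
}
# Toy corpus of segmented delay-cause descriptions
corpus = [["signal", "failure"], ["heavy", "rain"], ["signal", "heavy", "rain"]]

def idf(word):
    df = sum(word in doc for doc in corpus)       # document frequency
    return math.log(len(corpus) / df)

def mean_vector(doc):
    """Mean model: unweighted average of the word vectors."""
    dim = len(next(iter(vectors.values())))
    return [sum(vectors[w][d] for w in doc) / len(doc) for d in range(dim)]

def tfidf_vector(doc):
    """TF-IDF-weighted model: average weighted by each word's tf-idf score."""
    tf = Counter(doc)
    w = {word: tf[word] / len(doc) * idf(word) for word in tf}
    total = sum(w.values()) or 1.0
    dim = len(next(iter(vectors.values())))
    return [sum(w[word] * vectors[word][d] for word in w) / total
            for d in range(dim)]

print([round(v, 3) for v in mean_vector(["signal", "failure"])])
```

Either sentence vector is then concatenated with the operating-environment features before being fed to the regression model, as the abstract describes.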
基金Supported by the National Key R&D Program of China(No.2016YFC0303900)the Laoshan Laboratory(Nos.MGQNLM-KF201807,LSKJ202203604)the National Natural Science Foundation of China(No.42106072)。
Abstract: The Kuiyang-ST2000 deep-towed high-resolution multichannel seismic system was designed by the First Institute of Oceanography, Ministry of Natural Resources (FIO, MNR). The system is mainly composed of a plasma spark source (source level: 216 dB, main frequency: 750 Hz, frequency bandwidth: 150-1200 Hz) and a towed hydrophone streamer with 48 channels. Because the source and the towed hydrophone streamer are constantly moving according to the towing configuration, accurate positioning of the towed hydrophone array and moveout correction of the deep-towed multichannel seismic data before imaging are challenging. Initially, according to the characteristics of the system and the shape of the towed streamer in deep water, a travel-time positioning method was used to construct the hydrophone streamer shape, and the results were corrected using a polynomial curve fitting method. Then, a new data-processing workflow for Kuiyang-ST2000 system data was introduced, mainly including float datum setting, residual static correction, and phase-based moveout correction, which allows the imaging algorithms of conventional marine seismic data processing to be extended to deep-towed seismic data. We successfully applied the Kuiyang-ST2000 system and the data-processing methodology to a gas hydrate survey of the Qiongdongnan and Shenhu areas in the South China Sea, and the results show that the profile has very high vertical and lateral resolution (0.5 m and 8 m, respectively), which can provide full and accurate details of gas hydrate-related and geohazard-related sedimentary and structural features in the South China Sea.
Funding: Project 863-2.5.2.25 supported by the National High Technology Research & Development (863) Program of China.
Abstract: The Main Optical Telescope (MOT) is an important payload of the Space Solar Telescope (SST), with various instruments and observation modes, and its real-time data handling, management, and control tasks are demanding. Drawing on advanced techniques developed abroad, an improved structure for an onboard data handling system feasible for the SST is proposed. This article concentrates on the development of a Central Management & Control Unit (MCU) based on an FPGA and a DSP. By reconfiguring the FPGA and DSP programs, the prototype can perform different tasks, improving the inheritability of the whole system. The completed dual-channel prototype proves that the system meets all requirements of the MOT. Its high reliability and safety features also meet the requirements of harsh conditions such as mine detection.
基金National Natural Science Foundation of China(No.42022025)。
Abstract: With the continued development of multiple Global Navigation Satellite Systems (GNSS) and the emergence of various frequencies, UnDifferenced and UnCombined (UDUC) data processing has become an increasingly attractive option. In this contribution, we provide an overview of the current status of UDUC GNSS data-processing activities in China. These activities encompass the formulation of Precise Point Positioning (PPP) models and PPP Real-Time Kinematic (PPP-RTK) models for processing single-station and multi-station GNSS data, respectively. Regarding single-station data processing, we discuss the advancements in PPP models, particularly the extension from a single system to multiple systems, and from dual frequencies to single and multiple frequencies. Additionally, we introduce the modified PPP model, which accounts for the time variation of receiver code biases, a departure from the conventional PPP model that typically assumes these biases to be time-constant. In the realm of multi-station PPP-RTK data processing, we introduce the ionosphere-weighted PPP-RTK model, which enhances the model strength by considering the spatial correlation of ionospheric delays. We also review the phase-only PPP-RTK model, designed to mitigate the impact of unmodelled code-related errors. Furthermore, we explore GLONASS PPP-RTK, achieved through the application of the integer-estimable model. For large-scale network data processing, we introduce the all-in-view PPP-RTK model, which alleviates the strict common-view requirement at all receivers. Moreover, we present the decentralized PPP-RTK data-processing strategy, designed to improve computational efficiency. Overall, this work highlights the various advancements in UDUC GNSS data processing, providing insights into the state-of-the-art techniques employed in China to achieve precise GNSS applications.
Abstract: A field-programmable gate array (FPGA) based high-speed broadband data acquisition system is designed. The system has a dual-channel simultaneous acquisition function, with a maximum sampling rate of 500 MSa/s and a bandwidth of 200 MHz, which addresses the problems of large-bandwidth, high-speed signal acquisition and processing. At present, the data acquisition system is successfully used in broadband receiver test systems.
Funding: Supported by the Project of the Guangdong Science and Technology Department (2020B010166005), the Post-Doctoral Research Project (Z000158), the Ministry of Education Social Science Fund (22YJ630167), the Fund Project of the Department of Science and Technology of Guangdong Province (GDKTP2021032500), and the Guangdong Philosophy and Social Science Fund (GD22YYJ15).
Abstract: Inter-agency government information sharing (IAGIS) plays an important role in improving the service and efficiency of government agencies. Currently, there is still no effective and secure way for data-driven IAGIS to fulfill the dynamic demands of information sharing between government agencies. Motivated by blockchain and data mining, a data-driven framework is proposed for IAGIS in this paper. Firstly, the blockchain is used as the core of the framework for monitoring and preventing the leakage and abuse of government information, in order to guarantee information security. Secondly, a four-layer architecture is designed for implementing the proposed framework. Thirdly, the classical data mining algorithms PageRank and Apriori are applied to dynamically design smart contracts for information sharing, for the purpose of flexibly adjusting the information-sharing strategies according to the practical demands of government agencies for public management and public service. Finally, a case study is presented to illustrate the operation of the proposed framework.
Abstract: The Internet of Multimedia Things (IoMT) refers to a network of interconnected multimedia devices that communicate with each other over the Internet. Recently, smart healthcare has emerged as a significant application of the IoMT, particularly in the context of knowledge-based learning systems. Smart healthcare systems leverage knowledge-based learning to become more context-aware, adaptable, and auditable while maintaining the ability to learn from historical data. In smart healthcare systems, devices capture images such as X-rays and magnetic resonance imaging (MRI) scans. The security and integrity of these images are crucial for the databases used in knowledge-based learning systems to foster structured decision-making and enhance the learning abilities of AI. Moreover, in knowledge-driven systems, the storage and transmission of HD medical images place a burden on the limited bandwidth of the communication channel, leading to data transmission delays. To address the security and latency concerns, this paper presents a lightweight medical image encryption scheme utilising bit-plane decomposition and chaos theory. The experiments yield entropy, energy, and correlation values of 7.999, 0.0156, and 0.0001, respectively. This validates the effectiveness of the proposed encryption system, which offers high-quality encryption, a large key space, key sensitivity, and resistance to statistical attacks.
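The bit-plane decomposition step itself is easy to illustrate. The sketch below only splits an 8-bit image into its binary planes and reassembles it, deliberately omitting the chaotic permutation/diffusion stages of the paper's scheme; the pixel values are invented:

```python
# Bit-plane decomposition sketch: split 8-bit pixels into eight binary planes
# and reassemble them losslessly. Chaos-based schemes like the one described
# then permute/diffuse the planes (the significant planes carry most detail).
def to_planes(img):
    return [[(px >> b) & 1 for px in img] for b in range(8)]

def from_planes(planes):
    return [sum(planes[b][i] << b for b in range(8))
            for i in range(len(planes[0]))]

img = [0, 17, 128, 200, 255]          # toy grayscale pixels
planes = to_planes(img)
assert from_planes(planes) == img     # lossless round trip
print(planes[7])                      # most significant bit-plane → [0, 0, 1, 1, 1]
```

Because the round trip is lossless, any invertible scrambling applied per plane can be undone exactly at decryption, which is what makes bit-plane schemes attractive for lightweight image encryption.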
Abstract: One of the biggest dangers to society today is terrorism, where attacks have become one of the most significant risks to international peace and national security. Big data, information analysis, and artificial intelligence (AI) have become the basis for making strategic decisions in many sensitive areas, such as fraud detection, risk management, medical diagnosis, and counter-terrorism. However, there is still a need to assess how terrorist attacks are related, initiated, and detected. For this purpose, we propose a novel framework for classifying and predicting terrorist attacks. The proposed framework posits that neglected text attributes included in the Global Terrorism Database (GTD) can influence the accuracy of the model's classification of terrorist attacks, where each part of the data can provide vital information to enrich the classifier's learning. Each data point in a multiclass taxonomy has one or more tags attached to it, referred to as "related tags." We applied machine learning classifiers to classify terrorist attack incidents obtained from the GTD. A transformer-based technique called DistilBERT extracts and learns contextual features from the text attributes to acquire more information from the text data. The extracted contextual features are combined with the "key features" of the dataset and used to perform the final classification. The study explored different experimental setups with various classifiers to evaluate the model's performance. The experimental results show that the proposed framework outperforms the latest techniques for classifying terrorist attacks, with an accuracy of 98.7% using a combined feature set and the extreme gradient boosting classifier.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 42325406 and 42304187), the China Postdoctoral Science Foundation (Grant No. 2023M733476), the CAS Project for Young Scientists in Basic Research (Grant No. YSBR082), the National Key R&D Program of China (Grant No. 2022YFF0503203), and the Key Research Program of the Institute of Geology and Geophysics, Chinese Academy of Sciences (Grant Nos. IGGCAS-202101 and IGGCAS-202401).
Abstract: The Yutu-2 rover onboard the Chang'E-4 mission performed the first lunar penetrating radar detection on the farside of the Moon. The high-frequency channel presented us with many unprecedented details of the subsurface structures within a depth of approximately 50 m. However, it was still difficult to identify finer layers from the cluttered reflections and scattering waves. We applied deconvolution to improve the vertical resolution of the radar profile by extending the limited bandwidth associated with the emitted radar pulse. To overcome the challenges arising from the mixed-phase wavelets and the problematic amplification of noise, we performed predictive deconvolution to remove the minimum-phase components from the Chang'E-4 dataset, followed by a comprehensive phase rotation to rectify phase anomalies in the radar image. Subsequently, we implemented irreversible migration filtering to mitigate the noise and diminutive clutter echoes amplified by deconvolution. The processed data show evident enhancement of the vertical resolution, with a widened bandwidth in the frequency domain and better signal clarity in the time domain, providing more undisputed details of subsurface structures near the Chang'E-4 landing site.
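A generic predictive (gap) deconvolution pass can be sketched on a synthetic trace: estimate a prediction filter from the trace autocorrelation via the Wiener normal equations, then subtract the predictable part to compress the ringing wavelet. The filter length, gap, prewhitening, and wavelet below are illustrative choices, not the Chang'E-4 processing parameters:

```python
import numpy as np

def predictive_decon(trace, nfilt=20, gap=1, eps=0.01):
    """Gap deconvolution: solve Toeplitz normal equations built from the
    autocorrelation, then output the prediction-error trace."""
    r = np.correlate(trace, trace, "full")[trace.size - 1:]   # lags 0..N-1
    R = np.array([[r[abs(i - j)] for j in range(nfilt)] for i in range(nfilt)])
    R += eps * r[0] * np.eye(nfilt)        # prewhitening for numerical stability
    g = r[gap:gap + nfilt]                 # desired output: predict 'gap' ahead
    f = np.linalg.solve(R, g)              # prediction filter
    pred = np.convolve(trace, f)[:trace.size]
    out = trace.copy()
    out[gap:] -= pred[:-gap]               # prediction-error (deconvolved) trace
    return out

# Synthetic trace: sparse reflectivity convolved with a ringing wavelet
rng = np.random.default_rng(3)
refl = np.zeros(300)
refl[[50, 120, 200]] = [1.0, -0.7, 0.5]
wavelet = np.exp(-0.3 * np.arange(20)) * np.cos(0.9 * np.arange(20))
trace = np.convolve(refl, wavelet)[:300] + 0.005 * rng.standard_normal(300)
out = predictive_decon(trace)
print(out.size)   # → 300
```

Removing the predictable (minimum-phase) part of the wavelet whitens the spectrum, which is the bandwidth-extension effect the abstract attributes to deconvolution.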